Re: x86/random: Speculation to the rescue

From: Kees Cook
Date: Tue Oct 01 2019 - 12:37:43 EST


On Tue, Oct 01, 2019 at 06:15:02PM +0200, Ahmed S. Darwish wrote:
> On Sat, Sep 28, 2019 at 04:53:52PM -0700, Linus Torvalds wrote:
> > Ahmed - would you be willing to test this on your problem case (with
> > the ext4 optimization re-enabled, of course)?
> >
>
> So I pulled the patch and the revert of the ext4 revert as they're all
> now merged in master. It of course made the problem go away...
>
> To test the quality of the new jitter code, I added a small patch on
> top to disable all other sources of randomness except the new jitter
> entropy code [1], and ran quick tests on the quality of getrandom(0).
>
> Running the "ent" tool [2], also used to test randomness in the Stephan
> Müller LRNG paper, against a 500000-byte file produced the following
> results:
>
> $ ent rand-file
>
> Entropy = 7.999625 bits per byte.
>
> Optimum compression would reduce the size of this 500000 byte file
> by 0 percent.
>
> Chi square distribution for 500000 samples is 259.43, and randomly
> would exceed this value 41.11 percent of the times.
>
> Arithmetic mean value of data bytes is 127.4085 (127.5 = random).
>
> Monte Carlo value for Pi is 3.148476594 (error 0.22 percent).
>
> Serial correlation coefficient is 0.001740 (totally uncorrelated = 0.0).
>
> As can be seen above, everything looks random, and almost all of the
> statistical randomness test results matched those from the same kernel
> without the "jitter + schedule()" patch added (after getting it un-stuck).

Can you post the patch for [1]? Another test we should do is the
multi-boot test. Testing the stream (with ent, or with my dieharder run)
is mainly testing the RNG algo. I'd like to see if the first 8 bytes
out of the kernel RNG change between multiple boots of the same system.
e.g. read the first 8 bytes on each of 100000 boots, and feed THAT
byte "stream" into ent...

--
Kees Cook