RE: Intel 810 Random Number Generator

From: nathan.zook@amd.com
Date: Mon Jan 24 2000 - 12:38:47 EST


Ahh, NOW we get to the heart of the matter: how many bits do we accept from
this source?

First, there is the matter of the sampling rate. Clearly, if we sample every
clock, the RNG data will be highly skewed by nearby gate transitions. I
don't know what the minimum interval between samples should be, but this
needs to be estimated.
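To make the pacing concrete, here is a rough user-space sketch. The
hardware read is stubbed out with rand(), and MIN_INTERVAL_US is a
placeholder, not a measured figure -- both would have to be replaced
once the design is known:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MIN_INTERVAL_US 1000    /* placeholder, not a measured figure */

/* Stand-in for the actual hardware read; the real source would be
 * read through the chip's status/data registers. */
static unsigned char rng_read_byte(void)
{
    return (unsigned char)(rand() & 0xff);
}

int main(void)
{
    unsigned char buf[16];
    int i;

    for (i = 0; i < (int)sizeof(buf); i++) {
        buf[i] = rng_read_byte();
        usleep(MIN_INTERVAL_US);    /* wait out the minimum interval */
    }
    for (i = 0; i < (int)sizeof(buf); i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}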

Second, we need to check the correlation of the bits in the result, which is
what Sandy is discussing. As I recall from the press, the RNG calculates an
index into a 2^16-byte array of "true random data", which means that the
data is skewed by the fact that they are sampling a sample, even if the
index is truly random. Presumably, every system has the same array, so it
should be possible to find out what the entries are and see how bad the
effect is.
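Until someone digs the array out, a first cut at measuring the effect
is the standard lag-1 serial correlation test over a captured sample;
an uncorrelated source should come out near zero. A sketch (the rand()
fill is just a stand-in for captured RNG output):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Lag-1 serial correlation coefficient over n bytes. */
static double serial_correlation(const unsigned char *b, long n)
{
    double sx = 0, sy = 0, sxy = 0, sxx = 0, syy = 0;
    double num, den;
    long i, m = n - 1;

    for (i = 0; i < m; i++) {
        double x = b[i], y = b[i + 1];
        sx += x; sy += y;
        sxy += x * y;
        sxx += x * x;
        syy += y * y;
    }
    num = m * sxy - sx * sy;
    den = sqrt((m * sxx - sx * sx) * (m * syy - sy * sy));
    return den != 0.0 ? num / den : 0.0;
}

int main(void)
{
    static unsigned char buf[4096];
    long i;

    for (i = 0; i < (long)sizeof(buf); i++)
        buf[i] = (unsigned char)(rand() & 0xff);  /* stand-in data */
    printf("serial correlation: %f\n",
           serial_correlation(buf, sizeof(buf)));
    return 0;
}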

Third, we need to analyze the indexing function to see how much entropy it
has. We know that the final output has no more entropy than the indexing
function. Furthermore, if there is any systematic bias to be expected, this
will exacerbate the problem of sampling a sample.
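From the outside, about the best we can do on this point is bound it
from output statistics, e.g. a min-entropy estimate over observed byte
frequencies. The caveat is that this measures only what the output
looks like, not what the indexing function actually provides (again,
rand() stands in for real output):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Min-entropy per byte from observed frequencies: -log2(p_max). */
static double min_entropy_per_byte(const unsigned char *b, long n)
{
    long count[256] = { 0 };
    long i, max = 0;

    for (i = 0; i < n; i++)
        count[b[i]]++;
    for (i = 0; i < 256; i++)
        if (count[i] > max)
            max = count[i];
    return -(log((double)max / (double)n) / log(2.0));
}

int main(void)
{
    static unsigned char buf[65536];
    long i;

    for (i = 0; i < (long)sizeof(buf); i++)
        buf[i] = (unsigned char)(rand() & 0xff);  /* stand-in data */
    printf("min-entropy: %.3f bits/byte\n",
           min_entropy_per_byte(buf, sizeof(buf)));
    return 0;
}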

To properly analyze the first and third matters, we need to look at the
design, no exceptions. The second can probably be reverse engineered.
Without this, I would recommend being VERY pessimistic about the amount of
entropy being added.

Nathan

-----Original Message-----
From: Sandy Harris [mailto:sandy@storm.ca]
Sent: Monday, January 24, 2000 9:53 AM
To: linux-kernel@vger.rutgers.edu
Subject: Re: Intel 810 Random Number Generator

Frank v Waveren wrote:

> > Either you trust RNG or you don't. There is no middle ground.
> >
> > If you do not trust RNG, then using it as a /dev/random entropy source
> > is pointless. Why introduce theoretically non-random entropy to the
> > /dev/random pool?
>
> Am I missing something here? I thought that you could mix all the
> non-random numbers you want into the entropy pool (long live xor), and
> it will never make the entropy worse? So I'd say mixing in the RNG
> won't do any harm, so why not?

As long as there is some entropy in it, it increases the entropy in the
pool. The question is what the estimate you use for the input entropy
does to /dev/random's estimate of the total entropy in the pool.

If the input estimate is <= the actual input entropy, all is well. The
overall estimate stays <= the actual pool entropy and /dev/random works
as designed.

If your estimate is high, though, it makes /dev/random think it has more
entropy than it actually does and therefore become (at least
theoretically) somewhat insecure. "Somewhat insecure" is like "slightly
dead": not a state you want to get into, so (as for any other source)
you must use a conservative estimate of the RNG's entropy contribution.

The input estimate in bits per byte of RNG output should obviously be in
the range:

           0 < x < 8

since x = 0 ignores the RNG's contribution entirely and x = 8 trusts it
completely, neither of which we want to do.

Does anyone have a reasonable argument that narrows this range down?

I'd pick 6, that being the largest number I feel comfortable with, but
this is pure guesswork. I'd like to see analysis, whether it supports
or destroys that guess.
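To make the accounting concrete, here is a toy model of crediting at
x = 6. The XOR is a trivial stand-in for the real pool stirring in
drivers/char/random.c; only the bookkeeping matters here:

#include <stdio.h>

#define POOL_BYTES 64
#define CREDIT_BITS_PER_BYTE 6  /* the conservative x, deliberately < 8 */

static unsigned char pool[POOL_BYTES];
static int entropy_count;       /* credited entropy, in bits */

/* Mix one RNG byte into the pool, crediting x bits rather than the
 * full 8, and never crediting more than the pool can hold. */
static void credit_rng_byte(unsigned char b, int pos)
{
    pool[pos % POOL_BYTES] ^= b;
    entropy_count += CREDIT_BITS_PER_BYTE;
    if (entropy_count > POOL_BYTES * 8)
        entropy_count = POOL_BYTES * 8;
}

int main(void)
{
    int i;

    for (i = 0; i < 8; i++)
        credit_rng_byte((unsigned char)i, i);
    /* 8 bytes mixed, but only 48 bits credited */
    printf("credited: %d bits\n", entropy_count);
    return 0;
}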

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/



