Re: monitoring entropy

Ingo Molnar (mingo@pc7537.hil.siemens.at)
Wed, 15 Oct 1997 12:45:38 +0100 (MET)


On Tue, 14 Oct 1997, Colin Plumb wrote:

> Given that draining the entropy pool is only a minor denial-of-service
> attack (a fork bomb is a much more effective one) [...]

my machine survives fork bombs (load 200-300) without any harm. Note that
there was a property of fork()+scheduling that caused very bad fork-bomb
behavior on Linux, but this (minor) issue was fixed in 2.0.30.

> , is this really worth
> worrying about? /dev/urandom is plenty good for any conceivable practical
> application, and that can't be denied.

e.g. an attacker sees that some user is preparing a PGP run (you can
trigger on this when the 'atime' of the pgp binary changes, and make it
automatic ...), then starts a 'pool draining attack' (which is just a
tight loop, and doesn't show up on a populated system; see the sketch
below).
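
the 'tight loop' is nothing exotic, something like this (userland
sketch, no error handling worth mentioning):

#include <fcntl.h>
#include <unistd.h>

/*
 * sketch of the pool-draining loop: keep reading /dev/urandom so the
 * kernel's entropy estimate never recovers.  cheap enough that it
 * hides among normal load on a busy box.
 */
int main(void)
{
        char buf[512];
        int fd = open("/dev/urandom", O_RDONLY);

        if (fd < 0)
                return 1;
        for (;;)
                read(fd, buf, sizeof(buf));
}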

> I'd like to emphasize that: for all practical purposes, /dev/urandom
> will deliver an infinite amount of random data unpredictable to
> any attacker who is not spying on you as you generate it.

no need to spy, just run a tight loop that reads /dev/urandom; this
drains the randomness. The few events here and there that add a few bits
to the pool should not be a problem for a brute-force attack (assuming a
limited number of really random bits) ... methinks.
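
and you can watch the estimate fall while such a loop runs; assuming
the random sysctl entries of drivers/char/random.c are compiled in
(i.e. /proc/sys/kernel/random/entropy_avail exists), e.g.:

#include <stdio.h>
#include <unistd.h>

/*
 * sketch: poll the kernel's entropy estimate once per second while a
 * draining loop runs on the same box.
 */
int main(void)
{
        for (;;) {
                FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
                int bits = 0;

                if (!f)
                        return 1;
                fscanf(f, "%d", &bits);
                fclose(f);
                printf("entropy estimate: %d bits\n", bits);
                sleep(1);
        }
}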

> /dev/random provides a stronger guarantee: it is unpredictable to
> an attacker with *infinite* computational power. Doing this requires
> that you have as much entropy put into the accumulation system as you
> try to read out, so an attacker's uncertainty in the output can be traced
> back to their uncertainty in the input.
>
> To provide a "reserved" pool with a guarantee of this strength requires
> that some input entropy be set aside for root-only use and not used to
> generate user random output. Which takes up more data space in the
> kernel and makes the output weaker.

it's not just for root! It's for the kernel as well, to generate safe TCP
sequence numbers and (possibly) other stuff.
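
e.g. (a made-up consumer, just to make the point; get_random_bytes()
is the helper drivers/char/random.c exports):

#include <linux/types.h>

/* exported by drivers/char/random.c */
extern void get_random_bytes(void *buf, int nbytes);

/*
 * made-up example of an in-kernel consumer: things like TCP initial
 * sequence numbers eat from the very same pool as /dev/random.
 */
static __u32 pick_initial_seq(void)
{
        __u32 isn;

        get_random_bytes(&isn, sizeof(isn));
        return isn;
}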

> Doing this without slowing down the entropy-gathering operation (which
> is triggered each interrupt, so it has to be *fast*) is also tricky.

I was talking about limiting _output_. I don't see how this could
influence the speed of entropy gathering in any way.
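
e.g. a reserved floor that is checked only on the read path; purely
illustrative, these names are not in drivers/char/random.c:

#define RESERVED_BITS 128       /* kept back for root and the kernel */

/*
 * illustrative only: how many bits a reader may take right now.
 * root and the kernel may drain to zero, everyone else stops at the
 * reserve.  this sits in the read path only; the interrupt path that
 * feeds the pool is untouched, so gathering speed is unaffected.
 */
static int bits_for_reader(int entropy_count, int is_root)
{
        int floor = is_root ? 0 : RESERVED_BITS;
        int avail = entropy_count - floor;

        return avail > 0 ? avail : 0;
}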

> Oh, a question for folks who understand the multi-platform coding style.
> On the Pentium, /dev/random takes advantage of the clock cycle counter
> to get as much timing information as possible. Many other processors
> have this too (Alpha, MIPS, PowerPC, ...) and it would be nice for it
> to work on them too. Is there a better way to do this than to have
> a zillion #ifdefs? [...]

One problem is scaling, and the platform-specific knowledge of which bits
are 'useful'. get_fast_time() gives uniform (fast) time on every
platform, but only at microsecond resolution.
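
one way around the zillion #ifdefs would be a single inline per
architecture, roughly like this (the name random_get_cycles() is made
up, and on x86 only Pentium-class CPUs have rdtsc):

#include <linux/types.h>
#include <linux/time.h>

/*
 * rough sketch: hide the per-arch cycle counter behind one inline
 * instead of #ifdefs at every call site.  the fallback path is
 * get_fast_time(), i.e. microseconds, which is exactly where the
 * scaling problem shows up.
 */
static inline __u32 random_get_cycles(void)
{
#if defined(__i386__)
        __u32 lo, hi;                   /* rdtsc: Pentium and later only */
        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return lo;
#elif defined(__alpha__)
        unsigned long cc;
        __asm__ __volatile__("rpcc %0" : "=r" (cc));
        return (__u32) cc;              /* low 32 bits count cycles */
#else
        struct timeval tv;
        get_fast_time(&tv);
        return tv.tv_usec;              /* microseconds only */
#endif
}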

-- mingo