Re: issue with /dev/random? gets depleted very quick

From: Matt Mackall
Date: Sun Jun 14 2009 - 15:34:57 EST


On Sun, 2009-06-14 at 21:04 +0200, Folkert van Heusden wrote:
> > [cc:ed to lkml]
>
> > > On an idle system (no gui, no daemons, nothing), /dev/random gets
> > > empty in a matter of 20 seconds with a 2.6.26 kernel.
> > > My test:
> > > add 1000 bits to the device:
> > > zolder:/tmp# cat test-RNDADDENTROPY.c
> ...
> > > }
> > > and then check what is in it:
> > > zolder:/tmp# ./a.out ; while true ; do echo `date` `cat /proc/sys/kernel/random/entropy_avail` ; sleep 1 ; done
> > > 0
> > > Sun Jun 14 14:50:44 CEST 2009 1117
> ...
> > > Sun Jun 14 14:50:55 CEST 2009 157
> > > Is there something wrong with it?
> > Does it go below 128? If not, that's the behavior of something depleting
> > the pool down to the anti-starvation threshold via either /dev/urandom
> > or get_random_bytes.
>
> No, it stays above 128. Sometimes around 13x, sometimes 151, so not
> always close to 128.
>
> > On my system, I'm seeing that behavior as well. fuser reports a bunch of
> > processes hold /dev/urandom open, but stracing them doesn't reveal a
> > culprit. Which means there's now probably something in the kernel
> > calling get_random_bytes continuously.
>
> Yes. On the systems I tried, nothing had /dev/*random open, and there
> were no cronjobs that could use it. And still it gets lower.
>
> > Is this a problem? It really shouldn't be. Everyone should be
> > using /dev/urandom anyway. And the anti-starvation threshold guarantees
>
> Well, if I understand correctly how /dev/*random works, urandom is fed
> by /dev/random. So if there's almost nothing left in the main pool and
> urandom demands bits, then we have an issue.
> Also, if you frequently want to generate keys (think gpg, ssl), I think
> you want bits from /dev/random and not urandom.
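
For reference, a feeder like the trimmed test above boils down to the
RNDADDENTROPY ioctl, which needs CAP_SYS_ADMIN. A minimal sketch,
untested; the 1000-bit credit is taken from the quoted output and
everything else here is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/random.h>

int main(void)
{
	struct rand_pool_info *info;
	int fd;

	/* 128 bytes of payload, credited as 1000 bits of entropy */
	info = malloc(sizeof(*info) + 128);
	if (!info)
		return 1;
	memset(info->buf, 0x55, 128);	/* stand-in data; it only gets mixed in */
	info->entropy_count = 1000;
	info->buf_size = 128;

	fd = open("/dev/random", O_WRONLY);
	if (fd < 0 || ioctl(fd, RNDADDENTROPY, info) < 0) {
		perror("RNDADDENTROPY");
		return 1;
	}
	close(fd);
	return 0;
}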

There is really no difference.

In an ideal world, we could accurately estimate input entropy and thus
guarantee that we never output more than we took in. But it's pretty
clear we don't have a solid theoretical basis for estimating the real
entropy in most, if not all, of our input devices. In fact, I'm pretty
sure they're all significantly more observable than we're giving them
credit for. And without that basis, we can only make handwaving
arguments about the relative strength of /dev/random vs /dev/urandom.

So if you're running into /dev/random blocking, my advice is to delete
the device and symlink it to /dev/urandom.
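
Something like this, as root; note that udev may recreate the real node
on the next boot, so make it persistent however your distro prefers:

rm /dev/random
ln -s urandom /dev/random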

Also note that if something in the kernel is rapidly consuming entropy
but not visibly leaking it to the world, it is effectively not consuming
it. The simplest case is:

get_random_bytes(...);
memset(...); /* clear previous result */

In this case, if no one hears the tree fall, it hasn't actually fallen.
There is exactly as much 'unknown' data in the entropy pool as before.
If anything, the pool contents are now harder to guess because they
have been mixed more.
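
To make that concrete, here is a throwaway test module (a sketch; the
name, buffer size, and loop count are made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/string.h>

/* Drain the entropy estimate without revealing a single output byte. */
static int __init burn_init(void)
{
	u8 buf[16];
	int i;

	for (i = 0; i < 100; i++) {
		get_random_bytes(buf, sizeof(buf));	/* debits the estimate */
		memset(buf, 0, sizeof(buf));		/* nothing leaves the kernel */
	}
	return 0;
}

static void __exit burn_exit(void)
{
}

module_init(burn_init);
module_exit(burn_exit);
MODULE_LICENSE("GPL");

Load it and entropy_avail drops toward the anti-starvation threshold,
but an observer has learned nothing new about the pool.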

--
http://selenic.com : development and support for Mercurial and Linux

