Re: CONFIG_RANDOM (compromise?)

Albert Cahalan (albert@ccs.neu.edu)
Thu, 16 May 1996 14:18:13 -0400 (EDT)


You are asking for 16kB of unswappable kernel memory on every Linux
machine in the world, plus CPU cycles that many people are already
starved for. You are asking for CPU cycles from 386SX-16 users
trying to compile the kernel!

Pick a compromise:

Plan A: Shrink the pool, add randomness to it less often, and double
the per-sample entropy estimate to make up for the lost quantity. Use
an existing CRC routine to do the hash.
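
To make Plan A concrete, here is a minimal sketch of what the reduced
mixer might look like: a small fixed pool stirred by an ordinary
table-driven CRC-32 instead of a cryptographic hash. The pool size,
the polynomial, and the names (crc_init, add_entropy) are all
illustrative assumptions, not anything from Ted's actual driver:

    /* Plan A sketch: small pool stirred with table-driven CRC-32.
     * Pool size and all names are assumptions made up for this
     * example, not taken from the real random driver. */

    #include <stdint.h>
    #include <stddef.h>

    #define POOL_BYTES 512          /* assumed reduced pool size */

    static uint32_t crc_table[256];
    static uint8_t  pool[POOL_BYTES];
    static size_t   pool_pos;

    /* Build the standard CRC-32 table (polynomial 0xEDB88320). */
    static void crc_init(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
            crc_table[i] = c;
        }
    }

    /* Fold one 32-bit sample (an interrupt timestamp, say) into
     * the pool, advancing cyclically so successive samples touch
     * every byte of the pool over time. */
    static void add_entropy(uint32_t sample)
    {
        uint32_t c = 0xFFFFFFFFu;

        for (int i = 0; i < 4; i++) {
            c = crc_table[(c ^ sample) & 0xFF] ^ (c >> 8);
            sample >>= 8;
        }
        pool[pool_pos] ^= (uint8_t)c;
        pool_pos = (pool_pos + 1) % POOL_BYTES;
    }

A caller would run crc_init() once at boot, then fold in each sample
with add_entropy(). The CRC does no cryptographic mixing at all;
that is exactly the tradeoff Plan A accepts in exchange for a table
lookup per byte instead of a real hash.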

Plan B: Add randomness less often and double the per-sample entropy
estimate, as in Plan A, but move the entropy pool and hash function
into a user-space daemon that grabs raw bits from a new
/dev/rawentropy device when required. The daemon then provides
/dev/random and /dev/urandom as named pipes.
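
As a rough illustration of Plan B, the daemon's main loop might look
something like the sketch below. Everything here is assumed:
/dev/rawentropy does not exist yet (it is the device the plan
proposes), the XOR fold is a placeholder for whatever real hash the
daemon would run over its pool, and the FIFO lives under /tmp purely
so the sketch doesn't collide with the real /dev/random:

    /* Plan B sketch: user-space daemon between a proposed
     * /dev/rawentropy device and a named pipe serving readers. */

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    #define RAW_PATH  "/dev/rawentropy"   /* proposed raw source */
    #define FIFO_PATH "/tmp/random"       /* stand-in for /dev/random */

    int main(void)
    {
        unsigned char raw[64], out[16];
        int rawfd, outfd;
        ssize_t n, i;

        /* Create the named pipe readers will open as /dev/random. */
        if (mkfifo(FIFO_PATH, 0644) < 0)
            perror("mkfifo");             /* may already exist */

        rawfd = open(RAW_PATH, O_RDONLY);
        outfd = open(FIFO_PATH, O_WRONLY); /* blocks until a reader */
        if (rawfd < 0 || outfd < 0) {
            perror("open");
            return 1;
        }

        for (;;) {
            /* Pull raw timing samples from the kernel... */
            n = read(rawfd, raw, sizeof(raw));
            if (n <= 0)
                break;

            /* ...distill them in swappable user memory (a real
             * daemon would run a proper hash over its pool)... */
            memset(out, 0, sizeof(out));
            for (i = 0; i < n; i++)
                out[i % sizeof(out)] ^= raw[i];

            /* ...and hand the result to whoever reads the pipe. */
            if (write(outfd, out, sizeof(out)) < 0)
                break;
        }
        return 0;
    }

The point of the split is that the pool and hash live in swappable
user memory, so the kernel only has to timestamp events and hand the
raw samples over.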

Ted, you should write a light version of /dev/random. If you don't,
someone else will write one based on a really _bad_ pseudo-random
number generator, or will simply rip /dev/random out completely.
(Well, it's already been done! You'd better hurry if you want to
save /dev/random from obscurity.)