Huge amount of randomness with cuse and "urandompar"

From: Michael Büsch
Date: Fri Oct 14 2011 - 16:39:07 EST


Hi there,

from time to time I need huge amounts of reasonably strong random
data. I want to use /dev/urandom for this, as it seems to have received
enough testing and has various good sources of entropy, as well as good
support from tools such as rngd.

However, /dev/urandom does not seem to benefit at all from multicore systems.
Large parts of the urandom read codepath in the kernel seem to be completely
lockless, though. So my basic idea was to throw a lot of reader threads at it
while preserving the convenient character device interface, which led
to this proof-of-concept CUSE project:

http://bues.ch/gitweb?p=urandompar.git;a=summary
git://git.bues.ch/urandompar.git

From a first look it doesn't seem too bad, performance-wise.
It seems that it currently scales well enough up to 4 CPUs.
Here's a simple chart of a trivial benchmark on a 6-core CPU:
http://bues.ch/misc/urandomparscale.pdf

I haven't profiled it with oprofile yet, but my guess is that it runs
more and more into entropy_store spinlock contention with >4 CPUs.
This still has to be verified, though.

An open question still is: What are the implications on the data quality?
Does this massive parallelism affect the urandom algorithms in any way?

Can the bottleneck that prevents it from scaling properly be fixed?

Looking for testers, comments and answers. :)

--
Greetings, Michael.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/