Random numbers

Kent Borg kentborg at borg.org
Wed Nov 3 19:09:18 UTC 2010

C de-Avillez wrote:
> Yes indeed. But I have seen many applications that use /dev/random
> instead. Perhaps the reason is the perceived danger of /dev/urandom
> when it falls back to a PRNG when low on entropy. 
> Please note that I said *perceived*. To summarise how it works:
> * /dev/random: pretty good RNG, will block when it needs more entropy
> * /dev/urandom: uses same RNG as /dev/random, but falls back to a PRNG
>   when low on entropy. I have not heard of any published attack on this
>   PRNG.

I would upgrade your description to "very good RNG".

The difference between /dev/random and /dev/urandom is only the blocking 
behavior.  In both cases you are using the identical "PRNG" code, but 
/dev/random stops if it doesn't think the pool has had enough stirring.

The problem is that estimating entropy is pretty much impossible.  Certainly 
the kernel can tell whether it is being starved of entropy, but if there 
is a network connected and in use, and if the kernel is allowed to 
collect entropy from it, then it can't be starved.  The question then 
becomes whether it is "under-fed".  That is an estimation that the 
kernel doesn't have enough information to make (it doesn't really know 
where the entropy is coming from nor whether the bad guys have detailed 
information about it).  So the difference between /dev/random and 
/dev/urandom becomes that "sometimes /dev/random stops working".  Denial 
of service issues surround /dev/random.
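The non-blocking behavior is easy to see from user space.  A minimal Python sketch (the entropy_avail path is the standard Linux procfs location; note the blocking semantics described here are those of the kernels of this era):

```python
# Reading /dev/urandom never blocks; /dev/random (on the kernels
# discussed here) may stop and wait when the kernel's entropy
# estimate runs low.
with open("/dev/urandom", "rb") as f:
    key = f.read(32)          # returns immediately, always
print(len(key))               # 32

# The kernel's entropy estimate (the number /dev/random consults
# before deciding to block) is visible via procfs on Linux:
try:
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print(int(f.read()))  # current estimate, in bits
except (FileNotFoundError, ValueError):
    pass                      # not a Linux system
```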

The weaknesses are these:

1. Does /dev/urandom have serious security bugs?
2. Is SHA-1 broken?
3. Is the machine this is running on secure?
4. Is there something outside of your responsibility that can compromise it?

SHA-1 is used for lots of important work; if it has problems that make 
it a bad random number generator, that is very big news in the larger 
cryptography community.  (Tell people!)

If the internal state was initialized from a random source, really is 
kept secure, and SHA-1 works, then there is actually no need for added 
entropy.

 - But if the initial state is not completely random, then stirring it 
with new entropy is good. 

 - If someone gets hold of a backup of /var/lib/urandom, it would be 
nice if the internal state had been stirred since then; new entropy is good.
 - If a flaw is discovered in SHA-1, an RNG that keeps stirring the pool 
with new entropy might keep you safe (likely any exploit would require 
megabytes of data, but you are stirring far more frequently than that); 
new entropy is good.

Even if the quality of the new entropy is not very high (maybe you feed 
/dev/urandom a copy of the Bible and the bad guys have a copy of the 
Bible) it still doesn't hurt.  New entropy is only used to stir the 
pool; if the pool started out in a nicely mixed up state, stirring it 
will never order it again.
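One way to see why this is safe: a plain write() to /dev/urandom mixes the written bytes into the pool but credits no entropy to it (crediting requires the privileged RNDADDENTROPY ioctl), so attacker-known input cannot help the attacker.  A sketch in Python:

```python
# Anyone may stir the pool: /dev/urandom is normally world-writable,
# and a plain write mixes the data in WITHOUT crediting entropy.
# So even attacker-known input (the Bible, say) cannot make the
# pool's state more predictable.
data = b"In the beginning God created the heaven and the earth."
try:
    with open("/dev/urandom", "wb") as f:
        f.write(data)            # stirs the pool; credits nothing
except PermissionError:
    pass                         # some hardened setups restrict writes

# Output is as unpredictable as before:
with open("/dev/urandom", "rb") as f:
    sample = f.read(16)
```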

If an embedded box is worried about not having enough entropy at the 
beginning of time, at its first boot, then feeding the power-up contents 
of RAM into /dev/urandom can't hurt.  No, RAM is not entirely random in 
its power-up content, but neither is it completely predictable.  If only 
a couple percent of the contents are unpredictable, it doesn't take much 
RAM to completely stir the Linux entropy pool. 
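The usual complement to the power-up-RAM trick is to carry a seed across boots, which is what the distribution init scripts do with a saved random-seed file.  A rough sketch of the idea; the seed path here is hypothetical, and distributions differ:

```python
import os

SEED_FILE = "/var/lib/misc/random-seed"   # hypothetical path; distros vary

def restore_seed(path=SEED_FILE):
    """At boot: mix the saved seed into the pool (no entropy credited)."""
    try:
        with open(path, "rb") as src, open("/dev/urandom", "wb") as dst:
            dst.write(src.read())
    except OSError:
        pass   # very first boot, or no permission: nothing to restore

def save_seed(path=SEED_FILE, nbytes=512):
    """At shutdown: save fresh output so the next boot starts unpredictable."""
    with open("/dev/urandom", "rb") as src:
        seed = src.read(nbytes)
    with open(path, "wb") as dst:
        dst.write(seed)
    os.chmod(path, 0o600)        # the seed must be kept secret
```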

> /dev/urandom should be quite secure, with the usual precaution to
> go to /dev/random for long-term key generation.

Much more important for long-term key generation is to use a secure 
machine, but go ahead and use one with a mouse to generate *plenty* of 
entropy...and no spyware.

> Yes. Perhaps a good example of how things that seem random turn out
> not to be so is Knuth's example of a sequence of operations that seem
> to be getting random sequences of digits, operating on them, etc,
> etc, only to find the output quickly converging to a stable
> (constant) value.

That was a long time ago.  A lot of study has gone into the subject 
since then.  He wasn't thinking of pseudo-random number generation as 
being cryptography, which it is.

>> Yes, proving randomness is hard, but if a random number generator is
>> (a) designed with thought, (b) carefully constructed, and (c) built
>> out of good parts (i.e., SHA-1), the result can be very good.  The
>> Linux random number generator makes a good stab at all three of
>> those points.
> I would only add (d) public independent peer-review.

That is rare.  Getting really qualified minds to look at it hard is 
difficult; they are busy doing other things.

Slight change of topic: One should never invent one's own cryptographic 
primitives.  Even if they are as good as (or better than) the public 
ones, how would you ever know?  OK, if you are really paranoid and want 
to layer your own cryptography on top of something standard (using 
unrelated keys!), it can't hurt.  Except for the distraction it will be 
from other aspects of your security.  RNGs are useless alone, and if the 
system they are part of isn't secure, then it doesn't matter. 

Cut-to-the-chase, bottom line: /dev/urandom is great.  If you 
roll-your-own embedded system, make sure your entropy sources are turned 
on.

After that you are free to forget about issues surrounding /dev/urandom, 
but make sure the rest of your system is secure.

-kb, the Kent who has built RNGs (in Python and a physical box) but 
always out of established cryptographic primitives.

More information about the ubuntu-users mailing list