/dev/random Extremely Slow?

Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for usage later on (benchmarking database operations). I ran a few "head -1" on /dev/random and it seemed to be working out fine, but after calling it a few times, it would just kinda hang. After a few minutes, it would finally output a small block of text, then finish.

I switched to /dev/urandom (I really didn't want to, since it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I was wondering if it was the "head" command that was bombing it, so I tried doing some simple I/O using Perl, and it too was hanging. As a last-ditch effort, I used the "dd" command to dump some data out of it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to get ~400 bytes before I killed it.

I checked the process lists, CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this and what can I do to prevent/fix it in the future?

Edit: Thanks for the help guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)
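
In case it helps anyone else, here is a minimal sketch of the kind of read the script does (not my exact code; the 16-byte count is just illustrative):

    # Read raw bytes from /dev/urandom and convert them to a hex string.
    use strict;
    use warnings;

    open(my $fh, '<:raw', '/dev/urandom') or die "open: $!";
    read($fh, my $bytes, 16) == 16 or die "short read: $!";
    close($fh);

    my $hex = unpack('H*', $bytes);   # 'H*' = hex digits, high nibble first
    print "$hex\n";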

Patio answered 27/1, 2011 at 16:57 Comment(4)
You seem to have the 2 devices mixed up; on a linux system, /dev/random is the high-quality, blocking random device. It will "hang" when there's no more collected entropy available to generate high-quality random numbers. /dev/urandom should be non-blocking and pseudorandom.Parian
Concerning /dev/random, see wiki: "When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered." /dev/urandom should be non-blocking though, are you sure you used that?Comus
As an aside, you ran head -1; this has the effect of reading one line, i.e. reading until it encounters a newline. If you're trying to read a small amount of data, you should probably use dd instead.Uncompromising
Although it doesn't block, even /dev/urandom isn't all that well suited for generating large amounts of random data. It sounds like you're not too worried about security in this context, though, so maybe you could grab a few bytes from /dev/urandom and use that to seed the Python (C, whatever) PRNG?Blueprint

On most Linux systems, /dev/random is powered by actual entropy gathered from the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to feed it.

I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.

Try waiting a little while and then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool by reading so much from /dev/random, stalling both generators; allowing your system to create more entropy should replenish them.

See Wikipedia for more info about /dev/random and /dev/urandom.
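
As a quick check of what the kernel thinks it has available, you can read its entropy counter from procfs. A minimal sketch (the path is the standard Linux procfs entry; the value is the kernel's estimate in bits):

    # Print the kernel's estimate of available entropy, in bits.
    use strict;
    use warnings;

    open(my $fh, '<', '/proc/sys/kernel/random/entropy_avail') or die "open: $!";
    chomp(my $bits = <$fh>);
    close($fh);
    print "entropy_avail: $bits bits\n";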

Torbernite answered 27/1, 2011 at 17:4 Comment(4)
Ah, it seems I've mixed up which of the two was not considered secure for cryptographic usage. If I do write to /dev/random (to mix in extra random data), would this help solve the problem and what "return on investment" would I be getting? (For example, if I write in 1MB of data, how many MB of reading will I be guaranteed to get out of it, or is that even the case?)Patio
@GigaWatt: That really depends on how many bits of entropy the system gets from your writes. This will also be fairly futile unless you possess a better source of entropy than /dev/random, in which case you should probably be letting the kernel handle that entropy source in the first placeUncompromising
@GigaWatt: yes, you can write to /dev/random, but that wouldn't help because you wouldn't be calling the ioctl that credits the entropy count, and even if you get higher-quality random data, it won't be faster. Read about rngd.Branson
@Torbernite Both /dev/random and /dev/urandom are seeded from the same CSPRNG. /dev/urandom is not distinguishable from true random, given existing time, cryptanalysis, and computing power. The myth that somehow /dev/random is more "true random" than /dev/urandom is just that: a myth. /dev/random only blocks when the input entropy pool is exhausted or there are fewer than 160 bits of data in the blocking entropy pool. But it uses the same CSPRNG as /dev/urandom, and there is no practical difference between the two, unless you're looking at some information-theoretic algorithm. Moral? Use /dev/urandom.Bedspread

This question is pretty old, but still relevant, so I'm going to give my answer. Many CPUs today come with a built-in hardware random number generator (RNG), and many systems come with a trusted platform module (TPM) that also provides an RNG. There are other options that can be purchased, but chances are your computer already has something.

You can use rngd from the rng-tools package on most Linux distros to feed more random data into the kernel's pool. For example, on Fedora 18 all I had to do to enable seeding from the TPM and the CPU RNG (the RDRAND instruction) was:

# systemctl enable rngd
# systemctl start rngd

You can compare speed with and without rngd. It's a good idea to run rngd -l -f from the command line; that will show you the detected entropy sources. Make sure all the modules needed to support your sources are loaded. To use the TPM, it needs to be activated through tpm-tools. Update: here is a nice howto.
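
To make the comparison concrete, here is a minimal sketch that times a small blocking read from /dev/random (the 64-byte count is just an example); run it once before and once after starting rngd:

    # Time how long it takes to read a few bytes from the blocking device.
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $want = 64;                                  # bytes to read; illustrative
    open(my $fh, '<:raw', '/dev/random') or die "open: $!";
    my $t0  = [gettimeofday];
    my $got = 0;
    while ($got < $want) {
        my $n = sysread($fh, my $buf, $want - $got);
        die "read: $!" unless defined $n;
        last if $n == 0;
        $got += $n;
    }
    close($fh);
    printf "read %d bytes in %.2f seconds\n", $got, tv_interval($t0);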

BTW, I've read some concerns on the Internet about TPM RNGs being broken in various ways, but nothing concrete against the RNGs found in Intel, AMD, and VIA chips. Using more than one source would be best if you really care about randomness quality.

urandom is good for most use cases (except sometimes during early boot). Most programs nowadays use urandom instead of random. Even openssl does that. See myths about urandom and comparison of random interfaces.

In recent Fedora and RHEL/CentOS, rng-tools also supports jitter entropy. You can use it if you lack hardware options, or if you just trust it more than your hardware.

UPDATE: another option for more entropy is HAVEGED (of questioned quality). On virtual machines there is the kvm/qemu virtio-rng device (recommended).

UPDATE 2: Since Linux 5.6, the kernel does its own jitter entropy.

Branson answered 8/2, 2013 at 16:10 Comment(1)
Thank you!!! For Ubuntu 20.04 the package and the service name are both rng-toolsFacer

Use /dev/urandom; it's cryptographically secure.

Good read: http://www.2uo.de/myths-about-urandom/

"If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter."

When in doubt during early boot, whether enough entropy has been gathered yet, use the getrandom() system call instead [1] (available since Linux kernel 3.17). It's the best of both worlds:

  • it blocks until (only once!) enough entropy is gathered,
  • after that it will never block again.

[1] git kernel commit
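
For illustration only, a rough sketch of calling getrandom(2) from Perl via syscall(); the hard-coded 318 is the x86-64 syscall number and differs on other architectures, so real code would normally go through a library or the glibc wrapper instead:

    # Ask the kernel for random bytes via getrandom(2).
    use strict;
    use warnings;

    my $want = 16;                        # illustrative byte count
    my $buf  = "\0" x $want;              # buffer must be pre-extended
    my $n    = syscall(318, $buf, $want, 0);   # 318 = getrandom on x86-64
    die "getrandom: $!" if $n < 0;
    print unpack('H*', substr($buf, 0, $n)), "\n";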

Jaquiss answered 14/1, 2015 at 15:10 Comment(1)
Why was this answer downvoted? It seems like a good answer to me... but maybe I'm missing the problem with it.Vinita

If you want more entropy for /dev/random then you'll either need to purchase a hardware RNG or use one of the *_entropyd daemons in order to generate it.

Rouen answered 27/1, 2011 at 17:5 Comment(1)
Generating entropy is a hard problem. It is safer for users (anybody without deep crypto knowledge) to avoid changing the default mechanism. Better to stick with /dev/urandom, as explained here: 2uo.de/myths-about-urandomBranson

This fixed it for me (Java): use new SecureRandom() instead of SecureRandom.getInstanceStrong(), since the strong instance typically reads from the blocking /dev/random on Linux.

Some more info can be found here : https://tersesystems.com/blog/2015/12/17/the-right-way-to-use-securerandom/

Illboding answered 22/3, 2019 at 10:16 Comment(0)

If you are using randomness for testing (not cryptography), then repeatable randomness is better; you can get it from a pseudo-random generator started at a known seed. There is usually a good library function for this in most languages.

It is repeatable, for when you find a problem and are trying to debug. It also does not eat up entropy. Maybe seed the pseudo-random generator from /dev/urandom and record the seed in the test log. Perl has a pseudo-random number generator you can use, as sketched below.
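
A minimal Perl sketch of that idea (the 4-byte seed size is just a convenient choice):

    # Seed Perl's built-in PRNG from /dev/urandom, but log the seed so the
    # exact same sequence can be replayed when debugging a failed test.
    use strict;
    use warnings;

    open(my $fh, '<:raw', '/dev/urandom') or die "open: $!";
    read($fh, my $raw, 4) == 4 or die "short read";
    close($fh);

    my $seed = unpack('L', $raw);       # unsigned 32-bit, native byte order
    print STDERR "test seed: $seed\n";  # record this in the test log
    srand($seed);                       # reuse the logged seed to reproduce a run

    printf "%.6f\n", rand() for 1 .. 3; # repeatable pseudo-random values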

Alluvion answered 29/5, 2012 at 21:17 Comment(3)
/dev/random is the same pseudo-random generator as urandom; see 2uo.de/myths-about-urandomBranson
@Branson The article says that /dev/random can block. Also, it and /dev/urandom are, as it says, unpredictable, and therefore, as I said, unrepeatable, so poor for testing.Alluvion
/dev/(u)random is a pseudorandom number generator. Predictability and repeatability are different properties. You just lack the necessary kernel controls to make them predictable and repeatable (and thank God for that). I agree other generators are better for particular testing scenarios. Your answer, I must agree, gives reasonable advice for the OP's particular use case. But SO will not let me revert my vote until the answer is edited, which would be a good thing anyway.Branson

/dev/random should be pretty fast these days. However, I did notice that on OS X reading small amounts from /dev/urandom was really slow. The workaround there seems to be to use arc4random instead: https://github.com/crystal-lang/crystal/pull/11974

Huntington answered 1/11, 2022 at 4:20 Comment(0)
