Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use Perl's unpack() function to convert it to a hex string for later use (benchmarking database operations). I ran a few "head -1" calls on /dev/random and it seemed to be working fine, but after calling it a few times, it would just kinda hang. After a few minutes, it would finally output a small block of text, then finish.
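For reference, this is roughly the kind of read the script was doing (a minimal sketch; the 16-byte read size is arbitrary, and the device path can be swapped between /dev/random and /dev/urandom):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read 16 raw bytes from the device and hex-encode them with unpack().
    open(my $fh, '<:raw', '/dev/urandom') or die "open: $!";
    read($fh, my $buf, 16) == 16 or die "short read: $!";
    close($fh);
    print unpack('H*', $buf), "\n";   # 32 hex characters, e.g. "9f3a07c4..."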
I switched to /dev/urandom (I really didn't want to; it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I wondered whether it was the "head" command that was causing the problem, so I tried doing some simple I/O with Perl, and that hung too. As a last-ditch effort, I used the "dd" command to dump some data from it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to produce ~400 bytes before I killed it.
I checked the process list; CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this, and what can I do to prevent/fix it in the future?
Edit: Thanks for the help, guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)
/dev/random, see wiki: "When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered." /dev/urandom should be non-blocking though; are you sure you used that? – Comus
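As the quote above suggests, the blocking is driven by the kernel's entropy pool. On Linux, the current pool estimate is exposed in /proc/sys/kernel/random/entropy_avail, so you can watch it drain while reading from /dev/random. A quick check (a minimal sketch, assuming a Linux box where that path exists):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Print the kernel's current entropy estimate, in bits (Linux-specific path).
    open(my $fh, '<', '/proc/sys/kernel/random/entropy_avail')
        or die "open: $! (not a Linux system?)";
    chomp(my $bits = <$fh>);
    close($fh);
    print "entropy available: $bits bits\n";

Reads from /dev/random start blocking when this number runs low.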
head -1 will have the effect of reading one line, i.e. reading until it encounters a newline. If you're trying to read a small amount of data, you should probably use dd instead. – Uncompromising
/dev/urandom isn't all that well suited for generating large amounts of random data. It sounds like you're not too worried about security in this context, though, so maybe you could grab a few bytes from /dev/urandom and use that to seed the Python (C, whatever) PRNG? – Blueprint
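A sketch of what Blueprint is suggesting, in Perl rather than Python (the 4-byte seed and 16-byte output are arbitrary choices):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Seed Perl's built-in PRNG from /dev/urandom once...
    open(my $fh, '<:raw', '/dev/urandom') or die "open: $!";
    read($fh, my $seed, 4) == 4 or die "short read: $!";
    close($fh);
    srand(unpack('L', $seed));    # 'L' = 32-bit unsigned integer

    # ...then generate as much non-cryptographic random hex as you like.
    my $hex = join '', map { sprintf '%02x', int rand 256 } 1 .. 16;
    print "$hex\n";

This touches the kernel's random devices only for the initial 4-byte read, so it won't block no matter how much output you generate, which is plenty for benchmarking (but not for anything security-sensitive).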