Is this practical on a small supercomputer?
I'm investigating WEP and as part of that, I'm toying with the RC4 algorithm. I'm trying to decide if an inverse table is feasible to write (although large... I don't have the space and I don't intend to write one). For that, I've decided to check how many matching outputs there are in the first 10 bytes. That will help me decide how well an inverse table would work.

Of course, a 64-bit RC4 key has 2^64 possible values, so that would mean making ~2^128 comparisons. Plus, 10 bytes have to be generated for each comparison, which is approximately 266 loop iterations (256 for RC4 initialization, 10 for the bytes themselves).
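For scale, the per-key work being counted here (256 key-scheduling swaps plus 10 keystream bytes, ~266 loop passes) can be sketched in Java roughly like this; this is a generic RC4 sketch, not the pastie code, and the class and method names are mine:

```java
public class Rc4Probe {
    // First n keystream bytes of RC4 for a given key:
    // 256 key-scheduling swaps + n output-generation steps,
    // i.e. ~266 loop passes for n = 10, matching the question's estimate.
    public static byte[] keystream(byte[] key, int n) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;
        // Key-scheduling algorithm (KSA): 256 iterations
        for (int i = 0, j = 0; i < 256; i++) {
            j = (j + s[i] + (key[i % key.length] & 0xFF)) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        // Pseudo-random generation algorithm (PRGA): n bytes
        byte[] out = new byte[n];
        for (int k = 0, i = 0, j = 0; k < n; k++) {
            i = (i + 1) & 0xFF;
            j = (j + s[i]) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[k] = (byte) s[(s[i] + s[j]) & 0xFF];
        }
        return out;
    }

    public static void main(String[] args) {
        // Known test vector: key "Key" yields keystream EB 9F 77 81 B7 34 CA 72 A7 19
        for (byte b : keystream("Key".getBytes(), 10)) System.out.printf("%02X ", b);
        System.out.println();
    }
}
```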

Down to business:

On a supercomputer with around 100 cores, would it be possible to perform around 2^135 calculations in 20 days?

(20 days is the limit until I am kicked off. I could end up with only 8, or I could end up with 400+, but I'm guessing 100 cores.)

If it means anything, my program is written in Java. http://pastie.org/2118864

Peccant answered 25/6, 2011 at 1:47 Comment(1)
@Mitch It's in there, kind of buried. I should move the code to a pastebin. "On a supercomputer with around 100 cores (I could end up with only 8, or I could end up with 400+, but I'm guessing 100), would it be possible to perform around 2^135 calculations in 20 days?"Peccant
Interesting question, and very hard to answer properly. Scalability is one of those "try it and see" things most of the time.

One thing to note is that because of other factors, you're going to get less-than-linear scaling with multi-core systems.

Let's say your program can compare n keys per second. So an ideal (i.e. linear) 100-core system will compute 100n keys per second. To compare all keys (a worst-case scenario, realistic would be half of that) would take (2^135/100n)/86400 days.

If n is 1000, it would take 5041220250680569829087031221211 days, which is about 10^18 times (a billion billion times) longer than some estimates of the age of the universe.
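The figure above is easy to reproduce with BigInteger; the rate n = 1000 keys per second and the 100 cores are the assumptions from this answer, and the class name is mine:

```java
import java.math.BigInteger;

public class BruteForceDays {
    public static void main(String[] args) {
        BigInteger comparisons = BigInteger.valueOf(2).pow(135);          // total work from the question
        BigInteger keysPerSec = BigInteger.valueOf(100L * 1000L);         // 100 cores * n = 1000 keys/sec each
        BigInteger days = comparisons.divide(keysPerSec)
                                     .divide(BigInteger.valueOf(86400)); // seconds per day
        System.out.println(days); // 5041220250680569829087031221211
    }
}
```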

So I'm going to say... no :) Cryptography algorithms are designed for these kinds of attacks. Also, Java would be the last language to choose when writing such an application :p

Evidential answered 25/6, 2011 at 2:7 Comment(2)
Yep, rainbow tables don't generate the whole set of possible keys, they usually take dictionaries/wordlists and generate the keys from those, so that they can be stored and looked up. WPA offers some defense against that by hashing the key against the SSID before encryption (meaning rainbow tables need to be generated for each SSID).Evidential
Thanks for the help! I'll look more into rainbow tables.Peccant
In an ideal world, it's around:

2^135 operations ÷ 20 days ÷ 24 hours/day ÷ 60 min/hour ÷ 60 sec/min ÷ 100 cores ≈ 10^32 operations per second per core (Hz/core), assuming my math isn't off.

You'd need 10^32 Hz cores that do a single calculation per clock cycle; normally it takes multiple. That's... not very reachable at the moment, to say the least. The best you'd get from a supercomputer is probably somewhere around ~10 GHz = 10^10 Hz per core, if you're lucky.
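The division above can be checked with plain doubles (class name mine; 20 days and 100 cores as assumed in this answer):

```java
public class ClockCheck {
    public static void main(String[] args) {
        double ops = Math.pow(2, 135);  // total operations from the question
        double seconds = 20 * 86400.0;  // 20-day time budget in seconds
        double cores = 100;
        double hzPerCore = ops / seconds / cores;
        System.out.printf("%.2e ops/sec/core%n", hzPerCore); // ~2.5e32, i.e. on the order of 10^32 Hz
    }
}
```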

(And this is all ignoring Amdahl's law...)

Dinothere answered 25/6, 2011 at 2:2 Comment(9)
What if I'm doing multiple calculations simultaneously (threads). Does that change anything?Peccant
@Ryan: ... idk, you tell me. Say you have 1 TRILLION threads per core. What happens when you divide 10^32 by one trillion? How close is it to 10^10...?Dinothere
You'll be doing one calculation on one core, so that will speed things up AT MOST to 100 times faster. Still way offEvidential
@Mehrdad I suppose I'm too hopeful :PPeccant
@Ryan Amos - mathematics doesn't lie.Intransigent
@Stephen C: I'm well aware of that.Peccant
So why did you say "I suppose I'm too hopeful" ? Hope has nothing to do with it. As I said, mathematics doesn't lie. (I'm sorry, but I find it hard to accept that someone here is incapable of doing this simple calculation for themselves.)Intransigent
@Stephen C: I did the calculations and found the number of operations needed. I thought it was a bit ridiculous, but when I confronted my dad about it, he said it might be feasible. Please note that my dad has PhD in mathematics. So I decided that I may as well check in with SO to see if I had or had not wasted my time writing the program. Is that so wrong? As well, I haven't done any research on supercomputers, EVER.Peccant
@Ryan Amos - he must be a pure mathematician. Or perhaps he was just pulling your leg :-)Intransigent
These numbers are somewhat fictional; they are mostly there to make a point, and the math is grossly over-optimistic to keep it simple.

  1. A single core can process 4 billion (2^32) operations a second (a hugely optimistic figure)

  2. and since there are 86400 seconds (round up to 2^17) in a day

  3. and 20 days (round up to 2^5)

  4. and 100 cores (round up to 2^7)

then... 2^32 * 2^17 * 2^5 * 2^7 == 2^(32 + 17 + 5 + 7) == 2^61 calculations ... so:

Not a chance. Not even remotely close. The amount of calculation remaining is so staggering I can't even fathom what it really is. Sorry :-)

(With the above figures it would take about 2^79 days...)
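Since everything above is rounded to powers of two, the whole estimate reduces to adding exponents; a tiny sketch (variable names mine):

```java
public class PowerOfTwoBudget {
    public static void main(String[] args) {
        // Everything as (rounded-up) powers of two, so multiplying means adding exponents.
        int opsPerCorePerSec = 32; // 2^32 ops/sec/core
        int secondsPerDay = 17;    // 86400 rounded up to 2^17
        int days = 5;              // 20 rounded up to 2^5
        int cores = 7;             // 100 rounded up to 2^7
        int doneIn20Days = opsPerCorePerSec + secondsPerDay + days + cores;
        System.out.println("work done in 20 days: 2^" + doneIn20Days);             // 2^61
        System.out.println("20-day runs still needed: 2^" + (135 - doneIn20Days)); // 2^74
    }
}
```

2^74 more 20-day allocations is where the "not a chance" comes from; multiplying back by the 20-day run length gives the ~2^79-day figure.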

Fasten answered 25/6, 2011 at 2:4 Comment(1)
2^135 sounded a bit out of range, but I had to check, just to be sure :D Thanks for the helpPeccant
People don't realize how big a number can be.

2^135 is roughly 4e40; written out in full, that's 43556142965880123323311949751266331066368.

Suppose you have a computer capable of 1 exaflops (10^18 floating-point operations per second), vastly faster than anything we have today. If it could perform one of these computations with every floating-point operation, it could do 10^18 of them per second. That still leaves you needing about 4e22 seconds. There are roughly 31536000 seconds in a year, so your little enterprise will take more than 10^15 years.

Ok, depending on who you talk to, the universe is somewhere between 6000 years old, and 13 billion years or so.

Java or not, the answer is no. (Is Skynet here yet?)

Salvadorsalvadore answered 25/6, 2011 at 2:11 Comment(4)
So the answer is that I shouldn't have waited so long to start this project: I should have started 3 billion years ago.Peccant
Yes, and every second wasted is a few more gazzillion computations lost, so get to work!Salvadorsalvadore
+1 (And a little rant of my own) The number actually looks quite tiny when written as "43556142965880123323311949751266331066368". I mean, no funny exponents and I can write it on a single line of paper. "Just" write out 40 billion (American, not English) 3 times in a row and there it is ;-) ... the true impressiveness of "large" numbers isn't evident without some sort of scale to just show how they can only work with such concepts in theory. (It's much easier to double a [theoretical] number than it is to double the speed or capacity of a machine.)Fasten
@pst Write it out in tallies :D. And I agree, it's amazing how large numbers grow: exponentially. Try running the key generator I wrote. It demonstrates this perfectly. At first, it flies through the first row pretty quickly. It takes 256 times longer to go through the second row, but still manageable. By the 3rd and 4th rows, you just say "When is this going to end???".Peccant
There are at least 2^240 atoms in the universe, so even at one computation per atom per day you wouldn't need half of them. Then again, didn't Bill Gates once say "who would ever need more than half the atoms in the universe?"

Travancore answered 25/6, 2011 at 2:10 Comment(4)
How does this relate? We haven't gone into the 6th stage of evolution yet: en.wikipedia.org/wiki/…Peccant
"wouldn't even need half" is a bit of an understatement (or overstatement?) ;-)Fasten
@pst @Dax: 1/2 would be 2^239 ;) You would need around 2^240 * 2^-100 atoms, assuming 1 atom per computationPeccant
Certainly if we can harness every atom in the universe to do computations, they can compute a way to create new atoms from the ether. So yeah, I guess it doesn't relate.Travancore
