How many iterations of Rabin-Miller should I use for cryptographic safe primes?

I am generating a 2048-bit safe prime for a Diffie-Hellman-type key: a prime p such that both p and (p-1)/2 are prime.

How few iterations of Rabin-Miller can I use on both p and (p-1)/2 and still be confident of a cryptographically strong key? In the research I've done I've heard everything from 6 to 64 iterations for 1024-bit ordinary primes, so I'm a little confused at this point. And once that's established, does the number change if you are generating a safe prime rather than an ordinary one?

Computation time is at a premium, so this is a practical question - I'm basically wondering how to find out the lowest possible number of tests I can get away with while at the same time maintaining pretty much guaranteed security.
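For concreteness, here is a minimal sketch of the generation loop I have in mind (hand-rolled, in Python; the helper names are mine, and rounds is exactly the knob I'm asking about):

    import secrets

    def miller_rabin(n: int, rounds: int) -> bool:
        """Strong-probable-prime test with uniformly random bases."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p        # catches small primes and cheap composites
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = secrets.randbelow(n - 3) + 2     # random base in [2, n-2]
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                     # definitely composite
        return True                              # probably prime

    def gen_safe_prime(bits: int, rounds: int) -> int:
        """Search until (p-1)/2 and p both pass Miller-Rabin."""
        while True:
            # q: random odd (bits-1)-bit number with its top bit set,
            # so that p = 2*q + 1 is a full bits-bit candidate.
            q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1
            if miller_rabin(q, rounds) and miller_rabin(2 * q + 1, rounds):
                return 2 * q + 1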

Great answered 13/6, 2011 at 0:8 Comment(2)
Might be a good question for security.stackexchange.com (Cullum)
Voted to close and migrate to Information Security, as this is a crypto question (and it was crossposted there, anyway - but with fewer answers). (Viipuri)

Let's assume that you select a prime p by selecting random values until you hit one for which Miller-Rabin says: that one looks like a prime. You use n rounds at most for the Miller-Rabin test. (For a so-called "safe prime", things are not changed, except that you run two nested tests.)

The probability that a random 1024-bit integer is prime is about 1/900. Now, you do not want to do anything stupid so you generate only odd values (an even 1024-bit integer is guaranteed non-prime), and, more generally, you run the Miller-Rabin test only if the value is not "obviously" non-prime, i.e. can be divided by a small prime. So you end up with trying about 300 values with Miller-Rabin before hitting a prime (on average). When the value is non-prime, Miller-Rabin will detect it with probability 3/4 at each round, so the number of Miller-Rabin rounds you will run on average for a single non-prime value is 1+(1/4)+(1/16)+... = 4/3. For the 300 values, this means about 400 rounds of Miller-Rabin, regardless of what you choose for n.

So if you select n to be, e.g., 40, then the cost implied by n is less than 10% of the total computational cost. The random prime selection process is dominated by the test on non-primes, which are not impacted by the value of n you choose. I talked here about 1024-bit integers; for bigger numbers the choice of n is even less important since primes become sparser as size increases (for 2048-bit integers, the "10%" above become "5%").
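To put numbers on this, a quick back-of-the-envelope sketch (a toy calculation, reusing the ~300 candidates and 4/3 rounds-per-composite figures from above):

    # Share of the total Miller-Rabin work that actually depends on n (1024-bit case).
    candidates_per_prime = 300    # odd values surviving small-prime sieving
    rounds_per_composite = 4 / 3  # geometric series 1 + 1/4 + 1/16 + ...

    for n in (10, 40, 64):
        on_composites = candidates_per_prime * rounds_per_composite  # ~400 rounds
        total = on_composites + n          # plus n rounds on the prime itself
        print(f"n={n:2d}: {n / total:.1%} of all rounds go to the final prime")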

Hence you can choose n=40 and be happy with it (or at least know that reducing n will not buy you much anyway). On the other hand, using an n greater than 40 is meaningless, because this would get you to probabilities lower than the risk of a simple miscomputation. Computers are hardware, they can have random failures. For instance, a primality test function could return "true" for a non-prime value because a cosmic ray (a high-energy particle hurtling through the Universe) happens to hit just the right transistor at the right time, flipping the return value from 0 ("false") to 1 ("true"). This is very unlikely -- but no less likely than probability 2^(-80). See this stackoverflow answer for a few more details. The bottom line is that regardless of how you make sure that an integer is prime, you still have an unavoidable probabilistic element, and 40 rounds of Miller-Rabin already give you the best that you can hope for.

To sum up, use 40 rounds.

Sectional answered 13/6, 2011 at 12:1 Comment(1)
Thank you. You actually directly answered my question. (Great)

The paper Average case error estimates for the strong probable prime test by Damgård, Landrock, and Pomerance points out that, if you randomly select a k-bit odd number n and apply t independent Rabin-Miller tests in succession, the probability that n is composite obeys much stronger bounds.

In fact, for 3 <= t <= k/9 and k >= 21, the probability p(k,t) that a randomly selected k-bit odd number surviving t rounds is composite satisfies:

p(k,t) < k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k))

For a k=1024 bit prime, t=6 iterations give you an error rate less than 10^(-40).
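A quick way to check such figures (a sketch; dlp_bound_log2 is a throwaway name) is to evaluate the bound in log2 so the huge exponents stay manageable:

    import math

    def dlp_bound_log2(k: int, t: int) -> float:
        """log2 of the bound p(k,t) < k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k)),
        valid for 3 <= t <= k/9 and k >= 21."""
        assert k >= 21 and 3 <= t <= k / 9
        return 1.5 * math.log2(k) + t - 0.5 * math.log2(t) + 2 * (2 - math.sqrt(t * k))

    for k, t in [(1024, 6), (2048, 3), (3072, 3)]:
        b = dlp_bound_log2(k, t)
        print(f"k={k}, t={t}: error < 2^{b:.0f} ~= 10^{b * math.log10(2):.0f}")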

Louie answered 30/1, 2014 at 7:42 Comment(2)
For a k=2048 bit prime, t=3 iterations give you an error rate less than 10^(-40). That is much less likely than the >> 10^(-20) chance of a random bit flip in your computation (10^(-40) with ECC). (Louie)
This is the only answer that is well-informed. (Cher)

Each iteration of Rabin-Miller reduces the probability that the number is composite by a factor of at least 4 (a composite survives any single round with probability at most 1/4).

So after 64 iterations, there is at most 1 chance in 2^128 that the number is composite.

Assuming you are using these for a public key algorithm (e.g. RSA), and assuming you are combining that with a symmetric algorithm using (say) 128-bit keys, an adversary can guess your key with that probability.

The bottom line is to choose the number of iterations to put that probability within the ballpark of the other sizes you are choosing for your algorithm.

[update, to elaborate]

The answer depends entirely on what algorithms you are going to use the numbers for, and what the best known attacks are against those algorithms.

For example, according to Wikipedia:

As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys.

So, if you are planning to use these primes to generate (say) a 1024-bit RSA key, then there is no reason to run more than 40 iterations or so of Rabin-Miller. Why? Because by the time you hit a failure, an attacker could crack one of your keys anyway.

Of course, there is no reason not to perform more iterations, time permitting. There just isn't much point to doing so.

On the other hand, if you are generating 2048-bit RSA keys, then 56 (or so) iterations of Rabin-Miller is more appropriate.
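In code form, the rule of thumb behind those numbers is a one-liner (a sketch; mr_rounds_for_security is a made-up name, and "security bits" means the symmetric-equivalent strength you chose for the rest of the system):

    import math

    def mr_rounds_for_security(security_bits: int) -> int:
        # Each Miller-Rabin round shrinks the worst-case error by a factor
        # of 4 (2 bits), so match 4^-t against the target 2^-security_bits.
        return math.ceil(security_bits / 2)

    print(mr_rounds_for_security(80))   # 40 rounds (~1024-bit RSA strength)
    print(mr_rounds_for_security(112))  # 56 rounds (~2048-bit RSA strength)
    print(mr_rounds_for_security(128))  # 64 rounds (~3072-bit RSA strength)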

Cryptography is typically built as a composition of primitives, like prime generation, RSA, SHA-2, and AES. If you want to make one of those primitives 2^900 times stronger than the others, you can, but it is a little like putting a 10-foot steel vault door on a log cabin.

There is no fixed answer to your question. It depends on the strength of the other pieces going into your cryptographic system.

All that said, 2^-128 is a ludicrously tiny probability, so I would probably just use 64 iterations :-).

Tetrabranchiate answered 13/6, 2011 at 0:20 Comment(16)
I know what the probabilities are. I'm asking (because I don't have the experience) if in the real world running the test 1000 times is useful in a practical security sense. (Great)
But that is entirely my point. Running for 1000 iterations is pointless if you are going to use this for (say) 1024-bit RSA keys. I will update my answer to explain why. (Tetrabranchiate)
(Actually I should say I'm asking specifically for the lower limit - what you would do to calculate that.) (Great)
@jnm2: We are talking about things that will not happen before the heat death of the universe (and then some). The point is that if the attacker can crack your key in less time than it takes to produce a bad key, what is the point of working harder to avoid producing a bad key? (Tetrabranchiate)
The point is, 64 iterations is slower than I'd like. Can I get away with 32 or 16 or 8 and still be confident in the cryptographic strength? (Great)
The argument works both ways. That is, unless you are also willing to reduce your RSA and symmetric cipher key sizes, too, I wouldn't. (Tetrabranchiate)
I'm not actually using RSA. It's RSP, an EKE variant of Diffie-Hellman. I'm implementing the entire thing myself. So I'm starting from scratch anyway and am interested in optimization. (Great)
So I would say estimate the strength of your RSP implementation and go from there... (Tetrabranchiate)
Even with the update, I still don't see what you're getting at - how does the probability of the number being composite directly affect how easy it is for an attacker to compromise your security? (Lesleylesli)
@Nick: They are not "directly" related. But remember that nothing is perfect. What we are really asking is, how frequently are we going to generate a bad key? I am saying that if you are willing to tolerate an attacker who can break your key in 1 year, you should be willing to tolerate a bad key about once a year (roughly). And making one of these 2^900 times stronger than the other makes no sense. (Tetrabranchiate)
@Nemo: RSP is basically as strong as the keys I'm generating since there are no other weak points. In other words, I'm defining the strength of the RSP implementation. That's what I'm asking about. What's the lowest safe number of iterations? (Great)
@jnm2: Obviously the strength varies with the number of bits in your key. So how are you deciding how large to make your keys? Presumably it is based on the effort you estimate an adversary would need to break them. Whatever that estimated effort is, I would use it to pick this number, too. (Tetrabranchiate)
I have that in my question. 4096-bit, each prime 2048-bit. I might consider 2048-bit instead of 4096. (Great)
@jnm2: Ah, sorry I forgot the question :-). Yeah, in that case, I would just use RSA's estimates for key strengths (from the Wikipedia quote). As far as I know, the best attacks on RSA involve factoring the modulus, so those estimates (as of 2003) were presumably using best-known factoring algorithms. (Tetrabranchiate)
I read the Wikipedia article and didn't see anything about how many Rabin-Miller iterations were considered safe for a key around 4096 bits. That's all I'm asking. (Great)
@Tetrabranchiate I think that's an unfounded assumption, and it depends entirely on the impact of a bad key, which isn't stated. (Lesleylesli)

From the libgcrypt source, cipher/primegen.c line 1295:

    /* We use 64 Rabin-Miller rounds which is better and thus sufficient.
       We do not have a Lucas test implementaion thus we can't do it in the
       X9.31 preferred way of running a few Rabin-Miller followed by one
       Lucas test. */

Reorder answered 23/4, 2015 at 19:46 Comment(0)

I would run two or three iterations of Miller-Rabin (i.e., strong Fermat probable prime) tests, making sure that one of the bases is 2.

Then I would run a strong Lucas probable prime test, choosing D, P, and Q with the method described here: https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test

There are no known composites that pass this combination of Fermat and Lucas tests.

This is much faster than doing 40 Rabin-Miller iterations. In addition, as was pointed out by Pomerance, Selfridge, and Wagstaff in https://math.dartmouth.edu/~carlp/PDF/paper25.pdf, there are diminishing returns with multiple Fermat tests: if N is a pseudoprime to one base, then it is more likely than average to be a pseudoprime to other bases. That's why, for example, so many pseudoprimes base 2 are also pseudoprimes base 3.
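A minimal sketch of this combination (assuming sympy is available; its isprime already implements a Baillie-PSW-style test, and sympy.ntheory.primetest.is_strong_lucas_prp performs the strong Lucas step with Selfridge's parameters):

    from sympy.ntheory.primetest import is_strong_lucas_prp

    def strong_fermat_prp(n: int, base: int) -> bool:
        """Single-base strong Fermat (Miller-Rabin) test; assumes n odd, n > 3."""
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(base, d, n)
        if x in (1, n - 1):
            return True
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False

    def baillie_psw(n: int) -> bool:
        """Base-2 strong Fermat test followed by a strong Lucas test."""
        if n in (2, 3):
            return True
        if n < 5 or n % 2 == 0:
            return False
        return strong_fermat_prp(n, 2) and is_strong_lucas_prp(n)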

Dotterel answered 15/4, 2019 at 0:46 Comment(0)

A smaller probability is usually better, but I would take the actual probability value with a grain of salt. Albrecht et al., in Prime and Prejudice: Primality Testing Under Adversarial Conditions, break a number of prime-testing routines in cryptographic libraries. In one example, the published probability is 1/2^80, but the number they construct is declared prime 1 time out of 16.

In several other examples, their number passes 100% of the time.

Dotterel answered 20/4, 2019 at 0:4 Comment(0)

Only 2 iterations, assuming 2^(-80) as a negligible probability.

From (Alfred J. Menezes et al. 1996) §4.4 p.148:

[Table 4.4 of the Handbook of Applied Cryptography: the number of Miller-Rabin iterations t needed to bring the error bound p(k,t) below (1/2)^80 for various bit lengths k; for a 2048-bit candidate, t = 2 suffices.]

Fomentation answered 19/5, 2022 at 1:43 Comment(0)

Does it matter? Why not run for 1000 iterations? When searching for primes, you stop applying the Rabin-Miller test anyway the first time it fails, so for the time it takes to find a prime it doesn't really matter what the upper bound on the number of iterations is. You could even run a deterministic primality checking algorithm after those 1000 iterations to be completely sure.

That said, the probability that a number is prime after n iterations is 4^-n.

Fedak answered 13/6, 2011 at 0:28 Comment(3)
I'm trying to minimize the number of runs, so is 1000 safer than 64 in any practical sense, etc? (Great)
Last line should read "the probability that a *composite* number is reported as prime after n iterations is at most 4^-n". (Console)
So if 4^n is greater than the number being tested, is it prime for sure (which would mean 1 iteration per 2 bits of length)? (Berseem)
