What complexity are operations on BigInteger?

What is the computational complexity of the multiply, divide and pow methods in BigInteger currently? There is no mention of the computational complexity in the documentation (nor anywhere else).

Gisser answered 28/1, 2010 at 11:33 Comment(5)
You could check the source code. – Bathsheba
Actually I have already tried, but it seems quite difficult to analyse without a lot of knowledge. I hope someone reading Stack Overflow just happens to have that knowledge. Perhaps that's being too optimistic? – Gisser
Is there a specific reason why you mention Java 7? BigInteger has existed since Java 1.1. – Mink
I mentioned Java 7 because I wanted to consider changes that are currently being worked on too. – Gisser
(I have edited this question to generalize to all versions of Java, since hardly anyone cares about Java 7 and earlier these days.) – Dippold

If you look at the code for BigInteger (provided with the JDK), it appears that multiply(..) is O(n^2) (the actual work happens in multiplyToLen(..)). The code for the other methods is a bit more complex, but you can check for yourself.

Note: this is for Java 6. I assume it won't differ in Java 7.
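To make the O(n^2) claim concrete, here is a sketch of the schoolbook algorithm that multiplyToLen implements: every digit of one operand is multiplied by every digit of the other, giving n * n digit operations. This is an illustrative base-10 version with helper names of my own; the JDK works on int[] words in base 2^32, not decimal digits.

```java
import java.math.BigInteger;

public class Schoolbook {
    // Schoolbook multiplication on little-endian base-10 digit arrays.
    // The nested loops are the source of the O(n^2) running time.
    public static int[] multiply(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];
        for (int i = 0; i < a.length; i++) {        // outer loop: n iterations
            int carry = 0;
            for (int j = 0; j < b.length; j++) {    // inner loop: n iterations
                int t = result[i + j] + a[i] * b[j] + carry;
                result[i + j] = t % 10;
                carry = t / 10;
            }
            result[i + b.length] += carry;          // remaining carry
        }
        return result;
    }

    // Helper: non-negative BigInteger -> little-endian decimal digits.
    public static int[] digits(BigInteger v) {
        String s = v.toString();
        int[] d = new int[s.length()];
        for (int i = 0; i < s.length(); i++) {
            d[i] = s.charAt(s.length() - 1 - i) - '0';
        }
        return d;
    }

    // Helper: little-endian decimal digits -> BigInteger.
    public static BigInteger toBigInteger(int[] d) {
        StringBuilder sb = new StringBuilder();
        for (int i = d.length - 1; i >= 0; i--) sb.append(d[i]);
        return new BigInteger(sb.toString());
    }

    public static void main(String[] args) {
        BigInteger x = new BigInteger("123456789123456789");
        BigInteger y = new BigInteger("987654321987654321");
        BigInteger ours = toBigInteger(multiply(digits(x), digits(y)));
        System.out.println(ours.equals(x.multiply(y))); // true
    }
}
```

Doubling the number of digits quadruples the work done by the two nested loops, which is exactly the quadratic behaviour the answer describes.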

Wixted answered 28/1, 2010 at 11:42 Comment(4)
There are several complexities of multiplication: en.wikipedia.org/wiki/… ... can you tell O(n^2) apart from O(n^1.585) or O(n^1.465)? – Unscientific
I believe there have been changes in Java 7. I can't remember the details I found while searching, but they were scarce. – Gisser
Rössel: There are other algorithms for multiplication, but Java 6 doesn't use them. When multiplying large numbers you'd certainly notice the difference between the schoolbook algorithm and Karatsuba multiplication. The others are less of a jump unless you're filling up primary memory with the numbers. – Trustless
OpenJDK 8 does a better job, with an adaptive strategy. For large values of n, its complexity is lower than O(n^2). See also https://mcmap.net/q/1010557/-biginteger-most-time-optimized-multiplication – Payton

As noted in the comments on @Bozho's answer, Java 8 and onwards use more efficient algorithms to implement multiplication and division than the naive O(N^2) algorithms in Java 7 and earlier.

Java 8 multiplication adaptively uses either the naive O(N^2) long multiplication algorithm, the Karatsuba algorithm, or the 3-way Toom-Cook algorithm depending on the sizes of the numbers being multiplied. The latter two are (respectively) O(N^1.58) and O(N^1.46).
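To show where Karatsuba's O(N^1.58) comes from, here is a minimal sketch built on BigInteger's shift operations: three recursive half-size multiplies replace the four that naive splitting would need. This is not the OpenJDK implementation (which splits the internal int[] magnitude directly, with tuned word-count thresholds); the class name and cutoff below are my own, and the sketch assumes non-negative operands.

```java
import java.math.BigInteger;

public class Karatsuba {
    // Illustrative cutoff: below this bit length, recursion stops.
    static final int CUTOFF = 1024;

    // Karatsuba multiplication for non-negative x and y.
    public static BigInteger multiply(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= CUTOFF) return x.multiply(y); // base case: small inputs

        int half = n / 2;
        BigInteger mask = BigInteger.ONE.shiftLeft(half).subtract(BigInteger.ONE);
        BigInteger xHigh = x.shiftRight(half), xLow = x.and(mask);
        BigInteger yHigh = y.shiftRight(half), yLow = y.and(mask);

        // Only three recursive products instead of four:
        BigInteger p1 = multiply(xHigh, yHigh);                     // high * high
        BigInteger p2 = multiply(xLow, yLow);                       // low * low
        BigInteger p3 = multiply(xHigh.add(xLow), yHigh.add(yLow)); // cross terms

        // x*y = p1*2^(2*half) + (p3 - p1 - p2)*2^half + p2
        return p1.shiftLeft(2 * half)
                 .add(p3.subtract(p1).subtract(p2).shiftLeft(half))
                 .add(p2);
    }

    public static void main(String[] args) {
        BigInteger a = BigInteger.ONE.shiftLeft(5000).subtract(BigInteger.valueOf(12345));
        BigInteger b = BigInteger.ONE.shiftLeft(4800).add(BigInteger.valueOf(6789));
        System.out.println(multiply(a, b).equals(a.multiply(b))); // true
    }
}
```

Solving the recurrence T(N) = 3T(N/2) + O(N) gives O(N^log2(3)) = O(N^1.58); Toom-Cook-3 splits into thirds with five recursive products, giving O(N^log3(5)) = O(N^1.46).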

Java 8 division adaptively uses either Knuth's O(N^2) long division algorithm or the Burnikel-Ziegler algorithm. (According to the research paper, the latter costs 2K(N) + O(N log N) to divide a 2N-digit number by an N-digit number, where K(N) is the Karatsuba multiplication time for two N-digit numbers.)

Likewise some other operations have been optimized.


There is no mention of the computational complexity in the documentation (nor anywhere else).

Some details of the complexity are mentioned in the Java 8 source code. The reason the javadocs do not mention complexity is that it is implementation-specific, both in theory and in practice. (This is illustrated by the fact that the complexity of some operations differs significantly between Java 7 and Java 8.)

Dippold answered 11/9, 2020 at 8:3 Comment(0)

There is a new "better" BigInteger class that is not being used by the Sun JDK, due to conservatism and a lack of useful regression tests (huge data sets). The person who wrote the better algorithms may have discussed the old BigInteger in its comments.

Here you go http://futureboy.us/temp/BigInteger.java

Farber answered 16/3, 2010 at 1:37 Comment(0)

Measure it. Perform operations with linearly increasing operand sizes and plot the times on a diagram. Don't forget to warm up the JVM (several runs) to get valid benchmark results.

Whether the operations are linear O(n), quadratic O(n^2), polynomial, or exponential should then be obvious.

EDIT: While you can give algorithms theoretical bounds, these may not be so useful in practice. First of all, the complexity says nothing about constant factors. Some linear or subquadratic algorithms are simply not useful because they eat so much time and resources that they are not adequate for the problem at hand (e.g. Coppersmith-Winograd matrix multiplication). Then your computation may involve all sorts of kludges that you can only detect by experiment. There are preparatory algorithms which do nothing to solve the problem itself but speed up the real solver (matrix conditioning). There are suboptimal implementations. With longer inputs, your speed may drop dramatically (cache misses, memory movement, etc.). So for practical purposes, I advise experimentation.

The best approach is to double the input length each time and compare the times. And yes, you can tell whether an algorithm has n^1.5 or n^1.8 complexity. Quadruple the input length and an n^1.5 algorithm needs only half the time of an n^2 one (4^1.5 = 8 versus 4^2 = 16). Distinguishing n^1.8 from n^2 takes a larger jump: increase the length 32-fold and the n^1.8 algorithm again needs about half the time of the quadratic one (since 32^0.2 = 2).
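The doubling experiment described above can be sketched as follows. The class name, sizes, and repetition counts are my own; raw timings like these are noisy, so treat the result as a rough estimate and prefer a proper harness such as JMH for real measurements.

```java
import java.math.BigInteger;
import java.util.Random;

public class ScalingProbe {
    // Pure helper: if doubling the input makes the operation take
    // ratio = t1/t0 times longer, the apparent exponent is log2(ratio).
    public static double exponent(double t0, double t1) {
        return Math.log(t1 / t0) / Math.log(2);
    }

    // Time `reps` multiplications of two random `bits`-bit numbers.
    static long timeMultiply(int bits, int reps) {
        Random rnd = new Random(42);
        BigInteger a = new BigInteger(bits, rnd);
        BigInteger b = new BigInteger(bits, rnd);
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) a.multiply(b);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        timeMultiply(1 << 16, 50); // warm-up so the JIT compiles the hot path
        long t0 = timeMultiply(1 << 16, 50); // n bits
        long t1 = timeMultiply(1 << 17, 50); // 2n bits
        System.out.printf("apparent exponent ~ %.2f%n", exponent(t0, t1));
    }
}
```

As the comments below note, a single pair of sizes can mislead when the implementation switches algorithms by size, so repeating the probe at several magnitudes gives a fuller picture.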

Carinacarinate answered 28/1, 2010 at 11:42 Comment(2)
That might work. I would need to test large values of n: measure the time to multiply two n-bit BigIntegers (t_0), then two 2n-bit BigIntegers (t_1), and estimate the complexity as roughly O(n^(log2(t_1/t_0))). In general I am a little skeptical of empirical methods, though (possibly unfairly). – Gisser
This is a difficult approach to take, though. A priori, there's no reason to think that a single algorithm is used rather than a combination of algorithms. Thus the scaling from 10 digits to 1000 digits might be different from the scaling from 1000 digits to 3000 digits. – Trustless

© 2022 - 2024 — McMap. All rights reserved.