Is there some mathematical "optimum" base that would speed up factorial calculation?

Background: Just for fun, I'm implementing my own bignum library. (-: Is this my first mistake? :-). I'm experimenting with various bases used in the internal representation and regression testing by printing out exact values (in decimal) for n factorial (n!).

The way my bignum library represents integers and does multiplication, the time is proportional to the total number of "1" bits in the internal representation of n!. Using base 2, 4, 8, 16, 2^8, 2^30, etc. in my internal representation gives me exactly the same total number of "1" bits for any particular number.
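Here is a minimal check of that claim (Python used purely for illustration; the helper name is made up): the base-2^w digits are just w-bit slices of the same binary string, so their "1"-bit counts always sum to the "1"-bit count of the whole number, whatever power-of-two base I pick.

    # Sketch for illustration only.
    from math import factorial

    def ones_in_power_of_two_base(x, w):
        # Total "1" bits across the base-2^w digits of x.
        total = 0
        while x:
            total += bin(x & ((1 << w) - 1)).count("1")   # popcount of the lowest digit
            x >>= w
        return total

    n_fact = factorial(16)
    for w in (1, 2, 4, 8, 30):
        assert ones_in_power_of_two_base(n_fact, w) == bin(n_fact).count("1")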

Unless I've made some mistake, any given factorial (n!) represented in base 18 has fewer "1" bits than the same value represented in base 10, base 16, or base 19. And so (in principle), using base 18 would make my bignum library run faster than using base 10, some binary 2^w base, or base 19. I think this has something to do with the fact that n! is either shorter or has more "trailing zeros" (or both) when printed out in base 18 than in base 10, base 16, or base 19. Is there some other base that would work even better than base 18? In other words, is there a base that represents n! with even fewer "1" bits than base 18?

This is not a dup of "What is a convenient base for a bignum library & primality testing algorithm?" because I suspect "the optimum base for working with integers that are known to be large factorials, with lots of factors of 2 and 3" is different than "the optimum base for working with integers that don't have any small factors and are possibly prime". (-: Is speeding up factorial calculations -- perhaps at the expense of other kinds of calculations -- my second mistake? :-)

edit: For example:

(decimal) 16! ==
(decimal    ) == 20,922,789,888,000 // uses decimal 14 "1" bits
(dozenal    ) ==  2,41A,B88,000,000 // uses decimal 10 "1" bits
(hexadecimal) ==    130,777,758,000 // uses decimal 18 "1" bits
(octadecimal) ==     5F,8B5,024,000 // uses decimal 14 "1" bits

(I'm more-or-less storing the digits on the right, without the commas, plus some metadata overhead.) (While one might think "as you increase the base, you will use fewer "1" bits to represent a given number" or "as you increase the base, you will use fewer nonzero digits to represent a given number", the above example shows that is not always true.)

I'm storing each digit as a small integer ("int" or "long int" or "byte"). Is there any other reasonable way to store digits? I'm pretty sure my computer stores those integers in binary -- each "1", "2", "4", "8", and "G" digit uses one "1" bit; each "3", "5", "6", "9", and "A" digit uses two "1" bits; each "7" and "B" digit uses three "1" bits; each "F" digit uses four "1" bits; etc.
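Here is a sketch of the kind of counting I mean (Python used only for illustration; my actual library differs, and the helper name is made up): peel off the base-b digits of n! and add up the "1" bits in each digit's binary form.

    # Sketch for illustration only.
    from math import factorial

    def ones_bits_of_factorial(n, base):
        x = factorial(n)
        total = 0
        while x:
            x, digit = divmod(x, base)        # peel off the lowest base-b digit
            total += bin(digit).count("1")    # "1" bits of that digit's binary form
        return total

    for b in (10, 12, 16, 18):
        print(b, ones_bits_of_factorial(16, b))
    # Matches the table above: 10 -> 14, 12 -> 10, 16 -> 18, 18 -> 14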

Both the decimal and the octadecimal representations of this value (16!) require 14 "1" bits. So I made a mistake in my earlier calculation: representing n! in octadecimal doesn't always use fewer "1" bits than representing the same value in decimal. But the question still stands: is there some other "optimum" base that requires the fewest "1" bits for storing large factorials?

Someone asks: "How do you store those numbers?" Well, that's exactly my question -- what is the best way of storing numbers of the form n!? I could internally use digits in base 10, or some power-of-two base, or base 18, or some other base. Which one is best? I could store these integers internally as a 1D array of digits, with whatever length is needed to hold all the digits. Is there any reasonable way of printing out 100! in decimal without such an array?
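For concreteness, here is one possible layout (a sketch only, not my actual code; the names and the choice of base 10^9 are just illustrative assumptions): a little-endian list of base-10^9 digits, which makes decimal printing a matter of zero-padding each limb.

    # Sketch for illustration only.
    BASE = 10**9   # one possible choice; makes decimal output trivial

    def limbs_factorial(n):
        limbs = [1]                            # 1D array of base-10^9 digits, least significant first
        for m in range(2, n + 1):
            carry = 0
            for i in range(len(limbs)):
                carry, limbs[i] = divmod(limbs[i] * m + carry, BASE)
            while carry:                       # grow the array as the number gets longer
                carry, low = divmod(carry, BASE)
                limbs.append(low)
        return limbs

    def to_decimal(limbs):
        head, *rest = reversed(limbs)
        return str(head) + "".join(f"{d:09d}" for d in rest)

    print(to_decimal(limbs_factorial(100)))    # all 158 decimal digits of 100!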

Scarabaeoid answered 19/6, 2010 at 22:1 Comment(2)
I've never seen anyone use emoticons as parentheses like that before :) – Ainsworth
What exactly are you referring to as "bits" when you talk about base 18 numbers? How do you store those numbers? – Zsigmondy

If you're just trying to optimize running time for calculating factorial, and changing the base is the only parameter you're changing, then the optimum base will likely contain small factors. 60 might be a reasonable choice. If you want to experiment, I would try various bases of the form (2^a)(3^b)(5^c).
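Such an experiment could be as simple as the following brute-force sketch (Python for illustration only; the cap of 64 and the exponent ranges are arbitrary choices, and the per-base "1"-bit count is the same idea as in the question):

    # Sketch for illustration only.
    from math import factorial

    def ones_bits(x, base):
        total = 0
        while x:
            x, d = divmod(x, base)
            total += bin(d).count("1")
        return total

    # 5-smooth candidate bases 2^a * 3^b * 5^c up to an arbitrary cap of 64
    candidates = sorted({2**a * 3**b * 5**c
                         for a in range(7) for b in range(5) for c in range(4)
                         if 2 <= 2**a * 3**b * 5**c <= 64})
    x = factorial(100)
    best = min(candidates, key=lambda b: ones_bits(x, b))
    print(best, ones_bits(x, best))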

Improving the speed of multiplication is probably the best way to improve performance. What algorithm are you using for multiplication? (school-book, Karatsuba, Toom-Cook, FFT, ...)
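For reference, the Karatsuba step replaces one multiplication of full-size numbers with three multiplications of roughly half-size numbers plus some shifts and additions. A hedged sketch (Python, with Python's built-in * standing in for the school-book base case; the cutoff value is an arbitrary assumption):

    # Sketch for illustration only.
    def karatsuba(x, y, cutoff_bits=64):
        # Below the cutoff, fall back to ordinary ("school-book") multiplication.
        if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)    # split both operands at bit m
        yh, yl = y >> m, y & ((1 << m) - 1)
        a = karatsuba(xh, yh, cutoff_bits)     # three recursive multiplications
        b = karatsuba(xl, yl, cutoff_bits)
        c = karatsuba(xh + xl, yh + yl, cutoff_bits)
        return (a << (2 * m)) + ((c - a - b) << m) + b

    from math import factorial
    assert karatsuba(factorial(40), factorial(50)) == factorial(40) * factorial(50)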

There are other factors to consider, too. If you will be converting the numbers to decimal frequently, then a base that is a power of 10 will make the conversion as fast as possible.

Many(*) years ago, I wrote a base-6 floating point library specifically to solve a problem with repeated multiplication/division by 2 and/or 3. But unless you are trying to solve a specific problem, I think you will be better served by optimizing your algorithms in general than by just trying to optimize factorial.

casevh

(*) I originally said "Several years ago" until I remembered the program ran for many days on a 12 MHz 80286.

Deeannadeeanne answered 19/6, 2010 at 22:1 Comment(1)
Yes, that's exactly what I'm doing. You are probably right that these other considerations have a much bigger impact than the choice of base, so maybe I'm overthinking this. As far as I know, Karatsuba multiplication -- switching to school-book multiplication once either factor is small enough -- is the fastest known algorithm when either factor has fewer than (very roughly, depending on compilers and hardware) 300 limbs. It looks like you are right -- of the bases I've tried, the bases that have the fewest "1" bits when representing n! are all versatile 5-smooth numbers. Thank you! – Scarabaeoid

While from a purely mathematical viewpoint the optimal base is e (rounded to the nearest integer: 3), from a practical standpoint, for a bignum library on a computer, pick the machine word size as the base of your numeric system (2^32 or 2^64). Yes, it's huge, but the higher abstraction layer of your bignum system is the choke point and the underlying calculations on machine words are the fast part, so delegate as much computation as possible to the CPU's low-level instructions while keeping your own work to a minimum.
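A sketch of what that limb arithmetic looks like (Python ints stand in for machine words here; in C this might be uint32_t limbs with a uint64_t intermediate product, but take the names and details as illustrative assumptions, not a finished design):

    # Sketch for illustration only.
    import math

    MASK, SHIFT = (1 << 32) - 1, 32            # base 2^32 limbs

    def mul_small(limbs, m):
        # Multiply a little-endian array of 32-bit limbs by one 32-bit factor.
        out, carry = [], 0
        for limb in limbs:
            prod = limb * m + carry            # fits in 64 bits
            out.append(prod & MASK)            # low word is the new limb
            carry = prod >> SHIFT              # high word carries into the next limb
        while carry:
            out.append(carry & MASK)
            carry >>= SHIFT
        return out

    limbs = [1]
    for m in range(2, 21):
        limbs = mul_small(limbs, m)
    assert sum(l << (32 * i) for i, l in enumerate(limbs)) == math.factorial(20)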

And no, it's not a mistake. It's a very good learning exercise.

Zeebrugge answered 19/6, 2010 at 22:1 Comment(5)
How do you represent numbers in base e? – Thedrick
@trentcl: Not that difficult, though completely impractical. The digits 0, 1, and 2 exist normally; 3 doesn't. The digits are integer multiples of powers of e that sum to the given number: 1 = 1, 2 = 2, 3[dec] = 1e^1 + 0e^0 + 0e^(-1) + 2e^(-2) + 0e^(-3) + 0e^(-4) + 1e^(-5) + ... = 10.02001...[e], and 4[dec] = 1e^1 + 1e^0 + 0e^(-1) + 2e^(-2) + ... = 11.02001...[e] -- about every integer other than 0, 1, and 2 ends up with an infinite fractional part. Meanwhile, e^5 = 148.413[dec] = 100000[e]. (A small greedy-expansion sketch follows after this comment thread.) – Zeebrugge
So how is that "optimal" for computing factorials? Or what did you mean by a "purely mathematical viewpoint"? – Thedrick
@trentcl: I don't know exactly what metric was used for that optimality, but I believe it was something along the lines of the amount of data stored versus the amount of data needed to address the storage location; it's a theoretical computer science question of how big a digit should be. There were some ternary computers that -- with 3 being closer to e than 2 -- were theoretically more optimal (of course, that optimality went out the window with the immense growth in electronic complexity; that's why binary won out after all). – Zeebrugge
Base e is the theoretical "sweet spot", but it is purely theoretical: whatever the ratios may say in its favor, a non-integer base is so horribly unwieldy that nobody takes the idea of using it in practice seriously. So, yes, it's the "mathematical optimum" from the OP's titular question, but far, far from the practical optimum -- which, when developing a bignum system in software, is invariably the largest machine word with hardware support available; and when developing a CPU from scratch, as close to e as technical/machine complexity allows (= 2). – Zeebrugge
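For the curious, a tiny greedy ("beta expansion") sketch of the base-e digits described in the comments above (Python for illustration only; the function name is made up, and floating-point rounding means only the first few digits can be trusted):

    # Sketch for illustration only.
    import math

    def base_e_digits(x, frac_digits=6):
        e = math.e
        k = 0
        while e ** (k + 1) <= x:              # highest power of e needed
            k += 1
        digits = []
        for p in range(k, -frac_digits - 1, -1):
            d = int(x // e ** p)              # greedy digit; always 0, 1, or 2 since e < 3
            digits.append(d)
            x -= d * e ** p
        return digits                          # digits for e^k ... e^0, then e^-1 ... e^-frac_digits

    print(base_e_digits(3.0))   # [1, 0, 0, 2, 0, 0, 1, 1], i.e. roughly 10.020011[e]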

If by '"1" bits' you mean digits, then I suggest a base of 256 or 65536. In other words, make each byte / word a "digit" for the purposes of your math. The computer handles these numbers routinely and is optimized for doing so. Your factorials will be swift, and so will other operations.

Not to mention the computer handles a great deal of the conversions from a similar base to these with ease. (rhyming unintentional)

Olfe answered 19/6, 2010 at 22:1 Comment(0)

I will not pretend I know much math, so do not take my answer as the holy "optimum" you are probably looking for. If I had to compute a factorial as fast as possible, I would either try some approximation (something like Stirling's approximation) or reduce the number of multiplications, because multiplication is an expensive operation.

If you represent the number in base k, you can implement multiplication by k as a shift. If you choose base 2, half of all the factors are even, so those multiplications are at least partly shifts; the remaining multiplications can be decomposed into shifts and additions.

If you aim to minimize the number of "1"s in your representation, that depends on which numbers you represent: as you increase the base you will use fewer digits to represent a given number, but each digit needs more bits, which means more potential "1"s. I hope this helps at least a bit; if not, just ask and I will try to answer.
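One concrete way to read the remark about shifts (this sketch is an illustration under that reading, not necessarily what the answer has in mind; the function name is made up): in binary you can strip the power-of-two part out of every factor from 2 to n, multiply only the odd parts, and apply all the stripped factors of two as a single shift at the end.

    # Sketch for illustration only.
    from math import factorial

    def factorial_with_shift(n):
        odd_product, shift = 1, 0
        for m in range(2, n + 1):
            while m % 2 == 0:        # factors of two become left shifts
                m //= 2
                shift += 1
            odd_product *= m         # only the odd parts need real multiplications
        return odd_product << shift

    assert factorial_with_shift(20) == factorial(20)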

Intermezzo answered 19/6, 2010 at 22:1 Comment(0)
