What is the fastest way to calculate e to 2 trillion digits?

I want to calculate e to 2 trillion (2,000,000,000,000) digits. This is about 1.8 TiB of pure e. I just implemented a Taylor series expansion algorithm using GMP (code can be found here).
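
A minimal sketch of the kind of Taylor-series summation described (not the actual linked code; the term count and output precision are arbitrary). Keeping the sum as an exact rational makes the numerator and denominator grow very quickly, which is consistent with running out of memory after a few thousand terms:

```c
/* Sketch only: sum e = 1 + 1/1! + 1/2! + ... with exact GMP rationals,
 * then convert to a float for display. Not the code linked above. */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    const unsigned long terms = 4000;   /* the question mentions ~4000 terms */
    mpq_t sum, term;
    mpq_init(sum);
    mpq_init(term);
    mpq_set_ui(sum, 1, 1);              /* k = 0 term */
    mpq_set_ui(term, 1, 1);             /* running 1/k! */

    for (unsigned long k = 1; k < terms; k++) {
        mpq_t kq;
        mpq_init(kq);
        mpq_set_ui(kq, 1, k);
        mpq_mul(term, term, kq);        /* term = 1/k! */
        mpq_add(sum, sum, term);
        mpq_clear(kq);
    }

    /* Convert to a float just to show a few digits (precision is arbitrary). */
    mpf_set_default_prec(64 * 1024);
    mpf_t e;
    mpf_init(e);
    mpf_set_q(e, sum);
    gmp_printf("%.50Ff\n", e);

    mpq_clear(sum);
    mpq_clear(term);
    mpf_clear(e);
    return 0;
}
```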

Unfortunately it crashes when summing more than 4000 terms on my computer, probably because it runs out of memory.

What is the current state of the art in computing e? Which algorithm is the fastest? Any open source implementations worth looking at? Please don't mention y-cruncher; it's closed source.

Popsicle answered 9/11, 2012 at 18:34 Comment(18)
I won't know the answer in either case, but are you looking for the binary expansion of e or its decimal expansion?Miquelon
Mentioning y-cruncher's creator @MysticialBeset
@PascalCuoq I am looking for 2 trillion decimal places.Popsicle
Your computer has a limit. Check this #385002Fingerling
@fork It doesn't have to be stored as a native type.Woothen
@Woothen no, but if you don't know it might cause problems, just saying.Fingerling
@fork: I am using a bignum library to circumvent problems of that kind. It's called GMP (check the link in my question) and allows arbitrary-precision computation, as long as the numbers fit into my RAM. When I am out of RAM, it just crashes. I could use 10 TB of swap space, but that would make things slightly slower :)Popsicle
Wikipedia says that the current best result in computing e is one trillion decimal digits (result from July 5, 2010) - en.wikipedia.org/wiki/E_(mathematical_constant)#Known_digits , so the task is not trivial and would probably require a special hardware setup.Polemoniaceous
I don't get the goal, what's wrong with 1.5 trillion digits? Why would you use somebody else's algorithm? If you want digits why not just download them? numberworld.org/ftpMadelon
I have access to a machine with 2 TiB of RAM but I doubt I would be able to give you back easily the result :) Perhaps you should use some of the streaming algorithmsForster
@Popsicle Last I looked, GMP used an int to count the limbs, so with the usual 32-bit two's complement ints, you couldn't get more than 2^37 - 64 bits of precision. That can be less than your RAM.Korey
Under linux there is a virtual memory system, meaning that the addresses seen by user programs do not directly correspond to the physical addresses used by the hardware. With virtual memory, programs running on the system can allocate far more memory than is physically available.Fingerling
+1 for "about 1.8 TiB of pure e".Lytta
It looks like that might double the current record, which took 9.3 days to compute in 2010.Dispossess
I quickly looked at the source and read this "Because calculating the fraction every time takes a long time, we just calculate the denominator and the numerator and add them. When we are done summing up all the terms, we finally do the division." If I understood what you meant, you cannot do this. 1 + 1/2 + 1/6 + ... is not the same as (1 + 1 + 1 + ...) / (1 + 2 + 6 + ...)Redbud
@Redbud I cannot claim that I understand the code by just reading it, but in the code, in mpq_add, I am quite sure that q stands for the field of rationals (and of course add stands for add). I wouldn't be too worried.Miquelon
Um... sigh.....Atalaya
I can't say I agree with the close votes. Just because the question/task is extremely difficult, doesn't mean it's not constructive.Atalaya

Since I'm the author of the y-cruncher program that you mention, I'll add my 2 cents.

For such a large task, the two biggest barriers that must be tackled are as follows:

  1. Memory
  2. Run-time Complexity

Memory

2 trillion digits is extreme - to say the least. That's double the current record set by Shigeru Kondo and myself back in 2010. (It took us more than 9 days to compute 1 trillion digits using y-cruncher.)

In plain text, that's about 1.8 TiB in decimal. In packed binary representation, that's 773 GiB.
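
Those sizes follow from simple arithmetic: one byte per decimal digit for plain text, and log2(10) ≈ 3.32 bits per digit when packed. A quick back-of-the-envelope check (my own arithmetic, not taken from y-cruncher):

```c
/* Sanity check of the sizes quoted above. Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double digits = 2e12;
    double text_tib   = digits / (1024.0 * 1024 * 1024 * 1024);            /* 1 byte per digit */
    double packed_gib = digits * (log2(10.0) / 8.0) / (1024.0 * 1024 * 1024);
    printf("plain text : %.2f TiB\n", text_tib);    /* ~1.82 TiB */
    printf("packed bin : %.0f GiB\n", packed_gib);  /* ~773 GiB  */
    return 0;
}
```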

If you're going to be doing arithmetic on numbers of this size, you're gonna need 773 GiB for each operand, not counting scratch memory.

In practice, y-cruncher needs 8.76 TiB of memory to do this computation all in RAM. So you can expect other implementations to need about the same, give or take a factor of 2 at most.

That said, I doubt you're gonna have enough RAM. And even if you did, it'd be heavily NUMA. So the alternative is to use disk. But that is not trivial: to be efficient, you need to treat memory as a cache and micromanage all the data that moves between memory and disk.
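
The crudest form of out-of-core storage is to map a file on disk into the address space, so an operand larger than RAM is at least addressable; a serious implementation schedules those transfers itself rather than letting the OS page on demand. A hedged Linux/POSIX sketch of just the basic idea (file name and sizes are arbitrary):

```c
/* Sketch: back one big operand buffer with a file so it can exceed RAM.
 * The OS pages data in and out; this does NOT do the explicit scheduling
 * the answer says a real implementation needs. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t limbs = (size_t)1 << 28;             /* 2 GiB of 64-bit limbs; scale up as needed */
    const size_t bytes = limbs * sizeof(uint64_t);

    int fd = open("operand.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, (off_t)bytes) != 0) { perror("operand.bin"); return 1; }

    uint64_t *limb = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (limb == MAP_FAILED) { perror("mmap"); return 1; }

    limb[0] = 1;                                       /* touch a few pages as a demo */
    limb[limbs - 1] = 42;
    printf("first limb %llu, last limb %llu\n",
           (unsigned long long)limb[0], (unsigned long long)limb[limbs - 1]);

    munmap(limb, bytes);
    close(fd);
    return 0;
}
```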


Run-time Complexity

Here we have the other problem. For 2 trillion digits, you're gonna need a very fast algorithm. Not just any fast algorithm, but a quasi-linear run-time algorithm.

Your current attempt runs in about O(N^2). So even if you had enough memory, it wouldn't finish in your lifetime.

The standard approach to computing e to high precision runs in O(N log(N)^2) and combines the following algorithms:

  1. Binary Splitting applied to the Taylor series e = 1 + 1/1! + 1/2! + 1/3! + ...
  2. FFT-based large-integer multiplication
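
A hedged, toy-sized sketch of the binary-splitting part (my own code, not y-cruncher's; digit and term counts are arbitrary): compute P(0,N)/Q(0,N) = 1/1! + 1/2! + ... + 1/N! with exact integers, then do one big scaled division at the end.

```c
/* Binary splitting for e ~ 1 + P(0,N)/Q(0,N), with
 *   P(a,a+1) = 1,                      Q(a,a+1) = a+1
 *   P(a,b) = P(a,m)*Q(m,b) + P(m,b),   Q(a,b) = Q(a,m)*Q(m,b),   m = (a+b)/2
 * The big multiplications inside GMP are where FFT-based multiplication matters. */
#include <gmp.h>
#include <stdio.h>

static void bsplit(unsigned long a, unsigned long b, mpz_t P, mpz_t Q)
{
    if (b - a == 1) {
        mpz_set_ui(P, 1);
        mpz_set_ui(Q, a + 1);
        return;
    }
    unsigned long m = a + (b - a) / 2;
    mpz_t P1, Q1, P2, Q2;
    mpz_inits(P1, Q1, P2, Q2, NULL);
    bsplit(a, m, P1, Q1);
    bsplit(m, b, P2, Q2);
    mpz_mul(P, P1, Q2);                  /* P = P1*Q2 + P2 */
    mpz_add(P, P, P2);
    mpz_mul(Q, Q1, Q2);                  /* Q = Q1*Q2 */
    mpz_clears(P1, Q1, P2, Q2, NULL);
}

int main(void)
{
    const unsigned long digits = 1000;   /* toy size, not 2 trillion */
    const unsigned long terms  = 500;    /* 1/500! < 10^-1000, so 500 terms suffice */

    mpz_t P, Q, pow10, e_scaled;
    mpz_inits(P, Q, pow10, e_scaled, NULL);

    bsplit(0, terms, P, Q);

    /* e ~ (P + Q)/Q; scale by 10^digits and do one big integer division. */
    mpz_ui_pow_ui(pow10, 10, digits);
    mpz_add(e_scaled, P, Q);
    mpz_mul(e_scaled, e_scaled, pow10);
    mpz_fdiv_q(e_scaled, e_scaled, Q);

    gmp_printf("e * 10^%lu ~ %Zd\n", digits, e_scaled);  /* last digit may be off by one */

    mpz_clears(P, Q, pow10, e_scaled, NULL);
    return 0;
}
```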

Fortunately, GMP already uses FFT-based large multiplication. But it lacks two crucial features:

  1. It has no out-of-core (swap) computation to use disk when there isn't enough memory.
  2. It isn't parallelized.

The second point isn't as important since you can just wait longer. But for all practical purposes, you're probably gonna need to roll your own. And that's what I did when I wrote y-cruncher.


That said, there are many other loose-ends that also need to be taken care of:

  1. The final division will require a fast algorithm like Newton's Method (see the sketch after this list).
  2. If you're gonna compute in binary, you're gonna need to do a radix conversion.
  3. If the computation is gonna take a lot of time and a lot of resources, you may need to implement fault-tolerance to handle hardware failures.
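
A hedged sketch of point 1 above (my own toy code; divisor and precision are arbitrary): Newton's iteration for a reciprocal, x <- x*(2 - d*x), which roughly doubles the number of correct digits per pass. A serious implementation works on integer limbs and raises the working precision as it iterates; this only shows the iteration itself.

```c
/* Newton's method for 1/d using GMP floats at fixed precision. */
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpf_set_default_prec(4096);               /* ~1233 decimal digits */

    mpf_t d, x, t;
    mpf_inits(d, x, t, NULL);

    mpf_set_str(d, "3.14159265358979", 10);   /* hypothetical divisor */
    mpf_set_d(x, 1.0 / 3.14159265358979);     /* double-precision seed, ~16 digits */

    for (int i = 0; i < 8; i++) {             /* 16 -> 32 -> 64 -> ... correct digits */
        mpf_mul(t, d, x);                     /* t = d*x         */
        mpf_ui_sub(t, 2, t);                  /* t = 2 - d*x     */
        mpf_mul(x, x, t);                     /* x = x*(2 - d*x) */
    }

    gmp_printf("1/d ~ %.60Ff\n", x);

    mpf_clears(d, x, t, NULL);
    return 0;
}
```
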
Atalaya answered 9/11, 2012 at 23:4 Comment(0)

Since you have a goal how many digits you want (2 trillion) you can estimate how many terms you'll need to calculate e to that number of digits. From this, you can estimate how many extra digits of precision you'll need to keep track of to avoid rounding errors at the 2 trillionth place.

If my calculation from Stirling's approximation is correct, the reciprocal of 10 to the 2 trillion is about the reciprocal of 100 billion factorial. So that's about how many terms you'll need (100 billion). The story's a little better than that, though, because you'll start being able to throw away a lot of the numbers in the calculation of the terms well before that.
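
A quick numerical check of this kind of estimate (my own arithmetic, not the answerer's): bisect for the smallest n with log10(n!) greater than the digit target, using the leading terms of Stirling's approximation. For a 2-trillion-digit target it lands around 1.8e11 terms, the same order of magnitude as the figure above.

```c
/* Find roughly how many terms n are needed so that 1/n! < 10^-D. Compile with -lm. */
#include <math.h>
#include <stdio.h>

static double log10_factorial(double n)
{
    return (n * log(n) - n) / log(10.0);      /* Stirling, leading terms only */
}

int main(void)
{
    const double target_digits = 2e12;
    double lo = 2.0, hi = 1e13;

    while (hi / lo > 1.0000001) {             /* bisect to ~7 significant figures */
        double mid = 0.5 * (lo + hi);
        if (log10_factorial(mid) > target_digits)
            hi = mid;
        else
            lo = mid;
    }
    printf("roughly %.3g terms needed\n", hi);
    return 0;
}
```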

Since e is calculated as a sum of inverse factorials, all of your terms are rational, and hence they are expressible as repeating decimals. So the decimal expansion of your terms will be (a) an exponent, (b) a non-repeating part, and (c) a repeating part. There may be some efficiencies you can take advantage of if you look at the terms in this way.

Anyway, good luck!

Kerbstone answered 9/11, 2012 at 20:18 Comment(0)
