Fastest 128 bit integer library [closed]
M

3

13

I am working on a CPU-heavy numerical computation app. Without going into many details, it's a computational math research project that involves computing a certain function f(x) for large integer x.

Right now everything is implemented in C++ in x64 mode, using native 64-bit ints. That limits me to x < 2^64 ≈ 1.8*10^19. I want to go further; to do that, I need a library that does 128-bit arithmetic. And it has to be very fast. In particular, integer divisions should be fast, otherwise I'll be sitting here waiting for the results till Thanksgiving. And I'd rather not reinvent the wheel.

I found a list of ~20 big integer libraries on Wikipedia, but most of those seem to be targeted at arbitrary-precision numbers, which is overkill for my task, and I don't want the extra costs associated with that.

Does anyone know which library offers the fastest arithmetic on 128-bit integers?

Manganese answered 11/9, 2010 at 20:50 Comment(4)
x86-64.org/pipermail/discuss/2005-August/006412.html – Theriot
That is interesting, I did not know that. I'm working on Windows at the moment, but I'll try it with gcc on Unix. My code should be portable enough. – Manganese
You could use Cygwin/GCC or MinGW. – Chrysarobin
Libraries tend to be built to cover the general case, not the special case, unless the special case is common enough in practice. So if you think arbitrary precision is overkill, you're probably going to have to roll your own for 128-bit values. It isn't too hard. – Boob
V
16

You didn't mention your platform / portability requirements. If you are willing to use gcc or clang, on 64-bit platforms they have built-in 128-bit types that come for free: __uint128_t and __int128_t. Other compilers may have similar type extensions.
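
For illustration, here is a minimal sketch of those builtin types in use (gcc/clang on a 64-bit target assumed; the print_u128 helper is my own, since printf has no conversion specifier for 128-bit integers):

    #include <cstdint>
    #include <cstdio>

    // Helper of my own: print a 128-bit value as two 64-bit hex halves,
    // because printf cannot format __uint128_t directly.
    static void print_u128(__uint128_t v) {
        printf("0x%016llx%016llx\n",
               (unsigned long long)(v >> 64),
               (unsigned long long)v);
    }

    int main() {
        // (2^64 - 1)^2 overflows 64 bits but fits comfortably in 128.
        __uint128_t x = (__uint128_t)UINT64_MAX * UINT64_MAX;

        uint64_t d = 1000000007ull;
        __uint128_t q = x / d;  // 128-bit division generally lowers to a
        __uint128_t r = x % d;  // runtime call such as __udivti3 (libgcc /
                                // compiler-rt): x86-64 has no 128/128 divide

        print_u128(x);
        print_u128(q);
        print_u128(r);
        return 0;
    }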

In any case, it should be possible to find the corresponding generic code in the gcc sources that assembles two integers of width N into one integer of width 2N. That would probably be a good starting point for a standalone library for this purpose.
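
As an illustration of that width-N-to-width-2N idea, here is my own sketch (not gcc's internals): two 64-bit limbs form a 128-bit value, addition propagates a carry between the limbs, and a full 64x64 -> 128 multiply is assembled from four 32x32 -> 64 partial products:

    #include <cstdint>

    // Illustrative two-limb 128-bit integer; the names are mine.
    struct u128 {
        uint64_t lo, hi;
    };

    // Addition: add the low limbs, then carry into the high limbs.
    inline u128 add(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  // carry out of the low limb
        return r;
    }

    // Full 64x64 -> 128 multiply via four 32x32 -> 64 partial products.
    inline u128 mul_64x64(uint64_t a, uint64_t b) {
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

        uint64_t p00 = a_lo * b_lo;
        uint64_t p01 = a_lo * b_hi;
        uint64_t p10 = a_hi * b_lo;
        uint64_t p11 = a_hi * b_hi;

        // Middle column; these three terms cannot overflow 64 bits.
        uint64_t mid = p10 + (p00 >> 32) + (uint32_t)p01;

        u128 r;
        r.lo = (mid << 32) | (uint32_t)p00;
        r.hi = p11 + (mid >> 32) + (p01 >> 32);
        return r;
    }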

Vocable answered 11/9, 2010 at 22:23 Comment(0)
T
5

The ttmath library does what you want.
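
From memory (so treat this as a hedged sketch and check the ttmath documentation for the exact API): ttmath sizes its ttmath::UInt<N> template in machine words, so UInt<2> on a 64-bit build is a fixed 128-bit value held in place, with overloaded operators:

    #include <ttmath/ttmath.h>
    #include <iostream>

    int main() {
        // ttmath::UInt<N> is N machine words wide, so on a 64-bit build
        // UInt<2> is a fixed 128-bit unsigned integer with no heap
        // allocation, unlike arbitrary-precision types.
        ttmath::UInt<2> a, b, c;

        a = "18446744073709551615";  // 2^64 - 1: already beyond uint64_t
        b = a;
        c = a * b;                   // (2^64 - 1)^2 still fits in 128 bits

        std::cout << c << std::endl;
        return 0;
    }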

Thigh answered 15/9, 2010 at 10:16 Comment(0)
R
1

This might not be for everyone, but what I would do is pick the highest-performance arbitrary-precision integer library that ships with source code and is otherwise suitable for the job, and hack it down to a fixed integer size. Change some variable "nbits" to a hard-coded 128. The library probably allocates memory at runtime, not knowing the number of bytes until then; change it to use a struct with the data in place (sketched below), saving a pointer dereference every time the data is read. Unroll certain critical loops by hand. Hard-code anything else that might be critical. The compiler will then probably have an easier time optimizing things. Of course, much of this would be assembly, using fancy SIMD with whatever technology is in use this week.
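
A sketch of the layout change I mean (illustrative types only, not any particular library's):

    #include <cstdint>

    // Before: an arbitrary-precision number typically looks like this.
    // The size is known only at run time, each number costs a heap
    // allocation, and every limb loop must consult nlimbs.
    struct BigIntDynamic {
        uint64_t* limbs;
        int       nlimbs;
    };

    // After hard-coding 128 bits: exactly two 64-bit limbs stored in
    // place. No allocation, no pointer dereference on access, and limb
    // loops of length 2 are trivial to unroll by hand.
    struct Fixed128 {
        uint64_t lo;
        uint64_t hi;
    };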

It would be fun! But then, as a programmer I started off with machine code and very low level stuff.

But for those not as crazy as I am, perhaps one of the available libraries uses templates or has some means of generating code customized to a given size. Also, some compilers have a "long long" integer type which might be suitable.

Rugg answered 14/9, 2010 at 3:26 Comment(0)
