`long double` on x86 compilers increases the significand only slightly, from 53 to 64 bits, so it gives around 3 extra decimal digits of precision. The exponent also grows from 11 to 15 bits, so `long double` uses 80 bits in total, including the sign bit. Due to 2 bytes (on 32-bit systems) or even 6 bytes (on 64-bit systems) of padding, however, the actual size of a `long double` in memory is 12 or even 16 bytes.
But don't let the large value of `sizeof(long double)` trick you into believing that you get a lot of precision when, in reality, you just get a lot of padding on x86! You can always query the guaranteed precision on your platform (note that `digits10` is a static constant, not a function, and requires `#include <limits>`):

```cpp
std::cout << "precision: " << std::numeric_limits<long double>::digits10 << std::endl;
```
The reason `long double` is an 80-bit type on x86 is that the legacy x87 numeric coprocessor used that format internally, so `long double` made the full precision of the coprocessor available to C applications.
If you need higher floating-point precision, the latest C++ standard, C++23, comes to your rescue: the `<stdfloat>` header provides the new type `std::float128_t`. Unfortunately, your compiler is not obliged to support it. As of writing this answer, only GCC and MSVC seem to have added support for `std::float128_t`; the other major compilers will hopefully follow in the next few years.
If you cannot switch to C++23 yet, or need one of the compilers that don't support 128-bit floating point yet, then using 128-bit floats as a C type rather than a C++ type might be an alternative. A good starting point can be found in this article: https://cpufun.substack.com/p/portable-support-for-128b-floats