long double vs double

I know that size of various data types can change depending on which system I am on.

I use 32-bit XP, and using the sizeof() operator in C++, it seems that long double is 12 bytes and double is 8.

However, most major sources state that long double is 8 bytes, and that its range is therefore the same as a double's.

How come I get 12 bytes? If long double is indeed 12 bytes, doesn't this extend the range of values as well? Or is the long qualifier only used (as the compiler sees fit) when a value exceeds the range of a double, and the type thus extends beyond 8 bytes?
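
A minimal way to reproduce what I'm seeing (a sketch using only sizeof and the standard <limits> header; nothing compiler-specific):

    #include <iostream>
    #include <limits>

    int main() {
        std::cout << "sizeof(double)      = " << sizeof(double) << '\n';
        std::cout << "sizeof(long double) = " << sizeof(long double) << '\n';
        // digits10 is the guaranteed decimal precision; it need not grow
        // just because the storage size is larger (padding).
        std::cout << "double digits10      = " << std::numeric_limits<double>::digits10 << '\n';
        std::cout << "long double digits10 = " << std::numeric_limits<long double>::digits10 << '\n';
    }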

Spaetzle answered 11/8, 2010 at 0:55 Comment(2)
This is the worst feature in ISO C. I strongly discourage you from using it. It only leads to problems because the specification is so loose.Epigone
@JeffHammond it can't be strict, otherwise how could C be available for 12, 16, 24, 36, 60, and 72-bit systems? There's no way you can specify a fixed IEEE-754 floating-point format and have it run efficiently on a system with 72-bit non-IEEE floats.Colossus

Quoting from Wikipedia:

On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment).

and

Compilers may also use long double for a 128-bit quadruple precision format, which is currently implemented in software.

In other words, yes, a long double may be able to store a larger range of values than a double. But it's completely up to the compiler.
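
A quick way to check what your own compiler does is to compare the storage size with the mantissa width the type actually carries (a minimal sketch using the standard <cfloat> macros; the printed values depend on your compiler and target):

    #include <cfloat>
    #include <iostream>

    int main() {
        // On 32-bit x86 with GCC this typically shows 12 bytes of storage
        // but only 64 mantissa bits: 80 bits of data plus alignment padding.
        std::cout << "sizeof(long double) = " << sizeof(long double) << " bytes\n";
        std::cout << "LDBL_MANT_DIG       = " << LDBL_MANT_DIG << " bits\n";
        std::cout << "LDBL_MAX_10_EXP     = " << LDBL_MAX_10_EXP << '\n';
    }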

Dioptrics answered 11/8, 2010 at 0:58 Comment(6)
Data types depend heavily on the architecture you're developing for.Sharpie
It also depends on compiler options. The 80-bit type can be explicitly disabled on almost every x86 compiler.Twophase
@karlphillip, @greyfade: Yes, I just meant "up to the compiler" in the sense that it decides how to store your data. Obviously it's limited to what is available on the platform, and of course the compiler can choose to allow a user override.Dioptrics
Apple claims that long double is 128-bit: developer.apple.com/library/archive/documentation/Darwin/…Dunlavy
@AaronFranke that's the size after padding. The real underlying type is still 80-bit extended, but padded to 12 bytes on x86 and 16 bytes on x86-64 for alignment reasons. Almost all x86 compilers apart from MSVC do that. No one uses IEEE-754 quadruple precision for long double as that's extremely slowColossus
@AaronFranke that's also specified by the x86-64 SysV ABI which Unix (Apple included) uses. Just check LDBL_MANT_DIG and see news.ycombinator.com/item?id=19237884Colossus

For modern compilers on x64, Clang and GCC use a 16-byte type for long double, while VC++ uses an 8-byte double. In other words, with Clang and GCC you get a higher-precision double, but with VC++ long double is the same as double. Modern x86 CPUs do support this wider format, so I think Clang and GCC are doing the right thing and allow you to access a lower-level hardware capability through a higher-level language primitive.
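
To see which of the two behaviours you are getting, here is a minimal sketch comparing the <cfloat> macros (64 mantissa bits indicates the x87 extended format used by GCC/Clang; 53 means long double is just a double, as with VC++):

    #include <cfloat>
    #include <iostream>

    int main() {
        if (LDBL_MANT_DIG > DBL_MANT_DIG)
            std::cout << "long double is wider than double ("
                      << LDBL_MANT_DIG << " vs " << DBL_MANT_DIG << " mantissa bits)\n";
        else
            std::cout << "long double is the same as double on this compiler\n";
    }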

Nessim answered 23/2, 2018 at 5:41 Comment(1)
The size is 16 bytes but 6 bytes of that is padding. long double is always 80-bit extended by default, padded to 12/16 bytes on x86 and x86-64 respectively. You can change the size via -mlong-double-64/80/128 options if you're willing to break the ABI or some APIs. There are also -m96/128bit-long-double to change the padding sizeColossus
B
5

The sizes usually quoted for the numeric types are guaranteed minimums across all platforms. They may be larger on some systems, but they will never be smaller.
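
What the standard actually guarantees is an ordering between the floating-point types rather than exact byte sizes; a minimal compile-time sketch of that guarantee (C++17 for the message-less static_assert):

    #include <limits>

    // long double provides at least the precision of double,
    // which in turn provides at least the precision of float.
    static_assert(std::numeric_limits<long double>::digits >= std::numeric_limits<double>::digits);
    static_assert(std::numeric_limits<double>::digits >= std::numeric_limits<float>::digits);

    int main() {}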

Bisectrix answered 11/8, 2010 at 1:12 Comment(0)

long double on x86 compilers increases the mantissa only slightly, from 52 to 64 bits, so it gives around 3.5 extra decimal digits of precision. The exponent also grows from 11 to 15 bits, so long double uses 80 bits in total, including the sign bit. Due to 2 bytes (on 32-bit systems) or even 6 bytes (on 64-bit systems) of padding, the actual size of a long double in memory is 12 or even 16 bytes, though.

But don't let the large value of sizeof(long double) trick you into believing that you get a lot of precision when, in reality, you mostly get a lot of padding on x86! You can always query the guaranteed precision on your platform via:

std::cout << "precision: " << std::numeric_limits<long double>.digits10() << std::endl;

The reason long double is an 80-bit type on x86 is that the legacy x87 numeric coprocessor used that format internally. So long double made the full precision of the coprocessor available to C applications.

If you need higher floating-point precision, the latest C++ standard, C++23, comes to your rescue. In the <stdfloat> header you will find the new type std::float128_t. Unfortunately, your compiler is not obliged to support it. As of writing this answer, only GCC and MSVC seem to have added support for std::float128_t. The other major compilers will hopefully follow in the next few years.
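
A minimal sketch, assuming a compiler and standard library that already ship std::float128_t (guarded here by the standard feature macro __STDCPP_FLOAT128_T__):

    // C++23
    #if defined(__STDCPP_FLOAT128_T__)
    #include <stdfloat>
    #endif
    #include <iostream>

    int main() {
    #if defined(__STDCPP_FLOAT128_T__)
        std::float128_t x = 1.0f128 / 3.0f128;
        // Stream support for the extended floating-point types is still spotty;
        // casting to long double just for printing is a simple workaround.
        std::cout << "one third: " << static_cast<long double>(x) << '\n';
    #else
        std::cout << "std::float128_t is not available on this compiler\n";
    #endif
    }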

If you cannot switch to C++23 yet, or need one of the compilers that don't support 128-bit floating point yet, then using 128-bit floats as a C type rather than a C++ type might be an alternative. A good starting point can be found in this article: https://cpufun.substack.com/p/portable-support-for-128b-floats
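
For GCC-style toolchains, a minimal sketch using the non-standard __float128 type together with libquadmath for formatting (an assumption about your compiler; build with something like g++ quad.cpp -lquadmath):

    #include <quadmath.h>
    #include <cstdio>

    int main() {
        __float128 x = (__float128)1 / 3;   // ~113-bit mantissa, computed in software
        char buf[64];
        // quadmath_snprintf understands the 'Q' length modifier for __float128
        quadmath_snprintf(buf, sizeof buf, "%.36Qg", x);
        std::printf("one third in quad precision: %s\n", buf);
    }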

Discrimination answered 3/6 at 14:17 Comment(1)
I believe the intent of Intel's 80-bit extended-precision type is to give you some extra precision for intermediate results — in the manner of guard digits — not necessarily to give you markedly higher precision for your own final values. You can often significantly improve the stability of floating-point calculations by giving those intermediate results somewhat higher precision than the initial and final values.Gilud
