long double (GCC specific) and __float128

I'm looking for detailed information on long double and __float128 in GCC/x86 (more out of curiosity than because of an actual problem).

Few people will probably ever need these (I've just, for the first time ever, truly needed a double), but I guess it is still worthwhile (and interesting) to know what you have in your toolbox and what it's about.

In that light, please excuse my somewhat open questions:

  1. Could someone explain the implementation rationale and intended usage of these types, also in comparison to each other? For example, are they "embarrassment implementations" because the standard allows for the type and someone might complain if they were merely the same precision as double, or are they intended as first-class types?
  2. Alternatively, does someone have a good, usable web reference to share? A Google search on "long double" site:gcc.gnu.org/onlinedocs didn't give me much that's truly useful.
  3. Assuming that the common mantra "if you believe that you need double, you probably don't understand floating point" does not apply, i.e. you really need more precision than just float, and one doesn't care whether 8 or 16 bytes of memory are burnt... is it reasonable to expect that one can just as well jump to long double or __float128 instead of double without a significant performance impact?
  4. The "extended precision" feature of Intel CPUs has historically been a source of nasty surprises when values were moved between memory and registers. If 96 bits are actually stored, the long double type should eliminate this issue. On the other hand, I understand that the long double type is mutually exclusive with -mfpmath=sse, as there is no such thing as "extended precision" in SSE. __float128, on the other hand, should work just fine with SSE math (though in the absence of quad-precision instructions certainly not on a 1:1 instruction basis). Am I right in these assumptions?

(3. and 4. can probably be figured out with some work spent on profiling and disassembling, but maybe someone else had the same thought previously and has already done that work.)

Background (this is the TL;DR part):
I initially stumbled over long double because I was looking up DBL_MAX in <float.h>, and incidentally LDBL_MAX is on the next line. "Oh look, GCC actually has 128 bit doubles, not that I need them, but... cool" was my first thought. Surprise, surprise: sizeof(long double) returns 12... wait, you mean 16?

The C and C++ standards unsurprisingly do not give a very concrete definition of the type. C99 (6.2.5 10) says that the set of values of double is a subset of the set of values of long double, whereas C++03 states (3.9.1 8) that long double has at least as much precision as double (which is the same thing, only worded differently). Basically, the standards leave everything to the implementation, in the same manner as with long, int, and short.

Wikipedia says that GCC uses "80-bit extended precision on x86 processors regardless of the physical storage used".

The GCC documentation states, all on the same page, that the size of the type is 96 bits because of the i386 ABI, but that no more than 80 bits of precision are enabled by any option (huh? what?), and that Pentium and newer processors want the type aligned as a 128-bit number. The 128-bit alignment is the default under 64 bits and can be enabled manually under 32 bits, resulting in 32 bits of zero padding.
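
A quick way to check what a given build actually provides is to print the <float.h> parameters directly (a minimal sketch of mine, not part of the original test):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* LDBL_MANT_DIG is 64 for the x87 extended format, regardless of
       whether 12 or 16 bytes are used to store the type. */
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    printf("LDBL_MANT_DIG       = %d\n",  LDBL_MANT_DIG);
    printf("LDBL_MAX_EXP        = %d\n",  LDBL_MAX_EXP);
    return 0;
}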

Time to run a test:

#include <stdio.h>
#include <float.h>

int main()
{
#ifdef  USE_FLOAT128
    typedef __float128  long_double_t;
#else
    typedef long double long_double_t;
#endif

    long_double_t ld;

    /* Fill all 16 bytes with a canary value so that bytes the FPU
       never writes stay visible in the output. (The aliasing and the
       possible write past sizeof(ld) are deliberate for this test.) */
    int* i = (int*) &ld;
    i[0] = i[1] = i[2] = i[3] = 0xdeadbeef;

    for (ld = 0.0000000000000001; ld < LDBL_MAX; ld *= 1.0000001)
        printf("%08x-%08x-%08x-%08x\r", i[0], i[1], i[2], i[3]);

    return 0;
}
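
For reference, the test builds in both configurations like this (my invocation, assuming the file is saved as test.c; the __float128 arithmetic routines are linked in from libgcc automatically):

gcc test.c && ./a.out                   # long double
gcc -DUSE_FLOAT128 test.c && ./a.out    # __float128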

The output, when using long double, looks somewhat like this, with the marked digits being constant, and all others eventually changing as the numbers get bigger and bigger:

5636666b-c03ef3e0-00223fd8-deadbeef
                  ^^       ^^^^^^^^

This suggests that it is not an 80-bit number: 80 bits are only 20 hex digits, yet I see 22 hex digits changing, which looks much more like a 96-bit number (24 hex digits). It also isn't a 128-bit number, since 0xdeadbeef isn't touched, which is consistent with sizeof returning 12.

The output for __float128 looks like it's really just a 128-bit number: all bits eventually flip.

Compiling with -m128bit-long-double does not align long double to 128 bits with 32 bits of zero padding, as indicated by the documentation. It doesn't switch to __float128 math either; it does seem to align the type to 128 bits, but pads it with the value 0x7ffdd000 (?!).

Further, LDBL_MAX seems to work as +inf for both long double and __float128. Adding or subtracting a number like 1.0E100 or 1.0E2000 to/from LDBL_MAX results in the same bit pattern.
Up to now, it was my belief that the foo_MAX constants were to hold the largest representable number that is not +inf (apparently that isn't the case?). I'm also not quite sure how an 80-bit number could conceivably act as +inf for a 128-bit value... maybe I'm just too tired at the end of the day and have done something wrong.

Adamandeve answered 22/11, 2012 at 16:7 Comment(3)
The 80-bit extended type can store any uint64_t value exactly: it has a 64-bit mantissa (with an explicit rather than implicit leading bit), 15 bits of exponent, and a sign bit. en.wikipedia.org/wiki/… – Freddafreddi
Instead of adding or subtracting from LDBL_MAX, did you try dividing by two? – Hebert
I don't observe what you saw: on my machine only 20 hex digits change, which corresponds exactly to the 10 bytes of the extended-precision type. – Appleton

Ad 1.

Those types are designed to work with numbers of huge dynamic range. long double is implemented natively in the x87 FPU. The 128-bit double, I suspect, would be implemented in software on modern x86s, as there is no hardware that can do quad-precision computations directly.
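
For the curious: GCC exposes that software implementation through libquadmath, so you can try it directly (a sketch; the Q literal suffix, sqrtq() and quadmath_snprintf() are that library's interface, and the program must be linked with -lquadmath):

#include <quadmath.h>
#include <stdio.h>

int main(void)
{
    __float128 r = sqrtq(2.0Q);   /* quad-precision sqrt, computed in software */
    char buf[64];
    /* quadmath_snprintf knows how to format __float128; note the Q modifier. */
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", r);
    printf("sqrt(2) = %s\n", buf);
    return 0;
}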

The funny thing is that it's quite common to do many floating-point operations in a row with the intermediate results not actually stored in the declared variables but rather kept in FPU registers, taking advantage of the full precision. That's why a comparison like:

double x = sin(0); if (x == sin(0)) printf("Equal!");

is not safe and cannot be guaranteed to work (without additional compiler switches).
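
A self-contained way to provoke the effect (a sketch of mine; whether the "Not equal" branch is actually taken depends on the optimization level and on switches like -mfpmath and -ffloat-store):

#include <stdio.h>

int main(void)
{
    volatile double x = 1.0;   /* volatile prevents folding the two divisions */
    double y = x / 3.0;        /* rounded to 64 bits when stored to memory */

    /* With x87 math, the right-hand side may still sit in an 80-bit
       register at the comparison, so the test can fail. */
    if (y == x / 3.0)
        printf("Equal\n");
    else
        printf("Not equal - excess precision at work\n");
    return 0;
}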

Ad 3.

There's an impact on speed depending on which precision you use. You can change the precision the FPU uses with:

void
set_fpu (unsigned int mode)
{
  /* Load the x87 FPU control word; bits 8-9 select the precision. */
  asm ("fldcw %0" : : "m" (mode));
}
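
For example (these are the standard x87 control-word encodings, with the non-precision bits left at their usual default; the usage sketch is mine):

set_fpu(0x07F);   /* 24-bit mantissa: float precision */
set_fpu(0x27F);   /* 53-bit mantissa: double precision */
set_fpu(0x37F);   /* 64-bit mantissa: extended precision (the power-on default) */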

It will be faster for shorter types and slower for longer ones. 128-bit doubles will probably be done in software, so they will be much slower.

It's not only about RAM being wasted, it's about cache being wasted, too. Compared with a 64-bit double, a long double grows each value from 8 bytes to 12 (on 32-bit targets) or 16 (on 64-bit targets), so 33% to 50% of the memory, cache included, is extra.

Ad 4.

On the other hand, I understand that the long double type is mutually exclusive with -mfpmath=sse, as there is no such thing as "extended precision" in SSE. __float128, on the other hand, should work just fine with SSE math (though in the absence of quad-precision instructions certainly not on a 1:1 instruction basis). Am I right in these assumptions?

The FPU and the SSE units are totally separate; you can write code that uses the FPU and SSE at the same time. The question is what the compiler will generate if you constrain it to use only SSE: will it try to use the FPU anyway? I've been doing some programming with SSE, and GCC generates only scalar (SISD) code on its own; you have to help it to use the SIMD versions. __float128 will probably work on every machine, even an 8-bit AVR uC. It's just fiddling with bits, after all.
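
For reference, constraining the compiler is done with the real GCC switches -mfpmath and -msse2 (the invocations below are mine):

gcc -m32 -msse2 -mfpmath=sse test.c    # scalar SSE math for float/double
gcc -m32 -mfpmath=387 test.c           # classic x87 math (the 32-bit default)

Note that even with -mfpmath=sse, GCC still performs long double arithmetic on the x87, since SSE has no extended-precision format.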

80 bits correspond to 20 hex digits. Maybe the bits which are not used are left over from some old operation? On my machine, I compiled your code, and in the long double case only 20 hex digits change: 66b4e0d2-ec09c1d5-00007ffe-deadbeef

The 128-bit version has all the bits changing. Looking at the objdump output, it looks as if it is using software emulation; there are almost no FPU instructions.

Further, LDBL_MAX seems to work as +inf for both long double and __float128. Adding or subtracting a number like 1.0E100 or 1.0E2000 to/from LDBL_MAX results in the same bit pattern. Up to now, it was my belief that the foo_MAX constants were to hold the largest representable number that is not +inf (apparently that isn't the case?).

This seems to be strange...

I'm also not quite sure how an 80-bit number could conceivably act as +inf for a 128-bit value... maybe I'm just too tired at the end of the day and have done something wrong.

It's probably being extended: the pattern which is recognized as +inf in the 80-bit type is translated to +inf in the 128-bit type, too.

Heartrending answered 3/5, 2013 at 14:13 Comment(7)
There is nothing odd about adding 1E2000L to LDBL_MAX and getting back LDBL_MAX. As LDBL_MAX is over 1E4932L, 1E2000L is much smaller than 1 ulp, which at that magnitude is 2^16320, roughly 6.5E4912. – Firedamp
It strikes me as unfortunate that standards writers would not require == to round its operands to their declared precision, since it's hard to think of any situation in which its results could be considered meaningful otherwise. Personally, I think the operator should generate a strong warning when used for floating-point comparisons with mismatched declared operand types, but I see no basis for allowing a compiler to arbitrarily substitute higher precision on == even if such substitution would be reasonable with most other operators. – Millibar
@supercat: Use of higher precision than requested is non-conformant, and controlled via a switch such as -ffast-math. – Hebert
@BenVoigt: Compilers may legitimately promote the operands of arithmetic operations to type double if FLT_EVAL_METHOD is 1, or to long double if it's 2. – Millibar
@supercat: I don't think compilers even conform to that (like you said, = should not store a higher-precision value even if the lhs is enregistered), unless you disable fast math. – Hebert
Why is the code with sin(0) not safe? Isn't sin(0) exactly zero, and isn't zero promoted to any other floating-point type still exactly zero? – Topple
double x = sin(0); if (x == sin(0)) printf("Equal!"); Wouldn't the bits of x just be all zero, and the FPU's internal bits for sin(0) all be zero, and since 0f == 0d is true, this should also be true? – Executrix

IEEE-754 defined 32- and 64-bit floating-point representations for the purpose of efficient data storage, and an 80-bit representation for the purpose of efficient computation. The intention was that given float f1,f2; double d1,d2; a statement like d1=f1+f2+d2; would be executed by converting the arguments to 80-bit floating-point values, adding them, and converting the result back to a 64-bit floating-point type. This would offer three advantages compared with performing operations on the other floating-point types directly:

  1. While separate code or circuitry would be required for conversions to/from the 32-bit and 64-bit types, it would only be necessary to have one "add" implementation, one "multiply" implementation, one "square root" implementation, etc.

  2. Although in rare cases using an 80-bit computational type could yield results that were very slightly less accurate than using other types directly (worst-case rounding error is 513/1024ulp in cases where computations on other types would yield an error of 511/1024ulp), chained computations using 80-bit types would frequently be more accurate--sometimes much more accurate--than computations using other types.

  3. On a system without an FPU, separating a double into a separate exponent and mantissa before performing computations, normalizing a mantissa, and converting a separate mantissa and exponent back into a double are somewhat time consuming. If the result of one computation will be used as input to another and then discarded, using an unpacked 80-bit type allows these steps to be omitted.

In order for this approach to floating-point math to be useful, however, it is imperative that it be possible for code to store intermediate results with the same precision as would be used in computation, such that temp = d1+d2; d4=temp+d3; will yield the same result as d4=d1+d2+d3;. From what I can tell, the purpose of long double was to be that type. Unfortunately, even though K&R designed C so that all floating-point values would be passed to variadic functions the same way, ANSI C broke that. In C as originally designed, given the code float v1,v2; ... printf("%12.6f", v1+v2);, printf wouldn't have to worry about whether v1+v2 would yield a float or a double, since the result would get coerced to a known type regardless. Further, even if the type of v1 or v2 changed to double, the printf statement wouldn't have to change.

ANSI C, however, requires that code which calls printf must know which arguments are double and which are long double; a lot--if not a majority--of the code which uses long double but was written on platforms where it's synonymous with double fails to use the correct format specifiers for long double values. Rather than having long double be an 80-bit type except when passed as a variadic function argument, in which case it would be coerced to 64 bits, many compilers decided to make long double synonymous with double and not offer any means of storing the results of intermediate computations. Since using an extended-precision type for computation is only good if that type is made available to the programmer, many people came to regard extended precision as evil, even though it was only ANSI C's failure to handle variadic arguments sensibly that made it problematic.

PS--The intended purpose of long double would have been better served if there had also been a long float, defined as the type to which float arguments could be most efficiently promoted; on many machines without floating-point units that would probably be a 48-bit type, but the optimal size could range anywhere from 32 bits (on machines with an FPU that does 32-bit math directly) up to 80 (on machines which use the design envisioned by IEEE-754). Too late now, though.

Millibar answered 5/6, 2015 at 19:23 Comment(6)
Otherwise it shows an interesting point of view. I'm not sure whether to believe it, though: it talks about intention, which is scarcely documented, and thus one has nowhere to look to check. – Topple
@Ruslan: Many processors without floating-point units can perform operations on IEEE-754 80-bit floating-point values faster than they can perform operations on IEEE-754 64-bit values. If that wasn't a major motivating factor for the design, it would seem a mighty huge coincidence. To be sure, coincidence can't be ruled out, since there are many cases where arbitrary design decisions have worked out amazingly well, but since I would expect computation efficiency was a design goal of IEEE-754, it would seem likely that such efficiency was deliberate. – Millibar
@Ruslan: I don't know that the IEEE was focused on the C language in particular, but Kahan (one of the people behind IEEE-754) has written about the advantage of having extra precision for intermediate calculations and performing rounding at controlled times. In some cases adding even one extra bit to intermediate calculations can make a huge difference in a computed result (e.g. use Heron's formula to compute the area of a triangle whose sides are 16777215.0f, 16777215.0f, and 4.0f; if the semiperimeter is computed as a float, the result will be off by about 50%). – Millibar
A point which Kahan makes repeatedly in his papers, but which language designers seemed to ignore, is that it's important that the rounding be done at precisely-controlled times; for accurate computations, there must be a type such that someType temp = a+b; result=temp+c; will be equivalent to result=a+b+c;. While the difference in precision between 64-bit and 80-bit precision is small, having rounding performed at the wrong times can have massive effects on calculation accuracy. – Millibar
BTW, the compilers which don't have an 80-bit long double (e.g. MSVC) don't compute with such precision either: if you look at a Windows program's startup environment, you'll see it has its FPU control word set to 53-bit precision instead of the fninit default of 64. – Topple
By "many compilers" do you mean just MSVC and Intel C++ without the long double flag, or are there others too? – Executrix

It boils down to the difference between 4.9999999999999999999 and 5.0.

  1. Although the range is the main difference, it is precision that is important.
  2. These types of data will be needed in great-circle calculations or coordinate mathematics that is likely to be used with GPS systems.
  3. As the precision is much better than that of a normal double, it means you can retain typically 18 significant digits without losing accuracy in calculations.
  4. Extended precision I believe uses 80 bits (used mostly in math processors), so 128 bits will be much more accurate.
Crucifix answered 15/11, 2017 at 18:24 Comment(1)
But... the largest great circle that's possible on this planet can only have 11 significant decimal digits (actually 10, considering the maximal resolution of GPS), whereas a double has more than 15 significant decimal digits, and math is performed at 18+ decimal digits anyway...? Besides, if exactly reproducible results are needed, floating point is the wrong tool anyway; fixed point does it. Floating point is never perfectly accurate, even at 20 times the size, since there are numbers you just cannot represent. – Adamandeve

C99 and C++11 added the types float_t and double_t (in <math.h> and <cmath>), which are aliases for built-in floating-point types. Roughly, float_t is the type of the result of doing arithmetic among values of type float, and double_t is the type of the result of doing arithmetic among values of type double.
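
A small check of what these aliases resolve to on a given target (my sketch; on x87 builds where FLT_EVAL_METHOD is 2 both are long double, while with SSE math they match float and double):

#include <float.h>   /* FLT_EVAL_METHOD */
#include <math.h>    /* float_t, double_t */
#include <stdio.h>

int main(void)
{
    printf("FLT_EVAL_METHOD  = %d\n", (int) FLT_EVAL_METHOD);
    printf("sizeof(float_t)  = %zu\n", sizeof(float_t));
    printf("sizeof(double_t) = %zu\n", sizeof(double_t));
    return 0;
}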

Melee answered 31/5, 2018 at 22:22 Comment(1)
These types are designed to maximize efficiency without wasting hardware: float_t will be at least 32 bits, but if the CPU implements float operations at a higher precision, float_t will have that many bits. – Executrix
