Why are double and long double completely the same on my 64 bit machine?

This question may sound like a beginner's question, but when I found this out I thought either I'm a beginner or my machine is missing something:

#include <iostream>
#include <cfloat>   // DBL_DIG, LDBL_DIG
using namespace std;

int main()
{
    cout << sizeof(double) << endl;
    cout << sizeof(long double) << endl;

    cout << DBL_DIG << endl;
    cout << LDBL_DIG << endl;

    return 0;
}

PROGRAM OUTPUT:

8
8
15
15

I thought long double is 10 bytes with 18 decimal digits, while double is 8 bytes with 15 digits, but it seems I was wrong.

Why is that so?

Using MSVC 2010 on a 64-bit machine.

Instructions answered 19/1, 2012 at 7:3 Comment(4)
en.wikipedia.org/wiki/C_data_types – Mairemaise
I asked something similar a while ago: https://mcmap.net/q/494179/-why-did-microsoft-abandon-long-double-data-type-closed/893693 – Proportional
Are you targeting x64 or x86? – Misteach
Possible duplicate of Why did Microsoft abandon long double data type? – Quade

In MSVC++, long double is a synonym for double, as you've found out. Apparently this is to take advantage of the SSE/SSE2/SSE3 instruction sets, which are limited to 64-bit operations.

See also here for more information.

Overuse answered 19/1, 2012 at 7:7 Comment(3)
Frightening that they'd decrease the size in a 64-bit version. I had to double-check, and it looks correct: msdn.microsoft.com/en-us/library/9cx8xs15.aspx – Flaherty
Well, it isn't a synonym; it maps to the same hardware, but it is still a different type that you can overload functions on. – Calderon
This sounds like "signed char is a synonym of char", which is also actually false. Whatever the internal representation is, long double is always a distinct nominal type from double, mandated by the standard. As said before, overloading counts. Plus there is an explicit std::numeric_limits<long double> specialization instead of an alias of std::numeric_limits<double>, and various macros specific to both types named differently. – Waksman
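
A minimal sketch (not part of the original thread) illustrating the point made in the last two comments: even on MSVC, where the two types share a representation, overload resolution still treats double and long double as distinct types. The helper name which is illustrative.

#include <iostream>

// Report which overload was chosen.
const char* which(double)      { return "double"; }
const char* which(long double) { return "long double"; }

int main()
{
    std::cout << which(1.0)  << '\n';   // picks the double overload
    std::cout << which(1.0L) << '\n';   // the L suffix makes a long double literal
}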

The sizes of all of the basic types are implementation-defined, with minimums. In particular, all that you are guaranteed is that double doesn't have less precision and range than float, and that long double doesn't have less precision and range than double.
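
A minimal sketch (assuming a C++11 compiler) of how those guarantees can be checked on any implementation:

#include <limits>

// The standard guarantees these relations on every implementation;
// the types may nevertheless all have the same size.
static_assert(std::numeric_limits<double>::digits >=
              std::numeric_limits<float>::digits,
              "double must offer at least float's precision");
static_assert(std::numeric_limits<long double>::digits >=
              std::numeric_limits<double>::digits,
              "long double must offer at least double's precision");

int main() {}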

From a quality of implementation point of view, the compiler should give you the best that the hardware offers. Many (most?) architectures only have two hardware supported floating point types; on such architectures, double and long double will normally be identical. On some architectures, it might make sense to have all three identical. On Intel, a quality implementation will have three different types, because that's what the hardware offers (but an implementation would still be compliant even if all three floating point types were identical). On the other hand, you can argue different sizes for long double: 10 (never seen), 12 (g++) or 16 bytes, for alignment reasons (with some of the bytes unused). An implementation for Intel where long double and double are identical is, however, simply poor quality, and not non-conformant.
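
To illustrate the size-versus-precision distinction (a sketch, assuming g++ on x86): sizeof reports the padded storage size, while numeric_limits reports the precision of the actual data.

#include <iostream>
#include <limits>

int main()
{
    // Typically 12 in 32-bit mode and 16 in 64-bit mode; the bytes
    // beyond the 10 bytes of actual data are alignment padding.
    std::cout << sizeof(long double) << '\n';
    // 64 mantissa bits for the x87 extended format, versus 53 for double.
    std::cout << std::numeric_limits<long double>::digits << '\n';
    std::cout << std::numeric_limits<double>::digits << '\n';
}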

Cliffordclift answered 19/1, 2012 at 8:31 Comment(8)
non-conformant? Also, the x87 FPU is deprecated in 64-bit mode (or is it completely inaccessible, even?), so when targeting 64-bit mode I'd expect double and long double to be identical. – Misteach
@jalf I don't know about 64-bit mode, but g++ gives sizeof(long double) as 16, and std::numeric_limits<long double>::digits == 64, which looks like what I would expect. (In 32-bit mode, sizeof(long double) is 12, but all of the other values are identical.) If the hardware supports three different sizes of floating point (apparently the case), and the compiler doesn't allow access to them, then this is a serious quality of implementation issue. – Cliffordclift
In 16-bit x86 implementations, sizeof(long double) is generally 10 bytes. Actually, even if individual long doubles are padded to 12 or 16 bytes each, it would seem like it might be helpful to have a ten-byte "packed long double" type which could be placed in a structure, especially if one could pad groups of three (e.g. an XYZ coordinate) to 32 bytes. – Selfcongratulation
@Selfcongratulation Yes. The real issue on an x86 architecture is how to align the long double (which affects its size, since the size must be a multiple of the alignment). The size of the actual data is 10 bytes, but depending on the architecture, this can lead to various alignment problems, slowing things down considerably. – Cliffordclift
@JamesKanze: My understanding is that the speed of data access on many processors is only affected by alignment in cases where it affects the number of "chunk boundaries" an item spans. If data is transferred in 128-bit chunks, a randomly byte-aligned collection of 64-bit doubles would be likely to have half of the items contained therein span a 128-bit boundary. If one has a collection of 80-bit doubles which are padded out to 128 bits, I would think access should be equally fast whether the padding is at the front or rear. Besides... – Selfcongratulation
@JamesKanze: I would think that a lot of scenarios would benefit from using 80-bit types for computations even if items stored in arrays were generally rounded to 64 bits. For example, if one computes a 1024-element dot product of two arrays, having extra precision available when computing the products and sum may greatly improve the accuracy of the result, even if the result itself will be cast back to a 64-bit double after the computation is complete (see the sketch after this thread). Also, in some cases long double may be a nice type for performing some types of whole-number maths involving lots of multiplies. – Selfcongratulation
@Selfcongratulation Where the padding is placed is up to the compiler (but all I know place it at the end). – Cliffordclift
@Selfcongratulation Additional precision is usually a good thing, yes. As I said, as a quality of implementation issue, there is a problem if the hardware supports extended precision, and the compiler doesn't use it. – Cliffordclift
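
A sketch of the accumulation idea from the dot-product comment above (the function name and signature are illustrative, not from the thread):

#include <cstddef>

// Accumulate in long double even though the inputs and the result are
// plain doubles. On hardware with a wider long double this reduces
// rounding error in the running sum; on MSVC, where long double has
// the same representation as double, it changes nothing.
double dot(const double* a, const double* b, std::size_t n)
{
    long double sum = 0.0L;
    for (std::size_t i = 0; i < n; ++i)
        sum += static_cast<long double>(a[i]) * b[i];
    return static_cast<double>(sum);
}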