Why do C compilers specify long to be 32-bit and long long to be 64-bit?

Wouldn't it have made more sense to make long 64-bit and reserve long long until 128-bit numbers become a reality?

Araujo answered 2/9, 2011 at 5:20 Comment(13)
Two things: first, long long ain't necessarily 64 bits. Second, isn't suggesting it be 128 bits wide similarly narrow-minded? We should be preparing for 1024-bit hardware to become commonplace, right? – Jadwiga
Actually "C compilers" do not specify that long is 32 bits, nor that int is 32 bits, nor that long long is 64 bits. This all depends very much on the compiler... So your question is based on a false premise. – Flitter
Wouldn't it make more sense to give the standard types fixed sizes (int32, int64, etc.) from the very beginning, and save us from a whole class of portability issues? That's what was done in C#, for example. – Amarillo
They finally did in C99: en.wikipedia.org/wiki/Stdint.h – Juryrig
@Jadwiga I doubt we'll ever get to 1024 bit; also, we are already preparing for 128-bit. Clearly you've never heard of quad-precision floating point numbers. – Araujo
@Flitter I'm talking about regular compilers like GCC or Visual C. – Araujo
@Eugene True, but most people just use int, long, and long long. – Araujo
I'd say we'll get to 1024-bit types in the form of SIMD registers. We're at 256 bits right now with AVX. Intel has plans to go up to 1024 bits. But as for basic integers, that might take a while... – Juryrig
@seljuq70: of course I'm not suggesting that 1024-bit hardware is going to happen any time soon, or that 128-bit isn't. The point is: why skip the current 64-bit hardware in favour of future 128-bit hardware? – Jadwiga
@seljuq70 "Most people" are not using those types; every professional programmer I know of either uses stdint.h from C99 or their own typedef'd equivalents. – Confluence
@seljuq70: long long can't be "reserved", since the C99 standard guarantees its existence. On a 16-bit system with a 16-bit int, 32-bit long and 64-bit long long they'd all be different, but those days are gone as far as desktop machines are concerned. We're not going to stick with 16-bit int just so that we don't feel there's a redundant type in the middle somewhere. – Anarchic
@Eugene - For another discussion on why not everything is fixed by the standard, see this question: Exotic-architectures-the-standard-committee-cares-about – Roee
What does the C++ standard state the size of int and long to be? – Eolanda

Yes, it does make sense, but Microsoft had their own reasons for defining "long" as 32 bits.

As far as I know, of all the mainstream systems right now, Windows is the only OS where "long" is 32 bits. On Unix and Linux, it's 64 bits.

All compilers for Windows treat "long" as 32 bits to maintain compatibility with Microsoft.

For this reason, I avoid using "int" and "long". Occasionally I'll use "int" for error codes and booleans (in C), but I never use either of them for code that depends on the size of the type.
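
As a quick check, a minimal program like the following prints whichever widths a given compiler actually chose (typically 4/8/8 bytes for int/long/long long under the LP64 model of Unix/Linux, and 4/4/8 under 64-bit Windows's LLP64 model):

    #include <stdio.h>

    int main(void)
    {
        /* All of these sizes are implementation-defined; the output differs
           between the LP64 (Unix/Linux) and LLP64 (64-bit Windows) data models. */
        printf("int:       %zu bytes\n", sizeof(int));
        printf("long:      %zu bytes\n", sizeof(long));
        printf("long long: %zu bytes\n", sizeof(long long));
        printf("void *:    %zu bytes\n", sizeof(void *));
        return 0;
    }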

Juryrig answered 2/9, 2011 at 5:25 Comment(2)
I use long in cases where 32 bits is big enough and I don't want int_least32_t or my own typedef all over my code. It's probably best to make the dependency obvious and explicit, and if it's in a struct you'd probably use int32_t to avoid bloating it where long is bigger, but there does come a point of "can't be bothered with this". – Anarchic
Many embedded devices (billions per year in 2015) use a 32-bit long. Hardly "all the mainstream systems ... it's 64-bit". – Aun

The C standard does NOT specify the exact width of the primitive data types, only their minimum widths, so compilers have latitude in choosing them. When deciding the width of each primitive type, the compiler designer has to weigh several factors, including the target computer architecture.

Here is a reference: http://en.wikipedia.org/wiki/C_syntax#Primitive_data_types
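
To make the "minimum width" guarantees concrete, here is a small sketch (assuming a C11 compiler for _Static_assert) that must compile on any conforming implementation, because the standard only promises lower bounds:

    #include <limits.h>

    /* The standard only guarantees minimum ranges; an implementation is
       free to make any of these types wider than the minimum shown here. */
    _Static_assert(CHAR_BIT >= 8, "char is at least 8 bits");
    _Static_assert(USHRT_MAX >= 65535u, "short is at least 16 bits");
    _Static_assert(UINT_MAX >= 65535u, "int is at least 16 bits");
    _Static_assert(ULONG_MAX >= 4294967295ul, "long is at least 32 bits");
    _Static_assert(ULLONG_MAX >= 18446744073709551615ull, "long long is at least 64 bits");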

Higinbotham answered 2/9, 2011 at 5:35 Comment(0)

For the history, including why UNIX systems generally converged on LP64, why Windows did not (a big code base that already assumed a 16-bit int and a 32-bit long), and the various arguments, see "The Long Road to 64 Bits" ("Double, double, toil and trouble" - Shakespeare, Macbeth): https://queue.acm.org/detail.cfm?id=1165766 (ACM Queue, 2006) or https://dl.acm.org/doi/pdf/10.1145/1435417.1435431 (CACM, 2009).

Note: I helped design the 64/32-bit MIPS R4000, suggested the idea that led to <inttypes.h>, and wrote the long long motivation section for C99.

Janise answered 16/12, 2022 at 7:37 Comment(0)

For historical reasons. For a long time (pun intended), "int" meant 16 bits; hence "long" was the 32-bit type. Of course, times changed. Hence "long long". :)

PS:

GCC (and others) currently support 128-bit integers as __int128 / unsigned __int128 (also exposed as the built-in types __int128_t and __uint128_t); there is no standard (u)int128_t in <stdint.h>.
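
A minimal sketch of using that extension (assuming GCC or Clang on a 64-bit target; __int128 is non-standard, and printf has no conversion specifier for it, so the value is printed as two 64-bit halves):

    #include <stdio.h>

    int main(void)
    {
        unsigned __int128 x = (unsigned __int128)1 << 100;  /* 2^100, too big for 64 bits */

        /* No printf specifier exists for 128-bit integers, so split into halves. */
        printf("high 64 bits: %llu\n", (unsigned long long)(x >> 64));
        printf("low  64 bits: %llu\n", (unsigned long long)x);
        return 0;
    }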

PPS:

Here's a discussion of why the folks at GCC made the decisions they did:

http://www.x86-64.org/pipermail/discuss/2005-August/006412.html

Whippet answered 2/9, 2011 at 5:24 Comment(0)

Ever since the days of the first C compiler for a general-purpose reprogrammable microcomputer, it has often been necessary for code to make use of types that held exactly 8, 16, or 32 bits, but until 1999 the Standard didn't explicitly provide any way for programs to specify that. On the other hand, nearly all compilers for 8-bit, 16-bit, and 32-bit microcomputers define "char" as 8 bits, "short" as 16 bits, and "long" as 32 bits. The only difference among them is whether "int" is 16 bits or 32.

While a 32-bit or larger CPU could use "int" as a 32-bit type, leaving "long" available as a 64-bit type, there is a substantial corpus of code which expects that "long" will be 32 bits. While the C Standard added "fixed-sized" types in 1999, there are other places in the Standard which still use "int" and "long", such as "printf". While C99 added macros to supply the proper format specifiers for fixed-sized integer types, there is a substantial corpus of code which expects that "%ld" is a valid format specifier for int32_t, since it will work on just about any 8-bit, 16-bit, or 32-bit platform.
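
For example (a sketch of that point about format specifiers, using the C99 macros from <inttypes.h>), the portable way to print an int32_t looks like this, while the ubiquitous "%ld" habit is only correct where int32_t happens to be long:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t n = 123456;

        /* Portable: PRId32 expands to whatever specifier matches int32_t
           on this implementation ("d" on most platforms, "ld" on some). */
        printf("%" PRId32 "\n", n);

        /* Common legacy habit: "%ld" expects a long argument, so it needs
           a cast on platforms where int32_t is not a typedef for long. */
        printf("%ld\n", (long)n);
        return 0;
    }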

Whether it makes more sense for "long" to be 32 bits, out of respect for an existing code base going back decades, or 64 bits, so as to avoid the need for the more verbose "long long" or "int64_t" to identify the 64-bit type, is probably a judgment call. Given that new code should probably favor fixed-size types where practical, though, I'm not sure I see a compelling advantage to making "long" 64 bits unless "int" is also 64 bits (which would create even bigger problems with existing code).

Frederico answered 9/5, 2016 at 20:52 Comment(0)

C99 N1256 standard draft

The sizes of long and long long are implementation-defined; all we know is:

  • minimum size guarantees
  • relative sizes between the types

5.2.4.2.1 Sizes of integer types <limits.h> gives the minimum sizes:

1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown [...]

  • UCHAR_MAX 255 // 2^8 − 1
  • USHRT_MAX 65535 // 2^16 − 1
  • UINT_MAX 65535 // 2^16 − 1
  • ULONG_MAX 4294967295 // 2^32 − 1
  • ULLONG_MAX 18446744073709551615 // 2^64 − 1

6.2.5 Types then says:

8 For any two integer types with the same signedness and different integer conversion rank (see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a subrange of the values of the other type.

and 6.3.1.1 Boolean, characters, and integers determines the relative conversion ranks:

1 Every integer type has an integer conversion rank defined as follows:

  • The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
  • The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.
  • For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3.
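
A small check of what an actual implementation provides beyond those guaranteed minimums (just a sketch; the printed values depend entirely on the platform):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* The standard's values quoted above are only lower bounds;
           these are what this particular implementation provides. */
        printf("UINT_MAX   = %u (>= 65535)\n", UINT_MAX);
        printf("ULONG_MAX  = %lu (>= 4294967295)\n", ULONG_MAX);
        printf("ULLONG_MAX = %llu (>= 18446744073709551615)\n", ULLONG_MAX);
        return 0;
    }
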
Cardiganshire answered 9/5, 2016 at 19:39 Comment(0)
