Long Vs. Int C/C++ - What's The Point?
As I've learned recently, a long in C/C++ is the same length as an int. To put it simply, why? It seems almost pointless to even include the datatype in the language. Does it have any uses specific to it that an int doesn't have? I know we can declare a 64-bit int like so:

long long x = 0;

But why does the language choose to do it this way, rather than just making a long well...longer than an int? Other languages such as C# do this, so why not C/C++?

Commonplace answered 17/9, 2011 at 18:23 Comment(8)
The reason we use types like u8, s16, s32, u64, etc.Decosta
"a long in C/C++ is the same length as an int." Not always. The C++ standard specifies that an int be the "natural" size for the processor, which may not always be as big as a long. The standard also guarantees that a long is at least as long as an int, so the fact that they are equal sizes is not always guaranteed.Summarize
Historical and architectural baggage. Believe me, it can hurt making assumptions like you make about int and long always being the same size; I am currently fixing a ton of portability issues in one of our C++ static libraries as we transition to a 64-bit architectureFourteenth
Practically speaking, sizeof(long) == sizeof(int) only on 32-bit architectures or 64-bit Windows, where it comes as a shock to programmers who are used to sizeof(long) == sizeof(void *).Ona
IMO, the integers are char, short, long, and long long. int is just a "typedef" for whichever is fastest on my system.Stanleigh
I still find all this "long" and "short" nonsense to be mostly pointless. It may be this long on this architecture and that long on that one and it may have yet another length on another architecture. Utterly unportable. IMO there should only be a single integer type (with signed and unsigned variations, of course), and if you need anything of a specific size then there should be types (as there are) which have a fixed, guaranteed size (i.e. uint8, uint16, etc.).Cardiograph
@Cardiograph Agreed. long does not mean longer than int (it means at least the size of int, and possibly longer - usually it isn't). However, C does guarantee that, at a minimum, unsigned int can hold at least 65,535, while signed long can hold at least 2,147,483,647. Though I'd argue you should know more about the architecture you're building for, this at least guarantees that you'll always be able to hold values up to 2,147,483,647 if you use long.Toper
Just for kicks I'll throw this out there: on 8-bit Arduino microcontroller development boards at least (using Atmel's ATmega328 microcontroller, for example), char is 8-bits, short is 16-bits, int is 16-bits, and long is 32-bits. So in this case short and int are identical, while long truly is longer.Scevour

When writing in C or C++, every datatype is architecture- and compiler-specific. On one system int is 32 bits, but you can find ones where it is 16 or 64; the exact size isn't fixed by the standard, so it's up to the compiler (within the minimum ranges the standard requires).

As for long and int, it comes from the days when the standard integer was 16 bits and long was a 32-bit integer - and it indeed was longer than int.

Guaiacum answered 17/9, 2011 at 18:26 Comment(11)
So it's really just a legacy thing, then?Commonplace
@MGZero: You're sort of right, but you didn't read the answer fully. There was a time in the long past when int and short were the same size on many platforms. If int were still 16 bits, then that would be a 'legacy' thing. int is basically meant to represent a very efficient size for integers on whatever platform you happen to be on.Ene
+1 - also prob good to mention that this is why c99 defined the uintNN_t etc types to clear things up.Gall
@Brian Roach: Which many C implementations have stupidly still not implemented. Grr... sighEne
@MGZero, Not at all, this continues to vary by platform, and there are many platforms that still define int and long with different sizes.Bisayas
This is completely wrong on many platforms. As a common example, int is 16 bits on AVR microcontrollers, which are common teaching tools in university CS courses.Detriment
@MostAwesomeDude: what is wrong? I have written it's compiler- and architecture-specific. Is that wrong? From what I see in your comment, you actually agreed with me...Guaiacum
@Griwes, you imply that this is a thing of the past, when in fact, the present has plenty of platforms where long and int are different sizes, and the future shows no signs of not having such platforms. You don't describe the actual rules governing the sizes of types in C at all.Bisayas
@Mike: As I read it, he explains the rationale for the current situation, a rationale which by necessity must be found in the past.Favourable
I wouldn't say it's a legacy thing or only found in the past. It's a portability thing.. The C/C++ standards are defined with some minimum degree of portability in mind, and were written in such a way as to only provide minimalistic definitions in relation to each other. A long is not the same size as an int, it's only guaranteed to be no smaller than an int, but if the platform/compiler requires it to be larger, it is free to do so. On some platforms, int may be 16 bits and long is 32 bits. On others, they may both be 32 bits. On still others, 32 and 64 bits, respectively, and so on.Krall
Implementations with 32-bit int and 64-bit long are very common. (I'm typing this comment on such a system.)Involucre

The specific guarantees are as follows:

  • char is at least 8 bits (1 byte by definition, however many bits it is)
  • short is at least 16 bits
  • int is at least 16 bits
  • long is at least 32 bits
  • long long (in versions of the language that support it) is at least 64 bits
  • Each type in the above list is at least as wide as the previous type (but may well be the same).

Thus it makes sense to use long if you need a type that's at least 32 bits, and int if you need a type that's reasonably fast and at least 16 bits.

Actually, at least in C, these lower bounds are expressed in terms of ranges, not sizes. For example, the language requires that INT_MIN <= -32767 and INT_MAX >= +32767. The 16-bit requirement follows from this and from the requirement that integers are represented in binary.

C99 adds <stdint.h> and <inttypes.h>, which define types such as uint32_t, int_least32_t, and int_fast16_t; these are typedefs, usually defined as aliases for the predefined types.

(There isn't necessarily a direct relationship between size and range. An implementation could make int 32 bits, but with a range of only, say, -2^23 .. +2^23-1, with the other 8 bits (called padding bits) not contributing to the value. It's theoretically possible (but practically highly unlikely) that int could be larger than long, as long as long has at least as wide a range as int. In practice, few modern systems use padding bits, or even representations other than 2's-complement, but the standard still permits such oddities. You're more likely to encounter exotic features in embedded systems.)

Involucre answered 18/9, 2011 at 3:17 Comment(2)
Then there is a problem, because almost everyone assumes that int will store a 32-bit number. So they usually put a million into an int variable. Nowadays everyone assumes it's 32 bits, I think. If we are going to be politically correct, everyone should start to use long as their default data type.Birdman
@off99555 There is no "default data type". You can safely assume that int is at least 32 bits if you're on an implementation that makes that guarantee; all POSIX-conforming systems do so. If you make that assumption, your code won't be portable to systems with 16-bit int. That may or may not be a good tradeoff. (And being "politically correct" is irrelevant.)Involucre

long is not the same length as an int. According to the specification, long is at least as large as int. For example, on Linux x86_64 with GCC, sizeof(long) = 8, and sizeof(int) = 4.

Ultramicroscope answered 17/9, 2011 at 18:27 Comment(0)

long is not the same size as int, it is at least the same size as int. To quote the C++03 standard (3.9.1-2):

There are four signed integer types: “signed char”, “short int”, “int”, and “long int.” In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.

My interpretation of this is "just use int, but if for some reason that doesn't fit your needs and you are lucky to find another integral type that's better suited, be our guest and use that one instead". One way that long might be better is if you're on an architecture where it is... longer.

Yves answered 17/9, 2011 at 18:28 Comment(2)
"One way that long might be better is if you're on an architecture where it is... longer." Not really. long might be better if you write portable code and you need an integer of at least 32 bits. long is guaranteed to be no less than 32 bits by the C standard. int is not guaranteed (it may be 16 bits).Chapen
@SergeDundich: Very true. No excuses for not addressing the question in C for me.Yves

I was looking for something completely unrelated, stumbled across this, and needed to answer. Yeah, this is old, so for people who surf on in later...

Frankly, I think all the answers on here are incomplete.

The size of a long is the size of the number of bits your processor can operate on at one time. It's also called a "word". A "half-word" is a short. A "doubleword" is a long long and is twice as large as a long (and originally was only implemented by vendors and not standard), and even bigger than a long long is a "quadword" which is twice the size of a long long but it had no formal name (and not really standard).

Now, where does the int come in? In part registers on your processor, and in part your OS. Your registers define the native sizes the CPU handles which in turn define the size of things like the short and long. Processors are also designed with a data size that is the most efficient size for it to operate on. That should be an int.

On today's 64-bit machines you'd assume, since a long is a word and a word on a 64-bit machine is 64 bits, that a long would be 64 bits and an int whatever the processor is designed to handle - but it might not be. Why? Your OS has chosen a data model and defined these data sizes for you (pretty much by how it's built). Ultimately, if you're on Windows (and using Win64) it's 32 bits for both a long and an int. Solaris and Linux use different definitions (the long is 64 bits). These definitions are called things like ILP64, LP64, and LLP64. Windows uses LLP64 and Solaris and Linux use LP64:

Model      ILP64   LP64   LLP64
int        64      32     32
long       64      64     32
pointer    64      64     64
long long  64      64     64

Where, e.g., ILP means int-long-pointer, and LLP means long-long-pointer

To get around this, most compilers support setting the size of an integer directly with types like int32_t or int64_t.

Compete answered 27/5, 2016 at 6:55 Comment(4)
"The size of a long is the size of the number of bits your processor can operate on at one time." -- The standard doesn't say or imply that, and it's not always true. On a system with 16-bit words that needs multiple instructions to operate on a 32-bit quantity, long is still required to be at least 32 bits. There is no requirement that short is half the width of long, or that long long is twice the width of long -- and neither is true on the system I'm using (16-bit short, 64-bit long, 64-bit long long).Involucre
Finally, all conforming C and C++ compilers define int32_t and int64_t in <stdint.h> (introduced in C in 1999 and later adopted by C++). That's assuming the implementation has types that support the requirements; not all do.Involucre
Since you didn't tag me in your comment, I didn't see it until just now. I did in fact read the entire answer. What about my comments makes you think I didn't?Involucre
this is a bad answer. it is confidently inaccurate, and even internally contradictoryGadgeteer

© 2022 - 2024 — McMap. All rights reserved.