Does the size of an int depend on the compiler and/or processor?

Would the size of an integer depend upon the compiler, OS and processor?

Bemba answered 25/2, 2010 at 4:59 Comment(6)
I answered it was the processor... probably I was wrong :(Bemba
Perhaps you should retitle your question post? You only seemed to be concerned with the size of an int, not a char (which is by definition of size 1 regardless of compiler or processor).Groceryman
Related SO post: https://mcmap.net/q/14666/-what-does-the-c-standard-say-about-the-size-of-int-longGroceryman
Almost duplicate: #1680611Flamingo
"why there is a close vote?" A person with more than 100 questions to his credit should know to search first...Luellaluelle
It's important to remember that "int" and "integer" are two different things: int is one of several integer types.Mirabella

The answer to this question depends on how far from practical considerations we are willing to get.

Ultimately, in theory, everything in C and C++ depends on the compiler and only on the compiler. Hardware/OS is of no importance at all. The compiler is free to implement a hardware abstraction layer of any thickness and emulate absolutely anything. There's nothing to prevent a C or C++ implementation from giving the int type any size and any representation, as long as it is large enough to meet the minimum requirements specified in the language standard. Practical examples of such a level of abstraction are readily available, e.g. programming languages based on a "virtual machine" platform, like Java.

However, C and C++ are intended to be highly efficient languages. In order to achieve maximum efficiency, a C or C++ implementation has to take into account certain considerations derived from the underlying hardware. For that reason it makes a lot of sense to make sure that each basic type is based on some representation directly (or almost directly) supported by the hardware. In that sense, the sizes of basic types do depend on the hardware.

In other words, a specific C or C++ implementation for a 64-bit hardware/OS platform is absolutely free to implement int as a 71-bit 1's-complement signed integral type that occupies 128 bits of memory, using the other 57 bits as padding bits that are always required to store the birthdate of the compiler author's girlfriend. This implementation will even have certain practical value: it can be used to perform run-time tests of the portability of C/C++ programs. But that's where the practical usefulness of that implementation would end. Don't expect to see something like that in a "normal" C/C++ compiler.
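
For what it's worth, a quick way to see what a particular implementation actually chose is to print the sizes and limits directly. A minimal C sketch; the numbers it prints are, of course, entirely up to your compiler and target:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits per byte; 8 on practically every modern host */
        printf("CHAR_BIT           = %d\n", CHAR_BIT);
        printf("sizeof(int)        = %zu bytes (%zu bits)\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        /* The standard only guarantees INT_MAX >= 32767 and INT_MIN <= -32767;
           anything beyond that is the implementation's choice. */
        printf("INT_MIN .. INT_MAX = %d .. %d\n", INT_MIN, INT_MAX);
        return 0;
    }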

Southpaw answered 25/2, 2010 at 5:25 Comment(7)
Some of the other answers given are perfectly valid and correct, but I like this one best because it tries to give a deeper and more well-rounded explanation.Postscript
I'll just quote the C++ standard for completeness: "Plain ints have the natural size suggested by the architecture of the execution environment." It's up to the compiler writer to decide what "natural" means.Benitez
And the C standard has similar wording: "A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>)." The part about INT_MIN and INT_MAX isn't all that helpful, since they're defined as the lower and upper bounds of type int.Mirabella
I haven't understood your answer. Can you please improve it by discussing the roles that the hardware, OS, and compiler play in deciding the size of data types such as int? The opening line says everything depends upon the compiler, but then you mention there are some hardware constraints for efficiency. Can you discuss this please?Aguilera
@Y.E.P: I don't see the contradiction. "Hardware constraints" are not really constraints at all. They are merely suggestions that have to be followed in order to result in the most efficient code. However, if you don't care about the maximum achievable efficiency of the compiled code, then you can implement a compiler that completely ignores any hardware considerations.Southpaw
I'd say the practical usefulness could go a little bit further. What if the compiler author needs to remember his girlfriend's birthdate? :vNoise
I like the idea that the compiler can do anything. I'm going to implement a fractal int, where what you're storing is the entropy of an integer, and what you get back is heat.Livia

Yes, it depends on both the processor (more specifically, the ISA, instruction set architecture, e.g., x86 and x86-64) and the compiler, including its programming model. For example, on 16-bit machines sizeof(int) was 2 bytes, while 32-bit machines use 4 bytes for int. int was traditionally considered to be the native size of the processor, i.e., the size of a register. However, 32-bit computers were so popular, and such a huge amount of software was written for the 32-bit programming model, that it would have been very confusing if 64-bit computers had used 8 bytes for int. Both Linux and Windows keep int at 4 bytes, but they differ in the size of long.

Please take a look at the 64-bit programming models: LP64 on most *nix systems (long and pointers become 64 bits, int stays 32 bits) and LLP64 on Windows (long stays 32 bits; only long long and pointers become 64 bits).

Such differences are quite troublesome when you write code that should work on both Windows and Linux, so I always use int32_t or int64_t from stdint.h rather than long.
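
As a small illustration of that advice (a sketch, assuming a C99 compiler that provides <stdint.h>): the fixed-width types keep the same size under both LP64 and LLP64, while long does not.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* long is 8 bytes under LP64 (64-bit Linux/macOS) but only 4 bytes
           under LLP64 (64-bit Windows) -- this is where portability breaks. */
        printf("sizeof(long)    = %zu\n", sizeof(long));

        /* The fixed-width types are the same size everywhere they exist. */
        printf("sizeof(int32_t) = %zu\n", sizeof(int32_t));
        printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));
        return 0;
    }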

Corroborant answered 25/2, 2010 at 5:4 Comment(7)
A small typo, you mentioned that sizeof(int) is 16 on a 16-bit machine, but it is more likely to be 2.Udele
If you need a type that is "at least 32 bits", then long suffices, and shouldn't cause a problem if it's too long. The major exception is when you're directly reading from or writing to an on-disk or network format.Pushcart
@Pushcart - never assume long is >= 32 bits. It probably is, but don't count on it. I used a compiler that used 1 bit for short, 8 bits for int, and 16 bits for long. Use int32_t instead.Subfloor
@MarkLakata: Whatever language that compiler was compiling, it wasn't C. long must be able to represent all numbers in the range -2147483647 to 2147483647.Pushcart
@Pushcart - you are correct as far as ANSI C goes. short can't be 1 bit either, and int can't be 8-bits. But the real world is never 100% compliant with anything. If you don't believe me, here is the reference manual: ccsinfo.com/downloads/CReferenceManual.pdf see page 37Subfloor
@MarkLakata: If you can't trust the size of long, mandated by the standard, then you can't trust the size of int32_t either, since that also derives its guaranteed size from the standard.Pushcart
Thanks! This URL is awesome. I had a buggy piece of code that typedef'd uint32 as long on a *nix platform, which is wrong according to the LP64 model.Brillatsavarin

Yes, it would. If the intended question was "which does it depend on: the compiler or the processor?", then the answer is basically "both." Normally, int won't be bigger than a processor register (unless that's smaller than 16 bits), but it could be smaller (e.g. a 32-bit compiler running on a 64-bit processor). Generally, however, you'll need a 64-bit processor to run code with a 64-bit int.
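
Note that this limitation is specifically about the type named int; even on a 32-bit target the compiler provides a wider integer type (long long, which C99 requires to be at least 64 bits) by lowering its arithmetic to multiple 32-bit instructions. A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* long long is required to be at least 64 bits even where int is 32 bits;
           on a 32-bit CPU the compiler lowers this to several 32-bit instructions. */
        long long big = 1LL << 40;
        printf("sizeof(int) = %zu, sizeof(long long) = %zu, big = %lld\n",
               sizeof(int), sizeof(long long), big);
        return 0;
    }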

Nabors answered 25/2, 2010 at 5:4 Comment(2)
The first part of your answer is very enlightening to me. I am confused when you said "you'll need a 64-bit processor to run code with a 64-bit int". I have learnt that an int bigger than the native size of the machine can be handled by breaking it down into smaller chunks; there would be some performance penalty. Isn't that the case?Anchoveta
@Sarurabh: I was commenting very specifically about the type named int. Many (most?) 32-bit compilers have a 64-bit type, but it'll have a different name (e.g., __int64 or long long).Nabors

Based on some recent research I have done while studying up for firmware interviews:

The most significant impact of the processor's bit width (8-bit, 16-bit, 32-bit, 64-bit) is on how data needs to be laid out so that variables can be processed in the minimum number of cycles.

The bit width of your processor tells you the natural word length the CPU is capable of handling in one cycle. A 32-bit machine needs two accesses to handle a 64-bit double, even when it is properly aligned in memory. Most personal computers were, and at the time of writing still are, 32-bit, which is the most likely reason for C compilers' typical affinity for 32-bit integers, with options for larger floating-point numbers and long long ints.

Clearly you can compute with larger variable sizes, so in that sense the CPU's bit architecture determines how it has to store larger and smaller variables in order to achieve the best possible processing efficiency, but it is in no way a limiting factor on the byte sizes of ints or chars; those are part of the compiler and dictated by convention or standards.

I found this site very helpful, http://www.geeksforgeeks.org/archives/9705, for explaining how the CPU's natural word length affects how it chooses to store and handle larger and smaller variable types, especially with regard to bit packing into structs. You have to be very aware of how you order your members, because larger variables need to be aligned in memory so that accessing them does not straddle word boundaries and take extra cycles. Ordering your members poorly will add a lot of potentially unnecessary padding/empty space to things like structs.
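
As a concrete illustration of the ordering point, here is a small sketch; the exact padding depends on the ABI, but typical 32-bit and 64-bit ABIs behave like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Poorly ordered: padding is inserted so the 4-byte member stays aligned. */
    struct bad {
        uint8_t  a;   /* 1 byte + 3 bytes of padding */
        uint32_t b;   /* 4 bytes */
        uint8_t  c;   /* 1 byte + 3 bytes of tail padding */
    };

    /* Same members, largest first: the two 1-byte members share one 4-byte slot. */
    struct good {
        uint32_t b;
        uint8_t  a;
        uint8_t  c;   /* followed by 2 bytes of tail padding */
    };

    int main(void)
    {
        printf("sizeof(struct bad)  = %zu\n", sizeof(struct bad));   /* typically 12 */
        printf("sizeof(struct good) = %zu\n", sizeof(struct good));  /* typically 8 */
        return 0;
    }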

Rambler answered 10/7, 2012 at 20:40 Comment(0)

The simple and correct answer is that it depends on the compiler. That doesn't mean the architecture is irrelevant, but the compiler deals with it, not your application. More accurately, it depends on the (target) architecture the compiler is built for, e.g. whether it is 32-bit or 64-bit.

Consider a Windows application that creates a file, writes an int plus other things to it, and reads it back. What happens if you run this on both 32-bit and 64-bit Windows? What happens if you copy a file created on a 32-bit system and open it on a 64-bit system?

You might think the size of the int will be different in each file, but no, they will be the same, and this is the crux of the question. You pick the compiler settings to target a 32-bit or 64-bit architecture, and that dictates everything.
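
If you do write binary data like this, one common way to keep the file layout independent of any particular compiler's int is to use a fixed-width type. A minimal sketch (byte order is a separate concern, and the file name here is just illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int32_t makes the on-disk size explicit instead of relying on
           whatever size the compiler happens to pick for int. */
        int32_t value = 42;

        FILE *f = fopen("value.bin", "wb");
        if (f == NULL)
            return 1;
        fwrite(&value, sizeof value, 1, f);   /* always writes exactly 4 bytes */
        fclose(f);
        return 0;
    }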

Resound answered 9/9, 2014 at 18:35 Comment(0)

http://www.agner.org/optimize/calling_conventions.pdf

Section "3 Data representation" contains a good overview of what compilers do with integral types.

Chapel answered 20/4, 2015 at 12:58 Comment(0)

Data type sizes depend on the processor, because the compiler wants to make the data easy for the CPU to access. For example, if the processor is 32-bit, the compiler may not choose a 2-byte int (it is expected to choose 4 bytes), because accessing another 2 bytes of that int (4 bytes) would take an additional CPU cycle, which is wasted. If the compiler chooses a 4-byte int, the CPU can access the full 4 bytes in one shot, which speeds up your application.

Thanks

Thracophrygian answered 25/2, 2010 at 5:29 Comment(1)
This is bogus: "Data type sizes depend on the processor, because the compiler wants to make the data easy for the CPU to access." The compiler is not a person, it has no wants. You can implement the compiler whichever way you want. Efficiency of the generated code, in whatever metric, is a concern, but is not a concern where the standard is concerned. As another answerer has said, you're free to produce a binary image with the date of birth of your girlfriend interspersed between every instruction :) Your abstract discussion of what a CPU might or might not do w.r.t. cycles etc is pointless.Magi

The size of int is equal to the word length, which depends on the underlying ISA. The processor is just the hardware implementation of the ISA, and the compiler is just the software-side implementation of the ISA. Everything revolves around the underlying ISA. The most popular ISA these days is Intel's IA-32; it has a word length of 32 bits, or 4 bytes. So 4 bytes could be the maximum size of 'int' (just plain int, not short or long) that compilers based on IA-32 could use.

Anchoveta answered 1/3, 2011 at 1:40 Comment(0)

The size of a data type basically depends on the compiler, and compilers are designed around the architecture of the processor, so externally the data type sizes can be considered compiler-dependent. For example, the size of an integer is 2 bytes in the 16-bit Turbo C compiler but 4 bytes in the gcc compiler, even though both programs are executed on the same processor.

Kolkhoz answered 25/1, 2015 at 6:36 Comment(0)

Yes. I found that the size of int in Turbo C was 2 bytes, whereas in the MSVC compiler it was 4 bytes.

Basically the size of int is the size of the processor registers.

Valor answered 25/2, 2010 at 5:3 Comment(1)
"Basically the size of int is the size of the processor registers." - This is incorrect, see other answers.Meacham
