To my understanding, int was originally meant to be the platform's "native" integer type, with the additional guarantee that it is at least 16 bits in size - a size that was considered "reasonable" back then. When 32-bit platforms became more common, we can say that the "reasonable" size changed to 32 bits:
- Modern Windows uses a 32-bit int on all platforms.
- POSIX guarantees that int is at least 32 bits.
- C# and Java have an int type that is guaranteed to be exactly 32 bits.
But when 64-bit platforms became the norm, no one expanded int to be a 64-bit integer, because of:
- Portability: a lot of code depends on int being 32 bits in size.
- Memory consumption: doubling memory usage for every int would be unreasonable in most cases, since the numbers in use are usually much smaller than 2 billion.
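A quick way to see this in practice is to print the sizes directly (a minimal sketch; the exact output depends on the platform's data model):

#include <stdio.h>

int main(void)
{
    /* On x86_64 Linux (LP64) this typically prints 4 / 8 / 8;
       on x86_64 Windows (LLP64) it prints 4 / 4 / 8. Either way,
       int stayed 32 bits even though the platform is 64-bit. */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(void*) = %zu\n", sizeof(void *));
    return 0;
}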
Now, why would you prefer uint32_t to uint_fast32_t? For the same reason C# and Java always use fixed-size integers: programmers do not write code thinking about the possible sizes of different types; they write for one platform and test the code on that platform. Most code implicitly depends on specific sizes of data types. And this is why uint32_t is a better choice for most cases - it leaves no ambiguity about its behavior.
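As a small sketch of that ambiguity: the point at which unsigned arithmetic wraps around depends on what uint_fast32_t actually is (4 bytes on x86_64 Windows, 8 bytes on x86_64 Linux, as noted below), while uint32_t always wraps at 2^32:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t      a = UINT32_MAX;   /* exactly 32 bits everywhere */
    uint_fast32_t b = UINT32_MAX;   /* at least 32 bits; may be 64 */

    /* Always prints 0. */
    printf("uint32_t:      %" PRIu32 "\n", (uint32_t)(a + 1));
    /* Prints 0 where uint_fast32_t is 32 bits, 4294967296 where it is 64 bits. */
    printf("uint_fast32_t: %" PRIuFAST32 "\n", (uint_fast32_t)(b + 1));
    return 0;
}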
Moreover, is uint_fast32_t really the fastest type of at least 32 bits on a given platform? Not really. Consider this code, compiled by GCC for x86_64 on Windows:
extern uint64_t get(void);
uint64_t sum(uint64_t value)
{
return value + get();
}
The generated assembly looks like this:
push %rbx
sub $0x20,%rsp
mov %rcx,%rbx
callq d <sum+0xd>
add %rbx,%rax
add $0x20,%rsp
pop %rbx
retq
Now if you change get()'s return type to uint_fast32_t (which is 4 bytes on x86_64 Windows), you get this:
push %rbx
sub $0x20,%rsp
mov %rcx,%rbx
callq d <sum+0xd>
mov %eax,%eax ; <-- additional instruction
add %rbx,%rax
add $0x20,%rsp
pop %rbx
retq
Notice how the generated code is almost the same, except for the additional mov %eax,%eax instruction after the function call, which zero-extends the 32-bit return value to 64 bits.
There is no such issue if you only use 32-bit values, but you will probably be using them together with size_t variables (array sizes, perhaps?), and those are 64 bits on x86_64. On Linux uint_fast32_t is 8 bytes, so there the situation is different.
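To make that size_t interaction concrete, here is a minimal sketch (the function names are made up for illustration): indexing an array with a 32-bit counter forces the compiler to widen the counter to pointer width on x86_64, while a size_t counter already matches it.

#include <stddef.h>
#include <stdint.h>

/* Sums `count` elements with a 32-bit index (assumes count <= UINT32_MAX).
   On x86_64 the index has to be zero-extended to 64 bits before it can be
   used in address arithmetic, which may cost extra instructions. */
uint64_t sum_u32_index(const uint64_t *data, size_t count)
{
    uint64_t total = 0;
    for (uint32_t i = 0; i < count; i++)
        total += data[i];
    return total;
}

/* Same loop with a size_t index: no widening is needed. */
uint64_t sum_sizet_index(const uint64_t *data, size_t count)
{
    uint64_t total = 0;
    for (size_t i = 0; i < count; i++)
        total += data[i];
    return total;
}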
Many programmers use int when they need to return a small value (let's say in the range [-32, 32]). This would work perfectly if int were the platform's native integer size, but since it is not on 64-bit platforms, another type that matches the platform's native type is a better choice (unless it is frequently used with other integers of smaller size).
Basically, regardless of what the standard says, uint_fast32_t is broken on some implementations anyway. If you care about the additional instructions generated in some places, you should define your own "native" integer type. Or you can use size_t for this purpose, as it will usually match the native size (I am not including old and obscure platforms like the 8086 here, only platforms that can run Windows, Linux, etc.).
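A minimal sketch of such a self-defined "native" type (the name native_uint is made up, and it assumes the pointer width matches the native register width, which holds on the mainstream platforms mentioned above):

#include <stdint.h>

/* Pick an unsigned type as wide as a pointer; on mainstream 32-bit and
   64-bit platforms this matches the native register width. */
#if UINTPTR_MAX == UINT64_MAX
typedef uint64_t native_uint;
#elif UINTPTR_MAX == UINT32_MAX
typedef uint32_t native_uint;
#else
#error "unsupported pointer width"
#endif

Using size_t directly, as suggested above, gets you much the same thing without the extra typedef.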
Another sign that int was supposed to be a native integer type is the "integer promotion rule". Most CPUs can only perform operations on their native word size, so a 32-bit CPU usually can only do 32-bit additions, subtractions, etc. (Intel CPUs are an exception here). Integer types of other sizes are supported only through load and store instructions. For example, an 8-bit value has to be loaded with the appropriate "load 8-bit signed" or "load 8-bit unsigned" instruction, which expands the value to 32 bits after the load. Without the integer promotion rule, C compilers would have to add a little bit more code for expressions that use types smaller than the native type. Unfortunately, this does not hold anymore on 64-bit architectures, as compilers now have to emit additional instructions in some cases (as was shown above).
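A small sketch of the promotion rule in action: the uint8_t operands below are promoted to int, so the addition itself happens at (32-bit) int width and only the return narrows the result back to 8 bits.

#include <stdint.h>

uint8_t add_bytes(uint8_t a, uint8_t b)
{
    /* a and b are promoted to int, the sum is computed as int,
       and the cast truncates the result back to 8 bits. */
    return (uint8_t)(a + b);
}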
Comments:

uint_fast32_t, which if I'm understanding correctly, is at least 32 bits (meaning it could be more? Sounds misleading to me). I'm currently using uint32_t and friends on my project because I'm packing up this data and sending it over the network, and I want the sender and receiver to know exactly how big the fields are. Sounds like this may not be the most robust solution since a platform may not implement uint32_t, but all of mine do apparently, so I'm fine with what I'm doing. – Griddle

uint32_t doesn't give you that (and it's a pity there's no uint32_t_be and uint32_t_le, which would be more appropriate for almost every possible case where uint32_t is currently the best option). – Attention

_be and _le types. I agree, such types would be ideal for networking applications. I currently only have 2 target systems, and they're both little endian, so endianness hasn't been an issue, and as such I've decided to brush it under the rug for the moment. Maybe that will come back to bite me later, but accounting for endianness shouldn't be a terrible amount of re/additional work. – Griddle

uint_fast32_t is likely defined as uint64_t. This can be a gotcha if you expect uint_fast32_t to behave like a 32-bit type, and also using fast types blindly and having a 64-bit type for every variable is likely to have negative performance characteristics. – Photofinishing

uint32_t - in some attempts at portability. I end up forcing a byte order by converting to a byte array before outputting them anyway, though. That was the main reason I figured the exactly-32-bit requirement is often useless. – Wurth

htonl() in a structure (e.g. like struct myPacketFormat { uint32_t_le sequenceNumber; ... }), so you end up with htonl() and friends scattered everywhere (except for that one place where you forgot, which takes you 4 days to find). ;-) – Attention
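Since several of the comments above are about sending fixed-size integers over the network, here is a minimal sketch of the "convert to a byte array before outputting" approach mentioned above (the helper name put_u32_be is made up for illustration): it writes a uint32_t in big-endian (network) order regardless of host endianness.

#include <stdint.h>

/* Serialize a uint32_t into big-endian byte order using shifts,
   so the result does not depend on the host's endianness. */
void put_u32_be(uint8_t out[4], uint32_t value)
{
    out[0] = (uint8_t)(value >> 24);
    out[1] = (uint8_t)(value >> 16);
    out[2] = (uint8_t)(value >> 8);
    out[3] = (uint8_t)(value);
}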