When should I use UINT32_C(), INT32_C(),... macros in C?

I switched to fixed-width integer types in my projects, mainly because they help me think about integer sizes more clearly. Including them via #include <inttypes.h> also brings in a bunch of other macros, such as the printing macros PRIu32, PRIu64, ...

To assign a constant value to a fixed-width variable I can use macros like UINT32_C() and INT32_C(). I started using them whenever I assigned a constant value.

This leads to code similar to this:

uint64_t i;
for (i = UINT64_C(0); i < UINT64_C(10); i++) { ... }

Now I have seen several examples that do not bother with this. One is the stdbool.h header:

#define bool    _Bool
#define false   0
#define true    1

bool has a size of 1 byte on my machine, so it does not look like an int. But 0 and 1 are int literals, which the compiler should turn automatically into the right type. If I used that approach in my example, the code would be much easier to read:

uint64_t i;
for (i = 0; i < 10; i++) { ... }

So when should I use fixed-width constant macros like UINT32_C(), and when should I leave that work to the compiler (I'm using GCC)? And what if I were writing MISRA C code?

Mccarthy answered 26/11, 2016 at 19:38 Comment(6)
Personally I shy away from using the fixed-width types: not all compilers support them, and the promotion rules with these types are practically non-existent, in my humble opinion a big oversight in the C standard. Upvote for the question though. – Lomasi
Interesting question, can't wait to see the answers. – Demoralize
You might need them if int is narrower than 32 bits. – Solfatara
@Lomasi With the stdint types you get far fewer implicit-promotion problems than with the native types, because they behave deterministically instead of having an arbitrary size. So with the stdint types you can write code that works portably no matter the promotions, while with the native types you get code that may or may not work. But writing portable code properly is mostly a matter of the programmer actually being aware of implicit promotions, regardless of the choice of types. – Diatomite
Possible duplicate of: Initializing objects with macros for integer constants – Pelecypod
Another dup: Which initializer is appropriate for an int64_t? – Pelecypod

As a rule of thumb, you should use them when the type of the literal matters. There are two things to consider: the size and the signedness.

Regarding size:

The C standard guarantees that an int can hold values up to at least 32767. Since an integer literal can never have a type smaller than int, no value up to 32767 needs the macros. For larger values the type of the literal starts to matter, and it is a good idea to use the macros there.
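
For example (a minimal sketch; SAMPLE_RATE is a made-up name):

#include <stdint.h>

/* 100000 does not fit in a 16-bit int, so the literal's type would vary
   across platforms; UINT32_C() pins it to an unsigned type of at least 32 bits. */
#define SAMPLE_RATE UINT32_C(100000)

uint32_t ticks = SAMPLE_RATE / 10u; /* evaluated in a 32-bit unsigned type everywhere */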

Regarding signedness:

Integer literals with no suffix are usually of a signed type. This is potentially dangerous, as it can cause all manner of subtle bugs through implicit type promotion. For example, (my_uint8_t + 1) << 31 invokes undefined behavior on a system with 32-bit int, because my_uint8_t + 1 is promoted to signed int and the shift overflows it, while (my_uint8_t + 1u) << 31 is well-defined unsigned arithmetic.

This is why MISRA has a rule stating that all integer literals should have a u/U suffix if the intention is to use unsigned types. So in my example above you could write my_uint8_t + UINT32_C(1), but you could just as well use 1u, which is perhaps the most readable. Either should be fine for MISRA.
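
A minimal sketch of the difference (the function and parameter names are invented for illustration):

#include <stdint.h>

void shift_demo(uint8_t u8)
{
    uint32_t bad = (u8 + 1) << 31;           /* u8 + 1 is promoted to signed int: undefined behavior on a 32-bit int system */
    uint32_t ok  = (u8 + 1u) << 31;          /* unsigned arithmetic: well-defined */
    uint32_t ok2 = (u8 + UINT32_C(1)) << 31; /* same intent, explicit about the width */
    (void)bad; (void)ok; (void)ok2;          /* silence unused-variable warnings */
}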


As for why stdbool.h defines true/false as 1/0: the standard explicitly says so. The results of boolean conditions (comparison and logical operators) in C are still of type int, not bool as in C++, for backwards-compatibility reasons.

It is however considered good style to treat boolean conditions as if C had a true boolean type. MISRA-C:2012 has a whole set of rules regarding this concept, called essentially Boolean type. This gives better type safety during static analysis and can also prevent various bugs.
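
A minimal sketch of that style (the function and parameter names are invented for illustration):

#include <stdbool.h>
#include <stdint.h>

bool is_ready(uint32_t status)
{
    /* Return a genuine boolean; compare explicitly instead of relying on "truthiness". */
    return (status & 0x1u) != 0u;
}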

Diatomite answered 28/11, 2016 at 10:41 Comment(3)
Integer size doesn't matter either. Integer literals will be of the smallest type that can hold the literal's value, or (unsigned) int, whichever is greater. – Deen
... to make the language more detailed: (my_uint8_t + 1) can become a signed integer, and shifting it will make it a negative value. – Esotropia
@JayLee No, since 1 has a type too, and it is int, which is a larger integer type than uint8_t, with a higher conversion rank. See this for details: #46073795 – Diatomite

It's for smallish integer literals in contexts where the compiler won't implicitly convert them to the correct type.

I've worked on an embedded platform where int is 16 bits and long is 32 bits. If you are trying to write portable code that works with either a 16-bit or a 32-bit int, and you want to pass a 32-bit "unsigned integer literal" to a variadic function, where no implicit conversion to a target type takes place, you need the macro:

#define BAUDRATE UINT32_C(38400)
printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);

On the 16-bit platform the macro expands to 38400UL, and on the 32-bit platform to just 38400U. Either one then matches that platform's PRIu32 macro, "lu" or "u" respectively.

I think that most compilers would generate identical code for (uint32_t)X and for UINT32_C(X) when X is an integer literal, but that might not have been the case with early compilers. One practical difference remains: the macro expands to a plain suffixed literal, so it can be used in preprocessor #if expressions, where a cast cannot.
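
A minimal sketch of that difference, reusing the BAUDRATE definition above:

#if BAUDRATE > UINT32_C(115200) /* fine: both macros expand to plain suffixed literals */
#error "baud rate too high"
#endif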

Henrie answered 28/11, 2016 at 2:7 Comment(0)
