The x32 ABI specifies, among other things, 32-bit pointers for code generated for the x86_64 architecture. It combines the advantages of the x86_64 architecture (including 64-bit CPU registers) with the reduced overhead of 32-bit pointers.
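For illustration (my own minimal check, not part of the original question), the difference is visible in the basic type sizes: pointers shrink to 32 bits under x32 while long long, like the general-purpose registers, stays 64 bits:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* x32: pointers are 32 bits, long long is 64 bits.
       Conventional x86_64 (-m64): both are 64 bits. */
    printf("void *    : %zu bits\n", CHAR_BIT * sizeof (void *));
    printf("long long : %zu bits\n", CHAR_BIT * sizeof (long long));
    return 0;
}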
The <stdint.h> header defines the typedefs int_fast8_t, int_fast16_t, int_fast32_t, and int_fast64_t (and the corresponding unsigned types uint_fast8_t et al.), each of which is:
an integer type that is usually fastest to operate with among all integer types that have at least the specified width
with a footnote:
The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
(Quoted from the N1570 C11 draft.)
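As a quick usage sketch (mine, not from the question): a fast type holds at least the stated width but may be wider, so the <inttypes.h> format macros are the portable way to print it regardless of what width the implementation picked:

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint_fast32_t sum = 0;               /* at least 32 bits, possibly wider */
    for (uint_fast32_t i = 0; i < 10; i++)
        sum += i;
    /* PRIuFAST32 expands to the correct conversion specifier for uint_fast32_t */
    printf("sum = %" PRIuFAST32 "\n", sum);
    return 0;
}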
The question is: how should the [u]int_fast16_t and [u]int_fast32_t types be defined for the x86_64 architecture, with and without the x32 ABI? Is there an x32 document that specifies these types? Should they be compatible with the 32-bit x86 definitions (both 32 bits), or, since x32 has access to 64-bit CPU registers, should they be the same size with or without the x32 ABI? (Note that x86_64 has 64-bit registers regardless of whether the x32 ABI is in use.)
Here's a test program (which depends on the gcc-specific __x86_64__ macro):
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void) {
#if defined __x86_64__ && SIZE_MAX == 0xFFFFFFFF
    puts("This is x86_64 with the x32 ABI");
#elif defined __x86_64__ && SIZE_MAX > 0xFFFFFFFF
    puts("This is x86_64 without the x32 ABI");
#else
    puts("This is not x86_64");
#endif
    printf("uint_fast8_t is %2zu bits\n", CHAR_BIT * sizeof (uint_fast8_t));
    printf("uint_fast16_t is %2zu bits\n", CHAR_BIT * sizeof (uint_fast16_t));
    printf("uint_fast32_t is %2zu bits\n", CHAR_BIT * sizeof (uint_fast32_t));
    printf("uint_fast64_t is %2zu bits\n", CHAR_BIT * sizeof (uint_fast64_t));
}
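As an aside, if I remember correctly gcc also predefines __ILP32__ under -mx32 and __LP64__ under -m64, so the ABI check could avoid SIZE_MAX entirely; a sketch of that variant:

#include <stdio.h>

int main(void) {
    /* Assumes gcc's predefined macros: __ILP32__ for the x32 data model,
       __LP64__ for the conventional 64-bit one. */
#if defined __x86_64__ && defined __ILP32__
    puts("This is x86_64 with the x32 ABI");
#elif defined __x86_64__ && defined __LP64__
    puts("This is x86_64 without the x32 ABI");
#else
    puts("This is not x86_64");
#endif
    return 0;
}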
When I compile it with gcc -m64, the output is:
This is x86_64 without the x32 ABI
uint_fast8_t is 8 bits
uint_fast16_t is 64 bits
uint_fast32_t is 64 bits
uint_fast64_t is 64 bits
When I compile it with gcc -mx32, the output is:
This is x86_64 with the x32 ABI
uint_fast8_t is 8 bits
uint_fast16_t is 32 bits
uint_fast32_t is 32 bits
uint_fast64_t is 64 bits
(which, apart from the first line, matches the output with gcc -m32, which generates 32-bit x86 code).
Is this a bug in glibc (which defines the <stdint.h> header), or is it following some x32 ABI requirement? There are no references to the [u]int_fastN_t types in either the x32 ABI document or the x86_64 ABI document, but there could be something else that specifies them.
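For reference, implementations typically choose these typedefs with a compile-time check on the ABI's word size; the following is only an illustrative sketch of that pattern (the __WORDSIZE macro is a glibc-internal detail, and the choices shown are not a quote of glibc's actual headers):

/* Illustrative sketch only -- not glibc's actual <stdint.h>. */
#if __WORDSIZE == 64
typedef long int int_fast16_t;   /* 64 bits on the 64-bit ABI */
typedef long int int_fast32_t;
#else
typedef int      int_fast16_t;   /* 32 bits on 32-bit ABIs, including x32 */
typedef int      int_fast32_t;
#endif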
One could argue that the fast16 and fast32 types should be 64 bits with or without x32, since 64-bit registers are available; would that make more sense than the current behavior?
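One practical counter-argument (my own illustration, not part of the original question): the fast types also end up in arrays and structs, where a 64-bit int_fast32_t doubles the memory footprint, and hence the cache traffic, compared with int32_t:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* If int_fast32_t is 64 bits, an array of a million elements is twice
       as large as the equivalent int32_t array. */
    printf("1M int32_t      : %zu bytes\n", 1000000 * sizeof (int32_t));
    printf("1M int_fast32_t : %zu bytes\n", 1000000 * sizeof (int_fast32_t));
    return 0;
}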
(I've substantially edited the original question, which asked only about the x32 ABI. The question now asks about x86_64 with or without x32.)
<stdint.h> is provided by glibc, not by gcc, you're right; I've updated the question. If you're saying it's not a bug, I'd be interested in your rationale. Since the system has 64-bit registers, int64_t should be faster than int32_t, so int_fast32_t should be 64 bits, just as it is on x86_64. – Kehr

Is int64_t faster than int32_t when working with values that only need 32 bits? – Calico

On x86_64, [u]int_fast16_t and [u]int_fast32_t are 64 bits. Whatever rationale led to that decision should also apply to x32, unless I'm missing something. – Kehr

… [u]int32_t (32 bits) on x86_64? They currently have different sizes on x32 vs. x86_64; is there any good reason for them to differ? – Kehr

… int_fast16_t with 64 bits is not so attractive any more. – Clouse

… (int in most implementations) are 4-byte aligned, not 8-byte aligned. – Kehr