You can use a union, which is endianness-dependent, together with bit shifts, which are not. Run-time version:
uint32_t big_endian (uint32_t n)
{
    union
    {
        uint32_t u32;
        uint8_t  u8 [4];
    } be;

    for (size_t i = 0; i < 4; i++)
    {
        size_t shift = (4 - 1 - i) * 8;
        be.u8[i] = (n >> shift) & 0xFFu;
    }
    return be.u32;
}
u8[0] will always contain the MS byte on big endian machines. However, n >> shift will grab the relevant byte portably, regardless of host endianness. Notably, the whole function is just overhead bloat when running on a big endian machine, since it copies every byte to the position it already had.
Converting this to an ugly compile-time macro would be something like this:
typedef union
{
    uint32_t u32;
    uint8_t  u8 [4];
} be_t;

#define BIG_ENDIAN(n) ( _Generic((n), uint32_t: (void)0), \
    (be_t){ .u8 = { ((n) >> 24) & 0xFFu, \
                    ((n) >> 16) & 0xFFu, \
                    ((n) >>  8) & 0xFFu, \
                     (n)        & 0xFFu } }.u32 )
The _Generic check + the , operator is just for type safety and can be removed if stuck with a pre-C11 compiler. The macro uses a temporary union in the form of a compound literal (the outer {}), initializes the u8 array (the inner {}), then returns a uint32_t value.
Trying BIG_ENDIAN(0x12345678) on little endian x86 and disassembling, I get:
mov esi, 2018915346
2018915346 dec = 0x78563412, i.e. the byte-reversed constant, folded entirely at compile time.