How to make GCC generate bswap instruction for big endian store without builtins?
Update: This was fixed in GCC 8.1.

I'm working on a function that stores a 64-bit value into memory in big endian format. I was hoping that I could write portable C99 code that works on both little and big endian platforms and have modern x86 compilers generate a bswap instruction automatically without any builtins or intrinsics. So I started with the following function:

#include <stdint.h>

void
encode_bigend_u64(uint64_t value, void *vdest) {
    uint8_t *bytes = (uint8_t *)vdest;
    bytes[0] = value >> 56;
    bytes[1] = value >> 48;
    bytes[2] = value >> 40;
    bytes[3] = value >> 32;
    bytes[4] = value >> 24;
    bytes[5] = value >> 16;
    bytes[6] = value >> 8;
    bytes[7] = value;
}

This works fine for clang which compiles this function to:

bswapq  %rdi
movq    %rdi, (%rsi)
retq

But GCC fails to detect the byte swap. I tried a couple of different approaches but they only made things worse. I know that GCC can detect byte swaps using bitwise-and, shift, and bitwise-or, but why doesn't it work when writing bytes?

Edit: I found the corresponding GCC bug.

Pitapat answered 8/4, 2016 at 10:41 Comment(5)
With clang -march=native it gets even better: just one instruction: movbeq %rdi, (%rsi) – Costplus
I updated my answer with a version that compiles to ideal code on gcc and clang, using GNU C __builtin_bswap64 via the glibc function htobe64. – Marmara
If movbe is available, then the sequence can be reduced to movbeq %rdi, (%rsi). Maybe it's another optimization bug with both Clang and GCC. – Kalli
I was looking into this, for unrelated reasons, on RISC-V, where clang detects various sequences (or not) and generates better or worse code, depending... See gcc.godbolt.org/z/dwMH9S which includes a few of the suggestions on this page as well. – Uppish
GCC >= 8 does the optimization: gcc.godbolt.org/z/bYYA5W – Panhellenic

This seems to do the trick:

#include <stdint.h>
#include <string.h>

void encode_bigend_u64(uint64_t value, void* dest)
{
  value =
      ((value & 0xFF00000000000000u) >> 56u) |
      ((value & 0x00FF000000000000u) >> 40u) |
      ((value & 0x0000FF0000000000u) >> 24u) |
      ((value & 0x000000FF00000000u) >>  8u) |
      ((value & 0x00000000FF000000u) <<  8u) |
      ((value & 0x0000000000FF0000u) << 24u) |
      ((value & 0x000000000000FF00u) << 40u) |
      ((value & 0x00000000000000FFu) << 56u);
  memcpy(dest, &value, sizeof(uint64_t));
}

clang with -O3

encode_bigend_u64(unsigned long, void*):
        bswapq  %rdi
        movq    %rdi, (%rsi)
        retq

clang with -O3 -march=native

encode_bigend_u64(unsigned long, void*):
        movbeq  %rdi, (%rsi)
        retq

gcc with -O3

encode_bigend_u64(unsigned long, void*):
        bswap   %rdi
        movq    %rdi, (%rsi)
        ret

gcc with -O3 -march=native

encode_bigend_u64(unsigned long, void*):
        movbe   %rdi, (%rsi)
        ret

Tested with clang 3.8.0 and gcc 5.3.0 on http://gcc.godbolt.org/ (I don't know exactly which processor is underneath for the -march=native, but I strongly suspect a recent x86_64 processor).


If you want a function which works for big endian architectures too, you can use the answers from here to detect the endianness of the system and add an if. Both the union and the pointer-cast versions work and are optimized by both gcc and clang, resulting in the exact same assembly (no branches). Full code on godbolt:

int is_big_endian(void)
{
    union {
        uint32_t i;
        char c[4];
    } bint = {0x01020304};

    return bint.c[0] == 1;
}

void encode_bigend_u64_union(uint64_t value, void* dest)
{
  if (!is_big_endian())
    value = /* ... the mask-and-shift swap from above ... */;
  memcpy(dest, &value, sizeof(uint64_t));
}
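Spelled out fully, that sketch might look like the following (my own illustration; the name encode_bigend_u64_portable is not from the original answer):

```c
#include <stdint.h>
#include <string.h>

static int is_big_endian(void)
{
    union {
        uint32_t i;
        char c[4];
    } bint = { 0x01020304 };
    return bint.c[0] == 1;
}

/* Portable store: swap only on little-endian hosts, then memcpy.
 * On a big-endian host the bytes are already in the right order. */
void encode_bigend_u64_portable(uint64_t value, void *dest)
{
    if (!is_big_endian())
        value = ((value & 0xFF00000000000000u) >> 56u) |
                ((value & 0x00FF000000000000u) >> 40u) |
                ((value & 0x0000FF0000000000u) >> 24u) |
                ((value & 0x000000FF00000000u) >>  8u) |
                ((value & 0x00000000FF000000u) <<  8u) |
                ((value & 0x0000000000FF0000u) << 24u) |
                ((value & 0x000000000000FF00u) << 40u) |
                ((value & 0x00000000000000FFu) << 56u);
    memcpy(dest, &value, sizeof(uint64_t));
}
```

Since is_big_endian() is a compile-time constant after optimization, the untaken branch disappears and both gcc and clang emit the same branchless code on either kind of host.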

Intel® 64 and IA-32 Architectures Instruction Set Reference (3-542 Vol. 2A):

MOVBE—Move Data After Swapping Bytes

Performs a byte swap operation on the data copied from the second operand (source operand) and store the result in the first operand (destination operand). [...]

The MOVBE instruction is provided for swapping the bytes on a read from memory or on a write to memory; thus providing support for converting little-endian values to big-endian format and vice versa.

Costplus answered 11/4, 2016 at 14:58 Comment(14)
static_cast and std:: look a lot like C++. The question is tagged as C though. – Vanillin
@Vanillin yeah, didn't see the C tag. Corrected (sad face sad face) – Costplus
@Pitapat of course. For big-endian architectures the function should be just a copy. – Costplus
Use -march=haswell for godbolt links. Usually it's haswell, but for a couple weeks it was IvB. Also, post a "full link" to the code, not just to the front page. Half the benefit of godbolt for stackoverflow is generating permalinks to the code and compile options, so people can go and try with other compiler versions or compile options. – Marmara
@Costplus What I want is a single function that works for both little and big endian platforms. Sorry for not making that clear enough. – Pitapat
@Pitapat Is using a macro to determine the endianness and having 2 functions an option? – Costplus
@Pitapat Or just add an if inside the function. Since the condition will be known at compile time, the branch not taken will be optimized away. – Costplus
Edited the question and added a link to the code on godbolt. – Costplus
The is_big_endian function doesn't handle mixed-endian architectures, but your code should work for the vast majority of platforms, so I'm accepting the answer. – Pitapat
I checked your code with gcc on ARM (bi-endian) and PowerPC (big endian) on the godbolt page and it seems that gcc always detects the swap and emits the reverse instruction. – Sportive
Actually, this solution can result in unaligned memory access, so it's not as portable as I thought. – Pitapat
@Pitapat How so? If the dest argument is a pointer to an actual uint64_t object I think there should be no alignment issues. Can you please elaborate on how you get an unaligned access? It is intriguing. – Costplus
In the general case, the dest argument may not be aligned. But this can be resolved by changing encode_bigend_u64 to write the result with memcpy. – Pitapat
@Pitapat Feel free to edit the question if you want and add a note about alignment. You've spent more time analyzing this. – Costplus

All functions in this answer with asm output on the Godbolt Compiler Explorer


GNU C has a uint64_t __builtin_bswap64 (uint64_t x), since GNU C 4.3. This is apparently the most reliable way to get gcc / clang to generate code that doesn't suck for this.

glibc provides htobe64, htole64, and similar host to/from BE and LE functions that swap or not, depending on the endianness of the machine. See the docs for <endian.h>. The man page says they were added to glibc in version 2.9 (released 2008-11).

#define _BSD_SOURCE             /* See feature_test_macros(7) */

#include <stdint.h>

#include <endian.h>
// ideal code with clang from 3.0 onwards, probably earlier
// ideal code with gcc from 4.4.7 onwards, probably earlier
uint64_t load_be64_endian_h(const uint64_t *be_src) { return be64toh(*be_src); }
    movq    (%rdi), %rax
    bswap   %rax

void store_be64_endian_h(uint64_t *be_dst, uint64_t data) { *be_dst = htobe64(data); }
    bswap   %rsi
    movq    %rsi, (%rdi)

// check that the compiler understands the data movement and optimizes away a double-conversion (which inline-asm `bswap` wouldn't)
// it does optimize away with gcc 4.9.3 and later, but not with gcc 4.9.0 (2x bswap)
// optimizes away with clang 3.7.0 and later, but not clang 3.6 or earlier (2x bswap)
uint64_t double_convert(uint64_t data) {
  uint64_t tmp;
  store_be64_endian_h(&tmp, data);
  return load_be64_endian_h(&tmp);
}
    movq    %rdi, %rax

You safely get good code even at -O1 from those functions, and they use movbe when -march is set to a CPU that supports that insn.


If you're targeting GNU C, but not glibc, you can borrow the definition from glibc (remember it's LGPLed code, though):

#ifdef __GNUC__
# if __GNUC_PREREQ (4, 3)

static __inline unsigned int
__bswap_32 (unsigned int __bsx) { return __builtin_bswap32 (__bsx);  }

# elif __GNUC__ >= 2
    // ... some fallback stuff you only need if you're using an ancient gcc version, using inline asm for non-compile-time-constant args
# endif  // gcc version
#endif // __GNUC__

If you really need a fallback that might compile well on compilers that don't support GNU C builtins, the code from @bolov's answer could be used to implement a bswap that compiles nicely. Pre-processor macros could be used to choose whether to swap or not (like glibc does), to implement host-to-BE and host-to-LE functions. The bswap used by glibc when __builtin_bswap or x86 asm isn't available uses the mask-and-shift idiom that bolov found was good. gcc recognizes it better than just shifting.
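As a sketch of that preprocessor-selection approach (my own illustration, not glibc's actual code; it assumes GNU C's predefined __BYTE_ORDER__ / __ORDER_BIG_ENDIAN__ macros and __builtin_bswap64):

```c
#include <stdint.h>

/* Host-to-big-endian helper selected at preprocessing time, in the
 * spirit of glibc's <endian.h>. The name my_htobe64 is mine. */
static inline uint64_t my_htobe64(uint64_t x)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return x;                      /* big-endian host: no swap needed */
#else
    return __builtin_bswap64(x);   /* little-endian host: swap */
#endif
}
```

Applying it twice is the identity on any host, which is a quick way to check the dispatch is consistent.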


The code from this Endian-agnostic coding blog post compiles to bswap with gcc, but not with clang. IDK if there's anything that both their pattern-recognizers will recognize.

// Note that this is a load, not a store like the code in the question.
uint64_t be64_to_host(unsigned char* data) {
    return
      ((uint64_t)data[7]<<0)  | ((uint64_t)data[6]<<8 ) |
      ((uint64_t)data[5]<<16) | ((uint64_t)data[4]<<24) |
      ((uint64_t)data[3]<<32) | ((uint64_t)data[2]<<40) |
      ((uint64_t)data[1]<<48) | ((uint64_t)data[0]<<56);
}

    ## gcc 5.3 -O3 -march=haswell
    movbe   (%rdi), %rax
    ret

    ## clang 3.8 -O3 -march=haswell
    movzbl  7(%rdi), %eax
    movzbl  6(%rdi), %ecx
    shlq    $8, %rcx
    orq     %rax, %rcx
    ... completely naive implementation

The htonll from this answer compiles to two 32bit bswaps combined with shift/or. This kind of sucks, but isn't terrible with either gcc or clang.
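The two-32-bit-swap pattern behind that htonll is roughly the following (my sketch; it uses GNU C's __builtin_bswap32 instead of htonl so it stays self-contained):

```c
#include <stdint.h>

/* 64-bit byte swap built from two 32-bit swaps: swap each half,
 * then exchange the halves. This is the shape a compiler has to
 * recognize and fuse into a single 64-bit bswap. */
static inline uint64_t bswap64_via_32(uint64_t x)
{
    uint32_t hi = (uint32_t)(x >> 32);
    uint32_t lo = (uint32_t)(x & 0xFFFFFFFFu);
    return ((uint64_t)__builtin_bswap32(lo) << 32) | __builtin_bswap32(hi);
}
```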


I didn't have any luck with a union { uint64_t a; uint8_t b[8]; } version of the OP's code. clang still compiles it to a 64bit bswap, but I think it compiles to even worse code with gcc. (See the godbolt link).
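For concreteness, that union variant is something like this (a reconstruction for illustration, not necessarily the exact code tested; the name encode_bigend_u64_u is mine):

```c
#include <stdint.h>
#include <string.h>

/* Union-based variant of the OP's byte-store code: write the bytes
 * through a union member instead of a cast pointer, then copy out.
 * Endian-independent: b[0] always gets the most significant byte. */
void encode_bigend_u64_u(uint64_t value, void *vdest)
{
    union { uint64_t a; uint8_t b[8]; } u;
    u.b[0] = value >> 56;
    u.b[1] = value >> 48;
    u.b[2] = value >> 40;
    u.b[3] = value >> 32;
    u.b[4] = value >> 24;
    u.b[5] = value >> 16;
    u.b[6] = value >> 8;
    u.b[7] = value;
    memcpy(vdest, &u.a, sizeof(u.a));
}
```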

Marmara answered 12/4, 2016 at 21:39 Comment(0)

I like Peter's solution, but here's something else you can use on Haswell. Haswell has the movbe instruction, which is 3 uops there (no cheaper than bswap r64 + a normal load or store), but is faster on Atom / Silvermont (https://agner.org/optimize/):

// AT&T syntax, compile without -masm=intel
inline
uint64_t load_bigend_u64(uint64_t value)
{
    __asm__ ("movbe %[src], %[dst]"   // x86-64 only
             :  [dst] "=r" (value)
             :  [src] "m" (value)
            );
    return value;
}

Use it with something like uint64_t tmp = load_bigend_u64(array[i]);

You could reverse this to make a store_bigend function, or use bswap to modify a value in a register and let the compiler load/store it.
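The bswap-in-register route might look like this sketch (x86-64 inline asm only; the function name store_bigend_u64 is mine):

```c
#include <stdint.h>

/* Byte-swap the value in a register with bswap (available on all
 * x86-64 CPUs), then let the compiler pick the store instruction. */
static inline void store_bigend_u64(uint64_t *dst, uint64_t value)
{
    __asm__ ("bswap %[val]" : [val] "+r" (value));
    *dst = value;
}
```

Because the asm only touches a register, the compiler stays free to schedule the store itself, avoiding the store/reload traffic a memory-operand asm can cause.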


I changed the function to return the value because the alignment of vdest was not clear to me.

Usually a feature is guarded by a preprocessor macro. I'd expect __MOVBE__ to be used for the movbe feature flag, but it's not present (this machine has the feature):

$ gcc -march=native -dM -E - < /dev/null | sort
...
#define __LWP__ 1
#define __LZCNT__ 1
#define __MMX__ 1
#define __MWAITX__ 1
#define __NO_INLINE__ 1
#define __ORDER_BIG_ENDIAN__ 4321
...
Kalli answered 24/1, 2017 at 5:17 Comment(2)
You left out a comma, so no way this even compiles. Also, you seem to be using Intel-syntax operand-order without mentioning that fact. I wouldn't recommend using movbe from inline asm; you're likely to end up making the compiler store/reload and then store again. And BTW, with the source being memory, shouldn't you call it decode, not encode? – Marmara
Apart from this difficulty of using it in inline asm if you don't know the source will be in memory, movbe is nice on AMD where it's only a single op. – Trochanter

© 2022 - 2024 — McMap. All rights reserved.