Branchless count-leading-zeros on 32-bit RISC-V without Zbb extension

The context of this question is the creation of a side-channel resistant implementation of an IEEE-754 compliant single-precision square root for a 32-bit RISC-V platform without hardware support for floating-point arithmetic and without the Zbb extension for advanced bit manipulation. Integer multiplies, in particular the MUL and MULHU instructions, are supported by the hardware and can be assumed to have fixed latency. Counting the leading zero bits is required for the normalization of subnormal operands, and the CLZ emulation should be branchless to fit the side-channel resistant design.
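
For context, the leading-zero count feeds into the normalization step roughly as sketched below. This is a simplified illustration, not the actual square-root code; the helper name is a placeholder of my own choosing, and it assumes the full-range cntlz() shown further down:

#include <stdint.h>

uint32_t cntlz (uint32_t a);   /* full-range CLZ, implementation below */

/* Simplified sketch: shift a non-zero subnormal significand left until its
   most significant set bit lands in the implicit-bit position (bit 23) and
   compensate the exponent. A normal 24-bit significand held in a 32-bit
   word has 8 leading zeros; zero operands are handled separately. */
static void normalize_subnormal (uint32_t *significand, int32_t *exponent)
{
    uint32_t shift = cntlz (*significand) - 8;
    *significand <<= shift;
    *exponent -= (int32_t)shift;
}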

I started with C99 code for a 32-bit leading-zero count that I used twenty years ago on ARMv4t processors. This is a full-range implementation, i.e. it returns 32 for an input of zero.

#include <stdint.h>

uint32_t cntlz (uint32_t a)
{ 
    uint32_t n = 0;
#if 0
    n = __builtin_clz (a);
#else
    n = !a + 1;
    if (a < 0x00010000u) { n |= 16;  a <<= 16; }
    if (a < 0x01000000u) { n |=  8;  a <<=  8; }
    if (a < 0x10000000u) { n |=  4;  a <<=  4; }
    if (a < 0x40000000u) { n +=  2;  a <<=  2; }
    n = n - (a >> 31);
#endif
    return n;
}

As a sanity check, I compiled the above source with clang 18.1 -marm -march=armv4t, resulting in the following code. At 16 instructions (not counting the function return), it uses one instruction more than the best ARMv4t implementation I am aware of, which needs 15 instructions:

cntlz:
        mov     r1, #1
        cmp     r0, #0
        moveq   r1, #2
        cmp     r0, #65536
        lsllo   r0, r0, #16
        orrlo   r1, r1, #16
        cmp     r0, #16777216
        lsllo   r0, r0, #8
        orrlo   r1, r1, #8
        cmp     r0, #268435456
        lsllo   r0, r0, #4
        orrlo   r1, r1, #4
        cmp     r0, #1073741824
        addlo   r1, r1, #2
        lsllo   r0, r0, #2
        add     r0, r1, r0, asr #31
        bx      lr

I am currently working without access to a RISC-V development platform and used Compiler Explorer to compile for a 32-bit RISC-V target. I could not figure out how to specify extensions properly to turn off floating-point support, so I used clang 18.1 with -march=rv32gc, which resulted in the following assembly code being generated:

cntlz:                                  # @cntlz
        seqz    a1, a0
        srli    a2, a0, 16
        seqz    a2, a2
        slli    a2, a2, 4
        or      a1, a1, a2
        sll     a0, a0, a2
        srli    a2, a0, 24
        seqz    a2, a2
        slli    a2, a2, 3
        or      a1, a1, a2
        sll     a0, a0, a2
        srli    a2, a0, 28
        seqz    a2, a2
        slli    a2, a2, 2
        or      a1, a1, a2
        sll     a0, a0, a2
        srli    a2, a0, 30
        seqz    a2, a2
        slli    a2, a2, 1
        or      a1, a1, a2
        sll     a0, a0, a2
        srai    a0, a0, 31
        add     a0, a0, a1
        addi    a0, a0, 1             
        ret

I am unable to identify any improvements to the code generated by Clang; it appears to be as tight as possible. I am aware that RISC-V implementations may implement macro-op fusion. See: Christopher Celio, et al., "The Renewed Case for the Reduced Instruction Set Computer: Avoiding ISA Bloat with Macro-Op Fusion for RISC-V", UC Berkeley technical report UCB/EECS-2016-130. However, none of the fusion idioms discussed in the report appear to apply to this code, leading me to assume an execution time of 24 cycles for this 24-instruction sequence (without the function return).

I was also curious what __builtin_clz() resolves to. Compiling with that code path enabled results in a 31-instruction sequence that converts the leading zeros into a left-justified mask of 1-bits and then applies a population-count computation to the mask:

        srli    a1, a0, 1
        or      a0, a0, a1
        srli    a1, a0, 2
        or      a0, a0, a1
        srli    a1, a0, 4
        or      a0, a0, a1
        srli    a1, a0, 8
        or      a0, a0, a1
        srli    a1, a0, 16
        or      a0, a0, a1
        not     a0, a0          // a0 now left-aligned mask of 1-bits
        srli    a1, a0, 1
        lui     a2, 349525
        addi    a2, a2, 1365
        and     a1, a1, a2
        sub     a0, a0, a1
        lui     a1, 209715
        addi    a1, a1, 819
        and     a2, a0, a1
        srli    a0, a0, 2
        and     a0, a0, a1
        add     a0, a0, a2
        srli    a1, a0, 4
        add     a0, a0, a1
        lui     a1, 61681
        addi    a1, a1, -241
        and     a0, a0, a1
        lui     a1, 4112
        addi    a1, a1, 257
        mul     a0, a0, a1
        srli    a0, a0, 24
        ret
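
For reference, this is the classic "mask then popcount" idiom; reconstructed in portable C from the assembly above (my reconstruction for illustration, not the actual library source):

#include <stdint.h>

uint32_t clz_via_popcount (uint32_t x)
{
    x |= x >> 1;                       // smear the most significant 1-bit
    x |= x >> 2;                       //   into every position below it
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    x = ~x;                            // 1-bits exactly in the leading-zero positions
    x = x - ((x >> 1) & 0x55555555u);  // population count of that mask ...
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0f0f0f0fu;
    return (x * 0x01010101u) >> 24;    // ... with the horizontal byte sum done via MUL
}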

Again, I am not sure what instructions could be subject to macro-op fusion here, but the most likely candidate seems to be the LUI/ADDI idiom used to load 32-bit immediate data, similar to the way modern ARM processors fuse MOVW/MOVT pairs. With that assumption, the code would still appear to be slower than what I currently have. I tried half a dozen additional integer-based variants of 32-bit CLZ emulation and did not find any that resulted in fewer than 24 instructions. I also searched the internet and was unable to find anything superior to my current code.

Are there any branchless full-range implementations of leading-zero count for 32-bit RISC-V platforms that require fewer than 24 cycles? Conservatively, I want to assume the absence of macro-op fusion as this seems like an expensive feature in a low-end microcontroller, but answers relying on macro-op fusion as present in existing RISC-V implementations are also welcome.

Note: Table-based methods are not suitable in this context as table access could trigger cache misses which can be exploited for side-channel attacks.

Update 6/9/2024: After working on this issue for another 1.5 days, I found a variant that should reduce the number of instructions required from 24 to 22; however, not all compilers can actually deliver the desired code.

The basic observation is that the RISC-V ISA is not orthogonal with regard to SET-type instructions, in that it only supports a "less than" flavor. Other comparisons may require inverting the comparison and then inverting the result, for example by XOR-ing with 1 or applying SEQZ, adding one instruction per instance. My idea is to avoid this per-instance inversion by transforming a positive operand into a negative one once, allowing the direct use of "less than" comparisons throughout. Expressed in portable C++11 code, and annotated with the RISC-V instructions I expect to be generated:

#include <stdint.h>

// https://mcmap.net/q/1455819/-how-can-i-perform-arithmetic-right-shift-in-c-in-a-portable-way
// Only applies right shift to non-negative values to avoid implementation-defined behavior
int32_t sra (int32_t x, int32_t y)
{
     return (x < 0) ? (~(~x >> y)) : (x >> y);
}

uint32_t cntlz_rv32 (uint32_t a)
{
    uint32_t n, t;
    int32_t as;
    n = !a;                                           // 1: seqz
    t = ((a >> 16)!=0)*16; n = n - t; a = a >> t;     // 5: srli, snez, slli, sub, srl
    as = (int32_t)(~a);                               // 1: not
    t = (as < -256) * 8; n = n - t; as = sra (as, t); // 4: slti, slli, sub, sra
    t = (as <  -16) * 4; n = n - t; as = sra (as, t); // 4: slti, slli, sub, sra
    t = (as <   -4) * 2; n = n - t; as = sra (as, t); // 4: slti, slli, sub, sra
    t = (as <   -2) * 1; n = n - t;                   // 2: slti, sub
    n += 31;                                          // 1: addi
    return n;                                         // 22 instructions total w/o ret
}

With gcc 13.3 (not 14.1), the following branchless 22-instruction sequence (not counting the ret) is generated:

cntlz_rv32(unsigned int):
        srli    a3,a0,16
        snez    a3,a3
        slli    a3,a3,4
        srl     a4,a0,a3
        not     a4,a4
        slti    a1,a4,-256
        slli    a1,a1,3
        sra     a4,a4,a1
        slti    a2,a4,-16
        slli    a2,a2,2
        seqz    a5,a0
        addi    a5,a5,31
        sra     a0,a4,a2
        slti    a4,a0,-4
        sub     a5,a5,a3
        slli    a4,a4,1
        sub     a5,a5,a1
        sub     a5,a5,a2
        sra     a0,a0,a4
        sub     a5,a5,a4
        slti    a0,a0,-2
        sub     a0,a5,a0
        ret

For reasons I do not understand, gcc 14.1 refuses to generate the expected SLTI with -256: it moves that comparison ahead of the one's complement and then needs an additional SEQZ to invert the comparison result. clang 18.1 generates an inefficient 33-instruction sequence from the above source code for no reason I can discern.
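
Since the generated code is brittle across compiler versions, the variant can at least be verified exhaustively against a trivial reference on a host machine in a few seconds. A quick test sketch (names are placeholders; it assumes cntlz_rv32() from above is in the same translation unit):

#include <stdint.h>
#include <stdio.h>

/* Trivially correct, branchy full-range reference. */
static uint32_t clz_ref (uint32_t a)
{
    uint32_t n = 0;
    while ((n < 32) && !(a & 0x80000000u)) { n++; a <<= 1; }
    return n;
}

int main (void)
{
    uint32_t a = 0;
    do {
        if (cntlz_rv32 (a) != clz_ref (a)) {
            printf ("mismatch @ %08x\n", (unsigned)a);
            return 1;
        }
    } while (++a != 0);   /* wraps after 0xffffffff, covering all 2^32 inputs */
    printf ("all inputs pass\n");
    return 0;
}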

Esophagus answered 7/6 at 1:20 Comment(2)
Among the methods described in Hacker's Delight, I wasn't able to find a shorter sequence that works for your problem. But there is one that might be interesting to others who have floating-point support; here is the C version: int clz32(uint32_t x) { union { unsigned u; float f; } uf; uf.f = x & ~(x>>1); int n = 158 - (uf.u>>23); return (n&31) + (n>>6); } This produces 12 instructions, with two float conversions: fcvt.s.wu and fmv.x.w.Impressible
Is there a narrower-than-complete range of inputs that could be favored? If so, you might be able to handle that range branchlessly and take a branch to another code section only for inputs outside of that favored range. So, not necessarily branchless for all inputs, but at least no taken branches for the favored range.Chittagong

After some more thinking, I came up with a hybrid approach: the long shifts by 16/8/4 are performed as in the first version (binary search), and the last two steps are replaced by a "table lookup" where the table is just a 32-bit word. I am not sure whether this is better than the two previous approaches, but please check it as well:

uint32_t myclz2(uint32_t x) {
    int r = !x, c;
    c = (x < 0x00010000) << 4;
    r += c; x <<= c; // off 16
    c = (x < 0x01000000) << 3;
    r += c; x <<= c; // off 8
    c = (x < 0x10000000) << 2;
    r += c; x <<= c; // off 4
    c = (x >> (32 - 4 - 1)) & 0x1e; // c = 2 * top nibble of x
    r += (0x55af >> c) & 3;         // in-register table: 0x55af holds 2-bit clz entries for nibbles 0..7; nibbles >= 8 read 0
    return r;
}
Goober answered 10/6 at 2:18 Comment(4)
Nice work using a hybrid approach with an in-register table lookup for the final steps. That results in 23 instructions (not counting the ret) when compiled with both clang 18.1 and gcc 14.1, so it is less brittle with current compilers than my latest version in the question. By the way, you can check compilation yourself at Compiler Explorer, which is what I use.Esophagus
@Esophagus Do you care about constant materialization? Because in a loop, where the constants are already in registers, this code only requires 17 instructions, vs 23 from your cntlz_rv32. See: godbolt.org/z/z5Wq59G3xImpressible
@Esophagus Clang generates branch-free code if you enable the Zicond extension: godbolt.org/z/GeTEnE1Mh With Zicond enabled, the clz from this answer produces smaller code. If you run your sqrt in an inlined loop, then the instruction count matches the ARM one. I'd try to modify the generated assembly to replace the Zicond instructions with equivalent rv32im instructions; there are 8 in total, so it might be tricky to get a good result.Impressible
@Esophagus btw, here is a full patch that implements Zicond for the open-source processor XiangShan: github.com/OpenXiangShan/XiangShan/pull/2941/files This is a high-performance out-of-order processor though, and it's written in Chisel. CVA6 also has Zicond support: github.com/… Their CV32A60AX configuration, for example, implements RV32IMACZicsr_Zifencei_Zicount_Zba_Zbb_Zbc_Zbs_Zcb_Zicond.Impressible

I hope this will help you:

int myclz(uint32_t x) {
    int r = 0, c;
    c = (x < 0x00010000) << 4;
    r += c; x <<= c; // off 16
    c = (x < 0x01000000) << 3;
    r += c; x <<= c; // off 8
    c = (x < 0x10000000) << 2;
    r += c; x <<= c; // off 4
    c = (x < 0x40000000) << 1;
    r += c; x <<= c; // off 2
    c = x < 0x80000000;
    r += c; x <<= c; // off 1
    r += x == 0;
    return r;
}
Goober answered 9/6 at 1:52 Comment(1)
This is branchless, without any "if".Goober

OK, maybe this will help you?

uint32_t myclz2(uint32_t x) {
    static const uint8_t clz_table[64] = { 
    32, 31, 26, 30, 255, 25, 3, 29, 255, 255, 20, 24, 12, 255, 2,
    28, 255, 255, 14, 255, 255, 19, 255, 255, 23, 17, 11, 255, 255,
    8, 1, 255, 27, 255, 4, 255, 21, 13, 255, 255, 15, 255, 255, 255,
    18, 255, 9, 255, 5, 22, 255, 16, 255, 255, 10, 6, 255, 255, 255,
    7, 255, 255, 0, 255 
    };  
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16; 
    return clz_table[(x * 0x432d379) >> (32 - 6)];
}
Goober answered 9/6 at 21:58 Comment(2)
OK, I have no more ideas, so maybe you already have the best solution. What else I can suggest: you could add "noise" by inserting useless operations with random behavior, independent of your argument. For example, read a hardware CPU counter and perform several empty iterations.Goober
Thanks for the appreciation of my efforts! If you would like, we can stay in closer contact. This is my LI profile: linkedin.com/in/oleg-khovayko-78a2165 ; I am open to discussion and possible collaboration.Goober
