Bring code into the L1 instruction cache without executing it
Let's say I have a function that I plan to execute as part of a benchmark. I want to bring this code into the L1 instruction cache prior to executing since I don't want to measure the cost of I$ misses as part of the benchmark.

The obvious way to do this is to simply execute the code at least once before the benchmark, hence "warming it up" and bringing it into the L1 instruction cache and possibly the uop cache, etc.

What are my alternatives in the case I don't want to execute the code (e.g., because I want the various predictors which key off of instruction addresses to be cold)?

Mize answered 1/2, 2018 at 20:32 Comment(0)
In Granite Rapids and later, you can use PREFETCHIT0 [rip+rel32] to prefetch code into "all levels" of cache, or prefetchit1 to prefetch into all levels except L1i. These instructions are a NOP with an addressing mode other than RIP-relative, or on CPUs that don't support them. (Perhaps they also prime the iTLB or even the uop cache, or at least could on paper, in which case this isn't what you want.) The docs in Intel's "future extensions" manual as of Dec 2022 recommend that the target address be the start of some instruction.


Note that this Q&A is about priming things for a microbenchmark, not things that would be worth doing to improve overall performance. For that, probably just best-effort prefetch into L2 cache (the innermost unified cache) with prefetcht1 SW prefetch, also priming the dTLB in a way that possibly helps the iTLB (although it will evict a possibly-useful dTLB entry). Or not, if the 2nd-level TLB is a victim cache. See X86 prefetching optimizations: "computed goto" threaded code for more discussion.


Caveat: some of this answer is Intel-centric. If I just say "the uop cache", I'm talking about Sandybridge-family. I know Ryzen has a uop-cache too, but I haven't read much of anything about its internals, and only know some basics about Ryzen from reading Agner Fog's microarch guide.


You can at least prefetch into L2 with software prefetch, but that doesn't even necessarily help with iTLB hits. (The 2nd-level TLB is a victim cache, I think, so a dTLB miss might not populate anything that the iTLB checks.)

But this doesn't help with L1I$ misses, or getting the target code decoded into the uop cache.

If there is a way to do what you want, it's going to be with some kind of trick. x86 has no "standard" way to do this; no code-prefetch instruction. Darek Mihoka wrote about code-prefetch as part of a CPU-emulator interpreter loop: The Common CPU Interpreter Loop Revisited: the Nostradamus Distributor back in 2008 when P4 and Core2 were the hot CPU microarchitectures to tune for.

But of course that's a different case: the goal is sustained performance of indirect branches, not priming things for a benchmark. It doesn't matter if you spend a lot of time achieving the microarchitectural state you want outside the timed portion of a micro-benchmark.

Speaking of which, modern branch predictors aren't just "cold", they always contain some prediction based on aliasing1. This may or may not be important.


Prefetch the first / last lines (and maybe others) with call to a ret

I think instruction fetch / prefetch normally continues past an ordinary ret or jmp, because it can't be detected until decode. So you could just call a function that ends at the end of the previous cache line. (Make sure they're in the same page, so an iTLB miss doesn't block prefetch.)

ret after a call will predict reliably if no other call did anything to the return-address predictor stack, except in rare cases if an interrupt happened between the call and ret, and the kernel code had a deep enough call tree to push the prediction for this ret out of the RSB (return-stack-buffer). Or if Spectre mitigation flushed it intentionally on a context switch.

; make sure this is in the same *page* as function_under_test, to prime the iTLB
ALIGN 64
    ; could put the label here, but probably better not
    60 bytes of (long) NOPs or whatever padding
prime_Icache_first_line:
    ret
    ; jmp back_to_benchmark_startup ; alternative if JMP is handled differently than RET.
    lfence          ; prevent any speculative execution after RET, in case it or JMP aren't detected as soon as they decode

;;; cache-line boundary here
function_under_test:
    ...
 prime_Icache_last_line:  ; label the last RET in function_under_test
    ret                   ; this will prime the "there's a ret here" predictor (correctly)


 benchmark_startup:
     call prime_Icache_first_line
     call prime_Icache_first_line  ; IDK if calling twice could possibly help in case prefetch didn't get far the first time?  But now the CPU has "seen" the RET and may not fetch past it.

     call prime_Icache_last_line   ; definitely only need to call once; it's in the right line
     lfence
     rdtsc

 .timed_loop:
    call  function_under_test
    ...
    jnz .timed_loop

We can even extend this technique to more than 2 cache lines by calling to any 0xC3 (ret) byte inside the body of function_under_test. But as @BeeOnRope points out, that's dangerous because it may prime branch prediction with "there's a ret here" causing a mispredict you otherwise wouldn't have had when calling function_under_test for real.

Early in the front-end, branch prediction is needed based on fetch-block address (which block to fetch after this one), not on individual branches inside each block, so this could be a problem even if the ret byte was part of another instruction.

But if this idea is viable, then you can look for a 0xc3 byte as part of an instruction in the cache line, or at worst add a 3-byte NOP r/m32 (0f 1f c3 nop ebx,eax). c3 as a ModR/M encodes a reg,reg instruction (with ebx and eax as operands), so it doesn't have to be hidden in a disp8 to avoid making the NOP even longer, and it's easy to find in short instructions: e.g. 89 c3 mov ebx,eax, or use the other opcode so the same modrm byte gives you mov eax,ebx. Or 83 c3 01 add ebx,0x1, or many other instructions with e/rbx, bl (and r/eax or al).

With a REX prefix, you have a choice of rbx / r11 (and rax/r8 for the /r field if applicable). It's likely you can choose (or modify for this microbenchmark) your register allocation to produce an instruction using the relevant registers to produce a c3 byte without any overhead at all, especially if you can use a custom calling convention (at least for testing purposes) so you can clobber rbx if you weren't already saving/restoring it.

I found these by searching for (space)c3(space) in the output of objdump -d /bin/bash, just to pick a random not-small executable full of compiler-generated code.


Evil hack: end the cache line before with the start of a multi-byte instruction.

; at the end of a cache line
prefetch_Icache_first_line:
    db 0xe9    ; the opcode for 5-byte jmp rel32

function_under_test:
    ... normal code   ; first 4 bytes will be treated as a rel32 when decoding starts at prefetch_I...
    ret


 ; then at function_under_test+4 + rel32:
 ;org whatever  (that's not how ORG works in NASM, so actually you'll need a linker script or something to put code here)
 prefetch_Icache_branch_target:
    jmp  back_to_test_startup

So it jumps to a virtual address which depends on the instruction bytes of function_under_test. Map that page and put code in it that jumps back to your benchmark-prep code. The destination has to be within 2GiB, so (in 64-bit code) it's always possible to choose a virtual address for function_under_test that makes the destination a valid user-space virtual address. Actually, for many rel32 values, it's possible to choose the address of function_under_test to keep both it and the target within the low 2GiB of virtual address space, (and definitely 3GiB) and thus valid 32-bit user-space addresses even under a 32-bit kernel.

Or less crazy, using the end of a ret imm16 to consume a byte or two, just requiring a fixup of RSP after return (and treating whatever is temporarily below RSP as a "red zone" if you don't reserve extra space):

; at the end of a cache line
prefetch_Icache_first_line:
    db 0xc2      ; the opcode for 3-byte ret imm16
    ; db 0x00      ; optional: one byte of the immediate at least keeps RSP aligned  
                 ; But x86 is little-endian, so the last byte is most significant
;; Cache-line boundary here
function_under_test:
    ... normal code   ; first 2 bytes will be added to RSP when decoding starts at prefetch_Icache_first_line
    ret


prefetch_caller:
    push  rbp
    mov   rbp, rsp   ; save old RSP
    ;sub  rsp, 65536  ; reserve space in case of the max RSP+imm16.
    call  prefetch_Icache_first_line
  ;;; UNSAFE HERE in case of signal handler if you didn't SUB.
    mov   rsp, rbp   ; restore RSP; if no signal handlers installed, probably nothing could step on stack memory
    ...
    pop   rbp
    ret

Using sub rsp, 65536 before calling to the ret imm16 makes it safe even if there might be a signal handler (or interrupt handler in kernel code, if your kernel stack is big enough, otherwise look at the actual byte and see how much will really be added to RSP). It means that call's push/store will probably miss in data cache, and maybe even cause a pagefault to grow the stack. That does happen before fetching the ret imm16, so that won't evict the L1I$ line we wanted to prime.

This whole idea is probably unnecessary; I think the above method can reliably prefetch the first line of a function anyway, and this only works for the first line. (Unless you put a 0xe9 or 0xc2 in the last byte of each relevant cache line, e.g. as part of a NOP if necessary.)

But this does give you a way to non-speculatively do code-fetch from the cache line you want without architecturally executing any instructions in it. Hopefully a direct jmp is detected before any later instructions execute, and probably without any others even decoding, except ones that decoded in the same block. (And an unconditional jmp always ends a uop-cache line on Intel.) i.e. the mispredict penalty is all in the front-end, from re-steering the fetch pipeline as soon as decode detects the jmp. I hope ret is like this too, in cases where the return-predictor stack is not empty.

A jmp r/m64 would let you control the destination just by putting the address in the right register or memory. (Figure out what register or memory addressing mode the first byte(s) of function_under_test encode, and put an address there). The opcode is FF /4, so you can only use a register addressing mode if the first byte works as a ModRM that has /r = 4 and mode=11b. But you could put the first 2 bytes of the jmp r/m64 in the previous line, so the extra bytes form the SIB (and disp8 or disp32). Whatever those are, you can set up register contents such that the jump-target address will be loaded from somewhere convenient.

But the key problem with a jmp r/m64 is that default-prediction for an indirect branch can fall through and speculatively execute function_under_test, affecting the branch-prediction entries for those branches. You could have bogus values in registers so you prime branch prediction incorrectly, but that's still different from not touching them at all.

How does this overlapping-instructions hack to consume bytes from the target cache line affect the uop cache?

I think (based on previous experimental evidence) Intel's uop cache puts instructions in the uop-cache line that corresponds to their start address, in cases where they cross a 32 or 64-byte boundary. So when the real execution of function_under_test begins, it will simply miss in the uop-cache because no uop-cache line is caching the instruction-start-address range that includes the first byte of function_under_test. i.e. the overlapping decode is probably not even noticed when it's split across an L1I$ boundary this way.

It is normally a problem for the uop cache to have the same bytes decode as parts of different instructions, but I'm optimistic that we wouldn't have a penalty in this case. (I haven't double-checked that for this case. I'm mostly assuming that lines record which range of start-addresses they cache, and not the whole range of x86 instruction bytes they're caching.)


Create mis-speculation to fetch arbitrary lines, but block exec with lfence

Spectre / Meltdown exploits and mitigation strategies provide some interesting ideas: you could maybe trigger a mispredict that fetches at least the start of the code you want, but maybe doesn't speculate into it.

lfence blocks speculative execution, but (AFAIK) not instruction prefetch / fetch / decode.

I think (and hope) the front-end will follow direct relative jumps on its own, even after lfence, so we can use jmp target_cache_line in the shadow of a mispredict + lfence to fetch and decode but not execute the target function.

If lfence works by blocking the issue stage until the reservation station (OoO scheduler) is empty, then an Intel CPU should probably decode past lfence until the IDQ is full (64 uops on Skylake). There are further buffers before other stages (fetch -> instruction-length-decode, and between that and decode), so fetch can run ahead of that. Presumably there's a HW prefetcher that runs ahead of where actual fetch is reading from, so it's pretty plausible to get several cache lines into the target function in the shadow of a single mispredict, especially if you introduce delays before the mispredict can be detected.

We can use the same return-address frobbing as a retpoline to reliably trigger a mispredict to jmp rel32 which sends fetch into the target function. (I'm pretty sure a re-steer of the front-end can happen in the shadow of speculative execution without waiting to confirm correct speculation, because that would make every re-steer serializing.)

function_under_test:
    ...
some_line:   ; not necessarily the first cache line
    ...
    ret
;;; This goes in the same page as the test function,
;;; so we don't iTLB-miss when trying to send the front-end there

ret_frob:
    xorps   xmm0,xmm0
    movq    xmm1, rax

    ;; The point of this LFENCE is to make sure the RS / ROB are empty so the front-end can run ahead in a burst.
    ;; so the sqrtpd delay chain isn't gradually issued.
    lfence
    ;; alternatively, load the return address from the stack and create a data dependency on it, e.g. and eax,0

    ;; create a many-cycle dependency-chain before the RET misprediction can be detected
    times 10 sqrtpd xmm0,xmm0       ; high latency, single uop
    orps    xmm0, xmm1             ; target address with data-dep on the sqrtpd chain

    movq   [rsp], xmm0             ; overwrite return address
    ; movd  [rsp], xmm0            ; store-forwarding stall: do this *as well* as the movq

    ret             ; mis-speculate to the lfence/jmp some_line
                    ; but architecturally jump back to the address we got in RAX


 prefetch_some_line:
    lea     rax, [rel back_to_bench_startup]
    ; or  pop rax  or load it into xmm1 directly,
    ; so this block can be CALLed as a function instead of jumped to

    call  ret_frob 
    ; speculative execution goes here, but architecturally never reached
    lfence    ; speculative *execution* stops here, fetch continues
    jmp   some_line

I'm not sure the lfence in ret_frob is needed. But it does make it easier to reason about what the front-end is doing relative to the back-end. After the lfence, the return address has a data dependency on the chain of 10x sqrtpd. (10x 15 to 16 cycle latency on Skylake, for example). But the 10x sqrtpd + orps + movq only take 3 cycles to issue (on 4-wide CPUs), leaving at least 148 cycles + store-forwarding latency before ret can read the return address back from the stack and discover that the return-stack prediction was wrong.

This should be plenty of time for the front-end to follow the jmp some_line and load that line into L1I$, and probably load several lines after that. It should also get some of them decoded into the uop cache.

You need a separate call / lfence / jmp block for each target line (because the target address has to be hard-coded into a direct jump for the front-end to follow it without the back-end executing anything), but they can all share the same ret_frob block.


If you left out the lfence, you could use the above retpoline-like technique to trigger speculative execution into the function. This would let you jump to any target branch in the target function with whatever args you like in registers, so you can mis-prime branch prediction however you like.


Footnote 1:

Modern branch predictors aren't just "cold", they contain predictions from whatever aliased the target virtual addresses in the various branch-prediction data structures. (At least on Intel where SnB-family pretty definitely uses TAGE prediction.)

So you should decide whether you want to specifically anti-prime the branch predictors by (speculatively) executing the branches in your function with bogus data in registers / flags, or whether your micro-benchmarking environment resembles the surrounding conditions of the real program closely enough.

If your target function has enough branching in a very specific complex pattern (like a branchy sort function over 10 integers), then presumably only that exact input can train the branch predictor well, so any initial state other than a specially-warmed-up state is probably fine.

You may not want the uop-cache primed at all, to look for cold-execution effects in general (like decode), so that might rule out any speculative fetch / decode, not just speculative execution. Or maybe speculative decode is ok if you then run some uop-cache-polluting long-NOPs or times 800 xor eax,eax (2-byte instructions -> 16 per 32-byte block uses up all 3 entries that SnB-family uop caches allow without running out of room and not being able to fit in the uop cache at all). But not so many that you evict L1I$ as well.

Even speculative decode without execute will prime the front-end branch prediction that knows where branches are ahead of decode, though. I think that a ret (or jmp rel32) at the end of the previous cache line is relatively harmless in that respect: it only trains the predictors for its own fetch block, not for branches inside function_under_test itself.

Bathysphere answered 1/2, 2018 at 21:52 Comment(4)
I find it interesting that our written-in-parallel answers apparently submitted in the same minute chose identical names for function_under_test. Good point about the iTLB - I hadn't considered that.Mize
Maybe the meltdown flaw works for the NXE (exec disable) bit too? Then mapping a code page not executable and attempting to do it anyway will still cache the data (and trigger an exception).Uziel
@MargaretBloom: hmm, but then you have to remap it to executable and invalidate the TLB entry before you can execute it. Oh, unless you have the same page mapped at two virtual addresses, one without execute! L1I$ is physically addressed on all the x86 CPUs we care about. Oh, but branch prediction is virtually addressed, so you could warm up the code at a different address.Bathysphere
@BeeOnRope: See my edit: I haven't tested any of this, but if the front-end still follows lfence / jmp rel32 in the shadow of a retpoline mispredict, we can fetch but not execute arbitrary lines. (And by delaying mispredict detection, one call can get multiple cache lines fetched/decoded.) I posted the first version of this answer in a hurry before I headed out for curling, and had time to think about the problem between shots :)Bathysphere
Map the same physical page to two different virtual addresses.

L1I$ is physically addressed. (VIPT but with all the index bits from below the page offset, so effectively PIPT).

Branch-prediction and uop caches are virtually addressed, so with the right choice of virtual addresses, a warm-up run of the function at the alternate virtual address will prime L1I, but not branch prediction or uop caches. (This only works if branch aliasing happens modulo something larger than 4096 bytes, because the position within the page is the same for both mappings.)

Prime the iTLB by calling to a ret in the same page as the test function, but outside it.


After setting this up, no modification of the page tables is required between the warm-up run and the timing run. This is why you use two mappings of the same page instead of remapping a single mapping.

Margaret Bloom suggests that CPUs vulnerable to Meltdown might speculatively fetch instructions from a no-exec page if you jump there (in the shadow of a mispredict so it doesn't actually fault), but that would then require changing the page table, and thus a system call which is expensive and might evict that line of L1I. But if it doesn't pollute the iTLB, you could then re-populate the iTLB entry with a mispredicted branch anywhere into the same page as the function. Or just a call to a dummy ret outside the function in the same page.

None of this will let you get the uop cache warmed up, though, because it's virtually addressed. OTOH, in real life, if branch predictors are cold then probably the uop cache will also be cold.

Bathysphere answered 2/2, 2018 at 1:2 Comment(2)
Could a hyperthread with same pid do this by executing a warmup? IIRC just about every part of the microarch is partitioned but icache / itlb will be shared.Prognostic
@Noah: Yeah, that should work. That will fill the uop-cache, too, though. But according to en.wikichip.org/wiki/intel/microarchitectures/…, it's statically partitioned between hyperthreads. So I guess even having both threads using a CR3 page table with the same PCID (process-context-ID) would only let them share TLB entries, not virtually-addressed uop cache hits. (The 4k-page iTLB entries are "dynamically partitioned" I assume meaning competitively shared, but the hugepage entries are duplicated for each thread). I assume branch-prediction is split.Bathysphere
One approach that could work for small functions would be to execute some code which appears on the same cache line(s) as your target function, which will bring in the entire cache line.

For example, you could organize your code as follows:

ALIGN 64
function_under_test:
; some code, less than 64 bytes
dummy:
ret

and then call the dummy function prior to calling function_under_test - if dummy starts on the same cache line as the target function, it would bring the entire cache line into L1I. This works for functions of 63 bytes or less1.

This can probably be extended to functions up to ~126 bytes or so by using this trick both before2 and after the target function. You could extend it to arbitrarily sized functions by inserting dummy functions on every cache line and having the target code jump over them, but this comes at the cost of inserting otherwise-unnecessary jumps into your code under test, and requires careful control over the code size so that the dummy functions are placed correctly.

You need fine control over function alignment and placement to achieve this: assembler is probably the easiest, but you can also probably do it with C or C++ in combination with compiler-specific attributes.


1 You could even reuse the ret in the function_under_test itself to support slightly longer functions (e.g., those whose ret starts within 64 bytes of the start).

2 You'd have to be more careful about the dummy function appearing before the code under test: the processor might fetch instructions past the ret and it might (?) even execute them. A ud2 after the dummy ret is likely to block further fetch (but you might want fetch if populating the uop cache is important).

Mize answered 1/2, 2018 at 21:52 Comment(5)
How much can you control the code generated? If one could find a 0xc3 in each line, it would be possible to perform a series of controlled returns that touch each line.Uziel
In the common case I am writing it in assembly and so can control it more or less completely. Good point, I could organize for a ret 0xC3 bytecode to appear in every line as part of a constant or whatever! Then the question is how that affects the performance/behavior when you go back and execute it and the instruction needs to be reinterpreted as not being a ret. I can see two potential gotchas: (1) the branch predictor might remember "there was a ret here" causing a spurious mispredict or (2) if the uop cache got populated it will have the "wrong" instruction(s). @MargaretBloomMize
Good points, any chance to force the eviction of those caches by polluting them?Uziel
I think yes for both cases, although the ret predictor may be tricky, i.e., you'd have to know enough about how the ret predictor works to come up with a sequence to evict any info from the rets embedded in the code under test. In particular, you'd need to know the hash function from address to history bucket or whatever. The uop cache is easier and I'm not sure instructions even go into the uop cache on the first hit, so maybe they don't end up in there at all.Mize
lfence should only block execution, not fetch, so I'd use that instead of ud2. Good point about "there's a ret here" prediction: that is a potential problem with using 0xc3 bytes that appear inside other instructions. (e.g. as a disp8 in a 4-byte long NOP if you can't find one anything else, which would be significantly less intrusive to the function under test than jmp.)Bathysphere
