This is a follow-up to some comments made in this prior thread:
The following code snippets calculate Fibonacci numbers, the first example with a loop, the second with a computed jump (indexed branch) into an unrolled loop. This was tested using Visual Studio 2015 Desktop Express on Windows 7 Pro 64-bit mode with an Intel 3770K 3.5 GHz processor. With a single loop testing fib(0) through fib(93), the best time I get for the loop version is ~1.901 microseconds, and for the computed jump version ~1.324 microseconds. Using an outer loop to repeat this process 1,048,576 times, the loop version takes about 1.44 seconds and the computed jump version about 1.04 seconds. In both sets of tests, the loop version is about 40% slower than the computed jump version.
Question: Why is the loop version much more sensitive to code location than the computed jump version? In prior tests, some code location combinations caused the loop version time to increase from about 1.44 seconds to 1.93 seconds, but I never found a combination that significantly affected the computed jump version time.
Partial answer: The computed jump version branches to one of 94 possible target locations spaced 3 bytes apart (93 three-byte add instructions plus the final ret) within a 280 byte range, and apparently the branch target buffer (cache) does a good job of predicting them. For the loop version, using align 16 to put the assembly-based fib() function on a 16 byte boundary solved the loop version's timing issue for most cases, but some changes to main() were still affecting the time. I need to find a reasonably small and repeatable test case.
loop version (note: I've read that dec / jnz is faster than loop):
        align   16
fib     proc                    ;rcx == n
        mov     rax,rcx         ;br if < 2
        cmp     rax,2
        jb      fib1
        mov     rdx,1           ;set rax, rdx
        and     rax,rdx
        sub     rdx,rax
        shr     rcx,1
fib0:   add     rdx,rax
        add     rax,rdx
        dec     rcx
        jnz     fib0
fib1:   ret
fib     endp
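As a cross-check, here is what the loop version computes, written as a C sketch (my own reading of the asm, not code from the project): the parity setup chooses the starting pair, and each trip through the halved loop advances the Fibonacci pair twice.

#include <stdint.h>

/* C sketch of the loop version above: rax -> a, rdx -> b,
   rcx -> the halved trip count */
uint64_t fib_loop(uint64_t n)
{
    if (n < 2)                  /* cmp rax,2 / jb fib1 */
        return n;
    uint64_t a = n & 1;         /* a = 1 if n is odd, else 0 */
    uint64_t b = 1 - a;         /* the Fibonacci number before a */
    for (uint64_t i = n >> 1; i != 0; i--) {  /* shr rcx,1 */
        b += a;                 /* add rdx,rax */
        a += b;                 /* add rax,rdx */
    }
    return a;
}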
computed jump (indexed branch) into unrolled loop version:
        align   16
fib     proc                    ;rcx == n
        mov     r8,rcx          ;set jmp adr
        mov     r9,offset fib0+279
        lea     r8,[r8+r8*2]
        neg     r8
        add     r8,r9
        mov     rax,rcx         ;set rax,rdx
        mov     rdx,1
        and     rax,rdx
        sub     rdx,rax
        jmp     r8
fib0:                           ;assumes add xxx,xxx takes 3 bytes
        rept    46
        add     rax,rdx
        add     rdx,rax
        endm
        add     rax,rdx
        ret
fib     endp
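The address arithmetic is r8 = fib0 + 279 - 3*n: lea r8,[r8+r8*2] forms 3n, which neg/add subtract from fib0+279. So exactly n of the 93 three-byte add instructions execute before the ret: n = 0 jumps straight to the ret and returns 0, while n = 93 lands on fib0 and executes all 93 adds. The same trick can be sketched in C as a switch whose cases deliberately fall through, Duff's-device style (my own illustration, not the author's code; only cases 0 through 5 are written out here, where the asm effectively has all 93):

#include <stdint.h>

/* Fall-through switch = indexed branch into an unrolled loop.
   Odd-numbered cases do a += b and even-numbered cases do b += a,
   matching the alternating add rax,rdx / add rdx,rax chain. */
uint64_t fib_fallthrough(uint64_t n)    /* n <= 5 in this sketch */
{
    uint64_t a = n & 1;         /* rax = n & 1       */
    uint64_t b = 1 - (n & 1);   /* rdx = 1 - (n & 1) */
    switch (n) {
    case 5: a += b;             /* fall through */
    case 4: b += a;             /* fall through */
    case 3: a += b;             /* fall through */
    case 2: b += a;             /* fall through */
    case 1: a += b;             /* fall through */
    case 0: break;
    }
    return a;
}

A compiler typically lowers a dense switch like this to a jump table, i.e. the same indirect branch the asm performs explicitly.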
Test code that runs 1,048,576 (0x100000) outer loops, each calculating fib(0) to fib(93) using multiples of 37 mod 93 so the order is not sequential. On my system, the loop version took about 1.44 seconds and the indexed branch version took about 1.04 seconds.
#include <stdio.h>
#include <time.h>

typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;

extern "C" uint64_t fib(uint64_t);

/* multiples of 37 mod 93 + 93 at end */
static uint64_t a[94] =
    {0,37,74,18,55,92,36,73,17,54,
     91,35,72,16,53,90,34,71,15,52,
     89,33,70,14,51,88,32,69,13,50,
     87,31,68,12,49,86,30,67,11,48,
     85,29,66,10,47,84,28,65, 9,46,
     83,27,64, 8,45,82,26,63, 7,44,
     81,25,62, 6,43,80,24,61, 5,42,
     79,23,60, 4,41,78,22,59, 3,40,
     77,21,58, 2,39,76,20,57, 1,38,
     75,19,56,93};

/* x used to avoid compiler optimizing out result of fib() */
int main()
{
    size_t i, j;
    clock_t cbeg, cend;
    uint64_t x = 0;
    cbeg = clock();
    for(j = 0; j < 0x100000; j++)
        for(i = 0; i < 94; i++)
            x += fib(a[i]);
    cend = clock();
    printf("%llx\n", x);
    printf("# ticks = %u\n", (uint32_t)(cend-cbeg));
    return 0;
}
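For what it's worth, the a[] table need not be typed in by hand; it is just (37*i) % 93 for i = 0..92 with 93 appended, so a throwaway generator (my sketch) reproduces it:

#include <stdio.h>

/* prints the 94-entry table: multiples of 37 mod 93, then 93 */
int main()
{
    for (int i = 0; i < 93; i++)
        printf("%2d%c", (37 * i) % 93, (i % 10 == 9) ? '\n' : ',');
    printf("93\n");
    return 0;
}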
The output for x is 0x812a62b1dc000000. The sum of fib(0) to fib(93) in hex is 0x1bb433812a62b1dc0; appending five more hex zeros for the 0x100000 outer loops gives 0x1bb433812a62b1dc000000, and the upper 6 nibbles are truncated by 64-bit math.
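The same truncation can be checked directly with 64-bit wraparound arithmetic (a quick sketch; fib(93) is the largest Fibonacci number that fits in 64 bits):

#include <stdio.h>

int main()
{
    unsigned long long f0 = 0, f1 = 1, sum = 0;
    for (int i = 0; i <= 93; i++) {     /* sum fib(0)..fib(93) mod 2^64 */
        sum += f0;
        unsigned long long t = f0 + f1; /* the sum wraps; that is the point */
        f0 = f1;
        f1 = t;
    }
    printf("%llx\n", sum << 20);        /* *0x100000: prints 812a62b1dc000000 */
    return 0;
}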
I made an all-assembly version to better control code location. The "if 1" is changed to "if 0" for the loop version. The loop version takes about 1.465 to 2.000 seconds depending on the nop padding used to put key locations on even or odd 16 byte boundaries (see the comments in the code below; the clk values are rdtsc tick counts, which divided by the 3.5 GHz TSC rate give the quoted seconds, e.g. 0131876622h ≈ 5.126e9 ticks ≈ 1.465 seconds). The computed jump version takes about 1.04 seconds, and boundaries make less than a 1% difference in timing.
        includelib msvcrtd
        includelib oldnames

        .data
; multiples of 37 mod 93 + 93 at the end
a       dq      0,37,74,18,55,92,36,73,17,54
        dq      91,35,72,16,53,90,34,71,15,52
        dq      89,33,70,14,51,88,32,69,13,50
        dq      87,31,68,12,49,86,30,67,11,48
        dq      85,29,66,10,47,84,28,65, 9,46
        dq      83,27,64, 8,45,82,26,63, 7,44
        dq      81,25,62, 6,43,80,24,61, 5,42
        dq      79,23,60, 4,41,78,22,59, 3,40
        dq      77,21,58, 2,39,76,20,57, 1,38
        dq      75,19,56,93
        .data?
        .code
; parameters rcx,rdx,r8,r9
; not saved rax,rcx,rdx,r8,r9,r10,r11
; code starts on 16 byte boundary
main    proc
        push    r15
        push    r14
        push    r13
        push    r12
        push    rbp
        mov     rbp,rsp
        and     rsp,0fffffffffffffff0h
        sub     rsp,64
        mov     r15,offset a
        xor     r14,r14
        mov     r11,0100000h
; nop padding effect on loop version (with 0 padding in padx below)
; 0 puts main2 on odd 16 byte boundary clk = 0131876622h => 1.465 seconds
; 9 puts main1 on odd 16 byte boundary clk = 01573FE951h => 1.645 seconds
        rept    0
        nop
        endm
        rdtsc
        mov     r12,rdx
        shl     r12,32
        or      r12,rax
main0:  xor     r10,r10
main1:  mov     rcx,[r10+r15]
        call    fib
main2:  add     r14,rax
        add     r10,8
        cmp     r10,8*94
        jne     main1
        dec     r11
        jnz     main0
        rdtsc
        mov     r13,rdx
        shl     r13,32
        or      r13,rax
        sub     r13,r12
        mov     rdx,r14
        xor     rax,rax
        mov     rsp,rbp
        pop     rbp
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        ret
main    endp

        align   16
padx    proc
; nop padding effect on loop version with 0 padding above
; 0 puts fib on odd 16 byte boundary clk = 0131876622h => 1.465 seconds
; 16 puts fib on even 16 byte boundary clk = 01A13C8CB8h => 2.000 seconds
; nop padding effect on computed jump version with 9 padding above
; 0 puts fib on odd 16 byte boundary clk = 00D979792Dh => 1.042 seconds
; 16 puts fib on even 16 byte boundary clk = 00DA93E04Dh => 1.048 seconds
        rept    0
        nop
        endm
padx    endp

if 1    ;0 = loop version, 1 = computed jump version

fib     proc                    ;rcx == n
        mov     r8,rcx          ;set jmp adr
        mov     r9,offset fib0+279
        lea     r8,[r8+r8*2]
        neg     r8
        add     r8,r9
        mov     rax,rcx         ;set rax,rdx
        mov     rdx,1
        and     rax,rdx
        sub     rdx,rax
        jmp     r8
fib0:                           ;assumes add xxx,xxx takes 3 bytes
        rept    46
        add     rax,rdx
        add     rdx,rax
        endm
        add     rax,rdx
        ret
fib     endp

else

fib     proc                    ;rcx == n
        mov     rax,rcx         ;br if < 2
        cmp     rax,2
        jb      fib1
        mov     rdx,1           ;set rax, rdx
        and     rax,rdx
        sub     rdx,rax
        shr     rcx,1
fib0:   add     rdx,rax
        add     rax,rdx
        dec     rcx
        jnz     fib0
fib1:   ret
fib     endp

endif

        end
Comments from the original thread:

[…] (br_misp_retired.all_branches) for the whole program with the computed jump version, so apparently indirect-branch target prediction is working fantastically well here. I didn't think a pattern this long would have any chance, but I guess it does. Could maybe defeat it with another 94 entries in a different order. It definitely should be counting indirect-branch mispredicts, not just conditional, because there's a br_misp_retired.conditional for just conditional. Will try again after my curling game. – Watcher
[…] for jump vs. loop?) BTW, the loop version mispredicts about the expected amount: I think once per loop-exit, or 51M per program, or 1.88% of all branches. – Watcher

fib-indexed-branch.c:35:3: warning: iteration 93 invokes undefined behavior [-Waggressive-loop-optimizations]. Apparently I copy/pasted a version from before you fixed that in the C. I didn't copy/paste the asm table. – Watcher
x += fib(a[i]) along with printf(...x) was enough to prevent the compiler from optimizing out x. The all-assembly loop version was slightly slower (1.44 to 1.46 seconds), which could be luck (I take the best of several runs), or a minor code location issue. – Stanley
[…] add instructions were run after the last indirect branch. Does "branch history" include how many instructions separated the branches, or what the target of the last indirect branch was? I was thinking it probably wouldn't. – Watcher

The y[] stuff looked over-complicated, so I just added my own #ifdef USE_RESULT / x += / #endif / fib(a[i]). – Watcher
[…] N (not taken) as the branch history at the moment it needs to predict loop exit. – Crawly
[…] (shr ecx,1) then N would be between 1 and 93, with an average in the middle. We know the branch predictors on modern Intel seem to have a history length of 20-ish bits, so most of these cases will see "all zeros" and mispredict the exit. Essentially, TAGE is bad at predicting loop exits because the branch history in that case has low entropy (mostly zeros), so it isn't storing the history in an efficient format and you quickly exceed the history size. We may see a return of the loop predictors at some point. – Crawly
[…] TTTTTTTTTTTNTTTTTTTTTT in the history buffer: i.e., they will usually just see back to the "last" loop exit, but that doesn't help you predict the current one. You need to see the last two to effectively localize yourself in the benchmark input array, and that's even less likely. Analyzing the 2x unroll version that appears above is harder, but it doesn't work "much" better: you compress the history by 2:1, but now you have 2x aliasing also. – Crawly
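To make the history-length argument concrete, here is a toy simulation (entirely my construction, not from the thread): a single global-history predictor with 2-bit counters and an H-bit history, run over the benchmark's shuffled trip counts. With H around 20, any exit whose loop ran longer than H iterations sees the same all-taken history as the mid-loop branches and gets mispredicted.

#include <stdio.h>
#include <string.h>

#define H 20                      /* ~20-bit history, per the comment */

/* toy global-history predictor for the fib loop's jnz only */
int main()
{
    static unsigned char table[1u << H];  /* 2-bit saturating counters */
    memset(table, 2, sizeof(table));      /* start weakly taken */
    unsigned int hist = 0;
    unsigned long long mispred = 0, branches = 0;
    for (int pass = 0; pass < 100; pass++) {
        for (int i = 0; i < 94; i++) {
            int n = (i == 93) ? 93 : (37 * i) % 93;
            for (int t = n >> 1; t > 0; t--) {   /* shr rcx,1 trip count */
                int taken = (t != 1);            /* jnz falls through on last trip */
                unsigned int idx = hist & ((1u << H) - 1);
                int pred = table[idx] >= 2;
                mispred += (pred != taken);
                branches++;
                if (taken && table[idx] < 3) table[idx]++;
                if (!taken && table[idx] > 0) table[idx]--;
                hist = (hist << 1) | (unsigned)taken;
            }
        }
    }
    printf("%llu mispredicts / %llu branches\n", mispred, branches);
    return 0;
}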
[…] perf stat to get performance counters for the entire executable (statically linked so there's no CRT startup/cleanup, but still moderate overhead). There are libraries for using perf counters from user-space, but I'd have to set that up. – Watcher
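The sort of invocation Watcher describes would look something like the following (the event names are the ones quoted above; the binary name is just a placeholder):

perf stat -e cycles,branches,br_misp_retired.all_branches,br_misp_retired.conditional ./fib-test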
[…] cycles counter, but you can add any others you want with --extra-events. – Crawly
mov rdx,1 => mov edx,1 and xor rax,rax => xor eax,eax; those save a REX prefix. – Mcauley