This question follows on from my question here (on the advice of Mystical):
Continuing on from that question: when I use packed instructions instead of scalar instructions, the code using intrinsics looks very similar:
for(int i=0; i<size; i+=16) {
    y1 = _mm_load_ps(&output[i]);
    …
    y4 = _mm_load_ps(&output[i+12]);
    for(int k=0; k<ksize; k++){
        for(int l=0; l<ksize; l++){
            w = _mm_set_ps1(weight[i+k+l]);
            x1 = _mm_load_ps(&input[i+k+l]);
            y1 = _mm_add_ps(y1, _mm_mul_ps(w,x1));
            …
            x4 = _mm_load_ps(&input[i+k+l+12]);
            y4 = _mm_add_ps(y4, _mm_mul_ps(w,x4));
        }
    }
    _mm_store_ps(&output[i], y1);
    …
    _mm_store_ps(&output[i+12], y4);
}
The measured performance of this kernel is about 5.6 FP operations per cycle, although I would expect it to be exactly 4x the performance of the scalar version, i.e. 4 x 1.6 = 6.4 FP ops per cycle.
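For reference, here is a minimal sketch of the FLOP accounting behind these numbers (the function name and the rdtsc-based cycle count are illustrative, not necessarily how I measured): each inner-loop iteration performs 4 mulps + 4 addps = 8 packed ops = 32 scalar FP operations, and the outer loop runs size/16 times.

#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

/* A minimal sketch, assuming a fixed-frequency TSC so that rdtsc ticks
   correspond to core cycles. */
double fp_ops_per_cycle(int size, int ksize, uint64_t cycles)
{
    /* size/16 outer iterations, ksize*ksize inner iterations,
       32 scalar FP ops (4 mulps + 4 addps, 4 lanes each) per inner pass */
    double fp_ops = (double)(size / 16) * ksize * ksize * 32.0;
    return fp_ops / (double)cycles;
}

With cycles taken from __rdtsc() immediately before and after the kernel, this is the kind of calculation that yields the FP-ops-per-cycle figures discussed here.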
Taking the extra move for the weight factor into account (thanks for pointing that out), the schedule looks like this:
It looks like the schedule doesn't change, although there is an extra instruction after the movss operation, which moves the scalar weight value to the XMM register and then uses shufps to copy this scalar value across the entire vector. It seems the weight vector is ready in time to be used for the mulps, even taking into account the bypass latency for switching from the load domain to the floating-point domain, so this shouldn't incur any extra latency.
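For reference, this is roughly how the _mm_set_ps1 broadcast lowers (it matches the disassembly further down; the annotations are mine):

w = _mm_set_ps1(weight[i+k+l]);   /* broadcast one float to all 4 lanes */
/* typically compiles to:
     movss  (%rdx,%rcx,4), %xmm4      load the scalar weight into lane 0
     shufps $0x0, %xmm4, %xmm4        replicate lane 0 across the vector */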
The movaps (aligned, packed move), addps and mulps instructions used in this kernel (checked in the assembly code) have the same latency and throughput as their scalar versions, so this shouldn't incur any extra latency either.
Does anybody have an idea where this extra cycle per 8 cycles is spent, assuming the maximum performance this kernel can reach is 6.4 FP ops per cycle while it actually runs at 5.6 FP ops per cycle?
By the way, here is what the actual assembly looks like:
…
Block x:
movapsx (%rax,%rcx,4), %xmm0
movapsx 0x10(%rax,%rcx,4), %xmm1
movapsx 0x20(%rax,%rcx,4), %xmm2
movapsx 0x30(%rax,%rcx,4), %xmm3
movssl (%rdx,%rcx,4), %xmm4
inc %rcx
shufps $0x0, %xmm4, %xmm4 {fill weight vector}
cmp $0x32, %rcx
mulps %xmm4, %xmm0
mulps %xmm4, %xmm1
mulps %xmm4, %xmm2
mulps %xmm3, %xmm4
addps %xmm0, %xmm5
addps %xmm1, %xmm6
addps %xmm2, %xmm7
addps %xmm4, %xmm8
jl 0x401ad6 <Block x>
…
Comments:

"[…] shufps instruction add 1 cycle every 1.6 iterations?" That's a tough one... – Avunculate

The shufps result should directly be available to the mulps op, since both are in the FP domain – Globuliferous

I tried to eliminate the shufps by using a load instruction instead, but the performance didn't increase, which in my opinion means that the shufps isn't the bad guy here. Any other explanations? Maybe the packed movaps instructions have some extra latency from cache effects (misses, misalignment) that isn't there with the movss instructions in the scalar version? – Globuliferous

When moving the load out of the loop, and thus removing the shufps instruction from every iteration, the performance remains almost the same (it goes up a little because one load is gone), so I assume it is caused by the cache – Globuliferous

[…] for(i=0; i<2*size; i++) { input[i] = i/3; output[i] = i/5; weight[i] = i/8; } and keep the ksize in the loop low (mine is 6) – Globuliferous
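To make the experiment from the last two comments concrete, here is a minimal, hypothetical sketch (the names are illustrative, not my exact code): the arrays are initialized as in the last comment, and every weight is broadcast once up front so the kernel's inner loop can load a ready-made vector with movaps instead of executing movss+shufps each iteration.

#include <xmmintrin.h>

void prepare(float *input, float *output, float *weight,
             __m128 *wvec, int size)
{
    /* initialization from the last comment (integer division, as written) */
    for (int i = 0; i < 2 * size; i++) {
        input[i]  = i / 3;
        output[i] = i / 5;
        weight[i] = i / 8;
    }
    /* broadcast each weight once, outside the hot loop; the per-iteration
       movss+shufps pair is paid here instead */
    for (int i = 0; i < 2 * size; i++)
        wvec[i] = _mm_set_ps1(weight[i]);
}

/* inside the kernel, the broadcast then becomes a plain aligned load:
       w = wvec[i+k+l];    -> movaps, no movss+shufps per iteration */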