C code loop performance

I have a multiply-add kernel inside my application and I want to increase its performance.

I use an Intel Core i7-960 (3.2 GHz clock) and have already manually implemented the kernel using SSE intrinsics as follows:

 for(int i=0; i<iterations; i+=4) {
    y1 = _mm_set_ss(output[i]);
    y2 = _mm_set_ss(output[i+1]);
    y3 = _mm_set_ss(output[i+2]);
    y4 = _mm_set_ss(output[i+3]);

    for(k=0; k<ksize; k++){
        for(l=0; l<ksize; l++){
            w  = _mm_set_ss(weight[i+k+l]);

            x1 = _mm_set_ss(input[i+k+l]);
            y1 = _mm_add_ss(y1,_mm_mul_ss(w,x1));
            …
            x4 = _mm_set_ss(input[i+k+l+3]);
            y4 = _mm_add_ss(y4,_mm_mul_ss(w,x4));
        }
    }
    _mm_store_ss(&output[i],y1);
    _mm_store_ss(&output[i+1],y2);
    _mm_store_ss(&output[i+2],y3);
    _mm_store_ss(&output[i+3],y4);
 }

I know I can use packed FP vectors to increase the performance, and I have already done so successfully, but I want to know why the scalar code isn't able to reach the processor's peak performance.

The performance of this kernel on my machine is ~1.6 FP operations per cycle, while the maximum would be 2 FP operations per cycle (since FP add + FP mul can be executed in parallel).

If I'm reading the generated assembly correctly, the ideal schedule would look as follows, where the mov (load) instruction takes 3 cycles, the switch latency from the load domain to the FP domain for dependent instructions takes 2 cycles, the FP multiply takes 4 cycles and the FP add takes 3 cycles. (Note that the multiply -> add dependence doesn't incur any switch latency because both operations belong to the same domain.)

(schedule diagram)

According to the measured performance (~80% of the maximum theoretical performance) there is an overhead of ~3 instructions per 8 cycles.

I am trying to either:

  • get rid of this overhead, or
  • explain where it comes from

Of course there is the problem of cache misses and data misalignment, which can increase the latency of the move instructions, but are there any other factors that could play a role here, such as register read stalls?

I hope my problem is clear, thanks in advance for your responses!


Update: The assembly of the inner-loop looks as follows:

...
Block 21: 
  movssl  (%rsi,%rdi,4), %xmm4 
  movssl  (%rcx,%rdi,4), %xmm0 
  movssl  0x4(%rcx,%rdi,4), %xmm1 
  movssl  0x8(%rcx,%rdi,4), %xmm2 
  movssl  0xc(%rcx,%rdi,4), %xmm3 
  inc %rdi 
  mulss %xmm4, %xmm0 
  cmp $0x32, %rdi 
  mulss %xmm4, %xmm1 
  mulss %xmm4, %xmm2 
  mulss %xmm3, %xmm4 
  addss %xmm0, %xmm5 
  addss %xmm1, %xmm6 
  addss %xmm2, %xmm7 
  addss %xmm4, %xmm8 
  jl 0x401b52 <Block 21> 
...
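(For reference: in this disassembly, %xmm4 presumably holds the scalar weight, %xmm0 through %xmm3 the four input values, and %xmm5 through %xmm8 the four accumulators y1 through y4, with %rsi and %rcx being the weight and input base pointers.)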
Menado answered 3/4, 2012 at 11:9 Comment(17)
It really depends a lot of the compiler (even of its version) and the optimization flags you are passing to it. If numerical performance is so crucial to you, you might also invest your time and effort in learning numerical libraries and/or OpenCL or CUDA (to take advantage of GPGPU). Ther is also cache considerations to have. Predicting the actual time of a loop is difficult on present processors.Tasker
I don't see why you would think that loop control can be always done in parallel, while it actually creates a perfect dependency chain in the out-of-order execution scheme. The INC instruction modifies a register. The CMP instruction has to wait for INC to finish in order to check the value in that register and modify the flags accordingly. Then, the conditional jump instruction has to wait for CMP to write the flags in order to decide whether to actually jump or not. No parallelization there, I'm afraid. Not to mention that jumps cause pipeline stalls -the branch predictor takes care of that.Dzoba
Not to mention that the INC instruction has to wait for whichever previous instruction that modified the flags in order to preserve the state of the CF flag. You can remedy that simply by replacing the INC with its corresponding ADD.Dzoba
@DanielKozar: The branch predictor doesn't take care of pipeline stalls. Its job is to avoid these stalls. It predicts the result of the conditional jump, and so long as the prediction is correct, the contents of the loop can be executed with no dependency on the CMP/Jump. In fact, you'll probably have a few dozen extra iterations of the loop until the final jump executes, and the CPU understands that it mispredicted the branch and cancels the extras.Autotoxin
Are you asking why a loop of scalars performs slower than a loop of packeds?Autotoxin
No, I want to know why the kernel using scalars isn't able to reach 2 FP operations/cycle; I have a version with packed vectors that performs at almost 4x the scalar version.Menado
Are you measuring this on a dedicated processor? Or is it in a pre-emptive multitasking environment?Dittman
Can you post the raw assembly?Detached
The machine runs Linux, so I guess it is indeed a pre-emptive environment.Menado
And I will add the assembly to my first post in a minute.Menado
By the way, about the loop control part: I assumed that if the 'inc', 'cmp' and 'jmp' are able to finish within 4 cycles they cannot cause any additional latency, and since the loop control has a total latency of 2 cycles (the jump has 0 latency) and never has port conflicts (p5 is always free), it will not increase the latency of the loop.Menado
@DanielKozar Of course the dependency chain of the loop control doesn't play a role - it is shorter than the cycle length of the loopGamesmanship
So how many cycles does your loop need to execute? If I understand your explanation correctly, 5 cycles where you expect 4?Gamesmanship
Yes, exactly, and I wonder where this extra cycle is coming from.Menado
Most likely register read stalls are the problem in your inner loop. A Nehalem can sustain at most 3 register reads from different registers per clock cycle (including xmm, gpr and flags). I count 13 different registers used in your inner loop, so the loop can't be executed in 4 cycles.Gamesmanship
On a non-server Linux distro I believe the interrupt timer is usually set to 250Hz by default. That interrupt timer is used to preempt code. That means 250 times per second your code is interrupted and the scheduler code runs and decides what to give more time to. It sounds like you're doing great to simply get 80% of max speed, no problems there. If you need better, install say Ubuntu Server and tweak the kernel a bit.Dittman
@OrgnlDave: so? You get interrupted every ~4ms, and run some kernel code that will take at worst some µs. That overhead is way below 20%, I would be surprised if it is indeed > 1%.Engineer

I noticed in the comments that:

  • The loop takes 5 cycles to execute.
  • It's "supposed" to take 4 cycles (since there are 4 adds and 4 multiplies).

However, your assembly shows 5 SSE movssl load instructions. According to Agner Fog's instruction tables, all floating-point SSE move instructions have a reciprocal throughput of at least 1 cycle on Nehalem, i.e. the processor can start at most one of them per cycle.

Since you have 5 of them, you can't do better than 5 cycles/iteration.
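That also matches your measurement: each iteration does 4 multiplies + 4 adds = 8 FP operations, and 8 FP ops / 5 cycles = 1.6 FP ops per cycle, i.e. the ~80% of the 2 ops/cycle peak you observed.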


So in order to get to peak performance, you need to reduce the number of loads. How you can do that in this particular case I can't see immediately, but it might be possible.

One common approach is tiling, where you add nesting levels to improve locality. Although it's used mostly for improving cache access, it can also be applied at the register level to reduce the number of loads/stores that are needed.

Ultimately, your goal is to get the number of loads below the number of adds/muls, so this might be the way to go.
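To make the idea concrete, below is a rough register-tiling sketch for a 1-D kernel of this general shape. It is not your exact kernel: the layout out[i+j] += w[k] * in[i+j+k], the function name and the bounds assumptions are all made up for illustration. The point is only that keeping a sliding window of inputs in registers brings the load count below the add/mul count.

/* Hypothetical register-tiled variant (plain C; the compiler keeps the
   tile in registers). Assumed layout: out[i+j] += sum_k w[k] * in[i+j+k].
   Assumes n is a multiple of 4 and `in` holds at least n + ksize floats. */
void madd_tiled(float *out, const float *in, const float *w, int n, int ksize)
{
    for (int i = 0; i < n; i += 4) {
        float y0 = out[i], y1 = out[i+1], y2 = out[i+2], y3 = out[i+3];
        float x0 = in[i],  x1 = in[i+1],  x2 = in[i+2],  x3 = in[i+3];
        for (int k = 0; k < ksize; k++) {
            float wk = w[k];            /* 1 weight load per k          */
            y0 += wk * x0;              /* 4 muls + 4 adds = 8 FP ops   */
            y1 += wk * x1;
            y2 += wk * x2;
            y3 += wk * x3;
            x0 = x1; x1 = x2; x2 = x3;  /* slide the input window       */
            x3 = in[i + k + 4];         /* only 1 new input load per k  */
        }
        out[i] = y0; out[i+1] = y1; out[i+2] = y2; out[i+3] = y3;
    }
}

Per inner iteration this is 2 loads against 8 FP operations, instead of the 5 loads per 8 FP operations in your current inner loop, so the load port should no longer be the limit. The same idea should carry over to the SSE intrinsics version.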

Odoric answered 3/4, 2012 at 16:55 Comment(6)
I'll also mention that integer SSE register-to-register movs have 3 inst/cycle throughput, but that's irrelevant. All loads/stores are still 1 inst/cycle.Odoric
How can you say this on a multitasking system? Seriously? 80% theoretical throughput with Linux's desktop scheduler and the context-switching involved...I'd really like to see if he could reduce the loop by 1 instruction and get better speed (using an incomplete kernel)Dittman
@OrgnlDave OS/kernel overhead is usually less than you think it is. From my experience, it's negligible (< 1%). See this question for examples of code that achieves 97+% of peak flops on both Windows and Linux.Odoric
OK, I'll grant you that it's usually negligible. But the cost of context switching is high, this is an honest question - how many context windows does Nehalem have? The only way I can see this approaching peak usage regardless of O/S is if it's stuck on one core and is mostly the only thing scheduled on that core. Which is probably true, come to think of it. Also remember that those %'s of time don't refer to actual %'s but rather %'s of time slices givenDittman
Actually, in the question that I linked to, those %'s are computed from wall times - literally by counting up the # of computed flops and dividing it by the total elapsed wall time.Odoric
@std''OrgnlDave, I get better than 97% of the peak for some of my read/write/compute tests on some systems using Linux.Cream

Thanks a lot for your answers; this explained a lot. Continuing on my question: when I use packed instructions instead of scalar instructions, the intrinsics code looks very similar:

for(int i=0; i<size; i+=16) {
    y1 = _mm_load_ps(&output[i]);
    …
    y4 = _mm_load_ps(&output[i+12]);

    for(k=0; k<ksize; k++){
        for(l=0; l<ksize; l++){
            w  = _mm_set_ps1(weight[i+k+l]);

            x1 = _mm_load_ps(&input[i+k+l]);
            y1 = _mm_add_ps(y1,_mm_mul_ps(w,x1));
            …
            x4 = _mm_load_ps(&input[i+k+l+12]);
            y4 = _mm_add_ps(y4,_mm_mul_ps(w,x4));
        }
    }
    _mm_store_ps(&output[i],y1);
    …
    _mm_store_ps(&output[i+12],y4);
}

The measured performance of this kernel is about 5.6 FP operations per cycle, although I would expect it to be exactly 4x the performance of the scalar version, i.e. 4 × 1.6 = 6.4 FP ops per cycle.

Taking the move of the weight factor into account (thanks for pointing that out), the schedule looks like:

(schedule diagram)

It looks like the schedule doesn't change, although there is an extra instruction after the movss operation: the scalar weight value is moved into the XMM register and then shufps copies it to all elements of the vector. Even taking the switch latency from the load domain to the floating-point domain into account, the weight vector seems to be ready in time for the mulps, so this shouldn't incur any extra latency.

The movaps (aligned packed move), addps and mulps instructions used in this kernel (checked in the assembly code) have the same latency and throughput as their scalar versions, so this shouldn't incur any extra latency either.

Does anybody have an idea where this extra cycle per 8 cycles is spent, assuming the maximum performance this kernel can get is 6.4 FP ops per cycle and it is running at 5.6 FP ops per cycle?

Thanks again for all of your help!

Menado answered 4/4, 2012 at 7:41 Comment(3)
I think this is suitable as a separate question, since now you have a new issue with the shuffle (for which I don't see the answer right now). You can link it back to this one and state that it is a continuation.Odoric
Easy to find out. Make sure that the weight vector does not contain any denormalized values. Try the loop without the shuffle instruction. It will not produce any useful results, but maybe you'll find which instruction costs you the additional cycles (I suspect the shuffle, of course).Gamesmanship
@drhirsch The new question is here: #10007743 So repost your comment there.Odoric

Making this an answer from my comment.

On a non-server Linux distro I believe the interrupt timer is usually set to 250Hz by default; it varies by distro, but it's almost always over 150Hz. That speed is necessary to provide a 30+ fps interactive GUI. That interrupt timer is used to preempt code, which means 150+ times per second your code is interrupted and the scheduler runs and decides what to give more time to. It sounds like you're doing great to simply get 80% of max speed, no problems there. If you need better, install say Ubuntu Server (100Hz default) and tweak the kernel (preemption off) a bit.

EDIT: On a 2+ core system this has much less impact as your process will almost definitely be slapped onto one core and more-or-less left to do its own thing.

Olpe answered 3/4, 2012 at 16:28 Comment(4)
Sorry, but this is nonsense. I am able to measure processor cycles for simple instruction sequences on a Linux system, preemptive and with a 1kHz scheduler. Even with X running, the overhead from the system is usually well below 1%. Additionally, it would be a very unlikely coincidence if the cycle count in the OP's question went from 4 to exactly 5 because of the overhead; the more natural explanation is that the loop actually needs 5 cycles.Gamesmanship
@drhirsch I'll bet you have two cores. This was addressed in the comments to another question. I'll edit this to reflect that.Dittman
Doesn't change a thing. I still can do the same measurements while running n instances of the test program, where n is the number of cores.Gamesmanship
@drhirsch Please do so, I've had an issue sort of like this and it would be very illuminating for me (in that I was looking in the WRONG place to solve the problem). Please peg all your cores and measure wall clock time with runs at least 1 second long, running a full desktop distro.Dittman
