FLOPs per cycle for Sandy Bridge and Haswell SSE2/AVX/AVX2

I'm confused about how many FLOPs per cycle per core can be done with Sandy Bridge and Haswell. As I understand it, it should be 4 FLOPs per cycle per core for SSE and 8 FLOPs per cycle per core for AVX/AVX2.

This seems to be verified here: How do I achieve the theoretical maximum of 4 FLOPs per cycle?, and here: Sandy-Bridge CPU specification.

However, the link below seems to indicate that Sandy Bridge can do 16 FLOPs per cycle per core and Haswell 32 FLOPs per cycle per core: http://www.extremetech.com/computing/136219-intels-haswell-is-an-unprecedented-threat-to-nvidia-amd.

Can someone explain this to me?

Edit: I understand now why I was confused. I thought the term FLOP only referred to single-precision floating point (SP). I see now that the tests at How do I achieve the theoretical maximum of 4 FLOPs per cycle? are actually on double-precision floating point (DP), so they achieve 4 DP FLOPs/cycle for SSE and 8 DP FLOPs/cycle for AVX. It would be interesting to redo these tests on SP.

Sherly answered 27/3, 2013 at 9:48 Comment(4)
In response to your edit: The numbers would be exactly double the DP numbers. That's because the latencies and throughputs are identical for the SP and DP versions of the SIMD instructions. (In some cases, the SP ones have even lower latency.)Shoshana
I have converted the code to use SP as best as I understand and compiled it with Visual Studio 2012. However, I don't see a difference in speed and the sum reports an error so likely I need to change some more code. I'll have to get back to this.Sherly
You need to double the numbers since the counter is assuming DP. (Change: 48 * 1000 * iterations * tds * 2 to 48 * 1000 * iterations * tds * 4) Furthermore, you need to change the renormalization mask to work on SP: uint64 iMASK = 0x800fffffffffffffull;Shoshana
4 due to four SP floats per SSE register. Thanks again. I also changed the renormalization mask to unsigned int iMASK = 0x80fffffu. Now it works and I get twice the FLOPs, as you said.Sherly

Here are theoretical max FLOPs counts (per core) for a number of recent processor microarchitectures, and an explanation of how to achieve them.

In general, to calculate this, look up the throughput of the FMA instruction(s), e.g. on https://agner.org/optimize/ or any other microbenchmark result, and multiply
(FMAs per clock) * (vector elements / instruction) * 2 (FLOPs / FMA).
Note that achieving this in real code requires very careful tuning (like loop unrolling), and near-zero cache misses, and no bottlenecks on anything else. Modern CPUs have such high FMA throughput that there isn't much room for other instructions to store the results, or to feed them with input. e.g. 2 SIMD loads per clock is also the limit for most x86 CPUs, so a dot product will bottleneck on 2 loads per 1 FMA. A carefully-tuned dense matrix multiply can come close to achieving these numbers, though.
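
As a worked example of that formula, here is a minimal sketch in C; the Haswell-style inputs (2 FMAs/clock, 8 SP elements per 256-bit vector, 3 GHz) are illustrative placeholders, not measurements:

    #include <stdio.h>

    /* Peak FLOPs/cycle = (FMAs per clock) * (vector elements per instruction) * 2 */
    int main(void) {
        const double fmas_per_clock = 2.0;  /* e.g. Haswell: two 256-bit FMA units */
        const double elems_per_insn = 8.0;  /* 256-bit vector of single-precision floats */
        const double flops_per_fma  = 2.0;  /* one multiply + one add */
        const double ghz            = 3.0;  /* illustrative clock speed */

        double flops_per_cycle = fmas_per_clock * elems_per_insn * flops_per_fma;
        printf("peak: %.0f SP FLOPs/cycle, %.0f GFLOP/s per core at %.1f GHz\n",
               flops_per_cycle, flops_per_cycle * ghz, ghz);
        return 0;
    }

For a Sandy Bridge core, which has no FMA, you would instead count one 8-wide AVX add plus one 8-wide AVX multiply per cycle, giving 16 SP FLOPs/cycle.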

If your workload includes any ADD/SUB or MUL that can't be contracted into FMAs, the theoretical max numbers aren't an appropriate goal for your workload. Haswell/Broadwell have 2-per-clock SIMD FP multiply (on the FMA units), but only 1 per clock SIMD FP add (on a separate vector FP add unit with lower latency). Skylake dropped the separate SIMD FP adder, running add/mul/fma the same at 4c latency, 2-per-clock throughput, for any vector width.
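
For example, a multiply-add loop like the first function below can be contracted into FMAs when the compiler is allowed to fuse the multiply and add (e.g. gcc or clang with -O3 -march=haswell -ffp-contract=fast), while a pure-addition loop like the second has no multiply to fuse. This is just a sketch; the function names are made up for illustration:

    #include <stddef.h>

    /* a[i]*b[i] + c[i] can be contracted into one FMA per element. */
    void muladd(float *restrict c, const float *restrict a,
                const float *restrict b, size_t n) {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] * b[i] + c[i];
    }

    /* A plain sum of additions has nothing to fuse; on Haswell/Broadwell it
       competes for the single FP-add port unless rewritten as FMAs with a
       multiplier of 1.0. */
    void add_only(float *restrict c, const float *restrict a, size_t n) {
        for (size_t i = 0; i < n; i++)
            c[i] = c[i] + a[i];
    }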

Intel

Note that Celeron/Pentium versions of recent microarchitectures don't support AVX or FMA instructions, only SSE4.2.

Intel Core 2 and Nehalem (SSE/SSE2):

  • 4 DP FLOPs/cycle: 2-wide SSE2 addition + 2-wide SSE2 multiplication
  • 8 SP FLOPs/cycle: 4-wide SSE addition + 4-wide SSE multiplication

Intel Sandy Bridge/Ivy Bridge (AVX1):

  • 8 DP FLOPs/cycle: 4-wide AVX addition + 4-wide AVX multiplication
  • 16 SP FLOPs/cycle: 8-wide AVX addition + 8-wide AVX multiplication

Intel Haswell/Broadwell/Skylake/Kaby Lake/Coffee/... (AVX+FMA3):

  • 16 DP FLOPs/cycle: two 4-wide FMA (fused multiply-add) instructions
  • 32 SP FLOPs/cycle: two 8-wide FMA (fused multiply-add) instructions
  • (Using 256-bit vector instructions can reduce max turbo clock speed on some CPUs.)

Intel Skylake-X/Skylake-EP/Cascade Lake/etc (AVX512F) with one FMA unit: some Xeon Bronze/Silver

  • 16 DP FLOPs/cycle: one 8-wide FMA (fused multiply-add) instruction
  • 32 SP FLOPs/cycle: one 16-wide FMA (fused multiply-add) instruction
  • Same computation throughput as with narrower 256-bit instructions, but speedups can still be possible with AVX512 for wider loads/stores, a few vector operations that don't run on the FMA units like bitwise operations, and wider shuffles.
  • (Having 512-bit vector instructions in flight shuts down the vector ALU on port 1. Also reduces the max turbo clock speed, so "cycles" isn't a constant in your performance calculations.)

Intel Skylake-X/Skylake-EP/Cascade Lake/etc (AVX512F) with 2 FMA units: Xeon Gold/Platinum, and i7/i9 high-end desktop (HEDT) chips.

  • 32 DP FLOPs/cycle: two 8-wide FMA (fused multiply-add) instructions
  • 64 SP FLOPs/cycle: two 16-wide FMA (fused multiply-add) instructions
  • (Having 512-bit vector instructions in flight shuts down the vector ALU on port 1. Also reduces the max turbo clock speed, although the penalty is much smaller on Ice Lake and smaller still on newer CPUs.)

Intel Cooper Lake (successor to Cascade Lake) introduced bfloat16 (BF16), a 16-bit brain floating-point format for neural-network workloads, with support only for SIMD dot product (into an f32 sum) and conversion of f32 to bf16 (AVX512_BF16). The existing F16C extension with AVX2 only supports load/store with conversion to float32. https://uops.info/ reports that the instructions are multi-uop on Alder Lake (and presumably Sapphire Rapids), but single-uop on Zen 4. Ice Lake lacks BF16, but it's found in Sapphire Rapids and later.

Intel chips before Sapphire Rapids only do actual computation directly on standard float16 in the iGPU. With AVX512_FP16 (Sapphire Rapids), math ops are native operations without having to convert to f32 and back. https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512 . Unlike bf16 support, the full set of add/sub/mul/fma/div/sqrt/compare/min/max/etc ops is available for fp16, with the same per-vector throughput, doubling FLOPs.


AMD

AMD K10:

  • 4 DP FLOPs/cycle: 2-wide SSE2 addition + 2-wide SSE2 multiplication
  • 8 SP FLOPs/cycle: 4-wide SSE addition + 4-wide SSE multiplication

AMD Bulldozer/Piledriver/Steamroller/Excavator, per module (two cores):

  • 8 DP FLOPs/cycle: 4-wide FMA on 128-bit execution units
  • 16 SP FLOPs/cycle: 8-wide FMA

AMD Ryzen (Zen 1)

  • 8 DP FLOPs/cycle: 2-wide or 4-wide FMA on 128-bit execution units
  • 16 SP FLOPs/cycle: 4-wide or 8-wide FMA

AMD Zen 2 and later: two FMA/MUL units and two ADD units on separate ports

  • 24 DP FLOPs/cycle: 4-wide FMA + 4-wide ADD on 256-bit execution units
  • 48 SP FLOPs/cycle: 8-wide FMA + 8-wide ADD
  • With only FMAs, as in a matmul: 16 DP / 32 SP FLOPs/cycle using 256-bit instructions
  • Zen 4 and later also support AVX-512; on Zen 4, 512-bit instructions are single uops but double-pumped through 256-bit execution units, so peak FLOPs/cycle are the same as with 256-bit instructions


x86 low power

Intel Atom (Bonnell/45nm, Saltwell/32nm, Silvermont/22nm):

  • 1.5 DP FLOPs/cycle: scalar SSE2 addition + scalar SSE2 multiplication every other cycle
  • 6 SP FLOPs/cycle: 4-wide SSE addition + 4-wide SSE multiplication every other cycle

Intel Gracemont (Alder Lake E-core):

  • 8 DP FLOPs/cycle: 2-wide or 4-wide FMA on 128-bit execution units
  • 16 SP FLOPs/cycle: 4-wide or 8-wide FMA

AMD Bobcat:

  • 1.5 DP FLOPs/cycle: scalar SSE2 addition + scalar SSE2 multiplication every other cycle
  • 4 SP FLOPs/cycle: 4-wide SSE addition every other cycle + 4-wide SSE multiplication every other cycle

AMD Jaguar:

  • 3 DP FLOPs/cycle: 4-wide AVX addition every other cycle + 4-wide AVX multiplication in four cycles
  • 8 SP FLOPs/cycle: 8-wide AVX addition every other cycle + 8-wide AVX multiplication every other cycle


ARM

ARM Cortex-A9:

  • 1.5 DP FLOPs/cycle: scalar addition + scalar multiplication every other cycle
  • 4 SP FLOPs/cycle: 4-wide NEON addition every other cycle + 4-wide NEON multiplication every other cycle

ARM Cortex-A15:

  • 2 DP FLOPs/cycle: scalar FMA or scalar multiply-add
  • 8 SP FLOPs/cycle: 4-wide NEONv2 FMA or 4-wide NEON multiply-add

Qualcomm Krait:

  • 2 DP FLOPs/cycle: scalar FMA or scalar multiply-add
  • 8 SP FLOPs/cycle: 4-wide NEONv2 FMA or 4-wide NEON multiply-add

IBM POWER

IBM PowerPC A2 (Blue Gene/Q), per core:

  • 8 DP FLOPs/cycle: 4-wide QPX FMA every cycle
  • SP elements are extended to DP and processed on the same units

IBM PowerPC A2 (Blue Gene/Q), per thread:

  • 4 DP FLOPs/cycle: 4-wide QPX FMA every other cycle
  • SP elements are extended to DP and processed on the same units

Intel MIC / Xeon Phi

Intel Xeon Phi (Knights Corner), per core:

  • 16 DP FLOPs/cycle: 8-wide FMA every cycle
  • 32 SP FLOPs/cycle: 16-wide FMA every cycle

Intel Xeon Phi (Knights Corner), per thread:

  • 8 DP FLOPs/cycle: 8-wide FMA every other cycle
  • 16 SP FLOPs/cycle: 16-wide FMA every other cycle

Intel Xeon Phi (Knights Landing), per core:

  • 32 DP FLOPs/cycle: two 8-wide FMA every cycle
  • 64 SP FLOPs/cycle: two 16-wide FMA every cycle

The reason there are both per-core and per-thread numbers for IBM Blue Gene/Q and Intel Xeon Phi (Knights Corner) is that these cores have a higher instruction issue rate when running more than one thread per core.

Caius answered 27/3, 2013 at 9:49 Comment(31)
Thanks! I see now that the link stackoverflow.com/questions/8389648/… is testing DP FLOPs/cycle and not SP FLOPs/cycle. I wonder if I changed the code to SP (_ps instead of _pd) whether I would get 16 SP FLOPs/cycle on my Sandy Bridge system? For Nvidia Fermi I read en.wikipedia.org/wiki/GeForce_500_Series "Each SP can fulfil up to two single precision operations FMA per clock". I guess that's similar to Haswell which can do 2 FMA instructions/cycle.Sherly
If you change _pd to _ps you will double the performance. Whether you will get 16 SP FLOPs/cycle depends on the other parts of your code (e.g. how many memory loads it performs).Caius
Is there a reason you wrote SSE2 for DP and only SSE for SP? I thought SSE2 and SSE were the same for floating point and the main difference was that SSE2 added integer support.Sherly
DP support was added in SSE2 as wellCaius
What about AVX2 in Intel MIC (Xeon PHI)?Luettaluevano
@Luettaluevano Added Xeon Phi. However, it does not support AVX2.Caius
@MaratDukhan: Excellent list, thank you. Could you add Cortex-A8? (and M0/M3/M4?)Madancy
@Alex I do not have details for these Cortex processorsCaius
Cortex-M0 and M3 don’t even have FPUs, so they do zero FLOPs/cycle. Even on M4 the FPU is optional. Cortex-A8 can do 2 SP FLOPs/cycle with NEON. Double-precision … well, VFP isn't pipelined on A8, so it’s about 1/8 DP FLOPs/cycle.Flawy
It's worth noting that the AMD Bulldozer/Piledriver/Steamroller processors use a shared FP unit (two cores per FP unit). Thus, the Intel CPUs offer twice the performance of the AMD CPUs, because each Intel core has its own FP unit.Omophagia
Are the Bulldozer/Piledriver/Steamroller numbers for one core or for one module?Minica
@Minica They are per-moduleCaius
have you got a reference for this data or did you produce it yourself?Transcendence
Data is from my testsCaius
For BGQ, you should add the "per core" caveat just like Xeon Phi. A single hardware thread cannot issue FMA on consecutive cycles; therefore 2+ threads per core are required to achieve the peak flop rate of 8 per cycle.Bagdad
How does CortexA7 compare with CortexA9? I'm interested in the raspberry pi2.Claustrophobia
For Cortex-A9 you write "1.5 DP FLOPs/cycle: scalar addition + scalar multiplication every other cycle" How does "scalar addition + scalar multiplication every other cycle" equal 1.5? Shouldn't it be 1.0?Claustrophobia
Do you mean one scalar mult every other cycle and one addition every cycle? That would be 1.5 DP FLOPs/cycle.Claustrophobia
@Zboson In 2 cycles Cortex-A9 can do one FMLA (2 FLOPs) + one FADD (1 FLOP)Caius
@DylRicho I don't have access to those platformsCaius
@MaratDukhan Okay, thank you anyway. May I ask what AMD FX processor (and/or which APU) you tested to get those figures?Macedo
AMD FX-6300, AMD A10-7850K, and some Bulldozer-based Opteron (don't remember the model and don't have access to it anymore)Caius
The last entry (Intel MIC (Xeon Phi), per thread) is odd, since it leads to ~2TFlop/s for a 5011P, which is twice Intel's advertised value. Perhaps it needs the caveat "with up to two threads per core active"?Brisson
@Brisson You interpret it incorrectly. The right interpretation is "what performance I can get if I run single-threaded code on Xeon Phi". If you run more than 1 thread, you are limited by per-core performance.Caius
@MaratDukhan Ah, I see, thanks. And the Sandy Bridge/Ivy Bridge don't state it explicitly but they are both "per core" and "per thread", right? I.e. you could keep one core's floating point units busy with just one thread, given the right benchmark?Brisson
@Brisson Yes, other processors (except PowerPC A2) have the same per-core and per-thread peakCaius
It would be helpful with some references or explanation of how to obtain this information.Protectionist
Does this mean an Intel i7-3770K CPU @ 3.50 GHz (Sandy Bridge) is worse than an Intel i5-4210M CPU @ 2.60 GHz (Haswell architecture) in terms of flops? It does not seem right.Atony
Skylake-X comes in configurations with either 1 or 2 AVX512 FMA units... software.intel.com/en-us/forums/intel-isa-extensions/topic/…Sincerity
@Sincerity As far as I know (#IamIntel), all of the Xeon W and Skylake-X SKUs have 2 FMA units. I aggregated all of the public information here: github.com/jeffhammond/vpu-count.Bagdad
@MaratDukhan Since this is the most popular source of information about this topic on the internet :-), you should add Cavium ThunderX2. WikiChip or other sources should provide the necessary info.Bagdad

The throughput for Haswell is lower for addition than for multiplication and FMA. There are two multiplication/FMA units, but only one f.p. add unit. If your code contains mainly additions then you have to replace the additions by FMA instructions with a multiplier of 1.0 to get the maximum throughput.

The latency of FMA instructions on Haswell is 5 and the throughput is 2 per clock. This means that you must keep 10 parallel operations going to get the maximum throughput. If, for example, you want to add a very long list of f.p. numbers, you would have to split it into ten parts and use ten accumulator registers.
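
For illustration, here is a minimal sketch of such a ten-accumulator sum using AVX2/FMA3 intrinsics; it assumes n is a multiple of 80 (10 accumulators × 8 floats) and compilation with -march=haswell (or -mavx2 -mfma). Multiplying by 1.0 turns each addition into an FMA so both FMA ports can be used:

    #include <immintrin.h>
    #include <stddef.h>

    /* FMA latency 5 x throughput 2 on Haswell => 10 accumulators in flight. */
    float sum10(const float *x, size_t n) {
        const __m256 one = _mm256_set1_ps(1.0f);
        __m256 acc[10];
        for (int k = 0; k < 10; k++)
            acc[k] = _mm256_setzero_ps();

        for (size_t i = 0; i < n; i += 80)       /* 10 independent FMA chains */
            for (int k = 0; k < 10; k++)
                acc[k] = _mm256_fmadd_ps(_mm256_loadu_ps(x + i + 8 * k), one, acc[k]);

        for (int k = 1; k < 10; k++)             /* combine the accumulators */
            acc[0] = _mm256_add_ps(acc[0], acc[k]);

        /* horizontal sum of the final 256-bit vector */
        __m128 lo = _mm256_castps256_ps128(acc[0]);
        __m128 hi = _mm256_extractf128_ps(acc[0], 1);
        lo = _mm_add_ps(lo, hi);
        lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
        lo = _mm_add_ss(lo, _mm_shuffle_ps(lo, lo, 1));
        return _mm_cvtss_f32(lo);
    }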

This is indeed possible, but who would make such a weird optimization for one specific processor?

Wilhelm answered 24/7, 2013 at 13:35 Comment(4)
You don't need to manually break the loop; a little bit of compiler unrolling and out-of-order HW (assuming you don't have dependencies) can get you fairly close to that throughput limit. Add to that hyperthreading and 2 operations per clock become quite necessary.Babby
@Leeor, maybe you could post some code to show this? Unrolling 10 times with FMA gives me the best result. See my answer at stackoverflow.com/questions/21090873/…Claustrophobia
Most HPC codes that are compute-bound (i.e. flop-bound) do a lot of FMA. In my experience, the places where one does a lot of add are bandwidth-bound such that more add throughput won't help.Bagdad
The newest Intel generation has a more balanced throughput. Floating point addition, multiplication and FMA all have a throughput of 2 instructions per clock cycle and a latency of 4.Wilhelm
