Using a description largely based on the Intel Pentium architecture, and to cut a very long story short:
- the processor has a number of "execution units" that can perform different types of 'micro-ops'; instructions may be split into several micro-ops
- the different execution units essentially run in parallel
- each micro-op ties up the corresponding execution unit for a certain number of clock cycles, during which no other instruction can use that execution unit: e.g. a "floating point add" may tie up the "FP execute" unit for 2 clock cycles
- execution units are grouped by "port", and each clock cycle a new micro-op can be sent to each port (assuming the relevant execution unit is free at that moment); some units can also be sent an "extra op" halfway through the cycle; so each clock cycle, a certain number of ops can start executing
- the processor can re-order micro-ops where this doesn't break dependencies (or where the result can still be reconstructed) to take advantage of which execution units are free at a given moment
- so instructions can be executing in parallel, but which parts of which instructions are executing at any one time is quite a complex situation
- the overall time for a given instruction thus depends on how long it had to "wait" for the necessary execution units to become available, the actual time its ops spent running on those units, plus any extra time required to "tie up the result" (see the sketch below)
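As a rough illustration of the dependency point above, here is a minimal Java sketch (class and method names are made up for illustration): the first loop forms one long dependency chain, so each add has to wait for the previous result; the second splits the work into independent accumulators, giving the processor independent micro-ops it can overlap on the floating point unit.

public class IlpSketch {

    // One long dependency chain: each add needs the previous sum,
    // so the adds cannot overlap even if the FP unit is free.
    static double dependentAdds(double[] data) {
        double sum = 0.0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i];           // waits for the previous iteration's sum
        }
        return sum;
    }

    // Four independent chains: the processor can have several adds
    // "in flight" at once, since they don't depend on each other.
    // (Assumes the array length is a multiple of 4 for simplicity.)
    static double independentAdds(double[] data) {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        for (int i = 0; i + 3 < data.length; i += 4) {
            s0 += data[i];
            s1 += data[i + 1];
            s2 += data[i + 2];
            s3 += data[i + 3];
        }
        return s0 + s1 + s2 + s3;
    }

    public static void main(String[] args) {
        double[] data = new double[1000000];
        java.util.Arrays.fill(data, 1.0);
        System.out.println(dependentAdds(data) + " / " + independentAdds(data));
    }
}

Whether the second version actually runs faster depends on the JIT and the processor, but on a design like the one described it can keep the FP add unit busier, while the first is limited by the latency of each add.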
Since the timing of an instruction depends on the surrounding instructions, in practice it's usually best to time a representative piece of code rather than worry about individual instructions. However:
- Intel (and presumably other manufacturers) publish a list of instruction throughput and latency timings
- the throughput is the number of clock cycles for which the instruction actually ties up the relevant execution unit(s), i.e. how soon another instruction of the same type can start
- the latency is a "worst case" number of clock cycles required, once an instruction starts executing, before the result of that execution is available as input to another instruction
So for example, if, say, floating point add and multiply instructions each have a throughput of 2 and a latency of 5 (for multiply it's actually a bit greater, I think), then adding a register to itself or multiplying it by itself will likely take two clock cycles, since there are no other dependent values. Adding to it the result of a previous multiplication will take something like, or a bit less than, 2+5 clock cycles, depending on where you start/finish timing and on all sorts of other things. (During some of those clock cycles, another add/multiply operation could be taking place, so it's arguable how many cycles you actually attribute to the individual add/multiply instructions anyway...)
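To make that arithmetic a bit more concrete, here is a small sketch in Java (the figures in the comments are the made-up throughput=2 / latency=5 numbers from the paragraph above, not real timings for any particular processor):

static double latencyVsThroughput(double a, double b, double r) {
    // Operands already available: the add only needs to occupy the FP unit
    // for its throughput figure (~2 cycles in the made-up example).
    r = r + r;

    // Here the add consumes the multiply's result, so it can't start until
    // the multiply's latency has elapsed: something like "2 + 5" cycles for
    // the pair, give or take where you start and stop counting.
    double product = a * b;
    r = r + product;

    return r;
}

In practice the JIT and the processor's reordering can hide some of that latency by executing unrelated instructions in the gap, which is exactly why timing whole representative loops is usually more informative.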
Oh, and just as a concrete example, for the following Java code
public void runTest(double[] data, double randomVal) {
    for (int i = data.length - 1; i >= 0; i--) {
        data[i] = data[i] + randomVal;
    }
}
Hotspot 1.6.12 JIT-compiles the inner loop sequence to the following Intel code, consisting of a load-add-store for each position in the array (with 'randomVal' being held in XMM0a in this case):
0b3 MOVSD XMM1a,[EBP + #16]
0b8 ADDSD XMM1a,XMM0a
0bc MOVSD [EBP + #16],XMM1a
0c1 MOVSD XMM1a,[EBP + #8]
0c6 ADDSD XMM1a,XMM0a
0ca MOVSD [EBP + #8],XMM1a
...
Each group of load-add-store appears to take 5 clock cycles.
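For what it's worth, a figure like that typically comes from timing many iterations and dividing, rather than from counting cycles per instruction. Here is a rough sketch of such a measurement (class name made up; the exact numbers will vary with the machine, the JVM version and the surrounding code):

public class LoopTiming {

    // Same loop as above.
    public void runTest(double[] data, double randomVal) {
        for (int i = data.length - 1; i >= 0; i--) {
            data[i] = data[i] + randomVal;
        }
    }

    public static void main(String[] args) {
        double[] data = new double[1000000];
        LoopTiming t = new LoopTiming();

        // Warm-up so Hotspot has JIT-compiled runTest before the timed runs.
        for (int i = 0; i < 1000; i++) {
            t.runTest(data, 1.0);
        }

        int runs = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            t.runTest(data, 1.0);
        }
        long elapsed = System.nanoTime() - start;

        double nsPerElement = (double) elapsed / ((double) runs * data.length);
        System.out.println("~" + nsPerElement + " ns per load-add-store");
        // Divide by the clock period (ns per cycle) to get an approximate
        // cycles-per-iteration figure.
    }
}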