OK, first of all, I don't know how compilers do this automatically, and I'm pretty sure there are at least tens, if not hundreds, of algorithms compilers can choose from.
It's probably compiler-specific anyway.
But I can help you with calculating its effectiveness.
Just note that this technique usually doesn't give you a great performance boost, because the function inside the loop usually takes much more computation time than the loop's condition check and increment.
It's in tight, repeated loops with a cheap body that it can give a noticeable percentage gain.
So, let's say we have a simple loop with a constant bound, because you were too lazy to copy-paste or just thought it would look better:
for (int i = 0; i < 5; i++)
{
    DoSomething();
}
Here you have 5 int comparisons, 5 increments, and 5 DoSomething() calls.
So if DoSomething() is relatively quick, that's 15 operations.
Now if you unroll this, you reduce it to just 5 operations:
DoSomething();
DoSomething();
DoSomething();
DoSomething();
DoSomething();
Constants make it easy, so let's see how it would work with a variable bound:
for (int i = 0; i < n; i++)
{
    DoSomething();
}
Here you have n int comparisons, n increments, and n DoSomething() calls = 3n operations.
Now we can't unroll it entirely, but we can unroll it by a constant factor (the higher n is expected to be, the more we should unroll it):
int i;
for (i = 0; i + 3 <= n; i = i + 3) // Run the 3-wide block while a full block still fits
{
    DoSomething();
    DoSomething();
    DoSomething();
}
if (n - i == 2)
{
    DoSomething(); // Two iterations left over
    DoSomething();
}
else if (n - i == 1)
{
    DoSomething(); // One iteration left over
}
Now here we have roughly n/3 + 2 int comparisons, n/3 increments, and n DoSomething() calls = about (1 2/3)*n operations.
We saved ourselves about (1 1/3)*n operations, which cuts the loop's bookkeeping overhead and the total operation count almost in half.
FYI, another neat unrolling technique is called Duff's device.
But it's very compiler- and language-implementation-specific; there are languages where it would actually be worse.