EDIT: I HAVE AN ANSWER (TL;DR: SKIP TO THE END)
I've done some tests on my own.
    function whileFn() {
        var i = 0;
        while (i < 10) {
            document.write(i);
            i++;
        }
    }

    function doWhileFn() {
        var i = 0;
        do {
            document.write(i);
            i++;
        } while (i < 10);
    }

    console.time('doWhileFn');
    doWhileFn();
    console.timeEnd('doWhileFn');
    document.write('<br/>');
    console.time('whileFn');
    whileFn();
    console.timeEnd('whileFn');
I've inverted the two functions and the timing stays the same: the first function to run is always the slower one, whichever loop it uses. This proves that the choice of loop construct is meaningless here. I first thought the time was bound entirely by the rendering engine, but rendering turned out to be irrelevant too (see below).
If you remove `document.write()` altogether, the difference is reduced even more, yet it does not disappear, so rendering is not the cause either.
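As an illustration, here is a minimal DOM-free variant of the test; the string accumulator is my own substitution for `document.write()`, not part of the original snippet:

    // DOM-free variants: accumulate into a string instead of writing
    // to the document, so the rendering engine plays no part.
    function whileFnPure() {
        var i = 0, out = '';
        while (i < 10) {
            out += i;
            i++;
        }
        return out;
    }

    function doWhileFnPure() {
        var i = 0, out = '';
        do {
            out += i;
            i++;
        } while (i < 10);
        return out;
    }

    console.time('doWhileFnPure');
    doWhileFnPure();
    console.timeEnd('doWhileFnPure');
    console.time('whileFnPure');
    whileFnPure();
    console.timeEnd('whileFnPure');

The function measured first still tends to come out slower.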
To measure time correctly, you also have to account for the cost of the measurement itself; this snippet shows the overhead of measuring time:
    console.time('outer');
    console.time('inner');
    for (var i = 0; i < 10; i++);
    console.timeEnd('inner');
    console.timeEnd('outer');
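A rough way to quantify that overhead is to time an empty section many times and average the result. This sketch uses `performance.now()`, which is my own choice and not part of the original test:

    // Estimate the cost of taking a timestamp pair by timing an
    // empty section repeatedly and averaging the differences.
    var samples = 1000;
    var total = 0;
    for (var n = 0; n < samples; n++) {
        var start = performance.now();
        // nothing here: we are timing the timer itself
        var end = performance.now();
        total += end - start;
    }
    console.log('avg timer overhead (ms): ' + total / samples);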
The difference between the `inner` and the `outer` measurement is measurement overhead, and it perturbs the measurement itself (Heisenberg, anyone?) so much that timing very fast functions (near the 1 ms mark) is prone to measurement errors. This is true, but turns out to be irrelevant here.
The usual advice is to wrap your code in huge loops (repeating it 1,000 to 100,000 times) to reduce the impact of measurement. This proves not to be the explanation.
If the above were the whole story, long runs would show only a tiny measurement difference; instead, tests show that the difference scales with the number of iterations, and as such it is NOT just measurement overhead. A sketch of such a scaled-up comparison follows.
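Here is a minimal sketch of that scaling test; the iteration count and the accumulator bodies are my own assumptions, not taken from the question:

    // Scale the loop bodies up so any fixed measurement overhead
    // becomes negligible relative to the total running time.
    var N = 100000; // arbitrary large iteration count

    function whileFnBig() {
        var i = 0, acc = 0;
        while (i < N) { acc += i; i++; }
        return acc;
    }

    function doWhileFnBig() {
        var i = 0, acc = 0;
        do { acc += i; i++; } while (i < N);
        return acc;
    }

    console.time('doWhileFnBig');
    doWhileFnBig();
    console.timeEnd('doWhileFnBig');
    console.time('whileFnBig');
    whileFnBig();
    console.timeEnd('whileFnBig');

If the gap were pure measurement overhead, it would stay small and roughly constant; in practice it grows with `N`.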
To recap the findings so far:
- it is not a matter of `while` vs. `do..while`, because inverting the order of the two functions does not invert the timing: the first one to run is always the slower;
- it is not a matter of measurement overhead, because the difference scales to macroscopic proportions (overhead would be a variable yet tiny amount, but it's not);
- it is not about rendering, because I removed it altogether at some point;
- the inner-outer snippet shows (replace `10` with a large number) that long loops carry only a tiny measurement overhead, but this does not hold for the original code in the question, where the difference is proportional to the number of iterations.
EDIT: CONCLUSION
This is an alternating test: measure A, then B, then A again, then B again, and finally A once more. The further you go into the sequence, the more the two timings converge.
Proof:
    function whileFn() {
        var i = 0;
        while (i < 10) {
            document.write(i);
            i++;
        }
    }

    function doWhileFn() {
        var i = 0;
        do {
            document.write(i);
            i++;
        } while (i < 10);
    }

    console.time('doWhileFn');
    doWhileFn();
    console.timeEnd('doWhileFn');
    document.write('<br/>');
    console.time('whileFn');
    whileFn();
    console.timeEnd('whileFn');
    document.write('<br/>');
    console.time('doWhileFn');
    doWhileFn();
    console.timeEnd('doWhileFn');
    document.write('<br/>');
    console.time('whileFn');
    whileFn();
    console.timeEnd('whileFn');
    document.write('<br/>');
    console.time('doWhileFn');
    doWhileFn();
    console.timeEnd('doWhileFn');
Explanation: the JS engine compiles the source into native code on the fly and optimizes it gradually, but it can only recompile a function AFTER that function has returned. This means a function is compiled and then progressively optimized over a longer period of time; this is, in fact, a well-known behavior of V8. What the plain A-B scenario measures is not representative because of this edge condition: the initial measurements are taken while the engine is still compiling and optimizing, so they are inaccurate. The A-B-A-B-A scenario shows that A and B converge over time, and the measurements settle once they are far away from the (initial) edge condition.
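To watch the warm-up effect in isolation, here is a minimal sketch of my own (not part of the original tests): time the same function repeatedly and observe the numbers settling.

    // Time one function over and over: the first runs tend to be
    // slower while the engine compiles and optimizes; later runs
    // settle to a stable value.
    function work() {
        var acc = 0;
        for (var i = 0; i < 100000; i++) {
            acc += i;
        }
        return acc;
    }

    for (var run = 1; run <= 10; run++) {
        var start = performance.now();
        work();
        var end = performance.now();
        console.log('run ' + run + ': ' + (end - start).toFixed(3) + ' ms');
    }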