Is there a performance difference between i++ and ++i if the resulting value is not used?
Executive summary: No.

i++ could potentially be slower than ++i, since the old value of i might need to be saved for later use, but in practice all modern compilers will optimize this away.
We can demonstrate this by looking at the code for this function, both with ++i and i++.
$ cat i++.c
extern void g(int i);
void f()
{
    int i;

    for (i = 0; i < 100; i++)
        g(i);
}
The files are the same, except for ++i and i++:
$ diff i++.c ++i.c
6c6
<     for (i = 0; i < 100; i++)
---
>     for (i = 0; i < 100; ++i)
We'll compile them, and also get the generated assembler:
$ gcc -c i++.c ++i.c
$ gcc -S i++.c ++i.c
And we can see that both the generated object and assembler files are the same.
$ md5 i++.s ++i.s
MD5 (i++.s) = 90f620dda862cd0205cd5db1f2c8c06e
MD5 (++i.s) = 90f620dda862cd0205cd5db1f2c8c06e
$ md5 *.o
MD5 (++i.o) = dd3ef1408d3a9e4287facccec53f7d22
MD5 (i++.o) = dd3ef1408d3a9e4287facccec53f7d22
Your "No" is only tested for integers; however, the question does not mention the type of i, and hence your generalisation in line 1 does not hold. Sorry, -1. – Ehf

Get into the habit of using ++i instead of i++. There's absolutely no reason not to, and if your software ever passes through a toolchain that doesn't optimize it out, your software will be more efficient. Considering it is just as easy to type ++i as it is to type i++, there is really no excuse to not be using ++i in the first place. – Instance

I always use prefix (e.g. ++i;) since the underlying code "should" at least copy a register when using the postfix operator. – Molecule

Another reason why ++i should be the go-to default is that it's easier to see that something is being incremented. Take reallylongvariablename++; for instance vs. ++reallylongvariablename;. It's much easier to see that the first one is being incremented (e.g. when it's on a line by itself), and it is more in line with how you read it in your mind: "increment i" -> "++i". – Mind

I tried with size_t and my hashes are different. Running g++ 5.3.1 on Ubuntu 16.04. – Desolation

diff i++.s ++i.s shows just the filename and a 1c1 at the top. Diffing the .c files just shows the difference in i++ and ++i. I did a small test and found that ++i was a little faster, but the total run time was about 3 seconds: after 10 runs each, ++i averaged 3.3153 and i++ averaged 3.3248. I saw the same relative difference when I moved the loops to ~30 seconds. Not a big difference at all. – Desolation

From Efficiency versus intent by Andrew Koenig:
First, it is far from obvious that ++i is more efficient than i++, at least where integer variables are concerned.

And:

So the question one should be asking is not which of these two operations is faster, it is which of these two operations expresses more accurately what you are trying to accomplish. I submit that if you are not using the value of the expression, there is never a reason to use i++ instead of ++i, because there is never a reason to copy the value of a variable, increment the variable, and then throw the copy away.

So, if the resulting value is not used, I would use ++i. But not because it is more efficient: because it correctly states my intent.
Why would i++ create a copy instead of simply postponing the increment until after the expression referencing the variable is completed? Were older compilers not smart enough to do that? – Gimel

I code i++ the same way I'd code i += n or i = i + n, i.e., in the form target verb object, with the target operand to the left of the verb operator. In the case of i++, there is no right object, but the rule still applies, keeping the target to the left of the verb operator. – Hemorrhoid

… i++ / ++i as part of a statement. – Opaline

A better answer is that ++i will sometimes be faster but never slower.
Everyone seems to be assuming that i is a regular built-in type such as int. In this case there will be no measurable difference.

However, if i is a complex type then you may well find a measurable difference. For i++ you must make a copy of your class before incrementing it. Depending on what's involved in a copy, it could indeed be slower, since with ++i you can just return the final value.
Foo Foo::operator++(int)   // postfix i++ takes a dummy int parameter
{
    Foo oldFoo = *this; // copy existing value - could be slow
    // yadda yadda, do increment
    return oldFoo;
}
Another difference is that with ++i you have the option of returning a reference instead of a value. Again, depending on what's involved in making a copy of your object this could be slower.
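For comparison, a prefix operator that returns a reference avoids the copy entirely. A minimal sketch in the same spirit as the snippet above (it assumes the matching declaration exists in class Foo):

Foo& Foo::operator++()
{
    // yadda yadda, do increment
    return *this;   // hand back a reference to the already-incremented object: no temporary Foo
}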
A real-world example of where this can occur would be the use of iterators. Copying an iterator is unlikely to be a bottleneck in your application, but it's still good practice to get into the habit of using ++i instead of i++ where the outcome is not affected.
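As a small illustration of that habit, here is a loop over a container written with the prefix form; a minimal sketch in which the container type and the process() callee are hypothetical:

#include <vector>

void process(int value);   // hypothetical callee

void process_all(std::vector<int>& v)
{
    // ++it rather than it++: no temporary iterator copy is requested.
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
        process(*it);
}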
Short answer:
There is never any difference between i++ and ++i in terms of speed. A good compiler should not generate different code in the two cases.
Long answer:
What every other answer fails to mention is that the difference between ++i and i++ only makes sense within the expression in which it is found.
In the case of for(i=0; i<n; i++), the i++ is alone in its own expression: there is a sequence point before the i++ and there is one after it. Thus the only machine code generated is "increase i by 1" and it is well-defined how this is sequenced in relation to the rest of the program. So if you changed it to prefix ++, it wouldn't matter in the slightest; you would still just get the machine code "increase i by 1".
The differences between ++i and i++ only matter in expressions such as array[i++] = x; versus array[++i] = x;. Some may argue and say that the postfix will be slower in such operations because the register where i resides has to be reloaded later. But then note that the compiler is free to order your instructions in any way it pleases, as long as it doesn't "break the behavior of the abstract machine", as the C standard calls it.
So while you may assume that array[i++] = x; gets translated to machine code as:

- Store value of i in register A.
- Store address of array in register B.
- Add A and B, store results in A.
- At this new address represented by A, store the value of x.
- Store value of i in register A. // inefficient because extra instruction here, we already did this once
- Increment register A.
- Store register A in i.
the compiler might as well produce the code more efficiently, such as:

- Store value of i in register A.
- Store address of array in register B.
- Add A and B, store results in B.
- Increment register A.
- Store register A in i.
- ... // rest of the code
Just because you as a C programmer are trained to think that the postfix ++ happens at the end, the machine code doesn't have to be ordered in that way.
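For reference, the semantic difference mentioned earlier only exists because the value of the expression is used as an index. A minimal sketch with hypothetical names:

void demo(void)
{
    int array[4] = {0, 0, 0, 0};
    int x = 42;

    int i = 0;
    array[i++] = x;   /* uses the old value: stores to array[0]; i is 1 afterwards */

    int j = 0;
    array[++j] = x;   /* uses the new value: stores to array[1]; j is 1 afterwards */
}

When the value of the expression is discarded, as in the loop above, this distinction disappears and so does any associated cost.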
So there is no difference between prefix and postfix ++ in C. Now, what you as a C programmer should be wary of is people who inconsistently use prefix in some cases and postfix in other cases, without any rationale why. This suggests that they are uncertain about how C works or that they have incorrect knowledge of the language. This is always a bad sign; it in turn suggests that they are making other questionable decisions in their program, based on superstition or "religious dogmas".

"Prefix ++ is always faster" is indeed one such false dogma that is common among would-be C programmers.
Taking a leaf from Scott Meyers, More Effective C++, Item 6: Distinguish between prefix and postfix forms of increment and decrement operations.
The prefix version is always preferred over the postfix with regard to objects, especially iterators.

The reason for this becomes clear if you look at the call pattern of the operators.
// Prefix
Integer& Integer::operator++()
{
    *this += 1;
    return *this;
}

// Postfix
const Integer Integer::operator++(int)
{
    Integer oldValue = *this;
    ++(*this);
    return oldValue;
}
Looking at this example, it is easy to see how the prefix operator will always be more efficient than the postfix, because of the need for a temporary object in the postfix version.

This is why, when you see examples using iterators, they always use the prefix version.

But as you point out, for ints there is effectively no difference because of the compiler optimisation that can take place.
First of all: the difference between i++ and ++i is negligible in C.
To the details.
1. The well-known C++ issue: ++i is faster
In C++, ++i is more efficient iff i is some kind of an object with an overloaded increment operator.
Why?
In ++i, the object is first incremented, and can subsequently be passed as a const reference to any other function. This is not possible if the expression is foo(i++), because now the increment needs to be done before foo() is called, but the old value needs to be passed to foo(). Consequently, the compiler is forced to make a copy of i before it executes the increment operator on the original. The additional constructor/destructor calls are the bad part.
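A minimal sketch of that point, where Foo is a hypothetical class type with overloaded increment operators and foo() is a hypothetical function taking it by const reference:

void foo(const Foo& x);

void call_both(Foo& i)
{
    foo(++i);   // i is incremented first, then i itself is bound to the const reference
    foo(i++);   // a temporary copy of the old i is constructed, passed to foo(), then destroyed
}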
As noted above, this does not apply to fundamental types.
2. The little-known fact: i++ may be faster
If no constructor/destructor needs to be called, which is always the case in C, ++i and i++ should be equally fast, right? No. They are virtually equally fast, but there may be small differences, which most other answerers got the wrong way around.
How can i++ be faster?
The point is data dependencies. If the value needs to be loaded from memory, two subsequent operations need to be done with it: incrementing it and using it. With ++i, the incrementation needs to be done before the value can be used. With i++, the use does not depend on the increment, and the CPU may perform the use operation in parallel with the increment operation. The difference is at most one CPU cycle, so it is really negligible, but it is there. And it is the other way round than many would expect.
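To make the data-dependency argument concrete, here is a minimal sketch with hypothetical function names; the two functions are deliberately not equivalent (they read different elements), they only show the dependency chains, and whether the operations actually overlap depends entirely on the CPU and the surrounding code:

int use_postfix(const int *a, int *ip)
{
    int i = *ip;
    int x = a[i++];   /* the load of a[i] and the increment are independent and may overlap */
    *ip = i;
    return x;
}

int use_prefix(const int *a, int *ip)
{
    int i = *ip;
    int x = a[++i];   /* the load must wait until the incremented i is available */
    *ip = i;
    return x;
}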
If ++i or i++ is used within another expression, changing between them changes the semantics of the expression, so any possible performance gain/loss is out of the question. If they are standalone, i.e., the result of the operation is not used immediately, then any decent compiler will compile them to the same thing, for example an INC assembly instruction. – Vanzant

1) i++ and ++i can be used interchangeably in almost every possible situation by adjusting loop constants by one, so they are near equivalent in what they do for the programmer. 2) Even though both compile to the same instruction, their execution differs for the CPU. In the case of i++, the CPU can compute the increment in parallel with some other instruction that uses the same value (CPUs really do this!), while with ++i the CPU has to schedule the other instruction after the increment. – Coriolanus

if(++foo == 7) bar(); and if(foo++ == 6) bar(); are functionally equivalent. However, the second may be one cycle faster, because the compare and the increment can be computed in parallel by the CPU. Not that this single cycle matters much, but the difference is there. – Coriolanus

… (< for example vs <=) where ++ is usually used, so the conversion between the two is often easily possible. – Vanzant

If the result of ++i or i++ is not used, the two are fully equivalent and should compile to exactly the same code (unless i is an object of a class that has non-standard, distinct pre/post increment operators). – Coriolanus

Here's an additional observation if you're worried about micro-optimisation. Decrementing loops can 'possibly' be more efficient than incrementing loops (depending on instruction set architecture, e.g. ARM), given:
for (i = 0; i < 100; i++)
On each loop you will have one instruction each for:

- Adding 1 to i.
- Comparing whether i is less than 100.
- A conditional branch if i is less than 100.
Whereas a decrementing loop:
for (i = 100; i != 0; i--)
The loop will have an instruction for each of:

- Decrement i, setting the CPU register status flag.
- A conditional branch depending on CPU register status (Z==0).
Of course this works only when decrementing to zero!
Remembered from the ARM System Developer's Guide.
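As a minimal sketch of how such a count-down loop might look (hypothetical function; only worthwhile when the body does not care about iteration order, or when the index is adjusted as below):

void fill(int *a, int n, int value)
{
    int i;
    for (i = n; i != 0; i--)    /* count down towards zero: the decrement itself sets the flags */
        a[n - i] = value;       /* index adjusted so a[0]..a[n-1] are still written in order */
}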
… a[N+i] instead of a[i]. – Mariannemariano

Please don't let the question of "which one is faster" be the deciding factor of which to use. Chances are you're never going to care that much, and besides, programmer reading time is far more expensive than machine time.
Use whichever makes most sense to the human reading the code.
@Mark Even though the compiler is allowed to optimize away the (stack-based) temporary copy of the variable, and gcc (in recent versions) is doing so, that doesn't mean all compilers will always do so.

I just tested it with the compilers we use in our current project and 3 out of 4 do not optimize it.

Never assume the compiler gets it right, especially if the possibly faster, but never slower, code is as easy to read.

If you don't have a really stupid implementation of one of the operators in your code: always prefer ++i over i++.
I have been reading through most of the answers here and many of the comments, and I didn't see any reference to the one instance that I could think of where i++ is more efficient than ++i (and perhaps surprisingly --i was more efficient than i--). That is for C compilers for the DEC PDP-11!
The PDP-11 had assembly instructions for pre-decrement of a register and post-increment, but not the other way around. The instructions allowed any "general-purpose" register to be used as a stack pointer. So if you used something like *(i++) it could be compiled into a single assembly instruction, while *(++i) could not.
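The classic beneficiary is a copy loop like the following minimal sketch (hypothetical function name); its *src++ / *dst++ accesses are exactly the pattern that could map onto the PDP-11's post-increment addressing mode:

void copy_bytes(char *dst, const char *src, int n)
{
    while (n-- > 0)
        *dst++ = *src++;   /* use each pointer, then advance it */
}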
This is obviously a very esoteric example, but it does provide the exception where post-increment is more efficient (or I should say was, since there isn't much demand for PDP-11 C code these days).
… --i and i++. – Katiekatina

In C, the compiler can generally optimize them to be the same if the result is unused.
However, in C++, if you are using other types that provide their own ++ operators, the prefix version is likely to be faster than the postfix version. So, if you don't need the postfix semantics, it is better to use the prefix operator.
I can think of a situation where postfix is slower than prefix increment:
Imagine a processor where register A is used as the accumulator and is the only register used in many instructions (some small microcontrollers are actually like this).

Now imagine the following two snippets and their translation into a hypothetical assembly:
Prefix increment:
a = ++b + c;
; increment b
LD A, [&b]
INC A
ST A, [&b]
; add with c
ADD A, [&c]
; store in a
ST A, [&a]
Postfix increment:
a = b++ + c;
; load b
LD A, [&b]
; add with c
ADD A, [&c]
; store in a
ST A, [&a]
; increment b
LD A, [&b]
INC A
ST A, [&b]
Note how the value of b was forced to be reloaded. With prefix increment, the compiler can just increment the value and go ahead with using it, possibly avoiding a reload since the desired value is already in the register after the increment. However, with postfix increment, the compiler has to deal with two values, the old and the incremented one, which as I show above results in one more memory access.
Of course, if the value of the increment is not used, such as in a single i++; statement, the compiler can (and does) simply generate an increment instruction regardless of postfix or prefix usage.
As a side note, I'd like to mention that an expression containing b++ cannot simply be converted to one with ++b without any additional effort (for example, by adding a - 1). So comparing the two when they are part of some expression is not really valid. Often, where you use b++ inside an expression you cannot use ++b, so even if ++b were potentially more efficient, it would simply be wrong. The exception is of course if the expression is begging for it (for example a = b++ + 1;, which can be changed to a = ++b;).
I always prefer pre-increment, however ...
I wanted to point out that even in the case of calling the operator++ function, the compiler will be able to optimize away the temporary if the function gets inlined. Since the operator++ is usually short and often implemented in the header, it is likely to get inlined.
So, for practical purposes, there likely isn't much of a difference between the performance of the two forms. However, I always prefer pre-increment since it seems better to directly express what I'm trying to say, rather than relying on the optimizer to figure it out.

Also, giving the optimizer less to do likely means the compiler runs faster.
My C is a little rusty, so I apologize in advance. Speed-wise, I can understand the results, but I am confused as to how both files came out to the same MD5 hash. Maybe a for loop runs the same, but wouldn't the following two lines of code generate different assembly?
myArray[i++] = "hello";
vs
myArray[++i] = "hello";
The first one writes the value to the array, then increments i. The second increments i then writes to the array. I'm no assembly expert, but I just don't see how the same executable would be generated by these 2 different lines of code.
Just my two cents.
Changing foo[i++] to foo[++i] without changing anything else would obviously change program semantics, but on some processors, when using a compiler without loop-hoisting optimization logic, incrementing p and q once and then running a loop which performs e.g. *(p++)=*(q++); would be faster than using a loop which performs *(++p)=*(++q);. For very tight loops on some processors the speed difference may be significant (more than 10%), but that's probably the only case in C where post-increment is materially faster than pre-increment. – Flyn