Is there any performance gain/loss by using unsigned integers over signed integers?
If so, does this go for `short` and `long` as well?
Division by powers of 2 is faster with `unsigned int`, because it can be optimized into a single shift instruction. With `signed int`, it usually requires more machine instructions, because division rounds towards zero, but shifting to the right rounds down. Example:
```c++
int foo(int x, unsigned y)
{
    x /= 8;
    y /= 8;
    return x + y;
}
```
Here is the relevant `x` part (signed division):
```asm
movl 8(%ebp), %eax
leal 7(%eax), %edx
testl %eax, %eax
cmovs %edx, %eax
sarl $3, %eax
```
And here is the relevant `y` part (unsigned division):
```asm
movl 12(%ebp), %edx
shrl $3, %edx
```
`shrl` should be a literal? – Pamphlet

In C++ (and C), signed integer overflow is undefined, whereas unsigned integer overflow is defined to wrap around. Note that in gcc, for example, you can use the `-fwrapv` flag to make signed overflow defined (to wrap around).
Undefined signed integer overflow allows the compiler to assume that overflows don't happen, which may introduce optimization opportunities. See e.g. this blog post for discussion.
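As a minimal sketch of the kind of opportunity this enables (function names are made up for illustration): because signed overflow is undefined, a compiler such as gcc at -O2 may fold the signed comparison below to a constant, while the unsigned version must keep the test, since wraparound is defined.

```c++
// Signed: x + 1 > x can be assumed true (overflow would be UB),
// so this typically compiles to "return 1".
int always_bigger(int x) { return x + 1 > x; }

// Unsigned: x + 1 wraps to 0 when x == UINT_MAX, so the compiler
// must emit a real comparison here.
int always_bigger_u(unsigned x) { return x + 1 > x; }
```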
`INT_MAX + 1` is undefined behavior, as is `INT_MIN % -1`. What has changed is that `signed_integer_type(any_integer_value)` is now defined for any non-indeterminate value, and guaranteed to give you the two's complement result. – Lucilius

`unsigned` leads to the same or better performance than `signed`.
Some examples:

- division by a constant that is a power of 2 (faster with `unsigned`, as shown above)
- checking whether a number is even (for `signed` numbers, gcc does it with 1 instruction, just like in the `unsigned` case)

`short` usually leads to the same or worse performance than `int` (assuming `sizeof(short) < sizeof(int)`). Performance degradation happens when you assign the result of an arithmetic operation (which is usually `int`, never `short`) to a variable of type `short`, which is stored in a processor register (which is also of type `int`). All the conversions from `short` to `int` take time and are annoying.
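A small illustration of that round-trip (hypothetical function, for exposition only):

```c++
// a and b are promoted to int, the addition is done in int, and the
// result must then be truncated back to short for the return value.
short add_shorts(short a, short b) {
    return (short)(a + b);  // a + b has type int, not short
}
```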
Note: some DSPs have fast multiplication instructions for the signed `short` type; in this specific case `short` is faster than `int`.
As for the difference between `int` and `long`, I can only guess (I am not familiar with 64-bit architectures). Of course, if `int` and `long` have the same size (on 32-bit platforms), their performance is also the same.
A very important addition, pointed out by several people:

What really matters for most applications is the memory footprint and utilized bandwidth. You should use the smallest necessary integers (`short`, maybe even `signed`/`unsigned char`) for large arrays. This will give better performance, but the gain is nonlinear (i.e. not by a factor of 2 or 4) and somewhat unpredictable: it depends on cache size and the relationship between calculations and memory transfers in your application.
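For illustration, a hypothetical pair of loops where this effect would show up; when the arrays are much larger than the cache, the 16-bit version moves half as many bytes through the memory hierarchy and can win, even though the arithmetic is the same:

```c++
#include <cstddef>

long long sum16(const short *a, std::size_t n) {
    long long s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];  // half the memory traffic
    return s;
}

long long sum32(const int *a, std::size_t n) {
    long long s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}
```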
`short` is faster than `int` when memory bound. In my experience they have the same performance on x86, and `short` is slower on ARM. – Wagner

`short` instructions are slower than their `int` versions. A few may be faster (like mul/div). Even if they have the same performance, you may still be faster using `int` in higher-level code, because using `short` may result in some zero/sign extension and/or truncation. – Let
This will depend on the exact implementation. In most cases, however, there will be no difference. If you really care, you have to try all the variants you consider and measure performance.

+1 for "if you want to know, you need to measure". It's very annoying that this needs to be answered almost weekly. – Endeavor

This is pretty much dependent on the specific processor.
On most processors, there are instructions for both signed and unsigned arithmetic, so the difference between using signed and unsigned integers comes down to which instructions the compiler uses. If either of the two is faster, it's completely processor-specific, and most likely the difference is minuscule, if it exists at all.
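For example, on a two's-complement machine, addition is the same hardware operation for both; a minimal pair like the following typically compiles to the identical instruction sequence:

```c++
int add_signed(int a, int b) { return a + b; }                   // one ADD
unsigned add_unsigned(unsigned a, unsigned b) { return a + b; }  // the same ADD
```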
The performance difference between signed and unsigned integers is actually more general than the accepted answer suggests. Division of an unsigned integer by any constant can be made faster than division of a signed integer by a constant, regardless of whether the constant is a power of two. See http://ridiculousfish.com/blog/posts/labor-of-division-episode-iii.html
At the end of his post, he includes the following section:
A natural question is whether the same optimization could improve signed division; unfortunately it appears that it does not, for two reasons:
The increment of the dividend must become an increase in the magnitude, i.e. increment if n > 0, decrement if n < 0. This introduces an additional expense.
The penalty for an uncooperative divisor is only about half as much in signed division, leaving a smaller window for improvements.
Thus it appears that the round-down algorithm could be made to work in signed division, but will underperform the standard round-up algorithm.
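To make the "round-up algorithm" concrete, here is a hand-expanded sketch of unsigned division by 9 using the reciprocal-multiplication trick the post describes (the constant and shift are the standard ones for 32-bit operands; compilers emit an equivalent multiply-and-shift instead of a divide):

```c++
#include <cstdint>

// n / 9 for any 32-bit n: multiply by ceil(2^33 / 9) = 0x38E38E39,
// then keep the high bits. One multiply + one shift instead of a DIV.
uint32_t div9(uint32_t n) {
    return (uint32_t)(((uint64_t)n * 0x38E38E39u) >> 33);
}
```

For signed operands the same idea needs the extra magnitude fix-up described in the quote above, which is why it wins less there.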
Not only is division by powers of 2 faster with unsigned types; division by any other value is also faster with unsigned types. If you look at Agner Fog's instruction tables, you'll see that unsigned divisions have similar or better performance than their signed versions.
For example, with the AMD K7:
Instruction | Operands | Ops | Latency | Reciprocal throughput |
---|---|---|---|---|
DIV | r8/m8 | 32 | 24 | 23 |
DIV | r16/m16 | 47 | 24 | 23 |
DIV | r32/m32 | 79 | 40 | 40 |
IDIV | r8 | 41 | 17 | 17 |
IDIV | r16 | 56 | 25 | 25 |
IDIV | r32 | 88 | 41 | 41 |
IDIV | m8 | 42 | 17 | 17 |
IDIV | m16 | 57 | 25 | 25 |
IDIV | m32 | 89 | 41 | 41 |
The same thing applies to the Intel Pentium:
Instruction | Operands | Clock cycles |
---|---|---|
DIV | r8/m8 | 17 |
DIV | r16/m16 | 25 |
DIV | r32/m32 | 41 |
IDIV | r8/m8 | 22 |
IDIV | r16/m16 | 30 |
IDIV | r32/m32 | 46 |
Of course, those are quite ancient; newer architectures, with more transistors, might close the gap, but the basic point applies: you generally need more micro-ops, more logic, more die area, and more latency to do a signed division.
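A minimal pair to see those two instructions in compiler output (for a divisor that isn't a compile-time constant, so the compiler can't use the multiply-by-reciprocal trick):

```c++
unsigned udiv(unsigned a, unsigned b) { return a / b; }  // compiles to DIV
int sdiv(int a, int b) { return a / b; }                 // compiles to IDIV
```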
In short, don't bother before the fact. But do bother after.

If you want performance, you have to use the performance optimizations of a compiler, which may work against common sense. One thing to remember is that different compilers compile code differently, and they have different sorts of optimizations. If we're talking about the `g++` compiler, maxing out its optimization level with `-Ofast`, or at least the `-O3` flag, in my experience it can compile the `long` type into code with even better performance than any `unsigned` type, or even just `int`.

This is from my own experience, and I recommend that you first write your full program and care about such things only afterwards, when you have your actual code in your hands and can compile it with optimizations to try to pick the types that actually perform best. This is also good general advice about optimizing for performance: write quickly first, try compiling with optimizations, and tweak things to see what works best. You should also try compiling your program with different compilers and choose the one that outputs the most performant machine code.
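A throwaway harness in that spirit might look like the sketch below (names and sizes are arbitrary); build it with each compiler and flag combination you care about, e.g. `g++ -O3` vs `g++ -Ofast`, and compare the numbers:

```c++
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::vector<unsigned> v(1 << 24, 123456789u);  // ~64 MiB of input
    unsigned long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (unsigned x : v) sum += x / 9;             // swap in the type/operation under test
    auto t1 = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("sum=%llu time=%lld us\n", sum, (long long)us);
}
```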
An optimized multi-threaded linear algebra calculation program can easily show a >10x performance difference between finely optimized and unoptimized code. So this does matter.
Optimizer output contradicts logic in plenty of cases. For example, I had a case where the difference between `a[x] += b` and `a[x] = b` changed program execution time almost 2x. And no, `a[x] = b` wasn't the faster one.
Here is, for example, NVidia stating that for programming their GPUs:
Note: As was already the recommended best practice, signed arithmetic should be preferred over unsigned arithmetic wherever possible for best throughput on SMM. The C language standard places more restrictions on overflow behavior for unsigned math, limiting compiler optimization opportunities.
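A sketch of the kind of code this note has in mind (plain C++ here, not an actual CUDA kernel): with a signed index, the compiler may assume `i * 4` never overflows, so it can fold the scaling into a single 64-bit address computation; with an unsigned index, it must preserve 32-bit wraparound semantics.

```c++
float load_signed(const float *p, int i) {
    return p[i * 4];  // i * 4 assumed not to wrap: one scaled addressing mode
}

float load_unsigned(const float *p, unsigned i) {
    return p[i * 4];  // i * 4 may legally wrap mod 2^32: extra instructions
}
```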
Traditionally, `int` is the native integer format of the target hardware platform. Any other integer type may incur performance penalties.
EDIT: Things are slightly different on modern systems:

- `int` may in fact be 32-bit on 64-bit systems for compatibility reasons. I believe this happens on Windows systems.
- Modern compilers may implicitly use `int` when performing computations for shorter types in some cases.
`int` is still 32 bits wide, but 64-bit types (`long` or `long long`, depending on the OS) should be at least as fast. – Bevy

`int` is always 32 bits wide on all systems I know (Windows, Linux, Mac OS X, regardless of whether the processor is 64-bit or not). It's the `long` type that is different: 32 bits on Windows, but one word on Linux and OS X. – Bevy

`int` does not have to be always 32 bits wide. – Archivolt

`int` is a type that the CPU can operate on efficiently, e.g. 32-bit on most 64-bit platforms, because they all have efficient 32-bit integer ops; this is especially true on x86-64, where `int` is the native integer format, the default operand size in machine code. (The full register width is 64-bit, but using it takes an extra byte of machine code per instruction.) And usually `int` is not too huge; it's very rarely 64-bit, though of course it actually is on some systems, IIRC some older Crays. – Refract

This implies that `int` being 32-bit "for compatibility" had a performance downside. It doesn't. 32-bit is also the best choice for performance, especially on x86-64 (fewer advantages than on AArch64, where `uint64_t` doesn't cost extra code size). See "The advantages of using 32bit registers/instructions in x86-64". – Refract
Refract IIRC, on x86 signed/unsigned shouldn't make any difference. Short/long, on the other hand, is a different story, since the amount of data that has to be moved to/from RAM is bigger for longs (other reasons may include cast operations like extending a short to long).
Signed and unsigned integers will both always operate as single-clock instructions and have the same read/write performance, but according to Dr. Andrei Alexandrescu, unsigned is preferred over signed. The reason is that you can fit twice as many numbers in the same number of bits, because you're not wasting the sign bit, and you will use fewer instructions checking for negative numbers, yielding performance increases from the decreased ROM. In my experience with the Kabuki VM, which features an ultra-high-performance script implementation, it is rare that you actually require a signed number when working with memory. I've spent many years doing pointer arithmetic with signed and unsigned numbers and I've found no benefit to signed when no sign bit is needed.
Where signed may be preferred is when using bit shifting to perform multiplication and division by powers of 2, because you can divide negative values by powers of 2 with signed two's-complement integers. Please see some more YouTube videos from Andrei for more optimization techniques. You can also find some good info in my article about the world's fastest integer-to-string conversion algorithm.
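One caveat worth showing (the behavior of right-shifting a negative value is guaranteed to be arithmetic only since C++20; before that it was implementation-defined, though universal on two's-complement targets): a shift rounds down, while `/` rounds toward zero, so the two disagree for negative operands that aren't exact multiples.

```c++
int shift_div(int x) { return x >> 3; }  // -8 >> 3 == -1, and -7 >> 3 == -1 (rounds down)
int true_div(int x)  { return x / 8; }   // -8 / 8  == -1, but -7 / 8  ==  0 (rounds toward zero)
```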
Unsigned integers are advantageous in that you can store and treat them as a plain bitstream, i.e. just data without a sign, so multiplication and division become easier (faster) with bit-shift operations.