In C, why is signed int faster than unsigned int?

True, I know that this has been asked and answered multiple times on this website (links below). However, most people said that there is no difference. I have written code and accidentally found a significant performance difference.

Why would the "unsigned" version of my code be slower than the "signed" version (even when testing the same number)? (I have an x86-64 Intel processor.)
Similar links
Compile Command: gcc -Wall -Wextra -pedantic -O3 -Wl,-O3 -g0 -ggdb0 -s -fwhole-program -funroll-loops -pthread -pipe -ffunction-sections -fdata-sections -std=c11 -o ./test ./test.c && strip --strip-all --strip-unneeded --remove-section=.note --remove-section=.comment ./test
signed int version

NOTE: There is no difference if I explicitly declare signed int on all numbers.
int isprime(int num) {
    // Test if a signed int is prime
    int i;
    if (num % 2 == 0 || num % 3 == 0)
        return 0;
    else if (num % 5 == 0 || num % 7 == 0)
        return 0;
    else {
        for (i = 11; i < num; i += 2) {
            if (num % i == 0) {
                if (i != num)
                    return 0;
                else
                    return 1;
            }
        }
    }
    return 1;
}
unsigned int version
int isunsignedprime(unsigned int num) {
    // Test if an unsigned int is prime
    unsigned int i;
    if (num % (unsigned int)2 == (unsigned int)0 || num % (unsigned int)3 == (unsigned int)0)
        return 0;
    else if (num % (unsigned int)5 == (unsigned int)0 || num % (unsigned int)7 == (unsigned int)0)
        return 0;
    else {
        for (i = (unsigned int)11; i < num; i += (unsigned int)2) {
            if (num % i == (unsigned int)0) {
                if (i != num)
                    return 0;
                else
                    return 1;
            }
        }
    }
    return 1;
}
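As several commenters point out below, the casts add noise: an unsigned constant can simply be written with a u suffix. A sketch of the same function using literals instead of casts is shown here under the hypothetical name isunsignedprime_literals; since (unsigned int)2 and 2u denote the same value of the same type, the compiler should emit identical code for both spellings.

int isunsignedprime_literals(unsigned int num) {
    // Same test as the function above, written with unsigned literals instead of casts
    unsigned int i;
    if (num % 2u == 0u || num % 3u == 0u)
        return 0;
    else if (num % 5u == 0u || num % 7u == 0u)
        return 0;
    else {
        for (i = 11u; i < num; i += 2u) {
            if (num % i == 0u) {
                if (i != num)
                    return 0;
                else
                    return 1;
            }
        }
    }
    return 1;
}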
Test this in a file with the code below:

#include <stdio.h>

int main(void) {
    printf("%d\n", isprime(294967291));
    printf("%d\n", isprime(294367293));
    printf("%d\n", isprime(294967293));
    printf("%d\n", isprime(294967241)); // slow
    printf("%d\n", isprime(294967251));
    printf("%d\n", isprime(294965291));
    printf("%d\n", isprime(294966291));
    printf("%d\n", isprime(294963293));
    printf("%d\n", isprime(294927293));
    printf("%d\n", isprime(294961293));
    printf("%d\n", isprime(294917293));
    printf("%d\n", isprime(294167293));
    printf("%d\n", isprime(294267293));
    printf("%d\n", isprime(294367293)); // slow
    printf("%d\n", isprime(294467293));
    return 0;
}
Results (time ./test):
Signed - real 0m0.949s
Unsigned - real 0m1.174s
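time ./test only measures the whole process. To isolate the slow inputs, here is a minimal sketch, assuming a POSIX system with clock_gettime and that the two functions above are compiled into the same file (it is a standalone main, replacing the one above), which times a single call per version:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int isprime(int num);                  /* signed version above */
int isunsignedprime(unsigned int num); /* unsigned version above */

static double elapsed_ms(struct timespec a, struct timespec b) {
    /* Difference between two timespecs, in milliseconds */
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    int s = isprime(294967241);            /* one of the "slow" inputs */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("signed:   %d  %.1f ms\n", s, elapsed_ms(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    int u = isunsignedprime(294967241u);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("unsigned: %d  %.1f ms\n", u, elapsed_ms(t0, t1));

    return 0;
}

Per-call timings make it easier to see that only the inputs which reach the long trial-division loop take measurable time.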
Comments:

Did you try declaring the numbers explicitly signed as well? – Eerie

The generated code differs only in the division instruction (idivl vs divl) on my machine, but the execution times are roughly what the OP showed. The difference is then due to these instructions... Is there any border effect due to instruction parallelism, out-of-order execution, etc.? – Bifilar

div (unsigned) takes one more µop (10 vs 9) and has a slightly worse (on average) throughput compared to 32-bit idiv (signed), but that seems like too small a difference to explain this – Jamajamaal

The 64-bit idiv is a lot worse than the 64-bit div. But the difference in time here is over 20%, right, while the difference in throughput between 32-bit idiv and div is nowhere near that – Jamajamaal

(unsigned int)3 can be more clearly written 3u – Safko

diff the generated assembly code. – Safko

Use 2U, which is an unsigned literal, making the (unsigned) cast unnecessary (and the code more readable). – Singlebreasted

(signed)2 is exactly the same as 2, because integer literals are int unless the value doesn't fit in an int. The same goes for (unsigned)2 and 2U. – Brazier

The loop runs all the way up to num, whereas this at least could be optimized down to testing only factors of the form 6k±1 up until sqrt(num), which significantly reduces the number of loops needed. – Brazier

With int x, you are not explicitly declaring a signed int; you are declaring it implicitly. To be explicit, you would write signed int x. To explicitly declare x signed, and implicitly declare it an integer, you would write signed x. – Matthus

"In C, why is 'signed int' faster than 'unsigned int'?" is not the true question here. – Herald
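For reference, a minimal sketch of the trial-division shortcut Brazier describes above: test 2 and 3, then only candidates of the form 6k±1, and stop at sqrt(num) instead of walking all the way to num. The name isprime_6k is made up for this sketch, and it changes the algorithm, so it sidesteps rather than answers the signed-vs-unsigned question.

#include <stdbool.h>

/* 6k±1 trial division as suggested in the comments (not the OP's code):
   handle 2 and 3, then test only candidates of the form 6k-1 and 6k+1,
   stopping once i*i exceeds num. */
bool isprime_6k(unsigned int num) {
    if (num < 2)
        return false;
    if (num < 4)
        return true;                          /* 2 and 3 are prime */
    if (num % 2 == 0 || num % 3 == 0)
        return false;
    for (unsigned int i = 5; (unsigned long long)i * i <= num; i += 6) {
        if (num % i == 0 || num % (i + 2) == 0)
            return false;                     /* divisible by 6k-1 or 6k+1 */
    }
    return true;
}

With roughly sqrt(num)/6 iterations instead of about num/2, the 294-million-range inputs above need only a few thousand loop iterations, so any signed-versus-unsigned difference in the division instruction becomes unmeasurable.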