What is the difference in CPU cycles (or, in essence, in 'speed') between
x /= y;
and
#include <cmath>
x = sqrt(y);
EDIT: I know the operations aren't equivalent, I'm just arbitrarily proposing x /= y
as a benchmark for x = sqrt(y)
The answer to your question depends on your target platform. Assuming you are using one of the common x86 CPUs, I can give you this link http://instlatx64.atw.hu/ It is a collection of measured instruction latencies (how long it takes the CPU to produce a result once it has the arguments) and pipelining details for many x86 and x86_64 processors. If your target is not x86, you can try to measure the cost yourself or consult your CPU's documentation.
First, you should get a disassembly of your operations (from the compiler, e.g. with gcc: gcc file.c -O3 -S -o file.asm, or by disassembling the compiled binary, e.g. with the help of a debugger).
Remember that your operation also involves loading and storing a value, which must be counted additionally.
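For example, a minimal sketch (the file and function names are only illustrative) that you could compile with the command above and then search for the divide and square-root instructions:
// divsqrt.cpp - compile with e.g.: g++ divsqrt.cpp -O3 -S -o divsqrt.asm
// then look for divsd/sqrtsd (SSE2) or fdiv/fsqrt (x87) in the output.
#include <cmath>
double do_div(double x, double y) { return x / y; }   // should compile to a divide instruction
double do_sqrt(double y) { return std::sqrt(y); }     // ideally a single sqrt instruction (see the -ffast-math note below)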
Here are two examples from friweb.hu:
For the Core 2 Duo E6700, it gives the latency (L) of SQRT (x87, SSE and SSE2 versions) and of DIVIDE (of floating-point numbers) for each operand width.
For newer processors the cost is lower and is almost the same for DIV and for SQRT; e.g. on a Sandy Bridge Intel CPU, floating-point SQRT is even a tick faster than floating-point DIVIDE for 32-bit operands.
So: on older CPUs, sqrt is itself 30-50 % slower than fdiv; on newer CPUs the cost is about the same, and both operations are cheaper than they were on older CPUs. Longer floating-point formats need more time; e.g. 64-bit takes about twice as long as 32-bit, but 80-bit is cheap compared with 64-bit.
Also, newer CPUs have vector instructions (SSE, SSE2, AVX) that run at the same speed as the scalar (x87) ones but operate on 2-4 values of the same type at once. If you can arrange your loop to apply the same operation to several FP values, you will get more performance out of the CPU.
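For instance, a small sketch (using SSE intrinsics, an assumption on my part rather than something from the answer) where one SQRTPS instruction produces four single-precision square roots at once:
#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>
int main() {
    alignas(16) float in[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    alignas(16) float out[4];
    __m128 v = _mm_load_ps(in);         // load 4 packed floats
    _mm_store_ps(out, _mm_sqrt_ps(v));  // one SQRTPS computes all 4 square roots
    for (float f : out) printf("%f\n", f);
    return 0;
}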
cmath uses the same sqrt() as the C version in math.h. But internally sqrt() may involve a bit more than just the FSQRT asm instruction, e.g. error handling. Also, gcc will sometimes not inline the call to sqrt(), so function-call overhead may be added on top. You need to check the disassembly of YOUR function and grep it for machine instructions with "sqrt" in their names. Also try the option -ffast-math. – Clinkerbuilt
If the square root function isn't implemented in special hardware or software, most library functions would calculate it using Newton's method, which converges quadratically.
Newton's method is an iterative method: you make an initial guess, calculate a trial result, and use that as the next guess, repeating until the result is "close enough." For square root you can actually prove how many iterations you need: because convergence is quadratic, each pass through the loop roughly doubles the number of accurate digits, so most implementations reach the precision limit of a double within 8-9 iterations.
If you read this carefully, you'll see that each iteration of Newton's method does two subtractions, one multiplication, and one division.
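As an illustrative sketch only (a real libm does not work this way, and the operation count per step depends on the exact formulation; the classic Babylonian form below uses one addition, one division and one multiplication per iteration):
#include <cmath>
#include <cstdio>
// Newton iteration for sqrt(y): solve f(x) = x*x - y = 0,
// giving x_{n+1} = (x_n + y/x_n) / 2.
double newton_sqrt(double y) {
    if (y < 0.0) return NAN;         // no real root
    if (y == 0.0) return 0.0;
    double x = y > 1.0 ? y : 1.0;    // crude initial guess
    for (int i = 0; i < 20; ++i) {   // quadratic convergence: far fewer than 20 steps needed
        double next = 0.5 * (x + y / x);
        if (next == x) break;        // converged to the precision limit of double
        x = next;
    }
    return x;
}
int main() {
    printf("%.17g\n%.17g\n", newton_sqrt(2.0), std::sqrt(2.0));
    return 0;
}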
As a general rule of thumb: both floating-point division and square root are considered slow operations (compared to fast ones like addition or multiplication). Square root can be expected to be approximately the same speed as division, or somewhat slower (i.e. roughly 1x-2x the cost). E.g. on the Pentium Pro:
Division and square root have a latency of 18 to 36 and 29 to 69 cycles, respectively
To get a more accurate answer, you need to dig into the architecture manual for your platform or perform a benchmark.
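If you go the benchmark route, a rough micro-benchmark sketch along these lines (a setup I am assuming, not part of the answer; it times a dependent chain so latency dominates) can give an indicative ratio, though the architecture manual or a profiler remains the authoritative source:
#include <chrono>
#include <cmath>
#include <cstdio>
// Times a dependent chain of operations so each one must wait for the previous
// result, exposing latency rather than throughput.
template <class F>
double ns_per_op(F op, double seed, int iters, double& sink) {
    auto t0 = std::chrono::steady_clock::now();
    double x = seed;
    for (int i = 0; i < iters; ++i) x = op(x);
    auto t1 = std::chrono::steady_clock::now();
    sink += x;  // keep the result live so the loop is not optimized away
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
}
int main() {
    const int iters = 50000000;
    double sink = 0.0;
    double div_ns  = ns_per_op([](double x) { return 1.0 / (x + 1e-9); }, 1.5, iters, sink);
    double sqrt_ns = ns_per_op([](double x) { return std::sqrt(x + 1e-9); }, 1.5, iters, sink);
    printf("divide: %.2f ns/op, sqrt: %.2f ns/op (sink=%g)\n", div_ns, sqrt_ns, sink);
    return 0;
}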
Note: many modern platforms also offer an inverse square root, which is approximately as fast as sqrt but is often more useful (e.g. given invsqrt you can compute both sqrt and a reciprocal with one multiplication each: sqrt(y) = y * invsqrt(y) and 1/y = invsqrt(y) * invsqrt(y)).
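For example (this assumes the SSE RSQRTSS approximation as the "inverse square root" in question; the instruction only gives about 12 bits of precision, so one Newton-Raphson refinement step is added):
#include <xmmintrin.h>
#include <cmath>
#include <cstdio>
// Approximate 1/sqrt(y) via RSQRTSS plus one Newton-Raphson refinement step.
float approx_rsqrt(float y) {
    float r = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(y)));
    r = r * (1.5f - 0.5f * y * r * r);   // refinement: r <- r * (3 - y*r*r) / 2
    return r;
}
int main() {
    float y = 2.0f;
    float r = approx_rsqrt(y);
    printf("sqrt(y): %f (std::sqrt: %f)\n", y * r, std::sqrt(y));  // sqrt(y) = y * invsqrt(y)
    printf("1/y:     %f (exact:     %f)\n", r * r, 1.0f / y);      // 1/y = invsqrt(y)^2
    return 0;
}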
printf and new? In any case, just compare the assembly! – Baro
sqrt is dependent at least partly on the value of x – Urial
These snippets are still completely different. – Urial
sqrt doesn't "slow down your code". Whether sqrt will be "fast" or not depends on your compiler & your platform. – Shaitan