Behaviour of CLOCKS_PER_SEC in different Operating Systems
I was running a C++ program and noticed that on Windows 7, CLOCKS_PER_SEC gives 1000, while on Linux Fedora 16 it gives 1000000. Can anyone justify this behaviour?

Lainelainey answered 3/9, 2012 at 8:11 Comment(2)
It depends on the clock() implementation on your OS; see this question for more info: #588807 – Ijssel
Easy: if it didn't vary between implementations, the constant wouldn't be necessary. It exists because it is up to the implementation what kind of timer resolution to provide under this API. And Windows goes for 1000 ticks per second. – Refined

What's to justify? CLOCKS_PER_SEC is implementation defined, and can be anything. All it indicates is the units returned by the function clock(). It doesn't even indicate the resolution of clock(): POSIX requires it to be 1000000, regardless of the actual resolution. If Windows is returning 1000, that's probably not the actual resolution either. (I find that my Linux box has a resolution of 10ms, and my Windows box 15ms.)
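To make the unit-conversion point concrete, here is a minimal sketch (not part of the original answer): whatever value CLOCKS_PER_SEC happens to have on a given platform, dividing a difference of clock() values by it yields seconds of CPU time. The busy-work loop is an arbitrary placeholder.

    #include <cstdio>
    #include <ctime>

    int main() {
        // CLOCKS_PER_SEC is just the unit of clock(), not its resolution.
        std::printf("CLOCKS_PER_SEC = %ld\n",
                    static_cast<long>(CLOCKS_PER_SEC));

        std::clock_t start = std::clock();

        volatile double sink = 0.0;           // CPU-bound busy work to measure
        for (long i = 0; i < 100000000L; ++i)
            sink += i * 0.5;

        std::clock_t end = std::clock();

        // Convert ticks to seconds using CLOCKS_PER_SEC, whatever it is.
        double cpu_seconds = static_cast<double>(end - start) / CLOCKS_PER_SEC;
        std::printf("CPU time used: %.3f s\n", cpu_seconds);
    }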

Psalter answered 3/9, 2012 at 8:59 Comment(3)
OK, so the clock() function has nothing to do with the clock speed of the processor, and it's just to compute the time taken by the process. Am I right? – Lainelainey
@AkashdeepSaluja ...to compute the CPU time taken by the process, not the real time. Cf. the great sleep example here. – Resolved
@AkashdeepSaluja Right. clock() is sort of a primitive benchmarking tool. It returns arbitrary values (but on the systems I've used, the first call always returns 0). The difference between two calls returns the CPU time used between the two calls, measured in 1 second/CLOCKS_PER_SEC units. (Note however that under Windows, it will return elapsed time, rather than CPU time.) – Psalter
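To illustrate the sleep contrast mentioned in the comments above, a small sketch (assuming a C++11 compiler; the two-second sleep is just a placeholder): clock() should charge almost no CPU time for the sleep, while a wall-clock timer reports roughly the full duration. On the Windows CRTs discussed here, clock() tracks elapsed time instead, so both numbers come out close to the sleep length.

    #include <chrono>
    #include <cstdio>
    #include <ctime>
    #include <thread>

    int main() {
        std::clock_t c0 = std::clock();
        auto w0 = std::chrono::steady_clock::now();

        // Idle for two seconds: uses wall time but almost no CPU time.
        std::this_thread::sleep_for(std::chrono::seconds(2));

        std::clock_t c1 = std::clock();
        auto w1 = std::chrono::steady_clock::now();

        double cpu  = static_cast<double>(c1 - c0) / CLOCKS_PER_SEC;
        double wall = std::chrono::duration<double>(w1 - w0).count();

        // Expected on a conforming implementation: CPU time near 0,
        // wall time near 2. Where clock() returns elapsed time, both
        // will be near 2.
        std::printf("CPU time: %.3f s, wall time: %.3f s\n", cpu, wall);
    }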

Basically the implementation of the clock() function has some leeway for different operating systems. On Linux Fedora, the clock ticks faster. It ticks 1 million times a second.

This clock tick is distinct from the clock rate of your CPU, on a different layer of abstraction. Windows tries to make the number of clock ticks equal to the number of milliseconds.

This macro expands to an expression representing the number of clock ticks in a second, as returned by the function clock.

Dividing a count of clock ticks by this expression yields the number of seconds.

CLK_TCK is an obsolete alias of this macro.

Reference: http://www.cplusplus.com/reference/clibrary/ctime/CLOCKS_PER_SEC/
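As a rough, non-authoritative way to check the point made in the other answer (that the real resolution of clock() can be far coarser than 1/CLOCKS_PER_SEC suggests), one can spin until the value returned by clock() changes and look at the size of the step:

    #include <cstdio>
    #include <ctime>

    int main() {
        // Spin until clock() advances, a few times, and record the
        // smallest step observed. Spinning burns CPU time, so clock()
        // is guaranteed to move forward eventually.
        std::clock_t prev = std::clock();
        std::clock_t min_step = 0;

        for (int i = 0; i < 10; ++i) {
            std::clock_t now;
            do {
                now = std::clock();
            } while (now == prev);

            std::clock_t step = now - prev;
            if (min_step == 0 || step < min_step)
                min_step = step;
            prev = now;
        }

        std::printf("smallest observed step: %ld ticks (~%.3f ms)\n",
                    static_cast<long>(min_step),
                    1000.0 * min_step / CLOCKS_PER_SEC);
    }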

You should also know that the Windows implementation is not for true real-time applications. The 1000 tick clock is derived by dividing a hardware clock by a power of 2. That means that they actually get a 1024 tick clock. To convert it to a 1000 tick clock, Windows will skip certain ticks, meaning some ticks are slower than others!

A separate hardware clock (not the CPU clock) is normally used for timing. Reference: http://en.wikipedia.org/wiki/Real-time_clock

Heckelphone answered 3/9, 2012 at 8:14 Comment(8)
The last paragraph doesn't make sense, honestly. If you divide a 3,000,000,000 Hz CPU clock rate by powers of 2, you don't get a 1024 Hz clock. And you'd get yet another result for a 3.1 GHz CPU. I.e. it just can't work like you explained. – Sugarplum
Plus, many CPUs don't even run at a fixed rate these days because of power-saving mechanisms, so real-time clocks in general don't count clock cycles any more. – Spires
I am a bit confused: if CLOCKS_PER_SEC is different from the actual CPU clock, then what exactly does it give? – Lainelainey
Sorry, @MSalters, I made a mistake; it actually uses a separate hardware clock. I updated the answer. – Heckelphone
@AkashdeepSaluja An arbitrary value, which defines the units returned by clock(). That's really all you can say about it (except that clock() doesn't work as specified under Windows; it's supposed to return the CPU time). – Psalter
@JamesKanze Thanks, I got it, but another doubt struck me: if I run a program with the same input many times, the clock function returns different times, varying by 1%-2%. If clock() returns the CPU time and does not include any waiting time for the process, then why is there a difference? – Lainelainey
All sorts of possibilities: caching? Cache misses are expensive. Also, the first time you run a program it will probably take longer to load the libraries and such. – Heckelphone
@AkashdeepSaluja First, are you running it under Windows? If so: Windows clock() is broken, and returns elapsed time. Otherwise: there are always issues of cache misses or hits, etc. which will affect the CPU time, as well as pipeline issues. These depend on what else is happening in the processor at the same time, but will be charged to the process. – Psalter
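A small sketch of the variance being discussed: timing the same CPU-bound loop several times in one run makes cache warm-up and scheduling noise visible as differences between runs. The loop body and iteration count are arbitrary placeholders.

    #include <cstdio>
    #include <ctime>

    // Arbitrary CPU-bound work whose timing we repeat and compare.
    static double work() {
        volatile double sink = 0.0;
        for (long i = 0; i < 50000000L; ++i)
            sink += i * 0.5;
        return sink;
    }

    int main() {
        for (int run = 0; run < 5; ++run) {
            std::clock_t start = std::clock();
            work();
            std::clock_t end = std::clock();

            // The reported CPU time typically varies by a percent or two
            // between runs, even with identical input.
            std::printf("run %d: %.3f s of CPU time\n", run,
                        static_cast<double>(end - start) / CLOCKS_PER_SEC);
        }
    }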
