What's the relationship between the real CPU frequency and clock_t (whose unit is a clock tick) in C?
Let's say I have the piece of C code below, which measures the CPU time consumed by running a for loop.
Since CLOCKS_PER_SEC is a constant value (basically 1,000,000) in the C standard library, I wonder how the clock function measures the real CPU cycles consumed by the program when it runs on computers with different CPU frequencies (my laptop's is 2.6 GHz).
And if the two are not related, how does the CPU timer work in this scenario?
#include <time.h>
#include <stdio.h>

int main(void) {
    clock_t start_time = clock();       /* processor time at the start, in clock ticks */
    for (int i = 0; i < 10000; i++) {}  /* the work being timed */
    clock_t end_time = clock();         /* processor time at the end */
    /* convert the difference in ticks to seconds */
    printf("%fs\n", (double)(end_time - start_time) / CLOCKS_PER_SEC);
    return 0;
}
CLOCKS_PER_SEC simply gives the unit of measurement for the value returned by clock(). It isn't "basically 1,000,000" but whatever the OS/compiler decide it should be; for example, on my system it is 1000. That's one reason why it is a fairly blunt tool for timing purposes: its granularity will vary from one system to another. – Irvin
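To see what a given implementation actually uses, a minimal sketch like the one below just prints the constant and a clock() reading; the exact values are system-dependent (POSIX requires CLOCKS_PER_SEC to be 1,000,000, but standard C does not).

#include <stdio.h>
#include <time.h>

int main(void) {
    /* CLOCKS_PER_SEC is implementation-defined: POSIX fixes it at 1000000,
       but the C standard itself does not require any particular value. */
    printf("CLOCKS_PER_SEC on this system: %ld\n", (long)CLOCKS_PER_SEC);

    /* clock() reports processor time in ticks of 1/CLOCKS_PER_SEC seconds,
       independent of the CPU's clock frequency. */
    clock_t t = clock();
    printf("%ld ticks so far = %f s of CPU time\n", (long)t, (double)t / CLOCKS_PER_SEC);
    return 0;
}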
It isn't clear whether you want to know how clock() works or whether you might indeed want to know how to measure the CPU ticks spent on the current program, or maybe how to measure the time spent on the current program in a multithreading (possibly multi-CPU) environment. Those are different questions, and you should ask the one you want answered instead of getting lost in a detail which you think will give you the answer. – Towel
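If the underlying goal is to separate the CPU time charged to the process from the elapsed wall-clock time (for example in a multithreaded program), one option on POSIX systems is clock_gettime(); the sketch below assumes a POSIX platform, since clock_gettime(), CLOCK_PROCESS_CPUTIME_ID, and CLOCK_MONOTONIC are not part of standard C.

#define _POSIX_C_SOURCE 200809L   /* expose clock_gettime() in strict C modes */
#include <stdio.h>
#include <time.h>

/* convert a timespec to seconds as a double */
static double to_seconds(struct timespec ts) {
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    struct timespec cpu_start, cpu_end, wall_start, wall_end;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu_start);  /* CPU time used by the whole process */
    clock_gettime(CLOCK_MONOTONIC, &wall_start);          /* elapsed real time */

    for (volatile int i = 0; i < 10000000; i++) {}        /* volatile keeps the loop from being optimized away */

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu_end);
    clock_gettime(CLOCK_MONOTONIC, &wall_end);

    printf("CPU time:  %f s\n", to_seconds(cpu_end) - to_seconds(cpu_start));
    printf("Wall time: %f s\n", to_seconds(wall_end) - to_seconds(wall_start));
    return 0;
}

In a program that keeps several cores busy, the reported CPU time can exceed the wall-clock time, which is one reason the two clocks are worth measuring separately.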