I am reading the single.dvi chapter of OSTEP. In the homework section, it says:
One thing you’ll have to take into account is the precision and accuracy of your timer. A typical timer that you can use is gettimeofday(); read the man page for details. What you’ll see there is that gettimeofday() returns the time in microseconds since 1970; however, this does not mean that the timer is precise to the microsecond. Measure back-to-back calls to gettimeofday() to learn something about how precise the timer really is; this will tell you how many iterations of your null system-call test you’ll have to run in order to get a good measurement result. If gettimeofday() is not precise enough for you, you might look into using the rdtsc instruction available on x86 machines.
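As an aside on that last suggestion, a minimal sketch of reading the timestamp counter might look like the following (this assumes GCC or Clang on an x86 machine, where the __rdtsc() intrinsic from <x86intrin.h> is available; note that the counter is in CPU cycles, not microseconds):

#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() intrinsic (GCC/Clang, x86) */

int main(void) {
    /* Read the timestamp counter twice back to back; the difference
       is a rough lower bound on the cost of reading it, in cycles. */
    unsigned long long start = __rdtsc();
    unsigned long long end   = __rdtsc();
    printf("back-to-back rdtsc delta: %llu cycles\n", end - start);
    return 0;
}

For now, though, I focused on gettimeofday().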
I wrote some code to test the cost of calling the gettimeofday() function, shown below:
#include <stdio.h>
#include <sys/time.h>

#define MAX_TIMES 100000

void m_gettimeofday() {
    struct timeval current_time[MAX_TIMES];
    int i;

    /* Record MAX_TIMES back-to-back timestamps. */
    for (i = 0; i < MAX_TIMES; ++i) {
        gettimeofday(&current_time[i], NULL);
    }

    printf("seconds: %ld\nmicro_seconds: %ld\n",
           current_time[0].tv_sec, current_time[0].tv_usec);
    printf("seconds: %ld\nmicro_seconds: %ld\n",
           current_time[MAX_TIMES - 1].tv_sec, current_time[MAX_TIMES - 1].tv_usec);
    printf("the average time of a gettimeofday function call is: %ld us\n",
           (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / MAX_TIMES);
}

int main(int argc, char *argv[]) {
    m_gettimeofday();
    return 0;
}
However, the output will always be 0 microseconds. It seems as though the precision of the gettimeofday() function is exactly one microsecond. What's wrong with my test code? Or have I misunderstood the author's meaning? Thanks for the help!
You can use the clock_getres function on the CLOCK_REALTIME clock to get the resolution in nanoseconds. – Disobedient
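For reference, a minimal sketch of that suggestion might look like this (assuming a POSIX system; clock_getres() reports the clock's granularity, not the cost of a call, and on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    /* Query the advertised resolution of two common POSIX clocks. */
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0) {
        printf("CLOCK_REALTIME resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    }
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0) {
        printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    }
    return 0;
}

Note that the reported resolution only tells you the clock's granularity; estimating the per-call overhead still requires timing many back-to-back calls, as the book suggests.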