What is the precision of the gettimeofday function?
I am reading the chapter single.dvi of OSTEP. In the homework part, it says:

One thing you’ll have to take into account is the precision and accuracy of your timer. A typical timer that you can use is gettimeofday(); read the man page for details. What you’ll see there is that gettimeofday() returns the time in microseconds since 1970; however, this does not mean that the timer is precise to the microsecond. Measure back-to-back calls to gettimeofday() to learn something about how precise the timer really is; this will tell you how many iterations of your null system-call test you’ll have to run in order to get a good measurement result. If gettimeofday() is not precise enough for you, you might look into using the rdtsc instruction available on x86 machines.

I wrote some code to test the cost of calling gettimeofday() function as below:

#include <stdio.h>
#include <sys/time.h>

#define MAX_TIMES 100000

void m_gettimeofday() {
    struct timeval current_time[MAX_TIMES];
    int i;
    for (i = 0; i < MAX_TIMES; ++i) {
        gettimeofday(&current_time[i], NULL);
    }
    printf("seconds: %ld\nmicro_seconds: %ld\n", current_time[0].tv_sec, current_time[0].tv_usec);
    printf("seconds: %ld\nmicro_seconds: %ld\n", current_time[MAX_TIMES - 1].tv_sec, current_time[MAX_TIMES - 1].tv_usec);
    printf("the average time of a gettimeofday function call is: %ld us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / MAX_TIMES);
}

int main(int argc, char *argv[]) {
    m_gettimeofday();
    return 0;
}

However, the output will always be 0 microseconds. It seems like the precision of the gettimeofday() function is exactly one microsecond. What's wrong with my test code? Or have I misunderstood the author's meaning? Thanks for the help!

Cryotherapy answered 8/9, 2021 at 8:40 Comment(5)
pubs.opengroup.org/onlinepubs/9699919799/functions/… ==> "The resolution of the system clock is unspecified." and "Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function."Emmalynn
Does this mean that the accuracy of the function is uncertain?Cryotherapy
You can use the clock_getres function on the CLOCK_REALTIME clock to get the resolution in nanoseconds.Disobedient
A function that is not guaranteed to be accurate may still give a perfectly accurate result. It just cannot be relied on to do so.Cosper
The resolution is unspecified by POSIX. That does not mean it is unspecified by your environment: on your computer, with your current version of the operating system and C library, it is a specific value; you need to check your documentation.Emmalynn

The average number of microseconds that passes between consecutive calls to gettimeofday is usually less than one - on my machine it is somewhere between 0.05 and 0.15.

Modern CPUs usually run at GHz speeds - i.e. billions of instructions per second - so two consecutive instructions should take on the order of nanoseconds, not microseconds (obviously two calls to a function like gettimeofday are more complex than two simple opcodes, but they should still take on the order of tens of nanoseconds, not more).

But you are performing integer division - dividing (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) by MAX_TIMES - which in C yields an integer result as well, in this case 0.


To get the real measurement, divide by (double)MAX_TIMES (and print the result as a double):

printf("the average time of a gettimeofday function call is: %f us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / (double)MAX_TIMES);

As a bonus - on Linux systems the reason gettimeofday is so fast (you might imagine it to be a more expensive function, calling into the kernel and incurring the overhead of a syscall) is a special feature called the vDSO, which lets the kernel expose information such as the current time to user space so the call can complete without actually entering the kernel.
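If you want to see that effect yourself, here is a rough sketch (Linux-only, and it assumes SYS_gettimeofday is defined for your architecture, as it is on x86-64) that times the ordinary libc wrapper against a forced syscall(2) of the same call:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/time.h>
#include <sys/syscall.h>
#include <unistd.h>

#define N 1000000

int main(void) {
    struct timeval tv, t0, t1;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; ++i)
        gettimeofday(&tv, NULL);              /* vDSO path: no kernel entry */
    gettimeofday(&t1, NULL);
    printf("libc gettimeofday:    %.1f ns/call\n",
           ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_usec - t0.tv_usec) * 1e3) / N);

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; ++i)
        syscall(SYS_gettimeofday, &tv, NULL); /* forced system call: enters the kernel */
    gettimeofday(&t1, NULL);
    printf("raw SYS_gettimeofday: %.1f ns/call\n",
           ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_usec - t0.tv_usec) * 1e3) / N);

    return 0;
}

On a typical x86-64 desktop the second number should come out noticeably larger than the first; that gap is the syscall overhead the vDSO avoids.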

Prau answered 8/9, 2021 at 8:48 Comment(2)
So does that mean there is no accuracy problem with gettimeofday on my machine? Though I know I should use clock_gettime instead of it.Cryotherapy
Yes, you just need to fix the way you calculate the average.Prau

gettimeofday(2) has been declared obsolete in favor of clock_gettime(2), which has better resolution than the older call (it uses nanosecond resolution).

Precision is a different issue: it depends on how the hardware allows you to take a timestamp and on how the operating system implements it.

On Linux/Intel-based systems there is normally good timing hardware available, and Linux implements it well, so you can usually get true nanosecond precision when dealing with timestamps. But don't expect that precision on a machine with a poor quartz oscillator that is not PPS-synchronized. You don't specify what kind of timestamps you need to acquire, but if you need absolute timestamps, to be compared with official time, don't expect them to be closer than a few hundred milliseconds (assuming an NTP-synchronized machine with an ordinary quartz oscillator).

Anyway, to get the average time of the calls you make, you have two problems:

  • You need to call gettimeofday(2) MAX_TIMES + 1 times, because you are measuring the interval between pairs of timestamps (each interval covers the time from entering the system call until it captures its timestamp, plus the time from the previous timestamp being captured until its value is delivered back to the caller, in reverse order). The best way to do this is to take a timestamp t0 at the beginning and, after MAX_TIMES further calls, a final timestamp t1 at the end. Only then can you determine the elapsed time from t0 to t1 and divide it by MAX_TIMES. To compute the difference, subtract t0.tv_usec from t1.tv_usec; if the result is negative, add 1000000 to it and subtract one from the t1.tv_sec - t0.tv_sec difference. The tv_sec part then holds the difference in whole seconds and the tv_usec part the remaining microseconds (see the sketch after this list).
  • This assumes that the system call overhead doesn't change, which is not true. Sometimes the system call takes more time than others, and the value you are really interested in is not the average but the minimum it can reach. Still, you can settle for the average, given that you are not going to get below-microsecond resolution this way.
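A minimal sketch of that measurement scheme, reusing the MAX_TIMES constant from the question (the variable names are just illustrative):

#include <stdio.h>
#include <sys/time.h>

#define MAX_TIMES 100000

int main(void) {
    struct timeval t0, t1;
    int i;

    gettimeofday(&t0, NULL);            /* start timestamp; together with the loop, MAX_TIMES + 1 calls */
    for (i = 0; i < MAX_TIMES; ++i)
        gettimeofday(&t1, NULL);        /* MAX_TIMES timed calls; t1 ends up holding the last one */

    long sec  = t1.tv_sec  - t0.tv_sec;
    long usec = t1.tv_usec - t0.tv_usec;
    if (usec < 0) {                     /* borrow one second if the microsecond part went negative */
        usec += 1000000;
        sec  -= 1;
    }
    printf("average per call: %f us\n", (sec * 1e6 + usec) / MAX_TIMES);
    return 0;
}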

In any case, I recommend you use the clock_gettime(2) system call, as it has nanosecond resolution.
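For example, a small sketch that prints the clock's reported resolution (via clock_getres(2), as mentioned in the comments above) and a nanosecond timestamp; CLOCK_MONOTONIC is used here because it is convenient for interval measurements, but CLOCK_REALTIME works the same way (on old glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res, ts;

    clock_getres(CLOCK_MONOTONIC, &res);   /* reported resolution of this clock */
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* current timestamp with nanosecond fields */

    printf("resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    printf("now:        %ld s %ld ns\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}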

Tom answered 10/9, 2021 at 11:11 Comment(0)
O
0
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval x, z;
    int i;
    gettimeofday(&x, NULL);
    z = x;
    for (i = 1; z.tv_sec == x.tv_sec && z.tv_usec == x.tv_usec; i++)
        gettimeofday(&z, NULL);  /* >10,000 loops until gettimeofday() returns a different value */
    printf("loops: %d, diff: %.9f\n", i,
           (z.tv_sec - x.tv_sec) + (z.tv_usec - x.tv_usec) / 1e6);
    /* 0.015600204 = diff gettimeofday() on my desktop. Your mileage may vary. */
    /* ns = nanosecond  = 10^-9; 1/1,000,000,000 => "%.9f" */
    /* us = microsecond = 10^-6; 1/1,000,000     => "%.6f" */
    /* ms = millisecond = 10^-3; 1/1,000         => "%.3f" */
    return 0;
}
Ophidian answered 1/3, 2023 at 16:57 Comment(3)
Please edit this answer to improve clarity.Giovanna
On my desktop, the best precision I get is about 2/10th of 1 u-sec. That means when I take two readings and subtract, t1-t0, the smallest difference is almost 1 microsec. If I only execute a few instructions, the difference will appear to be 0, even though I know I used some CPU time. NOTE: when the system is loaded with backups, etc., the apparent precision of the timestamp is much less, maybe 1/2 second, because my program gets a small slice of CPU time and others (backup, etc.) get their share and the computer is "slow". Does that make things clearer?Ophidian
Yes, this makes the answer clearer, but it is better to include that information as an edit in the answer itself rather than a comment.Giovanna

You are misunderstanding the author. The author is asking two different questions:

  1. What is the precision of gettimeofday() on your computer? Specifically, is it to the microsecond level of precision?
  2. What is the time cost of a system call on your computer? Specifically, a null read system call?

With question 1, the author is asking whether your computer can handle microsecond-level precision. If you know about precision (significant digits), then the solution to question 1 is quite easy. Here is a simple C program for checking the precision of time that your computer can handle:

#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval tv1;

    gettimeofday(&tv1, NULL);

    printf("To check gettimeofday() precision, run this program multiple times.\n");
    printf("If the microseconds consistently vary or are non-zero, it suggests microsecond-level precision.\n");
    printf("If the microseconds are consistently zero, it may indicate limitations in precision.\n");
    printf("If the microseconds are consistently in the form XYZ000, it suggests precision only up to milliseconds.\n\n");

    printf("microseconds = %ld \n", tv1.tv_usec);
    return 0;
}

Whether or not your computer can handle microsecond precision will determine the number of system calls you'll need to make for question 2. If it can, then you don't need to make that many. If it can't, then you'll need to make a lot. I'm being vague because I don't even know what a good estimate would be for either.

As for question 2, you have essentially answered it with the code you pasted in your question, except it times a gettimeofday() call rather than a null read system call.
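For completeness, here is a minimal sketch of what timing the null read itself could look like, treating a 0-byte read(2) from standard input as the "null" call and avoiding the integer-division pitfall pointed out in the first answer (the constants and names are just illustrative):

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define MAX_TIMES 100000

int main(void) {
    char buf[1];
    struct timeval t0, t1;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < MAX_TIMES; ++i)
        read(0, buf, 0);                /* 0-byte read from stdin: the "null" system call */
    gettimeofday(&t1, NULL);

    double total_us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("average null read: %f us\n", total_us / MAX_TIMES);
    return 0;
}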

Preferment answered 1/6 at 22:37 Comment(0)
