How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?
Wall clock time is usually provided by the system's RTC. This mostly provides times only down to the millisecond range and typically has a granularity of 10-20 milliseconds. However, the resolution/granularity of gettimeofday() is often reported to be in the few-microseconds range. I assume the microsecond granularity must be taken from a different source.

How is the microsecond resolution/granularity of gettimeofday() accomplished?

When the part down to the millisecond is taken from the RTC and the microseconds are taken from different hardware, a problem with the phasing of the two sources arises. The two sources have to be synchronized somehow.

How is the synchronization/phasing between these two sources accomplished?

Edit: From what I've read in the links provided by amdn, particularly the following Intel link, I would add a question here:

Does gettimeofday() provide resolution/granularity in the microsecond regime at all?


Edit 2: Summarizing amdn's answer with some more results of reading:

Linux uses the real-time clock (RTC) only at boot time, to synchronize with a higher-resolution counter, e.g. the Time Stamp Counter (TSC). After boot, gettimeofday() returns a time which is entirely based on the TSC value and the frequency of this counter. The initial value for the TSC frequency is corrected/calibrated by comparing the system time to an external time source. The adjustment is done/configured via the adjtimex() function. The kernel operates a phase-locked loop to ensure that the time results are monotonic and consistent.

This way it can be stated that gettimeofday() has microsecond resolution. Taking into account that more modern Time Stamp Counters run in the GHz regime, the obtainable resolution could be in the nanosecond regime. Therefore this meaningful comment

/**
 * do_gettimeofday - Returns the time of day in a timeval
 * @tv:         pointer to the timeval to be set
 *
 * NOTE: Users should be converted to using getnstimeofday()
 */

can be found in Linux/kernel/time/timekeeping.c. This suggests that an even higher-resolution function may become available at a later point in time. Right now getnstimeofday() is only available in kernel space.

However, looking through all the code involved to get this about right shows quite a few comments about uncertainties. It may be possible to obtain microsecond resolution. The function gettimeofday() may even show a granularity in the microsecond regime. But: there are severe doubts about its accuracy, because the drift of the TSC frequency cannot be accurately corrected for. Also, the complexity of the code dealing with this matter inside Linux is a hint that it is in fact very difficult to get right. This is particularly, but not solely, caused by the huge number of hardware platforms Linux is supposed to run on.

Result: gettimeofday() returns monotonic time with microsecond granularity, but the time it provides is almost never in phase to one microsecond with any other time source.

Raine answered 5/11, 2012 at 11:0 Comment(1)
At the end of the day the granularity depends on the hardware involved. I would just assume 100 ms for algorithms, to take context switching etc. into account.Spearmint
How is the microsecond resolution/granularity of gettimeofday() accomplished?

Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which is driven by a multiple of a crystal oscillator running at 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it by some factor; for example, on a 2.93 GHz processor the multiplier is 22. The TSC historically was an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiplier wasn't constant as the processor shifted between performance states or throttled down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high-resolution clock, and the Linux kernel determines an initial approximate frequency at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.

How is this synchronization accomplished?

Your first question was about how the Linux clock provides high resolution; this second question is about synchronization. This is the distinction between precision and accuracy. Most systems have a battery-backed clock that keeps the time of day while the system is powered down. As you might expect, this clock doesn't have high accuracy or precision, but it will get the time of day "in the ballpark" when the system starts. To get accuracy, most systems use an optional component to obtain the time from an external source on the network. Two common ones are

  1. Network Time Protocol
  2. Precision Time Protocol

These protocols define a master clock on the network (or a tier of clocks sourced by an atomic clock) and then measure network latencies to estimate the offset from the master clock. Once the offset from the master is determined, the system clock is disciplined to keep it accurate. This can be done by

  1. Stepping the clock (a relatively large, abrupt, and infrequent time adjustment), or
  2. Slewing the clock (adjusting the clock frequency, slowly increasing or decreasing it over a given time period)

The kernel provides the adjtimex system call to allow clock disciplining. For details on how modern Intel multi-core processors keep the TSC synchronized between cores see CPU TSC fetch operation especially in multicore-multi-processor environment.

The relevant kernel source files for clock adjustments are kernel/time.c and kernel/time/timekeeping.c.

Reservoir answered 5/11, 2012 at 11:57 Comment(8)
The Linux kernel determines the frequency at initialization, so it is assumed to be constant. But gettimeofday() also uses the RTC to synchronize with. Thus there seem to be two sources, the RTC and the TSC. My question was about the synchronization of those two sources. Particularly the TSC frequency seems relevant here, because the generating hardware certainly has some tolerance, and a deviation of just 10 ppm will cause an error of 10 microseconds/second. Thus a second given by the RTC may correspond to 1.000010 s given by the TSC.Raine
I wanted to get an answer to this ambiguity, since the TSC frequency is treated by Linux as an arbitrary constant which will deviate from the actual frequency.Raine
The TSC frequency is not assumed to be constant, it is periodically tweaked, that is what the adjtimex system call is for. A user space program/daemon (like NTP and PTP) uses adjtimex to tell Linux how to slew the frequency over time.Reservoir
In other words: a periodic procedure calibrates the TSC frequency, and the period is given by the RTC (system clock)? So the change of the TSC counter is compared to the elapsed time of the RTC (system clock), and thus the resulting TSC frequency is calibrated to give exactly one million microseconds per second? And that's all running constantly behind the scenes?Raine
I read in Linux timekeeping.c that gettimeofday() is backed by getnstimeofday(), which implies that the RTC is never consulted after system initialization. Therefore the result of gettimeofday() is purely derived from the TSC and shall not be compared to RTC results. As a consequence it may drift, but it has microsecond granularity. Is that it?Raine
@Amo: I believe the CMOS RTC is used to approximate the TSC frequency at boot time and to slew the clock over time to keep it in sync, unless NTP (or PTP or some other process using adjtimex) is being used, in which case that takes precedence... but that is speculation on my part. Unless an expert in this part of the kernel contributes our next step would be to study the kernel source, I provided two links in the answer above. I'll try to get you a definitive answer this evening.Reservoir
@Amo: I see you already started reading the source and may have answered that question... feel free to update my answer above as you see fit. Cheers.Reservoir
I've summarized it in my question.Raine
When Linux starts, it initializes the software clock using the hardware clock. See the chapter How Linux Keeps Track of Time in the Clock HOWTO.

Valve answered 5/11, 2012 at 11:3 Comment(4)
How Linux Keeps Track of Time: "... the "system clock" (sometimes called the "kernel clock" or "software clock") which is a software counter based on the timer interrupt." I can't see how the microsecond granularity can be obtained this way since I do have doubts that the timer interrupt runs at such high frequencies.Raine
You could just query the current counter value of the timer to get a resolution that is finer than the timer frequency.Classless
The timing of the CPU is far more accurate than the timing of the RTC. The CPU can simply count its own clock cycles. The downside is that the CPU's timer cannot be used when the computer is turned off.Valve
I know of high-resolution counters. But gettimeofday() seems to link the system time with a high-resolution counter, since it provides both wall clock time and microseconds in just one call. So what is behind the scenes of gettimeofday()? That was what the questions were all about.Raine
