How can I get the Windows system time with millisecond resolution?

10

25

How can I get the Windows system time with millisecond resolution?

If the above is not possible, then how can I get the operating system start time? I would like to use this value together with timeGetTime() in order to compute a system time with millisecond resolution.

Macguiness answered 16/9, 2010 at 17:32 Comment(2)
Related: Acquiring high-resolution time stamps (MSDN)Hildredhildreth
Isn't the answer just GetSystemTime ??Breadboard
8

GetTickCount will not get it done for you.

Look into QueryPerformanceFrequency / QueryPerformanceCounter. The only gotcha here is CPU scaling though, so do your research.

Seanseana answered 16/9, 2010 at 17:35 Comment(4)
QPC is not what I want. I want the system time with millisecond precision, not a timer.Macguiness
Power saving (even just idle power saving) and Turbo Boost play merry hell with QPF. It used to be very effective, but with modern CPUs it's no good anymore.Sonyasoo
"The only gotcha here is CPU scaling" - not true. These functions don't cope with systems where the Hardware Abstraction Layer hasn't synchronised the TSC values across cores/cpus: that can be a several-second delta. It bites when one sample is taken on one core, then compared to a sample taken on another core. There are workarounds for multi-core use, but none I've seen packaged up in a publicly available library. You can bind your thread to a single core, but sometimes that's more destructive than the timing is useful.Semite
QPF only didn't work correctly on some old multi-core designs and broken BIOSes. In general QPF works very well. Starting with Vista, special high-precision counters provided by the mainboard are used for QPC.Saddlebag
21

Try this article from MSDN Magazine. It's actually quite complicated.

Implement a Continuously Updating, High-Resolution Time Provider for Windows
(archive link)

Isodimorphism answered 16/9, 2010 at 17:37 Comment(6)
I do not want a timer, I want the system time with millisecond precision.Macguiness
Did you read the article? That's what it is about! This guy did exactly what you are asking and shows how it is done, including code.Isodimorphism
I upvoted... I can't get the code on that site to work though. It locks my PC hard most of the time. Sometimes it works, and when it does, it's dead accurate on a very busy CPU. Still rummaging through the code to see why it won't work. My setup is Win7 Pro 64-bit with Visual Studio 2005 and C++ Builder XE. I ported the code to C++ Builder if anyone is interested. Might make for a good open source project ;)Crinkly
It would make a good open source project since this is a common problem. Most people don't need the extreme accuracy the code strives for, but if it was packaged and easy to use I bet people would use it even if it was overkill for them.Isodimorphism
From the linked article: "The code is not intended to be used as-is on any available system as problems might arise due to...". It's noteworthy that the article doesn't even mention a serious issue: performance counters often aren't correctly synchronised across cores/CPUs (Microsoft blames the Hardware Abstraction Layer (HAL)), so calibration done on one thread may not lead to correct results on another. In my measurements on different systems, mis-sync of TSC values can be several seconds.Semite
The link died for me. web.archive.org/web/20121018065403/http://msdn.microsoft.com/…Notice
14

In Windows, the base of all time is a function called GetSystemTimeAsFileTime.

  • It returns a structure capable of holding a time with 100ns resolution.
  • It is kept in UTC.

The FILETIME structure records the number of 100ns intervals since January 1, 1600; meaning its resolution is limited to 100ns.
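Because the tick unit and the epoch are fixed, conversions to other epochs are plain integer arithmetic. Here is a minimal sketch (pure arithmetic, no Win32 calls; the constant 11644473600 is the number of seconds between 1601-01-01 and the Unix epoch, 1970-01-01, and the function name is mine):

```cpp
#include <cstdint>

// 100ns ticks per second, and the seconds between the FILETIME epoch
// (1601-01-01) and the Unix epoch (1970-01-01).
constexpr int64_t TICKS_PER_SECOND = 10000000LL;
constexpr int64_t EPOCH_DIFFERENCE = 11644473600LL;

// Convert a FILETIME-style tick count to whole seconds since the Unix epoch.
constexpr int64_t FiletimeTicksToUnixSeconds(int64_t ticks)
{
    return ticks / TICKS_PER_SECOND - EPOCH_DIFFERENCE;
}
```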

This forms our first function:

[code screenshot: GetSystemTimeAsFileTime returning a FILETIME]

A 64-bit number of 100ns ticks since January 1, 1600 is somewhat unwieldy. Windows provides a handy helper function, FileTimeToSystemTime that can decode this 64-bit integer into useful parts:

record SYSTEMTIME {
   wYear: Word;
   wMonth: Word;
   wDayOfWeek: Word;
   wDay: Word;
   wHour: Word;
   wMinute: Word;
   wSecond: Word;
   wMilliseconds: Word;
}

Notice that SYSTEMTIME has a built-in resolution limitation of 1ms.
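The 1ms limit means the sub-millisecond part of a FILETIME is simply discarded when decoding. A small sketch of what gets lost (pure arithmetic; 10,000 ticks of 100ns make one millisecond; the helper names are mine):

```cpp
#include <cstdint>

// Decoding to SYSTEMTIME keeps only whole milliseconds; the 100ns remainder
// (up to 9,999 ticks, i.e. just under 1ms) has no field to live in.
constexpr int64_t TicksToWholeMs(int64_t ticks)   { return ticks / 10000; }
constexpr int64_t TicksLostBelowMs(int64_t ticks) { return ticks % 10000; }
```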

Now we have a way to go from FILETIME to SYSTEMTIME:

[code screenshot: FileTimeToSystemTime decoding a FILETIME into a SYSTEMTIME]

We could write a function to get the current system time as a SYSTEMTIME structure:

SYSTEMTIME GetSystemTime()
{
    //Get the current system time (UTC) in its native 100ns FILETIME structure
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    //Decode the 100ns intervals into a 1ms resolution SYSTEMTIME
    SYSTEMTIME stNow;
    FileTimeToSystemTime(&ftNow, &stNow);

    return stNow;
}

Except Windows already wrote such a function for you: GetSystemTime

[code screenshot: GetSystemTime returning the current SYSTEMTIME]

Local, rather than UTC

Now what if you don't want the current time in UTC? What if you want it in your local time? Windows provides a function to convert a FILETIME that is in UTC into your local time: FileTimeToLocalFileTime

[code screenshot: FileTimeToLocalFileTime converting a UTC FILETIME to a local FILETIME]

You could write a function that returns you a FILETIME in local time already:

FILETIME GetLocalTimeAsFileTime()
{
   FILETIME ftNow;
   GetSystemTimeAsFileTime(&ftNow);

   //convert to local
   FILETIME ftNowLocal;
   FileTimeToLocalFileTime(&ftNow, &ftNowLocal);

   return ftNowLocal;
}

[code screenshot: GetLocalTimeAsFileTime returning a local FILETIME]

And let's say you want to decode the local FILETIME into a SYSTEMTIME. That's no problem; you can use FileTimeToSystemTime again:

[code screenshot: FileTimeToSystemTime decoding the local FILETIME into a SYSTEMTIME]

Fortunately, Windows already provides you a function that returns you the value:

[code screenshot: GetLocalTime returning the local SYSTEMTIME]

Precise

There is another consideration. Before Windows 8, the clock had a resolution of around 15ms. In Windows 8 they improved the clock to 100ns (matching the resolution of FILETIME).

  • GetSystemTimeAsFileTime (legacy, 15ms resolution)
  • GetSystemTimePreciseAsFileTime (Windows 8, 100ns resolution)

This means we should always prefer the precise variant when it is available:

[code screenshot: wrapper preferring GetSystemTimePreciseAsFileTime when available]

You asked for the time

You asked for the time; but you have some choices.

The timezone:

  • UTC (system native)
  • Local timezone

The format:

  • FILETIME (system native, 100ns resolution)
  • SYSTEMTIME (decoded, 1ms resolution)

Summary

  • 100ns resolution: FILETIME
    • UTC: GetSystemTimePreciseAsFileTime (or GetSystemTimeAsFileTime)
    • Local: (roll your own)
  • 1ms resolution: SYSTEMTIME
    • UTC: GetSystemTime
    • Local: GetLocalTime
Attaint answered 22/8, 2017 at 16:9 Comment(1)
Very nice explanation. Thanks.Gaunt
12

This is an elaboration of the above comments to explain some of the whys.

First, the GetSystemTime* calls are the only Win32 APIs providing the system's time. This time has a fairly coarse granularity, as most applications do not need the overhead required to maintain a higher resolution. Time is (likely) stored internally as a 64-bit count of milliseconds. Calling timeGetTime gets the low-order 32 bits. Calling GetSystemTime, etc. requests Windows to return this millisecond time, after converting it into days, etc. and including the system start time.
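Since timeGetTime exposes only the low 32 bits of a millisecond count, its value wraps after 2^32 ms, roughly 49.7 days. Unsigned subtraction still produces correct elapsed times across a single wrap; a sketch in plain arithmetic (not a Win32 call, function name is mine):

```cpp
#include <cstdint>

// Elapsed milliseconds between two 32-bit millisecond samples.
// Unsigned modulo-2^32 arithmetic handles one wraparound correctly.
constexpr uint32_t ElapsedMs(uint32_t start, uint32_t now)
{
    return now - start;
}
```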

There are two time sources in a machine: the CPU's clock and an on-board clock (e.g., real-time clock (RTC), Programmable Interval Timers (PIT), and High Precision Event Timer (HPET)). The first has a resolution of around ~0.5ns (2GHz) and the second is generally programmable down to a period of 1ms (though newer chips (HPET) have higher resolution). Windows uses these periodic ticks to perform certain operations, including updating the system time.

Applications can change this period via timeBeginPeriod; however, this affects the entire system. The OS will check/update regular events at the requested frequency. Under low CPU loads/frequencies, there are idle periods for power savings. At high frequencies, there isn't time to put the processor into low-power states. See Timer Resolution for further details. Finally, each tick has some overhead and increasing the frequency consumes more CPU cycles.

As for higher-resolution time: the system time is simply not maintained to that accuracy, any more than Big Ben has a second hand. Using QueryPerformanceCounter (QPC) or the CPU's tick counter (rdtsc) can provide the resolution between the system time ticks. Such an approach was used in the MSDN Magazine article Kevin cited. These approaches may drift (e.g., due to frequency scaling) and therefore need to be periodically synced to the system time.
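The sync idea can be sketched portably with std::chrono: anchor one wall-clock sample to one monotonic sample, then derive later wall-clock times from the monotonic delta. This is a simplified sketch of the technique, not the article's code; it ignores drift correction and re-calibration, and the struct and method names are mine:

```cpp
#include <chrono>

// One-time calibration: pair a monotonic sample with a wall-clock sample.
struct Calibration {
    std::chrono::steady_clock::time_point steady0 = std::chrono::steady_clock::now();
    std::chrono::system_clock::time_point system0 = std::chrono::system_clock::now();

    // Wall-clock "now" = wall-clock anchor + monotonic elapsed time.
    std::chrono::system_clock::time_point Now() const {
        auto elapsed = std::chrono::steady_clock::now() - steady0;
        return system0 +
            std::chrono::duration_cast<std::chrono::system_clock::duration>(elapsed);
    }
};
```

Without periodic re-anchoring, the monotonic source's drift relative to the wall clock accumulates, which is exactly the syncing problem the article addresses.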

Tungus answered 20/9, 2010 at 18:58 Comment(2)
I like your answer. If it's so hard for Windows to run with a 1ms period, then how can Linux do it?Dahliadahlstrom
I don't know how well either OS performs with a 1ms period, but both can do it. In any software design, trade offs must be made. Should the code be designed for a 1ms period or 16ms (specifically 15.625ms)? Even then, some design points will execute better.Tungus
4

Starting with Windows 8, Microsoft introduced the new API call GetSystemTimePreciseAsFileTime.

Unfortunately you can't use that if you create software which must also run on older operating systems.

My current solution is as follows, but be aware: the determined time is not exact; it is only close to the real time. The result should always be less than or equal to the real time, but with a fixed error (unless the computer went to standby). The result has millisecond resolution. For my purpose it is exact enough.

void GetHighResolutionSystemTime(SYSTEMTIME* pst)
{
    static LARGE_INTEGER    uFrequency = { 0 };
    static LARGE_INTEGER    uInitialCount;
    static LARGE_INTEGER    uInitialTime;
    static bool             bNoHighResolution = false;

    if(!bNoHighResolution && uFrequency.QuadPart == 0)
    {
        // Initialize performance counter to system time mapping
        bNoHighResolution = !QueryPerformanceFrequency(&uFrequency);
        if(!bNoHighResolution)
        {
            FILETIME ftOld, ftInitial;

            GetSystemTimeAsFileTime(&ftOld);
            do
            {
                GetSystemTimeAsFileTime(&ftInitial);
                QueryPerformanceCounter(&uInitialCount);
            } while(ftOld.dwHighDateTime == ftInitial.dwHighDateTime && ftOld.dwLowDateTime == ftInitial.dwLowDateTime);
            uInitialTime.LowPart  = ftInitial.dwLowDateTime;
            uInitialTime.HighPart = ftInitial.dwHighDateTime;
        }
    }

    if(bNoHighResolution)
    {
        GetSystemTime(pst);
    }
    else
    {
        LARGE_INTEGER   uNow, uSystemTime;

        {
            FILETIME    ftTemp;
            GetSystemTimeAsFileTime(&ftTemp);
            uSystemTime.LowPart  = ftTemp.dwLowDateTime;
            uSystemTime.HighPart = ftTemp.dwHighDateTime;
        }
        QueryPerformanceCounter(&uNow);
        
        LARGE_INTEGER   uCurrentTime;
        uCurrentTime.QuadPart = uInitialTime.QuadPart + (uNow.QuadPart - uInitialCount.QuadPart) * 10000000 / uFrequency.QuadPart;
        
        if(uCurrentTime.QuadPart < uSystemTime.QuadPart || uCurrentTime.QuadPart - uSystemTime.QuadPart > 1000000)
        {
            // The performance counter has been frozen (e. g. after standby on laptops)
            // -> Use current system time and determine the high performance time the next time we need it
            uFrequency.QuadPart = 0;
            uCurrentTime = uSystemTime;
        }

        FILETIME ftCurrent;
        ftCurrent.dwLowDateTime  = uCurrentTime.LowPart;
        ftCurrent.dwHighDateTime = uCurrentTime.HighPart;
        FileTimeToSystemTime(&ftCurrent, pst);
    }
}
Griff answered 29/4, 2015 at 7:48 Comment(0)
2

GetSystemTimeAsFileTime gives the best precision of any Win32 function for absolute time. QPF/QPC as Joel Clark suggested will give better relative time.

Sonyasoo answered 16/9, 2010 at 17:37 Comment(3)
Isn't there anything better than GetSystemTimeAsFileTime? The get-system-time functions have a precision of 10 to 15 milliseconds.Macguiness
That's the accuracy, not precision. And a call to timeBeginPeriod(1); will set the accuracy to 1ms.Sonyasoo
Ok, it's the accuracy then (I always mix those two since in my language there is only one word that represents both). Is there a way to get the system time in Windows with 1 millisecond accuracy? I am using timeBeginPeriod(1), but the time is still returned with 10-15 milliseconds accuracy.Macguiness
2

Since we all come here for quick snippets instead of boring explanations, I'll write one:

FILETIME t;
GetSystemTimeAsFileTime(&t); // unusable as is

ULARGE_INTEGER i;
i.LowPart = t.dwLowDateTime;
i.HighPart = t.dwHighDateTime;

int64_t ticks_since_1601 = i.QuadPart; // now usable
int64_t us_since_1601   = i.QuadPart / 10;
int64_t ms_since_1601   = i.QuadPart / 10000;
int64_t sec_since_1601  = i.QuadPart / 10000000;

// unix epoch
int64_t unix_us  = i.QuadPart / 10    - 11644473600LL * 1000000;
int64_t unix_ms  = i.QuadPart / 10000 - 11644473600LL * 1000;
double  unix_sec = i.QuadPart * 1e-7  - 11644473600.0;

// Note: integer division, not multiplication by 1e-1/1e-4: a double's 53-bit
// mantissa cannot represent the full 64-bit tick count, so the float path
// silently loses the low bits.

// i.QuadPart is # of 100ns ticks since 1601-01-01T00:00:00Z
// difference to Unix Epoch is 11644473600 seconds (attention to units!)

I have no idea how the drifting performance-counter-based answers got upvoted; don't write slippage bugs, guys.Contumelious

Contumelious answered 17/11, 2018 at 0:32 Comment(0)
0

QueryPerformanceCounter() is built for fine-grained timer resolution.

It is the highest-resolution timer that the system has to offer, and you can use it in your application code to identify performance bottlenecks.

Here is a simple implementation for C# devs:

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(ref long x);
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(ref long x);
    private long m_endTime;
    private long m_startTime;
    private long m_frequency;

    public Form1()
    {
        InitializeComponent();
    }
    public void Begin()
    {
        QueryPerformanceCounter(ref m_startTime);
    }
    public void End()
    {
        QueryPerformanceCounter(ref m_endTime);
    }

    private void button1_Click(object sender, EventArgs e)
    {
        QueryPerformanceFrequency(ref m_frequency);
        Begin();
        for (long i = 0; i < 1000; i++) ;
        End();
        // convert raw ticks to milliseconds using the counter frequency
        MessageBox.Show(((m_endTime - m_startTime) * 1000.0 / m_frequency).ToString() + " ms");
    }

If you are a C/C++ dev, then take a look here: How to use the QueryPerformanceCounter function to time code in Visual C++

Chariness answered 4/12, 2012 at 23:35 Comment(1)
From the support article "•The API call may fail under some circumstances. Check the return value, and then adjust your application code to make sure that you receive valid results." - yeah well. Those functions have many well documented problems, including problems as CPU speeds vary due to power saving modes, and unsynchronised TSC registers across cores.Semite
0

Well, this one is very old, yet there is another useful function in the Windows C runtime library, _ftime, which returns a structure with the local time as time_t, milliseconds, timezone, and a daylight saving time flag.

Feller answered 22/3, 2014 at 15:37 Comment(0)
0

In C11 and above (or C++17 and above) you can use timespec_get() to get time with higher precision portably:

#include <stdio.h>
#include <time.h>
 
int main(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    char buff[100];
    strftime(buff, sizeof buff, "%D %T", gmtime(&ts.tv_sec));
    printf("Current time: %s.%09ld UTC\n", buff, ts.tv_nsec);
}

If you're using C++, then since C++11 you can use std::chrono::high_resolution_clock, std::chrono::system_clock (wall clock), or std::chrono::steady_clock (monotonic clock) from the <chrono> header. No need to use Windows-specific APIs anymore:

auto start1 = std::chrono::high_resolution_clock::now();
auto start2 = std::chrono::system_clock::now();
auto start3 = std::chrono::steady_clock::now();
// do some work
auto end1 = std::chrono::high_resolution_clock::now();
auto end2 = std::chrono::system_clock::now();
auto end3 = std::chrono::steady_clock::now();

std::chrono::duration<double, std::milli> diff1 = end1 - start1;
std::chrono::duration<double, std::milli> diff2 = end2 - start2;
auto diff3 = std::chrono::duration_cast<std::chrono::milliseconds>(end3 - start3);

std::cout << diff1.count() << ' ' << diff2.count() << ' ' << diff3.count() << '\n';
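For the original question, wall-clock time with millisecond resolution, system_clock alone is enough. A minimal sketch (the function name is mine; before C++20 the Unix epoch for system_clock is a de facto rather than guaranteed convention, though all major implementations use it):

```cpp
#include <chrono>
#include <cstdint>

// Milliseconds since the Unix epoch, read from the wall clock.
int64_t NowUnixMs()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();
}
```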
Transubstantiation answered 12/4, 2022 at 17:33 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.