Time difference in C++
Does anyone know how to calculate time difference in C++ in milliseconds? I used difftime but it doesn't have enough precision for what I'm trying to measure.

Bountiful answered 21/11, 2008 at 1:59 Comment(0)

You have to use one of the more specific time structures, either timeval (microsecond-resolution) or timespec (nanosecond-resolution), but you can do it manually fairly easily:

#include <sys/time.h>  // struct timeval lives here on POSIX systems

// Returns t1 - t2 in milliseconds.
int diff_ms(timeval t1, timeval t2)
{
    // Do the arithmetic in microseconds, then convert to ms at the end.
    return (((t1.tv_sec - t2.tv_sec) * 1000000) + 
            (t1.tv_usec - t2.tv_usec))/1000;
}

This obviously has some problems with integer overflow if the difference in times is really large (or if you have 16-bit ints), but that's probably not a common case.

Bromley answered 21/11, 2008 at 2:4 Comment(4)
I think you meant *1000, not *1000000. – Hydrocephalus
You might want to add +500 usec before dividing by 1000 there, so that 999 usec is rounded up to 1 msec, not down to 0 msec. – Armelda
No, I did mean *1000000. It's doing the calculation in us and then converting to ms at the end. The +500 suggestion is a good one, though. – Bromley
Only 5 years late to the party, but I agree with @SoapBox: you can minimize the overflow issue if you take that multiplication outside the inner parens and multiply by 1000, i.e. make the addition operate on ms. Alternatively, use the standard timersub, then convert the resulting timeval to ms. – Barbarous
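
Putting those comment suggestions together, here is a hedged sketch (note timersub is a BSD/glibc extension rather than standard POSIX, and the rounding choice comes from the commenters, not the original answer):

#include <sys/time.h>

int diff_ms_rounded(timeval t1, timeval t2)
{
    timeval d;
    timersub(&t1, &t2, &d);  // d = t1 - t2, with tv_usec normalized to [0, 999999]
    // Seconds convert straight to ms; +500 usec rounds to the nearest millisecond.
    return d.tv_sec * 1000 + (d.tv_usec + 500) / 1000;
}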

I know this is an old question, but there's an updated answer for C++0x. There is a new header called <chrono>, which contains modern time utilities. Example use:

#include <iostream>
#include <thread>
#include <chrono>

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    typedef std::chrono::milliseconds milliseconds;
    Clock::time_point t0 = Clock::now();   // timestamp before the work
    std::this_thread::sleep_for(milliseconds(50));
    Clock::time_point t1 = Clock::now();   // timestamp after the work
    milliseconds ms = std::chrono::duration_cast<milliseconds>(t1 - t0);
    std::cout << ms.count() << "ms\n";
}

Output: 50ms

More information can be found here:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm

There is also now a boost implementation of <chrono>.

Talkingto answered 11/2, 2011 at 22:9 Comment(6)
Is this accurate for nanoseconds? I mean, you have written a very different approach over here. – Dziggetai
@Wildling: The other approach is in another context, where other answers are using the rdtsc assembly instruction. That answer simply shows how to integrate the rdtsc assembly instruction into a chrono clock. This answer shows how to get a time difference in milliseconds using the chrono facility. The accuracy will be dependent upon the supplied high_resolution_clock. The resolution of this clock is inspectable via high_resolution_clock::period. On my system that happens to be nanoseconds. On yours it may be something different. – Talkingto
I just tried both your ways (this answer and the other one) to profile some code. The results from the clock class came out to be half of what the above code shows. Would you know why? – Dziggetai
It is hard to know without seeing your exact code. However, guesses might include: you neglected to convert the clock ticks to a known unit such as nanoseconds. Or perhaps the period you entered for your clock was not an accurate representation of your processor speed. Or perhaps the timed code was so short that you are pushing the lower limits of what can be accurately timed (all clocks have overhead). It is good that you are experimenting with these different clocks. That is a good way to learn about them. – Talkingto
I just did some more runs and realised the results are actually not very far from each other! However, sometimes the code executes in a very short time. Can you please look at the results: pastebin.com/zWGERp3t – Dziggetai
Looks fine to me. You may need to shut down background processes to get a more stable result. E.g. turn off the music player, email, auto backup process; perhaps reboot to make sure you have a "clean machine." I find that I get fairly stable results on OS X, until "time machine" starts backing up, and then my timings are all over the place. – Talkingto
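
As the comments note, the clock's advertised tick length is inspectable at compile time. A minimal sketch (the ratio printed is the period in seconds):

#include <chrono>
#include <iostream>

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    // Clock::period is a std::ratio<num, den> giving seconds per tick,
    // e.g. 1/1000000000 on a system with nanosecond resolution.
    std::cout << Clock::period::num << "/" << Clock::period::den
              << " seconds per tick\n";
}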

If you are using Win32, FILETIME is the most accurate you can get: it contains a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).

So if you want to calculate the difference between two times in milliseconds, you do the following:

#include <windows.h>
#include <tchar.h>
#include <stdio.h>

UINT64 getTime()
{
    SYSTEMTIME st;
    GetSystemTime(&st);

    FILETIME ft;
    SystemTimeToFileTime(&st, &ft);  // converts to file time format
    ULARGE_INTEGER ui;
    ui.LowPart = ft.dwLowDateTime;
    ui.HighPart = ft.dwHighDateTime;

    return ui.QuadPart;  // 100-nanosecond intervals since January 1, 1601
}

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
    //! Start counting time
    UINT64 start, finish;

    start = getTime();

    //do something...

    //! Stop counting elapsed time
    finish = getTime();

    //now you can calculate the difference any way that you want
    //in seconds (10,000,000 intervals of 100 ns per second):
    _tprintf(_T("Time elapsed executing this code: %.03f seconds."), (((float)(finish-start))/((float)10000))/1000 );
    //or in milliseconds (10,000 intervals of 100 ns per millisecond):
    _tprintf(_T("Time elapsed executing this code: %I64d milliseconds."), (finish-start)/10000 );
    return 0;
}
Supererogation answered 22/1, 2009 at 23:53 Comment(1)
+1 for a pure Win32 environment. Simple and efficient. And again I learned something. – Amélie

The clock function gives you a timer in clock_t ticks (divide by CLOCKS_PER_SEC to convert to seconds), but it's not the greatest. Its real resolution is going to depend on your system. You can try

#include <time.h>
#include <iostream>

clock_t clo = clock();
//do stuff
// scale ticks to milliseconds using CLOCKS_PER_SEC
std::cout << (clock() - clo) * 1000 / CLOCKS_PER_SEC << " ms" << std::endl;

and see how your results are.

Motif answered 21/11, 2008 at 2:5 Comment(3)
That's pretty typical on Unix and Linux systems. I think it can be as bad as about 50 ms, though. – Motif
The CLOCKS_PER_SEC macro in <time.h> tells you how many ticks there are per second. It was classically 50 or 60, giving 20 or 16.7 ms. – Coniferous
Actually, CLOCKS_PER_SEC gives you the number of clock_t units per second. For example, you might have 1000 CLOCKS_PER_SEC (clock() returns milliseconds) yet have clock() return multiples of 16 ms. Call clock() in a tight loop and it will return: x, ..., x, x+16, ..., x+16, x+32... on my system. – Disarray

You can use gettimeofday to get the number of microseconds since epoch. The seconds segment of the value returned by gettimeofday() is the same as that returned by time() and can be cast to a time_t and used in difftime. A millisecond is 1000 microseconds.

After you use difftime, calculate the difference in the microseconds field yourself.
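
A minimal sketch of the whole idea (POSIX only; it computes the difference directly from the two timevals, skipping difftime entirely):

#include <sys/time.h>
#include <stdio.h>

int main()
{
    struct timeval start, end;
    gettimeofday(&start, NULL);
    //do stuff
    gettimeofday(&end, NULL);

    // 1 s = 1000 ms; 1000 us = 1 ms
    long ms = (end.tv_sec - start.tv_sec) * 1000L
            + (end.tv_usec - start.tv_usec) / 1000L;
    printf("%ld ms\n", ms);
    return 0;
}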

Hydrocephalus answered 21/11, 2008 at 2:3 Comment(0)

You can get micro and nanosecond precision out of Boost.Date_Time.
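
For instance, a minimal sketch with boost::posix_time (assuming Boost is installed; microsec_clock delivers microsecond resolution where the platform supports it):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    ptime t1 = microsec_clock::universal_time();
    //do stuff
    ptime t2 = microsec_clock::universal_time();

    time_duration d = t2 - t1;  // subtracting ptimes yields a time_duration
    std::cout << d.total_milliseconds() << " ms\n";
}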

Destructive answered 21/11, 2008 at 2:52 Comment(0)

If you're looking to do benchmarking, you might want to see some of the other threads here on SO which discuss the topic.

Also, be sure you understand the difference between accuracy and precision.

Spikenard answered 21/11, 2008 at 2:27 Comment(0)

I think you will have to use something platform-specific. Hopefully that won't matter? E.g., on Windows, look at QueryPerformanceCounter(), which will give you something much better than milliseconds.
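
A minimal sketch of how it pairs with QueryPerformanceFrequency (both are real Win32 calls; the scaffolding around them is illustrative):

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // counter ticks per second
    QueryPerformanceCounter(&start);
    //do stuff
    QueryPerformanceCounter(&end);

    double ms = (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("%.3f ms\n", ms);
    return 0;
}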

Dodgem answered 21/11, 2008 at 2:44 Comment(0)
