C++ obtaining milliseconds time on Linux -- clock() doesn't seem to work properly
D

16

106

On Windows, clock() returns the time in milliseconds, but on this Linux box I'm working on, it rounds to the nearest 1000, so the precision is only at the second level, not the millisecond level.

I found a solution with Qt using the QTime class: instantiate an object, call start() on it, then call elapsed() to get the number of milliseconds elapsed.

I got kind of lucky because I'm working with Qt to begin with, but I'd like a solution that doesn't rely on third-party libraries.

Is there no standard way to do this?

UPDATE

Please don't recommend Boost.

If Boost and Qt can do it, surely it's not magic, there must be something standard that they're using!

Despumate answered 25/2, 2009 at 22:59 Comment(2)
About the edit - it can be done, but doing it in a portable way is some pain.Remillard
Relevant: #28396514Shimmer
S
38

You could use gettimeofday at the start and end of your method and then take the difference of the two returned structs. You'll get a structure like the following:

struct timeval {
  time_t      tv_sec;   /* seconds */
  suseconds_t tv_usec;  /* microseconds */
};

EDIT: As the two comments below suggest, clock_gettime(CLOCK_MONOTONIC) is a much better choice if you have it available, which should be almost everywhere these days.

EDIT: Someone else commented that you can also use modern C++ with std::chrono::high_resolution_clock, but that isn't guaranteed to be monotonic. Use steady_clock instead.
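
For illustration, a minimal sketch of the clock_gettime(CLOCK_MONOTONIC) approach (not part of the original answer; error checking is omitted, the timed work is a placeholder, and older glibc needs -lrt):

#include <time.h>
#include <stdio.h>

int main()
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* combine the seconds and nanoseconds fields into milliseconds */
    long mtime = (end.tv_sec - start.tv_sec) * 1000
               + (end.tv_nsec - start.tv_nsec) / 1000000;
    printf("Elapsed time: %ld milliseconds\n", mtime);
    return 0;
}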

Spellbound answered 25/2, 2009 at 23:12 Comment(2)
terrible for serious work. Big issues twice a year, when someone does date -s, and of course NTP sync. Use clock_gettime(CLOCK_MONOTONIC,)Noninterference
@AndrewStone: UNIX time does not change twice per year. Or even once per year. But, yes, CLOCK_MONOTONIC is great for avoiding localised system time adjustments.Rhapsody
U
140
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    struct timeval start, end;
    long mtime, seconds, useconds;

    gettimeofday(&start, NULL);
    usleep(2000);                /* sleep for 2000 microseconds (2 ms) */
    gettimeofday(&end, NULL);

    seconds  = end.tv_sec  - start.tv_sec;
    useconds = end.tv_usec - start.tv_usec;

    /* convert to milliseconds; the +0.5 rounds to the nearest integer */
    mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5;

    printf("Elapsed time: %ld milliseconds\n", mtime);

    return 0;
}
Unemployed answered 25/2, 2009 at 23:20 Comment(7)
Why do you add +0.5 to the difference?Zollverein
@Computer Guru, it's a common technique for rounding positive values. When the value gets truncated to an integer value, anything between 0.0 and 0.4999... before the addition gets truncated to 0, and between 0.5 and 0.9999... gets truncated to 1.Edinburgh
This is great. I just dressed it up a bit and am using it. Seconds = End.tv_sec - Start.tv_sec; Milliseconds = End.tv_usec - Start.tv_usec; Elapsed = (Seconds * 1000 + Milliseconds / 1000.0) + 0.5; printf("Elapsed seconds: %.3f \n", (float)Elapsed/1000.0);Spectrohelioscope
tv_usec is not milliseconds, it's microseconds.Eugeniusz
terrible for serious work. Big issues twice a year, when someone does date -s, and of course NTP syncNoninterference
@Noninterference is right, use clock_gettime(2) with CLOCK_REALTIME to compare times on the same computer. From the gettimeofday(2) manpage: POSIX.1-2008 marks gettimeofday() as obsolete, recommending the use of clock_gettime(2) instead. @CTT, could you update the example by changing the struct timeval to struct timespec, and gettimeofday(&start, NULL) to clock_gettime(CLOCK_MONOTONIC, &start) so that people don't run into trouble?Hopeless
@Bobby Powers: Warning for MacOS users: MacOS doesn't have clock_gettime().Meandrous
T
60

Please note that clock does not measure wall clock time. That means if your program takes 5 seconds, clock will not necessarily measure 5 seconds, but could measure more (your program could run multiple threads and so could consume more CPU than real time) or less. It measures an approximation of the CPU time used. To see the difference, consider this code:

#include <iostream>
#include <ctime>
#include <unistd.h>

int main() {
    std::clock_t a = std::clock();
    sleep(5); // sleep 5s
    std::clock_t b = std::clock();

    std::cout << "difference: " << (b - a) << std::endl;
    return 0;
}

It outputs on my system

$ difference: 0

Because all we did was sleep, without using any CPU time! However, using gettimeofday we get what we want:

#include <iostream>
#include <ctime>
#include <unistd.h>
#include <sys/time.h>

int main() {
    timeval a;
    timeval b;

    gettimeofday(&a, 0);
    sleep(5); // sleep 5s
    gettimeofday(&b, 0);

    std::cout << "difference: " << (b.tv_sec - a.tv_sec) << std::endl;
    return 0;
}

Outputs on my system

$ difference: 5

If you need more precision but want to get CPU time, then you can consider using the getrusage function.
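
As a rough sketch of the getrusage approach (not part of the original answer; RUSAGE_SELF measures the calling process and the work is a placeholder):

#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage usage;

    /* ... do some CPU-bound work ... */

    getrusage(RUSAGE_SELF, &usage);
    /* ru_utime and ru_stime are timevals holding user and system CPU time */
    long user_ms = usage.ru_utime.tv_sec * 1000 + usage.ru_utime.tv_usec / 1000;
    long sys_ms  = usage.ru_stime.tv_sec * 1000 + usage.ru_stime.tv_usec / 1000;
    printf("user CPU: %ld ms, system CPU: %ld ms\n", user_ms, sys_ms);
    return 0;
}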

Tributary answered 15/3, 2009 at 10:33 Comment(1)
+1 for mentioning sleep() — I was just about to ask a question (why does it work fine for everybody except me?!) when I found your answer.Calais
R
18

I also recommend the tools offered by Boost: either the mentioned Boost Timer, or hack something out of Boost.DateTime, or there is a newly proposed library in the sandbox, Boost.Chrono. This last one will be a replacement for the Timer and will feature:

  • The C++0x Standard Library's time utilities, including:
    • Class template duration
    • Class template time_point
    • Clocks:
      • system_clock
      • monotonic_clock
      • high_resolution_clock
  • Class template timer, with typedefs:
    • system_timer
    • monotonic_timer
    • high_resolution_timer
  • Process clocks and timers:
    • process_clock, capturing real, user-CPU, and system-CPU times.
    • process_timer, capturing elapsed real, user-CPU, and system-CPU times.
    • run_timer, convenient reporting of process_timer results.
  • The C++0x Standard Library's compile-time rational arithmetic.

Here is the source of the feature list
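
For reference, a minimal sketch of what the Boost.Chrono API looks like as it eventually shipped (my illustration, not part of the feature list; note the proposed monotonic_clock was later renamed steady_clock):

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    namespace bc = boost::chrono;

    bc::steady_clock::time_point t1 = bc::steady_clock::now();
    // ... work to be timed ...
    bc::steady_clock::time_point t2 = bc::steady_clock::now();

    std::cout << bc::duration_cast<bc::milliseconds>(t2 - t1).count()
              << " ms\n";
    return 0;
}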

Remillard answered 25/2, 2009 at 23:33 Comment(1)
For now you can use the Boost Timer and then gracefully migrate to Chrono when it is reviewed/accepted.Remillard
A
13

I've written a Timer class based on CTT's answer. It can be used in the following way:

Timer timer;
timer.start();
/* perform task */
double duration = timer.stop();
timer.printTime(duration);

Here is its implementation:

#include <stdio.h>
#include <sys/time.h>

class Timer {
private:

    timeval startTime;

public:

    void start(){
        gettimeofday(&startTime, NULL);
    }

    double stop(){
        timeval endTime;
        long seconds, useconds;
        double duration;

        gettimeofday(&endTime, NULL);

        seconds  = endTime.tv_sec  - startTime.tv_sec;
        useconds = endTime.tv_usec - startTime.tv_usec;

        /* combine seconds and microseconds into fractional seconds */
        duration = seconds + useconds/1000000.0;

        return duration;
    }

    static void printTime(double duration){
        printf("%5.6f seconds\n", duration);
    }
};
Alasdair answered 9/9, 2010 at 14:55 Comment(1)
This is cool but the "nseconds" is misleading because timeval doesn't hold nanoseconds, it holds microseconds, so I would suggest people call this "useconds".Blackheart
L
9

If you don't need the code to be portable to old unices, you can use clock_gettime(), which will give you the time in nanoseconds (if your processor supports that resolution). It's POSIX, but from 2001.
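
If you want to see what resolution your system actually provides, clock_getres() reports it. A quick sketch (mine, not the answerer's code):

#include <time.h>
#include <stdio.h>

int main()
{
    struct timespec res;

    /* query the resolution of the monotonic clock */
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld ns\n", (long)res.tv_nsec);
    return 0;
}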

Lenitalenitive answered 26/2, 2009 at 3:17 Comment(0)
C
4

clock() often has a pretty lousy resolution. If you want to measure time at the millisecond level, one alternative is to use clock_gettime(), as explained in this question.

(Remember that you need to link with -lrt on Linux).

Cornwallis answered 15/3, 2009 at 7:32 Comment(0)
D
4

With C++11 and std::chrono::high_resolution_clock you can do this:

#include <iostream>
#include <chrono>
#include <thread>
typedef std::chrono::high_resolution_clock Clock;

int main()
{
    std::chrono::milliseconds three_milliseconds{3};

    auto t1 = Clock::now();
    std::this_thread::sleep_for(three_milliseconds);
    auto t2 = Clock::now();

    std::cout << "Delta t2-t1: " 
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " milliseconds" << std::endl;
}

Output:

Delta t2-t1: 3 milliseconds

Link to demo: http://cpp.sh/2zdtu

Dinin answered 7/6, 2016 at 8:21 Comment(0)
M
2

clock() doesn't return milliseconds or seconds on Linux; it returns clock ticks. On Linux, CLOCKS_PER_SEC is usually 1,000,000, so the value is effectively in microseconds. The proper way to interpret the value returned by clock() is to divide it by CLOCKS_PER_SEC to figure out how much time has passed.
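
A small sketch of that interpretation (mine, for illustration; the work loop is a placeholder):

#include <time.h>
#include <stdio.h>

int main()
{
    clock_t c0 = clock();
    /* ... CPU-bound work ... */
    clock_t c1 = clock();

    /* CLOCKS_PER_SEC converts ticks to seconds; multiply by 1000 for ms */
    printf("CPU time: %.3f ms\n", 1000.0 * (c1 - c0) / CLOCKS_PER_SEC);
    return 0;
}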

Metronymic answered 25/2, 2009 at 23:7 Comment(2)
Not on the box I'm working on! Plus, I am dividing by CLOCKS_PER_SEC, but it's pointless because the resolution only goes down to the second.Despumate
well, to be fair, the unit is microseconds (CLOCKS_PER_SEC is 1000000 on all POSIX systems); it just has one-second resolution. :-P.Mondragon
C
2

This should work...tested on a mac...

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main() {
        struct timeval tv;
        struct timezone tz;
        struct tm *tm;

        gettimeofday(&tv, &tz);
        tm = localtime(&tv.tv_sec);
        /* tv_usec is the sub-second part in microseconds */
        printf("StartTime: %d:%02d:%02d %ld \n", tm->tm_hour, tm->tm_min, tm->tm_sec, (long)tv.tv_usec);
        return 0;
}

Yeah...run it twice and subtract...

Complicated answered 25/2, 2009 at 23:18 Comment(0)
I
1

In the POSIX standard, clock has its return value defined in terms of the CLOCKS_PER_SEC symbol, and an implementation is free to define this in any convenient fashion. Under Linux, I have had good luck with the times() function.
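
A minimal sketch of using times() for wall-clock intervals (my illustration, not the answerer's code; times() returns elapsed real time in kernel clock ticks, scaled by sysconf(_SC_CLK_TCK)):

#include <sys/times.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    long ticks_per_sec = sysconf(_SC_CLK_TCK);   /* kernel clock ticks per second */
    struct tms t;

    clock_t start = times(&t);
    /* ... work to be timed ... */
    clock_t end = times(&t);

    printf("Elapsed: %ld ms\n", (long)(end - start) * 1000 / ticks_per_sec);
    return 0;
}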

Invasive answered 25/2, 2009 at 23:8 Comment(0)
V
1

gettimeofday - the problem is that it can return lower values if your hardware clock is changed (by NTP, for example).
Boost - not available for this project.
clock() - usually returns a 4-byte integer, which means it has low capacity, and after some time it starts returning negative numbers.

I prefer to create my own class and update it every 10 milliseconds; this way is more flexible, and I can even improve it to have subscribers.

#include <stdint.h>

class MyAlarm {
    static int64_t tiempo;
    static bool running;
public:
    static int64_t getTime() { return tiempo; }
    static void callback(int sig) {
        if (running) {
            tiempo += 10L;   /* advance the counter by 10 ms per timer tick */
        }
    }
    static void run() { running = true; }
};

int64_t MyAlarm::tiempo = 0L;
bool MyAlarm::running = false;

To refresh it, I use setitimer:

#include <signal.h>
#include <string.h>
#include <sys/time.h>

int main(){
    struct sigaction sa;
    struct itimerval timer;

    MyAlarm::run();
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = &MyAlarm::callback;
    sigaction(SIGALRM, &sa, NULL);

    /* fire the first tick after 10 ms ... */
    timer.it_value.tv_sec = 0;
    timer.it_value.tv_usec = 10000;

    /* ... and then every 10 ms after that */
    timer.it_interval.tv_sec = 0;
    timer.it_interval.tv_usec = 10000;

    setitimer(ITIMER_REAL, &timer, NULL);
    .....

Look at setitimer and the ITIMER_VIRTUAL and ITIMER_REAL.

Don't use the alarm or ualarm functions; you will get low precision when your process is under heavy load.

Vinni answered 4/9, 2012 at 12:25 Comment(0)
S
0

I prefer the Boost Timer library for its simplicity, but if you don't want to use third-party libraries, using clock() seems reasonable.

Shankle answered 25/2, 2009 at 23:18 Comment(0)
D
0

As an update, it appears that on Windows clock() measures wall clock time (with CLOCKS_PER_SEC precision)

 http://msdn.microsoft.com/en-us/library/4e2ess30(VS.71).aspx

while on Linux it measures CPU time across the cores used by the current process

http://www.manpagez.com/man/3/clock

and (as it appears, and as noted by the original poster) actually with less precision than CLOCKS_PER_SEC, though maybe this depends on the specific version of Linux.

Dameron answered 11/8, 2010 at 14:29 Comment(0)
D
0

I like the Hola Soy method of not using gettimeofday(). It happened to me on a running server that the admin changed the timezone. The clock was updated to show the same (correct) local value. This caused the time() and gettimeofday() functions to shift by 2 hours, and all the timestamps in some services got stuck.

Dekow answered 9/10, 2014 at 12:27 Comment(0)
J
0

I wrote a C++ class using timeb.

#include <sys/timeb.h>  /* note: ftime() is legacy, removed from POSIX.1-2008 */
class msTimer 
{
public:
    msTimer();
    void restart();
    float elapsedMs();
private:
    timeb t_start;
};

Member functions:

msTimer::msTimer() 
{ 
    restart(); 
}

void msTimer::restart() 
{ 
    ftime(&t_start); 
}

float msTimer::elapsedMs() 
{
    timeb t_now;
    ftime(&t_now);
    return (float)(t_now.time - t_start.time) * 1000.0f +
           (float)(t_now.millitm - t_start.millitm);
}

Example of use:

#include <cstdlib>
#include <iostream>

using namespace std;

int main(int argc, char** argv) 
{
    msTimer t;
    for (int i = 0; i < 5000000; i++)
        ;
    std::cout << t.elapsedMs() << endl;
    return 0;
}

Output on my computer is '19'. The accuracy of the msTimer class is on the order of milliseconds. In the usage example above, the total execution time of the for-loop is tracked. This time includes the operating system switching the execution context of main() in and out due to multitasking.

Jemine answered 23/1, 2016 at 4:4 Comment(0)
