C++ time measurement looks too slow
I am programming a game using OpenGL and GLUT, and I am applying a common game development technique: measuring the time consumed by each iteration of the game's main loop, so the game scene can be updated proportionally to the time elapsed since the last update. To achieve this, I have this at the start of the loop:

void logicLoop () {

    // initialTime is a global float holding the previous iteration's timestamp
    float finalTime = (float) clock() / CLOCKS_PER_SEC;
    float deltaTime = finalTime - initialTime;
    initialTime = finalTime;

    ...
    // Here I move things using the deltaTime value
    ...
}

The problem came when I added a bullet to the game. If the bullet does not hit any target within two seconds, it must be destroyed. So I keep a record of the moment the bullet was created, like this:

class Bullet: public GameObject {

    float birthday;

public:
    Bullet () {
        ...
        // Some initialization stuff
        ...

        birthday = (float) clock() / CLOCKS_PER_SEC;
    }

    float getBirthday () { return birthday; }

};

And then I added this to the logic, just after the finalTime and deltaTime measurement:

if (bullet != NULL) {
    if (finalTime - bullet->getBirthday() > 2) {
        world.remove(bullet);
        bullet = NULL;
    }
}

It looked fine, but when I ran the code, the bullet stayed alive far too long. Looking for the problem, I printed the value of (finalTime - bullet->getBirthday()) and saw that it increases really slowly, as if it were not a time measured in seconds.

Where is the problem? I thought the result would be in seconds, so the bullet would be removed after two seconds.

Woad answered 2/5, 2017 at 19:1 Comment(3)
Not sure what's going on here, but if you have C++11 you can try using <chrono> and storing a std::chrono::time_point in the bullet.Fleshy
Why not use glutGet(GLUT_ELAPSED_TIME)?Dashtikavir
@Dashtikavir: Although it seems likely and obvious, let's be sure, as the documentation does not say: that's wall time, yeah?Irmgard

This is a common mistake. clock() does not measure the passage of actual time; it measures how much time has elapsed while the CPU was running this particular process.

Other processes also take CPU time, so the two clocks are not the same. Any time your operating system spends executing some other process's code, including while this one is "sleeping", does not count toward clock(). And if your program is multithreaded on a system with more than one CPU core, clock() may "double count" time!
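
Here is a quick self-contained demonstration of the difference (my own sketch, not the asker's code): sleeping for two seconds advances the wall clock by two seconds, while clock() barely moves.

#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main () {
    std::clock_t cpuStart = std::clock();
    auto wallStart = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(2));  // no CPU work here

    double cpuSeconds = (double) (std::clock() - cpuStart) / CLOCKS_PER_SEC;
    double wallSeconds = std::chrono::duration<double>(
                             std::chrono::steady_clock::now() - wallStart).count();

    // Typically prints something like: cpu: 0.000 s   wall: 2.000 s
    std::printf("cpu: %.3f s   wall: %.3f s\n", cpuSeconds, wallSeconds);
}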

Humans have no knowledge or perception of OS time slices: we just perceive the actual passage of time (known as "wall time"). Ultimately, then, you will see clock()'s timebase differ from wall time.

Do not use clock() to measure wall time!

You want something like gettimeofday() or clock_gettime() instead. To avoid the effects of people changing the system time, on Linux I personally recommend clock_gettime() with the system's "monotonic clock": a clock that ticks in step with wall time but has an arbitrary epoch, unaffected by people playing around with the computer's time settings. (Obviously switch to a portable alternative if need be.)
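
A minimal sketch of the asker's delta-time loop rebuilt on top of the monotonic clock (POSIX-specific; the monotonicSeconds helper name is my own, and older glibc needs -lrt):

#include <time.h>

// Seconds since an arbitrary fixed epoch; immune to system-time changes.
static double monotonicSeconds () {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void logicLoop () {
    static double initialTime = monotonicSeconds();

    double finalTime = monotonicSeconds();
    double deltaTime = finalTime - initialTime;  // real elapsed seconds
    initialTime = finalTime;

    // ... move things using deltaTime ...
}

The portable C++11 equivalent, as suggested in the comments above, is std::chrono::steady_clock.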

This is actually discussed on the cppreference.com page for clock():

std::clock time may advance faster or slower than the wall clock, depending on the execution resources given to the program by the operating system. For example, if the CPU is shared by other processes, std::clock time may advance slower than wall clock. On the other hand, if the current process is multithreaded and more than one execution core is available, std::clock time may advance faster than wall clock.

Please get into the habit of reading the documentation for all the functions you use whenever you are not sure what is going on.

Edit: It turns out GLUT itself has a function you can use for this, which is mighty convenient. glutGet(GLUT_ELAPSED_TIME) gives you the number of milliseconds of wall time elapsed since your call to glutInit(). So I guess that's what you need here. It may even be slightly cheaper, particularly if GLUT (or some other part of OpenGL) already requests the wall time periodically and this function merely returns that already-obtained value, saving you an unnecessary extra system call (which costs).
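
In the asker's terms, that would look something like this sketch (assuming birthday is likewise taken from glutGet in the Bullet constructor):

// At the top of logicLoop(): GLUT reports milliseconds, so divide to get seconds.
float finalTime = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;
float deltaTime = finalTime - initialTime;
initialTime = finalTime;

// The two-second lifetime check now uses wall time, as intended.
if (bullet != NULL && finalTime - bullet->getBirthday() > 2) {
    world.remove(bullet);
    bullet = NULL;
}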

Irmgard answered 2/5, 2017 at 19:8 Comment(1)
This is the correct answer. In low-latency systems we also use clock_gettime for clocking our applications.Theomorphic

If you are on Windows you can use QueryPerformanceFrequency / QueryPerformanceCounter, which gives pretty accurate time measurements.

Here's an example.

#include <Windows.h>
#include <iostream>
using namespace std;

int main()
{
    LARGE_INTEGER freq = {0, 0};
    QueryPerformanceFrequency(&freq);     // counter ticks per second

    LARGE_INTEGER startTime = {0, 0};
    QueryPerformanceCounter(&startTime);

    // STUFF.
    for (size_t i = 0; i < 100; ++i) {
        cout << i << endl;
    }

    LARGE_INTEGER stopTime = {0, 0};
    QueryPerformanceCounter(&stopTime);

    // Tick delta divided by tick rate gives elapsed wall-clock seconds.
    const double elapsed = ((double)stopTime.QuadPart - (double)startTime.QuadPart) / freq.QuadPart;
    cout << "Elapsed: " << elapsed << endl;

    return 0;
}
Lombardi answered 2/5, 2017 at 19:12 Comment(1)
It's hard to tell for sure because MSDN is so poor, but this seems to be process-independent and stable, so a good choice on Windows.Irmgard
