Why does a simple GLFW program eat all the available CPU even though the program is idle (according to Process Explorer)?
I have a very simple game loop using GLFW, as follows (Windows x64, Release mode).

I would expect the program to execute very rapidly; however, my delta as computed below is always 16.667 ms, which suggests GLFW is somehow limiting the speed of my main loop. That in itself is not a problem, as I don't care about getting more than 60 Hz. However, Process Explorer and Windows Task Manager report that my program is using most of a CPU core.

Specifically, it seems that glfwSwapBuffers() eats a lot of CPU, even though I am drawing nothing. Removing that call drops CPU usage to 0.5%.

Incidentally, my Sleep call is almost never reached, because the delta is always almost exactly 16.6 ms.

int main()
{
    double prevTime = glfwGetTime();
    //init glfw ..
    while (!glfwWindowShouldClose(window))
    {
        double time0 = glfwGetTime();
        double delta = time0 - prevTime;

        if (delta >= g_FrameInterval)
        {
            glfwPollEvents();
            prevTime = time0;
            glfwSwapBuffers(window);
        }
        else
        {
            Sleep(10);
        }
    }
}
Achievement answered 19/9, 2014 at 16:53 Comment(1)
You probably should add the Windows tag, since your question is specific to Windows.Tao
glfwSwapBuffers is waiting for the monitor vsync. This is why your loop runs at 60 Hz (that is your monitor's refresh rate). As for the high CPU usage: the OS likely doesn't put your process to sleep while it waits, because doing so would risk missing the vsync if the process can't wake back up quickly enough. Instead, the CPU is put in a busy loop until the vsync. Here is a fuller explanation of the issue.

Cycling answered 19/9, 2014 at 16:59 Comment(1)
Ahh, interesting. This makes sense. I just discovered after posting that glfwSwapBuffers appears to be blocking, and that setting glfwSwapInterval(0); stops this and consequently dramatically drops the CPU usage reported by Windows. I suppose the downside of disabling it and using my Sleep will be tearing.Achievement

It seems you need to synchronize your thread based on when swap buffers returns. Do a couple of "dummy" swap-buffer calls (with a start-up screen), reading a timer after each call to determine the refresh rate (it could be 120 Hz on some monitors, or on an old CRT monitor 60, 75, 85, 100, 120, 160, or 200 Hz) and to set an initial timer count.

If it's OK to just run at the monitor rate, then you could use a fixed Sleep() value, assuming some maximum overhead for your code (depending on the slowest possible target system). The default timer tick rate for Windows is 64 Hz (15.625 ms), but it can be sped up using timeBeginPeriod(1), in which case Sleep(n) takes about n ms on Windows 7 or later, but up to n+1 ms on Windows XP. As an example, if your code needs less than 5 ms of CPU time per frame, then at 60 Hz you could use a fixed Sleep(10) (or Sleep(9) on Windows XP) after each swap-buffers call; at 120 Hz, Sleep(2) (or Sleep(1) on Windows XP).

Many games use a separate thread for the physics, running at a fixed frequency unrelated to the video frequency. Here is an example of this without any drift over time (the delta is always based on an original reading of a high-frequency clock). It would run in a separate thread from the graphics thread, and signal the graphics thread whenever a frame update is ready (mutex, semaphore, or some type of messaging function).

/* code for a thread to run at fixed frequency */
typedef unsigned long long UI64;        /* unsigned 64 bit int */
#define FREQ    400                     /* frequency */

LARGE_INTEGER liPerfFreq;               /* counter ticks per second */
LARGE_INTEGER liPerfTemp;               /* used for query */
UI64 uFreq = FREQ;                      /* process frequency */
UI64 uOrig;                             /* original tick */
UI64 uWait;                             /* tick rate / freq */
UI64 uRem = 0;                          /* tick rate % freq */
UI64 uPrev;                             /* previous tick based on original tick */
UI64 uDelta;                            /* current tick - previous */
UI64 u2ms;                              /* 2 ms of ticks */
DWORD dwLateStep = 0;                   /* number of late steps */
UI64 i;

    /* ... */ /* wait for some event to start thread */
    timeBeginPeriod(1);                 /* set timer period to 1 ms */
    Sleep(128);                         /* wait for it to stabilize */

    QueryPerformanceFrequency(&liPerfFreq);  /* get counter tick rate */
    u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);

    QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
    uOrig = uPrev = liPerfTemp.QuadPart;

    for(i = 0; i < (uFreq*30); i++){
        /* update uWait and uRem based on uRem */
        uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
        uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
        /* wait for uWait ticks */
        while(1){
            QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
            uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
            if(uDelta >= uWait)
                break;
            if((uWait - uDelta) > u2ms)
                Sleep(1);
        }
        if(uDelta >= (uWait*2))
            dwLateStep += 1;
        uPrev += uWait;
        /* fixed frequency code goes here */
        /*  along with some type of break when done */
    }

    timeEndPeriod(1);                   /* restore period */
Overweening answered 19/9, 2014 at 17:1 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.