Is double buffering needed anymore?

As today's cards seem to keep a list of render commands and flush only on a call to glFlush or glFinish, is double buffering really needed anymore? An OpenGL game I am developing on Linux (ATI Mobility Radeon card) with SDL/OpenGL actually flickers less when SDL_GL_SwapBuffers() is replaced by glFinish() and with SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0) in the init code. Is this particular to my card, or is such behaviour likely on all cards?
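For context, here is a minimal sketch of the kind of setup being described (SDL 1.2 API, since SDL_GL_SwapBuffers is used; the window parameters and clear colour are illustrative, not taken from the actual game):

```c
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    /* Ask for a double-buffered context; the experiment described above
     * passes 0 here and replaces the swap with glFinish(). */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    if (SDL_SetVideoMode(800, 600, 0, SDL_OPENGL) == NULL) {
        SDL_Quit();
        return 1;
    }

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw the scene into the back buffer ... */

        SDL_GL_SwapBuffers();   /* the call that was replaced with glFinish() */
    }

    SDL_Quit();
    return 0;
}
```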

EDIT: I've discovered that the cause of this is KWin. As datenwolf said, compositing without proper synchronization was the culprit. When I switched off KWin compositing, the game works fine without ANY source code changes.

Afghan answered 1/7, 2011 at 5:28

Double buffering and glFinish are two very different things.

glFinish blocks the program until all drawing operations have completed.

Double buffering is used to hide the rendering process from the user. Without double buffering, each and every drawing operation would become visible immediately, assuming the display refresh frequency were infinitely high. In practice you get display artifacts: parts of the scene visible in one state while the rest is in another state or not visible at all, incomplete pictures, and so on. Double buffering avoids this by first rendering into a back buffer, and only after the rendering has finished swapping that back buffer with the front buffer, which is what gets sent to the display device.
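As a rough illustration of the difference (SDL 1.2 naming; context setup omitted, and the drawing itself is left as a placeholder):

```c
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

/* Single-buffered: drawing goes straight into the visible front buffer,
 * so partially rendered frames can show up on screen. */
void frame_single_buffered(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw scene; each call may become visible immediately ... */
    glFinish();                /* waits for completion, but hides nothing */
}

/* Double-buffered: draw into the back buffer, then present the finished
 * frame in one step with a buffer swap. */
void frame_double_buffered(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw scene into the (invisible) back buffer ... */
    SDL_GL_SwapBuffers();      /* only complete frames ever become visible */
}
```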

Today compositing window management is becoming prevalent: Windows has Aero, Mac OS X has Quartz Extreme, and on Linux at least Unity and the GNOME 3 shell use compositing if available. The point is: compositing technically creates double buffering. Windows draw to offscreen buffers, and from these the final screen is composited. So if you're running on a machine with compositing, double buffering in your own program is somewhat redundant; all it would take is some kind of synchronization mechanism to tell the compositor when the next frame is ready. Mac OS X has this. X11 still lacks a proper synchronization scheme; see this post on the mailing list: http://lists.freedesktop.org/archives/xorg/2004-May/000607.html

TL;DR: Double buffering and glFinish are different things, and you need double buffering (of some sort) to make things look good.

Karly answered 1/7, 2011 at 6:36 Comment(5)
The post mentioned is over 6 years old. Does X11 still lack a sync scheme? And what about Windows? – Afghan
In addition to datenwolf's explanation, note that you almost never want to call either glFlush or glFinish, except maybe in some very rare special cases. glFinish does nothing that (wgl|glx)SwapBuffers does not already do (provided that vsync is enabled), and glFlush only flushes the queued commands and signals the server to begin processing them, which accomplishes nothing in the best case (but is a useless call and context switch) and results in worse performance in the worst case (because of sub-optimal scheduling of GPU resources). – Expend
Ideally, you want to throw as many commands at the GL as you can, with dependencies spread as far apart as you can (i.e. if you use a texture, first send the commands to define the texture image and set the texture state, then send some commands that do something else, and only then draw something that uses this texture; see the sketch after these comments). This ensures that a) the commands in your command stream are less likely to block because of dependencies and b) the driver can schedule other commands to utilize the GPU (OpenCL or another program?) if the commands in your queue would stall. – Expend
With that, your program should always keep running at maximum speed without any delays (never sleep or the like!), and the vertical sync will block it when appropriate, which doesn't hurt. Thus, your program does not burn 100% CPU but still runs at optimal speed. – Expend
@Sudarshan S: Unfortunately no, no real progress has been made there, which is a pity. It is really necessary, though, even if one has to admit that the topic is highly nontrivial. At the moment this works by using the XDamage extension to tell the compositor that the image has been finished. But then you're still left with the task of how to block until the next VSync. If you just call glXSwapBuffers you'll introduce a one-frame lag, because glXSwapBuffers will also block your program. – Karly
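A rough sketch of the ordering described in the comments above (SDL 1.2 with fixed-function GL; create_texture and the pixel data are placeholders, not part of the original answer):

```c
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

/* Define the texture image early, well before anything samples it,
 * so the driver can overlap the upload with other work. */
GLuint create_texture(const void *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

void frame(GLuint tex)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* ... first draw things that do NOT depend on the texture, so the
     *     command stream is less likely to stall on the upload ... */

    glBindTexture(GL_TEXTURE_2D, tex);
    /* ... then draw the objects that actually sample it ... */

    /* No glFlush, glFinish or sleeping: with vsync enabled, the buffer
     * swap blocks exactly as long as needed to pace the loop. */
    SDL_GL_SwapBuffers();
}
```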

I would expect that this has more to do with what you're rendering or with your particular hardware than with anything that generalizes to other machines. So no: don't try to do this.

Oh, and don't forget multisampling. Many implementations only multisample the back buffer; the front buffer is not multisampled. Doing a swap will downsample from the multisampled buffer.
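For what it's worth, a sketch of requesting such a multisampled, double-buffered framebuffer with SDL 1.2 (the sample count and window size are illustrative, and SDL_Init(SDL_INIT_VIDEO) is assumed to have been called already):

```c
#include <SDL/SDL.h>

/* Returns 1 on success, 0 on failure. The multisample resolve then
 * happens as part of the buffer swap, as described above. */
int init_multisampled_video(void)
{
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);   /* enable MSAA */
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);   /* 4x samples  */
    return SDL_SetVideoMode(800, 600, 0, SDL_OPENGL) != NULL;
}
```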

Kelleher answered 1/7, 2011 at 5:42
