How to query Vsync phase in Linux

I need to create a C++ function that will return the number of seconds until the next Vsync interval as a floating point value.

Why?

I am creating programs that display rectangles that follow the mouse cursor. Ostensibly OpenGL provides a vsync mechanism in the glXSwapBuffers function, but I have found this to be unreliable. With some card drivers you get vsync; with others you don't. On some you get vsync, but you also get an extra two frames of latency.

But this is not a bug in OpenGL. The spec is intentionally vague: "The contents of the back buffer then become undefined. The update typically takes place during the vertical retrace of the monitor, rather than immediately after glXSwapBuffers is called." The key word being "typically"... basically glXSwapBuffers doesn't promise squat w.r.t. vsync. Go figure.

In my current attempt to solve this basic problem, I guess an initial vsync time and afterwards assume the phase equals elapsed time MOD 1/(59.85 Hz), which seems to sync up with my current monitor. But this doesn't work so well because I don't actually know the initial phase. So I get one tear. At least it doesn't move around. But what I really need is to just measure the current vsync phase somehow.

No, I don't want to rely on some OpenGL call to do a vsync for me. Because of the vagueness in the spec, this gives the OpenGL implementation license to add as much latency as it pleases.

No, I don't want to rely on some SGI extension or some other thing that has to be installed to make it work. This is graphics 101. Vsync. Just need a way to query its state. SOME builtin, always-installed API must have this.

Maybe I can create a secondary thread that waits for Vsync somehow, and records the time when this happens? But be aware that the following sequence:

#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <assert.h>
#include <linux/types.h>
#include <linux/ioctl.h>
#include <linux/fb.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

int main()
{
  int fb = open("/dev/fb0", O_RDWR);
  assert(fb != -1);
  int zero = 0;
  if (ioctl(fb, FBIO_WAITFORVSYNC, &zero) == -1)
    printf("fb ioctl failed: %s\n", strerror(errno));
  close(fb);
  return 0;
}

does NOT work in Debian. Result:

% ./a.out
fb ioctl failed: Inappropriate ioctl for device
% ls -l /dev/fb0
crw-rw-rw- 1 root video 29, 0 Sep  1 20:52 /dev/fb0

There must be some way to just read the phase from a device, or some other OpenGL call. OpenGL is THE THING for graphics. Vsync is graphics 101.

Please help.

Vase answered 2/9, 2016 at 19:8 Comment(4)
Did you check that /dev/fb0 exists in your environment in the first place, and that the open() call went OK?Annalisaannalise
Yes, I have /dev/fb0, and the open() did not return -1.Vase
The specific error message from the FBIO_WAIT_FORVSYNC is "Inappropriate ioctl for device". The ls -l says "crw-rw-rw- 1 root video 29, 0 Sep 1 20:52 /dev/fb0". I am not root but all users have rw access as you can see.Vase
Shrug, I can't tell more than I already did.Annalisaannalise

When you search for FBIO_WAITFORVSYNC in the Linux kernel sources, you can see that it is implemented only for a few graphics card drivers, not for all of them.

So, if you happen to have one of the many other cards, you get "Inappropriate ioctl for device", which just means the ioctl is not implemented by that graphics card's driver.

Maybe How to wait for VSYNC in Xlib app? points you in the right direction.

Carib answered 2/9, 2016 at 20:54 Comment(2)
Sadly, this seems to be the right answer. The VSYNC is working on some kernels on some cards, but not others. Beyond that, it doesn't seem to matter which API I use.Vase
Just thought of a way to make it work in all Linux versions. See my answer.Vase

Outline of a solution that is better than giving up:

  1. Search on digi-key for a MAX chip that outputs the sync signal.

  2. Install RS232 card.

  3. Connect the sync signal to a handshake line on the RS232.

  4. Use standard termios API that will work on any Linux.

  5. Encase amazing product in ceramic epoxy block and sell for $500.

Vase answered 20/9, 2017 at 16:11 Comment(5)
Well, OK, Hmm, since we haven't done this yet, I have to go back to Olaf's answer for now.Vase
I am so confused. Are you talking about the sync line in obsolete VGA cables? 0_oSnips
Did you get your $500? :-)Modify
@Snips Essentially, yes. There is an equivalent in HDMI, but you need an IC to decode the LVDS symbols. That's why I can charge $500 for it! Modern technology creates new opportunities. :)Vase
@Scott Sorry, I still haven't gotten around to actually building this.Vase

This is graphics 101. Vsync. Just need a way to query its state. SOME builtin, always-installed API must have this.

No, there "must"n't be a way to do that. At least, not anything that gets exposed to you. And certainly not anything cross-platform.

After all, you do not own the screen. The system owns the screen; you are only renting some portion of it, and thus are at the mercy of the system. The system deals with vsync; your job is to fill in the image(s) that get displayed there.

Consider Vulkan, which is about as low level as you're going to get these days without actually being the graphics driver. Its WSI interface is explicitly designed to avoid allowing you to do things like "wait until the next vsync".

Its presentation system does offer a variety of modes, but the only one that implementations are required to support is FIFO: strict vsync, but no tearing. Of course, Vulkan's WSI does at least allow you to choose how much image buffering you want. But if you use FIFO with only a double-buffer, and you're late at providing that image, then your swap isn't going to be visible until the next vsync.

Maggot answered 2/9, 2016 at 20:32 Comment(2)
The system owns the screen, but VSYNC affects my window. So I ought to be able to access it.Vase
Nicol is saying that if you stack up N buffered frames, you let the system worry about when to pull the frames out of the queue and apply them to the screen. Just wanted to make that clear.Modify

A short answer is: vsync used to be popular on computers when video buffering was expensive. Nowadays, with the general use of double-buffered animation, it is less important. I used to get access to vsync from the graphics card on an IBM PC before windowing systems, and would not mind getting VSYNC even now. With double buffering, you still run the risk that the raster scan occurs while you are blitting the buffer to video memory, so it would be nice to sync that. However, with double buffering you eliminate a lot of the "sparkle" effects and other artifacts of direct video drawing, because you are doing a linear blit instead of individual pixel manipulation.

It's also possible (as the previous poster implied) that the fact that both of your buffers exist in video memory, and the fact that the display manager carefully manages the blits to the screen (compositing), render these effects nonexistent.

How do I handle this now? I keep a frame timer, of say 30 frames per second, that I use to flip the buffers. It is not particularly synchronized to the actual frame time on the graphics card.

Modify answered 3/5, 2021 at 23:13 Comment(1)
Sure, the extra buffer itself is cheap. But then you're adding a more complex API. As noted in the question, I found glXSwapBuffers not to behave consistently enough (across system configurations) to be able to rely on it. And double buffering also adds up to 1 frame of latency.Vase