QGLWidget and fast offscreen rendering

Is it possible to render totally offscreen in a QGLWidget with Qt, without needing to repaint the scene to the screen, thus completely avoiding the buffer flip on the monitor?

I need to save every frame generated into the framebuffer, but since the sequence consists of 4000 frames and the refresh interval on screen is 15 ms, I spend 4000 * 15 ms = 60 s. I need to be much, much faster than that (computation is not the bottleneck here; the update is the problem).

Can rendering offscreen on a framebuffer be faster? Can I avoid the monitor refresh rate in my QGLWidget?

How do I render completely on framebuffer without the slow paintGL() calls?

Disgrace answered 12/11, 2013 at 16:7 Comment(4)
Would QGLWidget::renderPixmap help? – Marabout
How about swapBuffers? qt-project.org/doc/qt-5.0/qtopengl/qglwidget.html#swapBuffers – Piacular
@CrazyIvanovich: swapBuffers() will not help the OP solve their problem, because the swap interval is tied to VSync. Also, you don't necessarily need double buffering, depending on what you want to render. The OP needs to clarify what they're rendering. – Flowerless
I've found a QGL format flag called QGL::IndirectRendering. What is that needed for? – Disgrace

For now I'll assume we're talking Qt4.

Is it possible to render totally offscreen in a QGLWidget

Off-screen rendering isn't really a window-system-dependent task at all. The only problem with WGL (at least) and GLX in most toolkits is that you cannot have a surfaceless context, i.e. a context that's not bound to a drawable provided by the window system. In other words, you'll always have a window-system-provided default framebuffer that is immutable for as long as the context exists.

There are ways to create a context manually with X11 that doesn't require a window, but it's usually not worth the trouble. For EGL and OpenGL ES, for instance, this problem doesn't exist, because there is an extension catering exactly to this need, i.e. off-screen rendering.
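
For EGL the classic route is a tiny pbuffer surface (the surfaceless-context extension removes even that). A minimal sketch, assuming an EGL 1.4 implementation and OpenGL ES 2; all error handling omitted:

#include <EGL/egl.h>

// Sketch: create an off-screen EGL context bound to a 1x1 pbuffer.
bool createOffscreenEGLContext()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, 0, 0);

    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numCfgs = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);

    const EGLint pbAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);

    // From here on, render into FBOs; the pbuffer is just a placeholder.
    return eglMakeCurrent(dpy, surf, surf, ctx) == EGL_TRUE;
}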

You can, however, simply hide the QGLWidget after a valid context has been set up and use framebuffer objects to do everything without ever touching the default framebuffer.
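
A minimal sketch of that approach with Qt4's QGLFramebufferObject (the size and file name are arbitrary; a QApplication must already exist):

#include <QtOpenGL/QGLWidget>
#include <QtOpenGL/QGLFramebufferObject>

// Sketch: a never-shown QGLWidget acts purely as a context provider.
void renderIntoFbo()
{
    QGLWidget contextProvider;      // never call show() on it
    contextProvider.makeCurrent();  // the context must be current for GL calls

    QGLFramebufferObject fbo(512, 512);
    fbo.bind();
    // ... issue regular GL draw calls here; they now target the FBO ...
    fbo.release();

    fbo.toImage().save("frame.png"); // read the rendered frame back
}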

Can I avoid the monitor refresh rate in my QGLWidget?

No, to my knowledge the OpenGL module of Qt4 has no means to turn off vsync programmatically. You can turn to SDL or GLFW for something like that (not sure about FreeGLUT).
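
For comparison, with GLFW 3 this is a one-liner once a context exists; a sketch:

#include <GLFW/glfw3.h>

// Sketch: the swap interval is a per-context setting in GLFW.
void disableVsync(GLFWwindow *window)
{
    glfwMakeContextCurrent(window);
    glfwSwapInterval(0); // 0 = never wait for the vertical refresh
}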

However, you can always turn vsync off in your driver settings. This will also affect QGLWidget (or, more precisely, the swapping behavior of the underlying window system).

Can rendering offscreen on a framebuffer be faster?

It really shouldn't matter in the end. You're going to want the image data somewhere other than VRAM, so after rendering the current frame to an FBO, you need to get the image out anyway. You either blit the results to the front buffer (or to the back buffer and swap, if you need double buffering), or you read the pixels back before further processing the current frame.
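
For reference, the synchronous read-back typically boils down to a single glReadPixels call like the following sketch (RGBA and the tight pack alignment are assumptions; a pixel buffer object could make the transfer asynchronous):

#include <vector>
#include <QtOpenGL/QGLWidget> // pulls in the GL headers with Qt4

// Sketch: read the currently bound framebuffer into system memory.
std::vector<unsigned char> readBackPixels(int w, int h)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(w) * h * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows tightly packed, no padding
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels; // note: OpenGL returns rows bottom-up
}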

However, as with anything OpenGL and performance related, don't guess - profile!

How do I render completely on framebuffer without the slow paintGL() calls?

Once a context is set up, you don't need the widget at all. You can do all the magic yourself without Qt's intervention. The only reason paintGL() exists is to provide the user with an easy-to-use interface that's guaranteed to be called whenever the widget needs to be updated.

EDIT: As to your query in the comments, see this minimal code example which should work cross-platform without change.

#include <iostream>
#include <QtOpenGL/QGLWidget>
#include <QtGui/QApplication>

void renderOffScreen ()
{
  std::cout << glGetString(GL_VENDOR)   << std::endl;
  std::cout << glGetString(GL_RENDERER) << std::endl;
  std::cout << glGetString(GL_VERSION)  << std::endl;

  // do whatever you want here, e.g. setup a FBO, 
  // render stuff, read the results back until you're done
  // pseudocode:
  //     
  //      setupFBO();
  //   
  //      while(!done)
  //      {
  //        renderFrame();
  //        readBackPixels();
  //        processImage();
  //      }
}

int main(int argc, char* argv[])
{
  QApplication app(argc, argv);
  QGLWidget gl;

  // after construction, you should have a valid context
  // however, it is NOT made current unless show() or
  // similar functions are called
  if(!gl.isValid ())
  {
    std::cout << "ERROR: No GL context!" << std::endl;
    return -1;
  }

  // do some off-screen rendering, the widget has never been made visible
  gl.makeCurrent (); // ABSOLUTELY CRUCIAL!
  renderOffScreen ();

  return 0;
}

On my current machine the program prints:

ATI Technologies Inc.
AMD Radeon HD 7900 Series
1.4 (2.1 (4.2.12337 Compatibility Profile Context 13.101))

Please note how the QGLWidget is never actually made visible and no event processing takes place. The Qt OpenGL library is merely used for context creation; everything else is done without Qt's intervention. Just don't forget to set the viewport and related state according to your needs.
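
The "viewport and stuff" amounts to per-frame state the widget would otherwise manage for you, roughly like this sketch (512x512 is an arbitrary FBO size, the projection is just an example):

// Sketch: minimal state setup before rendering off-screen.
glViewport(0, 0, 512, 512);   // must match the FBO dimensions
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, -1, 1, -1, 1); // or whatever projection you need
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();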

Please note: if all you need is a convenient way to set up a context, you might want to switch to a toolkit that is more lightweight than Qt4, such as FreeGLUT. Personally, I found FreeGLUT much more reliable when it comes to setting up a valid context exactly the way I want it on certain hardware, e.g. Sandy Bridge CPUs.

Flowerless answered 12/11, 2013 at 16:27 Comment(1)
Let me see if I understand your very detailed answer. The trick with QGLWidget is: 1) set up the widget, 2) make the context current, 3) set up the FBO, 4) hide the QGLWidget, 5) loop over the rendering process, but draw into the FBO. Right? – Disgrace

I've found a solution involving QGLFramebufferObject and glReadPixels.

First I initialize my QGLFramebufferObject in QGLWidget::initializeGL, so that there is a valid GL context for the framebuffer object to live in.

This is a first implementation. The frame rate is ten times higher, and nothing depends on VSync anymore!

MyGLWidget::MyGLWidget(QWidget *parent) :
    QGLWidget(QGLFormat(QGL::SampleBuffers), parent) // this format only affects the on-screen widget, not the FBO
{
    // some initializations
}

void MyGLWidget::initializeGL()
{

    qglClearColor(Qt::black);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0,0,-10);

    this->makeCurrent();

    // Initialize the framebuffer object.
    // We create it with the smallest sufficient precision, i.e. GL_LUMINANCE,
    // which makes the subsequent glReadPixels calls MUCH faster because the
    // internal format is simpler and no conversions are needed.
    QGLFramebufferObjectFormat fboFormat;
    fboFormat.setMipmap(false);
    fboFormat.setSamples(0);
    fboFormat.setInternalTextureFormat(GL_LUMINANCE);
    // Create the framebuffer object (fbo is a QGLFramebufferObject* member;
    // remember to delete it in the destructor).
    fbo = new QGLFramebufferObject(QSize(this->width(), this->height()), fboFormat);
}


void MyGLWidget::generateFrames()
{
    // Keep these unsigned to avoid integer overflow when resizing
    // the vector (and the resulting std::bad_alloc exceptions).
    unsigned int slicesNumber = 1000;
    unsigned int w = this->width();
    unsigned int h = this->height();

    // This vector holds all generated frames as raw unsigned chars.
    std::vector<unsigned char> allFrames;
    allFrames.resize(w * h * slicesNumber);

    // Rows of a GL_LUMINANCE read are tightly packed only with alignment 1;
    // otherwise each row is padded to a multiple of 4 bytes.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);

    fbo->bind();
    // Inside this block, rendering goes to the framebuffer object instead of MyGLWidget.
    for (unsigned int i = 0; i < slicesNumber; i++)
    {
        this->paintGL();
        // Read the current content of the framebuffer object.
        glReadPixels(0, 0, w, h, GL_LUMINANCE, GL_UNSIGNED_BYTE, allFrames.data() + i * w * h);
        // update the scene here
    }
    fbo->release();
}
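
For completeness, one possible way to drive this without ever showing the widget; a sketch that assumes initializeGL() and generateFrames() are public in MyGLWidget (initializeGL() is protected in QGLWidget itself and is normally invoked by Qt on first show):

#include <QtGui/QApplication>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    MyGLWidget gl;       // the constructor creates a valid GL context
    gl.makeCurrent();    // make it current by hand, since we never show()
    gl.initializeGL();   // set up GL state and the FBO manually
    gl.generateFrames(); // render all slices into the FBO and read them back

    return 0;
}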
Disgrace answered 13/11, 2013 at 12:1 Comment(1)
There is no need to call makeCurrent() inside initializeGL(); by the time the latter is called, the context has already been made current. Also, you don't seem to delete the QGLFramebufferObject you create on the heap, unless that's done in the subclass's destructor, which isn't visible in your example. Finally, it's not surprising that you gain an order of magnitude in frame times: if you never swapBuffers(), vsync is completely irrelevant. – Flowerless
