OpenGL Scale Single Pixel Line
I would like to make a game that is internally 320x240, but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.

I have achieved this by setting the internal resolution via glOrtho():

glOrtho(0, 320, 240, 0, 0, 1);

And then I scale up the output resolution by a factor of 3, like this:

glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);

I draw rectangles like this:

glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();

It works perfectly at 320x240 (not scaled):

[screenshot: rectangles drawn with 1px lines at native 320x240]

When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn directly on the final 960x720 canvas. The result is 1px lines in a 3px world :(

[screenshot: the same scene at 960x720; the lines are still only 1px wide]

How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?

Aspirin answered 1/1, 2017 at 2:43 Comment(1)
If FBOs aren't your scene, there's always DIY wide lines using triangles. – Zsazsa
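
For reference, a minimal sketch of that triangle-based wide-line idea. draw_wide_line is a hypothetical helper written for illustration, not a GL call; w is in the same ortho units as your vertices, so the line thickness scales with the rest of the scene:

void draw_wide_line(float x0,float y0,float x1,float y1,float w)
    {
    // segment direction and length (sqrt needs <math.h>)
    float dx=x1-x0,dy=y1-y0;
    float len=sqrt(dx*dx+dy*dy);
    if (len<=0.0) return;
    // unit normal to the segment, scaled to half the desired width
    float nx=-dy*0.5*w/len;
    float ny=+dx*0.5*w/len;
    // two triangles covering the thick line
    glBegin(GL_TRIANGLE_STRIP);
    glVertex2f(x0+nx,y0+ny);
    glVertex2f(x0-nx,y0-ny);
    glVertex2f(x1+nx,y1+ny);
    glVertex2f(x1-nx,y1-ny);
    glEnd();
    }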
As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture, and I do not know of any workaround that works. In that case the only way around it is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course that is much, much slower than direct rendering to texture. So here is the deal:

  1. Set the low-res view.

    Do not change the resolution of your window, just the glViewport values. Then render your scene in low res (in just a fraction of the screen space).

  2. Copy the rendered screen into a texture.

  3. Set the target-resolution view.
  4. Render the texture.

    Do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before! Otherwise you would get flickering.

Here is the C++ source for this:

void gl_draw()
    {
    // render resolution and multiplier
    const int xs=320,ys=200,m=2;

    // [low res render pass]
    glViewport(0,0,xs,ys);
    glClearColor(0.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    // 50 random lines (100 vertices; RandSeed/Random are my own PRNG helpers)
    RandSeed=0x12345678;
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_LINES);
    for (int i=0;i<100;i++)
     glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
    glEnd();

    // [multiplied resolution render pass]
    static bool _init=true;
    static GLuint txrid=0;  // texture id (must be static so it survives between frames)
    BYTE map[xs*ys*3];      // RGB
    // init texture
    if (_init)              // you should also delete the texture on app exit ...
        {
        // create texture
        _init=false;
        glGenTextures(1,&txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);   // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_REPLACE);  // use texel colors as-is
        glDisable(GL_TEXTURE_2D);
        }
    // copy low res screen to CPU memory
    glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);         
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
    // set multiplied resolution view
    glViewport(0,0,m*xs,m*ys);
    glClear(GL_COLOR_BUFFER_BIT);
    // render low res screen as texture
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(hdc);   // swap buffers only here !!!
    }

And preview:

preview

I tested this on some Intel HD graphics (god knows which version) I had at my disposal and it works (while the standard render-to-texture approaches do not).

Alms answered 27/4, 2017 at 9:53 Comment(0)
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.

All you are doing is scaling up the coordinates of the primitives you render. So your code says to render a line from, for example, (20, 20) to (40, 40), and OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) to (120, 120).

But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.

Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.

The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw it to the window's actual resolution.

You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
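
For completeness, a minimal sketch of that FBO path, assuming a compatibility-profile context where the OpenGL 3.0 / ARB_framebuffer_object entry points are available (loaded via GLEW, glad, or SDL_GL_GetProcAddress); fbo, lowResTex, and drawScene() are placeholder names:

GLuint fbo=0,lowResTex=0;

void init_fbo()             // call once after context creation
    {
    // 320x240 color texture the scene will be rendered into
    glGenTextures(1,&lowResTex);
    glBindTexture(GL_TEXTURE_2D,lowResTex);
    glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,320,240,0,GL_RGB,GL_UNSIGNED_BYTE,NULL);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);    // keep hard pixel edges
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    // framebuffer object with that texture as its color attachment
    glGenFramebuffers(1,&fbo);
    glBindFramebuffer(GL_FRAMEBUFFER,fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,lowResTex,0);
    // worth checking: glCheckFramebufferStatus(GL_FRAMEBUFFER)==GL_FRAMEBUFFER_COMPLETE
    glBindFramebuffer(GL_FRAMEBUFFER,0);
    }

void draw_frame()
    {
    // pass 1: render the scene into the 320x240 texture
    glBindFramebuffer(GL_FRAMEBUFFER,fbo);
    glViewport(0,0,320,240);
    drawScene();                        // your existing drawing, with glOrtho(0,320,240,0,0,1)
    // pass 2: stretch the texture over the 960x720 window
    glBindFramebuffer(GL_FRAMEBUFFER,0);
    glViewport(0,0,960,720);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();  // draw the quad in NDC
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,lowResTex);
    glBegin(GL_QUADS);                  // full-screen quad
    glTexCoord2f(0,0); glVertex2f(-1,-1);
    glTexCoord2f(1,0); glVertex2f(+1,-1);
    glTexCoord2f(1,1); glVertex2f(+1,+1);
    glTexCoord2f(0,1); glVertex2f(-1,+1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    }

Because the magnification filter is GL_NEAREST, each low-res pixel becomes a crisp 3x3 block on screen, so the 1px lines come out 3px wide as intended.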

Quotation answered 1/1, 2017 at 2:54 Comment(3)
Yep, I agree, but I need to add: if you are going to render to texture, you can expect that Intel gfx cards will not work as expected, or at all, until they fix the drivers, which for older gfx cards (based on experience) is never... in such a case you might try glReadPixels instead. – Alms
@Spektre: If you want to add an answer that discusses specific driver bugs, feel free. I don't know precisely which bugs you're talking about or which hardware you're referring to, so I'm not qualified to discuss it. – Quotation
I added the slow OpenGL 1.0 approach, which works even on Intel. – Alms
