Creating OpenGL texture from SDL2 surface - strange pixel values
I'm trying to use SDL2 to load a texture for OpenGL rendering of Wavefront objects (currently I'm testing with the fixed pipeline, but I eventually plan to move to shaders). The problem is that the loaded texture, applied to a quad (and to a model that uses a small part in the bottom right of the texture), looks like this:

[Image: a sample of the effect (source: image-upload.de)]

[Image: the texture I used]

The image loads fine and looks perfectly normal when drawn with SDL functions, so it's probably the conversion to an OpenGL texture that's broken. Note that I have alpha blending enabled and the texture is still completely opaque - so the values are not completely random, and probably not uninitialized memory. This is my code for converting the surface (cobbled together from various tutorials and questions on this site):

GLuint glMakeTexture(bool mipmap = false, int request_size = 0) { // Only works on 32-bit surfaces
    GLuint texture = 0;
    if ((bool)_surface) {
        int w,h;
        if (request_size) { // NPOT and rectangular textures have been widely supported for at least a decade now; you should never need this...
            w = h = request_size;
            if (w<_surface->w || h<_surface->h) return 0; // No can do.
        } else {
            w = _surface->w;
            h = _surface->h;
        }

        SDL_LockSurface(&*_surface);
        std::cout<<"Bytes per pixel: "<<(int)_surface->format->BytesPerPixel<<std::endl;

        Uint8 *temp = (Uint8*)malloc(w*h*sizeof(Uint32)); // Yes, I know it's 4...
        if (!temp) return 0;

        // Optimized code
        /*for (int y = 0; y<h; y++) { // Pitch is given in bytes, so we need to cast to 8 bit here!
            memcpy(temp+y*w*sizeof(Uint32),(Uint8*)_surface->pixels+y*_surface->pitch,_surface->w*sizeof(Uint32));
            if (w>_surface->w) memset(temp+y*w*sizeof(Uint32)+_surface->w,0,(w-_surface->w)*sizeof(Uint32));
        }
        for (int y = _surface->h; y<h; y++) memset(temp+y*w*sizeof(Uint32),0,w*sizeof(Uint32));
        GLenum format = (_surface->format->Rmask==0xFF)?GL_RGBA:GL_BGRA;*/

        // Naive code for testing
        for (int y = 0; y<_surface->h; y++)
            for (int x = 0; x<_surface->w; x++) {
                int mempos = (x+y*w)*4;
                SDL_Color pcol = get_pixel(x,y);
                temp[mempos] = pcol.r;
                temp[mempos+1] = pcol.g;
                temp[mempos+2] = pcol.b;
                temp[mempos+3] = pcol.a;
            }
        GLenum format = GL_RGBA;

        SDL_UnlockSurface(&*_surface);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, format, w, h, 0, format, GL_UNSIGNED_BYTE, temp);
        if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        else glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        free(temp); // Always clean up...
    }
    return texture;
}

EDIT: _surface is actually a std::shared_ptr to SDL_Surface. Thus, the &* when (un)locking it.
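
For illustration, such a member might be created like this (a hypothetical sketch: the file name is a placeholder and IMG_Load from SDL_image is an assumption, but passing SDL_FreeSurface as the deleter is what makes the shared_ptr clean up correctly):

#include <memory>           // std::shared_ptr
#include <SDL2/SDL_image.h> // IMG_Load (assumption: SDL_image is used for loading)

std::shared_ptr<SDL_Surface> _surface(IMG_Load("texture.png"), SDL_FreeSurface);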

Btw, SDL claims the surface is formatted as 32-bit RGBA on my machine; I checked that already.
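
This is the kind of check I mean (a minimal sketch; SDL_GetPixelFormatName is the SDL2 call that returns the format name as a string):

std::cout << SDL_GetPixelFormatName(_surface->format->format) << std::endl; // e.g. "SDL_PIXELFORMAT_ABGR8888"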

The code that binds the texture and draws the quad is here:

glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindTexture(GL_TEXTURE_2D,_texture[MAP_KD]);

static bool once = true;
if (once) {
    int tex;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);
    bool valid = glIsTexture(tex);
    std::cout<<tex<<" "<<valid<<std::endl;
    once = false;
}
glBegin(GL_TRIANGLE_STRIP);
    //glColor3f(1.f,1.f,1.f);
    glNormal3f(0,1,0);
    glTexCoord2f(0.f,0.f); glVertex3f(0,0,0);
    glTexCoord2f(0.f,1.f); glVertex3f(0,0,1);
    glTexCoord2f(1.f,0.f); glVertex3f(1,0,0);
    glTexCoord2f(1.f,1.f); glVertex3f(1,0,1);
glEnd();

The axe is drawn later from an index list; the code is way too long to share here (and besides, it works fine apart from the texture).

I also tried the naive method found in many tutorials of passing _surface->pixels directly to glTexImage2D(), but that doesn't help either (and I heard it's the wrong way to do it anyway, because pitch != width*BytesPerPixel in general). The commented-out "optimized" code looks exactly the same, btw (as expected); I wrote the naive version below it for easier testing. Setting all pixels to a specific color or making the texture partially transparent works as expected, so I assume OpenGL loads the values in temp correctly. It's likely my understanding of the memory layout in SDL2 surfaces that's messed up.
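
For clarity, the naive call in question looks roughly like this (a sketch of the common tutorial pattern, assuming a 32-bit RGBA surface; it silently breaks when rows are padded):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _surface->w, _surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, _surface->pixels);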

FINAL EDIT: Solution (resetting GL_UNPACK_ROW_LENGTH is key, thanks to Peter Clark):

GLuint glTexture(bool mipmap = false) {
    GLuint texture = 0;
    if ((bool)_surface) {
        GLenum texture_format, internal_format, tex_type;
        if (_surface->format->BytesPerPixel == 4) {
            if (_surface->format->Rmask == 0x000000ff) {
                texture_format = GL_RGBA;
                tex_type = GL_UNSIGNED_INT_8_8_8_8_REV;
            } else {
                texture_format = GL_BGRA;
                tex_type = GL_UNSIGNED_INT_8_8_8_8;
            }
            internal_format = GL_RGBA8;
        } else {
            if (_surface->format->Rmask == 0x000000ff) {
                texture_format = GL_RGB;
                tex_type = GL_UNSIGNED_BYTE;
            } else {
                texture_format = GL_BGR;
                tex_type = GL_UNSIGNED_BYTE;
            }
            internal_format = GL_RGB8;
        }

        int alignment = 8;
        while (_surface->pitch%alignment) alignment>>=1; // x%1==0 for any x
        glPixelStorei(GL_UNPACK_ALIGNMENT,alignment);

        int expected_pitch = (_surface->w*_surface->format->BytesPerPixel+alignment-1)/alignment*alignment;
        if (_surface->pitch-expected_pitch>=alignment) // Alignment alone won't solve it now
            glPixelStorei(GL_UNPACK_ROW_LENGTH,_surface->pitch/_surface->format->BytesPerPixel);
        else glPixelStorei(GL_UNPACK_ROW_LENGTH,0);

        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);

        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, _surface->w, _surface->h, 0, texture_format, tex_type, _surface->pixels);
        if (mipmap) {
            glGenerateMipmap(GL_TEXTURE_2D);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        } else {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        }
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glPixelStorei(GL_UNPACK_ALIGNMENT,4);
        glPixelStorei(GL_UNPACK_ROW_LENGTH,0);
    }
    return texture;
}
Microphotograph answered 10/9, 2014 at 17:40 Comment(7)
Do you have a reference for "pitch != width*BytesPerPixel" in regard to sending pixels directly? I haven't heard that before. - Barbrabarbuda
I got that from this discussion: gamedev.net/topic/184477-sdl_surface-to-opengl-texture - might be wrong, but that wouldn't explain the crash. - Microphotograph
Thanks for the info, I think I found a solution; I've edited my answer. - Barbrabarbuda
Where exactly does it crash (what line), and do you get any kind of info when it does crash (such as access violation at 0x****, etc.)? You mentioned in your comment on my answer that you get the message "too many errors", which looks like it might be a message from NVIDIA's OGL driver. Do you get any info beyond that? - Barbrabarbuda
Had a typo in my answer - align should be the largest power of 2, not the smallest (otherwise it would always be 1). That shouldn't fix the crash, but it's an important thing to fix for performance. - Barbrabarbuda
Nope, it's exactly the NVidia message. The only other thing it says is "Error Code 1" (which is probably not helpful). It doesn't crash immediately when walking through the code in the debugger. What precisely happens is that the screen starts flashing, the application doesn't react for about half a second, and then the NVidia error message appears. - Microphotograph
Updated my answer with details about creating a "complete" texture. - Barbrabarbuda

Pixel Storage Alignment

You need to tell OpenGL what the alignment of the image is with glPixelStorei(GL_UNPACK_ALIGNMENT, [1,2,4,8]). This should be the largest power of 2 that divides the pitch, up to 8. If the padding can't be expressed that way, you may additionally have to set GL_UNPACK_ROW_LENGTH - see this answer for more details and advice on that topic. One thing to note: GL_UNPACK_ROW_LENGTH is the row length in pixels, whereas SDL_Surface::pitch is the row length in bytes. On top of that, you have to ensure that internal_format, format, and pixel_type are set to match what the SDL_Surface contains.
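
A minimal sketch of what that means in code (assuming a 32-bit SDL_Surface* named surface; this mirrors the asker's final solution above):

int align = 8;
while (surface->pitch % align) align >>= 1; // largest power-of-2 divisor of pitch, capped at 8
glPixelStorei(GL_UNPACK_ALIGNMENT, align);
// Row length is given in pixels, not bytes - hence the division:
glPixelStorei(GL_UNPACK_ROW_LENGTH, surface->pitch / surface->format->BytesPerPixel);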

"Complete" Textures

You're also not creating a complete texture when not using mipmaps. To create a "complete" texture (one that is ready to be read from or written to) with no mipmaps you must specify that the maximum mipmap level is 0 (the base image) by using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0), since the default is 1000.

Just to note: you're using glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) to automatically generate mipmaps. While this should work (though I am not familiar with it), be aware that this method is deprecated in favor of glGenerateMipmap in modern OpenGL.

A Possible Solution

// Load texture into surface...
// Error check..
// Bind GL texture...

// Calculate required align using pitch (largest power of 2 that divides pitch, up to 8)
int align = 8;
while (sdl_surface->pitch % align) align >>= 1;

glPixelStorei(GL_UNPACK_ALIGNMENT, align);
//glPixelStorei(GL_UNPACK_ROW_LENGTH, row_length); // row_length = pitch / bytes_per_pixel (in pixels!)

glTexImage2D(
  GL_TEXTURE_2D, 
  0, 
  internal_format,
  sdl_surface->w,
  sdl_surface->h,
  0, 
  format,
  pixel_type, 
  sdl_surface->pixels);
// Check for errors

if(use_mipmaps)
{
  glGenerateMipmap(GL_TEXTURE_2D);
  // Check for errors

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, /* filter mode */);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, /* filter mode */);
  // Check for errors
}
else
{
  // This makes the texture complete 
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, /* filter mode */);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, /* filter mode */);
  // Check for errors
}

// Reset GL_UNPACK_ALIGNMENT and/or GL_UNPACK_ROW_LENGTH to their defaults (4 and 0)
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
// Cleanup

Error Checking

Note that it wouldn't be a bad idea to add some error checking with glGetError() where I've marked Check for errors. You could perhaps print the error if there is one and then place a breakpoint/assert. I generally use a macro for this so I can disable most error checking in a release build - something to the effect of:

#ifdef MYPROJ_GRAPHICS_DEBUG
#define ASSERT_IF_GL_ERROR \
{ \
  GLenum last_error = glGetError(); \
  if(last_error != GL_NO_ERROR) \
  { \
    printf("GL Error: %d\n", last_error); \
    __debugbreak(); /* Visual Studio intrinsic - other compilers have similar intrinsics */ \
  } \
}
#else
#define ASSERT_IF_GL_ERROR
#endif

It is always a good idea to do error checking, and it may reveal some information about what is going on. Though since it sounds like the driver is crashing after some undefined behavior, it's possible it won't help in this instance.

A Possible Alternative

I think it's worth mentioning that I was not aware of this issue before answering this question. I hadn't run into it because I generally use stb_image to load textures. The reason I bring it up is that the stb_image documentation states that "There is no padding between image scanlines or between pixels, regardless of format.", meaning stb_image handles it for you. If you can control the images you have to load (say, if you're making a game and you control creation of the assets), stb_image is another image-loading option.
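
A minimal sketch of that path (stbi_load and stbi_image_free are the real stb_image calls; the file name is a placeholder):

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

int w, h, channels;
unsigned char* pixels = stbi_load("texture.png", &w, &h, &channels, 4); // force 4 channels (RGBA)
if (pixels) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, so byte alignment is safe
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}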

Barbrabarbuda answered 10/9, 2014 at 18:8 Comment(7)
Yes, this is the simple, but incorrect, solution found in many tutorials. It works on some machines, but others have padding bytes between the lines of the surface. On my Windows machine, this code just crashes with the OpenGL message "too many errors". I can't test it on the Linux machine right now, but I can post the result tomorrow. - Microphotograph
I could test your solution on the Linux machine, and it doesn't crash there - it just displays a white texture (even if I set MAX_LEVEL and BASE_LEVEL). The version with the intermediate buffer looks the same on both systems (like on the screenshot). I'm not using mipmaps right now, but thanks for the hint about the new method. STB also looks interesting; I'll definitely give it a try. - Microphotograph
Using STBI now, the problem still persists; so SDL_image might not be guilty after all. I'm completely at a loss as to what could be the cause, though. - Microphotograph
Found the reason. Setting GL_UNPACK_ROW_LENGTH explicitly to zero (and not to pitch) did the trick. Probably SDL set the value somewhere and "forgot" to reset it. Since you pointed me in the right direction, I'll accept your answer (and post a working example once I've sorted out the mipmaps and different image formats). - Microphotograph
@Microphotograph Interesting. If you set it to just pitch, I think that might not have worked, since the SDL documentation defines pitch as "the length of a row of pixels in bytes", whereas GL_UNPACK_ROW_LENGTH is expected to be "the number of pixels in a row". I didn't notice this either until I just went to try and figure out why. Setting the row length to the right value should work - even though it's apparently not necessary in this case (since align alone works). I'll update my answer. - Barbrabarbuda
@Microphotograph Note the above comment assumes you were using SDL when you found the solution. If you were using STB and were setting it to STB's pitch (just the width), I'm not sure why that wouldn't have worked (though with STB it is not necessary). - Barbrabarbuda
Yup, if I had never used SDL for anything, the problem likely wouldn't have happened. But I need SDL to draw my GUI, so that was not an option. I didn't test STBI with resetting GL_UNPACK_ROW_LENGTH (I erroneously assumed it would still have the default value because I hadn't touched it), but a value of zero should always work there. - Microphotograph
