How to normalize image coordinates for texture space in OpenGL?
Say I have an image of size 320x240. Now, sampling from a sampler2D with integer image coordinates ux, uy, I must normalize them from the range [0, size] (where size is the width or height) to texture coordinates.

Now, I wonder if I should normalize like this

texture(image, vec2(ux/320.0, uy/240.0))

or like this

texture(image, vec2(ux/319.0, uy/239.0))

Because ux = 0 ... 319 and uy = 0 ... 239. The latter one actually covers the whole range [0, 1], correct? That means 0 corresponds to, e.g., the left-most pixels and 1 corresponds to the right-most pixels, right?

Also I want to maintain filtering, so I would like to not use texelFetch.

Can anyone tell something about this? Thanks.

Redfaced asked 13/11, 2016 at 14:4 Comment(0)
No, the first one is actually correct:

texture(image, vec2(ux/320.0, uy/240.0))

Your premise that "ux = 0 ... 319 and uy = 0 ... 239" is incorrect. If you render a 320x240 quad, say, then it is actually ux = 0 ... 320 and uy = 0 ... 240.

This is because pixels and texels are squares sampled at half-integer coordinates. So, for example, let's assume that you render your 320x240 texture on a 320x240 quad. Then the bottom-left pixel (0,0) will actually be sampled at screen-coordinates (.5,.5). You normalize it by dividing by (320,240), but then OpenGL will multiply the normalized coordinates back by (320,240) to get the actual texel coordinates, so it will sample (.5,.5) from the texture, which corresponds to the center of the (0,0) pixel, which returns its exact color.
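For the rendered-quad case described above, a minimal fragment-shader sketch can make this concrete. It assumes a 320x240 texture drawn on a same-sized quad; gl_FragCoord.xy already carries the half-integer pixel centers, so simply dividing by the texture size lands on texel centers:

```glsl
#version 330 core
// Assumed setup: a 320x240 texture rendered on a 320x240 quad.
uniform sampler2D image;
out vec4 color;

void main() {
    // gl_FragCoord.xy is at pixel centers, e.g. (.5, .5) for pixel (0,0).
    // Dividing by the texture size therefore samples exact texel centers.
    vec2 uv = gl_FragCoord.xy / vec2(textureSize(image, 0));
    color = texture(image, uv);
}
```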

It is important to think of pixels in OpenGL as squares, so that coordinate (0,0) corresponds to the bottom-left corner of the bottom-left pixel and the non-normalized (w,h) corresponds to the top-right corner of the top-right pixel (for a texture of size (w,h)).

Geyer answered 13/11, 2016 at 14:14 Comment(4)
ux, uy are thread indices in the compute shader with ranges [0,1,2,... 319] and [0,1,2 ... 239], i.e. 320*240 in total. Now I would like to map them into texture space. So I guess there is not much room to change that premise (except if I add more threads).Redfaced
Ah, then you should do the half-pixel shift yourself, that is, add (.5,.5) to (ux,uy) when sampling from the texture. Otherwise 319 will correspond to in-between the 318th and 319th columns.Geyer
You still have to divide by 320 and 240, but you have to add vec2(1/(2*width), 1/(2*height)) in each thread, which is half a texel. This is because you don't actually want to sample at position 1 but at a maximum of 1 - 1/(2*width).Kinkajou
In general: If you want to sample from a texture without having any interpolation performed, texelFetch is the way to go and already takes integer coordinates as input.Kinkajou
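Putting these comments together, a minimal compute-shader sketch might look like the following. The local size, binding points, and output image are assumptions, not from the original question; note that compute shaders have no implicit derivatives, so textureLod is used to select LOD 0 explicitly:

```glsl
#version 430
// One invocation per texel: ux = 0..319, uy = 0..239.
layout(local_size_x = 16, local_size_y = 16) in;

uniform sampler2D image;                                // 320x240 input, filtered sampling
layout(rgba8, binding = 0) writeonly uniform image2D result;

void main() {
    ivec2 id = ivec2(gl_GlobalInvocationID.xy);         // integer texel coordinates
    vec2 size = vec2(textureSize(image, 0));            // (320, 240)
    vec2 uv = (vec2(id) + 0.5) / size;                  // half-texel shift, then normalize
    imageStore(result, id, textureLod(image, uv, 0.0)); // explicit LOD: no derivatives in compute
}
```

With this layout, a glDispatchCompute(20, 15, 1) call would cover the 320x240 grid exactly.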
Texture coordinates (and pixel coordinates) go from 0 to 1 on the edges of the pixels no matter how many pixels.

A 4 pixel wide texture

   0          0.5          1    <- texture coordinate
   |           |           |
   V           V           v 
   +-----+-----+-----+-----+
   |     |     |     |     |
   |     |     |     |     |   <- texels
   +-----+-----+-----+-----+

A 5 pixel wide texture

   0             0.5             1    <- texture coordinate
   |              |              |
   V              V              v 
   +-----+-----+-----+-----+-----+
   |     |     |     |     |     |
   |     |     |     |     |     |   <- texels
   +-----+-----+-----+-----+-----+

A 6 pixel wide texture

   0                0.5                1    <- texture coordinate
   |                 |                 |
   V                 V                 V 
   +-----+-----+-----+-----+-----+-----+
   |     |     |     |     |     |     |
   |     |     |     |     |     |     |   <- texels
   +-----+-----+-----+-----+-----+-----+

A 1 pixel wide texture

   0 0.5 1   <- texture coordinate 
   |  |  |
   V  V  V 
   +-----+
   |     |
   |     |   <- texels
   +-----+

If you use u = integerTextureCoordinate / width for each texture coordinate you'd get these coordinates

   0    0.25  0.5   0.75       <- u = intU / width;
   |     |     |     |     
   V     V     V     V     
   +-----+-----+-----+-----+
   |     |     |     |     |
   |     |     |     |     |   <- texels
   +-----+-----+-----+-----+

Those coordinates point directly between texels.

But, the texture coords you want if you want to address specific texels are like this

     0.125 0.375 0.625 0.875   
      |     |     |     |   
      V     V     V     V  
   +-----+-----+-----+-----+
   |     |     |     |     |
   |     |     |     |     |   <- texels
   +-----+-----+-----+-----+

Which you get from

   u = (integerTextureCoord + .5) / width
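As a small GLSL sketch of that formula (the helper name is made up for illustration):

```glsl
// Hypothetical helper: normalized coordinate of a texel's center.
vec2 texelCenterUV(ivec2 texel, sampler2D tex) {
    vec2 size = vec2(textureSize(tex, 0));
    return (vec2(texel) + 0.5) / size;  // texel 0 of a 4-wide texture -> u = 0.125
}
```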
Houphouetboigny answered 14/8, 2018 at 13:20 Comment(4)
Hi gman, please help me understand this a bit more. Say I have an array of RGBA values, i.e. R0G0B0A0 R1G1B1A1 R2G2B2A2 R3G3B3A3 R4G4B4A4. How can I correlate this to the texture coordinate values 0, 0.125, 0.25, and so on?Jiminez
Normally you don't think about the issues above. You just think of u = 0 as the left edge of the texture and u = 1 as the right edge; v = 0 is the bottom edge, v = 1 is the top edge. You generate UV / texture coordinates with that in mind, or you use a 3D modeling package. It's only when you're using a texture as math data and you want to pull specific values out of it that you need to know the math above. I'm not sure that answers your question. You could try this or thisHouphouetboigny
I am looking into using texture() in GLSL compute shaders, just to understand it; say I don't want to use the texelFetch function. When using texture() with WRAP, I see the values getting wrapped at the texture boundaries if I normalize the coordinates with the width, but I get the right values, without wrapping at the boundaries, when I add + 0.5/width. I am trying to understand why.Jiminez
Say I have RGBA values as a texture, stored as a linear array in global memory, and I want each thread in the work group to pick up one texel, do some operation on it, and store it back at some other memory location. I thought of using gl_GlobalInvocationID as the coordinates, normalized by the width, and passing them to texture(sampler, normGLInvId). This wrapped at the boundaries, but when I added 0.5/width, i.e. texture(sampler, normGLInvId + vec2(0.5/width, 0.5/width)), it did not. I am unable to understand why.Jiminez