Say I have an image of size 320x240. Now, when sampling from a sampler2D with integer image coordinates ux, uy,
I must normalize them from the pixel range [0, size) (size being width or height) to texture coordinates in [0, 1].
Now, I wonder if I should normalize like this
texture(image, vec2(ux/320.0, uy/240.0))
or like this
texture(image, vec2(ux/319.0, uy/239.0))
Because ux = 0 ... 319 and uy = 0 ... 239, only the latter actually covers the whole range [0, 1], correct? That would mean 0 corresponds to the left-most pixels and 1 to the right-most pixels, right?
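To make the endpoint difference concrete, here is the arithmetic for the extreme x indices under both variants (my own worked numbers, not from the question):

ux = 0:    0/320.0 = 0.0          and  0/319.0 = 0.0
ux = 319:  319/320.0 = 0.996875   and  319/319.0 = 1.0

So dividing by 320 never quite reaches 1.0, while dividing by 319 spans the full [0, 1] range.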
Also, I want to keep filtering, so I would prefer not to use texelFetch.
Can anyone tell me something about this? Thanks.
ux, uy are thread indices in the compute shader with ranges [0, 1, 2, ..., 319] and [0, 1, 2, ..., 239], so 320*240 threads in total. Now, I would like to map them into texture space. So I guess there is not much room to change that premise (except if I add more threads). – Redfaced
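To make the setup from the comment concrete, here is a minimal GLSL compute shader sketch of that thread-to-texture mapping. The local size, binding points, and output image are my assumptions for illustration, not part of the original question; note that texture() computes no automatic level of detail outside fragment shaders, so the base level is sampled explicitly with textureLod:

#version 430
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0) uniform sampler2D image;                  // the 320x240 input (assumed binding)
layout(binding = 1, rgba8) writeonly uniform image2D result;  // hypothetical output target

void main() {
    uint ux = gl_GlobalInvocationID.x; // 0 ... 319
    uint uy = gl_GlobalInvocationID.y; // 0 ... 239
    if (ux >= 320u || uy >= 240u) return; // guard against surplus threads

    // Variant A: divide by the size; ux = 319 maps to 319/320.
    vec2 uvA = vec2(float(ux) / 320.0, float(uy) / 240.0);
    // Variant B: divide by size - 1; ux = 319 maps exactly to 1.0.
    vec2 uvB = vec2(float(ux) / 319.0, float(uy) / 239.0);

    // Filtered sample of the base mip level (which uv to use is the question).
    vec4 c = textureLod(image, uvA, 0.0);
    imageStore(result, ivec2(ux, uy), c);
}

With this local size the dispatch would be glDispatchCompute(20, 15, 1).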