How to rasterize an OpenGL triangle on half-integer pixel centers

OpenGL pixels/fragments are conceptually 1x1 squares centered on half-integer coordinates. The OpenGL 4.5 specification states:

A fragment is located by its lower left corner, which lies on integer grid coordinates. Rasterization operations also refer to a fragment’s center, which is offset by (1/2,1/2) from its lower left corner (and so lies on half-integer coordinates).

Rasterizers typically assume that pixel centers lie on the integer grid. Since I am attempting to implement a correct OpenGL triangle fill, I want to know whether the following procedure seems sound.

Let's take the simple case of a triangle with clip coordinates (-1,-1), (+1,-1), (0,+1) as shown on the left of the figure below (assume an orthographic projection and z=0). Assume we have a small 5x5 framebuffer that we map our triangle to via glViewport(0,0,5,5), as shown on the right, yielding the triangle in device coordinates with vertices (0,0), (5,0), (2.5,5).
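
As a quick check of that mapping, here is a minimal sketch in plain C (values hard-coded from this example) of the standard viewport transform xw = (xndc + 1) * width/2 + x0, yw = (yndc + 1) * height/2 + y0:

    #include <stdio.h>

    int main(void)
    {
        const float vx0 = 0.0f, vy0 = 0.0f;     /* glViewport x, y          */
        const float vw  = 5.0f, vh  = 5.0f;     /* glViewport width, height */
        const float ndc[3][2] = { {-1,-1}, {+1,-1}, {0,+1} };

        for (int i = 0; i < 3; ++i) {
            float xw = (ndc[i][0] + 1.0f) * 0.5f * vw + vx0;
            float yw = (ndc[i][1] + 1.0f) * 0.5f * vh + vy0;
            /* prints (0.0, 0.0), (5.0, 0.0), (2.5, 5.0) */
            printf("vertex %d -> (%.1f, %.1f)\n", i, xw, yw);
        }
        return 0;
    }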

[figure: pixels centered at half-integer coordinates]

As you can see, the 13 fragments (the shaded pixels in the image) whose centers lie inside the triangle should be generated by the rasterizer. Note that the fragment centers are on half-integer coordinates. To implement the OpenGL spec, this is what the result needs to be.
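
One way to sanity-check that count is a brute-force inside test with edge functions evaluated at every half-integer center. This is only a sketch: it ignores OpenGL's exact tie-breaking rules for samples that land exactly on an edge (none do in this example).

    #include <stdio.h>

    /* signed edge function: > 0 when P lies to the left of edge V0->V1 */
    static float edge(float ax, float ay, float bx, float by, float px, float py)
    {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    int main(void)
    {
        /* counter-clockwise device-space vertices */
        const float ax = 0.0f, ay = 0.0f;
        const float bx = 5.0f, by = 0.0f;
        const float cx = 2.5f, cy = 5.0f;
        int count = 0;

        for (int j = 0; j < 5; ++j)
            for (int i = 0; i < 5; ++i) {
                float px = i + 0.5f, py = j + 0.5f;   /* half-integer center */
                if (edge(ax, ay, bx, by, px, py) > 0.0f &&
                    edge(bx, by, cx, cy, px, py) > 0.0f &&
                    edge(cx, cy, ax, ay, px, py) > 0.0f)
                    ++count;
            }
        printf("%d fragments\n", count);              /* prints 13 */
        return 0;
    }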

A scan-line polygon fill would determine the x-spans where the scan lines intersect the triangle, but the scan lines are at half-integer y-values as shown in the following figure:

[figure: triangle fill with scan lines at half-integer y-values]
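
Here is a sketch of that scan-line approach, hard-coding this triangle's left and right edges (x = y/2 and x = 5 - y/2) and sampling directly at half-integer coordinates; exact on-edge fill rules are glossed over. It emits the same 13 fragments.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        for (int j = 0; j < 5; ++j) {
            float y  = j + 0.5f;            /* half-integer scan line  */
            float xl = y * 0.5f;            /* span where the scan line */
            float xr = 5.0f - y * 0.5f;     /* intersects the triangle  */

            /* first half-integer center >= xl: ceil(xl + 0.5) - 0.5 */
            for (float x = ceilf(xl + 0.5f) - 0.5f; x < xr; x += 1.0f)
                printf("fragment (%.1f, %.1f)\n", x, y);
        }
        return 0;
    }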

A hardware/firmware rasterizer will assume the pixel centers are on the integer grid, since this is the most efficient way to perform the fill. In the figure below I have shifted the device coordinates of the triangle by (-0.5, -0.5) to place the centers on the integer grid:

[figure: triangle shifted by (-0.5, -0.5) so pixel centers fall on the integer grid]

Note that the pixel centers are now indeed on the integer grid. This rasterizer would simply add (0.5, 0.5) back to each fragment center before it is passed to the fragment shader. At least, that is my plan.
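
To illustrate, here is a sketch of that plan for this particular triangle: the vertices are pre-shifted by (-0.5, -0.5), the fill runs over integer scan lines and integer x, and (0.5, 0.5) is added back when each fragment center is emitted. It produces the same 13 fragment centers as the half-integer fill above.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* device coordinates (0,0), (5,0), (2.5,5) shifted by (-0.5,-0.5), so */
        /* left edge:  x = (y + 0.5)/2 - 0.5                                   */
        /* right edge: x = 4.5 - (y + 0.5)/2                                   */
        for (int y = 0; y < 5; ++y) {                 /* integer scan lines    */
            float xl = (y + 0.5f) * 0.5f - 0.5f;
            float xr = 4.5f - (y + 0.5f) * 0.5f;
            for (int x = (int)ceilf(xl); x < xr; ++x) /* integer pixel centers */
                printf("fragment center (%.1f, %.1f)\n",
                       x + 0.5f, y + 0.5f);           /* add (0.5, 0.5) back   */
        }
        return 0;
    }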

Handling texture coordinates seems to be straightforward. Imagine I assigned the texture coordinates (0,0), (1,0), (0.5,1) as shown below. The image on the left uses half-integer pixel centers (the OpenGL way) and the image on the right uses integer pixel centers (the hardware way). The texture coordinates (and any attached fragment attributes, for that matter) end up having the same values either way -- i.e., nothing special needs to be done.

[figure: texture coordinates with half-integer pixel centers (left) vs. integer pixel centers (right)]
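
To illustrate, here is a small sketch (the bary helper is just an illustrative name) that computes barycentric weights for one fragment against the original triangle at its half-integer center (2.5, 2.5) and against the shifted triangle at the corresponding integer center (2, 2). The interpolated texture coordinates come out identical, (0.5, 0.5), in both cases.

    #include <stdio.h>

    /* barycentric weights u, v, w of P with respect to triangle A, B, C */
    static void bary(float px, float py,
                     float ax, float ay, float bx, float by, float cx, float cy,
                     float *u, float *v, float *w)
    {
        float area = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        *v = ((px - ax) * (cy - ay) - (py - ay) * (cx - ax)) / area;
        *w = ((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / area;
        *u = 1.0f - *v - *w;
    }

    int main(void)
    {
        const float ts[3] = { 0.0f, 1.0f, 0.5f };  /* s at vertices A, B, C */
        const float tt[3] = { 0.0f, 0.0f, 1.0f };  /* t at vertices A, B, C */
        float u, v, w;

        /* original triangle, half-integer fragment center (2.5, 2.5) */
        bary(2.5f, 2.5f, 0.0f, 0.0f, 5.0f, 0.0f, 2.5f, 5.0f, &u, &v, &w);
        printf("half-integer: s=%.3f t=%.3f\n",
               u * ts[0] + v * ts[1] + w * ts[2],
               u * tt[0] + v * tt[1] + w * tt[2]);

        /* shifted triangle, integer fragment center (2, 2) */
        bary(2.0f, 2.0f, -0.5f, -0.5f, 4.5f, -0.5f, 2.0f, 4.5f, &u, &v, &w);
        printf("integer:      s=%.3f t=%.3f\n",
               u * ts[0] + v * ts[1] + w * ts[2],
               u * tt[0] + v * tt[1] + w * tt[2]);
        return 0;
    }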

So does my approach seem correct?

  • Add (-0.5, -0.5) to each vertex's device coordinates,
  • use the hardware efficient fill,
  • add (0.5, 0.5) back in when generating fragment centers, and
  • don't sweat the other fragment attributes (they just work out).
Horal answered 23/8, 2014 at 0:30 Comment(7)
What makes you think coordinates shifted by 1/2 would be inefficient to implement in hardware? Adding a power of 2 (and 1/2 is one) is easily dealt with by shifting by some bits and adding 1. This is trivially implemented in hardware. A little-known fact (to software guys) of silicon design: adding more transistors is easy; what is hard is routing wires between areas on the die. Having some hardwired logic that does the necessary transformations is trivial.Krysta
@Krysta I thought about scan converting directly using half-integer pixels, but there were a few details that seemed a little trickier. For example, using integer centers you can always find the next scan line following a fractional y-value by computing ceil(y), which can be easily done in floating point or fixed point as truncate(y + 0.9999) -- in fixed point with a 16-bit fraction this is just (y + 0x0FFFF) >> 16. With half-integer scan lines you would subtract 0.5, do the ceil op, and add 1/2 back. It seemed easier to just do the subtractions once up front.Horal
I see you already have a full understanding of the math. But you still have a mental roadblock. You're still thinking 1 integer increment == 1 pixel. You have to think an integer increment of 2 == 1 pixel. Work with fixed-point coordinates, always shifted by 1 bit. CPUs don't care much if you write i += 1 or i += 2. Yes, the machines usually have an increment instruction, but that normally just decodes to microcode for add 1.Krysta
When it comes to OpenGL the targets are GPU implementations anyway, so there's no software at all involved. Yes, there are software OpenGL rasterizer implementations, but those usually are fallbacks. And for dedicated hardware there's no such thing as simpler or harder arithmetic.Krysta
@Krysta Yes, the additions are trivial either way. With integer pixel centers I can find the first pixel in the span beginning with x as ceil(x) (e.g., on scan line y = 4, the first integer in span [1.75, 2.75] is ceil(1.75) = 2). If I am using half-integer pixels then I would find the first half-integer >= x as ceil(x + 0.5) - 0.5 (e.g., on scan line y = 4.5 the first half-integer in span [2.25, 2.75] is ceil(2.25 + 0.5) - 0.5 = 2.5). In fixed point: (((x + 0x17FFF) >> 16) << 16) - 0x8000. Yuck.Horal
The hardware has configurable sampling points anyway, as it has to work with both DirectX and OpenGL, which have different rasterization rules.Deadlock
The method I used to achieve this was to just shift the viewport coordinates by half a pixel. This just leaves you to account for the alignment, but it effectively allows for offset sampling without having that spill out into shader code. I've also used glFramebufferSampleLocationsfvARB, which allows you to move the depth and coverage points anywhere inside the pixel that you like, while also staying fully orthogonal to any shader code. I personally prefer that degree of separation.Grosswardein
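
For completeness, a quick check of the 16.16 fixed-point expressions discussed in the comments above (next integer >= y, and next half-integer >= x); the FX/FL conversion macros are only for this demonstration.

    #include <stdint.h>
    #include <stdio.h>

    #define FX(v) ((int32_t)((v) * 65536.0))   /* float -> 16.16 fixed point */
    #define FL(v) ((v) / 65536.0)              /* 16.16 fixed point -> float */

    int main(void)
    {
        int32_t y = FX(1.75);
        int32_t x = FX(2.25);

        /* next integer scan line >= y : ceil(y) */
        int32_t yi = (y + 0xFFFF) >> 16;
        printf("ceil(1.75) = %d\n", (int)yi);                         /* 2    */

        /* next half-integer sample >= x : ceil(x + 0.5) - 0.5 */
        int32_t xh = (((x + 0x17FFF) >> 16) << 16) - 0x8000;
        printf("next half-integer >= 2.25 = %.2f\n", FL((double)xh)); /* 2.50 */
        return 0;
    }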
