Rendering meshes with multiple indices
I have some vertex data. Positions, normals, texture coordinates. I probably loaded it from a .obj file or some other format. Maybe I'm drawing a cube. But each piece of vertex data has its own index. Can I render this mesh data using OpenGL/Direct3D?

Simple asked 22/6, 2012 at 0:5 Comment(0)

In the most general sense, no. OpenGL and Direct3D allow only one index per vertex; that one index is used to fetch from every stream of vertex data. Therefore, every unique combination of attribute values must have its own separate index.

So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals (6 faces × 4 corners each), even though the cube will only have 8 unique positions and 6 unique normals.

Your best bet is to simply accept that your data will be larger. A great many model formats use multiple indices; you will need to fix up this vertex data before you can render with it. Many mesh loading tools, such as Open Asset Importer, will perform this fixup for you.

It should also be noted that most meshes are not cubes. Most meshes are smooth across the vast majority of vertices, only occasionally having different normals/texture coordinates/etc. So while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication.

GL 3.x and D3D10

For D3D10/OpenGL 3.x-class hardware, it is possible to avoid performing the fixup and use multiple indexed attributes directly. However, be advised that this will likely decrease rendering performance.

The following discussion will use OpenGL terminology, but Direct3D v10 and above have equivalent functionality.

The idea is to manually access the different vertex attributes from the vertex shader. Instead of passing the vertex attributes themselves, the attributes that are sent are actually the indices for that particular vertex. The vertex shader then uses those indices to access the actual attributes through one or more buffer textures.

Attributes can be stored in multiple buffer textures or all within one. If the latter is used, then the shader will need to add a per-attribute offset to each index in order to find where that attribute's data starts in the buffer.
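As a rough sketch of what such a vertex shader can look like (the attribute layout, uniform names, and the choice of three separate buffer textures are illustrative assumptions, not requirements):

```glsl
#version 330 core

// Per-vertex input: the indices for this vertex, not the attributes themselves.
// On the application side, this attribute is set up with glVertexAttribIPointer.
layout(location = 0) in ivec3 vertexIndices; // x = position, y = normal, z = texcoord index

// One buffer texture per attribute stream (they could also share one buffer,
// with a per-attribute offset added to each index).
uniform samplerBuffer positionBuffer;
uniform samplerBuffer normalBuffer;
uniform samplerBuffer texCoordBuffer;

uniform mat4 modelViewProjection;

out vec3 vNormal;
out vec2 vTexCoord;

void main()
{
    // Each attribute is fetched through its own index.
    vec3 position = texelFetch(positionBuffer, vertexIndices.x).xyz;
    vNormal       = texelFetch(normalBuffer,   vertexIndices.y).xyz;
    vTexCoord     = texelFetch(texCoordBuffer, vertexIndices.z).xy;

    gl_Position = modelViewProjection * vec4(position, 1.0);
}
```

On the application side, each samplerBuffer here would be a buffer object attached to a GL_TEXTURE_BUFFER texture via glTexBuffer, using a format such as GL_RGBA32F.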

Regular vertex attributes can be compressed in many ways. Buffer textures have fewer means of compression, allowing only a relatively limited number of vertex formats (via the image formats they support).

Please note again that any of these techniques may decrease overall vertex processing performance. Therefore, they should only be used in the most memory-limited of circumstances, after all other options for compression or optimization have been exhausted.

OpenGL ES provides buffer textures as well, as of ES 3.2. Higher desktop OpenGL versions (4.3+) allow you to read buffer objects more directly via SSBOs rather than buffer textures, which might have better performance characteristics.
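A minimal sketch of the SSBO variant, assuming GL 4.3+ (the binding points and names are again illustrative; storing vec4 rather than vec3 sidesteps std430 array-stride surprises). With SSBOs, you can even drop the index attributes entirely and pull everything from gl_VertexID in a non-indexed draw:

```glsl
#version 430 core

// The unique attribute values, read directly from buffer objects.
layout(std430, binding = 0) readonly buffer Positions { vec4 positions[]; };
layout(std430, binding = 1) readonly buffer Normals   { vec4 normals[];   };

// One (position index, normal index) pair per vertex of the mesh.
layout(std430, binding = 2) readonly buffer Indices   { ivec2 indices[];  };

uniform mat4 modelViewProjection;

out vec3 vNormal;

void main()
{
    // With glDrawArrays, gl_VertexID simply counts vertices in order.
    ivec2 idx = indices[gl_VertexID];

    vNormal     = normals[idx.y].xyz;
    gl_Position = modelViewProjection * positions[idx.x];
}
```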

Simple answered 22/6, 2012 at 0:5 Comment(7)
There's an article about this here (Chapter 21 - Programmable Vertex Pulling), but it's not directly accessible. There is code, though. – Hamburger
Is this slow because it's not sequential access of the buffer? – Declamation
@Declamation: Indexed access of any kind is going to be non-sequential; that's kinda the point. The performance difference usually comes into play on hardware that has actual hardware support for vertex fetching. AMD's GCN-based architecture does not, so its drivers have to patch your vertex shader based on your VAO in order to create the illusion of having hardware vertex fetching. So doing it manually yourself probably won't slow you down any. – Simple
You don't need buffer textures; you can use regular textures. In other words, you can do this in DirectX 9 or OpenGL 2.1. Live example here: https://mcmap.net/q/18079/-webgl-texture-coordinates-and-obj Note: I'm not saying you should do this, only that it's fully possible. – Screening
I don't exactly understand how this statement applies: "while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication." In my understanding, if a single vertex position is used in several non-coplanar triangles, then it will necessarily have a different normal depending on the triangle, resulting in a different vertex altogether for each of these triangles. And, also in my understanding, any real mesh will have a lot of vertex positions shared in non-coplanar triangles, hence my confusion. Can someone clarify? – Omnivore
@Omnivore: "it will necessarily have a different normal depending on the triangle" Why? The difference between a sharp edge between two triangles and a smooth edge is not the angle between the triangles, but whether the normals at the edge vertices are different or not. If they're the same, then it's a smooth edge (or an approximation of one). And most models are smooth. – Simple
@Simple I see, I'm not sure I understand fully yet, but that's clearer. Thank you! – Omnivore

I found a way to reduce this sort of repetition that runs a bit contrary to some of the statements made in the other answer (though it doesn't specifically fit the question asked here). It does, however, address my own question, which was marked as a duplicate of this one.

I just learned about interpolation qualifiers, specifically flat. It's my understanding that putting the flat qualifier on a vertex shader output causes only the provoking vertex's value to be passed to the fragment shader.

This means for the situation described in this quote:

So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals (6 faces × 4 corners each), even though the cube will only have 8 unique positions and 6 unique normals.

You can have just 8 vertices, 6 of which carry the six unique normals (the normal values on the other 2 are never read), so long as you carefully order your primitives' indices such that the provoking vertex of each triangle carries the normal you want applied to that entire face.

EDIT: My understanding of how it works:

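A minimal GLSL sketch of the technique, assuming per-face normals on a cube (the variable names are illustrative):

```glsl
// Vertex shader
#version 330 core

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 faceNormal; // only meaningful on provoking vertices

// "flat": no interpolation; every fragment of a triangle receives the
// value written by that triangle's provoking vertex.
flat out vec3 vNormal;

uniform mat4 modelViewProjection;

void main()
{
    vNormal = faceNormal;
    gl_Position = modelViewProjection * vec4(position, 1.0);
}
```

```glsl
// Fragment shader
#version 330 core

flat in vec3 vNormal; // constant across the face

out vec4 fragColor;

void main()
{
    // Visualize the face normal; any per-face shading would go here.
    fragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
}
```

The element buffer must then be ordered so that the provoking vertex of each triangle (by default, the last vertex in OpenGL) is one that actually carries the face's normal; the normals stored on the triangle's other two vertices are simply never read.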

Libration answered 16/10, 2018 at 7:50 Comment(5)
So, how do you actually do that? What does your index data have to look like? How do you provide the positions and normals to the VS so that it can do this? – Simple
I'm still implementing it myself. I'll update my answer with a basic example of my understanding. – Libration
Posting text in an image makes it hard to read and use, so don't do that. You can use images, just put your text in the text of your post. Also, you keep mixing up the terminology of "position" and "vertex"; when it comes to graphics, they're not interchangeable. Third, your index ordering does not have a consistent winding order (at least, not that I can tell). Lastly, this trick only works for a cube and only for position+normal; if you need each face to have texture coordinates, this isn't going to be helpful. – Simple
"if you need each face to have texture coordinates" I didn't consider that. My particular application generates texture coordinates in the vertex shader. – Libration
Even if your VS generates texture coordinates, you only have 8 vertices (since you only have 8 vertex indices), so each face could not get distinct texture coordinates. – Simple
