glVertexAttribPointer and glVertexAttribFormat: What's the difference?

OpenGL 4.3 and OpenGL ES 3.1 added several alternative functions for specifying vertex arrays: glVertexAttribFormat, glBindVertexBuffers, etc. But we already had functions for specifying vertex arrays, namely glVertexAttribPointer.

  1. Why add new APIs that do the same thing as the old ones?

  2. How do the new APIs work?

Yogurt answered 22/6, 2016 at 15:21 Comment(1)
Related questions, even though those are more about glBindVertexBuffer() than glVertexAttribFormat(): #26768439, #29220916.Polemist

glVertexAttribPointer has two flaws, one of them semi-subjective, the other objective.

The first flaw is its dependency on GL_ARRAY_BUFFER. This means that the behavior of glVertexAttribPointer is contingent on whatever was bound to GL_ARRAY_BUFFER at the time it was called. But once it is called, what is bound to GL_ARRAY_BUFFER no longer matters; the buffer object's reference is copied into the VAO. All this is very unintuitive and confusing, even to some semi-experienced users.

It also requires you to provide an offset into the buffer object as a "pointer", rather than as an integer byte offset. This means that you perform an awkward cast from an integer to a pointer (which must be matched by an equally awkward cast in the driver).
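In code, the old pattern looks something like the following sketch (`vbo` is a hypothetical, already-created buffer object; attribute 0 stands in for a position attribute):

```cpp
glBindBuffer(GL_ARRAY_BUFFER, vbo); // latched by the *next* call, not at draw time

// The byte offset 64 has to be smuggled through the `pointer` parameter:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0,
                      reinterpret_cast<const GLvoid *>(64));

glBindBuffer(GL_ARRAY_BUFFER, 0);   // changes nothing; the VAO kept the reference
```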

The second flaw is that it conflates two operations that, logically, are quite separate. In order to define a vertex array that OpenGL can read, you must provide two things:

  • How to fetch the data from memory.
  • What that data looks like.

glVertexAttribPointer provides both of these simultaneously. The GL_ARRAY_BUFFER buffer object, plus the offset "pointer" and stride, define where the data is stored and how to fetch it. The other parameters describe what a single unit of data looks like. Let us call this the vertex format of the array.

As a practical matter, users are far more likely to change where vertex data comes from than vertex formats. After all, many objects in the scene store their vertices in the same way. Whatever that way may be: 3 floats for position, 4 unsigned bytes for colors, 2 unsigned shorts for tex-coords, etc. Generally speaking, you have only a few vertex formats.

Whereas you have far more locations where you pull data from. Even if the objects all come from the same buffer, you will likely want to update the offset within that buffer to switch from object to object.

With glVertexAttribPointer, you can't update just the offset. You have to specify the whole format+buffer information all at once. Every time.

VAOs mitigate having to make all those calls per object, but it turns out that they don't really solve the problem. Oh sure, you don't have to actually call glVertexAttribPointer. But that doesn't change the fact that changing vertex formats is expensive.

As discussed here, changing vertex formats is pretty expensive. When you bind a new VAO (or rather, when you render after binding a new VAO), the implementation either changes the vertex format state unconditionally or has to compare the two VAOs to see whether the vertex formats they define differ. Either way, it's doing work that it doesn't need to be doing.

glVertexAttribFormat and glBindVertexBuffer fix both of these problems. glBindVertexBuffer directly specifies the buffer object and takes the byte offset as an actual (64-bit) integer. So there's no awkward use of the GL_ARRAY_BUFFER binding; that binding is solely used for manipulating the buffer object.

And because the two separate concepts are now separate functions, you can have a VAO that stores a format, bind it, then bind vertex buffers for each object or group of objects that you render with. Changing vertex buffer binding state is cheaper than changing vertex format state.
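As a sketch (all names hypothetical, error handling omitted), that usage pattern looks like this: the VAO captures the expensive format state once, and only the cheap buffer binding changes per object:

```cpp
// One-time setup: only format state goes into the VAO.
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); // vec3 position
glVertexAttribBinding(0, 0);                       // attribute 0 reads binding point 0

// Per object: just point binding 0 at the right memory.
for (int i = 0; i < numObjects; ++i)
{
    glBindVertexBuffer(0, objects[i].buffer, objects[i].baseOffset,
                       sizeof(float) * 3);
    glDrawArrays(GL_TRIANGLES, 0, objects[i].vertexCount);
}
```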

Note that this separation is formalized in GL 4.5's direct state access APIs. That is, there is no DSA version of glVertexAttribPointer; you must use glVertexArrayAttribFormat and the other separate format APIs.


The separate attribute binding functions work like this. The glVertexAttrib*Format functions provide all of the vertex formatting parameters for an attribute. Each of their parameters has the exact same meaning as the equivalent parameter in the corresponding glVertexAttrib*Pointer call.

Where things get a bit confusing is with glBindVertexBuffer.

Its first parameter is an index. But this is not an attribute location; it is merely a buffer binding point. This is a separate array from attribute locations with its own maximum limit. So the fact that you bind a buffer to index 0 means nothing about where attribute location 0 gets its data from.

The connection between buffer bindings and attribute locations is defined by glVertexAttribBinding. The first parameter is the attribute location, and the second is the buffer binding index to fetch that attribute's data from. Since the function's name starts with "VertexAttrib", you should consider this association to be part of the vertex format state, and thus expensive to change.

The nature of the offsets may also be a bit confusing at first. glVertexAttribFormat has an offset parameter, and so does glBindVertexBuffer, but these offsets mean different things. The easiest way to understand the difference is with an example of an interleaved data structure:

struct Vertex
{
    GLfloat pos[3];
    GLubyte color[4];
    GLushort texCoord[2];
};

The vertex buffer binding offset specifies the byte offset from the start of the buffer object to the data for vertex index 0. That is, when you render vertex index 0, the GPU will fetch memory from the buffer object's address plus the binding offset.

The vertex format offset specifies the offset from the start of each vertex to that particular attribute's data. If the data in the buffer is defined by Vertex, then the offset for each attribute would be:

glVertexAttribFormat(0, ..., offsetof(Vertex, pos)); //AKA: 0
glVertexAttribFormat(1, ..., offsetof(Vertex, color)); //Probably 12
glVertexAttribFormat(2, ..., offsetof(Vertex, texCoord)); //Probably 16

So the binding offset defines where vertex 0 is in memory, while the format offsets define where each attribute's data comes from within a vertex.

The last thing to understand is that the buffer binding is where the stride is defined. This may seem odd, but think about it from the hardware perspective.

The buffer binding should contain all of the information needed by the hardware to turn a vertex index or instance index into a memory location. Once that's done, the vertex format explains how to interpret the bytes in that memory location.

This is also why the instance divisor is part of the buffer binding state, via glVertexBindingDivisor. The hardware needs to know the divisor in order to convert an instance index into a memory address.
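A hypothetical sketch of per-instance data, to make that concrete: a vec4 fetched once per instance rather than once per vertex (all names are made up for illustration):

```cpp
glVertexAttribFormat(3, 4, GL_FLOAT, GL_FALSE, 0); // vec4 per-instance attribute
glVertexAttribBinding(3, 1);                       // attribute 3 reads binding 1
glBindVertexBuffer(1, instanceColorBuffer, 0, sizeof(float) * 4);
glVertexBindingDivisor(1, 1);                      // binding 1 advances once per instance
```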

Of course, this also means that you can no longer rely on OpenGL to compute the stride for you. In the above case, you simply use sizeof(Vertex).

The separate attribute format APIs cover the old glVertexAttribPointer model so well that the old function is now defined entirely in terms of the new ones:

void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type, {GLboolean normalized,} GLsizei stride, const GLvoid *pointer)
{
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
  glVertexAttribBinding(index, index);

  GLint buffer;
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);
  if(buffer == 0)
    glErrorOut(GL_INVALID_OPERATION); //Give an error.

  if(stride == 0)
    stride = CalcStride(size, type);

  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
  glBindVertexBuffer(index, buffer, offset, stride);
}

Note that this equivalent function uses the same index value for the attribute location and the buffer binding index. If you're doing interleaved attributes, you should avoid this where possible; instead, use a single buffer binding for all attributes that are interleaved from the same buffer.
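For the Vertex struct from earlier, that single-binding layout might look like the following sketch (`meshBuffer` is a hypothetical buffer object holding the interleaved vertices):

```cpp
// All three attributes share binding point 0; only their relative offsets differ.
glVertexAttribFormat(0, 3, GL_FLOAT,          GL_FALSE, offsetof(Vertex, pos));
glVertexAttribFormat(1, 4, GL_UNSIGNED_BYTE,  GL_TRUE,  offsetof(Vertex, color));
glVertexAttribFormat(2, 2, GL_UNSIGNED_SHORT, GL_FALSE, offsetof(Vertex, texCoord));

glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);
glVertexAttribBinding(2, 0);

// One buffer binding serves the whole interleaved mesh.
glBindVertexBuffer(0, meshBuffer, 0, sizeof(Vertex));
```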

Yogurt answered 22/6, 2016 at 15:21 Comment(12)
IIUC your explanation, the reimplementation of glVertexAttrib*Pointer has the offsets swapped. The casted pointer should be used with glVertexAttrib*Format while the 0 with glBindVertexBuffer. Or, maybe, I need to reread your answer one more time :)Garrek
@dvd: I copied that from the ARB_vertex_attrib_binding specification. The format offset is the offset from the buffer binding's offset for that particular attribute, and it has a fixed upper limit. The buffer binding's offset is the offset from the start of the buffer object to the 0 position for that binding. See the part above about interleaving and the Format commands.Yogurt
thank you! I have re-read the answer (for the nth time) and now it is more clear!Garrek
Very interesting explanation !Stagger
The only thing that would make this answer even better is an example of an interleaved vertex format split across two (or more) vertex buffers where each buffer contains a couple of elements of the whole vertex. (Which is what I'm currently trying to wrap my head around. DirectX makes this stuff so much easier.)Ijssel
@James: Um, it's almost exactly the same API as D3D. Instead of filling out a struct, you fill out parameters in a function call.Yogurt
@NicolBolas Well, it's not quite the same. :) I think the biggest problem with OpenGL - apart from its age - is the fact that everything is all over the place and none of the documentation links to anything else, so the man page for glBindVertexBuffer, for example, doesn't link to the wiki explaining why you need to do something in a specific way or which other functions are related and how they all fit together. It's very frustrating at times. :)Ijssel
@James: "Well, it's not quite the same" The APIs are largely identical; it's just spelled differently. D3D11_INPUT_ELEMENT_DESC is largely equivalent to glVertexAttribFormat and glVertexAttribBinding. IASetVertexBuffers is largely equivalent to a series of glBindVertexBuffer calls (or just glBindVertexBuffers). The only notable difference I found is that the instance value is a part of the buffer binding in OpenGL, but is part of an attribute's vertex format in D3D.Yogurt
@NicolBolas Something as trivial as differing spelling can be more of a brick wall to a learner than it may seem to one with experience.Outhaul
@Kröw: So OpenGL should have just adopted D3D11 entirely? I have no idea what you're trying to say here. They are "spelled differently" because they're different APIs.Yogurt
What do you mean exactly by "OpenGL should have just adopted D3D11 entirely"? What would it mean for them to "adopt D3D11"? When I said "brick wall" I meant something that would obstruct learning or using OpenGL, but I'm not sure how to clarify it any further. I didn't intend for any "additional meaning" behind it. Perhaps you misinterpreted it as having? You have already pointed out in previous comment that they are spelled differently (which was what my response was referring to).Outhaul
@Kröw: Your comment suggested that you believe the fact that they're spelled differently is a problem that OpenGL could or should have solved. Otherwise, I'm not sure what your comment was intended to communicate. The different spelling only "obstructs learning or using OpenGL" in the same way that English using different spelling from French "obstructs learning or using French". I just don't know what point you were trying to make if it wasn't that OpenGL should have done something differently.Yogurt
