Why does OpenGL drawing fail when vertex attrib array zero is disabled?

I was having extreme trouble getting a vertex shader of mine to run under OpenGL 3.3 core on an ATI driver:

#version 150

uniform mat4 graph_matrix, view_matrix, proj_matrix;
uniform bool align_origin;

// GLSL 1.50 core uses in/out in place of the removed attribute/varying keywords
in vec2 graph_position;
in vec2 screen_position;
in vec2 texcoord0;
in vec4 color;
out vec2 texcoord0_px;
out vec4 color_px;

void main() {
    // Pick the position or the annotation position
    vec2 pos = graph_position;

    // Transform the coordinates
    pos = vec2(graph_matrix * vec4(pos, 0.0, 1.0));

    if( align_origin )
        pos = floor(pos + vec2(0.5, 0.5)) + vec2(0.5, 0.5);

    gl_Position = proj_matrix * view_matrix * vec4(pos + screen_position, 0.0, 1.0);
    texcoord0_px = texcoord0;
    color_px = color;
}

I used glVertexAttrib4f to specify the color attribute, and turned the attribute array off. According to page 33 of the 3.3 core spec, that should work:

If an array corresponding to a generic attribute required by a vertex shader is not enabled, then the corresponding element is taken from the current generic attribute state (see section 2.7).

But (most of the time, depending on the profile and driver) either nothing was drawn at all, or the shader read black whenever it accessed the disabled color attribute. Replacing the attribute with a constant got it to run.
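For reference, the setup amounted to something like this (a sketch, not my exact code; program is the linked program for the shader above, and vertex_count is illustrative):

GLint color_loc = glGetAttribLocation(program, "color");

// No array for the color attribute; supply a single constant value instead.
glDisableVertexAttribArray(color_loc);
glVertexAttrib4f(color_loc, 1.0f, 0.0f, 0.0f, 1.0f);

// The other attributes are fed from enabled arrays as usual.
glDrawArrays(GL_TRIANGLES, 0, vertex_count);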

Much searching yielded this page of tips regarding WebGL, which had the following to say:

Always have vertex attrib 0 array enabled. If you draw with vertex attrib 0 array disabled, you will force the browser to do complicated emulation when running on desktop OpenGL (e.g. on Mac OSX). This is because in desktop OpenGL, nothing gets drawn if vertex attrib 0 is not array-enabled. You can use bindAttribLocation() to force a vertex attribute to use location 0, and use enableVertexAttribArray() to make it array-enabled.

Sure enough, not only was the color attribute assigned to index zero, but if I force-bound a different, array-enabled attribute to zero, the code ran and produced the right color.
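In code, that workaround is a pre-link bind along these lines (a sketch; graph_position stands in for whichever attribute you always source from an enabled array):

// Force an always-array-enabled attribute to location 0.
// glBindAttribLocation only takes effect at the next link.
glBindAttribLocation(program, 0, "graph_position");
glLinkProgram(program);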

I can't find any other mention of this rule anywhere, and certainly no mention of it applying to ATI hardware. Does anyone know where this rule comes from? Or is this an implementation bug that the Mozilla folks noticed and warned about?

Duala asked 12/11, 2012 at 17:57 Comment(0)

tl;dr: this is a driver bug. Core OpenGL 3.3 should allow you to not use attribute 0, but the compatibility profile does not, and some drivers don't implement that switch correctly. Just make sure to use attribute 0 to avoid any problems.

Actual Content:

Let's have a little history lesson in how the OpenGL specification came to be.

In the most ancient days of OpenGL, there was exactly one way to render: immediate mode (i.e. glBegin/glVertex/glColor/glEtc/glEnd). Display lists existed, but they were always defined as simply sending the captured commands again. So while implementations didn't actually make all of those function calls, they still had to behave as if they did.

In OpenGL 1.1, client-side vertex arrays were added to the specification. Now remember: the specification is a document that specifies behavior, not implementation. Therefore, the ARB simply defined that client-side arrays worked exactly like making immediate mode calls, using the appropriate accesses to the current array pointers. Obviously implementations wouldn't actually do that, but they behaved as if they did.
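Concretely, the specification's definition amounts to pseudocode like this (my paraphrase of the spec's language, not anything an implementation actually executes):

// glDrawArrays(mode, first, count) is *specified* to behave as if it were:
glBegin(mode);
for (int i = first; i < first + count; ++i)
    glArrayElement(i);  // reads every enabled array and makes the immediate mode calls
glEnd();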

Buffer-object-based vertex arrays were defined in the same way, though with language slightly complicated by pulling from server storage instead of client storage.

Then something happened: ARB_vertex_program (not ARB_vertex_shader; I'm talking about assembly programs here).

See, once you have shaders, you want to start being able to define your own attributes instead of using the built-in ones. And that all made sense. However, there was one problem.

Immediate mode works like this:

glBegin(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glEnd();

Every time you call glVertex, this causes all of the current attribute state to be used for a single vertex. All of the other immediate mode functions simply set values into the context; this function actually sends the vertex to OpenGL to be processed. That's very important in immediate mode. And since every vertex must have a position in fixed-function land, it made sense to use this function to decide when a vertex should be processed.

Once you're no longer using OpenGL's fixed-function vertex semantics, you have a problem in immediate mode. Namely, how do you decide when to actually send the vertex?

By convention, they stuck this onto attribute 0. Therefore, all immediate mode rendering must use either attribute 0 or glVertex to send a vertex.
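In API terms, the convention looks like this (a sketch using generic attributes in place of the fixed-function calls above; the indices and values are illustrative):

glBegin(GL_TRIANGLES);
// These only set current attribute state, like glTexCoord/glColor did.
glVertexAttrib2f(1, 0.0f, 0.0f);              // e.g. a texcoord
glVertexAttrib4f(2, 1.0f, 1.0f, 1.0f, 1.0f);  // e.g. a color
// Setting attribute 0 is the trigger: it sends the vertex, exactly like glVertex.
glVertexAttrib2f(0, -1.0f, -1.0f);
// ...and so on for the remaining vertices...
glEnd();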

However, because all other rendering is defined in terms of immediate mode rendering, all other rendering inherits the same limitations. Immediate mode requires attribute 0 or glVertex, and therefore so do client-side arrays and so forth. Even though the restriction makes no sense for them, they need it because of how the specification defines their behavior.

Then OpenGL 3.0 came around. They deprecated immediate mode. Deprecated does not mean removed; the specification still had those functions in it, and all vertex array rendering was still defined in terms of them.

OpenGL 3.1 actually ripped out the old stuff. And that posed a bit of a language problem. After all, every array drawing command had always been defined in terms of immediate mode. But once immediate mode no longer exists... how do you define them?

So they had to come up with new language for core OpenGL 3.1+. While doing so, they removed the pointless restriction on needing to use attribute 0.

But the compatibility profile did not.

Therefore, the rule for OpenGL 3.2+ is this: if you have a core OpenGL profile, then you do not have to use attribute 0. If you have a compatibility OpenGL profile, you must use attribute 0 (or glVertex). That's what the specification says.
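Which profile you have is whatever you asked for at context creation. With GLFW, for example (purely an illustration; GLFW is not part of the question), the request looks like this:

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);      // core: attribute 0 optional
// glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE); // compat: attribute 0 required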

But that's not what implementations implement.

In general, NVIDIA never cared much for the "must use attribute 0" rule and just does what you would expect, even in compatibility profiles, thus violating the letter of the specification. AMD is generally more likely to stick to the specification; however, they forgot to implement the core behavior correctly. So NVIDIA is too permissive on compatibility, and AMD is too restrictive on core.

To work around these driver bugs, simply always use attribute 0.
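A cheap sanity check for debug builds (a sketch, assuming <assert.h> and the relevant VAO bound):

// Verify that whatever attribute lives at location 0 is array-enabled.
GLint enabled = 0;
glGetVertexAttribiv(0, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &enabled);
assert(enabled);  // if this fires, rebind attributes so an enabled one is at location 0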

BTW, if you're wondering, NVIDIA won. In OpenGL 4.3, the compatibility profile uses the same wording for its array rendering commands as core. Thus, you're allowed to not use attribute 0 on both core and compatibility.

Inscription answered 12/11, 2012 at 18:24 Comment(3)
Thanks. Glad to know I'm not completely crazy. I guessed it might be a bug, since it was mentioned in the context of OpenGL ES emulators, but I couldn't figure out why I wasn't seeing any mention of it online or why such a blatant violation of the spec would go unnoticed. Hopefully the next person who runs into this can find this question and your explanation. Fortunately, I do have an attribute I always use, and I'm just binding that to zero to get around this. Thanks for the info.Duala
Wow, only 6 upvotes, that sucks. :) Awesome answer, thanks man.Markitamarkka
Let me add that yet another issue is that OpenGL for the longest time had no conformance tests, so all these differences between implementations were just left to luck and whim. It's only in the last few years that they finally started writing conformance tests (porting from OpenGL ES and often taking ideas and/or bug reports from WebGL's tests). I bring that up partly as a lesson: if you want multiple implementations to agree on behavior, you need extensive conformance tests. Every API missing tests will have lots of these kinds of issues.Jeffry
