OpenGL - How is GLenum an unsigned 32-bit integer?

To begin there are 8 types of Buffer Objects in OpenGL:

  • GL_ARRAY_BUFFER​
  • GL_ELEMENT_ARRAY_BUFFER​
  • GL_COPY_READ_BUFFER
  • ...

They are enums, or more specifically GLenums, where GLenum is an unsigned 32-bit integer whose values can go up to about 4,294,967,295.

Most uses of buffer objects involve binding them to a certain target, e.g.:

glBindBuffer(GL_ARRAY_BUFFER, Buffers[size]);

void glBindBuffer(GLenum target, GLuint buffer) (documentation)

My question is: if it's an enum, shouldn't its values simply be 0, 1, 2, ..., 7? Why go all the way and make it a 32-bit integer if it only has values up to 7? Pardon my knowledge of CS and OpenGL; it just seems wasteful.

Pshaw answered 29/12, 2013 at 20:13 Comment(0)

Enums aren't used just for buffers, but everywhere a symbolic constant is needed. Currently, several thousand enum values are assigned (look into your GL.h and the latest glext.h). Note that vendors get allocated their own official enum ranges so they can implement vendor-specific extensions without interfering with others, so a 32-bit enum space is not a bad idea. Furthermore, on modern CPU architectures, using less than 32 bits won't be any more efficient, so this is not a problem performance-wise.
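
To make this concrete, here is a minimal sketch (plain C, no GL context required) that prints the size of GLenum and a few of the buffer-target constants. It assumes your GL.h/glext.h declares these enumerants; the hex values in the comments are the ones from the official headers:

#include <stdio.h>
#include <GL/gl.h>   /* on some platforms the buffer targets live in <GL/glext.h> */

int main(void)
{
    /* GLenum is typedef'd as unsigned int, so 4 bytes on common platforms. */
    printf("sizeof(GLenum) = %zu\n", sizeof(GLenum));

    /* The buffer targets are not 0..7 -- they are sparse values in the 16-bit range. */
    printf("GL_ARRAY_BUFFER         = 0x%04X\n", GL_ARRAY_BUFFER);         /* 0x8892 */
    printf("GL_ELEMENT_ARRAY_BUFFER = 0x%04X\n", GL_ELEMENT_ARRAY_BUFFER); /* 0x8893 */
    printf("GL_PIXEL_PACK_BUFFER    = 0x%04X\n", GL_PIXEL_PACK_BUFFER);    /* 0x88EB */
    printf("GL_UNIFORM_BUFFER       = 0x%04X\n", GL_UNIFORM_BUFFER);       /* 0x8A11 */
    return 0;
}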

UPDATE: As Andon M. Coleman pointed out, currently only 16-bit enumerant ranges are being allocated. It might be useful to link to the OpenGL Enumerant Allocation Policies, which also have the following remark:

Historically, enumerant values for some single-vendor extensions were allocated in blocks of 1000, beginning with the block [102000,102999] and progressing upward. Values in this range cannot be represented as 16-bit unsigned integers. This imposes a significant and unnecessary performance penalty on some implementations. Such blocks that have already been allocated to vendors will remain allocated unless and until the vendor voluntarily releases the entire block, but no further blocks in this range will be allocated.

Most of these seem to have been removed in favor of 16-bit values, but 32-bit values have been in use. In the current glext.h, one can still find some (obsolete) enumerants above 0xffff, like

#ifndef GL_PGI_misc_hints
#define GL_PGI_misc_hints 1
#define GL_PREFER_DOUBLEBUFFER_HINT_PGI   0x1A1F8
#define GL_CONSERVE_MEMORY_HINT_PGI       0x1A1FD
#define GL_RECLAIM_MEMORY_HINT_PGI        0x1A1FE
...
Leeleeann answered 29/12, 2013 at 20:18 Comment(7)
So are you saying that the buffer objects share 'space' with several hundred+ other enums? – Pshaw
Nevertheless, OpenGL still officially only uses the lower 16 bits of the GLenum space for allocating constant values. GLenum is a 32-bit type, but in core and modern extended GL only 16 of those bits are usable. This is why you often see enumerant re-use when an extension is promoted from EXT to ARB and then promoted to core. If we actually had a 32-bit enumerant space to work with, this really would not be necessary: you could easily give each (often functionally incompatible) iteration of the extension its own set of discrete constant values. – Sturrock
@BDillan: well, there is only one global "name" space for GL enumerants. So that is probably a "yes" to that question. – Leeleeann
So I guess the redbook (8th edition) is outdated then, since it says it's 32-bit. OpenGL moves too fast; I guess I'm not going to bother with these small details. – Pshaw
@BDillan: There is a difference between a type's size and its range. Just because GLenum is 32-bit does not mean you have the full range of a 32-bit integer at your disposal. Likewise, just because a function like glBindSampler (...) takes a GLuint to identify the texture unit you are going to associate it with (instead of a GLenum) does not mean you can actually have ~4.29 billion texture image units. I would not bother reading a book for these details; they are outlined in the official OpenGL specification, which you can read for free (for any version) at: opengl.org/registry – Sturrock
@AndonM.Coleman These trivial things catch my attention - I apologize. From how I saw it there were only the integer values 0-7 and about 4 billion possible slots, so it caught my attention. I didn't know the mechanics of the type, which you explained elaborately. Very articulate answer! – Pshaw
@BDillan, consider it as one sort of error checking, common in many communication protocols. Since every function accepts only a few specific enums, the implementation can report an error if you accidentally call e.g. texture functions with buffer-related symbolic values. – Patel
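
(A quick sketch of the error-checking point in the last comment, assuming a current GL context already exists and that glBindBuffer is reachable through your headers or an extension loader: passing an enumerant that is not one of the accepted buffer targets makes the call fail with GL_INVALID_ENUM.)

#include <stdio.h>
#include <GL/gl.h>   /* glBindBuffer may require a loader such as GLEW or GLAD */

/* Assumes a current OpenGL context has been created elsewhere. */
static void show_invalid_enum(void)
{
    glBindBuffer(GL_TEXTURE_2D, 0);   /* a perfectly valid GLenum, just not a buffer target */

    if (glGetError() == GL_INVALID_ENUM)
        printf("glBindBuffer rejected a non-buffer enumerant\n");
}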

Why would you use a short anyway? What situation would you ever be in where you would even save more than a few kilobytes of RAM (if the reports of nearly a thousand GLenums are correct) by using a short or uint8_t instead of GLuint for enums and const declarations? Considering the potential hardware incompatibilities and cross-platform bugs you would introduce, it's odd to try to save a few kilobytes of RAM even in the context of the original 2 MB Voodoo 3D graphics hardware, much less the SGI supercomputer farms OpenGL was created for.

Besides, modern x86 and GPU hardware aligns on 32 or 64 bits at a time; using a narrower type could actually stall the CPU/GPU, because 24 or 56 bits of the register would have to be zeroed out before the value could be read or written, whereas a standard int can be operated on as soon as it is copied in. From the start of OpenGL, compute resources have tended to be more valuable than memory: you might do billions of state changes during a program's life, yet you would be saving about 10 KB of RAM at most if you replaced every 32-bit GLuint enum with a uint8_t one. I'm trying hard not to be extra cynical right now, heh.

For example, one valid reason for types like uint8_t is large data buffers or algorithms where the data actually fits in that bit depth. 1024 ints vs. 1024 uint8_t variables on the stack is a difference of about 3 KB; are we going to split hairs over that? Now consider a raw 4K bitmap image of 4000x2500 pixels at 32 bits per pixel: that's roughly 40 MB, and it would double if we used 64-bit RGBA buffers in place of standard 8-bit-per-channel RGBA8, or quadruple if we used 32-bit-per-channel RGBA encoding. Multiply that by the number of textures open or pictures saved, and trading a bit of CPU work for all that memory makes sense, especially in the context of that type of work.
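
For a rough sense of those numbers, a back-of-the-envelope sketch (plain C, no GL needed; the 4000x2500 image is just the example from the paragraph above):

#include <stdio.h>

int main(void)
{
    const long long pixels = 4000LL * 2500LL;   /* the example image above */
    const double mb = 1024.0 * 1024.0;

    printf("RGBA8   (32 bpp):  %.1f MB\n", pixels * 4  / mb);   /* ~38 MB  */
    printf("RGBA16  (64 bpp):  %.1f MB\n", pixels * 8  / mb);   /* ~76 MB  */
    printf("RGBA32F (128 bpp): %.1f MB\n", pixels * 16 / mb);   /* ~153 MB */
    return 0;
}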

That is where using a non-standard integer type makes sense. Unless you're on a 64 KB machine or something like that (an old-school beeper, say; good luck running OpenGL on that), trying to save a few bits of memory on something like a const declaration or a reference counter is just wasting everyone's time.

Touslesmois answered 16/6, 2015 at 4:3 Comment(0)
