How to cast int to const GLvoid*?
In my cross-platform OpenGL application I want to draw using vertex buffer objects. However, I run into problems invoking glDrawRangeElements.

glDrawRangeElements(GL_TRIANGLES, start, start + count, count, 
            GL_UNSIGNED_INT,  static_cast<GLvoid *>  (start * sizeof(unsigned int)));

The compiler (Clang on Mac OS X) does not like the last argument: "error: cannot cast from type 'unsigned long' to pointer type 'GLvoid *' (aka 'void *')". The OpenGL API defines the type of the last argument as const GLvoid * and expects a pointer when the API is used with vertex arrays. However, I understand that when vertex buffer objects are employed, instead of a pointer one is expected to pass an integer value representing an offset into the buffer data. That is what I am trying to do, and thus I have to cast. How do I reconcile the API requirements with the compiler imposing rigorous checks?

Hind answered 20/4, 2014 at 0:46 Comment(7)
You don't need to cast to void *; every pointer auto-converts to void *. The problem is that you are trying to pass something that is not a pointer (to the index buffer).Chemism
This is exactly the point. It is my understanding that when using a VBO one is expected to pass an integer offset in an argument defined as a pointer. It must be a legacy of old C-style habits and the lack of type safety in the API definitions.Hind
It's because the API was designed to take an actual pointer instead of an offset (for legacy reasons). Contrary to what Timbo says, you are doing the right thing: you should cast an integer value to a void* in this case. As far as I'm aware you can just cast to GLvoid* directly (not a static_cast), but I don't know if there are any differences between Clang and gcc (as I use gcc, or mingw on Windows). Edit: at least that is the case for modern OpenGL.Downbow
@Downbow Thanks for the idea. It turns out static_cast would not compile with or without const. However, an old C-style cast (GLvoid *) does compile and does not even give me a warning. Learn something every day. Previously I assumed that C-style casts were completely superseded by the newer C++-style casts.Hind
@Timbo: Passing an integer there is actually the expected thing to do. However, the correct cast would be a cast of the function signature, not of the parameter. See https://mcmap.net/q/1621310/-what-is-the-result-of-null-intPhilip
possible duplicate of What is the result of NULL + int?Philip
No, casting the function signature might be the wrong thing to do if sizeof (int) != sizeof (void *), which is still the case on some (though few) machine/OS combinations.Rainey
Score: 1

I got it to compile using Clang and C++11 when I used an ancient C-style cast.

glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT,
        (GLvoid *)  (start * sizeof(unsigned int)));

Alternatives that I liked less, but which the compiler also accepted, were:

glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT,
  reinterpret_cast<GLvoid *>(static_cast<uintptr_t>(start * sizeof(unsigned int))));


glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT,
    (char *)(0) +  start * sizeof(unsigned int)); 
Hind answered 20/4, 2014 at 2:44 Comment(1)
I agree that your first answer is better than any other suggestion on this thread. However, a good argument can be made for using the C++-style cast reinterpret_cast<GLvoid *>(start * sizeof(unsigned int)), which is exactly equivalent to your code but preferable style.Clayclaybank
Score: 5

Like this:

reinterpret_cast <GLvoid *> (start * sizeof(unsigned int));

Overweight answered 20/4, 2014 at 1:24 Comment(10)
This is a really dangerous approach. reinterpret_cast means that the bit pattern of the value is reinterpreted without any conversion. The expression you cast is of type unsigned, and you cast it to a pointer. Those two types don't need to have the same size, and often do not, so casting them this way could easily cause problems.Padraig
That would not compile in C++11.Hind
@LRaiz, I just compiled this, using g++ test9.cpp -std=c++11 -Wall -Wextra, and get no warnings except for the expected 'meaningless statement'.Overweight
@RetoKoradi Would that ever be a lossy conversion? I understand that the types are not always the same size, but on what architecture is a pointer smaller than an unsigned int?Overweight
@MaxBozzi: My concern is about the opposite direction. Say the unsigned is 4 bytes, and the pointer is 8 bytes. The way I understand reinterpret_cast, you would tell the compiler to grab the 8 bytes for the pointer from a memory location where you only have 4 bytes from your unsigned. I'm not totally certain how this behaves, but it looks scary enough that I would never do it.Padraig
reinterpret_cast does not compile using Clang with C++11 on Mac.Hind
@Hind It is required to compile. From the standard, verbatim, 5.2.10/5: "A value of integral type or enumeration type can be explicitly converted to a pointer." While there are conditions on safety, unless I interpreted that passage incorrectly, it should compile.Overweight
@RetoKoradi I just checked the standard. It turns out that the conversion integral -> pointer is always valid, but the reverse is only mandated to be valid if the conversion is not lossy (i.e., there is a large enough integer type). Check out section 5.2.10.Overweight
@MaxBozzi Clang refuses to compile your expression on 64-bit Mac OS X. However, I found a variation that works: first use static_cast<uintptr_t>, then follow with reinterpret_cast<void *>. I guess this is the approach we are expected to use, since uintptr_t was specifically designed to convert between pointer and integer types.Hind
@RetoKoradi reinterpret_cast does NOT mean "the bit pattern of the value is reinterpreted without any conversion". In this case (integer to pointer) it means a pointer value is created in an implementation-defined manner. The meaning can differ in other scenarios that are not integer to pointer.Clayclaybank
Score: 5

Since it's so commonly needed, people frequently use a macro for this type conversion. It can be defined like this:

#define BUFFER_OFFSET(i) ((char *)NULL + (i))

This is a clean and safe way of doing the cast, because it does not make any assumptions about integer and pointer types having the same size, which they often don't on 64-bit systems.

Since I personally prefer C++ style casts, and don't use NULL, I would define it like this:

#define BUFFER_OFFSET(idx) (static_cast<char*>(0) + (idx))
Padraig answered 20/4, 2014 at 2:22 Comment(7)
As I stated earlier, that would not compile using Clang with C++11 switches.Hind
This is different from what you had in your post. What are the exact compiler switches you are using? I'd like to try it.Padraig
The second version from my answer compiles for me without any errors or warnings with clang -std=c++11 -Wall. Using the clang from Xcode 5.1 on Mac OS 10.9.2.Padraig
BTW - it turns out that passing the result of static_cast<uintptr_t>(offset) to reinterpret_cast<void *> also works.Hind
You are right, your macros compile. I previously made the mistake of placing parentheses incorrectly, thus attempting to cast the offset instead of casting 0 and adding the offset afterwards.Hind
Isn't this undefined behaviour? The pointer doesn't point to an array element, so you shouldn't be able to do arithmetic on it, no?Internalize
static_cast<char*>(0) + (idx) causes undefined behaviour if idx is not zero (C++17 expr.add/4)Clayclaybank
Score: 0

This is an almost verbatim copy of my answer https://mcmap.net/q/1621310/-what-is-the-result-of-null-int with slight modifications to make it match this question.


C defines (and C++ follows it) that pointers can be cast to integers, namely of type uintptr_t, and that an integer obtained that way, when cast back to the original pointer type, yields the original pointer.

Then there's pointer arithmetic: if I have two pointers pointing to the same object, I can take their difference, resulting in an integer (of type ptrdiff_t), and that integer added to or subtracted from either of the original pointers yields the other. It is also defined that adding 1 to a pointer yields a pointer to the next element of an indexed object. Also, the difference of two uintptr_t values, divided by the sizeof(type pointed to) of pointers into the same object, must equal the difference of the pointers themselves. And last but not least, the uintptr_t values may be anything; they could be opaque handles as well. They are not required to be addresses (though most implementations do it that way, because it makes sense).

Now we can look at the infamous null pointer. C defines the pointer obtained by casting the integer value 0 as the invalid pointer. Note that this is always 0 in your source code; on the backend side, in the compiled program, the binary value actually used to represent it on the machine may be something entirely different! Usually it is not, but it may be. C++ is the same, but C++ allows less implicit casting than C, so one must cast 0 to void* explicitly. Also, because the null pointer does not refer to an object and therefore has no dereferenced size, pointer arithmetic is undefined on the null pointer. The null pointer referring to no object also means there is no definition for sensibly casting it to a typed pointer.

So if this is all undefined, why does this macro work after all? Because most implementations (i.e., compilers) are extremely gullible, and compiler writers are lazy to the highest degree. In the majority of implementations the integer value of a pointer is just the value of the pointer itself on the backend side, so the null pointer is actually 0. And although pointer arithmetic on the null pointer is not checked for, most compilers will silently accept it if the pointer has some type assigned, even if it makes no sense. char is the "unit-sized" type of C, if you want to put it that way. So pointer arithmetic on such a cast is just arithmetic on the addresses on the backend side.

To make a long story short, it simply makes no sense to try doing pointer magic on the C language side with the intended result being an offset; it just doesn't work that way.

Let's step back for a moment and remember what we're actually trying to do: the original problem is that the original OpenGL vertex array functions take a pointer as their data parameter, but for vertex buffer objects we actually want to specify a byte-based offset into our data, which is a number. To the C compiler the function takes a pointer (an opaque thing, as we learned). What OpenGL defined instead is an exploit of how compilers work: pointers and their integer equivalents are implemented with the same binary representation by most compilers. So what we have to do is make the compiler call those functions with our number instead of a pointer.

So technically the only thing we need to do is tell the compiler: "yes, I know you think this variable a is an integer, and you are right, and that function glDrawElements only takes a void* for its data parameter. But guess what: that integer was obtained from a void*" — by casting it to (void*) and then crossing our fingers that the compiler really is so simple-minded as to pass the integer value as-is to the function.

So this all comes down to somehow circumventing the old function signature. Casting the pointer is, IMHO, the dirty method. I'd do it a bit differently: I'd mess with the function signature:

typedef void (*TFPTR_DrawElementsOffset)(GLenum,GLsizei,GLenum,uintptr_t);
TFPTR_DrawElementsOffset myglDrawElementsOffset =
    (TFPTR_DrawElementsOffset)glDrawElements;

Now you can use myglDrawElementsOffset without doing any silly casts, and the offset parameter will be passed to the function without any danger that the compiler may mess with it. This is also the very method I use in my programs.

Philip answered 20/4, 2014 at 9:2 Comment(0)
Score: -3

You could try to call it like this:

// The first to last vertex is 0 to 3
// 6 indices will be used to render the 2 triangles. This make our quad.
// The last parameter is the start address in the IBO => zero

glDrawRangeElements(GL_TRIANGLES, 0, 3, 6, GL_UNSIGNED_SHORT, NULL);

Please have a look at an OpenGL tutorial.

Calypso answered 20/4, 2014 at 5:14 Comment(1)
That's not how glDrawRangeElements works. The range specifies the bounds that the index values being read must lie within. It doesn't affect which data gets rendered at all.Deca

© 2022 - 2024 — McMap. All rights reserved.