How to manipulate texture content on the fly?
I have an iPad app I am working on and one possible feature that we are contemplating is to allow the user to touch an image and deform it.

Basically the image would be like a painting and when the user drags their fingers across the image, the image will deform and the pixels that are touched will be "dragged" along the image. Sorry if this is hard to understand, but the bottom line is that we want to edit the content of the texture on the fly as the user interacts with it.

Is there an effective technique for something like this? I am trying to get a grasp of what would need to be done and how heavy an operation it would be.

Right now the only thing I can think of is to search through the texture content based on where the user touched, copy the pixel data, and blend it into the existing pixel data as the finger moves, then periodically reload the texture with glTexImage2D to get this effect.
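To make that idea concrete, here is a minimal CPU-side sketch of the "drag" blend, under assumptions of my own: a single-channel (grayscale) row-major buffer and a hypothetical helper `drag_pixel` that blends the pixel under the start of the stroke into the pixel under its end. A real app would do this over RGBA and a brush radius, then re-upload the touched region.

```c
#include <stdint.h>

/* Hypothetical helper: blend the pixel at (sx,sy) into (dx,dy) with
 * weight t in [0,1], simulating the pixel being "dragged" along.
 * pixels is a w*h single-channel buffer, row-major. */
static void drag_pixel(uint8_t *pixels, int w, int h,
                       int sx, int sy, int dx, int dy, float t)
{
    if (sx < 0 || sx >= w || sy < 0 || sy >= h) return;
    if (dx < 0 || dx >= w || dy < 0 || dy >= h) return;
    uint8_t src = pixels[sy * w + sx];
    uint8_t dst = pixels[dy * w + dx];
    pixels[dy * w + dx] = (uint8_t)(dst + t * (src - dst));
}
```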

Cubage answered 8/10, 2010 at 4:12 Comment(6)
@arul, the question is pretty old, but today people are likely going to be using ES2. I would very much be interested in ES2.Trothplight
This is really not the place to ask about OpenGL huh?Trothplight
@Radu and why isn't this place suitable to ask about OpenGL.... it is a programming site, this question is absolutely fineOffing
@RohanKapur, yet there are no answers... I bet that even if I put 1000 more points for bounty, nobody would come with a good answer.Trothplight
Oh lol whoops I thought you were downgrading/demoting this question, sorry wow this question was asked a really long time ago huh?Offing
do you think @Cubage found the answerOffing

There are at least two fundamentally different approaches:

1. Update pixels (I assume this is what you mean in the question)

The most effective technique for changing the pixels in a texture is called render-to-texture, and it can be done in OpenGL/OpenGL ES via FBOs (framebuffer objects). On desktop OpenGL you can also use pixel buffer objects (PBOs) to manipulate pixel data directly on the GPU (but OpenGL ES does not support this yet).

On unextended OpenGL you can change the pixels in system memory and then update the texture with glTexImage2D/glTexSubImage2D - but this is an inefficient last-resort solution and should be avoided if possible. glTexSubImage2D is usually much faster, since it only updates pixels inside the existing texture, while glTexImage2D creates an entirely new texture (with the benefit that you can change the size and pixel format of the texture). On the other hand, glTexSubImage2D lets you update just a part of the texture.
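One common way to exploit glTexSubImage2D's partial updates is to track a "dirty rectangle" of touched pixels and upload only that region each frame. A sketch with hypothetical names (`DirtyRect`, `dirty_add`); the GL upload itself is shown as a comment so the tracking code stands alone:

```c
#include <stdint.h>

/* Hypothetical dirty-rectangle tracker: grow the rect to include each
 * touched pixel, clamped to the texture bounds, so only that region
 * needs re-uploading with glTexSubImage2D per frame. */
typedef struct { int x0, y0, x1, y1; } DirtyRect; /* x1/y1 exclusive */

static void dirty_reset(DirtyRect *r)
{
    r->x0 = r->y0 = 1 << 30;  /* "empty" sentinel */
    r->x1 = r->y1 = 0;
}

static void dirty_add(DirtyRect *r, int x, int y, int w, int h)
{
    if (x < 0 || x >= w || y < 0 || y >= h) return;
    if (x < r->x0) r->x0 = x;
    if (y < r->y0) r->y0 = y;
    if (x + 1 > r->x1) r->x1 = x + 1;
    if (y + 1 > r->y1) r->y1 = y + 1;
}

/* After touching pixels, upload just the rect, e.g.:
 * glTexSubImage2D(GL_TEXTURE_2D, 0, r.x0, r.y0,
 *                 r.x1 - r.x0, r.y1 - r.y0,
 *                 GL_RGBA, GL_UNSIGNED_BYTE, subImageData);
 * Note: subImageData must be packed to the rect's row length. */
```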

You say that you want it to work with OpenGL ES, so I would propose to do the following steps:

  • replace glTexImage2D() with glTexSubImage2D() - if that gains you enough performance, stop there;
  • implement render-to-texture with FBOs and shaders - it will require far more rework of your code, but will give even better performance.

For FBOs the code can look like this:

// setup FBO
glGenFramebuffers( 1, &FFrameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, FFrameBuffer );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, YourTextureID, 0 );
// recommended: verify completeness before using the FBO
// if ( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE ) { /* handle error */ }
glBindFramebuffer( GL_FRAMEBUFFER, 0 );

// render to FBO
glBindFramebuffer( GL_FRAMEBUFFER, FFrameBuffer );
glViewport( 0, 0, YourTextureWidth, YourTextureHeight );
// your rendering code goes here - it will draw directly into the texture
glBindFramebuffer( GL_FRAMEBUFFER, 0 );

// cleanup
glDeleteFramebuffers( 1, &FFrameBuffer );

Keep in mind that not all pixel formats can be rendered to. RGB/RGBA are usually fine.

2. Update geometry

You can also change the geometry of the object the texture is mapped onto. The geometry should be tessellated finely enough to allow smooth interaction and to prevent artifacts from appearing. The deformation of the geometry can be done via different methods: parametric surfaces, NURBS, patches.

Beitz answered 22/5, 2012 at 12:55 Comment(27)
Can you post an example of how you would change pixels in a texture using FBOs in iOS with OpenGL ES 2.0?Trothplight
I'm still not sure I understand how this works. Say you have a 10 x 10 array of pixels (= 300 GLubytes, 1 GLubyte for each color). 1) How would you render this array of pixels to a texture using FBOs? 2) How would you then change some of the pixels in the texture using FBOs? (I know you can't have a 10 x 10 texture, since the dimensions are not POTs, but just for illustration purpose)Trothplight
Bind your frame buffer, bind the texture (10x10 as you say) and render the quad at the desired position using a trivial shader.Benzocaine
1) You can create the initial texture from this array. Just like you do with an ordinary texture. 2) You should render "pixels" using OpenGL commands and primitives.Beitz
@SergeyK., so if you need to change the pixels at certain coordinates with other pixels (no geometry involved, but rather the raw pixel colors), FBOs don't help? Is glTexSubImage2D() the fastest approach to this, on iOS?Trothplight
FBOs can help, but it will be difficult to redesign your app. glTexSubImage2D() is the best value/efforts solution in this case.Beitz
@SergeyK., I am willing to rewrite my application from ground up if I understand how they can help... I read a ton of articles about FBOs, and it seems like everyone is avoiding an answer to this question, for some reason. How does an FBO help with performance if I need to constantly change the colors of an array of pixels within a texture?Trothplight
Because you do it on the GPU, and GPUs are designed for this kind of thing. And why ask 'why' if FBOs just work faster than old-style updates?Beitz
@SergeyK., I'm not asking why, I'm asking how... I don't understand what I must do in order to use FBOs for changing raw pixels in a texture. That's all I want to know.Trothplight
Do you know how to render a single pixel on screen in OpenGL?Beitz
For instance. Then, with the FBO bound, you just draw that point sprite into a texture. The texture just becomes your framebuffer.Beitz
@SergeyK., but if I need to draw something like 100,000 pixels, point sprites are not made for that... And even if the implementation did support as many vertices, I would be uploading way more coordinates (or indices) and color values than I would with glTexSubImage2D()... Are you saying this should be more efficient than glTexSubImage2D()? If so, then I'll have to see what I can do to make it happen...Trothplight
But that's exactly what I mean by "rewrite your code". You would have to rethink everything from scratch keeping FBOs in mind. This is the ultimate way to performance in modern OpenGL. Is glTexSubImage() not good enough for you?Beitz
@SergeyK., well, I haven't written much, so I have no problem in redoing the few lines of code I wrote... If millions of point sprites are faster than glTexSubImage2D(), then I'll need to see how to implement that. (I can't even address as many indices with an index buffer)Trothplight
This is the wrong idea to start with. It is better to ask "Where do all these points come from? Can I create them all inside OpenGL?" I.e., you can run a fragment shader over a quad covering the entire texture, which will render the pixels. But this is too specific to your particular problem, and I have no idea what it is.Beitz
@SergeyK., I considered all options that I know of, and I really need them to be pixels (or very small somethings that can appear and disappear at any time, and that can form a large, irregular, changeable, shape). This will be an arbitrarily deformable shape, that can have arbitrarily small holes in it.Trothplight
Then the outcome is simple: for OpenGL ES - use TexSubImage, for desktop OpenGL - use Pixel Buffer Objects (direct access to GPU texels). That's it. End of story :) P.S. PBOs will be on mobile devices one day.Beitz
@SergeyK., so there really is nothing faster, is there? I'm thinking maybe Apple extensions for DMA, or anything else that can speed this up... Right now I managed to change around 32,000 pixels at a 30 FPS rate with glTexSubImage2D() (on an iPad 3, so the GPU is pretty fast, it would likely be a lot slower on other iOS devices), but it's possible that I'll need more. If I need more I'm out of luck?Trothplight
Should be no problem to update a 256x256 (64K texels) RGB texture with glTexSubImage2D more than 30 times per second. Maybe something else is slowing things down?Beitz
@SergeyK., the test app is pretty bare bones... I'm generating the pixels (with something like _pixels[i] = (GLubyte) ((arc4random()%(255-1))+1)) then sending them over with glTexSubImage2D() and that's pretty much it. The emulator can do 64k texels, but not the device.Trothplight
I can get the iPad 3 to change 65,536 RGBA pixels at a constant framerate of 17 FPS.Trothplight
256x256x3x30 = ~6 Gb/s - that's half of the entire fillrate of iPad3 GPU. And you still have do draw something on screen... Your 30 FPS seems to be realistic. That means it is the hardware performance limit.Beitz
@SergeyK., x4 actually, I have alpha too. I'm thinking this could also be caused by the fact that all these pixels need to be uploaded to OpenGL then copied... It would be great to somehow be able to point it to my memory (or have it give me a chunk of memory to use, PBO style) instead of going through all this overhead for nothing... It all ends up in the same RAM I'm using anyway, and it's such a shame, since I already have all the data there, it really doesn't need to be copied again, never mind the upload overhead...Trothplight
But updating textures, PBO/TexImage/whatever, will cost you fillrate anyway. I mean 12Gb/s is the hardware limit, there's nothing you can do about it, except reduce the complexity of your own data.Beitz
@SergeyK., ah, so even if I didn't have the overhead, the GPU still wouldn't be able to fill so many pixels?Trothplight
Exactly. Too slow memory to do it.Beitz
@SergeyK., that makes sense... Can you please point me to the specs? Also, is there any way I could use glTexImage2D() with less than 8 bits per pixel? I'm thinking that if I could use 4 bits per pixel, I could get significantly better performance.Trothplight

I have tried two things that might solve your problem. The methods are pretty different, so I suppose your specific use case will determine which one is appropriate (if any).

First, I've done image deformation with geometry. That is, mapping my texture to a subdivided grid and then overlaying this grid with another grid of bezier control points. The user then moves those control points, which deforms the vertices of the rendered geometry in a smooth manner.

The second method is more similar to what you outline in your question. When creating your texture, keep the source data around so you can manipulate the pixels directly. Then call something like

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

every time you draw. This might be terribly inefficient, since it involves re-uploading all the texture data every frame, and I'd love to hear if there's a better way to do it - like directly manipulating the texture data on the GPU. In my case, though, the performance is adequate.

I hope this helps, even though it's pretty low on detail.

Footie answered 22/11, 2010 at 12:15 Comment(0)

Modifying the texture render target using an FBO is tricky, but pretty straightforward.

So, we have:

  1. A TW by TH offscreen buffer (associated with the texture) - we call this Dest
  2. A new "array of pixels", an IW by IH texture - we call this Src
  3. The desire to put the IW x IH texture at position (TX,TY) in the Dest texture

The trick to "put" Src to Dest is to

  1. Bind the generated FBO as a render target
  2. Use a dummy 4-vertex quad with trivial vertex coords (TX,TY), (TX+IW,TY), (TX+IW,TY+IH), (TX,TY+IH) and texture coordinates (0,0), (1,0), (1,1), (0,1)
  3. Bind a trivial pixel shader which reads Src texture bound to the first Texture unit and outputs it on the "screen" (a.k.a. render target, Dest)
  4. Render the quad
  5. Unbind the FBO

For the Src to be rendered correctly you have to use orthographic projection and identity camera transform.

The (TX,TY) and (IW,IH) coordinates in the steps above must be divided by TW and TH respectively to map correctly to the [0..1, 0..1] framebuffer range. To avoid these divisions in the shader, you can simply use an appropriate orthographic projection for a [0..TW, 0..TH] viewport.
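The division above can be sketched as a small helper (hypothetical name `quad_in_dest`) that builds the quad's vertex positions in normalized [0..1] framebuffer coordinates from the pixel rectangle and the Dest texture size:

```c
/* Map the pixel rectangle (TX,TY, IW x IH) into normalized [0..1]
 * framebuffer coordinates of a TW x TH Dest texture. Writes 4 (x,y)
 * pairs, counter-clockwise from the lower-left corner. */
static void quad_in_dest(float *out8,
                         int tx, int ty, int iw, int ih, int tw, int th)
{
    float x0 = (float)tx / tw,        y0 = (float)ty / th;
    float x1 = (float)(tx + iw) / tw, y1 = (float)(ty + ih) / th;
    out8[0] = x0; out8[1] = y0;
    out8[2] = x1; out8[3] = y0;
    out8[4] = x1; out8[5] = y1;
    out8[6] = x0; out8[7] = y1;
}
```

With an orthographic projection over [0..TW, 0..TH] instead, you would pass the raw pixel coordinates straight through and skip this normalization.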

Hope this solves problems with FBOs.

Benzocaine answered 22/5, 2012 at 16:12 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.