Is it possible to use video (pre-rendered, compressed with H.264) as a texture for GL in iOS?
If so, how is it done? And are there any playback quality or frame-rate limitations?
As of iOS 4.0, you can use AVCaptureDeviceInput to get the camera as a device input and connect it to an AVCaptureVideoDataOutput with any object you like set as the delegate. By requesting a 32bpp BGRA format from the camera, the delegate object will receive each frame in a format just right for handing straight to glTexImage2D (or glTexSubImage2D if the device doesn't support non-power-of-two textures; I think the MBX devices fall into this category).
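By way of illustration, here's a minimal sketch of that capture-to-texture path (not from the original answer): it assumes the EAGLContext is current on the queue the delegate callback runs on, that the GL texture's filter/wrap parameters were set elsewhere, and that frames arrive with tightly packed rows (see the padded-scanline note below). Error handling is omitted.

    // Sketch only: a delegate object that streams BGRA camera frames into a GL texture.
    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>   // for GL_BGRA

    @interface CameraTextureSource : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
    @property (nonatomic) GLuint texture;                     // GL texture name created elsewhere
    @property (nonatomic, strong) AVCaptureSession *session;
    @end

    @implementation CameraTextureSource

    - (void)start
    {
        self.session = [[AVCaptureSession alloc] init];

        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
        [self.session addInput:input];   // production code should check canAddInput:/canAddOutput:

        // Ask for 32bpp BGRA so each frame can go straight to glTexImage2D.
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
        [self.session addOutput:output];

        [self.session startRunning];
    }

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
        GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

        // Assumes tightly packed rows; see the padded-scanline note below.
        glBindTexture(GL_TEXTURE_2D, self.texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
        // ... redraw the textured quad here ...
    }

    @end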
There are a bunch of frame size and frame rate options; at a guess you'll have to tweak those depending on how much else you want to use the GPU for. I found that a completely trivial scene consisting of just a textured quad showing the latest frame, redrawn only when a new frame arrived, could display an iPhone 4's maximum 720p 24fps feed without any noticeable lag. I haven't performed any more thorough benchmarking than that, so hopefully someone else can advise.
In principle, per the API, frames can come back with some in-memory padding between scanlines, which would mean some shuffling of contents before posting off to GL, so you do need to implement a code path for that. In practice, speaking purely empirically, the current version of iOS never seems to return images in that form, so it isn't really a performance issue.
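For completeness, here's a hedged sketch of that fallback code path: it checks CVPixelBufferGetBytesPerRow and repacks rows into a contiguous buffer only when padding is present (OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH, so repacking, or per-row glTexSubImage2D calls, is the usual workaround). The function name is illustrative.

    #include <stdlib.h>
    #include <string.h>
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Upload a BGRA pixel buffer to the currently bound GL_TEXTURE_2D,
    // handling the (rare) case of padded scanlines.
    static void UploadBGRAPixelBuffer(CVPixelBufferRef pixelBuffer)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        size_t width       = CVPixelBufferGetWidth(pixelBuffer);
        size_t height      = CVPixelBufferGetHeight(pixelBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
        const uint8_t *src = CVPixelBufferGetBaseAddress(pixelBuffer);

        if (bytesPerRow == width * 4) {
            // Tightly packed: hand the buffer straight to GL.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                         GL_BGRA, GL_UNSIGNED_BYTE, src);
        } else {
            // Padded scanlines: copy each row into a contiguous buffer first.
            uint8_t *packed = malloc(width * 4 * height);
            for (size_t y = 0; y < height; y++) {
                memcpy(packed + y * width * 4, src + y * bytesPerRow, width * 4);
            }
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                         GL_BGRA, GL_UNSIGNED_BYTE, packed);
            free(packed);
        }

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    }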
EDIT: it's now very close to three years later. In the interim Apple has released iOS 5, 6 and 7. With 5 they introduced CVOpenGLESTexture and CVOpenGLESTextureCache, which are now the smart way to pipe video from a capture device into OpenGL. Apple supplies sample code here, from which the particularly interesting parts are in RippleViewController.m, specifically its setupAVCapture and captureOutput:didOutputSampleBuffer:fromConnection: (see lines 196–329). Sadly the terms and conditions prevent a duplication of the code here without attaching the whole project, but the step-by-step setup is:
1. create a texture cache with CVOpenGLESTextureCacheCreate, and an AVCaptureSession;
2. get an AVCaptureDevice for video;
3. create an AVCaptureDeviceInput with that capture device;
4. add an AVCaptureVideoDataOutput and tell it to call you as a sample buffer delegate.

Upon receiving each sample buffer:

1. get the CVImageBufferRef from it;
2. use CVOpenGLESTextureCacheCreateTextureFromImage to get Y and UV CVOpenGLESTextureRefs from the CV image buffer (a rough sketch of these per-frame steps follows below).
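Since Apple's code can't be reproduced here, the following is not the sample project but a rough, illustrative sketch of those per-frame steps. It assumes the video data output was configured for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, that _textureCache came from CVOpenGLESTextureCacheCreate against your EAGLContext, and that your fragment shader recombines the luma and chroma textures into RGB; the ivar names are made up for the example.

    // Not Apple's code: a rough sketch of the per-frame steps, assuming ivars
    //   CVOpenGLESTextureCacheRef _textureCache; // one-time setup:
    //     CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &_textureCache);
    //   CVOpenGLESTextureRef _lumaTexture, _chromaTexture;
    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);

        // Release last frame's texture refs and flush so the cache can recycle them.
        if (_lumaTexture)   { CFRelease(_lumaTexture);   _lumaTexture = NULL; }
        if (_chromaTexture) { CFRelease(_chromaTexture); _chromaTexture = NULL; }
        CVOpenGLESTextureCacheFlush(_textureCache, 0);

        // Y plane (plane 0) as a single-channel texture.
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
            pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE,
            (GLsizei)width, (GLsizei)height,
            GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &_lumaTexture);

        // Interleaved CbCr plane (plane 1), at half resolution, as a two-channel texture.
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
            pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
            (GLsizei)(width / 2), (GLsizei)(height / 2),
            GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &_chromaTexture);

        // Bind both so the fragment shader can recombine them into RGB.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture),
                      CVOpenGLESTextureGetName(_lumaTexture));
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture),
                      CVOpenGLESTextureGetName(_chromaTexture));

        // ... draw the textured quad with the YUV-to-RGB shader ...
    }

The per-frame CFRelease and CVOpenGLESTextureCacheFlush calls matter: the cache only recycles its backing textures once the previous frame's CVOpenGLESTextureRefs have been released.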
As of iOS 5 there is now a sanctioned way to get CVImageBuffers into OpenGL and Apple has supplied a sample project; I've added a link as well as a more detailed breakdown of the steps involved. – Dissyllable

Use RosyWriter for a MUCH better example of how to do OpenGL video rendering. Performance is very good, especially if you reduce the framerate (~10% at 1080p/30, >=5% at 1080p/15).