How to pass Camera preview to the Surface created by MediaCodec.createInputSurface()?
Ideally I'd like to accomplish two goals:

  1. Pass the Camera preview data to a MediaCodec encoder via a Surface. I can create the Surface using MediaCodec.createInputSurface() but the Camera.setPreviewDisplay() takes a SurfaceHolder, not a Surface.
  2. In addition to passing the Camera preview data to the encoder, I'd also like to display the preview on-screen (so the user can actually see what they are encoding). If the encoder wasn't involved then I'd use a SurfaceView, but that doesn't appear to work in this scenario since SurfaceView creates its own Surface and I think I need to use the one created by MediaCodec.

I've searched online quite a bit for a solution and haven't found one. Some examples on bigflake.com seem like a step in the right direction but they take an approach that adds a bunch of EGL/SurfaceTexture overhead that I'd like to avoid. I'm hoping there is a simpler example or solution where I can get the Camera and MediaCodec talking more directly without involving EGL or textures.

Gillmore answered 28/10, 2013 at 19:6 Comment(0)
As of Android 4.3 (API 18), the bigflake CameraToMpegTest approach is the correct way.

The EGL/SurfaceTexture overhead is currently unavoidable, especially for what you want to do in goal #2. The idea is:

  • Configure the Camera to send the output to a SurfaceTexture. This makes the Camera output available to GLES as an "external texture".
  • Render the SurfaceTexture to the Surface returned by MediaCodec#createInputSurface(). That feeds the video encoder.
  • Render the SurfaceTexture a second time, to a GLSurfaceView. That puts it on the display for real-time preview.
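
The first step above can be sketched as follows, using the pre-API-21 `android.hardware.Camera` API. This is a minimal illustration, not a complete recorder: `oesTextureId` is assumed to be a `GL_TEXTURE_EXTERNAL_OES` texture name already generated on your GL thread, and the listener body is only a placeholder for signaling your render loop.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;

// Wrap the external GLES texture so the Camera can write frames into it.
SurfaceTexture cameraTexture = new SurfaceTexture(oesTextureId);
cameraTexture.setOnFrameAvailableListener(st -> {
    // Signal the render thread here. That thread must call
    // st.updateTexImage() with the owning EGL context current,
    // then draw the external texture twice: once to the encoder's
    // input Surface, once to the display.
});

Camera camera = Camera.open();
camera.setPreviewTexture(cameraTexture);  // used instead of setPreviewDisplay()
camera.startPreview();
```

Note that `Camera#setPreviewTexture()` (API 11+) is what sidesteps the `SurfaceHolder` problem from the question: the preview frames go to GLES as an external texture rather than directly to a view.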

The only data copying that happens is performed by the GLES driver, so you're doing hardware-accelerated blits, which will be fast.

The only tricky bit is that you want the external texture to be available to two different EGL contexts (one for the MediaCodec, one for the GLSurfaceView). You can see an example of creating a shared context in the "Android Breakout game recorder patch" sample on bigflake -- it renders the game twice, once to the screen, once to a MediaCodec encoder.
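
Creating the second context so that it shares texture names with the first comes down to one argument in `eglCreateContext()`. A sketch with `EGL14`, assuming `display`, `config`, and the GLSurfaceView's context `glViewContext` have already been obtained:

```java
import android.opengl.EGL14;
import android.opengl.EGLContext;

int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };

// Passing glViewContext (rather than EGL14.EGL_NO_CONTEXT) as the
// share_context makes texture objects -- including the external camera
// texture -- visible from both contexts.
EGLContext encoderContext = EGL14.eglCreateContext(
        display, config, glViewContext, ctxAttribs, 0);
```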

Update: This is implemented in Grafika ("Show + capture camera").

Update: The multi-context approach in "Show + capture camera" is somewhat flawed. The "continuous capture" Activity uses a plain SurfaceView, and is able to do both screen rendering and video recording with a single EGL context. This is recommended.
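
With the single-context approach, the two render targets are just two EGL window surfaces that you make current in turn. A sketch, assuming `display`, `config`, `context`, a `surfaceView`, and a configured `mediaCodec` already exist (variable names are illustrative):

```java
import android.opengl.EGL14;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;

int[] surfAttribs = { EGL14.EGL_NONE };
EGLSurface viewSurf = EGL14.eglCreateWindowSurface(
        display, config, surfaceView.getHolder().getSurface(), surfAttribs, 0);
EGLSurface encSurf = EGL14.eglCreateWindowSurface(
        display, config, mediaCodec.createInputSurface(), surfAttribs, 0);

// Per frame: render twice with the same context, swapping surfaces.
EGL14.eglMakeCurrent(display, viewSurf, viewSurf, context);
// ... updateTexImage() + draw the external texture ...
EGL14.eglSwapBuffers(display, viewSurf);

EGL14.eglMakeCurrent(display, encSurf, encSurf, context);
// ... draw again ...
EGLExt.eglPresentationTimeANDROID(display, encSurf, frameTimeNanos);
EGL14.eglSwapBuffers(display, encSurf);  // submits the frame to the encoder
```

Setting the presentation time before the encoder-surface swap is what gives the recorded frames sensible timestamps.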

Rocha answered 29/10, 2013 at 14:52 Comment(3)
Regarding the tricky bit: I found that the main difference between the CameraToMpegTest and the Breakout game recorder is that the recorder actually passes a shared EGL context when creating the context for the recorder part. So just make sure the context is shared; after drawing the "main" picture, make the secondary context and surface current and then reuse the same drawing code.Liquor
Alternatively, use a single EGL context with two EGL surfaces. (Grafika has expanded quite a bit since I wrote this; see the "Continuous capture" activity for another example.)Rocha
@Rocha fadden.. are you the developer of the Camera2?Abaft
