Android Camera2 API Showing Processed Preview Image
The new Camera2 API is very different from the old one. The part of the pipeline where manipulated camera frames are shown to the user confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function after some processing, while preserving efficiency and speed in the Camera2 pipeline?

Example Flow :

camera.addTarget(imageReader.getSurface()) -> do some processing in the ImageReader's callback -> (show the processed image on screen?)

Workaround idea: send a Bitmap to an ImageView every time a new frame is processed.

Butte answered 22/9, 2015 at 19:32 Comment(2)
Are you trying to do this for every preview frame, or only once in a while (for every high-resolution still capture, for example)? Depending on the rate and resolution, different display approaches might be more appropriate.Freiman
Every frame, actually. I know there will be frames that cannot be shown because of the time lost to image processing. If the YUV format gives me 30 fps on preview and I am able to process 20 of those 30 frames per second, I want to show those 20 frames on screen.Butte

Edit after clarification of the question; original answer at bottom

It depends on where you're doing your processing.

If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
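That wiring can be sketched roughly as follows (a sketch, not the demo's code; it assumes an existing RenderScript context `rs`, a `displaySurface` obtained from the SurfaceView/TextureView, and a hypothetical kernel `forEach_process` in your script):

```java
// Output Allocation that can push its contents to a Surface (USAGE_IO_OUTPUT).
Type.Builder tb = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(previewWidth)
        .setY(previewHeight);
Allocation outputAlloc = Allocation.createTyped(rs, tb.create(),
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
outputAlloc.setSurface(displaySurface);  // Surface from the SurfaceView/TextureView

// Per frame: run your processing kernel, then push the result to the screen.
processingScript.forEach_process(inputAlloc, outputAlloc);  // hypothetical kernel name
outputAlloc.ioSend();
```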

If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
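A rough sketch of that EGL side, assuming `eglDisplay`, `eglConfig`, and `eglContext` were already initialized and `displaySurface` is again the Surface from the view:

```java
// Wrap the view's Surface as an EGL window surface.
int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface windowSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, displaySurface, surfaceAttribs, 0);
EGL14.eglMakeCurrent(eglDisplay, windowSurface, windowSurface, eglContext);

// Per frame: run your shader pass, then present.
// ... GLES draw calls rendering the processed frame ...
EGL14.eglSwapBuffers(eglDisplay, windowSurface);
```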

If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.

If you're doing Java-level processing, that's really slow and you probably don't want to; but if you do, you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
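The ImageWriter path might look roughly like this on Android M (API 23) and later; this is only an API sketch, with `displaySurface` again standing in for the Surface you want to display on:

```java
// ImageWriter feeds buffers into a Surface.
ImageWriter writer = ImageWriter.newInstance(displaySurface, 2 /* maxImages */);

// Per frame: dequeue a blank Image, fill its planes, and queue it for display.
Image out = writer.dequeueInputImage();
// ... write your processed pixels into out.getPlanes() ...
out.setTimestamp(inputImage.getTimestamp());  // keep timestamps consistent
writer.queueInputImage(out);
```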

Or as you say, draw to an ImageView every frame, but that'll be slow.


Original answer:

If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.

If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as a int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
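The per-pixel math itself is straightforward; here is a minimal sketch of a full-range BT.601 YCbCr-to-ARGB helper (plain Java, independent of how you walk the three planes and their row/pixel strides; `yuvToArgb` is an illustrative name, not a framework API):

```java
public final class YuvToRgb {
    /**
     * Converts one full-range BT.601 YCbCr sample to a packed ARGB_8888 int.
     * Reading y/u/v out of the three Image planes is left to the caller;
     * this is only the per-pixel conversion.
     */
    public static int yuvToArgb(int y, int u, int v) {
        int d = u - 128;  // centered blue-difference chroma
        int e = v - 128;  // centered red-difference chroma
        int r = clamp((int) (y + 1.402f * e));
        int g = clamp((int) (y - 0.344f * d - 0.714f * e));
        int b = clamp((int) (y + 1.772f * d));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```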

If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
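For the save-a-DNG route, the framework's DngCreator class (API 21+) does the packaging for you; a sketch, assuming you kept the CameraCharacteristics and the frame's CaptureResult:

```java
// DngCreator combines RAW_SENSOR pixel data with capture metadata into a DNG.
DngCreator dng = new DngCreator(cameraCharacteristics, captureResult);
try (FileOutputStream out = new FileOutputStream(dngFile)) {
    dng.writeImage(out, rawImage);  // rawImage: a RAW_SENSOR Image from the reader
} finally {
    dng.close();
}
```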

Freiman answered 22/9, 2015 at 21:20 Comment(6)
As I stated in the question, I asked how to show the already read and processed frames on screen, not how to convert data types, but thanks anyway.Butte
Very well explained. I assume this answer will guide people who use the new API. I am sorry about the unclear question at the beginning.Butte
@EddyTalvala, what do you suggest to use for filtered video recording (with showing preview)? Or maybe I should use completely another approach?Blindfish
If you want the recorded video to be filtered, you need to receive a frame from the camera, filter it, and then send it to screen and a video encoder. You can get a Surface from a MediaRecorder or MediaCodec, and send data to it from OpenGL by using the Surface to create a new EGLImage, or from Java with an ImageWriter, or from RenderScript with an Allocation.ioSend(). Which works best depends on how you want to write your filter.Freiman
Where do you show the image? Over the TextureView? Or part of the TextureView?Idiographic
@EddyTalvala, I've been trying to figure out how to use ImageWriter to do something similar to OP, but I can't see how to modify images, either those dequeued from ImageWriter or those received from ImageReader. Can you please point to a good resource that shows how to work with ImageWriter to deliver some custom(-ized) content?Polonaise

I had the same need and wanted a quick-and-dirty manipulation for a demo; I was not worried about efficient processing for a final product. This was easy to achieve with the following Java solution.

My original code connecting the Camera2 preview to a TextureView was commented out and replaced with a surface obtained from an ImageReader:

    // Get the surface of the TextureView on the layout
    //SurfaceTexture texture = mTextureView.getSurfaceTexture();
    //if (null == texture) {
    //    return;
    //}
    //texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
    //Surface surface = new Surface(texture);

    // Capture the preview to the memory reader instead of a UI element
    mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
    Surface surface = mPreviewReader.getSurface();

    // This part stays the same regardless of where we render
    mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureRequestBuilder.addTarget(surface);
    mCameraDevice.createCaptureSession(...

Then I registered a listener for the image:

mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            image.close();
            if(preview != null ) {
                // This gets the canvas for the same mTextureView we would have connected to the
                // Camera2 preview directly above.
                Canvas canvas = mTextureView.lockCanvas();
                if (canvas != null) {
                    float[] colorTransform = {
                            0, 0, 0, 0, 0,
                            .35f, .45f, .25f, 0, 0,
                            0, 0, 0, 0, 0,
                            0, 0, 0, 1, 0};
                    ColorMatrix colorMatrix = new ColorMatrix();
                    colorMatrix.set(colorTransform); //Apply the monochrome green
                    ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
                    Paint paint = new Paint();
                    paint.setColorFilter(colorFilter);
                    canvas.drawBitmap(preview, 0, 0, paint);
                    mTextureView.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}, mBackgroundPreviewHandler);
Springbok answered 29/1, 2021 at 21:1 Comment(1)
This works nicely, but the preview image is rotated -90°, do you know why?Naked

© 2022 - 2024 — McMap. All rights reserved.