How to create Bitmap from grayscaled byte buffer image?
I am trying to get the frame image for processing while using the new Android face detection Mobile Vision API.

So I created a custom Detector to get the Frame and tried to call getBitmap(), but it is null, so I accessed the grayscale data of the frame instead. Is there a way to create a Bitmap from it, or a similar image-holder class?

public class CustomFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    public CustomFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        // Byte array to Bitmap here
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
Brumfield answered 5/9, 2015 at 10:49 Comment(1)
The frame doesn't have bitmap data because it comes directly from the camera. The image format from the camera is NV21: developer.android.com/reference/android/graphics/… – Gogh
You have probably sorted this out already, but in case someone stumbles upon this question in the future, here's how I solved it:

As @pm0733464 points out, the default image format coming out of android.hardware.Camera is NV21, and that is the one used by CameraSource.

This Stack Overflow answer provides the conversion:

// bytes, w and h are the variables from the question's detect() method.
YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the JPEG quality
byte[] jpegArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

Although frame.getGrayscaleImageData() suggests the resulting bitmap will be a grayscale version of the original image, in my experience this is not the case. In fact, the bitmap is identical to the one supplied to the SurfaceHolder natively.
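
If an actual grayscale Bitmap is wanted, one alternative (a sketch only, not part of the original answer) is to expand the luma plane of the NV21 buffer directly: its first w * h bytes are the Y values, and each one can be turned into an opaque grey ARGB pixel.

// Sketch: build a true grayscale bitmap from the Y plane of the NV21 data.
// bytes, w and h are the same variables as in the question's detect() method.
int[] pixels = new int[w * h];
for (int i = 0; i < w * h; i++) {
    int y = bytes[i] & 0xFF;                           // luma value, 0..255
    pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque grey ARGB pixel
}
Bitmap grayBitmap = Bitmap.createBitmap(pixels, w, h, Bitmap.Config.ARGB_8888);

This skips the JPEG round trip, at the cost of producing a grey image rather than the full-colour frame.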

Reorganize answered 10/10, 2015 at 14:19 Comment(1)
This works great. Is there any way I could crop out only the face instead of the entire image? – Sylviesylvite
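
To answer the comment above: a rough sketch (assuming the bitmap decoded as in this answer and the SparseArray<Face> returned by mDelegate.detect(frame)) is to crop the decoded bitmap using the position and size reported by each detected Face.

// Sketch: crop the first detected face out of the decoded bitmap,
// clamping the crop region to the bitmap bounds.
SparseArray<Face> faces = mDelegate.detect(frame);
if (faces.size() > 0) {
    Face face = faces.valueAt(0);
    int left = Math.max(0, (int) face.getPosition().x);
    int top = Math.max(0, (int) face.getPosition().y);
    int faceW = Math.min((int) face.getWidth(), bitmap.getWidth() - left);
    int faceH = Math.min((int) face.getHeight(), bitmap.getHeight() - top);
    Bitmap faceBitmap = Bitmap.createBitmap(bitmap, left, top, faceW, faceH);
}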
Just adding a few extras to set a 300 px box on each side for the detection area. By the way, if you don't use the frame width and height from the metadata when building the YuvImage from getGrayscaleImageData(), you get weird corrupted bitmaps out.

public SparseArray<Barcode> detect(Frame frame) {
    // Crop the frame to a 300 px box centred in the image.
    // Note the naming: ay/by are derived from the width and ax/bx from the height.
    int boxx = 300;
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    int ay = (width / 2) + (boxx / 2);
    int by = (width / 2) - (boxx / 2);
    int ax = (height / 2) + (boxx / 2);
    int bx = (height / 2) - (boxx / 2);

    // Always pass the width and height from the frame metadata,
    // otherwise the decoded bitmap comes out corrupted.
    YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21,
            width, height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // Rect arguments are (left, top, right, bottom); 100 is the JPEG quality.
    yuvImage.compressToJpeg(new Rect(by, bx, ay, ax), 100, baos);
    byte[] jpegArray = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

    // Run the delegate detector on the cropped bitmap instead of the raw frame.
    Frame outputFrame = new Frame.Builder().setBitmap(bitmap).build();
    return mDelegate.detect(outputFrame);
}

public boolean isOperational() {
    return mDelegate.isOperational();
}

public boolean setFocus(int id) {
    return mDelegate.setFocus(id);
}
Yecies answered 2/3, 2017 at 11:40 Comment(0)
