How to get the current frame (as a Bitmap) for android facedetector in a Tracker event?
I have the standard com.google.android.gms.vision.Tracker example running successfully on my Android device, and now I need to post-process the image to find the iris of the current face reported in the Tracker's event methods.

So, how do I get the Bitmap frame that exactly matches the com.google.android.gms.vision.face.Face I received in the Tracker events? This also means the final bitmap should match the camera resolution, not the screen resolution.

One bad alternative is to call takePicture every few milliseconds on my CameraSource and run that picture through a separate FaceDetector. Although this works, the video stream freezes during takePicture and I get lots of GC_FOR_ALLOC messages because of the memory wasted by the single-bitmap face detector.

Massimiliano asked 3/6, 2016 at 20:55 Comment(5)
What you're looking for seems to be available at FaceDetector.SparseArray<Face> detect(Frame var1). Once you get hold of a Frame object you have getBitmap(), which sounds very promising. Unfortunately that class is final, which means intercepting Frame should only be possible using reflection.Plash
I'm not sure I get what you suggest. Am I right that you assume I have a Frame object at hand? Because that's the problem I'm facing: I have a detected Face object without the current Frame, and I need the Frame corresponding to a given Face object. For example, the link I provided in my question has a method onUpdate() at the bottom. Given this method, how can I get the current Frame corresponding to the Face argument of the method?Massimiliano
We don't have access to a frame unless you wrap all methods of FaceDetector and intercept the detect method (reflection apparently won't help because it's final). Save the frame and create a getter, then call it from other places at the right moment.Plash
I see, thanks for your helpMassimiliano
Please post an answer if you try that and it worksPlash

You have to create your own Detector that wraps the stock google.vision face detector. In your MainActivity (or FaceTrackerActivity in the Google tracking sample), create your detector wrapper class as follows:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import java.io.ByteArrayOutputStream;

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // The preview frame arrives as an NV21 byte buffer; wrap it in a YuvImage
        // so it can be compressed to JPEG and decoded into a Bitmap.
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        // tempBitmap is a Bitmap version of the frame currently captured by your
        // CameraSource in real time, at the camera resolution (not the screen resolution).
        // Note that it is in the sensor orientation; check frame.getMetadata().getRotation()
        // if you need it upright. Add your own processing code here.

        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }

    @Override
    public void release() {
        mDelegate.release();
    }
}
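The decoded bitmap is trapped inside detect(), so a Tracker's onUpdate() cannot see it directly. One way to bridge that gap (addressing the "how do you use this?" question) is to publish each frame into a shared holder that the tracker reads. The sketch below is a minimal, hypothetical helper, not part of the Vision API: MyFaceDetector would call holder.set(tempBitmap) inside detect(), and your tracker would call holder.get() in onUpdate().

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical thread-safe "latest frame" holder. Detection runs on a camera
// worker thread while your tracker callbacks may read from elsewhere, so an
// AtomicReference avoids torn reads without locking.
class LatestFrameHolder<T> {
    private final AtomicReference<T> latest = new AtomicReference<>();

    // Called by the detector after decoding each frame.
    void set(T frame) {
        latest.set(frame);
    }

    // Called by the tracker (e.g. in onUpdate) to fetch the most recent frame.
    T get() {
        return latest.get();
    }
}
```

In the real app T would be android.graphics.Bitmap; because detect() runs synchronously before the Tracker callbacks fire, the bitmap read in onUpdate() is the one that produced the Face being reported.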

Then wire your own detector into the CameraSource by modifying your createCameraSource method as follows:

private void createCameraSource() {

    Context context = getApplicationContext();

    // You can use your own settings for your detector
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // Wrap the stock Vision detector in your own MyFaceDetector
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for CameraSource
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
Champaign answered 19/7, 2017 at 14:23 Comment(1)
Thanks, but how do you use this? How can I get the frame bitmap from the FaceGraphicTracker class? ThanksPalatial
