How to process individual camera frames while using Android mobile vision library
I am trying to make a camera app that detects faces using the Google Mobile Vision API with a custom camera instance, NOT the "CameraSource" bundled with the Google API, because I am also processing the frames to detect colors, and CameraSource does not give me access to the camera frames.

After searching this issue, the only results I've found are about using Mobile Vision with its own CameraSource, not with any custom camera1 API. I've tried to override the frame processing and then run detection on the resulting bitmaps, like this:

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Log.d("onPreviewFrame", "" + data.length);
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;

            // Convert the NV21 preview bytes to JPEG, then decode to a Bitmap
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 20, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());

            detector = new FaceDetector.Builder(getApplicationContext())
                    .setTrackingEnabled(true)
                    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                    .setMode(FaceDetector.FAST_MODE)
                    .build();

            detector.setProcessor(
                    new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                            .build());

            if (detector.isOperational()) {
                frame = new Frame.Builder().setBitmap(bmp).build();
                mFaces = detector.detect(frame);
                // detector.release();
            }
        }
    });

So is there any way to link Mobile Vision to my custom camera instance, so that it processes the frames and detects faces? You can see what I've done so far here: https://github.com/etman55/FaceDetectionSampleApp

**NEW UPDATE**

After finding an open-source version of the CameraSource class, I solved most of my problems, but now, when trying to detect faces, the detector receives the frames correctly yet detects nothing. You can see my last commit in the GitHub repo.

Foxworth answered 14/2, 2017 at 11:37 Comment(2)
I think it's hard to support the Vision API on API level 21; check the prerequisites of the Android Vision API (SDK level 26 or greater), and camera1 is deprecated from API level 21. github.com/googlesamples/android-vision developer.android.com/reference/android/hardware/Camera.html – Metabolite
Actually, Android Vision uses its own camera1 API class, and it's locked with ProGuard, so it's hard to figure out; that's why I am making my custom camera1 class. – Foxworth
I can provide you with some very useful tips.

  • Building a new FaceDetector for every frame the camera delivers is a very bad idea, and also unnecessary. You only have to create it once, outside the preview callback, and reuse it (see the sketch below).

  • It is not necessary to take the YUV_420_SP (NV21) preview bytes, wrap them in a YuvImage, compress that to JPEG, decode it to a Bitmap, and only then create a Frame.Builder() with the Bitmap. If you take a look at the Frame.Builder documentation you can see that it accepts NV21 data straight from the camera preview. Like this:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        detector.detect(new Frame.Builder()
                .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21));
    }

Neilson answered 11/4, 2017 at 20:46 Comment(1)
It should also call .build(), like: detector.detect(new Frame.Builder().setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21).build()); – Angle
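
Putting both tips together, plus the .build() fix from the comment above, here is a minimal sketch of what the callback could look like. It assumes the detector is built once (e.g. in onCreate()) and that the preview format is camera1's default NV21; ByteBuffer is java.nio.ByteBuffer and SparseArray is android.util.SparseArray:

    // Build the detector once and reuse it for every frame.
    final FaceDetector detector = new FaceDetector.Builder(getApplicationContext())
            .setTrackingEnabled(true)
            .setMode(FaceDetector.FAST_MODE)
            .build();

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;

            // Wrap the NV21 preview bytes directly; no JPEG/Bitmap round trip.
            // If the detector receives frames but never finds a face, the frame
            // rotation probably needs to be set via setRotation() to match the
            // device orientation.
            Frame frame = new Frame.Builder()
                    .setImageData(ByteBuffer.wrap(data), width, height, ImageFormat.NV21)
                    .build();

            SparseArray<Face> faces = detector.detect(frame);
            Log.d("onPreviewFrame", "faces detected: " + faces.size());
        }
    });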
And the Kotlin version (here the camera preview comes from the Fotoapparat library, and detector is a pre-built Mobile Vision detector):

    import com.google.android.gms.vision.Frame as GoogleVisionFrame
    import io.fotoapparat.preview.Frame as FotoapparatFrame

    fun recogniseFrame(frame: FotoapparatFrame) = detector.detect(buildDetectorFrame(frame))
        .asSequence()
        .firstOrNull { it.displayValue.isNotEmpty() }
        ?.displayValue

    private fun buildDetectorFrame(frame: FotoapparatFrame) =
        GoogleVisionFrame.Builder()
            .setRotation(frame.rotation.toGoogleVisionRotation())
            .setImageData(
                ByteBuffer.wrap(frame.image),
                frame.size.width,
                frame.size.height,
                ImageFormat.NV21
            ).build()
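
Note the import aliases: both Fotoapparat and Mobile Vision define a class named Frame, so the aliases keep the two apart. toGoogleVisionRotation() is presumably a small extension in that codebase that maps Fotoapparat's rotation in degrees to the Frame.ROTATION_* constants; it is not part of either library.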
Corybantic answered 25/2, 2019 at 12:35 Comment(0)
