OpenGL byte-buffer with OpenCV face detection

I am trying to overlay stickers on faces using OpenCV and OpenGL.

I am reading the pixels into a ByteBuffer inside onDrawFrame():

@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }

    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);
    mat.put(0, 0, byteBuffer.array());

    if (mCascadeClassifier != null) {
        mFaces.empty();
        mCascadeClassifier.detectMultiScale(mat, mFaces);
        Log.d(TAG, "No. of faces detected : " + mFaces.toArray().length);
    }

    drawFrame(mTextureId, mSTMatrix);
}

My mat object is initialized with the camera preview width and height:

mat = new Mat(height, width, CvType.CV_8UC3);

The log reports 0 face detections. I have two questions:

  1. What am I missing here for face detection using OpenCV?
  2. How can I improve the performance/efficiency of video frame rendering while doing real-time face detection? glReadPixels() takes time to execute and slows down the rendering.
Aretha answered 27/10, 2015 at 12:59

You are calling glReadPixels() on the GLES frame buffer before you've rendered anything. You'd need to do it after drawFrame() if you were hoping to read back the SurfaceTexture rendering. You may want to consider rendering the texture offscreen to a pbuffer EGLSurface instead, and reading back from that.
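
A minimal sketch of the pbuffer approach, using raw EGL14 and assuming eglDisplay, eglConfig, and eglContext already exist (grafika's EglCore/OffscreenSurface wrap these same calls):

// Create a fixed-size offscreen pbuffer surface and make it current.
int[] surfaceAttribs = {
        EGL14.EGL_WIDTH, mWidth,
        EGL14.EGL_HEIGHT, mHeight,
        EGL14.EGL_NONE
};
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(eglDisplay, eglConfig, surfaceAttribs, 0);
EGL14.eglMakeCurrent(eglDisplay, pbuffer, pbuffer, eglContext);

// Render the current SurfaceTexture frame into the pbuffer first...
drawFrame(mTextureId, mSTMatrix);

// ...then read back; the buffer now contains the rendered frame.
byteBuffer.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);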

There are a few different ways to get the pixel data from the Camera:

  1. Use the Camera byte[] APIs. Generally involves a software copy, so it tends to be slow.
  2. Send the output to an ImageReader. This gives you immediate access to the raw YUV data (see the sketch after this list).
  3. Send the output to a SurfaceTexture, render the texture, read RGB data out with glReadPixels() (which is what I believe you are trying to do). This is generally very fast, but on some devices and versions of Android it can be slow.
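
A minimal sketch of option 2, assuming a camera pipeline that targets reader.getSurface() and a background handler (both omitted here); real code should also honor the plane's row stride:

ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        // The Y plane alone is enough: detectMultiScale works on grayscale.
        // (Assumes rowStride == width; otherwise copy row by row.)
        ByteBuffer yPlane = image.getPlanes()[0].getBuffer();
        byte[] yBytes = new byte[yPlane.remaining()];
        yPlane.get(yBytes);
        Mat gray = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
        gray.put(0, 0, yBytes);
        // ... run mCascadeClassifier.detectMultiScale(gray, mFaces) here ...
        image.close();
    }
}, handler);
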
Supraorbital answered 27/10, 2015 at 17:43
Once again, thank you for highlighting the problem. I am new to OpenGL, so I'm having difficulty comprehending the offscreen buffer. There is a createOffscreenSurface method in EglSurfaceBase (grafika project), but I am not sure how to use it. To clarify: I should have two render surfaces (normal and offscreen); glReadPixels() should happen on the offscreen surface, where I process the frame with OpenCV, and the edited frame is then rendered to the normal surface. It would be great if you could point me to an example that deals with a similar situation. – Aretha
I don't know of an example involving OpenCV. ContinuousCaptureActivity's drawFrame() switches between two surfaces (one for display, one for video) when rendering. The TextureUploadActivity benchmark does all its rendering onto an offscreen surface, then for debugging saves a copy of the last one using saveFrame() (which uses glReadPixels()). So the various pieces are there, but there's no fully-formed example in Grafika (see the sketch after these comments). – Supraorbital
This is great! Sincere thanks for your continuous help. :) – Aretha
ImageReader is Camera2. Don't ever use the Camera2 API! (Many manufacturers haven't implemented it correctly.) github.com/googlesamples/android-Camera2Basic/issues/123 – Diarist
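
For reference, a sketch of the two-surface pattern described in the comments above, using grafika-style surface helpers (the mDisplaySurface/mOffscreenSurface field names are illustrative, not grafika's actual code):

// Draw once to the on-screen surface...
mDisplaySurface.makeCurrent();
drawFrame(mTextureId, mSTMatrix);
mDisplaySurface.swapBuffers();

// ...and once more to the offscreen surface, reading back only after rendering.
mOffscreenSurface.makeCurrent();
drawFrame(mTextureId, mSTMatrix);
byteBuffer.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);
// Process with OpenCV here, then continue with the next frame.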
