How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

Unlike with Android, I'm relatively new to GL and libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background inside libgdx in real time, is multi-faceted. Here are the main concerns:

  1. The Android camera's preview image is only guaranteed to be in the YUV-NV21 space (or in the similar YV12 space, where the U and V channels are not interleaved but grouped). Assuming that most modern devices will provide an implicit RGB conversion is VERY wrong; e.g. the newest Samsung Note 10.1 2014 version only provides the YUV formats. Since nothing can be drawn to the screen in OpenGL unless it is in RGB, the color space must somehow be converted.

  2. The example in the libgdx documentation (Integrating libgdx and the device camera) uses an Android SurfaceView that sits below everything else and draws the image with GLES 1.1. Since the beginning of March 2014, OpenGL ES 1.x support has been removed from libgdx because it is obsolete and nearly all devices now support GLES 2.0. If you try the same sample with GLES 2.0, the 3D objects you draw over the image will be half-transparent. Since the surface behind has nothing to do with GL, this cannot really be controlled; disabling blending/translucency does not work. Therefore, rendering this image must be done purely in GL.

  3. This has to be done in real-time, so the color space conversion must be VERY fast. Software conversion using Android bitmaps will probably be too slow.

  4. As a side feature, the camera image must be accessible from the Android code in order to perform tasks other than drawing it on the screen, e.g. sending it to a native image processor through JNI.

The question is, how is this task done properly and as fast as possible?

Antiphlogistic answered 17/3, 2014 at 14:23 Comment(0)

The short answer is to load the camera image channels (Y, UV) into textures and draw these textures onto a Mesh using a custom fragment shader that does the color space conversion for us. Since this shader runs on the GPU, it is much faster than a CPU conversion and certainly much, much faster than doing it in Java code. Since this mesh is part of GL, any other 3D shapes or sprites can safely be drawn over or under it.

I solved the problem starting from this answer: https://mcmap.net/q/581645/-yuv-to-rgb-conversion-by-fragment-shader. I understood the general method from the following link: How to use camera view with OpenGL ES; it is written for Bada but the principles are the same. The conversion formulas there were a bit weird, so I replaced them with the ones in the Wikipedia article YUV Conversion to/from RGB.

The following are the steps leading to the solution:

YUV-NV21 explanation

Live images from the Android camera are preview images. The default color space for the camera preview (and one of the two guaranteed color spaces) is YUV-NV21. The explanation of this format is very scattered, so I'll explain it here briefly:

The image data is made of (width x height) x 3/2 bytes. The first width x height bytes are the Y channel, 1 brightness byte for each pixel. The following (width / 2) x (height / 2) x 2 = width x height / 2 bytes are the UV plane. Each two consecutive bytes are the V,U (in that order according to the NV21 specification) chroma bytes for the 2 x 2 = 4 original pixels. In other words, the UV plane is (width / 2) x (height / 2) pixels in size and is downsampled by a factor of 2 in each dimension. In addition, the U,V chroma bytes are interleaved.
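To make the layout concrete, here is a small indexing sketch that computes where the Y, V and U bytes belonging to a given pixel live inside an NV21 byte array (just for illustration; it is not used in the code below):

//Sketch of NV21 indexing: returns the indices of the Y, V and U bytes
//belonging to pixel (x, y) of a (width x height) NV21 image
static int[] nv21Indices(int x, int y, int width, int height) {
    int yIndex = y * width + x;     //Y plane: 1 byte per pixel
    int vIndex = width * height     //the UV plane starts right after the Y plane
            + (y / 2) * width       //each UV row covers 2 image rows and is 'width' bytes long
            + (x / 2) * 2;          //V comes first in NV21...
    int uIndex = vIndex + 1;        //...immediately followed by U
    return new int[]{ yIndex, vIndex, uIndex };
}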

Here is a very nice image that explains YUV-NV12; NV21 is just the same with the U and V bytes swapped:

[Image: YUV-NV12 memory layout]

How to convert this format to RGB?

As stated in the question, this conversion would take too much time to run live if done inside the Android code. Luckily, it can be done inside a GL shader, which runs on the GPU. This allows it to run VERY fast.
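For reference, the conversion used in the shader below is the analog YUV-to-RGB formula from the Wikipedia article, with the stored chroma values recentered around zero (they arrive in the [0.0, 1.0] texture range):

R = Y + 1.13983 * V
G = Y - 0.39465 * U - 0.58060 * V
B = Y + 2.03211 * U
(where U and V are the sampled chroma values minus 0.5)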

The general idea is to pass our image's channels as textures to the shader and render them in a way that does the RGB conversion. For this, we first have to copy the channels of our image into buffers that can be passed to textures:

byte[] image;                 //Raw NV21 bytes delivered by the camera preview callback
ByteBuffer yBuffer, uvBuffer; //Direct buffers that will back the Y and UV textures

...

//The first (width*height) bytes are the Y channel
yBuffer.put(image, 0, width*height);
yBuffer.position(0);

//The following (width*height/2) bytes are the interleaved V,U plane
uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);

Then, we pass these buffers to actual GL textures:

/*
 * Prepare the Y channel texture
 */

//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();

//Y texture is (width*height) in size and each pixel is one byte; 
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B 
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 
    width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

//Use linear interpolation when magnifying/minifying the texture to 
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

/*
 * Prepare the UV channel texture
 */

//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();

//UV texture is (width/2*height/2) in size (downsampled by 2 in 
//both dimensions, each pixel corresponds to 4 pixels of the Y channel) 
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL 
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's 
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also read V from G or B. 
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 
    width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, 
    uvBuffer);

//Use linear interpolation when magnifying/minifying the texture to 
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, 
    GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

Next, we render the mesh we prepared earlier (covers the entire screen). The shader will take care of rendering the bound textures on the mesh:

shader.begin();

//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);

//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);

mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();

Finally, the shader takes over the task of rendering our textures to the mesh. The fragment shader that achieves the actual conversion looks like the following:

String fragmentShader = 
    "#ifdef GL_ES\n" +
    "precision highp float;\n" +
    "#endif\n" +

    "varying vec2 v_texCoord;\n" +
    "uniform sampler2D y_texture;\n" +
    "uniform sampler2D uv_texture;\n" +

    "void main (void){\n" +
    "   float r, g, b, y, u, v;\n" +

    //We had put the Y values of each pixel to the R,G,B components by 
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    "   y = texture2D(y_texture, v_texCoord).r;\n" + 

    //We had put the U and V values of each pixel to the A and R,G,B 
    //components of the texture respectively using GL_LUMINANCE_ALPHA. 
    //Since the U,V bytes are interleaved in the texture, this is probably 
    //the fastest way to use them in the shader
    "   u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
    "   v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +

    //The numbers are just YUV to RGB conversion constants
    "   r = y + 1.13983*v;\n" +
    "   g = y - 0.39465*u - 0.58060*v;\n" +
    "   b = y + 2.03211*u;\n" +

    //We finally set the RGB color of our pixel
    "   gl_FragColor = vec4(r, g, b, 1.0);\n" +
    "}\n"; 

Please note that we access both the Y and UV textures using the same coordinate variable v_texCoord. This is because v_texCoord ranges from 0.0 to 1.0, scaling from one end of a texture to the other rather than addressing actual texture pixels, so the same coordinates point to corresponding locations in the full-size Y texture and the half-size UV texture. This is one of the nicest features of shaders.

The full source code

Since libgdx is cross-platform, we need an object that can be implemented differently on each platform to handle the device camera and the rendering. For example, you might want to bypass the YUV-to-RGB shader conversion altogether if the hardware can provide you with RGB images directly. For this reason, we need a device camera controller interface that will be implemented by each platform:

public interface PlatformDependentCameraController {

    void init();

    void renderBackground();

    void destroy();
} 
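For example, a desktop (or any other camera-less) backend can provide a do-nothing implementation so that the shared core project still runs there. The following is a minimal sketch of such a stub (the class name is just an example, adapt it to your project):

public class DesktopDummyCameraController implements PlatformDependentCameraController {

    @Override
    public void init() {
        //No camera on this platform; nothing to set up
    }

    @Override
    public void renderBackground() {
        //Render nothing (or a static placeholder) behind the scene
    }

    @Override
    public void destroy() {
        //Nothing to release
    }
}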

The Android version of this interface is as follows (the live camera image is assumed to be 1280x720 pixels):

public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {

    private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives

    private Camera camera; //The camera object

    //The Y and UV buffers that will pass our image channel data to the textures
    private ByteBuffer yBuffer;
    private ByteBuffer uvBuffer;

    ShaderProgram shader; //Our shader
    Texture yTexture; //Our Y texture
    Texture uvTexture; //Our UV texture
    Mesh mesh; //Our mesh that we will draw the texture on

    public AndroidDependentCameraController(){

        //Our YUV image is 12 bits per pixel
        image = new byte[1280*720/8*12];
    }

    @Override
    public void init(){

        /*
         * Initialize the OpenGL/libgdx stuff
         */

        //Do not enforce power of two texture sizes
        Texture.setEnforcePotImages(false);

        //Allocate textures
        yTexture = new Texture(1280,720,Format.Intensity); //An 8-bit-per-pixel format
        uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit-per-pixel format

        //Allocate buffers on the native memory space, not inside the JVM heap
        yBuffer = ByteBuffer.allocateDirect(1280*720);
        uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
        yBuffer.order(ByteOrder.nativeOrder());
        uvBuffer.order(ByteOrder.nativeOrder());

        //Our vertex shader code; nothing special
        String vertexShader = 
                "attribute vec4 a_position;                         \n" + 
                "attribute vec2 a_texCoord;                         \n" + 
                "varying vec2 v_texCoord;                           \n" + 

                "void main(){                                       \n" + 
                "   gl_Position = a_position;                       \n" + 
                "   v_texCoord = a_texCoord;                        \n" +
                "}                                                  \n";

        //Our fragment shader code; takes the Y,U,V values of each pixel and calculates its R,G,B color,
        //effectively performing the YUV to RGB conversion
        String fragmentShader = 
                "#ifdef GL_ES                                       \n" +
                "precision highp float;                             \n" +
                "#endif                                             \n" +

                "varying vec2 v_texCoord;                           \n" +
                "uniform sampler2D y_texture;                       \n" +
                "uniform sampler2D uv_texture;                      \n" +

                "void main (void){                                  \n" +
                "   float r, g, b, y, u, v;                         \n" +

                //We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE, 
                //that's why we're pulling it from the R component, we could also use G or B
                "   y = texture2D(y_texture, v_texCoord).r;         \n" + 

                //We had put the U and V values of each pixel to the A and R,G,B components of the
                //texture respectively using GL_LUMINANCE_ALPHA. Since the U,V bytes are interleaved 
                //in the texture, this is probably the fastest way to use them in the shader
                "   u = texture2D(uv_texture, v_texCoord).a - 0.5;  \n" +                                   
                "   v = texture2D(uv_texture, v_texCoord).r - 0.5;  \n" +


                //The numbers are just YUV to RGB conversion constants
                "   r = y + 1.13983*v;                              \n" +
                "   g = y - 0.39465*u - 0.58060*v;                  \n" +
                "   b = y + 2.03211*u;                              \n" +

                //We finally set the RGB color of our pixel
                "   gl_FragColor = vec4(r, g, b, 1.0);              \n" +
                "}                                                  \n"; 

        //Create and compile our shader
        shader = new ShaderProgram(vertexShader, fragmentShader);

        //Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
        mesh = new Mesh(true, 4, 6, 
                new VertexAttribute(Usage.Position, 2, "a_position"), 
                new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));

        //The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
        float[] vertices = {
                -1.0f,  1.0f,   // Position 0
                0.0f,   0.0f,   // TexCoord 0
                -1.0f,  -1.0f,  // Position 1
                0.0f,   1.0f,   // TexCoord 1
                1.0f,   -1.0f,  // Position 2
                1.0f,   1.0f,   // TexCoord 2
                1.0f,   1.0f,   // Position 3
                1.0f,   0.0f    // TexCoord 3
        };

        //The indices come in trios of vertex indices that describe the triangles of our mesh
        short[] indices = {0, 1, 2, 0, 2, 3};

        //Set vertices and indices to our mesh
        mesh.setVertices(vertices);
        mesh.setIndices(indices);

        /*
         * Initialize the Android camera
         */
        camera = Camera.open(0);

        //We set the buffer ourselves that will be used to hold the preview image
        camera.setPreviewCallbackWithBuffer(this); 

        //Set the camera parameters
        Camera.Parameters params = camera.getParameters();
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
        params.setPreviewSize(1280,720); 
        camera.setParameters(params);

        //Start the preview
        camera.startPreview();

        //Set the first buffer, the preview doesn't start unless we set the buffers
        camera.addCallbackBuffer(image);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {

        //Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
        camera.addCallbackBuffer(image);
    }

    @Override
    public void renderBackground() {

        /*
         * Because of Java's limitations, we can't reference the middle of an array and 
         * we must copy the channels in our byte array into buffers before setting them to textures
         */

        //Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
        yBuffer.put(image, 0, 1280*720);
        yBuffer.position(0);

        //Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV plane; the V and U bytes are interleaved
        uvBuffer.put(image, 1280*720, 1280*720/2);
        uvBuffer.position(0);

        /*
         * Prepare the Y channel texture
         */

        //Set texture slot 0 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
        yTexture.bind();

        //Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);


        /*
         * Prepare the UV channel texture
         */

        //Set texture slot 1 as active and bind our texture object to it
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
        uvTexture.bind();

        //UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel) 
        //and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
        //and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
        //Note that we could have also read V from G or B. 
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);

        //Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

        /*
         * Draw the textures onto a mesh using our shader
         */

        shader.begin();

        //Set the uniform y_texture object to the texture at slot 0
        shader.setUniformi("y_texture", 0);

        //Set the uniform uv_texture object to the texture at slot 1
        shader.setUniformi("uv_texture", 1);

        //Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
        mesh.render(shader, GL20.GL_TRIANGLES);
        shader.end();
    }

    @Override
    public void destroy() {
        camera.stopPreview();
        camera.setPreviewCallbackWithBuffer(null);
        camera.release();
    }
}
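Note that 1280x720 is hard-coded everywhere above. In a real application you would normally pick one of the preview sizes the camera actually reports and size the buffers and textures accordingly; here is a small sketch of such a selection (just an illustration, not part of the code above):

//Sketch: pick the supported preview size closest to a desired resolution
//instead of hard-coding 1280x720; buffers and textures must then be
//allocated using the chosen width and height
static Camera.Size choosePreviewSize(Camera.Parameters params, int desiredWidth, int desiredHeight) {
    Camera.Size best = null;
    long bestDiff = Long.MAX_VALUE;
    for (Camera.Size size : params.getSupportedPreviewSizes()) {
        long diff = Math.abs((long) size.width * size.height - (long) desiredWidth * desiredHeight);
        if (diff < bestDiff) {
            bestDiff = diff;
            best = size;
        }
    }
    return best; //then e.g. params.setPreviewSize(best.width, best.height);
}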

The main application part just ensures that init() is called once in the beginning, renderBackground() is called every render cycle and destroy() is called once in the end:

public class YourApplication implements ApplicationListener {

    private final PlatformDependentCameraController deviceCameraControl;

    public YourApplication(PlatformDependentCameraController cameraControl) {
        this.deviceCameraControl = cameraControl;
    }

    @Override
    public void create() {              
        deviceCameraControl.init();
    }

    @Override
    public void render() {      
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

        //Render the background that is the live camera image
        deviceCameraControl.renderBackground();

        /*
         * Render anything here (sprites/models etc.) that you want to go on top of the camera image
         */
    }

    @Override
    public void dispose() {
        deviceCameraControl.destroy();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

The only other Android-specific part is the following extremely short main Android code: you just create a new Android-specific device camera handler and pass it to the main libgdx object (also remember that the CAMERA permission must be declared in the Android manifest, otherwise Camera.open() will fail):

public class MainActivity extends AndroidApplication {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
        cfg.a = 8;
        cfg.b = 8;
        cfg.g = 8;
        cfg.r = 8;

        PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
        initialize(new YourApplication(cameraControl), cfg);

        graphics.getView().setKeepScreenOn(true);
    }
}
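If you also run the shared core project on the desktop during development, the corresponding launcher can simply pass in the camera-less controller sketched earlier (again only a sketch; the class names are examples):

public class DesktopLauncher {

    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.width = 1280;
        cfg.height = 720;

        //Use the do-nothing camera controller on platforms without a camera
        PlatformDependentCameraController cameraControl = new DesktopDummyCameraController();
        new LwjglApplication(new YourApplication(cameraControl), cfg);
    }
}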

How fast is it?

I tested this routine on two devices. While the measurements are not constant across frames, a general profile can be observed (a simple way of taking such timings is sketched after the list):

  1. Samsung Galaxy Note II LTE - (GT-N7105): Has ARM Mali-400 MP4 GPU.

    • Rendering one frame takes around 5-6 ms, with occasional jumps to around 15 ms every couple of seconds
    • Actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
    • Creation and binding of both textures consistently take 1-3 ms in total
    • ByteBuffer copies generally take 1-3 ms in total but jump to around 7ms occasionally, probably due to the image buffer being moved around in the JVM heap
  2. Samsung Galaxy Note 10.1 2014 - (SM-P600): Has ARM Mali-T628 GPU.

    • Rendering one frame takes around 2-4 ms, with rare jumps to around 6-10 ms
    • Actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES);) consistently takes 0-1 ms
    • Creation and binding of both textures take 1-3 ms in total but jump to around 6-9 ms every couple of seconds
    • ByteBuffer copies generally take 0-2 ms in total but jump to around 6ms very rarely
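If you want to reproduce this kind of per-section profile, wrapping each block in plain System.nanoTime() calls and logging through libgdx is enough (just a sketch of how such timings can be taken):

long start = System.nanoTime();

//...the block being measured, e.g. the two ByteBuffer copies...

long elapsedMs = (System.nanoTime() - start) / 1000000;
Gdx.app.log("CameraRender", "ByteBuffer copies took " + elapsedMs + " ms");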

Please don't hesitate to share if you think these profiles can be made faster with some other method. I hope this little tutorial helped.

Antiphlogistic answered 17/3, 2014 at 14:23 Comment(12)
Thanks for this awesome post! I've just tried this on my Nexus 4, but unfortunately I am only getting a green screen rendered. Do you have any idea what causes this? Would be great if you could help me!Krueger
OK I've found out that the "onPreviewFrame()" is actually never called... so it doesn't seem to enter the preview loop at all. I now added a SurfaceTexture object, set it to GL20.GL_TEXTURE0 and added it as setPreviewTexture(surfaceTexture). Now it enters the loop and renders the image as wanted through the yTexture and uvTexture... But why isn't the loop entered without that?Krueger
We had the same issue with some of our devices; it seems to be a device specific issue. We're currently investigating.Sortilege
Ok thank you... :) Well one workaround is through doing the setPreviewTexture stuff to start the loop... But still weird that this happens on some devicesKrueger
I have applied your method, but it fills with camera background all of textures I have added after renderBackground(), here's a screenshot of what I've gotGalatians
This is an absolutely first rate explanation and example, thanks!Pappose
Hi, I am trying to render a YUV 422 (arranged like YUYV) frame using the same approach as you explained, but I am getting a weird output. Can I post my code here? @AyberkÖzgürBatch
@Vinothios We haven't been working on this project for a long time, so I have no way to test your code, sorry.Sortilege
@AyberkÖzgür I have resolved my issue myself. Anyway thanks for the reply :-)Batch
I have some problems while converting yuv nv12 to RGB. Can you help, please? #60274022Cassidy
glPixelStore(GL_UNPACK_ROW_LENGTH, frame->linesize[1]); and I got an access violation when using GL_LUMINANCE_ALPHA on the uv_buffer from an ffmpeg AVFrame. What happened? It is in NV12 format.Scorify
@Vinothios can you please post your code if you still have :)Whap

For the fastest and most optimized way, just use the common GL extension:

//Fragment Shader
#extension GL_OES_EGL_image_external : require
uniform samplerExternalOES u_Texture;
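A complete fragment shader built around this extension could look like the following (a minimal sketch; the varying and uniform names are just examples):

#extension GL_OES_EGL_image_external : require
precision mediump float;

varying vec2 v_texCoord;
uniform samplerExternalOES u_Texture;

void main() {
    //The driver delivers the camera frame already converted to RGB
    gl_FragColor = texture2D(u_Texture, v_texCoord);
}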

Then, in Java:

surfaceTexture = new SurfaceTexture(textureIDs[0]);
try {
   someCamera.setPreviewTexture(surfaceTexture);
} catch (IOException t) {
   Log.e(TAG, "Cannot set preview texture target!");
}

someCamera.startPreview();

private static final int GL_TEXTURE_EXTERNAL_OES = 0x8D65;

Then, on the Java GL thread:

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);
GLES20.glUniform1i(uTextureHandle, 0);

The color conversion is already done for you. You can do whatever you want right in the fragment shader.
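These snippets assume that textureIDs[0] has already been generated and configured for the external target; here is a typical setup sketch (not shown in the snippets above):

int[] textureIDs = new int[1];
GLES20.glGenTextures(1, textureIDs, 0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);

//External textures only support non-mipmapped filtering and clamp-to-edge wrapping
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

//Each frame, surfaceTexture.updateTexImage() must be called on the GL thread before drawing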

All in all, it's not a pure libgdx solution since it is platform dependent. You can initialize the platform-dependent stuff in the wrapper and then pass it to the libgdx activity.

Hope that saves you some time in your research.

Spoon answered 19/3, 2014 at 12:54 Comment(9)
It seems like a nice speed-up in terms of avoiding buffer copies, but I really need to have the explicit byte buffer that contains the image for native OpenCV processing. Is there any way that you can extract the byte buffer from the SurfaceTexture?Sortilege
When you use OpenCV, you don't need to hesitate with the basics; they have a nice API ready to use. If you want some object recognition it would be the best way. You could do this with your own calculating shader, extracting the resolved values after each frame, but you have to be pretty good at math to do better than the long-developed OpenCV lib. If you just want to change visuals, just use the SurfaceTexture. There is also a read option with glReadPixels(), but I wouldn't suggest it where performance is needed.Spoon
I'm not extremely familiar with OpenCV but it might actually be a good idea to try to let OpenCV do the conversion on the GPU (I've been told there is such a capability) and then render the image on the screen somehow. The thing is, the OpenCV processing is done in an external library that requires an ordinary byte array which I'm not really allowed to change.Sortilege
pls tell me, what you are aiming for. What do you want to get out of the data. It feels like you want to take a sledgehammer to crack a nut ;)Spoon
Ok, here it is: I take the live camera image, convert it to RGB and display on the screen, and at the same time send it to a native code that uses OpenCV in the background to do some image processing and return some simple results to the Java code. The said native code is in another library developed by someone else. Don't worry, the current solution is working nicely and is fast enough, I just wanted to share my solution in this thread :)Sortilege
OK, I don't get why you are doing the conversion yourself and not letting OpenCV do it.Spoon
I don't use OpenCV to do conversion because it introduces another software dependency; and to make it worse, a JNI dependency which is not particularly pretty. I'm already using JNI/OpenCV in my application but the solution could have been used without them. I see your point that my solution was particularly designed to open the camera image to the native code and why not go to OpenCV while we're at it. It could very well be done but I doubt the conversion could be optimized significantly more if at all to actually go through the effort of using OpenCV.Sortilege
And the way I'm using OpenCV is bypassing the whole OpenCV4Android SDK/ndk-build and building the actual OpenCV code using cmake and standalone NDK toolchains into shared libraries to be loaded dynamically during runtime. This gets rid of the OpenCV Manager and ndk-build process altogether, which by the way have their own advantages and disadvantages. If you want, I'll point you to my argumentation/rant about this issue which will be up in a public place in a few days. In short, I don't want to introduce OpenCV dependencies in tutorial codes unless absolutely necessary.Sortilege
@Spoon you should earn more reputation for this answer! Just a hint for others who are trying this: check if your test device supports this extension with 'Gdx.gl20.glGetString(GL20.GL_EXTENSIONS)', implement the 'onFrameAvailable' callback of the 'SurfaceTexture' and call 'updateTexImage' only from your OpenGL thread.Unitarian
