Crop rect is unknown in this image

What I actually want is to get a bitmap from ARCore's onUpdate method. I tried various solutions, but none of them worked. Then I found another solution and tried to implement it, but it throws a java.lang.UnsupportedOperationException: Crop rect is unknown in this image. exception. Here is my code:

@Override
public void onUpdate(FrameTime frameTime) {
    try {
        com.google.ar.core.Camera camera = fragment.getArSceneView().getArFrame().getCamera();
        if (camera.getTrackingState() == TrackingState.TRACKING) {
            fragment.getPlaneDiscoveryController().hide();
            final Handler handler = new Handler(Looper.getMainLooper());
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                final Image image = fragment.getArSceneView().getArFrame().acquireCameraImage();
                                final ByteBuffer yuvBytes = imageToByteBuffer(image);

                                // Convert YUV to RGB
                                final RenderScript rs = RenderScript.create(ShareScreenActivity.this);
                                final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
                                final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

                                final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
                                allocationYuv.copyFrom(yuvBytes.array());

                                ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
                                scriptYuvToRgb.setInput(allocationYuv);
                                scriptYuvToRgb.forEach(allocationRgb);
                                allocationRgb.copyTo(bitmap);

                                // Release
                                bitmap.recycle();
                                allocationYuv.destroy();
                                allocationRgb.destroy();
                                rs.destroy();
                                image.close();
                            } catch (NotYetAvailableException e) {
                                e.printStackTrace();
                            }
                        }
                    });
                }
            }, 100);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

imageToByteBuffer

private ByteBuffer imageToByteBuffer(final Image image) {
    final Rect crop = image.getCropRect(); // throws UnsupportedOperationException here
    final int width = crop.width();
    final int height = crop.height();

    final Image.Plane[] planes = image.getPlanes();
    final byte[] rowData = new byte[planes[0].getRowStride()];
    final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++) {
        if (planeIndex == 0) {
            channelOffset = 0;
            outputStride = 1;
        } else if (planeIndex == 1) {
            channelOffset = width * height + 1;
            outputStride = 2;
        } else if (planeIndex == 2) {
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer = planes[planeIndex].getBuffer();
        final int rowStride = planes[planeIndex].getRowStride();
        final int pixelStride = planes[planeIndex].getPixelStride();

        final int shift = (planeIndex == 0) ? 0 : 1;
        final int widthShifted = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++) {
            final int length;

            if (pixelStride == 1 && outputStride == 1) {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            } else {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++) {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1) {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }
    return output;
}
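For reference, the plane offsets in the loop above assume the NV21 layout: the full-resolution Y plane first, followed by interleaved V/U pairs. The class below is a hypothetical plain-Java model (no Android dependencies, names are my own) of that offset arithmetic:

```java
// Minimal sketch of the NV21 layout assumed by imageToByteBuffer above.
// Plane 1 (U) starts at width*height + 1 and plane 2 (V) at width*height,
// both advancing by 2, which produces V,U,V,U... after the Y plane.
public final class Nv21Layout {

    // Offset of the first byte of a given YUV_420_888 plane in the packed buffer.
    static int channelOffset(int planeIndex, int width, int height) {
        switch (planeIndex) {
            case 0: return 0;                  // Y plane
            case 1: return width * height + 1; // U: second byte of each VU pair
            case 2: return width * height;     // V: first byte of each VU pair
            default: throw new IllegalArgumentException("planeIndex: " + planeIndex);
        }
    }

    // Total NV21 size: full-res Y plane plus half-res interleaved VU plane.
    static int bufferSize(int width, int height) {
        return width * height * 3 / 2;
    }
}
```

This is why the question's bufferSize computation (width * height * 12 / 8) works: ImageFormat.getBitsPerPixel(YUV_420_888) is 12, which is the same 3/2 bytes per pixel.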

I get the exception at this line: final Rect crop = image.getCropRect();

Thanks to @PerracoLabs for giving me some hints on how to achieve this.

Note:

  • The handler delay is used to prevent a ResourceExhaustedException.
  • fragment is an instance of ArFragment.

Stacktrace

E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.base.lion, PID: 19514
java.lang.UnsupportedOperationException: Crop rect is unknown in this image.
    at com.google.ar.core.ArImage.getCropRect(ArImage.java:1)
    at com.base.lion.activity.ShareScreenActivity.imageToByteBuffer(ShareScreenActivity.java:1614)
    at com.base.lion.activity.ShareScreenActivity.access$100(ShareScreenActivity.java:151)
    at com.base.lion.activity.ShareScreenActivity$7$1.run(ShareScreenActivity.java:1240)
    at android.app.Activity.runOnUiThread(Activity.java:7184)
    at com.base.lion.activity.ShareScreenActivity$7.run(ShareScreenActivity.java:1235)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:236)
    at android.app.ActivityThread.main(ActivityThread.java:8019)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:600)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:967)
asked 26/10, 2021 at 7:11

Comments (5):

  • Can you post the exception stacktrace when it crashes? You will find it in the logcat traces. – Scalenus
  • @PerracoLabs I have added the stacktrace, please look into it. Thanks. – Afrikaans
  • It seems that the ArImage implementation doesn't support getCropRect at all, and will always throw UnsupportedOperationException. So the ArImage implementation is a bit different from the Image class. Try using your own Rect with the metrics {0, 0, ArImage.width, ArImage.height}. – Scalenus
  • @PerracoLabs Yes, I agree with you. But if the image is not an ArImage object, why does it throw the ArImage error? Let me try with my own rect metrics. – Afrikaans
  • I believe your call to acquireCameraImage() returns an ArImage object, and that class does not support getCropRect. It has the getCropRect method only because ArImage also implements the Image interface, but this doesn't mean it supports cropping; if you check the documentation or source code of ArImage, you will see that getCropRect always throws UnsupportedOperationException. So in your case you should use your own metrics. The getCropRect method is only meaningful for Image objects coming from the camera image sensor. – Scalenus
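Following the suggestion in the comments, the fix is to stop calling image.getCropRect() and build a full-frame rect from the image dimensions instead. Below is a plain-Java sketch of that substitution; the Rect class here is a minimal stand-in for android.graphics.Rect (so the idea can be shown without Android dependencies), and FullFrameCrop is a hypothetical name:

```java
// Sketch of the workaround: ArImage.getCropRect() always throws, so derive
// the crop rect from the image's own width and height.
public final class FullFrameCrop {

    // Minimal stand-in for android.graphics.Rect.
    static final class Rect {
        final int left, top, right, bottom;
        Rect(int left, int top, int right, int bottom) {
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
        }
        int width()  { return right - left; }
        int height() { return bottom - top; }
    }

    // Use this in place of image.getCropRect() at the top of imageToByteBuffer:
    // final Rect crop = fullFrame(image.getWidth(), image.getHeight());
    static Rect fullFrame(int imageWidth, int imageHeight) {
        return new Rect(0, 0, imageWidth, imageHeight);
    }
}
```

With a full-frame rect, crop.top and crop.left are 0, so the buffer.position(...) call in imageToByteBuffer degenerates to position 0 for every plane, which is exactly what an uncropped ARCore camera image needs.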

Use this:

import android.content.Context;
import android.content.res.Configuration;
import android.graphics.Bitmap;
import android.graphics.ImageFormat;
import android.graphics.Matrix;
import android.media.Image;

import androidx.renderscript.Allocation;
import androidx.renderscript.Element;
import androidx.renderscript.RenderScript;
import androidx.renderscript.ScriptIntrinsicYuvToRGB;
import androidx.renderscript.Type;

import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class ImageBitmapCreationHelper {

    private final Context context;

    // for converting yuv to argb
    private RenderScript renderScript;
    private ScriptIntrinsicYuvToRGB intrinsicYuvToRGB;
    private Type typeYUV;
    private YuvToByteArray yuvToByteArray;

    @Inject
    public ImageBitmapCreationHelper(Context context) {
        this.context = context;
    }

    @Inject
    public void createYuvToRgbConverter() {
        renderScript = RenderScript.create(context);
        yuvToByteArray = new YuvToByteArray();
        typeYUV = new Type.Builder(renderScript,
                Element.createPixel(renderScript, Element.DataType.UNSIGNED_8,
                        Element.DataKind.PIXEL_YUV))
                .setYuvFormat(ImageFormat.NV21)
                .create();
        intrinsicYuvToRGB = ScriptIntrinsicYuvToRGB.create(renderScript,
                Element.U8_4(renderScript));
    }

    public Bitmap createImageBitmap(Image image, boolean rotateIfPortrait) {
        byte[] yuvByteArray;
        int h = image.getHeight();
        int w = image.getWidth();
        Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        int pixelSizeBits = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888);
        yuvByteArray = new byte[(int) ((h * w) * pixelSizeBits / 8)];
        yuvToByteArray.setPixelCount(h * w);
        yuvToByteArray.imageToByteArray(image, yuvByteArray);

        Allocation inputAllocation = Allocation.createSized(
                renderScript,
                typeYUV.getElement(),
                yuvByteArray.length
        );
        Allocation outputAllocation = Allocation.createFromBitmap(renderScript, bitmap);
        inputAllocation.copyFrom(yuvByteArray);
        intrinsicYuvToRGB.setInput(inputAllocation);
        intrinsicYuvToRGB.forEach(outputAllocation);
        outputAllocation.copyTo(bitmap);
        if (rotateIfPortrait) {
            Bitmap newBitmap;
            int sensorOrientation = context.getResources()
                    .getConfiguration().orientation;
            if (sensorOrientation == Configuration.ORIENTATION_PORTRAIT) {
                Matrix matrix = new Matrix();
                matrix.postRotate(90);
                newBitmap = Bitmap.createBitmap(bitmap, 0, 0,
                        bitmap.getWidth(), bitmap.getHeight(), matrix, false);
            }
            else {
                newBitmap = bitmap;
            }
            return newBitmap;
        }
        return bitmap;
    }

    public boolean isImagePortrait() {
        return context.getResources().getConfiguration().orientation
                == Configuration.ORIENTATION_PORTRAIT;
    }
}

Then, get the bitmap in onUpdate:

    Frame frame = arSceneView.getArFrame();
    if (frame != null) {
        try {
            Image cameraImage = frame.acquireCameraImage();
            Bitmap bitmap = imageBitmapCreationHelper.createImageBitmap(
                cameraImage, true);
        } catch (NotYetAvailableException | NullPointerException
                | DeadlineExceededException | ResourceExhaustedException e) {
            e.printStackTrace();
        }
    }
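Note that the YuvToByteArray helper used by the class above is not shown in the answer. As a hedged, hypothetical model of what such a converter typically does (the Nv21Packer name and the flat-array signature are my own; real code would read each Image.Plane buffer and honor its row and pixel strides), the core operation is: copy the Y plane, then interleave V and U into the NV21 tail:

```java
// Hypothetical plain-Java model of a YUV_420_888 -> NV21 packer, mirroring
// what the answer's unshown YuvToByteArray helper presumably does.
public final class Nv21Packer {

    // yPlane: width*height bytes; uPlane/vPlane: (width/2)*(height/2) bytes each.
    static byte[] pack(byte[] yPlane, byte[] uPlane, byte[] vPlane) {
        byte[] out = new byte[yPlane.length + uPlane.length + vPlane.length];
        System.arraycopy(yPlane, 0, out, 0, yPlane.length); // Y plane first
        int o = yPlane.length;
        for (int i = 0; i < vPlane.length; i++) {
            out[o++] = vPlane[i]; // NV21 stores V first...
            out[o++] = uPlane[i]; // ...then U, per chroma sample
        }
        return out;
    }
}
```

The NV21 ordering matters because the helper class configures its YUV Type with setYuvFormat(ImageFormat.NV21); feeding it planar or UV-ordered data would produce swapped or corrupted colors.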
answered 31/12, 2021 at 20:12
