Crop rect is unknown in this image

Problem description

What I actually want is to get a Bitmap from the ARCore onUpdate method. I tried various solutions, but none of them worked. I then found another approach and tried to implement it, but it throws java.lang.UnsupportedOperationException: Crop rect is unknown in this image. Here is my code:

@Override
public void onUpdate(FrameTime frameTime) {
    try {
        com.google.ar.core.Camera camera = fragment.getArSceneView().getArFrame().getCamera();
        if (camera.getTrackingState() == TrackingState.TRACKING) {
            fragment.getPlaneDiscoveryController().hide();
            final Handler handler = new Handler(Looper.getMainLooper());
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                final Image image = fragment.getArSceneView().getArFrame().acquireCameraImage();
                                final ByteBuffer yuvBytes = imageToByteBuffer(image);

                                // Convert YUV to RGB
                                final RenderScript rs = RenderScript.create(ShareScreenActivity.this);
                                final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
                                final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

                                final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
                                allocationYuv.copyFrom(yuvBytes.array());

                                ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
                                scriptYuvToRgb.setInput(allocationYuv);
                                scriptYuvToRgb.forEach(allocationRgb);
                                allocationRgb.copyTo(bitmap);

                                // Release
                                bitmap.recycle();
                                allocationYuv.destroy();
                                allocationRgb.destroy();
                                rs.destroy();
                                image.close();
                            } catch (NotYetAvailableException e) {
                                e.printStackTrace();
                            }
                        }
                    });
                }
            }, 100);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

The imageToByteBuffer method:

private ByteBuffer imageToByteBuffer(final Image image) {
    final Rect crop = image.getCropRect();
    final int width = crop.width();
    final int height = crop.height();

    final Image.Plane[] planes = image.getPlanes();
    final byte[] rowData = new byte[planes[0].getRowStride()];
    final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++) {
        if (planeIndex == 0) {
            channelOffset = 0;
            outputStride = 1;
        } else if (planeIndex == 1) {
            channelOffset = width * height + 1;
            outputStride = 2;
        } else if (planeIndex == 2) {
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer = planes[planeIndex].getBuffer();
        final int rowStride = planes[planeIndex].getRowStride();
        final int pixelStride = planes[planeIndex].getPixelStride();

        final int shift = (planeIndex == 0) ? 0 : 1;
        final int widthShifted = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++) {
            final int length;

            if (pixelStride == 1 && outputStride == 1) {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            } else {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++) {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1) {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }
    return output;
}
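For reference (my note, not part of the original question): the loop above packs the three YUV_420_888 planes into a single NV21-ordered buffer, the full Y plane first, then V and U bytes interleaved. The offsets follow from 4:2:0 chroma subsampling; a small sketch of that arithmetic in plain Java (the method names are illustrative):

```java
// YUV_420_888 carries 12 bits per pixel: w*h luma bytes plus two
// quarter-resolution chroma planes, i.e. w*h*3/2 bytes in total.
static int yuv420BufferSize(int width, int height) {
    return width * height * 3 / 2;
}

// In NV21 the chroma bytes follow the Y plane interleaved as VUVU...,
// so the V plane starts at w*h and the U plane one byte later
// (matching the channelOffset values chosen in the loop above).
static int chromaOffset(int width, int height, boolean isUPlane) {
    return width * height + (isUPlane ? 1 : 0);
}
```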

I get the exception at this line: final Rect crop = image.getCropRect();
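Some context on why this happens (my note, not from the original post): ArCore's ArImage does not implement getCropRect() and throws unconditionally. Since ARCore camera frames are not cropped, a common workaround is to avoid getCropRect() entirely and treat the whole frame as the crop region, e.g. replacing the failing line with new Rect(0, 0, image.getWidth(), image.getHeight()). Sketched here with plain ints so the idea is visible (the method name is illustrative):

```java
// ArCore frames are uncropped, so the effective crop region is simply
// the full frame: left and top at 0, right and bottom at the image size.
static int[] fullFrameCrop(int imageWidth, int imageHeight) {
    return new int[] { 0, 0, imageWidth, imageHeight }; // left, top, right, bottom
}
```

With that substitution, crop.left and crop.top are both 0, so the buffer.position(...) call in imageToByteBuffer degenerates to position 0 and the rest of the method works from the full image dimensions.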

I would like to thank @PerracoLabs for giving me some hints on how to accomplish this.

Note:

Stack trace:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.base.lion, PID: 19514
java.lang.UnsupportedOperationException: Crop rect is unknown in this image.
    at com.google.ar.core.ArImage.getCropRect(ArImage.java:1)
    at com.base.lion.activity.ShareScreenActivity.imageToByteBuffer(ShareScreenActivity.java:1614)
    at com.base.lion.activity.ShareScreenActivity.access$100(ShareScreenActivity.java:151)
    at com.base.lion.activity.ShareScreenActivity$7$1.run(ShareScreenActivity.java:1240)
    at android.app.Activity.runOnUiThread(Activity.java:7184)
    at com.base.lion.activity.ShareScreenActivity$7.run(ShareScreenActivity.java:1235)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:236)
    at android.app.ActivityThread.main(ActivityThread.java:8019)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:600)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:967)

Tags: java, android, arcore, sceneform

Solution


Use this:

import android.content.Context;
import android.content.res.Configuration;
import android.graphics.Bitmap;
import android.graphics.ImageFormat;
import android.graphics.Matrix;
import android.media.Image;

import androidx.renderscript.Allocation;
import androidx.renderscript.Element;
import androidx.renderscript.RenderScript;
import androidx.renderscript.ScriptIntrinsicYuvToRGB;
import androidx.renderscript.Type;

import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class ImageBitmapCreationHelper {

    private final Context context;

    // for converting yuv to argb
    private RenderScript renderScript;
    private ScriptIntrinsicYuvToRGB intrinsicYuvToRGB;
    private Type typeYUV;
    private YuvToByteArray yuvToByteArray;

    @Inject
    public ImageBitmapCreationHelper(Context context) {
        this.context = context;
    }

    @Inject
    public void createYuvToRgbConverter() {
        renderScript = RenderScript.create(context);
        yuvToByteArray = new YuvToByteArray();
        typeYUV = new Type.Builder(renderScript,
                Element.createPixel(renderScript, Element.DataType.UNSIGNED_8,
                        Element.DataKind.PIXEL_YUV))
                .setYuvFormat(ImageFormat.NV21)
                .create();
        intrinsicYuvToRGB = ScriptIntrinsicYuvToRGB.create(renderScript,
                Element.U8_4(renderScript));
    }

    public Bitmap createImageBitmap(Image image, boolean rotateIfPortrait) {
        byte[] yuvByteArray;
        int h = image.getHeight();
        int w = image.getWidth();
        Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        int pixelSizeBits = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888);
        yuvByteArray = new byte[(int) ((h * w) * pixelSizeBits / 8)];
        yuvToByteArray.setPixelCount(h * w);
        yuvToByteArray.imageToByteArray(image, yuvByteArray);

        Allocation inputAllocation = Allocation.createSized(
                renderScript,
                typeYUV.getElement(),
                yuvByteArray.length
        );
        Allocation outputAllocation = Allocation.createFromBitmap(renderScript, bitmap);
        inputAllocation.copyFrom(yuvByteArray);
        intrinsicYuvToRGB.setInput(inputAllocation);
        intrinsicYuvToRGB.forEach(outputAllocation);
        outputAllocation.copyTo(bitmap);
        if (rotateIfPortrait) {
            Bitmap newBitmap;
            int sensorOrientation = context.getResources()
                    .getConfiguration().orientation;
            if (sensorOrientation == Configuration.ORIENTATION_PORTRAIT) {
                Matrix matrix = new Matrix();
                matrix.postRotate(90);
                newBitmap = Bitmap.createBitmap(bitmap, 0, 0,
                        bitmap.getWidth(), bitmap.getHeight(), matrix, false);
            }
            else {
                newBitmap = bitmap;
            }
            return newBitmap;
        }
        return bitmap;
    }

    public boolean isImagePortrait() {
        return context.getResources().getConfiguration().orientation
                == Configuration.ORIENTATION_PORTRAIT;
    }
}
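The answer references a YuvToByteArray helper that it never shows, so its exact implementation is unknown. Given the ImageFormat.NV21 type built above, its job is presumably to pack the Image planes into an NV21-ordered byte array (full Y plane, then interleaved V/U). A minimal sketch of that packing over raw, tightly-packed plane data; all names here are assumptions, and real Image plane buffers additionally need the row/pixel stride handling shown in the question's imageToByteBuffer:

```java
// Packs tightly-stored Y, U and V plane bytes into NV21 order:
// the Y plane first, then V and U interleaved (VUVU...).
static byte[] packNv21(byte[] y, byte[] u, byte[] v, int width, int height) {
    byte[] out = new byte[width * height * 3 / 2];
    System.arraycopy(y, 0, out, 0, width * height);
    int pos = width * height;
    for (int i = 0; i < u.length; i++) {
        out[pos++] = v[i]; // NV21 stores V before U
        out[pos++] = u[i];
    }
    return out;
}
```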

Then, in onUpdate, get the bitmap:

    Frame frame = arSceneView.getArFrame();
    if (frame != null) {
        try {
            Image cameraImage = frame.acquireCameraImage();
            Bitmap bitmap = imageBitmapCreationHelper.createImageBitmap(
                cameraImage, true);
            // Release the image back to ARCore once the bitmap has been
            // built; otherwise repeated acquireCameraImage() calls will
            // eventually fail with ResourceExhaustedException.
            cameraImage.close();
        } catch (NotYetAvailableException | NullPointerException
                | DeadlineExceededException | ResourceExhaustedException e) {
            e.printStackTrace();
        }
    }
