How do I crop face detection to just the bitmap portion and display the result?

Problem description

I am implementing the google-vision face tracker.

I have looked at some Stack Overflow implementations (here) for creating a face bitmap, and this is my result:

[Image: face detection and bitmap]

class MyFaceDetector extends Detector<Face> {

private Detector<Face> mDelegate;

MyFaceDetector(Detector<Face> delegate) {
    mDelegate = delegate;
}


public SparseArray<Face> detect(Frame frame) {

    YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21, frame.getMetadata().getWidth(), frame.getMetadata().getHeight(), null); // Create YUV image from byte[]
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, frame.getMetadata().getWidth(), frame.getMetadata().getHeight()), 100, byteArrayOutputStream);// Convert YUV image to Jpeg
    byte[] jpegArray = byteArrayOutputStream.toByteArray();
    Bitmap bmp = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length); // Convert Jpeg to Bitmap

    Frame outputbmp = new Frame.Builder().setBitmap(bmp).setRotation(Frame.ROTATION_270).build(); // Wrap the decoded bitmap in a new Frame for the delegate detector

    //part of image processing
    //...
    //part of image processing

        // Estimate the dominant frequency of each colour channel and convert it to cycles per minute
        double heartRateFrequency = Fft.FFT(arrayGreen, heartRateFrameLength, finalSamplingFrequency);
        double battitialminuto = (int) ceil(heartRateFrequency * 60);
        double heartRate1Frequency = Fft.FFT(arrayRed, heartRateFrameLength, finalSamplingFrequency);
        double breath1 = (int) ceil(heartRate1Frequency * 60);

        // Combine the two per-minute estimates into bufferAvgBr
        if (battitialminuto > 10 || battitialminuto < 24) {
            if (breath1 > 10 || breath1 < 24) {
                bufferAvgBr = (battitialminuto + breath1) / 2;
            } else {
                bufferAvgBr = battitialminuto;
            }
        } else if (breath1 > 10 || breath1 < 24) {
            bufferAvgBr = breath1;
        }

        Breath = (int) bufferAvgBr;
    } // closes a block opened in the elided processing code above

    else {
        // do nothing
    }

    return mDelegate.detect(outputbmp);
}

public boolean isOperational() {
    return mDelegate.isOperational();
}

public boolean setFocus(int id) {
    return mDelegate.setFocus(id);
}
}
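
The title asks about cropping the detection down to just the face portion of the bitmap; detect() above rebuilds a full-frame bitmap but never crops it. Below is a minimal sketch of such a crop, assuming a hypothetical helper named cropFaceFromBitmap and using the bounding box that the Vision API reports on each Face:

// Hypothetical helper (not part of the code above): cuts one detected face
// out of the full-frame bitmap using the bounding box reported by the Face.
private static Bitmap cropFaceFromBitmap(Bitmap source, Face face) {
    // getPosition() is the top-left corner of the face; clamp everything to the bitmap bounds
    int x = Math.max((int) face.getPosition().x, 0);
    int y = Math.max((int) face.getPosition().y, 0);
    int width = Math.min((int) face.getWidth(), source.getWidth() - x);
    int height = Math.min((int) face.getHeight(), source.getHeight() - y);
    return Bitmap.createBitmap(source, x, y, width, height);
}

For example, it could be applied to each entry of the SparseArray<Face> returned by mDelegate.detect(outputbmp) before the cropped bitmap is displayed.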

Here is the FaceTrackerActivity class:

private void createCameraSource() {
    Context context = getApplicationContext();
    detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setMode(FaceDetector.ACCURATE_MODE)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .build();
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    detector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!detector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    mCameraSource = new CameraSource.Builder(context, detector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
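
For reference, here is a minimal sketch of the wrapped-detector wiring used in the mobile-vision samples, in which the wrapping detector (myFaceDetector here) is the one given the processor and passed to the CameraSource; this is shown only as a sketch of that pattern, not as the code actually used above:

    // Sketch only (not the code used above): wiring in which the wrapping detector
    // receives the frames, so its detect() override runs for every camera frame.
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();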

And here is the part where I want to display the Breath value produced by the MyFaceDetector class:

@SuppressLint("ClickableViewAccessibility")
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.main);
    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay) findViewById(R.id.faceOverlay);
    info3 = (TextView) findViewById(R.id.info3);
    displayInfo = (Button) findViewById(R.id.display);
    displayInfo.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            info3.setText("result" + Breath);
        }
    });
}
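
One more point worth noting: Breath is assigned inside MyFaceDetector.detect(), which runs off the UI thread, while the button above only reads it when clicked. Below is a minimal sketch, assuming a hypothetical helper named showBreath in the activity and some way for MyFaceDetector to call it (for example a listener passed into its constructor, not shown here), of pushing the value to the TextView safely:

    // Hypothetical helper (not in the original code): updates info3 from any thread.
    private void showBreath(final int breath) {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                info3.setText("result" + breath);
            }
        });
    }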

I am a beginner and I do not understand why no result is shown. I am not sure whether I made a mistake in computing the bitmap image or in reading the Breath result, but the project compiles without errors, so I cannot find the problem. Can someone point me in the right direction?

Tags: java, android, image-processing, camera, android-camera

Solution

