AVMetadataObject.bounds to SwiftUI position?

Question

I have two views. The parent view holds a CameraFeed UIViewControllerRepresentable, which passes back the bounds of an AVMetadataFaceObject. I'm trying to draw an overlay where those bounds should be. I'm getting close, but my mapping isn't quite right. CameraFeed passes back a set of normalized (scalar) bounds, and I multiply them by the GeometryReader's size.

Credit: thanks to Hacking with Swift's InstaFilter and Hot Prospects examples, by the way. They were very helpful in getting this far.

I do know that I could draw a CALayer in the AV object, and that if I went that route I could use the more robust Vision framework, but I'm seeing whether I can make this approach work first. I also know there's a CIFaceFeature I could use as well. Additionally, I don't have a TrueDepth front camera to work with. I just want to see if I can hijack this seemingly simplest solution and make it work.

What am I missing about the scope of work and method for how AVMetadataObject does its reference-frame conversion? Thanks in advance. Full code is on GitHub.

struct CameraFeedView: View {
    @State var foundFace:CGRect?
    @State var geometryRect:CGRect = CGRect(x: 0, y: 0, width: 0, height: 0)
    //@State var foundFaceAdjusted:CGRect?
    //var testRect:CGRect = CGRect(
    //           x: 0.44112000039964916, 
    //           y: 0.1979580322805941, 
    //           width: 0.3337599992007017, 
    //           height: 0.5941303606941507)
    
    var body: some View {
        GeometryReader(content: { geometry in
            ZStack {
                CameraFeed(codeTypes: [.face], completion: handleCameraReturn)
                if (foundFace != nil) {
                    Rectangle()
                        .stroke()
                        .foregroundColor(.orange)
                        .frame(width: 100, height: 100, alignment: .topLeading)
                        .position(
                            x: geometry.size.width * foundFace!.origin.x, 
                            y: geometry.size.height * foundFace!.origin.y)
                        
                    FoundObject(frameRect: geometryRect, boundsRect: foundFace!)
                        .stroke()
                        .foregroundColor(.blue)
                }
            }
            .onAppear(perform: {
                let frame = geometry.frame(in: .global)
                geometryRect = CGRect(
                                origin: CGPoint(x: frame.minX, y: frame.minY), 
                                size: geometry.size
                               )
            })
        })
        
    }
    
    func handleCameraReturn(result: Result<CGRect, CameraFeed.CameraError>) {
        switch result {
        case .success(let bounds):
            print(bounds)
            foundFace = bounds
            //TODO: Add a timer
        case .failure(let error):
            print("Scanning failed: \(error)")
            foundFace = nil
        }
    }

}

struct FoundObject: Shape {
    func reMapBoundries(frameRect:CGRect, boundsRect:CGRect) -> CGRect {
        //Y bounded to width? Really?
        let newY = (frameRect.width * boundsRect.origin.x) + (1.0-frameRect.origin.x)
        //X bounded to height? Really?
        let newX = (frameRect.height * boundsRect.origin.y) + (1.0-frameRect.origin.y)
        let newWidth = 100//(frameRect.width * boundsRect.width)
        let newHeight = 100//(frameRect.height * boundsRect.height)
        let newRect = CGRect(
              origin: CGPoint(x: newX, y: newY), 
              size: CGSize(width: newWidth, height: newHeight))
        return newRect
    }
    
    let frameRect:CGRect
    let boundsRect:CGRect
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.addRect(reMapBoundries(frameRect: frameRect, boundsRect: boundsRect))
        return path
    }
}
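For what it's worth, the surprise the `reMapBoundries` comments flag ("Y bounded to width? Really?") is consistent with `AVMetadataObject.bounds` being a normalized rect expressed in the capture device's landscape-native coordinate space, so in a portrait UI the axes appear swapped. A minimal sketch of that axis swap as a pure function; the helper name and the exact flip direction are assumptions here (the front camera adds mirroring, and the real conversion also depends on video gravity):

```swift
import Foundation

// Hypothetical helper (not part of the original post): maps a normalized
// AVMetadataObject.bounds rect, expressed in the capture device's
// landscape-native coordinate space, into a portrait view's coordinates.
// Axes swap: metadata-x runs down the portrait screen, metadata-y across it.
func remapMetadataRect(_ normalized: CGRect, toPortraitViewOfSize size: CGSize) -> CGRect {
    let x = (1.0 - normalized.origin.y - normalized.size.height) * size.width
    let y = normalized.origin.x * size.height
    let width = normalized.size.height * size.width
    let height = normalized.size.width * size.height
    return CGRect(x: x, y: y, width: width, height: height)
}
```

Multiplying the swapped components by the GeometryReader's size (rather than adding `1.0 - origin` terms to an already-scaled value, as the code above does) keeps everything in one coordinate space.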

Tags: swift, swiftui

Solution
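One hedged pointer, not a captured accepted answer: AVFoundation can perform this conversion itself. `AVCaptureVideoPreviewLayer` exposes `layerRectConverted(fromMetadataOutputRect:)`, which takes the normalized metadata rect and returns it in the preview layer's coordinate space, accounting for orientation, mirroring, and video gravity. A sketch, assuming the representable has access to its preview layer (`previewLayer` and `faceObject` are assumed names):

```swift
// Inside the CameraFeed code that owns the AVCaptureVideoPreviewLayer
// (previewLayer and faceObject are assumed names), let AVFoundation
// convert the normalized metadata rect into layer coordinates:
let faceRectInLayer = previewLayer.layerRectConverted(
    fromMetadataOutputRect: faceObject.bounds)
```

Passing `faceRectInLayer` back through the completion handler would let the SwiftUI overlay position itself directly, without the manual GeometryReader math.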
