CIDetector face detection stops AVAssetWriter from recording audio

Problem description

I want to use Apple's CIDetector face detector on live video, and then record that video to a file with AVAssetWriter.

I thought I had it working, but the audio is unreliable. Sometimes it records correctly along with the video, sometimes it starts recording but then goes silent, sometimes it is out of sync with the video, and sometimes it does not record at all.

Using print statements I can see that the audio sample buffers are arriving. It must have something to do with the face detection, because when I comment that code out, recording works fine.

Here is my code:

// MARK: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    let writable = canWrite()
    if writable {
        print("Writable")
    }

    if writable,
        sessionAtSourceTime == nil {
        // Start Writing
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
        print("session started")
    }


    // processing on the images, not audio
    if output == videoDataOutput {
        connection.videoOrientation = .portrait
        if connection.isVideoMirroringSupported {
            connection.isVideoMirrored = true
        }

        // convert current frame to CIImage
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, pixelBuffer!, CMAttachmentMode(kCMAttachmentMode_ShouldPropagate)) as? [String: Any]
        let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments)

        // Detects faces based on your ciimage
        let features = faceDetector?.features(in: ciImage, options: [CIDetectorSmile : true,
                                                                     CIDetectorEyeBlink : true,
                                                                     ]).compactMap({ $0 as? CIFaceFeature })

        // Retrieve the frame of your buffer
        let desc = CMSampleBufferGetFormatDescription(sampleBuffer)
        let bufferFrame = CMVideoFormatDescriptionGetCleanAperture(desc!, false)

        // Draw face masks
        DispatchQueue.main.async { [weak self] in
            UIView.animate(withDuration: 0.2) {
                self?.drawFaceMasksFor(features: features!, bufferFrame: bufferFrame)
            }
        }
    }

    if writable,
        output == videoDataOutput,
        (videoWriterInput.isReadyForMoreMediaData) {
        // write video buffer
        videoWriterInput.append(sampleBuffer)
        print("video buffering")
    } else if writable,
        output == audioDataOutput,
        (audioWriterInput.isReadyForMoreMediaData) {
        // write audio buffer
        audioWriterInput?.append(sampleBuffer)
        print("audio buffering")
    }

}
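
For context, the outputs and writer used above are created roughly along these lines. This is a simplified sketch rather than my literal setup: the output settings, queue label, and detector options below are placeholders.

import AVFoundation
import CoreImage

// Properties on the same capture view controller as the delegate method above
let captureSession = AVCaptureSession()
let videoDataOutput = AVCaptureVideoDataOutput()
let audioDataOutput = AVCaptureAudioDataOutput()
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var audioWriterInput: AVAssetWriterInput!
var sessionAtSourceTime: CMTime?
var faceDetector: CIDetector?

func setupOutputsAndWriter(outputURL: URL) throws {
    // In this sketch both outputs share one serial delivery queue, so video and
    // audio callbacks run one after another on the same thread
    // (camera and microphone inputs are added to the session elsewhere)
    let bufferQueue = DispatchQueue(label: "sample.buffer.queue")
    videoDataOutput.setSampleBufferDelegate(self, queue: bufferQueue)
    audioDataOutput.setSampleBufferDelegate(self, queue: bufferQueue)
    captureSession.addOutput(videoDataOutput)
    captureSession.addOutput(audioDataOutput)

    // One CIDetector instance, reused for every frame
    faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow,
                                        CIDetectorTracking: true])

    // Asset writer with one video input and one audio input
    videoWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    videoWriterInput = AVAssetWriterInput(mediaType: .video,
                                          outputSettings: [AVVideoCodecKey: AVVideoCodecType.h264,
                                                           AVVideoWidthKey: 720,
                                                           AVVideoHeightKey: 1280])
    videoWriterInput.expectsMediaDataInRealTime = true
    audioWriterInput = AVAssetWriterInput(mediaType: .audio,
                                          outputSettings: [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                                           AVNumberOfChannelsKey: 1,
                                                           AVSampleRateKey: 44100])
    audioWriterInput.expectsMediaDataInRealTime = true
    videoWriter.add(videoWriterInput)
    videoWriter.add(audioWriterInput)
    videoWriter.startWriting()
}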

Tags: swift, avfoundation, swift4, avassetwriter, cidetector

Solution
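
One plausible explanation, offered as an assumption based on the code above rather than a confirmed diagnosis: the CIDetector call and the related CIImage work run synchronously inside captureOutput, so if both data outputs deliver their buffers to the same serial queue, every audio buffer has to wait for the face detection on the previous video frame to finish. Audio buffers that cannot be delivered in time can be dropped, which matches the symptoms of audio that goes silent, drifts out of sync, or never starts. Below is a sketch of one way to restructure the callback so the writer is fed before any expensive work; it reuses the property names from the question and is not a drop-in, tested fix.

// A reordered version of the delegate method above (same class, same properties)
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    let writable = canWrite()

    if writable, sessionAtSourceTime == nil {
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
    }

    // 1. Feed the writer first, so the buffer is appended with as little delay as possible
    if writable, output == videoDataOutput, videoWriterInput.isReadyForMoreMediaData {
        videoWriterInput.append(sampleBuffer)
    } else if writable, output == audioDataOutput, audioWriterInput.isReadyForMoreMediaData {
        audioWriterInput.append(sampleBuffer)
    }

    // 2. Run the detector only for video buffers, after the writer has been fed.
    //    This still occupies the current queue, so see the note on queues below.
    if output == videoDataOutput,
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let desc = CMSampleBufferGetFormatDescription(sampleBuffer) {
        let ciImage = CIImage(cvImageBuffer: pixelBuffer)
        let features = faceDetector?.features(in: ciImage,
                                              options: [CIDetectorSmile: true,
                                                        CIDetectorEyeBlink: true])
            .compactMap { $0 as? CIFaceFeature } ?? []
        let bufferFrame = CMVideoFormatDescriptionGetCleanAperture(desc, false)

        DispatchQueue.main.async { [weak self] in
            UIView.animate(withDuration: 0.2) {
                self?.drawFaceMasksFor(features: features, bufferFrame: bufferFrame)
            }
        }
    }
}

Two further directions worth trying: give audioDataOutput its own delegate queue, separate from the video queue, so a slow video callback can never delay audio delivery (guard sessionAtSourceTime against concurrent access if you do), and throttle the detector, for example running it only on every few frames, since the video being appended does not depend on the detection result.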

