Removing the first black/blank frame from an AVAssetWriter recording

Problem

I have an AVAssetWriter recording video with filters applied, which is then played back via an AVQueuePlayer.

My problem is that, on playback, the recorded video shows a black/blank screen for the first frame. My understanding is that this happens because the writer captures audio before it captures the first actual video frame.

To try to work around this, I added a boolean check when appending to the audio writer input, gating it on whether the first video frame has been appended to the adapter. Even so, despite printing out the timestamps (which show video coming before audio), I still see a black frame on playback. I also tried starting the writing session only when the output == the video output, but ended up with the same result.

Any guidance or other workarounds would be greatly appreciated.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

    if output == _videoOutput {
        if connection.isVideoOrientationSupported { connection.videoOrientation = .portrait }

        guard let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvImageBuffer: cvImageBuffer)

        guard let filteredCIImage = applyFilters(inputImage: ciImage) else { return }
        self.ciImage = filteredCIImage

        guard let cvPixelBuffer = getCVPixelBuffer(from: filteredCIImage) else { return }
        self.cvPixelBuffer = cvPixelBuffer

        self.ciContext.render(filteredCIImage, to: cvPixelBuffer, bounds: filteredCIImage.extent, colorSpace: CGColorSpaceCreateDeviceRGB())

        metalView.draw()
    }

    switch _captureState {
    case .start:

        guard let outputUrl = tempURL else { return }

        let writer = try! AVAssetWriter(outputURL: outputUrl, fileType: .mp4)

        let videoSettings = _videoOutput!.recommendedVideoSettingsForAssetWriter(writingTo: .mp4)
        let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        videoInput.mediaTimeScale = CMTimeScale(bitPattern: 600)
        videoInput.expectsMediaDataInRealTime = true

        let pixelBufferAttributes = [
            kCVPixelBufferCGImageCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferCGBitmapContextCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferPixelFormatTypeKey: NSNumber(value: Int32(kCVPixelFormatType_32ARGB))
        ] as [String: Any]

        let adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: pixelBufferAttributes)
        if writer.canAdd(videoInput) { writer.add(videoInput) }

        let audioSettings = _audioOutput!.recommendedAudioSettingsForAssetWriter(writingTo: .mp4) as? [String: Any]
        let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        audioInput.expectsMediaDataInRealTime = true
        if writer.canAdd(audioInput) { writer.add(audioInput) }

        _filename = outputUrl.absoluteString
        _assetWriter = writer
        _assetWriterVideoInput = videoInput
        _assetWriterAudioInput = audioInput
        _adapter = adapter
        _captureState = .capturing
        _time = timestamp

        writer.startWriting()
        writer.startSession(atSourceTime: CMTime(seconds: timestamp, preferredTimescale: CMTimeScale(600)))

    case .capturing:

        if output == _videoOutput {
            if _assetWriterVideoInput?.isReadyForMoreMediaData == true {
                let time = CMTime(seconds: timestamp, preferredTimescale: CMTimeScale(600))
                _adapter?.append(self.cvPixelBuffer, withPresentationTime: time)

                if !hasWrittenFirstVideoFrame { hasWrittenFirstVideoFrame = true }
            }
        } else if output == _audioOutput {
            if _assetWriterAudioInput?.isReadyForMoreMediaData == true, hasWrittenFirstVideoFrame {
                _assetWriterAudioInput?.append(sampleBuffer)
            }
        }

    case .end:

        guard _assetWriterVideoInput?.isReadyForMoreMediaData == true, _assetWriter!.status != .failed else { break }

        _assetWriterVideoInput?.markAsFinished()
        _assetWriterAudioInput?.markAsFinished()
        _assetWriter?.finishWriting { [weak self] in

            guard let output = self?._assetWriter?.outputURL else { return }

            self?._captureState = .idle
            self?._assetWriter = nil
            self?._assetWriterVideoInput = nil
            self?._assetWriterAudioInput = nil

            self?.previewRecordedVideo(with: output)
        }

    default:
        break
    }
}

Tags: ios, swift, avfoundation, avassetwriter

Solution


It's true that in the .capturing state you ensure the first sample buffer written is a video sample buffer, by discarding the audio sample buffers that precede it. However, you still allow an audio sample buffer's presentation timestamp to start the timeline via writer.startSession(atSourceTime:). This means your video starts with nothing, so not only do you briefly hear nothing (which is hard to notice), you also see nothing, and your video player happens to represent that nothing as a black frame.

Seen from this angle, there is no black frame to remove, only a gap to fill. You can fill that gap by starting the session from the first video timestamp.

This can be achieved by guarding against non-video sample buffers in the .start state, or by moving writer.startSession(atSourceTime:) inside the if !hasWrittenFirstVideoFrame {} check, which I guess is less clean.
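The first approach could look like this: a minimal sketch, assuming the same _captureState, _videoOutput, and writer-setup code from the question, with only the relevant lines of the .start case shown.

```swift
case .start:
    // Ignore audio buffers until a video buffer arrives, so the
    // session's source time is always a video presentation timestamp.
    guard output == _videoOutput else { break }

    // ... create the writer, inputs and pixel buffer adapter as before ...

    _captureState = .capturing
    writer.startWriting()
    // Start the timeline at the first *video* frame's timestamp,
    // taken directly from the sample buffer as a CMTime.
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
```

Because the session now begins exactly at the first appended video frame, there is no leading gap for the player to render as black.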

p.s. why do you convert back and forth between CMTime and seconds? Why not just stick with CMTime?
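For example, the .capturing case could append using the buffer's own timestamp directly, avoiding the lossy round-trip through Double seconds (a sketch reusing the _adapter and cvPixelBuffer properties from the question):

```swift
// Keep the native CMTime instead of converting to seconds and back:
let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
_adapter?.append(self.cvPixelBuffer, withPresentationTime: time)
```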

