AVAssetWriterInput appendSampleBuffer succeeds, but CMSampleBufferGetSampleSize logs the error kCMSampleBufferError_BufferHasNoSampleSizes

Problem description

Starting with the iOS 12.4 betas, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:

CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.12/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153

We did not see this error on earlier releases, nor on the iOS 13 betas. Has anyone else run into this, and can anyone offer information to help us resolve it?

More details

Our app records video and audio using two AVAssetWriterInput objects: one video input, to which we append pixel buffers, and one audio input, to which we append audio buffers created with CMSampleBufferCreate. (See the code below.)

Because our audio data is non-interleaved, we convert it to interleaved format after creating the buffer, and then pass it to appendSampleBuffer.
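The non-interleaved-to-interleaved step can be sketched in plain C (without the vDSP calls we actually use) — a minimal illustration with hypothetical buffer and function names, assuming stereo float32 data:

```c
#include <stddef.h>

/* Interleave two mono (non-interleaved) channel buffers into one
   stereo buffer laid out as L R L R ... */
static void interleave_stereo(const float *left, const float *right,
                              float *out, size_t numFrames)
{
    for (size_t i = 0; i < numFrames; i++) {
        out[2 * i]     = left[i];   /* channel L lands on even indices */
        out[2 * i + 1] = right[i];  /* channel R lands on odd indices  */
    }
}
```

The vDSP version in our real code does the same thing: a stride-2 copy into the even slots for L and the odd slots for R.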

Relevant code

// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
        CMTimeMake(1, _asbdFormat.mSampleRate),
        currentAudioTime,
        kCMTimeInvalid };


OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                           NULL,
                                           false,
                                           NULL,
                                           NULL,
                                           _cmFormat,
                                           (CMItemCount)(*inNumberFrames),
                                           1,
                                           &timing,
                                           0,
                                           NULL,
                                           &buff);

// checking for error... (none returned)

// Converting from non-interleaved to interleaved.
    float zero = 0.f;
    vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
    // Channel L
    vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
    // Channel R (reads from the second non-interleaved source buffer)
    vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames);

    _interleavedABL.mBuffers[0].mDataByteSize =  _interleavedASBD.mBytesPerFrame * numFrames;
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            &_interleavedABL);

// checking for error... (none returned)

if (_assetWriterAudioInput.readyForMoreMediaData) {

    BOOL success = [_assetWriterAudioInput appendSampleBuffer:buff];  // THIS PRODUCES THE ERROR.

// success is returned true, but the above specified error is logged - on iOS 12.4 betas (not on 12.3 or before)
}

Prior to that, this is how _assetWriterAudioInput is created:

-(BOOL) initializeAudioWriting
{
    BOOL success = YES;

    NSDictionary *audioCompressionSettings = // settings dictionary, see below.

    if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
        _assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;

        if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
            [_assetWriter addInput:_assetWriterAudioInput];
        }
        else {
            // return error
        }
    }
    else {
        // return error
    }

    return success;
}

The audioCompressionSettings dictionary is defined as:

+ (NSDictionary*)audioSettingsForRecording
{
    AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
    double preferredHardwareSampleRate;

    if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
    {
        preferredHardwareSampleRate = [sharedAudioSession sampleRate];
    }
    else
    {
        preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
    }

    AudioChannelLayout acl;
    bzero( &acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;


    return @{
         AVFormatIDKey: @(kAudioFormatMPEG4AAC),
         AVNumberOfChannelsKey: @2,
         AVSampleRateKey: @(preferredHardwareSampleRate),
         AVChannelLayoutKey: [ NSData dataWithBytes: &acl length: sizeof( acl ) ],
         AVEncoderBitRateKey: @160000
         };
}

The appendSampleBuffer call logs the following error and call stack (relevant portion):

/BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.6/Sources/Core/FigSampleBuffer/FigSampleBuffer.c: 4153

0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]

1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260] ...

Any help would be greatly appreciated.

Edit: adding the following information. We pass 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate - which, according to the documentation, is what we must pass when creating a buffer of non-interleaved data (although this documentation is a bit confusing to me).

We tried passing 1 and a pointer to a size_t variable, e.g.:

size_t sampleSize = 4;

but it did not help; instead it logged this error:

figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)

Nor is it clear to us what value should go there (i.e. how to know the size of each sample), or whether this is even the right fix.
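For what it's worth, the -12731 "data size mismatch" reads like a simple consistency check: the attached block buffer must contain exactly numSamples × sampleSize bytes. That is an assumption about CoreMedia's internal behavior, not documented API; a hypothetical sketch of such a check, showing why sampleSize = 4 cannot match interleaved stereo float32 data (8 bytes per frame):

```c
#include <stddef.h>

/* Hypothetical version of the size check CoreMedia appears to apply:
   the attached data must be exactly numSamples * sampleSize bytes. */
static int sample_size_matches(size_t dataByteSize, size_t numSamples,
                               size_t sampleSize)
{
    return dataByteSize == numSamples * sampleSize;
}
```

With 1024 frames of interleaved stereo float32, the buffer holds 1024 × 8 bytes, so a declared sampleSize of 4 fails the check while 8 passes.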

Tags: ios, objective-c, audio-recording, avassetwriter, avasset

Solution


I think we have the answer:

Passing the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate as follows seems to fix it (still requires full verification).

As far as I understand, the reason is that we end up appending an interleaved buffer, and such a buffer needs to carry sample sizes (at least as of 12.4).

// _asbdFormat is the AudioStreamBasicDescription.
size_t sampleSize = _asbdFormat.mBytesPerFrame;
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                           NULL,
                                           false,
                                           NULL,
                                           NULL,
                                           _cmFormat,
                                           (CMItemCount)(*inNumberFrames),
                                           1,
                                           &timing,
                                           1,
                                           &sampleSize,
                                           &buff);
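Why mBytesPerFrame is the right value: for interleaved linear PCM, one sample in CMSampleBuffer terms is one frame, i.e. all channels side by side. The standard Core Audio relationship between the AudioStreamBasicDescription fields (sketched below with a hypothetical helper name) gives 8 bytes for interleaved stereo float32 - which would also explain why our earlier guess of 4, the per-channel size, failed the size check:

```c
#include <stdint.h>

/* For interleaved linear PCM, bytes per frame is the channel count
   times the byte width of one channel. For non-interleaved LPCM each
   AudioBuffer holds a single channel, so a frame covers one channel. */
static uint32_t lpcm_bytes_per_frame(uint32_t channelsPerFrame,
                                     uint32_t bitsPerChannel,
                                     int isInterleaved)
{
    uint32_t bytesPerChannel = bitsPerChannel / 8;
    return isInterleaved ? channelsPerFrame * bytesPerChannel
                         : bytesPerChannel;
}
```

For stereo float32 this returns 8 when interleaved and 4 when non-interleaved, matching the _asbdFormat.mBytesPerFrame value we now pass as sampleSize.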
