Connecting AVAudioSourceNode to AVAudioSinkNode does not work

Question

Context

I'm writing a signal interpreter using AVAudioEngine that will analyze microphone input. During development I'd like to use a default input buffer so I don't have to make noise into the microphone to test my changes. I'm developing with Catalyst.

Problem

I'm using AVAudioSinkNode to obtain the audio buffers (its performance is supposedly better than using .installTap). I'm using a (subclassed) AVAudioSourceNode to generate a sine wave. When I connect the two together, I expect the sink node's callback to be invoked, but it never is. The source node's render block is not called either.

let engine = AVAudioEngine()

let output = engine.outputNode
let outputFormat = output.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)

let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: sampleRate
)

let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    print("[SINK] + \(frameCount) \(Date().timeIntervalSince1970)")
    return noErr
}

engine.attach(sineNode440)
engine.attach(sink)
engine.connect(sineNode440, to: sink, format: nil)

try engine.start()

Additional tests

If I connect engine.inputNode to the sink (i.e. engine.connect(engine.inputNode, to: sink, format: nil)), the sink callback is called as expected.

When I connect sineNode440 to engine.outputNode, I can hear the sound and the render block is called as expected. So the source and the sink each work individually when connected to the device input/output, but not when connected to each other.

AVSineWaveSourceNode

Not essential to the problem, but related: AVSineWaveSourceNode is based on Apple sample code. This node produces the correct sound when connected to engine.outputNode.

class AVSineWaveSourceNode: AVAudioSourceNode {

    /// We need this separate class to be able to inject the state in the render block.
    class State {
        let amplitude: Float
        let phaseIncrement: Float
        var phase: Float = 0

        init(frequency: Float, amplitude: Float, sampleRate: Float) {
            self.amplitude = amplitude
            phaseIncrement = (2 * .pi / sampleRate) * frequency
        }
    }

    let state: State

    init(frequency: Float, amplitude: Float, sampleRate: Float) {
        let state = State(
            frequency: frequency,
            amplitude: amplitude,
            sampleRate: sampleRate
        )
        self.state = state

        let format = AVAudioFormat(standardFormatWithSampleRate: Double(sampleRate), channels: 1)!

        super.init(format: format, renderBlock: { isSilence, _, frameCount, audioBufferList -> OSStatus in
            print("[SINE GENERATION \(frequency) - \(frameCount)]")
            let tau = 2 * Float.pi
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                // Get signal value for this frame at time.
                let value = sin(state.phase) * amplitude
                // Advance the phase for the next frame.
                state.phase += state.phaseIncrement
                if state.phase >= tau {
                    state.phase -= tau
                }
                if state.phase < 0.0 {
                    state.phase += tau
                }
                // Set the same value on all channels (due to the inputFormat we have only 1 channel though).
                for buffer in ablPointer {
                    let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                    buf[frame] = value
                }
            }

            return noErr
        })

        for i in 0..<self.numberOfInputs {
            print("[SINEWAVE \(frequency)] BUS \(i) input format: \(self.inputFormat(forBus: i))")
        }

        for i in 0..<self.numberOfOutputs {
            print("[SINEWAVE \(frequency)] BUS \(i) output format: \(self.outputFormat(forBus: i))")
        }
    }
}

Tags: swift, avfoundation, core-audio

Solution


The outputNode drives the audio processing graph when AVAudioEngine is configured normally ("online"): outputNode pulls audio from its input node, which in turn pulls audio from its own input node(s), and so on. When you connect sineNode and sink to each other without any connection to outputNode, nothing is attached to an input bus of outputNode, so when the hardware asks outputNode for audio there is nowhere to get it from, and neither render callback ever fires.

If I understand correctly, I think you can accomplish what you want by getting rid of sink, connecting sineNode to outputNode, and running AVAudioEngine in manual rendering mode. In manual rendering mode you receive the audio into a buffer you supply (similar to AVAudioSinkNode) and drive the graph manually by calling renderOffline(_:to:).
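A minimal sketch of that setup, reusing the question's AVSineWaveSourceNode. The mono 44.1 kHz format and the 4096-frame render quantum are assumptions chosen for illustration, not values from the original post:

```swift
import AVFoundation

let engine = AVAudioEngine()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!

let sineNode440 = AVSineWaveSourceNode(frequency: 440, amplitude: 1, sampleRate: 44_100)
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: format)

do {
    // Switch the engine offline *before* starting it. The hardware is no
    // longer involved, so renderOffline(_:to:) is what pulls audio through
    // the graph instead of the output device.
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    // Each call pulls one buffer's worth of audio through the graph,
    // invoking the source node's render block, and fills `buffer`.
    let status = try engine.renderOffline(4096, to: buffer)
    if status == .success {
        // Inspect buffer.floatChannelData here, much as the sink callback
        // would have done with its audioBufferList.
        print("Rendered \(buffer.frameLength) frames")
    }
} catch {
    print("Engine error: \(error)")
}
```

Calling renderOffline in a loop gives you a steady stream of buffers to feed your analysis code, with no microphone (and no audible output) involved.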

