iOS Speech framework: EARErrorDomain Code=0

Problem description

Xcode 12.3
iOS 14.3
iPad Mini (5th generation)

Context:
In our app, speech is the primary input for a user-facing navigation feature. Each time the user needs to provide input, the app calls start(), which creates a new SFSpeechAudioBufferRecognitionRequest instance and uses it to instantiate a recognitionTask. Once the speech input has been recognized, stop() is called, which in turn calls either recognitionTask.cancel or recognitionTask.finish (see below).

func start(resultHandler: @escaping ResultHandler) throws {
    switch self.state {
    case .stopping:
        throw SpeechSessionError.notReadyToStart
    case .started:
        throw SpeechSessionError.invalidState
    case .unconfigured, .stopped:
        break
    }
    self.resultHandler = resultHandler

    self.sawBestTranscription = false
    self.mostRecentlyProcessedSegmentDuration = 0
    let request = SFSpeechAudioBufferRecognitionRequest()
    if recognizer.supportsOnDeviceRecognition {
        print("SpeechSession: Using on-device recognition")
        request.requiresOnDeviceRecognition = true
    } else {
        print("SpeechSession: Using remote recognition")
        // Leave requiresOnDeviceRecognition at its default (false)
    }
    self.request = request

    if self.state == .unconfigured || self.state == .stopped {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: .interruptSpokenAudioAndMixWithOthers)
        try audioSession.setActive(true, options: [.notifyOthersOnDeactivation])

        let node = self.audioEngine.inputNode
        let recordingFormat = node.outputFormat(forBus: 0)
        node.removeTap(onBus: 0)
        node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (audioPCMBuffer, _) in
            self?.request?.append(audioPCMBuffer)
        }
        self.state = .stopped
    }

    print("SpeechSession start()")
    try self.audioEngine.start()
    let task = self.recognizer.recognitionTask(with: request, delegate: self.recognizerDelegate)
    self.task = task
    self.state = .started
}
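
For context, a call site driving one start()/stop() cycle per user input might look like the sketch below. This is an assumption for illustration only: the question does not show the call site, and the names `session`, `promptForVoiceInput`, and `handle`, as well as the shape of `ResultHandler`, are hypothetical.

```swift
// Hypothetical call site (not from the original post). Assumes the enclosing
// class is named SpeechSession and ResultHandler delivers a transcription string.
let session = SpeechSession()

func promptForVoiceInput() {
    do {
        try session.start { transcription in
            // Handle the recognized text, then tear the session down.
            handle(transcription)
            try? session.stop(continueDeliveringTranscriptions: false)
        }
    } catch {
        print("Could not start speech session: \(error)")
    }
}
```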

func stop(continueDeliveringTranscriptions: Bool) throws {
    guard self.state == .started else { throw SpeechSessionError.invalidState }
    print("SpeechSession stop()")
    self.state = .stopping(continueDeliveringTranscriptions: continueDeliveringTranscriptions)
    
    self.audioEngine.stop()
    self.request?.endAudio()
    if continueDeliveringTranscriptions {
        self.task?.finish()
    } else {
        self.task?.cancel()
        self.state = .stopped
    }
}
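
The two functions above reference a state machine and an error type that are not shown in the question. A minimal reconstruction consistent with how they are used (this is a sketch inferred from the snippets, not the poster's actual code) would be:

```swift
// Reconstructed from usage in start()/stop(); the real definitions may differ.
enum SpeechSessionError: Error {
    case notReadyToStart   // start() called while the session is still stopping
    case invalidState      // any other out-of-order call
}

// Equatable is required because start() compares state with ==.
enum State: Equatable {
    case unconfigured                                      // before first start()
    case started                                           // audio engine running
    case stopping(continueDeliveringTranscriptions: Bool)  // finish() in flight
    case stopped                                           // tap installed, engine idle
}
```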

The problem:
The app works correctly at first. After roughly 30 minutes, a bug appears: after start() is called and speech input is provided, instead of transcribing the input it fires the didFinish handler with the error Error Domain=EARErrorDomain Code=0 "Quasar executor C++ exception: 0x2d102dc28: Could not vm_allocate 4194304 of N5kaldi6quasar9TokenHeap11ForwardLinkE: 3

This error is not documented anywhere, and searching for it returns no relevant results. Does anyone know where this error comes from and how to resolve it?

Tags: ios, swift, speech-recognition
