Speech to text directly from an audio data stream without saving to a (.wav) file

Problem description

I am using the Pyaudio and Speech_recognition modules to convert audio to text. My question is: can I convert audio recorded with pyaudio to text without creating an audio.wav file? The speech recognition module apparently works with audio files, since I open them using with sr.AudioFile(chunk_name) as source:

Recording code:

import threading
import wave

import pyaudio
import speech_recognition as sr


def start_recording(duration, chunk_th):
    '''
    This function takes the duration in seconds and the chunk threshold, then records sound from the microphone
    and displays the text in real time after saving the recordings into the folder using multithreading.
    '''
    audio_format = pyaudio.paInt16
    rate = 44100

    p = pyaudio.PyAudio()
    r = sr.Recognizer()

    stream = p.open(format=audio_format,
                    channels=1,
                    rate=rate,
                    input=True,
                    frames_per_buffer=1024)

    print("*** Recording Started ***")

    frames = []

    chunk_no = 1
    for i in range(1, int(rate / 1024 * duration)+1):
        data = stream.read(1024)
        frames.append(data)
        if i % chunk_th == 0 or i == int(rate / 1024 * duration):
            # start a thread with frames
            fm_copy = frames.copy()
            t = threading.Thread(target=save_chunk,args=(fm_copy,chunk_no,rate,p,r))
            t.start()
            chunk_no += 1
            frames.clear()
#     print(len(frames))
    print("* done recording")

    stream.stop_stream()
    stream.close()
    p.terminate()

This is the code used for recording. I pass the frames into my thread at intervals so that I can keep recording and process the audio in real time.
The function I call in the thread:

def save_chunk(fr, chunk_no, rate, p, r):
    '''
    This function takes the frames for each chunk, saves the file in the folder,
    then extracts the spoken text from the file and displays it.
    Question: Can I extract the audio from the frames directly, using the same methodology,
    without having to create audio files in the folder?
    '''
#     print("this has received {} frames".format(len(fr)))
    try:
         
        chunk_name = "Chunks/chunk_{}.wav".format(chunk_no)
        wf = wave.open(chunk_name, 'wb')
        wf.setnchannels(1)
        wf.setsampwidth(p.get_sample_size(pyaudio.paInt16))
        wf.setframerate(rate)
        wf.writeframes(b''.join(fr))
        wf.close()
        
        
        with sr.AudioFile(chunk_name) as source:
            audio = r.listen(source)
            text = r.recognize_google(audio)
            print(text)
            
    except Exception as e:
        print("No Audio Detected",e)

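For completeness, here is a minimal driver sketch showing how these two functions might be invoked together; the duration and chunk_th values are illustrative assumptions, not taken from the question, and the imports shown above the recording code are assumed to be present.

if __name__ == "__main__":
    # Illustrative values: record for 10 seconds and hand the collected
    # frames to a worker thread every 200 buffers
    # (200 * 1024 / 44100 ≈ 4.6 seconds of audio per chunk).
    start_recording(duration=10, chunk_th=200)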
Tags: audio, speech-recognition, speech-to-text, pyaudio, google-speech-api

Solution
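One possible approach (a minimal sketch, not an original accepted answer) is to skip the intermediate .wav file entirely and wrap the raw PCM bytes in an sr.AudioData object, which recognize_google() accepts directly. The function name recognize_chunk below is hypothetical; it takes the same arguments as save_chunk so it can be dropped into the existing threading.Thread(target=..., args=(fm_copy, chunk_no, rate, p, r)) call.

def recognize_chunk(fr, chunk_no, rate, p, r):
    '''
    Hypothetical replacement for save_chunk: converts the raw frames to text
    directly by wrapping them in sr.AudioData, so no .wav file is written.
    chunk_no is unused here but kept so the existing Thread(args=...) call still matches.
    '''
    try:
        # AudioData needs the raw bytes, the sample rate, and the sample
        # width in bytes (2 for paInt16 samples).
        sample_width = p.get_sample_size(pyaudio.paInt16)
        audio = sr.AudioData(b''.join(fr), rate, sample_width)
        text = r.recognize_google(audio)
        print(text)
    except Exception as e:
        print("No Audio Detected", e)

With this in place, start_recording would pass recognize_chunk as the thread target instead of save_chunk; everything else in the recording loop stays the same.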

