My speaker identification neural network is not working properly

Problem Description

I have a final project in my undergraduate degree: I want to build a neural network that takes the first 13 MFCC coefficients of a WAV file and returns who, out of a group of speakers, is talking in the audio file.

I would like you to note:

  1. My audio files are text-independent, so they have different lengths and different words
  2. I have trained the model with about 35 audio files from 10 speakers (roughly 15 for the first speaker, 10 for the second, and about 5 each for the third and fourth)

I define:

X=mfcc(sound_voice)

Y = zero_array + 1 in the i-th position (where the i-th position is 0 for the first speaker, 1 for the second, 2 for the third, ...)

Then I train the model, and then check its output on some held-out files...

So that's what I did... but unfortunately the results look completely random...

Can you help me understand why?

Here is my code in Python:

from sklearn.neural_network import MLPClassifier
import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
from os import listdir
from os.path import isfile, join
from random import shuffle
import matplotlib.pyplot as plt
from tqdm import tqdm

winner = []  # this array counts how many correct predictions we get when testing the NN
for TestNum in tqdm(range(5)):  # in every round we build a NN from X, Y and hold out 50 samples to test it after it is built
    X = []
    Y = []
    onlyfiles = [f for f in listdir("FinalAudios/") if isfile(join("FinalAudios/", f))]   # Files in dir
    names = []  # names of the speakers
    for file in onlyfiles:  # for each wav sound
        # extract the speaker name from the file name
        if " " not in file.split("_")[0]:
            names.append(file.split("_")[0])
        else:
            names.append(file.split("_")[0].split(" ")[0])
    names = list(dict.fromkeys(names))  # names of speakers
    vector_names = []  # vector for each name
    i = 0
    vector_for_each_name = [0] * len(names)
    for name in names:
        vector_for_each_name[i] += 1
        vector_names.append(np.array(vector_for_each_name))
        vector_for_each_name[i] -= 1
        i += 1
    for f in onlyfiles:
        if " " not in f.split("_")[0]:
            f_speaker = f.split("_")[0]
        else:
            f_speaker = f.split("_")[0].split(" ")[0]
        (rate, sig) = wav.read("FinalAudios/" + f)  # read the file
        try:
            mfcc_feat = python_speech_features.mfcc(sig, rate, winlen=0.2, nfft=512)  # mfcc coeffs
            for index in range(len(mfcc_feat)):  # adding each MFCC frame to X, meaning if there are 50000 frames then
                # X will be [first frame, second, ..., 50000th frame] and Y will be [f_speaker_vector] * 50000
                X.append(np.array(mfcc_feat[index]))
                Y.append(np.array(vector_names[names.index(f_speaker)]))
        except IndexError:
            pass
    Z = list(zip(X, Y))

    shuffle(Z)  # WE SHUFFLE X, Y SO THE TRAIN/TEST SPLIT IS RANDOM

    X, Y = zip(*Z)
    X = list(X)
    Y = list(Y)
    X = np.asarray(X)
    Y = np.asarray(Y)

    Y_test = Y[:50]  # CHOOSE 50 FOR TEST, OTHERS FOR TRAIN
    X_test = X[:50]
    X = X[50:]
    Y = Y[50:]

    clf = MLPClassifier(solver='lbfgs', alpha=1e-2, hidden_layer_sizes=(5, 3), random_state=2)  # create the NN
    clf.fit(X, Y)  # Train it

    for sample in range(len(X_test)):  # append 1 to winner if the prediction is correct and 0 if not; at the end we plot the running accuracy
        if list(clf.predict([X_test[sample]])[0]) == list(Y_test[sample]):
            winner.append(1)
        else:
            winner.append(0)

# plot winner
plot_x = []
plot_y = []
for i in range(1, len(winner)):
    plot_y.append(sum(winner[0:i])*1.0/len(winner[0:i]))
    plot_x.append(i)
plt.plot(plot_x, plot_y)
plt.xlabel('x - axis')
# naming the y axis
plt.ylabel('y - axis')

# giving a title to my graph
plt.title('My first graph!')

# function to show the plot
plt.show()

Here is a zip file with my code and the audio files: https://ufile.io/eggjm1gw

Tags: machine-learning, audio, neural-network, signal-processing, voice-recognition

Solution


There are a number of issues in your code, and it's close to impossible to address them all in one go, but let's give it a try. There are two major problems:

  • Currently you are trying to teach your neural network with very few training examples, as little as a single one per speaker (!). It is impossible for any machine learning algorithm to learn anything from that.
  • To make matters worse, what you do is feed the ANN with the MFCCs of only the first 25 ms of each recording (the 25 comes from the winlen parameter of python_speech_features). In each of these recordings, the first 25 ms will be close to identical. Even if you had 10k recordings per speaker, this approach would get you nowhere.
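
For intuition, here is a minimal sketch of how winlen and winstep determine what mfcc returns; the file path is a placeholder, not one of the question's files:

import scipy.io.wavfile as wav
import python_speech_features

rate, sig = wav.read("sample.wav")  # placeholder path

# mfcc returns one 13-dimensional vector per analysis window, i.e. roughly
# (duration - winlen) / winstep + 1 rows for a recording of `duration` seconds
mfcc_feat = python_speech_features.mfcc(sig, rate, winlen=0.025, winstep=0.01, nfft=512)
print(mfcc_feat.shape)  # (num_frames, 13)

mfcc_feat[0]  # this single row covers only the first 25 ms of the audio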

I'll give you specific advice, but won't do all the coding. It is your homework, after all.

  • Use all the MFCCs, not just the first 25 ms. Many of them should be skipped, simply because there is no voice activity. Normally there would be a VAD (Voice Activity Detector) telling you which ones to take, but for this exercise I'd skip it for starters (you need to learn the basics first).
  • Don't use a dictionary. Not only will it never hold more than one MFCC vector per speaker, it is also a very inefficient data structure for your task. Use numpy arrays; they are faster and more memory-efficient. There are plenty of tutorials, including ones for scikit-learn, that demonstrate how to use numpy in this context. In essence, you create two arrays: one with the training data, the second with the labels. Example: if the speaker omersk "produces" 50000 MFCC vectors, you get a (50000, 13) training array. The corresponding label array holds 50000 copies of a single constant value (an id) corresponding to the speaker (say, 0 for omersk, 1 for lucas, and so on). I'd consider using a longer window (maybe 200 ms; experiment!) to reduce the variance. A sketch of these two arrays follows this list.
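
A minimal sketch of the two arrays just described. The MFCC matrices here are random stand-ins; in practice each one comes out of python_speech_features.mfcc for one speaker's audio:

import numpy as np

# random placeholders for the per-speaker MFCC matrices described above
mfcc_omersk = np.random.randn(50000, 13)  # 50000 MFCC vectors for speaker omersk
mfcc_lucas = np.random.randn(30000, 13)   # 30000 MFCC vectors for speaker lucas

X = np.concatenate([mfcc_omersk, mfcc_lucas])  # training array, shape (80000, 13)
y = np.concatenate([np.full(50000, 0),         # id 0 for omersk
                    np.full(30000, 1)])        # id 1 for lucas, and so on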

Don't forget to split your data into training, validation, and test sets. You will have more than enough data. Also, for this exercise I'd watch out for feeding too much data from any single speaker, that is, take steps to make sure the algorithm is not biased.

Later, when you make a prediction, you will compute the MFCCs for the speaker again. With a 10-second recording, a 200 ms window, and a 100 ms overlap (i.e. a 100 ms step), you will get 99 MFCC vectors of shape (99, 13), since (10 - 0.2) / 0.1 + 1 = 99. Run the model on each of the 99 vectors, producing a probability for each class. When you add them up (and normalize, to make it nicer) and take the highest value, you get the most likely speaker.
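
A minimal sketch of that scoring step, assuming an already-fitted classifier clf with predict_proba (such as the MLPClassifier below), a fitted LabelEncoder le, and a new recording loaded into new_audio at sample rate fs (all of these names are assumptions):

import numpy as np
import python_speech_features

# new_audio, fs, clf and le are assumed to exist already (see above);
# nfft=4096 covers a 200 ms frame at a 16 kHz sample rate
mfcc_new = python_speech_features.mfcc(new_audio, samplerate=fs,
                                       winlen=0.2, winstep=0.1, nfft=4096)
probs = clf.predict_proba(mfcc_new)  # shape (99, n_speakers) for a 10 s clip
scores = probs.sum(axis=0)           # add up the per-frame probabilities
scores /= scores.sum()               # normalize into a distribution
print(le.inverse_transform([scores.argmax()])[0])  # most likely speaker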

There are many other things that would normally be considered, but in this case (homework) I'd focus on getting the basics right.

Edit: I decided to take a stab at creating the model based on your idea, but with the basics fixed. It's not exactly clean Python, as it was adapted from a Jupyter Notebook I was running.

import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
import glob
import os

from collections import defaultdict
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestClassifier


audio_files_path = glob.glob('audio/*.wav')
win_len = 0.04 # in seconds
step = win_len / 2
nfft = 2048

mfccs_all_speakers = []
names = []
data = []

for path in audio_files_path:
    fs, audio = wav.read(path)
    if audio.size > 0:
        mfcc = python_speech_features.mfcc(audio, samplerate=fs, winlen=win_len,
                                            winstep=step, nfft=nfft, appendEnergy=False)
        filename = os.path.splitext(os.path.basename(path))[0]
        speaker = filename[:filename.find('_')]
        data.append({'filename': filename,
                     'speaker': speaker,
                     'samples': mfcc.shape[0],
                     'mfcc': mfcc})
    else:
        print(f'Skipping {path} due to 0 file size')

speaker_sample_size = defaultdict(int)
for entry in data:
    speaker_sample_size[entry['speaker']] += entry['samples']

person_with_fewest_samples = min(speaker_sample_size, key=speaker_sample_size.get)
print(person_with_fewest_samples)

max_accepted_samples = int(speaker_sample_size[person_with_fewest_samples] * 0.8)
print(max_accepted_samples)

training_idx = []
test_idx = []
accumulated_size = defaultdict(int)

for entry in data:
    if entry['speaker'] not in accumulated_size:
        training_idx.append(entry['filename'])
        accumulated_size[entry['speaker']] += entry['samples']
    elif accumulated_size[entry['speaker']] < max_accepted_samples:
        accumulated_size[entry['speaker']] += entry['samples']
        training_idx.append(entry['filename'])

X_train = []
label_train = []

X_test = []
label_test = []

for entry in data:
    if entry['filename'] in training_idx:
        X_train.append(entry['mfcc'])
        label_train.extend([entry['speaker']] * entry['mfcc'].shape[0])
    else:
        X_test.append(entry['mfcc'])
        label_test.extend([entry['speaker']] * entry['mfcc'].shape[0])

X_train = np.concatenate(X_train, axis=0)
X_test = np.concatenate(X_test, axis=0)

assert (X_train.shape[0] == len(label_train))
assert (X_test.shape[0] == len(label_test))

print(f'Training: {X_train.shape}')
print(f'Testing: {X_test.shape}')

le = preprocessing.LabelEncoder()
y_train = le.fit_transform(label_train)
y_test = le.transform(label_test)

clf = MLPClassifier(solver='lbfgs', alpha=1e-2, hidden_layer_sizes=(5, 3), random_state=42, max_iter=1000)

cv_results = cross_validate(clf, X_train, y_train, cv=4)
print(cv_results)

{'fit_time': array([3.33842635, 4.25872731, 4.73704267, 5.9454329 ]),
 'score_time': array([0.00125694, 0.00073504, 0.00074005, 0.00078583]),
 'test_score': array([0.40380048, 0.52969121, 0.48448687, 0.46043165])}

The test_score isn't stellar. There is plenty to improve (for starters, the choice of algorithm), but the basics are there. Notice how I get the training samples: it's not random, I only ever consider recordings as a whole. You can't put samples from a given recording into both training and test, since the test set is supposed to be novel.
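
As an aside, the same no-leakage property can be obtained from scikit-learn directly: GroupShuffleSplit with the recording filename as the group guarantees that no recording contributes frames to both sides. A sketch building on the data list above (it reproduces only the grouped split, not the per-speaker capping):

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X_all = np.concatenate([entry['mfcc'] for entry in data])
y_all = np.concatenate([[entry['speaker']] * entry['samples'] for entry in data])
groups = np.concatenate([[entry['filename']] * entry['samples'] for entry in data])

gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(gss.split(X_all, y_all, groups))  # frame indices, grouped by recording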

What wasn't working in your code? I'd say a lot. You were taking 200 ms samples, yet a very short FFT; python_speech_features likely complained to you that the FFT size should be at least the frame length you are processing.
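
To make that concrete: the frame length in samples is winlen * rate, and nfft must cover it. A quick check, assuming a 16 kHz sample rate (an assumption; the actual rate of your files may differ):

rate = 16000                    # assumed sample rate
winlen = 0.2                    # 200 ms window, as in the question's code
frame_len = int(winlen * rate)  # 3200 samples, far more than nfft=512

# pick the next power of two at or above the frame length
nfft = 1
while nfft < frame_len:
    nfft *= 2
print(frame_len, nfft)          # 3200 4096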

I'll leave testing the model to you. It won't be great, but it's a start.

