Python Librosa Keras Neural Network Error: too many indices for array

Problem Description

I recently attempted an experiment in which a neural network, written with Keras in the Python IDE IDLE, is used to analyze the GTZAN song dataset. I tried changing the layers to see whether that had any effect on performance. My experiment is based on a particular article that details the foundations of the project:

https://medium.com/@navdeepsingh_2336/identifying-the-genre-of-a-song-with-neural-networks-851db89c42f0

Following the advice of another developer on Stack Overflow, I turned to the scikit-learn module for help.

My code is shown here:

import librosa
import librosa.feature
import librosa.display
import glob
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedShuffleSplit
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils.np_utils import to_categorical

def display_mfcc(song):
    y, _ = librosa.load(song)
    mfcc = librosa.feature.mfcc(y)

    plt.figure(figsize=(10, 4))
    librosa.display.specshow(mfcc, x_axis='time', y_axis='mel')
    plt.colorbar()
    plt.title(song)
    plt.tight_layout()
    plt.show()


def extract_features_song(f):
    y, _ = librosa.load(f)

    mfcc = librosa.feature.mfcc(y)
    mfcc /= np.amax(np.absolute(mfcc))

    return np.ndarray.flatten(mfcc)[:25000]

def generate_features_and_labels():
    all_features = []
    all_labels = []
    genres = ['blues', 'classical', 'country', 'disco', 'hiphop',
              'jazz', 'metal', 'pop', 'reggae', 'rock']

    for genre in genres:
        sound_files = glob.glob('genres/'+genre+'/*.au')
        print('Processing %d songs in %s genre...' %
              (len(sound_files), genre))
        for f in sound_files:
            features = extract_features_song(f)
            all_features.append(features)
            all_labels.append(genre)

    label_uniq_ids, label_row_ids = np.unique(all_labels,
                                              return_inverse=True)
    label_row_ids = label_row_ids.astype(np.int32, copy=False)
    onehot_labels = to_categorical(label_row_ids,
                                   len(label_uniq_ids))

    return np.stack(all_features), onehot_labels


features, labels = generate_features_and_labels()

print(np.shape(features))
print(np.shape(labels))

training_split = 0.8

x = features
y = labels

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.20,
                             random_state=37)

for train_index, test_index in sss.split(features, labels):
    x_train, x_test = features[train_index], features[test_index]
    y_train, y_test = labels[train_index], labels[test_index]

print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)

train_input = train_index[:,:-10:]
train_labels = train_index[:,-10:]

test_input = test_index[:,:-10:]
test_labels = test_index[:,-10:]

print(np.shape(train_input))
print(np.shape(train_labels))

model = Sequential([
    Dense(100, input_dim=np.shape(train_input)[1]),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
    ])


model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.summary())

model.fit(train_input, train_labels, epochs=10, batch_size=32,
          validation_split=0.2) 
loss, acc = model.evaluate(test_input, test_labels, batch_size=32)

print('Done!')
print('Loss: %.4f, accuracy: %.4f' % (loss, acc))

When I run the program, Python begins by printing the expected output:

Processing 100 songs in blues genre...
Processing 100 songs in classical genre...
Processing 100 songs in country genre...
Processing 100 songs in disco genre...
Processing 100 songs in hiphop genre...
Processing 100 songs in jazz genre...
Processing 100 songs in metal genre...
Processing 100 songs in pop genre...
Processing 100 songs in reggae genre...
Processing 100 songs in rock genre...
(1000, 25000)
(1000, 10)
(800, 25000) (200, 25000) (800, 10) (200, 10)

But this is interrupted by an error message:

Traceback (most recent call last):
  File "/Users/surengrigorian/Documents/Stage1.py", line 74, in <module>
    train_input = train_index[:,:-10:]
IndexError: too many indices for array

Thanks for any help with this issue.

Tags: python, keras, scikit-learn, neural-network, librosa

Solution


This happens because train_index and test_index are one-dimensional arrays containing the indices of the samples to use for training and testing. They are not the data themselves. The problem is that you are trying to access a second axis on a one-dimensional array (by doing [:,:-10:]).
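As an illustration (not part of the original code), here is a minimal sketch of why that line fails; the zero-filled features array is just a stand-in for the real MFCC matrix, with shapes matching the 1000-sample, 10-genre setup above:

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Stand-ins for the real data: 1000 samples, 10 evenly distributed genres.
features = np.zeros((1000, 20))
labels = np.repeat(np.arange(10), 100)

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.20, random_state=37)
train_index, test_index = next(sss.split(features, labels))

print(train_index.shape)  # (800,) -- a 1-D array of row indices
print(test_index.shape)   # (200,)

train_index[:, :-10:]     # IndexError: too many indices for array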

Please clarify what you are trying to do on that line:

train_input = train_index[:,:-10:]
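If the goal is simply to feed the stratified split into the Keras model, a minimal sketch (assuming the x_train/x_test/y_train/y_test arrays built inside the sss.split loop above are the intended inputs) is to use them directly instead of slicing the index arrays:

# x_train/x_test and y_train/y_test were already built by indexing
# features and labels with train_index/test_index, so they hold the
# data and one-hot labels themselves; no further column slicing is needed.
train_input, train_labels = x_train, y_train
test_input, test_labels = x_test, y_test

model.fit(train_input, train_labels, epochs=10, batch_size=32,
          validation_split=0.2)
loss, acc = model.evaluate(test_input, test_labels, batch_size=32)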
