How to fix the network input shape

Problem description

There are 1275 images in total, each of size (128, 19, 1). The images are grouped in fives, so there are 255 (1275/5) samples of 5 images each, and the final data shape is (255, 5, 128, 19, 1). This data has to be fed into a ConvLSTM2D network, whose code is below. Training completes without any problem, but as soon as evaluation starts it raises the following error. Thanks to anyone who can help me fix it.
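For reference, the grouping described above is just a NumPy reshape; a minimal sketch, with a zero-filled placeholder array standing in for the real images:

import numpy as np

frames = np.zeros((1275, 128, 19, 1), dtype=np.uint8)  # 1275 single images
samples = frames.reshape((255, 5, 128, 19, 1))         # five consecutive frames per sample
print(samples.shape)                                   # (255, 5, 128, 19, 1)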

Error:

IndexError: list index out of range

File "", line 1, in <module>
    runfile('D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py', wdir='D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300')

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

File "D:/thesis/Paper 3/Feature Extraction/two_dimension_Feature_extraction/stft_feature/Training_set/P300/Afrah_convlstm2d.py", line 111, in <module>
    test_loss, test_acc = seq.evaluate(test_data)

File "C:\Users\pouyaandish\AppData\Local\conda\conda\envs\kafieh\lib\site-packages\keras\engine\training.py", line 1361, in evaluate
    callbacks=callbacks)

    if issparse(ins[i]) and not K.is_sparse(feed[i]):

IndexError: list index out of range

#Importing libraries
#-------------------------------------------------
from PIL import Image
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import os
from matplotlib import pyplot as plt


#Data Preprocessing
#-----------------------------------------------------------------
Data = np.zeros((255,5,128,19,1),dtype=np.uint8)

image_folder = 'D:\\thesis\\Paper 3\\Feature Extraction\\two_dimension_Feature_extraction\\stft_feature\\Training_set\\P300'
images = sorted(img for img in os.listdir(image_folder) if img.endswith(".png"))  # sort: os.listdir returns files in arbitrary order

for idx, image in enumerate(images):
    # Five consecutive images form one sample: image idx belongs to
    # sample idx // 5, frame idx % 5.
    img = Image.open(os.path.join(image_folder, image)).convert('L')
    array = np.expand_dims(np.array(img), axis=2)  # (128, 19) -> (128, 19, 1)
    Data[idx // 5, idx % 5] = array

labels = np.zeros((len(Data), 2), dtype=np.uint8)  # one-hot labels, two classes
for i in range(len(Data)):
    if i <= 127:
        labels[i][0] = 1   # first class: samples 0-127
    else:
        labels[i][1] = 1   # second class: samples 128-254
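
# A sketch (not in the original script): Keras' to_categorical builds the
# same one-hot labels from integer class ids, assuming the identical
# 128/127 class split as above.
from keras.utils import to_categorical
class_ids = np.array([0 if i <= 127 else 1 for i in range(len(Data))])
assert (labels == to_categorical(class_ids, num_classes=2).astype(np.uint8)).all()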
            
#Network Configuration
#--------------------------------------------------------------------------------------------------------------------------
seq = Sequential()
seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(5, 128, 19, 1),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())

seq.add(Flatten())
seq.add(Dense(units=128, activation='relu'))
seq.add(Dense(units=2, activation='relu'))
seq.compile(loss='binary_crossentropy', optimizer='adadelta', metrics=['acc'])
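
# Sanity check (added sketch): print the layer output shapes before training.
# With padding='same' and return_sequences=True, each ConvLSTM2D should give
# (None, 5, 128, 19, 40), Flatten (None, 5*128*19*40), then (None, 128), (None, 2).
seq.summary()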

#Fit the Data on Model
#--------------------------------------------------------------------------------------
train_data_1 = Data[0:84]
train_data_2 = Data[127:212]
train_data = np.concatenate([train_data_1, train_data_2])
label_train_1 = labels[0:84]
label_train_2 = labels[127:212]
label_train = np.concatenate([label_train_1, label_train_2])

val_data_1 = Data[84:104]
val_data_2 = Data[212:232]
val_data = np.concatenate([val_data_1, val_data_2])
label_val_1 = labels[84:104]
label_val_2 = labels[212:232]
label_val = np.concatenate([label_val_1, label_val_2])


test_data_1 = Data[104:127]
test_data_2 = Data[232:]
test_data = np.concatenate([test_data_1, test_data_2])
label_test_1 = labels[104:127]
label_test_2 = labels[232:]
label_test = np.concatenate([label_test_1, label_test_2])
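
# Quick consistency check (added sketch): the slice bounds above give
# 169 train (84 + 85), 40 val (20 + 20) and 46 test (23 + 23) samples, 255 in all.
assert len(train_data) + len(val_data) + len(test_data) == len(Data)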


history = seq.fit(train_data, label_train, validation_data=(val_data, label_val), epochs=2, batch_size=10)

#Visualize the Result
#---------------------------------------------------------------------------------------
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.plot()
plt.legend()
plt.show()
#Evaluate Model on test Data
#----------------------------------------------------------------------------------------------
test_loss, test_acc = seq.evaluate(test_data)
print('test_acc:', test_acc)





     

Tags: python, tensorflow, keras

Solution


The problem is at the very end, when you evaluate your model: you simply forgot to pass the y argument. Since seq.evaluate(test_data) supplies no targets, Keras builds fewer input arrays than the compiled model's feed expects, and indexing past the end of that list is what raises the IndexError. This modification should solve the problem:

test_loss, test_acc = seq.evaluate(test_data, y=label_test)
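
With the labels passed in, the evaluation call mirrors the fit call above; for example:

test_loss, test_acc = seq.evaluate(test_data, label_test, batch_size=10)
print('test_acc:', test_acc)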
