Input_size error in PyTorch's LSTM: RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150

Problem description

Hi everyone, I am using an LSTM to predict a stock index on a given day, using only the index values of the preceding 30 days as input. I believe the LSTM input in this example should have shape [10, 30, 1], so I reshape the input with t_x=x.view(10,30,1). But when I run the code below, I get RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150. Could you help me find out what the problem is? Thanks :)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.utils.data import TensorDataset


dataread_df=pd.read_csv('D:/Desktop/399300.csv')
dataread_series=pd.Series(dataread_df['value'].values)
plt.plot(dataread_series)
plt.show()


def generate_data_df(series, n):
    if len(series) <= n:
        raise Exception("The Length of series is %d, while affect by (n=%d)." % (len(series), n))
    df = pd.DataFrame()
    for i in range(n):
        df['x%d' % i] = series.tolist()[i:-(n - i)]
    df['y'] = series.tolist()[n:]
    return df
data_df = generate_data_df(dataread_series, 30)

data_numpy=np.array(data_df)
mean=np.mean(data_numpy)
std=np.std(data_numpy)
data_numpy = (data_numpy-mean)/std
train_size=int(len(data_numpy)*0.7)
test_size=len(data_numpy)-train_size
trainset_np=data_numpy[:train_size]
testset_np=data_numpy[train_size:]
train_x_np=trainset_np[:,:30]
train_y_np=trainset_np[:,30:]
test_x_np=testset_np[:,:30]
test_y_np=testset_np[:,30:]

train_x=torch.Tensor(train_x_np)
train_y=torch.Tensor(train_y_np)
test_x=torch.Tensor(test_x_np)
test_y=torch.Tensor(test_y_np)
trainset=TensorDataset(train_x,train_y)
testset=TensorDataset(test_x,test_y)
trainloader = DataLoader(trainset, batch_size=10, shuffle=True)
testloader=DataLoader(testset,batch_size=10,shuffle=True)

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.rnn=nn.LSTM(input_size=1,hidden_size=64,num_layers=1,batch_first=True)
        self.out=nn.Sequential(nn.Linear(64,1))
    def forward(self,x):
        r_out,(h_n,h_c)=self.rnn(x,None)
        out=self.out(r_out[:,-1,:])
        return out
rnn = Net()
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.0001)  
criterion = nn.MSELoss()
train_correct=0
test_correct=0
train_total=0
test_total=0
prediction_list=[]

for epoch in range(10):
    running_loss_train=0
    running_loss_test=0
    for i,(x1,y1) in enumerate(trainloader):
        t_x1=x1.view(10,30,1)
        output=rnn(t_x1)
        loss_train=criterion(output,y1)
        optimizer.zero_grad() 
        loss_train.backward() 
        optimizer.step()
        running_loss_train+=loss_train.item()
    for i,(x2,y2) in enumerate(testloader):
        t_x2=x2.view(10,30,1)
        prediction=rnn(t_x2)
        loss_test=criterion(prediction,y2)
        running_loss_test+=loss_test.item()
        prediction_list.append(prediction)
    print('Epoch {} Train Loss:{}, Test Loss:{}'.format(epoch+1,running_loss_train,running_loss_test))
    prediction_list_plot=np.array(prediction_list)
    plt.plot(test_y_np.flatten(),'r-',linewidth=0.1,label='real data')
    plt.plot(prediction_list_plot.flatten(),'b-',linewidth=0.1,label='predicted data')
    plt.show()
print('Finish training')

Runtime error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-3-fb8cb4c93775> in <module>
     71     running_loss_test=0
     72     for i,(x1,y1) in enumerate(trainloader):
---> 73         t_x1=x1.view(10,30,1)
     74         output=rnn(t_x1)
     75         loss_train=criterion(output,y1)

RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150

Tags: python, lstm, pytorch, recurrent-neural-network

Solution


Given that you use batch_first=True and assuming a batch size of 10, (10, 30, 1) is indeed the correct input shape, since the LSTM expects (batch_size, seq_len, input_size).
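As a sanity check, here is a minimal sketch of that shape convention, using the same layer sizes as the model in the question (input_size=1, hidden_size=64):

```python
import torch
import torch.nn as nn

# With batch_first=True the LSTM expects input of shape (batch_size, seq_len, input_size).
rnn = nn.LSTM(input_size=1, hidden_size=64, num_layers=1, batch_first=True)

x = torch.randn(10, 30, 1)      # batch of 10 sequences, 30 time steps, 1 feature each
r_out, (h_n, h_c) = rnn(x)

print(r_out.shape)              # torch.Size([10, 30, 64]) -- one hidden state per step
print(r_out[:, -1, :].shape)    # torch.Size([10, 64])     -- last time step, as used in forward()
```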

The question is where the 150 comes from: what is x1's shape before you try to apply .view(...)? You can check it like this:

for i,(x1,y1) in enumerate(trainloader):
    print(x1.shape)
    ...

Intuitively, it should be something like (10, ???), since you set the batch size to 10. My guess is that something is off with your training and test data.
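For what it's worth, 150 = 5 × 30, which suggests the final batch from the DataLoader holds only 5 samples: when the dataset length is not a multiple of the batch size, the last batch is smaller, and the hard-coded x1.view(10, 30, 1) fails on it. A minimal sketch of the failure mode, using a hypothetical dataset of 105 rows (not from the question) so the last batch has exactly 5 samples:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical dataset: 105 rows of 30 features. 105 is not a multiple of
# batch_size=10, so the DataLoader's final batch holds only 5 rows (5 * 30 = 150).
x = torch.randn(105, 30)
y = torch.randn(105, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=10, shuffle=False)

for xb, yb in loader:
    # xb.view(10, 30, 1) would raise on the last batch; using -1 lets
    # PyTorch infer the actual batch size instead of hard-coding it.
    t_x = xb.view(-1, 30, 1)
    print(t_x.shape)   # (10, 30, 1) for full batches, (5, 30, 1) for the last one
```

Alternatively, pass drop_last=True to the DataLoader to discard the incomplete final batch, so every batch really does have 10 samples.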

