Why does the torch.nn package not support inputs containing a single sample?

Problem description

I am trying to understand deep learning with PyTorch. I read the PyTorch tutorial https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html, which states the following:

''torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample. For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width. If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.''

I am not sure what this means. I have built a simple feed-forward neural network (see the code below) that I train on a very small dataset (the idea is to first understand how it works without mini-batches, not to build anything actually useful), so I do not need mini-batches and I simply feed all the samples to the network at every epoch. If I understand correctly, I should add "train_data = train_data.unsqueeze(0)". But I am not sure where, because it seems to change the data size to 1. Besides, the code works without this line, so why would I actually need it?
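To illustrate the size change I mean, here is a small shape check (the tensor sizes below are just an illustration, not my real data):

import torch

# illustration only: pretend the full training set is 100 samples with 10 features each
example_data = torch.randn(100, 10)
print(example_data.shape)               # torch.Size([100, 10])
print(example_data.unsqueeze(0).shape)  # torch.Size([1, 100, 10])  -> the leading size becomes 1?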

Any help would be greatly appreciated!

import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import pandas as pd
import os
import numpy as np

# Download data
#... 

# Construct network
len_input = len(data[0])
len_output = nbr_class
print('There are %d classes used for classification' % nbr_class)

#define a new class, Net, that extends nn.Module
#set up the “skeleton” of our network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        #create the fully connected layers. A fully connected layer is represented by the nn.Linear object,
        #with the first argument being the number of nodes in layer l and the next argument being
        #the number of nodes in layer l+1
        self.fc1 = nn.Linear(len_input, 200)
        self.fc2 = nn.Linear(200, 200)
        self.fc3 = nn.Linear(200, len_output)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)  # specify dim explicitly; inputs are (batch, features)

# Create initial network
epochs = 3000
model = Net()
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_array = []  # collect the training loss per epoch for plotting

# Train & test network
def train(data, target, epoch):
    model.train() # set the model to training mode
    # run the main training loop
    # no batch size needed since the number of samples is small; this way the gradient is computed on the full set
    # zero the gradients before calling .backward(), otherwise gradients would accumulate across epochs
    optimizer.zero_grad()
    target_pred = model(data)
    loss = criterion(target_pred, target)
    #Propagate the gradients back through the network
    loss.backward()
    #update the weights
    #tell our optimizer to “step”, meaning that the weights will be updated using the calculated gradients 
    #according to our rule. perform model parameter update (update weights)
    optimizer.step()
    # for graphing purposes
    loss_array.append(loss.item())  # use .item(); indexing a 0-d tensor with loss.data[0] is deprecated

    if epoch % 100 == 0:  # print the loss every 100 epochs
        print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, loss.item()))

def test(epoch, test_data, test_target):
    #eval mode to turn Dropout and BatchNorm off
    model.eval()
    test_loss = 0
    correct = 0
    test_target_pred = model(test_data)
    criterion = nn.NLLLoss()
    # sum up batch loss
    test_loss += criterion(test_target_pred, test_target).item()  # .item() instead of the deprecated .data[0]
    pred = test_target_pred.data.max(1)[1]  # get the index of the max log-probability
    correct += pred.eq(test_target.data).sum().item()
    if epoch % 100 == 0:
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_target), 100. * correct / len(test_target)))

if __name__ == '__main__':
    for epoch in range(epochs):
        train(data = train_data, target=train_target, epoch=epoch)
        test(epoch, test_data, test_target)

Tags: python, deep-learning, pytorch, mini-batch

Solution
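As the tutorial passage quoted in the question says, modules in torch.nn are written with a leading batch dimension in mind. In the posted code, train_data is already a 2D tensor of shape (nSamples, nFeatures), so nn.Linear treats it as a mini-batch of nSamples samples; that is why training works without unsqueeze(0). The fake batch dimension is only needed when a single sample of shape (nFeatures,) is passed on its own. Applying unsqueeze(0) to the whole dataset would instead give it shape (1, nSamples, nFeatures), i.e. a batch of size 1, which is the size-1 effect described in the question. A minimal sketch of the shape behaviour (the layer sizes here are only for illustration):

import torch
import torch.nn as nn

fc = nn.Linear(10, 3)            # 10 input features, 3 output classes

batch = torch.randn(5, 10)       # 5 samples: the first dimension is already the batch
print(fc(batch).shape)           # torch.Size([5, 3])

single = torch.randn(10)         # a single sample with no batch dimension
single = single.unsqueeze(0)     # shape becomes (1, 10): a mini-batch of one sample
print(fc(single).shape)          # torch.Size([1, 3])

Keeping the batch dimension also matters for the loss: nn.NLLLoss expects the predictions as (N, C) and the targets as (N,), even when N is 1.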

