How do I set the values of layers in a PyTorch nn.Module?

Problem description

I have a model that I'm trying to use. I've been working through the errors, and now I think it comes down to the values in my layers. I get this error:

RuntimeError: Given groups=1, weight of size 24 1 3 3, expected input[512, 50, 50, 3] to have 1 channels, 
but got 50 channels instead

My parameters are:

LR = 5e-2
N_EPOCHS = 30
BATCH_SIZE = 512
DROPOUT = 0.5

My image information is:

depth=24
channels=3
original height = 1600
original width = 1200
resized to 50x50

These are the sizes of my data:

Train shape (743, 50, 50, 3) (743, 7)
Test shape (186, 50, 50, 3) (186, 7)
Train pixels 0 255 188.12228712427097 61.49539262385051
Test pixels 0 255 189.35559211469533 60.688278787628775

I've been trying to understand what each layer expects using this article, https://towardsdatascience.com/pytorch-layer-dimensions-what-sizes-should-they-be-and-why-4265a41e01fd, but when I put in what it says there, it gives me errors about the wrong channels and kernels.

I found that torch_summary gives me more understanding of the outputs, but it only raises more questions.

Here is my torch_summary code:

from torchvision import models
from torchsummary import summary
import torch
import torch.nn as nn

DROPOUT = 0.5  # defined with the other parameters above

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1,24, kernel_size=5)  # output (n_examples, 16, 26, 26)
        self.convnorm1 = nn.BatchNorm2d(24) # channels from prev layer
        self.pool1 = nn.MaxPool2d((2, 2))  # output (n_examples, 16, 13, 13)
        self.conv2 = nn.Conv2d(24,48,kernel_size=5)  # output (n_examples, 32, 11, 11)
        self.convnorm2 = nn.BatchNorm2d(48) # 2*channels?
        self.pool2 = nn.AvgPool2d((2, 2))  # output (n_examples, 32, 5, 5)
        self.linear1 = nn.Linear(400,120)  # input will be flattened to (n_examples, 32 * 5 * 5)
        self.linear1_bn = nn.BatchNorm1d(400) # features?
        self.drop = nn.Dropout(DROPOUT)
        self.linear2 = nn.Linear(400, 10)
        self.act = torch.relu

    def forward(self, x):
        x = self.pool1(self.convnorm1(self.act(self.conv1(x))))
        x = self.pool2(self.convnorm2(self.act(self.conv2(x))))
        x = self.drop(self.linear1_bn(self.act(self.linear1(x.view(len(x), -1)))))
        return self.linear2(x)


device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model=CNN().to(device)
summary(model, (3, 50, 50))

This is what it gives me:

  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 24 1 5 5, expected input[2, 3, 50, 50] to have 1 channels, but got 3 channels instead
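
For reference, the weight size 24 1 5 5 reads as (out_channels, in_channels, kernel_height, kernel_width): the first conv layer was built with in_channels=1, while the dummy batch that torchsummary feeds it has 3 channels. A minimal sketch of the mismatch, using random tensors as stand-ins for the real data:

import torch
import torch.nn as nn

x = torch.rand(2, 3, 50, 50)                # (N, C, H, W) with C=3, like torchsummary's dummy batch
conv_bad = nn.Conv2d(1, 24, kernel_size=5)  # weight shape (24, 1, 5, 5): expects 1 input channel
# conv_bad(x)                               # -> RuntimeError: expected input to have 1 channels, got 3
conv_ok = nn.Conv2d(3, 24, kernel_size=5)   # in_channels must match C of the input
print(conv_ok(x).shape)                     # torch.Size([2, 24, 46, 46])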

When I run my whole code and unsqueeze_(0) my data, like this ... x_train = torch.from_numpy(x_train).unsqueeze_(0) ... I get this error:

 File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 24 1 5 5, but got 5-dimensional input of size [1, 743, 50, 50, 3] instead
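
For reference, unsqueeze_(0) only adds a new leading dimension of size 1, so the whole 4-D array becomes a 5-D tensor rather than reordering anything; a minimal sketch, assuming x_train has the shape shown above:

import torch

x_train = torch.rand(743, 50, 50, 3)   # stand-in for the real image data
print(x_train.unsqueeze(0).size())     # torch.Size([1, 743, 50, 50, 3]) -> the 5-D input in the error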

I can't figure out how to fill in the right values for the layers. Can someone help me find the correct values and understand how to work them out? I do know that the output of one layer should be the input of the next, but nothing matches what I thought I knew. Thanks in advance!!

Tags: python pytorch conv-neural-network vgg-net

Solution


It looks like the axes of your input tensor x are in the wrong order.
As you can see in the Conv2d doc, the input must have shape (N, C, H, W), where

N is the batch size, C is the number of channels, H is the height of the input planes in pixels, and W is the width in pixels.

So, to make it right, use torch.permute to swap the axes in the forward pass.

...
def forward(self, x):
    x = x.permute(0, 3, 1, 2)
    ...
    ...
    return self.linear2(x)
...
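
Beyond the axis order, the layer sizes also have to be made consistent with a 3-channel 50x50 input. Here is a minimal sketch of a version where the dimensions line up (assuming 7 output classes, from the (743, 7) label shape; this is an illustration of the size arithmetic, not the only valid choice of layers):

import torch
import torch.nn as nn

DROPOUT = 0.5

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 24, kernel_size=5)   # 3 input channels -> (N, 24, 46, 46)
        self.convnorm1 = nn.BatchNorm2d(24)
        self.pool1 = nn.MaxPool2d((2, 2))              # -> (N, 24, 23, 23)
        self.conv2 = nn.Conv2d(24, 48, kernel_size=5)  # -> (N, 48, 19, 19)
        self.convnorm2 = nn.BatchNorm2d(48)
        self.pool2 = nn.AvgPool2d((2, 2))              # -> (N, 48, 9, 9)
        self.linear1 = nn.Linear(48 * 9 * 9, 120)      # flattened input is (N, 3888)
        self.linear1_bn = nn.BatchNorm1d(120)          # features coming out of linear1
        self.drop = nn.Dropout(DROPOUT)
        self.linear2 = nn.Linear(120, 7)               # 7 classes, from the (743, 7) labels
        self.act = torch.relu

    def forward(self, x):
        x = x.permute(0, 3, 1, 2)                      # (N, H, W, C) -> (N, C, H, W)
        x = self.pool1(self.convnorm1(self.act(self.conv1(x))))
        x = self.pool2(self.convnorm2(self.act(self.conv2(x))))
        x = self.drop(self.linear1_bn(self.act(self.linear1(x.view(len(x), -1)))))
        return self.linear2(x)

model = CNN()
out = model(torch.rand(4, 50, 50, 3))                  # channels-last input, as in the question
print(out.shape)                                       # torch.Size([4, 7])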

Example of permute:

t = torch.rand(512, 50, 50, 3)
t.size()
torch.Size([512, 50, 50, 3])

t = t.permute(0, 3, 1, 2)
t.size()
torch.Size([512, 3, 50, 50])
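
Alternatively, the permute can be done once when the numpy arrays are converted to tensors, instead of on every forward pass; a sketch, using a zero array as a stand-in for the real x_train:

import numpy as np
import torch

x_train = np.zeros((743, 50, 50, 3), dtype=np.uint8)             # stand-in for the real images
x_train = torch.from_numpy(x_train).float().permute(0, 3, 1, 2)  # (N, H, W, C) -> (N, C, H, W)
print(x_train.size())                                            # torch.Size([743, 3, 50, 50])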
