Using flatten in a pytorch v1.0 Sequential module

Problem description

Since my CUDA version is 8, I am using torch 1.0.0.

I need to use a Flatten layer in a Sequential model. Here is my code:

import torch
import torch.nn as nn
import torch.nn.functional as F
print(torch.__version__)
# 1.0.0
from collections import OrderedDict

layers = OrderedDict()
layers['conv1'] = nn.Conv2d(1, 5, 3)
layers['relu1'] = nn.ReLU()
layers['conv2'] = nn.Conv2d(5, 1, 3)
layers['relu2'] = nn.ReLU()
layers['flatten'] = nn.Flatten()
layers['linear1'] = nn.Linear(3600, 1)
model = nn.Sequential(
    layers
).cuda()

It gives me the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-38-080f7c5f5037> in <module>
      6 layers['conv2'] = nn.Conv2d(5, 1, 3)
      7 layers['relu2'] = nn.ReLU()
----> 8 layers['flatten'] = nn.Flatten()
      9 layers['linear1'] = nn.Linear(3600, 1)
     10 model = nn.Sequential(

AttributeError: module 'torch.nn' has no attribute 'Flatten'

How can I flatten the output of my conv layers in pytorch 1.0.0?

Tags: python, pytorch, conv-neural-network

Solution


Just make a new Flatten layer yourself.

import torch
import torch.nn as nn
from collections import OrderedDict

class Flatten(nn.Module):
    # Collapse every dimension except the batch dimension into one,
    # e.g. (N, 1, 60, 60) -> (N, 3600).
    def forward(self, input):
        return input.view(input.size(0), -1)

layers = OrderedDict()
layers['conv1'] = nn.Conv2d(1, 5, 3)
layers['relu1'] = nn.ReLU()
layers['conv2'] = nn.Conv2d(5, 1, 3)
layers['relu2'] = nn.ReLU()
layers['flatten'] = Flatten()
layers['linear1'] = nn.Linear(3600, 1)
model = nn.Sequential(
    layers
).cuda()
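
As a quick sanity check (a minimal sketch, not part of the original answer), assume single-channel 64x64 inputs: the two 3x3 convolutions shrink the feature map to 1x60x60, which the custom Flatten turns into the 3600 features expected by linear1. Drop the .cuda() call if you want to run it on the CPU.

# Hypothetical test input: a batch of 8 single-channel 64x64 images.
x = torch.randn(8, 1, 64, 64).cuda()
out = model(x)      # conv -> relu -> conv -> relu -> flatten -> linear
print(out.shape)    # expected: torch.Size([8, 1])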
