pytorch - Simple convolutional network
Problem description
I am new to PyTorch, and I'd appreciate it if you could shed some light on this small network I am trying to set up.
import torch
import torch.nn as nn

class PzConv2d(nn.Module):
    """ Convolution 2D layer followed by ReLU activation
    """
    def __init__(self, n_in_channels, n_out_channels, **kwargs):
        super(PzConv2d, self).__init__()
        self.conv = nn.Conv2d(n_in_channels, n_out_channels, bias=True,
                              **kwargs)
        self.activ = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        return self.activ(x)
class PzPool2d(nn.Module):
    """ Average pooling layer
    """
    def __init__(self, kernel_size, stride, padding=0):
        super(PzPool2d, self).__init__()
        self.pool = nn.AvgPool2d(kernel_size=kernel_size,
                                 stride=stride,
                                 padding=padding,
                                 ceil_mode=True,
                                 count_include_pad=False)

    def forward(self, x):
        return self.pool(x)
class PzFullyConnected(nn.Module):
    """ Dense (fully connected) layer, optionally followed by ReLU
    """
    def __init__(self, n_inputs, n_outputs, withrelu=True, **kwargs):
        super(PzFullyConnected, self).__init__()
        self.withrelu = withrelu
        self.linear = nn.Linear(n_inputs, n_outputs, bias=True)
        self.activ = nn.ReLU()

    def forward(self, x):
        x = self.linear(x)
        if self.withrelu:
            x = self.activ(x)
        return x
class NetCNN(nn.Module):
    def __init__(self, n_input_channels, debug=False):
        super(NetCNN, self).__init__()
        self.n_bins = 180
        self.debug = debug
        self.conv0 = PzConv2d(n_in_channels=n_input_channels,
                              n_out_channels=64,
                              kernel_size=5, padding=2)
        self.pool0 = PzPool2d(kernel_size=2, stride=2, padding=0)
        self.conv1 = PzConv2d(n_in_channels=64,
                              n_out_channels=92,
                              kernel_size=3, padding=2)
        self.pool1 = PzPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = PzConv2d(n_in_channels=92,
                              n_out_channels=128,
                              kernel_size=3, padding=2)
        self.pool2 = PzPool2d(kernel_size=2, stride=2, padding=0)
        self.fc0 = PzFullyConnected(n_inputs=12800, n_outputs=1024)
        self.fc1 = PzFullyConnected(n_inputs=1024, n_outputs=self.n_bins)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
    def forward(self, x, dummy):
        # x: image tensor of shape (N_batch, Channels, Height, Width)
        #    here Channels = 5 filters, H = W = 64 pixels
        # dummy: not used
        # stage 0: conv 64 x 5x5
        x = self.conv0(x)
        x = self.pool0(x)
        # stage 1: conv 92 x 3x3
        x = self.conv1(x)
        x = self.pool1(x)
        # stage 2: conv 128 x 3x3
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.fc0(x.view(-1, self.num_flat_features(x)))
        x = self.fc1(x)
        output = x
        return output
I have checked that the sizes of the intermediate "x" tensors in the forward pass look correct (at least when I feed in a random input image tensor). But if you see anything odd, please tell me.
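For reference, the intermediate sizes can also be verified on paper with the standard output-size formulas, without running the network. The sketch below reproduces the conv/pool hyperparameters from the code above for a 5x64x64 input (note that the pooling uses ceil_mode=True, which matters at the 19x19 stage):

```python
import math

def conv_out(size, kernel, padding, stride=1):
    # Conv2d output size: floor((L + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel, stride):
    # AvgPool2d with ceil_mode=True and no padding: ceil((L - k) / s) + 1
    return math.ceil((size - kernel) / stride) + 1

s = 64
s = conv_out(s, 5, 2)   # conv0 -> 64
s = pool_out(s, 2, 2)   # pool0 -> 32
s = conv_out(s, 3, 2)   # conv1 -> 34
s = pool_out(s, 2, 2)   # pool1 -> 17
s = conv_out(s, 3, 2)   # conv2 -> 19
s = pool_out(s, 2, 2)   # pool2 -> 10
print(128 * s * s)      # 128 channels * 10 * 10 = 12800, matching fc0's n_inputs
```

So the 12800 fed to fc0 checks out against the layer arithmetic.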
Now, I often see forward methods written as a sequence of F.<function> calls instead of declaring separate layers the way I did. Does this make any difference?
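To make the comparison concrete, here is a small sketch (with made-up layer sizes, not the question's network) of the two styles side by side. For parameter-free operations such as ReLU and average pooling the two are numerically identical; only layers with learnable weights (Conv2d, Linear) need to be module attributes so that model.parameters() can find them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleStyle(nn.Module):
    """Ops declared as modules in __init__, as in the question."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.activ = nn.ReLU()
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.activ(self.conv(x)))

class FunctionalStyle(nn.Module):
    """Same computation, but ReLU/pooling as F.* calls in forward."""
    def __init__(self):
        super().__init__()
        # the learnable layer must still be a module attribute
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return F.avg_pool2d(F.relu(self.conv(x)), kernel_size=2, stride=2)

m, f = ModuleStyle(), FunctionalStyle()
f.conv.load_state_dict(m.conv.state_dict())  # copy identical weights
x = torch.randn(2, 3, 16, 16)
assert torch.allclose(m(x), f(x))  # the two styles give the same output
```

The module style makes stateless ops show up in print(model) and makes them easy to swap; the functional style is more compact. Either is fine here.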
(Note that I use F.cross_entropy as the loss function, so the network deliberately does not end with a SoftMax.)
Thanks.
Solution