machine-learning - How to pass an intermediate layer of one model to another model as a skip connection in PyTorch
Question
I want to define an encoder-decoder architecture as two separate models and then join them with nn.Sequential(), as in the code below. Now, suppose I want to concatenate the output of the Encoder's conv4 block to the Decoder's deconv1 block as a skip connection. Is there a way to achieve this without merging the two models (Encoder and Decoder) into one? I want to keep them separate so that the output of the same encoder can be used as input to multiple decoders.
import torch
import torch.nn as nn

# Define the Encoder architecture
class Encoder(nn.Module):
    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Encoder, self).__init__()
        # Define the encoder
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        self.conv4 = conv(conv_dim*4, conv_dim*4, 4)
        # Define the resnet part of the encoder
        # Residual blocks
        res_layers = []
        for layer in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)
        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        conv3 = self.leaky_relu(self.conv3(conv2))
        conv4 = self.leaky_relu(self.conv4(conv3))
        out = self.res_blocks(conv4)
        return out
# Define the Decoder architecture
class Decoder(nn.Module):
    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Decoder, self).__init__()
        # Define the resnet part of the decoder
        # Residual blocks
        res_layers = []
        for layer in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)
        # Define the decoder
        self.deconv1 = deconv(conv_dim*4, conv_dim*4, 4)
        self.deconv2 = deconv(conv_dim*4, conv_dim*2, 4)
        self.deconv3 = deconv(conv_dim*2, conv_dim, 4)
        self.deconv4 = deconv(conv_dim, conv_dim, 4)
        # no batch norm on last layer
        self.out_layer = deconv(conv_dim, 3, 1, stride=1, padding=0, normalization=False)
        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        res = self.res_blocks(x)
        deconv1 = self.leaky_relu(self.deconv1(res))
        deconv2 = self.leaky_relu(self.deconv2(deconv1))
        deconv3 = self.leaky_relu(self.deconv3(deconv2))
        deconv4 = self.leaky_relu(self.deconv4(deconv3))
        # tanh applied to last layer (torch.tanh; F.tanh is deprecated)
        out = torch.tanh(self.out_layer(deconv4))
        out = torch.clamp(out, min=-0.5, max=0.5)
        return out
def model():
    enc = Encoder(conv_dim=64, n_res_blocks=2)
    dec = Decoder(conv_dim=64, n_res_blocks=2)
    return nn.Sequential(enc, dec)
Solution
Instead of returning only the latent features (the output of the last layer) from the encoder, you can also return the outputs of the intermediate layers along with the latent features, for example as a list or tuple. Then, in the decoder's forward function, you can access this collection of values returned from the encoder (which arrives as the decoder's argument) and use it in the decoder layers accordingly.
Hope this helps.
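A minimal sketch of this idea, using plain nn.Conv2d/nn.ConvTranspose2d layers instead of the asker's conv/deconv helpers (whose definitions aren't shown) and only one skip connection for brevity. The encoder returns a tuple (latent, skip); nn.Sequential passes that tuple unchanged to the decoder, which unpacks it and concatenates the skip activation along the channel dimension:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, conv_dim=64):
        super().__init__()
        self.conv1 = nn.Conv2d(3, conv_dim, 4, stride=2, padding=1)
        self.conv2 = nn.Conv2d(conv_dim, conv_dim, 4, stride=2, padding=1)
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        # return the latent features AND the intermediate activation
        return conv2, conv1

class Decoder(nn.Module):
    def __init__(self, conv_dim=64):
        super().__init__()
        self.deconv1 = nn.ConvTranspose2d(conv_dim, conv_dim, 4, stride=2, padding=1)
        # deconv2 takes 2*conv_dim channels because the skip activation
        # is concatenated to deconv1's output along the channel dimension
        self.deconv2 = nn.ConvTranspose2d(conv_dim * 2, 3, 4, stride=2, padding=1)
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, inputs):
        # nn.Sequential forwards the encoder's tuple as a single argument
        latent, skip = inputs
        d1 = self.leaky_relu(self.deconv1(latent))
        # skip connection: concatenate along channels
        d1 = torch.cat([d1, skip], dim=1)
        return torch.tanh(self.deconv2(d1))

# The two models stay separate, so the same encoder
# instance can feed several decoders.
model = nn.Sequential(Encoder(), Decoder())
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 3, 32, 32])
```

If you need more than one skip connection, return a tuple such as (latent, conv3, conv2, conv1) and widen the input channels of each corresponding decoder layer in the same way.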