python - TensorFlow: calling the same model twice doubles its variables
Question
I am trying to use a pretrained model that I trained myself. It looks like this:
def Conv2D(x, channel, kernel_size, stride=1, bias=True, padding='same', name="conv2d", reuse=False):
    with tf.variable_scope(name):
        conv_out = tf.keras.layers.Conv2D(channel, kernel_size=kernel_size, strides=(stride, stride), padding=padding, use_bias=bias)(x)
    return conv_out

class MyNet_Pretrain():
    def __init__(self):
        self.network_name = "net_pretrain"

    def _mynet_pretrain(self, l1):
        with tf.variable_scope(self.network_name) as vs:
            conv1_down = ReLU(Conv2D(l1, 32, 3, 1, True, name='conv1_down'))
            conv2_down = ReLU(Conv2D(conv1_down, 64, 3, 2, True, name='conv2_down'))
            conv3_down = ReLU(Conv2D(conv2_down, 128, 3, 2, True, name='conv3_down'))
            conv3_down_upsample = tf.image.resize_bicubic(conv3_down, [128, 128], True)
            conv1_up = ReLU(Conv2D(conv3_down_upsample, 64, 3, 1, True, name='conv1_up'))
            conv1_up_upsample = tf.image.resize_bicubic(conv1_up, [256, 256], True)
            conv2_up = ReLU(Conv2D(conv1_up_upsample, 64, 3, 1, True, name='conv2_up'))
            conv3_up = Conv2D(conv2_up, 4, 1, 1, True, name='conv3_up')
            return conv3_down, conv3_up
I want to use it to compute feature maps for both the generated image and the target image, like this:
net_pretrain = MyNet_Pretrain()
outputs_pretrain1 = net_pretrain._mynet_pretrain(outputs)
outputs_pretrain2 = net_pretrain._mynet_pretrain(target_pl)
However, when I checked all the trainable variables, I found that the network's variables had doubled:
[<tf.Variable 'net_pretrain/conv1_down/conv2d_9/kernel:0' shape=(3, 3, 4, 32) dtype=float32>, ...<tf.Variable 'net_pretrain_1/conv1_down/conv2d_15/kernel:0' shape=(3, 3, 4, 32) dtype=float32>, ...]
I am not sure where the problem is. Thanks a lot!
Solution
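The variables double because each call to `_mynet_pretrain` constructs brand-new `tf.keras.layers.Conv2D` objects, and `tf.variable_scope(self.network_name)` is entered with its default `reuse=False`, so the second call opens a uniquified scope (`net_pretrain_1` in the variable list above) holding a second set of kernels. One common TF 1.x fix is to open the scope with `reuse=tf.AUTO_REUSE` and build the convolutions with `tf.get_variable`, which honors the scope's reuse mode. A minimal sketch of that approach (written against `tf.compat.v1` so it also runs under TF 2.x; the class and variable names here are illustrative, not the original code):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

def conv2d(x, out_channels, kernel_size, stride=1, name="conv2d"):
    # get_variable honors the reuse mode of the enclosing variable_scope,
    # so the second forward pass returns the same kernel/bias variables.
    in_channels = int(x.shape[-1])
    with tf1.variable_scope(name):
        w = tf1.get_variable(
            "kernel", [kernel_size, kernel_size, in_channels, out_channels])
        b = tf1.get_variable(
            "bias", [out_channels], initializer=tf1.zeros_initializer())
        y = tf.nn.conv2d(x, w, strides=[1, stride, stride, 1], padding="SAME")
        return tf.nn.bias_add(y, b)

class MyNetPretrain:
    def __init__(self):
        self.network_name = "net_pretrain"

    def forward(self, x):
        # AUTO_REUSE creates the variables on the first call and reuses
        # them on every later call instead of opening "net_pretrain_1".
        with tf1.variable_scope(self.network_name, reuse=tf1.AUTO_REUSE):
            h = tf.nn.relu(conv2d(x, 32, 3, name="conv1_down"))
            return conv2d(h, 4, 1, name="conv3_up")

net = MyNetPretrain()
outputs = tf1.placeholder(tf.float32, [None, 64, 64, 4])
target_pl = tf1.placeholder(tf.float32, [None, 64, 64, 4])

out1 = net.forward(outputs)
n_after_first = len(tf1.trainable_variables())
out2 = net.forward(target_pl)
n_after_second = len(tf1.trainable_variables())
# n_after_first == n_after_second: the second pass added no new variables.
```

The other standard fix, closer to idiomatic Keras, is to instantiate each `tf.keras.layers.Conv2D` object once (for example in `__init__`) and call those same layer objects on both `outputs` and `target_pl`; a Keras layer builds its weights on its first call and reuses them on every subsequent call.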