python - How do I correctly pass weights and biases to the optimizer in TensorFlow 1.15?
Question
I am trying to implement the U-Net architecture with TensorFlow 1.15. These are the first convolutional layers:
import tensorflow as tf

print("############################### VERSION TENSORFLOW ###############################################")
print(tf.__version__)
print("############################### VERSION TENSORFLOW ###############################################")

def u_net_model(feature):
    w_init = tf.truncated_normal_initializer(stddev=0.01)
    print("--------------------------------------------------------------------------------- w_init")
    print(w_init)
    b_init = tf.constant_initializer(value=0.40)
    gamma_init = tf.random_normal_initializer(1., 0.02)
    with tf.variable_scope("u_network", reuse=True):
        x = tf.keras.Input(batch_size=5, tensor=feature)
        #y = tf.keras.layers.Dense(16, activation='softmax')(x)
        conv1 = tf.keras.layers.Conv2D(64, 4, (2, 2), activation='relu', padding='same', kernel_initializer=w_init, bias_initializer=b_init, name="convolution1")(x)
        print("conv1")
        print(conv1)
        conv2 = tf.keras.layers.Conv2D(128, 4, (2, 2), activation='relu', padding='same', kernel_initializer=w_init, bias_initializer=b_init, name="convolution2")(conv1)
        print("conv2")
        print(conv2)
        conv2 = tf.keras.layers.BatchNormalization()(conv2)
        print("conv2")
        print(conv2)
In main.py I have:
nw, nh, nz = X_train.shape[1:]
t_image_good = tf.placeholder('float32', [25, nw, nh, nz], name='good_image')
print(t_image_good)
t_image_good_samples = tf.placeholder('float32', [50, nw, nh, nz], name='good_image_samples')
print(t_image_good_samples)

t_PROVA = t_image_good
t_PROVA_samples = t_image_good_samples

g_nmse_a = tf.sqrt(tf.reduce_sum(tf.squared_difference(t_PROVA, t_PROVA), axis=[1, 2, 3]))
g_nmse_b = tf.sqrt(tf.reduce_sum(tf.square(t_PROVA), axis=[1, 2, 3]))
g_nmse = tf.reduce_mean(g_nmse_a / g_nmse_b)
generator_loss = g_alpha * g_nmse
print("generator_loss")
# generator_loss is a tensor
print(generator_loss)

learning_rate = 0.0001
beta = 0.5
print("\n")

generator_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'u_network')
print("--------------------------------------- generator_variables")
print(generator_variables)
generator_gradient_optimum = tf.train.AdamOptimizer(learning_rate, beta1=beta).minimize(generator_loss, var_list=generator_variables)
When I run it, I get:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'u_network/convolution1/kernel:0' shape=(4, 4, 1, 64) dtype=float32>", "<tf.Variable 'u_network/convolution1/bias:0' shape=(64,) dtype=float32>", "<tf.Variable 'u_network/convolution2/kernel:0' shape=(4, 4, 64, 128) dtype=float32>", "<tf.Variable 'u_network/convolution2/bias:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization/gamma:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization/beta:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/convolution3/kernel:0' shape=(4, 4, 128, 256) dtype=float32>", "<tf.Variable 'u_network/convolution3/bias:0' shape=(256,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization_1/gamma:0' shape=(256,) dtype=float32>"
...followed by many more lines of the same kind, and finally ending with:
and loss Tensor("mul_10:0", shape=(), dtype=float32).
What I want to do is pass the parameters, the weights and biases, so that the AdamOptimizer can run.
What am I doing wrong?
Solution
Nowhere in the code you provided do you actually call u_net_model. The graph you build contains only a few placeholders and some operations on them. The operations you use, tf.square and tf.squared_difference, have no learnable parameters, so the optimizer has nothing to minimize (or converge on).
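The fix can be sketched as follows: pass the placeholder through the network and build the loss on the network's output, so gradients flow back to its kernels and biases. This is a minimal sketch, not the full U-Net: the single convolution, the 1x1 output layer, and the image shape (nw, nh, nz) are illustrative assumptions, and it uses the tf.compat.v1 names so it also runs under TensorFlow 2.x.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def u_net_model(feature):
    # Minimal stand-in for the real U-Net: one conv layer is enough to
    # put trainable variables into the "u_network" scope.
    w_init = tf.truncated_normal_initializer(stddev=0.01)
    b_init = tf.constant_initializer(value=0.40)
    with tf.variable_scope("u_network", reuse=tf.AUTO_REUSE):
        conv1 = tf.layers.conv2d(feature, 64, 4, strides=1, padding='same',
                                 activation=tf.nn.relu,
                                 kernel_initializer=w_init,
                                 bias_initializer=b_init,
                                 name="convolution1")
        # 1x1 conv back to one channel so the output matches the input shape
        return tf.layers.conv2d(conv1, 1, 1, padding='same', name="output")

nw, nh, nz = 64, 64, 1  # illustrative image shape
t_image_good = tf.placeholder('float32', [None, nw, nh, nz], name='good_image')

# The crucial step: call the model on the placeholder. The loss below
# now depends on the network's kernels and biases.
prediction = u_net_model(t_image_good)

# NMSE between prediction and target (note that squared_difference of a
# tensor with itself, as in the question, is identically zero).
g_nmse_a = tf.sqrt(tf.reduce_sum(tf.squared_difference(prediction, t_image_good), axis=[1, 2, 3]))
g_nmse_b = tf.sqrt(tf.reduce_sum(tf.square(t_image_good), axis=[1, 2, 3]))
generator_loss = tf.reduce_mean(g_nmse_a / g_nmse_b)

generator_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'u_network')
train_op = tf.train.AdamOptimizer(0.0001, beta1=0.5).minimize(generator_loss, var_list=generator_variables)
```

With the model call in place, generator_variables is no longer empty and minimize can compute a gradient for every variable in the list.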