TensorFlow custom loss feeding

Problem description

I want to feed a precomputed loss value (i.e. self.calc_loss / d_reward_sum, whose value is 1.0288568) into self.sess.run(tf.compat.v1.train.AdamOptimizer().minimize(d_reward_sum)), but this raises the error

AttributeError: 'numpy.dtype' object has no attribute 'base_dtype'

If I use a placeholder instead,

self.calc_loss = tf.placeholder(tf.float32)
self.train_opt = tf.compat.v1.train.AdamOptimizer().minimize(self.calc_loss)
...
train_modal = self.sess.run([self.train_opt], feed_dict={
            self.calc_loss: d_reward_sum,
        })

then it gives this error:

> ([str(v) for _, v in grads_and_vars], loss))
> ValueError: No gradients provided for any variable, check your graph for ops that do not 
> support gradients, between variables ["<tf.Variable 'Variable:0'
> shape=(4, 24) dtype=float32_ref>", "<tf.Variable 'Variable_1:0'
> shape=(24,) dtype=float32_ref>", "<tf.Variable 'Variable_2:0'
> shape=(24, 12) dtype=float32_ref>", "<tf.Variable 'Variable_3:0'
> shape=(12,) dtype=float32_ref>", "<tf.Variable 'Variable_4:0'
> shape=(12, 4) dtype=float32_ref>", "<tf.Variable 'Variable_5:0'
> shape=(4,) dtype=float32_ref>", "<tf.Variable 'Variable_6:0' shape=(4,
> 2) dtype=float32_ref>"] and loss Tensor("Placeholder:0",
> dtype=float32).

And if I run

self.sess.run(tf.compat.v1.train.AdamOptimizer().minimize(tf.constant(4.1, tf.float32, name='A')))

then I get the same error as above.

What value does it expect?

Tags: tensorflow

Solution
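
In graph-mode TensorFlow 1.x, minimize() symbolically differentiates the loss with respect to the trainable variables, so the loss must be a Tensor produced by ops on those variables. A plain numpy scalar is not a Tensor at all (hence the base_dtype AttributeError), and a bare placeholder or tf.constant has no gradient path to any variable (hence "No gradients provided for any variable"). A minimal sketch of the correct wiring, assuming a small dense layer standing in for the model behind the variables in the traceback:

```python
# Sketch: the loss must be built *inside* the graph from the trainable
# variables, so minimize() can trace gradients from loss back to them.
# The shapes and model here are illustrative assumptions, not the
# questioner's actual network.
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 4))  # inputs
y = tf.compat.v1.placeholder(tf.float32, shape=(None, 2))  # targets
w = tf.compat.v1.get_variable("w", shape=(4, 2))           # trainable variable

logits = tf.matmul(x, w)                   # loss depends on w through graph ops
loss = tf.reduce_mean(tf.square(logits - y))

# minimize() can now compute d(loss)/d(w); feeding a precomputed float cannot.
train_op = tf.compat.v1.train.AdamOptimizer().minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    loss_val, _ = sess.run(
        [loss, train_op],
        feed_dict={x: np.ones((8, 4), np.float32),
                   y: np.zeros((8, 2), np.float32)})
```

If the scalar really comes from outside the graph (e.g. a summed environment reward, as d_reward_sum suggests), it can still enter training as a feed, but only as a *factor* in a graph-defined loss, such as a policy-gradient loss of the form negative log-probability times reward, so that gradients still flow through the network's variables.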

