Computing a recursive mean in a custom Keras loss function

Problem description

I have an unconventional use case where I need to compute a recursive mean inside a custom Keras loss function.

The recursive mean is computed as (mathjax, I hope) $m_t = m_{t-1} + (x_t - m_{t-1}) / t$. So I've hacked together a solution where I create the tf.Variables outside the model, like so:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense

def my_loss(m_t, m_t_1, t):
    def _my_loss(y_true, y_pred):
        m_t_1.assign(m_t)
        t.assign_add(1.)

        x = K.mean(y_true * y_pred)
        m_t.assign(m_t_1 + (x - m_t_1) / t)

        return -1. * m_t

    return _my_loss

input_layer = Input(shape=(5,))
output_layer = Dense(1, activation='linear')(input_layer)
m_t = tf.Variable(initial_value=0., shape=tf.TensorShape(None), trainable=False)
m_t_1 = tf.Variable(initial_value=0., shape=tf.TensorShape(None), trainable=False)
t = tf.Variable(initial_value=0., shape=tf.TensorShape(None), trainable=False)
loss = my_loss(m_t, m_t_1, t)

model = keras.Model(input_layer, output_layer)
model.compile(loss=loss, optimizer='sgd')

model.train_on_batch(x=np.random.normal(0., 1., (10, 5)), y=np.random.normal(0., 1., (10, 1)))

I get the following error:

ValueError: in converted code:

    C:\Users\Steven\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py:305 train_on_batch  *
        outs, total_loss, output_losses, masks = (
    C:\Users\Steven\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\keras\engine\training_eager.py:273 _process_single_batch
        model.optimizer.apply_gradients(zip(grads, trainable_weights))
    C:\Users\Steven\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\keras\optimizer_v2\optimizer_v2.py:426 apply_gradients
        grads_and_vars = _filter_grads(grads_and_vars)
    C:\Users\Steven\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\keras\optimizer_v2\optimizer_v2.py:1039 _filter_grads
        ([v.name for _, v in grads_and_vars],))

    ValueError: No gradients provided for any variable: ['dense_2/kernel:0', 'dense_2/bias:0'].

Any idea how to keep state inside a loss function?
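For reference, the recursion above is just the incremental form of the ordinary arithmetic mean; a quick pure-Python sanity check (the helper name `recursive_mean` is hypothetical, not from any library):

```python
def recursive_mean(xs):
    """Running mean via the recursion m_t = m_{t-1} + (x_t - m_{t-1}) / t."""
    m, t = 0.0, 0
    for x in xs:
        t += 1
        m += (x - m) / t  # fold the new sample into the running mean
    return m

print(recursive_mean([1.0, 2.0, 3.0, 4.0]))  # 2.5, same as sum/len
```

Each step only needs the previous mean and the count, which is why the loss function above tries to carry `m_t` and `t` across batches as `tf.Variable` state.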

Tags: keras, tensorflow2.0

Solution
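One possible direction (a sketch, not a verified fix, assuming TF 2.x): `-1. * m_t` returns the value of a `tf.Variable`, and `assign` ops carry no gradient, so the loss has no gradient path back to `y_pred`, hence `No gradients provided for any variable`. Returning the updated mean as a plain tensor keeps the path through `x` intact, while the variable only stores detached state. The factory name `make_loss` is illustrative:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense

def make_loss(m_t, t):
    def _loss(y_true, y_pred):
        t.assign_add(1.)
        x = tf.reduce_mean(y_true * y_pred)
        # Build the update as a tensor so gradients flow through x to y_pred.
        new_m = m_t + (x - m_t) / t
        # Persist the state detached from the graph; assign() is non-differentiable anyway.
        m_t.assign(tf.stop_gradient(new_m))
        return -1. * new_m
    return _loss

m_t = tf.Variable(0., trainable=False)
t = tf.Variable(0., trainable=False)

inp = Input(shape=(5,))
out = Dense(1, activation='linear')(inp)
model = keras.Model(inp, out)
model.compile(loss=make_loss(m_t, t), optimizer='sgd')

model.train_on_batch(x=np.random.normal(0., 1., (10, 5)),
                     y=np.random.normal(0., 1., (10, 1)))
```

Note that `m_t_1` from the question is no longer needed: the pre-update value of `m_t` is read into the tensor expression before `assign` overwrites it.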
