Error implementing a custom WARP loss function in Keras/TensorFlow: LookupError: No gradient defined for operation

Problem Description

I am writing a custom loss function; I have written others before this one and they work fine. This time, however, I get a gradient error:

LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)

I am not sure whether it is the way I handle things inside the TensorFlow while loop, but if I open a Python terminal I do get a float value:

import tensorflow as tf
from warp_loss import warp_loss  # assuming the function below is saved in warp_loss.py
a = [0,1,0,1,1,1,0,0,1]
b = [0.5,0.5,0.3,0.7,0.8,0.9,0.,0.2,0.2]
a = tf.constant(a)
b = tf.constant(b)
sess = tf.InteractiveSession()
loss = warp_loss(a,b)
loss.eval()
0.41588834
loss
<tf.Tensor 'while_3/Exit_1:0' shape=() dtype=float32>

from keras import backend as K
import tensorflow as tf

def warp_loss(y_true, y_pred):
    """
    Implementation of the WARP loss function

    Arguments:
    y_true -- true binary labels; required by the Keras loss signature and
              used here to split predictions into positives and negatives.
    y_pred -- prediction values in [0, 1].

    Returns:
    loss -- real number, value of the loss
    """

    neg_mask  = tf.where(tf.equal(y_true, 0), tf.ones_like(y_pred), tf.zeros_like(y_pred))

    # Get positive and negative scores (boolean_mask expects a bool mask)
    positives = tf.boolean_mask(y_pred, tf.cast(y_true, tf.bool))
    negatives = tf.boolean_mask(y_pred, tf.cast(neg_mask, tf.bool))

    loss = tf.constant(0, dtype=tf.float32)
    p    = tf.constant(0)

    # Loop over all positives
    while_condition = lambda p, loss: tf.less(p, tf.shape(positives)[0])

    def sampling(p, loss):
        # Simulate random sampling without replacement
        shuffled = tf.random.shuffle(negatives)

        # Index of the first negative scored above the current positive,
        # or -1 if no negative is above it (low loss)
        above    = K.cast(K.greater(shuffled, positives[p]), K.floatx())
        sample_i = tf.cond(K.sum(above) > 0,
                           lambda: tf.cast(tf.argmax(above), tf.float32),
                           lambda: tf.cast(-1, tf.float32))

        # Every positive is equally weighted (sample_i is 0-based, hence the +1)
        L        = tf.log(tf.cast(tf.shape(negatives)[0], tf.float32) / (sample_i + 1.))
        distance = tf.cast(shuffled[tf.cast(sample_i, tf.int32)], tf.float32) - tf.cast(positives[p], tf.float32)

        # Sum up loss
        individual_loss = tf.cond(sample_i >= 0,
                                  lambda: L * distance,
                                  lambda: tf.cast(0, tf.float32))

        return [tf.add(p, 1), tf.add(loss, individual_loss)]

    _, loss = tf.while_loop(while_condition, sampling, [p, loss])

    return loss
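
For reference, here is a minimal NumPy sketch of what the while loop computes; warp_loss_np is a hypothetical helper, not part of the original code, intended only for eyeballing values on small inputs:

import numpy as np

def warp_loss_np(y_true, y_pred, seed=0):
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=np.float32)
    positives = y_pred[y_true]
    negatives = y_pred[~y_true]
    loss = 0.0
    for p in positives:
        shuffled = rng.permutation(negatives)   # sampling without replacement
        above = shuffled > p
        if above.any():
            i = int(above.argmax())             # 0-based rank of first violating negative
            loss += np.log(len(negatives) / (i + 1.0)) * (shuffled[i] - p)
    return loss

print(warp_loss_np([0,1,0,1,1,1,0,0,1], [0.5,0.5,0.3,0.7,0.8,0.9,0.,0.2,0.2]))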

I expect the output to be a single float value, just like my other loss functions.

My input is an (i, j, channels) array, and the output is a binary list of potential classes. I call train_on_batch with 1 sample per batch (this is where it fails):

 File "train.py", line 319, in <module>
    batch_out = model.train_on_batch(np.array([npzobj['features']]), np.array([npzobj['targets']]))
  File "/lib/python3.5/site-packages/keras/engine/training.py", line 1216, in train_on_batch
    self._make_train_function()
  File "/lib/python3.5/site-packages/keras/engine/training.py", line 509, in _make_train_function
    loss=self.total_loss)
  File "/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/lib/python3.5/site-packages/keras/optimizers.py", line 184, in get_updates
    grads = self.get_gradients(loss, params)
  File "/lib/python3.5/site-packages/keras/optimizers.py", line 89, in get_gradients
    grads = K.gradients(loss, params)
  File "/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2757, in gradients
    return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
  File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 923, in _GradientsHelper
    (op.name, op.type))
LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)
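
For context, the error is raised only once Keras builds the training function and calls tf.gradients, which is why the forward-only eval() above succeeds. A minimal reproduction sketch (the model and shapes here are hypothetical, assumed only for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from warp_loss import warp_loss  # the custom loss above

model = Sequential([Dense(9, activation='sigmoid', input_shape=(16,))])
model.compile(optimizer='adam', loss=warp_loss)

x = np.random.rand(1, 16).astype('float32')
y = np.random.randint(0, 2, size=(1, 9)).astype('float32')
model.train_on_batch(x, y)  # triggers _make_train_function -> tf.gradients -> LookupError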

Tags: tensorflow, error-handling, keras, deep-learning, loss-function

Solution

Apparently random shuffle has no gradient defined, but following this solution, GPU kernel for tf.random_shuffle, solved my problem: shuffling integer indices and gathering keeps the non-differentiable RandomShuffle op out of the gradient path, since tf.gather does have a registered gradient.

shuffled  = tf.gather(negatives, tf.random.shuffle(tf.range(tf.shape(negatives)[0])))

# Instead of

shuffled  = tf.random.shuffle(negatives)
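
To see why this works, here is a minimal gradient sanity check (a sketch, assuming TF 1.x graph mode as in the question; the placeholder and shapes are made up for illustration):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[5])
idx = tf.random.shuffle(tf.range(tf.shape(x)[0]))  # shuffles ints only, no gradient needed
shuffled = tf.gather(x, idx)                       # gradient flows through tf.gather
loss = tf.reduce_sum(tf.square(shuffled))
grad = tf.gradients(loss, [x])[0]                  # raises LookupError if x itself is shuffled

with tf.Session() as sess:
    # Gather gradients come back as IndexedSlices; densify before printing
    print(sess.run(tf.convert_to_tensor(grad), feed_dict={x: [1., 2., 3., 4., 5.]}))
    # -> [ 2.  4.  6.  8. 10.], i.e. 2*x regardless of the permutation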
