Implementing a gradient penalty loss with TensorFlow 2

Problem description

Good morning,

I am trying to implement an improved WGAN for 1D data, as described in this paper: https://arxiv.org/pdf/1704.00028.pdf

It has already been implemented as an example in the keras-contrib GitHub repository: https://github.com/keras-team/keras-contrib/blob/master/examples/improved_wgan.py However, this implementation of the gradient penalty loss no longer works with TF2. K.gradients() returns [None].

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:505 train_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:467 train_step  **
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:143 __call__
        losses = self.call(y_true, y_pred)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:246 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    <ipython-input-7-4f0896d0107b>:104 gradient_penalty_loss
        gradients_sqr = K.square(gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:2189 square
        return math_ops.square(x)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:9964 square
        "Square", x=x, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:488 _apply_op_helper
        (input_name, err))

    ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.

Here is a complete example of the problem: https://colab.research.google.com/drive/11dcMKoiCigTnEn7QvmjqLNrJdmFztByT
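For what it's worth, here is a minimal sketch of what I assume is happening: inside the tf.function-traced training step, y_pred is not computed from the averaged_samples tensor captured by the loss, so K.gradients has no dependency to differentiate through and yields None (the names below are just for illustration):

import tensorflow as tf
from tensorflow.keras import backend as K

@tf.function
def traced_loss(y_pred, averaged_samples):
    # In the traced graph, y_pred is not computed from averaged_samples,
    # so there is no path to differentiate along and K.gradients yields
    # [None] instead of a gradient tensor.
    grads = K.gradients(y_pred, [averaged_samples])
    print("gradient is None:", grads[0] is None)  # printed at trace time -> True
    return y_pred

traced_loss(tf.constant([[0.5]]), tf.constant([[1.0]]))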

Does anyone know what changed? Any idea how to fix this?

UPDATE: This ignores the error raised while the computation graph is built. It then appears to run:

import numpy as np
from tensorflow.keras import backend as K

def gradient_penalty_loss(y_true, y_pred, averaged_samples):
  # Gradient of the critic output w.r.t. the interpolated samples.
  gradients = K.gradients(y_pred, averaged_samples)[0]
  try:
    gradients_sqr = K.square(gradients)
  except ValueError:
    # K.gradients() returned None, so skip the penalty for this trace.
    print("Gradients returned None")
    return 0
  # Per-sample L2 norm of the gradient, then penalise its distance from 1.
  gradients_sqr_sum = K.sum(gradients_sqr, axis=np.arange(1, len(gradients_sqr.shape)))
  gradient_l2_norm = K.sqrt(gradients_sqr_sum)

  gradient_penalty = K.square(1 - gradient_l2_norm)

  return K.mean(gradient_penalty)

Even so, the loss keeps getting higher and higher. Is the gradient penalty loss being ignored?

[loss plot]

Tags: python, tensorflow, keras, tensorflow2.0, tensorflow2.x

Solution


If you do what is suggested in the UPDATE, TF will simply ignore the loss function.
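A small sketch of why, assuming my reading of the traceback is right: the ValueError fires while the graph is being traced, the except branch runs at trace time, and the graph is built with a constant zero where the penalty should be (penalty_when_traced is a made-up name for illustration):

import tensorflow as tf
from tensorflow.keras import backend as K

@tf.function
def penalty_when_traced(y_pred, averaged_samples):
    grads = K.gradients(y_pred, averaged_samples)[0]  # None, as in the question
    try:
        grads_sqr = K.square(grads)                   # raises ValueError at trace time
    except ValueError:
        return tf.constant(0.0)                       # a constant 0 is baked into the graph
    return K.mean(K.square(1.0 - K.sqrt(K.sum(grads_sqr, axis=1))))

# Every call takes the except branch, so the "penalty" is always exactly 0:
print(penalty_when_traced(tf.constant([[0.5]]), tf.constant([[1.0]])))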

With TensorFlow 2, the old approach no longer seems to be possible. I ended up changing my code to adapt it to this way of creating models. What would I suggest?

  1. Create the generator/discriminator models with Keras
  2. Combine them by subclassing the tf.keras.Model class, as in the WGAN here: https://github.com/timsainb/tensorflow2-generation-models (see the sketch below)
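To make (2) concrete, here is a minimal sketch of the subclassing approach for 1D data. It is not taken from that repository; LATENT_DIM, SAMPLE_LEN, the layer sizes and the optimizer settings are placeholders, and the gradient penalty is computed with tf.GradientTape inside a custom train_step instead of K.gradients:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 32    # placeholder latent size
SAMPLE_LEN = 128   # placeholder length of a 1D sample
GP_WEIGHT = 10.0   # gradient penalty weight from the WGAN-GP paper


def build_generator():
    return keras.Sequential([
        keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(SAMPLE_LEN),
    ])


def build_critic():
    return keras.Sequential([
        keras.Input(shape=(SAMPLE_LEN,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),  # no activation: the Wasserstein critic outputs a score
    ])


class WGANGP(keras.Model):
    def __init__(self, generator, critic, n_critic=5):
        super().__init__()
        self.generator = generator
        self.critic = critic
        self.n_critic = n_critic

    def compile(self, g_opt, c_opt):
        super().compile()
        self.g_opt = g_opt
        self.c_opt = c_opt

    def gradient_penalty(self, real, fake):
        batch_size = tf.shape(real)[0]
        # Random points on the line between real and fake samples.
        alpha = tf.random.uniform([batch_size, 1], 0.0, 1.0)
        interpolated = alpha * real + (1.0 - alpha) * fake
        with tf.GradientTape() as tape:
            tape.watch(interpolated)
            pred = self.critic(interpolated, training=True)
        grads = tape.gradient(pred, interpolated)  # replaces K.gradients
        norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))
        return tf.reduce_mean(tf.square(norm - 1.0))

    def train_step(self, data):
        real = data[0] if isinstance(data, tuple) else data
        batch_size = tf.shape(real)[0]

        # Train the critic n_critic times per generator update.
        for _ in range(self.n_critic):
            noise = tf.random.normal([batch_size, LATENT_DIM])
            with tf.GradientTape() as tape:
                fake = self.generator(noise, training=True)
                real_score = self.critic(real, training=True)
                fake_score = self.critic(fake, training=True)
                gp = self.gradient_penalty(real, fake)
                c_loss = (tf.reduce_mean(fake_score)
                          - tf.reduce_mean(real_score)
                          + GP_WEIGHT * gp)
            c_grads = tape.gradient(c_loss, self.critic.trainable_variables)
            self.c_opt.apply_gradients(zip(c_grads, self.critic.trainable_variables))

        # Train the generator once.
        noise = tf.random.normal([batch_size, LATENT_DIM])
        with tf.GradientTape() as tape:
            fake = self.generator(noise, training=True)
            g_loss = -tf.reduce_mean(self.critic(fake, training=True))
        g_grads = tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_opt.apply_gradients(zip(g_grads, self.generator.trainable_variables))

        return {"c_loss": c_loss, "g_loss": g_loss}


wgan = WGANGP(build_generator(), build_critic())
wgan.compile(keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.9),
             keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.9))
# wgan.fit(real_data, batch_size=64, epochs=...)  # real_data: array of shape (N, SAMPLE_LEN)

The point of this layout is that train_step owns the whole update, so the interpolated samples and the critic outputs live inside the same GradientTape context and the penalty gradient is never None.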
