How to use multiple gradients with TensorFlow GradientTape?

Problem description

Three neural networks are connected as in the code below. How can we get two gradients with respect to the initial network? The first gradient works, but the second one returns None tensors. It seems they are not connected to each other for the gradient computation. What is wrong here?

with tf.GradientTape() as tape1:
    with tf.GradientTape() as tape2:
        output1 = NN_model1(input1, training=True)
        output2 = NN_model2(output1, training=True)
        output3 = NN_model3([input1, output1, output2], training=True)
        loss1 = -tf.math.reduce_mean(output3)
        loss2 = -tf.math.reduce_mean(output2)
    grad1 = tape2.gradient(loss1, NN_model1.trainable_variables)
grad2 = tape1.gradient(loss2, grad1)
optimizer.apply_gradients(zip(grad2, NN_model1.trainable_variables))

Tags: python, tensorflow, keras, neural-network, gradient

Solution


In your code, tape1.gradient(loss2, grad1) asks for the derivative of loss2 with respect to grad1, but loss2 was computed before grad1 even existed, so the two are not connected and the tape returns None. The second argument of tape.gradient should be the variables (or watched tensors) you want gradients for. You do not need two nested tapes here; I think the correct approach should be as follows:

with tf.GradientTape() as tape:
    output1 = NN_model1(input1, training=True)
    output2 = NN_model2(output1, training=True)
    output3 = NN_model3([input1, output1, output2], training=True)
    loss1 = -tf.math.reduce_mean(output3)
    loss2 = -tf.math.reduce_mean(output2)
# Passing a list of targets returns the gradient of their sum
# (loss1 + loss2) with respect to NN_model1's variables.
grad = tape.gradient([loss1, loss2], NN_model1.trainable_variables)
optimizer.apply_gradients(zip(grad, NN_model1.trainable_variables))
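If you actually need the two gradients separately (for example, to weight them differently before applying them), a persistent tape lets you call gradient() more than once. Below is a minimal, self-contained sketch of that approach; the small Dense models, the input shape, and the 0.5 weighting are placeholders standing in for your NN_model1/2/3, and the multi-input call to NN_model3 is approximated with a concat:

import tensorflow as tf

# Placeholder models and data standing in for the ones in the question.
NN_model1 = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu")])
NN_model2 = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu")])
NN_model3 = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()
input1 = tf.random.normal([4, 8])

# persistent=True allows gradient() to be called more than once on the same tape.
with tf.GradientTape(persistent=True) as tape:
    output1 = NN_model1(input1, training=True)
    output2 = NN_model2(output1, training=True)
    output3 = NN_model3(tf.concat([input1, output1, output2], axis=-1), training=True)
    loss1 = -tf.math.reduce_mean(output3)
    loss2 = -tf.math.reduce_mean(output2)

# Two separate gradients with respect to the same variables; both are non-None
# because output2 and output3 each depend on NN_model1's output.
grad1 = tape.gradient(loss1, NN_model1.trainable_variables)
grad2 = tape.gradient(loss2, NN_model1.trainable_variables)
del tape  # free the persistent tape's resources

# Combine them however you like before applying, e.g. a weighted sum.
combined = [g1 + 0.5 * g2 for g1, g2 in zip(grad1, grad2)]
optimizer.apply_gradients(zip(combined, NN_model1.trainable_variables))

With equal weights this is equivalent to the single-tape version above, since tape.gradient([loss1, loss2], ...) already returns the gradient of the sum of the two losses.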
