Deep learning with TensorFlow: a problem with a GradientTape implementation

Problem description

The code below shows three different versions of my training loop. In the first two cases, tape.gradient returns None for every weight; only the last version works.

This was my first attempt. Based on what I found online, the with statement records the operations on the trainable variables that run inside it:

for epoch in range(epochsize):
    for batch in batchlist:
        loss_value = tf.constant(0.)
        mini_batch_losses = []
        for seqref in batch:
            seqref = int(seqref)
            with tf.GradientTape() as tape:
                X_train, y_train = loadvalue(seqref)  # load the elements
                logits = model(X_train, training=True)
                loss_value = loss_fn(y_train, logits)
                mini_batch_losses.append(loss_value)
        loss_avg = tf.reduce_mean(mini_batch_losses)
        print("batch " + str(seqref) + " losses:" + str(loss_avg.numpy()))
        grads = tape.gradient(loss_avg, model.trainable_weights)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_weights))
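What this version actually records can be reproduced in miniature (a toy variable and values, not the author's model): a fresh tape is created on every loop pass, so after the loop `tape` is only the last iteration's tape, and the averaging happens outside any tape.

```python
import tensorflow as tf

w = tf.Variable(1.0)
losses = []

for x in [1.0, 2.0, 3.0]:
    # A fresh tape is created on every pass, as in the loop above,
    # so after the loop `tape` is only the LAST iteration's tape.
    with tf.GradientTape() as tape:
        losses.append(w * x)

# The mean is computed outside any tape, so it has no recorded
# path back to w, and the gradient comes back as None.
loss_avg = tf.reduce_mean(tf.stack(losses))
g = tape.gradient(loss_avg, w)
print(g)  # None
```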

But for some reason it didn't work, so I tried (somewhat at random) moving the with statement one level up, and that didn't work either:

for epoch in range(epochsize):
    for batch in batchlist:
        loss_value = tf.constant(0.)
        mini_batch_losses = []
        with tf.GradientTape() as tape:
            for seqref in batch:
                seqref = int(seqref)
                X_train, y_train = loadvalue(seqref)  # load the elements
                logits = model(X_train, training=True)
                loss_value = loss_fn(y_train, logits)
                mini_batch_losses.append(loss_value)
        loss_avg = tf.reduce_mean(mini_batch_losses)
        print("batch " + str(seqref) + " losses:" + str(loss_avg.numpy()))
        grads = tape.gradient(loss_avg, model.trainable_weights)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_weights))
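The failure mode of this second version can also be sketched with a toy variable (an assumption, not the real model): the per-item losses are recorded, but `tf.reduce_mean` runs after the block exits, so the averaged loss has no recorded path back to the variable.

```python
import tensorflow as tf

w = tf.Variable(2.0)
losses = []

with tf.GradientTape() as tape:
    for x in [1.0, 2.0, 3.0]:
        losses.append((w * x - 1.0) ** 2)  # recorded: runs inside the tape

# reduce_mean runs AFTER the block exits, so it is not recorded and the
# averaged loss is disconnected from w as far as the tape is concerned.
loss_avg = tf.reduce_mean(tf.stack(losses))
g = tape.gradient(loss_avg, w)
print(g)  # None
```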

Finally, moving the averaging inside the with statement made it work:

for epoch in range(epochsize):
    for batch in batchlist:
        loss_value = tf.constant(0.)
        mini_batch_losses = []
        with tf.GradientTape() as tape:
            for seqref in batch:
                seqref = int(seqref)
                X_train, y_train = loadvalue(seqref)  # load the elements
                logits = model(X_train, training=True)
                loss_value = loss_fn(y_train, logits)
                mini_batch_losses.append(loss_value)
            loss_avg = tf.reduce_mean(mini_batch_losses)
        print("batch " + str(seqref) + " losses:" + str(loss_avg.numpy()))
        grads = tape.gradient(loss_avg, model.trainable_weights)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_weights))

The main problem is that I don't know whether this is a fluke or genuinely correct, so I need a more detailed understanding of how GradientTape works. Can you help me?

Tags: python, tensorflow, tensorflow-2.0, gradienttape

Solution
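A minimal sketch of why the third version works, using a toy Dense model and random data in place of the question's loadvalue pipeline (assumptions, not the real code): GradientTape records every TensorFlow operation executed while its with block is active, and tape.gradient can only differentiate a result whose full computation, including the final tf.reduce_mean, was recorded. In the third version everything that contributes to loss_avg runs inside one tape context, so the tape can trace the loss back to the trainable weights.

```python
import tensorflow as tf

# Toy stand-ins for the question's model and data (assumptions, not the real code):
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

batch = [(tf.random.normal((8, 4)), tf.random.normal((8, 1))) for _ in range(3)]

with tf.GradientTape() as tape:
    mini_batch_losses = []
    for X_train, y_train in batch:
        logits = model(X_train, training=True)          # recorded
        mini_batch_losses.append(loss_fn(y_train, logits))
    # Crucially, the averaging also happens INSIDE the tape context:
    loss_avg = tf.reduce_mean(mini_batch_losses)

grads = tape.gradient(loss_avg, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
print([g is not None for g in grads])  # [True, True]
```

So the third version is correct, not lucky: the first attempt fails because a new tape is created per item and the mean is taken outside it, and the second fails because only the mean falls outside the tape; in both cases tape.gradient returns None for weights with no recorded path to the loss.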

