Computing gradients of outputs taken from intermediate layers and updating the weights with an optimizer

Problem description

I am trying to implement the following architecture, but I am not sure whether I am applying GradientTape correctly.

LFFD research paper

In the architecture above, you can see that the outputs in the blue boxes are taken from several different layers. Each blue box is called a loss branch in the paper and contains two losses: cross-entropy and L2 loss. I have written the architecture in TensorFlow 2 and use GradientTape for custom training. The one thing I am not sure about is how I should use GradientTape to update the weights from these losses.

I have two doubts:

  1. How should I use GradientTape with multiple losses in this case? I would be interested to see code (see the sketch after this list)!
  2. For example, consider the 3rd blue box in the figure above (the 3rd loss branch): it takes its input from the conv 13 layer and produces two outputs, one for classification and one for regression. After computing this branch's loss, how should I update the weights? Should I update all the layers above (from conv 1 to conv 13), or only the layers that lead into conv 13 (conv 11, 12 and 13)?
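
For the first doubt, here is a minimal sketch of the usual pattern, using a toy two-branch model (all layer names, shapes, and data here are made up for illustration; this is not the LFFD network): compute every branch loss inside a single GradientTape, reduce them to one scalar, and apply the gradients of that scalar.

    import tensorflow as tf

    # Toy stand-in for a multi-branch architecture: one shared trunk, two heads.
    inputs = tf.keras.Input(shape=(32,))
    hidden = tf.keras.layers.Dense(16, activation='relu')(inputs)
    out_cls = tf.keras.layers.Dense(2)(hidden)   # classification head (logits)
    out_reg = tf.keras.layers.Dense(4)(hidden)   # regression head
    model = tf.keras.Model(inputs, [out_cls, out_reg])

    optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
    x = tf.random.normal((8, 32))
    y_cls = tf.one_hot(tf.random.uniform((8,), maxval=2, dtype=tf.int32), depth=2)
    y_reg = tf.random.normal((8, 4))

    with tf.GradientTape() as tape:
        pred_cls, pred_reg = model(x, training=True)
        loss_cls = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y_cls, logits=pred_cls))
        loss_reg = tf.reduce_mean(tf.square(y_reg - pred_reg))
        total_loss = loss_cls + loss_reg   # one scalar covering both branches

    grads = tape.gradient(total_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

The same pattern scales to any number of loss branches: collect each branch's scalar loss in a list and reduce with tf.add_n before calling tape.gradient (or pass the list directly, as noted after the training snippet below).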

I have also attached a link to a question that I posted in detail yesterday.

Below is the snippet of the gradient-descent step that I have tried. Please correct me if I am wrong.

        # Normalize pixel values from [0, 255] to [-1, 1].
        images = batch.data[0]
        images = (images - 127.5) / 127.5

        targets = batch.label

        with tensorflow.GradientTape() as tape:
            # Forward pass: the network returns one (score, bbox) pair per loss branch.
            outputs = self.net(images)
            # loss_criterion returns a list with one scalar loss per branch.
            loss = self.loss_criterion(outputs, targets)

        # Adjust the learning rate, then back-propagate through all branches at once.
        self.scheduler(i, self.optimizer)
        grads = tape.gradient(loss, self.net.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.net.trainable_variables))
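
One detail worth noting, assuming `loss` above is the list of per-branch scalars returned by loss_criterion: tape.gradient accepts a list (or any nest) of targets and sums the gradients over them, so passing the list directly is equivalent to differentiating an explicit sum first.

        # Passing the list of branch losses directly...
        grads = tape.gradient(loss, self.net.trainable_variables)
        # ...gives the same gradients as reducing to one scalar first (only one
        # of the two calls is allowed on a non-persistent tape):
        # grads = tape.gradient(tensorflow.add_n(loss), self.net.trainable_variables)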

Below is the code of the custom loss function, which is used as loss_criterion above.

    # Body of loss_criterion(self, outputs, targets). `mse` is assumed to be a
    # tensorflow.keras.losses.MeanSquaredError() instance defined elsewhere.
    losses = []
    for i in range(self.num_output_scales):
        # Each branch contributes a (score, bbox) pair of predictions and targets.
        pred_score = outputs[i * 2]
        pred_bbox = outputs[i * 2 + 1]
        gt_mask = targets[i * 2]
        gt_label = targets[i * 2 + 1]

        pred_score_softmax = tensorflow.nn.softmax(pred_score, axis=1)
        loss_mask = tensorflow.ones(pred_score_softmax.shape, tensorflow.float32)

        if self.hnm_ratio > 0:
            # Hard negative mining: keep all positives plus the hardest negatives.
            pos_flag = (gt_label[:, 0, :, :] > 0.5)
            pos_num = tensorflow.math.reduce_sum(tensorflow.cast(pos_flag, dtype=tensorflow.float32))
            if pos_num > 0:
                neg_flag = (gt_label[:, 1, :, :] > 0.5)
                neg_num = tensorflow.math.reduce_sum(tensorflow.cast(neg_flag, dtype=tensorflow.float32))
                neg_num_selected = min(int(self.hnm_ratio * pos_num), int(neg_num))
                neg_prob = tensorflow.where(neg_flag, pred_score_softmax[:, 1, :, :],
                    tensorflow.zeros_like(pred_score_softmax[:, 1, :, :]))
                neg_prob_sort = tensorflow.sort(tensorflow.reshape(neg_prob, shape=(1, -1)), direction='ASCENDING')
                prob_threshold = neg_prob_sort[0][int(neg_num_selected)]
                neg_grad_flag = (neg_prob <= prob_threshold)
                loss_mask = tensorflow.concat([tensorflow.expand_dims(pos_flag, axis=1),
                    tensorflow.expand_dims(neg_grad_flag, axis=1)], axis=1)
            else:
                # No positives in this batch: fall back to a fixed ratio of negatives.
                neg_choice_ratio = 0.1
                neg_num_selected = int(tensorflow.cast(tensorflow.size(pred_score_softmax[:, 1, :, :]), dtype=tensorflow.float32) * neg_choice_ratio)
                neg_prob = pred_score_softmax[:, 1, :, :]
                neg_prob_sort = tensorflow.sort(tensorflow.reshape(neg_prob, shape=(1, -1)), direction='ASCENDING')
                prob_threshold = neg_prob_sort[0][int(neg_num_selected)]
                neg_grad_flag = (neg_prob <= prob_threshold)
                loss_mask = tensorflow.concat([tensorflow.expand_dims(pos_flag, axis=1),
                    tensorflow.expand_dims(neg_grad_flag, axis=1)], axis=1)

        # Cross-entropy over the masked positions. Clipping before the log avoids
        # log(0) = -inf, which would otherwise turn the loss into NaN wherever the
        # mask is False.
        pred_score_softmax_masked = tensorflow.where(loss_mask, pred_score_softmax,
            tensorflow.zeros_like(pred_score_softmax, dtype=tensorflow.float32))
        pred_score_log = tensorflow.math.log(
            tensorflow.clip_by_value(pred_score_softmax_masked, 1e-10, 1.0))
        score_cross_entropy = - tensorflow.where(loss_mask, gt_label[:, :2, :, :],
            tensorflow.zeros_like(gt_label[:, :2, :, :], dtype=tensorflow.float32)) * pred_score_log
        loss_score = tensorflow.math.reduce_sum(score_cross_entropy) \
            / tensorflow.cast(tensorflow.size(score_cross_entropy), tensorflow.float32)

        # L2 loss of boxes, computed only where the bbox mask is set.
        mask_bbox = gt_mask[:, 2:6, :, :]
        predict_bbox = pred_bbox * mask_bbox
        label_bbox = gt_label[:, 2:6, :, :] * mask_bbox
        # loss_bbox = tensorflow.math.reduce_sum(tensorflow.nn.l2_loss((label_bbox - predict_bbox)) ** 2) / 2
        loss_bbox = mse(label_bbox, predict_bbox) / tensorflow.math.reduce_sum(mask_bbox)

        # Adding only the losses relevant to this branch and sending them for backprop
        losses.append(loss_score + loss_bbox)
        # losses.append(loss_bbox)

        # Adding all losses together and sending them to backprop (Approach 1)
        # loss_cls += loss_score
        # loss_reg += loss_bbox
        # loss_branch.append(loss_score)
        # loss_branch.append(loss_bbox)
        # loss = loss_cls + loss_reg

    return losses
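
Regarding the second doubt: the tape only propagates gradients along the computation path that actually leads to a given loss. A tiny self-contained check (toy Dense layers, not the LFFD network) illustrates this:

    import tensorflow as tf

    layer_a = tf.keras.layers.Dense(4)   # upstream layer feeding the branch tap
    layer_b = tf.keras.layers.Dense(4)   # downstream layer, after the tap

    x = tf.random.normal((2, 4))
    with tf.GradientTape() as tape:
        h = layer_a(x)                    # intermediate output (the "tap")
        branch_loss = tf.reduce_sum(h)    # loss computed from the intermediate output
        _ = layer_b(h)                    # downstream computation, unused by branch_loss

    grads = tape.gradient(
        branch_loss, layer_a.trainable_variables + layer_b.trainable_variables)
    # layer_a's gradients are real tensors; layer_b's entries are None, because
    # layer_b is not on the path from the input to branch_loss.

Applied to the question: the 3rd branch's loss sends gradients back through every layer it depends on (conv 1 through conv 13 plus that branch's own head), and a single tape.gradient call over all trainable variables handles this selection automatically; variables not on any loss's path come back as None and should be filtered out before apply_gradients.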

I am not getting any errors, but my loss is not decreasing. Here is my training log.

Could someone please help me resolve this issue?

Tags: computer-vision, tensorflow2.0, tf.keras, gradienttape

Solution

