Derivative of BatchNorm2d in PyTorch

Problem description

In my network I want to compute both the forward pass and the backward pass of my network myself during the forward pass. To do this, I have to manually define the backward methods for all of the layers used in the forward pass.
For the activation functions this is easy, and it also works well for the linear and convolutional layers. But I am really struggling with BatchNorm, since the BatchNorm paper only discusses the 1D case. So far my implementation looks like this:

def backward_batchnorm2d(input, output, grad_output, layer):
    gamma = layer.weight
    beta = layer.bias
    avg = layer.running_mean
    var = layer.running_var
    eps = layer.eps
    B = input.shape[0]

    # avg, var, gamma and beta are of shape [channel_size]
    # while input, output, grad_output are of shape [batch_size, channel_size, w, h]
    # for my calculations I have to reshape avg, var, gamma and beta to [batch_size, channel_size, w, h] by repeating the channel values over the whole image and batches

    dL_dxi_hat = grad_output * gamma
    dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True)
    dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B
    dL_dxi = dL_dxi_hat / torch.sqrt(var + eps) + 2.0 * dL_dvar * (input - avg) / B + dL_davg / B # dL_dxi_hat / sqrt()
    dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True)
    dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True)
    return dL_dxi, dL_dgamma, dL_dbeta

When I check my gradients with torch.autograd.grad(), I notice that dL_dgamma and dL_dbeta are correct, but dL_dxi is not (and not by a little). I just can't find my mistake. Where is the error?
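A minimal sketch of how such a check can be wired up (the channel count, input shape, and tolerance here are illustrative assumptions, not part of the original setup):

import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.BatchNorm2d(3)               # illustrative layer under test
layer.train()                           # training mode, so batch statistics are used
input = torch.randn(4, 3, 8, 8, requires_grad=True)

output = layer(input)
grad_output = torch.randn_like(output)  # arbitrary upstream gradient

# reference gradients from autograd
dx_ref, dgamma_ref, dbeta_ref = torch.autograd.grad(
    output, (input, layer.weight, layer.bias), grad_outputs=grad_output)

# gradients from the manual backward pass
dL_dxi, dL_dgamma, dL_dbeta = backward_batchnorm2d(input, output, grad_output, layer)

# all three should print True once the backward pass is correct
print(torch.allclose(dL_dxi, dx_ref, atol=1e-5))
print(torch.allclose(dL_dgamma.flatten(), dgamma_ref, atol=1e-5))
print(torch.allclose(dL_dbeta.flatten(), dbeta_ref, atol=1e-5))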

For reference, here is the definition of BatchNorm:

[Image: the Batch Normalizing Transform (Algorithm 1) from the BatchNorm paper]
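In LaTeX, for a mini-batch of values x_1..m over which the statistics are computed:

    \mu_B = \frac{1}{m} \sum_{i=1}^{m} x_i
    \sigma_B^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2
    \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}
    y_i = \gamma \hat{x}_i + \beta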

And here are the derivative formulas for the 1D case: [Image: the backpropagation formulas from the BatchNorm paper]
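With \ell denoting the loss, the chain rule gives:

    \frac{\partial \ell}{\partial \hat{x}_i} = \frac{\partial \ell}{\partial y_i} \cdot \gamma
    \frac{\partial \ell}{\partial \sigma_B^2} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot (x_i - \mu_B) \cdot \frac{-1}{2} (\sigma_B^2 + \epsilon)^{-3/2}
    \frac{\partial \ell}{\partial \mu_B} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{-1}{\sqrt{\sigma_B^2 + \epsilon}} + \frac{\partial \ell}{\partial \sigma_B^2} \cdot \frac{1}{m} \sum_{i=1}^{m} -2 (x_i - \mu_B)
    \frac{\partial \ell}{\partial x_i} = \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{1}{\sqrt{\sigma_B^2 + \epsilon}} + \frac{\partial \ell}{\partial \sigma_B^2} \cdot \frac{2 (x_i - \mu_B)}{m} + \frac{\partial \ell}{\partial \mu_B} \cdot \frac{1}{m}
    \frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i} \cdot \hat{x}_i
    \frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i}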

Tags: pytorch, derivative, autograd

Solution


import torch

def backward_batchnorm2d(input, output, grad_output, layer):
    gamma = layer.weight
    gamma = gamma.view(1, -1, 1, 1) # edit: reshape [C] to [1, C, 1, 1] so it broadcasts over [N, C, H, W]
    # beta = layer.bias
    # avg = layer.running_mean
    # var = layer.running_var
    eps = layer.eps
    B = input.shape[0] * input.shape[2] * input.shape[3] # edit: statistics are taken over N*H*W elements per channel

    # add new: in training mode, BatchNorm uses the batch statistics, not the running ones
    mean = input.mean(dim=(0, 2, 3), keepdim=True)
    variance = input.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (input - mean) / torch.sqrt(variance + eps)

    dL_dxi_hat = grad_output * gamma
    # dL_dvar = (-0.5 * dL_dxi_hat * (input - avg) / ((var + eps) ** 1.5)).sum((0, 2, 3), keepdim=True)
    # dL_davg = (-1.0 / torch.sqrt(var + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + dL_dvar * (-2.0 * (input - avg)).sum((0, 2, 3), keepdim=True) / B
    dL_dvar = (-0.5 * dL_dxi_hat * (input - mean)).sum((0, 2, 3), keepdim=True) * ((variance + eps) ** -1.5) # edit
    dL_davg = (-1.0 / torch.sqrt(variance + eps) * dL_dxi_hat).sum((0, 2, 3), keepdim=True) + (dL_dvar * (-2.0 * (input - mean)).sum((0, 2, 3), keepdim=True) / B) # edit

    dL_dxi = (dL_dxi_hat / torch.sqrt(variance + eps)) + (2.0 * dL_dvar * (input - mean) / B) + (dL_davg / B)
    # dL_dgamma = (grad_output * output).sum((0, 2, 3), keepdim=True)
    dL_dgamma = (grad_output * x_hat).sum((0, 2, 3), keepdim=True) # edit: multiply by x_hat, not by output
    dL_dbeta = (grad_output).sum((0, 2, 3), keepdim=True)
    return dL_dxi, dL_dgamma, dL_dbeta
  1. Since you did not post your forward code snippet: if your gamma has the 1-D shape [C], you need to reshape it to [1, gamma.shape[0], 1, 1] so that it broadcasts over the [batch, channel, height, width] input.
  2. The formulas follow the 1D case, where the scaling factor is the batch size. In 2D, however, the sums run over three dimensions (batch, height and width), so B = input.shape[0] * input.shape[2] * input.shape[3].
  3. running_mean and running_var are only used in test/inference mode; they are not used during training (you can find this in the paper). The mean and variance you need are computed from the input itself. You can store mean, variance and x_hat = (x - mean)/sqrt(variance + eps) in your layer object during the forward pass, or recompute them as done in the code above under # add new, and then substitute them into the formulas for dL_dvar, dL_davg and dL_dxi (see the sketch after this list).
  4. Your dL_dgamma is incorrect, because you multiplied the gradient by the output itself; it should be grad_output * x_hat. (It can still pass an autograd check with the default initialization gamma = 1, beta = 0, where output and x_hat coincide.)
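To illustrate point 3, a small sketch (the channel count and input shape are arbitrary) showing that in training mode nn.BatchNorm2d normalizes with the current batch statistics rather than with running_mean/running_var:

import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(3).train()
x = torch.randn(8, 3, 5, 5)
y = bn(x)

# normalize by hand with the *batch* statistics (biased variance, as in the paper)
mean = x.mean(dim=(0, 2, 3), keepdim=True)
variance = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_hat = (x - mean) / torch.sqrt(variance + bn.eps)
y_manual = bn.weight.view(1, -1, 1, 1) * x_hat + bn.bias.view(1, -1, 1, 1)

print(torch.allclose(y, y_manual, atol=1e-5))  # True: the running statistics were not used

With the corrected backward_batchnorm2d above, the autograd check from the question should report a match for dL_dxi as well, up to numerical tolerance.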
