can't find the inplace operation: one of the variables needed for gradient computation has been modified by an inplace operation

Problem description

I am trying to compute a loss on the Jacobian of the network (i.e., to perform double backprop), and I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I can't find the inplace operation in my code, so I don't know which line to fix.

The error occurs in the last line: loss3.backward()

    import torch
    from torch.autograd import Variable
    from torch.autograd.gradcheck import zero_gradients

    inputs_reg = Variable(data, requires_grad=True)
    output_reg = self.model.forward(inputs_reg)

    num_classes = output_reg.size()[1]
    jacobian_list = []
    grad_output = torch.zeros(*output_reg.size())

    if inputs_reg.is_cuda:
        grad_output = grad_output.cuda()

    for i in range(10):
        zero_gradients(inputs_reg)
        # select the i-th output component as the backward seed
        grad_output.zero_()
        grad_output[:, i] = 1
        # create_graph=True so the resulting gradient is itself differentiable
        jacobian_list.append(torch.autograd.grad(outputs=output_reg,
                                                 inputs=inputs_reg,
                                                 grad_outputs=grad_output,
                                                 only_inputs=True,
                                                 retain_graph=True,
                                                 create_graph=True)[0])

    jacobian = torch.stack(jacobian_list, dim=0)
    loss3 = jacobian.norm()
    loss3.backward()

Tags: pytorch, backpropagation

Solution


You can use the set_detect_anomaly function available in the autograd package to pinpoint exactly which line caused the error.
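For instance, enabling anomaly detection before running the forward and backward passes makes the failing backward() print a traceback of the forward operation that produced the value later modified in place. A minimal sketch (model and inputs are placeholder names, not from the question):

    import torch

    # Enable anomaly detection globally; any failing backward() will now
    # also report the forward operation that created the bad value.
    torch.autograd.set_detect_anomaly(True)

    # Or limit the (slow) checks to a single forward/backward pass:
    with torch.autograd.detect_anomaly():
        output = model(inputs)
        loss = output.norm()
        loss.backward()  # the traceback now points at the offending line

Note that anomaly detection adds significant overhead, so it is best enabled only while debugging.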

Here is a link to a thread describing the same problem, with a solution that uses the function above.
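In this particular snippet, likely culprits for the detector to flag are the in-place updates grad_output.zero_() and grad_output[:, i] = 1: once torch.autograd.grad is called with create_graph=True, grad_output becomes part of the graph that loss3.backward() later differentiates, so mutating it on the next loop iteration corrupts the saved values. A sketch of one possible fix, allocating a fresh grad_output inside the loop (this assumes the detector does point at those lines):

    for i in range(10):
        zero_gradients(inputs_reg)
        # A fresh tensor each iteration: the previous grad_output, which is
        # now referenced by the autograd graph, is left untouched.
        grad_output = torch.zeros(*output_reg.size(),
                                  device=output_reg.device)
        grad_output[:, i] = 1  # in-place, but on a tensor not yet in a graph
        jacobian_list.append(torch.autograd.grad(outputs=output_reg,
                                                 inputs=inputs_reg,
                                                 grad_outputs=grad_output,
                                                 only_inputs=True,
                                                 retain_graph=True,
                                                 create_graph=True)[0])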

