PyTorch parameters do not update with a custom loss function (PyTorch)

Problem description

I am trying to use an optimizer to tune a set of parameters of a cost function that includes, among other things, a forward pass through a neural network. The parameters specify the means and variances of that network's weights. However, when the parameters are updated at each iteration of the optimization, every term of the cost function contributes to the update except the one involving the forward pass. That is, if all other terms are commented out, no parameters get updated at all. Is there a way around this?

Edit: I have added a contrived example below.

import torch

class TestNN(torch.nn.Module):
    def __init__(self):
        super(TestNN, self).__init__()
        self.fc1 = torch.nn.Linear(10, 1)

    def forward(self, x):
        x = self.fc1(x)
        return x

    def getParameters(self):
        return [self.fc1.weight.transpose(0, 1), self.fc1.bias]

    def setParameters(self, parameters):
        # Can anything be done here to keep parameters in the graph?
        weight, bias = parameters
        self.fc1.weight = torch.nn.Parameter(weight.transpose(0, 1))
        self.fc1.bias = torch.nn.Parameter(bias)

def computeCost(parameters, input):
    testNN = TestNN()
    testNN.setParameters(parameters)
    cost = testNN(input) ** 2
    print(cost) # Cost stays the same :(
    return cost

def minimizeLoss(maxIter, optimizer, lossFunc, lossFuncArgs):
    for i in range(maxIter):
        optimizer.zero_grad()
        loss = lossFunc(*lossFuncArgs)
        loss.backward(retain_graph = True)
        optimizer.step()
        if i % 100 == 0:
            print(loss)

input = torch.randn(1, 10)
weight = torch.ones(10, 1, requires_grad=True)
bias = torch.ones(1, 1, requires_grad=True)

parameters = (weight, bias)
lossArgs = (parameters, input)
optimizer = torch.optim.Adam(parameters, lr = 0.01)

minimizeLoss(10, optimizer, computeCost, lossArgs)

Tags: python, machine-learning, pytorch

Solution
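The forward-pass term produces no updates because `setParameters` wraps the optimized tensors in fresh `torch.nn.Parameter` objects on every call; constructing a `Parameter` creates a new leaf tensor, so the cost is computed on copies that are detached from the tensors Adam actually holds, and their gradients never reach `weight` and `bias`. One way around this (a sketch, not the only possible fix) is to skip the module reassignment entirely and run the forward pass functionally with `torch.nn.functional.linear`, so the graph flows through the optimized leaf tensors themselves:

```python
import torch
import torch.nn.functional as F

def computeCost(parameters, input):
    # Use the optimized tensors directly in the forward pass instead of
    # wrapping them in new nn.Parameter objects, which would detach them
    # from the tensors the optimizer is updating.
    weight, bias = parameters
    # F.linear expects weight of shape (out_features, in_features),
    # hence the transpose of the (10, 1) tensor.
    cost = F.linear(input, weight.transpose(0, 1), bias) ** 2
    return cost.sum()

input = torch.randn(1, 10)
# Leaf tensors with requires_grad=True, so Adam can update them in place.
weight = torch.ones(10, 1, requires_grad=True)
bias = torch.ones(1, 1, requires_grad=True)

parameters = (weight, bias)
optimizer = torch.optim.Adam(parameters, lr=0.01)

for i in range(1000):
    optimizer.zero_grad()
    loss = computeCost(parameters, input)
    loss.backward()   # retain_graph is no longer needed: a fresh graph is built each iteration
    optimizer.step()
```

With this version the loss shrinks toward zero across iterations, because gradients from the squared-output term now propagate back into `weight` and `bias`. If you do want to keep a module-based forward pass, the same idea applies: the module must use the optimized tensors themselves (e.g. by registering them once and letting the optimizer hold those `Parameter`s), rather than rebuilding `Parameter` copies inside the cost function.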
