deep-learning - Estimating a mixture of Gaussians in PyTorch
Problem description
I actually want to estimate a normalizing flow with a mixture of Gaussians as the base distribution, which is where I got stuck with torch. However, you can reproduce the error from my code just by estimating a mixture of Gaussians in torch. My code is as follows:
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
import torch
from torch import nn
from torch import optim
import torch.distributions as D
device = torch.device("cpu")  # assumed; the original snippet does not show this line
weights = torch.ones(8, requires_grad=True).to(device)
means = torch.tensor(np.random.randn(8, 2), requires_grad=True).to(device)
stdevs = torch.tensor(np.abs(np.random.randn(8, 2)), requires_grad=True).to(device)

# The mixture is built once, outside the training loop
mix = D.Categorical(weights)
comp = D.Independent(D.Normal(means, stdevs), 1)
gmm = D.MixtureSameFamily(mix, comp)

# restored so the snippet runs; mirrors the optimizer used in the solution below
optimizer1 = optim.SGD([weights, means, stdevs], lr=0.001, momentum=0.9)

num_iter = 10001
for i in range(num_iter):
    x = torch.randn(5000, 2)  # this can be an arbitrary sample of x
    loss2 = -gmm.log_prob(x).mean()
    optimizer1.zero_grad()
    loss2.backward()
    optimizer1.step()
The error I get is:
0
8.089411823514835
Traceback (most recent call last):
File "/home/cameron/AnacondaProjects/gmm.py", line 183, in <module>
loss2.backward()
File "/home/cameron/anaconda3/envs/torch/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/cameron/anaconda3/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
So the model runs for one iteration (printing the iteration index and loss) before the error appears.
Solution
There is an ordering problem in your code. Because you create the Gaussian mixture model outside the training loop, each loss computation tries to reuse the graph built from the initial parameter values you set when defining the model, but optimizer1.step() has already modified those values in place. So even if you set loss2.backward(retain_graph=True), you still get an error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
The solution is simply to create a new Gaussian mixture model after each parameter update. This example code runs as expected:
import numpy as np
import torch
from torch import optim
import torch.distributions as D

weights = torch.ones(8, requires_grad=True)
means = torch.tensor(np.random.randn(8, 2), requires_grad=True)
stdevs = torch.tensor(np.abs(np.random.randn(8, 2)), requires_grad=True)
parameters = [weights, means, stdevs]
optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9)

num_iter = 10001
for i in range(num_iter):
    # Rebuild the mixture from the current parameter values every iteration
    mix = D.Categorical(weights)
    comp = D.Independent(D.Normal(means, stdevs), 1)
    gmm = D.MixtureSameFamily(mix, comp)

    optimizer1.zero_grad()
    x = torch.randn(5000, 2)  # this can be an arbitrary sample of x
    loss2 = -gmm.log_prob(x).mean()
    loss2.backward()
    optimizer1.step()
    print(i, loss2)
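One caveat beyond the accepted fix: nothing in this parameterization keeps `weights` non-negative or `stdevs` positive, so plain SGD can eventually step them into values that `Categorical` or `Normal` reject. A common variant (a sketch, not part of the original solution; the names `mix_logits` and `log_stdevs` are introduced here) is to optimize unconstrained tensors and map them when rebuilding the mixture:

```python
import torch
import torch.distributions as D

# Unconstrained parameters; mapped to valid mixture parameters each step
mix_logits = torch.zeros(8, requires_grad=True)
means = torch.randn(8, 2, requires_grad=True)
log_stdevs = torch.zeros(8, 2, requires_grad=True)
optimizer = torch.optim.SGD([mix_logits, means, log_stdevs],
                            lr=0.001, momentum=0.9)

for i in range(1000):
    # Rebuild the mixture every iteration, as in the accepted fix;
    # logits are always valid, exp() keeps the stdevs strictly positive.
    mix = D.Categorical(logits=mix_logits)
    comp = D.Independent(D.Normal(means, log_stdevs.exp()), 1)
    gmm = D.MixtureSameFamily(mix, comp)

    x = torch.randn(500, 2)
    loss = -gmm.log_prob(x).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This keeps every iteration's distribution well-defined no matter how far the optimizer moves the raw tensors.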