LSTM hidden_state in PyTorch nearly identical, leading to negative KLDivLoss

Problem description

I have been puzzled by this problem and cannot figure out what I am doing wrong... I trained an autoencoder (LSTM-LSTM), and I am now trying to cluster the encoded features using KLDivLoss. However, it turns out that the encoded features are almost always identical (see the example below, with the print precision set to 10):

# last hidden state of the encoder
tensor([[[ 0.1086065620, -0.0446619801, -0.0530930459,  ...,
          -0.0573375113,  0.1083261892,  0.0037083717],
         [ 0.1086065620, -0.0446619801, -0.0530930459,  ...,
          -0.0573375151,  0.1083261892,  0.0037083712],
         [ 0.1086065620, -0.0446619801, -0.0530930459,  ...,
          -0.0573375188,  0.1083262041,  0.0037083719],
         ...,
         [ 0.1086065620, -0.0446619801, -0.0530930422,  ...,
          -0.0573375151,  0.1083262041,  0.0037083724],
         [ 0.1086065620, -0.0446619801, -0.0530930385,  ...,
          -0.0573375151,  0.1083262041,  0.0037083712],
         [ 0.1086065620, -0.0446619801, -0.0530930385,  ...,
          -0.0573375188,  0.1083261892,  0.0037083707]]],
       grad_fn=<StackBackward>)
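For reference, one way to quantify how collapsed these vectors are is to look at the per-dimension spread across the batch. A minimal sketch (the variable names here are illustrative, not from the original code):

# h_t has shape (1, batch_size, hidden_size); drop the layer dimension
encoded = h_t.squeeze(0).detach()

# standard deviation of each hidden dimension across the batch;
# values close to 0 mean the encoder maps every sequence to (almost) the same point
per_dim_std = encoded.std(dim=0)
print(per_dim_std.max())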

Do you think this behaviour can be explained? I am working with high-dimensional time series, and my goal is to implement an unsupervised clustering method inspired by this paper. If I feed the raw output of the encoder into KLDivLoss I get a negative loss... However, if I rescale the encoder's output (with sklearn.preprocessing.StandardScaler), I get the desired behaviour: a positive loss value.

I have made sure that the first argument to KLDivLoss is the log-probabilities and the second argument is the probabilities...
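As a sanity check independent of the model, the sign behaviour of KLDivLoss can be reproduced in isolation: the loss is guaranteed to be non-negative only when the first argument is the log of a proper probability distribution and the second is a proper probability distribution over the same dimension. A minimal, hypothetical sketch (all values are made up):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 5)
target = torch.softmax(torch.randn(4, 5), dim=1)     # valid probabilities, rows sum to 1

# rows of log_softmax exponentiate to a valid distribution -> KL divergence >= 0
good = F.kl_div(F.log_softmax(logits, dim=1), target, reduction='sum')

# values in [1, 2) cannot come from a probability distribution;
# taking their log and feeding it in gives a negative "divergence"
raw = torch.rand(4, 5) + 1.0
bad = F.kl_div(raw.log(), target, reduction='sum')

print(good.item(), bad.item())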

Code for the encoder (which contains an attention mechanism to identify the relevant driving series):

import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, config, input_size: int):
        super(Encoder, self).__init__()
        self.input_size = input_size
        self.hidden_size = config['hidden_size_encoder']
        self.seq_len = config['seq_len']

        self.lstm = nn.LSTM(
            input_size=self.input_size,
            hidden_size=self.hidden_size,
            num_layers=1
        )
        # input attention: scores each driving series from (h_t, c_t, series history)
        self.attn = nn.Linear(
            in_features=2 * self.hidden_size + self.seq_len,
            out_features=1
        )
        self.dropout = nn.Dropout(p=0.5)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, input_data):
        # init_hidden is assumed to return zeros of shape (1, batch, hidden_size)
        h_t, c_t = (init_hidden(input_data, self.hidden_size),
                    init_hidden(input_data, self.hidden_size))
        # Variable is deprecated; a plain tensor tracks gradients just as well
        input_weighted = torch.zeros(input_data.size(0), self.seq_len, self.input_size)

        for t in range(self.seq_len):
            # concatenate hidden state, cell state and the full history of each series:
            # shape (batch, input_size, 2 * hidden_size + seq_len)
            x = torch.cat((h_t.repeat(self.input_size, 1, 1).permute(1, 0, 2),
                           c_t.repeat(self.input_size, 1, 1).permute(1, 0, 2),
                           input_data.permute(0, 2, 1).to(device)), dim=2).to(device)

            # attention scores and weights over the input_size driving series
            e_t = self.attn(x.view(-1, self.hidden_size * 2 + self.seq_len))
            a_t = self.dropout(self.softmax(e_t.view(-1, self.input_size)))

            # re-weight the inputs at time t and feed them through the LSTM
            weighted_input = torch.mul(a_t, input_data[:, t, :].to(device))
            self.lstm.flatten_parameters()
            _, (h_t, c_t) = self.lstm(weighted_input.unsqueeze(0), (h_t, c_t))

            input_weighted[:, t, :] = weighted_input

        return input_weighted[:, -1:, :], h_t, c_t
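For completeness, a hypothetical smoke test of the encoder. The config values, the init_hidden helper and the device object below are assumptions, since they are not shown in the original code:

# assumed helper: zero-initialised hidden/cell state of shape (1, batch, hidden_size)
def init_hidden(x, hidden_size):
    return torch.zeros(1, x.size(0), hidden_size, device=x.device)

device = torch.device('cpu')
config = {'hidden_size_encoder': 64, 'seq_len': 10}    # hypothetical values

encoder = Encoder(config, input_size=20)
dummy = torch.randn(8, config['seq_len'], 20)          # (batch, seq_len, input_size)
weighted, h_t, c_t = encoder(dummy)
print(h_t.shape)                                       # torch.Size([1, 8, 64])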

Clustering code:

from sklearn.cluster import KMeans


class Clusterizer(nn.Module):
    def __init__(self, n_clusters, hidden_dim, encoder, alpha=1.0):
        super(Clusterizer, self).__init__()
        self.n_clusters = n_clusters
        self.hidden_dim = hidden_dim
        self.encoder = encoder
        self.alpha = alpha
        self.centroids = None

    def init_centroids(self, encoded_x):
        '''initialize cluster centers using KMeans'''
        kmeans = KMeans(n_clusters=self.n_clusters, random_state=0, n_init=10).fit(encoded_x)
        centroids = torch.tensor(kmeans.cluster_centers_, dtype=torch.float)
        self.centroids = nn.Parameter(centroids, requires_grad=True)

    def target_distribution(self, q_):
        # DEC-style target: sharpen the soft assignments and renormalise each row
        weight = (q_ ** 2) / torch.sum(q_, 0)
        return (weight.t() / torch.sum(weight, 1)).t()

    def forward(self, encoded_x):
        # soft assignment of each encoded point to the centroids (rows sum to 1)
        num = ((1 + torch.norm(encoded_x.unsqueeze(1) - self.centroids, dim=2)) / self.alpha) ** (-(self.alpha + 1) / 2)
        den = torch.sum(num, dim=1, keepdim=True)
        return num / den
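A hypothetical usage sketch (dimensions chosen arbitrarily) that checks that both the soft assignments and the target distribution are row-normalised:

encoded_data = torch.randn(100, 64)            # stand-in for the stacked encoder outputs

clusterizer = Clusterizer(n_clusters=5, hidden_dim=64, encoder=None)
clusterizer.init_centroids(encoded_data.numpy())   # KMeans expects a NumPy-compatible array

q = clusterizer(encoded_data)                  # soft assignments, shape (100, 5)
p = clusterizer.target_distribution(q)

print(q.sum(dim=1))                            # every row should be 1
print(p.sum(dim=1))                            # every row should be 1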

Final part of the code (where the problem appears); here encoded_data is the stacked output of the encoder, of shape (nb_observation, hidden_size):

# size_average=False is the deprecated equivalent of reduction='sum'
criterion = nn.KLDivLoss(size_average=False)

...

clusterizer.init_centroids(encoded_data)
output = clusterizer(encoded_data.to(device))

target_distrib = clusterizer.target_distribution(output)
loss = criterion(output.log(), target_distrib)
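One detail worth noting when reproducing this snippet: output.log() produces -inf (and then NaN gradients) whenever a soft assignment is exactly zero, so a small epsilon is often added before taking the log. The constant and the detach below are common conventions in DEC-style training, not something taken from the original code:

eps = 1e-10   # arbitrary small constant, not from the original code

# log of the soft assignments, guarded against exactly-zero entries
log_q = (output + eps).log()

# in DEC-style training the target distribution is usually treated as a constant,
# so it is detached from the graph here
loss = criterion(log_q, target_distrib.detach())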

Sorry, this is a lot of code, but I hope it helps pinpoint the source of the problem.

Tags: python, deep-learning, pytorch, lstm

Solution

