LSTM model implementation in PyTorch

Problem description

I am trying to implement a CNN+LSTM model in PyTorch, but I have a question about the LSTM part (I have never used an LSTM before). Could you write a many-to-one LSTM model class (image link: https://i.ibb.co/SRGWT5j/lstm.png)...

Tags: deep-learning, pytorch, lstm, recurrent-neural-network

Solution


For nn.LSTM in PyTorch, see the documentation: https://pytorch.org/docs/stable/nn.html?highlight=lstm#torch.nn.LSTM

Its constructor takes (input_size, hidden_size, num_layers); in our case input_size is the embedding dimension. (Ignoring the bidirectional flag for now; an initial hidden_state and cell_state can also be passed to the forward call.)

So we need to pass it a tensor of shape [max sentence length, batch size, embedding size] (with the default batch_first=False, the sequence dimension comes first).
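
As a quick shape check, here is a minimal standalone nn.LSTM sketch (all sizes below are made up purely for illustration):

import torch
import torch.nn as nn

# hypothetical sizes: embedding size 100, hidden size 50, 2 stacked layers
lstm = nn.LSTM(input_size=100, hidden_size=50, num_layers=2)

# dummy input: [max sentence length=35, batch size=8, embedding size=100]
x = torch.randn(35, 8, 100)
out, (h_n, c_n) = lstm(x)
print(out.shape)   # torch.Size([35, 8, 50]) -> [seq_len, batch, hidden_size]
print(h_n.shape)   # torch.Size([2, 8, 50])  -> [num_layers, batch, hidden_size]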

Here is just a sample model:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
        super(Model, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim

        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        # dropout only applies between stacked layers, so it is a no-op when n_layers == 1
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=drop_prob)

    def forward(self, sentence):
        # sentence: [max sentence length, batch size] of token indices
        sentence = sentence.long()
        embeds = self.embedding(sentence)     # [seq_len, batch, embedding_dim]
        lstm_out, hidden = self.lstm(embeds)  # lstm_out: [seq_len, batch, hidden_dim]
        # for simple many-to-one, just use the output of the last time step
        out = lstm_out[-1, :, :]              # [batch, hidden_dim]
        return out
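
To sanity-check the model, you could run a dummy batch of token indices through it (the sizes below are hypothetical, just for illustration):

model = Model(vocab_size=1000, output_size=2, embedding_dim=100, hidden_dim=50, n_layers=2)
# dummy batch: 35 time steps, 8 sentences, token ids in [0, vocab_size)
sentence = torch.randint(0, 1000, (35, 8))
out = model(sentence)
print(out.shape)  # torch.Size([8, 50]) -> [batch, hidden_dim]

Note that output_size is stored but never used in this sample; a typical many-to-one classifier would add a final nn.Linear(hidden_dim, output_size) on top of out.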

You can refer to this link; it explains LSTMs in PyTorch very well and also includes an example SentimentNet model:

https://blog.floydhub.com/long-short-term-memory-from-zero-to-hero-with-pytorch/

