Joint training of two embedding models (KGE + GloVe)

Problem description

How do I create a joint model in PyTorch that shares the parameters of a knowledge graph embedding (KGE) model, TuckER (shown below), and GloVe (assuming the co-occurrence matrix and its dimensions are already available)?

In other words, the joint model must follow the CMTF (Coupled Matrix and Tensor Factorization) framework, and the two sets of embedding weights must be tied during training. The difficulty is that the KGE model takes triples (subject, relation, object) as input, whereas GloVe takes a co-occurrence matrix. Moreover, their loss functions are computed differently.

import numpy as np
import torch
from torch.nn.init import xavier_normal_


class TuckER(torch.nn.Module):
    def __init__(self, d, d1, d2, **kwargs):
        super(TuckER, self).__init__()

        # Entity embeddings, relation embeddings, and the TuckER core tensor W
        self.E = torch.nn.Embedding(len(d.entities), d1)
        self.R = torch.nn.Embedding(len(d.relations), d2)
        self.W = torch.nn.Parameter(torch.tensor(np.random.uniform(-1, 1, (d2, d1, d1)), 
                                    dtype=torch.float, device="cuda", requires_grad=True))

        self.input_dropout = torch.nn.Dropout(kwargs["input_dropout"])
        self.hidden_dropout1 = torch.nn.Dropout(kwargs["hidden_dropout1"])
        self.hidden_dropout2 = torch.nn.Dropout(kwargs["hidden_dropout2"])
        self.loss = torch.nn.BCELoss()

        self.bn0 = torch.nn.BatchNorm1d(d1)
        self.bn1 = torch.nn.BatchNorm1d(d1)
        
    def init(self):
        xavier_normal_(self.E.weight.data)
        xavier_normal_(self.R.weight.data)

    def forward(self, e1_idx, r_idx):
        # 1-N scoring: score subject e1 with relation r against every entity
        e1 = self.E(e1_idx)
        x = self.bn0(e1)
        x = self.input_dropout(x)
        x = x.view(-1, 1, e1.size(1))

        r = self.R(r_idx)
        W_mat = torch.mm(r, self.W.view(r.size(1), -1))
        W_mat = W_mat.view(-1, e1.size(1), e1.size(1))
        W_mat = self.hidden_dropout1(W_mat)

        x = torch.bmm(x, W_mat) 
        x = x.view(-1, e1.size(1))      
        x = self.bn1(x)
        x = self.hidden_dropout2(x)
        x = torch.mm(x, self.E.weight.transpose(1,0))
        pred = torch.sigmoid(x)
        return pred
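
For contrast with TuckER's BCE objective above: GloVe fits the logarithm of co-occurrence counts with a weighted least-squares loss. Below is a minimal sketch of that objective in PyTorch (GloVeLoss and the hyper-parameters x_max and alpha are illustrative, not taken from the original post):

import torch


class GloVeLoss(torch.nn.Module):
    # Weighted least-squares objective of GloVe (Pennington et al., 2014)
    def __init__(self, x_max=100.0, alpha=0.75):
        super(GloVeLoss, self).__init__()
        self.x_max = x_max
        self.alpha = alpha

    def forward(self, w_i, w_j, b_i, b_j, cooc):
        # w_i, w_j: (batch, dim) word/context vectors; b_i, b_j: (batch,) biases
        # cooc: (batch,) observed co-occurrence counts for the sampled pairs (assumed > 0)
        weight = torch.clamp((cooc / self.x_max) ** self.alpha, max=1.0)
        pred = (w_i * w_j).sum(dim=1) + b_i + b_j
        return (weight * (pred - torch.log(cooc)) ** 2).mean()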

I know how to jointly train two pre-trained models by loading their state dicts, getting an instance of each, running the input through both, and then applying a feed-forward layer on top. But I can't figure out how to do it in this case. Can you suggest how to approach it?
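
For reference, that familiar pattern might look like the sketch below (assumed for illustration, not code from the post): two pre-trained encoders whose outputs are concatenated and passed through a feed-forward layer.

import torch


class TwoTowerHead(torch.nn.Module):
    # Combine two pre-trained models with a feed-forward layer on top
    def __init__(self, model_a, model_b, dim_a, dim_b, out_dim):
        super(TwoTowerHead, self).__init__()
        self.model_a = model_a          # e.g. restored via load_state_dict
        self.model_b = model_b
        self.ff = torch.nn.Linear(dim_a + dim_b, out_dim)

    def forward(self, x_a, x_b):
        h = torch.cat([self.model_a(x_a), self.model_b(x_b)], dim=-1)
        return self.ff(h)

The KGE + GloVe case differs in that the coupling has to happen at the embedding weights themselves rather than at the output features.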


Key resources:

  1. Code for TuckER - https://github.com/ibalazevic/TuckER

Tags: python, deep-learning, pytorch, torch, knowledge-graph

Solution
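
A minimal sketch of one possible approach, assuming the co-occurrence matrix is indexed by the same entity ids as TuckER's embedding table: share tucker.E between the two factorizations and minimize the sum of the two losses in a single optimizer step. JointKGEGloVe, GloVeLoss (from the sketch above), lam, and the batching scheme are illustrative, not part of the TuckER repository.

import torch


class JointKGEGloVe(torch.nn.Module):
    # Couples TuckER's entity embeddings with a GloVe-style factorization
    # of a co-occurrence matrix by reusing tucker.E and summing the losses.
    def __init__(self, tucker, num_entities):
        super(JointKGEGloVe, self).__init__()
        self.tucker = tucker                        # the TuckER model defined above
        d1 = tucker.E.weight.size(1)
        # GloVe context vectors and biases; the "word" vectors are tied to tucker.E
        self.E_ctx = torch.nn.Embedding(num_entities, d1)
        self.b = torch.nn.Embedding(num_entities, 1)
        self.b_ctx = torch.nn.Embedding(num_entities, 1)
        self.glove_loss = GloVeLoss()               # weighted least-squares term (sketch above)

    def forward(self, e1_idx, r_idx, targets, i_idx, j_idx, cooc, lam=1.0):
        # KGE term: TuckER's 1-N BCE loss over all candidate objects
        pred = self.tucker(e1_idx, r_idx)
        kge_loss = self.tucker.loss(pred, targets)
        # GloVe term: fit log co-occurrence counts, reusing tucker.E as the word vectors
        w_i = self.tucker.E(i_idx)
        g_loss = self.glove_loss(w_i, self.E_ctx(j_idx),
                                 self.b(i_idx).squeeze(-1),
                                 self.b_ctx(j_idx).squeeze(-1), cooc)
        # Gradients from both terms accumulate on tucker.E, tying the factorizations
        return kge_loss + lam * g_loss

Because tucker.E appears in both terms, gradients from the BCE loss and the GloVe loss accumulate on the same entity matrix, which is the weight tying required by the CMTF framework; lam trades off the two objectives. The triple batch (e1_idx, r_idx, targets) and the co-occurrence batch (i_idx, j_idx, cooc) can be sampled independently, and a single optimizer over the joint model's parameters, e.g. torch.optim.Adam(joint.parameters()), updates everything in one step.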

