Multiprocessing inside a PyTorch forward function

Problem description

I am trying to run a fully connected stack on each node embedding after the graph convolution layers. It works, but it is very slow, so I am trying to parallelize it.

Basically, I have this plain (sequential) version of the code:

import torch as th
import torch.nn as nn
import torch.nn.functional as F

# GCN is the graph convolution layer used by the rest of the model (defined elsewhere)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # first 3 is the feature size
        self.gcn = nn.ModuleList([
            GCN(3, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, F.leaky_relu),
            GCN(32, 32, None)])

        self.fc = nn.ModuleList([
            nn.Linear(32, 32, bias=True),
            nn.Linear(32, 32, bias=True),
            nn.Linear(32, 1, bias=True)])

    def forward(self, g, features):
        ret = th.zeros(len(features), 1, dtype=features.dtype, device=features.device)

        for layer in self.gcn:
            features = layer(g, features)

        # run every node embedding through the fully connected stack, one at a time
        for i, f in enumerate(features):
            for l in self.fc:
                f = l(f)
            ret[i][0] = f

        return ret

In theory, the second for loop should be easy to parallelize. The problem is that I cannot get it to work. I tried things like torch.multiprocessing:

import torch.multiprocessing as mp

def tail(layers, i, f, dictionary):
    # run one node embedding through the fully connected stack
    for l in layers:
        f = l(f)
    dictionary[i] = f

def forward(self, g, features):

    ret = th.zeros(len(features), 1, dtype=features.dtype, device=features.device)

    for layer in self.gcn:
        features = layer(g, features)

    # collect per-node results from the worker processes
    manager = mp.Manager()
    return_dict = manager.dict()

    proc = []
    for i, f in enumerate(features):
        p = mp.Process(target=tail, args=(self.fc, i, f, return_dict))
        p.start()
        proc.append(p)

    for p in proc:
        p.join()

    for i in return_dict.keys():
        ret[i] = return_dict[i]

    return ret

But no luck. Any ideas?

Tags: python, multiprocessing, pytorch

Solution
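A sketch of one way out, assuming the GCN output is an (N, 32) tensor of node embeddings: nn.Linear already operates on batched input, applying the layer to every row at once, so the second loop can be removed entirely instead of parallelized. The sizes and the th.randn stand-in below are illustrative, not from the original model.

import torch as th
import torch.nn as nn

# The three-layer fc stack from the question, applied to all
# node embeddings in a single batched call.
fc = nn.Sequential(
    nn.Linear(32, 32, bias=True),
    nn.Linear(32, 32, bias=True),
    nn.Linear(32, 1, bias=True))

features = th.randn(100, 32)  # stand-in for the GCN output: 100 node embeddings
ret = fc(features)            # every row goes through the stack at once
print(ret.shape)              # torch.Size([100, 1])

This computes the same thing as the per-node loop (each row of features passes through the same Linear layers) but runs as batched matrix multiplies, which is where the speedup comes from. torch.multiprocessing is generally a poor fit inside forward: each worker process gets its own copy of the tensors, and gradients do not flow back through results returned via a Manager dict.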

