python - Is there a way to feed an input to layers in parallel in PyTorch?
Problem description
I'm implementing ELMo with PyTorch. I want to feed word matrices to ELMo's CNNs, which have different filter sizes, and I'm looking for an efficient way to do it. Here's my code:
# fill the empty tensor iteratively
batch_size = word.size(0)
y = torch.zeros(batch_size, self.kernel_dim)
cnt = 0
for kernel in self.kernels:
    temp = kernel(word)
    pooled = torch.max(temp, dim=2)[0]  # max pooling over the time dimension
    y[:, cnt:cnt + pooled.size(1)] = pooled
    cnt += pooled.size(1)
# Using torch.cat
y = []
for kernel in self.kernels:
    temp = kernel(word)
    y.append(torch.max(temp, dim=2)[0])  # max pooling
y = torch.cat(y, dim=1)
I have two questions. First, is there a way to feed the input to all the layers in parallel? That would let me avoid the for loop and make my code more efficient. Second, which is faster: using torch.cat or filling a preallocated tensor?
Solution
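The answer body is missing from this scraped page. As a minimal, self-contained sketch of the torch.cat pattern the asker describes (the filter sizes, channel counts, and dummy input below are assumptions, not taken from the asker's model), assuming the convolutions are nn.Conv1d layers over a (batch, channels, length) input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch_size, in_channels, seq_len = 4, 16, 50
# Hypothetical filter configuration: (out_channels, kernel_size) pairs,
# mimicking ELMo-style character CNNs with several filter widths.
filter_specs = [(32, 2), (32, 3), (64, 4)]
kernels = nn.ModuleList(
    nn.Conv1d(in_channels, out_ch, k) for out_ch, k in filter_specs
)

word = torch.randn(batch_size, in_channels, seq_len)

pooled = []
for kernel in kernels:
    temp = kernel(word)                       # (batch, out_ch, seq_len - k + 1)
    pooled.append(torch.max(temp, dim=2)[0])  # max over the time dimension
y = torch.cat(pooled, dim=1)                  # (batch, sum of out_channels)

print(y.shape)  # torch.Size([4, 128])
```

On the first question: because each convolution has a different kernel size, the layers cannot be fused into one batched convolution call, so a Python loop over the ModuleList is the usual pattern; on a GPU the kernels are launched asynchronously, so the loop overhead is typically small. On the second question, torch.cat is generally the idiomatic choice here, since it avoids preallocating the output and hard-coding the total pooled width, though the actual speed difference is best confirmed by benchmarking on your own shapes.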