How to stack a tensor of shape (n, k) with tensors of shape (k) in libtorch?

Problem Description

torch::stack accepts a c10::TensorList and works fine when all the given tensors have the same shape. However, if you pass it the output of a previous torch::stack call together with a tensor of the original shape, it fails with a memory access violation.

More concretely, suppose we have three tensors of shape (4), plus one more tensor y of the same shape:

torch::Tensor x1 = torch::randn({4});
torch::Tensor x2 = torch::randn({4});
torch::Tensor x3 = torch::randn({4});
torch::Tensor y = torch::randn({4});

The first round of stacking is trivial:

torch::Tensor stacked_xs = torch::stack({x1, x2, x3}); // shape (3, 4)

However, trying this:

torch::Tensor stacked_result = torch::stack({y, stacked_xs}); // fails: shapes (4) and (3, 4) differ

will fail. I would like the same behavior as np.vstack in Python, where this is allowed and works. What should I do?

Tags: c++, torch, libtorch

Solution

You can add a dimension to y with torch::unsqueeze, then concatenate with torch::cat (cat, not stack; the naming differs from numpy, but the result is what you asked for):

torch::Tensor x1 = torch::randn({4});
torch::Tensor x2 = torch::randn({4});
torch::Tensor x3 = torch::randn({4});
torch::Tensor y = torch::randn({4});

torch::Tensor stacked_xs = torch::stack({x1, x2, x3});                    // shape (3, 4)
torch::Tensor stacked_result = torch::cat({y.unsqueeze(0), stacked_xs}); // (1, 4) cat (3, 4) -> (4, 4)
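
Putting it together as a compilable program (a minimal, self-contained sketch; the includes, main, and the final size check are additions for illustration):

#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Tensor x1 = torch::randn({4});
    torch::Tensor x2 = torch::randn({4});
    torch::Tensor x3 = torch::randn({4});
    torch::Tensor y  = torch::randn({4});

    // Stack three (4) tensors into a single (3, 4) tensor.
    torch::Tensor stacked_xs = torch::stack({x1, x2, x3});

    // unsqueeze(0) promotes y from (4) to (1, 4), so cat along dim 0 is valid.
    torch::Tensor stacked_result = torch::cat({y.unsqueeze(0), stacked_xs});

    std::cout << stacked_result.sizes() << std::endl; // prints [4, 4]
    return 0;
}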

You can also flatten your first stack and then reshape it, if you prefer:

torch::Tensor stacked_xs = torch::stack({x1, x2, x3});
torch::Tensor stacked_result = torch::cat({y, stacked_xs.view({-1})}).view({4, 4}); // 4 + 12 = 16 elements, reshaped to (4, 4)
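
If you need this np.vstack-like behavior in several places, you could wrap the unsqueeze-and-cat idea in a small helper. The sketch below is an illustration only: vstack_like is a hypothetical name, not a libtorch API.

#include <torch/torch.h>
#include <vector>

// Hypothetical helper that mimics np.vstack: 1-D inputs of shape (k)
// are promoted to (1, k), then everything is concatenated along dim 0.
torch::Tensor vstack_like(const std::vector<torch::Tensor>& tensors) {
    std::vector<torch::Tensor> rows;
    rows.reserve(tensors.size());
    for (const auto& t : tensors) {
        rows.push_back(t.dim() == 1 ? t.unsqueeze(0) : t);
    }
    return torch::cat(rows, /*dim=*/0);
}

Usage would then be vstack_like({y, stacked_xs}), giving a (4, 4) tensor. Note that the Python side gained torch.vstack in PyTorch 1.8, and the C++ API generally mirrors these ops, so depending on your libtorch version torch::vstack may already be available.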
