Deploying a PyTorch pretrained model to the Movidius Neural Compute Stick

Problem description

I trained a model with Detectron2 (it is faster_rcnn_R_50_FPN_3x), and afterwards I saved the model to my drive with torch:

model_save_name = 'model.pkl'
path = F"/content/drive/My Drive/{model_save_name}"
torch.save(trainer.model, path)
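As a side note, torch.save on trainer.model pickles the entire nn.Module, so loading it later requires the same Detectron2 code to be importable; a variant that stores only the learned weights would look like this (a sketch, not what was used above):

import torch

# Hypothetical variant: save only the state_dict; loading it back later
# requires rebuilding the same model architecture first.
weights_path = "/content/drive/My Drive/model_weights.pth"  # hypothetical filename
torch.save(trainer.model.state_dict(), weights_path)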

After that I tried to convert it to ONNX; I followed this guide: https://www.programmersought.com/article/29567352620/

But when I downloaded the model locally and ran the following code on it:

import torch, torchvision
device = torch.device("cpu")
model = torch.load("modelx.pkl", map_location=device)
model.eval()
batch_size = 1
input_shape = (1, 224, 224)  # I also tried (3, 224, 224)
output_path = "output"
input_data_shape = torch.randn(batch_size, *input_shape, device=device)
torch.onnx.export(model, input_data_shape, output_path, verbose=True)

it keeps giving me this error:

/home/momo/Desktop/detectron2/detectron2/modeling/meta_arch/rcnn.py:224: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  images = [x["image"].to(self.device) for x in batched_inputs]
Traceback (most recent call last):
  File "Convert.py", line 10, in <module>
    torch.onnx.export(model, input_data_shape, output_path, verbose=True)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 275, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 689, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 458, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 422, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 373, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1160, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/momo/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/momo/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/momo/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1039, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/momo/Desktop/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 146, in forward
    return self.inference(batched_inputs)
  File "/home/momo/Desktop/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 199, in inference
    images = self.preprocess_image(batched_inputs)
  File "/home/momo/Desktop/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 224, in preprocess_image
    images = [x["image"].to(self.device) for x in batched_inputs]
  File "/home/momo/Desktop/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 224, in <listcomp>
    images = [x["image"].to(self.device) for x in batched_inputs]
IndexError: too many indices for tensor of dimension 3

Any idea why this happens, and how can I fix it?

Tags: python, faster-rcnn

Solution
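Based on the traceback, the failure happens before any ONNX-specific step: Detectron2's GeneralizedRCNN does not take a plain (N, C, H, W) tensor. Its preprocess_image iterates over batched_inputs and expects a list of dicts, one per image, each holding a (C, H, W) tensor under the "image" key, so indexing the random tensor with x["image"] raises the IndexError above. Below is a minimal sketch of the input format the model expects (the 224x224 size and the "height"/"width" keys are illustrative assumptions, not values from the question):

import torch

device = torch.device("cpu")
model = torch.load("modelx.pkl", map_location=device)
model.eval()

# GeneralizedRCNN wants a list of per-image dicts, not a stacked batch tensor.
dummy_image = torch.randn(3, 224, 224, device=device)          # hypothetical input size
inputs = [{"image": dummy_image, "height": 224, "width": 224}]  # optional output-size hints

with torch.no_grad():
    outputs = model(inputs)  # exercises the same code path that failed during tracing

Even with inputs in this form, tracing the whole GeneralizedRCNN through a bare torch.onnx.export tends to be unreliable because of its data-dependent control flow, so the usual route is Detectron2's own export tooling (the detectron2.export module and tools/deploy/export_model.py in the Detectron2 repository) to produce an ONNX model, followed by OpenVINO's Model Optimizer to convert it for the Movidius stick. Treat this as a sketch of the direction, not a verified end-to-end recipe.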

