'/bin/convert_to_uff.py': No such file or directory

Problem description

I am trying to optimize YoloV3 using TensorRT.

I came across a post titled "Have you optimized your deep learning model before deployment?", which uses Docker.

I installed nvidia-docker2 by following Enabling GPUs in the Container Runtime Ecosystem.
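
For reference, a minimal sketch of that installation on Ubuntu, assuming the nvidia-docker apt repository has already been added as described in that guide:

# Install the nvidia-docker2 runtime and reload the Docker daemon configuration
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Sanity check: the runtime should expose the GPU inside a CUDA container
sudo docker run --rm --runtime=nvidia nvcr.io/nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 nvidia-smi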

Then I pulled the latest version of the Docker image with docker pull aminehy/tensorrt-opencv-python3:version-1.3.

Here are the images:

$ sudo docker images
REPOSITORY                        TAG                             IMAGE ID            CREATED             SIZE
nvcr.io/nvidia/cuda               10.1-cudnn7-devel-ubuntu18.04   b4879c167fc1        2 weeks ago         3.67GB
aminehy/tensorrt-opencv-python3   version-1.3                     0302e477816d        4 months ago        5.36GB
aminehy/tensorrt-opencv-python3   latest                          604502819d12        4 months ago        4.94GB
aminehy/tensorrt-opencv-python3   version-1.1                     d693210c500c        4 months ago        4.94GB

I ran:

$ sudo docker run -it --rm -v $(pwd):/workspace --runtime=nvidia -w /workspace -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY aminehy/tensorrt-opencv-python3:version-1.3

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 19.05 (build 6392482)

NVIDIA TensorRT 5.1.5 (c) 2016-2019, NVIDIA CORPORATION.  All rights reserved.
Container image (c) 2019, NVIDIA CORPORATION.  All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

root@a38b20eeb740:/workspace# cd /opt/tensorrt/python/
root@a38b20eeb740:/opt/tensorrt/python# chmod +x python_setup.sh 
root@a38b20eeb740:/opt/tensorrt/python# ./python_setup.sh
Requirement already satisfied: Pillow in /usr/local/lib/python3.5/dist-packages (from -r /opt/tensorrt/samples/sampleSSD/requirements.txt (line 1)) (6.0.0)
WARNING: You are using pip version 19.2.1, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Ignoring torch: markers 'python_version == "3.7"' don't match your environment
......
......
......
Setting up graphsurgeon-tf (5.1.5-1+cuda10.1) ...
Setting up uff-converter-tf (5.1.5-1+cuda10.1) ...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/uff/__init__.py", line 1, in <module>
    from uff import converters, model  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/model/__init__.py", line 1, in <module>
    from . import uff_pb2 as uff_pb  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/model/uff_pb2.py", line 6, in <module>
    from google.protobuf.internal import enum_type_wrapper
ImportError: No module named google.protobuf.internal
chmod: cannot access '/bin/convert_to_uff.py': No such file or directory

It seems that convert_to_uff.py, which the setup script calls internally, cannot be found under /bin.
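
One way to narrow this down is to check which interpreter the uff package was installed for and whether protobuf is importable there. The python2.7 path comes from the traceback above; the exact location of convert_to_uff.py inside the uff package is an assumption:

# Reproduce the failing import with the interpreter the traceback points at
python2 -c "import google.protobuf.internal"

# The converter script usually ships inside the uff package rather than /bin (assumption)
ls /usr/lib/python2.7/dist-packages/uff/bin/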

What is going on, and where did I go wrong?

Tags: python, docker, tensorrt, nvidia-docker

Solution


Try reinstalling protobuf to make sure it is available:

pip install protobuf
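
Since the traceback points at /usr/lib/python2.7, the uff converter is being set up against Python 2, so it is worth targeting that interpreter explicitly and then re-running the sample setup. A sketch, not verified inside that exact container:

# Install protobuf for the interpreter that uff is using (Python 2.7 per the traceback)
python2 -m pip install protobuf

# Verify that the import which failed before now succeeds
python2 -c "import google.protobuf.internal; print('protobuf OK')"

# Re-run the TensorRT sample dependency setup
/opt/tensorrt/python/python_setup.sh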
