MLPerf Machine Learning Benchmark Hands-On Tutorial (2): object_detection

caiyishuai 2021-01-25 14:17

The object_detection benchmark trains Mask R-CNN with a ResNet50 backbone. Reference: https://github.com/Caiyishuai/training/tree/master/object_detection

Clone the MLPerf repository locally

mkdir -p mlperf
cd mlperf
git clone https://github.com/mlperf/training.git

Install CUDA and Docker

source training/install_cuda_docker.sh

Build the image

cd training/object_detection/
nvidia-docker build . -t mlperf/object_detection

Prepare the dataset

source download_dataset.sh

The script's contents are shown below. If downloading in the shell is slow, you can fetch the files with a download manager instead.

#!/bin/bash

# Get COCO 2014 data sets
mkdir -p pytorch/datasets/coco
pushd pytorch/datasets/coco

curl -O https://dl.fbaipublicfiles.com/detectron/coco/coco_annotations_minival.tgz
tar xzf coco_annotations_minival.tgz

curl -O http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip

curl -O http://images.cocodataset.org/zips/val2014.zip
unzip val2014.zip

curl -O http://images.cocodataset.org/annotations/annotations_trainval2014.zip
unzip annotations_trainval2014.zip

# TBD: MD5 verification
# $md5sum *.zip *.tgz
#f4bbac642086de4f52a3fdda2de5fa2c  annotations_trainval2017.zip
#cced6f7f71b7629ddf16f17bbcfab6b2  train2017.zip
#442b8da7639aecaf257c1dceb8ba8c80  val2017.zip
#2d2b9d2283adb5e3b8d25eec88e65064  coco_annotations_minival.tgz

popd
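If curl is slow, the same four archives can be fetched with wget's resumable mode instead (a sketch; it assumes wget is installed, and the URLs are the ones used in download_dataset.sh above):

```shell
#!/bin/bash
# Resumable downloads (-c) of the COCO 2014 archives used by download_dataset.sh
mkdir -p pytorch/datasets/coco
pushd pytorch/datasets/coco

wget -c https://dl.fbaipublicfiles.com/detectron/coco/coco_annotations_minival.tgz
wget -c http://images.cocodataset.org/zips/train2014.zip
wget -c http://images.cocodataset.org/zips/val2014.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2014.zip

popd
```

`wget -c` resumes a partially downloaded file after an interruption, which helps with the multi-gigabyte train2014.zip.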

If you used a download manager to place the archives in the current directory, change the script to:

tar xzf coco_annotations_minival.tgz
unzip train2014.zip
unzip val2014.zip
unzip annotations_trainval2014.zip

# TBD: MD5 verification
# $md5sum *.zip *.tgz
#f4bbac642086de4f52a3fdda2de5fa2c  annotations_trainval2017.zip
#cced6f7f71b7629ddf16f17bbcfab6b2  train2017.zip
#442b8da7639aecaf257c1dceb8ba8c80  val2017.zip
#2d2b9d2283adb5e3b8d25eec88e65064  coco_annotations_minival.tgz

Launch the container

nvidia-docker run -v /root/worktable/:/workspace -t -i --rm --ipc=host mlperf/object_detection "cd mlperf/training/object_detection && ./run_and_time.sh"
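The command above passes the whole run as a single command string; the steps that follow instead work inside an interactive shell. A sketch of starting one with the same mounts and flags (assuming the image provides /bin/bash):

```shell
# Start an interactive shell in the image instead of running the benchmark directly.
nvidia-docker run -v /root/worktable/:/workspace -t -i --rm --ipc=host \
    mlperf/object_detection /bin/bash
```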

Inside the container, change to the run directory

cd mlperf/training/object_detection 

First, run ./install.sh

./install.sh

install.sh invokes pytorch/setup.py, which needs to download maskrcnn-benchmark; you can download it in advance and place it under pytorch/.
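A sketch of fetching it in advance (the upstream repository is facebookresearch/maskrcnn-benchmark; whether install.sh picks up a pre-existing clone at this exact path is an assumption, so check your install.sh):

```shell
# Pre-clone maskrcnn-benchmark under pytorch/ so setup.py does not need the network.
cd mlperf/training/object_detection/pytorch
git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
```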

The install also downloads R-50.pkl over the network; you can download it to your local machine first, then place it under /root/.torch/models/ inside the container.
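A sketch of copying a pre-downloaded R-50.pkl into the running container (the `docker ps -qf ancestor=...` filter is one way to find the container ID; adjust it if you named the container):

```shell
# Find the running object_detection container and copy the backbone weights in.
CONTAINER_ID=$(docker ps -qf ancestor=mlperf/object_detection)
docker exec "${CONTAINER_ID}" mkdir -p /root/.torch/models
docker cp R-50.pkl "${CONTAINER_ID}:/root/.torch/models/R-50.pkl"
```

The `docker exec ... mkdir -p` step creates the target directory first, since `docker cp` does not create missing parent directories.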

Run the benchmark

./run_and_time.sh

Note: for single-node multi-GPU training, modify the command in ./run_and_time.sh.

Check your machine's GPU status:

watch -d -n 1 nvidia-smi

The PyTorch command for single-node multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=NUM_GPUS \
               YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other
               arguments of your training script)

Change ./run_and_time.sh accordingly.

Running watch -d -n 1 nvidia-smi, you can then see all of your GPUs busy.
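The original post showed the edited run_and_time.sh only as an image. A sketch of the idea, substituting the distributed launcher into the training invocation (the script name tools/train_mlperf.py and the config file are assumptions based on the MLPerf reference layout; keep whatever other arguments your run_and_time.sh already passes):

```shell
# Sketch: single-node multi-GPU variant of the python call inside run_and_time.sh.
NUM_GPUS=2   # set to the number of GPUs on your machine
python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} \
    tools/train_mlperf.py --config-file "configs/e2e_mask_rcnn_R_50_FPN_1x.yaml"
```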

Reference: https://blog.csdn.net/han2529386161/article/details/102723482
