This article is a hands-on walkthrough of converting an mmdetection-trained model to ONNX and TensorRT; I hope it is a useful reference for developers facing the same task.
1. Notes
1. This walkthrough uses a Cascade R-CNN model trained with the mmdetection framework;
2. when converting a model, the tool versions in the conversion environment must match those used in the training environment;
3. I was never able to install TensorRT natively, so I work inside a Docker image instead.
Reference: link
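A quick way to check item 2 above is to print the relevant package versions in both the training environment and the container and diff them. A minimal stdlib sketch (the package list is illustrative):

```python
# Minimal sketch: report installed versions of the packages that must match
# between the training environment and the conversion container.
from importlib import metadata

def report_versions(packages):
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # not installed in this environment
    return versions

# Run in both environments and compare the dictionaries, e.g.:
# report_versions(['torch', 'torchvision', 'mmdet', 'mmdeploy', 'tensorrt'])
```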
2. Using the Docker Image
1.0 Pull the base image environment
export TAG=openmmlab/mmdeploy:ubuntu20.04-cuda11.8-mmdeploy
docker pull $TAG
The base environment contains the following; the Torch version here must match the training environment:
OS = Ubuntu20.04
CUDA = 11.8
CUDNN = 8.9
Python = 3.8.10
Torch= 2.0.0
TorchVision= 0.15.0
TorchScript= 2.0.0
TensorRT= 8.6.1.6
ONNXRuntime= 1.15.1
OpenVINO= 2022.3.0
ncnn= 20230816
openppl= 0.8.1
Run the Docker environment:
export TAG=openmmlab/mmdeploy:ubuntu20.04-cuda11.8-mmdeploy
docker run --gpus=all -it --rm $TAG
Common issue:
docker: Error response from daemon: could not select device driver "" with capabilities: [gpu].
# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
3. Model Conversion
1.0 Install mmdetection inside the container; the version must match the training environment
# Install mmdetection. The conversion needs the model config files from the
# mmdetection repo to build the PyTorch nn module.
git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
cd mmdetection
mim install -v -e .
cd ..
mim install mmdet

# Download the Faster R-CNN model weights
wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth

# Run the conversion command for an end-to-end export
python3 mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/faster-rcnn \
    --device cuda \
    --dump-info
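The deploy config name encodes the dynamic input range the TensorRT engine is built for. Roughly, `detection_tensorrt_dynamic-320x320-1344x1344.py` boils down to a backend config like the sketch below (abridged; treat the exact field values, especially `opt_shape`, as an approximation of mmdeploy's config rather than a verbatim copy):

```python
# Abridged sketch of the TensorRT backend settings implied by the config name:
# the engine accepts any input between min_shape and max_shape and is tuned
# for opt_shape.
backend_config = dict(
    type='tensorrt',
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 320, 320],    # smallest accepted input
                    opt_shape=[1, 3, 800, 1344],   # shape the engine is tuned for
                    max_shape=[1, 3, 1344, 1344],  # largest accepted input
                )))
    ])
```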
Example: converting my own model
python3 mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    checkpoints/shebei/cascade-rcnn_r101_fpn_1x_coco.py \
    checkpoints/shebei/epoch_16.pth \
    checkpoints/shebei/test_img/2020_180305.jpg \
    --work-dir mmdeploy_model/cascade-rcnn0205 \
    --device cuda \
    --dump-info
The resulting output:
root@f88294e16365:~/workspace/mmdeploy_model/cascade-rcnn0205# ll -h
total 747M
drwxr-xr-x 2 root root 4.0K Feb 5 02:01 ./
drwxr-xr-x 9 root root 4.0K Feb 5 01:59 ../
-rw-r--r-- 1 root root 342 Feb 5 01:59 deploy.json
-rw-r--r-- 1 root root 2.4K Feb 5 01:59 detail.json
-rw-r--r-- 1 root root 403M Feb 5 02:01 end2end.engine
-rw-r--r-- 1 root root 337M Feb 5 01:59 end2end.onnx
-rw-r--r-- 1 root root 3.9M Feb 5 02:01 output_pytorch.jpg
-rw-r--r-- 1 root root 3.9M Feb 5 02:01 output_tensorrt.jpg
-rw-r--r-- 1 root root 3.9K Feb 5 01:59 pipeline.json
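`deploy.json`, `pipeline.json`, and `detail.json` describe the exported model for the mmdeploy SDK, alongside the ONNX file and the serialized TensorRT engine. A small helper can sanity-check a work dir like the one above before shipping it (a stdlib-only sketch; the function name is mine):

```python
import json
from pathlib import Path

def summarize_model_dir(model_dir):
    """Return the files mmdeploy wrote plus the parsed deploy.json, if any."""
    model_dir = Path(model_dir)
    files = sorted(p.name for p in model_dir.iterdir())
    deploy = model_dir / 'deploy.json'
    meta = json.loads(deploy.read_text()) if deploy.exists() else {}
    return files, meta
```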
Note: with mmdet > 2.0, the conversion fails when the number of class names exceeds 20:
File "/home/ai-developer/data/mmdetection-main/mmdet/visualization/palette.py", line 65, in get_palette
assert len(dataset_palette) >= num_classes,
AssertionError: The length of palette should not be less than num_classes.
I have opened an issue and will update this post once a fix is found.
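Until the issue is fixed, one workaround is to supply a palette at least as long as your class list, since the assertion that fails only requires `len(palette) >= num_classes`. A hypothetical sketch (the function name is mine, not mmdet's):

```python
# Hypothetical workaround sketch: build a palette with one RGB color per class,
# so the `len(palette) >= num_classes` check in
# mmdet/visualization/palette.py can pass.
import random

def make_palette(num_classes, seed=0):
    rng = random.Random(seed)  # seeded so colors are stable across runs
    return [tuple(rng.randint(0, 255) for _ in range(3))
            for _ in range(num_classes)]

# e.g. place the result in your dataset's metainfo alongside `classes`.
```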
4. Python API
from mmdeploy_runtime import Detector
import cv2

# Read the image
img = cv2.imread('mmdetection/demo/demo.jpg')
# Create the detector
detector = Detector(model_path='mmdeploy_models/faster-rcnn', device_name='cuda', device_id=0)
# Run inference
bboxes, labels, _ = detector(img)
# Filter the results by score threshold and draw them on the original image
indices = [i for i in range(len(bboxes))]
for index, bbox, label_id in zip(indices, bboxes, labels):
    [left, top, right, bottom], score = bbox[0:4].astype(int), bbox[4]
    if score < 0.3:
        continue
    cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0))
cv2.imwrite('output_detection.png', img)
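Each row of `bboxes` is `[x1, y1, x2, y2, score]`, one row per detection, with `labels` aligned by index. The filtering loop above can be vectorized into a reusable helper (a numpy-based sketch; the function name is mine):

```python
import numpy as np

def filter_detections(bboxes, labels, score_thr=0.3):
    """Split an (N, 5) [x1, y1, x2, y2, score] array into integer boxes,
    scores, and labels, keeping only detections above the threshold."""
    bboxes = np.asarray(bboxes)
    labels = np.asarray(labels)
    keep = bboxes[:, 4] >= score_thr
    return bboxes[keep, :4].astype(int), bboxes[keep, 4], labels[keep]
```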
5. So far the Python API has not brought inference latency below 100 ms
The bottleneck is image loading: reading the image with OpenCV averages about 200 ms, while model inference itself takes roughly 50 ms.
from mmdeploy_runtime import Detector
import cv2
import time

detector = Detector(model_path='mmdeploy_model/cascade-rcnn0205', device_name='cuda', device_id=0)
starttime = time.time()
for i in range(1000):
    img = cv2.imread('checkpoints/shebei/test_img/2020_180305.jpg')
    bboxes, labels, _ = detector(img)
    indices = [i for i in range(len(bboxes))]
    # for index, bbox, label_id in zip(indices, bboxes, labels):
    #     [left, top, right, bottom], score = bbox[0:4].astype(int), bbox[4]
    #     if score < 0.3:
    #         continue
    #     cv2.rectangle(img, (left, top), (right, bottom), (0, 0, 255))
    # cv2.imwrite('output_detection.png', img)
endtime = time.time() - starttime
print(endtime)
print(endtime / 1000)
[2024-02-05 02:26:04.252] [mmdeploy] [warning] [trt_net.cpp:24] TRTNet: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
258.1476366519928
0.2581476366519928
root@f88294e16365:~/workspace# python3 inference_model_python_api.py
[2024-02-05 02:34:40.087] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "mmdeploy_model/cascade-rcnn0205"
[2024-02-05 02:34:40.986] [mmdeploy] [warning] [trt_net.cpp:24] TRTNet: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
202.68383264541626
0.20268383264541626
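Since decoding runs on the CPU while inference runs on the GPU, the ~200 ms read cost can be hidden by prefetching the next image on a background thread while the GPU processes the current one. A minimal sketch (the loader and inference callables are injected so the example stays self-contained; in practice they would be `cv2.imread` and the `Detector` call):

```python
import queue
import threading

def run_pipeline(paths, load, infer, prefetch=4):
    """Overlap image loading with inference: a producer thread decodes ahead
    while the main thread runs inference on already-decoded images."""
    q = queue.Queue(maxsize=prefetch)

    def producer():
        for p in paths:
            q.put(load(p))
        q.put(None)  # sentinel: no more images

    t = threading.Thread(target=producer, daemon=True)
    t.start()
    results = []
    while True:
        img = q.get()
        if img is None:
            break
        results.append(infer(img))
    t.join()
    return results
```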
That wraps up this walkthrough of converting mmdetection models to ONNX and TensorRT; I hope it proves useful.