This article covers accelerating EfficientDet inference with TensorRT (Part 2), focusing on conversion pitfalls and their fixes; hopefully it offers developers a useful reference.
1. References
- Why you run faster than the official implementation
- A plain but SoTA-topping PyTorch implementation of EfficientDet
- The PyTorch-to-TensorRT conversion workflow
- Pytorch-onnx-tensorrt model conversion in one article
- Installing onnx-tensorrt
- Debugging ONNX forward inference
2. Important Notes
- The paper reports a 16 ms inference time for EfficientDet-D1 (640), but in practice it is not that fast; run the official efficientdet repository yourself to measure the real speed.
- On a GTX 1650 (4GB) with EfficientDet-D0 (512), TensorRT gives roughly a 10x speedup for inference without post-processing, and roughly 4x when post-processing is included.
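Speedup figures like the ones above can be reproduced with a simple timing harness. The sketch below is generic; the commented-out `infer_*` callables are hypothetical placeholders for your PyTorch and TensorRT inference paths:

```python
import time

def benchmark_ms(fn, warmup=10, iters=100):
    """Average latency of fn() in milliseconds, after warm-up runs."""
    for _ in range(warmup):              # warm-up stabilizes clocks and caches
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical usage with the two inference paths:
# t_torch = benchmark_ms(lambda: infer_pytorch(image))
# t_trt   = benchmark_ms(lambda: infer_tensorrt(image))
# print(f"TensorRT speedup: {t_torch / t_trt:.1f}x")
```

Note that for GPU inference you would also need to synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, otherwise the measured time only covers kernel launch.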
3. Known Issues
- BatchedNMS_TRT not supported

Related links:
  - batchedNMSPlugin
  - Code sample to add custom importer for BatchedNMS_TRT in builtin_op_importers.cpp
  - How to use NMS with Pytorch model (that was converted to ONNX -> TensorRT) #795
  - https://github.com/NVIDIA/TensorRT/blob/master/CHANGELOG.md
  - https://github.com/NVIDIA/TensorRT/tree/master/plugin/batchedNMSPlugin

Error message:

```
ERROR:EngineBuilder:Failed to load ONNX file: /home/yichao/Downloads/saved_model_onnx-1/model.onnx
ERROR:EngineBuilder:In node 882 (parseGraph): UNSUPPORTED_NODE: No importer registered for op: BatchedNMS_TRT
```

Cause: only TensorRT 8.0.1 and later support the EfficientNMS_TRT plugin.
Fix: switch to a matching TensorRT version.
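Since the fix hinges on whether the installed TensorRT is at least 8.0.1, a small version gate lets a conversion script decide which NMS plugin to target automatically. This is a pure-Python sketch; in practice you would feed it `tensorrt.__version__`:

```python
def supports_efficient_nms(trt_version: str) -> bool:
    """EfficientNMS_TRT is only available in TensorRT >= 8.0.1."""
    parts = tuple(int(p) for p in trt_version.split(".")[:3])
    return parts >= (8, 0, 1)

# Hypothetical use when choosing which NMS op the exporter should emit:
# nms_op = "EfficientNMS_TRT" if supports_efficient_nms(trt.__version__) else "BatchedNMS_TRT"
```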
- EfficientNMS_TRT not supported

Related links:
  - nmsPlugin
  - EfficientNMS_TRT not working on jetson nano (TensorRT 8.0.1)

Error message:

```
ERROR:EngineBuilder:Failed to load ONNX file: /home/yichao/Downloads/saved_model_onnx-1/model.onnx
ERROR:EngineBuilder:In node 853 (parseGraph): UNSUPPORTED_NODE: No importer registered for op: EfficientNMS_TRT
```

Cause: the exported plugins are incompatible with TensorRT versions below 8.0.1; the --legacy_plugins flag is needed for compatibility. As the tool's help text puts it: "--legacy_plugins allows falling back to older plugins on systems where a version lower than TensorRT 8.0.1 is installed. This will result in substantially slower inference times however, but is provided for compatibility."

Fix: add the --legacy_plugins flag:

```shell
python create_onnx.py \
    --input_shape '1,512,512,3' \
    --saved_model /home/yichao/Downloads/efficientdet_d0_coco17_tpu-32/saved_model \
    --onnx /home/yichao/Downloads/saved_model_onnx-1/model.onnx \
    --legacy_plugins
```
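The flag handling above can be wrapped in a small helper that assembles the `create_onnx.py` argument list and appends `--legacy_plugins` only when targeting a pre-8.0.1 TensorRT. This is a sketch using only the script name and flags shown above:

```python
def build_create_onnx_cmd(saved_model, onnx_path,
                          input_shape="1,512,512,3", legacy_plugins=False):
    """Assemble the create_onnx.py command line as an argv list."""
    cmd = ["python", "create_onnx.py",
           "--input_shape", input_shape,
           "--saved_model", saved_model,
           "--onnx", onnx_path]
    if legacy_plugins:                    # only needed below TensorRT 8.0.1
        cmd.append("--legacy_plugins")
    return cmd

# Run it with e.g.:
# subprocess.run(build_create_onnx_cmd(model_dir, out_path, legacy_plugins=True), check=True)
```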
- Failed to parse the ONNX model

Error message:

```
ERROR:EngineBuilder:Failed to load ONNX file: /home/yichao/Downloads/saved_model_onnx/model.onnx
ERROR:EngineBuilder:In node -1 (parseGraph): UNSUPPORTED_NODE: Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
```

Cause: the installed onnx package is too new.
Fix 1: downgrade onnx to the version pinned in requirements.txt.
Fix 2: if that does not help, download the pretrained model in the other available format: (1) AutoML Models, or (2) TFOD Models.
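To see which onnx version the repository expects, you can read the pin straight out of requirements.txt. A minimal sketch; it assumes the file uses the common `package==version` pin format:

```python
def pinned_version(requirements_text: str, package: str):
    """Return the version pinned as 'package==x.y.z', or None if unpinned."""
    for line in requirements_text.splitlines():
        line = line.strip()
        if line.startswith(package + "=="):
            return line.split("==", 1)[1]
    return None

# Hypothetical comparison against the installed package:
# import onnx
# pin = pinned_version(open("requirements.txt").read(), "onnx")
# print(pin == onnx.__version__)
```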
- build_engine.py fails to build the engine

Error messages:

```
[TensorRT] ERROR: [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (fpn_cells/cell_0/fnode0/add_n/add: broadcast dimensions must be conformable )
ERROR:EngineBuilder:Failed to load ONNX file: /media/yichao/蚁巢文件/YOYOFile/ModelZoo/EfficientDet模型/D7/saved_model_onnx/model.onnx
ERROR:EngineBuilder:In node 681 (parseGraph): INVALID_NODE: Invalid Node - fpn_cells/cell_0/fnode0/add_n/add [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (fpn_cells/cell_0/fnode0/add_n/add: broadcast dimensions must be conformable )

[TensorRT] ERROR: [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (mul_5: broadcast dimensions must be conformable )
ERROR:EngineBuilder:Failed to load ONNX file: /media/yichao/蚁巢文件/YOYOFile/ModelZoo/EfficientDet模型/D7/saved_model_onnx/model.onnx
ERROR:EngineBuilder:In node 1452 (parseGraph): INVALID_NODE: Invalid Node - mul_5 [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (mul_5: broadcast dimensions must be conformable )
```

Cause: each EfficientDet variant expects a different input shape, and the ONNX was exported with the wrong one.
Fix: pass the input shape that matches the model. For EfficientDet-D0:

```shell
python create_onnx.py \
    --input_shape '1,512,512,3' \
    --saved_model /path/to/saved_model \
    --onnx /path/to/model.onnx
```

For EfficientDet-D7:

```shell
python create_onnx.py \
    --input_shape '1,1536,1536,3' \
    --saved_model /path/to/saved_model \
    --onnx /path/to/model.onnx
```
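Since the only model-specific part of the command is the resolution, a tiny lookup helps avoid typos. The table below covers only the variants mentioned in this article (D0, D1, D7); sizes for other variants would need to be taken from the model config:

```python
# Input resolutions for the variants discussed here (others omitted).
INPUT_SIZE = {"d0": 512, "d1": 640, "d7": 1536}

def input_shape_arg(variant: str, batch: int = 1) -> str:
    """Build the --input_shape value expected by create_onnx.py (NHWC)."""
    size = INPUT_SIZE[variant.lower()]
    return f"{batch},{size},{size},3"

# input_shape_arg("D0") -> "1,512,512,3"
# input_shape_arg("D7") -> "1,1536,1536,3"
```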