This post is a quick look at TensorRT batch inference: does increasing the batch size actually speed up inference?
Keywords: tensorrt, int8, float16, batch inference
Note: the results of this test are flawed. For a correct evaluation, see the follow-up post: [tensorrt] — trtexec dynamic batch support and batch inference timing benchmark (【tensorrt】——trtexec动态batch支持与batch推理耗时评测).
According to the article on int8 quantization, NVIDIA TensorRT's int8 inference gets faster at larger batch sizes. Here I put that claim to a real test.
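The post does not include the benchmark script, so below is a minimal timing sketch of how such a test can be run with the TensorRT Python API (TensorRT 7/8-era bindings). The engine filename, the 3x1024x1024 input resolution, and the assumption that the engine was built with a dynamic batch dimension (with an optimization profile covering the tested batch range) are all illustrative, not details from the post. The times are reported per image, which is how the flat numbers below read:

```python
import time

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

# Hypothetical engine file; assumed built with a dynamic batch dimension.
with open("ddrnet23_fp16.engine", "rb") as f:
    engine = trt.Runtime(LOGGER).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

C, H, W = 3, 1024, 1024  # assumed input resolution
for batch in range(1, 25):
    context.set_binding_shape(0, (batch, C, H, W))  # set the dynamic batch dim
    # Allocate device buffers matching the current shapes (fp32 I/O assumed;
    # buffer contents are irrelevant for a pure timing run).
    buffers = []
    for i in range(engine.num_bindings):
        size = trt.volume(context.get_binding_shape(i)) * np.float32().itemsize
        buffers.append(cuda.mem_alloc(size))
    bindings = [int(b) for b in buffers]

    context.execute_v2(bindings)  # warm-up, so lazy initialization is not timed
    cuda.Context.synchronize()

    runs = 20
    t0 = time.time()
    for _ in range(runs):
        context.execute_v2(bindings)
    cuda.Context.synchronize()
    per_image = (time.time() - t0) / runs / batch  # average time per image
    print(f"with batch:{batch}, inference time:{per_image:.4f} s")
```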
- DDRNet-23 model at float16 precision, run through the TensorRT Python API (see the timing sketch above). As the numbers show, batched inference brings no real speedup.
with batch:1, inference time:0.0089 s
with batch:2, inference time:0.0078 s
with batch:3, inference time:0.0076 s
with batch:4, inference time:0.0074 s
with batch:5, inference time:0.0075 s
with batch:6, inference time:0.0072 s
with batch:7, inference time:0.0075 s
with batch:8, inference time:0.0073 s
with batch:9, inference time:0.0077 s
with batch:10, inference time:0.0080 s
with batch:11, inference time:0.0089 s
with batch:12, inference time:0.0090 s
with batch:13, inference time:0.0089 s
with batch:14, inference time:0.0105 s
with batch:15, inference time:0.0087 s
with batch:16, inference time:0.0083 s
with batch:17, inference time:0.0079 s
with batch:18, inference time:0.0080 s
with batch:19, inference time:0.0080 s
with batch:20, inference time:0.0079 s
with batch:21, inference time:0.0079 s
with batch:22, inference time:0.0079 s
with batch:23, inference time:0.0078 s
with batch:24, inference time:0.0078 s
- HRNet-OCR W18 model at int8 precision (a sketch of how such an int8 engine can be built follows these results):
with batch:1, inference time:0.0109 s
with batch:2, inference time:0.0088 s
with batch:3, inference time:0.0081 s
with batch:4, inference time:0.0078 s
with batch:5, inference time:0.0076 s
with batch:6, inference time:0.0074 s
with batch:7, inference time:0.0077 s
with batch:8, inference time:0.0075 s
with batch:9, inference time:0.0075 s
with batch:10, inference time:0.0083 s
with batch:11, inference time:0.0081 s
with batch:12, inference time:0.0080 s
with batch:13, inference time:0.0080 s
with batch:14, inference time:0.0082 s
with batch:15, inference time:0.0085 s
with batch:16, inference time:0.0080 s
with batch:17, inference time:0.0083 s
with batch:18, inference time:0.0082 s
with batch:19, inference time:0.0083 s
with batch:20, inference time:0.0082 s
with batch:21, inference time:0.0084 s
with batch:22, inference time:0.0089 s
with batch:23, inference time:0.0091 s
with batch:24, inference time:0.0089 s
with batch:25, inference time:0.0084 s
with batch:26, inference time:0.0079 s
with batch:27, inference time:0.0079 s
with batch:28, inference time:0.0081 s
with batch:29, inference time:0.0086 s
with batch:30, inference time:0.0086 s
with batch:31, inference time:0.0084 s
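For reference, building an int8 engine like the one above requires a calibrator. Below is a minimal sketch of entropy calibration with the TensorRT Python API (TensorRT 7/8-era calls such as `build_engine`). The ONNX filename, the input tensor name "input", the resolution, and the pre-processed `.npy` calibration files are all assumptions for illustration, not details from the post:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

class NpyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds pre-processed .npy batches to TensorRT during calibration."""

    def __init__(self, files, batch_shape):
        super().__init__()
        self.files = files                # one calibration batch per file
        self.batch_shape = batch_shape
        self.index = 0
        nbytes = int(np.prod(batch_shape)) * np.float32().itemsize
        self.d_input = cuda.mem_alloc(nbytes)

    def get_batch_size(self):
        return self.batch_shape[0]

    def get_batch(self, names):
        if self.index >= len(self.files):
            return None                   # no more data: calibration ends
        batch = np.load(self.files[self.index]).astype(np.float32)
        cuda.memcpy_htod(self.d_input, np.ascontiguousarray(batch))
        self.index += 1
        return [int(self.d_input)]

    def read_calibration_cache(self):
        return None                       # always recalibrate in this sketch

    def write_calibration_cache(self, cache):
        pass

builder = trt.Builder(LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, LOGGER)
with open("hrnet_ocrw18.onnx", "rb") as f:  # hypothetical model file
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = NpyCalibrator(
    ["calib_0.npy", "calib_1.npy"], (1, 3, 1024, 1024))

# Dynamic batch needs an optimization profile; calibration runs at the
# profile's opt shape, so it matches the calibrator's batch shape here.
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 1024, 1024),
                  (1, 3, 1024, 1024), (32, 3, 1024, 1024))
config.add_optimization_profile(profile)
config.set_calibration_profile(profile)

engine = builder.build_engine(network, config)  # TensorRT 7/8-era call
with open("hrnet_ocrw18_int8.engine", "wb") as f:
    f.write(engine.serialize())
```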
Summary:
In these measurements, neither int8 nor float16 shows any meaningful gain from batching: for float16, the time only moves from 0.0089 s at batch 1 to 0.0078 s at batch 24, and the int8 numbers are similarly flat.
1. From https://blog.csdn.net/zhou_438/article/details/112823818, per-image inference only starts to improve once the batch size goes above 32.
2. The results here show that batch sizes 1 and 2 make no difference either.