This article compares the inference speed of Caffe2 and PyTorch; hopefully it serves as a useful reference point for developers weighing the two runtimes.
Notes
Caffe2 runs the model exported to ONNX, while PyTorch loads the original .pth model directly.
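The original .pth model and its export step are not shown in the post. As a reference, here is a minimal sketch of how such an ONNX file could be produced, assuming a torchvision MobileNetV2 stands in for the author's model; the file path and input shape mirror the benchmark code below:

```python
import torch
import torchvision

# Assumption: a torchvision MobileNetV2 as a stand-in for the original .pth model.
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# Dummy input matching the 1x3x128x128 shape used in the benchmark loop below.
dummy = torch.randn(1, 3, 128, 128)

# Writes the ONNX graph to the path loaded by the Caffe2 code below
# (the "model/" directory must already exist).
torch.onnx.export(model, dummy, "model/mobilenet-v2_100.onnx")
```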
Test results
Model: mobilenet-v2. Times are per forward pass on a single 1×3×128×128 input (see the benchmark loop below).
| device | caffe2 | pytorch |
| --- | --- | --- |
| cuda | 90 ms | 8 ms |
| cpu | 24 ms | 10 ms |
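The post only includes the Caffe2 side of the benchmark. For completeness, here is a sketch of what the PyTorch side could look like; since the original .pth model is not shown, a torchvision MobileNetV2 is assumed, and `torch.cuda.synchronize()` is added so that GPU timings reflect finished work:

```python
import datetime

import torch
import torchvision

# Assumption: torchvision MobileNetV2 in place of the author's .pth model.
device = "cuda"  # or "cpu"
model = torchvision.models.mobilenet_v2().to(device)
model.eval()

x = torch.randn(1, 3, 128, 128, device=device)
with torch.no_grad():
    for i in range(100):
        if device == "cuda":
            torch.cuda.synchronize()  # wait for pending GPU work before timing
        begin = datetime.datetime.now()
        y = model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # ensure the forward pass has finished
        end = datetime.datetime.now()
        print((end - begin).total_seconds() * 1000)  # per-iteration time in ms
```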
Appendix: Caffe2 inference code
```python
import datetime

import numpy as np
import onnx
import caffe2.python.onnx.backend as backend

# Load the ONNX model
model = onnx.load("model/mobilenet-v2_100.onnx")

# Check that the IR is well formed
onnx.checker.check_model(model)

# Print a human-readable representation of the graph
print(onnx.helper.printable_graph(model.graph))

rep = backend.prepare(model, device="CPU")  # or "CUDA:0"
# For the Caffe2 backend:
#   rep.predict_net is the Caffe2 protobuf for the network
#   rep.workspace is the Caffe2 workspace for the network
#   (see the class caffe2.python.onnx.backend.Workspace)

for i in range(100):
    begin = datetime.datetime.now()
    outputs = rep.run(np.random.randn(1, 3, 128, 128).astype(np.float32))
    end = datetime.datetime.now()
    # total_seconds() avoids the .microseconds pitfall, which drops whole seconds
    print((end - begin).total_seconds() * 1000)  # per-iteration time in ms

# To run networks with more than one input, pass a tuple
# rather than a single numpy ndarray.
print(outputs[0])
```
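As the comment at the end notes, a multi-input network takes a tuple of arrays rather than a single ndarray. A hypothetical illustration (the second input and its shape are invented for the example; mobilenet-v2 itself has only one input):

```python
# Hypothetical two-input graph; arrays are passed in the order
# the graph declares its inputs.
a = np.random.randn(1, 3, 128, 128).astype(np.float32)
b = np.random.randn(1, 10).astype(np.float32)  # invented second input
outputs = rep.run((a, b))
```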
That concludes this comparison of Caffe2 and PyTorch inference speed; hopefully it is of some help.