This article is part 7 of the "ncnn android" porting series: a first look at the pytorch2onnx code. Hopefully it is a useful reference for developers tackling the same problem.
Goals:
- understand the overall torch2onnx flow
- understand some of the technical details along the way
1. Code walkthrough
- get_graph
Converts the PyTorch model into the graph that the ONNX exporter needs:
graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
which internally traces the model via:
trace, torch_out, inputs_states = torch.jit.get_trace_graph(model, args, _force_outplace=True, _return_inputs_states=True)
warn_on_static_input_change(inputs_states)
- graph_export_onnx
Serializes the graph and weights into the ONNX protobuf:
proto, export_map = graph._export_onnx(params_dict, opset_version, dynamic_axes, defer_weight_export, operator_export_type, strip_doc_string, val_keep_init_as_ip)
2. Other notes
- batchnorm
When exporting to ONNX, pass verbose=True to print the traced graph and see which attributes each node carries:
%554 : Float(1, 16, 8, 8) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%550, %model.detect.context.inconv.conv.weight), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/Conv2d[conv] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/modules/conv.py:342:0
%555 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%554, %model.detect.context.inconv.bn.weight, %model.detect.context.inconv.bn.bias, %model.detect.context.inconv.bn.running_mean, %model.detect.context.inconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
%556 : Float(1, 16, 8, 8) = onnx::Relu(%555), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/ReLU[act] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:912:0
%557 : Float(1, 16, 8, 8) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%556, %model.detect.context.upconv.conv.weight), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/Conv2d[conv] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/modules/conv.py:342:0
%558 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%557, %model.detect.context.upconv.bn.weight, %model.detect.context.upconv.bn.bias, %model.detect.context.upconv.bn.running_mean, %model.detect.context.upconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
%559 : Float(1, 16, 8, 8) = onnx::Relu(%558), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/ReLU[act] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:912:0
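Each line of this dump follows a fixed pattern: output id, type and shape, the ONNX op with its attributes in brackets, then the inputs in parentheses. As a sketch, a small stdlib-only parser (the regex and helper name are my own, not part of torch) can pull the op name, shape, and attributes out of such a line:

```python
import re

# Pattern for one line of the verbose export dump (an assumption based on
# the lines shown above, not an official torch format guarantee).
LINE_RE = re.compile(r"%(\w+) : \w+\(([^)]*)\) = onnx::(\w+)(?:\[([^\]]*)\])?\(")

def parse_trace_line(line):
    """Extract (output, shape, op, attrs) from one verbose trace line."""
    m = LINE_RE.match(line)
    if not m:
        return None
    out, shape, op, attrs = m.groups()
    attr_dict = {}
    if attrs:
        # Split on ", " only before the next "name=", since attribute
        # values like kernel_shape=[3, 3] contain commas themselves.
        for part in re.split(r", (?=\w+=)", attrs):
            key, value = part.split("=", 1)
            attr_dict[key] = value
    return out, [int(s) for s in shape.split(", ")], op, attr_dict

line = ("%555 : Float(1, 16, 8, 8) = onnx::BatchNormalization"
        "[epsilon=1e-05, momentum=0.9](%554, %bn.weight, %bn.bias, "
        "%bn.running_mean, %bn.running_var), scope: OnnxModel")
print(parse_trace_line(line))
```

This is just to make the structure of the dump explicit; the real exporter never round-trips through text like this.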
Taking batchnorm as an example:
- First, the line that PyTorch prints:
%558 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%557, %model.detect.context.upconv.bn.weight, %model.detect.context.upconv.bn.bias, %model.detect.context.upconv.bn.running_mean, %model.detect.context.upconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
The inputs in the parentheses are the parameters that get saved: bn.weight, bn.bias, bn.running_mean, bn.running_var.
- How onnx2ncnn in ncnn reads these pretrained weights:
const onnx::TensorProto& scale = weights[node.input(1)];
const onnx::TensorProto& B = weights[node.input(2)];
const onnx::TensorProto& mean = weights[node.input(3)];
const onnx::TensorProto& var = weights[node.input(4)];
- node.input(1):bn.weight
- node.input(2):bn.bias
- node.input(3):bn.running_mean
- node.input(4):bn.running_var
The order matches the order in which pytorch2onnx wrote them.
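These four tensors, in exactly this order, are all a converter needs to replay batchnorm at inference time. The arithmetic can be sketched in plain Python (the toy values and function names are made up for illustration; ncnn's actual storage layout differs):

```python
import math

def batchnorm_infer(x, weight, bias, mean, var, eps=1e-05):
    """Per-channel inference-time batchnorm:
    y = weight * (x - running_mean) / sqrt(running_var + eps) + bias"""
    return [w * (xi - m) / math.sqrt(v + eps) + b
            for xi, w, b, m, v in zip(x, weight, bias, mean, var)]

def fold_bn(weight, bias, mean, var, eps=1e-05):
    """Fold the four tensors into one per-channel scale and shift --
    the usual trick when fusing BN into a preceding convolution."""
    scale = [w / math.sqrt(v + eps) for w, v in zip(weight, var)]
    shift = [b - s * m for b, s, m in zip(bias, scale, mean)]
    return scale, shift

# Toy 3-channel example (made-up numbers).
x      = [1.0, 2.0, 3.0]
weight = [0.5, 1.0, 2.0]   # node.input(1): bn.weight
bias   = [0.1, 0.0, -0.2]  # node.input(2): bn.bias
mean   = [0.0, 1.0, 2.0]   # node.input(3): bn.running_mean
var    = [1.0, 4.0, 9.0]   # node.input(4): bn.running_var

scale, shift = fold_bn(weight, bias, mean, var)
folded = [s * xi + t for xi, s, t in zip(x, scale, shift)]
direct = batchnorm_infer(x, weight, bias, mean, var)
```

The folded scale/shift form gives the same result as the direct formula, which is why the epsilon attribute and all four tensors must survive the conversion.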
- maxpool
- PyTorch's printed trace:
%pool_hm : Float(1, 1, 8, 8) = onnx::MaxPool[ceil_mode=0, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%hm), scope: OnnxModel # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:488:0
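With ceil_mode=0, the spatial output size follows the usual floor formula, which is easy to check against the Float(1, 1, 8, 8) shape in the line above (stdlib sketch; the helper name is my own):

```python
import math

def maxpool_out_size(in_size, kernel, stride, pad, ceil_mode=False):
    """Spatial output size of a pooling layer.
    ceil_mode=0 in the trace above selects the floor variant."""
    num = in_size + 2 * pad - kernel
    if ceil_mode:
        return int(math.ceil(num / stride)) + 1
    return num // stride + 1

# kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1] on an 8x8 map:
print(maxpool_out_size(8, kernel=3, stride=1, pad=1))  # stays 8x8
```

With kernel 3, stride 1, and padding 1 the map keeps its size, which is why %pool_hm has the same 8x8 shape as %hm.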
- How ncnn reads the structural parameters
A maxpool layer has no pretrained weights, only structural parameters:
std::string auto_pad = get_node_attr_s(node, "auto_pad");//TODO
std::vector<int> kernel_shape = get_node_attr_ai(node, "kernel_shape");
std::vector<int> strides = get_node_attr_ai(node, "strides");
std::vector<int> pads = get_node_attr_ai(node, "pads");
- Note: the "auto_pad" attribute queried here does not match the "ceil_mode" attribute that appears in the PyTorch output. This comes from a version mismatch between the pytorch2onnx exporter and ncnn: presumably, around the ncnn-20180704 release, the ONNX representation of maxpool still carried an "auto_pad" attribute.
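ncnn's get_node_attr_* helpers simply search the node's attribute list and fall back to a default when the name is absent, so an outdated "auto_pad" query on a newer export just comes back empty rather than failing. A stdlib mock of that behaviour (the dict-based node is my own stand-in for the real onnx::NodeProto):

```python
# Stand-in for an ONNX MaxPool node carrying the attributes traced above.
node = {
    "ceil_mode": 0,
    "kernel_shape": [3, 3],
    "pads": [1, 1, 1, 1],
    "strides": [1, 1],
}

def get_node_attr_s(node, key):
    """String attribute, or "" when missing (mirrors ncnn's fallback)."""
    return node.get(key, "")

def get_node_attr_ai(node, key):
    """Integer-array attribute, or an empty list when missing."""
    return list(node.get(key, []))

auto_pad = get_node_attr_s(node, "auto_pad")        # absent in this export
kernel_shape = get_node_attr_ai(node, "kernel_shape")
```

Defaulting on missing attributes is what lets one converter binary cope, at least partially, with exports from several opset generations.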