TVM Pitfall Notes

2024-03-26 20:38



  • Installation
  • The AirFace.onnx experiment
      • Problem diagnosis
  • Auto-tuning

Installation

Follow the official docs.
Once installation is done, run the official quick-start-py tutorial as a simple sanity check:

import numpy as np
from tvm import relay
import tvm
from tvm.contrib import graph_runtime

batch_size = 1
num_class = 1000
image_shape = (3, 224, 224)
data_shape = (batch_size,) + image_shape
out_shape = (batch_size, num_class)

mod, params = relay.testing.resnet.get_workload(
    num_layers=18, batch_size=batch_size, image_shape=image_shape)
print(mod.astext(show_meta_data=False))

It failed:

Traceback (most recent call last):
  File "./compile.py", line 84, in <module>
    from tvm import relay
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/__init__.py", line 27, in <module>
    from . import expr_functor
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/expr_functor.py", line 24, in <module>
    from .op import Op
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/op/__init__.py", line 20, in <module>
    from .op import get, register, register_schedule, register_compute, register_gradient, \
  File "/usr/local/lib/python3.6/dist-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/op/op.py", line 19, in <module>
    import topi
  File "/usr/local/lib/python3.6/dist-packages/topi-0.6.dev0-py3.6.egg/topi/__init__.py", line 43, in <module>
    from . import nn
  File "/usr/local/lib/python3.6/dist-packages/topi-0.6.dev0-py3.6.egg/topi/nn/__init__.py", line 23, in <module>
    from .deformable_conv2d import *
  File "/usr/local/lib/python3.6/dist-packages/topi-0.6.dev0-py3.6.egg/topi/nn/deformable_conv2d.py", line 23, in <module>
    from ..cpp.image import bilinear_sample_nchw
ImportError: cannot import name 'bilinear_sample_nchw'

This error is fixed by adding the topi egg directory to the environment:

export LD_LIBRARY_PATH=/home/bokyliu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi
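
A quick smoke test confirms the fix took; this is just the import chain that failed above, run directly (a minimal check, assuming the topi 0.6 egg layout shown in the traceback):

import topi  # previously died inside topi.nn.deformable_conv2d
from topi.cpp.image import bilinear_sample_nchw  # the exact import that failed
print('topi imports cleanly')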

Then the problems came one after another:

Connected to pydev debugger (build 182.4505.26)
Traceback (most recent call last):
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1664, in <module>
    main()
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/bokyliu/Project/TVM/quick-start.py", line 13, in <module>
    mod, params = relay.testing.resnet.get_workload(
AttributeError: module 'tvm.relay' has no attribute 'testing'

This one left me speechless. `testing` is a subpackage (a directory, not a plain module), and this build of `tvm.relay` does not re-export it as an attribute, so import it directly instead:

import numpy as np
# from tvm import relay
from tvm.relay.testing import resnet
import tvm
from tvm.contrib import graph_runtime

batch_size = 1
num_class = 1000
image_shape = (3, 224, 224)
data_shape = (batch_size,) + image_shape
out_shape = (batch_size, num_class)

mod, params = resnet.get_workload(
    num_layers=18, batch_size=batch_size, image_shape=image_shape)
print(mod.astext(show_meta_data=False))

This time the compute graph prints successfully:

v0.0.4
def @main(%data: Tensor[(1, 3, 224, 224), float32], %bn_data_gamma: Tensor[(3), float32], %bn_data_beta: Tensor[(3), float32], %bn_data_moving_mean: Tensor[(3), float32], %bn_data_moving_var: Tensor[(3), float32], %conv0_weight: Tensor[(64, 3, 7, 7), float32], %bn0_gamma: Tensor[(64), float32], %bn0_beta: Tensor[(64), float32], %bn0_moving_mean: Tensor[(64), float32], %bn0_moving_var: Tensor[(64), float32], %stage1_unit1_bn1_gamma: Tensor[(64), float32], %stage1_unit1_bn1_beta: Tensor[(64), float32], %stage1_unit1_bn1_moving_mean: Tensor[(64), float32], %stage1_unit1_bn1_moving_var: Tensor[(64), float32], %stage1_unit1_conv1_weight: Tensor[(64, 64, 3, 3), float32], %stage1_unit1_bn2_gamma: Tensor[(64), float32], %stage1_unit1_bn2_beta: Tensor[(64), float32], %stage1_unit1_bn2_moving_mean: Tensor[(64), float32], %stage1_unit1_bn2_moving_var: Tensor[(64), float32], %stage1_unit1_conv2_weight: Tensor[(64, 64, 3, 3), float32], %stage1_unit1_sc_weight: Tensor[(64, 64, 1, 1), float32], %stage1_unit2_bn1_gamma: Tensor[(64), float32], %stage1_unit2_bn1_beta: Tensor[(64), float32], %stage1_unit2_bn1_moving_mean: Tensor[(64), float32], %stage1_unit2_bn1_moving_var: Tensor[(64), float32], %stage1_unit2_conv1_weight: Tensor[(64, 64, 3, 3), float32], %stage1_unit2_bn2_gamma: Tensor[(64), float32], %stage1_unit2_bn2_beta: Tensor[(64), float32], %stage1_unit2_bn2_moving_mean: Tensor[(64), float32], %stage1_unit2_bn2_moving_var: Tensor[(64), float32], %stage1_unit2_conv2_weight: Tensor[(64, 64, 3, 3), float32], %stage2_unit1_bn1_gamma: Tensor[(64), float32], %stage2_unit1_bn1_beta: Tensor[(64), float32], %stage2_unit1_bn1_moving_mean: Tensor[(64), float32], %stage2_unit1_bn1_moving_var: Tensor[(64), float32], %stage2_unit1_conv1_weight: Tensor[(128, 64, 3, 3), float32], %stage2_unit1_bn2_gamma: Tensor[(128), float32], %stage2_unit1_bn2_beta: Tensor[(128), float32], %stage2_unit1_bn2_moving_mean: Tensor[(128), float32], %stage2_unit1_bn2_moving_var: Tensor[(128), float32], %stage2_unit1_conv2_weight: Tensor[(128, 128, 3, 3), float32], %stage2_unit1_sc_weight: Tensor[(128, 64, 1, 1), float32], %stage2_unit2_bn1_gamma: Tensor[(128), float32], %stage2_unit2_bn1_beta: Tensor[(128), float32], %stage2_unit2_bn1_moving_mean: Tensor[(128), float32], %stage2_unit2_bn1_moving_var: Tensor[(128), float32], %stage2_unit2_conv1_weight: Tensor[(128, 128, 3, 3), float32], %stage2_unit2_bn2_gamma: Tensor[(128), float32], %stage2_unit2_bn2_beta: Tensor[(128), float32], %stage2_unit2_bn2_moving_mean: Tensor[(128), float32], %stage2_unit2_bn2_moving_var: Tensor[(128), float32], %stage2_unit2_conv2_weight: Tensor[(128, 128, 3, 3), float32], %stage3_unit1_bn1_gamma: Tensor[(128), float32], %stage3_unit1_bn1_beta: Tensor[(128), float32], %stage3_unit1_bn1_moving_mean: Tensor[(128), float32], %stage3_unit1_bn1_moving_var: Tensor[(128), float32], %stage3_unit1_conv1_weight: Tensor[(256, 128, 3, 3), float32], %stage3_unit1_bn2_gamma: Tensor[(256), float32], %stage3_unit1_bn2_beta: Tensor[(256), float32], %stage3_unit1_bn2_moving_mean: Tensor[(256), float32], %stage3_unit1_bn2_moving_var: Tensor[(256), float32], %stage3_unit1_conv2_weight: Tensor[(256, 256, 3, 3), float32], %stage3_unit1_sc_weight: Tensor[(256, 128, 1, 1), float32], %stage3_unit2_bn1_gamma: Tensor[(256), float32], %stage3_unit2_bn1_beta: Tensor[(256), float32], %stage3_unit2_bn1_moving_mean: Tensor[(256), float32], %stage3_unit2_bn1_moving_var: Tensor[(256), float32], %stage3_unit2_conv1_weight: Tensor[(256, 256, 3, 3), float32], %stage3_unit2_bn2_gamma: 
Tensor[(256), float32], %stage3_unit2_bn2_beta: Tensor[(256), float32], %stage3_unit2_bn2_moving_mean: Tensor[(256), float32], %stage3_unit2_bn2_moving_var: Tensor[(256), float32], %stage3_unit2_conv2_weight: Tensor[(256, 256, 3, 3), float32], %stage4_unit1_bn1_gamma: Tensor[(256), float32], %stage4_unit1_bn1_beta: Tensor[(256), float32], %stage4_unit1_bn1_moving_mean: Tensor[(256), float32], %stage4_unit1_bn1_moving_var: Tensor[(256), float32], %stage4_unit1_conv1_weight: Tensor[(512, 256, 3, 3), float32], %stage4_unit1_bn2_gamma: Tensor[(512), float32], %stage4_unit1_bn2_beta: Tensor[(512), float32], %stage4_unit1_bn2_moving_mean: Tensor[(512), float32], %stage4_unit1_bn2_moving_var: Tensor[(512), float32], %stage4_unit1_conv2_weight: Tensor[(512, 512, 3, 3), float32], %stage4_unit1_sc_weight: Tensor[(512, 256, 1, 1), float32], %stage4_unit2_bn1_gamma: Tensor[(512), float32], %stage4_unit2_bn1_beta: Tensor[(512), float32], %stage4_unit2_bn1_moving_mean: Tensor[(512), float32], %stage4_unit2_bn1_moving_var: Tensor[(512), float32], %stage4_unit2_conv1_weight: Tensor[(512, 512, 3, 3), float32], %stage4_unit2_bn2_gamma: Tensor[(512), float32], %stage4_unit2_bn2_beta: Tensor[(512), float32], %stage4_unit2_bn2_moving_mean: Tensor[(512), float32], %stage4_unit2_bn2_moving_var: Tensor[(512), float32], %stage4_unit2_conv2_weight: Tensor[(512, 512, 3, 3), float32], %bn1_gamma: Tensor[(512), float32], %bn1_beta: Tensor[(512), float32], %bn1_moving_mean: Tensor[(512), float32], %bn1_moving_var: Tensor[(512), float32], %fc1_weight: Tensor[(1000, 512), float32], %fc1_bias: Tensor[(1000), float32]) -> Tensor[(1, 1000), float32] {%0 = nn.batch_norm(%data, %bn_data_gamma, %bn_data_beta, %bn_data_moving_mean, %bn_data_moving_var, epsilon=2e-05f, scale=False) /* ty=(Tensor[(1, 3, 224, 224), float32], Tensor[(3), float32], Tensor[(3), float32]) */;%1 = %0.0;%2 = nn.conv2d(%1, %conv0_weight, strides=[2, 2], padding=[3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */;...%89 = nn.bias_add(%88, %fc1_bias, axis=-1) /* ty=Tensor[(1, 1000), float32] */;nn.softmax(%89) /* ty=Tensor[(1, 1000), float32] */
}

The rest of the example ran with no new issues. One thing worth noting, though: after executing

loaded_lib = tvm.module.load(path_lib)

the file deploy_lib.tar.so appears in the run directory.
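
For context, here is a minimal sketch of the save/load round trip from the quick-start (0.6-era API; the file names are illustrative, not from my exact script). Loading the .tar relinks it into a shared library on the spot, which is precisely what leaves deploy_lib.tar.so behind:

import tvm
from tvm import relay

# ...after: graph, lib, params = relay.build(mod, target, params=params)
lib.export_library('deploy_lib.tar')            # compiled operators (a tar of object files)
with open('deploy_graph.json', 'w') as f:
    f.write(graph)                               # execution graph as JSON
with open('deploy_param.params', 'wb') as f:
    f.write(relay.save_param_dict(params))       # weight blobs

loaded_lib = tvm.module.load('deploy_lib.tar')   # this step creates deploy_lib.tar.so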

The AirFace.onnx experiment

After getting through the above I was feeling a bit cocky. The tutorial seems to deploy a super-resolution ONNX model with TVM without much fuss, so I figured I would try the AirFace model; it has given me no end of headaches lately anyway. Borrowing the tutorial code:

import onnx
import numpy as np
import tvm
from tvm import relay

onnx_model = onnx.load('/home/bokyliu/Project/TVM/airFace06.onnx')

######################################################################
# Load a test image
# ---------------------------------------------
# A single cat dominates the examples!
from PIL import Image
# img_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
# img_path = download_testdata(img_url, 'cat.png', module='data')
img = Image.open('/home/bokyliu/feature1.jpg').resize((112, 112))
img_arr = np.array(img).transpose(2, 0, 1)

######################################################################
# Compile the model with relay
# ---------------------------------------------
target = 'llvm'
input_name = '1'
shape_dict = {input_name: img_arr.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with relay.build_config(opt_level=1):
    intrp = relay.build_module.create_executor('graph', mod, tvm.cpu(0), target)

######################################################################
# Execute on TVM
# ---------------------------------------------
dtype = 'float32'
# the tutorial snippet referenced an undefined `x`; use our image array instead
tvm_output = intrp.evaluate()(tvm.nd.array(img_arr.astype(dtype)), **params).asnumpy()

Running it gives:

Traceback (most recent call last):
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1664, in <module>
    main()
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/bokyliu/Work/pycharm-community-2018.2.4/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/bokyliu/Project/TVM/airface_from_onnx.py", line 72, in <module>
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1497, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1284, in from_onnx
    raise ValueError("Must provide an input shape for `{0}`.".format(i_name))
ValueError: Must provide an input shape for `0`.

Given this error, all it takes is changing

input_name = '1'  # change to: input_name = 'input.1'

As for why this change is needed: looking at the ONNX compute graph,

graph(%input.1 : Float(1, 3, 112, 112),
      %conv1.conv.weight : Float(64, 3, 3, 3),
      %conv1.bn.weight : Float(64),
      ...)

the name here has to match, because TVM walks the same compute graph.
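
To avoid guessing the name in the first place, it can be read straight off the model; a small sketch using the standard onnx API (the shape is this model's fixed input shape):

import onnx

onnx_model = onnx.load('/home/bokyliu/Project/TVM/airFace06.onnx')
input_name = onnx_model.graph.input[0].name   # 'input.1' for this model
shape_dict = {input_name: (1, 3, 112, 112)}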
Running again:

tvm.error.OpNotImplemented: The following operators are not supported for frontend ONNX: ATen

This says the original model contains something the ONNX frontend does not support, which most likely means the ONNX export itself went wrong. Re-exporting the model with PyTorch 1.3 resolved it.
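
For reference, the re-export looked roughly like this (a sketch: the checkpoint path and loading style are assumptions, not my exact script):

import torch

model = torch.load('airface.pth', map_location='cpu')  # hypothetical checkpoint path
model.eval()
dummy = torch.randn(1, 3, 112, 112)                    # matches the 112x112 input above
torch.onnx.export(model, dummy, 'airFace06.onnx')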
With the re-exported ONNX that error was gone, but as expected a fresh problem surfaced:

WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
...
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
Traceback (most recent call last):
  File "/home/bokyliu/Project/TVM/airface_from_onnx.py", line 73, in <module>
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1497, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1325, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 1425, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/onnx.py", line 470, in _impl_v5
    static_shape = infer_value_simulated(shape, params)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/common.py", line 520, in infer_value_simulated
    output_value = infer_value(input_val, params)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/frontend/common.py", line 494, in infer_value
    graph, lib, params = tvm.relay.build(func, target="llvm", params=params)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/build_module.py", line 244, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/relay/build_module.py", line 109, in build
    self._build(func, target, target_host)
  File "/home/bokyliu/Project/incubator-tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1b5) [0x7f6b3cd6f6e5]
[bt] (7) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::relay::Function, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0x5e) [0x7f6b3cd6e66e]
[bt] (6) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::RelayBuildModule::Optimize(tvm::relay::Function, tvm::Map<tvm::Integer, tvm::Target, void, void> const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0xee) [0x7f6b3cd6d87e]
[bt] (5) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x1d5) [0x7f6b3ce18825]
[bt] (4) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x28c) [0x7f6b3ce163cc]
[bt] (3) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f6b3cd39aa7]
[bt] (2) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x86) [0x7f6b3cd39326]
[bt] (1) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ErrorReporter::RenderErrors(tvm::relay::Module const&, bool)+0x230c) [0x7f6b3cdf675c]
[bt] (0) /home/bokyliu/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f6b3c724af2]
  File "/home/bokyliu/Project/incubator-tvm/src/relay/ir/error.cc", line 133

TVMError:
Error(s) have occurred. The program has been annotated with them:

In `main`:
v0.0.4
fn () {
  %0 = nn.conv2d(meta[relay.Constant][0], meta[relay.Constant][1], strides=[2, 2], padding=[1, 1], kernel_size=[3, 3]);
  %1 = nn.batch_norm(%0, meta[relay.Constant][2], meta[relay.Constant][3], meta[relay.Constant][4], meta[relay.Constant][5], epsilon=1e-05f);
  %2 = %1.0;
  %3 = expand_dims(meta[relay.Constant][6], axis=1);
  %4 = expand_dims(%3, axis=2);
  %5 = nn.prelu(%2, %4) tensor type `Tensor[(64), float32]` has 1 dimensions, while `Tensor[(64, 1, 1), float32]` has 3 dimensions; unable to unify: `Tensor[(64), float32]` and `Tensor[(64, 1, 1), float32]`; ;
  %6 = nn.conv2d(%5, meta[relay.Constant][7], padding=[1, 1], groups=64, kernel_size=[3, 3]);

Problem diagnosis:

WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm

The WARNING is emitted while executing this line in the ONNX frontend (relay/frontend/onnx.py, per the traceback above):

op = self._convert_operator(op_name, inputs, attr, opset) 

where op_name is 'BatchNormalization'.
Stepping in with breakpoints, the trail leads on to the converter dispatch in onnx.py:

sym = convert_map[op_name](inputs, attrs, self._params)
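
Schematically, the dispatch this line performs looks like the sketch below (a simplified stand-in, not the real source): a dict maps ONNX op names to converter functions. It is also where the earlier OpNotImplemented error for ATen comes from, since ATen simply has no entry in the map.

def _batch_norm(inputs, attrs, params):
    # the real converter builds relay.nn.batch_norm and drops attributes
    # Relay has no use for, hence the "momentum is ignored" warning
    pass

convert_map = {'BatchNormalization': _batch_norm}

def _convert_operator(op_name, inputs, attrs, params):
    if op_name not in convert_map:
        raise NotImplementedError(
            'Operator %s is not supported for frontend ONNX' % op_name)
    return convert_map[op_name](inputs, attrs, params)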

The breakpoint keeps getting hit as each layer is converted: the conv layer, the bn layer, and then PReLU (debugger screenshots omitted here).
What I found interesting was PReLU's inputs, which I captured separately (screenshot: PReLU-inputs).
Does free_var mean a variable that has been freed? (In Relay IR it actually denotes a free, i.e. unbound, variable of the expression.)
Following this chain downwards eventually lands in python/tvm/relay/op/nn/nn.py:

def prelu(data, alpha, axis=1):
    """This operator takes data as input and does Leaky version
    of a Rectified Linear Unit.

    .. math::
        `y = x > 0 ? x : alpha * x`

    Parameters
    ----------
    data : tvm.relay.Expr
        The input data to the operator.
    alpha : tvm.relay.Expr
        Slope coefficient for the negative half axis.
    axis : int, optional
        Specify which shape axis the channel is specified.

    Returns
    -------
    result : tvm.relay.Expr
        The computed result.
    """
    return _make.prelu(data, alpha, axis)
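
A minimal sketch that reproduces the unification failure above: nn.prelu expects a 1-D alpha along the channel axis, while the converter handed it the unsqueezed (64, 1, 1) tensor (the input shape here is illustrative).

from tvm import relay

x = relay.var('x', shape=(1, 64, 56, 56))
alpha_1d = relay.var('a1', shape=(64,))        # what nn.prelu expects
alpha_3d = relay.var('a3', shape=(64, 1, 1))   # what the unsqueezes produced

ok = relay.nn.prelu(x, alpha_1d)   # type-checks fine
bad = relay.nn.prelu(x, alpha_3d)  # the call builds, but type inference later
                                   # fails to unify Tensor[(64)] with Tensor[(64, 1, 1)]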

Then, half by accident, I noticed two expand_dims ops in TVM's compute graph, with axis=1 and axis=2 respectively. But where did those come from?
At that thought I hurried back to the graph printed during the ONNX export, and sure enough it also contains unsqueeze ops.
So the problem lies with the ONNX export. I later learned ONNX has several versions, so I switched back to PyTorch 1.0.1, which at the time was the least troublesome version.
But the ONNX exported by PyTorch 1.0.1 contains ATen again, so I had no choice but to register a corresponding op for ATen with the ONNX exporter myself.
After that everything went smoothly and the model ran end to end. Note, though, that inference speed differs (CPU: i5-7500): having TVM compile the ONNX and then run inference takes about 8 s, while loading the prebuilt tar/json/params and then running takes about 330 ms; both are slower than running the .pth directly in PyTorch (about 220 ms).
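
For the record, the 330 ms path was measured roughly like this (a sketch; the file names and input name are assumptions based on the artifacts above):

import time
import numpy as np
import tvm
from tvm.contrib import graph_runtime

loaded_lib = tvm.module.load('deploy_lib.tar')
loaded_graph = open('deploy_graph.json').read()
loaded_params = bytearray(open('deploy_param.params', 'rb').read())

module = graph_runtime.create(loaded_graph, loaded_lib, tvm.cpu(0))
module.load_params(loaded_params)
module.set_input('input.1',
                 tvm.nd.array(np.random.rand(1, 3, 112, 112).astype('float32')))

start = time.time()
module.run()
out = module.get_output(0).asnumpy()
print('inference took %.0f ms' % ((time.time() - start) * 1000))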

Auto-tuning

Nothing special here; I mainly followed the official demo.
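
For completeness, the shape of that demo (0.6-era autotvm API; n_trial and the log file name are placeholder choices, and mod/params come from relay.frontend.from_onnx as above):

from tvm import autotvm, relay
from tvm.autotvm.tuner import XGBTuner

tasks = autotvm.task.extract_from_program(mod['main'], target='llvm',
                                          params=params,
                                          ops=(relay.op.nn.conv2d,))
for task in tasks:
    tuner = XGBTuner(task, loss_type='rank')
    tuner.tune(n_trial=200,
               measure_option=autotvm.measure_option(
                   builder=autotvm.LocalBuilder(),
                   runner=autotvm.LocalRunner(number=10)),
               callbacks=[autotvm.callback.log_to_file('tune.log')])

# rebuild with the best configs found during tuning
with autotvm.apply_history_best('tune.log'):
    graph, lib, params = relay.build(mod, target='llvm', params=params)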



