Numerical Methods for Nonlinear Least Squares: Powell's Dog Leg Method (II, A Simple Python Example)




Companion Post

非线性最小二乘问题的数值方法 —— 狗腿法 Powell's Dog Leg Method (I - 原理与算法) (Part I: principle and algorithm)


0. Preface

This post is a simple hands-on extension of the earlier post "非线性最小二乘问题的数值方法 —— 狗腿法 Powell's Dog Leg Method (I - 原理与算法)".

The theory is covered in that post and is not repeated here; this post only adds a simple Python example.


1. Example Optimization Problem

$$\mathrm{minimize}\quad g(\mathbf{x}) = \frac{1}{2}\|\mathbf{r}(\mathbf{x})\|_2^2 = \frac{1}{2}\sum_{i=1}^{3} r_i(\mathbf{x})^2 \tag{I-1}$$

where

$$\mathbf{x} = \begin{bmatrix} x_1, x_2 \end{bmatrix}^{\rm T}$$

$$\mathbf{r}(\mathbf{x}) = \begin{bmatrix} r_1(\mathbf{x}),\, r_2(\mathbf{x}),\, r_3(\mathbf{x}) \end{bmatrix}^{\rm T}$$

$$r_1(\mathbf{x}) = \sin x_1 - 0.4$$

$$r_2(\mathbf{x}) = \cos x_2 + 0.8$$

$$r_3(\mathbf{x}) = \sqrt{x_1^2 + x_2^2} - 1$$

With the Jacobian $\mathbf{J}(\mathbf{x}) = \partial\mathbf{r}(\mathbf{x})/\partial\mathbf{x}$, the gradient and the simplified (Gauss-Newton) Hessian follow from $\nabla g(\mathbf{x}) = \mathbf{J}(\mathbf{x})^{\rm T}\mathbf{r}(\mathbf{x})$ and $\widetilde{\mathbf{H}}(\mathbf{x}) = \mathbf{J}(\mathbf{x})^{\rm T}\mathbf{J}(\mathbf{x})$:

$$\frac{\partial \mathbf{r}(\mathbf{x})}{\partial \mathbf{x}} = \begin{bmatrix} \cos x_1 & 0 \\ 0 & -\sin x_2 \\ \frac{x_1}{\sqrt{x_1^2+x_2^2}} & \frac{x_2}{\sqrt{x_1^2+x_2^2}} \end{bmatrix}$$

$$g(\mathbf{x}) = \frac{1}{2}\left[ (\sin x_1 - 0.4)^2 + (\cos x_2 + 0.8)^2 + \left(\sqrt{x_1^2 + x_2^2} - 1\right)^2 \right]$$

$$\nabla g(\mathbf{x}) = \begin{bmatrix} \frac{x_1\left(\sqrt{x_1^2+x_2^2}-1\right)}{\sqrt{x_1^2+x_2^2}} + \cos x_1\,(\sin x_1 - 0.4) \\[2mm] \frac{x_2\left(\sqrt{x_1^2+x_2^2}-1\right)}{\sqrt{x_1^2+x_2^2}} - \sin x_2\,(\cos x_2 + 0.8) \end{bmatrix}$$

$$\widetilde{\mathbf{H}}(\mathbf{x}) = \begin{bmatrix} \frac{x_1^2}{x_1^2+x_2^2} + \cos^2 x_1 & \frac{x_1 x_2}{x_1^2+x_2^2} \\[2mm] \frac{x_1 x_2}{x_1^2+x_2^2} & \frac{x_2^2}{x_1^2+x_2^2} + \sin^2 x_2 \end{bmatrix}$$

For the detailed symbolic derivation, see "非线性最小二乘问题的数值方法 —— 从牛顿迭代法到高斯-牛顿法 (实例篇 V)".
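
As a quick cross-check of these expressions (not part of the original post), the Jacobian, the gradient, and the simplified Gauss-Newton Hessian can be reproduced symbolically. Below is a minimal sketch, assuming SymPy is installed:

# Symbolic sanity check (assumption: sympy is available).
# Re-derives J = dr/dx, grad g = J^T r, and the simplified Hessian H ~ J^T J.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
r_sym = sp.Matrix([sp.sin(x1) - 0.4,
                   sp.cos(x2) + 0.8,
                   sp.sqrt(x1**2 + x2**2) - 1])

J = r_sym.jacobian([x1, x2])      # 3x2 Jacobian, matches the matrix above
grad_g = J.T * r_sym              # 2x1 gradient of g(x) = 0.5 * ||r(x)||^2
H_tilde = J.T * J                 # 2x2 simplified (Gauss-Newton) Hessian

print(sp.simplify(J))
print(sp.simplify(grad_g))
print(sp.simplify(H_tilde))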


2. Python Implementation of Powell's Dog Leg Method

Based on the dog leg algorithm flow described in Part I, the following simple Python demo was implemented:

from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import inv, norm
from math import cos, sin, sqrt, pow


# multiplication of two matrices
def multiply_matrix(A, B):
    if A.shape[1] == B.shape[0]:
        C = np.zeros((A.shape[0], B.shape[1]), dtype=float)
        [rows, cols] = C.shape
        for row in range(rows):
            for col in range(cols):
                for elt in range(len(B)):
                    C[row, col] += A[row, elt] * B[elt, col]
        return C
    else:
        return "Cannot multiply A and B. Please check whether the dimensions of the inputs are compatible."


# g(x) = (1/2) ||r(x)||_2^2
def g(x_vector):
    x_1 = x_vector[0]
    x_2 = x_vector[1]
    return (pow(sin(x_1) - 0.4, 2) + pow(cos(x_2) + 0.8, 2)
            + pow(sqrt(pow(x_2, 2) + pow(x_1, 2)) - 1, 2)) / 2


# r(x) = [r_1, r_2, r_3]^{T}
def r(x_vector):
    x_1 = x_vector[0]
    x_2 = x_vector[1]
    return np.array([[sin(x_1) - 0.4],
                     [cos(x_2) + 0.8],
                     [sqrt(pow(x_1, 2) + pow(x_2, 2)) - 1]], dtype=object)


# \partial r(x) / \partial x
def dr(x_vector):
    x_1 = x_vector[0]
    x_2 = x_vector[1]
    if sqrt(pow(x_2, 2) + pow(x_1, 2)) < 1e-3:   ## guard against the singularity at the origin (threshold set manually)
        return np.array([[cos(x_1), 0],
                         [0, -sin(x_2)],
                         [0, 0]], dtype=object)
    else:
        return np.array([[cos(x_1), 0],
                         [0, -sin(x_2)],
                         [x_1 / sqrt(pow(x_2, 2) + pow(x_1, 2)), x_2 / sqrt(pow(x_2, 2) + pow(x_1, 2))]], dtype=object)


# Simplified Hessian matrix in the Gauss-Newton method
# refer to eq. (I-1-2) in blog "非线性最小二乘问题的数值方法 —— 从高斯-牛顿法到列文伯格-马夸尔特法 (I)"
def sH(x_vector):
    return multiply_matrix(np.transpose(dr(x_vector)), dr(x_vector))


# \nabla g(x)
# refer to eq. (I-1-3) in blog "非线性最小二乘问题的数值方法 —— 从高斯-牛顿法到列文伯格-马夸尔特法 (I)"
def dg(x_vector):
    return np.array(multiply_matrix(np.transpose(dr(x_vector)), r(x_vector)))


# model of the cost function g based on eq. (II-2-2) in "非线性最小二乘问题的数值方法 —— 狗腿法 Powell's Dog Leg Method (I - 原理与算法)"
def L_model(h_vector, g_i, dg_i, sH_i):
    return g_i + multiply_matrix(dg_i.transpose(), h_vector) \
        + 0.5 * multiply_matrix(multiply_matrix(h_vector.transpose(), sH_i), h_vector)


def dog_leg_method(x_vector, epsilon_1, epsilon_2, epsilon_3, max_iter, trust_region_radius):
    iter = 0
    delta = trust_region_radius                             ## trust-region radius
    found = False
    x_current_vector = x_vector
    r_i = r(x_current_vector)
    dr_i = dr(x_current_vector)
    dg_i = multiply_matrix(np.transpose(dr_i), r_i)
    g_i = g(x_current_vector)
    if (norm(r_i, np.inf) < epsilon_3) or (norm(dg_i, np.inf) < epsilon_1):
        found = True
    array_x_1 = []
    array_x_2 = []
    array_x_3 = []
    while (found == False) and (iter < max_iter):
        array_x_1.append(x_current_vector[0])
        array_x_2.append(x_current_vector[1])
        array_x_3.append(g_i)
        iter += 1
        step_sd_i = - dg_i                                  ## Steepest descent direction
        Jh = multiply_matrix(dr_i, step_sd_i)
        alpha_i = pow(norm(step_sd_i, 2), 2) / pow(norm(Jh, 2), 2)
        step_cp_i = alpha_i * step_sd_i                     ## Cauchy point (steepest descent step)
        sH_i = multiply_matrix(np.transpose(dr_i), dr_i)    ## Simplified Hessian matrix
        inv_sH_i = inv(sH_i)
        step_gn_i = - np.array(multiply_matrix(inv_sH_i, dg_i))    ## Gauss-Newton step
        rho = -1
        while (rho < 0) and (found == False):               ## Until the step is acceptable
            if norm(step_gn_i, 2) < delta:                  ## Case I
                step_dl_i = step_gn_i
                print("Iterating index [%d], Case I" % iter)
            elif norm(step_cp_i, 2) >= delta:               ## Case II
                step_dl_i = (delta / norm(step_sd_i, 2)) * step_sd_i
                print("Iterating index [%d], Case II" % iter)
            else:                                           ## Case III
                step_gn_cp_i = step_gn_i - step_cp_i
                gn_cp_norm_sq = pow(norm(step_gn_cp_i, 2), 2)
                delta_cp_sq = pow(delta, 2) - pow(norm(step_cp_i, 2), 2)
                c_matrix = multiply_matrix(np.transpose(step_cp_i), step_gn_cp_i)
                c = c_matrix[0][0]
                sqrt_discriminant = sqrt(pow(c, 2) + gn_cp_norm_sq * delta_cp_sq)
                if (c <= 0):
                    beta = (-c + sqrt_discriminant) / gn_cp_norm_sq
                else:
                    beta = delta_cp_sq / (c + sqrt_discriminant)
                step_dl_i = step_cp_i + beta * step_gn_cp_i
                print("Iterating index [%d], Case III" % iter)
            norm_step_dl = norm(step_dl_i, 2)
            if (norm_step_dl <= epsilon_2 * (norm(x_current_vector, 2) + epsilon_2)):
                found = True
            else:
                x_new_vector = x_current_vector + step_dl_i.flatten()
                g_new = g(x_new_vector)
                L_0 = g_i
                L_h = L_model(step_dl_i, g_i, dg_i, sH_i)
                rho = (g_i - g_new) / (L_0 - L_h)           ## Gain ratio
                if (rho > 0):                               ## Step acceptable
                    x_current_vector = x_new_vector         ## New iterating state
                    r_i = r(x_current_vector)
                    dr_i = dr(x_current_vector)
                    dg_i = multiply_matrix(np.transpose(dr_i), r_i)
                    g_i = g(x_current_vector)
                    if (norm(r_i, np.inf) < epsilon_3) or (norm(dg_i, np.inf) < epsilon_1):
                        found = True
                if (rho > 0.75):                            ## Expanding the trust region
                    if (delta - 3 * norm_step_dl < 0):
                        delta = 3 * norm_step_dl
                elif (rho < 0.25):                          ## Shrinking the trust region
                    delta = delta / 2
                    if (delta < (epsilon_2 * (norm(x_current_vector, 2) + epsilon_2))):
                        found = True
    return array_x_1, array_x_2, array_x_3


def result_plot(trajectory, trust_region_radius):
    fig = plt.figure()
    ax3 = plt.axes(projection='3d')
    xx = np.arange(-5, 5, 0.1)
    yy = np.arange(-4, 4, 0.1)
    X, Y = np.meshgrid(xx, yy)
    Z = np.zeros((X.shape[0], Y.shape[1]), dtype=float)
    for i in range(X.shape[0]):
        for j in range(Y.shape[1]):
            Z[i, j] = g(np.array([X[0, j], Y[i, 0]]))
    ax3.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='rainbow', alpha=0.25)
    ax3.contour(X, Y, Z, offset=-1, cmap='rainbow')
    ax3.plot(trajectory[0], trajectory[1], trajectory[2], "r--")    ## iteration trajectory on the cost surface
    offset_data = -1 * np.ones(len(trajectory[0]))
    ax3.plot(trajectory[0], trajectory[1], offset_data, 'k--')      ## projection of the trajectory
    ax3.set_title('Dog Leg Method \n(Initial point [%.1f, %.1f], Trust-region radius %.2f)'
                  % (trajectory[0][0], trajectory[1][0], trust_region_radius))
    ax3.set_xlabel("x_1")
    ax3.set_ylabel("x_2")
    ax3.set_zlabel("g")
    file_name_prefix = "./dog_leg"
    file_extension = ".png"
    radius = "-r"
    file_name = f"{file_name_prefix}_{trajectory[0][0]}_{trajectory[1][0]}{radius}{trust_region_radius}{file_extension}"
    print(file_name)
    plt.draw()
    plt.savefig(file_name)


if __name__ == "__main__":
    test_data = np.array([[4.9, 3.9], [-2.9, 1.9], [0.1, -0.1], [-0.1, 0.1], [0, -3.8], [1, 2.5]], dtype=object)
    trust_region_radius = np.array([0.4, 0.01, 2.0])
    for radius in trust_region_radius:
        for initial_data in test_data:
            print("\nInitial point: [%.1f, %.1f]" % (initial_data[0], initial_data[1]))
            print("Trust region radius: %.2f" % radius)
            epsilon_1 = 1e-6
            epsilon_2 = 1e-6
            epsilon_3 = 1e-6
            max_iter = 1000
            trajectory = dog_leg_method(initial_data, epsilon_1, epsilon_2, epsilon_3, max_iter, radius)
            result_plot(trajectory, radius)
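
As an optional sanity check (not in the original demo), the minimizers found above can be compared against SciPy's built-in dog-leg-type trust-region solver, scipy.optimize.least_squares with method='dogbox' (a related, though not identical, algorithm). A minimal sketch, assuming SciPy is installed and reusing the r and dr functions defined above:

# Optional cross-check against scipy (assumption: scipy is installed).
import numpy as np
from scipy.optimize import least_squares

def residual(x):
    # 1-D residual vector expected by scipy, reusing r(x) from the demo above
    return np.asarray(r(x), dtype=float).ravel()

def jacobian(x):
    # 3x2 Jacobian, reusing dr(x) from the demo above
    return np.asarray(dr(x), dtype=float)

for x0 in ([4.9, 3.9], [-2.9, 1.9]):
    sol = least_squares(residual, x0, jac=jacobian, method='dogbox')
    print(x0, '->', sol.x, ', cost =', sol.cost)    # sol.cost equals 0.5 * ||r(x)||^2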

3. Test Results

A. Result Plots

[Figures] Test plots for initial point [4.9, 3.9] with trust-region radii 0.01, 0.4, and 2.0, and for initial point [-2.9, 1.9] with the same radii.
[Figures] Test plots for initial points [0.1, -0.1] and [-0.1, 0.1].
[Figures] Test plots for initial points [0, -3.8] and [1, 2.5].

B. Iteration Steps

The iteration steps and the type of each step for different trust-region radii are listed below. Case I denotes a Gauss-Newton step, Case II a (truncated) Cauchy / steepest-descent step, and Case III a dog leg step.

The closer the iterates get to the local minimizer, the more often Gauss-Newton steps (Case I) are taken, which helps the method converge quickly.

The trust-region radius also has an impact on computational performance.
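
For reference, the three cases correspond to the following selection rule; this is only a distilled restatement of the logic inside dog_leg_method above (the helper name select_dog_leg_step is introduced here for illustration):

# Distilled dog-leg step selection (same three cases as in dog_leg_method):
#   step_gn : Gauss-Newton step, step_cp : Cauchy point step (1-D arrays),
#   delta   : current trust-region radius.
import numpy as np

def select_dog_leg_step(step_gn, step_cp, delta):
    if np.linalg.norm(step_gn) < delta:                     # Case I: GN step lies inside the trust region
        return step_gn, "Case I"
    if np.linalg.norm(step_cp) >= delta:                    # Case II: truncated steepest-descent step
        return (delta / np.linalg.norm(step_cp)) * step_cp, "Case II"
    d = step_gn - step_cp                                   # Case III: walk along the dog leg segment
    c = step_cp @ d
    disc = np.sqrt(c**2 + (d @ d) * (delta**2 - step_cp @ step_cp))
    beta = (-c + disc) / (d @ d) if c <= 0 else (delta**2 - step_cp @ step_cp) / (c + disc)
    return step_cp + beta * d, "Case III"                   # chosen so that ||step_cp + beta*d|| = delta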

Iteration logs for initial points [4.9, 3.9] and [-2.9, 1.9] with trust-region radii 0.01, 0.4, and 2.0:
Initial point: [4.9, 3.9]
Trust region radius: 0.01
Iterating index [1], Case II
Iterating index [2], Case II
Iterating index [3], Case II
Iterating index [4], Case II
Iterating index [5], Case II
Iterating index [6], Case II
Iterating index [7], Case I
Iterating index [8], Case I
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I
Iterating index [23], Case I
Iterating index [24], Case I
Iterating index [25], Case I
Initial point: [4.9, 3.9]
Trust region radius: 0.40
Iterating index [1], Case II
Iterating index [2], Case II
Iterating index [3], Case III
Iterating index [4], Case III
Iterating index [4], Case III
Iterating index [5], Case I
Iterating index [5], Case I
Iterating index [5], Case III
Iterating index [6], Case II
Iterating index [7], Case I
Iterating index [8], Case I
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I
Iterating index [23], Case I
Initial point: [4.9, 3.9]
Trust region radius: 2.00
Iterating index [1], Case II
Iterating index [2], Case I
Iterating index [3], Case I
Iterating index [3], Case I
Iterating index [3], Case III
Iterating index [4], Case I
Iterating index [5], Case I
Iterating index [6], Case I
Iterating index [7], Case I
Iterating index [8], Case I
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I
Initial point: [-2.9, 1.9]
Trust region radius: 0.01
Iterating index [1], Case II
Iterating index [2], Case II
Iterating index [3], Case II
Iterating index [4], Case II
Iterating index [5], Case I
Iterating index [6], Case I
Iterating index [7], Case I
Iterating index [8], Case III
Iterating index [8], Case III
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I
Iterating index [23], Case I
Iterating index [24], Case I
Iterating index [25], Case I
Iterating index [26], Case I
Initial point: [-2.9, 1.9]
Trust region radius: 0.40
Iterating index [1], Case II
Iterating index [2], Case I
Iterating index [3], Case I
Iterating index [4], Case I
Iterating index [5], Case III
Iterating index [5], Case III
Iterating index [6], Case I
Iterating index [7], Case I
Iterating index [8], Case I
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I
Iterating index [23], Case I
Initial point: [-2.9, 1.9]
Trust region radius: 2.00
Iterating index [1], Case I
Iterating index [2], Case I
Iterating index [3], Case III
Iterating index [4], Case III
Iterating index [4], Case III
Iterating index [5], Case I
Iterating index [6], Case I
Iterating index [7], Case I
Iterating index [8], Case I
Iterating index [9], Case I
Iterating index [10], Case I
Iterating index [11], Case I
Iterating index [12], Case I
Iterating index [13], Case I
Iterating index [14], Case I
Iterating index [15], Case I
Iterating index [16], Case I
Iterating index [17], Case I
Iterating index [18], Case I
Iterating index [19], Case I
Iterating index [20], Case I
Iterating index [21], Case I
Iterating index [22], Case I

4. Conclusion

The above is only a simple example of the dog leg method; for the derivation and the algorithm, see "非线性最小二乘问题的数值方法 —— 狗腿法 Powell's Dog Leg Method (I - 原理与算法)".

If you spot any problems, please point them out. Thanks!



