Deep learning: building, saving, and loading neural network models with TensorFlow 2 [the classic workflow]

This article walks through the classic workflow of building, saving, and loading neural network models with TensorFlow 2, and is intended as a practical reference for developers tackling this task.

1. network.save_weights() / network.load_weights()

This approach saves only the model's parameters. The network that loads the saved weights back must have exactly the same structure as the network that saved them.
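To illustrate this constraint, here is a minimal self-contained sketch (not from the article: the tiny layer sizes and the `net.weights.h5` filename are arbitrary choices; the article itself uses a TF-checkpoint path, `weights.ckpt`). Only the parameter values travel through the file, so a second model built with the identical architecture reproduces the first model's outputs exactly:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tempfile
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_model():
    # The saver and the loader must use exactly this same structure.
    model = tf.keras.Sequential([
        layers.Dense(8, activation='relu'),
        layers.Dense(4),
    ])
    model.build(input_shape=[None, 16])
    return model

ckpt_path = os.path.join(tempfile.mkdtemp(), 'net.weights.h5')

saver = build_model()
saver.save_weights(ckpt_path)   # writes parameter values only, no architecture

loader = build_model()          # identical structure: loading succeeds
loader.load_weights(ckpt_path)

x = np.ones((1, 16), dtype=np.float32)
# Identical weights imply identical outputs.
assert np.allclose(saver(x).numpy(), loader(x).numpy())
```

If `build_model()` were changed between saving and loading (say, a different layer width), `load_weights` would fail with a shape mismatch, which is exactly the restriction described above.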

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # only takes effect if set before "import tensorflow as tf"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

# 1. Load the dataset
(X_train, Y_train), (X_test, Y_test) = datasets.mnist.load_data()
print('X_train.shape = {0},Y_train.shape = {1}------------type(X_train) = {2},type(Y_train) = {3}'.format(X_train.shape, Y_train.shape, type(X_train), type(Y_train)))

# 2. Preprocess the data
# Preprocessing function: convert numpy arrays to tensors
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

# 2.1 Training set
dataset_train = tf.data.Dataset.from_tensor_slices((X_train, Y_train))  # automatically converts the numpy arrays to tensors
dataset_train = dataset_train.map(preprocess)  # apply preprocess() to every element
dataset_train = dataset_train.shuffle(len(X_train))  # shuffle the samples so the original image order cannot bias the network
print('dataset_train = {0},type(dataset_train) = {1}'.format(dataset_train, type(dataset_train)))
batch_size_train = 20000  # note: a batch size of 100-200 is usually more appropriate
dataset_batch_train = dataset_train.batch(batch_size_train)  # group every batch_size_train images into one batch; reading one batch reads batch_size_train images at once
print('dataset_batch_train = {0},type(dataset_batch_train) = {1}'.format(dataset_batch_train, type(dataset_batch_train)))
# 2.2 Test set
dataset_test = tf.data.Dataset.from_tensor_slices((X_test, Y_test))  # automatically converts the numpy arrays to tensors
dataset_test = dataset_test.map(preprocess)  # apply preprocess() to every element
dataset_test = dataset_test.shuffle(len(X_test))  # shuffle the samples so the original image order cannot bias the network
batch_size_test = 5000  # note: a batch size of 100-200 is usually more appropriate
dataset_batch_test = dataset_test.batch(batch_size_test)  # group every batch_size_test images into one batch

# 3. Build the network: Dense is a fully connected layer, ReLU is the activation
network = keras.Sequential([
    layers.Dense(500, activation=tf.nn.relu),  # 784 --> 500
    layers.Dense(300, activation=tf.nn.relu),  # 500 --> 300
    layers.Dense(100, activation=tf.nn.relu),  # 300 --> 100
    layers.Dense(10)])                         # 100 --> 10; no activation on the last layer, it is applied inside the loss (from_logits=True)
network.build(input_shape=[None, 784])  # 28*28=784; None stands for the (unknown) number of samples
network.summary()  # print a summary of the model

# 4. Configure the training setup
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# 5. Feed the data in and train the model parameters
print('\n++++++++++++++++++++++++++++++++++++++++++++Training phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.fit(dataset_batch_train, epochs=5, validation_data=dataset_batch_test, validation_freq=2)  # validation_freq: run validation once every N epochs
print('++++++++++++++++++++++++++++++++++++++++++++Training phase: end++++++++++++++++++++++++++++++++++++++++++++')

# 6. Evaluate the model (test/evaluation)
print('\n++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.evaluate(dataset_batch_test)
print('++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++')

network.save_weights('weights.ckpt')
print('\n================saved weights================')
del network
print('================del network================')

# 7. Re-create a network identical to the one whose weights were saved
print('================re-create a network identical to the original================')
network = keras.Sequential([
    layers.Dense(500, activation=tf.nn.relu),  # 784 --> 500
    layers.Dense(300, activation=tf.nn.relu),  # 500 --> 300
    layers.Dense(100, activation=tf.nn.relu),  # 300 --> 100
    layers.Dense(10)])                         # 100 --> 10
network.build(input_shape=[None, 784])
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.load_weights('weights.ckpt')
print('================loaded weights================')

# 8. Evaluate the restored model (test/evaluation)
print('\n++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.evaluate(dataset_batch_test)
print('++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++')

# 9. Serve the model
sample = next(iter(dataset_batch_test))  # take one batch from dataset_batch_test to simulate serving
x = sample[0]
y = sample[1]  # one-hot
pred = network.predict(x)  # [b, 10]
y = tf.argmax(y, axis=1)  # convert back to class indices
pred = tf.argmax(pred, axis=1)
print('\n++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> serving phase: start++++++++++++++++++++++++++++++++++++++++++++')
print(pred)
print(y)
print('++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> serving phase: end++++++++++++++++++++++++++++++++++++++++++++')

Output:

X_train.shape = (60000, 28, 28),Y_train.shape = (60000,)------------type(X_train) = <class 'numpy.ndarray'>,type(Y_train) = <class 'numpy.ndarray'>
dataset_train = <ShuffleDataset shapes: ((784,), (10,)), types: (tf.float32, tf.float32)>,type(dataset_train) = <class 'tensorflow.python.data.ops.dataset_ops.ShuffleDataset'>
dataset_batch_train = <BatchDataset shapes: ((None, 784), (None, 10)), types: (tf.float32, tf.float32)>,type(dataset_batch_train) = <class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 500)               392500    
_________________________________________________________________
dense_1 (Dense)              (None, 300)               150300    
_________________________________________________________________
dense_2 (Dense)              (None, 100)               30100     
_________________________________________________________________
dense_3 (Dense)              (None, 10)                1010      
=================================================================
Total params: 573,910
Trainable params: 573,910
Non-trainable params: 0
_________________________________________________________________

++++++++++++++++++++++++++++++++++++++++++++Training phase: start++++++++++++++++++++++++++++++++++++++++++++
Epoch 1/5
3/3 [==============================] - 2s 113ms/step - loss: 2.7174 - accuracy: 0.1086
Epoch 2/5
3/3 [==============================] - 3s 492ms/step - loss: 2.6596 - accuracy: 0.1666 - val_loss: 1.6333 - val_accuracy: 0.4709
Epoch 3/5
3/3 [==============================] - 2s 115ms/step - loss: 1.5516 - accuracy: 0.4968
Epoch 4/5
3/3 [==============================] - 2s 255ms/step - loss: 1.0690 - accuracy: 0.6475 - val_loss: 0.7587 - val_accuracy: 0.7859
Epoch 5/5
3/3 [==============================] - 2s 115ms/step - loss: 0.7137 - accuracy: 0.7955
++++++++++++++++++++++++++++++++++++++++++++Training phase: end++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++
2/2 [==============================] - 0s 22ms/step - loss: 0.5240 - accuracy: 0.8493
++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++

================saved weights================
================del network================
================re-create a network identical to the original================
================loaded weights================

++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++
2/2 [==============================] - 0s 22ms/step - loss: 0.5223 - accuracy: 0.8486
++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> serving phase: start++++++++++++++++++++++++++++++++++++++++++++
tf.Tensor([6 3 7 ... 5 1 0], shape=(5000,), dtype=int64)
tf.Tensor([6 3 7 ... 3 1 0], shape=(5000,), dtype=int64)
++++++++++++++++++++++++++++++++++++++++++++after loading weights ---> serving phase: end++++++++++++++++++++++++++++++++++++++++++++

2. network.save() / tf.keras.models.load_model()

This approach saves the entire model (architecture and all parameters); after loading it back, the network can be used in the usual way.
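A minimal sketch of the whole-model round trip (the tiny layer sizes here are placeholders, not the article's MNIST network, which works the same way). Because the file stores the architecture along with the weights, there is no need to rebuild the model before loading:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tempfile
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(4, activation='relu'),
    layers.Dense(2),
])
model.build(input_shape=[None, 8])

h5_path = os.path.join(tempfile.mkdtemp(), 'model.h5')
model.save(h5_path)  # stores architecture + weights in one file

# No Sequential([...]) reconstruction needed: load_model rebuilds it from the file.
restored = tf.keras.models.load_model(h5_path, compile=False)

x = np.ones((1, 8), dtype=np.float32)
assert np.allclose(model(x).numpy(), restored(x).numpy())
```

With `compile=False` the training configuration is skipped, so `restored.compile(...)` must be called again before `fit()` or `evaluate()`, which is exactly what the full listing below does.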

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # only takes effect if set before "import tensorflow as tf"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

# 1. Load the dataset
(X_train, Y_train), (X_test, Y_test) = datasets.mnist.load_data()
print('X_train.shape = {0},Y_train.shape = {1}------------type(X_train) = {2},type(Y_train) = {3}'.format(X_train.shape, Y_train.shape, type(X_train), type(Y_train)))

# 2. Preprocess the data
# Preprocessing function: convert numpy arrays to tensors
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

# 2.1 Training set
dataset_train = tf.data.Dataset.from_tensor_slices((X_train, Y_train))  # automatically converts the numpy arrays to tensors
dataset_train = dataset_train.map(preprocess)  # apply preprocess() to every element
dataset_train = dataset_train.shuffle(len(X_train))  # shuffle the samples so the original image order cannot bias the network
print('dataset_train = {0},type(dataset_train) = {1}'.format(dataset_train, type(dataset_train)))
batch_size_train = 20000  # note: a batch size of 100-200 is usually more appropriate
dataset_batch_train = dataset_train.batch(batch_size_train)  # group every batch_size_train images into one batch; reading one batch reads batch_size_train images at once
print('dataset_batch_train = {0},type(dataset_batch_train) = {1}'.format(dataset_batch_train, type(dataset_batch_train)))
# 2.2 Test set
dataset_test = tf.data.Dataset.from_tensor_slices((X_test, Y_test))  # automatically converts the numpy arrays to tensors
dataset_test = dataset_test.map(preprocess)  # apply preprocess() to every element
dataset_test = dataset_test.shuffle(len(X_test))  # shuffle the samples so the original image order cannot bias the network
batch_size_test = 5000  # note: a batch size of 100-200 is usually more appropriate
dataset_batch_test = dataset_test.batch(batch_size_test)  # group every batch_size_test images into one batch

# 3. Build the network: Dense is a fully connected layer, ReLU is the activation
network = keras.Sequential([
    layers.Dense(500, activation=tf.nn.relu),  # 784 --> 500
    layers.Dense(300, activation=tf.nn.relu),  # 500 --> 300
    layers.Dense(100, activation=tf.nn.relu),  # 300 --> 100
    layers.Dense(10)])                         # 100 --> 10; no activation on the last layer, it is applied inside the loss (from_logits=True)
network.build(input_shape=[None, 784])  # 28*28=784; None stands for the (unknown) number of samples
network.summary()  # print a summary of the model

# 4. Configure the training setup
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# 5. Feed the data in and train the model parameters
print('\n++++++++++++++++++++++++++++++++++++++++++++Training phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.fit(dataset_batch_train, epochs=5, validation_data=dataset_batch_test, validation_freq=2)  # validation_freq: run validation once every N epochs
print('++++++++++++++++++++++++++++++++++++++++++++Training phase: end++++++++++++++++++++++++++++++++++++++++++++')

# 6. Evaluate the model (test/evaluation)
print('\n++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.evaluate(dataset_batch_test)
print('++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++')

network.save('model.h5')
print('\n================saved total model================')
del network
print('================del network================')

# 7. Load the whole saved model (structure and all parameters) back from disk
network = tf.keras.models.load_model('model.h5', compile=False)
print('================loaded model from file================')
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# 8. Evaluate the restored model (test/evaluation)
print('\n++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++')
network.evaluate(dataset_batch_test)
print('++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++')

# 9. Serve the model
sample = next(iter(dataset_batch_test))  # take one batch from dataset_batch_test to simulate serving
x = sample[0]
y = sample[1]  # one-hot
pred = network.predict(x)  # [b, 10]
y = tf.argmax(y, axis=1)  # convert back to class indices
pred = tf.argmax(pred, axis=1)
print('\n++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> serving phase: start++++++++++++++++++++++++++++++++++++++++++++')
print(pred)
print(y)
print('++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> serving phase: end++++++++++++++++++++++++++++++++++++++++++++')

Output:

X_train.shape = (60000, 28, 28),Y_train.shape = (60000,)------------type(X_train) = <class 'numpy.ndarray'>,type(Y_train) = <class 'numpy.ndarray'>
dataset_train = <ShuffleDataset shapes: ((784,), (10,)), types: (tf.float32, tf.float32)>,type(dataset_train) = <class 'tensorflow.python.data.ops.dataset_ops.ShuffleDataset'>
dataset_batch_train = <BatchDataset shapes: ((None, 784), (None, 10)), types: (tf.float32, tf.float32)>,type(dataset_batch_train) = <class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 500)               392500    
_________________________________________________________________
dense_1 (Dense)              (None, 300)               150300    
_________________________________________________________________
dense_2 (Dense)              (None, 100)               30100     
_________________________________________________________________
dense_3 (Dense)              (None, 10)                1010      
=================================================================
Total params: 573,910
Trainable params: 573,910
Non-trainable params: 0
_________________________________________________________________

++++++++++++++++++++++++++++++++++++++++++++Training phase: start++++++++++++++++++++++++++++++++++++++++++++
Epoch 1/5
3/3 [==============================] - 2s 119ms/step - loss: 2.4869 - accuracy: 0.2464
Epoch 2/5
3/3 [==============================] - 2s 514ms/step - loss: 3.5169 - accuracy: 0.3786 - val_loss: 1.5471 - val_accuracy: 0.5026
Epoch 3/5
3/3 [==============================] - 2s 116ms/step - loss: 1.4532 - accuracy: 0.5238
Epoch 4/5
3/3 [==============================] - 2s 273ms/step - loss: 0.9930 - accuracy: 0.6789 - val_loss: 0.6357 - val_accuracy: 0.8010
Epoch 5/5
3/3 [==============================] - 2s 112ms/step - loss: 0.6005 - accuracy: 0.8118
++++++++++++++++++++++++++++++++++++++++++++Training phase: end++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++
2/2 [==============================] - 0s 24ms/step - loss: 0.4489 - accuracy: 0.8735
++++++++++++++++++++++++++++++++++++++++++++Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++

================saved total model================
================del network================
================loaded model from file================

++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> Evaluation phase: start++++++++++++++++++++++++++++++++++++++++++++
2/2 [==============================] - 0s 21ms/step - loss: 0.4505 - accuracy: 0.8729
++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> Evaluation phase: end++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> serving phase: start++++++++++++++++++++++++++++++++++++++++++++
tf.Tensor([9 0 9 ... 5 1 9], shape=(5000,), dtype=int64)
tf.Tensor([9 0 9 ... 5 1 9], shape=(5000,), dtype=int64)
++++++++++++++++++++++++++++++++++++++++++++after loading the whole model ---> serving phase: end++++++++++++++++++++++++++++++++++++++++++++

3. tf.saved_model.save() / tf.saved_model.load()

This exports the model in the SavedModel format, which can be consumed from other languages (for example C++).
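The article gives no code for this part, so here is a hedged sketch (the `ScaleShift` module and its numbers are invented for illustration). `tf.saved_model.save()` accepts any trackable object, Keras models included; the exported directory contains a language-neutral `saved_model.pb` plus a `variables/` folder, which TensorFlow Serving or the TensorFlow C++ API can load. A `tf.Module` with an explicit input signature keeps the example small:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tempfile
import numpy as np
import tensorflow as tf

class ScaleShift(tf.Module):
    """A tiny model, y = w * x + b, exported with an explicit input signature."""
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)
        self.b = tf.Variable(1.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x + self.b

export_dir = tempfile.mkdtemp()
module = ScaleShift()
tf.saved_model.save(module, export_dir)  # writes saved_model.pb + variables/

loaded = tf.saved_model.load(export_dir)  # restores a callable trackable object
y = loaded(tf.constant([0.0, 1.0, 2.0]))
print(y.numpy())  # w*x + b = [1. 3. 5.]
```

For the MNIST network from the previous sections, `tf.saved_model.save(network, export_dir)` follows the same pattern; the resulting directory can then be served without any Python code on the consumer side.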
