This article walks through Keras-based model quantization (PTQ and QAT); I hope it serves as a useful reference for developers working on this problem.
A detailed explanation of PTQ and QAT is in this post:
"Model Quantization (3) — Quantization-Aware Training QAT (PyTorch)"
The code in this article is based on TensorFlow.
Contents
- PTQ
  - Weight-only quantization
  - Full quantization of weights and activations
- QAT
  - Build and train a baseline model
  - Clone and fine-tune the pretrained model with QAT
  - Quantize the model
  - Evaluate TF and TFLite
PTQ
Weight-only quantization
This only reduces the model size; it does little for compute, because in W * X the weights must be dequantized back to floating point before the multiplication, so it actually adds an extra dequantization step...
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
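To see the size reduction this gives you, a quick check is to convert the same SavedModel once without optimization and compare the two TFLite payloads. A minimal sketch under the same assumptions as the snippet above (`saved_model_dir` and `tflite_quant_model`); the output file names here are made up:

import pathlib
import tensorflow as tf

# Float baseline for comparison: same model, converted without any optimization.
float_converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_float_model = float_converter.convert()

# Write both models to disk (hypothetical file names) and compare sizes.
pathlib.Path("model_float.tflite").write_bytes(tflite_float_model)
pathlib.Path("model_weight_quant.tflite").write_bytes(tflite_quant_model)

print("Float model:       %.2f MB" % (len(tflite_float_model) / 1e6))
print("Weight-only quant: %.2f MB" % (len(tflite_quant_model) / 1e6))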
Full quantization of weights and activations
Quantizing both the weights and the activations improves power consumption and latency, because the critical dense computation W * X runs in 8-bit integers instead of floating point. This requires a small calibration dataset to estimate the scale S and zero point Z used to (de)quantize the activations.
import tensorflow as tf

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
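In practice the placeholders `num_calibration_steps` and `input` above are replaced by real samples. Below is a hedged sketch that yields about 100 calibration samples, where `train_images` is an assumed float array whose shape matches the model input; it also shows the optional converter settings for forcing every op and the input/output tensors to int8 (useful for microcontrollers or Edge TPU):

import numpy as np
import tensorflow as tf

def representative_dataset_gen():
    # Yield real samples one at a time, as float32 batches of size 1.
    for sample in train_images[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
# Optional: force full integer quantization, including int8 input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()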
For more details, see the official documentation:
https://www.tensorflow.org/model_optimization/guide/quantization/post_training?hl=zh-cn
QAT
Build and train a baseline model
!pip install -q tensorflow
!pip install -q tensorflow-model-optimization

import tempfile
import os

import tensorflow as tf

from tensorflow_model_optimization.python.core.keras.compat import keras

# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture.
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    epochs=1,
    validation_split=0.1,
)
Clone and fine-tune the pretrained model with QAT
import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# q_aware stands for quantization aware.
q_aware_model = quantize_model(model)

# `quantize_model` requires a recompile.
q_aware_model.compile(optimizer='adam',
                      loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])

q_aware_model.summary()

##############################################################

train_images_subset = train_images[0:1000]  # out of 60000
train_labels_subset = train_labels[0:1000]

q_aware_model.fit(train_images_subset, train_labels_subset,
                  batch_size=500, epochs=1, validation_split=0.1)

##############################################################

_, baseline_model_accuracy = model.evaluate(test_images, test_labels, verbose=0)
_, q_aware_model_accuracy = q_aware_model.evaluate(test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)
print('Quant test accuracy:', q_aware_model_accuracy)

# At this point the model is still float32, not int8.
Quantize the model
Use the PTQ approach described above:
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

quantized_tflite_model = converter.convert()
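To confirm the size benefit, you can also convert the baseline (non-QAT) model to a plain float TFLite model and compare the two files. A small sketch reusing `model`, `quantized_tflite_model`, and the `tempfile`/`os` imports from the training snippet above:

# Convert the baseline model to float TFLite for comparison.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()

_, float_file = tempfile.mkstemp(suffix='.tflite')
_, quant_file = tempfile.mkstemp(suffix='.tflite')

with open(float_file, 'wb') as f:
    f.write(float_tflite_model)
with open(quant_file, 'wb') as f:
    f.write(quantized_tflite_model)

print('Float model size:     %.2f MB' % (os.path.getsize(float_file) / 1e6))
print('Quantized model size: %.2f MB' % (os.path.getsize(quant_file) / 1e6))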
Evaluate TF and TFLite
import numpy as np

def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for i, test_image in enumerate(test_images):
        if i % 1000 == 0:
            print('Evaluated on {n} results so far.'.format(n=i))
        # Pre-processing: add batch dimension and convert to float32 to match with
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with highest
        # probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    print('\n')
    # Compare prediction results with ground truth labels to calculate accuracy.
    prediction_digits = np.array(prediction_digits)
    accuracy = (prediction_digits == test_labels).mean()
    return accuracy

###################################################################
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()

test_accuracy = evaluate_model(interpreter)

print('Quant TFLite test_accuracy:', test_accuracy)
print('Quant TF test accuracy:', q_aware_model_accuracy)
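If the two accuracies are close (they typically differ by only a fraction of a percent), the quantized model is ready for deployment; it can simply be written out as a .tflite file and loaded with the TFLite interpreter on the target device. A one-line sketch with a hypothetical file name:

# Persist the quantized model for deployment (hypothetical path).
with open('qat_mnist_int8.tflite', 'wb') as f:
    f.write(quantized_tflite_model)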
That concludes this article on Keras-based model quantization (PTQ and QAT); I hope it is helpful.