This article introduces the ggml file format by walking through the GPT-2 example in the ggml repository; hopefully it serves as a useful reference for developers working with ggml models.
The ggml file format
The model files were downloaded with the
ggml/examples/gpt-2/download-model.sh
script. I downloaded the GPT-2 117M model.
The model's vocabulary encoding table, encoder.json:
{"!": 0, "\"": 1, "#": 2, "$": 3, "%": 4, "&": 5, "'": 6, "(": 7, ")": 8, "*": 9, "+": 10, ",": 11, "-": 12, ".": 13, "/": 14, "0": 15, "1": 16, "2": 17, "3": 18, "4": 19, "5": 20, "6": 21, "7": 22, ... (entries omitted) ... "\u0120Collider": 50253, "\u0120informants": 50254, "\u0120gazed": 50255, "<|endoftext|>": 50256}
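The "\u0120" prefix seen in many keys is not part of the word itself: GPT-2's byte-level BPE maps every raw byte to a printable unicode character, and byte 0x20 (space) becomes "\u0120" (Ġ). The following sketch (my own illustration, reusing the table-building function from OpenAI's encoder.py) shows how an encoder.json key decodes back to raw bytes:

```python
# Sketch: decode an encoder.json key back to its raw UTF-8 bytes.
# bytes_to_unicode() is the same table builder used by OpenAI's encoder.py
# (and by the conversion script below).

def bytes_to_unicode():
    # printable bytes map to themselves; the remaining bytes are shifted
    # into unused unicode code points starting at 256
    bs = list(range(ord("!"), ord("~") + 1)) + \
         list(range(ord("¡"), ord("¬") + 1)) + \
         list(range(ord("®"), ord("ÿ") + 1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

# invert the table: unicode character -> original byte
byte_decoder = {v: k for k, v in bytes_to_unicode().items()}

# "\u0120gazed" (token id 50255 above) is really " gazed" with a leading space
token = "\u0120gazed"
raw = bytes(byte_decoder[c] for c in token)
print(raw)  # b' gazed'
```

This inverse table (byte_decoder) is exactly what the conversion script uses when it serializes the vocabulary into the ggml file.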
Understanding the ggml file format through the conversion script
ggml/examples/gpt-2/convert-ckpt-to-ggml.py:
# Convert a model checkpoint to a ggml compatible file
#
# Load the model using TensorFlow.
# Iterate over all variables and write them to a binary file.
#
# For each variable, write the following:
# - Number of dimensions (int)
# - Name length (int)
# - Dimensions (int[n_dims])
# - Name (char[name_length])
# - Data (float[n_dims])
#
# By default, the bigger matrices are converted to 16-bit floats.
# This can be disabled by adding the "use-f32" CLI argument.
#
# At the start of the ggml file we write the model parameters
# and vocabulary.

import sys
import json
import struct
import numpy as np
import tensorflow as tf

# ref: https://github.com/openai/gpt-2/blob/master/src/encoder.py
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings.
    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
    This is a signficant percentage of your normal, say, 32K bpe vocab.
    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
    And avoids mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1)) + list(range(ord("¡"), ord("¬")+1)) + list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8+n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))

# helper method to convert a numpy array to different float types
def convert_to_ftype(data, ftype):
    # fp16
    if ftype == 1:
        return data.astype(np.float16)

    assert False, "Invalid ftype: " + str(ftype)

if len(sys.argv) < 3:
    print("Usage: convert-ckpt-to-ggml.py dir-model ftype\n")
    print("  ftype == 0 -> float32")
    print("  ftype == 1 -> float16")
    sys.exit(1)

# output in the same directory as the model
dir_model = sys.argv[1]
fname_out = sys.argv[1] + "/ggml-model.bin"

# load the vocabulary encoding table
with open(dir_model + "/encoder.json", "r", encoding="utf-8") as f:
    encoder = json.load(f)

# load the model hyperparameters
with open(dir_model + "/hparams.json", "r", encoding="utf-8") as f:
    hparams = json.load(f)

# possible data types
#   ftype == 0 -> float32
#   ftype == 1 -> float16
#
# map from ftype to string
ftype_str = ["f32", "f16"]

ftype = 1
if len(sys.argv) > 2:
    ftype = int(sys.argv[2])
    if ftype < 0 or ftype > 1:
        print("Invalid ftype: " + str(ftype))
        sys.exit(1)
    # name of the exported ggml file
    fname_out = sys.argv[1] + "/ggml-model-" + ftype_str[ftype] + ".bin"

# list the variables stored in the TensorFlow checkpoint: per-layer names, shapes, etc.
list_vars = tf.train.list_variables(dir_model)

fout = open(fname_out, "wb")

# write the ggml file header
fout.write(struct.pack("i", 0x67676d6c)) # magic: ggml in hex
# write the hyperparameters from hparams.json
fout.write(struct.pack("i", hparams["n_vocab"]))
fout.write(struct.pack("i", hparams["n_ctx"]))
fout.write(struct.pack("i", hparams["n_embd"]))
fout.write(struct.pack("i", hparams["n_head"]))
fout.write(struct.pack("i", hparams["n_layer"]))
# write the data type (ftype)
fout.write(struct.pack("i", ftype))

# byte -> unicode table
byte_encoder = bytes_to_unicode()
# unicode -> byte table (the inverse)
byte_decoder = {v:k for k, v in byte_encoder.items()}

# write the length of the vocabulary encoding table
fout.write(struct.pack("i", len(encoder)))

# iterate over every vocabulary entry
for key in encoder:
    # convert the encoded characters of one token back to raw bytes
    text = bytearray([byte_decoder[c] for c in key])
    # write the token length
    fout.write(struct.pack("i", len(text)))
    # write the token bytes
    fout.write(text)

# iterate over the model variables: name, shape
for name, shape in list_vars:
    print("Processing variable: " + name + " with shape: ", shape)

    # load the variable from the checkpoint as a NumPy array,
    # squeezing out dimensions of size 1
    data = tf.train.load_variable(dir_model, name).squeeze()
    # number of dimensions of the variable
    n_dims = len(data.shape)

    # for efficiency - transpose the projection matrices
    # "model/h.*/attn/c_attn/w"
    # "model/h.*/attn/c_proj/w"
    # "model/h.*/mlp/c_fc/w"
    # "model/h.*/mlp/c_proj/w"
    if name[-14:] == "/attn/c_attn/w" or \
       name[-14:] == "/attn/c_proj/w" or \
       name[-11:] == "/mlp/c_fc/w" or \
       name[-13:] == "/mlp/c_proj/w":
        print("  Transposing")
        data = data.transpose()

    # shape of the variable
    dshape = data.shape

    # convert the weight matrices to the requested ftype;
    # everything else is stored as float32
    ftype_cur = 0
    if ftype != 0:
        # match name:
        #  "model/wte"
        #  "model/h.*/attn/c_attn/w"
        #  "model/h.*/attn/c_proj/w"
        #  "model/h.*/mlp/c_fc/w"
        #  "model/h.*/mlp/c_proj/w"
        if name == "model/wte" or name[-2:] == "/w":
            print("  Converting to " + ftype_str[ftype])
            data = convert_to_ftype(data, ftype)
            ftype_cur = ftype
        else:
            print("  Converting to float32")
            data = data.astype(np.float32)
            ftype_cur = 0

    # header
    str = name.encode('utf-8')
    # write the tensor header: number of dimensions, name length, data type
    fout.write(struct.pack("iii", n_dims, len(str), ftype_cur))
    # write all dimension values (in reverse order)
    for i in range(n_dims):
        fout.write(struct.pack("i", dshape[n_dims - 1 - i]))
    # write the variable name
    fout.write(str)
    # write the tensor data
    data.tofile(fout)

# all variables written; close the file
fout.close()

print("Done. Output file: " + fname_out)
print("")
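To make the byte-level layout concrete, here is a self-contained sketch (my own illustration, not code from the ggml repo) that writes a header, a one-token vocabulary, and a single tensor record in the same layout as the script above, then parses it all back. The tensor and the name "model/demo" are made up for the demonstration; the hyperparameter values are the GPT-2 117M ones.

```python
import io
import struct
import numpy as np

buf = io.BytesIO()

# --- write: same layout as the conversion script ---
buf.write(struct.pack("i", 0x67676d6c))        # magic "ggml"
for v in (50257, 1024, 768, 12, 12):           # n_vocab, n_ctx, n_embd, n_head, n_layer
    buf.write(struct.pack("i", v))
buf.write(struct.pack("i", 1))                 # ftype == 1 -> f16

buf.write(struct.pack("i", 1))                 # vocabulary length (one demo token)
word = b"!"
buf.write(struct.pack("i", len(word)))         # token length
buf.write(word)                                # token bytes

data = np.arange(6, dtype=np.float32).reshape(2, 3)  # stand-in tensor
name = b"model/demo"                                 # hypothetical variable name
buf.write(struct.pack("iii", data.ndim, len(name), 0))  # n_dims, name length, ftype
for i in range(data.ndim):                              # dims in reverse order
    buf.write(struct.pack("i", data.shape[data.ndim - 1 - i]))
buf.write(name)
buf.write(data.tobytes())  # the script uses data.tofile(fout); tobytes() is the BytesIO equivalent

# --- read it back ---
buf.seek(0)
magic, = struct.unpack("i", buf.read(4))
n_vocab, n_ctx, n_embd, n_head, n_layer = struct.unpack("5i", buf.read(20))
ftype, = struct.unpack("i", buf.read(4))
vocab_len, = struct.unpack("i", buf.read(4))
for _ in range(vocab_len):
    wlen, = struct.unpack("i", buf.read(4))
    token = buf.read(wlen)
n_dims, name_len, ftype_cur = struct.unpack("3i", buf.read(12))
dims = struct.unpack("%di" % n_dims, buf.read(4 * n_dims))
tensor_name = buf.read(name_len).decode("utf-8")
n_elems = int(np.prod(dims))
tensor = np.frombuffer(buf.read(4 * n_elems), dtype=np.float32).reshape(dims[::-1])

print(hex(magic), tensor_name, dims, tensor.shape)
```

Note that the dimensions are stored in reverse order, so a reader has to flip them back (dims[::-1]) before reshaping the data.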
Converting the model to a ggml file:
python ggml/examples/gpt-2/convert-ckpt-to-ggml.py ./models/gpt-2-117M/ 1
The output is as follows:
Processing variable: model/h0/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h0/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h0/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h0/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h0/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h0/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h0/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h0/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h0/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h0/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h1/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h1/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h1/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h1/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h1/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h1/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h1/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h1/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h1/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h1/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h1/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h1/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h10/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h10/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h10/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h10/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h10/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h10/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h10/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h10/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h10/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h10/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h10/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h10/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h11/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h11/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h11/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h11/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h11/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h11/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h11/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h11/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h11/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h11/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h11/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h11/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h2/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h2/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h2/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h2/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h2/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h2/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h2/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h2/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h2/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h2/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h2/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h2/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h3/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h3/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h3/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h3/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h3/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h3/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h3/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h3/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h3/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h3/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h3/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h3/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h4/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h4/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h4/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h4/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h4/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h4/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h4/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h4/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h4/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h4/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h4/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h4/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h5/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h5/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h5/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h5/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h5/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h5/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h5/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h5/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h5/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h5/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h5/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h5/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h6/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h6/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h6/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h6/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h6/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h6/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h6/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h6/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h6/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h6/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h6/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h6/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h7/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h7/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h7/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h7/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h7/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h7/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h7/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h7/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h7/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h7/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h7/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h7/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h8/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h8/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h8/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h8/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h8/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h8/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h8/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h8/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h8/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h8/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h8/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h8/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h9/attn/c_attn/b with shape: [2304]
  Converting to float32
Processing variable: model/h9/attn/c_attn/w with shape: [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h9/attn/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h9/attn/c_proj/w with shape: [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h9/ln_1/b with shape: [768]
  Converting to float32
Processing variable: model/h9/ln_1/g with shape: [768]
  Converting to float32
Processing variable: model/h9/ln_2/b with shape: [768]
  Converting to float32
Processing variable: model/h9/ln_2/g with shape: [768]
  Converting to float32
Processing variable: model/h9/mlp/c_fc/b with shape: [3072]
  Converting to float32
Processing variable: model/h9/mlp/c_fc/w with shape: [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h9/mlp/c_proj/b with shape: [768]
  Converting to float32
Processing variable: model/h9/mlp/c_proj/w with shape: [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/ln_f/b with shape: [768]
  Converting to float32
Processing variable: model/ln_f/g with shape: [768]
  Converting to float32
Processing variable: model/wpe with shape: [1024, 768]
  Converting to float32
Processing variable: model/wte with shape: [50257, 768]
  Converting to f16
Done. Output file: ./models/gpt-2-117M//ggml-model-f16.bin
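As a quick sanity check on the generated file: the magic 0x67676d6c is the ASCII string "ggml" packed as a little-endian 32-bit integer, so the first four bytes on disk read byte-reversed as "lmgg". A small sketch (the file path below is the one from the run above, commented out so the snippet runs standalone):

```python
import struct

# "ggml" packed little-endian ends up byte-reversed on disk
assert struct.pack("<i", 0x67676d6c) == b"lmgg"

# to verify a real converted file, one could do:
# with open("./models/gpt-2-117M/ggml-model-f16.bin", "rb") as f:
#     assert f.read(4) == b"lmgg"
```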
That concludes this introduction to the ggml file format. I hope it is helpful to fellow developers!