The ggml file format

2024-05-24 19:04
Tags: file format, ggml

This article introduces the ggml file format by walking through the GPT-2 conversion example in the ggml repository. I hope it is a useful reference for anyone working with these files.

The ggml file format

The model files are downloaded with the ggml/examples/gpt-2/download-model.sh script; I downloaded the GPT-2 117M model.

The model's vocabulary encoding table, encoder.json:

{"!": 0,"\"": 1,"#": 2,"$": 3,"%": 4,"&": 5,"'": 6,"(": 7,")": 8,"*": 9,"+": 10,",": 11,"-": 12,".": 13,"/": 14,"0": 15,"1": 16,"2": 17,"3": 18,"4": 19,"5": 20,"6": 21,"7": 22,......省略"\u0120Collider": 50253,"\u0120informants": 50254,"\u0120gazed": 50255,"<|endoftext|>": 50256
}
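
A quick way to get a feel for this table is to load it and invert it. The following is a minimal sketch of my own (not part of the ggml examples; it assumes the 117M model was downloaded to ./models/gpt-2-117M). Keys such as "\u0120gazed" use GPT-2's byte-to-unicode escaping, in which "\u0120" (Ġ) stands for a leading space byte (0x20):

import json

# load the token -> id table shown above
with open("./models/gpt-2-117M/encoder.json", "r", encoding="utf-8") as f:
    encoder = json.load(f)

print(encoder["!"])              # 0
print(encoder["<|endoftext|>"])  # 50256

# invert it to get an id -> token table
decoder = {v: k for k, v in encoder.items()}
print(decoder[50255])            # "Ġgazed", i.e. " gazed" with a leading space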

Understanding the ggml file format through the conversion script

ggml/examples/gpt-2/convert-ckpt-to-ggml.py :

# Convert a model checkpoint to a ggml compatible file
#
# Load the model using TensorFlow.
# Iterate over all variables and write them to a binary file.
#
# For each variable, write the following:
#   - Number of dimensions (int)
#   - Name length (int)
#   - Dimensions (int[n_dims])
#   - Name (char[name_length])
#   - Data (float[n_dims])
#
# By default, the bigger matrices are converted to 16-bit floats.
# This can be disabled by adding the "use-f32" CLI argument.
#
# At the start of the ggml file we write the model parameters
# and vocabulary.
#

import sys
import json
import struct
import numpy as np
import tensorflow as tf

# ref: https://github.com/openai/gpt-2/blob/master/src/encoder.py
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings.
    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
    This is a signficant percentage of your normal, say, 32K bpe vocab.
    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
    And avoids mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8+n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))

# helper method to convert a numpy array to different float types
def convert_to_ftype(data, ftype):
    # fp16
    if ftype == 1:
        return data.astype(np.float16)

    assert False, "Invalid ftype: " + str(ftype)

if len(sys.argv) < 3:
    print("Usage: convert-ckpt-to-ggml.py dir-model ftype\n")
    print("  ftype == 0 -> float32")
    print("  ftype == 1 -> float16")
    sys.exit(1)

# output in the same directory as the model
dir_model = sys.argv[1]
fname_out = sys.argv[1] + "/ggml-model.bin"

# load the vocabulary encoding table
with open(dir_model + "/encoder.json", "r", encoding="utf-8") as f:
    encoder = json.load(f)

# load the model hyperparameters
with open(dir_model + "/hparams.json", "r", encoding="utf-8") as f:
    hparams = json.load(f)

# possible data types
#   ftype == 0 -> float32
#   ftype == 1 -> float16
#
# map from ftype to string
ftype_str = ["f32", "f16"]

# name of the exported ggml file
ftype = 1
if len(sys.argv) > 2:
    ftype = int(sys.argv[2])
    if ftype < 0 or ftype > 1:
        print("Invalid ftype: " + str(ftype))
        sys.exit(1)
    fname_out = sys.argv[1] + "/ggml-model-" + ftype_str[ftype] + ".bin"

# list the variables stored in the TensorFlow checkpoint: names, shapes, etc.
list_vars = tf.train.list_variables(dir_model)

fout = open(fname_out, "wb")

# write the ggml file header
fout.write(struct.pack("i", 0x67676d6c)) # magic: ggml in hex

# write the data from hparams.json
fout.write(struct.pack("i", hparams["n_vocab"]))
fout.write(struct.pack("i", hparams["n_ctx"]))
fout.write(struct.pack("i", hparams["n_embd"]))
fout.write(struct.pack("i", hparams["n_head"]))
fout.write(struct.pack("i", hparams["n_layer"]))

# write the data type (ftype)
fout.write(struct.pack("i", ftype))

# byte -> unicode lookup table (see bytes_to_unicode above)
byte_encoder = bytes_to_unicode()
# inverse table: unicode -> byte
byte_decoder = {v:k for k, v in byte_encoder.items()}

# write the vocabulary size
fout.write(struct.pack("i", len(encoder)))

# write every token of the vocabulary
for key in encoder:
    # map the token's unicode characters back to raw bytes
    text = bytearray([byte_decoder[c] for c in key])
    # write the token length
    fout.write(struct.pack("i", len(text)))
    # write the token bytes
    fout.write(text)

# iterate over the model variables (name, shape)
for name, shape in list_vars:
    print("Processing variable: " + name + " with shape: ", shape)

    # load the variable from the checkpoint as a NumPy array, dropping size-1 dimensions
    data = tf.train.load_variable(dir_model, name).squeeze()
    # number of dimensions
    n_dims = len(data.shape)

    # for efficiency - transpose the projection matrices
    # "model/h.*/attn/c_attn/w"
    # "model/h.*/attn/c_proj/w"
    # "model/h.*/mlp/c_fc/w"
    # "model/h.*/mlp/c_proj/w"
    if name[-14:] == "/attn/c_attn/w" or \
       name[-14:] == "/attn/c_proj/w" or \
       name[-11:] == "/mlp/c_fc/w" or \
       name[-13:] == "/mlp/c_proj/w":
        print("  Transposing")
        data = data.transpose()

    # variable shape
    dshape = data.shape

    # convert the weight matrices to the requested ftype; all other parameters stay float32
    ftype_cur = 0
    if ftype != 0:
        # match name:
        #  "model/wte"
        #  "model/h.*/attn/c_attn/w"
        #  "model/h.*/attn/c_proj/w"
        #  "model/h.*/mlp/c_fc/w"
        #  "model/h.*/mlp/c_proj/w"
        if name == "model/wte" or name[-2:] == "/w":
            print("  Converting to " + ftype_str[ftype])
            data = convert_to_ftype(data, ftype)
            ftype_cur = ftype
        else:
            print("  Converting to float32")
            data = data.astype(np.float32)
            ftype_cur = 0

    # header: number of dimensions, name length, ftype
    str = name.encode('utf-8')
    fout.write(struct.pack("iii", n_dims, len(str), ftype_cur))
    # write the dimension sizes (in reverse order)
    for i in range(n_dims):
        fout.write(struct.pack("i", dshape[n_dims - 1 - i]))
    # write the variable name
    fout.write(str)

    # write the variable data
    data.tofile(fout)

# all variables written - close the file
fout.close()

print("Done. Output file: " + fname_out)
print("")

Converting the model to a ggml file

 python ggml/examples/gpt-2/convert-ckpt-to-ggml.py  ./models/gpt-2-117M/   1

The output is as follows:

Processing variable: model/h0/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h0/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h0/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h0/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h0/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h0/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h0/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h1/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h1/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h1/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h1/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h1/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h1/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h1/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h1/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h1/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h1/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h1/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h1/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h10/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h10/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h10/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h10/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h10/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h10/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h10/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h10/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h10/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h10/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h10/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h10/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h11/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h11/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h11/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h11/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h11/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h11/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h11/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h11/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h11/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h11/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h11/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h11/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h2/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h2/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h2/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h2/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h2/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h2/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h2/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h2/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h2/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h2/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h2/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h2/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h3/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h3/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h3/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h3/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h3/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h3/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h3/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h3/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h3/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h3/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h3/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h3/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h4/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h4/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h4/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h4/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h4/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h4/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h4/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h4/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h4/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h4/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h4/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h4/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h5/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h5/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h5/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h5/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h5/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h5/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h5/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h5/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h5/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h5/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h5/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h5/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h6/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h6/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h6/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h6/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h6/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h6/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h6/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h6/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h6/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h6/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h6/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h6/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h7/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h7/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h7/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h7/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h7/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h7/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h7/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h7/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h7/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h7/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h7/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h7/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h8/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h8/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h8/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h8/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h8/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h8/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h8/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h8/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h8/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h8/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h8/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h8/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/h9/attn/c_attn/b with shape:  [2304]
  Converting to float32
Processing variable: model/h9/attn/c_attn/w with shape:  [1, 768, 2304]
  Transposing
  Converting to f16
Processing variable: model/h9/attn/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h9/attn/c_proj/w with shape:  [1, 768, 768]
  Transposing
  Converting to f16
Processing variable: model/h9/ln_1/b with shape:  [768]
  Converting to float32
Processing variable: model/h9/ln_1/g with shape:  [768]
  Converting to float32
Processing variable: model/h9/ln_2/b with shape:  [768]
  Converting to float32
Processing variable: model/h9/ln_2/g with shape:  [768]
  Converting to float32
Processing variable: model/h9/mlp/c_fc/b with shape:  [3072]
  Converting to float32
Processing variable: model/h9/mlp/c_fc/w with shape:  [1, 768, 3072]
  Transposing
  Converting to f16
Processing variable: model/h9/mlp/c_proj/b with shape:  [768]
  Converting to float32
Processing variable: model/h9/mlp/c_proj/w with shape:  [1, 3072, 768]
  Transposing
  Converting to f16
Processing variable: model/ln_f/b with shape:  [768]
  Converting to float32
Processing variable: model/ln_f/g with shape:  [768]
  Converting to float32
Processing variable: model/wpe with shape:  [1024, 768]
  Converting to float32
Processing variable: model/wte with shape:  [50257, 768]
  Converting to f16
Done. Output file: ./models/gpt-2-117M//ggml-model-f16.bin
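
The per-variable records behind this log can be walked the same way. The sketch below (again my own, with the same assumptions as the reader sketch above) skips the header and vocabulary, then prints each tensor's name, dimensions, and ftype. Note that the dimensions are stored in reverse order relative to the NumPy shapes in the log, because the converter writes dshape[n_dims - 1 - i]:

import struct
import numpy as np

with open("./models/gpt-2-117M/ggml-model-f16.bin", "rb") as f:
    f.read(4 + 6 * 4)                        # skip magic + 5 hparams + ftype
    n_tokens, = struct.unpack("i", f.read(4))
    for _ in range(n_tokens):                # skip the vocabulary
        length, = struct.unpack("i", f.read(4))
        f.read(length)

    while True:
        header = f.read(12)
        if len(header) < 12:                 # end of file
            break
        n_dims, name_len, ftype_cur = struct.unpack("iii", header)
        # dims are in ggml order (reversed relative to the log above)
        dims = struct.unpack(f"{n_dims}i", f.read(4 * n_dims))
        name = f.read(name_len).decode("utf-8")
        itemsize = 2 if ftype_cur == 1 else 4    # f16 or f32
        f.seek(int(np.prod(dims)) * itemsize, 1) # skip the tensor data
        print(name, dims, "f16" if ftype_cur == 1 else "f32")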

That concludes this walkthrough of the ggml file format; I hope it is helpful.


