This article shows how to measure the output tokens of the Baichuan large model; hopefully it offers some reference value to developers who need to solve the same problem.
The code is fairly simple; I'm writing it down so I don't have to piece it together again later.
First, add a get_outputs method to the BaichuanForCausalLM class in modeling_baichuan.py:
def get_outputs(self, tokenizer, messages: List[dict], stream=False,
                generation_config: Optional[GenerationConfig] = None):
    generation_config = generation_config or self.generation_config
    input_ids = build_chat_input(self, tokenizer, messages, generation_config.max_new_tokens)
    outputs = self.generate(input_ids, generation_config=generation_config)
    return outputs
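As a quick back-of-the-envelope on why the script below has to load the model in half precision: a 13B-parameter model at 2 bytes per float16 weight needs roughly 24 GiB for the weights alone (the numbers here are rough assumptions for illustration, not measurements of the actual checkpoint):

```python
# Rough weight-memory estimate for a 13B-parameter model in float16.
params = 13e9          # approximate parameter count
bytes_per_param = 2    # float16 = 2 bytes per weight
gib = params * bytes_per_param / 2**30
print(f"{gib:.1f} GiB")  # ~24.2 GiB for the weights alone, before activations/KV cache
```

In float32 the same weights would need about twice that, which is why a single consumer GPU usually cannot hold the full-precision model.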
Then run the code below to do the measurement. Note that, because my GPU does not have enough memory, the model is loaded in half precision (float16).
import os
import torch
import platform
from colorama import Fore, Style
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
import time

print("init model ...")
model = AutoModelForCausalLM.from_pretrained(
    "/home/Datasets/large_models/all_pt_model/baichuan2_13b_20240102",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
model.generation_config = GenerationConfig.from_pretrained(
    "/home/Datasets/large_models/all_pt_model/baichuan2_13b_20240102"
)
tokenizer = AutoTokenizer.from_pretrained(
    "/home/Datasets/large_models/all_pt_model/baichuan2_13b_20240102",
    use_fast=False,
    trust_remote_code=True
)

messages = []
outs = []
generate_tokens = []
query = [
    "怎么创建线程",
    "什么是信号量,给我创建信号量的示例",
    "发送邮件的示例",
    "如何查找设备",
    "你好,介绍一下自己,要一千字",
    "给我一个创建线程的代码示例",
    "详细介绍一下rt-thread",
    "rt-thread的历史发展",
    "创建设备的代码示例",
]
for q in query:
    messages = [{"role": "user", "content": q}]
    t1 = time.time()
    outputs = model.get_outputs(tokenizer, messages)
    t2 = time.time()
    outs.append(outputs)
    # Note: outputs[0] is the full returned sequence (prompt + completion),
    # so this rate also counts the prompt tokens.
    generate_tokens.append(len(outputs[0]) / (t2 - t1))
print(generate_tokens)
print(sum(generate_tokens) / len(generate_tokens))
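One caveat: for decoder-only models, generate returns the prompt and the completion as a single sequence, so len(outputs[0]) above slightly overcounts the output tokens. To measure only newly generated tokens, subtract the prompt length. A minimal sketch (the helper name and the sample numbers are made up for illustration):

```python
def tokens_per_second(output_len: int, input_len: int, elapsed: float) -> float:
    # generate() returns prompt + completion, so subtract the prompt
    # length to count only the newly generated tokens.
    return (output_len - input_len) / elapsed

# Hypothetical run: 512-token returned sequence, 32-token prompt, 12 s elapsed.
print(tokens_per_second(512, 32, 12.0))  # 40.0
```

Inside get_outputs the prompt length is available as input_ids.shape[-1]; returning it alongside outputs (or re-tokenizing the prompt in the loop) would let the script apply this correction.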
That wraps up this note on measuring the output tokens of the Baichuan large model; I hope the article is helpful to fellow programmers!