NLP (64): Computing the Token Length of LLaMA-2 Models with FastChat

2023-10-18 14:20

Deploying the LLaMA-2 Model

In NLP (59): Deploying the Baichuan LLM with FastChat, I introduced the FastChat framework and showed how to use it to deploy the Baichuan model.
This article deploys the LLaMA-2 70B model and makes it compatible with OpenAI's calling conventions. The Dockerfile used for deployment is as follows:

FROM nvidia/cuda:11.7.1-runtime-ubuntu20.04
RUN apt-get update -y && apt-get install -y python3.9 python3.9-distutils curl
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.9 get-pip.py
RUN pip3 install fschat

The docker-compose.yml file is as follows:

version: "3.9"
services:
  fastchat-controller:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "21001:21001"
    entrypoint: ["python3.9", "-m", "fastchat.serve.controller", "--host", "0.0.0.0", "--port", "21001"]
  fastchat-model-worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./model:/root/model
    image: fastchat:latest
    ports:
      - "21002:21002"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0', '1']
              capabilities: [gpu]
    entrypoint: ["python3.9", "-m", "fastchat.serve.model_worker", "--model-names", "llama2-70b-chat", "--model-path", "/root/model/llama2/Llama-2-70b-chat-hf", "--num-gpus", "2", "--gpus", "0,1", "--worker-address", "http://fastchat-model-worker:21002", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "21002"]
  fastchat-api-server:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "8000:8000"
    entrypoint: ["python3.9", "-m", "fastchat.serve.openai_api_server", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "8000"]

After starting the stack (e.g. with docker-compose up -d), the deployment occupies two A100 GPUs, each using about 66 GB of GPU memory.
Test whether the model was deployed successfully:

curl http://localhost:8000/v1/models

The output is as follows:

{
  "object": "list",
  "data": [
    {
      "id": "llama2-70b-chat",
      "object": "model",
      "created": 1691504717,
      "owned_by": "fastchat",
      "root": "llama2-70b-chat",
      "parent": null,
      "permission": [
        {
          "id": "modelperm-3XG6nzMAqfEkwfNqQ52fdv",
          "object": "model_permission",
          "created": 1691504717,
          "allow_create_engine": false,
          "allow_sampling": true,
          "allow_logprobs": true,
          "allow_search_indices": true,
          "allow_view": true,
          "allow_fine_tuning": false,
          "organization": "*",
          "group": null,
          "is_blocking": false
        }
      ]
    }
  ]
}

The LLaMA-2 70B model has been deployed successfully!
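
Since the server mimics OpenAI's interface, the same check can be scripted. Below is a minimal sketch using Python's requests library (assuming the services from the compose file above are running locally; the port and response shape follow the curl example):

import requests

# List the models served by the OpenAI-compatible API server
# (port 8000 per the compose file above).
resp = requests.get("http://localhost:8000/v1/models")
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # expected output: llama2-70b-chat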

Computing the Prompt Token Length

The FastChat GitHub project provides an API for computing the token length of a prompt, implemented in fastchat/serve/model_worker.py. It can be called as follows:

curl --location 'localhost:21002/count_token' \
--header 'Content-Type: application/json' \
--data '{"prompt": "What is your name?"}'

The output is as follows:

{
  "count": 6,
  "error_code": 0
}
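
The same endpoint is easy to wrap from Python. Here is a minimal sketch (count_tokens is our own helper name; it assumes the model worker from the compose file is reachable on port 21002):

import requests

def count_tokens(prompt: str, worker_url: str = "http://localhost:21002") -> int:
    """Ask the FastChat model worker how many tokens a prompt occupies."""
    resp = requests.post(f"{worker_url}/count_token", json={"prompt": prompt})
    resp.raise_for_status()
    return resp.json()["count"]

print(count_tokens("What is your name?"))  # expected output: 6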

Computing the Conversation Token Length

Computing the token length of a conversation in FastChat takes a bit more work.
First, we need the conversation template of the LLaMA-2 70B model, obtained via the following API call:

curl --location --request POST 'http://localhost:21002/worker_get_conv_template'

The output is as follows:

{'conv': {'messages': [],
          'name': 'llama-2',
          'offset': 0,
          'roles': ['[INST]', '[/INST]'],
          'sep': ' ',
          'sep2': ' </s><s>',
          'sep_style': 7,
          'stop_str': None,
          'stop_token_ids': [2],
          'system_message': 'You are a helpful, respectful and honest '
                            'assistant. Always answer as helpfully as '
                            'possible, while being safe. Your answers should '
                            'not include any harmful, unethical, racist, '
                            'sexist, toxic, dangerous, or illegal content. '
                            'Please ensure that your responses are socially '
                            'unbiased and positive in nature.\n'
                            '\n'
                            'If a question does not make any sense, or is not '
                            'factually coherent, explain why instead of '
                            "answering something not correct. If you don't "
                            "know the answer to a question, please don't share "
                            'false information.',
          'system_template': '[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n'}}
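
The same template can be fetched from Python; pretty-printing the JSON response produces the block above. A minimal sketch, again assuming the worker address from the compose file:

import pprint

import requests

# Fetch the conversation template from the model worker and pretty-print it.
resp = requests.post("http://localhost:21002/worker_get_conv_template")
resp.raise_for_status()
pprint.pprint(resp.json())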

The conversation module in FastChat (fastchat/conversation.py) contains the code that assembles a conversation into a prompt; it is not reproduced here. To use it, simply copy the entire file, which does not depend on any third-party modules.
We need to turn an OpenAI-style conversation into the corresponding prompt. The input conversation (messages) is as follows:

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"},
]

The Python code is as follows:

# -*- coding: utf-8 -*-
# @place: Pudong, Shanghai
# @file: prompt.py
# @time: 2023/8/8 19:24
from conversation import Conversation, SeparatorStyle

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"},
]

# Conversation template returned by the worker_get_conv_template API above.
llama2_conv = {"conv": {"name": "llama-2", "system_template": "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n", "system_message": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.", "roles": ["[INST]", "[/INST]"], "messages": [], "offset": 0, "sep_style": 7, "sep": " ", "sep2": " </s><s>", "stop_str": None, "stop_token_ids": [2]}}
conv = llama2_conv['conv']

conv = Conversation(
    name=conv["name"],
    system_template=conv["system_template"],
    system_message=conv["system_message"],
    roles=conv["roles"],
    messages=list(conv["messages"]),  # prevent in-place modification
    offset=conv["offset"],
    sep_style=SeparatorStyle(conv["sep_style"]),
    sep=conv["sep"],
    sep2=conv["sep2"],
    stop_str=conv["stop_str"],
    stop_token_ids=conv["stop_token_ids"],
)

if isinstance(messages, str):
    prompt = messages
else:
    for message in messages:
        msg_role = message["role"]
        if msg_role == "system":
            conv.set_system_message(message["content"])
        elif msg_role == "user":
            conv.append_message(conv.roles[0], message["content"])
        elif msg_role == "assistant":
            conv.append_message(conv.roles[1], message["content"])
        else:
            raise ValueError(f"Unknown role: {msg_role}")
    # Add a blank message for the assistant.
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

print(repr(prompt))

The processed prompt is as follows:

"[INST] <<SYS>>\nYou are Jack, you are 20 years old, answer questions with humor.\n<</SYS>>\n\nWhat is your name?[/INST]  Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend! </s><s>[INST] How old are you? [/INST]  Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999! </s><s>[INST] Where is your hometown? [/INST]"

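With the prompt assembled, it can be sent straight to the /count_token endpoint from the previous section. A minimal sketch, reusing the prompt variable from the script above:

import requests

# Count the tokens of the fully assembled conversation prompt.
resp = requests.post("http://localhost:21002/count_token", json={"prompt": prompt})
resp.raise_for_status()
print(resp.json()["count"])  # expected output: 199
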
The token-count API reports a length of 199 for this conversation.
We can cross-check this number with FastChat's chat completion endpoint (v1/chat/completions); the request is:

curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{"model": "llama2-70b-chat","messages": [{"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."}, {"role": "user", "content": "What is your name?"},{"role": "assistant", "content": " Well, well, well! Look who'\''s asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"}, {"role": "user", "content": "How old are you?"}, {"role": "assistant", "content": " Oh, you want to know my age? Well, let'\''s just say I'\''m older than a bottle of wine but younger than a bottle of whiskey. I'\''m like a fine cheese, getting better with age, but still young enough to party like it'\''s 1999!"}, {"role": "user", "content": "Where is your hometown?"}]
}'

The output is:

{
  "id": "chatcmpl-mQxcaQcNSNMFahyHS7pamA",
  "object": "chat.completion",
  "created": 1691506768,
  "model": "llama2-70b-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " Ha! My hometown? Well, that's a tough one. I'm like a bird, I don't have a nest, I just fly around and land wherever the wind takes me. But if you really want to know, I'm from a place called \"The Internet\". It's a magical land where memes and cat videos roam free, and the Wi-Fi is always strong. It's a beautiful place, you should visit sometime!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 199,
    "total_tokens": 302,
    "completion_tokens": 103
  }
}

Note that prompt_tokens in the output is 199, which matches the conversation token length we computed above!
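
For completeness, the same cross-check can be done from Python, reusing the messages list defined earlier (a sketch assuming the API server on port 8000):

import requests

# Send the conversation to the OpenAI-compatible endpoint and read back
# how many prompt tokens the server counted.
payload = {"model": "llama2-70b-chat", "messages": messages}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["usage"]["prompt_tokens"])  # expected output: 199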

Summary

This article showed how to deploy the LLaMA-2 70B model with FastChat, and walked through computing the token length of a prompt as well as of a full conversation. I hope it is helpful to readers.
One personal takeaway: reading the source code really matters.
My personal blog is at https://percent4.github.io/ and everyone is welcome to visit.

References

1. NLP (59): Deploying the Baichuan LLM with FastChat: https://blog.csdn.net/jclian91/article/details/131650918
2. FastChat: https://github.com/lm-sys/FastChat

Feel free to follow my WeChat official account NLP奇幻之旅, where original technical articles are published first.

You are also welcome to join my Knowledge Planet community 自然语言处理奇幻之旅, where I am working to build my own technical community.
