Testing the feasibility of deploying a large language model on an embedded device — model TinyLlama-1.1B-Chat-v1.0

This article tests the feasibility of deploying a large language model on embedded-class hardware, using TinyLlama-1.1B-Chat-v1.0 as the test model. Hopefully it offers some useful reference points for developers facing similar questions.

The test varies the inference parameters of TinyLlama-1.1B-Chat-v1.0 and observes how each parameter change affects inference time.
Local environment:

Processor: Intel® Core™ i5-8400 CPU @ 2.80GHz
Installed RAM: 16.0 GB (15.9 GB usable)
Integrated GPU: Intel® UHD Graphics 630
Discrete GPU: NVIDIA GeForce GTX 1050
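Before loading the model, it is worth checking which device the pipeline will actually use: with device_map="auto", a GTX 1050 has limited VRAM (typically 2 GB, less than the roughly 2.2 GB the 1.1B weights need in bfloat16) and, being a Pascal-era part, weak bfloat16 support, so generation may well fall back to the CPU. A minimal sketch, not part of the original test, to confirm what PyTorch sees:

# Quick environment check (not in the original article): confirms whether
# PyTorch can see the discrete GPU before the 1.1B model is loaded.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("device:", props.name)
    print("VRAM (GB):", round(props.total_memory / 1024**3, 1))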

The main line under test, whose parameters are varied:

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
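Here max_new_tokens caps how many tokens are generated, do_sample toggles sampling versus greedy decoding, and temperature, top_k, and top_p shape the sampling distribution. Every timing comparison in the script below follows the same pattern; a minimal sketch of that harness (timed_generation is an illustrative helper, and pipe and prompt are defined in the full script further down):

# Hedged sketch of the timing pattern used throughout the tests:
# run one generation per parameter setting and print the wall-clock time.
from datetime import datetime

def timed_generation(pipe, prompt, **gen_kwargs):
    start = datetime.now()
    outputs = pipe(prompt, **gen_kwargs)
    print(gen_kwargs, "->", datetime.now() - start)
    return outputs

# e.g. timed_generation(pipe, prompt, max_new_tokens=32, do_sample=False)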

Source code origin (mirror): https://hf-mirror.com/TinyLlama/TinyLlama-1.1B-Chat-v1.0
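Since the weights come from the hf-mirror.com mirror rather than huggingface.co, downloads can be routed through the mirror by setting the HF_ENDPOINT environment variable before transformers is imported; this is the mechanism the mirror itself documents, shown here as a sketch assuming the mirror is reachable:

# Route Hugging Face downloads through the mirror; the variable must be
# set before transformers / huggingface_hub are imported.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from transformers import pipeline  # model files now download via the mirror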

'''
https://hf-mirror.com/TinyLlama/TinyLlama-1.1B-Chat-v1.0
TinyLlama 1.1B tests well -- noticeably better than even the quantized Qwen 1.8B.
'''
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import os
from datetime import datetime

import torch

os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'

from transformers import pipeline

'''
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
                torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see
# https://hf-mirror.com/docs/transformers/main/en/chat_templating
messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    # {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
    {"role": "user", "content": "What is your name?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
'''
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...


def load_pipeline():
    pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
                    torch_dtype=torch.bfloat16, device_map="auto")
    return pipe


def generate_text(content, length=20):
    """Generate text for the given prompt."""  # length is unused; kept from the original
    messages = [
        # The original used role "提示" ("prompt"), which the chat template
        # does not recognize; "system" is the expected role name.
        {"role": "system", "content": "This is a friendly chatbot..."},
        # {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
        {"role": "user", "content": content},
    ]
    prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    datetime1 = datetime.now()
    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    print(outputs[0]["generated_text"])
    datetime2 = datetime.now()
    time12_interval = datetime2 - datetime1
    print("elapsed", time12_interval)

    if False:  # extra parameter sweeps, disabled by default
        outputs = pipe(prompt, max_new_tokens=32, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime3 = datetime.now()
        time23_interval = datetime3 - datetime2
        print("elapsed 2", time23_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=False, top_k=50)
        print(outputs[0]["generated_text"])
        datetime4 = datetime.now()
        time34_interval = datetime4 - datetime3
        print("elapsed 3", time34_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=True, temperature=0.7, top_k=30, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime5 = datetime.now()
        time45_interval = datetime5 - datetime4
        print("elapsed 4", time45_interval)

        outputs = pipe(prompt, max_new_tokens=32, do_sample=False, top_k=30)
        print(outputs[0]["generated_text"])
        datetime6 = datetime.now()
        time56_interval = datetime6 - datetime5
        print("elapsed 5", time56_interval)

        outputs = pipe(prompt, max_new_tokens=12, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
        print(outputs[0]["generated_text"])
        datetime7 = datetime.now()
        time67_interval = datetime7 - datetime6
        print("elapsed 6", time67_interval)

    '''
    Conclusions:
    - Changing top_p does not noticeably reduce inference time, and for the
      same question asked in both languages, the Chinese version takes about
      twice as long as the English one.
    - Setting do_sample=False barely reduces inference time.
    - Only max_new_tokens reduces inference time significantly, and the
      relationship is not linear: with max_new_tokens=256 inference takes
      about 2 minutes, while max_new_tokens=32 still takes about 1 minute.
      So it is better to set max_new_tokens fairly large and get a more
      complete answer.
    '''
    return outputs


if __name__ == "__main__":
    '''main function'''
    global pipe
    pipe = load_pipeline()
    # print('load pipe ok')
    while True:
        prompt = input("Enter a prompt (or 'exit' to quit): ")
        if prompt.lower() == 'exit':
            break
        try:
            generated_text = generate_text(prompt)
            print("Generated text:")
            print(generated_text[0]["generated_text"])
        except Exception as e:
            print("Error:", e)
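The non-linearity noted in the conclusions (256 tokens in about 2 minutes versus 32 tokens in about 1 minute) suggests a fixed per-call overhead on top of a roughly constant per-token cost. A hedged sketch for quantifying that, assuming the pipe and prompt from the script above (tokens_per_second is an illustrative helper, not from the original):

# Sketch: estimate per-token latency so the max_new_tokens / time
# relationship from the conclusions can be measured rather than eyeballed.
from datetime import datetime

def tokens_per_second(pipe, prompt, max_new_tokens):
    start = datetime.now()
    out = pipe(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    elapsed = (datetime.now() - start).total_seconds()
    new_text = out[0]["generated_text"][len(prompt):]  # only the newly generated part
    n_tokens = len(pipe.tokenizer(new_text, add_special_tokens=False)["input_ids"])
    return n_tokens / elapsed

# e.g. for n in (32, 64, 128, 256): print(n, tokens_per_second(pipe, prompt, n))

A sample run of the full script follows.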
Enter a prompt (or 'exit' to quit): 如何开门? ("How do I open the door?")
<|user|>
如何开门?</s>
<|assistant|>
Certainly! Opening a door is a simple process that involves several steps. Here are the general steps to follow to open a door:
1. Turn off the lock: Turn off the lock with the key by pressing the "lock" button.
2. Press the handle: Use the handle to push the door open. If the door is mechanical, you may need to turn a knob or pull the door handle to activate the door.
3. Release the latch: Once the door is open, release the latch by pulling it backward.
4. Slide the door: Slide the door forward by pushing it against the wall with your feet or using a push bar.
5. Close the door: Once the door is open, close it by pressing the lock button or pulling the handle backward.
6. Use a second key: If the lock has a second key, make sure it is properly inserted and then turn it to the correct position to unlock the door.
Remember to always double-check the locks before opening a door, as some locks can be tricky to open. If you're unsure about the correct procedure for opening a door,
elapsed 0:04:23.561065
Generated text:
<|user|>
如何开门?</s>
<|assistant|>
Certainly! Opening a door is a simple process that involves several steps. Here are the general steps to follow to open a door:
1. Turn off the lock: Turn off the lock with the key by pressing the "lock" button.
2. Press the handle: Use the handle to push the door open. If the door is mechanical, you may need to turn a knob or pull the door handle to activate the door.
3. Release the latch: Once the door is open, release the latch by pulling it backward.
4. Slide the door: Slide the door forward by pushing it against the wall with your feet or using a push bar.
5. Close the door: Once the door is open, close it by pressing the lock button or pulling the handle backward.
6. Use a second key: If the lock has a second key, make sure it is properly inserted and then turn it to the correct position to unlock the door.
Remember to always double-check the locks before opening a door, as some locks can be tricky to open. If you're unsure about the correct procedure for opening a door,
Enter a prompt (or 'exit' to quit):
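As a rough cross-check of the conclusions: assuming the full 256 new tokens were generated (the reply is cut off mid-sentence, so it likely hit the cap), the measured 0:04:23 (about 263 s) works out to roughly one token per second on this CPU — about twice the ~2 minutes quoted for an English question, consistent with the Chinese-versus-English observation, and consistent with max_new_tokens dominating total latency. Note also that the answer prints twice because generate_text prints it internally and the main loop prints it again.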

That concludes this test of deploying the large language model TinyLlama-1.1B-Chat-v1.0 on embedded-class hardware. Hopefully the results are useful to other developers.


