This article explains how to run and debug the THUDM/chatglm2-6b large language model locally; hopefully it serves as a useful reference for developers facing similar problems.
Model download site: https://www.opencsg.com/models
Install git:
sudo apt install git
Install git-lfs. This step is important, because the model weights are stored with Git LFS:
sudo apt-get install git-lfs
Download the model THUDM/chatglm2-6b:
mkdir THUDM
cd THUDM
git lfs install
git clone https://portal.opencsg.com/models/THUDM/chatglm2-6b.git
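After cloning, it is worth confirming that Git LFS actually fetched the multi-gigabyte weight files rather than leaving small pointer stubs behind (a common symptom of skipping `git lfs install`). A minimal sketch, assuming the model was cloned into `THUDM/chatglm2-6b`; the helper names are my own:

```python
from pathlib import Path

def is_lfs_pointer(path: Path) -> bool:
    """Return True if the file is an un-fetched Git LFS pointer stub.

    Pointer stubs are tiny text files whose first line starts with
    'version https://git-lfs.github.com/spec'.
    """
    try:
        with open(path, "rb") as f:
            head = f.read(64)
    except OSError:
        return False
    return head.startswith(b"version https://git-lfs")

def unfetched_weights(model_dir: str) -> list:
    """List *.bin weight files in model_dir that are still pointer stubs."""
    return sorted(p.name for p in Path(model_dir).glob("*.bin") if is_lfs_pointer(p))

# Example: print any weight shards that were not actually downloaded
# print(unfetched_weights("THUDM/chatglm2-6b"))  # expect an empty list
```

If the list is non-empty, rerun `git lfs install` followed by `git lfs pull` inside the model directory.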
Install the dependencies (as listed on https://www.opencsg.com/models/THUDM/chatglm2-6b):
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple protobuf
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple transformers==4.30.2
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple cpm_kernels
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple torch
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple gradio
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple mdtex2html
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple sentencepiece
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple accelerate
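Before running the test script, a quick sanity check that every package above is importable can save a confusing traceback later. A minimal sketch using only the standard library; the function name is hypothetical, and note that some pip package names differ from their import names (e.g. `protobuf` is imported as `google.protobuf`):

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of module names that cannot be found for import."""
    missing = []
    for name in names:
        try:
            if find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # A dotted name whose parent package is absent
            missing.append(name)
    return missing

# Import names corresponding to the pip packages installed above
deps = ["google.protobuf", "transformers", "cpm_kernels", "torch",
        "gradio", "mdtex2html", "sentencepiece", "accelerate"]
# print(missing_packages(deps))  # expect an empty list
```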
Next, create c01_token.py and add the following code to test the model:
from transformers import AutoTokenizer, AutoModel

# trust_remote_code is required because ChatGLM2 ships custom model code
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

response, history = model.chat(tokenizer, "你好", history=[])  # "Hello"
print(response)
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)  # "What should I do if I can't sleep at night?"
print(response)
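The script above shows the key pattern: each call to `model.chat` takes the running history and returns an updated one, so a conversation is just the history threaded through successive calls. That flow can be sketched with a small wrapper; here `chat_fn` stands in for a call to `model.chat(tokenizer, ...)`, and the stub reply logic below is purely illustrative:

```python
def run_conversation(chat_fn, prompts):
    """Feed prompts in order, threading the (prompt, reply) history through each call."""
    history = []
    replies = []
    for prompt in prompts:
        response, history = chat_fn(prompt, history=history)
        replies.append(response)
    return replies, history

# Illustrative stub with the same (response, updated_history) return shape as model.chat
def fake_chat(prompt, history):
    response = f"echo: {prompt}"
    return response, history + [(prompt, response)]

replies, history = run_conversation(fake_chat, ["你好", "晚上睡不着应该怎么办"])
# With the real model, pass e.g.:
# chat_fn = lambda p, history: model.chat(tokenizer, p, history=history)
```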
That concludes this walkthrough of how to debug the THUDM/chatglm2-6b model locally. We hope it proves helpful to fellow developers!