Fine-tuning models with DeepSpeed + Transformers

2024-05-10 23:28

This article walks through fine-tuning a model with DeepSpeed and Transformers. It is meant as a practical reference for developers who want to follow along with the code.

I. Contents

  1. Code walkthrough: Trainer implementation
  2. Code walkthrough: PEFT (LoRA) fine-tuning + Trainer implementation

II. Implementation

1. Code walkthrough: Trainer implementation
Transformers integrates DeepSpeed through the Trainer, so supplying a DeepSpeed configuration file is all that is needed to train with DeepSpeed.
Fine-tuning workflow: define parameters -> preprocess the data -> create the model and evaluation metric -> train with the Trainer framework.
Note: this was run on V100 GPUs, and float16 precision training is not enabled.
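Before the full script, here is a minimal sketch of that integration point (file names and values are taken from the script below): the only DeepSpeed-specific step is pointing TrainingArguments at the JSON config and launching with the deepspeed launcher; the Trainer then creates and drives the DeepSpeed engine itself.

# Minimal sketch of the Trainer/DeepSpeed hookup; the model, datasets, and remaining
# arguments are omitted here and appear in the full script below.
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="checkpoints",
    per_device_train_batch_size=5,
    deepspeed="ds_config.json",   # the whole DeepSpeed integration is this one argument
)
# trainer = Seq2SeqTrainer(model=model, args=training_args, ...)  # everything else is standard Trainer usage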

import deepspeed
deepspeed.ops.op_builder.CPUAdamBuilder().load()  # pre-build and load the CPU Adam extension used for optimizer offload
import gc
import nltk
import torch
import evaluate
import datasets
import numpy as np
from nltk.tokenize import sent_tokenize
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
nltk.download("punkt")  # sentence tokenizer used by compute_metrics

###################################### Parameter definitions ####################################
dataset_name = "samsum"              # dataset name
# model_name = "google/flan-t5-xxl"  # model name
model_name = "google/flan-t5-xl"     # model name
max_input_length = 256
max_gen_length = 128
output_dir = "checkpoints"
num_train_epochs = 5
learning_rate = 5e-5
deepspeed_config = "ds_config.json"  # DeepSpeed configuration file
per_device_train_batch_size = 5      # keep the per-device batch size small; larger values cause OOM
per_device_eval_batch_size = 5
gradient_accumulation_steps = 10     # the per-GPU batch size is small, so use gradient accumulation to enlarge the effective batch size

################################# Load the dataset and preprocess #########################################
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = datasets.load_dataset(dataset_name)
print(dataset["train"][0])

def preprocess(examples):
    dialogues = ["summarize:" + dia for dia in examples["dialogue"]]
    # summaries = [summ for summ in examples["summary"]]
    model_inputs = tokenizer(dialogues, max_length=max_input_length, truncation=True)
    labels = tokenizer(text_target=examples["summary"], max_length=max_gen_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_dataset = dataset.map(preprocess, batched=True, remove_columns=["dialogue", "summary", "id"])
# print(tokenized_dataset["train"]["input_ids"][0])  # inspect the mapped data

def collate_fn(features):
    batch_input_ids = [torch.LongTensor(feature["input_ids"]) for feature in features]
    batch_attention_mask = [torch.LongTensor(feature["attention_mask"]) for feature in features]
    batch_labels = [torch.LongTensor(feature["labels"]) for feature in features]
    batch_input_ids = pad_sequence(batch_input_ids, batch_first=True, padding_value=tokenizer.pad_token_id)
    batch_attention_mask = pad_sequence(batch_attention_mask, batch_first=True, padding_value=0)
    batch_labels = pad_sequence(batch_labels, batch_first=True, padding_value=-100)
    return {
        "input_ids": batch_input_ids,
        "attention_mask": batch_attention_mask,
        "labels": batch_labels
    }

############################## Load the model (Seq2SeqLM) and test it ##############################
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Test code:
# from torch.utils.data import DataLoader
# dataloader = DataLoader(tokenized_dataset["test"], shuffle=False, batch_size=4, collate_fn=collate_fn)
# batch = next(iter(dataloader))
# output = model(**batch)
# print(output)

############################################# Train the model with the Trainer framework ####################################
print("==========train....================")
metric = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    result = {k: round(v * 100, 4) for k, v in result.items()}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    return result

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    eval_accumulation_steps=1,        # avoid OOM during evaluation
    predict_with_generate=True,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
    # logging & evaluation strategies
    logging_dir="logs",
    logging_strategy="steps",
    logging_steps=50,                 # log every 50 steps
    evaluation_strategy="steps",
    eval_steps=500,                   # evaluate every 500 steps
    save_steps=500,
    save_total_limit=2,
    load_best_model_at_end=True,
    deepspeed=deepspeed_config,       # location of the DeepSpeed configuration file
    report_to="all"
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=collate_fn,
    compute_metrics=compute_metrics,
)
trainer.train()
gc.collect()
torch.cuda.empty_cache()

# Print results on the validation set
# print(trainer.evaluate(tokenized_dataset["validation"]))
# Print results on the test set
# print(trainer.evaluate(tokenized_dataset["test"]))

# Save the best model
trainer.save_model("best.pt")
# export NCCL_IB_DISABLE=1; export NCCL_P2P_DISABLE=1; NCCL_DEBUG=INFO deepspeed --include=localhost:0,1 test1.py

Configuration file: ds_config.json

{"fp16": {"enabled": "auto"},"optimizer": {"type": "AdamW","params": {"lr": "auto","betas": "auto","eps": "auto","weight_decay": "auto"}},"scheduler": {"type": "WarmupLR","params": {"warmup_min_lr": "auto","warmup_max_lr": "auto","warmup_num_steps": "auto"}},"zero_optimization": {"stage": 3,"offload_optimizer": {"device": "cpu","pin_memory": true},"offload_param": {"device": "cpu","pin_memory": true},"overlap_comm": true,"contiguous_gradients": true,"sub_group_size": 1e9,"reduce_bucket_size": "auto","stage3_prefetch_bucket_size": "auto","stage3_param_persistence_threshold": "auto","stage3_max_live_parameters": 1e9,"stage3_max_reuse_distance": 1e9,"stage3_gather_16bit_weights_on_model_save": false},"gradient_accumulation_steps": "auto","gradient_clipping": "auto","steps_per_print": 2000,"train_batch_size": "auto","train_micro_batch_size_per_gpu": "auto","wall_clock_breakdown": false
}
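Every "auto" entry above is filled in by the Trainer from the matching TrainingArguments (learning rate, batch sizes, gradient accumulation, and so on), which keeps the two configurations consistent. Also, with stage3_gather_16bit_weights_on_model_save set to false, the checkpoints written under output_dir stay sharded per rank. The sketch below (the checkpoint path is a placeholder) shows one way to consolidate them after training with DeepSpeed's zero_to_fp32 utility.

# Sketch: consolidate a sharded ZeRO-3 checkpoint into a single fp32 state dict.
# "checkpoints/checkpoint-500" is a placeholder; point it at an actual Trainer checkpoint directory.
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

state_dict = get_fp32_state_dict_from_zero_checkpoint("checkpoints/checkpoint-500")
# model.load_state_dict(state_dict)   # load into a freshly constructed model, then model.save_pretrained(...)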

Launch: single node, multiple GPUs. NCCL_IB_DISABLE and NCCL_P2P_DISABLE turn off InfiniBand and peer-to-peer transport, which avoids hangs on hosts where those paths are unreliable, and NCCL_DEBUG=INFO prints communication diagnostics:

export NCCL_IB_DISABLE=1; export NCCL_P2P_DISABLE=1; NCCL_DEBUG=INFO deepspeed --include=localhost:0,1 test1.py>output.log 2>&1 &

2. Code walkthrough: PEFT (LoRA) fine-tuning + Trainer implementation

This variant fine-tunes bigscience/bloomz-7b1-mt on a BELLE instruction dataset with LoRA adapters; DeepSpeed (ZeRO-2 with CPU optimizer offload) is again driven entirely through the Trainer via the configuration file ds_zero2_config.json.

import os
import torch
import random
import datasets
import numpy as np
from typing import Dict
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    TrainingArguments,
    Trainer
)
from peft import (
    LoraConfig,
    TaskType,
    get_peft_model,
    get_peft_model_state_dict,
)

def set_random_seed(seed):
    if seed is not None and seed > 0:
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.random.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True

set_random_seed(1234)

# 1. Parameters
# LoRA parameters
LORA_R = 8
LORA_ALPHA = 32
LORA_DROPOUT = 0.1
# Training parameters
EPOCHS = 3
LEARNING_RATE = 5e-5
OUTPUT_DIR = "./checkpoints"
BATCH_SIZE = 4  # 2
GRADIENT_ACCUMULATION_STEPS = 3
# Other parameters
MODEL_PATH = "bigscience/bloomz-7b1-mt"
DATA_PATH = "./data/belle_open_source_1M.train.json"
MAX_LENGTH = 512
PATTERN = "{}\n{}"
DS_CONFIG = "ds_zero2_config.json"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)  # load the tokenizer
# Load the data
dataset = datasets.load_dataset("json", data_files=DATA_PATH)
# print(dataset["train"][0])

# 2. Tokenization
def tokenize(text: str, add_eos_token=True):
    result = tokenizer(
        text,
        truncation=True,
        max_length=MAX_LENGTH,
        padding=False,
        return_tensors=None
    )
    # Decide whether to append the eos_token
    if (result["input_ids"][-1] != tokenizer.eos_token_id
            and len(result["input_ids"]) < MAX_LENGTH
            and add_eos_token):
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    result["labels"] = result["input_ids"].copy()
    return result

def preprocess(example: Dict, train_on_inputs: bool = False):
    prompt = example["input"]
    response = example["target"]
    text = PATTERN.format(prompt, response)
    tokenized_inp = tokenize(text)
    # If train_on_inputs is False, mask the prompt tokens in the labels with -100
    if not train_on_inputs:
        tokenized_prompt = tokenize(prompt, add_eos_token=False)
        prompt_tokens_len = len(tokenized_prompt["input_ids"])
        tokenized_inp["labels"] = [-100] * prompt_tokens_len + tokenized_inp["labels"][prompt_tokens_len:]
    return tokenized_inp

train_data = dataset["train"].shuffle().map(preprocess, remove_columns=["id", "input", "target"])
print(train_data[0])

# pad_to_multiple_of=8 pads each batch to a multiple of 8 tokens
collate_fn = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)

# 3. Load the model
device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
# device_map pins the model to this process's GPU; torch_dtype=torch.float16 loads the weights in half precision
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16, device_map=device_map)

# 4. LoRA setup
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=LORA_R,                   # rank of the LoRA low-rank decomposition
    lora_alpha=LORA_ALPHA,      # scaling factor for the low-rank matrices
    lora_dropout=LORA_DROPOUT,  # dropout applied to the LoRA layers
)
# Wrap the model with LoRA adapters
model = get_peft_model(model, lora_config)
model.config.use_cache = False
# Patch state_dict so that checkpoints contain only the LoRA weights
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
).__get__(model, type(model))
# Print the trainable parameters
model.print_trainable_parameters()

# 5. Training arguments
args = TrainingArguments(
    output_dir=OUTPUT_DIR,                                    # checkpoint directory
    per_device_train_batch_size=BATCH_SIZE,                   # per-device batch size
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,  # gradient accumulation steps
    warmup_steps=100,
    num_train_epochs=EPOCHS,
    learning_rate=LEARNING_RATE,
    fp16=True,                     # mixed-precision training
    logging_steps=50,
    evaluation_strategy="no",      # no evaluation during training
    save_strategy="steps",
    save_steps=2000,               # save a checkpoint every 2000 steps
    save_total_limit=5,            # keep at most 5 checkpoints
    deepspeed=DS_CONFIG            # DeepSpeed configuration
)

# 6. Train
trainer = Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=None,
    args=args,
    data_collator=collate_fn
)
trainer.train()
model.save_pretrained("best_model")
{"train_micro_batch_size_per_gpu": "auto","gradient_accumulation_steps": "auto","steps_per_print": 50,"gradient_clipping": 1.0,"zero_optimization": {"stage": 2,"offload_optimizer": {"device": "cpu"},"contiguous_gradients": true,"overlap_comm": true},"zero_allow_untested_optimizer": true,"fp16": {"enabled": true,"loss_scale": 0,"loss_scale_window": 1000,"hysteresis": 2,"min_loss_scale": 1},"optimizer": {"type": "Adam","params": {"lr": "auto","betas": "auto","eps": "auto","weight_decay": "auto"}},"activation_checkpointing": {"partition_activations": true,"contiguous_memory_optimization": true},"wall_clock_breakdown": false
}
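Because state_dict was patched with get_peft_model_state_dict, the "best_model" directory holds only the LoRA adapter weights rather than the full 7B base model. A minimal inference sketch (the model name and directory are taken from the script above; the prompt and generation settings are illustrative) would reload the base model and attach the adapter with PEFT:

# Sketch: reload the saved LoRA adapter for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1-mt", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-7b1-mt")
model = PeftModel.from_pretrained(base, "best_model")   # attach the adapter weights saved by save_pretrained
model.eval()

inputs = tokenizer("Give three tips for staying healthy.\n", return_tensors="pt").to(base.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))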

That concludes this walkthrough of fine-tuning models with DeepSpeed + Transformers; I hope it proves useful to other developers.


