Introduction
This is the second article in the GPT series. In the previous article we implemented GPT-1 and trained a novel generator.
Today we will implement GPT-2 for the same novel-generation task.
Model Architecture
GPT-2 largely follows the design of GPT-1, with a few optimizations/modifications:
- Layer normalization is moved to the input of each sub-block (Pre-LN);
- An additional layer normalization is added after the last block;
- A modified initialization is used that accounts for the accumulation along the residual path with model depth: at initialization, the weights of the residual layers are scaled by a factor of $1/\sqrt{N}$, where $N$ is the number of residual layers (a worked example follows this list);
- The vocabulary is expanded to 50,257 tokens;
- The context size is increased from 512 to 1024 tokens;
- A larger batch size of 512 is used;
- Single-task modeling $p(\text{output} \mid \text{input})$ becomes task-conditioned modeling $p(\text{output} \mid \text{input}, \text{task})$: a task description is added to the input, so the same input should produce different outputs for different tasks.
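To make the scaled initialization concrete, here is a quick calculation using the configuration adopted later in this article (n_layer=12, initializer_range=0.02); the numbers are illustrative:

```python
import math

initializer_range = 0.02    # base std of the normal initializer
n_layer = 12                # number of Transformer blocks
n_residual = 2 * n_layer    # each block contributes two residual paths (attention + MLP)

scaled_std = initializer_range / math.sqrt(n_residual)
print(f"{scaled_std:.4f}")  # 0.0041
```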
Model Implementation
It is strongly recommended to read the previous article before continuing.
Here we focus on the layers that changed relative to the GPT-1 implementation.
Transformer Block
```python
class Block(nn.Module):
    def __init__(self, config: GPT2Config) -> None:
        super().__init__()
        n_embd = config.n_embd
        self.attn = Attention(config)
        self.ln_1 = nn.LayerNorm(n_embd)
        self.mlp = MLP(config)
        self.ln_2 = nn.LayerNorm(n_embd)

    def forward(
        self, x: Tensor, attention_mask: Tensor = None, output_attentions: bool = False
    ) -> Tensor:
        """
        Args:
            x (Tensor): (batch_size, seq_len, n_embd)
            attention_mask (Tensor, optional)
            output_attentions (bool, optional)

        Returns:
            Tensor: (batch_size, seq_len, n_embd) block output
            Tensor(optional): (batch_size, n_head, seq_len, seq_len) attn_weights
        """
        # keep the original input for the residual connection
        residual = x
        # layer norm applied to the input of the attention sub-block (Pre-LN)
        x = self.ln_1(x)
        # compute attention
        attn_outputs = self.attn(x, attention_mask, output_attentions)
        # attn_output (batch_size, seq_len, n_embd)
        attn_output = attn_outputs[0]
        # residual connection: add the original input back
        x = attn_output + residual

        # same pattern for the MLP sub-block
        residual = x
        # x (batch_size, seq_len, n_embd)
        x = self.ln_2(x)
        # x (batch_size, seq_len, n_embd)
        x = self.mlp(x)
        # residual connection
        # x (batch_size, seq_len, n_embd)
        x = x + residual

        outputs = [x] + attn_outputs[1:]
        return outputs
```
The change here: layer normalization is moved to the input of each sub-block (Pre-LN).
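To make the difference concrete, here is a minimal sketch (not code from the repository) contrasting the GPT-1 style (Post-LN) with the GPT-2 style (Pre-LN). `sublayer` stands for either the attention or the MLP module:

```python
# Post-LN (GPT-1 style): normalize the *output* of the residual addition.
def post_ln_sublayer(x, sublayer, layer_norm):
    return layer_norm(x + sublayer(x))


# Pre-LN (GPT-2 style): normalize the *input* of the sub-block and keep
# the residual branch as an identity path.
def pre_ln_sublayer(x, sublayer, layer_norm):
    return x + sublayer(layer_norm(x))
```

Keeping the residual branch as a pure identity path tends to make deeper models easier to train.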
GPT2PreTrainedModel
```python
class GPT2PreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = GPT2Config
    base_model_prefix = "transformer"

    def __init__(self, config: PretrainedConfig):
        super().__init__(config)

    def _init_weights(self, module):
        if isinstance(module, (nn.Linear, Conv1D)):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)

        for name, p in module.named_parameters():
            if name == "c_proj.weight":
                # Special scaled initialization --> each Block has two residual connections
                p.data.normal_(
                    mean=0.0,
                    std=(
                        self.config.initializer_range
                        / math.sqrt(2 * self.config.n_layer)
                    ),
                )
```
The main change is the last loop: a modified initialization that accounts for the accumulation along the residual path with model depth. At initialization, the weights of the residual projection layers (`c_proj.weight`) are scaled by a factor of $1/\sqrt{N}$, where $N$ is the number of residual layers (two per block, so $N = 2 \times$ n_layer).
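A quick sanity check of this initialization, assuming the classes above are importable from the `configuration_gpt2` and `modeling_gpt2` modules used later in the article (a sketch, not part of the original code):

```python
from configuration_gpt2 import GPT2Config
from modeling_gpt2 import GPT2Model

config = GPT2Config()      # n_layer=12, initializer_range=0.02
model = GPT2Model(config)  # post_init() applies _init_weights to every module

# The residual projections are the parameters named "c_proj.weight"; their std
# should be close to 0.02 / sqrt(2 * 12) ≈ 0.0041 rather than the default 0.02.
stds = [
    p.std().item()
    for name, p in model.named_parameters()
    if name.endswith("c_proj.weight")
]
print(stds[:2])
```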
GPT2Model
```python
class GPT2Model(GPT2PreTrainedModel):
    def __init__(self, config: GPT2Config) -> None:
        super().__init__(config)
        self.config = config

        self.tokens_embed = nn.Embedding(config.vocab_size, config.n_embd)
        self.positions_embed = nn.Embedding(config.n_positions, config.n_embd)
        self.dropout = nn.Dropout(config.dropout)
        self.h = nn.ModuleList([Block(config) for _ in range(config.n_layer)])
        # an extra layer normalization after the last block
        self.ln = nn.LayerNorm(config.n_embd)

        self.register_buffer(
            "position_ids", torch.arange(config.n_positions), persistent=False
        )
        self.post_init()

    def forward(
        self,
        input_ids: torch.LongTensor,
        attention_mask: Tensor = None,
        output_attentions: bool = False,
        output_hidden_states: bool = False,
        return_dict: bool = False,
    ) -> Union[Tuple[torch.Tensor], BaseModelOutput]:
        """
        Args:
            input_ids (torch.LongTensor): (batch_size, seq_len)
            output_attentions (bool, optional): whether or not to return the attentions tensors of all attention layers. Defaults to False.
            output_hidden_states (bool, optional): whether or not to return the hidden states of all layers. Defaults to False.
            return_dict (bool, optional): whether or not to return a ModelOutput instead of a plain tuple. Defaults to False.

        Returns:
            Union[Tuple[torch.Tensor], BaseModelOutput]: tuple or BaseModelOutput
        """
        input_shape = input_ids.size()

        inputs_embeds = self.tokens_embed(input_ids)
        # generate position ids
        position_ids = self.position_ids[None, : input_shape[-1]]
        position_embeds = self.positions_embed(position_ids)
        hidden_states = inputs_embeds + position_embeds
        hidden_states = self.dropout(hidden_states)

        all_attentions = () if output_attentions else None
        all_hidden_states = () if output_hidden_states else None
        for _, block in enumerate(self.h):
            if output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)
            outputs = block(hidden_states, attention_mask, output_attentions)
            hidden_states = outputs[0]
            if output_attentions:
                all_attentions = all_attentions + (outputs[1],)

        # apply the final layer normalization to the output of the last block
        hidden_states = self.ln(hidden_states)

        # add last layer
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        if not return_dict:
            return tuple(
                v
                for v in [hidden_states, all_hidden_states, all_attentions]
                if v is not None
            )

        return BaseModelOutput(
            last_hidden_state=hidden_states,
            hidden_states=all_hidden_states,
            attentions=all_attentions,
        )
```
The change reflected here: an additional layer normalization is applied after the last block.
GPT2Config
```python
from transformers import PretrainedConfig


class GPT2Config(PretrainedConfig):
    model_type = "gpt2"

    def __init__(
        self,
        vocab_size=5000,
        n_positions=1024,  # context size increased to 1024
        n_embd=768,
        n_layer=12,
        n_head=12,
        dropout=0.1,
        initializer_range=0.02,
        bos_token_id=2,
        eos_token_id=3,
        **kwargs
    ) -> None:
        """
        Args:
            vocab_size (int, optional): vocabulary size. Defaults to 5000.
            n_positions (int, optional): the maximum sequence length that this model might ever be used with. Defaults to 1024.
            n_embd (int, optional): dimensionality of the embeddings and hidden states. Defaults to 768.
            n_layer (int, optional): number of hidden layers. Defaults to 12.
            n_head (int, optional): number of attention heads for each attention layer. Defaults to 12.
            dropout (float, optional): the dropout probability. Defaults to 0.1.
            initializer_range (float, optional): the standard deviation of the truncated_normal_initializer for initializing all weight matrices. Defaults to 0.02.
        """
        self.vocab_size = vocab_size
        self.n_positions = n_positions
        self.n_embd = n_embd
        self.n_layer = n_layer
        self.n_head = n_head
        self.dropout = dropout
        self.initializer_range = initializer_range
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
```
The context size (n_positions) is increased from 512 to 1024.
Training the Tokenizer
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import BertPreTokenizer
from tokenizers.processors import TemplateProcessing
from transformers import PreTrainedTokenizerFast
from transformers import AutoTokenizer

from config import train_args


def train(
    file_path: str,
    save_path="tokenizer.json",
    eos_token="<|endoftext|>",
    vocab_size: int = 5000,
) -> None:
    tokenizer = Tokenizer(BPE(unk_token=eos_token))
    # only has eos token
    trainer = BpeTrainer(special_tokens=[eos_token], vocab_size=vocab_size)
    tokenizer.pre_tokenizer = BertPreTokenizer()
    tokenizer.train([file_path], trainer)
    tokenizer.post_processor = TemplateProcessing(
        single=f"$A {eos_token}",
        pair=f"$A {eos_token} $B:1 {eos_token}:1",
        special_tokens=[
            (eos_token, tokenizer.token_to_id(eos_token)),
        ],
    )
    print(f"vocab size: {tokenizer.get_vocab_size()}")
    tokenizer.save(save_path)


if __name__ == "__main__":
    eos_token = "<|endoftext|>"
    train("./data/novel.txt", eos_token=eos_token, vocab_size=5000)

    tokenizer = PreTrainedTokenizerFast(
        tokenizer_file="tokenizer.json", model_max_length=1024
    )
    tokenizer.unk_token = eos_token
    tokenizer.bos_token = tokenizer.unk_token
    tokenizer.eos_token = tokenizer.unk_token
    tokenizer.pad_token = tokenizer.unk_token

    if train_args.from_remote:
        tokenizer.push_to_hub(f"{train_args.owner}/{train_args.model_name}")
        tokenizer = AutoTokenizer.from_pretrained(
            f"{train_args.owner}/{train_args.model_name}"
        )
    else:
        tokenizer.save_pretrained(train_args.model_name)
        tokenizer = AutoTokenizer.from_pretrained(train_args.model_name)
```
The only change is model_max_length, which is now set to 1024.
Then simply re-process the novel data with the newly trained tokenizer.
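As a quick check that the trained tokenizer behaves as expected (a sketch assuming tokenizer.json has already been produced by the script above; the sample sentence is arbitrary):

```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json", model_max_length=1024
)

ids = tokenizer.encode("萧炎大喝一声")
print(ids)                                   # ends with the <|endoftext|> id added by the post-processor
print(tokenizer.convert_ids_to_tokens(ids))  # the corresponding tokens
```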
Training
```python
from datasets import load_dataset, load_from_disk
from transformers import (
    AutoTokenizer,
    default_data_collator,
    get_linear_schedule_with_warmup,
)
from torch.utils.data.dataloader import DataLoader
from torch.optim import AdamW
import torch
from tqdm import tqdm

from log import logger
from utils import EarlyStopper
from config import train_args
from configuration_gpt2 import GPT2Config
from modeling_gpt2 import GPT2LMHeadModel


def get_grouped_params(model, weight_decay, no_decay=["bias", "LayerNorm.weight"]):
    params_with_wd, params_without_wd = [], []
    for n, p in model.named_parameters():
        if any(nd in n for nd in no_decay):
            params_without_wd.append(p)
        else:
            params_with_wd.append(p)
    return [
        {"params": params_with_wd, "weight_decay": weight_decay},
        {"params": params_without_wd, "weight_decay": 0.0},
    ]


def train(model, train_dataloader, val_dataloader, optimizer, device, scheduler, args):
    max_grad_norm = args.max_grad_norm
    logging_steps = args.logging_steps
    gradient_accumulation_steps = args.gradient_accumulation_steps

    total_loss = 0.0
    logging_loss = 0.0
    best_loss = 10000
    global_steps = 0

    early_stopper = EarlyStopper()

    for epoch in range(args.epochs):
        model.train()
        p_bar = tqdm(train_dataloader, disable=False)
        for step, batch in enumerate(p_bar):
            batch = {k: v.to(device) for k, v in batch.items()}
            outputs = model(batch["input_ids"], labels=batch["labels"])
            loss = outputs.loss
            total_loss += loss.item()

            p_bar.set_description(
                f"epoch {epoch + 1:2d} (loss={loss.item():5.3f} | global_steps {global_steps:4d} | lr {scheduler.get_last_lr()[0]:.5f} )"
            )

            if gradient_accumulation_steps > 1:
                loss = loss / gradient_accumulation_steps

            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)

            if (step + 1) % gradient_accumulation_steps == 0:
                optimizer.step()
                scheduler.step()
                optimizer.zero_grad()
                global_steps += 1

                if logging_steps > 0 and global_steps % logging_steps == 0:
                    train_loss = (total_loss - logging_loss) / (
                        logging_steps * gradient_accumulation_steps
                    )
                    if args.use_wandb:
                        wandb.log(
                            {
                                "global_steps": global_steps,
                                "lr": scheduler.get_last_lr()[0],
                                "train_loss:": train_loss,
                            }
                        )
                    logging_loss = total_loss

        eval_loss = evaluate(model, val_dataloader, device)
        logger.info(
            f"epoch {epoch} | global_steps {global_steps} | eval loss {eval_loss:.3f}"
        )

        if args.use_wandb:
            wandb.log({"epoch": epoch, "eval_loss:": eval_loss})

        torch.cuda.empty_cache()

        if eval_loss < best_loss:
            best_loss = eval_loss
            logger.info(
                f"Saving model to {args.model_name} with best eval loss {eval_loss:.3f}"
            )
            # save to local disk
            model.save_pretrained(f"{args.owner}/{args.model_name}")

        if early_stopper.step(eval_loss):
            print("Stop from early stopping.")
            break


@torch.no_grad()
def evaluate(model, dataloader, device):
    model.eval()
    p_bar = tqdm(dataloader, desc="iter", disable=False)

    total_loss = 0.0
    for batch in p_bar:
        batch = {k: v.to(device) for k, v in batch.items()}
        labels = batch["labels"]
        outputs = model(batch["input_ids"], labels=labels)
        total_loss += outputs.loss.item()

    test_loss = total_loss / len(dataloader)
    return test_loss


if __name__ == "__main__":
    # run train_tokenizer.py to get tokenizer
    if train_args.from_remote:
        tokenizer = AutoTokenizer.from_pretrained(
            f"{train_args.owner}/{train_args.tokenizer_name}", use_fast=True
        )
    else:
        tokenizer = AutoTokenizer.from_pretrained(
            f"{train_args.tokenizer_name}", use_fast=True
        )

    if train_args.use_wandb:
        import wandb

        wandb.init(
            project="simple-gpt",
            config=vars(train_args),
        )

    config = GPT2Config()

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = GPT2LMHeadModel(config)
    model.to(device)

    # run data_process.py to get dataset
    if train_args.from_remote:
        tokenized_dataset = load_dataset(f"{train_args.owner}/{train_args.dataset_name}")
    else:
        tokenized_dataset = load_from_disk(f"{train_args.dataset_name}")

    tokenized_dataset.set_format("torch")

    train_dataset = tokenized_dataset["train"]
    eval_dataset = tokenized_dataset["valid"]

    batch_size = int(train_args.batch_size / train_args.gradient_accumulation_steps)

    train_dataloader = DataLoader(
        train_dataset,
        batch_size=batch_size,
        collate_fn=default_data_collator,
    )
    eval_dataloader = DataLoader(
        eval_dataset,
        batch_size=batch_size,
        collate_fn=default_data_collator,
    )

    total_training_steps = int(
        train_args.epochs
        * len(train_dataloader)
        / train_args.gradient_accumulation_steps
    )

    print(f"total train steps={total_training_steps}")

    optimizer = AdamW(
        get_grouped_params(model, weight_decay=train_args.weight_decay),
        lr=train_args.learning_rate,
    )
    lr_scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(train_args.warmup_proportion * total_training_steps),
        num_training_steps=total_training_steps,
    )

    train(
        model,
        train_dataloader,
        eval_dataloader,
        optimizer,
        device,
        lr_scheduler,
        train_args,
    )
```
The main training code is unchanged from the GPT-1 article.
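The `train_args` object imported from `config` is not listed in this article. Based on the attributes the script accesses, it might look roughly like the hypothetical sketch below; the attribute names come from the code above, but the values are purely illustrative, not the settings actually used:

```python
from dataclasses import dataclass


@dataclass
class TrainArgs:
    # Hypothetical sketch: only the fields accessed by the training script above.
    owner: str = "your-hf-username"
    model_name: str = "simple-gpt2-doupo"
    tokenizer_name: str = "simple-gpt2-doupo"
    dataset_name: str = "doupo-dataset"
    from_remote: bool = False
    use_wandb: bool = False

    epochs: int = 20
    batch_size: int = 32
    gradient_accumulation_steps: int = 2
    learning_rate: float = 5e-4
    weight_decay: float = 0.01
    warmup_proportion: float = 0.1
    max_grad_norm: float = 1.0
    logging_steps: int = 50


train_args = TrainArgs()
```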
epoch 1 (loss=4.774 | global_steps 432 | lr 0.00049 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:16<00:00, 4.42it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.64it/s]
2024-01-26 13:13:33 - INFO - root - epoch 0 | global_steps 433 | eval loss 4.865
2024-01-26 13:13:33 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 4.865
epoch 2 (loss=4.346 | global_steps 865 | lr 0.00046 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.42it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.54it/s]
2024-01-26 13:16:52 - INFO - root - epoch 1 | global_steps 866 | eval loss 4.501
2024-01-26 13:16:52 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 4.501
epoch 3 (loss=3.906 | global_steps 1298 | lr 0.00044 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.43it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.68it/s]
2024-01-26 13:20:11 - INFO - root - epoch 2 | global_steps 1299 | eval loss 4.104
2024-01-26 13:20:11 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 4.104
epoch 4 (loss=3.494 | global_steps 1731 | lr 0.00041 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.44it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.65it/s]
2024-01-26 13:23:30 - INFO - root - epoch 3 | global_steps 1732 | eval loss 3.791
2024-01-26 13:23:30 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 3.791
epoch 5 (loss=3.183 | global_steps 2164 | lr 0.00038 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.44it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.64it/s]
2024-01-26 13:26:49 - INFO - root - epoch 4 | global_steps 2165 | eval loss 3.625
2024-01-26 13:26:49 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 3.625
epoch 6 (loss=2.965 | global_steps 2597 | lr 0.00036 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:14<00:00, 4.44it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.69it/s]
2024-01-26 13:30:08 - INFO - root - epoch 5 | global_steps 2598 | eval loss 3.544
2024-01-26 13:30:08 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 3.544
epoch 7 (loss=2.737 | global_steps 3030 | lr 0.00033 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.44it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.64it/s]
2024-01-26 13:33:27 - INFO - root - epoch 6 | global_steps 3031 | eval loss 3.528
2024-01-26 13:33:27 - INFO - root - Saving model to simple-gpt2-doupo with best eval loss 3.528
epoch 8 (loss=2.541 | global_steps 3463 | lr 0.00031 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:16<00:00, 4.41it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.14it/s]
2024-01-26 13:36:47 - INFO - root - epoch 7 | global_steps 3464 | eval loss 3.546
2024-01-26 13:36:47 - INFO - root - early stop left: 4
epoch 9 (loss=2.301 | global_steps 3896 | lr 0.00028 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.43it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.60it/s]
2024-01-26 13:40:05 - INFO - root - epoch 8 | global_steps 3897 | eval loss 3.588
2024-01-26 13:40:05 - INFO - root - early stop left: 3
epoch 10 (loss=2.082 | global_steps 4329 | lr 0.00026 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.44it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.64it/s]
2024-01-26 13:43:23 - INFO - root - epoch 9 | global_steps 4330 | eval loss 3.645
2024-01-26 13:43:23 - INFO - root - early stop left: 2
epoch 11 (loss=1.938 | global_steps 4762 | lr 0.00023 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.43it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.53it/s]
2024-01-26 13:46:41 - INFO - root - epoch 10 | global_steps 4763 | eval loss 3.724
2024-01-26 13:46:41 - INFO - root - early stop left: 1
epoch 12 (loss=1.744 | global_steps 5195 | lr 0.00021 ): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 866/866 [03:15<00:00, 4.43it/s]
iter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:02<00:00, 19.36it/s]
2024-01-26 13:49:59 - INFO - root - epoch 11 | global_steps 5196 | eval loss 3.803
Stop from early stopping.

A sample generated by the trained model:

萧炎经过不懈地修炼,终于达到了斗帝级别,而其他的实力,也仅仅只是一星斗皇的层次。“古元,你倒是想要打败我,不过,可惜”魂天帝淡淡一笑,目光望着面前一脸笑容的魂天帝,声音平淡的道。闻言,魂天帝脸庞顿时抽搐,他知道,在这个话题上说着话时,他也明白,现在的萧炎,已不再是当年的废物,如今,若是再让他顺利晋入那帝品雏丹,那么,便是得葬身在了萧炎手中,这个小子,真的很有心慈悲,但他却是忘记了,现在的他,已经不再具备着那种信心,一步的突破,即便是放眼整个中州,都能算做是太大的倚仗,当然,最重要的,还是那所谓的帝品丹药,在他眼中,有着莫名的创始人,然而,就在魂天帝此话刚刚脱口,那笼罩天地的古老大门,突然一颤,旋即一道璀璨的金色光柱,从天际倾洒而下,最后化为一道金光,射进大地之中。见到这一幕,雷赢与炎烬面色也是微微一变,这大阵,实在是太大了点吧“嗡嗡”金光芒一闪,遥遥天际,大手,狠狠的拍在那里啪啦的巨门之上,可怕的力道,直接是将空间震得塌陷,甚至连空间都是出现了裂开一道道漆黑裂缝,裂缝都是被撕裂而开,那般模样,就仿佛是被摧毁了大片的通道一般。突如其来的变故,让得联军方面相觑了一眼,皆是有些动容,他们没想到,这家伙居然会如此的狼狈,而且,这种时候,当真是有些出乎了他的意料。古元面色阴沉的点了点头,也不再多说废话,身形一动,身形再度掠出,在其身后,薰儿也是紧跟而上。萧炎一掌将古玉拦住,然后目光警惕的望着天空上那密密麻麻的光幕,手掌猛的一握,只见得那光印之中,居然是凝聚成了一柄足有千丈庞大的黑色巨手。随着这诡异黑芒的成形,虚无吞炎眼中也是掠过一抹凝重之色,双掌一曲,一柄锋利的长剑便是自其指尖暴射而出,刺向古元与烛坤硬轰而去
Because the context size was increased to 1024, the batch size had to be halved, so training takes a bit longer.
However, GPT-2 reached its best evaluation loss after only 7 epochs, which makes hyperparameter tuning more convenient.
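The `EarlyStopper` imported from `utils` is not listed in this article either. A minimal sketch consistent with how it is used above (`step(eval_loss)` returns True once the loss has failed to improve for `patience` consecutive evaluations, counting down like the "early stop left" messages in the log) could look like this; the actual helper may differ:

```python
class EarlyStopper:
    """Hypothetical sketch of the early-stopping helper used in the training loop above."""

    def __init__(self, patience: int = 5, mode: str = "min") -> None:
        self.patience = patience
        self.mode = mode
        self.best = None
        self.counter = 0

    def step(self, metric: float) -> bool:
        improved = (
            self.best is None
            or (self.mode == "min" and metric < self.best)
            or (self.mode == "max" and metric > self.best)
        )
        if improved:
            self.best = metric
            self.counter = 0
            return False

        self.counter += 1
        remaining = self.patience - self.counter
        if remaining > 0:
            # the project's helper logs this via its logger, e.g. "early stop left: 4"
            print(f"early stop left: {remaining}")
        return self.counter >= self.patience
```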
Inference
(You need to clone the code repository first; see sample.py.)
```python
from transformers import AutoTokenizer
from modeling_gpt2 import GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("simple-gpt2-doupo")
model = GPT2LMHeadModel.from_pretrained(
    "simple-gpt2-doupo", pad_token_id=tokenizer.unk_token_id
)

prefix = "萧炎经过不懈地修炼,终于达到了斗帝级别,"

input_ids = tokenizer.encode(prefix, return_tensors="pt", add_special_tokens=False)

beam_output = model.generate(
    input_ids,
    max_length=512,
    num_beams=3,
    no_repeat_ngram_size=2,
    early_stopping=True,
    do_sample=True,
    repetition_penalty=1.25,
)

print("Output:\n" + 100 * "-")
print(tokenizer.decode(beam_output[0], skip_special_tokens=True).replace(" ", ""))
```
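For comparison, a pure sampling setup (no beam search) can be swapped in; `generate` is inherited from `PreTrainedModel`, and the parameter values below are illustrative rather than tuned:

```python
sample_output = model.generate(
    input_ids,
    max_length=512,
    do_sample=True,           # sample from the distribution instead of beam search
    top_k=50,                 # keep only the 50 most likely next tokens
    top_p=0.95,               # nucleus sampling: smallest set covering 95% probability
    temperature=0.9,          # soften/sharpen the distribution
    repetition_penalty=1.25,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True).replace(" ", ""))
```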
Code Repository
https://github.com/nlp-greyfoss/nlp-in-action-public/tree/master/transformers/gpt2
References
- [Paper Notes] GPT-2
- Transformers source code