Huggingface Transformers库学习笔记(二):使用Transformers(上)(Using Transformers Part 1)

2024-06-07 23:58


前言

本部分是Transformer库基础部分的上半部分,主要包括任务汇总、模型汇总和数据预处理三方面内容。由于许多模型我也不太了解,所以多为机器翻译得到,错误在所难免,内容仅供参考。

Huggingface Transformers库学习笔记(二):使用Transformers(Using Transformers Part 1)

  • 前言
  • 使用Transformers(Using Transformers)
    • 任务汇总(Summary of the tasks)
      • 序列分类(Sequence Classification)
      • 提取式问答(Extractive Question Answering)
      • 语言模型(Language Modeling)
        • 遮罩语言模型(Masked Language Modeling)
        • 因果语言模型(Causal Language Modeling)
        • 文本生成(Text Generation)
      • 命名实体识别(Named Entity Recognition)
      • 文本摘要(Summarization)
      • 文本翻译(Translation)
    • 模型汇总(Summary of the models)
      • 自回归模型(Autoregressive models)
        • 原始GPT(Original GPT)
        • GPT-2
        • CTRL
        • Transformer-XL
        • Reformer
        • XLNet
      • 自编码模型(Autoencoding models)
        • BERT
        • ALBERT
        • RoBERTa
        • DistilBERT
        • ConvBERT
        • XLM
        • XLM-RoBERTa
        • FlauBERT
        • ELECTRA
        • Funnel Transformer
        • Longformer
      • 序列到序列模型(Sequence-to-sequence models)
        • BART
        • Pegasus
        • MarianMT
        • T5
        • MT5
        • MBart
        • ProphetNet
        • XLM-ProphetNet
      • 多模态模型(Multimodal models)
        • MMBT
      • 基于检索的模型(Retrieval-based models)
        • DPR
        • RAG
      • 更多的技术内容(More technical aspects)
        • Full vs sparse attention
          • LSH attention
          • Local attention
        • Other tricks
    • 数据预处理(Preprocessing data)
      • 基本使用(Base use)
      • 对句子对进行处理(Preprocessing pairs of sentences)
      • 关于填充和截断(Everything you always wanted to know about padding and truncation)
        • No truncation
        • truncation to max model input length
        • truncation to specific length
      • Pre-tokenized输入(Pre-tokenized inputs)

使用Transformers(Using Transformers)

任务汇总(Summary of the tasks)

该部分介绍了使用该库时最常见的用例。可用的模型允许许多不同的配置,并且在各个实际用例中具有很大的通用性。

这些例子利用了auto-models,这些类将根据给定的checkpoint实例化一个模型,自动地选择正确的模型架构。

为了让模型在任务上良好地执行,它必须从与该任务对应的checkpoint加载。这些checkpoint通常针对大量数据进行预先训练,并针对特定任务进行微调。
这意味着以下内容

  • 并不是所有的模型都对所有的任务进行了微调。如果想对特定任务的模型进行微调,可以利用示例目录中的run_$TASK.py脚本之一。
  • 微调模型是在特定数据集上进行微调的。这个数据集可能与我们要做的用例和域重叠,也可能不重叠。如前所述,可以利用示例脚本来微调模型,或者可以创建自己的训练脚本。

为了对任务进行推理,这个库提供了几种机制:

  • Pipelines: 非常容易使用的抽象,只需要两行代码。
  • 直接使用模型:抽象较少,但通过直接访问分词器(PyTorch/TensorFlow)和充分的推理能力,更灵活和强大。

下面的具体应用中展示了这两种方法。

序列分类(Sequence Classification)

序列分类是根据给定的类别数目对序列进行分类的任务。序列分类的一个例子是GLUE数据集,它完全基于该任务。如果想在GLUE序列分类任务上对模型进行微调,可以利用run_glue.py、run_tf_glue.py、run_tf_text_classification.py或run_xnli.py脚本。

下面是一个使用Pipeline进行情感分析的例子:识别一个序列是积极的还是消极的。它在sst2上利用了一个经过微调的模型,这是一个GLUE任务。

这将在分数旁边返回一个标签(正的或负的),如下所示

from transformers import pipeline
nlp = pipeline("sentiment-analysis")
result = nlp("I hate you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
result = nlp("I love you")[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")

输出为

label: NEGATIVE, with score: 0.9991
label: POSITIVE, with score: 0.9999

下面是一个使用模型进行序列分类的例子,以确定两个序列是否相互转述(paraphrase)。
过程如下:

  1. 从checkpoint名称实例化分词器和模型。该模型被识别为BERT模型,并将存储在checkpoint中的权值加载到该模型中。
  2. 从这两个句子构建一个序列,使用正确的特定于模型的分隔符、token类型id和注意掩码(encode()和__call__()负责这一点)。
  3. 将这个序列传递到模型中,以便将其分类为两个可用类中的一个:0(不是释义)和1(是释义)。
  4. 计算结果的softmax以得到所有类的概率。
  5. 打印结果。

相关代码如下:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")

classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")

paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits

paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]

# Should be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
print("---"*24)

# Should not be paraphrase
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")

得到输出如下

not paraphrase: 10%
paraphrase: 90%
------------------------------------------------------------------------
not paraphrase: 94%
paraphrase: 6%

提取式问答(Extractive Question Answering)

提取式问答是指从给定的文本中提取一个答案的任务。QA数据集的一个例子是SQuAD数据集,它完全基于该任务。如果想在SQuAD任务上对模型进行微调,可以使用run_qa.py和run_tf_squad.py脚本。

下面是一个使用pipeline进行问题回答的示例:从给定问题的文本中提取答案。它利用了一个在SQuAD上微调过的模型。

from transformers import pipeline
nlp = pipeline("question-answering")context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
"""result = nlp(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
print("---"*24)
result = nlp(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")

输出如下

Answer: 'the task of extracting an answer from a text given a question', score: 0.6226, start: 34, end: 95
------------------------------------------------------------------------
Answer: 'SQuAD dataset', score: 0.5053, start: 147, end: 160

下面是一个使用模型和分词器回答问题的示例。流程如下:

  1. 从checkpoint名称实例化一个分词器和模型。该模型被识别为BERT模型,并将存储在checkpoint中的权值加载到该模型中。
  2. 定义一篇文章和一些问题。
  3. 迭代问题并从文本和当前问题构建一个序列,使用正确的特定于模型的分隔符、token类型id和attention mask。
  4. 将此序列传递给模型。这将在整个序列token(问题和文本)中输出开始位置和结束位置的分数范围。
  5. 计算结果的softmax以获得token上的概率。
  6. 从标识的start和stop值中获取token,将这些token转换为字符串。
  7. 打印结果。

相关代码如下:


from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""

questions = [
    "How many pretrained models are available in 🤗 Transformers?",
    "What does 🤗 Transformers provide?",
    "🤗 Transformers provides interoperability between which frameworks?",
]

for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    outputs = model(**inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    answer_start = torch.argmax(answer_start_scores)  # Get the most likely beginning of answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1  # Get the most likely end of answer with the argmax of the score
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")

得到输出如下:

Question: How many pretrained models are available in 🤗 Transformers?
Answer: over 32 +
Question: What does 🤗 Transformers provide?
Answer: general - purpose architectures
Question: 🤗 Transformers provides interoperability between which frameworks?
Answer: tensorflow 2. 0 and pytorch

语言模型(Language Modeling)

语言模型是将模型拟合到语料库中的任务,可以是特定领域的。
所有流行的基于transformer的模型都是使用某种语言模型的变体进行训练的,例如BERT使用掩码语言建模,GPT-2使用因果语言建模。

遮罩语言模型(Masked Language Modeling)

遮罩语言建模是用mask token遮住序列中的某些token,并让模型用合适的token填充这些被遮住的位置。这允许模型同时利用右侧上下文(掩码右边的token)和左侧上下文(掩码左边的token)。这样的训练为需要双向上下文的下游任务(如SQuAD问答,参见Lewis, Liu, Goyal等人论文的第4.2部分)打下了坚实的基础。

下面是一个使用pipeline从序列中替换掩码的示例。

from transformers import pipeline
from pprint import pprint

nlp = pipeline("fill-mask")
pprint(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))

得到输出如下:

[{'score': 0.17927521467208862,
  'sequence': 'HuggingFace is creating a tool that the community uses to solve NLP tasks.',
  'token': 3944,
  'token_str': ' tool'},
 {'score': 0.1134946271777153,
  'sequence': 'HuggingFace is creating a framework that the community uses to solve NLP tasks.',
  'token': 7208,
  'token_str': ' framework'},
 {'score': 0.05243523046374321,
  'sequence': 'HuggingFace is creating a library that the community uses to solve NLP tasks.',
  'token': 5560,
  'token_str': ' library'},
 {'score': 0.034935325384140015,
  'sequence': 'HuggingFace is creating a database that the community uses to solve NLP tasks.',
  'token': 8503,
  'token_str': ' database'},
 {'score': 0.028602493926882744,
  'sequence': 'HuggingFace is creating a prototype that the community uses to solve NLP tasks.',
  'token': 17715,
  'token_str': ' prototype'}]

下面是一个使用模型和分词器进行掩码语言建模的示例。流程如下:

  1. 从checkpoint名称实例化一个分词器和模型。该模型被识别为DistilBERT,并使用存储在checkpoint中的权重加载该模型。
  2. 定义一个带有mask token的序列,放置 tokenizer.mask_token遮盖住一个单词。
  3. 将该序列编码到一个id列表中,并找到mask token在该列表中的位置。
  4. 在mask token的索引处检索预测:这个张量与词汇表的大小相同,值是赋给每个token的分数。模型会给它认为在该上下文中可能出现的token更高的分数。
  5. 使用PyTorch topk或TensorFlow的top_k方法检索前5个token。
  6. 用token替换mask token并打印结果

相关代码如下:

from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")

sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."
inputs = tokenizer.encode(sequence, return_tensors="pt")
mask_token_index = torch.where(inputs == tokenizer.mask_token_id)[1]

token_logits = model(inputs).logits
mask_token_logits = token_logits[0, mask_token_index, :]
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()

for token in top_5_tokens:
    print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))

输出结果为:

Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint.

输出打印了5个序列,其中包含模型预测的前5个token。

因果语言模型(Causal Language Modeling)

因果语言模型是在一系列token之后预测token的任务。在这种情况下,模型只处理左侧上下文(掩码左侧的token)。
这样的训练对于生成任务特别有趣。如果想在因果语言建模任务上对模型进行微调,可以利用run_clm.py脚本。

通常,通过对模型根据输入序列产生的最后一个隐藏状态的logits进行采样来预测下一个token。

下面是一个使用分词器和模型的示例,并利用top_k_top_p_filtering()方法对输入序列之后的下一个token进行采样。

from transformers import AutoModelWithLMHead, AutoTokenizer, top_k_top_p_filtering
import torch
from torch.nn import functional as F

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("gpt2")

sequence = f"Hugging Face is based in DUMBO, New York City, and "
input_ids = tokenizer.encode(sequence, return_tensors="pt")

# get logits of last hidden state
next_token_logits = model(input_ids).logits[:, -1, :]

# filter
filtered_next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)

# sample
probs = F.softmax(filtered_next_token_logits, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
generated = torch.cat([input_ids, next_token], dim=-1)

resulting_string = tokenizer.decode(generated.tolist()[0])
print(resulting_string)

输出结果为

Hugging Face is based in DUMBO, New York City, and  

这将输出原始序列加上采样得到的下一个token,这个token(希望)与原始序列是连贯的。需要注意的是,打印的是整个解码后的字符串;如果采样到的是空格之类的不可见token,打印结果看起来就和原句一样,上面的输出就是这种情况。

在下一节中,我们将展示如何在generate()中利用此功能生成多个用户定义长度的token。

文本生成(Text Generation)

在文本生成(也就是开放式文本生成)中,目标是创建文本的连贯部分,作为给定上下文的延续。下面的示例展示了如何在pipeline中使用GPT-2来生成文本。
默认情况下,当在pipeline中使用时,所有的模型都应用Top-K采样,就像在它们各自的配置中配置的那样(参见gpt-2配置)。

from transformers import pipeline

text_generator = pipeline("text-generation")
print(text_generator("As far as I am concerned, I will", max_length=50, do_sample=False))

输出结果为:

[{'generated_text': 'As far as I am concerned, I will be the first to admit that I am not a fan of the idea of a "free market." I think that the idea of a free market is a bit of a stretch. I think that the idea'}]

在这里,模型从上下文“As far as I am concerned, I will”中生成一个最大长度为50个标记的随机文本。PreTrainedModel.generate()的默认参数可以在pipeline中直接覆盖,如上面所示的max_length参数。

下面是一个使用XLNet及其分词器生成文本的示例。


from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained("xlnet-base-cased")
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")

# Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""
prompt = "Today the weather is really nice and I am planning on "inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]print(generated)

输出为:

Today the weather is really nice and I am planning on anning on going out to see a new band in the next few days. I will see some young guys there when I get back. This will likely be the last time that I have been to the Twilight Zone for a long time. I have been wanting to go to the Twilight Zone for a long time but have not been able to go there. Maybe I will have

文本生成目前可以在PyTorch中使用GPT-2, OpenAi-GPT, CTRL, XLNet, Transfo-XL和Reformer,以及Tensorflow中的大多数模型。从上面的例子中可以看出,XLNet和Transfo-XL经常需要进行填充才能很好地工作。GPT-2通常是开放式文本生成的一个很好的选择,因为它是在数百万个带有因果语言建模目标的网页上进行训练的。

命名实体识别(Named Entity Recognition)

命名实体识别(NER)是根据类对token进行分类的任务,例如,将token标识为人、组织或位置。命名实体识别数据集的一个例子是CoNLL-2003数据集,它完全基于该任务。如果想在NER任务上对模型进行微调,可以利用run_ner.py脚本。

下面是一个使用pipeline进行命名实体识别的示例,具体来说,尝试将token标识为属于9个类中的一个:

  • O, 在命名实体之外,Outside of a named entity

  • B-MISC, 杂项实体开始,Beginning of a miscellaneous entity right after another miscellaneous entity

  • I-MISC, 杂项实体,Miscellaneous entity

  • B-PER, 人名实体开始,Beginning of a person’s name right after another person’s name

  • I-PER, 人名实体,Person’s name

  • B-ORG, 组织实体开始,Beginning of an organisation right after another organisation

  • I-ORG, 组织实体,Organisation

  • B-LOC, 位置实体开始,Beginning of a location right after another location

  • I-LOC, 位置实体,Location

它利用了在CoNLL-2003上经过微调的模型,由dbmdz的@stefan-it进行微调。这将输出一个单词列表,这些单词被识别为上面定义的9个类中某一类的实体。

from transformers import pipeline
from pprint import pprint

nlp = pipeline("ner")
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge which is visible from the window."

pprint(nlp(sequence))

输出为:

{'word': 'Hu', 'score': 0.999578595161438, 'entity': 'I-ORG', 'index': 1, 'start': 0, 'end': 2}
{'word': '##gging', 'score': 0.9909763932228088, 'entity': 'I-ORG', 'index': 2, 'start': 2, 'end': 7}
{'word': 'Face', 'score': 0.9982224702835083, 'entity': 'I-ORG', 'index': 3, 'start': 8, 'end': 12}
{'word': 'Inc', 'score': 0.9994880557060242, 'entity': 'I-ORG', 'index': 4, 'start': 13, 'end': 16}
{'word': 'New', 'score': 0.9994345307350159, 'entity': 'I-LOC', 'index': 11, 'start': 40, 'end': 43}
{'word': 'York', 'score': 0.9993196129798889, 'entity': 'I-LOC', 'index': 12, 'start': 44, 'end': 48}
{'word': 'City', 'score': 0.9993793964385986, 'entity': 'I-LOC', 'index': 13, 'start': 49, 'end': 53}
{'word': 'D', 'score': 0.9862582683563232, 'entity': 'I-LOC', 'index': 19, 'start': 79, 'end': 80}
{'word': '##UM', 'score': 0.9514269828796387, 'entity': 'I-LOC', 'index': 20, 'start': 80, 'end': 82}
{'word': '##BO', 'score': 0.933659017086029, 'entity': 'I-LOC', 'index': 21, 'start': 82, 'end': 84}
{'word': 'Manhattan', 'score': 0.9761653542518616, 'entity': 'I-LOC', 'index': 28, 'start': 114, 'end': 123}
{'word': 'Bridge', 'score': 0.9914628863334656, 'entity': 'I-LOC', 'index': 29, 'start': 124, 'end': 130}

请注意,Hugging Face Inc.被确定为一个组织,New York City、DUMBO和Manhattan Bridge被确定为地点。

下面是一个使用模型和分词器进行命名实体识别的示例。过程如下:

  1. 从checkpoint名称实例化分词器和模型。该模型被识别为BERT模型,并将存储在checkpoint中的权值加载到该模型中。
  2. 定义用于训练模型的标签列表。
  3. 定义一个具有已知实体的序列,例如将“Hugging Face”作为组织,将“New York City”作为地点。
  4. 将单词拆分为token,以便它们可以映射到预测。我们使用了一个小hack,首先,完全编码和解码序列,这样我们就得到了一个包含特殊token的字符串。
  5. 将该序列编码到id中(自动添加特殊标记)。
  6. 通过将输入传递给模型并获得第一个输出来检索预测。这将导致每个token分布在9个可能的类上。我们使用argmax来检索每个token最可能出现的类。
  7. 将每个token与其预测打包并打印。

代码如下:

from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC"    # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)

print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])

输出为:

[('[CLS]', 'O'), ('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG'), ('.', 'O'), ('is', 'O'), ('a', 'O'), ('company', 'O'), ('based', 'O'), ('in', 'O'), ('New', 'I-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC'), ('.', 'O'), ('Its', 'O'), ('headquarters', 'O'), ('are', 'O'), ('in', 'O'), ('D', 'I-LOC'), ('##UM', 'I-LOC'), ('##BO', 'I-LOC'), (',', 'O'), ('therefore', 'O'), ('very', 'O'), ('##c', 'O'), ('##lose', 'O'), ('to', 'O'), ('the', 'O'), ('Manhattan', 'I-LOC'), ('Bridge', 'I-LOC'), ('.', 'O'), ('[SEP]', 'O')]

这将输出映射到相应预测的每个token的列表。
与pipeline方法不同的是,这里每个token都有一个预测,因为我们没有删除第0个类,这意味着在该token上没有找到特定的实体。

文本摘要(Summarization)

摘要是将一份文件或一篇文章总结成较短文本的任务。如果想对摘要任务的模型进行微调,可以利用run_summarization.py脚本。

摘要数据集的一个例子是CNN /每日邮报数据集,它由长新闻文章组成,是为摘要任务而创建的。

下面是一个使用pipeline进行汇总的示例。它利用了在CNN /每日邮报数据集上进行微调的Bart模型。

因为文本摘要pipeline依赖于PreTrainedModel.generate()方法,所以我们可以直接在pipeline中覆盖PreTrainedModel.generate()的默认参数max_length和min_length,如下所示。

from transformers import pipeline

summarizer = pipeline("summarization")
ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison.  Her next court appearance is scheduled for May 18.
"""print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))

输出结果为:

[{'summary_text': ' Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002 . At one time, she was married to eight men at once, prosecutors say .'}]

下面是一个使用模型和分词器进行汇总的示例。流程如下:

  1. 从检查点名称实例化一个分词器和模型。摘要通常使用一个编码器-解码器模型来完成,例如Bart或T5。
  2. 定义应该总结的文章。
  3. 添加T5特有的前缀“summarize: “。
  4. 使用PreTrainedModel.generate()方法生成摘要。

在本例中,我们使用谷歌的T5模型。即使它只在多任务混合数据集(包括CNN /每日邮报)上进行了预先训练,它也会产生非常好的结果。

from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

# T5 uses a max_length of 512 so we cut the article to 512 tokens.
inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512)
outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)

print(tokenizer.decode(outputs.tolist()[0]))

输出为:

<pad> prosecutors say the marriages were part of an immigration scam. if convicted, barrientos faces two criminal counts of "offering a false instrument for filing in the first degree" she has been married 10 times, nine of them between 1999 and 2002.</s>

文本翻译(Translation)

文本翻译是把文本从一种语言翻译成另一种语言的任务。如果想对翻译任务中的模型进行微调,可以利用run_translation.py脚本。

翻译数据集的一个例子是WMT英语到德语数据集,它以英语句子作为输入数据,以相应的德语句子作为目标数据。

下面是一个使用pipeline进行翻译的示例。它利用了仅在多任务混合数据集(包括WMT)上预先训练的T5模型,然而,产生了令人印象深刻的翻译结果。

from transformers import pipeline

translator = pipeline("translation_en_to_de")
print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))

输出为:

[{'translation_text': 'Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.'}]

因为文本翻译pipeline依赖于PreTrainedModel.generate()方法,所以我们可以直接在管道中覆盖PreTrainedModel.generate()的默认参数,如上文max_length所示。

下面是一个使用模型和分词器进行翻译的示例。过程如下:

  1. 从checkpoint名称实例化分词器和模型。翻译通常使用编码器-解码器模型来完成,例如Bart或T5。
  2. 定义需要翻译的文本。
  3. 添加特定于T5的前缀“translate English to German:”
  4. 使用PreTrainedModel.generate()方法来执行翻译。

代码如下:

from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

inputs = tokenizer.encode("translate English to German: Hugging Face is a technology company based in New York and Paris", return_tensors="pt")
outputs = model.generate(inputs, max_length=40, num_beams=4, early_stopping=True)

print(tokenizer.decode(outputs.tolist()[0]))

输出为:

<pad> Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.</s>

模型汇总(Summary of the models)

下面总结了本库中的所有模型。这里假设读者熟悉原始的transformer模型。我们将重点放在模型之间的高层差异上。库中的每一个模型都可以分为以下几类:

  • 自回归模型
  • 自编码模型
  • 序列对序列模型
  • 多模态模型
  • 基于检索的模型

自回归模型在经典的语言建模任务上进行预训练:在读过之前所有token之后,预测下一个token。它们对应于原始Transformer模型的解码器,并在完整句子上使用注意力掩码,使注意力头只能看到文本中前面的内容,而看不到后面的内容。尽管这些模型可以微调并在许多任务上取得良好的结果,但其最自然的应用是文本生成。这类模型的典型例子是GPT。

自编码模型的预训练是通过以某种方式破坏输入标记并试图重构原始句子。它们对应于原始Transformer模型的编码器,因为它们可以在没有任何掩码的情况下获得全部输入。这些模型通常是对整个句子的双向表示。它们可以进行微调,并在许多任务(如文本生成)上取得良好的结果,但它们最自然的应用是句子分类或token分类。这类模型的一个典型例子是BERT。

请注意,自回归模型和自编码模型之间的唯一区别在于模型的预训练方式。因此,同样的体系结构可以用于自回归模型和自编码模型。当给定的模型被用于这两种类型的预训练时,我们将它放在与第一次介绍它的文章对应的类别中。

序列到序列模型同时使用原始Transformer的编码器和解码器,用于翻译任务,或将其他任务转换为序列到序列问题。它们可以针对许多任务进行微调,但最自然的应用是翻译、摘要和问答。原始的Transformer模型就是这类模型的一个例子(仅用于翻译),T5则是一个可以在其他任务上微调的例子。

多模态模型将文本输入与其他类型(如图像)混合在一起,并且更特定于特定的任务。

自回归模型(Autoregressive models)

如前所述,这些模型依赖于原始Transformer的解码器部分,并使用注意力掩码,使模型在每个位置上只能看到该位置之前的token。

原始GPT(Original GPT)

Improving Language Understanding by Generative Pre-Training, Alec Radford et al.

第一个基于Transformer体系结构的自回归模型,在图书语料库数据集上进行预训练。该库提供了用于语言建模和多任务语言模型/多选择分类的模型版本。

GPT-2

Language Models are Unsupervised Multitask Learners, Alec Radford et al.

GPT的一个更大更好的版本,在WebText(Reddit上外链karma不少于3的网页)上进行预训练。
该库提供了用于语言建模和多任务语言建模/多项选择分类的模型版本。

CTRL

CTRL: A Conditional Transformer Language Model for Controllable Generation, Nitish Shirish Keskar et al.

与GPT模型相同,但增加了控制代码的思想。文本由一个提示(可以是空的)和一个(或几个)控制代码生成,这些控制代码然后用于影响文本生成:以维基百科文章、一本书或电影评论的风格生成。这个库提供了一个仅用于语言建模的模型版本。

Transformer-XL

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context, Zihang Dai et al.

与常规的GPT模型相同,但为两个连续段引入了递归机制(类似于具有两个连续输入的常规RNN)。在这个上下文中,一个段是一系列连续的token(例如512个),它们可以跨越多个文档,并且这些段按顺序输入模型。

基本上,前一部分的隐藏状态与当前输入相连接,以计算注意力分数。这使得模型可以同时关注前一部分和当前部分中的信息。通过叠加多个注意层,感受野可以增加到多个先前的片段。

这也将位置嵌入改为相对位置嵌入(因为常规的位置嵌入在给定位置上,对当前输入和当前隐藏状态会给出相同的结果),并且需要对注意力分数的计算方式做一些调整。

这个库提供了一个仅用于语言建模的模型版本。

Reformer

Reformer: The Efficient Transformer, Nikita Kitaev et al .

一个具有许多技巧的自回归transformer模型,以减少内存占用和计算时间。这些技巧包括:

  • 使用轴向位置编码(参见下文了解更多细节)。这是一种通过把巨大的位置编码矩阵分解成更小的矩阵,来避免在序列长度非常大时产生过大矩阵的机制。
  • 用LSH(局部敏感哈希)注意力代替传统注意力(参见下文了解更多细节)。这是一种避免在注意力层中计算完整的query-key乘积的技术。
  • 避免存储每一层的中间结果:或者使用可逆Transformer层,在反向传播时重新得到中间结果(用下一层的输入减去残差即可还原),或者对给定层内的中间结果进行重算(比直接存储效率低,但节省内存)。
  • 按块而不是整批计算前馈操作。

利用这些技巧,该模型可以接收比传统Transformer自回归模型长得多的句子作为输入。

注意:这个模型可以很好地用于自动编码设置,但是对于这样的预训练还没有checkpoint。

这个库提供了一个仅用于语言建模的模型版本。

XLNet

XLNet: Generalized Autoregressive Pretraining for Language Understanding, Zhilin Yang et al.

XLNet不是传统的自回归模型,而是在自回归的基础上使用了一种新的训练策略。它先对句子中的token做一个排列,然后允许模型用该排列中的前n个token来预测第n+1个token。由于这一切都是通过注意力掩码完成的,句子实际上仍按正确顺序输入模型;XLNet并不是真的遮住前n个token,而是使用一个掩码,按照1,…,序列长度的某个给定排列来隐藏之前的token。

自编码模型(Autoencoding models)

如前所述,这些模型依赖于原始Transformer的编码器部分,并且不使用掩码,因此模型可以查看注意头中的所有标记。
对于预训练,目标是原始的句子,输入是它们的损坏版本。

BERT

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin et al.

通过使用随机mask破坏输入,更准确地说,在预训练期间,给定百分比的token(通常为15%)被mask

  • 80%的概率被mask token替换
  • 10%的概率被随机一个词替换
  • 10%的概率保持不动

注意:这里的80%、10%是在15%的基础上划分的。即先有15%决定被mask,然后被mask的情况下有80%被mask token替换。
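下面用一小段PyTorch代码示意这个mask策略(仅为帮助理解的示意写法,随机数的用法和细节是假设的简化,未处理特殊token,并非transformers库中的实际实现):

import torch

def bert_mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    # 示意:按BERT的15% / 80% / 10% / 10%规则构造MLM训练样本
    labels = input_ids.clone()
    masked = torch.rand(input_ids.shape) < mlm_prob          # 先以15%的概率选出要被mask的位置
    labels[~masked] = -100                                   # 未被选中的位置不计算损失
    replace = masked & (torch.rand(input_ids.shape) < 0.8)   # 被选中位置中的80%替换为[MASK]
    input_ids[replace] = mask_token_id
    # 剩下的20%中再取一半(即整体的10%)替换为随机词,其余10%保持不变
    random_word = masked & ~replace & (torch.rand(input_ids.shape) < 0.5)
    input_ids[random_word] = torch.randint(vocab_size, input_ids.shape)[random_word]
    return input_ids, labels

ids = torch.randint(1000, (2, 8))                            # 随机占位的输入id,仅作演示
masked_ids, labels = bert_mask_tokens(ids, mask_token_id=0, vocab_size=1000)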

模型必须预测原始句子,但有第二个目标:输入是两个句子A和B(中间有一个分隔符)。在语料库中,这些句子有50%的概率是连续的,剩下的50%的句子是不相关的。该模型必须预测句子是否连续。

该库为语言建模(传统的或屏蔽的)、下一个句子预测、token分类、句子分类、多选择分类和问题回答都提供了一个版本的模型。

ALBERT

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Zhenzhong Lan et al.

和BERT一样,只是做了些调整:

  • 嵌入大小E与隐藏大小H是不同的,因为嵌入是上下文独立的(一个嵌入向量代表一个标记),而隐藏状态是上下文相关的(一个隐藏状态代表一个标记序列),所以H>>E更符合逻辑。此外,嵌入矩阵很大,因为它是 V x E (V是词汇表大小)。如果E < H,它的参数更少
  • 层被分割成共享参数的组(以节省内存)。
  • 下一个句子预测被一个句子排序预测取代:在输入中,我们有两个句子A和B(连续的),我们要么输入A,然后输入B,要么输入B,然后输入A。模型必须预测它们是否被交换了。

该库为遮罩语言模型、token分类、句子分类、多选择分类和问题回答都提供了一个版本的模型。
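关于上面第一点提到的嵌入分解,可以用一个粗略的参数量估算来体会(V、H、E均为示意取值,并非某个具体checkpoint的真实配置):

V, H, E = 30000, 768, 128          # 假设:词表大小、隐藏维度、嵌入维度
bert_style = V * H                 # 不分解:直接学习 V x H 的嵌入矩阵
albert_style = V * E + E * H       # 分解:V x E 的嵌入矩阵再接一个 E x H 的投影
print(bert_style, albert_style)    # 约 23.0M 对比 约 3.9M 个参数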

RoBERTa

RoBERTa: A Robustly Optimized BERT Pretraining Approach, Yinhan Liu et al.

和BERT一样,但有更好的预训练技巧:

  • 动态mask:每个epoch中mask是不同的,而BERT则是一样的
  • 没有NSP(下一个句子预测)loss,不是将两个句子放在一起,而是将一组连续的文本放在一起以达到512个标记(这样句子的顺序就可以跨多个文档)
  • 更大的batch
  • 使用以字节为基本单元的BPE,而不是以字符为单元(以便处理unicode字符)

该库为遮罩语言模型、token分类、句子分类、多选择分类和问题回答都提供了一个版本的模型。

DistilBERT

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, Victor Sanh et al.

和BERT一样,但更小。通过对预先训练的BERT模型的蒸馏来训练,这意味着训练它预测的概率与较大模型相同。实际的目标是:

  • 找到与teacher model相同的概率
  • 正确预测masked token(没有下一句预测的目标)
  • 在student model和teacher model的隐层间的cosine similarity

该库为遮罩语言模型、token分类、句子分类和问题回答都提供了一个版本的模型。
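下面给出上述三个目标组合在一起的一个极简示意(所有张量都是随机构造的占位数据,温度与权重为假设值,并非DistilBERT论文或transformers库中的实际实现):

import torch
import torch.nn.functional as F

batch, vocab, hidden = 4, 1000, 64          # 占位的batch大小、词表大小、隐藏维度
student_logits = torch.randn(batch, vocab)  # 学生模型在被mask位置上的输出(占位)
teacher_logits = torch.randn(batch, vocab)  # 教师模型在相同位置上的输出(占位)
mlm_labels = torch.randint(vocab, (batch,)) # 被mask位置的真实token id(占位)
hidden_s = torch.randn(batch, hidden)       # 学生隐藏状态(占位)
hidden_t = torch.randn(batch, hidden)       # 教师隐藏状态(占位)

T = 2.0                                     # 蒸馏温度(假设值)
loss_kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                   F.softmax(teacher_logits / T, dim=-1),
                   reduction="batchmean") * T * T                       # 目标1:拟合teacher的输出分布
loss_mlm = F.cross_entropy(student_logits, mlm_labels)                  # 目标2:正确预测被mask的token
loss_cos = 1 - F.cosine_similarity(hidden_s, hidden_t, dim=-1).mean()   # 目标3:隐层方向对齐
loss = loss_kd + loss_mlm + loss_cos        # 实际训练中各项通常会加权,这里省略权重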

ConvBERT

ConvBERT: Improving BERT with Span-based Dynamic Convolution, Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.

像BERT这样的预先训练过的语言模型及其变体最近在各种自然语言理解任务中取得了令人印象深刻的成绩。然而,BERT严重依赖全局的自我注意块,因此内存占用和计算成本很大。虽然所有的注意头都是从全局角度查询整个输入序列来生成注意图,但我们观察到一些注意头只需要学习局部依赖关系,这意味着计算冗余的存在。因此,我们提出了一种新的基于区间的动态卷积来代替这些自我注意头,直接建模局部依赖关系。新的卷积头与其他的自我注意头一起形成新的混合注意块,在全局和局部语境学习中都更有效。
我们给BERT配备了这种混合注意力设计,并构建了ConvBERT模型。实验表明,ConvBERT在各种下游任务中显著优于BERT及其变体,且训练成本更低、模型参数更少。值得注意的是,ConvBERT-base模型的GLUE得分达到86.4,比ELECTRA-base高0.7分,而训练成本不到其1/4。

该库为遮罩语言模型、token分类、句子分类和问题回答都提供了一个版本的模型。

XLM

Cross-lingual Language Model Pretraining, Guillaume Lample and Alexis Conneau

用几种语言训练的transformer模型。这个模型有三种不同的训练方式,库为所有这些类型提供了checkpoint。

  • 因果语言建模(CLM)是传统的自回归训练(所以这个模型也可以在前一节中介绍)。为每个训练样本选择一种语言,模型输入是一个包含256个标记的句子,它可以跨越使用其中一种语言的多个文档。

  • 遮罩语言模型(MLM)就像RoBERTa。为每个训练样本选择一种语言,模型输入是一个包含256个标记的句子,它可以跨越使用其中一种语言的多个文档,并对标记进行动态屏蔽。

  • 遮罩语言模型(MLM)与翻译语言建模(TLM)的结合。这包括用两种不同的语言连接一个句子,并使用随机屏蔽。为了预测其中一个被屏蔽的标记,模型可以同时使用语言1中的上下文和语言2给出的上下文。

checkpoint的名称中包含clm、mlm或mlm-tlm,用以表明预训练时使用的是哪种方法。
在位置嵌入的基础上,该模型还带有语言嵌入。
当使用MLM/CLM进行训练时,语言嵌入向模型指示所使用的语言;当使用MLM+TLM进行训练时,则分别指示每一部分所使用的语言。

该库为语言建模、token分类、句子分类和问题回答都提供了一个版本的模型。

XLM-RoBERTa

Unsupervised Cross-lingual Representation Learning at Scale, Alexis Conneau et al.

在XLM方法上使用RoBERTa的技巧,但不使用翻译语言建模目标。它只对来自一种语言的句子使用遮罩语言模型。然而,该模型训练了更多的语言(100种),并且没有使用语言嵌入,因此它能够自己检测输入语言。

该库为掩码语言建模、token分类、句子分类、多项选择分类和问题回答都提供了一个版本的模型。

FlauBERT

FlauBERT: Unsupervised Language Model Pre-training for French, Hang Le et al.

像RoBERTa一样,但没有句子排序预测(因此只在MLM目标上训练)。

该库提供了用于语言建模和句子分类的模型版本。

ELECTRA

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, Kevin Clark et al.

ELECTRA是一个用另一个(小的)遮罩语言模型预训练的transformer模型。输入被该语言模型损坏,该模型接受一个随机mask的输入文本,并输出一个文本,ELECTRA必须在该文本中预测哪个token是原始的,哪个token被替换了。
像GAN训练一样,小语言模型训练了几个步骤(但以原始文本为目标,而不是像传统GAN设置那样愚弄ELECTRA模型),然后ELECTRA模型训练了几个步骤。

该库为遮罩语言模型、token分类和句子分类提供了一个版本的模型。

Funnel Transformer

Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing, Zihang Dai et al.

Funnel Transformer是一个使用池化的Transformer模型,有点像ResNet模型:层被分组在块中,在每个块的开始(除了第一个块),隐藏状态在序列维度中被池化。这样,它们的长度就除以2,这就加快了下一个隐藏状态的计算速度。
所有预先训练的模型都有三个区块,这意味着最终隐藏状态的序列长度是原始序列长度的四分之一。

对于分类这样的任务,这不是问题;但是对于掩码语言建模或token分类这样的任务,我们需要一个与原始输入具有相同序列长度的隐藏状态。在这些情况下,最终的隐藏状态会被上采样到输入序列长度,并再经过两个额外的层。这就是为什么每个checkpoint有两个版本。
带-base后缀的版本只包含三个块,而不带该后缀的版本包含三个块和带有附加层的上采样头。

可用的预训练模型使用与ELECTRA相同的预训练目标。

该库为遮罩语言模型、token分类、句子分类、多选择分类和问题回答提供了一个版本的模型。

Longformer

Longformer: The Long-Document Transformer, Iz Beltagy et al.

为了加快速度而用稀疏矩阵代替注意力矩阵的Transformer模型。通常,局部上下文(例如,左右各两个token是什么?)就足以处理给定的token。一些预先选定的输入token仍然得到全局注意力,但注意力矩阵的参数要少得多,从而带来加速。有关更多信息,请参阅下文的局部注意力(Local attention)部分。

这和RoBERTa一样是预先训练好的。

注意:这个模型可以很好地用于自回归模型设置,但是对于这样的预训练还没有checkpoint。

该库为遮罩语言模型、token分类、句子分类、多选择分类和问题回答提供了一个版本的模型。

序列到序列模型(Sequence-to-sequence models)

如前所述,这些模型保留了原Transformer的编码器和解码器。

BART

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, Mike Lewis et al.

序列到序列模型,有一个编码器和一个解码器。编码器输入一个损坏版本的token,解码器输入原始token(但有一个掩码来隐藏未来的词,就像一个普通的transformer解码器)。以下转换的组合应用于编码器的预训练任务:

  • 随机mask掉一些token(如BERT)
  • 随机删除token
  • 将一段k个token的字段用一个mask token代替
  • 重新排列句子
  • 旋转文档,使其从特定token开始

这个库为条件生成和序列分类提供了这个模型的一个版本。

Pegasus

PEGASUS: Pre-training with Extracted Gap-sentences forAbstractive Summarization, Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.

Pegasus在两个自我监督的目标函数上进行了联合训练:遮罩语言模型(MLM)和一种新的特定于摘要的预训练目标,称为间隙句生成(GSG)。

  • MLM:编码器输入标记被一个mask token随机替换,并且必须由编码器预测(如BERT)
  • GSG:整个编码器输入中的句子被替换为第二种mask token并输入到解码器,但解码器带有因果掩码来隐藏未来的单词,就像常规的自回归Transformer解码器一样。

与BART不同的是,Pegasus的预训练任务有意地与摘要任务相似:重要的句子被遮住,模型要从剩余的句子中生成一个输出序列,类似于抽取式摘要。

这个库为条件生成提供了这个模型的一个版本,它应该用于摘要。

MarianMT

Marian: Fast Neural Machine Translation in C++, Marcin Junczys-Dowmunt et al.

翻译模型的框架,使用与BART相同的模型。

这个库为条件生成提供了这个模型的一个版本。

T5

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Colin Raffel et al.

使用传统的transformer模型(在每个层学习的位置嵌入中有微小的变化)。为了能够操作所有的自然语言处理任务,它通过使用特定的前缀:summarize:, question:, translate English To German:等等,将它们转换成文本到文本的问题。

预训练包括有监督训练和自监督训练。有监督训练在GLUE和SuperGLUE基准提供的下游任务上进行(如上所述,这些任务被转换为文本到文本的形式)。

自监督训练则使用被破坏的token:随机删除15%的token,并用哨兵token(sentinel token)替换它们(如果多个连续token被标记为删除,则整段被替换为单个哨兵token)。编码器的输入是被破坏的句子,解码器的输入是原始的句子,而训练目标则是由哨兵token分隔的被删除token。

例如,如果我们有句子“My dog is very cute .”,并决定删除token“dog”、“is”和“cute”,那么编码器输入变为“My <x> very <y> .”,目标则变为“<x> dog is <y> cute”这样由哨兵token引导的被删片段(其中<x>、<y>为哨兵token)。
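下面用一小段纯Python代码示意"用哨兵token替换连续被删片段"的构造过程(<extra_id_0>这类写法参照T5的特殊token命名,整体仅为简化示意,并非库内的数据处理实现):

def t5_span_corruption(tokens, drop_indices):
    # 将被删除的连续token片段替换为同一个哨兵token,并构造对应的目标序列
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(tokens):
        if i in drop_indices:
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            while i in drop_indices:              # 连续被删的token归入同一个哨兵
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")      # 末尾再加一个哨兵,标记目标结束
    return " ".join(inputs), " ".join(targets)

print(t5_span_corruption(["My", "dog", "is", "very", "cute", "."], {1, 2, 4}))
# 编码器输入: "My <extra_id_0> very <extra_id_1> ."
# 目标:      "<extra_id_0> dog is <extra_id_1> cute <extra_id_2>"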

这个库为条件生成提供了这个模型的一个版本。

MT5

mT5: A massively multilingual pre-trained text-to-text transformer, Linting Xue et al.

模型架构与T5相同。mT5的预训练目标包括T5的自监督训练,但不包括T5的有监督训练。mT5在101种语言上进行训练。

这个库为条件生成提供了这个模型的一个版本。

MBart

Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.

模型架构和预训练目标与BART相同,但MBart是在25种语言上训练的,用于有监督和无监督的机器翻译。
MBart是第一个通过对多语言完整文本去噪来预训练完整序列到序列模型的方法。

这个库为条件生成提供了这个模型的一个版本。

ProphetNet

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou.

ProphetNet引入了一种新的序列到序列预训练目标,称为未来n-gram预测(future n-gram prediction)。在未来n-gram预测中,模型在每个时间步都根据之前的上下文token同时预测接下来的n个token,而不是只预测下一个token。未来n-gram预测明确地鼓励模型为未来的token做规划,并防止对强局部相关性的过拟合。该模型架构基于原始Transformer,但在解码器中用主自注意力机制和自/n-stream(预测)自注意力机制取代了“标准”的自注意力机制。

这个库为条件生成提供了这个模型的预训练版本,为摘要提供了一个经过微调的版本。

XLM-ProphetNet

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou.

XLM-ProphetNet的模型体系结构和预训练目标与ProphetNet相同,但XLM-ProphetNet是在跨语言数据集XGLUE上进行预训练的。

该库分别为多语言条件生成提供了该模型的预训练版本,为标题生成和问题生成提供了微调版本。

多模态模型(Multimodal models)

在这个库中有一个多模态模型,它没有像其他模型那样经过自我监督的预先训练。

MMBT

Supervised Multimodal Bitransformers for Classifying Images and Text, Douwe Kiela et al.

一种用于多模态设置的Transformer模型,结合文本和图像进行预测。该Transformer模型的输入包括分词后文本的嵌入,以及一个在图像上预训练的ResNet(取池化层之后)的最终激活,后者再经过一个线性层(把ResNet末端的特征数映射到Transformer的隐藏维度)。

不同的输入被连接起来,并且在位置嵌入的基础上,添加一个分段嵌入,让模型知道输入向量的哪一部分对应于文本,哪一部分对应于图像。

预先训练的模型只适用于分类。

基于检索的模型(Retrieval-based models)

有些模型在(前)训练和推理过程中使用文档检索来回答开放领域的问题。

DPR

Dense Passage Retrieval for Open-Domain Question Answering, Vladimir Karpukhin et al.

密集段落检索(Dense Passage Retrieval, DPR)是一套用于最先进的开放领域问答研究的工具和模型。

DPR包括三种模型:

  • 问题编码器:将问题编码为向量
  • 上下文编码器:将上下文编码为向量
  • 读者:在检索到的上下文中提取问题的答案,并给出一个相关性得分(如果推断的跨度确实回答了问题,那么得分就高)。

DPR的pipeline(尚未实现)使用检索步骤查找给定某个问题的前k个上下文,然后用问题和检索到的文档调用阅读器以获得答案。

RAG

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela

检索增强生成(RAG)模型结合了预先训练的密集检索(DPR)和Seq2Seq模型的功能。
RAG模型检索文档,将它们传递给seq2seq模型,然后边缘化以生成输出。
retriever和seq2seq模块由预训练模型初始化,并共同进行微调,使检索和生成都能适应下游任务。

RAG-Token模型和RAG-Sequence模型可以进行生成。

更多的技术内容(More technical aspects)

Full vs sparse attention

大多数Transformer模型使用完全注意力(full attention),即注意力矩阵是方阵。当文本很长时,这会成为很大的计算瓶颈。Longformer和Reformer这两个模型试图提高效率,使用稀疏版本的注意力矩阵来加速训练。

LSH attention

Reformer使用LSH attention。在 $softmax(QK^T)$ 中,只有最大的那些元素才会给出有用的贡献。所以对于Q中的每一个查询q,我们可以只考虑K中与q接近的那些键k,并用一个哈希函数来判断q和k是否接近。注意力掩码被修改为遮住当前token(第一个位置除外),因为它会给出几乎相同的查询和键。由于哈希带有一定随机性,实践中会使用多个哈希函数(由n_rounds参数决定),然后对结果取平均。

Local attention

Longformer使用局部注意力:通常,局部上下文(例如,左右各两个token是什么?)就足以处理给定的token。此外,通过叠加具有小窗口的注意力层,最后几层的感受野将不止窗口内的token,从而能够构建整个句子的表示。

一些预先选定的输入token也会得到全局注意力:对这少数token来说,注意力矩阵可以访问所有token,并且这个过程是对称的:所有其他token除了能访问其局部窗口内的token外,也能访问这些特定的token。
论文中的图2d展示了相应注意力掩码的示例(原图此处从略)。

使用这种参数更少的注意力矩阵,可以让模型接受序列长度更大的输入。

Other tricks

Reformer使用轴向位置编码:在传统的Transformer模型中,位置编码矩阵E的大小为 $l \times d$,其中 $l$ 是序列长度,$d$ 是隐藏状态的维度。如果文本很长,这个矩阵会非常大,在GPU上占用过多空间。为了缓解这一点,轴向位置编码将大矩阵E分解为两个更小的矩阵E1和E2,维度分别为 $l_1 \times d_1$ 和 $l_2 \times d_2$,满足 $l_1 \times l_2 = l$ 且 $d_1 + d_2 = d$(由于长度是相乘的关系,这两个矩阵会小得多)。时间步 $j$ 在E中的嵌入,是把E1中第 $j \% l_1$ 个嵌入与E2中第 $j // l_1$ 个嵌入连接起来得到的。
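下面是轴向位置编码这一思路的一个简化示意(l1、l2、d1、d2的取值均为假设,与Reformer的实际配置无关):

import torch

l1, l2, d1, d2 = 64, 128, 256, 512        # 假设:l = l1*l2 = 8192,d = d1+d2 = 768
E1 = torch.randn(l1, d1)                  # 小矩阵1:l1 x d1
E2 = torch.randn(l2, d2)                  # 小矩阵2:l2 x d2

def axial_position_embedding(j):
    # 时间步j的嵌入 = E1中第 j % l1 行 与 E2中第 j // l1 行 的拼接
    return torch.cat([E1[j % l1], E2[j // l1]], dim=-1)

print(axial_position_embedding(5000).shape)   # torch.Size([768])
# 参数量:64*256 + 128*512 = 81,920,远小于完整矩阵的 8192*768 ≈ 6.3M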

数据预处理(Preprocessing data)

在本教程中,我们将探讨如何使用 🤗 Transformers对数据进行预处理。这方面的主要工具是我们所说的分词器。我们可以使用与想要使用的模型相关联的tokenizer类,或者直接使用AutoTokenizer类来构建一个。

正如我们在Quick tour中看到的,分词器首先将给定文本分割成通常称为token的单词(或部分单词、标点符号等)。然后,它将这些token转换为数字,以便构建张量并将其输入模型。它还会添加模型正常工作所需的任何额外输入。

注意:如果你计划使用一个预训练模型,使用它相关联的预训练的分词器是很重要的。它将以与前训练语料库相同的方式分割输入文本成为token,并且它将使用与预训练期间相同的对应token索引(我们通常称之为词汇表)。

要自动下载某个模型在预训练或微调时使用的词表,可以使用from_pretrained()方法。

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

基本使用(Base use)

一个 PreTrainedTokenizer 有很多方法,但是你需要记住的唯一一个预处理方法是它的__call__:你只需要把你的句子喂给你的分词器对象。

Note:__call__是python的魔法方法,可以把类实例当做函数调用。调用时执行的函数就是类实例的__call__方法。

encoded_input = tokenizer("Hello, I'm a single sentence!")
print(encoded_input)

输出为:

{'input_ids': [101, 8667, 117, 146, 112, 182, 170, 1423, 5650, 106, 102], 
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

这将返回一个字典,其值为整数列表。input_ids是句子中每个token对应的索引。下面我们将看到attention_mask的用途,下一节将看到token_type_ids的用途。

分词器可以将token id列表解码回原来的句子。

tokenizer.decode(encoded_input["input_ids"])

输出为:

"[CLS] Hello, I'm a single sentence! [SEP]"

可以看到,分词器自动添加了模型所期望的一些特殊token。并不是所有的模型都需要特殊token。例如,如果我们用gpt2-medium而不是bert-base-cased来创建分词器,解码后将看到与原句相同的句子。你可以通过传递 add_special_tokens=False 来禁用这个行为(只有当你打算自己手动添加那些特殊token时才建议这样做)。
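例如,可以用上面的bert-base-cased分词器对比有无特殊token的结果(仅示意add_special_tokens参数的用法,具体输出以实际运行为准):

with_special = tokenizer("Hello, I'm a single sentence!")
without_special = tokenizer("Hello, I'm a single sentence!", add_special_tokens=False)
print(tokenizer.decode(with_special["input_ids"]))     # 带 [CLS] / [SEP]
print(tokenizer.decode(without_special["input_ids"]))  # 不再添加特殊token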

如果有几个句子需要处理,可以通过将它们作为列表发送给分词器来有效地完成这一任务:

batch_sentences = ["Hello I'm a single sentence","And another sentence","And the very very last one"]
encoded_inputs = tokenizer(batch_sentences)
print(encoded_inputs)

输出为:

{'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102], [101, 1262, 1330, 5650, 102], [101, 1262, 1103, 1304, 1304, 1314, 1141, 102]], 
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]], 
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]]}

我们再次得到了一个字典,这次的值是整数列表的列表。

如果一次向分词器发送多个句子是为了构建一个batch输入给模型,那么你可能还需要:

  • 把每个句子填充到batch中最大的长度。
  • 将每个句子截断到模型所能接受的最大长度(如果适用的话)。
  • 返回张量。

当将句子列表输入到分词器时,可以使用以下选项来完成所有这一切。

batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(batch)

输出为:

{'input_ids': tensor([[ 101, 8667,  146,  112,  182,  170, 1423, 5650,  102],[ 101, 1262, 1330, 5650,  102,    0,    0,    0,    0],[ 101, 1262, 1103, 1304, 1304, 1314, 1141,  102,    0]]), 
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0, 0],[0, 0, 0, 0, 0, 0, 0, 0, 0]]), 
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],[1, 1, 1, 1, 1, 0, 0, 0, 0],[1, 1, 1, 1, 1, 1, 1, 1, 0]])}

它返回一个包含字符串键和张量值的字典。现在我们可以看到attention_mask是关于什么的:它指出模型应该注意哪些标记,哪些不应该注意(因为它们在本例中表示填充)。

注意,如果您的模型没有与其关联的最大长度,那么上面的命令将抛出一个警告。你可以安全地忽略它。您还可以传递 verbose=False 来阻止分词器抛出此类警告。

对句子对进行处理(Preprocessing pairs of sentences)

有时你需要给模型提供一对句子,例如想判断一对句子中的两句是否相似,或者使用问答模型时,它接受一个上下文和一个问题。对于BERT模型,输入表示为这样的形式:

[CLS] Sequence A [SEP] Sequence B [SEP]

我们可以将两个句子作为两个参数传入(而不是一个列表,因为两个句子组成的列表会被解释为包含两个单句的batch,正如我们前面看到的那样),从而以模型期望的格式编码一对句子。这将再次返回一个字典,其值为整数列表。

encoded_input = tokenizer("How old are you?", "I'm 6 years old")
print(encoded_input)

输出为:

{'input_ids': [101, 1731, 1385, 1132, 1128, 136, 102, 146, 112, 182, 127, 1201, 1385, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

这向我们展示了token_type_ids的用途:它们向模型指示输入的哪一部分对应第一个句子,哪一部分对应第二个句子。注意,token_type_ids不是所有模型都需要或处理的。默认情况下,分词器将只返回其关联模型所期望的输入。可以使用return_input_ids或return_token_type_ids强制返回(或不返回)任何这些特殊参数。

如果我们解码我们获得的token id,我们将看到特殊的token被适当地添加了。

tokenizer.decode(encoded_input["input_ids"])

输出为:

"[CLS] How old are you? [SEP] I'm 6 years old [SEP]"

如果有一个需要处理的序列对列表,那么应该将它们作为两个列表提供给分词器:第一个句子列表和第二个句子列表。


batch_sentences = ["Hello I'm a single sentence","And another sentence","And the very very last one"]
batch_of_second_sentences = ["I'm a sentence that goes with the first sentence","And I should be encoded with the second sentence","And I go with the very last one"]
encoded_inputs = tokenizer(batch_sentences, batch_of_second_sentences)
print(encoded_inputs)

输出为:

{'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102, 146, 112, 182, 170, 5650, 1115, 2947, 1114, 1103, 1148, 5650, 102],[101, 1262, 1330, 5650, 102, 1262, 146, 1431, 1129, 12544, 1114, 1103, 1248, 5650, 102],[101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 1262, 146, 1301, 1114, 1103, 1304, 1314, 1141, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}

正如我们所看到的,它返回一个字典,其中每个值都是一个整型数列表的列表。为了再次检查输入到模型中的内容,我们可以逐个解码input_ids中的每个列表。

for ids in encoded_inputs["input_ids"]:
    print(tokenizer.decode(ids))

输出为:

[CLS] Hello I'm a single sentence [SEP] I'm a sentence that goes with the first sentence [SEP]
[CLS] And another sentence [SEP] And I should be encoded with the second sentence [SEP]
[CLS] And the very very last one [SEP] And I go with the very last one [SEP]

同样,可以自动将输入填充到批处理中的最大句子长度,截断到模型可以接受的最大长度,并使用以下方法直接返回张量

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation=True, return_tensors="pt")
print(batch)

输出为:

{'input_ids': tensor([[  101,  8667,   146,   112,   182,   170,  1423,  5650,   102,   146,   112,   182,   170,  5650,  1115,  2947,  1114,  1103,  1148,  5650,   102],
                      [  101,  1262,  1330,  5650,   102,  1262,   146,  1431,  1129, 12544,  1114,  1103,  1248,  5650,   102,     0,     0,     0,     0,     0,     0],
                      [  101,  1262,  1103,  1304,  1304,  1314,  1141,   102,  1262,   146,  1301,  1114,  1103,  1304,  1314,  1141,   102,     0,     0,     0,     0]]),
 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]])}

关于填充和截断(Everything you always wanted to know about padding and truncation)

我们已经看到了适用于大多数情况的命令(将batch填充到最长句子的长度,并截断到模型可以接受的最大长度)。但是,如果需要的话,API支持更多的策略。为此需要了解三个参数:padding、truncation和max_length。

  • padding控制填充。它可以是一个布尔值或字符串,应该是:

    • True或’longest’参数将使得每个句子都填充到batch中最长的序列(如果只提供一个序列,则不填充)。
    • 'max_length’参数将使得每个句子都填充到max_length参数指定的长度,或者如果没有提供max_length (max_length=None),则填充为模型可接受的最大长度(例如BERT为512个token)。如果你只提供一个序列,填充仍然会应用到它上面。
    • False或’do_not_pad’不填充序列。正如我们之前看到的,这是默认行为。
  • truncation控制截断。它可以是一个布尔值或字符串,应该是:

    • True或’only_first’截断为max_length参数指定的最大长度,或者如果没有提供max_length,则为模型接受的最大长度(max_length=None)。如果提供了一对序列(或一批序列),这只会截断一对序列的第一个句子。
    • 'only_second’截断为max_length参数指定的最大长度,如果没有提供max_length,则截断为模型接受的最大长度(max_length=None)。如果提供了一对序列(或一批序列),这只会截断一对序列的第二个句子。
    • 'longest_first’截断为max_length参数指定的最大长度,或者如果没有提供max_length,则为模型接受的最大长度(max_length=None)。这将逐个截断token,从该对中的最长序列中删除一个标记,直到达到适当的长度。
    • False或’do_not_truncate’不截断序列。正如我们之前看到的,这是默认行为。
  • max_length来控制填充/截断的长度。它可以是一个整数或None,在这种情况下,它将默认为模型可以接受的最大长度。如果模型没有特定的最大输入长度,截断/填充到max_length将被禁用。

下面的表格总结了设置填充和截断的建议方法。如果在下面的例子中使用的是句子对输入,可以把truncation=True替换为['only_first', 'only_second', 'longest_first']中的一个STRATEGY,即 truncation='only_second' 或 truncation='longest_first',来控制序列对中的两个序列如何截断。

| Truncation | Padding | Instruction |
| --- | --- | --- |
| no truncation | no padding | tokenizer(batch_sentences) |
| no truncation | padding to max sequence in batch | tokenizer(batch_sentences, padding=True) or tokenizer(batch_sentences, padding='longest') |
| no truncation | padding to max model input length | tokenizer(batch_sentences, padding='max_length') |
| no truncation | padding to specific length | tokenizer(batch_sentences, padding='max_length', max_length=42) |
| truncation to max model input length | no padding | tokenizer(batch_sentences, truncation=True) or tokenizer(batch_sentences, truncation=STRATEGY) |
| truncation to max model input length | padding to max sequence in batch | tokenizer(batch_sentences, padding=True, truncation=True) or tokenizer(batch_sentences, padding=True, truncation=STRATEGY) |
| truncation to max model input length | padding to max model input length | tokenizer(batch_sentences, padding='max_length', truncation=True) or tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY) |
| truncation to max model input length | padding to specific length | Not possible |
| truncation to specific length | no padding | tokenizer(batch_sentences, truncation=True, max_length=42) or tokenizer(batch_sentences, truncation=STRATEGY, max_length=42) |
| truncation to specific length | padding to max sequence in batch | tokenizer(batch_sentences, padding=True, truncation=True, max_length=42) or tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42) |
| truncation to specific length | padding to max model input length | Not possible |
| truncation to specific length | padding to specific length | tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42) or tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42) |

这里通过实验来验证一下这些指令,我使用的是一组pair的形式。

首先定义这对pair的batch形式,注意,这里batch_sentences的第i个元素和batch_of_second_sentences的第i个元素才是一个pair。

batch_sentences = ["Hello I'm a single sentence","And another sentence","And the very very last one"]
batch_of_second_sentences = ["I'm a sentence that goes with the first sentence","And I should be encoded with the second sentence","And I go with the very last one"]
No truncation

首先是no truncation,这里分为no padding、padding to max sequence in batch、padding to max model input length、padding to specific length四种情况。

no padding

batch = tokenizer(batch_sentences, batch_of_second_sentences)
for ids in batch['input_ids']:            # 打印输出的代码全部一样,后续省略
    print("===="*36)
    print(len(tokenizer.convert_ids_to_tokens(ids)))
    print(tokenizer.convert_ids_to_tokens(ids))
    print()

输出为:

================================================================================================================================================
21
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']================================================================================================================================================
17
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]']

这里输出了经过tokenizer后的sentence pair,在分词得到的token基础上共新增了三个特殊的token。

可以看到,在no padding、no truncation的情况下,并未有任何填充和截断的情形(除非句子长到模型最大输入才会截断)发生。

由于打印输出的代码都一样,后续打印输出代码省略。

padding to max sequence in batch

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True)
# 等价于
# batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='longest')

输出为:

================================================================================================================================================
21
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']================================================================================================================================================
21
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']================================================================================================================================================
21
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']

在这种情况下,文本已经填充对齐到了最长的那个sentence。

padding to max model input length

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length')

这种情况下,文本会自动填充到模型能接受的最长输入,BERT这里是512.
输出(由于填充到了512个长度,故以下输出只展示了几个PAD)为:

================================================================================================================================================
512
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]', '[PAD]', ]================================================================================================================================================
512
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]',]================================================================================================================================================
512
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]',]

padding to specific length

同样,我们也可以自己设置max_length。

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', max_length=24)

输出为:
================================================================================================================================================
24
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]']================================================================================================================================================
24
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']================================================================================================================================================
24
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']

这里设置max_length=24,可以看到文本都被填充到了24。

truncation to max model input length

no padding

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation=True)

输出为:

================================================================================================================================================
21
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']================================================================================================================================================
17
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]']

同时,在sentence pair的情况下,我们实验了以下三句代码输出都是一样的

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_first')
batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_second')
batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='longest_first')

输出同上,不再展示。

padding to max sequence in batch

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation=True)

输出为:

================================================================================================================================================
21
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']================================================================================================================================================
21
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']================================================================================================================================================
21
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']

这里的文本已经填充对齐,同时由于远远没有达到模型最大输入的长度限制,没有发生截断。

然后测试

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation='only_first')
batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation='only_second')
batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation='longest_first')

输出与上面完全一样。

padding to max model input length

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', truncation=True)

输出全部填充到了512长度。

================================================================================================================================================
512
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]', '[PAD]', ]================================================================================================================================================
512
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]',]================================================================================================================================================
512
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]',]

同样测试以下三句代码结果也都相同

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', truncation='only_first')
batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', truncation='only_second')
batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', truncation='longest_first')

同时在truncation to max model input length的设置下无法实现padding to specific length。

truncation to specific length

no padding

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation=True, max_length=16)

这里设置最大长度为16,输出为:

================================================================================================================================================
16
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', '[SEP]']
================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']
================================================================================================================================================
16
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', '[SEP]']

As you can see, the first and third pairs were truncated, while the second pair was neither truncated nor padded.

Next, let's compare the differences between 'only_first', 'only_second' and 'longest_first':

First:

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_first', max_length=16)

Its output is:

================================================================================================================================================
16
['[CLS]', 'Hello', 'I', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']
================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']
================================================================================================================================================
16
['[CLS]', 'And', 'the', 'very', 'very', 'last', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]']

As you can see, with 'only_first' only the first sentence of each pair is truncated.

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_second', max_length=16)

The output is:

================================================================================================================================================
16
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', '[SEP]']
================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']
================================================================================================================================================
16
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', '[SEP]']

With 'only_second', only the second sentence of each pair is truncated.

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='longest_first', max_length=16)

The output is:

================================================================================================================================================
16
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', '[SEP]']
================================================================================================================================================
15
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]']
================================================================================================================================================
16
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', '[SEP]']

Here the longer of the two sentences is truncated first, one token at a time (it may start by removing tokens from the second sentence; once the two are the same length, it continues with the first, and so on, until the target length is reached).
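
To see which side loses tokens under each strategy, one can count the tokens per segment using token_type_ids (0 for the first sentence plus its [CLS]/[SEP], 1 for the second sentence plus the final [SEP]). A sketch, reusing the tokenizer and sentence lists from the first sketch above:

# Sketch: per-segment token counts for the first pair under each truncation strategy.
for strategy in ("only_first", "only_second", "longest_first"):
    enc = tokenizer(batch_sentences[0], batch_of_second_sentences[0],
                    truncation=strategy, max_length=16)
    types = enc["token_type_ids"]
    print(strategy, "-> segment 0:", types.count(0), "tokens, segment 1:", types.count(1), "tokens")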

Note that the truncation length must leave content in both sentences (at least one token each); otherwise an error is reported.

batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_first', max_length=15)
batch = tokenizer(batch_sentences, batch_of_second_sentences, truncation='only_first', max_length=14)

For example, the first call above does not report an error, because truncation still leaves one token of the first sentence, whereas the second call does report an error.

padding to max sequence in batch
With padding=True, sentences that do not reach the length of the longest example in the batch are padded.
For example:

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation='only_first', max_length=18)

The output is:

================================================================================================================================================
18
['[CLS]', 'Hello', 'I', "'", 'm', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']
================================================================================================================================================
18
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]']
================================================================================================================================================
18
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]']

Truncation also works as expected at the same time.

When using 'only_first', 'only_second' or 'longest_first', the corresponding part is truncated in the same way; the outputs are not shown here.

padding to specific length

This pads every example to a specific length (achieved by setting max_length). Note the contrast between the two calls below (a comparison sketch follows the second output):

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding='max_length', truncation=True, max_length=22)

Output:

================================================================================================================================================
22
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]', '[PAD]']
================================================================================================================================================
22
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
================================================================================================================================================
22
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']

Whereas the following call:

batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation=True, max_length=22)

outputs:

================================================================================================================================================
21
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]', 'I', "'", 'm', 'a', 'sentence', 'that', 'goes', 'with', 'the', 'first', 'sentence', '[SEP]']
================================================================================================================================================
21
['[CLS]', 'And', 'another', 'sentence', '[SEP]', 'And', 'I', 'should', 'be', 'encoded', 'with', 'the', 'second', 'sentence', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
================================================================================================================================================
21
['[CLS]', 'And', 'the', 'very', 'very', 'last', 'one', '[SEP]', 'And', 'I', 'go', 'with', 'the', 'very', 'last', 'one', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']
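
The difference is therefore only the padding target: padding='max_length' pads every example up to max_length=22, while padding=True pads only up to the longest example in the batch (21 tokens here). A small check (a sketch, reusing the tokenizer and sentence lists from the first sketch above):

# Sketch: compare the two padding modes at max_length=22.
to_max_length = tokenizer(batch_sentences, batch_of_second_sentences,
                          padding="max_length", truncation=True, max_length=22)
to_longest = tokenizer(batch_sentences, batch_of_second_sentences,
                       padding=True, truncation=True, max_length=22)
print([len(ids) for ids in to_max_length["input_ids"]])  # [22, 22, 22]
print([len(ids) for ids in to_longest["input_ids"]])     # [21, 21, 21]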

Pre-tokenized inputs

The tokenizer also accepts pre-tokenized inputs. This is particularly useful when you want to compute labels and extract predictions for named entity recognition (NER) or part-of-speech (POS) tagging.

Warning: pre-tokenized does not mean your inputs are already tokenized (if they were, you would not need to pass them through the tokenizer); it means they have been split into words (which is usually the first step of subword tokenization algorithms such as BPE).

If you want to use pre-tokenized inputs, just set is_split_into_words=True when passing your inputs to the tokenizer.
For example:

encoded_input = tokenizer(["Hello", "I'm", "a", "single", "sentence"], is_split_into_words=True)
print(encoded_input)

The output is:

{'input_ids': [101, 8667, 146, 112, 182, 170, 1423, 5650, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}

Note that the tokenizer still adds the ids of special tokens (if applicable), unless you pass add_special_tokens=False.
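
For example (a sketch, again assuming the bert-base-cased tokenizer), disabling the special tokens removes [CLS] and [SEP]:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # assumed checkpoint

with_special = tokenizer(["Hello", "I'm", "a", "single", "sentence"], is_split_into_words=True)
without_special = tokenizer(["Hello", "I'm", "a", "single", "sentence"],
                            is_split_into_words=True, add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(with_special["input_ids"]))
# ['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]']
print(tokenizer.convert_ids_to_tokens(without_special["input_ids"]))
# ['Hello', 'I', "'", 'm', 'a', 'single', 'sentence']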

At the same time, the tokenizer still performs subword tokenization; with is_split_into_words=True it simply no longer splits the input into words itself (each list element is treated as one word). Example code:

batch = tokenizer(["Hello", "I'm", "a", "single", "sentence"], is_split_into_words=True)
ids = batch['input_ids']            # the printing code is the same as before and is omitted in later examples
print("===="*36)
print(len(tokenizer.convert_ids_to_tokens(ids)))
print(tokenizer.convert_ids_to_tokens(ids))
print()

The output is:

================================================================================================================================================
9
['[CLS]', 'Hello', 'I', "'", 'm', 'a', 'single', 'sentence', '[SEP]']

As you can see, the word I'm is still split internally into three tokens: I, ', and m.
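
This is exactly what makes label alignment for NER / POS tagging possible: each subword token can be mapped back to the word it came from. A sketch, assuming a fast (Rust-backed) tokenizer, since word_ids() is not available on slow tokenizers:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)  # assumed checkpoint

words = ["Hello", "I'm", "a", "single", "sentence"]
encoding = tokenizer(words, is_split_into_words=True)

tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])
word_ids = encoding.word_ids()  # index of the source word for each token, None for special tokens
for token, word_id in zip(tokens, word_ids):
    print(f"{token!r:>12} -> word {word_id}")
# 'I', "'" and 'm' all map back to word 1 ("I'm"), so word-level labels
# can be propagated to (or masked out for) the extra subword tokens.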

This works exactly as before with a batch of sentences or a batch of sentence pairs.
You can encode a batch of sentences like this:

batch_sentences = [["Hello", "I'm", "a", "single", "sentence"],["And", "another", "sentence"],["And", "the", "very", "very", "last", "one"]]
encoded_inputs = tokenizer(batch_sentences, is_split_into_words=True)

Or a batch of sentence pairs like this:

batch_of_second_sentences = [["I'm", "a", "sentence", "that", "goes", "with", "the", "first", "sentence"],["And", "I", "should", "be", "encoded", "with", "the", "second", "sentence"],["And", "I", "go", "with", "the", "very", "last", "one"]]
encoded_inputs = tokenizer(batch_sentences, batch_of_second_sentences, is_split_into_words=True)

You can add padding and truncation, and return tensors directly, just as before:

batch = tokenizer(batch_sentences,batch_of_second_sentences,is_split_into_words=True,padding=True,truncation=True,return_tensors="pt")
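
The returned PyTorch tensors can then be fed straight into a model. A minimal sketch; the checkpoint and the sequence-classification head are illustrative assumptions (with bert-base-cased the classification head is freshly initialised, so the logits are untrained), and any compatible model class is used the same way:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-cased"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

batch_sentences = [["Hello", "I'm", "a", "single", "sentence"],
                   ["And", "another", "sentence"],
                   ["And", "the", "very", "very", "last", "one"]]
batch_of_second_sentences = [["I'm", "a", "sentence", "that", "goes", "with", "the", "first", "sentence"],
                             ["And", "I", "should", "be", "encoded", "with", "the", "second", "sentence"],
                             ["And", "I", "go", "with", "the", "very", "last", "one"]]

batch = tokenizer(batch_sentences, batch_of_second_sentences, is_split_into_words=True,
                  padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)    # the tensors plug directly into the forward pass
print(outputs.logits.shape)     # (3, num_labels)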
