Creating a WordPiece Vocabulary

2023-12-06 05:45
Tags: creation, vocabulary, wordpiece

This article introduces how to build a WordPiece vocabulary; hopefully it offers a useful reference for developers who need it.

Table of Contents

    • 1. Overview
    • 2. Workflow
      • 2.1 Preprocessing
      • 2.2 Counting
      • 2.3 Splitting
      • 2.4 Adding subwords
    • 3. Code implementation

This post explains how to build a WordPiece vocabulary from a given body of text; the code comes from Google.

1. Overview

The goal of WordPiece is to exploit the internal structure of words and take full advantage of subwords, striking a good balance between flexibility (being able to break long words into shorter pieces) and efficiency (encoding each word with as few pieces as possible).

The former pushes the vocabulary to be smaller (in the limit, down to single characters), while the latter pushes it to be larger.

2. Workflow

2.1 Preprocessing

After reading in all the text, the first step is to preprocess it:

  • For English, we can lowercase all characters, strip accents (á becomes a), and then split on whitespace and punctuation (a minimal sketch follows after this list);
  • For Chinese, we can convert traditional characters to simplified ones, but the only built-in way to split is character by character; the main optimization is to bring in an external tokenizer that segments the text into words before the later steps, because a single Chinese character cannot be decomposed further. (Decomposing by radicals has been suggested, but how would the radicals be ordered?) The rest of this post works with English.
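
Not part of the original code, but a minimal sketch of the English preprocessing described above (lowercase, strip accents, split on whitespace and punctuation); the helper name and regex are my own:

import re
import unicodedata

def basic_preprocess(text):
  """Lowercases, strips accents, and splits on whitespace and punctuation."""
  text = text.lower()
  # Strip accents: decompose characters and drop the combining marks (á -> a).
  text = unicodedata.normalize('NFD', text)
  text = ''.join(ch for ch in text if unicodedata.category(ch) != 'Mn')
  # Words become tokens; each punctuation character becomes its own token.
  return re.findall(r"\w+|[^\w\s]", text)

# basic_preprocess('Él dijo: "Hello, World!"')
# -> ['el', 'dijo', ':', '"', 'hello', ',', 'world', '!', '"']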

Once the text has been cut into word-sized chunks, we move on to the next step.

2.2 Counting

From the preprocessing stage we obtained word-level chunks. To understand the overall word distribution, we count the words and sort them by count in descending order. If the corpus is large, we can add an optimization here: filter out words whose counts are too high or too low, as well as words that are too long.
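
As a rough illustration of this counting-and-filtering idea (the real counting code appears in section 3; min_count and max_len are assumed parameter names):

import collections

def count_and_filter(words, min_count=2, max_len=50):
  """Counts words, drops rare or overly long ones, and sorts by frequency."""
  counts = collections.Counter(words)
  kept = {w: c for w, c in counts.items() if c >= min_count and len(w) <= max_len}
  return sorted(kept.items(), key=lambda wc: wc[1], reverse=True)

# count_and_filter(['the', 'cat', 'the', 'dog', 'the'])  ->  [('the', 3)]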

Since wordpieces are essentially subwords, to split words into subwords sensibly we must also account for their basic units (individual characters): if the vocabulary is missing a character that a word contains, that word cannot be represented at all, or its representation is incomplete and can be confused with other words.

So here we also count how many times each individual character occurs across all words. Likewise, we can optimize by deleting characters that occur too rarely; once those characters are removed, words containing them can no longer be represented, so we must delete those words as well.
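
A small sketch of this character-level filtering (the production versions are get_allowed_chars and filter_input_words in section 3; min_char_count is an assumed parameter):

import collections

def filter_by_char_counts(word_counts, min_char_count=5):
  """Keeps frequent characters and drops words that use any other character."""
  char_counts = collections.Counter()
  for word, count in word_counts:
    for ch in word:
      char_counts[ch] += count
  allowed = {ch for ch, c in char_counts.items() if c >= min_char_count}
  kept_words = [(w, c) for w, c in word_counts if all(ch in allowed for ch in w)]
  return allowed, kept_words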

2.3 Splitting

In this step we split each word in the count dictionary, as follows:

Set a start pointer and an end pointer on the word and test whether the substring between them is in the current token set (the union of the count dictionary and the character dictionary). On a match, move the start pointer up to the end pointer and reset the end pointer to the end of the word; on a miss, move the end pointer one step back toward the start pointer. Repeat until the two pointers meet: if they meet at the end of the word, return the collected split indices; if they meet anywhere else, the word cannot be segmented with the current tokens and None is returned.

The implementation is as follows:

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):
  indices = []
  start = 0
  while start < len(word):
    end = len(word)
    while end > start:
      subtoken = word[start:end]
      # Subtoken includes the joiner token.
      if include_joiner_token and start > 0:
        subtoken = joiner + subtoken
      # If subtoken is part of vocab, 'end' is a valid start index.
      if subtoken in curr_tokens:
        indices.append(end)
        break
      end -= 1
    if end == start:
      return None
    start = end

  return indices


if __name__ == '__main__':
  res = get_split_indices('hello', ['h', '##e', '##llo', '##o'], True, '##')
  print(res)  # [1, 2, 5]

2.4 Adding subwords

The splitting in the previous step is really just finding the largest matching pieces, and it is a greedy procedure, not an optimal one. Having computed the split indices for a word, we can quickly find character strings that often occur together: starting from each split index, enumerate the subwords of increasing length, build a subword dictionary, and add word.count to each enumerated subword's count.

This enumeration produces an enormous number of subwords, so where necessary we can optimize, for example by dropping subwords that are too long or whose counts are too small. With that, the subword-adding step is done.

Note, however, that these subword counts are duplicated: whenever a long string is counted, every shorter prefix of it is counted as well. We therefore walk from long strings to short ones; once a long string has enough occurrences to be accepted as a vocabulary element, we subtract its count from every shorter string sharing the same prefix, so those occurrences are not counted twice. A toy sketch of this step is shown below.
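
A toy sketch of that de-duplication, not the library code itself: walk from long subwords to short, and when a long subword clears the threshold, subtract its count from each of its prefixes (the threshold and counts below are made up):

def select_subwords(subtoken_counts, thresh):
  """subtoken_counts: dict mapping subword -> count. Returns accepted subwords."""
  selected = {}
  # Longer subwords first, so their occurrences stop inflating their prefixes.
  for token in sorted(subtoken_counts, key=len, reverse=True):
    count = subtoken_counts[token]
    if count >= thresh:
      selected[token] = count
      # Subtract this token's count from every proper prefix still in the dict.
      for i in range(1, len(token)):
        prefix = token[:i]
        if prefix in subtoken_counts:
          subtoken_counts[prefix] -= count
  return selected

# select_subwords({'hum': 250, 'human': 150, 'humans': 100}, thresh=120)
# 'humans' (100) is rejected; 'human' (150) is accepted and its 150 occurrences
# are subtracted from 'hum', leaving 'hum' at 100, so 'hum' is also rejected.
# Result: {'human': 150}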

At the same time, this vocabulary does not necessarily contain every single character, so we merge the character dictionary back in; the result is the final WordPiece vocabulary.

3. Code implementation

First the words are preprocessed; that code is omitted here.

We pass in an iterable and use Counter from the collections library to count each word:

import collections

import numpy as np


def count_words(iterable) -> collections.Counter:
  """Converts an iterable of arrays of words into a `Counter` of word counts."""
  counts = collections.Counter()
  for words in iterable:
    # Convert a RaggedTensor to a flat/dense Tensor.
    words = getattr(words, 'flat_values', words)
    # Flatten any dense tensor.
    words = np.reshape(words, [-1])
    counts.update(words)

  # Decode the words if necessary.
  example_word = next(iter(counts.keys()))
  if isinstance(example_word, bytes):
    counts = collections.Counter(
        {word.decode('utf-8'): count for word, count in counts.items()})

  return counts
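
A quick usage example with a plain list of word lists (in practice the iterable would usually come from a tf.data pipeline; this toy input is an assumption):

word_lists = [['the', 'cat', 'sat'], ['the', 'dog']]
counts = count_words(word_lists)
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'dog': 1})  (keys are numpy strings)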

Based on the current word counts and the upper_thresh / lower_thresh parameters, clip the frequency bounds used for the search:

def get_search_threshs(word_counts, upper_thresh, lower_thresh):
  """Clips the thresholds for binary search based on current word counts.

  The upper threshold parameter typically has a large default value that can
  result in many iterations of unnecessary search. Thus we clip the upper and
  lower bounds of search to the maximum and the minimum wordcount values.

  Args:
    word_counts: list of (string, int) tuples
    upper_thresh: int, upper threshold for binary search
    lower_thresh: int, lower threshold for binary search

  Returns:
    upper_search: int, clipped upper threshold for binary search
    lower_search: int, clipped lower threshold for binary search
  """
  counts = [count for _, count in word_counts]
  max_count = max(counts)
  min_count = min(counts)

  if upper_thresh is None:
    upper_search = max_count
  else:
    upper_search = max_count if max_count < upper_thresh else upper_thresh

  if lower_thresh is None:
    lower_search = min_count
  else:
    lower_search = min_count if min_count > lower_thresh else lower_thresh

  return upper_search, lower_search

Put an upper limit on the number of distinct single characters:

def get_allowed_chars(all_counts, max_unique_chars):
  """Get the top max_unique_chars characters within our wordcounts.

  We want each character to be in the vocabulary so that we can keep splitting
  down to the character level if necessary. However, in order not to inflate
  our vocabulary with rare characters, we only keep the top max_unique_chars
  characters.

  Args:
    all_counts: list of (string, int) tuples
    max_unique_chars: int, maximum number of unique single-character tokens

  Returns:
    set of strings containing top max_unique_chars characters in all_counts
  """
  char_counts = collections.defaultdict(int)

  for word, count in all_counts:
    for char in word:
      char_counts[char] += count

  # Sort by count, then alphabetically.
  sorted_counts = sorted(sorted(char_counts.items(), key=lambda x: x[0]),
                         key=lambda x: x[1], reverse=True)

  allowed_chars = set()
  for i in range(min(len(sorted_counts), max_unique_chars)):
    allowed_chars.add(sorted_counts[i][0])
  return allowed_chars

Combining all_counts with allowed_chars, drop the words that contain characters outside allowed_chars, and keep at most the max_input_tokens most frequent words:

def filter_input_words(all_counts, allowed_chars, max_input_tokens):
  """Filters out words with unallowed chars and limits words to max_input_tokens.

  Args:
    all_counts: list of (string, int) tuples
    allowed_chars: list of single-character strings
    max_input_tokens: int, maximum number of tokens accepted as input

  Returns:
    list of (string, int) tuples of filtered wordcounts
  """
  # Ensure that the input is sorted so that if `max_input_tokens` is reached
  # the least common tokens are dropped.
  all_counts = sorted(
      all_counts, key=lambda word_and_count: word_and_count[1], reverse=True)

  filtered_counts = []
  for word, count in all_counts:
    if (max_input_tokens != -1 and
        len(filtered_counts) >= max_input_tokens):
      break
    has_unallowed_chars = False
    for char in word:
      if char not in allowed_chars:
        has_unallowed_chars = True
        break
    if has_unallowed_chars:
      continue
    filtered_counts.append((word, count))

  return filtered_counts
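
For example, chaining these two helpers on a toy count list (all numbers are made up):

word_counts = [('hello', 20), ('help', 12), ('héllo', 1)]
allowed = get_allowed_chars(word_counts, max_unique_chars=5)
# 'é' is too rare to make the top-5 character set.
filtered = filter_input_words(word_counts, allowed, max_input_tokens=3)
# -> [('hello', 20), ('help', 12)]   ('héllo' uses a disallowed character)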

Compute the split indices, which requires that curr_tokens can fully segment the word:

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):
  """Gets indices for valid substrings of word, for iterations > 0.

  For iterations > 0, rather than considering every possible substring, we only
  want to consider starting points corresponding to the start of wordpieces in
  the current vocabulary.

  Args:
    word: string we want to split into substrings
    curr_tokens: string to int dict of tokens in vocab (from previous iteration)
    include_joiner_token: bool whether to include joiner token
    joiner: string used to indicate suffixes

  Returns:
    list of ints containing valid starting indices for word
  """
  indices = []
  start = 0
  while start < len(word):
    end = len(word)
    while end > start:
      subtoken = word[start:end]
      # Subtoken includes the joiner token.
      if include_joiner_token and start > 0:
        subtoken = joiner + subtoken
      # If subtoken is part of vocab, 'end' is a valid start index.
      if subtoken in curr_tokens:
        indices.append(end)
        break
      end -= 1
    if end == start:
      return None
    start = end

  return indices

Now for the final steps:

import collections
from typing import List, Optional

Params = collections.namedtuple('Params', [
    'upper_thresh', 'lower_thresh', 'num_iterations', 'max_input_tokens',
    'max_token_length', 'max_unique_chars', 'vocab_size', 'slack_ratio',
    'include_joiner_token', 'joiner', 'reserved_tokens'
])


def extract_char_tokens(word_counts):
  """Extracts all single-character tokens from word_counts.

  Args:
    word_counts: list of (string, int) tuples

  Returns:
    set of single-character strings contained within word_counts
  """
  seen_chars = set()
  for word, _ in word_counts:
    for char in word:
      seen_chars.add(char)
  return seen_chars


def ensure_all_tokens_exist(input_tokens, output_tokens, include_joiner_token,
                            joiner):
  """Adds all tokens in input_tokens to output_tokens if not already present.

  Args:
    input_tokens: set of strings (tokens) we want to include
    output_tokens: string to int dictionary mapping token to count
    include_joiner_token: bool whether to include joiner token
    joiner: string used to indicate suffixes

  Returns:
    string to int dictionary with all tokens in input_tokens included
  """
  for token in input_tokens:
    if token not in output_tokens:
      output_tokens[token] = 1

    if include_joiner_token:
      joined_token = joiner + token
      if joined_token not in output_tokens:
        output_tokens[joined_token] = 1

  return output_tokens


def get_search_threshs(word_counts, upper_thresh, lower_thresh):
  """Clips the thresholds for binary search based on current word counts.

  The upper threshold parameter typically has a large default value that can
  result in many iterations of unnecessary search. Thus we clip the upper and
  lower bounds of search to the maximum and the minimum wordcount values.

  Args:
    word_counts: list of (string, int) tuples
    upper_thresh: int, upper threshold for binary search
    lower_thresh: int, lower threshold for binary search

  Returns:
    upper_search: int, clipped upper threshold for binary search
    lower_search: int, clipped lower threshold for binary search
  """
  counts = [count for _, count in word_counts]
  max_count = max(counts)
  min_count = min(counts)

  if upper_thresh is None:
    upper_search = max_count
  else:
    upper_search = max_count if max_count < upper_thresh else upper_thresh

  if lower_thresh is None:
    lower_search = min_count
  else:
    lower_search = min_count if min_count > lower_thresh else lower_thresh

  return upper_search, lower_search


def get_input_words(word_counts, reserved_tokens, max_token_length):
  """Filters out words that are longer than max_token_length or are reserved.

  Args:
    word_counts: list of (string, int) tuples
    reserved_tokens: list of strings
    max_token_length: int, maximum length of a token

  Returns:
    list of (string, int) tuples of filtered wordcounts
  """
  all_counts = []
  for word, count in word_counts:
    if len(word) > max_token_length or word in reserved_tokens:
      continue
    all_counts.append((word, count))
  return all_counts


def generate_final_vocabulary(reserved_tokens, char_tokens, curr_tokens):
  """Generates final vocab given reserved, single-character, and current tokens.

  Args:
    reserved_tokens: list of strings (tokens) that must be included in vocab
    char_tokens: set of single-character strings
    curr_tokens: string to int dict mapping token to count

  Returns:
    list of strings representing final vocabulary
  """
  sorted_char_tokens = sorted(list(char_tokens))
  vocab_char_arrays = []
  vocab_char_arrays.extend(reserved_tokens)
  vocab_char_arrays.extend(sorted_char_tokens)

  # Sort by count, then alphabetically.
  sorted_tokens = sorted(sorted(curr_tokens.items(), key=lambda x: x[0]),
                         key=lambda x: x[1], reverse=True)
  for token, _ in sorted_tokens:
    vocab_char_arrays.append(token)

  seen_tokens = set()
  # Adding unique tokens to list to maintain sorted order.
  vocab_words = []
  for word in vocab_char_arrays:
    if word in seen_tokens:
      continue
    seen_tokens.add(word)
    vocab_words.append(word)

  return vocab_words


def learn_with_thresh(word_counts, thresh, params):
  """Wordpiece learning algorithm to produce a vocab given frequency threshold.

  Args:
    word_counts: list of (string, int) tuples
    thresh: int, frequency threshold for a token to be included in the vocab
    params: Params namedtuple, parameters for learning

  Returns:
    list of strings, vocabulary generated for the given thresh
  """
  # Set of single-character tokens.
  char_tokens = extract_char_tokens(word_counts)
  curr_tokens = ensure_all_tokens_exist(char_tokens, {},
                                        params.include_joiner_token,
                                        params.joiner)

  for iteration in range(params.num_iterations):
    subtokens = [dict() for _ in range(params.max_token_length + 1)]
    # Populate array with counts of each subtoken.
    for word, count in word_counts:
      if iteration == 0:
        split_indices = range(1, len(word) + 1)
      else:
        split_indices = get_split_indices(word, curr_tokens,
                                          params.include_joiner_token,
                                          params.joiner)
        if not split_indices:
          continue

      start = 0
      for index in split_indices:
        for end in range(start + 1, len(word) + 1):
          subtoken = word[start:end]
          length = len(subtoken)
          if params.include_joiner_token and start > 0:
            subtoken = params.joiner + subtoken
          if subtoken in subtokens[length]:
            # Subtoken exists, increment count.
            subtokens[length][subtoken] += count
          else:
            # New subtoken, add to dict.
            subtokens[length][subtoken] = count
        start = index

    next_tokens = {}
    # Get all tokens that have a count above the threshold.
    for length in range(params.max_token_length, 0, -1):
      for token, count in subtokens[length].items():
        if count >= thresh:
          next_tokens[token] = count

          # Decrement the count of all prefixes.
          if len(token) > length:  # This token includes the joiner.
            joiner_len = len(params.joiner)
            for i in range(1 + joiner_len, length + joiner_len):
              prefix = token[0:i]
              if prefix in subtokens[i - joiner_len]:
                subtokens[i - joiner_len][prefix] -= count
          else:
            for i in range(1, length):
              prefix = token[0:i]
              if prefix in subtokens[i]:
                subtokens[i][prefix] -= count

    # Add back single-character tokens.
    curr_tokens = ensure_all_tokens_exist(char_tokens, next_tokens,
                                          params.include_joiner_token,
                                          params.joiner)

  vocab_words = generate_final_vocabulary(params.reserved_tokens, char_tokens,
                                          curr_tokens)

  return vocab_words


def learn_binary_search(word_counts, lower, upper, params):
  """Performs binary search to find wordcount frequency threshold.

  Given upper and lower bounds and a list of (word, count) tuples, performs
  binary search to find the threshold closest to producing a vocabulary
  of size vocab_size.

  Args:
    word_counts: list of (string, int) tuples
    lower: int, lower bound for binary search
    upper: int, upper bound for binary search
    params: Params namedtuple, parameters for learning

  Returns:
    list of strings, vocab that is closest to target vocab_size
  """
  thresh = (upper + lower) // 2
  current_vocab = learn_with_thresh(word_counts, thresh, params)
  current_vocab_size = len(current_vocab)

  # Allow count to be within k% of the target count, where k is slack ratio.
  slack_count = params.slack_ratio * params.vocab_size
  if slack_count < 0:
    slack_count = 0

  is_within_slack = (current_vocab_size <= params.vocab_size) and (
      params.vocab_size - current_vocab_size <= slack_count)

  # We've created a vocab within our goal range (or, ran out of search space).
  if is_within_slack or lower >= upper or thresh <= 1:
    return current_vocab

  current_vocab = None

  if current_vocab_size > params.vocab_size:
    return learn_binary_search(word_counts, thresh + 1, upper, params)
  else:
    return learn_binary_search(word_counts, lower, thresh - 1, params)

Putting it all together:

def learn(word_counts,
          vocab_size: int,
          reserved_tokens: List[str],
          upper_thresh: Optional[int] = int(1e7),
          lower_thresh: Optional[int] = 10,
          num_iterations: int = 4,
          max_input_tokens: Optional[int] = int(5e6),
          max_token_length: int = 50,
          max_unique_chars: int = 1000,
          slack_ratio: float = 0.05,
          include_joiner_token: bool = True,
          joiner: str = '##') -> List[str]:
  """Takes in wordcounts and returns wordpiece vocabulary.

  Args:
    word_counts: (word, count) pairs as a dictionary, or list of tuples.
    vocab_size: The target vocabulary size. This is the maximum size.
    reserved_tokens: A list of tokens that must be included in the vocabulary.
    upper_thresh: Initial upper bound on the token frequency threshold.
    lower_thresh: Initial lower bound on the token frequency threshold.
    num_iterations: Number of iterations to run.
    max_input_tokens: The maximum number of words in the initial vocabulary. The
      words with the lowest counts are discarded. Use `None` or `-1` for "no
      maximum".
    max_token_length: The maximum token length. Counts for longer words are
      discarded.
    max_unique_chars: The maximum alphabet size. This prevents rare characters
      from inflating the vocabulary. Counts for words containing characters
      outside of the selected alphabet are discarded.
    slack_ratio: The maximum deviation acceptable from `vocab_size` for an
      acceptable vocabulary. The acceptable range of vocabulary sizes is from
      `vocab_size*(1-slack_ratio)` to `vocab_size`.
    include_joiner_token: If true, include the `joiner` token in the output
      vocabulary.
    joiner: The prefix to include on suffix tokens in the output vocabulary.
      Usually "##". For example 'places' could be tokenized as `['place',
      '##s']`.

  Returns:
    list of strings, the final vocabulary
  """
  if isinstance(word_counts, dict):
    word_counts = word_counts.items()

  params = Params(upper_thresh, lower_thresh, num_iterations, max_input_tokens,
                  max_token_length, max_unique_chars, vocab_size, slack_ratio,
                  include_joiner_token, joiner, reserved_tokens)

  upper_search, lower_search = get_search_threshs(word_counts,
                                                  params.upper_thresh,
                                                  params.lower_thresh)

  all_counts = get_input_words(word_counts, params.reserved_tokens,
                               params.max_token_length)

  allowed_chars = get_allowed_chars(all_counts, params.max_unique_chars)

  filtered_counts = filter_input_words(all_counts, allowed_chars,
                                       params.max_input_tokens)

  vocab = learn_binary_search(filtered_counts, lower_search, upper_search,
                              params)

  return vocab
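
Finally, a hedged end-to-end example gluing the pieces together on a tiny made-up corpus (the corpus, vocab_size, and reserved tokens are all assumptions; with this little data the binary search simply returns the closest vocabulary it can find):

corpus = [['the', 'cats', 'sat'], ['the', 'cat', 'sat', 'there']] * 50
word_counts = count_words(corpus)
vocab = learn(word_counts,
              vocab_size=60,
              reserved_tokens=['<pad>', '<unk>'],
              lower_thresh=1)
print(len(vocab))
print(vocab[:10])  # reserved tokens first, then single characters, then subwords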

That concludes this introduction to creating a WordPiece vocabulary; hopefully it proves useful.


