GraphRAG Text Splitting Optimization

This post describes two optimizations made to Microsoft's GraphRAG text chunking:
- garbled '�' (U+FFFD replacement) characters at chunk boundaries
- sentences being cut off in the middle of a chunk
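Before looking at the patched splitter, it helps to see where the '�' characters come from. A minimal, standard-library-only sketch: when a chunk boundary falls inside a multi-byte UTF-8 sequence (which is what happens when a token window splits a Chinese character's tokens), decoding with replacement yields U+FFFD, and stripping it is the simple fix applied below:

```python
# Two Chinese characters, 3 UTF-8 bytes each.
text = "优化"
data = text.encode("utf-8")

# Cut in the middle of the second character, as a token-window
# boundary can do, then decode with replacement.
cut = data[:4]
chunk = cut.decode("utf-8", errors="replace")

print(chunk)             # → 优�
print(chunk.strip("�"))  # → 优
```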
```python
# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""A module containing run and split_text_on_tokens methods definition."""

from collections.abc import Iterable
from typing import Any

import tiktoken
from datashaper import ProgressTicker

import graphrag.config.defaults as defs
from graphrag.index.text_splitting import Tokenizer
from graphrag.index.verbs.text.chunk.typing import TextChunk


def trim_sentence(sentence):
    # Fixed list of sentence/clause delimiters
    delimiters = [',', '、', '。', '!', '?', ';', ':', '.']
    # Drop the partial sentence before the first delimiter
    start_index = 0
    for i, char in enumerate(sentence):
        if char in delimiters:
            start_index = i + 1
            break
    # Drop the partial sentence after the last delimiter
    end_index = len(sentence)
    for i in range(len(sentence) - 1, -1, -1):
        if sentence[i] in delimiters:
            end_index = i + 1
            break
    # Return the trimmed sentence
    return sentence[start_index:end_index]


def run(
    input: list[str], args: dict[str, Any], tick: ProgressTicker
) -> Iterable[TextChunk]:
    """Chunk text into multiple parts. A pipeline verb."""
    tokens_per_chunk = args.get("chunk_size", defs.CHUNK_SIZE)
    chunk_overlap = args.get("chunk_overlap", defs.CHUNK_OVERLAP)
    encoding_name = args.get("encoding_name", defs.ENCODING_MODEL)
    enc = tiktoken.get_encoding(encoding_name)

    def encode(text: str) -> list[int]:
        if not isinstance(text, str):
            text = f"{text}"
        return enc.encode(text)

    def decode(tokens: list[int]) -> str:
        return enc.decode(tokens)

    return split_text_on_tokens(
        input,
        Tokenizer(
            chunk_overlap=chunk_overlap,
            tokens_per_chunk=tokens_per_chunk,
            encode=encode,
            decode=decode,
        ),
        tick,
    )


# Adapted from - https://github.com/langchain-ai/langchain/blob/77b359edf5df0d37ef0d539f678cf64f5557cb54/libs/langchain/langchain/text_splitter.py#L471
# So we could have better control over the chunking process
def split_text_on_tokens(
    texts: list[str], enc: Tokenizer, tick: ProgressTicker
) -> list[TextChunk]:
    """Split incoming text and return chunks."""
    result = []
    mapped_ids = []

    for source_doc_idx, text in enumerate(texts):
        encoded = enc.encode(text)
        tick(1)
        mapped_ids.append((source_doc_idx, encoded))

    input_ids: list[tuple[int, int]] = [
        (source_doc_idx, id) for source_doc_idx, ids in mapped_ids for id in ids
    ]

    start_idx = 0
    cur_idx = min(start_idx + enc.tokens_per_chunk, len(input_ids))
    chunk_ids = input_ids[start_idx:cur_idx]
    while start_idx < len(input_ids):
        chunk_text = enc.decode([id for _, id in chunk_ids])
        # Fix 1: strip the replacement characters produced by decoding a
        # token window that splits a multi-byte UTF-8 character
        chunk_text = chunk_text.strip("�")
        # Fix 2: trim the partial sentences at both ends of the chunk
        chunk_text = trim_sentence(chunk_text)
        doc_indices = list({doc_idx for doc_idx, _ in chunk_ids})
        result.append(
            TextChunk(
                text_chunk=chunk_text,
                source_doc_indices=doc_indices,
                # Recount tokens on the cleaned text rather than using len(chunk_ids)
                n_tokens=len(enc.encode(chunk_text)),
            )
        )
        start_idx += enc.tokens_per_chunk - enc.chunk_overlap
        cur_idx = min(start_idx + enc.tokens_per_chunk, len(input_ids))
        chunk_ids = input_ids[start_idx:cur_idx]

    return result
```
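To see what `trim_sentence` does to a chunk whose window starts and ends mid-sentence, here is a standalone, runnable copy of that helper (repeated here only so the sketch is self-contained) applied to a made-up example:

```python
# Same logic as trim_sentence in the module above, repeated so this
# snippet runs on its own.
DELIMITERS = [',', '、', '。', '!', '?', ';', ':', '.']

def trim_sentence(sentence):
    # Drop the fragment before the first delimiter ...
    start_index = 0
    for i, char in enumerate(sentence):
        if char in DELIMITERS:
            start_index = i + 1
            break
    # ... and the fragment after the last delimiter.
    end_index = len(sentence)
    for i in range(len(sentence) - 1, -1, -1):
        if sentence[i] in DELIMITERS:
            end_index = i + 1
            break
    return sentence[start_index:end_index]

# A chunk that begins and ends in the middle of a sentence:
print(trim_sentence("半句话。完整的一句。又半句"))  # → 完整的一句。
```

Note the trade-off: the leading and trailing fragments are dropped from this chunk, so the `chunk_overlap` setting is what keeps that text available in the neighboring chunks.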