
Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception

Published on Oct 16 · Submitted by Duguce on Oct 22
Abstract

Retrieval-Augmented Generation (RAG), while serving as a viable complement to large language models (LLMs), often overlooks the crucial aspect of text chunking within its pipeline, which impacts the quality of knowledge-intensive tasks. This paper introduces the concept of Meta-Chunking, which refers to a granularity between sentences and paragraphs, consisting of a collection of sentences within a paragraph that have deep linguistic logical connections. To implement Meta-Chunking, we designed two strategies based on LLMs: Margin Sampling Chunking and Perplexity Chunking. The former employs LLMs to perform binary classification on whether consecutive sentences need to be segmented, making decisions based on the probability difference obtained from margin sampling. The latter precisely identifies text chunk boundaries by analyzing the characteristics of perplexity distribution. Additionally, considering the inherent complexity of different texts, we propose a strategy that combines Meta-Chunking with dynamic merging to achieve a balance between fine-grained and coarse-grained text chunking. Experiments conducted on eleven datasets demonstrate that Meta-Chunking can more efficiently improve the performance of single-hop and multi-hop question answering based on RAG. For instance, on the 2WikiMultihopQA dataset, it outperforms similarity chunking by 1.32 while only consuming 45.8% of the time. Our code is available at https://github.com/IAAR-Shanghai/Meta-Chunking.
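As a rough sketch of the Margin Sampling Chunking idea, the snippet below asks a causal language model whether two consecutive sentences belong in separate chunks and uses the probability margin between the "yes" and "no" answers as the segmentation signal. The model name, prompt wording, and threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of Margin Sampling Chunking (not the paper's exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-1.5B-Instruct"  # assumption: any small instruct LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def should_split(sent_a: str, sent_b: str, threshold: float = 0.0) -> bool:
    """True if the LM leans toward placing the two sentences in separate chunks."""
    prompt = (
        "Should the following two sentences be placed in separate text chunks? "
        "Answer yes or no.\n"
        f"Sentence 1: {sent_a}\nSentence 2: {sent_b}\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer(" no", add_special_tokens=False).input_ids[0]
    # The probability margin between the two answers drives the binary decision.
    return (probs[yes_id] - probs[no_id]).item() > threshold
```

Sliding this test over every pair of consecutive sentences yields the meta-chunk boundaries; the authors' repository contains the actual implementation.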

Community

Paper submitter

Hello everyone, I would like to introduce our recent research paper!

This paper proposes the concept of Meta-Chunking along with two implementation strategies, Margin Sampling Chunking and PPL Chunking, which capture the inherent logical structure of text more precisely and thereby provide a powerful tool for optimizing text segmentation within the RAG pipeline. To balance the effectiveness of fine-grained and coarse-grained segmentation, we combine Meta-Chunking with a dynamic merging strategy that addresses the limitations chunking faces on diverse texts. A comprehensive evaluation using multiple metrics on eleven datasets demonstrates that Meta-Chunking outperforms both rule-based and similarity-based chunking, while achieving a better balance among performance, time cost, and computational cost than current LLM-based approaches.
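To make the PPL Chunking and dynamic-merging ideas above concrete, here is a minimal sketch: it scores each sentence's perplexity under the preceding context, cuts after local minima of the perplexity curve, and then greedily merges the resulting meta-chunks up to a character budget. The model choice, the local-minimum test, and the merge budget are illustrative assumptions rather than the paper's exact algorithm; see the GitHub repository for the real implementation.

```python
# Illustrative sketch of PPL Chunking with dynamic merging (not the exact algorithm).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-1.5B-Instruct"  # assumption: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def sentence_ppl(context: str, sentence: str) -> float:
    """Approximate perplexity of `sentence` conditioned on the preceding `context`."""
    ids = tokenizer(context + " " + sentence, return_tensors="pt").input_ids
    n_sent = len(tokenizer(" " + sentence, add_special_tokens=False).input_ids)
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    return math.exp(-token_lp[-n_sent:].mean().item())  # average over the sentence's tokens

def ppl_chunk(sentences: list[str]) -> list[list[str]]:
    """Cut after sentences whose perplexity is a local minimum of the sequence."""
    ppls = [sentence_ppl(" ".join(sentences[:i]), s) for i, s in enumerate(sentences)]
    chunks, current = [], []
    for i, sent in enumerate(sentences):
        current.append(sent)
        if 0 < i < len(sentences) - 1 and ppls[i] < ppls[i - 1] and ppls[i] < ppls[i + 1]:
            chunks.append(current)  # boundary at a local PPL minimum
            current = []
    if current:
        chunks.append(current)
    return chunks

def dynamic_merge(chunks: list[list[str]], max_chars: int = 800) -> list[str]:
    """Greedily merge fine-grained meta-chunks up to a coarse-grained budget."""
    merged, buffer = [], ""
    for chunk in chunks:
        text = " ".join(chunk)
        if buffer and len(buffer) + len(text) + 1 > max_chars:
            merged.append(buffer)
            buffer = text
        else:
            buffer = (buffer + " " + text).strip()
    if buffer:
        merged.append(buffer)
    return merged
```

In practice one would cache the model's past key/value states instead of re-encoding the growing context for every sentence; the trade-off between fine-grained scoring and merge granularity is what the dynamic combination strategy tunes.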

We would greatly appreciate it if you could give us a like or share on Hugging Face!


GitHub: https://github.com/IAAR-Shanghai/Meta-Chunking
arXiv: https://arxiv.org/abs/2410.12788


Thanks for sharing! Very informative and insightful work!

