# Toward General Instruction-Following Alignment for Retrieval-Augmented Generation
Website • VIF-RAG-QA-110K • VIF-RAG-QA-20K • arXiv • HF-Paper
We propose an instruction-following alignment pipeline named **VIF-RAG** and an auto-evaluation benchmark named **FollowRAG**:
- **VIF-RAG:** The first automated, scalable, and verifiable data synthesis pipeline for aligning complex instruction following in RAG scenarios. VIF-RAG integrates a verification process at each step of data augmentation and combination. We begin by manually crafting a minimal set of atomic instructions (<100) and then apply instruction composition, quality verification, instruction-query combination, and dual-stage verification to produce the large-scale, high-quality VIF-RAG-QA dataset (>100K); a minimal sketch of this flow appears after this list.
- **FollowRAG:** To address the lack of automatic instruction-following evaluation for RAG systems, we introduce the FollowRAG benchmark, which contains approximately 3K test samples covering 22 categories of general instruction constraints and 4 knowledge-intensive QA datasets. Thanks to its robust pipeline design, FollowRAG integrates seamlessly with different RAG benchmarks.
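
To make the data flow concrete, below is a minimal, self-contained Python sketch of the synthesis stages named above (instruction composition → quality verification → instruction-query combination → dual-stage verification). All function names, the toy checks, and the toy data are illustrative assumptions, not the released implementation, which relies on LLM-based augmentation and verifiers.

```python
# Illustrative sketch of the VIF-RAG synthesis stages; helper names are hypothetical.
import random
from typing import Callable

def compose_instructions(atomic: list[str], k: int = 2, n: int = 10) -> list[str]:
    """Instruction composition: combine k atomic instructions into one composite."""
    return ["; ".join(random.sample(atomic, k)) for _ in range(n)]

def passes_quality_check(instruction: str) -> bool:
    """Quality verification: placeholder for the per-instruction validator."""
    return len(instruction.strip()) > 0

def combine_with_query(instruction: str, query: str) -> dict:
    """Instruction-query combination: attach a composite instruction to a RAG query."""
    return {"instruction": instruction, "query": query}

def dual_stage_verify(sample: dict, verifiers: list[Callable[[dict], bool]]) -> bool:
    """Dual-stage verification: keep a sample only if every verifier accepts it."""
    return all(check(sample) for check in verifiers)

# Toy atomic instructions and queries (the real pipeline starts from <100 atomic instructions).
atomic_instructions = [
    "Answer in exactly three sentences.",
    "Respond in JSON format.",
    "Use a formal tone.",
]
queries = ["Who wrote 'The Old Man and the Sea'?"]

dataset = []
for inst in compose_instructions(atomic_instructions):
    if not passes_quality_check(inst):
        continue
    for q in queries:
        sample = combine_with_query(inst, q)
        if dual_stage_verify(sample, verifiers=[lambda s: bool(s["instruction"])]):
            dataset.append(sample)

print(f"synthesized {len(dataset)} verified samples")
```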
## Citation
Please star our GitHub repo and cite our work if you find this repository helpful.
```bibtex
@article{dong2024general,
  author     = {Guanting Dong and
                Xiaoshuai Song and
                Yutao Zhu and
                Runqi Qiao and
                Zhicheng Dou and
                Ji{-}Rong Wen},
  title      = {Toward General Instruction-Following Alignment for Retrieval-Augmented Generation},
  journal    = {CoRR},
  volume     = {abs/2410.09584},
  year       = {2024},
  url        = {https://doi.org/10.48550/arXiv.2410.09584},
  doi        = {10.48550/ARXIV.2410.09584},
  eprinttype = {arXiv},
  eprint     = {2410.09584},
  timestamp  = {Fri, 22 Nov 2024 21:38:25 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2410-09584.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```