## Dataset Summary
RetrievalQA is a short-form open-domain question answering (QA) dataset consisting of 1,271 questions covering new world knowledge and long-tail knowledge. We ensure that the knowledge necessary to answer the questions is absent from most LLMs; therefore, LLMs must truthfully decide whether to retrieve in order to answer the questions correctly.
RetrievalQA enables us to evaluate the effectiveness of adaptive retrieval-augmented generation (RAG) approaches, an aspect largely overlooked in prior studies and recent RAG evaluation systems, which focus only on task performance, the relevance of the retrieved context, or the faithfulness of answers.
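The adaptive-retrieval behavior this dataset is built to evaluate can be sketched as follows. This is a hypothetical illustration, not the method from the paper: the confidence signal, threshold, and `retrieve_fn` stub are all assumptions made for the example.

```python
def answer_adaptively(question, closed_book_answer, confidence, retrieve_fn, threshold=0.5):
    """Answer from parametric knowledge when confident; otherwise fall back to retrieval.

    Returns (answer, used_retrieval). The confidence score and threshold are
    illustrative placeholders for whatever signal an adaptive RAG system uses.
    """
    if confidence >= threshold:
        # The model trusts its own (closed-book) answer.
        return closed_book_answer, False
    # Low confidence: retrieve evidence and ground the answer in it.
    evidence = retrieve_fn(question)
    return f"answer grounded in: {evidence}", True

# Toy usage with a stub retriever standing in for a real search component.
answer, used_retrieval = answer_adaptively(
    "What percentage of couples are 'sleep divorced'?",
    closed_book_answer="unknown",
    confidence=0.2,
    retrieve_fn=lambda q: "survey text mentioning 15%",
)
```

Since RetrievalQA questions are chosen so that the needed knowledge is absent from most LLMs, a well-calibrated system should take the retrieval branch for nearly all of them.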
## Dataset Sources
- Repository: https://github.com/hyintell/RetrievalQA
- Paper: https://arxiv.org/abs/2402.16457
## Dataset Structure
Here is an example of a data instance:
```json
{
  "data_source": "realtimeqa",
  "question_id": "realtimeqa_20231013_1",
  "question": "What percentage of couples are 'sleep divorced', according to new research?",
  "ground_truth": ["15%"],
  "context": [
    {
      "title": "Do We Sleep Longer When We Share a Bed?",
      "text": "1.4% of respondents have started a sleep divorce, or sleeping separately from their partner, and maintained it in the past year. Adults who have ..."
    },
    ...
  ]
}
```
where:

- `data_source`: the source dataset the question comes from
- `question`: the question text
- `ground_truth`: a list of acceptable answers
- `context`: a list of dictionaries of retrieved relevant evidence. Note that the `title` of a document may be empty.
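A record is plain JSON, so it can be consumed with the standard library alone. A minimal sketch, using the example instance above (with the context truncated) to build a retrieval-augmented prompt and score a prediction by simple exact match; the normalization here is illustrative, not the paper's official metric:

```python
import json

# Record mirroring the example instance above (context truncated for brevity).
record_json = """
{
  "data_source": "realtimeqa",
  "question_id": "realtimeqa_20231013_1",
  "question": "What percentage of couples are 'sleep divorced', according to new research?",
  "ground_truth": ["15%"],
  "context": [
    {
      "title": "Do We Sleep Longer When We Share a Bed?",
      "text": "1.4% of respondents have started a sleep divorce, or sleeping separately from their partner."
    }
  ]
}
"""
record = json.loads(record_json)

# Concatenate evidence passages, then the question. Titles may be empty,
# so only each document's "text" field is used here.
passages = [doc["text"] for doc in record["context"]]
prompt = "\n".join(passages) + "\n\nQuestion: " + record["question"]

def is_correct(prediction: str, answers: list[str]) -> bool:
    """Case- and whitespace-insensitive exact match against any gold answer."""
    return prediction.strip().lower() in [a.strip().lower() for a in answers]

print(is_correct("15%", record["ground_truth"]))  # True
```

Because `ground_truth` is a list, a prediction should be checked against every entry rather than just the first one.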
## Citation
```bibtex
@misc{zhang2024retrievalqa,
  title={RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering},
  author={Zihan Zhang and Meng Fang and Ling Chen},
  year={2024},
  eprint={2402.16457},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```