---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- multi-hop
pretty_name: MoreHopQA
size_categories:
- 1K<n<10K
configs:
- config_name: verified
  data_files: data/with_human_verification.json
  default: true
- config_name: unverified
  data_files: data/without_human_verification.json
---
# MoreHopQA: More Than Multi-hop Reasoning
We propose a new multi-hop dataset, MoreHopQA, which shifts from extractive to generative answers. Our dataset is created by utilizing three existing multi-hop datasets: HotpotQA, 2Wiki-MultihopQA, and MuSiQue. Instead of relying solely on factual reasoning, we enhance the existing multi-hop questions by adding another layer of questioning.
## Dataset Details

### Dataset Description
Our dataset is created through a semi-automated process and contains 1,118 samples, all of which have undergone human verification.
For each sample, we share our six evaluation cases, including the new question, the original question, all necessary subquestions, and a composite question from the second entity to the final answer (case 3 above).
We share both a version in which every question was verified by a human and a larger, solely automatically generated version ("unverified"). We recommend primarily using the human-verified version.
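
For reference, a minimal loading sketch with the Hugging Face `datasets` library; the repository id below is an assumption, so substitute the one you actually load from:

```python
from datasets import load_dataset

# NOTE: "alabnii/morehopqa" is an assumed Hub repository id; replace it with
# the id of the copy you are loading.
verified = load_dataset("alabnii/morehopqa", "verified")      # human-verified (default)
unverified = load_dataset("alabnii/morehopqa", "unverified")  # larger, fully automatic
```
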
- **Curated by:** Aizawa Lab, National Institute of Informatics (NII), Tokyo, Japan
- **Language(s) (NLP):** English
### Dataset Sources
- **Repository:** [github.com/Alab-NII/morehopqa](https://github.com/Alab-NII/morehopqa)
- **Paper:** [More Information Needed]
## Uses
We provide our dataset to the community and hope that other researchers find it a useful tool for analyzing and improving the multi-hop reasoning capabilities of their models. MoreHopQA is designed to challenge systems with complex queries that require synthesizing information from multiple sources, pushing them toward nuanced, context-rich responses. We also hope the dataset spurs further innovation in reasoning models, helping to bridge the gap between human-like understanding and current AI capabilities.
## Dataset Structure
As noted above, we recommend primarily using the human-verified version, which is also the default configuration when loading the dataset.
Each sample in the dataset contains the following fields (an inspection sketch follows the list):
- `question`: Our new multi-hop question with the added reasoning step (case 1 above)
- `answer`: The answer to the last hop (cases 1, 3, and 4 above)
- `context`: Relevant context information for answering the previous question (relevant for all cases except case 4)
- `previous_question`: The previous 2-hop question from the original dataset (case 2 above)
- `previous_answer`: The answer to the previous 2-hop question (cases 2 and 5 above)
- `question_decomposition`: The questions of the reasoning chain, as a list of entries with the keys `sub_id` (position in the chain), `question`, `answer`, and `paragraph_support_title` (relevant context paragraph). (`sub_id` 1 → case 6; `sub_id` 2 → case 5; `sub_id` 3 → case 4)
- `question_on_last_hop`: The question for case 3 above
- `answer_type`: Type of the expected answer
- `previous_answer_type`: Type of the answer to the previous 2-hop question
- `no_of_hops`: Number of extra hops needed to answer the additional reasoning question (may be more than one for more complicated tasks)
- `reasoning_type`: The kind of reasoning required for the additional question; one or more of "Symbolic", "Arithmetic", and "Commonsense"
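
To make the schema concrete, here is a hedged inspection sketch built on the field names above; the repository id and the `train` split name are assumptions:

```python
from datasets import load_dataset

# Assumed repository id and split name; adjust to your setup.
ds = load_dataset("alabnii/morehopqa", "verified")["train"]
sample = ds[0]

# New question with the extra reasoning layer and its final answer (case 1).
print(sample["question"], "->", sample["answer"])

# Original 2-hop question from the source dataset (case 2).
print(sample["previous_question"], "->", sample["previous_answer"])

# Walk the reasoning chain; sub_id gives the position in the chain.
for step in sample["question_decomposition"]:
    print(step["sub_id"], step["question"], "->", step["answer"])

# Example filter: keep samples whose extra hop requires arithmetic reasoning.
# The membership test works whether reasoning_type is a string or a list.
arithmetic = ds.filter(lambda s: "Arithmetic" in s["reasoning_type"])
```
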
## Dataset Creation

### Source Data
Our dataset is created by utilizing three existing multi-hop datasets: HotpotQA, 2Wiki-MultihopQA, and MuSiQue.
## Citation
If you find this dataset helpful, please consider citing our paper.
**BibTeX:**
[More Information Needed]