---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- multi-hop
pretty_name: MoreHopQA
size_categories:
- 1K<n<10K
configs:
- config_name: verified
  data_files: "data/with_human_verification.json"
  default: true
- config_name: unverified
  data_files: "data/without_human_verification.json"
---

# MoreHopQA: More Than Multi-hop Reasoning

<!-- Provide a quick summary of the dataset. -->

We propose a new multi-hop dataset, MoreHopQA, which shifts from extractive to generative answers. Our dataset is created by utilizing three existing multi-hop datasets: [HotpotQA](https://github.com/hotpotqa/hotpot), [2Wiki-MultihopQA](https://github.com/Alab-NII/2wikimultihop), and [MuSiQue](https://github.com/StonyBrookNLP/musique). Instead of relying solely on factual reasoning, we enhance the existing multi-hop questions by adding another layer of questioning.

<div align="center">
<img src="figures/overall-1.png" style="width:50%">
</div>

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Our dataset is created through a semi-automated process, resulting in a dataset with 1118 samples that have undergone human verification.  
For each sample, we share our 6 evaluation cases, including the new question, the original question, all the necessary subquestions, and a composite question from the second entity to the final answer (case 3 above).
We share both a version in which each question was verified by a human and a larger, solely automatically generated version ("unverified"). We recommend primarily using the human-verified version.

- **Curated by:** Aizawa Lab, National Institute of Informatics (NII), Tokyo, Japan
- **Language(s) (NLP):** English
- <p xmlns:cc="http://creativecommons.org/ns#" xmlns:dct="http://purl.org/dc/terms/"><b>License:</b> The <a property="dct:title" rel="cc:attributionURL" href="https://github.com/Alab-NII/morehopqa">MoreHopQA</a> dataset is licensed under <a href="https://creativecommons.org/licenses/by/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer" style="display:inline-block;">CC BY 4.0</a>.</p>



### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [github.com/Alab-NII/morehopqa](https://github.com/Alab-NII/morehopqa)
- **Paper:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
We provide our dataset to the community and hope that other researchers find it a useful tool to analyze and improve the multi-hop reasoning capabilities of their models. MoreHopQA is designed to challenge systems with complex queries requiring synthesis from multiple sources, thereby advancing the field in understanding and generating nuanced, context-rich responses. Additionally, we aim for this dataset to spur further innovation in reasoning models, helping to bridge the gap between human-like understanding and AI capabilities.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

We share both a version in which each question was verified by a human and a larger, solely automatically generated version ("unverified"). We recommend primarily using the human-verified version, which is also the default option when loading the dataset.

Each sample in the dataset contains the following fields:

- **question**: Our new multi-hop question with added reasoning (case 1 above)
- **answer**: The answer to the last hop (case 1, 3 and 4 above)
- **context**: Relevant context information to answer the previous question (relevant for all cases except case 4)
- **previous_question**: The previous 2-hop question from the original dataset (case 2 above)
- **previous_answer**: The answer to the previous 2-hop question (case 2 and 5 above)
- **question_decomposition**: The full reasoning chain as a list of entries with the keys "sub_id" (position in the chain), "question", "answer", and "paragraph_support_title" (title of the relevant context paragraph). (sub_id 1 → case 6; sub_id 2 → case 5; sub_id 3 → case 4)
- **question_on_last_hop**: Question for case 3 above
- **answer_type**: Type of the expected answer
- **previous_answer_type**: Type of the answer to the previous 2-hop question
- **no_of_hops**: Number of extra hops to answer the additional reasoning question (might be more than one for more complicated tasks)
- **reasoning_type**: One or more of "Symbolic", "Arithmetic", and "Commonsense", depending on which kind of reasoning is required for the additional reasoning step
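To illustrate the schema, the sketch below builds the ordered reasoning chain from the `question_decomposition` field of a sample. The sample values here are invented for illustration only (they are not taken from the dataset); only the field names follow the list above.

```python
# A made-up sample following the MoreHopQA field schema (values are illustrative).
sample = {
    "question": "What is the last letter of the first name of the director "
                "of the film that features the character Dom Cobb?",
    "answer": "r",
    "previous_question": "Who directed the film that features the character Dom Cobb?",
    "previous_answer": "Christopher Nolan",
    "answer_type": "string",
    "previous_answer_type": "person",
    "no_of_hops": 1,
    "reasoning_type": "Symbolic",
    "question_decomposition": [
        # Entries may arrive in any order; "sub_id" gives the position in the chain.
        {"sub_id": 2, "question": "Who directed that film?",
         "answer": "Christopher Nolan", "paragraph_support_title": "Inception"},
        {"sub_id": 1, "question": "Which film features the character Dom Cobb?",
         "answer": "Inception", "paragraph_support_title": "Inception"},
        {"sub_id": 3, "question": "What is the last letter of the first name 'Christopher'?",
         "answer": "r", "paragraph_support_title": None},
    ],
}

def reasoning_chain(sample):
    """Return the (question, answer) pairs of the chain, ordered by sub_id."""
    steps = sorted(sample["question_decomposition"], key=lambda s: s["sub_id"])
    return [(s["question"], s["answer"]) for s in steps]
```

The answer of the last sub-question coincides with the sample's final `answer`, which is a quick consistency check when iterating over the dataset.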
  
## Dataset Creation

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
Our dataset is created by utilizing three existing multi-hop datasets: [HotpotQA](https://github.com/hotpotqa/hotpot), [2Wiki-MultihopQA](https://github.com/Alab-NII/2wikimultihop), and [MuSiQue](https://github.com/StonyBrookNLP/musique).

## Citation

If you find this dataset helpful, please consider citing our paper:

**BibTeX:**

[More Information Needed]