albertvillanova (HF staff) committed 44a5f3f (verified · parent: 14110dd)

Add documentation card

Files changed (1): README.md (+303, −1)

README.md (new content):
---
annotations_creators:
- crowdsourced
languages:
- en
language_creators:
- found
licenses:
- cc-by-sa-3.0
multilinguality:
- monolingual
paperswithcode_id: feverous
pretty_name: FEVEROUS
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids:
- text-classification-other-knowledge-verification
---
# Dataset Card for "feverous"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://fever.ai/dataset/feverous.html
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information](https://arxiv.org/abs/2106.05707)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.

The FEVER workshops are a venue for work on verifiable knowledge extraction and for stimulating progress in this direction.

FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact
verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of
sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes,
or does not provide enough information to reach a verdict. The dataset also contains annotation metadata such as
annotator actions (query keywords, clicks on page, time signatures), and the type of challenge each claim poses.

### Supported Tasks and Leaderboards

The task is verification of textual claims against textual sources.

When compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.

### Languages

The dataset is in English (`en`).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 187.82 MB
- **Size of the generated dataset:** 123.25 MB
- **Total amount of disk used:** 311.07 MB

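These sizes can also be read from the builder metadata without downloading the full data; a sketch under the assumption that the dataset is hosted on the Hub as `fever/feverous`:

```python
from datasets import load_dataset_builder

# Assumption: the Hub identifier for this dataset; adjust if it differs.
builder = load_dataset_builder("fever/feverous")
print(builder.info.download_size)  # bytes to download
print(builder.info.dataset_size)   # bytes of generated data on disk
```
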
An example looks as follows:
```python
{'id': 24435,
 'label': 1,
 'claim': 'Michael Folivi competed with ten teams from 2016 to 2021, appearing in 54 games and making seven goals in total.',
 'evidence': [{'content': ['Michael Folivi_cell_1_2_0',
                           'Michael Folivi_cell_1_7_0',
                           'Michael Folivi_cell_1_8_0',
                           'Michael Folivi_cell_1_9_0',
                           'Michael Folivi_cell_1_12_0'],
               'context': [['Michael Folivi_title',
                            'Michael Folivi_section_4',
                            'Michael Folivi_header_cell_1_0_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_section_4',
                            'Michael Folivi_header_cell_1_0_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_section_4',
                            'Michael Folivi_header_cell_1_0_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_section_4',
                            'Michael Folivi_header_cell_1_0_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_section_4',
                            'Michael Folivi_header_cell_1_0_0']]},
              {'content': ['Michael Folivi_cell_0_13_1',
                           'Michael Folivi_cell_0_14_1',
                           'Michael Folivi_cell_0_15_1',
                           'Michael Folivi_cell_0_16_1',
                           'Michael Folivi_cell_0_18_1'],
               'context': [['Michael Folivi_title',
                            'Michael Folivi_header_cell_0_13_0',
                            'Michael Folivi_header_cell_0_11_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_header_cell_0_14_0',
                            'Michael Folivi_header_cell_0_11_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_header_cell_0_15_0',
                            'Michael Folivi_header_cell_0_11_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_header_cell_0_16_0',
                            'Michael Folivi_header_cell_0_11_0'],
                           ['Michael Folivi_title',
                            'Michael Folivi_header_cell_0_18_0',
                            'Michael Folivi_header_cell_0_11_0']]}],
 'annotator_operations': [{'operation': 'start', 'value': 'start', 'time': 0.0},
                          {'operation': 'Now on', 'value': '?search=', 'time': 0.78},
                          {'operation': 'search', 'value': 'Michael Folivi', 'time': 78.101},
                          {'operation': 'Now on', 'value': 'Michael Folivi', 'time': 78.822},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_1_2_0', 'time': 96.202},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_1_7_0', 'time': 96.9},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_1_8_0', 'time': 97.429},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_1_9_0', 'time': 97.994},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_1_12_0', 'time': 99.02},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_13_1', 'time': 106.108},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_14_1', 'time': 106.702},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_15_1', 'time': 107.423},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_16_1', 'time': 108.186},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_17_1', 'time': 108.788},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_header_cell_0_17_0', 'time': 108.8},
                          {'operation': 'Highlighting', 'value': 'Michael Folivi_cell_0_18_1', 'time': 109.469},
                          {'operation': 'Highlighting deleted', 'value': 'Michael Folivi_cell_0_17_1', 'time': 124.28},
                          {'operation': 'Highlighting deleted', 'value': 'Michael Folivi_header_cell_0_17_0', 'time': 124.293},
                          {'operation': 'finish', 'value': 'finish', 'time': 141.351}],
 'expected_challenge': '',
 'challenge': 'Numerical Reasoning'}
```

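An instance like the one above can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hub under the `fever/feverous` identifier:

```python
from datasets import load_dataset

# Assumption: the Hub identifier; adjust if the dataset lives under another name.
dataset = load_dataset("fever/feverous")

example = dataset["train"][0]
print(example["claim"])

# `label` is a ClassLabel: map the stored integer back to its string name.
print(dataset["train"].features["label"].int2str(example["label"]))
```
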
### Data Fields

The data fields are the same among all splits.

- `id` (int): ID of the sample.
- `label` (ClassLabel): Annotated label for the claim. Can be one of {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}.
- `claim` (str): Text of the claim.
- `evidence` (list of dict): Evidence sets (at most three). Each set consists of dictionaries with two fields:
  - `content` (list of str): List of element IDs serving as the evidence for the claim. Each element ID is in the format
    `"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"`, where `[EVIDENCE TYPE]` can be: `sentence`, `cell`, `header_cell`,
    `table_caption`, `item` (see the parsing sketch after this list).
  - `context` (list of list of str): List (for each element ID in `content`) of a list of Wikipedia elements that are
    automatically associated with that element ID and serve as context. This includes an article's title, relevant
    sections (the section and sub-section(s) the element is located in), and for cells the closest row and column
    header (multiple row/column headers if they follow each other).
- `annotator_operations` (list of dict): List of operations an annotator used to find the evidence and reach a verdict,
  given the claim. Each element in the list is a dictionary with the fields:
  - `operation` (str): Operation name. Any of the following:
    - `start`, `finish`: Annotation started/finished. The value is the name of the operation.
    - `search`: Annotator used the Wikipedia search function. The value is the entered search term or the term selected
      from the automatic suggestions. If the annotator did not select any of the suggestions but instead went into
      advanced search, the term is prefixed with "contains...".
    - `hyperlink`: Annotator clicked on a hyperlink in the page. The value is the anchor text of the hyperlink.
    - `Now on`: The page the annotator landed on after a search or a hyperlink click. The value is the PAGE ID.
    - `Page search`: Annotator searched on a page. The value is the search term.
    - `page-search-reset`: Annotator cleared the search box. The value is the name of the operation.
    - `Highlighting`, `Highlighting deleted`: Annotator selected/unselected an element on the page. The value is the
      ELEMENT ID.
    - `back-button-clicked`: Annotator pressed the back button. The value is the name of the operation.
  - `value` (str): Value associated with the operation.
  - `time` (float): Time in seconds from the start of the annotation.
- `expected_challenge` (str): The challenge the claim generator expected would be faced when verifying the claim, one
  of the following: `Numerical Reasoning`, `Multi-hop Reasoning`, `Entity Disambiguation`,
  `Combining Tables and Text`, `Search terms not in claim`, `Other`.
- `challenge` (str): Main challenge of verifying the claim, one of the following: `Numerical Reasoning`,
  `Multi-hop Reasoning`, `Entity Disambiguation`, `Combining Tables and Text`, `Search terms not in claim`, `Other`.

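Element IDs are positional strings, but page titles may themselves contain underscores, so the safest way to split `"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"` is to anchor on a known evidence type from the right. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
import re

# Known evidence types, with "header_cell" before "cell" so the longer
# name is matched first.
EVIDENCE_TYPES = ("header_cell", "table_caption", "sentence", "cell", "item")

def parse_element_id(element_id: str):
    """Split "[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]" into its parts.

    Page titles can contain underscores, so split from the right by
    anchoring on a known evidence type; cells carry several trailing
    indices (e.g. table, row, and column), hence a list of numbers.
    """
    for evidence_type in EVIDENCE_TYPES:
        match = re.search(rf"_{evidence_type}((?:_\d+)+)$", element_id)
        if match:
            page_id = element_id[: match.start()]
            numbers = [int(n) for n in match.group(1).split("_")[1:]]
            return page_id, evidence_type, numbers
    raise ValueError(f"unrecognized element ID: {element_id!r}")

print(parse_element_id("Michael Folivi_cell_1_12_0"))
# -> ('Michael Folivi', 'cell', [1, 12, 0])
print(parse_element_id("Michael Folivi_header_cell_0_11_0"))
# -> ('Michael Folivi', 'header_cell', [0, 11, 0])
```
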
### Data Splits

|                    | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 71291 |       7890 | 7845 |

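To sanity-check the split sizes above and inspect the label balance, the splits can be iterated directly; a sketch under the same Hub-identifier assumption as the loading sketch:

```python
from collections import Counter

from datasets import load_dataset

# Assumption: the Hub identifier for this dataset.
dataset = load_dataset("fever/feverous")

for split_name, split in dataset.items():
    label_feature = split.features["label"]
    # Guard against withheld labels (blind test sets are often encoded as -1).
    counts = Counter(
        label_feature.int2str(i) if i >= 0 else "hidden" for i in split["label"]
    )
    print(f"{split_name}: {len(split)} examples, labels: {dict(counts)}")
```
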
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.
```

### Citation Information

If you use this dataset, please cite:
```bibtex
@inproceedings{Aly21Feverous,
    author        = {Aly, Rami and Guo, Zhijiang and Schlichtkrull, Michael Sejr and Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Cocarascu, Oana and Mittal, Arpit},
    title         = {{FEVEROUS}: Fact Extraction and {VERification} Over Unstructured and Structured information},
    eprint        = {2106.05707},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    year          = {2021}
}
```

### Contributions

Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.