Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas
Commit 82b49a0 · 1 parent: ffdbb80 · committed by parquet-converter

Update parquet files
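This commit removes the script-based loader (ai2_arc.py), the dataset card, and the legacy metadata, and replaces them with pre-converted parquet files: one directory per config (ARC-Challenge, ARC-Easy) with one file per split. As a minimal sketch of how the converted layout can be read with the Datasets library (assuming a local checkout of this repo with the LFS-backed files already pulled; the paths are the ones added below):

```python
# Minimal sketch: load the converted parquet splits with the `datasets` library.
# Assumes a local clone of this dataset repo with `git lfs pull` already run,
# so the parquet files below are real files rather than LFS pointers.
from datasets import load_dataset

arc_challenge = load_dataset(
    "parquet",
    data_files={
        "train": "ARC-Challenge/ai2_arc-train.parquet",
        "validation": "ARC-Challenge/ai2_arc-validation.parquet",
        "test": "ARC-Challenge/ai2_arc-test.parquet",
    },
)
print(arc_challenge["train"][0])  # {'id': ..., 'question': ..., 'choices': {...}, 'answerKey': ...}
```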
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
ARC-Challenge/ai2_arc-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2b94e2f3fde8f37becb9a4232e957faa6aefa70adcdc34cdf939ec8d8240e12
+ size 203807

ARC-Challenge/ai2_arc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8343a1c70f081f41ba674f7559c4dddc17a85835fc0149ba6d138cedf72b34ff
+ size 189908

ARC-Challenge/ai2_arc-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d615e7a713287c30db5c1ea1d0ee258a5e8637221dcb8cf4bdb57037e71b8e5e
+ size 55742

ARC-Easy/ai2_arc-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60f08f8e640fc4579a1ba439436bafcfcf1d6147e0cf6a4498a49e640716b06b
+ size 346256

ARC-Easy/ai2_arc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:088e260fb83850f01726112f529d431dc4ed6987840ce8e940ec0444dbdeeaf5
+ size 330597

ARC-Easy/ai2_arc-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2552da75b32032256d2d2dcc67f6bf6cf96e12aa0bb1afa980193b7c2f76e856
+ size 86079
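The entries above are Git LFS pointer files (version/oid/size); the actual parquet payloads live in LFS storage. Once fetched (for example with `git lfs pull`), a single split can also be inspected directly with pandas; a small sketch, using one of the paths added above:

```python
# Sketch: inspect one converted split directly with pandas (requires a parquet engine such as pyarrow).
# Assumes the LFS-backed parquet file has been fetched, e.g. via `git lfs pull`.
import pandas as pd

df = pd.read_parquet("ARC-Challenge/ai2_arc-test.parquet")
print(df.columns.tolist())    # expected: ['id', 'question', 'choices', 'answerKey']
print(len(df))                # 1172 ARC-Challenge test questions per the dataset card below
print(df.iloc[0]["question"])
```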
README.md DELETED
@@ -1,270 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- language_bcp47:
- - en-US
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - open-domain-qa
- - multiple-choice-qa
- paperswithcode_id: null
- pretty_name: Ai2Arc
- dataset_info:
- - config_name: ARC-Challenge
-   features:
-   - name: id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: choices
-     sequence:
-     - name: text
-       dtype: string
-     - name: label
-       dtype: string
-   - name: answerKey
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 351888
-     num_examples: 1119
-   - name: test
-     num_bytes: 377740
-     num_examples: 1172
-   - name: validation
-     num_bytes: 97254
-     num_examples: 299
-   download_size: 680841265
-   dataset_size: 826882
- - config_name: ARC-Easy
-   features:
-   - name: id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: choices
-     sequence:
-     - name: text
-       dtype: string
-     - name: label
-       dtype: string
-   - name: answerKey
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 623254
-     num_examples: 2251
-   - name: test
-     num_bytes: 661997
-     num_examples: 2376
-   - name: validation
-     num_bytes: 158498
-     num_examples: 570
-   download_size: 680841265
-   dataset_size: 1443749
- ---
-
- # Dataset Card for "ai2_arc"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://allenai.org/data/arc](https://allenai.org/data/arc)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1298.60 MB
- - **Size of the generated dataset:** 2.17 MB
- - **Total amount of disk used:** 1300.77 MB
-
- ### Dataset Summary
-
- A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
- advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
- only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also
- including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### ARC-Challenge
-
- - **Size of downloaded dataset files:** 649.30 MB
- - **Size of the generated dataset:** 0.79 MB
- - **Total amount of disk used:** 650.09 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "answerKey": "B",
-     "choices": {
-         "label": ["A", "B", "C", "D"],
-         "text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."]
-     },
-     "id": "Mercury_SC_405487",
-     "question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?"
- }
- ```
-
- #### ARC-Easy
-
- - **Size of downloaded dataset files:** 649.30 MB
- - **Size of the generated dataset:** 1.38 MB
- - **Total amount of disk used:** 650.68 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "answerKey": "B",
-     "choices": {
-         "label": ["A", "B", "C", "D"],
-         "text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."]
-     },
-     "id": "Mercury_SC_405487",
-     "question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### ARC-Challenge
- - `id`: a `string` feature.
- - `question`: a `string` feature.
- - `choices`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `label`: a `string` feature.
- - `answerKey`: a `string` feature.
-
- #### ARC-Easy
- - `id`: a `string` feature.
- - `question`: a `string` feature.
- - `choices`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `label`: a `string` feature.
- - `answerKey`: a `string` feature.
-
- ### Data Splits
-
- | name |train|validation|test|
- |-------------|----:|---------:|---:|
- |ARC-Challenge| 1119| 299|1172|
- |ARC-Easy | 2251| 570|2376|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @article{allenai:arc,
-     author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
-               Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
-     title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
-     journal = {arXiv:1803.05457v1},
-     year = {2018},
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
ai2_arc.py DELETED
@@ -1,132 +0,0 @@
- """TODO(arc): Add a description here."""
-
-
- import json
- import os
-
- import datasets
-
-
- # TODO(ai2_arc): BibTeX citation
- _CITATION = """\
- @article{allenai:arc,
-     author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
-               Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
-     title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
-     journal = {arXiv:1803.05457v1},
-     year = {2018},
- }
- """
-
- # TODO(ai2_arc):
- _DESCRIPTION = """\
- A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
- advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
- only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also
- including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
- """
-
- _URL = "https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip"
-
-
- class Ai2ArcConfig(datasets.BuilderConfig):
-     """BuilderConfig for Ai2ARC."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for Ai2Arc.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(Ai2ArcConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-
-
- class Ai2Arc(datasets.GeneratorBasedBuilder):
-     """TODO(arc): Short description of my dataset."""
-
-     # TODO(arc): Set up version.
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         Ai2ArcConfig(
-             name="ARC-Challenge",
-             description="""\
-             Challenge Set of 2590 “hard” questions (those that both a retrieval and a co-occurrence method fail to answer correctly)
-             """,
-         ),
-         Ai2ArcConfig(
-             name="ARC-Easy",
-             description="""\
-             Easy Set of 5197 questions
-             """,
-         ),
-     ]
-
-     def _info(self):
-         # TODO(ai2_arc): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "choices": datasets.features.Sequence(
-                         {"text": datasets.Value("string"), "label": datasets.Value("string")}
-                     ),
-                     "answerKey": datasets.Value("string")
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://allenai.org/data/arc",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(ai2_arc): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         data_dir = os.path.join(dl_dir, "ARC-V1-Feb2018-2")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Train.jsonl")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Test.jsonl")},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, self.config.name + "-Dev.jsonl")},
-             ),
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(ai2_arc): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             for row in f:
-                 data = json.loads(row)
-                 answerkey = data["answerKey"]
-                 id_ = data["id"]
-                 question = data["question"]["stem"]
-                 choices = data["question"]["choices"]
-                 text_choices = [choice["text"] for choice in choices]
-                 label_choices = [choice["label"] for choice in choices]
-                 yield id_, {
-                     "id": id_,
-                     "answerKey": answerkey,
-                     "question": question,
-                     "choices": {"text": text_choices, "label": label_choices},
-                 }
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"ARC-Challenge": {"description": "A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\n advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\n only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\n including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.\n", "citation": "@article{allenai:arc,\n author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and\n Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},\n title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},\n journal = {arXiv:1803.05457v1},\n year = {2018},\n}\n", "homepage": "https://allenai.org/data/arc", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "ai2_arc", "config_name": "ARC-Challenge", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 377740, "num_examples": 1172, "dataset_name": "ai2_arc"}, "train": {"name": "train", "num_bytes": 351888, "num_examples": 1119, "dataset_name": "ai2_arc"}, "validation": {"name": "validation", "num_bytes": 97254, "num_examples": 299, "dataset_name": "ai2_arc"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip": {"num_bytes": 680841265, "checksum": "6d2d5ab50b2ceec6ba5f79c921be77cf2de712ea25a2b3f4fff3acc101cecfa0"}}, "download_size": 680841265, "dataset_size": 826882, "size_in_bytes": 681668147}, "ARC-Easy": {"description": "A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\n advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\n only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\n including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.\n", "citation": "@article{allenai:arc,\n author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and\n Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},\n title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},\n journal = {arXiv:1803.05457v1},\n year = {2018},\n}\n", "homepage": "https://allenai.org/data/arc", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "ai2_arc", "config_name": "ARC-Easy", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 661997, "num_examples": 2376, "dataset_name": "ai2_arc"}, "train": {"name": "train", "num_bytes": 623254, "num_examples": 2251, "dataset_name": "ai2_arc"}, "validation": {"name": "validation", "num_bytes": 158498, "num_examples": 570, "dataset_name": "ai2_arc"}}, "download_checksums": {"https://s3-us-west-2.amazonaws.com/ai2-website/data/ARC-V1-Feb2018.zip": {"num_bytes": 680841265, "checksum": "6d2d5ab50b2ceec6ba5f79c921be77cf2de712ea25a2b3f4fff3acc101cecfa0"}}, "download_size": 680841265, "dataset_size": 1443749, "size_in_bytes": 682285014}}
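For reference, the deleted ai2_arc.py above produced exactly the id / question / choices / answerKey records that the parquet files now store, by flattening the original ARC JSONL release. A minimal standalone sketch of that same mapping, assuming a local copy of one of the original JSONL files from ARC-V1-Feb2018.zip (e.g. ARC-Challenge/ARC-Challenge-Train.jsonl):

```python
# Standalone sketch of the JSONL -> example mapping performed by the deleted loading script.
# Assumes a local ARC JSONL file such as "ARC-Challenge/ARC-Challenge-Train.jsonl"
# extracted from ARC-V1-Feb2018.zip.
import json

def iter_examples(filepath):
    """Yield (id, example) pairs in the same schema as the parquet files."""
    with open(filepath, encoding="utf-8") as f:
        for row in f:
            data = json.loads(row)
            choices = data["question"]["choices"]
            yield data["id"], {
                "id": data["id"],
                "answerKey": data["answerKey"],
                "question": data["question"]["stem"],
                "choices": {
                    "text": [c["text"] for c in choices],
                    "label": [c["label"] for c in choices],
                },
            }

if __name__ == "__main__":
    for _, example in iter_examples("ARC-Challenge/ARC-Challenge-Train.jsonl"):
        print(example)
        break
```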