system (HF staff) committed on
Commit
ba79068
1 Parent(s): 3c97d3b

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1):
  1. README.md +178 -0
README.md ADDED
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
languages:
- en
licenses:
- cc-by-4-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for "squad"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB

### [Dataset Summary](#dataset-summary)

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text, or span, from the corresponding reading passage (or the question may be unanswerable).

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### plain_text

- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB

An example of 'train' looks as follows.
```
{
    "answers": {
        "answer_start": [1],
        "text": ["This is a test text"]
    },
    "context": "This is a test context.",
    "id": "1",
    "question": "Is this a test?",
    "title": "train test"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.

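Since `answer_start` is a character offset into `context`, a consistent record lets you recover the answer span by slicing. A minimal sketch with toy data (the record and the `extract_span` helper below are ours for illustration, not part of the dataset):

```python
def extract_span(context: str, answer_start: int, text: str) -> str:
    """Slice the substring of `context` that an annotation points at."""
    return context[answer_start : answer_start + len(text)]

# Toy record in the SQuAD `answers` layout: parallel lists of
# answer texts and their character offsets into the context.
context = "The capital of Denmark is Copenhagen."
answers = {"text": ["Copenhagen"], "answer_start": [26]}

span = extract_span(context, answers["answer_start"][0], answers["text"][0])
# For a consistent record, span == answers["text"][0].
```

Checking `extract_span(...) == text` is a common sanity test when preprocessing extractive-QA data, since misaligned offsets are a frequent source of silent training errors.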
### [Data Splits Sample Size](#data-splits-sample-size)

| name       | train | validation |
|------------|------:|-----------:|
| plain_text | 87599 |      10570 |

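The split sizes above can be sanity-checked programmatically; a small sketch (the counts are copied from the table, the totals are computed here):

```python
# Published example counts for the plain_text configuration.
splits = {"train": 87599, "validation": 10570}

total = sum(splits.values())
train_fraction = splits["train"] / total
# Roughly a 90/10 train/validation partition; this release ships
# no separate test split.
```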
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{2016arXiv160605250R,
  author        = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy},
  title         = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
  journal       = {arXiv e-prints},
  year          = 2016,
  eid           = {arXiv:1606.05250},
  pages         = {arXiv:1606.05250},
  archivePrefix = {arXiv},
  eprint        = {1606.05250},
}
```

### [Contributions](#contributions)

Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), and [@thomwolf](https://github.com/thomwolf) for adding this dataset.