system (HF staff) committed on
Commit
0b3e27f
1 Parent(s): 73d95b0

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md (+163, -0)
README.md ADDED
---
---

# Dataset Card for "arcd"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/husseinmozannar/SOQAL/tree/master/data](https://github.com/husseinmozannar/SOQAL/tree/master/data)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.85 MB
- **Size of the generated dataset:** 1.62 MB
- **Total amount of disk used:** 3.47 MB

### [Dataset Summary](#dataset-summary)

The Arabic Reading Comprehension Dataset (ARCD) is composed of 1,395 questions posed by crowdworkers on Wikipedia articles.
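
The dataset can be loaded with the `datasets` library; a minimal sketch, assuming the `plain_text` configuration described below:

```python
# Minimal sketch: load ARCD through the Hugging Face `datasets` library.
from datasets import load_dataset

# "plain_text" is the configuration documented in this card.
dataset = load_dataset("arcd", "plain_text")
print(dataset)  # expected: a DatasetDict with "train" and "validation" splits
```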

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### plain_text

- **Size of downloaded dataset files:** 1.85 MB
- **Size of the generated dataset:** 1.62 MB
- **Total amount of disk used:** 3.47 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "answers": "{\"answer_start\": [34], \"text\": [\"صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر،\"]}...",
    "context": "\"حمزة بن عبد المطلب الهاشمي القرشي صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر، وهو خير أع...",
    "id": "621723207492",
    "question": "من هو حمزة بن عبد المطلب؟",
    "title": "حمزة بن عبد المطلب"
}
```
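
To look at a full (uncropped) instance, index into a split; a short sketch, reusing the loading call from above:

```python
# Sketch: fetch and print one training instance; field names follow the
# schema in "Data Fields" below.
from datasets import load_dataset

dataset = load_dataset("arcd", "plain_text")
example = dataset["train"][0]
print(example["question"])
# "answers" holds parallel lists of answer texts and character offsets.
print(example["answers"]["text"], example["answers"]["answer_start"])
```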

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
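
The same schema can be inspected programmatically; a quick sketch:

```python
# Sketch: print the feature schema; expect string features for id, title,
# context and question, plus a {text, answer_start} structure for answers.
from datasets import load_dataset

dataset = load_dataset("arcd", "plain_text")
print(dataset["train"].features)
```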

### [Data Splits Sample Size](#data-splits-sample-size)

| name       | train | validation |
|------------|------:|-----------:|
| plain_text |   693 |        702 |
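
The row counts in the table can be checked directly; a sketch:

```python
# Sketch: verify split sizes against the table (693 train / 702 validation).
from datasets import load_dataset

dataset = load_dataset("arcd", "plain_text")
for split, ds in dataset.items():
    print(split, ds.num_rows)
```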

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{mozannar-etal-2019-neural,
    title = "Neural {A}rabic Question Answering",
    author = "Mozannar, Hussein  and
      Maamary, Elie  and
      El Hajal, Karl  and
      Hajj, Hazem",
    booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W19-4612",
    doi = "10.18653/v1/W19-4612",
    pages = "108--118",
    abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```

### Contributions

Thanks to [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@tayciryahmed](https://github.com/tayciryahmed) for adding this dataset.