Commit 164a57d by Ali Elfilali (1 parent: d13a746)

Update README.md

Files changed (1): README.md (+64, -0)
README.md CHANGED
@@ -254,3 +254,67 @@ configs:
  - split: train
    path: multiple_choice_sentiment_task/train-*
---
# AlGhafa Arabic LLM Benchmark

### New fix: Normalized whitespace characters and ensured consistency across all datasets for improved data quality and compatibility.

AlGhafa is a multiple-choice benchmark for zero- and few-shot evaluation of Arabic LLMs. We adapt the following tasks:

- Belebele Ar MSA [Bandarkar et al. (2023)](https://arxiv.org/abs/2308.16884): 900 entries
- Belebele Ar Dialects [Bandarkar et al. (2023)](https://arxiv.org/abs/2308.16884): 5400 entries
- COPA Ar: 89 entries, machine-translated from English [COPA](https://people.ict.usc.edu/~gordon/copa.html) and verified by native Arabic speakers
- Facts balanced (based on AraFacts) [Sheikh Ali et al. (2021)](https://aclanthology.org/2021.wanlp-1.26): 80 entries (after balancing the dataset), each consisting of a short article and a corresponding claim to be deemed true or false
- MCQ Exams Ar [Hardalov et al. (2020)](https://aclanthology.org/2020.emnlp-main.438): 2248 entries
- OpenbookQA Ar: 336 entries, machine-translated from English [OpenbookQA](https://api.semanticscholar.org/CorpusID:52183757) and verified by native Arabic speakers
- Rating sentiment (HARD-Arabic-Dataset) [Elnagar et al. (2018)](https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3): determine the sentiment of reviews across three categories (positive, neutral, negative) derived from the review score (1-5) as follows: 1-2 negative, 3 neutral, 4-5 positive; 6000 entries (2000 for each of the three classes); see the mapping sketch after this list
- Rating sentiment no neutral (HARD-Arabic-Dataset) [Elnagar et al. (2018)](https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3): the same task with the neutral class removed by extending the positive class (corresponding to scores 1-3); 8000 entries (4000 for each class)
- Sentiment [Abu Farha et al. (2021)](https://aclanthology.org/2021.wanlp-1.36): 1725 entries based on Twitter posts that can be classified as positive, negative, or neutral
- SOQAL [Mozannar et al. (2019)](https://aclanthology.org/W19-4612): a grounded-statement task to assess in-context reading comprehension, consisting of a context and a related question; 155 entries with one original correct answer, transformed into a multiple-choice task by adding four human-curated incorrect choices per sample
- XGLUE (based on XGLUE-MLQA) [Liang et al. (2020)](https://arxiv.org/abs/2004.01401); [Lewis et al. (2019)](https://arxiv.org/abs/1910.07475): 155 entries transformed into a multiple-choice task by adding four human-curated incorrect choices per sample
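The score-to-label mapping for the rating sentiment task above is easy to state in code. The following is a purely illustrative sketch; the function name is ours and is not part of the dataset:

```python
# Illustrative sketch of the score-to-label mapping described above
# for the rating sentiment task: 1-2 negative, 3 neutral, 4-5 positive.
def score_to_label(score: int) -> str:
    if score <= 2:
        return "negative"
    if score == 3:
        return "neutral"
    return "positive"
```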
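The YAML configuration at the top of this diff lists parquet-backed configs such as `multiple_choice_sentiment_task` with a `train` split. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID is a placeholder, since the full dataset ID is not shown in this diff:

```python
from datasets import load_dataset

# Placeholder: substitute the actual OALL dataset repository ID for this card.
REPO_ID = "OALL/<dataset-id>"

# The config name and split come from the YAML configs shown in the diff
# context above; the other tasks in the list follow the same pattern.
ds = load_dataset(REPO_ID, "multiple_choice_sentiment_task", split="train")
print(ds)      # features and number of rows
print(ds[0])   # first example
```

Each loaded split is a `datasets.Dataset`, so it can also be converted to a dataframe with `ds.to_pandas()` if pandas is more convenient.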
## Citing the AlGhafa benchmark

```bibtex
@inproceedings{almazrouei-etal-2023-alghafa,
    title = "{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models",
    author = "Almazrouei, Ebtesam and
      Cojocaru, Ruxandra and
      Baldo, Michele and
      Malartic, Quentin and
      Alobeidli, Hamza and
      Mazzotta, Daniele and
      Penedo, Guilherme and
      Campesan, Giulia and
      Farooq, Mugariya and
      Alhammadi, Maitha and
      Launay, Julien and
      Noune, Badreddine",
    editor = "Sawaf, Hassan and
      El-Beltagy, Samhaa and
      Zaghouani, Wajdi and
      Magdy, Walid and
      Abdelali, Ahmed and
      Tomeh, Nadi and
      Abu Farha, Ibrahim and
      Habash, Nizar and
      Khalifa, Salam and
      Keleg, Amr and
      Haddad, Hatem and
      Zitouni, Imed and
      Mrini, Khalil and
      Almatham, Rawan",
    booktitle = "Proceedings of ArabicNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.arabicnlp-1.21",
    doi = "10.18653/v1/2023.arabicnlp-1.21",
    pages = "244--275",
    abstract = "Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.",
}
```