---
dataset_info:
- config_name: mcq_exams_test_ar
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 152003
    num_examples: 557
  - name: validation
    num_bytes: 1135
    num_examples: 5
  download_size: 92764
  dataset_size: 153138
- config_name: meta_ar_dialects
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 5612859
    num_examples: 5395
  - name: validation
    num_bytes: 4919
    num_examples: 5
  download_size: 2174106
  dataset_size: 5617778
- config_name: meta_ar_msa
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 948833
    num_examples: 895
  - name: validation
    num_bytes: 5413
    num_examples: 5
  download_size: 380941
  dataset_size: 954246
- config_name: multiple_choice_copa_translated_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 11904
    num_examples: 84
  - name: validation
    num_bytes: 848
    num_examples: 5
  download_size: 13056
  dataset_size: 12752
- config_name: multiple_choice_facts_truefalse_balanced_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 129140
    num_examples: 80
  download_size: 67202
  dataset_size: 129140
- config_name: multiple_choice_grounded_statement_soqal_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: sol5
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 161956
    num_examples: 155
  download_size: 59090
  dataset_size: 161956
- config_name: multiple_choice_grounded_statement_xglue_mlqa_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: sol5
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 146071
    num_examples: 155
  download_size: 77150
  dataset_size: 146071
- config_name: multiple_choice_openbookqa_translated_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: sol4
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 71543
    num_examples: 336
  download_size: 44973
  dataset_size: 71543
- config_name: multiple_choice_rating_sentiment_no_neutral_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 1408389
    num_examples: 8000
  download_size: 481296
  dataset_size: 1408389
- config_name: multiple_choice_rating_sentiment_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 1219534
    num_examples: 6000
  download_size: 375276
  dataset_size: 1219534
- config_name: multiple_choice_sentiment_task
  features:
  - name: query
    dtype: string
  - name: sol1
    dtype: string
  - name: sol2
    dtype: string
  - name: sol3
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 457756
    num_examples: 1725
  download_size: 185976
  dataset_size: 457756
configs:
- config_name: mcq_exams_test_ar
  data_files:
  - split: test
    path: mcq_exams_test_ar/test-*
  - split: validation
    path: mcq_exams_test_ar/validation-*
- config_name: meta_ar_dialects
  data_files:
  - split: test
    path: meta_ar_dialects/test-*
  - split: validation
    path: meta_ar_dialects/validation-*
- config_name: meta_ar_msa
  data_files:
  - split: test
    path: meta_ar_msa/test-*
  - split: validation
    path: meta_ar_msa/validation-*
- config_name: multiple_choice_copa_translated_task
  data_files:
  - split: test
    path: multiple_choice_copa_translated_task/test-*
  - split: validation
    path: multiple_choice_copa_translated_task/validation-*
- config_name: multiple_choice_facts_truefalse_balanced_task
  data_files:
  - split: train
    path: multiple_choice_facts_truefalse_balanced_task/train-*
- config_name: multiple_choice_grounded_statement_soqal_task
  data_files:
  - split: train
    path: multiple_choice_grounded_statement_soqal_task/train-*
- config_name: multiple_choice_grounded_statement_xglue_mlqa_task
  data_files:
  - split: train
    path: multiple_choice_grounded_statement_xglue_mlqa_task/train-*
- config_name: multiple_choice_openbookqa_translated_task
  data_files:
  - split: train
    path: multiple_choice_openbookqa_translated_task/train-*
- config_name: multiple_choice_rating_sentiment_no_neutral_task
  data_files:
  - split: train
    path: multiple_choice_rating_sentiment_no_neutral_task/train-*
- config_name: multiple_choice_rating_sentiment_task
  data_files:
  - split: train
    path: multiple_choice_rating_sentiment_task/train-*
- config_name: multiple_choice_sentiment_task
  data_files:
  - split: train
    path: multiple_choice_sentiment_task/train-*
---

# AlGhafa Arabic LLM Benchmark

### New fix: Normalized whitespace characters and ensured consistency across all datasets for improved data quality and compatibility.

AlGhafa is a multiple-choice benchmark for zero- and few-shot evaluation of Arabic LLMs. Each task below ships as its own config (a loading sketch follows the list). We adapt the following tasks:

- Belebele Ar MSA [Bandarkar et al. (2023)](https://arxiv.org/abs/2308.16884): 900 entries
- Belebele Ar Dialects [Bandarkar et al. (2023)](https://arxiv.org/abs/2308.16884): 5400 entries
- COPA Ar: 89 entries, machine-translated from the English [COPA](https://people.ict.usc.edu/~gordon/copa.html) and verified by native Arabic speakers
- Facts balanced (based on AraFacts) [Sheikh Ali et al. (2021)](https://aclanthology.org/2021.wanlp-1.26): 80 entries (after balancing the dataset), each consisting of a short article and a corresponding claim to be judged true or false
- MCQ Exams Ar [Hardalov et al. (2020)](https://aclanthology.org/2020.emnlp-main.438): 2248 entries
- OpenbookQA Ar: 336 entries, machine-translated from the English [OpenbookQA](https://api.semanticscholar.org/CorpusID:52183757) and verified by native Arabic speakers
- Rating sentiment (HARD-Arabic-Dataset) [Elnagar et al. (2018)](https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3): determine the sentiment of reviews, with the review score (1-5) mapped to 3 categories as follows: 1-2 negative, 3 neutral, 4-5 positive; 6000 entries (2000 for each of the three classes). See the bucketing sketch after this list.
- Rating sentiment no neutral (HARD-Arabic-Dataset) [Elnagar et al. (2018)](https://link.springer.com/chapter/10.1007/978-3-319-67056-0_3): variant in which we remove the neutral class by extending the negative class (corresponding to scores 1-3); 8000 entries (4000 for each class)
- Sentiment [Abu Farha et al. (2021)](https://aclanthology.org/2021.wanlp-1.36): 1725 entries based on Twitter posts, each classified as positive, negative, or neutral
- SOQAL [Mozannar et al. (2019)](https://aclanthology.org/W19-4612): grounded-statement task assessing in-context reading comprehension, where each entry consists of a context and a related question; 155 entries with one original correct answer each, transformed into a multiple-choice task by adding four human-curated incorrect choices per sample
- XGLUE (based on XGLUE-MLQA) [Liang et al. (2020)](https://arxiv.org/abs/2004.01401); [Lewis et al. (2019)](https://arxiv.org/abs/1910.07475): 155 entries transformed into a multiple-choice task by adding four human-curated incorrect choices per sample
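The score-to-class bucketing behind the two HARD rating tasks reduces to a small function. A minimal sketch; the no-neutral branch follows the scores-1-3 reading above, which is an interpretation of this card rather than a verified detail:

```python
def score_to_sentiment(score: int, keep_neutral: bool = True) -> str:
    """Bucket a 1-5 HARD review score into a sentiment class.

    keep_neutral=True  -> rating sentiment task: 1-2 negative, 3 neutral,
                          4-5 positive (6000 entries, 2000 per class).
    keep_neutral=False -> no-neutral variant: score 3 assumed folded into
                          the negative class, i.e. 1-3 negative, 4-5
                          positive (8000 entries, 4000 per class).
    """
    if keep_neutral:
        if score <= 2:
            return "negative"
        return "neutral" if score == 3 else "positive"
    return "negative" if score <= 3 else "positive"
```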
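Every task is a separate config named exactly as in the YAML header, so tasks load individually with the Hugging Face `datasets` library. A minimal loading sketch; the repo id is a placeholder assumption, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with this dataset's actual Hub path.
REPO_ID = "org-name/alghafa"

# One config per task; split names follow the YAML header
# (test/validation for the meta_* and translated tasks, train for the rest).
belebele_msa = load_dataset(REPO_ID, "meta_ar_msa")
print(belebele_msa["test"][0])  # {'query': ..., 'sol1': ..., 'label': ...}

sentiment = load_dataset(REPO_ID, "multiple_choice_sentiment_task")
print(sentiment["train"].num_rows)  # 1725
```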
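All configs share the same `query` / `sol1..solN` / `label` schema (N ranges from 2 to 5 depending on the task), so zero-shot prompting can be uniform. One possible way to render a row, assuming `label` stores the 1-based index of the correct `solN` column as a string; verify that assumption on a few rows before scoring:

```python
def format_mcq(example: dict, n_choices: int = 4) -> tuple[str, int]:
    """Render one row as a zero-shot prompt and a 0-based gold index.

    Assumes `label` holds the 1-based index of the correct `sol{i}`
    column as a string (e.g. "2") -- an assumption to check per config.
    """
    choices = [example[f"sol{i}"] for i in range(1, n_choices + 1)]
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(choices, 1))
    prompt = f"{example['query']}\n{numbered}\nAnswer:"
    gold = int(example["label"]) - 1
    return prompt, gold
```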
## Citing the AlGhafa benchmark:

```bibtex
@inproceedings{almazrouei-etal-2023-alghafa,
    title = "{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models",
    author = "Almazrouei, Ebtesam and
      Cojocaru, Ruxandra and
      Baldo, Michele and
      Malartic, Quentin and
      Alobeidli, Hamza and
      Mazzotta, Daniele and
      Penedo, Guilherme and
      Campesan, Giulia and
      Farooq, Mugariya and
      Alhammadi, Maitha and
      Launay, Julien and
      Noune, Badreddine",
    editor = "Sawaf, Hassan and
      El-Beltagy, Samhaa and
      Zaghouani, Wajdi and
      Magdy, Walid and
      Abdelali, Ahmed and
      Tomeh, Nadi and
      Abu Farha, Ibrahim and
      Habash, Nizar and
      Khalifa, Salam and
      Keleg, Amr and
      Haddad, Hatem and
      Zitouni, Imed and
      Mrini, Khalil and
      Almatham, Rawan",
    booktitle = "Proceedings of ArabicNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.arabicnlp-1.21",
    doi = "10.18653/v1/2023.arabicnlp-1.21",
    pages = "244--275",
    abstract = "Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.",
}
```