---
pretty_name: FWEAR
datasets:
  - HuggingFaceTB/smollm-corpus
annotations_creators:
  - no-annotation
language_creators:
  - found
language:
  - ar
  - en
license:
  - cc-by-nc-4.0
multilinguality:
  - multilingual
size_categories:
  - 100M<n<1B
source_datasets:
  - HuggingFaceFW/fineweb-edu
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
dataset_info:
  - config_name: en
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_examples: 378810914
  - config_name: ar
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_examples: 378810914
configs:
  - config_name: en
    data_files:
      - split: train
        path: en/train/*.zip
  - config_name: ar
    data_files:
      - split: train
        path: ar/train/*.zip
extra_gated_fields:
  Company: text
  Country: country
  I agree to use this dataset for non-commercial use ONLY: checkbox
---

# FineWeb-Edu-Ar

FineWeb-Edu-Ar is a machine-translated Arabic version of the FineWeb-Edu dataset designed to support the development of Arabic small language models (SLMs).

## Dataset Details

- Languages: Arabic, English (paired)
- Size: 202 billion tokens
- License: CC-BY-NC-4.0
- Source: Machine-translated from the deduplicated version of Hugging Face’s FineWeb-Edu dataset
- Translation model: facebook/nllb-200-distilled-600M
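
For reference, the snippet below is a minimal sketch of how a single English passage could be translated to Arabic with `facebook/nllb-200-distilled-600M` via the `transformers` translation pipeline. It is illustrative only: it is not the exact pipeline, batching, or preprocessing used to produce this corpus, and the example sentence is made up.

```python
# Illustrative only: a minimal NLLB-200 translation sketch, not the exact
# pipeline used to build FineWeb-Edu-Ar.
from transformers import pipeline

# NLLB-200 uses FLORES-200 language codes (eng_Latn -> arb_Arab).
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="arb_Arab",
)

sample = "Photosynthesis converts sunlight into chemical energy."  # made-up example
print(translator(sample, max_length=256)[0]["translation_text"])
```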

## Application

FineWeb-Edu-Ar is suitable for pre-training Arabic language models, especially those focused on general knowledge and common-sense reasoning. Researchers and developers can use this dataset to improve the performance of Arabic SLMs across a range of NLP tasks.
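
As a rough sketch of that workflow, the example below streams the Arabic configuration and tokenizes it on the fly, the way a pre-training data loader might. The tokenizer checkpoint (`aubmindlab/aragpt2-base`) is only an assumed placeholder and is not associated with this dataset; substitute whatever tokenizer your model uses.

```python
# A minimal pre-training-style sketch: stream the Arabic split and tokenize
# on the fly. The tokenizer below is an assumed placeholder, not part of
# this dataset.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/aragpt2-base")  # assumption

# Streaming avoids downloading the full corpus before iteration starts.
stream = load_dataset("kaust-generative-ai/fineweb-edu-ar", "ar", streaming=True)["train"]

def tokenize(batch):
    # Truncate to a fixed context length; adjust to your model's block size.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = stream.map(tokenize, batched=True, remove_columns=["text"])

# Peek at the first tokenized example.
print(next(iter(tokenized))["input_ids"][:20])
```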

## Python Usage

To load the FineWeb-Edu-Ar dataset in Python, you can use the `datasets` package:

```python
from datasets import load_dataset

# Stream the Arabic configuration without downloading the full corpus.
dataset = load_dataset("kaust-generative-ai/fineweb-edu-ar", "ar", streaming=True)

# Inspect the first example of the train split.
print(next(iter(dataset["train"])))
```
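
Note that the dataset is gated, so you may need to accept the terms on the dataset page and authenticate first (for example with `huggingface-cli login`). The paired English configuration can be loaded the same way by passing `"en"` instead of `"ar"`:

```python
from datasets import load_dataset

# Load the paired English configuration (also streamable).
dataset_en = load_dataset("kaust-generative-ai/fineweb-edu-ar", "en", streaming=True)
print(next(iter(dataset_en["train"])))
```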

## Citation

If you use FineWeb-Edu-Ar in your research, please cite the following paper:

https://arxiv.org/abs/2411.06402

```bibtex
@techreport{alrashed2024finewebeduar,
  author      = {Sultan Alrashed and Dmitrii Khizbullin and David R. Pugh},
  title       = {{FineWeb-Edu-Ar: Machine-translated Corpus to Support Arabic Small Language Models}},
  institution = {{King Abdullah University of Science and Technology (KAUST)}},
  year        = {2024},
  number      = {arXiv:2411.06402},
  url         = {https://arxiv.org/abs/2411.06402}
}
```