---
dataset_info:
  config_name: ary_Arab
  features:
  - name: gherbal_cleaned_text
    dtype: string
  - name: gherbal_predictions
    list:
    - name: lang
      dtype: string
    - name: score
      dtype: float64
  - name: gherbal_lang
    dtype: string
  - name: gherbal_score
    dtype: float64
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: date
      dtype: string
    - name: dump
      dtype: string
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: language_script
      dtype: string
    - name: minhash_cluster_size
      dtype: int64
    - name: top_langs
      dtype: string
    - name: url
      dtype: string
    - name: domain
      dtype: string
  splits:
  - name: train
    num_bytes: 707821339
    num_examples: 37352
  download_size: 361389861
  dataset_size: 707821339
configs:
- config_name: ary_Arab
  data_files:
  - split: train
    path: ary_Arab/train-*
task_categories:
- text-generation
language:
- ary
size_categories:
- 10K<n<100K
---
# Gherbal’ing Multilingual Fineweb2 🍵
Following up on their previous release, the FineWeb team has been hard at work on their upcoming multilingual FineWeb dataset, a massive collection of 50M+ documents spanning 100+ languages. The data, sourced from the Common Crawl corpus, has been classified into these languages using GlotLID, a model able to recognize more than 2,000 languages. GlotLID's performance is quite impressive considering the complexity of the task, and it identifies the language of a sentence with a decent degree of accuracy. However, it still makes some mistakes, especially for low-resource languages, and some languages are harder for it to identify than others.
We were approached by the FineWeb team to see if we could help them improve the quality of the dataset using our Gherbal language detection model, which we recently released and made available on our API platform. Gherbal performs particularly well on several low-resource languages, notably Moroccan Arabic, Persian, and Swahili, and the hope is to expand the resources available to these underserved communities. We gladly accepted the challenge: we were given access to the pre-release dataset, along with its GlotLID classifications, and tasked with identifying the documents that GlotLID had misclassified.
As a first step, we chose to focus on Moroccan Arabic (Darija), a language spoken by millions of people in Morocco and by people of Moroccan descent in Europe. This report details the process we followed and the results we obtained.
# The Dataset
The dataset we were given is a collection of parquet files containing 50M+ documents, with the following columns:
* **id:** a unique identifier for the document
* **text:** the document itself, extracted from a webpage
* **metadata:** a JSON column containing metadata about the document, including the URL of the page it was found on, the date of the page, and the previous classification of the page by GlotLID
The dataset contains several configurations, each corresponding to a different language. We focused our attention on the Arabic (arb_Arab_dedup) and Moroccan (ary_Arab_dedup) configurations, which we will refer to as the “Arabic” and “Moroccan” datasets respectively.
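For reference, here is a minimal sketch of how such a configuration could be inspected locally, assuming the shards are plain parquet files with the columns above. The file path and the exact shape of the metadata column are assumptions, not the official layout of the pre-release data:

```python
import json
import pandas as pd

# Hypothetical shard path; the actual pre-release layout may differ.
df = pd.read_parquet("ary_Arab_dedup/000_00000.parquet")
print(df.columns.tolist())  # expected: ['id', 'text', 'metadata']

# The metadata column may arrive as a JSON string or an already-parsed dict.
raw_meta = df.loc[0, "metadata"]
meta = json.loads(raw_meta) if isinstance(raw_meta, str) else raw_meta
print(meta.get("url"), meta.get("language"), meta.get("language_score"))
```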
# Our Approach
## Dataset Processing
To tackle this challenge, we developed a systematic pipeline to process and analyze each dataset. First, we performed a thorough text cleanup to remove unwanted artifacts and standardize the content. This ensures we are working with clean, natural language text, which matters because webpage content can be quite noisy.
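The exact cleanup rules are not part of this release; the sketch below only illustrates the kind of normalization involved (Unicode normalization, stripping control characters, collapsing whitespace) and is an assumption, not the pipeline's actual code:

```python
import re
import unicodedata

def clean_text(text: str) -> str:
    """Illustrative cleanup: normalize Unicode, drop control characters,
    and collapse runs of whitespace so sentence splitting sees natural text."""
    text = unicodedata.normalize("NFKC", text)
    text = "".join(
        ch for ch in text
        if ch in "\n\t " or not unicodedata.category(ch).startswith("C")
    )
    text = re.sub(r"[ \t]+", " ", text)     # collapse spaces and tabs
    text = re.sub(r"\n{3,}", "\n\n", text)  # limit consecutive blank lines
    return text.strip()
```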
Next, we leveraged the Natural Language Toolkit (NLTK) library to break the documents down into individual sentences. This is not a perfect solution: noisy content can make sentence boundaries hard to identify, particularly for languages NLTK does not support. Still, it is a good enough approximation for our purposes, which are to reduce the variance within a webpage and to avoid confusing the model with extremely long, mixed-language content. This step was crucial, as it allowed us to analyze the text at a more granular level.
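For illustration, sentence splitting with NLTK's Punkt tokenizer might look like the following; the sample document is made up, and since NLTK has no dedicated Arabic model, the default rules are only an approximation:

```python
import nltk
from nltk.tokenize import sent_tokenize

# Punkt models are required once; newer NLTK releases also use the punkt_tab resource.
nltk.download("punkt")
nltk.download("punkt_tab")

document = "هذه جملة ديال التجربة. وهذه جملة أخرى!"  # made-up sample text
sentences = sent_tokenize(document)
print(len(sentences), sentences)
```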
With our sentences prepared, we ran each one through our Gherbal language detection model. The model evaluated each sentence and produced a confidence score across the 33 languages Gherbal supports. We then aggregated these sentence-level results by averaging the classification scores, which gave us a comprehensive picture of the dominant language patterns within each document. Keeping the analysis at the sentence level would have yielded more data of higher quality, but it was postponed to a later release given time and resource constraints.
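The aggregation itself is straightforward. In the sketch below, `detect_language` is a hypothetical stand-in for a Gherbal API call (the real interface is not shown here); document-level scores are simply the mean of the sentence-level scores:

```python
from collections import defaultdict
from typing import Dict, List

def detect_language(sentence: str) -> Dict[str, float]:
    """Hypothetical stand-in for a Gherbal API call: returns a confidence
    score for each supported language label (e.g. 'ary_Arab')."""
    raise NotImplementedError  # replace with a real API call

def classify_document(sentences: List[str]) -> Dict[str, float]:
    """Average sentence-level scores into a document-level distribution."""
    totals: Dict[str, float] = defaultdict(float)
    for sentence in sentences:
        for lang, score in detect_language(sentence).items():
            totals[lang] += score
    return {lang: total / len(sentences) for lang, total in totals.items()}
```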
Finally, we applied a filtering step to keep only content classified as Moroccan Arabic in Arabic script (ary_Arab). The resulting dataset is available on Hugging Face at **sawalni-ai/ml-fw-darija**.
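It can be loaded with the Hugging Face `datasets` library; the `ary_Arab` configuration and `train` split below match this card's metadata, and the fields printed are the Gherbal label, score, and cleaned text added by our pipeline:

```python
from datasets import load_dataset

# Filtered Moroccan Arabic subset of multilingual FineWeb (~37k documents)
ds = load_dataset("sawalni-ai/ml-fw-darija", "ary_Arab", split="train")

example = ds[0]
print(example["gherbal_lang"], example["gherbal_score"])
print(example["gherbal_cleaned_text"][:200])
```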
## Dataset Analysis
We used our Klimat library to analyze the dataset. Klimat is a tool we developed to perform statistical analysis on language datasets, and it can generate a number of interesting insights into the data. We will share more about Klimat in a future blog post; for now, we focus on the results we obtained for the FineWeb dataset.
## Website Analysis
We also analyzed the websites from which the multilingual FineWeb data classified by Gherbal as Moroccan Arabic was sourced. This gave us interesting insight into where Moroccan Arabic is used on the web, which could be useful for increasing the quantity of high-quality data for the language. We broke the data down by multiple criteria, including the top-level domain, how long the website stayed online (based on Common Crawl accessing it), and more. We restricted the analysis to high-confidence samples and filtered to the top 1000 websites by quantity of data.
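As a rough sketch of this kind of breakdown (the URL list is made up and the helper code is ours for illustration, not Klimat's API), counting documents per website and per top-level domain can be done directly from the metadata URLs:

```python
from collections import Counter
from urllib.parse import urlparse

# Made-up example URLs; in practice these come from the metadata column.
urls = [
    "https://www.example.ma/article/1",
    "https://forum.example.com/thread/42",
    "https://www.example.ma/article/2",
]

hostnames = [urlparse(u).hostname or "" for u in urls]
site_counts = Counter(hostnames)                                     # documents per website
tld_counts = Counter(h.rsplit(".", 1)[-1] for h in hostnames if h)   # documents per top-level domain

top_sites = site_counts.most_common(1000)  # keep the top 1000 websites by document count
print(tld_counts.most_common(5))
```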
# Our Results
Let’s start by looking at the results for the Moroccan dataset.
* Original count in ary_Arab: 5.8M
* Resulting count after filtering: 37352 (0.64% of original)
* Number of tokens in ary_Arab: 2.8B (estimated using tiktoken for multilinguality)
* Number of tokens in the filtered dataset: 75.3M
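The token counts above were estimated with tiktoken; the exact encoding is not stated in this report, so the sketch below assumes `cl100k_base`:

```python
import tiktoken

# Assumed encoding; the report only says the estimate used tiktoken.
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(texts) -> int:
    """Sum tiktoken token counts over an iterable of strings."""
    return sum(len(enc.encode(text)) for text in texts)

print(count_tokens(["شي مثال قصير بالدارجة", "another short example"]))
```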
## False Positives
A manual review of the filtered dataset showed that human judgments were consistent with Gherbal's results, and that the filtered dataset should be a good resource for training and evaluating models for Moroccan Arabic, despite the small sample size. It is worth noting that Algerian and Tunisian Arabic were also misclassified as Moroccan Arabic, due to the high mutual intelligibility between the three. This is a known limitation of Gherbal, which currently only supports the Moroccan and Egyptian varieties of Arabic, and it should be addressed in future releases.
## False Negatives
Looking at our Gherbal paper (pending publication), specifically at the benchmark results on the flores-200 devtest set, we can estimate the false negative rate from Standard Arabic (arb_Arab) to Moroccan Arabic (ary_Arab) at around 10%. Extrapolating this figure, we estimate that roughly 37352 * 0.1 ≈ 3735 Moroccan Arabic documents were incorrectly filtered out.
## Other dataset configurations
We also applied the same process to the other dataset configurations, namely the Arabic (arb_Arab) and Latin-script Arabic (arb_Latn) configurations. While the results are not yet complete, we can already draw some interesting observations:
- Arabic (arb_Arab)
  * Original count in arb_Arab: 24.2M
  * Resulting count after filtering: 0

While some samples (fewer than 100 in total) were classified as Moroccan Arabic, a manual review revealed that these were all incorrect classifications by Gherbal and that the filtered dataset is indeed empty. This may change as we process the rest of the dataset, or as we improve Gherbal's performance on Arabic and its related languages. The resulting dataset will be made available as an additional configuration of this dataset once the processing is complete.
- Arabic in Latin script (arb_Latn)
  * Original count in arb_Latn: 600K
  * Resulting count after filtering: 15K (2.5% of original)

The content GlotLID labels arb_Latn shows extreme variance, since Arabic can be transliterated in many different ways. As Gherbal is able to correctly identify ary_Latn (Moroccan Arabic in Latin script) with a high degree of accuracy, we are able to recover a significant amount of data that was previously quite unusable amid all the noise. We also observe that this configuration contains the most variation in the actual language as classified by Gherbal, which confirms that the arb_Latn label from GlotLID is not a good proxy for high-quality Arabic data in Latin script. The resulting dataset will be made available as an additional configuration of this dataset once the analysis is complete.