This is a fairly large dataset gathered from six main sources:

- [`tashkeela`](https://huggingface.co/datasets/community-datasets/tashkeela) **(1.79GB - 45.05%)**: The entire Tashkeela dataset, split into sentences. Rows with a low rate of diacritics (tashkeel characters) were omitted.
- `shamela` **(1.67GB - 42.10%)**: Random pages from over 2,000 books on the [Shamela Library](https://shamela.ws/). Pages were selected using the function shown below, keeping only those with a high diacritics rate.
- `wikipedia` **(269.94MB - 6.64%)**: A collection of Wikipedia articles. Diacritics were added using OpenAI's [GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) model. At the time of writing, several other LLMs were tried (GPT-4o, Claude 3 Haiku, Claude 3.5 Sonnet, Llama 3.1 70b, among others), and GPT-4o mini (surprisingly) scored the highest on a subset of the tashkeela dataset; a sketch of this step appears after the list.
- `ashaar` **(117.86MB - 2.90%)**: [APCD](https://huggingface.co/datasets/arbml/APCD), [APCDv2](https://huggingface.co/datasets/arbml/APCDv2), [Ashaar_diacritized](https://huggingface.co/datasets/arbml/Ashaar_diacritized), and [Ashaar_meter](https://huggingface.co/datasets/arbml/Ashaar_meter) merged. Most rows from these datasets were excluded, and only those with sufficient diacritics were retained.
- [`quran-riwayat`](https://huggingface.co/datasets/Abdou/quran-riwayat) **(71.73MB - 1.77%)**: Six different riwayat (transmitted readings) of the Quran.
- [`hadith`](https://huggingface.co/datasets/arbml/LK_Hadith) **(62.69MB - 1.54%)**: The Leeds University and King Saud University (LK) Hadith Corpus.
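
The page-selection function itself is not reproduced in this excerpt. A minimal sketch of a diacritics-rate filter of the kind described above might look like the following; the Unicode ranges and the 0.4 threshold are illustrative assumptions, not necessarily the values used to build the dataset:

```python
import re

# Arabic tashkeel marks (fathatan ... sukun, U+064B-U+0652)
DIACRITICS = re.compile(r"[\u064B-\u0652]")
# Basic Arabic letters (hamza ... yeh, U+0621-U+064A)
ARABIC_LETTERS = re.compile(r"[\u0621-\u064A]")


def diacritics_rate(text: str) -> float:
    """Ratio of diacritic marks to Arabic letters in `text`."""
    letters = len(ARABIC_LETTERS.findall(text))
    if letters == 0:
        return 0.0
    return len(DIACRITICS.findall(text)) / letters


def keep_page(text: str, threshold: float = 0.4) -> bool:
    # Keep a page only if it is sufficiently diacritized.
    # The 0.4 threshold is an assumed value for illustration.
    return diacritics_rate(text) >= threshold
```

A similar check is what was used to drop low-diacritic rows from the `tashkeela` and `ashaar` sources.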
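The diacritization calls for the `wikipedia` source are likewise not shown here. Assuming the standard OpenAI Python client, that step could be sketched roughly as follows; the prompt wording and parameters are hypothetical, not the exact pipeline used for the dataset:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def diacritize(text: str) -> str:
    """Ask GPT-4o mini to return the input with full Arabic diacritics."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "Add full Arabic diacritics (tashkeel) to the user's text. "
                           "Return only the diacritized text, nothing else.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()
```

Running the same prompt against other models and scoring the outputs on a held-out, already-diacritized subset of tashkeela is how the model comparison mentioned above was carried out.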