---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- ar
tags:
- augmented
- common-voice-12.0
- modern-standard-arabic
- quran
pretty_name: Voice Converted Arabic Common Voice 12.0
size_categories:
- 10K<n<100K
---

# Dataset Card for Voice Converted Arabic Common Voice 12.0

This dataset is derived from the [**Common Voice Arabic Corpus 12.0**](https://commonvoice.mozilla.org/en/datasets) and includes automatically diacritized transcriptions and phoneme representations for the original augmented audio data. The recordings feature Arabic text read aloud by contributors; because the source text was undiacritized, reading errors are possible. Since both the diacritization and the phonemes were generated automatically, the dataset is valuable for [**speech recognition**](https://hf.co/tasks/automatic-speech-recognition) tasks but inherently noisy.
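
As a quick-start illustration (not part of the original release), the snippet below loads the data with the Hugging Face `datasets` library; the repository id and column names are placeholders and should be replaced with the actual ones for this dataset.

```python
from datasets import Audio, load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-org/voice-converted-arabic-common-voice-12", split="train")

# Decode audio at 16 kHz, the rate most ASR models (wav2vec 2.0, Whisper) expect.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]
print(sample["audio"]["array"].shape)  # raw waveform as a NumPy array
print(ds.column_names)                 # e.g. transcription / diacritized / phoneme fields
```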

## Dataset Details

### Dataset Description

The dataset was created by adapting and performing voice conversion on the dataset provided by [@mostafaashahin](https://hf.co/datasets/mostafaashahin/common_voice_Arabic_12.0_Augmented_SWS_lam_phoneme) as part of the **SDAIA Winter School** held at **King Saud University, Riyadh**, in December 2024. It is intended for researchers and practitioners interested in diacritized speech data and voice-converted audio, particularly for **Modern Standard Arabic**.

## Dataset Creation

Since the audio files lacked speaker IDs, **speaker embeddings** were extracted using the **`voice_conversion_models/multilingual/vctk/freevc24`** FreeVC model from the [Coqui TTS](https://github.com/coqui-ai/TTS) library. These embeddings were clustered and then used for voice conversion, enhancing the dataset for further research in speech processing.
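
The exact processing script is not included with this card, so the following is only a rough sketch of the described steps under explicit assumptions: a generic speaker encoder (SpeechBrain's ECAPA model, used here as a stand-in for whatever encoder was actually used), k-means with an arbitrary cluster count, and Coqui TTS's FreeVC checkpoint for the conversion itself.

```python
# Rough sketch of the described pipeline, not the exact script used to build the dataset.
# Assumptions: SpeechBrain ECAPA as a stand-in speaker encoder, k-means with an arbitrary
# number of clusters, and 16 kHz mono WAV inputs under clips/.
import glob
import os

import numpy as np
import torchaudio
from sklearn.cluster import KMeans
from speechbrain.pretrained import EncoderClassifier
from TTS.api import TTS

wav_paths = sorted(glob.glob("clips/*.wav"))
os.makedirs("converted", exist_ok=True)

# 1) One speaker embedding per utterance.
encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
embeddings = []
for path in wav_paths:
    signal, sr = torchaudio.load(path)  # expects 16 kHz mono; resample beforehand if needed
    emb = encoder.encode_batch(signal).squeeze().detach().numpy()
    embeddings.append(emb)
embeddings = np.stack(embeddings)

# 2) Cluster the embeddings to obtain pseudo speaker identities.
kmeans = KMeans(n_clusters=10, random_state=0).fit(embeddings)

# 3) Convert every clip toward the utterance closest to its cluster centre,
#    using Coqui TTS's FreeVC voice-conversion model.
vc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
for cluster_id in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    centre = kmeans.cluster_centers_[cluster_id]
    target = wav_paths[members[np.argmin(np.linalg.norm(embeddings[members] - centre, axis=1))]]
    for i in members:
        vc.voice_conversion_to_file(
            source_wav=wav_paths[i],
            target_wav=target,
            file_path=os.path.join("converted", f"cluster{cluster_id}_{os.path.basename(wav_paths[i])}"),
        )
```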

### Data Collection and Processing

The original recordings were contributed by **volunteers** as part of the **Common Voice Arabic Corpus 12.0**. No new recordings were added; the dataset consists solely of processed versions of the existing files.

## Bias, Risks, and Limitations

The dataset includes recordings from various **dialects** across the Arab world, but specific demographic or dialectal statistics are not available. The **audio quality** is suboptimal, with issues such as **dropped segments**, **noisy backgrounds**, **perturbed pitch**, **potential reading errors**, and **automatically generated diacritization**; these issues may limit the dataset's suitability for tasks that require clean, high-quality data.