---
license: cc0-1.0
task_categories:
  - automatic-speech-recognition
language:
  - ar
tags:
  - augmented
  - common-voice-12.0
  - modern-standard-arabic
  - quran
pretty_name: Voice Converted Arabic Common Voice 12.0
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: name
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: duration
      dtype: float64
    - name: speaker
      dtype: int64
    - name: text
      dtype: string
    - name: phoneme
      dtype: string
  splits:
    - name: train
      num_bytes: 36389983971.5
      num_examples: 285564
    - name: dev
      num_bytes: 1535831881
      num_examples: 10352
  download_size: 36241895101
  dataset_size: 37925815852.5
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
---

# Dataset Card for Voice Converted Arabic Common Voice 12.0

This dataset is derived from the Common Voice Arabic Corpus 12.0 and adds automatically diacritized transcriptions and phoneme representations to the augmented audio. The recordings are of Arabic text read aloud by volunteers; because the prompts were undiacritized, reading errors are possible. Since both the diacritization and the phonemes were generated automatically, the dataset is valuable for speech recognition tasks but inherently noisy.
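
The splits can be loaded with the 🤗 `datasets` library as in the sketch below; the repository ID is a placeholder and should be replaced with this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repo ID; replace with this dataset's actual Hub path.
ds = load_dataset("xmodar/voice-converted-arabic-common-voice-12", split="train")

example = ds[0]
print(example["text"])                    # automatically diacritized transcription
print(example["phoneme"])                 # automatically generated phoneme string
print(example["speaker"])                 # integer speaker ID (see Dataset Creation)
print(example["audio"]["sampling_rate"])  # 16000, per the dataset features
```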

More information about the speakers can be found in this dataset.

## Dataset Details

### Dataset Description

The dataset was created by adapting the dataset provided by @mostafaashahin for the SDAIA Winter School held at King Saud University, Riyadh, in December 2024, and applying voice conversion to its audio. It is intended for researchers and practitioners interested in diacritized speech data and voice-converted audio, particularly for Modern Standard Arabic.

## Dataset Creation

Since the audio files lacked speaker IDs, speaker embeddings were extracted using the `voice_conversion_models/multilingual/vctk/freevc24` model from Coqui TTS. These embeddings were clustered, and the resulting clusters were then used for voice conversion, enhancing the dataset for further research in speech processing.
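
The processing code is not included in this card, but the pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the original implementation: it assumes precomputed per-clip embeddings stored in a hypothetical `speaker_embeddings.npy`, a plain text file of wav paths, k-means clustering into an arbitrary number of pseudo-speakers, and the centroid-nearest clip of each cluster as the conversion target.

```python
import numpy as np
from sklearn.cluster import KMeans
from TTS.api import TTS  # Coqui TTS

# Hypothetical inputs: one embedding per clip and the matching wav paths.
embeddings = np.load("speaker_embeddings.npy")          # shape: (num_clips, emb_dim)
wav_paths = open("wav_paths.txt").read().splitlines()   # one path per line

# Cluster the embeddings into pseudo-speakers (64 is an arbitrary choice).
kmeans = KMeans(n_clusters=64, random_state=0).fit(embeddings)

# Use the clip closest to each centroid as that pseudo-speaker's reference voice.
targets = {}
for label, center in enumerate(kmeans.cluster_centers_):
    idx = int(np.argmin(np.linalg.norm(embeddings - center, axis=1)))
    targets[label] = wav_paths[idx]

# Convert every clip toward its cluster's reference voice with FreeVC24.
vc = TTS("voice_conversion_models/multilingual/vctk/freevc24")
for path, label in zip(wav_paths, kmeans.labels_):
    vc.voice_conversion_to_file(
        source_wav=path,
        target_wav=targets[label],
        file_path=path.replace(".wav", f"_spk{label}.wav"),
    )
```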

### Data Collection and Processing

The original recordings were contributed by volunteers as part of the Common Voice Arabic Corpus 12.0. No new recordings were added; the dataset consists solely of processed versions of the existing files.

## Bias, Risks, and Limitations

The dataset includes recordings from speakers of various dialects across the Arab world, but no demographic or dialectal statistics are available. The audio quality is suboptimal: dropped segments, noisy backgrounds, perturbed pitch, potential reading errors, and automatically generated diacritization may all affect tasks that require high-quality, clean data.