---
license: cc-by-sa-3.0
dataset_info:
  config_name: wikipedia_spanish
  features:
    - name: audio_id
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: speaker_id
      dtype: string
    - name: gender
      dtype: string
    - name: duration
      dtype: float32
    - name: normalized_text
      dtype: string
  splits:
    - name: train
      num_bytes: 1916035098.198
      num_examples: 11569
  download_size: 1887119130
  dataset_size: 1916035098.198
configs:
  - config_name: wikipedia_spanish
    data_files:
      - split: train
        path: wikipedia_spanish/train-*
    default: true
task_categories:
  - automatic-speech-recognition
language:
  - es
tags:
  - wikipedia grabada
  - wikipedia spanish
  - ciempiess-unam
  - ciempiess-unam project
  - read speech
  - spanish speech
pretty_name: WIKIPEDIA SPANISH CORPUS
size_categories:
  - 10K<n<100K
---

# Dataset Card for wikipedia_spanish

## Table of Contents

- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions

## Dataset Description

### Dataset Summary

According to the project page of the WikiProject Spoken Wikipedia:

> The WikiProject Spoken Wikipedia aims to produce recordings of Wikipedia articles being read aloud.

The WIKIPEDIA SPANISH CORPUS is a dataset created from the Spanish version of the WikiProject Spoken Wikipedia, called "Wikipedia Grabada".

The WIKIPEDIA SPANISH CORPUS is intended for the Automatic Speech Recognition (ASR) task. It is a gender-unbalanced corpus of about 25 hours of read speech drawn from several articles of the Wikipedia Grabada; most of those articles are read by male speakers. Transcriptions in this corpus were produced from scratch by native speakers.

### Example Usage

The WIKIPEDIA SPANISH CORPUS contains only the train split:

```python
from datasets import load_dataset

wikipedia_spanish = load_dataset("ciempiess/wikipedia_spanish")
```

It is also valid to do:

```python
from datasets import load_dataset

wikipedia_spanish = load_dataset("ciempiess/wikipedia_spanish", split="train")
```

### Supported Tasks

automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
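WER is the word-level edit distance between a reference transcription and a hypothesis, divided by the number of reference words. A minimal, dependency-free sketch (illustrative only; in practice libraries such as `jiwer` are normally used, and the example sentences below are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length.

    Assumes a non-empty reference.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution over six reference words:
print(round(wer("donde revelaba sus placas de vidrio",
                "donde revelaba las placas de vidrio"), 3))  # 0.167
```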

### Languages

The language of the corpus is Spanish.

## Dataset Structure

### Data Instances

```python
{
  'audio_id': 'WKSP_F_0019_E1_0023',
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/d31e2de01af3d9cdc4d4b6397720048caa39f6e552a8d38b6617a50af250bdcb/train/female/F_0019/WKSP_F_0019_E1_0023.flac',
    'array': array([0.08535767, 0.13946533, 0.11572266, ..., 0.13168335, 0.12426758,
       0.14508057], dtype=float32),
    'sampling_rate': 16000
  },
  'speaker_id': 'F_0019',
  'gender': 'female',
  'duration': 8.170000076293945,
  'normalized_text': 'donde revelaba de sus placas de vidrio al colodión controversias y equivocaciones'
}
```

### Data Fields

- `audio_id` (string) - id of the audio segment
- `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
- `speaker_id` (string) - id of the speaker
- `gender` (string) - gender of the speaker (male or female)
- `duration` (float32) - duration of the audio file in seconds
- `normalized_text` (string) - normalized transcription of the audio segment
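The relationship between the decoded `array`, the `sampling_rate`, and the `duration` field can be illustrated with a synthetic instance that mimics the schema above (nothing is downloaded; the array length is chosen to match an 8.17-second clip):

```python
# A synthetic instance mimicking the dataset schema (not loaded from the Hub).
example = {
    "audio_id": "WKSP_F_0019_E1_0023",
    "audio": {"array": [0.0] * 130720, "sampling_rate": 16000},
    "speaker_id": "F_0019",
    "gender": "female",
    "duration": 8.17,
}

# duration (seconds) = number of samples / sampling rate
computed = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(round(computed, 2))  # 8.17
```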

### Data Splits

The corpus contains only a train split, which has a total of 11,569 speech files from 43 female speakers and 150 male speakers, with a total duration of 25 hours and 37 minutes.
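As a quick consistency check, these totals imply an average segment length of about 8 seconds, in line with the 3-to-10-second segment range described in the next section:

```python
# Totals reported for the train split.
total_seconds = 25 * 3600 + 37 * 60  # 25 h 37 min
num_files = 11569

avg_segment_seconds = total_seconds / num_files
print(round(avg_segment_seconds, 2))  # 7.97
```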

## Dataset Creation

### Curation Rationale

The WIKIPEDIA SPANISH CORPUS (WSC) has the following characteristics:

- The WSC has an exact duration of 25 hours and 37 minutes. It has 11,569 audio files.
- The WSC contains 193 different speakers: 150 men and 43 women.
- Every audio file in the WSC has a duration of approximately 3 to 10 seconds.
- Data in the WSC is classified by speaker; that is, all the recordings of a single speaker are stored in a single directory.
- Data is also classified according to the gender (male/female) of the speakers.
- Audio and transcriptions in the WSC were segmented and transcribed from scratch by native speakers of Spanish.
- Audio files in the WSC are distributed in a 16 kHz, 16-bit mono format.
- Every audio file has an ID that is compatible with ASR engines such as Kaldi and CMU Sphinx.
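The Kaldi-style IDs can be split into their apparent components. Note that the field meanings below (corpus prefix, gender letter, speaker number, recording tag, segment number) are assumptions inferred from the example instance, not an official specification:

```python
def parse_audio_id(audio_id: str) -> dict:
    """Split an ID like 'WKSP_F_0019_E1_0023' into its apparent parts.

    The field meanings are assumptions inferred from the example instance
    in this card, not an official specification.
    """
    corpus, gender_letter, speaker_num, recording, segment = audio_id.split("_")
    return {
        "corpus": corpus,
        "speaker_id": f"{gender_letter}_{speaker_num}",
        "gender": "female" if gender_letter == "F" else "male",
        "recording": recording,
        "segment": segment,
    }

info = parse_audio_id("WKSP_F_0019_E1_0023")
print(info["speaker_id"], info["gender"])  # F_0019 female
```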

### Source Data

#### Initial Data Collection and Normalization

The WIKIPEDIA SPANISH CORPUS is a speech corpus designed to train acoustic models for automatic speech recognition and it is made out of several articles of the Wikipedia Grabada read by volunteers.

### Annotations

#### Annotation process

The annotation process is as follows:

1. A whole recording is manually segmented, keeping just the portions containing good-quality speech.
2. A second pass of segmentation is performed, this time to separate speakers and put them in different folders.
3. The resulting speech files, between 5 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers but have no particular training as transcribers.
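The duration filter applied in the steps above can be sketched as a simple pass over candidate segments (the segment IDs and durations below are hypothetical):

```python
# Hypothetical (segment_id, duration_in_seconds) pairs from a second-pass
# segmentation run.
segments = [("seg_001", 2.4), ("seg_002", 6.8), ("seg_003", 9.9), ("seg_004", 12.1)]

# Keep only segments inside the 5-10 second window used for transcription.
kept = [(sid, dur) for sid, dur in segments if 5.0 <= dur <= 10.0]
print(kept)  # [('seg_002', 6.8), ('seg_003', 9.9)]
```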

#### Who are the annotators?

The WIKIPEDIA SPANISH CORPUS was created under the umbrella of the social service program "Desarrollo de Tecnologías del Habla" of the "Facultad de Ingeniería" (FI) in the "Universidad Nacional Autónoma de México" (UNAM) between 2018 and 2020 by Carlos Daniel Hernández Mena, head of the program.

### Personal and Sensitive Information

The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available sources, so there was no real intent by the participants to remain anonymous. In any case, you agree not to attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is valuable because it contains well-pronounced speech with low noise.

### Discussion of Biases

The dataset is not gender balanced: it comprises 43 female speakers and 150 male speakers.

### Other Known Limitations

WIKIPEDIA SPANISH CORPUS by Carlos Daniel Hernández Mena is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license and utilizes material from Wikipedia Grabada. This work was done in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

## Additional Information

### Dataset Curators

The dataset was collected by students belonging to the social service program "Desarrollo de Tecnologías del Habla". It was curated by Carlos Daniel Hernández Mena in 2020.

### Licensing Information

CC BY-SA 3.0

### Citation Information

```bibtex
@misc{carlosmena2021wikipediaspanish,
      title={WIKIPEDIA SPANISH CORPUS: Audio and Transcriptions taken from Wikipedia Grabada},
      ldc_catalog_no={LDC2021S07},
      DOI={https://doi.org/10.35111/7m1j-sa17},
      author={Hernandez Mena, Carlos Daniel and Meza Ruiz, Ivan Vladimir},
      journal={Linguistic Data Consortium, Philadelphia},
      year={2021},
      url={https://catalog.ldc.upenn.edu/LDC2021S07},
}
```

### Contributions

The authors would like to thank Alberto Templos Carbajal, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla" at the Facultad de Ingeniería (FI) of the Universidad Nacional Autónoma de México (UNAM). We also thank the social service students for all their hard work.

Special thanks to the team of "Wikipedia Grabada" for publishing all the recordings that constitute the WIKIPEDIA SPANISH CORPUS.

This dataset card was created as part of the objectives of the 16th edition of the Severo Ochoa Mobility Program (PN039300 - Severo Ochoa 2021 - E&T).