Google Myanmar ASR Dataset
This dataset is a processed and organized version of the original OpenSLR-80 Burmese Speech Corpus. It has been carefully structured for ASR tasks with additional preprocessing steps to enhance usability and consistency.
Dataset Description
The Google Myanmar ASR Dataset is based on the Burmese Speech Corpus, originally published by Google. It consists of audio files and their corresponding transcriptions. The dataset is primarily aimed at building Automatic Speech Recognition (ASR) models.
Key Highlights
- Language: Myanmar (Burmese)
- Sample Rate: All audio files are resampled to 16 kHz.
- Features: Mel-spectrogram features extracted with 80 mel bins and 3000 frames.
- Data Structure:
- Train and Test Splits: 95% of the dataset is used for training, and 5% for testing.
- Metadata: Includes transcripts.json, transcripts.csv, and vocab.txt.
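The 95/5 split can be reproduced with a deterministic shuffle. The helper below is an illustrative sketch only; the card does not document how the authors actually partitioned the files beyond the ratio.

```python
import random

def train_test_split(files, test_frac=0.05, seed=42):
    """Deterministically shuffle file names and carve off a test fraction."""
    shuffled = sorted(files)      # start from a stable order
    rng = random.Random(seed)     # fixed seed for reproducibility
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_frac))
    return shuffled[n_test:], shuffled[:n_test]
```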
Dataset Structure
Main Folders
- /train: Training data
  - /audio: 16 kHz .wav files for training
  - /features: Precomputed features
    - /PT: Torch .pt files
    - /NPY: NumPy .npy files
- /test: Testing data
  - /audio: 16 kHz .wav files for testing
  - /features: Precomputed features
    - /PT: Torch .pt files
    - /NPY: NumPy .npy files
- /metadata: Metadata files
  - transcripts.json: JSON file with detailed transcription and tokenized text.
  - transcripts.csv: CSV version of the transcription metadata.
  - vocab.txt: Vocabulary list derived from tokenized transcriptions.
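A vocabulary list like vocab.txt can be derived from tokenized transcriptions roughly as follows. This is a sketch; the exact tokenization scheme used for this dataset is not documented, so the token lists here are placeholders.

```python
def build_vocab(tokenized_transcriptions):
    """Collect the sorted set of unique tokens across all transcriptions."""
    vocab = {tok for tokens in tokenized_transcriptions for tok in tokens}
    return sorted(vocab)

# Writing one token per line, the usual vocab.txt layout:
# Path("vocab.txt").write_text("\n".join(build_vocab(all_tokens)), encoding="utf-8")
```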
Preprocessing Details
The following steps were applied to prepare the dataset:
Audio Processing:
- Resampled all audio to 16 kHz using ffmpeg.
- Ensured all .wav files are consistent in format.
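The resampling step can be scripted from Python by shelling out to ffmpeg. The helper below only builds the command list (the file paths are hypothetical), and assumes ffmpeg is on the PATH when actually executed.

```python
import subprocess

def build_resample_cmd(src, dst, rate=16000):
    """ffmpeg command that resamples src to `rate` Hz mono WAV at dst."""
    return [
        "ffmpeg", "-y",      # overwrite the output file if it exists
        "-i", str(src),
        "-ar", str(rate),    # target sample rate
        "-ac", "1",          # single channel
        str(dst),
    ]

# To run for real: subprocess.run(build_resample_cmd("in.flac", "out.wav"), check=True)
```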
Feature Extraction:
- Generated Mel-spectrogram features with 80 mel bins and 3000 frames per file.
- Saved features in both .pt (Torch tensors) and .npy (NumPy arrays) formats.
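The fixed 80 × 3000 shape implies every mel-spectrogram is padded or truncated along the time axis. Below is a minimal sketch of that step; the mel extraction itself (e.g. via torchaudio or librosa) is assumed to happen upstream.

```python
import numpy as np

N_MELS, N_FRAMES = 80, 3000  # feature shape used throughout this dataset

def pad_or_trim(mel, n_frames=N_FRAMES):
    """Zero-pad or truncate (n_mels, T) features to a fixed (n_mels, n_frames)."""
    mel = np.asarray(mel)
    if mel.shape[1] >= n_frames:
        return mel[:, :n_frames]
    pad_width = n_frames - mel.shape[1]
    return np.pad(mel, ((0, 0), (0, pad_width)))

# np.save("features/NPY/utt.npy", pad_or_trim(mel))  # the .pt copy would use torch.save
```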
Metadata Preparation:
- Mapped each audio file to its transcription.
- Added fields like duration and tokenized_transcription.
How to Use
You can load this dataset in Python using the Hugging Face datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("freococo/Google_Myanmar_ASR")
train_data = dataset["train"]
test_data = dataset["test"]

# Inspect the first training example
print(train_data[0])
```
Attribution
This dataset is based on the OpenSLR-80 Burmese Speech Corpus.
Original Citation:
@inproceedings{oo-etal-2020-burmese,
title = {{Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech}},
author = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
pages = {6328--6339},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.777},
ISBN = {979-10-95546-34-4},
}
License
This dataset is distributed under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication, which allows users to freely use, distribute, and modify the dataset without restriction.
Limitations and Considerations
- Attribution: Although CC0 imposes no legal requirement, derived models or publications should credit the original creators appropriately.
- Dataset Limitations: The dataset may be limited in speaker diversity, recording/noise conditions, and dialectal variation.