# Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges (Main EMNLP 2024)

## Introduction

This document presents the accompanying dataset for the paper **"Multi-Dialect Vietnamese: Task, Dataset, Baseline Models, and Challenges"**. The dataset, referred to as the **Vi**etnamese **M**ulti-**D**ialect (ViMD) dataset, is a comprehensive resource designed to capture the linguistic diversity represented by the 63 provincial dialects spoken across Vietnam. The paper is available at https://aclanthology.org/2024.emnlp-main.426.

## Citation

If you use this paper and its dataset in your research, please cite it as follows:

```bibtex
@inproceedings{dinh-etal-2024-multi,
    title = "Multi-Dialect {V}ietnamese: Task, Dataset, Baseline Models and Challenges",
    author = "Dinh, Nguyen and Dang, Thanh and Thanh Nguyen, Luan and Nguyen, Kiet",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.426",
    pages = "7476--7498",
    abstract = "Vietnamese, a low-resource language, is typically categorized into three primary dialect groups that belong to Northern, Central, and Southern Vietnam. However, each province within these regions exhibits its own distinct pronunciation variations. Despite the existence of various speech recognition datasets, none of them has provided a fine-grained classification of the 63 dialects specific to individual provinces of Vietnam. To address this gap, we introduce Vietnamese Multi-Dialect (ViMD) dataset, a novel comprehensive dataset capturing the rich diversity of 63 provincial dialects spoken across Vietnam. Our dataset comprises 102.56 hours of audio, consisting of approximately 19,000 utterances, and the associated transcripts contain over 1.2 million words. To provide benchmarks and simultaneously demonstrate the challenges of our dataset, we fine-tune state-of-the-art pre-trained models for two downstream tasks: (1) Dialect identification and (2) Speech recognition. The empirical results suggest two implications including the influence of geographical factors on dialects, and the constraints of current approaches in speech recognition tasks involving multi-dialect speech data. Our dataset is available for research purposes.",
}
```

## Overview of the Dataset

- **Source**: News programs from the broadcasting stations of the 63 provinces of Vietnam.
- **Overall Statistics** (see the sketch after the table):
  Min., Max., Mean, and Std. are computed per provincial dialect; Train, Valid., Test, and Total describe the full dataset and its splits.

|              | Min.   | Max.    | Mean   | Std.  | Train   | Valid.  | Test    | Total     |
|--------------|--------|---------|--------|-------|---------|---------|---------|-----------|
| Duration     | 89.11m | 117.98m | 97.68m | 4.18m | 81.43h  | 10.26h  | 10.87h  | 102.56h   |
| #record      | 263    | 363     | 301    | 21    | 15,023  | 1,900   | 2,026   | 18,949    |
| #speaker     | 88     | 309     | 206    | 47    | 10,291  | 1,320   | 1,344   | 12,955    |
| #word        | 17,038 | 24,557  | 19,669 | 1,174 | 981,391 | 125,305 | 132,471 | 1,239,167 |
| #unique-word | 1,120  | 1,639   | 1,405  | 103   | 4,813   | 2,660   | 2,773   | 5,155     |
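The per-split totals above can, in principle, be recomputed from the record-level metadata. The sketch below is only illustrative: the file name `metadata.csv` and the CSV packaging are assumptions, not part of the official release; it relies on the attribute keys described in the next subsection.

```python
import pandas as pd

# Assumption: the metadata has been exported to a CSV with one row per record
# and the columns 'set', 'length', 'speakerID', and 'text' described below.
df = pd.read_csv("metadata.csv")

for split in ["train", "valid", "test"]:
    subset = df[df["set"] == split]
    hours = subset["length"].sum() / 3600                 # 'length' is in seconds
    n_records = len(subset)
    n_speakers = subset["speakerID"].nunique()
    n_words = subset["text"].str.split().str.len().sum()  # whitespace-tokenized word count
    print(f"{split}: {hours:.2f} h, {n_records} records, "
          f"{n_speakers} speakers, {n_words} words")
```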
- **Attributes** (see the parsing sketch after the table):

| Key | Description |
|-----|-------------|
| `set` | The data split the audio belongs to: `{'train', 'valid', 'test'}`. |
| `filename` | The audio filename, following the syntax `{province code}_{sequence number of audio}`. |
| `text` | Transcript of the audio. |
| `length` | Length of the audio in seconds. |
| `province` | The provincial dialect code. |
| `region` | The regional dialect: `{'North', 'Central', 'South'}`. |
| `speakerID` | The speaker identification code, following the syntax `spk_{province code}_{sequence number of speaker}`. |
| `gender` | Gender of the speaker (0 represents female, 1 represents male). |
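The `filename` and `speakerID` fields encode a province code plus a sequence number. The minimal sketch below shows how these identifiers could be split apart; the example record, including the province code `HNO`, is a made-up illustration and not taken from the released data.

```python
import re

# Hypothetical record using the attribute keys above; values are illustrative only.
record = {
    "set": "train",
    "filename": "HNO_0001",       # {province code}_{sequence number of audio}
    "speakerID": "spk_HNO_0001",  # spk_{province code}_{sequence number of speaker}
    "region": "North",
    "gender": 1,                  # 0 = female, 1 = male
}

def parse_filename(filename: str) -> dict:
    """Split a filename into its province code and audio sequence number."""
    province_code, audio_idx = filename.rsplit("_", 1)
    return {"province_code": province_code, "audio_index": int(audio_idx)}

def parse_speaker_id(speaker_id: str) -> dict:
    """Split a speakerID of the form spk_{province code}_{sequence number}."""
    match = re.fullmatch(r"spk_(.+)_(\d+)", speaker_id)
    if match is None:
        raise ValueError(f"Unexpected speakerID format: {speaker_id}")
    return {"province_code": match.group(1), "speaker_index": int(match.group(2))}

print(parse_filename(record["filename"]))     # {'province_code': 'HNO', 'audio_index': 1}
print(parse_speaker_id(record["speakerID"]))  # {'province_code': 'HNO', 'speaker_index': 1}
```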
- For further statistics, please refer to the paper.