---
viewer: true
---
# VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain
## Description:
We introduce a Vietnamese speech recognition dataset for the medical domain comprising 16 hours of labeled medical speech, 1,000 hours of unlabeled medical speech, and 1,200 hours of unlabeled general-domain speech.
To the best of our knowledge, VietMed is by far **the world's largest public medical speech recognition dataset** in 7 aspects:
total duration, number of speakers, diseases, recording conditions, speaker roles, unique medical terms, and accents.
VietMed is also by far the largest public Vietnamese speech dataset in terms of total duration.
Additionally, we are the first to present a medical ASR dataset covering all ICD-10 disease groups and all accents within a country.
Please cite this paper: https://arxiv.org/abs/2404.05659

```
@inproceedings{VietMed_dataset,
    title={VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain},
    author={Khai Le-Duc},
    year={2024},
    booktitle={Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
}
```
To load the labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed) and [Papers with Code](https://paperswithcode.com/dataset/vietmed) pages.
For the full dataset (labeled + unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing).
## Limitations:
Since this dataset is human-labeled, one or two words at the start or end of a recording may be missing from the transcript.
This is the nature of human-labeled datasets: annotators cannot reliably catch words spoken in under a second.
In contrast, forced alignment could address this, because machines can "listen" at a resolution of 10-20 ms.
However, a forced aligner only learns what it is taught by humans.
Therefore, no transcript is perfect. We will conduct human-machine collaboration to produce more accurate transcripts in a follow-up paper.
## Contact:
If any links are broken, please contact me and I will fix them!
Thanks [Phan Phuc](https://www.linkedin.com/in/pphuc/) for dataset viewer <3
```
Le Duc Khai
University of Toronto, Canada
Email: [email protected]
GitHub: https://github.com/leduckhai
```