leduckhai committed (verified)
Commit 56ee00d · 1 Parent(s): b98c9e7

Update README.md

Files changed (1):
  1. README.md +9 -9
README.md CHANGED
@@ -156,25 +156,25 @@ configs:
  ## Description:
  Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants.
  This technology enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics.
- In this work, we introduce \textit{MultiMed}, a collection of small-to-large end-to-end ASR models for the medical domain, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese, together with the corresponding real-world ASR dataset.
- To our best knowledge, \textit{MultiMed} stands as the largest and the first multilingual medical ASR dataset, in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.
+ In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end ASR models for the medical domain, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese, together with the corresponding real-world ASR dataset.
+ To our best knowledge, *MultiMed* stands as **the largest and the first multilingual medical ASR dataset**, in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.
 
 
- Please cite this paper: https://arxiv.org/abs/2404.05659
+ Please cite this paper: **TODO**
 
- @inproceedings{VietMed_dataset,
- title={VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain},
+ @inproceedings{**TODO**,
+ title={**TODO**},
  author={Khai Le-Duc},
  year={2024},
- booktitle = {Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
+ booktitle = {**TODO**},
  }
- To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed), [Paperswithcodes](https://paperswithcode.com/dataset/vietmed).
+ **TODO** To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed), [Paperswithcodes](https://paperswithcode.com/dataset/vietmed).
 
- For full dataset (labeled data + unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing)
+ **TODO** For full dataset (labeled data + unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing)
 
  ## Limitations:
 
- Since this dataset is human-labeled, 1-2 ending/starting words present in the recording might not be present in the transcript.
+ **TODO** Since this dataset is human-labeled, 1-2 ending/starting words present in the recording might not be present in the transcript.
  That's the nature of human-labeled dataset, in which humans can't distinguish words that are faster than 1 second.
  In contrast, forced alignment could solve this problem because machines can "listen" words in 10ms-20ms.
  However, forced alignment only learns what it is taught by humans.