Update README.md
README.md
This repo releases the Robust HyPoradise dataset in paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition".
**UPDATE (Apr-18-2024):** We have released the training data, which follows the same format as the test data.

Considering the file size, the uploaded training data does not contain the speech features (they are vast).
Alternatively, we have provided a script named ***add_speech_feats_to_train_data.py*** to generate them from raw speech (.wav).
You need to specify the raw speech path for each utterance id in the script.
Here are the available speech data: [CHiME-4](https://entuedu-my.sharepoint.com/:f:/g/personal/yuchen005_e_ntu_edu_sg/EuLgMQbjrIJHk7dKPkjcDMIB4SYgXKKP8VBxyiZk3qgdgA),
[VB-DEMAND](https://datashare.ed.ac.uk/handle/10283/2791), [LS-FreeSound](https://github.com/archiki/Robust-E2E-ASR), [NOIZEUS](https://ecs.utdallas.edu/loizou/speech/noizeus/).
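As an illustration of specifying the raw speech path from an utterance id, a minimal sketch of such a mapping (the directory layout, function name, and id convention are hypothetical — adapt them to wherever you stored the corpora):

```python
# Hypothetical root of the downloaded speech corpora.
SPEECH_ROOT = "/data/robust_hp"

def utt_id_to_wav(utt_id: str) -> str:
    # Hypothetical convention: "<corpus>_<utterance>" maps to
    # <SPEECH_ROOT>/<corpus>/<utterance>.wav
    corpus, utt = utt_id.split("_", 1)
    return f"{SPEECH_ROOT}/{corpus}/{utt}.wav"

print(utt_id_to_wav("chime4_F01_22GC010A_BUS"))
# /data/robust_hp/chime4/F01_22GC010A_BUS.wav
```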
**IMPORTANT:** The vast speech feature size mentioned above arises because Whisper requires a fixed input length of 30 s, which is too long. Please do the following step to remove this constraint before running ***add_speech_feats_to_train_data.py***:

- Modify the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) `x = (x + self.positional_embedding).to(x.dtype)` to be `x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)`
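The one-line patch above slices Whisper's encoder positional embedding to the actual sequence length, so utterances shorter than 30 s no longer need padding. A minimal sketch of the effect, using NumPy stand-ins for the tensors (shapes follow Whisper's base configuration; the helper function is illustrative, not part of the codebase):

```python
import numpy as np

# Whisper's encoder positional embedding spans 1500 frames: 30 s of audio
# gives a 3000-frame mel spectrogram, downsampled 2x by the conv front-end.
positional_embedding = np.zeros((1500, 512), dtype=np.float32)

def add_pos(x):
    # Original line adds the full 1500-frame embedding, forcing 30 s inputs.
    # Patched line slices it to the input's actual sequence length:
    return (x + positional_embedding[: x.shape[1], :]).astype(x.dtype)

x = np.random.randn(1, 250, 512).astype(np.float32)  # e.g. a 5 s utterance
print(add_pos(x).shape)  # (1, 250, 512)
```

With the unpatched line, broadcasting a (1, 250, 512) input against the full (1500, 512) embedding would fail, which is why the patch must come first.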
**UPDATE (Apr-29-2024):** To support customization, we release the script ***generate_robust_hp.py*** for users to generate train/test data from their own ASR datasets.

We also release two necessary packages for generation: "my_jiwer" and "decoding.py".
To summarize, you will need to do the following three steps before running ***generate_robust_hp.py***:

- Modify the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) `x = (x + self.positional_embedding).to(x.dtype)` to be `x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)`
- Specify the absolute path of the "my_jiwer" directory in ***generate_robust_hp.py*** (via sys.path.append)
- Put our whisper decoding script "decoding.py" under your locally installed whisper directory "<your-path>/whisper/whisper"
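The second step above amounts to making the bundled "my_jiwer" importable before any pip-installed jiwer; a minimal sketch (the path is a placeholder you must replace with your own):

```python
import sys

# Placeholder: point this at wherever you placed the my_jiwer directory.
MY_JIWER_DIR = "/absolute/path/to/my_jiwer"

# Prepending keeps the bundled my_jiwer ahead of any pip-installed jiwer
# on the module search path.
if MY_JIWER_DIR not in sys.path:
    sys.path.insert(0, MY_JIWER_DIR)

print(MY_JIWER_DIR in sys.path)  # True
```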
If you find this work related or useful for your research, please kindly consider citing our ICLR 2024 paper. Thank you.
```bib
@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```