yuchen005 committed
Commit 5e8e12d
1 Parent(s): 66619d3

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -34,24 +34,24 @@ Here are the available speech data: [CHiME-4](https://entuedu-my.sharepoint.com/
 
 
 **IMPORTANT:** The vast speech feature size mentioned above is because Whisper requires a fixed input length of 30s, which is too long. Please complete the following step to remove this constraint before running ***add_speech_feats_to_train_data.py***:
-- Modified the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) "x = (x + self.positional_embedding).to(x.dtype)" to be "x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)"
+- Modify the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) `x = (x + self.positional_embedding).to(x.dtype)` to be `x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)`
 
 
 **UPDATE (Apr-29-2024):** To support customization, we release the script ***generate_robust_hp.py*** for users to generate train/test data from their own ASR datasets.
 We also release two necessary packages for generation: "my_jiwer" and "decoding.py".
 To summarize, you will need to complete the following three steps before running ***generate_robust_hp.py***:
-- Modified the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) "x = (x + self.positional_embedding).to(x.dtype)" to be "x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)"
-- Specify the absolute path of "my_jiwer" directory in ***generate_robust_hp.py*** (sys.path.append)
-- Put our whisper decoding script "decoding.py" under your locally installed whisper directory "<your-path>/whisper/whisper"
+- Modify the [whisper model code](https://github.com/openai/whisper/blob/main/whisper/model.py#L167) `x = (x + self.positional_embedding).to(x.dtype)` to be `x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)`
+- Specify the absolute path of the "my_jiwer" directory in ***generate_robust_hp.py*** (`sys.path.append()`)
+- Put our whisper decoding script "decoding.py" under your locally installed whisper directory "\<your-path\>/whisper/whisper"
 
 
 If you find this work related or useful to your research, please kindly consider citing our ICLR 2024 paper. Thank you.
 
-"""bib
+```bib
 @inproceedings{hu2024large,
 title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
 author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
 booktitle={International Conference on Learning Representations},
 year={2024}
 }
-"""
+```
 
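The remaining two setup steps amount to a couple of lines of Python. The paths below are hypothetical placeholders (the repo only specifies adding the absolute path of "my_jiwer" via `sys.path.append` inside ***generate_robust_hp.py*** and placing "decoding.py" in your local whisper package directory), so adapt them to your own checkout:

```python
import os
import shutil
import sys

import whisper

# Step 2 (hypothetical path): make the bundled "my_jiwer" package importable,
# as done via sys.path.append inside generate_robust_hp.py.
sys.path.append("/abs/path/to/my_jiwer")

# Step 3: copy the provided decoding.py (assumed to be in the current directory)
# into the locally installed whisper package, i.e. <your-path>/whisper/whisper.
# This overwrites Whisper's stock decoding.py, so keep a backup if needed.
whisper_pkg_dir = os.path.dirname(whisper.__file__)
shutil.copy("decoding.py", os.path.join(whisper_pkg_dir, "decoding.py"))
```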