---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---
|
|
|
To reproduce this run, first run `get_ctc_tokenizer.py` to train the CTC tokenizer, then execute the following command to train the CTC system:
|
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
  --model_name_or_path="esb/wav2vec2-ctc-pretrained" \
  --tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \
  --dataset_name="esb/datasets" \
  --dataset_config_name="earnings22" \
  --output_dir="./" \
  --wandb_project="wav2vec2-ctc" \
  --wandb_name="wav2vec2-ctc-earnings22" \
  --max_steps="50000" \
  --save_steps="10000" \
  --eval_steps="10000" \
  --learning_rate="3e-4" \
  --logging_steps="25" \
  --warmup_steps="5000" \
  --preprocessing_num_workers="1" \
  --hidden_dropout="0.2" \
  --activation_dropout="0.2" \
  --feat_proj_dropout="0.2" \
  --do_train \
  --do_eval \
  --do_predict \
  --overwrite_output_dir \
  --gradient_checkpointing \
  --freeze_feature_encoder \
  --push_to_hub \
  --use_auth_token
```
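For reference, the dataset and config passed via `--dataset_name` and `--dataset_config_name` can be inspected directly with 🤗 Datasets. Below is a minimal sketch; the `train` split name is an assumption, and authentication (e.g. via `huggingface-cli login`, cf. `--use_auth_token` above) may be required:

```python
from datasets import load_dataset

# Same dataset and config as passed to run_flax_speech_recognition_ctc.py.
# Split name "train" is an assumption for illustration purposes.
earnings22 = load_dataset("esb/datasets", "earnings22", split="train")

# Inspect the features (audio, transcription, etc.) before training.
print(earnings22)
```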
|
|