Whisper Tiny Fine-Tuning Experiment
My experiment on fine-tuning an ASR model (Whisper Tiny).
This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set.
This model was built as part of a school project; because compute was limited, it was trained on a shuffled 100k-row subset of the training split.
Additional information can be found on GitHub: HanCreation/Whisper-Tiny-German
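The shuffled 100k-row subsetting described above can be sketched as follows. This is an illustrative stand-in, not the experiment's actual code: the helper name, the fixed seed, and the placeholder row count are all assumptions.

```python
import random

def shuffled_subset(n_rows: int, k: int = 100_000, seed: int = 42) -> list[int]:
    """Return up to k row indices drawn from a shuffled range of n_rows.

    Stand-in for selecting a shuffled 100k-row training subset; the real
    experiment drew from the Common Voice 11.0 training split.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility (assumed)
    indices = list(range(n_rows))
    rng.shuffle(indices)               # shuffle before truncating
    return indices[:min(k, n_rows)]

subset = shuffled_subset(500_000)
print(len(subset))  # 100000
```

Shuffling before truncating matters here: Common Voice rows are grouped by speaker, so taking the first 100k rows unshuffled would bias the subset toward a few voices.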
The following hyperparameters were used during training:
Training results:

| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 0.4824 | 0.16 | 1000 | 0.6305 | 35.5019 |
| 0.4284 | 0.32 | 2000 | 0.5855 | 33.3615 |
| 0.4152 | 0.48 | 3000 | 0.5610 | 32.1068 |
| 0.4387 | 0.64 | 4000 | 0.5505 | 31.4346 |
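The WER column above is word error rate, reported as a percentage: word-level edit distance between reference and hypothesis, divided by the reference length. The evaluation presumably used a standard library implementation; a minimal pure-Python sketch of the metric looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate (%) via Levenshtein distance over word tokens."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)

print(round(wer("das ist ein test", "das ist test"), 2))  # 25.0
```

By this measure, the model at step 4000 transcribes roughly one word in three incorrectly (31.43 WER), improving steadily from 35.50 at step 1000.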
Base model: openai/whisper-tiny