---
library_name: transformers
language:
- uz
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
- automatic-speech-recognition
- whisper
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small Uzbek
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      args: 'config: uz, split: test'
    metrics:
    - type: wer
      value: 35.8660
      name: Wer
---

# Whisper Small Uzbek

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3776
- Wer: 35.8660

## Model description

This model adapts OpenAI's multilingual Whisper Small checkpoint to Uzbek automatic speech recognition by fine-tuning it on transcribed Uzbek speech from Common Voice 17.0.

## Intended uses & limitations

The model is intended for transcribing Uzbek speech. With a word error rate of roughly 36% on the Common Voice 17.0 test split, transcripts will still contain frequent errors, so outputs should be reviewed before downstream use.

## Training and evaluation data

Training and evaluation use the Uzbek (`uz`) configuration of [Common Voice 17.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0); the results reported above are computed on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- training_steps: 5500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.913         | 0.2   | 500  | 0.8213          | 62.5843 |
| 0.6404        | 0.4   | 1000 | 0.6082          | 51.8716 |
| 0.5734        | 0.6   | 1500 | 0.5458          | 48.0513 |
| 0.5051        | 0.8   | 2000 | 0.4846          | 43.8649 |
| 0.4407        | 1.0   | 2500 | 0.4483          | 41.3901 |
| 0.3436        | 1.2   | 3000 | 0.4321          | 41.0277 |
| 0.3092        | 1.4   | 3500 | 0.4184          | 40.1141 |
| 0.2861        | 1.6   | 4000 | 0.4091          | 39.9753 |
| 0.289         | 1.8   | 4500 | 0.3811          | 36.7950 |
| 0.2816        | 2.0   | 5000 | 0.3730          | 36.7102 |
| 0.1547        | 2.2   | 5500 | 0.3776          | 35.8660 |

### Framework versions

- Transformers 4.47.0
- PyTorch 2.1.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
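
## How to use

A minimal transcription sketch using the `transformers` automatic-speech-recognition pipeline. The repository id `<your-username>/whisper-small-uzbek` is a placeholder for this model's actual Hub id, and the audio file name is illustrative.

```python
from transformers import pipeline

# Load this fine-tuned checkpoint into an ASR pipeline.
# Replace the placeholder repo id with the actual one for this model.
asr = pipeline(
    "automatic-speech-recognition",
    model="<your-username>/whisper-small-uzbek",
)

# Transcribe a local audio file (Whisper expects 16 kHz audio; the pipeline resamples as needed).
result = asr(
    "sample_uz.wav",
    generate_kwargs={"language": "uz", "task": "transcribe"},
)
print(result["text"])
```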
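
## Loading the evaluation data

A sketch of loading the Uzbek test split used for evaluation with the `datasets` library. Common Voice 17.0 is gated on the Hub, so you may first need to accept its terms and authenticate (e.g. via `huggingface-cli login`); exact behavior can depend on your `datasets` version.

```python
from datasets import load_dataset, Audio

# Uzbek ("uz") configuration, test split, as reported in the results above.
common_voice_test = load_dataset(
    "mozilla-foundation/common_voice_17_0", "uz", split="test"
)

# Resample audio to 16 kHz, the sampling rate Whisper was trained on.
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))

print(common_voice_test[0]["sentence"])
```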
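
## Reproducing the training configuration

A sketch (not the exact training script) of how the hyperparameters listed above map onto `Seq2SeqTrainingArguments`. The `output_dir`, save/eval strategy, and logging settings are assumptions, not values taken from this card; the evaluation cadence of 500 steps matches the training results table.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-uz",      # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    max_steps=5500,
    fp16=True,                            # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=500,
    predict_with_generate=True,           # needed to compute WER during evaluation
    report_to="none",
)
```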
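
## Computing WER

A sketch of how the reported word error rate is typically computed with the `evaluate` library; the example strings are made up for illustration.

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["salom dunyo"]          # hypothetical model output
references = ["salom dunyo bugun"]     # hypothetical ground-truth transcript

# Reported as a percentage, as in the results table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```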