Whisper Small En - Sridhar Vanga

This model is a fine-tuned version of openai/whisper-small on the sridhar1ga/UCLASS dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0974
  • WER: 41.4951
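
For reference, here is a minimal transcription sketch using the `transformers` pipeline (not part of the original card). The Hub id `sridhar1ga/whisper-small-en` is this checkpoint's repository, and the audio file path is a placeholder:

```python
# Minimal usage sketch (not from the model card): load this checkpoint
# with the Hugging Face `transformers` ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sridhar1ga/whisper-small-en",  # this checkpoint's Hub id
)

# Whisper expects 16 kHz audio; the pipeline handles decoding/resampling
# for common file formats. "sample.wav" is a placeholder path.
result = asr("sample.wav")
print(result["text"])
```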

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
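
These values map onto `transformers.Seq2SeqTrainingArguments` roughly as below. This is a sketch, not the original training script: `output_dir` and the save/eval cadence are assumptions (an eval interval of 500 steps is inferred from the results table), and the Adam betas/epsilon listed above are the Trainer defaults:

```python
# Sketch (assumed, not the card's original code) of the listed
# hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                        # "Native AMP" mixed precision
    eval_strategy="steps",            # assumed; results table shows eval every 500 steps
    eval_steps=500,
    save_steps=500,                   # assumed
)
```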

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER     |
|:-------------:|:--------:|:----:|:---------------:|:-------:|
| 0.0513        | 18.5185  | 500  | 1.5771          | 61.3218 |
| 0.0056        | 37.0370  | 1000 | 1.8013          | 42.1452 |
| 0.0089        | 55.5556  | 1500 | 1.8905          | 65.0054 |
| 0.0054        | 74.0741  | 2000 | 1.7860          | 44.9621 |
| 0.0016        | 92.5926  | 2500 | 1.9571          | 41.9285 |
| 0.0001        | 111.1111 | 3000 | 2.0281          | 41.0618 |
| 0.0001        | 129.6296 | 3500 | 2.0595          | 41.9285 |
| 0.0001        | 148.1481 | 4000 | 2.0805          | 41.3868 |
| 0.0001        | 166.6667 | 4500 | 2.0927          | 41.4951 |
| 0.0001        | 185.1852 | 5000 | 2.0974          | 41.4951 |
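
WER scores like those in the table are conventionally computed with the `evaluate` library; the sketch below assumes that approach (the example strings are hypothetical, and the raw fraction is scaled to a percentage to match the table):

```python
# Sketch of word error rate computation with `evaluate` (the card does
# not include its original metric code; this shows the standard approach).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]  # hypothetical model output
references = ["the cat sat on a mat"]     # hypothetical reference transcript

# `evaluate` returns WER as a fraction; multiply by 100 to match the
# percentage-style values reported in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```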

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1