---
library_name: transformers
language:
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - saurabhy27-outcomes/singlish_speech_corpus
metrics:
  - wer
model-index:
  - name: Whisper small singlish v2
    results: []
---

# Whisper small singlish v2

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the saurabhy27-outcomes/singlish_speech_corpus dataset. It achieves the following results on the evaluation set:

- Loss: 0.2000
- Wer: 33.7188
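The Wer figure is the word error rate: word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words, scaled to a percentage. A minimal pure-Python sketch of the metric (the example sentences below are hypothetical, not drawn from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage, via word-level Levenshtein distance."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub_cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub_cost,  # match or substitution
            )
    return 100.0 * d[-1][-1] / len(r)


print(wer("can come or not", "can come or not"))  # → 0.0
print(wer("wah the queue damn long", "wah the queue damn wrong"))  # → 20.0
```

In practice the 🤗 `evaluate` library's `wer` metric computes the same quantity.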

## Model description

More information needed

## Intended uses & limitations

More information needed
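Until the card documents usage, transcription can be sketched with the 🤗 Transformers `pipeline` API. The repo id and audio path below are placeholders, not confirmed by this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
# The model id is an assumed placeholder; substitute the actual repo id.
asr = pipeline(
    "automatic-speech-recognition",
    model="saurabhy27-outcomes/whisper-small-singlish-v2",
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

# Transcribe a local audio file (placeholder path).
result = asr("sample_singlish.wav")
print(result["text"])
```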

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
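These hyperparameters map onto a 🤗 Transformers `Seq2SeqTrainingArguments` configuration roughly like the following sketch; the output directory and any option not listed above are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-singlish-v2",  # assumed, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",       # linear decay after warmup
    warmup_steps=500,
    max_steps=5000,                   # train by step count, not epochs
    fp16=True,                        # "Native AMP" mixed precision
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit argument here.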

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1765        | 0.7407 | 500  | 0.1531          | 9.8226  |
| 0.0711        | 1.4815 | 1000 | 0.1481          | 32.2996 |
| 0.0415        | 2.2222 | 1500 | 0.1568          | 51.1892 |
| 0.0221        | 2.9630 | 2000 | 0.1582          | 33.1932 |
| 0.0064        | 3.7037 | 2500 | 0.1750          | 37.4310 |
| 0.0024        | 4.4444 | 3000 | 0.1856          | 40.0394 |
| 0.0017        | 5.1852 | 3500 | 0.1903          | 34.0539 |
| 0.0008        | 5.9259 | 4000 | 0.1948          | 34.5269 |
| 0.0006        | 6.6667 | 4500 | 0.1996          | 33.6662 |
| 0.0005        | 7.4074 | 5000 | 0.2000          | 33.7188 |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1