---
library_name: transformers
language:
  - sh
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - DigitalUmuganda/Afrivoice
metrics:
  - wer
model-index:
  - name: Whisper Small Shona - Beijuka Bruno
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Afrivoice_shona
          type: DigitalUmuganda/Afrivoice
          args: 'config: sh, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 0.40968948928519366
---

Whisper Small Shona - Beijuka Bruno

This model is a fine-tuned version of openai/whisper-small on the Afrivoice_shona dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.9407
  • Wer: 0.4097
  • Cer: 0.1044
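
The snippet below is a minimal inference sketch using the transformers `pipeline` API. The repo id `Beijuka/whisper-small-shona` is a placeholder assumption (this card does not state the published checkpoint path); substitute the actual model location or a local directory.

```python
# Minimal inference sketch. Assumption: "Beijuka/whisper-small-shona" is a
# placeholder repo id not confirmed by this card; replace it with the real
# checkpoint path or a local directory containing the fine-tuned weights.
import torch
from transformers import pipeline

model_id = "Beijuka/whisper-small-shona"  # placeholder

asr = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file (decoding relies on ffmpeg being available;
# Whisper models expect 16 kHz audio, which the pipeline resamples to).
result = asr("sample_shona.wav")
print(result["text"])
```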

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
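
For reference, the sketch below shows one way these hyperparameters could be expressed with `Seq2SeqTrainingArguments` from transformers. It is not the authors' training script; the output directory, evaluation cadence, and generation setting are assumptions.

```python
# Sketch only: mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir, eval_strategy, and predict_with_generate are assumptions; they are
# not stated in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-shona",  # assumed path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    eval_strategy="epoch",        # assumed; the results table reports per-epoch evaluation
    predict_with_generate=True,   # assumed; needed to compute WER/CER during evaluation
)
```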

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.828         | 1.0   | 876   | 0.4722          | 0.3663 | 0.0789 |
| 0.3137        | 2.0   | 1752  | 0.4031          | 0.3228 | 0.0660 |
| 0.182         | 3.0   | 2628  | 0.4081          | 0.3196 | 0.0667 |
| 0.102         | 4.0   | 3504  | 0.4312          | 0.3207 | 0.0658 |
| 0.0537        | 5.0   | 4380  | 0.4600          | 0.3150 | 0.0617 |
| 0.0292        | 6.0   | 5256  | 0.4840          | 0.3196 | 0.0669 |
| 0.0178        | 7.0   | 6132  | 0.5033          | 0.3123 | 0.0615 |
| 0.0118        | 8.0   | 7008  | 0.5443          | 0.3068 | 0.0621 |
| 0.0092        | 9.0   | 7884  | 0.5597          | 0.3062 | 0.0610 |
| 0.0072        | 10.0  | 8760  | 0.5778          | 0.3176 | 0.0641 |
| 0.0063        | 11.0  | 9636  | 0.5991          | 0.3116 | 0.0623 |
| 0.006         | 12.0  | 10512 | 0.5886          | 0.2986 | 0.0597 |
| 0.0053        | 13.0  | 11388 | 0.6122          | 0.3099 | 0.0625 |
| 0.0053        | 14.0  | 12264 | 0.6129          | 0.3070 | 0.0614 |
| 0.0054        | 15.0  | 13140 | 0.6246          | 0.2990 | 0.0604 |
| 0.0051        | 16.0  | 14016 | 0.6465          | 0.3105 | 0.0608 |
| 0.0037        | 17.0  | 14892 | 0.6433          | 0.3040 | 0.0620 |
| 0.0036        | 18.0  | 15768 | 0.6522          | 0.3039 | 0.0613 |
| 0.004         | 19.0  | 16644 | 0.6465          | 0.2983 | 0.0595 |
| 0.0035        | 20.0  | 17520 | 0.6700          | 0.3049 | 0.0603 |
| 0.0035        | 21.0  | 18396 | 0.6835          | 0.2988 | 0.0596 |
| 0.0027        | 22.0  | 19272 | 0.6802          | 0.3068 | 0.0653 |
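
The Wer and Cer columns above are word and character error rates (lower is better). The sketch below shows how such metrics are commonly computed with the `evaluate` library; this card does not state the exact evaluation tooling, and the predictions and references are toy examples.

```python
# Sketch of WER/CER computation with the evaluate library (assumed tooling;
# requires `pip install evaluate jiwer`). The strings below are toy examples.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["mhoro nyika"]       # hypothetical model transcription
references = ["mhoro nyika yose"]   # hypothetical ground-truth transcript

wer = wer_metric.compute(predictions=predictions, references=references)
cer = cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```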

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.2.0+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1