---
library_name: transformers
language:
- sn
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- DigitalUmuganda/Afrivoice
metrics:
- wer
model-index:
- name: Whisper Small Shona - Beijuka Bruno
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Afrivoice_shona
      type: DigitalUmuganda/Afrivoice
    metrics:
    - name: Wer
      type: wer
      value: 0.2574299634591961
---
# Whisper Small Shona - Beijuka Bruno

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Afrivoice_shona dataset.
It achieves the following results on the evaluation set (a sketch for recomputing these metrics follows the list):
- Loss: 0.5995
- Wer: 0.2574
- Cer: 0.0468
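
The WER and CER above can be recomputed with the Hugging Face `evaluate` library. The sketch below uses illustrative strings rather than the actual evaluation transcripts; loading the `wer` and `cer` metrics requires the `jiwer` package.

```python
import evaluate

wer_metric = evaluate.load("wer")  # word error rate; needs `jiwer` installed
cer_metric = evaluate.load("cer")  # character error rate

# Illustrative placeholders; a real evaluation would use model transcripts
# and reference transcripts from the Afrivoice_shona evaluation split.
predictions = ["mhoro nyika"]
references = ["mhoroi nyika"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```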

## Model description

This model is [openai/whisper-small](https://huggingface.co/openai/whisper-small) fine-tuned for automatic speech recognition of Shona (`sn`). It keeps the Whisper Small encoder-decoder architecture (about 244M parameters) and was adapted using the Afrivoice Shona data.

## Intended uses & limitations

The model is intended for transcribing Shona speech to text. As with other Whisper fine-tunes, accuracy will degrade on audio that differs from the training distribution (noisy recordings, code-switched or strongly accented speech, domains not covered by Afrivoice), and with a word error rate of roughly 26%, transcripts should be reviewed before use in sensitive applications.
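
A minimal transcription sketch using the Transformers `pipeline` API is shown below. The repository id is a placeholder, since this card does not state where the checkpoint is published, and the `generate_kwargs` assume a recent Transformers release.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-shona",
)

# Whisper operates on 16 kHz audio; the pipeline resamples input files.
result = asr(
    "sample.wav",  # hypothetical audio file
    generate_kwargs={"language": "shona", "task": "transcribe"},
)
print(result["text"])
```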

## Training and evaluation data

The model was trained and evaluated on the Shona portion of the [DigitalUmuganda/Afrivoice](https://huggingface.co/datasets/DigitalUmuganda/Afrivoice) dataset (Afrivoice_shona); see the dataset card for collection details and split sizes.
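
The data can presumably be pulled with the `datasets` library. The `"shona"` configuration name below is a guess, so check the dataset card for the actual config and split names.

```python
from datasets import load_dataset

# "shona" is an assumed config name; verify it on the dataset card.
afrivoice = load_dataset("DigitalUmuganda/Afrivoice", "shona")
print(afrivoice)  # inspect the available splits and features
```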

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
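
Expressed as `Seq2SeqTrainingArguments`, the configuration would look roughly like this; `output_dir` is a placeholder and the original training script is not part of this card.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-shona",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    # Adam settings matching the values reported above (library defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```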

### Training results

Training was configured for 100 epochs, but the logged results end at epoch 28.

| Training Loss | Epoch | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.5547        | 1.0   | 2204  | 0.3322          | 0.2800 | 0.0544 |
| 0.2437        | 2.0   | 4408  | 0.3013          | 0.2732 | 0.0572 |
| 0.1557        | 3.0   | 6612  | 0.3104          | 0.2698 | 0.0526 |
| 0.0954        | 4.0   | 8816  | 0.3244          | 0.2713 | 0.0520 |
| 0.0536        | 5.0   | 11020 | 0.3554          | 0.2719 | 0.0509 |
| 0.0297        | 6.0   | 13224 | 0.3843          | 0.2679 | 0.0505 |
| 0.0185        | 7.0   | 15428 | 0.4178          | 0.2679 | 0.0502 |
| 0.0132        | 8.0   | 17632 | 0.4407          | 0.2642 | 0.0502 |
| 0.0108        | 9.0   | 19836 | 0.4662          | 0.2653 | 0.0498 |
| 0.0087        | 10.0  | 22040 | 0.4724          | 0.2628 | 0.0498 |
| 0.0077        | 11.0  | 24244 | 0.4939          | 0.2652 | 0.0495 |
| 0.0068        | 12.0  | 26448 | 0.4969          | 0.2666 | 0.0503 |
| 0.0062        | 13.0  | 28652 | 0.5177          | 0.2681 | 0.0495 |
| 0.0060        | 14.0  | 30856 | 0.5313          | 0.2694 | 0.0513 |
| 0.0053        | 15.0  | 33060 | 0.5357          | 0.2644 | 0.0491 |
| 0.0049        | 16.0  | 35264 | 0.5420          | 0.2636 | 0.0499 |
| 0.0042        | 17.0  | 37468 | 0.5464          | 0.2620 | 0.0503 |
| 0.0042        | 18.0  | 39672 | 0.5526          | 0.2563 | 0.0485 |
| 0.0040        | 19.0  | 41876 | 0.5666          | 0.2639 | 0.0501 |
| 0.0038        | 20.0  | 44080 | 0.5678          | 0.2631 | 0.0501 |
| 0.0036        | 21.0  | 46284 | 0.5750          | 0.2672 | 0.0495 |
| 0.0036        | 22.0  | 48488 | 0.5739          | 0.2562 | 0.0490 |
| 0.0032        | 23.0  | 50692 | 0.5869          | 0.2639 | 0.0499 |
| 0.0030        | 24.0  | 52896 | 0.5912          | 0.2625 | 0.0491 |
| 0.0030        | 25.0  | 55100 | 0.5896          | 0.2596 | 0.0494 |
| 0.0032        | 26.0  | 57304 | 0.5945          | 0.2596 | 0.0495 |
| 0.0025        | 27.0  | 59508 | 0.6071          | 0.2630 | 0.0525 |
| 0.0025        | 28.0  | 61712 | 0.6165          | 0.2560 | 0.0483 |

### Framework versions

- Transformers 4.45.2
- PyTorch 2.2.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1