# he-cantillation
This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0127
- Wer: 31.4035
- Avg Precision Exact: 0.5775
- Avg Recall Exact: 0.5799
- Avg F1 Exact: 0.5786
- Avg Precision Letter Shift: 0.5795
- Avg Recall Letter Shift: 0.5829
- Avg F1 Letter Shift: 0.5809
- Avg Precision Word Level: 0.5836
- Avg Recall Word Level: 0.5883
- Avg F1 Word Level: 0.5854
- Avg Precision Word Shift: 0.6875
- Avg Recall Word Shift: 0.7001
- Avg F1 Word Shift: 0.6917
- Precision Median Exact: 0.9545
- Recall Median Exact: 0.9621
- F1 Median Exact: 0.9613
- Precision Max Exact: 1.0
- Recall Max Exact: 1.0
- F1 Max Exact: 1.0
- Precision Min Exact: 0.0
- Recall Min Exact: 0.0
- F1 Min Exact: 0.0
- Precision Min Letter Shift: 0.0
- Recall Min Letter Shift: 0.0
- F1 Min Letter Shift: 0.0
- Precision Min Word Level: 0.0
- Recall Min Word Level: 0.0
- F1 Min Word Level: 0.0
- Precision Min Word Shift: 0.0
- Recall Min Word Shift: 0.0
- F1 Min Word Shift: 0.0
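The WER reported above is the word-level edit distance between the reference and the hypothesis transcript, divided by the number of reference words (expressed here as a percentage). A minimal pure-Python sketch of that metric — an illustrative helper, not the evaluation script used for this model:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words, as a percentage."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return 100.0 * prev[-1] / len(ref)
```

For example, `wer("a b c d", "a x c d")` yields 25.0 (one substitution out of four reference words).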
## Model description
More information needed
## Intended uses & limitations
More information needed
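No usage guidance is provided. Since this is a Whisper fine-tune, the checkpoint can presumably be loaded with the Transformers ASR pipeline; the following is an untested sketch (the model id is taken from this repository, and `audio.wav` is a placeholder for a local audio file):

```python
from transformers import pipeline

# Hypothetical usage sketch; model id taken from this repository.
asr = pipeline(
    "automatic-speech-recognition",
    model="cantillation/Teamim-medium_Random_WeightDecay-0.005_Augmented_New-Data_date-11-03-2025",
)

result = asr("audio.wav")  # placeholder path to a local audio file
print(result["text"])
```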
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
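With a linear scheduler and 1000 warmup steps, the learning rate ramps from 0 to 1e-05 over the first 1000 steps, then decays linearly to 0 at step 10000. A sketch of that shape, mirroring the standard behavior of Transformers' `get_linear_schedule_with_warmup` with the values listed above:

```python
# Values from the hyperparameter list above.
LEARNING_RATE = 1e-05
WARMUP_STEPS = 1000
TRAINING_STEPS = 10000

def lr_at(step: int) -> float:
    """Learning rate at a given step: linear warmup, then linear decay to zero."""
    if step < WARMUP_STEPS:
        return LEARNING_RATE * step / WARMUP_STEPS
    return LEARNING_RATE * max(0.0, (TRAINING_STEPS - step) / (TRAINING_STEPS - WARMUP_STEPS))
```

The peak learning rate of 1e-05 is reached at step 1000; halfway through the decay (step 5500) the rate is back down to 5e-06.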
### Training results
Training Loss | Epoch | Step | Validation Loss | Wer | Avg Precision Exact | Avg Recall Exact | Avg F1 Exact | Avg Precision Letter Shift | Avg Recall Letter Shift | Avg F1 Letter Shift | Avg Precision Word Level | Avg Recall Word Level | Avg F1 Word Level | Avg Precision Word Shift | Avg Recall Word Shift | Avg F1 Word Shift | Precision Median Exact | Recall Median Exact | F1 Median Exact | Precision Max Exact | Recall Max Exact | F1 Max Exact | Precision Min Exact | Recall Min Exact | F1 Min Exact | Precision Min Letter Shift | Recall Min Letter Shift | F1 Min Letter Shift | Precision Min Word Level | Recall Min Word Level | F1 Min Word Level | Precision Min Word Shift | Recall Min Word Shift | F1 Min Word Shift |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.3659 | 0.3101 | 1000 | 0.4443 | 55.3801 | 0.3563 | 0.3691 | 0.3620 | 0.3797 | 0.3929 | 0.3855 | 0.3883 | 0.4004 | 0.3937 | 0.5803 | 0.6096 | 0.5933 | 0.4104 | 0.4387 | 0.4264 | 0.8 | 0.8421 | 0.8205 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.05 | 0.0769 | 0.0606 |
0.1704 | 0.6202 | 2000 | 0.1796 | 108.5965 | 0.1652 | 0.1653 | 0.1651 | 0.1802 | 0.1801 | 0.1798 | 0.1877 | 0.1852 | 0.1857 | 0.3054 | 0.3010 | 0.3022 | 0.0 | 0.0 | 0.0 | 1.0 | 0.9545 | 0.9767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.1597 | 0.9302 | 3000 | 0.1151 | 28.7719 | 0.5223 | 0.5324 | 0.5270 | 0.5340 | 0.5441 | 0.5387 | 0.5437 | 0.5504 | 0.5466 | 0.6927 | 0.7055 | 0.6983 | 0.6929 | 0.7042 | 0.6956 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0831 | 1.2403 | 4000 | 0.0854 | 32.3977 | 0.4387 | 0.4427 | 0.4405 | 0.4455 | 0.4503 | 0.4477 | 0.4502 | 0.4553 | 0.4524 | 0.6281 | 0.6409 | 0.6335 | 0.1303 | 0.1366 | 0.1327 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0738 | 1.5504 | 5000 | 0.0646 | 22.3977 | 0.5615 | 0.5614 | 0.5613 | 0.5675 | 0.5674 | 0.5673 | 0.5725 | 0.5739 | 0.5730 | 0.7393 | 0.7458 | 0.7419 | 0.7907 | 0.7980 | 0.7952 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0913 | 1.8605 | 6000 | 0.0462 | 26.0234 | 0.5871 | 0.5894 | 0.5881 | 0.5930 | 0.5955 | 0.5941 | 0.5961 | 0.5991 | 0.5975 | 0.7016 | 0.7077 | 0.7033 | 0.8775 | 0.8819 | 0.8776 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0279 | 2.1705 | 7000 | 0.0330 | 37.3684 | 0.4606 | 0.4657 | 0.4630 | 0.4649 | 0.4704 | 0.4675 | 0.4698 | 0.4757 | 0.4726 | 0.6104 | 0.6275 | 0.6165 | 0.0889 | 0.0883 | 0.0885 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0581 | 2.4806 | 8000 | 0.0228 | 21.4620 | 0.6468 | 0.6467 | 0.6467 | 0.6506 | 0.6505 | 0.6504 | 0.6560 | 0.6569 | 0.6563 | 0.7490 | 0.7511 | 0.7496 | 0.9468 | 0.9524 | 0.9468 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0321 | 2.7907 | 9000 | 0.0170 | 25.7895 | 0.6249 | 0.6274 | 0.6260 | 0.6283 | 0.6311 | 0.6295 | 0.6324 | 0.6348 | 0.6335 | 0.7323 | 0.7476 | 0.7373 | 0.9456 | 0.9506 | 0.9498 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0479 | 3.1008 | 10000 | 0.0127 | 31.4035 | 0.5775 | 0.5799 | 0.5786 | 0.5795 | 0.5829 | 0.5809 | 0.5836 | 0.5883 | 0.5854 | 0.6875 | 0.7001 | 0.6917 | 0.9545 | 0.9621 | 0.9613 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 2.12.0
- Tokenizers 0.20.1
### Model tree

- Model: cantillation/Teamim-medium_Random_WeightDecay-0.005_Augmented_New-Data_date-11-03-2025
- Base model: openai/whisper-medium