---
language:
  - 'no'
license: apache-2.0
base_model: NbAiLab/nb-whisper-medium-v0.8-vad3
tags:
  - audio
  - asr
  - automatic-speech-recognition
  - hf-asr-leaderboard
model-index:
  - name: nb-whisper-medium-v0.8-vad3-verbatim
    results: []
---

# nb-whisper-medium-v0.8-vad3-verbatim

This model is a fine-tuned version of NbAiLab/nb-whisper-medium-v0.8-vad3 on the NbAiLab/NPSC dataset. It achieves the following results on the evaluation set:

- step: 249
- validation_loss: 0.6296
- train_loss: 0.4324
- validation_wer: 8.2769
- validation_cer: 2.8193
- validation_exact_wer: 8.4048
- validation_exact_cer: 2.8363
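
For quick experimentation, the checkpoint can be loaded with the 🤗 Transformers ASR pipeline. This is a minimal sketch only: the Hub id `NbAiLab/nb-whisper-medium-v0.8-vad3-verbatim` is assumed from the model name and base-model organisation above, and `audio.mp3` stands in for any local audio file.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (Hub id assumed from the model name above).
asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-whisper-medium-v0.8-vad3-verbatim",
)

# Transcribe a local audio file; chunking handles clips longer than 30 seconds.
result = asr(
    "audio.mp3",
    chunk_length_s=30,
    generate_kwargs={"task": "transcribe", "language": "no"},
)
print(result["text"])
```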

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative optimizer sketch follows the list):

- learning_rate: 2.5e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 32
- total_train_batch_size_per_node: 128
- total_train_batch_size: 1024
- total_optimization_steps: 250
- starting_optimization_step: None
- finishing_optimization_step: 250
- num_train_dataset_workers: 32
- num_hosts: 8
- total_num_training_examples: 256,000
- steps_per_epoch: 45
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
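
The actual Flax/JAX training script is not reproduced in this card. As a hedged illustration only, the optimizer settings listed above could be expressed with `optax` roughly as below; the peak learning rate, betas, epsilon, weight decay, and step count are taken from the list, while the zero end value of the schedule (and the absence of warmup) are assumptions.

```python
import optax

# Linear decay from the listed peak learning rate over the 250 optimization steps.
# (Whether the actual run used warmup or a non-zero end value is not stated in this card.)
lr_schedule = optax.linear_schedule(
    init_value=2.5e-5,
    end_value=0.0,
    transition_steps=250,
)

# AdamW with the betas, epsilon, and weight decay listed above.
optimizer = optax.adamw(
    learning_rate=lr_schedule,
    b1=0.9,
    b2=0.98,
    eps=1e-6,
    weight_decay=0.01,
)
```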

### Training results

| step | validation_loss | train_loss | validation_wer | validation_cer | validation_exact_wer | validation_exact_cer |
|-----:|----------------:|-----------:|---------------:|---------------:|---------------------:|---------------------:|
| 0    | 1.5895          | 1.4606     | 17.5605        | 10.5650        | 33.0099              | 13.8415              |
| 40   | 0.6409          | 0.5035     | 9.1662         | 3.0250         | 9.3637               | 3.0542               |
| 80   | 0.6309          | 0.4790     | 8.7132         | 2.9755         | 8.8730               | 2.9952               |
| 120  | 0.6250          | 0.4480     | 8.4503         | 2.8812         | 8.6079               | 2.9019               |
| 160  | 0.6294          | 0.4423     | 8.4000         | 2.8641         | 8.5345               | 2.8810               |
| 200  | 0.6276          | 0.4467     | 8.3161         | 2.8345         | 8.4668               | 2.8534               |
| 240  | 0.6287          | 0.4376     | 8.2266         | 2.7917         | 8.3597               | 2.8087               |
| 249  | 0.6296          | 0.4324     | 8.2769         | 2.8193         | 8.4048               | 2.8363               |
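
Word and character error rates such as those above can in principle be reproduced with the Hugging Face `evaluate` library. The sketch below is illustrative only; `predictions` and `references` stand in for decoded transcripts and reference texts from the NbAiLab/NPSC evaluation split.

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical decoded outputs and references; in practice these come from
# running the model over the NbAiLab/NPSC evaluation split.
predictions = ["dette er et eksempel"]
references = ["dette er eit eksempel"]

# Metrics are returned as fractions; multiply by 100 to match the table above.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```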

### Framework versions

- Transformers 4.34.1
- Datasets 2.16.1
- Tokenizers 0.14.1