---
license: mit
tags:
  - generated_from_trainer
base_model: facebook/w2v-bert-2.0
datasets:
  - common_voice_17_0
metrics:
  - wer
model-index:
  - name: w2v2-bert-urdu
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: common_voice_17_0
          type: common_voice_17_0
          config: ur
          split: test[:100]
          args: ur
        metrics:
          - type: wer
            value: 0.6273224043715847
            name: Wer
---

# w2v2-bert-urdu

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the Urdu (`ur`) subset of the common_voice_17_0 dataset. It achieves the following results on the evaluation set:

- Loss: 1.1498
- Wer: 0.6273
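
The model can be used with the `transformers` ASR pipeline. Below is a minimal usage sketch, assuming the checkpoint is hosted as `UmarRamzan/w2v2-bert-urdu` and that the input audio is 16 kHz mono; neither detail is stated in the card itself.

```python
# Minimal inference sketch (assumptions: repo id "UmarRamzan/w2v2-bert-urdu",
# 16 kHz mono input; the audio file name below is a placeholder).
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="UmarRamzan/w2v2-bert-urdu",
    device=0 if torch.cuda.is_available() else -1,
)

print(asr("sample_urdu.wav")["text"])
```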

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
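
For reference, a hedged sketch of how the settings above map onto `transformers.TrainingArguments`; the output directory and everything not listed above (data loading, collator, `Trainer` wiring) are assumptions, not taken from the original training script.

```python
# Sketch only: mirrors the hyperparameters listed above.
# "w2v2-bert-urdu" as output_dir is a placeholder, not the original value.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v2-bert-urdu",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```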

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.5968        | 0.1695 | 50   | 3.1737          | 1.0    |
| 3.1414        | 0.3390 | 100  | 2.9666          | 1.0    |
| 2.3694        | 0.5085 | 150  | 1.0788          | 0.6525 |
| 0.7692        | 0.6780 | 200  | 0.5647          | 0.4186 |
| 0.5488        | 0.8475 | 250  | 0.4491          | 0.3486 |
| 0.5568        | 1.0169 | 300  | 0.5883          | 0.7388 |
| 0.7925        | 1.1864 | 350  | 1.0338          | 0.7967 |
| 1.4791        | 1.3559 | 400  | 1.1474          | 0.6251 |
| 1.2758        | 1.5254 | 450  | 1.1359          | 0.6251 |
| 1.2763        | 1.6949 | 500  | 1.1497          | 0.6273 |
| 1.2789        | 1.8644 | 550  | 1.1498          | 0.6273 |
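
As a rough way to reproduce the reported WER, here is a hedged evaluation sketch. The repository id, the `mozilla-foundation/common_voice_17_0` dataset id, and the absence of any text normalization are assumptions, so the resulting score may differ slightly from the table above.

```python
# Evaluation sketch (assumptions: repo id "UmarRamzan/w2v2-bert-urdu",
# dataset "mozilla-foundation/common_voice_17_0" with config "ur";
# no text normalization is applied, so the score may not match exactly).
import torch
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="UmarRamzan/w2v2-bert-urdu",
    device=0 if torch.cuda.is_available() else -1,
)

# Same evaluation slice as in the metadata: the first 100 test examples
# (loading Common Voice may require accepting the dataset terms on the Hub).
ds = load_dataset("mozilla-foundation/common_voice_17_0", "ur", split="test[:100]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions = [asr(example["audio"])["text"] for example in ds]
references = ds["sentence"]

wer = evaluate.load("wer")
print("WER:", wer.compute(predictions=predictions, references=references))
```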

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1