wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of facebook/wav2vec2-base on the TIMIT dataset. It achieves the following results on the evaluation set; a minimal inference sketch follows the results:

  • Loss: 3.0392
  • Wer: 1.0
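
A minimal inference sketch, assuming the checkpoint is published on the Hub as digicazter/wav2vec2-base-timit-demo-google-colab and that the input is 16 kHz mono audio; the file name and greedy decoding are illustrative only:

```python
# Minimal sketch: transcribing a local audio file with this checkpoint.
# The Hub repo id and "example.wav" are assumptions for illustration.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "digicazter/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-base expects 16 kHz input, so resample on load.
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```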

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 400
  • num_epochs: 150
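
A hedged sketch of how these settings map onto transformers TrainingArguments; the output directory, evaluation cadence, and fp16 flag are assumptions not stated in the card, and Adam's betas and epsilon match the library defaults so they need not be set explicitly:

```python
# Hedged sketch: TrainingArguments mirroring the listed hyperparameters.
# output_dir, evaluation_strategy/eval_steps, and fp16 are assumptions;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-google-colab",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=400,
    num_train_epochs=150,
    evaluation_strategy="steps",  # assumption: matches the 200-step eval cadence below
    eval_steps=200,
    fp16=True,                    # assumption: typical for a Colab GPU run
)
```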

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.2993        | 8.0   | 200  | 3.0327          | 1.0 |
| 3.0806        | 16.0  | 400  | 3.0476          | 1.0 |
| 3.0219        | 24.0  | 600  | 3.0472          | 1.0 |
| 3.0179        | 32.0  | 800  | 3.0435          | 1.0 |
| 3.0157        | 40.0  | 1000 | 3.0546          | 1.0 |
| 3.0146        | 48.0  | 1200 | 3.0484          | 1.0 |
| 3.0139        | 56.0  | 1400 | 3.0344          | 1.0 |
| 3.0118        | 64.0  | 1600 | 3.0351          | 1.0 |
| 3.0114        | 72.0  | 1800 | 3.0559          | 1.0 |
| 3.0114        | 80.0  | 2000 | 3.0526          | 1.0 |
| 3.0108        | 88.0  | 2200 | 3.0417          | 1.0 |
| 3.0092        | 96.0  | 2400 | 3.0629          | 1.0 |
| 3.0089        | 104.0 | 2600 | 3.0352          | 1.0 |
| 3.0083        | 112.0 | 2800 | 3.0503          | 1.0 |
| 3.0078        | 120.0 | 3000 | 3.0529          | 1.0 |
| 3.0072        | 128.0 | 3200 | 3.0378          | 1.0 |
| 3.0068        | 136.0 | 3400 | 3.0481          | 1.0 |
| 3.0063        | 144.0 | 3600 | 3.0392          | 1.0 |
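
For reference, the Wer column above is word error rate on the evaluation set. A minimal sketch of how it is typically computed with the evaluate library; the transcript strings below are hypothetical:

```python
# Minimal sketch: computing word error rate (WER) with the evaluate library.
# The prediction and reference strings are hypothetical examples.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["she had her dark suit in greasy wash water"]
references = ["she had your dark suit in greasy wash water all year"]

# WER = (substitutions + insertions + deletions) / reference word count
# Here: 1 substitution + 2 deletions over 11 reference words ≈ 0.27
print(wer_metric.compute(predictions=predictions, references=references))
```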

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu117
  • Datasets 2.14.3
  • Tokenizers 0.13.3