---
language:
- hi
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Hindi Large-v2 - Vasista Sai Lodagala
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: google/fleurs
      type: google/fleurs
      config: hi_in
      split: test
    metrics:
    - type: wer
      value: 20.00
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
    metrics:
    - type: wer
      value: 20.00
      name: WER
---
# Whisper Hindi Large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on Hindi data drawn from multiple publicly available ASR corpora.
It was fine-tuned at Speech Lab, IIT Madras, as part of the Whisper fine-tuning sprint.
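The snippet below is a minimal usage sketch, not taken from the original card: it loads the model through the Hugging Face Transformers ASR pipeline, where the repository id `vasista22/whisper-hindi-large-v2` is inferred from the card title and `sample.wav` is a placeholder audio path.

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline; chunking lets it
# handle audio longer than Whisper's 30-second input window.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
transcribe = pipeline(
    task="automatic-speech-recognition",
    model="vasista22/whisper-hindi-large-v2",  # assumed repository id
    chunk_length_s=30,
    device=device,
)

# Pin decoding to Hindi transcription instead of relying on language detection.
transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(
    language="hi", task="transcribe"
)

print(transcribe("sample.wav")["text"])  # "sample.wav" is a placeholder
```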
## Training and evaluation data
Training data: GramVaani ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, and the Google/Fleurs train and dev sets.
Evaluation data: GramVaani ASR Corpus test set and the Google/Fleurs test set.
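As a hedged illustration of how the Fleurs WER in the model-index above might be reproduced, the sketch below scores the model on the `google/fleurs` `hi_in` test split with the `evaluate` library; the tiny subset size and the lack of text normalization are simplifications, so its number will not match the reported one.

```python
from datasets import load_dataset
import evaluate
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vasista22/whisper-hindi-large-v2",  # assumed repository id
)
# Force Hindi transcription, as in the usage sketch above.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="hi", task="transcribe"
)

fleurs = load_dataset("google/fleurs", "hi_in", split="test")
wer = evaluate.load("wer")

predictions, references = [], []
for sample in fleurs.select(range(8)):  # tiny subset, for illustration only
    audio = sample["audio"]
    predictions.append(
        asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})["text"]
    )
    references.append(sample["transcription"])

print(f"WER: {100 * wer.compute(predictions=predictions, references=references):.2f}")
```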
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (restated as a configuration sketch after this list):
- learning_rate: 7.5e-06
- train_batch_size: 8
- eval_batch_size: 24
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25000
- training_steps: 57000 (initially set to 116255 steps)
- mixed_precision_training: True
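The listed values map onto Hugging Face `Seq2SeqTrainingArguments` roughly as in the sketch below; this is a hedged reconstruction, not the authors' actual training script, and `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-hindi-large-v2",  # placeholder path
    learning_rate=7.5e-6,                   # listed as 0.75e-05 in the card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=24,
    seed=22,
    optim="adamw_bnb_8bit",                 # 8-bit AdamW via bitsandbytes
    lr_scheduler_type="linear",
    warmup_steps=25000,
    max_steps=57000,                        # initially set to 116255 steps
    fp16=True,                              # mixed-precision training
)
```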
## Acknowledgement
This work was done at Speech Lab, IIT Madras. The compute resources for this work were funded by the "Bhashini: National Language Translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.