---
license: apache-2.0
language:
  - ru
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
  - asr
  - Pytorch
  - pruned
  - audio
  - automatic-speech-recognition
---

# Whisper-tiny-ru-pruned

## Model info

This is a pruned version of the openai/whisper-tiny model with only Russian tokens left. Pruning was done without any fine-tuning. The method from this post was used.
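A minimal sketch of the idea, assuming the standard `transformers` Whisper classes (this is not the exact script used for this model, and `kept_ids` below is only a placeholder selection):

```python
# Rough sketch of vocabulary pruning for Whisper (illustrative, not the author's script)
import torch
from transformers import WhisperForConditionalGeneration, WhisperTokenizer

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

# kept_ids is a placeholder: in practice it holds special/added Whisper tokens,
# the most frequent tokenizer tokens and the most frequent tokens of a Russian corpus
kept_ids = sorted(set(tokenizer.all_special_ids) | set(range(1000)))

d_model = model.config.d_model
old_embed = model.model.decoder.embed_tokens.weight.data

# slice the decoder token embedding to the kept rows
new_embed = torch.nn.Embedding(len(kept_ids), d_model)
new_embed.weight.data = old_embed[kept_ids].clone()
model.model.decoder.embed_tokens = new_embed

# the output projection shares the embedding weights, so rebuild it and keep it tied
model.proj_out = torch.nn.Linear(d_model, len(kept_ids), bias=False)
model.proj_out.weight = model.model.decoder.embed_tokens.weight

model.config.vocab_size = len(kept_ids)
# Note: the tokenizer vocabulary and all special-token ids in the generation
# config must also be remapped to the new, smaller id space.
```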

## Size

Only 10% of the tokens were kept, including special Whisper tokens, added Whisper tokens, the 100 most popular tokens from the tokenizer, and the 3000 most popular Russian tokens computed by tokenizing a Russian text corpus.

The model is 50% smaller than the original whisper-tiny:

|  | openai/whisper-tiny | waveletdeboshir/whisper-tiny-ru-pruned |
| :------------------------------------- | :-----: | :-----: |
| n of parameters                         | 38 M    | 19.6 M  |
| n of parameters (with proj_out layer)   | 57.6 M  | 21.5 M  |
| model file size                         | 151 MB  | 86 MB   |
| vocab_size                              | 51865   | 4705    |

## Usage

The model can be used in the same way as the original Whisper:

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> import torchaudio

>>> # load audio
>>> wav, sr = torchaudio.load("audio.wav")

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("waveletdeboshir/whisper-tiny-ru-pruned")
>>> model = WhisperForConditionalGeneration.from_pretrained("waveletdeboshir/whisper-tiny-ru-pruned")

>>> # compute input features (Whisper expects 16 kHz audio)
>>> input_features = processor(wav[0], sampling_rate=sr, return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|ru|><|transcribe|><|notimestamps|> Начинаем работу.<|endoftext|>']
```

The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
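For example, continuing the snippet above, decoding with `skip_special_tokens=True` should return just the text:

```python
>>> processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Начинаем работу.']
```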

## Other pruned whisper models

## Metrics

TODO

You can fine-tune this model on your data to achieve better performance.
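A minimal fine-tuning setup with the standard `Seq2SeqTrainer` could look like the sketch below; `train_dataset` and `data_collator` are placeholders for a dataset prepared with `input_features`/`labels` columns and a padding collator, as in the usual Hugging Face Whisper fine-tuning recipe:

```python
# Sketch of a fine-tuning setup (train_dataset and data_collator are assumed
# to be prepared elsewhere, following the usual HF Whisper fine-tuning recipe)
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained("waveletdeboshir/whisper-tiny-ru-pruned")
model = WhisperForConditionalGeneration.from_pretrained("waveletdeboshir/whisper-tiny-ru-pruned")

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-ru-pruned-ft",  # hypothetical output directory
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=1000,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: dataset with "input_features" and "labels"
    data_collator=data_collator,  # placeholder: pads features and label ids
    tokenizer=processor.feature_extractor,
)
trainer.train()
```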

## Colab for pruning

TODO