RogerioFreitas committed
Commit df860a5 · 1 Parent(s): b925bf3

Update README.md

Files changed (1):
  1. README.md +9 -28

README.md CHANGED
@@ -29,41 +29,22 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # Portuguese Medium Whisper
 
- This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.2628
- - Wer: 6.5987
 
- ## Blog post
 
- All information about this model is in this blog post: [Speech-to-Text & IA | Transcreva qualquer áudio para o português com o Whisper (OpenAI)... sem nenhum custo!](https://medium.com/@pierre_guillou/speech-to-text-ia-transcreva-qualquer-%C3%A1udio-para-o-portugu%C3%AAs-com-o-whisper-openai-sem-ad0c17384681).
 
- ## New SOTA
 
- The normalized WER in the [OpenAI Whisper article](https://cdn.openai.com/papers/whisper.pdf) on the [Common Voice 9.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) test dataset is 8.1.
 
- As this test dataset is similar to the [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) test dataset used to evaluate our model (WER and WER Norm), **our Portuguese Medium Whisper is better than the [Whisper Medium](https://huggingface.co/openai/whisper-medium) model at transcribing Portuguese audio into text** (and even better than [Whisper Large](https://huggingface.co/openai/whisper-large), which has a WER Norm of 7.1!).
 
- ![OpenAI results with Whisper Medium on the test dataset of Common Voice 9.0](https://huggingface.co/pierreguillou/whisper-medium-portuguese/resolve/main/whisper_medium_portuguese_wer_commonvoice9.png)
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 9e-06
- - train_batch_size: 32
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 6000
- - mixed_precision_training: Native AMP
-
- ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
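As a side note, the `linear` scheduler with 500 warmup steps listed in the removed hyperparameters can be sketched in plain Python. This is a minimal illustration of the schedule shape (the real schedule is produced by the Trainer), using the values from the list above:

```python
def linear_lr(step, peak_lr=9e-06, warmup_steps=500, training_steps=6000):
    """Linear warmup to peak_lr over warmup_steps, then linear decay to 0
    by training_steps -- the shape implied by lr_scheduler_type: linear."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (training_steps - step) / (training_steps - warmup_steps))
```

The rate peaks at 9e-06 at step 500 and reaches 0 at step 6000, which matches the warmup/decay behavior of the Trainer's linear scheduler.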
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
+ # Pierre's Portuguese Flax Model for Automatic Speech Recognition (ASR)
 
+ This repository is a fork of the original repository created by [Pierre Guillou](https://github.com/piegu). It contains a converted version of OpenAI's Whisper model, fine-tuned on the `common_voice_11_0` dataset for Portuguese.
 
+ ## Results
 
+ The model achieves the following results on the evaluation set:
 
+ - Loss: 0.2628
+ - Word Error Rate (WER): 6.5987
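The WER reported above is the word-level edit distance between the model's transcription and the reference, divided by the number of reference words. A minimal illustrative implementation (not the evaluation script actually used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance divided by the
    number of words in the (non-empty) reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word out of a five-word reference yields a WER of 0.2 (i.e. 20%); the 6.5987 above is expressed as a percentage.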
 
+ For more information about this model, see the author's blog post: [Speech-to-Text & IA | Transcreva qualquer áudio para o português com o Whisper (OpenAI)... sem nenhum custo!](https://medium.com/@pierre_guillou).
 
+ This model, dubbed "Portuguese Medium Whisper", outperforms the original OpenAI Whisper Medium model at transcribing Portuguese audio (and is even better than the Whisper Large model, which has a WER of 7.1).
 
+ ## Training
 
  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
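For readers of the updated card, a usage sketch may help. The snippet below loads the fine-tuned checkpoint through the `transformers` ASR pipeline; the model id `pierreguillou/whisper-medium-portuguese` is taken from the image URL in the previous card and refers to the original PyTorch checkpoint, so adjust it to this fork's repository id if you want the converted Flax weights (this is a sketch, not a tested recipe, and downloads the model on first run):

```python
from transformers import pipeline

# Assumed model id (from the original card); swap in this fork's repo id if needed.
asr = pipeline(
    "automatic-speech-recognition",
    model="pierreguillou/whisper-medium-portuguese",
)

# Transcribe a local Portuguese audio file.
result = asr("audio.mp3")
print(result["text"])
```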