Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -19,10 +19,10 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 0.2934
+      value: 29.34
     - name: Test CER
       type: cer
-      value: 0.0786
+      value: 7.86
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
@@ -33,10 +33,10 @@ model-index:
     metrics:
    - name: Test WER
       type: wer
-      value: 0.5209
+      value: 52.09
     - name: Test CER
       type: cer
-      value: 0.1790
+      value: 17.90
 datasets:
 - mozilla-foundation/common_voice_15_0
 language:
@@ -52,8 +52,8 @@ should probably proofread and complete it, then remove this comment. -->
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.3611
-- Wer: 0.2992
-- Cer: 0.0786
+- Wer: 29.92%
+- Cer: 7.86%
 
 View the results on Kaggle Notebook: https://www.kaggle.com/code/kingabzpro/wav2vec-2-eval
 
@@ -105,13 +105,13 @@ def evaluate(batch):
 
 result = test_dataset.map(evaluate, batched=True, batch_size=8)
 
-print("WER: {}".format(wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
-print("CER: {}".format(cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+print("WER: {}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+print("CER: {}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
 
 ```
-**WER: 0.5209850206372026**
+**WER: 52.09850206372026**
 
-**CER: 0.17902923538230883**
+**CER: 17.902923538230883**
 
 ### Training hyperparameters
 
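For context on the last hunk: the only behavioral change is multiplying the metric output by 100 so the printed numbers match the percentage-style values now recorded in the YAML metadata. Below is a minimal, self-contained sketch of that scaling, assuming the `wer` and `cer` objects come from the Hugging Face `evaluate` library (which returns fractional scores); the example predictions and references are made up stand-ins for `result["pred_strings"]` and `result["sentence"]` from the full evaluation script in the README.

```python
import evaluate

# Load the WER and CER metrics; both return a fraction (e.g. 0.5209...), not a percentage.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Hypothetical predictions/references standing in for result["pred_strings"]
# and result["sentence"] produced by test_dataset.map(evaluate, ...).
predictions = ["hello world", "good morning"]
references = ["hello world", "good morning everyone"]

wer_score = wer.compute(predictions=predictions, references=references)
cer_score = cer.compute(predictions=predictions, references=references)

# The edited print statements scale by 100, so e.g. 0.5209850206372026
# prints as 52.09850206372026, matching the model-index metadata.
print("WER: {}".format(100 * wer_score))
print("CER: {}".format(100 * cer_score))
```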