transiteration committed c633b18 (1 parent: b8f74c7): Update README.md

README.md CHANGED
@@ -73,9 +73,9 @@ The model was finetuned to Kazakh speech based on the pre-trained English Model
 In total, KSC2 contains around 1.2k hours of high-quality transcribed data comprising over 600k utterances.
 
 ## Performance
-
-Average WER: 15.53
-
+The model achieved:\
+Average WER: 15.53%\
+using **Greedy Decoding**.
 ## Limitations
 
 Because the GPU has limited power, we used a lightweight model architecture for fine-tuning.\
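As background for the numbers in the diff above, a minimal sketch of the two pieces involved: greedy CTC decoding (argmax per frame, collapse repeats, drop blanks) and word error rate (word-level Levenshtein distance divided by reference length). This is an illustrative, self-contained implementation, not code from the model's repository; the function and variable names are hypothetical.

```python
import numpy as np

def ctc_greedy_decode(log_probs: np.ndarray, vocab: list, blank_id: int = 0) -> str:
    """Greedy CTC decoding: pick the argmax token per time step,
    collapse consecutive repeats, then remove blank tokens."""
    best = log_probs.argmax(axis=-1)  # shape (T,): best token id per frame
    out, prev = [], None
    for t in best:
        if t != prev and t != blank_id:
            out.append(vocab[t])
        prev = t
    return "".join(out)

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1, j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[len(ref), len(hyp)] / max(len(ref), 1)

# Toy vocabulary with blank at index 0; frame argmaxes [a, a, blank, b] -> "ab".
vocab = ["_", "a", "b"]
frames = np.log(np.array([
    [0.10, 0.80, 0.10],
    [0.10, 0.80, 0.10],
    [0.90, 0.05, 0.05],
    [0.10, 0.10, 0.80],
]))
decoded = ctc_greedy_decode(frames, vocab)  # -> "ab"
error_rate = wer("a b c", "a x c")          # one substitution out of three words
```

A reported "Average WER: 15.53%" corresponds to this ratio averaged over the evaluation set, expressed as a percentage.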