smajumdar94
committed on
Commit 63d5e0b
1 Parent(s): 108045f
Update README.md
README.md CHANGED
@@ -206,7 +206,7 @@ The tokenizers for these models were built using the text transcripts of the tra
 
 ### Datasets
 
-The model in this collection is trained on a composite dataset (NeMo ASRSet En
+The model in this collection is trained on a composite dataset (NeMo ASRSet En) comprising several thousand hours of English speech:
 
 - Librispeech 960 hours of English speech
 - Fisher Corpus