We also created a 3-gram KenLM language model using an open Common Crawl corpus.

| Archive        | Size    | Link                |
|:--------------:|:-------:|:-------------------:|
| golos_opus.tar | 20.5 GB | https://sc.link/JpD |

### **Audio files in wav format**

Manifest files with all the training transcription texts are in the train_crowd9.tar archive listed in the table:

| Archives           | Size    | Links               |
|--------------------|---------|---------------------|
| train_farfield.tar | 15.4 GB | https://sc.link/1Z3 |
| train_crowd0.tar   | 11 GB   | https://sc.link/Lrg |
| train_crowd1.tar   | 14 GB   | https://sc.link/MvQ |
| train_crowd2.tar   | 13.2 GB | https://sc.link/NwL |
| train_crowd3.tar   | 11.6 GB | https://sc.link/Oxg |
| train_crowd4.tar   | 15.8 GB | https://sc.link/Pyz |
| train_crowd5.tar   | 13.1 GB | https://sc.link/Qz7 |
| train_crowd6.tar   | 15.7 GB | https://sc.link/RAL |
| train_crowd7.tar   | 12.7 GB | https://sc.link/VG5 |
| train_crowd8.tar   | 12.2 GB | https://sc.link/WJW |
| train_crowd9.tar   | 8.08 GB | https://sc.link/XKk |
| test.tar           | 1.3 GB  | https://sc.link/Kqr |
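
Once an archive is extracted, the transcriptions can be iterated over directly from its manifest. The sketch below is only illustrative: it assumes NeMo-style JSON-lines manifests (one object per line with `audio_filepath`, `duration` and `text` keys) and a hypothetical local path `train_crowd9/manifest.jsonl`.

```python
import json

# Hypothetical path; point this at the manifest inside the extracted archive.
manifest_path = "train_crowd9/manifest.jsonl"

# Assumption: NeMo-style JSON-lines manifest with
# "audio_filepath", "duration" and "text" fields per line.
samples = []
with open(manifest_path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            samples.append(json.loads(line))

total_hours = sum(s["duration"] for s in samples) / 3600
print(f"{len(samples)} utterances, {total_hours:.1f} hours")
print(samples[0]["audio_filepath"], "->", samples[0]["text"])
```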

### **Acoustic and language models**

The acoustic model was built using the [QuartzNet15x5](https://arxiv.org/pdf/1910.10261.pdf) architecture and trained with the [NeMo toolkit](https://github.com/NVIDIA/NeMo/tree/r1.0.0b4).
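
As a rough sketch of running inference with the released checkpoint (assuming the NeMo 1.x ASR API; `sample.wav` is a hypothetical 16 kHz mono recording, and the exact `transcribe` signature can vary between NeMo releases):

```python
# pip install "nemo_toolkit[asr]" from the era matching the checkpoint (e.g. 1.0.x)
import nemo.collections.asr as nemo_asr

# Assumes the NeMo 1.x ASR API (EncDecCTCModel.restore_from / transcribe).
# Load the released Golos checkpoint from a local path.
model = nemo_asr.models.EncDecCTCModel.restore_from("QuartzNet15x5_golos.nemo")

# Greedy (no language model) transcription of a hypothetical wav file.
transcriptions = model.transcribe(["sample.wav"], batch_size=1)
print(transcriptions[0])
```

The beam-search rows in the evaluation table below correspond to decoding this model's outputs together with the n-gram language models described next.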

Three n-gram language models were created using the [KenLM Language Model Toolkit](https://kheafield.com/code/kenlm):

* LM built on the [Common Crawl](https://commoncrawl.org) Russian dataset
* LM built on the Golos train set
* LM built on the [Common Crawl](https://commoncrawl.org) and Golos datasets together (50/50)

| Archives                 | Size   | Links               |
|--------------------------|--------|---------------------|
| QuartzNet15x5_golos.nemo | 68 MB  | https://sc.link/ZMv |
| KenLMs.tar               | 4.8 GB | https://sc.link/YL0 |
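
Before wiring one of the n-gram models into a beam-search decoder, it can be sanity-checked with the KenLM Python bindings. The file name below is hypothetical; use whichever ARPA/binary model is actually inside the extracted KenLMs.tar.

```python
# pip install kenlm
import kenlm

# Hypothetical file name inside the extracted KenLMs.tar archive.
lm = kenlm.Model("kenlms/lm_commoncrawl.binary")

sentence = "всем привет это тестовая запись"  # invented example text
print(f"log10 P(sentence) = {lm.score(sentence, bos=True, eos=True):.2f}")
print(f"perplexity        = {lm.perplexity(sentence):.1f}")
```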

Golos data and models are also available in DataHub ML Space, a hub of pre-trained models, datasets, and containers. You can train the model and deploy it on the high-performance SberCloud infrastructure in [ML Space](https://sbercloud.ru/ru/aicloud/mlspace), a full-cycle machine learning development platform for DS team collaboration based on the Christofari supercomputer.

## **Evaluation**

Word Error Rate (WER) for different test sets:

| Decoder \ Test set                         | Crowd test | Farfield test | MCV<sup>1</sup> dev | MCV<sup>1</sup> test |
|--------------------------------------------|------------|---------------|---------------------|----------------------|
| Greedy decoder                             | 4.389 %    | 14.949 %      | 9.314 %             | 11.278 %             |
| Beam Search with Common Crawl LM           | 4.709 %    | 12.503 %      | 6.341 %             | 7.976 %              |
| Beam Search with Golos train set LM        | 3.548 %    | 12.384 %      | -                   | -                    |
| Beam Search with Common Crawl and Golos LM | 3.318 %    | 11.488 %      | 6.4 %               | 8.06 %               |

<sup>1</sup> [Common Voice](https://commonvoice.mozilla.org) is Mozilla's initiative to help teach machines how real people speak.
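
These WER figures can be checked with any standard implementation; below is a toy illustration using the third-party jiwer package (the reference and hypothesis strings are invented):

```python
# pip install jiwer
from jiwer import wer

# Invented strings; a real evaluation would loop over the test manifest
# and compare each reference transcript with the model's output.
reference = "всем привет это тестовая запись"
hypothesis = "всем привет это тестовое запись"

print(f"WER: {wer(reference, hypothesis) * 100:.3f} %")
```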

## **Resources**

[[arxiv.org] Golos: Russian Dataset for Speech Research](https://arxiv.org/abs/2106.10161)

[[habr.com] Golos: the largest manually annotated Russian speech dataset, now publicly available](https://habr.com/ru/company/sberdevices/blog/559496/)

[[habr.com] How to improve Russian speech recognition to 3% WER with open data](https://habr.com/ru/company/sberdevices/blog/569082/)