transiteration
Commit 357e0a4 (parent 9b78aab): Update README.md

README.md (CHANGED)
@@ -18,6 +18,12 @@ tags:

 In order to prepare, adjust, or experiment with the model, it's necessary to install [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) [1].
 We advise installing it once you've installed the most recent version of PyTorch.
+This model trained on NVIDIA GeForce RTX 2070:
+Python 3.7.15
+NumPy 1.21.6
+PyTorch 1.21.1
+NVIDIA NeMo 1.7.0
+
 ```
 pip install nemo_toolkit['all']
 ```
@@ -66,7 +72,7 @@ Average WER: 15.53%

 ## Limitations

-Because the GPU
+Because the GPU has limited power, we used a lightweight model architecture for fine-tuning.
 In general, this makes it faster for inference but might show less overall performance.
 In addition, if the speech includes technical terms or dialect words the model hasn't learned, it may not work as well.
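The commit above pins specific package versions for the training environment. Before installing NeMo, those pins can be checked programmatically; the sketch below is a minimal, hypothetical helper (not part of the model repository) that reads installed versions via the standard-library `importlib.metadata` and compares them against the pins quoted from the README diff:

```python
from importlib.metadata import version, PackageNotFoundError

# Minimum versions quoted from the README diff above.
PINS = {"numpy": "1.21.6", "nemo_toolkit": "1.7.0"}

def parse(v: str) -> tuple:
    # Turn a dotted version string like "1.21.6" into a comparable tuple (1, 21, 6).
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def check_environment(pins: dict = PINS) -> dict:
    # Map each package to True (meets the pin), False (too old),
    # or None (not installed in the current environment).
    status = {}
    for pkg, minimum in pins.items():
        try:
            status[pkg] = parse(version(pkg)) >= parse(minimum)
        except PackageNotFoundError:
            status[pkg] = None
    return status
```

A check like this is only a convenience; the authoritative requirements are whatever `pip install nemo_toolkit['all']` resolves on top of an up-to-date PyTorch, as the README recommends.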