Update README.md
README.md
@@ -4,7 +4,7 @@ Copy the github repo, build the spiritlm python package and put models in `check
You need around 15.5GB of VRAM to run the model with a short output length and around 19GB to output 800 tokens.
-Edit: Audio to audio inference doesn't seem great. Potentially I am tokenizing the audio wrong.
+Edit: Audio to audio inference doesn't seem great. Potentially I am tokenizing the audio wrong. It could also be that the model doesn't work well with audio IN, audio OUT.
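For context, here is a rough sketch of the audio-in / audio-out path, assuming the interface from the upstream facebookresearch/spiritlm repo (`Spiritlm`, `GenerationInput`, `ContentType`, `OutputModality`, plus its base speech tokenizer). The checkpoint name, file path, and generation settings below are illustrative assumptions, not taken from this README:

```python
# Sketch only: assumes the upstream spiritlm package layout and example API.
from transformers import GenerationConfig
from spiritlm.model.spiritlm_model import (
    Spiritlm, OutputModality, GenerationInput, ContentType,
)
from spiritlm.speech_tokenizer import spiritlm_base

# Load the base 7B checkpoint (expects the weights under checkpoints/).
spirit_lm = Spiritlm("spirit-lm-base-7b")

# Sanity-check the tokenization step: the base speech tokenizer should turn
# a 16 kHz mono recording into a string of speech units.
speech_tokenizer = spiritlm_base()
print(speech_tokenizer("examples/audio/prompt.wav"))  # hypothetical input file

# Audio prompt in, audio continuation out.
outputs = spirit_lm.generate(
    output_modality=OutputModality.SPEECH,
    interleaved_inputs=[
        GenerationInput(
            content="examples/audio/prompt.wav",  # hypothetical input file
            content_type=ContentType.SPEECH,
        ),
    ],
    generation_config=GenerationConfig(
        temperature=0.9,
        top_p=0.95,
        max_new_tokens=200,  # longer outputs push VRAM toward the ~19GB figure above
        do_sample=True,
    ),
)
print(outputs)  # generated speech should come back as waveform data
```

If the printed units already look off for a known-good clip (wrong sample rate, stereo input), the problem would be in the tokenization step rather than in generation.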
```python