adamo1139 committed fd2395c: Update README.md
 
# Spirit LM Inference Gradio Demo
 
Clone the GitHub repo, build the [spiritlm](https://github.com/facebookresearch/spiritlm) Python package, and put the models in the `checkpoints` folder before running the script. I would suggest using a conda environment for this.
 
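For context, here's a minimal sketch of what loading the model and doing a plain text generation looks like once the package is built. I'm writing the class and argument names (`Spiritlm`, `OutputModality`, `GenerationInput`, `ContentType`) from memory of the spiritlm repo's example, so double-check them against the repo README before relying on them.

```python
# Minimal text-only sanity check; API names are assumed from the spiritlm repo
# example and may differ in your installed version.
from transformers import GenerationConfig
from spiritlm.model.spiritlm_model import Spiritlm, OutputModality, GenerationInput, ContentType

# Expects the weights to be available under ./checkpoints as described above.
spirit_lm = Spiritlm("spirit-lm-base-7b")

outputs = spirit_lm.generate(
    output_modality=OutputModality.TEXT,
    interleaved_inputs=[
        GenerationInput(content="The capital of France is", content_type=ContentType.TEXT),
    ],
    generation_config=GenerationConfig(do_sample=True, temperature=0.8, max_new_tokens=50),
)
print(outputs)
```
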
You need around 15.5 GB of VRAM to run the model with a 200-token output length and around 19 GB to output 800 tokens.
 
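If you're not sure how much headroom your GPU has, a quick PyTorch check (separate from the demo script) will tell you before you launch:

```python
import torch

# Print free vs. total VRAM so you can judge whether the ~15.5-19 GB needed is available.
free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"GPU 0: {free_bytes / 1024**3:.1f} GiB free of {total_bytes / 1024**3:.1f} GiB")
```
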
If you're concerned about pickles from an unknown uploader, grab the checkpoints from a repo maintained by an HF staffer instead: [https://huggingface.co/spirit-lm/Meta-spirit-lm](https://huggingface.co/spirit-lm/Meta-spirit-lm)
 
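One way to pull that repo locally is `huggingface_hub`; the `checkpoints` target below is an assumption, so point `local_dir` at wherever the script expects the weights:

```python
from huggingface_hub import snapshot_download

# You may need to accept the model license and log in first (`huggingface-cli login`).
snapshot_download(
    repo_id="spirit-lm/Meta-spirit-lm",
    local_dir="checkpoints",  # assumed target folder; adjust to your layout
)
```
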
Audio-to-audio inference doesn't seem good at all. Potentially I am tokenizing the audio wrong, or the model simply doesn't work well with audio in, audio out.

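For reference, this is roughly what I mean by audio in / audio out, with the same caveat that the interface names are from memory and `prompt.wav` is just a placeholder for your own recording:

```python
# Assumed spiritlm API (see caveat above); speech prompt in, speech continuation out.
from transformers import GenerationConfig
from spiritlm.model.spiritlm_model import Spiritlm, OutputModality, GenerationInput, ContentType

spirit_lm = Spiritlm("spirit-lm-expressive-7b")  # assumed checkpoint name

outputs = spirit_lm.generate(
    output_modality=OutputModality.SPEECH,
    interleaved_inputs=[
        GenerationInput(content="prompt.wav", content_type=ContentType.SPEECH),  # placeholder path
    ],
    generation_config=GenerationConfig(do_sample=True, temperature=0.8, max_new_tokens=400),
)
# How the generated audio is returned (array vs. file) depends on the package
# version, so inspect `outputs` before trying to save it.
print(type(outputs), outputs)
```
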
The script here works with just a single speaker; if you know how to get the other speakers working, let me know and I'll update it.
 
```python
  import gradio as gr