Lwasinam committed on
Commit 6327888
1 Parent(s): f4c9bd9

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 # Model Card for Model ID
 Voicera is a AR text-to-speech model trained on ~1000hrs of speech data.
 speech is converted to discrete tokens using "Multi-Scale Neural Audio Codec (SNAC)" model
-NB: This is not a SOTA model, and not accuarate enough for production usecase
+**NB: This is not a SOTA model, and not accuarate enough for production usecase**
 
 
 
@@ -29,7 +29,7 @@ It's a project to explore TTS technology and improve audio output quality.
 
 
 - **Developed by:** Lwasinam Dilli
-- **Funded by [optional]:** Lwasinam Dilli
+- **Funded by :** Lwasinam Dilli
 - **Model type:** GPT2-Transformer architecture
 - **License:** Free and Open to use I guess :)
 
@@ -40,7 +40,7 @@ It's a project to explore TTS technology and improve audio output quality.
 
 - **Repository:** [More Information Needed]
 - **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
+- **Demo :** [Demos](https://lwasinam.github.io/)
 
 
 
@@ -79,7 +79,7 @@ Hugging Face had pretty much all the datasets I needed. I just had to filter out
 I should probably work on this, the loss went down and the output got better :)
 
 ### Results
-Check out the demo page her -> [Demo]()
+Check out the demo page her -> [Demo](https://lwasinam.github.io/)
 
 #### Summary
 
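The card being edited above notes that Voicera's speech targets are discrete tokens produced by the Multi-Scale Neural Audio Codec (SNAC). As a rough illustration of that tokenization step, here is a minimal sketch using the open-source `snac` Python package; the checkpoint name (`hubertsiuzdak/snac_24khz`), the 24 kHz sample rate, and the dummy input are assumptions, since the card does not say which SNAC variant or preprocessing Voicera actually uses.

```python
# Minimal sketch (assumption, not Voicera's documented pipeline): encode a waveform
# into SNAC's multi-scale discrete tokens and reconstruct it, using the open-source
# `snac` package. The 24 kHz checkpoint below is an assumed choice.
import torch
from snac import SNAC

codec = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval()

# Dummy 1-second mono clip shaped (batch, channels, samples); swap in real audio.
audio = torch.randn(1, 1, 24000)

with torch.inference_mode():
    codes = codec.encode(audio)           # list of LongTensors, one per temporal scale
    reconstruction = codec.decode(codes)  # waveform rebuilt from the discrete tokens

print([c.shape for c in codes])  # coarse-to-fine token grids an AR model could predict
```

An autoregressive TTS model of the kind described in this card would then be trained to predict such code sequences from text, with SNAC's decoder turning the predicted tokens back into audio.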