---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
- fr
- es
- pt
- pl
- de
- nl
- it
pipeline_tag: text-to-speech
inference: false
datasets:
- facebook/multilingual_librispeech
- PHBJT/cml-tts-cleaned-levenshtein
- PHBJT/multilingual_librispeech_text_description_capitalized
- PHBJT/cml-tts-description-punctuation-and-casing-restored
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls_eng
- parler-tts/mls-eng-speaker-descriptions
---

<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Parler-TTS Mini v1.1 Multilingual

<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>

**Parler-TTS Mini v1.1 Multilingual** is a multilingual extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1).

It is a fine-tuned version, trained on a [cleaned version](https://huggingface.co/datasets/PHBJT/cml-tts-cleaned-levenshtein) of [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) and on the non-English subset of [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).
In all, this represents some 9,200 hours of non-English data. To retain English capabilities, we also added back the [LibriTTS-R English dataset](https://huggingface.co/datasets/parler-tts/libritts_r_filtered), some 580 hours of high-quality English data.

**Parler-TTS Mini v1.1 Multilingual** can speak in 8 European languages: English, French, Spanish, Portuguese, Polish, German, Italian and Dutch.

Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.

🚨 This work is the result of a collaboration between the **HuggingFace audio team** and the **[Quantum Squadra](https://quantumsquadra.com/) team**. The **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team** also provided advice and assistance in improving tokenization. 🚨

## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)

## 🛠️ Usage

🚨 Unlike previous versions of Parler-TTS, here we use two tokenizers: one for the prompt and one for the description. 🚨

### 👨💻 Installation

Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### 🎲 Random voice

**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1.1").to(device)
# Two tokenizers: one for the transcript (prompt), one for the voice description.
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
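
Since the checkpoint is multilingual, the same API handles non-English text. Below is a minimal sketch in French, reusing the model and tokenizers from the example above; the prompt and description strings are illustrative, not from the original card:

```py
# Reuses `model`, `tokenizer`, `description_tokenizer`, `device` and `sf` from above.
prompt = "Bonjour, comment allez-vous aujourd'hui ?"  # illustrative French prompt
description = "A female speaker delivers a warm and clearly articulated speech at a moderate pace, in a very high quality recording."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("parler_tts_out_fr.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```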

### 🎯 Using a specific speaker

To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).

To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
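
To compare several of the named speakers on the same line, you can loop over names. This is a small sketch reusing the objects from the example above; the four names come from the list mentioned earlier:

```py
# Reuses `model`, `description_tokenizer`, `prompt_input_ids`, `device` and `sf` from above.
for name in ["Jon", "Lea", "Gary", "Jenna"]:
    description = f"{name}'s voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
    input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
    generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
    # One output file per speaker, e.g. parler_tts_jon.wav
    sf.write(f"parler_tts_{name.lower()}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```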

**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal sketch follows after this list.
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise.
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.
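
As a starting point for the first tip, here is a minimal sketch of SDPA attention and torch.compile. The `attn_implementation` kwarg and the compile call are assumptions based on the standard transformers and PyTorch 2 APIs; treat the linked inference guide as the authoritative recipe:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Assumption: SDPA attention is requested via the standard transformers kwarg.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1.1",
    attn_implementation="sdpa",
).to(device)

# Compile the forward pass (PyTorch 2): the first generation pays the
# compilation cost, subsequent generations run faster.
model.forward = torch.compile(model.forward)
```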

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```bibtex
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```bibtex
@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.