speechmaster committed on
Commit
87c9175
1 Parent(s): 426d2a6

Upload 3 files

Files changed (3)
  1. README.md +147 -0
  2. handler.py +44 -0
  3. requirements.txt +1 -0
README.md ADDED
@@ -0,0 +1,147 @@
+ ---
+ library_name: transformers
+ tags:
+ - text-to-speech
+ - annotation
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-to-speech
+ inference: false
+ datasets:
+ - parler-tts/mls_eng
+ - parler-tts/libritts_r_filtered
+ - parler-tts/libritts-r-filtered-speaker-descriptions
+ - parler-tts/mls-eng-speaker-descriptions
+ ---
+
+ <img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
+
+
+ # Parler-TTS Mini v1
+
+ <a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
+   <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
+ </a>
+
+ **Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that generates high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
+
+ Together with [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
+
+ ## 📖 Quick Index
+ * [👨‍💻 Installation](#👨‍💻-installation)
+ * [🎲 Using a random voice](#🎲-random-voice)
+ * [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
+ * [Motivation](#motivation)
+ * [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
+
+ ## 🛠️ Usage
+
+ ### 👨‍💻 Installation
+
+ Using Parler-TTS is as simple as saying "bonjour". Just install the library once:
+
+ ```sh
+ pip install git+https://github.com/huggingface/parler-tts.git
+ ```
+
+ ### 🎲 Random voice
+
+
+ **Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
+
+ ```py
+ import torch
+ from parler_tts import ParlerTTSForConditionalGeneration
+ from transformers import AutoTokenizer
+ import soundfile as sf
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+
+ model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
+ tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
+
+ prompt = "Hey, how are you doing today?"
+ description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
+
+ input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
+ prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
+
+ generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
+ audio_arr = generation.cpu().numpy().squeeze()
+ sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
+ ```
+
+ ### 🎯 Using a specific speaker
+
+ To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
+
+ To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
+
+ ```py
+ import torch
+ from parler_tts import ParlerTTSForConditionalGeneration
+ from transformers import AutoTokenizer
+ import soundfile as sf
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+
+ model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
+ tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
+
+ prompt = "Hey, how are you doing today?"
+ description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
+
+ input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
+ prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
+
+ generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
+ audio_arr = generation.cpu().numpy().squeeze()
+ sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
+ ```
+
+ **Tips**:
+ * We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal example is sketched right after this list.
+ * Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
+ * Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
+ * The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.
+
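+ As a quick illustration of the first tip, here is a minimal sketch that loads the model with SDPA attention and half precision; it assumes the `attn_implementation` and `torch_dtype` loading arguments behave as described in the inference guide:
+
+ ```py
+ import torch
+ from parler_tts import ParlerTTSForConditionalGeneration
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ # Half precision only helps on GPU; fall back to float32 on CPU.
+ torch_dtype = torch.float16 if device != "cpu" else torch.float32
+
+ # attn_implementation="sdpa" selects PyTorch scaled-dot-product attention.
+ model = ParlerTTSForConditionalGeneration.from_pretrained(
+     "parler-tts/parler-tts-mini-v1",
+     attn_implementation="sdpa",
+     torch_dtype=torch_dtype,
+ ).to(device)
+ ```
+
+ Generation then works exactly as in the examples above; torch.compile, batching and streaming are covered step by step in the guide itself.
+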
+ ## Motivation
+
+ Parler-TTS is a reproduction of the work described in the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, of Stability AI and the University of Edinburgh respectively.
+
+ Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
+ Parler-TTS was released alongside:
+ * [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - where you can train and fine-tune your own version of the model.
+ * [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
+ * [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as future checkpoints.
+
+ ## Citation
+
+ If you found this repository useful, please consider citing this work and also the original Stability AI paper:
+
+ ```
+ @misc{lacombe-etal-2024-parler-tts,
+   author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
+   title = {Parler-TTS},
+   year = {2024},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   howpublished = {\url{https://github.com/huggingface/parler-tts}}
+ }
+ ```
+
+ ```
+ @misc{lyth2024natural,
+   title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
+   author={Dan Lyth and Simon King},
+   year={2024},
+   eprint={2402.01912},
+   archivePrefix={arXiv},
+   primaryClass={cs.SD}
+ }
+ ```
+
+ ## License
+
+ This model is permissively licensed under the Apache 2.0 license.
handler.py ADDED
@@ -0,0 +1,44 @@
+ from typing import Dict, List, Any
+ from parler_tts import ParlerTTSForConditionalGeneration
+ from transformers import AutoTokenizer
+ import torch
+
+ class EndpointHandler:
+     def __init__(self, path=""):
+         # load model and tokenizer from path
+         self.tokenizer = AutoTokenizer.from_pretrained(path)
+         self.model = ParlerTTSForConditionalGeneration.from_pretrained(path, torch_dtype=torch.float16).to("cuda")
+
+     def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
+         """
+         Args:
+             data (:dict:):
+                 The payload with the text prompt and generation parameters.
+         """
+         # process input: the text to speak, the voice description, and any generation overrides
+         inputs = data.pop("inputs", data)
+         voice_description = data.pop("voice_description", "")  # default to an empty description
+         parameters = data.pop("parameters", None)
+
+         gen_kwargs = {"min_new_tokens": 10}
+         if parameters is not None:
+             gen_kwargs.update(parameters)
+
+         # preprocess: tokenize the prompt and the voice description separately
+         inputs = self.tokenizer(
+             text=[inputs],
+             padding=True,
+             return_tensors="pt",).to("cuda")
+         voice_description = self.tokenizer(
+             text=[voice_description],
+             padding=True,
+             return_tensors="pt",).to("cuda")
+
+         # generate: the description conditions the voice, the prompt is the text to speak
+         with torch.autocast("cuda"):
+             outputs = self.model.generate(**voice_description, prompt_input_ids=inputs.input_ids, **gen_kwargs)
+
+         # postprocess: convert the waveform to a JSON-serializable list of samples
+         prediction = outputs[0].cpu().numpy().tolist()
+
+         return [{"generated_audio": prediction}]
requirements.txt ADDED
@@ -0,0 +1 @@
+ git+https://github.com/huggingface/parler-tts.git
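
For reference, here is a minimal sketch of how the handler above might be exercised locally once the requirement is installed. The payload keys mirror the handler's `__call__`; the model path and `max_new_tokens` value are illustrative, and a CUDA GPU is assumed since the handler hard-codes `"cuda"`:

```py
# Illustrative smoke test for handler.py (values are examples, not canonical).
import soundfile as sf
from handler import EndpointHandler

handler = EndpointHandler(path="parler-tts/parler-tts-mini-v1")
payload = {
    "inputs": "Hey, how are you doing today?",
    "voice_description": "Jon's voice is monotone yet slightly fast in delivery.",
    "parameters": {"max_new_tokens": 2560},  # illustrative generation budget
}
prediction = handler(payload)[0]["generated_audio"]  # list of float samples
sf.write("endpoint_out.wav", prediction, handler.model.config.sampling_rate)
```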