---
library_name: transformers
tags:
  - text-to-speech
  - annotation
license: apache-2.0
language:
  - en
  - fr
  - es
  - pt
  - pl
  - de
  - nl
  - it
pipeline_tag: text-to-speech
inference: false
datasets:
  - facebook/multilingual_librispeech
  - PHBJT/cml-tts-cleaned-levenshtein
  - PHBJT/multilingual_librispeech_text_description_capitalized
  - PHBJT/cml-tts-description-punctuation-and-casing-restored
  - parler-tts/libritts_r_filtered
  - parler-tts/libritts-r-filtered-speaker-descriptions
  - parler-tts/mls_eng
  - parler-tts/mls-eng-speaker-descriptions
---

# Parler-TTS Mini Multilingual


Parler-TTS Mini Multilingual v1.1 is a multilingual extension of Parler-TTS Mini.

It is a fine-tuned version of that model, trained on a cleaned version of CML-TTS and on the non-English portion of Multilingual LibriSpeech. In all, this represents some 9,200 hours of non-English data. To retain English capabilities, we also added back the LibriTTS-R English dataset, some 580 hours of high-quality English data.

Parler-TTS Mini Multilingual can speak in 8 European languages: English, French, Spanish, Portuguese, Polish, German, Italian and Dutch.

Thanks to its improved prompt tokenizer, it can easily be extended to other languages: the tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.
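As a quick illustration of what byte fallback buys you, here is a minimal sketch (the Polish sample sentence is purely illustrative) showing how the prompt tokenizer handles characters that may not be in its base vocabulary:

```py
from transformers import AutoTokenizer

# Prompt tokenizer shipped with this checkpoint.
prompt_tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual")

# With byte fallback, characters missing from the vocabulary decompose into
# byte-level tokens instead of collapsing to <unk>, so any UTF-8 prompt
# round-trips losslessly.
print(prompt_tokenizer.tokenize("Dzień dobry, świecie!"))
```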

🚨 This work is the result of a collaboration between the HuggingFace audio team and the Quantum Squadra team. The AI4Bharat team also provided advice and assistance in improving tokenization. 🚨

πŸ“– Quick Index

πŸ› οΈ Usage

🚨 Unlike previous versions of Parler-TTS, this model uses two tokenizers: one for the prompt and one for the description. 🚨

πŸ‘¨β€πŸ’» Installation

Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### Inference

Parler-TTS has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the model and the two tokenizers: one for the transcript prompt,
# one for the natural-language voice description.
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

# Each input goes through its own tokenizer.
input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate audio and write the decoded waveform to disk.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

Tips:

* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the description, as the sketches after this list illustrate.
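To make the last three tips concrete, here is a minimal sketch that reuses `model`, `tokenizer`, `description_tokenizer` and `device` from the snippet above; the French prompt and the description wording are illustrative, not canonical phrasings from the training annotations:

```py
# Reuses model, tokenizer, description_tokenizer and device from the snippet above.
prompt = "Bonjour, comment allez-vous aujourd'hui ?"  # French transcript to synthesize
# Changing the description steers gender, pacing, pitch and recording conditions;
# "very noisy audio" asks for heavy background noise, per the tips above.
description = (
    "A male speaker delivers his words at a fast pace and with a low pitch, "
    "in a confined sounding environment with very noisy audio."
)

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("parler_tts_fr_noisy.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```

For the first tip, one easy lever is the attention implementation. This is a sketch under the assumption that the checkpoint accepts the standard `attn_implementation` argument that `transformers` models expose; see the inference guide for the options that are actually supported:

```py
# Assumption: this checkpoint accepts the standard transformers
# attn_implementation flag; SDPA typically speeds up generation on recent GPUs.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-multilingual",
    attn_implementation="sdpa",
).to(device)
```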

## Motivation

Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://arxiv.org/abs/2402.01912) by Dan Lyth and Simon King, from Stability AI and the University of Edinburgh respectively.

In contrast to other TTS models, Parler-TTS is a fully open-source release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models. Parler-TTS was released alongside:

* The [Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own versions of the model.
* The [Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* The [Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title = {Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author = {Dan Lyth and Simon King},
  year = {2024},
  eprint = {2402.01912},
  archivePrefix = {arXiv},
  primaryClass = {cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.