|
Metadata-Version: 2.1 |
|
Name: TTS |
|
Version: 0.21.3 |
|
Summary: Deep learning for Text to Speech by Coqui. |
|
Home-page: https://github.com/coqui-ai/TTS |
|
Author: Eren Gölge |
|
Author-email: [email protected] |
|
License: MPL-2.0 |
|
Project-URL: Documentation, https://github.com/coqui-ai/TTS/wiki |
|
Project-URL: Tracker, https://github.com/coqui-ai/TTS/issues |
|
Project-URL: Repository, https://github.com/coqui-ai/TTS |
|
Project-URL: Discussions, https://github.com/coqui-ai/TTS/discussions |
|
Platform: UNKNOWN |
|
Classifier: Programming Language :: Python |
|
Classifier: Programming Language :: Python :: 3 |
|
Classifier: Programming Language :: Python :: 3.9 |
|
Classifier: Programming Language :: Python :: 3.10 |
|
Classifier: Programming Language :: Python :: 3.11 |
|
Classifier: Development Status :: 3 - Alpha |
|
Classifier: Intended Audience :: Science/Research |
|
Classifier: Intended Audience :: Developers |
|
Classifier: Operating System :: POSIX :: Linux |
|
Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0) |
|
Classifier: Topic :: Software Development |
|
Classifier: Topic :: Software Development :: Libraries :: Python Modules |
|
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech |
|
Classifier: Topic :: Multimedia :: Sound/Audio |
|
Classifier: Topic :: Multimedia |
|
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence |
|
Requires-Python: >=3.9.0, <3.12 |
|
Description-Content-Type: text/markdown |
|
Provides-Extra: all |
|
Provides-Extra: dev |
|
Provides-Extra: ja |
|
Provides-Extra: notebooks |
|
License-File: LICENSE.txt |
|
|
|
|
|
|
|
## 🔥 News

- 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
|
- 📣 ⓍTTS fine-tuning code is out. Check the [example recipes](https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech). |
|
- 📣 ⓍTTS can now stream with <200ms latency. |
|
- 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released: [Blog Post](https://coqui.ai/blog/tts/open_xtts), [Demo](https://huggingface.co/spaces/coqui/xtts), [Docs](https://tts.readthedocs.io/en/dev/models/xtts.html)
|
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html) |
|
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS. |
|
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html) |
|
- 📣 The **Coqui Studio API** has landed in 🐸TTS. [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md)
|
- 📣 [**Coqui Studio API**](https://docs.coqui.ai/docs) is live. |
|
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)! [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
|
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin). |
|
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin). |
|
|
|
<div align="center"> |
|
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" /> |
|
|
|
|
|
|
|
|
|
**🐸TTS is a library for advanced Text-to-Speech generation.** |
|
|
|
🚀 Pretrained models in 1100+ languages.
|
|
|
🛠️ Tools for training new models and fine-tuning existing models in any language. |
|
|
|
📚 Utilities for dataset analysis and curation. |
|
______________________________________________________________________ |
|
|
|
[Discord](https://discord.gg/5eXr5seRrv) · [License: MPL 2.0](https://opensource.org/licenses/MPL-2.0) · [PyPI](https://badge.fury.io/py/TTS) · [Code of Conduct](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md) · [Downloads](https://pepy.tech/project/tts) · [DOI](https://zenodo.org/badge/latestdoi/265612440) · [Documentation](https://tts.readthedocs.io/en/latest/)
|
|
|
</div> |
|
|
|
______________________________________________________________________ |
|
|
|
|
|
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it. |
|
|
|
| Type                            | Platforms                                |
| ------------------------------- | ---------------------------------------- |
| 🚨 **Bug Reports**              | [GitHub Issue Tracker]                   |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker]                   |
| 👩‍💻 **Usage Questions**          | [GitHub Discussions]                     |
| 🗯 **General Discussion**       | [GitHub Discussions] or [Discord]        |
|
|
|
[github issue tracker]: https://github.com/coqui-ai/tts/issues |
|
[github discussions]: https://github.com/coqui-ai/TTS/discussions |
|
[discord]: https://discord.gg/5eXr5seRrv |
|
|
|
|
|
|
## 🔗 Links and Resources |
|
| Type                   | Links                                                                  |
| ---------------------- | ---------------------------------------------------------------------- |
| 💼 **Documentation**   | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)                    |
| 💾 **Installation**    | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#installation)  |
| 👩‍💻 **Contributing**    | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md) |
| 📌 **Road Map**        | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)   |
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |
| 📰 **Papers**          | [TTS Papers](https://github.com/erogol/TTS-papers)                      |
|
|
|
|
|
## 🥇 TTS Performance |
|
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p> |
|
|
|
Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are here to show the potential. Models prefixed with a dot (.Jofish .Abe and .Janice) are real human voices. |
|
|
|
## Features |
|
- High-performance Deep Learning models for Text2Speech tasks. |
|
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). |
|
- Speaker Encoder to compute speaker embeddings efficiently. |
|
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
|
- Fast and efficient model training. |
|
- Detailed training logs on the terminal and Tensorboard. |
|
- Support for Multi-speaker TTS. |
|
- Efficient, flexible, lightweight but feature-complete `Trainer API` (see the training sketch after this list).
|
- Released and ready-to-use models. |
|
- Tools to curate Text2Speech datasets under `dataset_analysis`.
|
- Utilities to use and test your models. |
|
- Modular (but not too much) code base enabling easy implementation of new ideas. |
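

To make the `Trainer API` item above concrete, here is a condensed training sketch modeled on the LJSpeech Glow-TTS recipe from the repository. The dataset path, output directory, and hyperparameters below are placeholders rather than recommended values, so treat this as a sketch, not a drop-in script:

```python
import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = os.path.join(os.getcwd(), "runs")  # placeholder output directory

# Point the dataset config at a local LJSpeech copy (path is a placeholder).
dataset_config = BaseDatasetConfig(
    formatter="ljspeech", meta_file_train="metadata.csv", path="path/to/LJSpeech-1.1/"
)

config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    run_eval=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    output_path=output_path,
    datasets=[dataset_config],
)

# Audio processor and tokenizer are built from the config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

# Load the dataset and split it into train and eval samples.
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)

# The Trainer handles logging, checkpointing, and the training loop.
trainer = Trainer(
    TrainerArgs(),
    config,
    output_path,
    model=model,
    train_samples=train_samples,
    eval_samples=eval_samples,
)
trainer.fit()
```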
|
|
|
## Model Implementations |
|
### Spectrogram models |
|
- Tacotron: [paper](https://arxiv.org/abs/1703.10135) |
|
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884) |
|
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129) |
|
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802) |
|
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950) |
|
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf) |
|
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263) |
|
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558) |
|
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557) |
|
- Capacitron: [paper](https://arxiv.org/abs/1906.03402) |
|
- OverFlow: [paper](https://arxiv.org/abs/2211.06892) |
|
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320) |
|
- Delightful TTS: [paper](https://arxiv.org/abs/2110.12612) |
|
|
|
### End-to-End Models |
|
- ⓍTTS: [blog](https://coqui.ai/blog/tts/open_xtts) |
|
- VITS: [paper](https://arxiv.org/pdf/2106.06103) |
|
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418) |
|
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts) |
|
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark) |
|
|
|
### Attention Methods |
|
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969) |
|
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006) |
|
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288) |
|
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/) |
|
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf) |
|
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447) |
|
|
|
### Speaker Encoder |
|
- GE2E: [paper](https://arxiv.org/abs/1710.10467) |
|
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf) |
|
|
|
### Vocoders |
|
- MelGAN: [paper](https://arxiv.org/abs/1910.06711) |
|
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106) |
|
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480) |
|
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646) |
|
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/) |
|
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713) |
|
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646) |
|
- UnivNet: [paper](https://arxiv.org/abs/2106.07889) |
|
|
|
### Voice Conversion |
|
- FreeVC: [paper](https://arxiv.org/abs/2210.15418) |
|
|
|
You can also help us implement more models. |
|
|
|
## Installation |
|
🐸TTS is tested on Ubuntu 18.04 with **Python >= 3.9, < 3.12**.
|
|
|
If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option. |
|
|
|
```bash |
|
pip install TTS |
|
``` |
|
|
|
If you plan to code or train models, clone 🐸TTS and install it locally. |
|
|
|
```bash |
|
git clone https://github.com/coqui-ai/TTS
cd TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras
|
``` |
|
|
|
If you are on Ubuntu (Debian), you can also run the following commands for installation.
|
|
|
```bash |
|
$ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS. |
|
$ make install |
|
``` |
|
|
|
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system). |
|
|
|
|
|
## Docker Image |
|
You can also try out 🐸TTS without installing it by using the Docker image. Simply run the following commands:
|
|
|
```bash |
|
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu |
|
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # To start a server
|
``` |
|
|
|
You can then enjoy the TTS server [here](http://[::1]:5002/). More details about the Docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html).
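

You can also request synthesis over HTTP once the server is up; a minimal sketch, assuming the demo server's `/api/tts` endpoint accepts a `text` query parameter:

```bash
# Ask the running server to synthesize a sentence and save the returned wav.
curl "http://localhost:5002/api/tts?text=Hello%20world" --output output.wav
```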
|
|
|
|
|
## Synthesizing speech by 🐸TTS |
|
|
|
### 🐍 Python API |
|
|
|
#### Running a multi-speaker and multi-lingual model |
|
|
|
```python |
|
import torch |
|
from TTS.api import TTS |
|
|
|
# Get device |
|
device = "cuda" if torch.cuda.is_available() else "cpu" |
|
|
|
# List available 🐸TTS models |
|
print(TTS().list_models()) |
|
|
|
# Init TTS |
|
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device) |
|
|
|
# Run TTS |
|
# ❗ Since this is a multi-lingual voice cloning model, we must set the target speaker_wav and language

# Text to speech, returning a list of amplitude values as output
|
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en") |
|
# Text to speech to a file |
|
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") |
|
``` |
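

For multi-speaker models that ship with a fixed set of speakers and languages instead of cloning from a reference clip, the API also exposes `tts.speakers` and `tts.languages`; a minimal sketch, assuming the loaded model defines both lists:

```python
# Synthesize with a built-in speaker and language instead of a cloning reference.
wav = tts.tts(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0])
```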
|
|
|
#### Running a single speaker model |
|
|
|
```python |
|
# Init TTS with the target model name |
|
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device) |
|
|
|
# Run TTS |
|
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH) |
|
|
|
# Example voice cloning with YourTTS in English, French and Portuguese |
|
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device) |
|
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav") |
|
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav") |
|
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav") |
|
``` |
|
|
|
#### Example voice conversion |
|
|
|
Converting the voice in `source_wav` to the voice of `target_wav`:
|
|
|
```python |
|
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda") |
|
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav") |
|
``` |
|
|
|
#### Example voice cloning together with the voice conversion model. |
|
This way, you can clone voices by using any model in 🐸TTS. |
|
|
|
```python |
|
|
|
tts = TTS("tts_models/de/thorsten/tacotron2-DDC") |
|
tts.tts_with_vc_to_file( |
|
"Wie sage ich auf Italienisch, dass ich dich liebe?", |
|
speaker_wav="target/speaker.wav", |
|
file_path="output.wav" |
|
) |
|
``` |
|
|
|
#### Example using [🐸Coqui Studio](https://coqui.ai) voices. |
|
You can access all of your cloned voices and built-in speakers in [🐸Coqui Studio](https://coqui.ai).
|
To do this, you'll need an API token, which you can obtain from the [account page](https://coqui.ai/account). |
|
After obtaining the API token, you'll need to configure the COQUI_STUDIO_TOKEN environment variable. |
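

For example, on Linux or macOS you could export the token in your shell before starting Python; the token value below is a placeholder:

```bash
export COQUI_STUDIO_TOKEN=<your_api_token>
```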
|
|
|
Once you have a valid API token in place, the studio speakers will be displayed as distinct models within the list. |
|
These models will follow the naming convention `coqui_studio/en/<studio_speaker_name>/coqui_studio`.
|
|
|
```python |
|
# XTTS model |
|
models = TTS(cs_api_model="XTTS").list_models() |
|
# Init TTS with the target studio speaker |
|
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False) |
|
# Run TTS |
|
tts.tts_to_file(text="This is a test.", language="en", file_path=OUTPUT_PATH) |
|
|
|
# V1 model |
|
models = TTS(cs_api_model="V1").list_models() |
|
# Run TTS with emotion and speed control |
|
# Emotion control only works with V1 model |
|
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5) |
|
``` |
|
|
|
#### Example text to speech using **Fairseq models in ~1100 languages** 🤯. |
|
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`. |
|
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) |
|
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms). |
|
|
|
```python |
|
# TTS with on-the-fly voice conversion
|
api = TTS("tts_models/deu/fairseq/vits") |
|
api.tts_with_vc_to_file( |
|
"Wie sage ich auf Italienisch, dass ich dich liebe?", |
|
speaker_wav="target/speaker.wav", |
|
file_path="output.wav" |
|
) |
|
``` |
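

Plain synthesis with the same Fairseq model, without the voice conversion step, is a minimal sketch reusing the `tts_to_file` call from the examples above:

```python
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file("Hallo, dies ist ein Test.", file_path="output.wav")
```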
|
|
|
### Command-line `tts` |
|
|
|
<!-- begin-tts-readme --> |
|
|
|
Synthesize speech on command line. |
|
|
|
You can either use your trained model or choose a model from the provided list. |
|
|
|
If you don't specify any models, it uses the LJSpeech-based English model.
|
|
|
#### Single Speaker Models |
|
|
|
- List provided models: |
|
|
|
``` |
|
$ tts --list_models |
|
``` |
|
|
|
- Get model info (for both tts_models and vocoder_models): |
|
|
|
- Query by type/name: |
|
The `--model_info_by_name` option uses the model name as it appears in the output of `--list_models`.
|
``` |
|
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>" |
|
``` |
|
For example: |
|
``` |
|
$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts |
|
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2 |
|
``` |
|
- Query by type/idx: |
|
The `<model_query_idx>` is the corresponding index from the `--list_models` output.
|
|
|
``` |
|
$ tts --model_info_by_idx "<model_type>/<model_query_idx>" |
|
``` |
|
|
|
For example: |
|
|
|
``` |
|
$ tts --model_info_by_idx tts_models/3 |
|
``` |
|
|
|
- Query model info by full name:
|
``` |
|
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>" |
|
``` |
|
|
|
- Run TTS with default models: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --out_path output/path/speech.wav |
|
``` |
|
|
|
- Run TTS and pipe out the generated TTS wav file data: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay |
|
``` |
|
|
|
- Run TTS and define a speed factor between 0.0 and 2.0 for 🐸Coqui Studio models:
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_name "coqui_studio/<language>/<dataset>/<model_name>" --speed 1.2 --out_path output/path/speech.wav |
|
``` |
|
|
|
- Run a TTS model with its default vocoder model: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav |
|
``` |
|
|
|
For example: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav |
|
``` |
|
|
|
- Run with specific TTS and vocoder models from the list: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav |
|
``` |
|
|
|
For example: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav |
|
``` |
|
|
|
- Run your own TTS model (Using Griffin-Lim Vocoder): |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav |
|
``` |
|
|
|
- Run your own TTS and Vocoder models: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav |
|
--vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json |
|
``` |
|
|
|
|
|
|
|
#### Multi-speaker Models

- List the available speakers and choose a <speaker_id> among them:
|
|
|
``` |
|
$ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs |
|
``` |
|
|
|
- Run the multi-speaker TTS model with the target speaker ID: |
|
|
|
``` |
|
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id> |
|
``` |
|
|
|
- Run your own multi-speaker TTS model: |
|
|
|
``` |
|
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id> |
|
``` |
|
|
|
|
|
|
|
#### Voice Conversion Models

- Run a voice conversion model, converting the voice in `--source_wav` to the voice in `--target_wav`:

```
|
$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav> |
|
``` |
|
|
|
<!-- end-tts-readme --> |
|
|
|
|
|
## Directory Structure

```
|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
          |- train*.py                  (train your target model.)
          |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```
|
|
|
|
|
|