# <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>

🐸TTS is a library for advanced Text-to-Speech generation. It is built on the latest research and designed to achieve the best trade-off among ease of training, speed, and quality.

🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in **20+ languages** for products and research projects.

[](https://github.com/coqui-ai/TTS/actions)
[](https://badge.fury.io/py/TTS)
[](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[](https://pepy.tech/project/tts)
[](https://zenodo.org/badge/latestdoi/265612440)
[](https://tts.readthedocs.io/en/latest/)
[](https://gitter.im/coqui-ai/TTS?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[](https://opensource.org/licenses/MPL-2.0)

📰 [**Subscribe to 🐸Coqui.ai Newsletter**](https://coqui.ai/?subscription=true)

📢 [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2)

📄 [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers)

<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

| Type                            | Platforms                             |
| ------------------------------- | ------------------------------------- |
| 🚨 **Bug Reports**              | [GitHub Issue Tracker]                |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker]                |
| 👩‍💻 **Usage Questions**          | [GitHub Discussions]                  |
| 🗯 **General Discussion**       | [GitHub Discussions] or [Gitter Room] |

[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[gitter room]: https://gitter.im/coqui-ai/TTS?utm_source=share-link&utm_medium=link&utm_campaign=share-link
[tutorials and examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials

## 🔗 Links and Resources
| Type                   | Links                                                                                                                                                  |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 💼 **Documentation**   | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)                                                                                                   |
| 💾 **Installation**    | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)                                                                                  |
| 👩‍💻 **Contributing**    | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)                                                                           |
| 📌 **Road Map**        | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)                                                                                   |
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |

## 🥇 TTS Performance
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>

Underlined "TTS*" and "Judy*" are 🐸TTS models.
<!-- [Details...](https://github.com/coqui-ai/TTS/wiki/Mean-Opinion-Score-Results) -->

## Features
- High-performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- Fast and efficient model training.
- Detailed training logs on the terminal and Tensorboard.
- Support for multi-speaker TTS.
- Efficient, flexible, lightweight but feature-complete `Trainer API`.
- Released and ready-to-use models.
- Tools to curate Text2Speech datasets under `dataset_analysis`.
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.

## Implemented Models
### Text-to-Spectrogram
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)

### End-to-End Models
- VITS: [paper](https://arxiv.org/pdf/2106.06103)

### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)

### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)

### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)

You can also help us implement more models.

## Install TTS
🐸TTS is tested on Ubuntu 18.04 with **python >= 3.6, < 3.9**.

If you are only interested in [synthesizing speech ](https://tts.readthedocs.io/en/latest/inference.html ) with the released 🐸TTS models, installing from PyPI is the easiest option.

```bash
pip install TTS
```

If you plan to code or train models, clone 🐸TTS and install it locally.

```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks] # Select the relevant extras
```

If you are on Ubuntu (Debian), you can also run the following commands to install.

```bash
$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here ](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system ).
## Use TTS
### Single Speaker Models
- List provided models:
```
$ tts --list_models
```
- Run TTS with the default models:
```
$ tts --text "Text for TTS"
```
- Run a TTS model with its default vocoder model:
```
$ tts --text "Text for TTS" --model_name "<language>/<dataset>/<model_name>"
```
- Run with specific TTS and vocoder models from the list:
```
$ tts --text "Text for TTS" --model_name "<language>/<dataset>/<model_name>" --vocoder_name "<language>/<dataset>/<model_name>" --output_path
```
- Run your own TTS model (using the Griffin-Lim vocoder):
```
$ tts --text "Text for TTS" --model_path path/to/model.pth.tar --config_path path/to/config.json --out_path output/path/speech.wav
```
- Run your own TTS and vocoder models:
```
$ tts --text "Text for TTS" --model_path path/to/model.pth.tar --config_path path/to/config.json --out_path output/path/speech.wav --vocoder_path path/to/vocoder.pth.tar --vocoder_config_path path/to/vocoder_config.json
```
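The model identifiers passed to `--model_name` and `--vocoder_name` above follow a fixed `<language>/<dataset>/<model_name>` convention. A minimal sketch of splitting one into its parts; the helper below is purely illustrative and not part of the 🐸TTS package:

```python
# Illustrative helper (not part of the TTS package): split a model identifier
# of the form "<language>/<dataset>/<model_name>" into its three fields.
def parse_model_name(model_name: str) -> dict:
    parts = model_name.split("/")
    if len(parts) != 3:
        raise ValueError(f"expected <language>/<dataset>/<model_name>, got {model_name!r}")
    language, dataset, model = parts
    return {"language": language, "dataset": dataset, "model": model}

# A name shown only to illustrate the shape of the convention:
print(parse_model_name("en/ljspeech/tacotron2"))
# -> {'language': 'en', 'dataset': 'ljspeech', 'model': 'tacotron2'}
```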
### Multi-speaker Models
- List the available speakers and choose a <speaker_id> among them:
```
$ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```
- Run the multi-speaker TTS model with the target speaker ID:
```
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```
- Run your own multi-speaker TTS model:
```
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth.tar --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```
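If you prefer to inspect the speakers file programmatically instead of reading the `--list_speaker_idxs` output, a minimal sketch follows. Both the helper name and the file schema (a JSON object keyed by speaker ID) are assumptions for illustration, not part of the 🐸TTS package; check your model's own speakers file for its exact layout:

```python
import json

# Illustrative helper (not part of the TTS package): list the speaker IDs in
# speakers-file content, assuming a JSON object keyed by speaker ID.
def list_speaker_ids(speakers_json_text: str) -> list:
    return sorted(json.loads(speakers_json_text).keys())

# Made-up two-speaker content, just to show the assumed shape:
example = '{"p225": {"embedding": []}, "p226": {"embedding": []}}'
print(list_speaker_ids(example))  # -> ['p225', 'p226']
```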

## Directory Structure
```
|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- distribute.py              (train your TTS model using Multiple GPUs.)
      |- compute_statistics.py      (compute dataset statistics for normalization.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```