## 🐸Coqui.ai News
- 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
- 📣 ⓍTTS fine-tuning code is out. Check the [example recipes](https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech).
- 📣 ⓍTTS can now stream with <200ms latency (see the streaming sketch after this list).
- 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released: [Blog Post](https://coqui.ai/blog/tts/open_xtts), [Demo](https://huggingface.co/spaces/coqui/xtts), [Docs](https://tts.readthedocs.io/en/dev/models/xtts.html)
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
- 📣 The **Coqui Studio API** has landed in 🐸TTS. [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
- 📣 The [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)!! [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
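
Taking the streaming item above, here is a minimal sketch of ⓍTTS streaming based on the [ⓍTTS docs](https://tts.readthedocs.io/en/dev/models/xtts.html); the checkpoint paths and the reference clip are placeholders you would replace with your own:

```python
import torch
import torchaudio

from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Load the ⓍTTS model (placeholder paths; point them at your downloaded checkpoint).
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/")
model.cuda()

# Compute speaker conditioning latents from a short reference clip.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"])

# Stream the audio chunk by chunk instead of waiting for the full waveform.
chunks = model.inference_stream("This is streaming synthesis.", "en", gpt_cond_latent, speaker_embedding)
wav = torch.cat([chunk for chunk in chunks], dim=0)
torchaudio.save("xtts_streaming.wav", wav.squeeze().unsqueeze(0).cpu(), 24000)
```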
<div align="center">
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />

## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>
**🐸TTS is a library for advanced Text-to-Speech generation.**
🚀 Pretrained models in 1100+ languages.
🛠️ Tools for training new models and fine-tuning existing models in any language.
📚 Utilities for dataset analysis and curation.
______________________________________________________________________
[Discord](https://discord.gg/5eXr5seRrv)
[License](https://opensource.org/licenses/MPL-2.0)
[PyPI](https://badge.fury.io/py/TTS)
[Code of Conduct](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[Downloads](https://pepy.tech/project/tts)
[DOI](https://zenodo.org/badge/latestdoi/265612440)
[Documentation](https://tts.readthedocs.io/en/latest/)

</div>
______________________________________________________________________
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
| Type | Platforms |
| ------------------------------- | --------------------------------------- |
| 🚨 **Bug Reports** | [GitHub Issue Tracker] |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker] |
| 👩‍💻 **Usage Questions** | [GitHub Discussions] |
| 🗯 **General Discussion** | [GitHub Discussions] or [Discord] |
[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
## 🔗 Links and Resources
| Type | Links |
| ------------------------------- | --------------------------------------- |
| 💼 **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)|
| 💾 **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#installation)|
| 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)|
| 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)|
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)|
| 📰 **Papers** | [TTS Papers](https://github.com/erogol/TTS-papers)|
## 🥇 TTS Performance
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>

Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are included to show the potential of the approach. Models prefixed with a dot (.Jofish, .Abe, and .Janice) are real human voices.
## Features
- High-performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- Fast and efficient model training.
- Detailed training logs on the terminal and Tensorboard.
- Support for Multi-speaker TTS.
- Efficient, flexible, lightweight but feature complete `Trainer API` (see the training sketch after this list).
- Released and ready-to-use models.
- Tools to curate Text2Speech datasets under `dataset_analysis`.
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
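
As a sketch of what the `Trainer API` looks like in practice, the snippet below follows the pattern of the GlowTTS/LJSpeech training recipe; the dataset path and the config values are illustrative, not prescriptive:

```python
from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = "output/"

# Illustrative dataset config; point `path` at your own LJSpeech-style dataset.
dataset_config = BaseDatasetConfig(formatter="ljspeech", meta_file_train="metadata.csv", path="data/LJSpeech-1.1/")
config = GlowTTSConfig(batch_size=32, eval_batch_size=16, run_eval=True, epochs=1000,
                       output_path=output_path, datasets=[dataset_config])

# The audio processor, tokenizer and data samples are all initialized from the config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

# The same Trainer drives every model type in the repo.
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(TrainerArgs(), config, output_path,
                  model=model, train_samples=train_samples, eval_samples=eval_samples)
trainer.fit()
```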
## Model Implementations
### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)
- Delightful TTS: [paper](https://arxiv.org/abs/2110.12612)
### End-to-End Models
- ⓍTTS: [blog](https://coqui.ai/blog/tts/open_xtts)
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark)
### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)
### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)
### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)
### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)

You can also help us implement more models.
## Installation
🐸TTS is tested on Ubuntu 18.04 with **python >= 3.9, < 3.12**.

If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.
```bash
pip install TTS
```
If you plan to code or train models, clone 🐸TTS and install it locally.
```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras
```
If you are on Ubuntu (Debian), you can also run the following commands for installation.
```bash
$ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
2021-04-09 10:06:38 +00:00
$ make install
```
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).
## Docker Image
You can also try TTS without installing it by using the docker image.
Simply run the following commands to get a shell where TTS is ready to use:
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server
```
You can then enjoy the TTS server [here](http://[::1]:5002/).

More details about the docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html).
## Synthesizing speech with 🐸TTS
### 🐍 Python API
#### Running a multi-speaker and multi-lingual model
```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Run TTS
# ❗ Since this is a multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech: returns a list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
```
#### Running a single speaker model
```python
# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
```
#### Example voice conversion
Converting the voice in `source_wav` to the voice of `target_wav`:

```python
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
```
#### Example voice cloning together with the voice conversion model.
This way, you can clone voices by using any model in 🐸TTS.
```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```
#### Example using [🐸Coqui Studio](https://coqui.ai) voices.
You can access all of your cloned voices and built-in speakers in [🐸Coqui Studio](https://coqui.ai).
To do this, you'll need an API token, which you can obtain from the [account page](https://coqui.ai/account).
After obtaining the API token, you'll need to configure the `COQUI_STUDIO_TOKEN` environment variable.
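
For example, you can set the token from Python before initializing `TTS` (a minimal sketch; the token value is a placeholder for the one from your account page):

```python
import os

# Placeholder value; use the API token from your Coqui Studio account page.
os.environ["COQUI_STUDIO_TOKEN"] = "<your_api_token>"
```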
Once you have a valid API token in place, the studio speakers will be displayed as distinct models within the list.
These models will follow the naming convention `coqui_studio/en/<studio_speaker_name>/coqui_studio`.
```python
# XTTS model
models = TTS(cs_api_model="XTTS").list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False)
# Run TTS
tts.tts_to_file(text="This is a test.", language="en", file_path=OUTPUT_PATH)

# V1 model
models = TTS(cs_api_model="V1").list_models()
# Run TTS with emotion and speed control
# Emotion control only works with V1 model
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)
```
#### Example text to speech using **Fairseq models in ~1100 languages** 🤯.
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).
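
For plain text to speech, a minimal sketch (assuming German, ISO code `deu`) looks like this:

```python
from TTS.api import TTS

# Plain TTS with a Fairseq VITS model; swap the ISO code to pick another language.
tts = TTS("tts_models/deu/fairseq/vits")
tts.tts_to_file(text="Guten Tag, wie geht es Ihnen?", file_path="output.wav")
```

You can also combine these models with voice conversion on the fly, as in the example below.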
```python
# TTS with on-the-fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```
### Command-line `tts`
<!-- begin-tts-readme -->
Synthesize speech on the command line.
You can either use your trained model or choose a model from the provided list.
If you don't specify any models, it uses the LJSpeech-based English model.
#### Single Speaker Models
- List provided models:

    ```
    $ tts --list_models
    ```

- Get model info (for both tts_models and vocoder_models):

  - Query by type/name:
    The `--model_info_by_name` argument uses the name as it appears in the output of `--list_models`.
    ```
    $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
    ```
    For example:
    ```
    $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
    $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
    ```
  - Query by type/idx:
    The `--model_info_by_idx` argument uses the corresponding index from the output of `--list_models`.
    ```
    $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
    ```
    For example:
    ```
    $ tts --model_info_by_idx tts_models/3
    ```
  - Query model info by full name:
    ```
    $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
    ```
- Run TTS with default models:

    ```
    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    ```

- Run TTS and pipe out the generated TTS wav file data:

    ```
    $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
    ```

- Run TTS and set the speed factor for 🐸Coqui Studio models, between 0.0 and 2.0:

    ```
    $ tts --text "Text for TTS" --model_name "coqui_studio/<language>/<dataset>/<model_name>" --speed 1.2 --out_path output/path/speech.wav
    ```

- Run a TTS model with its default vocoder model:

    ```
    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    ```

    For example:

    ```
    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    ```

- Run with specific TTS and vocoder models from the list:

    ```
    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    ```

    For example:

    ```
    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    ```

- Run your own TTS model (using the Griffin-Lim vocoder):

    ```
    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    ```

- Run your own TTS and vocoder models:

    ```
    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav \
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    ```

#### Multi-speaker Models

- List the available speakers and choose a <speaker_id> among them:

    ```
    $ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
    ```

- Run the multi-speaker TTS model with the target speaker ID:

    ```
    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
    ```

- Run your own multi-speaker TTS model:

    ```
    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    ```

### Voice Conversion Models

```
$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```
<!-- end-tts-readme -->
## Directory Structure
```
|- notebooks/           (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/               (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
        |- train*.py    (train your target model.)
        |- ...
    |- tts/             (text to speech models)
        |- layers/      (model layer definitions)
        |- models/      (model definitions)
        |- utils/       (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```