mirror of https://github.com/coqui-ai/TTS.git
# 🐸💬 TTS LJSpeech Recipes

Recipe folders in this directory:

- align_tts
- fast_pitch
- fast_speech
- glow_tts
- hifigan
- multiband_melgan
- speedy_speech
- tacotron2-DCA
- tacotron2-DDC
- univnet
- vits_tts
- wavegrad
- wavernn

The directory also contains `download_ljspeech.sh` and this `README.md`.
To run the recipes:

1. Download the LJSpeech dataset, either manually from its official website or by running `download_ljspeech.sh`.
2. Go to your desired model folder and run the training.
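As a rough illustration of the download step, here is a minimal Python sketch of what `download_ljspeech.sh` automates. The helper name `download_ljspeech` and the exact flow are assumptions for illustration, not the script's actual contents; the archive URL is the dataset's well-known official location.

```python
import os
import tarfile
import urllib.request

# Official LJSpeech 1.1 archive location (assumption: unchanged upstream).
LJSPEECH_URL = "https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2"


def download_ljspeech(dest_dir: str) -> str:
    """Download and extract LJSpeech into dest_dir; return the dataset path."""
    os.makedirs(dest_dir, exist_ok=True)
    archive = os.path.join(dest_dir, "LJSpeech-1.1.tar.bz2")
    if not os.path.exists(archive):
        # ~2.6 GB download; skipped if the archive is already present.
        urllib.request.urlretrieve(LJSPEECH_URL, archive)
    with tarfile.open(archive, "r:bz2") as tar:
        tar.extractall(dest_dir)
    return os.path.join(dest_dir, "LJSpeech-1.1")
```

A recipe would then point its dataset path at the returned directory, e.g. `download_ljspeech("data/")`.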
Running Python files (choose the desired GPU ID for your run and set `CUDA_VISIBLE_DEVICES`):

```bash
CUDA_VISIBLE_DEVICES="0" python train_modelX.py
```
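If you launch training from inside Python rather than from the shell, the same GPU pinning can be done programmatically; a minimal sketch (note `train_modelX.py` above is a placeholder name, and the environment variable must be set before any CUDA-backed library is imported):

```python
import os

# Pin the process to GPU 0. This must run before torch (or any other
# CUDA-backed library) is imported, or the setting may be ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```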
Running bash scripts:

```bash
bash run.sh
```
💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.