TTS (Work in Progress...)
This project is part of Mozilla Common Voice. TTS aims to be a computationally lightweight text-to-speech engine with high-quality speech synthesis. You can hear a sample here.
This repository contains a PyTorch implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model. We plan to improve the model over time with new architectural updates.
You can also find here a brief note outlining possible TTS architectures and their comparisons.
Requirements
We highly recommend using miniconda for easier installation.
- python 3.6
- pytorch 0.4
- librosa
- tensorboard
- tensorboardX
- matplotlib
- unidecode
Checkpoints and Audio Samples
Check out here to listen to and compare the samples below (except the first).
Models | Commit | Audio Sample |
---|---|---|
iter-62410 | 99d56f7 | link |
iter-170K | e00bc66 | link |
Best: iter-270K | 256ed63 | link |
Data
Currently TTS provides data loaders for
Training the network
To run your own training, you need to define a config.json file (a simple template is below) and call the command:
train.py --config_path config.json
If you would like to use a specific set of GPUs:
CUDA_VISIBLE_DEVICES="0,1,4" train.py --config_path config.json
Each run creates an experiment folder named with the current date and time, under the folder you set in config.json. If there is no checkpoint yet under that folder, it is removed when you exit training or when an error is raised.
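The exact naming scheme lives in the training code, but as an illustrative sketch (the helper name and timestamp format below are assumptions, not the repo's actual implementation), the date-and-time experiment folder could be built like this:

```python
import os
from datetime import datetime

def make_experiment_folder(output_path, model_name):
    # Hypothetical sketch: combine the model_name from config.json
    # with a timestamp, e.g. "my-model-April-10-2018_10+30AM",
    # and create that folder under output_path.
    stamp = datetime.now().strftime("%B-%d-%Y_%I+%M%p")
    folder = os.path.join(output_path, "{}-{}".format(model_name, stamp))
    os.makedirs(folder, exist_ok=True)
    return folder
```

Deleting checkpoint-less folders on exit, as described above, keeps the output path free of clutter from aborted runs.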
You can also enjoy Tensorboard with a couple of useful training indicators if you point the Tensorboard argument --logdir to the experiment folder.
Example config.json:
{
  "model_name": "my-model", // used in the experiment folder name
  "num_mels": 80,
  "num_freq": 1025,
  "sample_rate": 20000,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",
  "epochs": 1000,
  "lr": 0.002,
  "warmup_steps": 4000,
  "batch_size": 32,
  "eval_batch_size": 32,
  "r": 5,
  "griffin_lim_iters": 60,
  "power": 1.5,
  "num_loader_workers": 8,
  "checkpoint": true,
  "save_step": 376,
  "data_path": "/my/training/data/path",
  "min_seq_len": 0,
  "output_path": "/my/experiment/folder/path"
}
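Note that the template above uses //-style comments, which plain json.load rejects. A small loader sketch (a hypothetical helper, not part of the repo) can strip them before parsing; it also shows how the frame parameters translate into STFT sample counts:

```python
import json
import re

def load_config(path):
    # Hypothetical helper: strip "//" line comments so the template
    # above parses as valid JSON, then load it into a dict.
    # (Naive: would also mangle "//" inside string values.)
    with open(path) as f:
        text = re.sub(r"//.*", "", f.read())
    return json.loads(text)

def frame_sizes(config):
    # With the template values:
    #   frame length: 20000 Hz * 50 ms   / 1000 = 1000 samples
    #   frame shift:  20000 Hz * 12.5 ms / 1000 = 250 samples
    sr = config["sample_rate"]
    win = int(sr * config["frame_length_ms"] / 1000)
    hop = int(sr * config["frame_shift_ms"] / 1000)
    return win, hop
```

This is only a sanity-check sketch; train.py consumes the config through its own loader.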
Testing
The best way to test your pretrained network is to use the notebooks under the notebooks folder.
Contribution
Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you would like to add or edit code, please also consider writing tests to verify your changes so that we can be sure things stay on track as this repo grows.
TODO
Check out the issues and the Project page.
References
- Efficient Neural Audio Synthesis
- Attention-Based Models for Speech Recognition
- Generating Sequences With Recurrent Neural Networks
- Char2Wav: End-to-End Speech Synthesis
- VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop
- WaveRNN
- Faster WaveNet
- Parallel WaveNet
Precursor implementations
- https://github.com/keithito/tacotron (Dataset and Test processing)
- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)