
TTS (Work in Progress...)

This project is part of Mozilla Common Voice. TTS aims to be a Text-to-Speech engine that is lightweight in computation while producing high-quality speech synthesis. You can listen to a sample here.

This repository contains a PyTorch implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model. We plan to improve the model over time with new architectural updates.

You can also find here a brief note on possible TTS architectures and how they compare.

Requirements

It is highly recommended to use miniconda for easier installation.

  • python 3.6
  • pytorch 0.4
  • librosa
  • tensorboard
  • tensorboardX
  • matplotlib
  • unidecode
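
After installing these, a quick way to sanity-check the environment is a tiny script like the following (a convenience sketch, not part of the repo):

```python
# environment sanity check: verify the main dependencies import and report versions
import sys

import matplotlib
import librosa
import torch
import tensorboardX  # noqa: F401 - imported only to confirm it is installed
import unidecode     # noqa: F401

print("python     :", sys.version.split()[0])
print("pytorch    :", torch.__version__)
print("librosa    :", librosa.__version__)
print("matplotlib :", matplotlib.__version__)
print("CUDA available:", torch.cuda.is_available())
```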

Checkpoints and Audio Samples

Check out here to compare the samples below (except the first).

| Models | Commit | Audio Sample | Details |
| --- | --- | --- | --- |
| iter-62410 | 99d56f7 | link | First model with plain Tacotron implementation. |
| iter-170K | e00bc66 | link | More stable and longer-trained model. |
| Best: iter-270K | 256ed63 | link | Stop-token prediction added to detect the end of speech. |

Data

Currently TTS provides a data loader for LJ Speech.

Training and Fine-tuning

Split metadata.csv into training and validation subsets, metadata_train.csv and metadata_val.csv respectively. Adjust the line counts below to your dataset size so that the two subsets do not overlap.

shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv
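
If you prefer doing the split in Python, the sketch below does the same thing while guaranteeing that the subsets do not overlap. It assumes the LJ Speech style metadata.csv with one sample per line; the validation size is just an example value:

```python
# shuffle metadata.csv and write non-overlapping train/validation splits
import random

with open("metadata.csv", encoding="utf-8") as f:
    lines = f.readlines()

random.seed(0)          # fixed seed so the split is reproducible
random.shuffle(lines)

n_val = 1100            # example validation size; pick what fits your dataset
with open("metadata_val.csv", "w", encoding="utf-8") as f:
    f.writelines(lines[:n_val])
with open("metadata_train.csv", "w", encoding="utf-8") as f:
    f.writelines(lines[n_val:])
```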

To train a new model, define a config.json file (a simple template is given below) and run the command below.

train.py --config_path config.json

To fine-tune a model, use the --restore_path argument.

train.py --config_path config.json --restore_path /path/to/your/model.pth.tar
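
Before fine-tuning, it can help to peek into the checkpoint file. The sketch below assumes only that the file is a standard PyTorch checkpoint loadable with torch.load; the exact keys written by train.py are not spelled out here:

```python
# inspect a saved checkpoint before fine-tuning (illustrative sketch)
import torch

checkpoint = torch.load("/path/to/your/model.pth.tar", map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    # print each stored entry and its type (model weights, optimizer state, step, ...)
    for key, value in checkpoint.items():
        print(key, "->", type(value).__name__)
```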

If you want to use a specific set of GPUs, set the CUDA_VISIBLE_DEVICES environment variable. The code automatically uses all visible GPUs for data-parallel training; if you don't set the variable, all GPUs on the system are used.

CUDA_VISIBLE_DEVICES="0,1,4" train.py --config_path config.json
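
To confirm which devices PyTorch actually sees after setting the variable, a quick check such as the following works (illustrative only):

```python
# list the GPUs visible to PyTorch; with CUDA_VISIBLE_DEVICES="0,1,4" this reports 3 devices
import torch

print("visible GPUs:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```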

Each run creates an experiment folder with some meta information under the output folder you set in config.json. If the run fails or is interrupted before any checkpoint has been saved, the whole experiment folder is removed.

You can also use TensorBoard by pointing its --logdir argument to the experiment folder.

Example config.json:

{
  "model_name": "my-model", // used in the experiment folder name
  "num_mels": 80,
  "num_freq": 1025,
  "sample_rate": 20000,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  "epochs": 1000,
  "lr": 0.002,
  "warmup_steps": 4000,
  "batch_size": 32,
  "eval_batch_size":32,
  "r": 5,

  "griffin_lim_iters": 60,
  "power": 1.5,

  "num_loader_workers": 8,

  "checkpoint": true,
  "save_step": 376,
  "data_path": "/my/training/data/path",
  "min_seq_len": 0,
  "output_path": "/my/experiment/folder/path"
}
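
For reference, the audio fields above are commonly mapped to STFT settings as in the sketch below. This follows the usual Tacotron conventions and is an interpretation for illustration, not code taken from this repository:

```python
# map the config's audio fields to STFT parameters and compute a mel spectrogram
import numpy as np
import librosa

sample_rate = 20000
n_fft = (1025 - 1) * 2                        # num_freq = 1025 -> n_fft = 2048
hop_length = int(sample_rate * 12.5 / 1000)   # frame_shift_ms = 12.5 -> 250 samples
win_length = int(sample_rate * 50 / 1000)     # frame_length_ms = 50 -> 1000 samples

y, _ = librosa.load("sample.wav", sr=sample_rate)        # "sample.wav" is a placeholder
y = np.append(y[0], y[1:] - 0.97 * y[:-1])                # preemphasis = 0.97

spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length, win_length=win_length))
mel_basis = librosa.filters.mel(sr=sample_rate, n_fft=n_fft, n_mels=80)  # num_mels = 80
mel_spec = np.dot(mel_basis, spec)
print(mel_spec.shape)
```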

Testing

The best way to test your pretrained model is to use the notebooks under the notebooks folder.

Contribution

Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you add or change code, please also consider writing tests for your changes so that we can keep things on track as this repo gets bigger.

TODO

Check out the issues and the Projects page.

References

Precursor implementations