TTS (Work in Progress...)

This project is a part of Mozilla Common Voice. TTS aims for a Text-to-Speech engine that is lightweight in computation yet produces high-quality speech synthesis. You can hear a sample here.

Here we have a PyTorch implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model. We plan to improve the model over time with new architectural updates.

You can find here a brief note outlining possible TTS architectures and comparing them.

Requirements

We highly recommend using miniconda for easier installation.

  • python 3.6
  • pytorch 0.4
  • librosa
  • tensorboard
  • tensorboardX
  • matplotlib
  • unidecode
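
For example, a fresh setup might look like the following (the environment name is illustrative; PyTorch itself is best installed following the instructions on pytorch.org, and the remaining packages come from the repo's requirements.txt):

conda create -n tts python=3.6
source activate tts
pip install -r requirements.txt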

Checkpoints and Audio Samples

Check out here to compare against the samples below (all except the first).

Models            Commit    Audio Sample
iter-62410        99d56f7   link
iter-170K         e00bc66   link
Best: iter-270K   256ed63   link

Data

Currently TTS provides a data loader for the LJSpeech dataset.
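
For example, to fetch LJSpeech (using the download URL published by the dataset's author) and then point data_path in config.json at the extracted folder:

wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar -xjf LJSpeech-1.1.tar.bz2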

Training the network

To run your own training, define a config.json file (a simple template is given below) and run the following command.

python train.py --config_path config.json

If you would like to use a specific set of GPUs:

CUDA_VISIBLE_DEVICES="0,1,4" python train.py --config_path config.json

Each run creates an experiment folder named with the date and time, under the output folder you set in config.json. If no checkpoint has been saved in that folder yet, it is removed when you exit training or when an error is raised.

You can also monitor training with Tensorboard, which shows a couple of useful training indicators, if you point its --logdir argument to the experiment folder.
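
For example, pointing it at the output folder from the config below picks up every run beneath it:

tensorboard --logdir=/my/experiment/folder/path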

Example config.json:

{
  "model_name": "my-model", // used in the experiment folder name
  "num_mels": 80,
  "num_freq": 1025,
  "sample_rate": 20000,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  "epochs": 1000,
  "lr": 0.002,
  "warmup_steps": 4000,
  "batch_size": 32,
  "eval_batch_size":32,
  "r": 5,
    
  "griffin_lim_iters": 60,
  "power": 1.5,

  "num_loader_workers": 8,

  "checkpoint": true,
  "save_step": 376,
  "data_path": "/my/training/data/path",
  "min_seq_len": 0, 
  "output_path": "/my/experiment/folder/path"
}
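
Note that the // comment above is not valid JSON, so strip it before parsing the file with standard tools. As a minimal, hypothetical sketch (not the repo's actual config loader) of reading the file and deriving the STFT parameters the audio settings imply:

import json

# load the config (assumes the // comment has been removed; plain JSON forbids comments)
with open("config.json") as f:
    c = json.load(f)

# derived STFT parameters implied by the audio settings above
hop_length = int(c["sample_rate"] * c["frame_shift_ms"] / 1000)   # 20000 Hz * 12.5 ms -> 250 samples
win_length = int(c["sample_rate"] * c["frame_length_ms"] / 1000)  # 20000 Hz * 50 ms   -> 1000 samples
n_fft = 2 * (c["num_freq"] - 1)                                   # 1025 frequency bins -> n_fft = 2048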

Testing

The best way to test your pretrained network is to use the notebooks under the notebooks folder.
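
Synthesis ultimately reconstructs a waveform from the predicted magnitude spectrogram with the Griffin-Lim algorithm, controlled by griffin_lim_iters and power in the config. A minimal generic sketch of that reconstruction step (this is not the repo's code; parameter defaults assume the example config above):

import numpy as np
import librosa

def griffin_lim(S, n_iter=60, n_fft=2048, hop_length=250, win_length=1000):
    # S: magnitude spectrogram of shape (1 + n_fft // 2, frames)
    angles = np.exp(2j * np.pi * np.random.rand(*S.shape))  # start from random phase
    y = librosa.istft(S * angles, hop_length=hop_length, win_length=win_length)
    for _ in range(n_iter):
        # re-estimate the phase from the current waveform while keeping the known magnitudes
        angles = np.exp(1j * np.angle(librosa.stft(y, n_fft=n_fft,
                                                   hop_length=hop_length, win_length=win_length)))
        y = librosa.istft(S * angles, hop_length=hop_length, win_length=win_length)
    return y  # raising S to the config's "power" (1.5) before calling this sharpens the result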

Contribution

Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you would like to add or change something in the code, please also consider writing tests for your changes, so that we can be sure things stay on track as this repo grows.

TODO

Check out the issues and the Projects page.

References

Precursor implementations