🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
 
 
 
 
TTS (Work in Progress...)

TTS targets a Text-to-Speech engine that is lightweight in computation while producing high-quality speech.

Here we have a PyTorch implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model as the starting point. We plan to improve the model with recent advances in the field.

You can find here a brief note outlining possible TTS architectures and how they compare.

Requirements

It is highly recommended to use Miniconda for easier installation.

  • python 3.6
  • pytorch > 0.2.0
  • TODO

Audio Samples

All samples below are generated in a test setting and can be reproduced with a model trained on the master branch.

Checkpoints

Models Commit Audio Sample
iter-62410 99d56f7 https://soundcloud.com/user-565970875/99d56f7-iter62410

Data

Currently TTS provides data loaders for the LJ Speech and TWEB datasets.
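For context, an LJ Speech-style dataset ships a pipe-separated metadata.csv (file_id|raw text|normalized text) next to a wavs/ directory. A minimal parsing sketch, as a hypothetical helper rather than the repo's actual loader:

```python
import csv
import os

def load_ljspeech_metadata(root):
    """Return (wav_path, text) pairs from an LJSpeech-style metadata.csv.

    Hypothetical helper for illustration; the repo's own loader may differ.
    """
    items = []
    meta_path = os.path.join(root, "metadata.csv")
    with open(meta_path, encoding="utf-8") as f:
        # Fields are pipe-separated and unquoted in the LJ Speech release.
        for row in csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE):
            wav_path = os.path.join(root, "wavs", row[0] + ".wav")
            items.append((wav_path, row[-1]))  # last column is the normalized text
    return items
```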

Training the network

To run your own training, you need to define a config.json file (a simple template is given below) and start training with:

python train.py --config_path config.json

If you would like to use a specific set of GPUs:

CUDA_VISIBLE_DEVICES="0,1,4" python train.py --config_path config.json

Each run creates an experiment folder named with the corresponding date and time, under the folder you set in config.json. If there is no checkpoint yet under that folder, it is removed when you press Ctrl+C.

You can also monitor training with TensorBoard by pointing --logdir to the experiment folder.

Example config.json:

{
  // Data loading parameters
  "num_mels": 80,
  "num_freq": 1024,
  "sample_rate": 20000,
  "frame_length_ms": 50.0,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "hidden_size": 128,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  // Training parameters
  "epochs": 2000,
  "lr": 0.001,
  "batch_size": 256,
  "griffinf_lim_iters": 60,
  "power": 1.5,
  "r": 5,            // number of decoder outputs for Tacotron

  // Number of data loader processes
  "num_loader_workers": 8,

  // Experiment logging parameters
  "checkpoint": true,  // if save checkpoint per save_step
  "save_step": 200,
  "data_path": "/path/to/KeithIto/LJSpeech-1.0",
  "output_path": "/path/to/my_experiment",
  "log_dir": "/path/to/my/tensorboard/logs/"
}
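To make the audio parameters above concrete, the frame settings translate into STFT window and hop sizes in samples, and r sets how many spectrogram frames the decoder emits per step. A small sketch assuming the usual ms-to-samples conversion (the repo's internal names may differ):

```python
# Derive sample counts and decoder step counts from the example config values.
SAMPLE_RATE = 20000
FRAME_LENGTH_MS = 50.0
FRAME_SHIFT_MS = 12.5
R = 5  # decoder outputs per step

win_length = int(SAMPLE_RATE * FRAME_LENGTH_MS / 1000)  # 1000 samples per window
hop_length = int(SAMPLE_RATE * FRAME_SHIFT_MS / 1000)   # 250 samples per hop

# One second of audio spans 1000 / 12.5 = 80 spectrogram frames,
# which the decoder covers in 80 / 5 = 16 steps when r = 5.
frames_per_second = int(1000 / FRAME_SHIFT_MS)
decoder_steps_per_second = frames_per_second // R
```

Larger r shortens the decoder sequence (faster training, fewer attention steps) at the cost of coarser per-step prediction.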

Testing

The best way to test your pretrained network is to use the notebooks under the notebooks folder.

Contribution

Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you would like to add or edit the code, please also consider writing tests to verify your changes, so that we can be sure everything stays on track as this repo grows.

TODO

Precursor implementations