mimic2

This is a fork of keithito/tacotron with changes specific to Mimic 2 applied.

Background

Google published a paper, Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model, where they present a neural text-to-speech model that learns to synthesize speech directly from (text, audio) pairs. However, they didn't release their source code or training data. This is an attempt to provide an open-source implementation of the model described in their paper.

The quality isn't as good as Google's demo yet, but hopefully it will get there someday :-). Pull requests are welcome!

Quick Start

Installing dependencies

  1. Make sure you have Docker installed.

  2. Build the Docker image

    The Dockerfile comes in a GPU and a CPU variant. If you want to use the GPU in Docker, make sure you have nvidia-docker installed.

    gpu: docker build -t mycroft/mimic2:gpu -f gpu.Dockerfile .

    cpu: docker build -t mycroft/mimic2:cpu -f cpu.Dockerfile .

  3. Run the Docker container

    gpu: nvidia-docker run -it -p 3000:3000 mycroft/mimic2:gpu

    cpu: docker run -it -p 3000:3000 mycroft/mimic2:cpu
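
    To keep datasets and checkpoints on the host rather than inside the container, you can mount a volume. A minimal sketch, assuming the code in the container reads and writes under ~/tacotron as in the steps below; the container-side path is an assumption, so adjust it to wherever the image actually keeps its data:

    # hypothetical volume mount; the container-side path is an assumption
    nvidia-docker run -it -p 3000:3000 -v ~/tacotron:/root/tacotron mycroft/mimic2:gpu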

Manually

  1. Install Python 3.

  2. Install the latest version of TensorFlow for your platform. For better performance, install with GPU support if it's available. This code has been tested on TensorFlow 1.8; an example install command is shown after this list.

  3. Install requirements:

    pip install -r requirements.txt
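
    For step 2, a minimal install sketch using the standard PyPI package names, pinned to the release this code was tested on:

    # CPU-only build
    pip3 install tensorflow==1.8.0
    # or, with GPU support
    pip3 install tensorflow-gpu==1.8.0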
    

Training

Note: you need at least 40GB of free disk space to train a model.

  1. Download a speech dataset.

    The following are supported out of the box:

    • LJ Speech
    • Blizzard 2012
    • M-AILABS

    You can use other datasets if you convert them to the right format. See TRAINING_DATA.md for more info.

  2. Unpack the dataset into ~/tacotron

    After unpacking, your tree should look like this for LJ Speech:

    tacotron
      |- LJSpeech-1.1
          |- metadata.csv
          |- wavs
    

    or like this for Blizzard 2012:

    tacotron
      |- Blizzard2012
          |- ATrampAbroad
          |   |- sentence_index.txt
          |   |- lab
          |   |- wav
          |- TheManThatCorruptedHadleyburg
              |- sentence_index.txt
              |- lab
              |- wav
    

    For M-AILABS, follow the directory structure from here.
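
    As an example, LJ Speech can be fetched and unpacked into place like this (a sketch; the download URL is the dataset's usual mirror and is an assumption here, so verify it before use):

    mkdir -p ~/tacotron && cd ~/tacotron
    # assumed mirror URL; verify before downloading
    curl -LO https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
    tar xjf LJSpeech-1.1.tar.bz2   # yields LJSpeech-1.1/metadata.csv and LJSpeech-1.1/wavs/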

  3. Preprocess the data

    python3 preprocess.py --dataset ljspeech
    
    • Other datasets can be used, e.g. --dataset blizzard for Blizzard data.
    • For the mailabs dataset, run python3 preprocess.py --help for the options. Also note that mailabs uses a sample_size of 16000 (see the sketch below).
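
    For illustration, a mailabs run with an hparams override might look like this (a sketch; the --hparams flag on preprocess.py is an assumption based on this fork's history, so confirm with python3 preprocess.py --help):

    # hypothetical invocation; flags other than --dataset are assumptions
    python3 preprocess.py --dataset mailabs --hparams="sample_size=16000"
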
  4. Train a model

    python3 train.py
    

    Tunable hyperparameters are found in hparams.py. You can adjust these at the command line using the --hparams flag, for example --hparams="batch_size=16,outputs_per_step=2". Hyperparameters should generally be set to the same values at both training and eval time. I highly recommend setting the params in the hparams.py file to guarantee consistency during preprocessing, training, evaluating, and running the demo server.
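
    For example, if you do override hyperparameters on the command line, pass the identical string to every stage (the checkpoint path and values here are illustrative):

    # use the same --hparams string at train and eval time
    python3 train.py --hparams="batch_size=16,outputs_per_step=2"
    python3 eval.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000 --hparams="batch_size=16,outputs_per_step=2"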

  5. Monitor with TensorBoard (optional)

    tensorboard --logdir ~/tacotron/logs-tacotron
    

    The trainer dumps audio and alignments every 1000 steps. You can find these in ~/tacotron/logs-tacotron.

  6. Synthesize from a checkpoint

    python3 demo_server.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000
    

    Replace "185000" with the checkpoint number that you want to use, then open a browser to localhost:3000 and type what you want to speak. Alternately, you can run eval.py at the command line:

    python3 eval.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000
    

    If you set the --hparams flag when training, set the same value here.
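
    You can also query the demo server without the browser UI. A sketch, assuming the /synthesize endpoint with a text query parameter used by the upstream keithito/tacotron demo server; check demo_server.py if the route differs:

    # hypothetical request; verify the endpoint in demo_server.py
    curl "http://localhost:3000/synthesize?text=Hello%20world" > hello.wav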