tacotron

An implementation of Google's Tacotron speech synthesis model in TensorFlow.

Example Output

  • Audio Samples after training for 185k steps (~2 days). The model is still training. I'll update the samples when it's further along.

Background

Earlier this year, Google published a paper, Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model, in which they present a neural text-to-speech model that learns to synthesize speech directly from (text, audio) pairs. However, they didn't release their source code or training data. This is an attempt to provide an open-source implementation of the model described in their paper.

The quality isn't as good as Google's demo yet, but hopefully it will get there someday :-).

Quick Start

Installing dependencies

pip install -r requirements.txt
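
If TensorFlow isn't already installed in your environment, install it first; requirements.txt may not pin it. Installing from PyPI as shown below is an assumption rather than something this README specifies (GPU users may prefer the tensorflow-gpu package):

pip install tensorflow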

Using a pre-trained model

  1. Download and unpack a model:

    curl http://data.keithito.com/data/speech/tacotron-20170708.tar.bz2 | tar xjC /tmp
    
  2. Run the demo server:

    python3 demo_server.py --checkpoint /tmp/tacotron-20170708/model.ckpt
    
  3. Point your browser at http://localhost:9000

    • Type what you want to synthesize
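
You can also fetch audio from the demo server without the browser UI. This is a minimal sketch; the /synthesize route and the text query parameter are assumptions about demo_server.py rather than documented behavior, so check that script for the actual endpoint:

    curl -G --data-urlencode "text=Hello world" http://localhost:9000/synthesize -o hello.wav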

Training

  1. Download a speech dataset.

    The following are supported out of the box:

    • LJ Speech
    • Blizzard 2012

    You can use other datasets if you convert them to the right format. See ljspeech.py for an example.
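
    For a custom dataset, the simplest approach is to mimic the LJ Speech layout: a wavs/ directory plus a metadata.csv with one pipe-separated line per clip (in LJ Speech, the columns are the wav file's ID, the raw transcript, and a normalized transcript). Check ljspeech.py for which columns the loader actually reads; the entries below are illustrative only:

    LJ001-0001|A transcript of the first clip.|A transcript of the first clip.
    LJ001-0002|A transcript of the second clip.|A transcript of the second clip.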

  2. Unpack the dataset into ~/tacotron

    After unpacking, your tree should look like this for LJ Speech:

    tacotron
      |- LJSpeech-1.0
          |- metadata.csv
          |- wavs
    

    or like this for Blizzard 2012:

    tacotron
      |- Blizzard2012
          |- ATrampAbroad
          |   |- sentence_index.txt
          |   |- lab
          |   |- wav
          |- TheManThatCorruptedHadleyburg
              |- sentence_index.txt
              |- lab
              |- wav
    
  3. Preprocess the data

    python3 preprocess.py --dataset ljspeech
    
    • Use --dataset blizzard for Blizzard data
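
    If your data isn't under ~/tacotron, check preprocess.py for a base-directory argument. Assuming it is named --base_dir (an assumption, not verified here), the call would look like:

    python3 preprocess.py --base_dir ~/datasets --dataset ljspeech
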
  4. Train a model

    python3 train.py
    
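    Hyperparameters live in hparams.py and can be overridden on the command line with the --hparams flag (the same flag used for use_cmudict in the notes below). The specific name shown here, batch_size, is assumed to be defined in hparams.py; check that file for the settings that actually exist:

    python3 train.py --hparams="batch_size=16"
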
  5. Monitor with TensorBoard (optional)

    tensorboard --logdir ~/tacotron/logs-tacotron
    

    The trainer dumps audio and alignments every 1000 steps. You can find these in ~/tacotron/logs-tacotron.

  6. Synthesize from a checkpoint

    python3 demo_server.py --checkpoint ~/tacotron/logs-tacotron/model.ckpt-185000
    

    Replace "185000" with the checkpoint number that you want to use, then open a browser to localhost:9000 and type what you want to speak.

Miscellaneous Notes

  • TCMalloc seems to improve training speed and avoids occasional slowdowns seen with the default allocator. You can enable it by installing it and setting LD_PRELOAD=/usr/lib/libtcmalloc.so.

  • You can train with CMUDict by downloading the dictionary to ~/tacotron/training and then passing the flag --hparams="use_cmudict=True" to train.py. This will allow you to pass ARPAbet phonemes enclosed in curly braces at eval time to force a particular pronunciation, e.g. Turn left on {HH AW1 S S T AH0 N} Street.

  • If you pass a Slack incoming webhook URL as the --slack_url flag to train.py, it will send you progress updates every 1000 steps.
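
Putting these notes together, a training invocation that preloads TCMalloc, enables CMUDict, and posts progress to Slack might look like the sketch below; the TCMalloc path and the webhook URL are placeholders to replace with your own:

    LD_PRELOAD=/usr/lib/libtcmalloc.so python3 train.py \
        --hparams="use_cmudict=True" \
        --slack_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"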

Other Implementations