# TTS (Work in Progress...)
This project is a part of [Mozilla Common Voice](https://voice.mozilla.org/en). TTS aims to be a text-to-speech engine that is lightweight in computation while delivering high-quality speech synthesis. You might hear a sample [here]().

This repository contains a PyTorch implementation of Tacotron: [A Fully End-to-End Text-To-Speech Synthesis Model](https://arxiv.org/abs/1703.10135). We plan to improve the model over time with new architectural updates.

You can find a brief note [here](http://www.erogol.com/text-speech-deep-learning-architectures/) outlining possible TTS architectures and comparing them.
## Requirements
We highly recommend using [miniconda](https://conda.io/miniconda.html) for easier installation; a sample setup sketch follows the requirements list below.
* python 3.6
* pytorch 0.4
* librosa
* tensorboard
* tensorboardX
* matplotlib
* unidecode
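
A minimal setup sketch with conda (the environment name is illustrative; pick the PyTorch 0.4 build that matches your CUDA version):

```
# create and activate a Python 3.6 environment
conda create -n tts python=3.6
source activate tts

# install PyTorch 0.4 from the official pytorch channel
conda install pytorch=0.4 -c pytorch

# install the remaining dependencies listed above
pip install librosa tensorboard tensorboardX matplotlib unidecode
```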
## Checkpoints and Audio Samples
Check out [here](https://mycroft.ai/blog/available-voices/#the-human-voice-is-the-most-perfect-instrument-of-all-arvo-part) to compare against the samples below (except the first one).

| Models | Commit | Audio Sample |
| ------------- |:-----------------:|:-------------|
| [iter-62410](https://drive.google.com/open?id=1pjJNzENL3ZNps9n7k_ktGbpEl6YPIkcZ)| [99d56f7](https://github.com/mozilla/TTS/tree/99d56f7e93ccd7567beb0af8fcbd4d24c48e59e9) | [link](https://soundcloud.com/user-565970875/99d56f7-iter62410)|
| [iter-170K](https://drive.google.com/open?id=16L6JbPXj6MSlNUxEStNn28GiSzi4fu1j) | [e00bc66](https://github.com/mozilla/TTS/tree/e00bc66) |[link](https://soundcloud.com/user-565970875/april-13-2018-07-06pm-e00bc66-iter170k)|
| Best: [iter-270K](https://drive.google.com/drive/folders/1Q6BKeEkZyxSGsocK2p_mqgzLwlNvbHFJ?usp=sharing)|[256ed63](https://github.com/mozilla/TTS/tree/256ed63)|[link](https://soundcloud.com/user-565970875/sets/samples-1650226)|
## Data
Currently TTS provides data loaders for the datasets below; a quick setup sketch follows the list.
- [LJ Speech](https://keithito.com/LJ-Speech-Dataset/)
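
As a sketch, assuming you downloaded the LJSpeech-1.1 release from the dataset page linked above (the archive name may differ for other releases):

```
# extract the archive, then point "data_path" in config.json to the extracted folder
tar -xjf LJSpeech-1.1.tar.bz2
```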
## Training and Fine-tuning
To train a new model, define a ```config.json``` file (a simple template is given below) and run:

```train.py --config_path config.json```

To fine-tune a model, use the ```--restore_path``` argument:

```train.py --config_path config.json --restore_path /path/to/your/model.pth.tar```

If you want to use a specific set of GPUs, set the ```CUDA_VISIBLE_DEVICES``` environment variable as shown below. The code automatically uses all visible GPUs for data-parallel training; if you don't set the variable, all GPUs on the system are used.

```CUDA_VISIBLE_DEVICES="0,1,4" train.py --config_path config.json```

Each run creates an experiment folder with some meta information under the output folder you set in ```config.json```. If a run fails or is interrupted before any checkpoint has been saved, the whole experiment folder is removed.

You can also use Tensorboard by pointing its ```--logdir``` argument to the experiment folder.
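For example, assuming the ```output_path``` value from the sample config below (Tensorboard also picks up run subfolders under it):

```
tensorboard --logdir /my/experiment/folder/path
```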
Example ```config.json```:
```
{
    "model_name": "my-model",        // used in the experiment folder name
    "num_mels": 80,                  // number of mel-spectrogram bands
    "num_freq": 1025,                // number of linear spectrogram frequency bins
    "sample_rate": 20000,            // audio sample rate in Hz
    "frame_length_ms": 50,           // STFT window length in milliseconds
    "frame_shift_ms": 12.5,          // STFT hop length in milliseconds
    "preemphasis": 0.97,
    "min_level_db": -100,
    "ref_level_db": 20,
    "embedding_size": 256,
    "text_cleaner": "english_cleaners",

    "epochs": 1000,
    "lr": 0.002,
    "warmup_steps": 4000,
    "batch_size": 32,
    "eval_batch_size": 32,
    "r": 5,                          // number of output frames per decoder step

    "griffin_lim_iters": 60,         // Griffin-Lim iterations for waveform reconstruction
    "power": 1.5,

    "num_loader_workers": 8,

    "checkpoint": true,
    "save_step": 376,                // save a checkpoint every this many steps
    "data_path": "/my/training/data/path",
    "min_seq_len": 0,                // skip samples shorter than this
    "output_path": "/my/experiment/folder/path"
}
```
## Testing
The best way to test your pre-trained network is to use the notebooks under the ```notebooks``` folder.
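For example, assuming Jupyter is installed in your environment, you can launch them from the repository root:

```
jupyter notebook notebooks/
```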
## Contribution
Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you would like to add or edit code, please also consider writing tests for your changes so that we can be sure things stay on track as this repo grows.
## TODO
Check out the issues and the Projects page.
## References
- [WaveRNN: Efficient Neural Audio Synthesis](https://arxiv.org/pdf/1802.08435.pdf)
- [Attention-Based Models for Speech Recognition](https://arxiv.org/pdf/1506.07503.pdf)
- [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/pdf/1308.0850.pdf)
- [Char2Wav: End-to-End Speech Synthesis](https://openreview.net/pdf?id=B1VWyySKx)
- [VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop](https://arxiv.org/pdf/1707.06588.pdf)
- [Faster WaveNet](https://arxiv.org/abs/1611.09482)
- [Parallel WaveNet](https://arxiv.org/abs/1711.10433)
### Precursor implementations
- https://github.com/keithito/tacotron (Dataset and Test processing)
- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)