Add `forward_tts` docs

pull/800/head
Eren Gölge 2021-09-12 15:34:27 +00:00
parent 1ea011571a
commit edc8d4d833
3 changed files with 66 additions and 29 deletions

View File

@ -45,7 +45,7 @@
models/glow_tts.md
models/vits.md
models/fast_pitch.md
models/forward_tts.md
.. toctree::
:maxdepth: 2

View File

@ -1,28 +0,0 @@
# FastPitch
FastPitch is a feed-forward encoder-decoder TTS model. It computes a mel-spectrogram from the given input character sequence.
It uses a duration predictor network to predict the duration of each input character in the output sequence. In the original paper, a pre-trained Tacotron model is used to generate the labels for the duration predictor. In this implementation, you can also use an aligner network to learn the durations from the data and train the duration predictor in parallel. The original FastPitch model uses FeedForwardTransformer networks for both the encoder and decoder, but in this implementation you are free to choose different encoder and decoder networks by changing the relevant fields in the model configuration. Please see `FastPitchArgs` and `FastPitchConfig` below for more details.
## Important resources & papers
- FastPitch: https://arxiv.org/abs/2006.06873
- FastSpeech: https://arxiv.org/pdf/1905.09263
- Aligner Network: https://arxiv.org/abs/2108.10447
- What is Pitch: https://www.britannica.com/topic/pitch-speech
## FastPitchConfig
```{eval-rst}
.. autoclass:: TTS.tts.configs.fast_pitch_config.FastPitchConfig
:members:
```
## FastPitchArgs
```{eval-rst}
.. autoclass:: TTS.tts.models.fast_pitch.FastPitchArgs
:members:
```
## FastPitch Model
```{eval-rst}
.. autoclass:: TTS.tts.models.fast_pitch.FastPitch
:members:
```

View File

@ -0,0 +1,65 @@
# Forward TTS model(s)
A general feed-forward TTS model implementation that can be configured to different architectures by setting different
encoder and decoder networks. It can be trained either with pre-computed durations (from a pre-trained Tacotron model) or
with an alignment network that learns the text-to-audio alignment from the input data.
Currently, we provide the following pre-configured architectures (a configuration sketch follows this list):
- **FastSpeech:**
It is a feed-forward TTS model that uses Feed Forward Transformer (FFT) modules as the encoder and decoder.
- **FastPitch:**
It uses the same FastSpeech architecture but is conditioned on fundamental frequency (f0) contours, with the
promise of more expressive speech.
- **SpeedySpeech:**
It uses Residual Convolution layers instead of Transformers, which leads to a more compute-friendly model.
- **FastSpeech2 (TODO):**
Similar to FastPitch, but it also uses spectral energy values as an additional feature.
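
The architecture is selected through the model arguments. The snippet below is a minimal sketch, assuming the `ForwardTTSArgs` fields documented further down (e.g. `use_aligner`, `use_pitch`, `encoder_type`, `decoder_type`); the exact field names and layer keys may differ in your version.

```python
from TTS.tts.models.forward_tts import ForwardTTSArgs

# FastPitch-like setup: learn the text-to-audio alignment with the aligner
# network and condition the decoder on predicted pitch (f0) contours.
fast_pitch_args = ForwardTTSArgs(
    use_aligner=True,  # learn durations from data instead of pre-computed labels
    use_pitch=True,    # enable the pitch predictor / f0 conditioning
)

# SpeedySpeech-like setup: no pitch conditioning, residual convolutional
# encoder/decoder instead of transformers (the layer keys below are assumed;
# check the encoder/decoder docs for the supported values).
speedy_speech_args = ForwardTTSArgs(
    use_aligner=True,
    use_pitch=False,
    encoder_type="residual_conv_bn",
    decoder_type="residual_conv_bn",
)
```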
## Important resources & papers
- FastPitch: https://arxiv.org/abs/2006.06873
- SpeedySpeech: https://arxiv.org/abs/2008.03802
- FastSpeech: https://arxiv.org/pdf/1905.09263
- FastSpeech2: https://arxiv.org/abs/2006.04558
- Aligner Network: https://arxiv.org/abs/2108.10447
- What is Pitch: https://www.britannica.com/topic/pitch-speech
## ForwardTTSArgs
```{eval-rst}
.. autoclass:: TTS.tts.models.forward_tts.ForwardTTSArgs
:members:
```
## ForwardTTS Model
```{eval-rst}
.. autoclass:: TTS.tts.models.forward_tts.ForwardTTS
:members:
```
## FastPitchConfig
```{eval-rst}
.. autoclass:: TTS.tts.configs.fast_pitch_config.FastPitchConfig
:members:
```
## SpeedySpeechConfig
```{eval-rst}
.. autoclass:: TTS.tts.configs.speedy_speech_config.SpeedySpeechConfig
:members:
```
## FastSpeechConfig
```{eval-rst}
.. autoclass:: TTS.tts.configs.fast_speech_config.FastSpeechConfig
:members:
```
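
The config classes above select one of the pre-configured architectures and bundle the matching `ForwardTTSArgs` defaults with the shared training settings. Below is a minimal sketch of building such a config, assuming the common base-config fields (e.g. `batch_size`, `text_cleaner`, `output_path`); all names and values are illustrative, not prescribed defaults.

```python
from TTS.tts.configs.fast_pitch_config import FastPitchConfig

# Picking FastPitchConfig selects the FastPitch flavour of ForwardTTS.
# All values below are example settings, not recommended defaults.
config = FastPitchConfig(
    run_name="fast_pitch_example",  # hypothetical run name
    batch_size=32,
    eval_batch_size=16,
    epochs=1000,
    text_cleaner="english_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    output_path="output/",          # hypothetical output directory
)
```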