Eren Golge | 82cde95cfa | add bias to attention v | 2019-03-19 12:21:36 +01:00
Eren Golge | 1b68d3cb4e | control synthesis length as an additional stop condition | 2019-03-15 14:01:43 +01:00
Eren Golge | 4f89029577 | Merge branch 'state-pass' into dev-tacotron2 | 2019-03-12 09:52:15 +01:00
Eren Golge | 3128378bdf | bug fix for stop token prediction | 2019-03-12 00:20:57 +01:00
Eren Golge | 527567d7ce | renaming | 2019-03-12 00:20:43 +01:00
Eren Golge | b9b79fcf0f | truncated inference, NEEDS TO BE TESTED | 2019-03-11 17:40:09 +01:00
Eren Golge | 4ffda89c42 | reshape input vectors before and after BN layer | 2019-03-11 13:03:43 +01:00
Eren Golge | a144acf466 | use BN for prenet | 2019-03-07 16:28:50 +01:00
Eren Golge | 4b116a2a88 | Look at the last two attention values for the stop condition, and attend to the first encoder vector on the first decoder iteration | 2019-03-06 23:46:02 +01:00
Eren Golge | b031a65677 | compute sequence mask in model, add tacotron2-related files | 2019-03-06 13:14:58 +01:00
Eren Golge | a4474abd83 | tacotron parse output bug fix | 2019-03-06 13:10:54 +01:00
Eren Golge | 4326582bb1 | TTSDataset formatting and batch sorting to use PyTorch pack for RNNs | 2019-03-06 13:10:05 +01:00
Eren Golge | 44c66c6e3e | remove comments | 2019-03-05 13:34:33 +01:00
Eren Golge | 1e8fdec084 | Modularize functions in Tacotron | 2019-03-05 13:25:50 +01:00
Eren Golge | 1c99be2ffd | Change window size for attention | 2019-02-18 13:06:26 +01:00
Eren Golge | 6ea31e47df | Constant queue size for autoregression window | 2019-02-16 03:18:49 +01:00
Eren Golge | 90f0cd640b | memory queueing | 2019-02-12 15:27:42 +01:00
Eren Golge | c5b6227848 | init with embedding layers | 2019-02-06 16:54:33 +01:00
Eren Golge | d28bbe09fb | Revert attention bias setting to old | 2019-02-06 16:23:01 +01:00
Eren Golge | e12bbc2a5c | init decoder states with a function | 2019-01-22 18:25:55 +01:00
Eren Golge | 66f8d0e260 | Attention bias changed | 2019-01-22 18:18:21 +01:00
Eren Golge | 562d73d3d1 | Some bug fixes | 2019-01-17 15:48:22 +01:00
Eren Golge | 4431e04b48 | use sigmoid for attention | 2019-01-16 16:26:05 +01:00
Eren Golge | 7e020d4084 | Bug fixes | 2019-01-16 16:23:04 +01:00
Eren Golge | af22bed149 | set bias | 2019-01-16 15:53:24 +01:00
Eren Golge | f4fa155cd3 | Make attn windowing optional | 2019-01-16 12:33:07 +01:00
Eren Golge | 8969d59902 | Use the last attention value as a threshold to stop decoding, since stop token prediction is not precise enough to end synthesis at the right time. | 2019-01-16 12:32:40 +01:00
Eren Golge | ed1f648b83 | Enable attention windowing and make it configurable at the model level. | 2019-01-16 12:32:40 +01:00
Eren Golge | 916f5df5f9 | config update and increase dropout p to 0.5 | 2019-01-16 12:14:58 +01:00
Eren Golge | 84814db73f | reduce dropout ratio | 2019-01-02 12:52:17 +01:00
Eren Golge | 4826e7db9c | remove intermediate tensor ASAP | 2018-12-28 14:22:41 +01:00
Eren Golge | 3a72d75ecd | Merge branch 'master' of github.com:mozilla/TTS | 2018-12-12 17:08:39 +01:00
Eren Golge | 8d865629a0 | partial model initialization | 2018-12-12 15:43:58 +01:00
Eren Golge | 703be04993 | bug fix | 2018-12-12 12:02:10 +01:00
Eren Golge | dc3d09304e | Cache attention annotation vectors for the whole sequence. | 2018-12-11 16:06:02 +01:00
Eren Golge | cdaaff9dbb | Modularize memory reshaping in decoder layer | 2018-11-28 16:31:29 +01:00
Eren Golge | 4838d16fec | config and comment updates | 2018-11-13 12:10:12 +01:00
Eren Golge | 440f51b61d | correct import statements | 2018-11-03 23:19:23 +01:00
Eren Golge | 0b6a9995fc | change import statements | 2018-11-03 19:15:06 +01:00
Eren Golge | d96690f83f | Config updates and add sigmoid to mel network again | 2018-11-02 17:27:31 +01:00
Eren Golge | c8a552e627 | Batch update after data loss | 2018-11-02 16:13:51 +01:00
Eren | 1ae3b3e442 | Enable CPU training and fix restore_epoch | 2018-10-25 14:05:27 +02:00
Eren | 006354320e | Merge branch 'attention-smoothing' into attn-smoothing-bgs-sigmoid-wd | 2018-09-26 16:51:42 +02:00
Eren | f60e4497a6 | apply sigmoid to outputs | 2018-09-19 15:49:42 +02:00
Eren | f2ef1ca36a | Smoothed attention as in https://arxiv.org/pdf/1506.07503.pdf | 2018-09-19 15:47:24 +02:00
Eren | 67df385275 | Explicit padding for unbalanced padding sizes | 2018-09-19 14:20:02 +02:00
Eren | fd830c6416 | Attention convolution padding correction for TF "SAME" | 2018-09-19 14:16:21 +02:00
Eren | 2bcd7dbb6f | Replace functional padding with a padding layer | 2018-09-18 16:00:47 +02:00
Eren | 00c0c9cde6 | Padding with functional interface to match TF "SAME" | 2018-09-17 20:19:09 +02:00
Eren | f377cd3cb8 | larger attention filter size and more filters | 2018-09-06 15:27:15 +02:00