Eren Golge
|
44c66c6e3e
|
remove comments
|
2019-03-05 13:34:33 +01:00 |
Eren Golge
|
1e8fdec084
|
Modularize functions in Tacotron
|
2019-03-05 13:25:50 +01:00 |
Eren Golge
|
1c99be2ffd
|
Change window size for attention
|
2019-02-18 13:06:26 +01:00 |
Eren Golge
|
6ea31e47df
|
Constant queue size for autoregression window
|
2019-02-16 03:18:49 +01:00 |
Eren Golge
|
90f0cd640b
|
memory queueing
|
2019-02-12 15:27:42 +01:00 |
Eren Golge
|
c5b6227848
|
init with embedding layers
|
2019-02-06 16:54:33 +01:00 |
Eren Golge
|
d28bbe09fb
|
Attention bias setting: revert to old
|
2019-02-06 16:23:01 +01:00 |
Eren Golge
|
e12bbc2a5c
|
init decoder states with a function
|
2019-01-22 18:25:55 +01:00 |
Eren Golge
|
66f8d0e260
|
Attention bias changed
|
2019-01-22 18:18:21 +01:00 |
Eren Golge
|
562d73d3d1
|
Some bug fixes
|
2019-01-17 15:48:22 +01:00 |
Eren Golge
|
4431e04b48
|
use sigmoid for attention
|
2019-01-16 16:26:05 +01:00 |
Eren Golge
|
7e020d4084
|
Bug fixes
|
2019-01-16 16:23:04 +01:00 |
Eren Golge
|
af22bed149
|
set bias
|
2019-01-16 15:53:24 +01:00 |
Eren Golge
|
f4fa155cd3
|
Make attn windowing optional
|
2019-01-16 12:33:07 +01:00 |
Eren Golge
|
8969d59902
|
Use the last attention value as a threshold to stop decoding, since stop-token prediction is not precise enough to stop synthesis at the right time.
|
2019-01-16 12:32:40 +01:00 |
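For context, the stopping criterion this commit describes can be sketched as follows; the function name, threshold value, and list representation are illustrative assumptions, not taken from the repository.

```python
def should_stop(alignment, threshold=0.5):
    """Return True when decoding should stop: the attention weight on
    the last encoder step has grown past a threshold, meaning the
    model has attended to the end of the input text.

    alignment: attention weights over encoder steps for the current
    decoder step (a list of floats summing to ~1).
    """
    # When most attention mass sits on the final encoder position,
    # the input has been fully consumed and generation can end.
    return alignment[-1] > threshold

# Toy check: attention has shifted to the final encoder step.
print(should_stop([0.05, 0.15, 0.80]))
print(should_stop([0.80, 0.15, 0.05]))
```

This sidesteps an unreliable stop-token predictor by reusing a signal the attention module already produces.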
Eren Golge
|
ed1f648b83
|
Enable attention windowing and make it configurable at the model level.
|
2019-01-16 12:32:40 +01:00 |
Eren Golge
|
916f5df5f9
|
config update and increase dropout p to 0.5
|
2019-01-16 12:14:58 +01:00 |
Eren Golge
|
84814db73f
|
reduce dropout ratio
|
2019-01-02 12:52:17 +01:00 |
Eren Golge
|
4826e7db9c
|
remove intermediate tensor asap
|
2018-12-28 14:22:41 +01:00 |
Eren Golge
|
3a72d75ecd
|
Merge branch 'master' of github.com:mozilla/TTS
|
2018-12-12 17:08:39 +01:00 |
Eren Golge
|
8d865629a0
|
partial model initialization
|
2018-12-12 15:43:58 +01:00 |
Eren Golge
|
703be04993
|
bug fix
|
2018-12-12 12:02:10 +01:00 |
Eren Golge
|
dc3d09304e
|
Cache attention annotation vectors for the whole sequence.
|
2018-12-11 16:06:02 +01:00 |
Eren Golge
|
cdaaff9dbb
|
Modularize memory reshaping in decoder layer
|
2018-11-28 16:31:29 +01:00 |
Eren Golge
|
4838d16fec
|
config and comment updates
|
2018-11-13 12:10:12 +01:00 |
Eren Golge
|
440f51b61d
|
correct import statements
|
2018-11-03 23:19:23 +01:00 |
Eren Golge
|
0b6a9995fc
|
change import statements
|
2018-11-03 19:15:06 +01:00 |
Eren Golge
|
d96690f83f
|
Config updates and add sigmoid to mel network again
|
2018-11-02 17:27:31 +01:00 |
Eren Golge
|
c8a552e627
|
Batch update after data-loss
|
2018-11-02 16:13:51 +01:00 |
Eren
|
1ae3b3e442
|
Enable CPU training and fix restore_epoch
|
2018-10-25 14:05:27 +02:00 |
Eren
|
006354320e
|
Merge branch 'attention-smoothing' into attn-smoothing-bgs-sigmoid-wd
|
2018-09-26 16:51:42 +02:00 |
Eren
|
f60e4497a6
|
apply sigmoid to outputs
|
2018-09-19 15:49:42 +02:00 |
Eren
|
f2ef1ca36a
|
Smoothed attention as in https://arxiv.org/pdf/1506.07503.pdf
|
2018-09-19 15:47:24 +02:00 |
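The smoothing from the cited paper (Chorowski et al., 1506.07503) replaces the softmax over attention energies with sigmoids normalized by their sum, which flattens the distribution. A minimal sketch of that idea, with illustrative names and not the repository's actual code:

```python
import math

def smoothed_attention(energies):
    """Smoothed attention weights: sigmoid of each energy score,
    normalized by the sum of sigmoids (instead of the usual softmax).
    Returns a list of weights summing to 1."""
    sig = [1.0 / (1.0 + math.exp(-e)) for e in energies]
    total = sum(sig)
    return [s / total for s in sig]

print(smoothed_attention([1.0, 2.0, 0.5]))
```

Because the sigmoid saturates, large energy gaps produce a less peaked distribution than softmax would, letting attention spread over several nearby encoder steps.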
Eren
|
67df385275
|
Explicit padding for unbalanced padding sizes
|
2018-09-19 14:20:02 +02:00 |
Eren
|
fd830c6416
|
Attention convolution padding correction for TF "SAME"
|
2018-09-19 14:16:21 +02:00 |
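The "SAME" correction in these commits concerns how TensorFlow splits padding when the total is odd (the extra element goes on the right), which PyTorch's symmetric `padding=` argument cannot express. A sketch of the padding computation, assuming stride 1; the helper name is hypothetical:

```python
def same_padding_1d(kernel_size, dilation=1):
    """Compute (left, right) padding so a stride-1 1-D convolution
    preserves sequence length, matching TensorFlow's "SAME" rule:
    when total padding is odd, the extra element goes on the right.
    This is why explicit, possibly unbalanced, padding is needed."""
    effective = dilation * (kernel_size - 1) + 1  # dilated kernel extent
    total = effective - 1                          # padding needed overall
    left = total // 2
    right = total - left                           # right gets the extra one
    return left, right

print(same_padding_1d(3))   # symmetric for odd kernels
print(same_padding_1d(4))   # unbalanced for even kernels
```

The returned pair can be fed to an explicit padding layer (e.g. `torch.nn.ConstantPad1d`) ahead of the convolution, which matches the "explicit padding" commit above.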
Eren
|
2bcd7dbb6f
|
Change functional padding with padding layer
|
2018-09-18 16:00:47 +02:00 |
Eren
|
00c0c9cde6
|
Padding with functional interface to match TF "SAME"
|
2018-09-17 20:19:09 +02:00 |
Eren
|
f377cd3cb8
|
larger attention filter size and more filters
|
2018-09-06 15:27:15 +02:00 |
Eren
|
7d66bdc5f4
|
update model size to paper
|
2018-09-06 14:37:19 +02:00 |
Eren
|
a15b3ec9a1
|
pytorch 0.4.1 update
|
2018-08-13 15:02:17 +02:00 |
Eren
|
3b2654203d
|
fixing size mismatch
|
2018-08-10 18:48:43 +02:00 |
Eren
|
e0bce1d2d1
|
Pass mask instead of length to model
|
2018-08-10 17:45:17 +02:00 |
Eren G
|
d5febfb187
|
Setting up network size according to the reference paper
|
2018-08-08 12:34:44 +02:00 |
Eren G
|
f5537dc48f
|
pep8 format all
|
2018-08-02 16:34:17 +02:00 |
Eren G
|
25b6769246
|
Some bug fixes
|
2018-07-26 13:33:05 +02:00 |
Eren G
|
fce6bd27b8
|
Smaller attention filters
|
2018-07-19 17:36:31 +02:00 |
Eren G
|
4ef3ecf37f
|
loc sens attn fix
|
2018-07-17 17:43:51 +02:00 |
Eren G
|
4e6596a8e1
|
Loc sens attention
|
2018-07-17 17:01:40 +02:00 |
Eren G
|
ddaf414434
|
attention update
|
2018-07-17 16:24:39 +02:00 |
Eren G
|
90d7e885e7
|
Add attention-cum
|
2018-07-17 15:59:18 +02:00 |
Eren G
|
adbe603af1
|
Bug fixes
|
2018-07-13 15:24:50 +02:00 |
Eren G
|
dac8fdffa9
|
Attn masking
|
2018-07-13 14:50:55 +02:00 |
Eren G
|
9f52833151
|
Merge branch 'loc-sens-attn' into loc-sens-attn-new and attention without attention-cum
|
2018-07-13 14:27:51 +02:00 |
Eren Golge
|
f791f4e5e7
|
Use MSE loss instead of L1 Loss
|
2018-06-06 07:42:51 -07:00 |
Eren Golge
|
1b8d0f5b26
|
Master merge
|
2018-05-28 01:24:06 -07:00 |
Eren Golge
|
ad943120ae
|
Do not average cumulative attention weight
|
2018-05-25 15:01:16 -07:00 |
Eren Golge
|
24644b20d4
|
Merge branch 'master' of https://github.com/Mozilla/TTS
Conflicts:
README.md
best_model_config.json
datasets/LJSpeech.py
layers/tacotron.py
notebooks/TacotronPlayGround.ipynb
notebooks/utils.py
tests/layers_tests.py
tests/loader_tests.py
tests/tacotron_tests.py
train.py
utils/generic_utils.py
|
2018-05-25 05:14:04 -07:00 |
Eren Golge
|
ff245a16cb
|
bug fix
|
2018-05-25 04:28:40 -07:00 |
Eren Golge
|
256ed6307c
|
More comments for new layers
|
2018-05-25 03:25:26 -07:00 |
Eren Golge
|
4127b66359
|
remove abundant arguments
|
2018-05-25 03:25:01 -07:00 |
Eren Golge
|
a5f66b58e0
|
Remove empty lines
|
2018-05-25 03:24:45 -07:00 |
Eren Golge
|
fe99baec5a
|
Add a missing class variable to attention class
|
2018-05-23 06:20:04 -07:00 |
Eren Golge
|
8ffc85008a
|
Comment StopNet arguments
|
2018-05-23 06:18:27 -07:00 |
Eren Golge
|
14f9d06b31
|
Add a constant attention model type to attention class
|
2018-05-23 06:18:09 -07:00 |
Eren Golge
|
819011e1a2
|
Remove deprecated comment
|
2018-05-23 06:17:48 -07:00 |
Eren Golge
|
0f933106ca
|
Configurable alignment method
|
2018-05-23 06:17:01 -07:00 |
Eren Golge
|
d8c460442a
|
Commenting the attention code
|
2018-05-23 06:16:39 -07:00 |
Eren Golge
|
7acf4eab94
|
A major bug fix for location sensitive attention.
|
2018-05-23 06:04:28 -07:00 |
Eren Golge
|
6bcec24d13
|
Remove flatten_parameters due to a bug at pytorch 0.4
|
2018-05-18 06:00:16 -07:00 |
Eren Golge
|
288a6b5b1d
|
add location attn to decoder
|
2018-05-18 03:34:07 -07:00 |
Eren Golge
|
243204bc3e
|
Add location sens attention
|
2018-05-18 03:33:41 -07:00 |
Eren Golge
|
7b9fd63649
|
Correct comment
|
2018-05-18 03:32:17 -07:00 |
Eren Golge
|
d43b4ccc1e
|
Update decoder accepting location sensitive attention
|
2018-05-17 08:03:40 -07:00 |
Eren Golge
|
3348d462d2
|
Add location sensitive attention
|
2018-05-17 08:03:16 -07:00 |
Eren Golge
|
70beccf328
|
Add a length constraint for the test-time stop signal, to avoid stopping at a mid-point stop sign
|
2018-05-16 04:00:59 -07:00 |
Eren Golge
|
40f1a3d3a5
|
RNN stop-token prediction
|
2018-05-15 16:12:47 -07:00 |
Eren Golge
|
a31e60e928
|
bug fix, average mel spec validation loss
|
2018-05-15 07:13:46 -07:00 |
Eren Golge
|
02d72ccbfe
|
predict stop token from rnn out + mel
|
2018-05-14 19:00:50 -07:00 |
Eren Golge
|
d629dafb20
|
Update stopnet with more layers
|
2018-05-14 07:02:24 -07:00 |
Eren Golge
|
cac7e9ca5b
|
Stop test time model with stop_token
|
2018-05-13 06:31:59 -07:00 |
Eren Golge
|
8ed9f57a6d
|
bug fix
|
2018-05-11 04:19:28 -07:00 |
Eren Golge
|
3ea1a5358d
|
Stop token layer on decoder
|
2018-05-11 04:14:27 -07:00 |
Eren Golge
|
2c1f66a0fc
|
remove Variable from models/tacotron.py
|
2018-05-10 16:35:38 -07:00 |
Eren Golge
|
4ab8cbb016
|
remove Variable from tacotron.py
|
2018-05-10 16:30:43 -07:00 |
Eren Golge
|
a856f76791
|
remove Variable from losses.py
|
2018-05-10 16:27:55 -07:00 |
Eren Golge
|
14c9e9cde9
|
Loss bug fix - target_flat vs target
|
2018-05-10 15:59:05 -07:00 |
Eren Golge
|
f8d5bbd5d2
|
Perform stop token prediction to stop the model.
|
2018-05-03 05:56:06 -07:00 |
Eren Golge
|
a4561c5096
|
config
|
2018-05-02 04:56:35 -07:00 |
Eren Golge
|
82ffe819eb
|
thweb finetune
|
2018-04-29 06:12:14 -07:00 |
Eren Golge
|
7bfdc32b7b
|
remove requires_grad_()
|
2018-04-26 05:27:08 -07:00 |
Eren Golge
|
07f71b1761
|
Remove variables
|
2018-04-25 08:00:48 -07:00 |
Eren Golge
|
e257bd7278
|
bug fix loss
|
2018-04-25 08:00:30 -07:00 |
Eren Golge
|
52b4bc6bed
|
Remove variables
|
2018-04-25 08:00:19 -07:00 |
Eren Golge
|
154ec7ba24
|
Test notebook update
|
2018-04-21 04:23:54 -07:00 |
Eren Golge
|
0c5d0b98d8
|
threshold changed
|
2018-04-16 11:54:49 -07:00 |
Eren Golge
|
4762569c95
|
a new hacky way to stop generation and test notebook update
|
2018-04-13 05:09:14 -07:00 |
Eren Golge
|
06d4b231e9
|
bug fix
|
2018-04-12 05:59:40 -07:00 |
Eren Golge
|
bc90050ee9
|
bug fix on training avg loss printing and computing
|
2018-04-12 05:57:52 -07:00 |
Eren Golge
|
a9eadd1b8a
|
pep8 check
|
2018-04-03 03:24:57 -07:00 |
Eren Golge
|
af6fd9b941
|
loss bug fix
|
2018-03-28 18:20:56 -07:00 |