Commit Graph

74 Commits (f7074608868aadd337d46f5e7ab35ff848a58c73)

Author SHA1 Message Date
erogol 957f7dcbc5 padding idx for embedding layer 2020-02-20 12:24:54 +01:00
erogol cf6e16254f add torch.no_grad decorator for inference 2020-02-19 18:30:25 +01:00
Eren Golge df1b8b3ec7 linter and test updates for speaker_encoder, gmm_Attention 2019-11-12 12:42:42 +01:00
Eren Golge adf9ebd629 Graves attention and setting attn type by config.json 2019-11-12 11:18:57 +01:00
Eren Golge 60b6ec18fe bug fix for synthesis.py 2019-10-29 17:38:59 +01:00
Eren Golge 002991ca15 bug fixes, linter update and test updates 2019-10-29 14:28:49 +01:00
Eren Golge 89ef71ead8 bug fix tacotron2, decoder return order fixed 2019-10-29 13:32:20 +01:00
Eren Golge 5a56a2c096 bug fixes for forward backward training and load_data for parsing data_loader 2019-10-29 02:58:42 +01:00
Eren Golge e83a4b07d2 comment model outputs for tacotron, align output shapes of tacotron and tacotron2, merge bidirectional decoder 2019-10-28 14:51:19 +01:00
Eren Golge 98af061d2e formatting, merge GST model with Tacotron 2019-09-24 16:18:48 +02:00
Eren Golge 1f4ec804b6 compute and add style tokens in gst 2019-09-21 09:58:58 +02:00
Eren Golge 6561013d28 sum style tokens with encoder outputs instead of concat 2019-09-20 18:46:59 +02:00
Eren Golge 14a4d1a061 update TacotronGST and its test. Inherit it from Tacotron class 2019-09-12 23:06:59 +02:00
Eren Golge a1322530df integrate concatenative speaker embedding into tacotron 2019-09-12 10:39:15 +02:00
Reuben Morais 3c5aeb5e22 Fix installation by using an explicit symlink 2019-08-29 11:49:53 +02:00
Eugene Ingerman 2563fb873e Fixed postnet for GST. 2019-08-26 22:50:21 -07:00
Eren Golge 2bbb3f7a40 don't use sigmoid output for tacotron, fix bug for memory queue handling, remove maxout 2019-07-22 15:09:05 +02:00
Eren Golge d47ba5d310 gradual training with memory queue 2019-07-20 12:33:21 +02:00
Reuben Morais e3f841742d Address even more lint problems 2019-07-19 11:48:12 +02:00
Reuben Morais 057a50a889 Merge branch 'dev' into contribution-grease 2019-07-19 11:41:14 +02:00
Eren Golge 4336e1d338 fix unittests for the latest updates 2019-07-19 11:12:48 +02:00
Reuben Morais 11e7895329 Fix Pylint issues 2019-07-19 09:08:51 +02:00
Thomas Werkmeister 6390c3b2e6 num_speakers larger than 1 2019-07-02 14:46:41 +02:00
Thomas Werkmeister 81c5df71f6 removed in-place changes 2019-07-01 21:19:35 +02:00
Thomas Werkmeister ba8cc8054b disabling multispeaker with num_speakers=0 2019-07-01 14:01:34 +02:00
Thomas Werkmeister d172a3d3d5 multispeaker 2019-06-26 12:59:14 +02:00
Eren Golge 0f8936d744 GST inference 2019-06-12 12:12:01 +02:00
Eren Golge fef3aecc09 tacotrongst 2019-06-05 18:46:28 +02:00
Eren Golge 4678c66599 forward_attn_mask and config update 2019-06-04 00:39:29 +02:00
Eren Golge ba492f43be Set tacotron model parameters to adapt to common_layers.py - Prenet and Attention 2019-05-27 14:40:28 +02:00
Eren Golge e62659da94 update separate stopnet flow to make it faster. 2019-05-17 16:15:43 +02:00
Eren Golge 6331bccefc make dropout optional #2 2019-05-12 17:35:31 +02:00
Eren Golge e2439fde9a make location attention optional and keep all attention weights in attention class 2019-04-29 11:37:01 +02:00
Eren Golge 312a539a0e Enable optional forward attention with transition agent 2019-04-10 16:41:30 +02:00
Eren Golge 961af0f5cd setup_model externally based on model selection. Make forward attention and prenet type configurable in config.json 2019-04-05 17:49:18 +02:00
Eren Golge bc51b81aae parameter name fix 2019-03-26 00:52:47 +01:00
Eren Golge 0a92c6d5a7 Set attention norm method by config.json 2019-03-26 00:48:12 +01:00
Eren Golge b9b79fcf0f inference truncated NEED TO BE TESTED 2019-03-11 17:40:09 +01:00
Eren Golge a9ce1d4f19 bug fix for tacotron and tests update 2019-03-06 13:43:29 +01:00
Eren Golge b031a65677 compute sequence mask in model, add tacotron2 related files 2019-03-06 13:14:58 +01:00
Eren Golge 1e8fdec084 Modularize functions in Tacotron 2019-03-05 13:25:50 +01:00
Eren Golge bf5f18d11e Formatting changes and distributed training 2019-02-27 09:50:52 +01:00
Eren Golge 6ea31e47df Constant queue size for autoregression window 2019-02-16 03:18:49 +01:00
Eren Golge b011dafbab pass num_chars in train.py 2019-01-21 14:52:40 +01:00
Eren Golge b78fc96115 Change embedding layer init to old version 2019-01-16 12:33:24 +01:00
Eren Golge ed1f648b83 Enable attention windowing and make it configurable at model level. 2019-01-16 12:32:40 +01:00
Eren Golge c2637f114e New initialization for embedding layer and use phonemes count instead of symbols for embedding layer init 2019-01-16 12:18:54 +01:00
Eren Golge 440f51b61d correct import statements 2018-11-03 23:19:23 +01:00
Eren Golge 0b6a9995fc change import statements 2018-11-03 19:15:06 +01:00
Eren e7278437ee Please enter the commit message for your changes. Lines starting 2018-09-19 15:49:42 +02:00
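
The two most recent commits above, 957f7dcbc5 (padding idx for embedding layer) and cf6e16254f (torch.no_grad decorator for inference), refer to two standard PyTorch idioms. The sketch below illustrates both under assumed names; CharEmbedding and its inference method are hypothetical stand-ins, not the repository's actual classes.

    import torch
    import torch.nn as nn

    class CharEmbedding(nn.Module):
        """Illustrative character embedding for a Tacotron-style encoder."""
        def __init__(self, num_chars, embedding_dim=512):
            super().__init__()
            # padding_idx=0 pins the pad token's vector to zeros and excludes
            # it from gradient updates, so padded positions stay neutral.
            self.embedding = nn.Embedding(num_chars, embedding_dim, padding_idx=0)

        def forward(self, char_ids):
            return self.embedding(char_ids)

        @torch.no_grad()  # decorator form: no autograd graph is built at inference
        def inference(self, char_ids):
            return self.embedding(char_ids)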
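
Commit b031a65677 moves sequence-mask computation into the model. A minimal sketch of the usual recipe, assuming lengths is a 1-D tensor of valid frame counts per batch item; the helper name is hypothetical.

    import torch

    def sequence_mask(lengths, max_len=None):
        """Boolean mask: True at valid positions, False at padding.

        lengths: (batch,) tensor of valid lengths
        returns: (batch, max_len) mask
        """
        if max_len is None:
            max_len = int(lengths.max())
        positions = torch.arange(max_len, device=lengths.device)
        return positions.unsqueeze(0) < lengths.unsqueeze(1)

    # Typical use: mask decoder/stopnet losses at padded time steps, e.g.
    # mask = sequence_mask(mel_lengths)
    # loss = (loss_per_step * mask).sum() / mask.sum()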
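
Commit ed1f648b83 enables attention windowing, a common trick for keeping alignments monotonic at synthesis time. A rough sketch of the idea under assumed shapes; the function name and window widths are illustrative, not the project's configuration keys.

    import torch

    def apply_attention_window(scores, prev_peak, back=3, ahead=6):
        """Mask attention energies outside a window around the previous peak.

        scores:    (batch, T_enc) unnormalised attention energies
        prev_peak: (batch,) index of the previous attention maximum
        """
        t_enc = scores.size(1)
        positions = torch.arange(t_enc, device=scores.device).unsqueeze(0)  # (1, T_enc)
        lo = (prev_peak - back).unsqueeze(1)
        hi = (prev_peak + ahead).unsqueeze(1)
        outside = (positions < lo) | (positions > hi)
        return scores.masked_fill(outside, float("-inf"))

    # A softmax over the masked scores then yields weights confined to the window.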