
Improving Deep Neural Network Based Speech Synthesis through Contextual Feature Parametrization and Multi-Task Learning
Jul 11, 2018

Title: Improving Deep Neural Network Based Speech Synthesis through Contextual Feature Parametrization and Multi-Task Learning

Authors: Wen, ZQ; Li, KH; Huang, Z; Lee, CH; Tao, JH

Author Full Names: Wen, Zhengqi; Li, Kehuang; Huang, Zhen; Lee, Chin-Hui; Tao, Jianhua

Source: JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 90(7): 1025-1037, Special Issue, JUL 2018. DOI: 10.1007/s11265-017-1293-z

Language: English

Abstract: We propose three techniques to improve speech synthesis based on deep neural networks (DNNs). First, at the DNN input we use a real-valued contextual feature vector to represent phoneme identity, part-of-speech, and pause information instead of the conventional binary vector. Second, at the DNN output layer, parameters for the pitch-scaled spectrum and aperiodicity measures are estimated for constructing the excitation signal used in our baseline synthesis vocoder. Third, a bidirectional recurrent neural network architecture with long short-term memory (BLSTM) units is adopted and trained with multi-task learning for DNN-based speech synthesis. Experimental results demonstrate that the quality of synthesized speech is improved by adopting the new input vector and output parameters. The proposed BLSTM architecture is also beneficial for learning the mapping from the input contextual features to the speech parameters and for improving speech quality.
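
As a rough illustration of the third technique (a BLSTM acoustic model trained with multi-task learning), the sketch below assumes PyTorch; the layer sizes, feature dimensions, output streams, and loss weighting are illustrative assumptions, not the configuration reported in the paper.

# A minimal sketch (assuming PyTorch) of a BLSTM acoustic model with
# multi-task output heads, loosely following the architecture described
# in the abstract. All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class BLSTMAcousticModel(nn.Module):
    def __init__(self, in_dim=355, hidden=256, spec_dim=60, ap_dim=5):
        super().__init__()
        # Bidirectional LSTM over the frame-level real-valued contextual features.
        self.blstm = nn.LSTM(in_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        # Separate heads for the multi-task targets: pitch-scaled spectrum,
        # aperiodicity measures, and log-F0 with a voiced/unvoiced flag.
        self.spec_head = nn.Linear(2 * hidden, spec_dim)
        self.ap_head = nn.Linear(2 * hidden, ap_dim)
        self.f0_head = nn.Linear(2 * hidden, 2)  # [log-F0, V/UV]

    def forward(self, x):
        h, _ = self.blstm(x)                     # (batch, frames, 2*hidden)
        return self.spec_head(h), self.ap_head(h), self.f0_head(h)

# Multi-task training combines per-stream regression losses on dummy targets here.
model = BLSTMAcousticModel()
x = torch.randn(4, 200, 355)                     # real-valued contextual feature sequences
spec, ap, f0 = model(x)
loss = (nn.functional.mse_loss(spec, torch.randn_like(spec))
        + nn.functional.mse_loss(ap, torch.randn_like(ap))
        + nn.functional.mse_loss(f0, torch.randn_like(f0)))
loss.backward()

In this sketch the shared BLSTM layers learn a common frame-level representation, while the per-stream linear heads realize the multi-task setup: errors from all output streams are summed and backpropagated through the shared recurrent layers.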

ISSN: 1939-8018

eISSN: 1939-8115

IDS Number: GH6LK

Unique ID: WOS:000433555600007
