Zhen-Hua Ling's research in acoustics is closely tied to questions of speaker identity. His work in artificial intelligence frequently draws on discriminative models, and his studies link speaker verification with speech recognition. He combines natural language processing with word error rate evaluation, and conducts interdisciplinary research spanning linguistics and utterance modeling.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Enhanced LSTM for Natural Language Inference
Qian Chen;Xiaodan Zhu;Zhen-Hua Ling;Si Wei.
Annual Meeting of the Association for Computational Linguistics (2017)
Deep Learning for Acoustic Modeling in Parametric Speech Generation: A systematic review of existing techniques and future trends
Zhen-Hua Ling;Shi-Yin Kang;Heiga Zen;Andrew Senior.
IEEE Signal Processing Magazine (2015)
Robust Speaker-Adaptive HMM-Based Text-to-Speech Synthesis
J. Yamagishi;T. Nose;H. Zen;Zhen-Hua Ling.
IEEE Transactions on Audio, Speech, and Language Processing (2009)
Voice conversion using deep neural networks with layer-wise generative training
Ling-Hui Chen;Zhen-Hua Ling;Li-Juan Liu;Li-Rong Dai.
IEEE Transactions on Audio, Speech, and Language Processing (2014)
The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods
Jaime Lorenzo-Trueba;Junichi Yamagishi;Tomoki Toda;Daisuke Saito.
The Speaker and Language Recognition Workshop (Odyssey 2018) (2018)
Neural Natural Language Inference Models Enhanced with External Knowledge
Qian Chen;Xiaodan Zhu;Zhen-Hua Ling;Diana Inkpen.
Annual Meeting of the Association for Computational Linguistics (2018)
Modeling Spectral Envelopes Using Restricted Boltzmann Machines and Deep Belief Networks for Statistical Parametric Speech Synthesis
Zhen-Hua Ling;Li Deng;Dong Yu.
IEEE Transactions on Audio, Speech, and Language Processing (2013)
Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints
Quan Liu;Hui Jiang;Si Wei;Zhen-Hua Ling.
International Joint Conference on Natural Language Processing (2015)
USTC System for Blizzard Challenge 2006: An Improved HMM-based Speech Synthesis Method
Zhen-Hua Ling;Yi-Jian Wu;Yu-Ping Wang;Long Qin.
(2006)
Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference
Qian Chen;Xiaodan Zhu;Zhen-Hua Ling;Si Wei.
arXiv: Computation and Language (2016)
National Institute of Informatics
Queen's University
Nagoya University
York University
University of Eastern Finland
University of Edinburgh
University of Ottawa
University of Science and Technology of China
Citadel
Google (United States)
Tel Aviv University
University of Leeds
National University of Ireland, Galway
Forschungszentrum Jülich
La Trobe University
Kyung Hee University
University of California, Los Angeles
University of Washington
Harvard Medical School
Marshfield Clinic
Saarland University
Mayo Clinic
University of Southern California
University of Manchester
Johns Hopkins University