Junichi Yamagishi mostly deals with Speech synthesis, Speech recognition, Hidden Markov model, Artificial intelligence and Natural language processing. His Speech synthesis study incorporates disciplines such as Duration, Spoofing attack, Acoustic model, Speech processing and Waveform. His Speech recognition study combines topics from a wide range of disciplines, such as Feature extraction and Perception.
He combines subjects such as Speaker diarisation, Speaker adaptation, Emotional expression, Sound quality and Signal with his study of Hidden Markov model. His research investigates the connection between Artificial intelligence and topics such as Pattern recognition that intersect with problems in Regression analysis, Cluster analysis and Linear regression. The concepts of his Natural language processing study are interwoven with issues in Speaking style, Relation, Database, Information processing and Robustness.
His primary scientific interests are in Speech recognition, Speech synthesis, Artificial intelligence, Hidden Markov model and Natural language processing. His work is dedicated to discovering how Speech recognition and Waveform are connected with Filter and other disciplines. His study in the field of Speech corpus also crosses into the realm of Naturalness.
His Artificial intelligence study incorporates themes from Machine learning, Computer vision and Pattern recognition. His Hidden Markov model study integrates concerns from other disciplines, such as Emotional expression, Style and Speech processing. His research in Spoofing attack intersects with topics in Reliability, Anti-spoofing, Replay attack and Biometrics.
His primary scientific interests are in Speech recognition, Speech synthesis, Artificial intelligence, Waveform and Artificial neural network. The Speech recognition study combines topics in areas such as Similarity, Reverberation and Phone. Junichi Yamagishi interconnects End-to-end principle and Natural in the investigation of issues within Speech synthesis.
His Artificial intelligence research is multidisciplinary, incorporating perspectives in Machine learning and Natural language processing. His Waveform research incorporates elements of Speech enhancement, Convolution, Filter and Spectrogram. His Artificial neural network research incorporates themes from Probabilistic logic and Active listening.
Junichi Yamagishi spends much of his time researching Speech recognition, Speech synthesis, Artificial intelligence, Task and Similarity. His Speech recognition research is multidisciplinary, relying on Waveform, Reverberation and Active listening. In his study, Use case is inextricably linked to Spoofing attack, which falls within the broad field of Speech synthesis.
Within Artificial intelligence, Junichi Yamagishi concentrates on Machine learning, intersecting with Language model and Training set. His Similarity research integrates issues from Natural and Adaptation. His work deals with themes such as Feature vector, Hidden Markov model and Pattern recognition, which intersect with Artificial neural network.
This overview was generated by a machine learning system which analysed the scientist's body of work.
The HMM-based speech synthesis system (HTS) version 2.0.
Heiga Zen;Takashi Nose;Junichi Yamagishi;Shinji Sako.
SSW (2007)
SUPERSEDED - CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit
Christophe Veaux;Junichi Yamagishi;Kirsten MacDonald.
The Rainbow Passage, which the speakers read out, can be found in the International Dialects of English Archive: http://web.ku.edu/~idea/readings/rainbow.htm (2016)
MesoNet: a Compact Facial Video Forgery Detection Network
Darius Afchar;Vincent Nozick;Junichi Yamagishi;Isao Echizen.
International Workshop on Information Forensics and Security (2018)
Spoofing and countermeasures for speaker verification
Zhizheng Wu;Nicholas Evans;Tomi Kinnunen;Junichi Yamagishi.
Speech Communication (2015)
Speech Synthesis Based on Hidden Markov Models
K. Tokuda;Y. Nankaku;T. Toda;H. Zen.
Proceedings of the IEEE (2013)
Analysis of Speaker Adaptation Algorithms for HMM-Based Speech Synthesis and a Constrained SMAPLR Adaptation Algorithm
J. Yamagishi;T. Kobayashi;Y. Nakano;K. Ogata.
IEEE Transactions on Audio, Speech, and Language Processing (2009)
ASVspoof 2015: the First Automatic Speaker Verification Spoofing and Countermeasures Challenge
Zhizheng Wu;Tomi Kinnunen;Nicholas W. D. Evans;Junichi Yamagishi.
Conference of the International Speech Communication Association (2015)
The ASVspoof 2017 Challenge: Assessing the Limits of Replay Spoofing Attack Detection
Tomi Kinnunen;Md. Sahidullah;Héctor Delgado;Massimiliano Todisco.
Conference of the International Speech Communication Association (2017)
HMM-Based Speech Synthesis Utilizing Glottal Inverse Filtering
T. Raitio;A. Suni;J. Yamagishi;H. Pulakka.
IEEE Transactions on Audio, Speech, and Language Processing (2011)
Average-Voice-Based Speech Synthesis Using HSMM-Based Speaker Adaptation and Adaptive Training
Junichi Yamagishi;Takao Kobayashi.
The IEICE transactions on information and systems (2007)
University of Edinburgh
University of Eastern Finland
Nagoya Institute of Technology
EURECOM
Tokyo Institute of Technology
University of Science and Technology of China
Nagoya University
Chinese University of Hong Kong, Shenzhen
Preferred Networks, Inc.
Aalto University
University of Zaragoza
Intel (United States)
ETH Zurich
Kangwon National University
Monash University
University of Copenhagen
Hebrew University of Jerusalem
Yamaguchi University
Oregon Health & Science University
The Ohio State University
University of Bern
François Rabelais University
University of Alberta
University of California, San Diego
Fred Hutchinson Cancer Research Center
Central Queensland University