His primary areas of investigation include Artificial intelligence, Speech recognition, Pattern recognition, Source separation and Recurrent neural networks. His work on Cluster analysis, Deep learning and Inference, part of a larger body of research in Artificial intelligence, is frequently linked to Matrix decomposition, bridging the gap between disciplines. His Speech recognition research is multidisciplinary, incorporating elements of Artificial neural networks, Speech enhancement and Communication channels.
His Pattern recognition research includes themes of Channels and Monte Carlo methods. His Source separation work covers Noise and intersects with Speech reconstruction. Within the same scientific family, John R. Hershey usually focuses on Recurrent neural networks, concentrating on Applied mathematics and intersecting with the Kullback–Leibler divergence, Divergence and Mixture models.
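His most-cited paper in this area, listed below, concerns approximating the Kullback–Leibler divergence between Gaussian mixture models, a quantity with no closed form; a standard baseline discussed in that line of work is a Monte Carlo estimate. The sketch below illustrates only that generic baseline and is not code from the paper: the function names, mixture parameters and sample count are illustrative assumptions.

# Minimal sketch: Monte Carlo estimate of KL(f || g) between two Gaussian mixture
# models, i.e. E_f[log f(x) - log g(x)] averaged over samples drawn from f.
# All names and parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_logpdf(x, weights, means, covs):
    """Log-density of a GMM evaluated at samples x of shape (n_samples, dim)."""
    comp = np.stack([
        np.log(w) + multivariate_normal.logpdf(x, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])
    mx = comp.max(axis=0)                      # log-sum-exp over components
    return mx + np.log(np.exp(comp - mx).sum(axis=0))

def sample_gmm(weights, means, covs, n, rng):
    """Draw n samples from a GMM by picking a component, then sampling it."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[k], covs[k]) for k in ks])

def kl_mc(f, g, n=20_000, seed=0):
    """Monte Carlo estimate of KL(f || g); f and g are (weights, means, covs) tuples."""
    rng = np.random.default_rng(seed)
    x = sample_gmm(*f, n, rng)
    return np.mean(gmm_logpdf(x, *f) - gmm_logpdf(x, *g))

# Toy example with two 1-D mixtures.
f = ([0.6, 0.4], [np.array([0.0]), np.array([3.0])], [np.eye(1), np.eye(1)])
g = ([0.5, 0.5], [np.array([0.5]), np.array([2.5])], [np.eye(1), 2 * np.eye(1)])
print(kl_mc(f, g))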
John R. Hershey mostly deals with Speech recognition, Artificial intelligence, Pattern recognition, Speech enhancement and Artificial neural networks. His study of Speech processing, Language models, Hidden Markov models and Word error rate within Speech recognition connects with subjects such as Sequences. When carried out as part of a broader Artificial intelligence research project, his work on Deep learning, Source separation and Discriminative models is frequently linked to work on Sets, thereby connecting diverse disciplines of study.
John R. Hershey usually deals with Pattern recognition and limits it to topics linked to Cluster analysis and Spectrograms. He focuses mostly on Speech enhancement, narrowing it down to matters related to Algorithms and, in some cases, Masking. His Artificial neural network research incorporates elements of the End-to-end principle, Context and Decoding methods.
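One of his listed papers (Erdogan et al., 2015) studies phase-sensitive masking for speech enhancement. As a rough illustration of what such a mask looks like, the sketch below computes the oracle phase-sensitive mask from aligned clean and noisy signals and applies it to the noisy STFT; the STFT parameters and the use of librosa are assumptions made for this example, not details taken from that paper.

# Minimal sketch, assuming aligned clean and noisy waveforms are available:
# oracle phase-sensitive mask = |S| / |Y| * cos(angle(S) - angle(Y)),
# clipped and applied to the noisy STFT. Parameter values are assumptions.
import numpy as np
import librosa

def phase_sensitive_mask(clean, noisy, n_fft=512, hop_length=128):
    """Return the oracle phase-sensitive mask and the enhanced waveform."""
    S = librosa.stft(clean, n_fft=n_fft, hop_length=hop_length)   # clean STFT
    Y = librosa.stft(noisy, n_fft=n_fft, hop_length=hop_length)   # noisy STFT
    mask = np.abs(S) / np.maximum(np.abs(Y), 1e-8) * np.cos(np.angle(S) - np.angle(Y))
    mask = np.clip(mask, 0.0, 1.0)                                 # keep mask in [0, 1]
    enhanced = librosa.istft(mask * Y, hop_length=hop_length, length=len(noisy))
    return mask, enhanced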
His scientific interests lie mostly in Speech recognition, Sound separation, Artificial intelligence, Sound and Separation. He has included themes such as Encoders, Features, Encoding and Cluster analysis in his Speech recognition study, and his Sound separation research draws on closely related subjects. His Artificial intelligence study combines topics in areas such as Signal-to-noise ratio, Beamforming and Pattern recognition. The areas he examines in his Pattern recognition study include Artificial neural networks, Speech enhancement, Covariance functions and Word error rate. His Sound research is multidisciplinary, relying on Focus, Source separation and Benchmarks.
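The deep clustering paper listed below (Hershey et al., 2016) learns an embedding for each time–frequency bin so that simple clustering of the embeddings separates the sources. The sketch below shows only the generic inference step implied by that idea, assuming the embeddings are already produced by some trained network; the function name, array shapes and use of scikit-learn K-means are illustrative assumptions.

# Minimal sketch of deep-clustering-style inference: cluster per-bin embeddings
# with K-means, turn cluster assignments into binary masks, and apply the masks
# to the mixture STFT. Shapes and names are assumptions for this example.
import numpy as np
from sklearn.cluster import KMeans

def separate_by_clustering(mix_stft, embeddings, n_sources=2):
    """mix_stft: (freq, time) complex STFT; embeddings: (freq, time, emb_dim)."""
    freq, time, emb_dim = embeddings.shape
    flat = embeddings.reshape(-1, emb_dim)
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(flat)
    masks = [(labels == k).reshape(freq, time) for k in range(n_sources)]
    return [mask * mix_stft for mask in masks]  # one masked STFT per source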
This overview was generated by a machine learning system that analysed the scientist's body of work.
Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models
J. R. Hershey; P. A. Olsen.
International Conference on Acoustics, Speech, and Signal Processing (2007)
Deep clustering: Discriminative embeddings for segmentation and separation
John R. Hershey; Zhuo Chen; Jonathan Le Roux; Shinji Watanabe.
International Conference on Acoustics, Speech, and Signal Processing (2016)
Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks
Hakan Erdogan; John R. Hershey; Shinji Watanabe; Jonathan Le Roux.
International Conference on Acoustics, Speech, and Signal Processing (2015)
Speech Enhancement with LSTM Recurrent Neural Networks and its Application to Noise-Robust ASR
Felix Weninger; Hakan Erdogan; Shinji Watanabe; Emmanuel Vincent.
International Conference on Latent Variable Analysis and Signal Separation (2015)
Hybrid CTC/Attention Architecture for End-to-End Speech Recognition
Shinji Watanabe; Takaaki Hori; Suyoun Kim; John R. Hershey.
IEEE Journal of Selected Topics in Signal Processing (2017)
Carnegie Mellon University
Mitsubishi Electric (United States)
Google (United States)
Nuance Communications (United States)
Google (United States)
University of California, San Diego
Adobe Systems (United States)
MIT
University of Lorraine
Microsoft (United States)
University of Connecticut
Nankai University
University of Lleida
University of Paris-Saclay
Aristotle University of Thessaloniki
McMaster University
Louisiana State University
University of Oklahoma
University of Bath
Monterey Bay Aquarium Research Institute
Georgia Institute of Technology
Baylor College of Medicine
University of Alabama at Birmingham
University of Illinois at Urbana-Champaign
Vanderbilt University Medical Center
University of Bologna