2021 - IEEE Fellow, for contributions to far-field signal processing for speech enhancement and recognition.
His primary areas of study are speech recognition, speech enhancement, speech processing, reverberation, and artificial neural networks. His research spans a wide range of related topics, including mixture models and noise reduction; within his mixture-model work he connects source separation and blind signal separation.
His speech enhancement work deals with themes such as time-domain processing and deep neural networks. His speech processing research incorporates speaker recognition, noise measurement, and robustness, while his reverberation studies combine filtering, microphones, linear prediction, impulse responses, and signal processing.
Tomohiro Nakatani spends much of his time researching speech recognition, artificial intelligence, speech enhancement, pattern recognition, and reverberation. His speech recognition study is interdisciplinary, drawing on both artificial neural networks and noise modelling. At the intersection of artificial intelligence and blind signal separation, he works with expectation-maximization algorithms and underdetermined systems.
His speech enhancement research incorporates microphone arrays, noise reduction, background noise, and beamforming. His pattern recognition work includes themes such as noise measurement and word error rate, and his reverberation research integrates linear prediction, filtering, microphones, and inverse filtering.
Tomohiro Nakatani focuses on speech recognition, artificial neural networks, algorithms, speech enhancement, and artificial intelligence. His speech recognition study covers the time domain, estimators, and beamforming, while his neural network research draws on language models, discriminative models, utterances, and word error rate.
His algorithmic work combines noise, filtering, and blind signal separation. In speech enhancement he connects noise reduction, filter banks, reverberation, and minimum-variance unbiased estimation, and his artificial intelligence research intersects with pattern recognition through neural network learning methods.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Speech Dereverberation Based on Variance-Normalized Delayed Linear Prediction
Tomohiro Nakatani;Takuya Yoshioka;Keisuke Kinoshita;Masato Miyoshi.
IEEE Transactions on Audio, Speech, and Language Processing (2010)
Making Machines Understand Us in Reverberant Rooms: Robustness Against Reverberation for Automatic Speech Recognition
Takuya Yoshioka;A. Sehr;M. Delcroix;K. Kinoshita.
IEEE Signal Processing Magazine (2012)
A summary of the REVERB challenge: state-of-the-art and remaining challenges in reverberant speech processing research
Keisuke Kinoshita;Marc Delcroix;Sharon Gannot;Emanuël A. P. Habets.
EURASIP Journal on Advances in Signal Processing (2016)
Suppression of Late Reverberation Effect on Speech Signal Using Long-Term Multiple-step Linear Prediction
K. Kinoshita;M. Delcroix;T. Nakatani;M. Miyoshi.
IEEE Transactions on Audio, Speech, and Language Processing (2009)
The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices
Takuya Yoshioka;Nobutaka Ito;Marc Delcroix;Atsunori Ogawa.
IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) (2015)
Generalization of Multi-Channel Linear Prediction Methods for Blind MIMO Impulse Response Shortening
T. Yoshioka;T. Nakatani.
IEEE Transactions on Audio, Speech, and Language Processing (2012)
Robust MVDR beamforming using time-frequency masks for online/offline ASR in noise
Takuya Higuchi;Nobutaka Ito;Takuya Yoshioka;Tomohiro Nakatani.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016)
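The mask-based MVDR recipe can be sketched per frequency bin: time-frequency masks weight the frames used to estimate speech and noise spatial covariance matrices, a steering vector is obtained from the speech covariance, and the classic MVDR solution w = R_n^{-1} h / (h^H R_n^{-1} h) yields the beamformer weights. A minimal numpy sketch follows; the function name and the principal-eigenvector steering estimate are illustrative assumptions, not necessarily the paper's exact procedure:

```python
import numpy as np

def mask_mvdr_weights(Y, mask, eps=1e-6):
    """Mask-based MVDR sketch for one frequency bin (hypothetical helper).
    Y: (M, T) multichannel STFT frames; mask: (T,) speech presence in [0, 1].
    Returns (w, h): MVDR weights and the estimated steering vector."""
    M, T = Y.shape
    # Mask-weighted spatial covariance estimates for speech and noise.
    w_s = mask / (mask.sum() + eps)
    w_n = (1.0 - mask) / ((1.0 - mask).sum() + eps)
    R_s = (Y * w_s) @ Y.conj().T
    R_n = (Y * w_n) @ Y.conj().T + eps * np.eye(M)
    # Steering vector: principal eigenvector of the speech covariance.
    _, eigvecs = np.linalg.eigh(R_s)
    h = eigvecs[:, -1]
    # MVDR solution: w = R_n^{-1} h / (h^H R_n^{-1} h).
    num = np.linalg.solve(R_n, h)
    w = num / (h.conj() @ num)
    return w, h

# The enhanced signal for this bin would then be w.conj() @ Y.
```

By construction, w satisfies the distortionless constraint h^H w = 1 toward the estimated steering direction while minimizing the output noise power.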
Blind Separation and Dereverberation of Speech Mixtures by Joint Optimization
Takuya Yoshioka;Tomohiro Nakatani;Masato Miyoshi;Hiroshi G. Okuno.
IEEE Transactions on Audio, Speech, and Language Processing (2011)
Blind speech dereverberation with multi-channel linear prediction based on short time fourier transform representation
T. Nakatani;T. Yoshioka;K. Kinoshita;M. Miyoshi.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2008)
Blind dereverberation of single channel speech signal based on harmonic structure
T. Nakatani;M. Miyoshi.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2003)