2016 - Fellow, National Academy of Inventors
2009 - Fellow of the American Association for the Advancement of Science (AAAS)
2009 - IEEE Fellow, for contributions to human-centric multimodal signal processing and applications
His primary areas of study are speech recognition, artificial intelligence, speech processing, natural language processing, and pattern recognition. His speech recognition work frequently connects to related topics such as valence, and his research on speech production intersects with computer vision within the broader field of artificial intelligence.
In speech processing, Shrikanth S. Narayanan concentrates on topics linked to audio signal processing and abstraction. His natural language processing research incorporates speech corpora and spoken dialog systems, while his pattern recognition work combines areas such as time-domain analysis and Bayesian probability.
Much of Shrikanth S. Narayanan's research addresses speech recognition, artificial intelligence, natural language processing, pattern recognition, and the vocal tract. Within speech recognition, he concentrates on speech processing, speech production, hidden Markov models, word error rate, and speaker recognition; within artificial intelligence, he focuses on feature extraction.
His natural language processing work is interdisciplinary, drawing on both speech corpora and prosody. His vocal tract research brings together studies of the tongue, articulation, and real-time MRI.
His scientific interests lie mostly in artificial intelligence, speech recognition, task modeling, natural language processing, and speaker diarisation. His artificial intelligence work incorporates context, machine learning, and pattern recognition, while his speech recognition research integrates embeddings and robustness.
His task-oriented studies combine human behavior and spoken language. His natural language processing research incorporates annotation and word-level analysis, and his speaker diarisation work draws on encoders and cluster analysis.
His main research concerns speech recognition, artificial intelligence, task modeling, wearable computing, and speaker diarisation. Themes such as embeddings and cluster analysis intersect with his speech recognition work, and his artificial intelligence studies span context, machine learning, pattern recognition, and natural language processing.
His work on natural language processing and word-level analysis connects with emotion classification. His wearable computing research intersects with the Big Five personality traits, applied psychology, and data sets, and his speaker diarisation work interconnects transcription, encoders, and spectral clustering.
This overview was generated by a machine learning system that analysed the scientist's body of work.
IEMOCAP: interactive emotional dyadic motion capture database
Carlos Busso;Murtaza Bulut;Chi Chun Lee;Abe Kazemzadeh.
language resources and evaluation (2008)
Toward detecting emotions in spoken dialogs
Chul Min Lee;S.S. Narayanan.
IEEE Transactions on Speech and Audio Processing (2005)
Analysis of emotion recognition using facial expressions, speech and multimodal information
Carlos Busso;Zhigang Deng;Serdar Yildirim;Murtaza Bulut.
international conference on multimodal interfaces (2004)
Acoustics of children's speech: developmental changes of temporal and spectral parameters.
Sungbok Lee;Alexandros Potamianos;Shrikanth Narayanan.
Journal of the Acoustical Society of America (1999)
The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing
Florian Eyben;Klaus R. Scherer;Björn W. Schuller;Johan Sundberg.
IEEE Transactions on Affective Computing (2016)
Emotion recognition using a hierarchical binary decision tree approach
Chi-Chun Lee;Emily Mower;Carlos Busso;Sungbok Lee.
Speech Communication (2011)
Environmental Sound Recognition With Time–Frequency Audio Features
S. Chu;S. Narayanan;C.-C.J. Kuo.
IEEE Transactions on Audio, Speech, and Language Processing (2009)
Emotion Recognition System
Shrikanth S. Narayanan.
Journal of the Acoustical Society of America (2006)
The INTERSPEECH 2010 Paralinguistic Challenge
Björn W. Schuller;Stefan Steidl;Anton Batliner;Felix Burkhardt.
conference of the international speech communication association (2010)
A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle
Hao Wang;Dogan Can;Abe Kazemzadeh;François Bar.
meeting of the association for computational linguistics (2012)
University of Southern California
University of Southern California
University of Southern California
University of Washington
National Technical University of Athens
The University of Texas at Dallas
University of Southern California
University of Waterloo
University of California, Los Angeles
University of Utah
Huazhong University of Science and Technology
University of Oxford
Lehigh University
Ludwig-Maximilians-Universität München
Forschungszentrum Jülich
Romanian Academy
Aristotle University of Thessaloniki
Kyushu University
Centre national de la recherche scientifique, CNRS
Burnet Institute
University of Leeds
Georgia Institute of Technology
University of Birmingham
University of Tübingen
King's College London
London School of Economics and Political Science