His primary areas of investigation include Speech recognition, Artificial intelligence, Natural language processing, Prosody and Feature. Anton Batliner's work on Speech recognition deals in particular with Speech processing. His studies in Natural language processing integrate themes from Paralanguage, Linguistics, German and Speaker recognition.
His study of Speaker recognition is interdisciplinary, drawing on Usability, Hidden Markov models and System integration. His Prosody research incorporates elements of Sentence, Anger, Utterance and Parsing. His work on Feature tackles topics in Pattern recognition that relate to Articulation.
His primary areas of study are Speech recognition, Artificial intelligence, Natural language processing, Prosody and Feature vector. His Speech recognition studies also address Word and Stress. His Artificial intelligence research is multidisciplinary, incorporating perspectives from German and Pattern recognition.
His study of Natural language processing is mostly dedicated to connecting topics such as Speech corpus. His Prosody research includes themes of Artificial neural network and Speech synthesis. His Feature study incorporates themes from Speaker recognition and Relevance.
Anton Batliner mainly focuses on Speech recognition, Cognitive psychology, Artificial intelligence, Natural language processing and Feature. His Speech recognition research incorporates themes from Visual comparison, Support vector machine, First language, Feature learning and Feature extraction. His Cognitive psychology research is multidisciplinary, incorporating Personality and Feature set.
Anton Batliner performs integrative research on Artificial intelligence and Structure. His work deals with themes such as Word, Prosody and Notation, which intersect with Natural language processing. His Feature study also includes fields such as
The INTERSPEECH 2009 Emotion Challenge
Björn W. Schuller;Stefan Steidl;Anton Batliner.
Conference of the International Speech Communication Association (2009)
Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge
Björn Schuller;Anton Batliner;Stefan Steidl;Dino Seppi.
Speech Communication (2011)
The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism
Björn W. Schuller;Stefan Steidl;Anton Batliner;Alessandro Vinciarelli.
Conference of the International Speech Communication Association (2013)
The INTERSPEECH 2010 Paralinguistic Challenge
Björn W. Schuller;Stefan Steidl;Anton Batliner;Felix Burkhardt.
Conference of the International Speech Communication Association (2010)
How to find trouble in communication
A. Batliner;K. Fischer;R. Huber;J. Spilker.
Speech Communication (2003)
The HUMAINE Database: Addressing the Collection and Annotation of Naturalistic and Induced Emotional Data
Ellen Douglas-Cowie;Roddy Cowie;Ian Sneddon;Cate Cox.
affective computing and intelligent interaction (2007)
Computational Paralinguistics: Emotion, Affect and Personality in Speech and Language Processing
Bjorn Schuller;Anton Batliner.
Paralinguistics in speech and language-State-of-the-art and the challenge
Björn Schuller;Stefan Steidl;Anton Batliner;Felix Burkhardt.
Computer Speech & Language (2013)
The INTERSPEECH 2012 Speaker Trait Challenge
Björn W. Schuller;Stefan Steidl;Anton Batliner;Elmar Nöth.
Conference of the International Speech Communication Association (2012)
The INTERSPEECH 2011 Speaker State Challenge
Stefan Steidl;Anton Batliner;Florian Schiel;Jarek Krajewski.
Conference of the International Speech Communication Association (2011)