His Artificial neural network research falls under Deep neural networks, Recurrent neural networks, and Connectionism. His work links Speech processing with Artificial intelligence, and combines Speech processing with Acoustic model; his Natural language processing study often connects to related topics such as Spoken language. Florian Metze carries out multidisciplinary research spanning Speech recognition and Language model, and pursues interdisciplinary work between Language model and Acoustic model. His Linguistics study is largely dedicated to connecting topics such as Word, and he merges World Wide Web with Information retrieval, integrating Information retrieval with Natural language processing in his papers.
His Speech recognition research is linked to Word error rate, Language model, and Hidden Markov model. Within Natural language processing, he connects Automatic summarization, Language model, Machine translation, and Word error rate. In Linguistics, he integrates adjacent topics such as Word, Vocabulary, and Feature (linguistics). His multidisciplinary work also combines Artificial intelligence with Machine learning, and his study of Task (project management) connects it to the thematically related Management.
This overview was generated by a machine learning system which analysed the scientist's body of work.
EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding
Yajie Miao; Mohammad Gowayyed; Florian Metze.
IEEE Automatic Speech Recognition and Understanding Workshop (2015)
Extracting deep bottleneck features using stacked auto-encoders
Jonas Gehring; Yajie Miao; Florian Metze; Alex Waibel.
International Conference on Acoustics, Speech, and Signal Processing (2013)
A one-pass decoder based on polymorphic linguistic context assignment
H. Soltau; F. Metze; C. Fugen; A. Waibel.
IEEE Automatic Speech Recognition and Understanding Workshop (2001)
Comparison of Four Approaches to Age and Gender Recognition for Telephone Applications
F. Metze; J. Ajmera; R. Englert; U. Bub.
International Conference on Acoustics, Speech, and Signal Processing (2007)
Advances in automatic meeting record creation and access
A. Waibel; M. Bett; F. Metze; K. Ries.
International Conference on Acoustics, Speech, and Signal Processing (2001)
Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval
Niluthpol Chowdhury Mithun; Juncheng Li; Florian Metze; Amit K. Roy-Chowdhury.
International Conference on Multimedia Retrieval (2018)
Session independent non-audible speech recognition using surface electromyography
L. Maier-Hein; F. Metze; T. Schultz; A. Waibel.
IEEE Automatic Speech Recognition and Understanding Workshop (2005)
How2: A Large-scale Dataset for Multimodal Language Understanding
Ramon Sanabria; Ozan Caglayan; Shruti Palaskar; Desmond Elliott.
A comparison of Deep Learning methods for environmental sound detection
Juncheng Li; Wei Dai; Florian Metze; Shuhui Qu.
International Conference on Acoustics, Speech, and Signal Processing (2017)
Speaker adaptive training of deep neural network acoustic models using i-vectors
Yajie Miao; Hao Zhang; Florian Metze.
IEEE Transactions on Audio, Speech, and Language Processing (2015)