2022 - Research.com Rising Star of Science Award
Michael Auli's research centers on artificial intelligence, recurrent neural networks, language modeling, and machine translation, combining machine learning with natural language processing. Within language modeling, his work extends to automatic summarization and, in some cases, to extensibility, inference, and programming languages.
His translation research also incorporates layer design and sequence learning, drawing together algorithms, computation, and convolutional neural networks. His machine translation work covers evaluation on test sets and connects to signal processing and acoustic modeling.
More broadly, his interests span artificial intelligence, natural language processing, machine translation, language modeling, and speech recognition. He explores links between machine learning and the makeup of training sets, and his natural language processing studies take in domain adaptation, generative grammar, and the Transformer.
In machine translation he examines algorithmic questions that overlap with convolution and convolutional neural networks; his language model research incorporates question answering, inference, and automatic summarization; and his translation work draws on benchmarks and sequence learning.
His main research concerns are artificial intelligence, machine translation, speech recognition, labeled data, and feature learning, with artificial intelligence closely tied to natural language processing throughout. His document-level and language modeling studies, part of this larger body of natural language processing work, frequently address simplicity and scale, bridging disciplines.
His machine translation studies also deal with language modeling, linguistic variety, and the Transformer. In feature learning he integrates phoneme recognition, quantization, speech processing, word error rate, and deep learning; his word error rate research involves signal processing, latent variables, and cross-lingual transfer.
His primary interests in speech recognition, feature learning, word error rate, training, and labeled data are closely interconnected: his speech recognition work is intertwined with quantization, which in turn brings together deep learning, speech processing, cross-lingual learning, and artificial intelligence.
His studies of training synthesize self-training and testing, and his investigations of structure combine latent variables, signal processing, phoneme recognition, and the ABX test.
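One concrete example of this convolutional line of work is the gated linear unit (GLU) at the heart of the gated convolutional language models listed among the publications below. The following PyTorch sketch shows only the gating mechanism; the class name, kernel size, and toy dimensions are illustrative choices, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    # Content and gate come from one 1-D convolution with doubled output
    # channels; left padding keeps the block causal for language modeling.
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))          # causal left padding
        a, b = self.conv(x).chunk(2, dim=1)  # split into content / gate
        return a * torch.sigmoid(b)          # GLU: content gated by sigmoid

# Toy usage: a batch of 2 sequences, 16 channels, 10 time steps.
block = GatedConvBlock(channels=16)
print(block(torch.randn(2, 16, 10)).shape)   # torch.Size([2, 16, 10])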
Convolutional Sequence to Sequence Learning
Jonas Gehring;Michael Auli;David Grangier;Denis Yarats.
International Conference on Machine Learning (2017)
fairseq: A Fast, Extensible Toolkit for Sequence Modeling
Myle Ott;Sergey Edunov;Alexei Baevski;Angela Fan.
North American Chapter of the Association for Computational Linguistics (2019)
Language Modeling with Gated Convolutional Networks
Yann N. Dauphin;Angela Fan;Michael Auli;David Grangier.
International Conference on Machine Learning (2017)
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni;Michel Galley;Michael Auli;Chris Brockett.
North American Chapter of the Association for Computational Linguistics (2015)
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
Sumit Chopra;Michael Auli;Alexander M. Rush.
North American Chapter of the Association for Computational Linguistics (2016)
Sequence Level Training with Recurrent Neural Networks
Marc'Aurelio Ranzato;Sumit Chopra;Michael Auli;Wojciech Zaremba.
International Conference on Learning Representations (2016)
Understanding Back-Translation at Scale
Sergey Edunov;Myle Ott;Michael Auli;David Grangier.
Empirical Methods in Natural Language Processing (2018)
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
Alexei Baevski;Yuhao Zhou;Abdelrahman Mohamed;Michael Auli.
Neural Information Processing Systems (2020)
Scaling Neural Machine Translation
Myle Ott;Sergey Edunov;David Grangier;Michael Auli.
Proceedings of the Third Conference on Machine Translation: Research Papers (2018)
3D Human Pose Estimation in Video With Temporal Convolutions and Semi-Supervised Training
Dario Pavllo;Christoph Feichtenhofer;David Grangier;Michael Auli.
Computer Vision and Pattern Recognition (2019)
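The wav2vec 2.0 framework in the list above is distributed as pretrained checkpoints. As a minimal usage sketch, assuming the Hugging Face transformers port of the model, the code below loads the released facebook/wav2vec2-base-960h checkpoint and greedily decodes a silent dummy waveform; it is an illustration, not the authors' original fairseq pipeline.

# Assumed environment: pip install transformers torch numpy
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Publicly released checkpoint: wav2vec 2.0 base, fine-tuned for
# CTC-based English ASR on 960 hours of LibriSpeech.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# One second of silence stands in for real 16 kHz speech.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)

# Greedy CTC decoding to a character-level transcript.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))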