Yann N. Dauphin's research centers on recurrent neural networks, artificial intelligence, artificial neural networks, algorithms and convolutional neural networks. His work in artificial intelligence is closely interrelated with machine learning, and his artificial neural network research takes up themes of high-dimensional optimization and empirical risk minimization.
His research spans a wide range of topics, including network layers, translation, Lipschitz continuity and sequence learning. His work on layers incorporates perspectives on computation and variable-length inputs, while his convolutional neural network research brings together modality, features, facial expression and test sets.
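The empirical risk minimization theme above refers to mixup, listed among the selected publications below, which trains a model on convex combinations of pairs of examples and of their labels rather than on the raw training set alone. A minimal sketch in Python, assuming one-hot label vectors; the function name and batch layout are illustrative, not taken from the paper's released code:

    import numpy as np

    def mixup_batch(x, y, alpha=0.2, rng=None):
        """mixup: replace a batch with convex combinations of example pairs
        and of their one-hot labels, with weight lam ~ Beta(alpha, alpha)."""
        rng = np.random.default_rng() if rng is None else rng
        lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
        perm = rng.permutation(len(x))     # random partner for each example
        return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

Training then proceeds on the mixed batch with the ordinary loss; alpha controls how strongly pairs are interpolated.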
Much of his research concerns artificial intelligence, machine learning, artificial neural networks, algorithms and deep learning. His work on classifiers within artificial intelligence is frequently linked to generalization, and his machine learning research focuses on language models and how they relate to words and decoding methods.
His work on gradient descent, recurrent neural networks and generalization error connects neural network research to contextual image classification and inverse problems. His algorithmic research draws on adversarial systems, layers, convolutional neural networks and convolution, and his studies of layers and sequence learning take up computation and translation.
His primary scientific interests are artificial intelligence, machine learning, machine translation, language models and deep learning. His artificial neural network work is frequently connected with contextual image classification and generalization, and within machine learning his study of pruning overlaps with weighting.
His machine translation study integrates decoding methods and translation; his language model work deals with sentences, BLEU and latent variables; and he has researched deep learning in connection with automatic differentiation, leverage and singular value decomposition.
His main research concerns artificial intelligence, machine translation, language models, quadratic equations and test sets. His work in reinforcement learning and deep learning sits within artificial intelligence, and his machine translation research integrates regularization, machine learning, initialization and residual learning.
His language model research incorporates elements of words and coherence, and his quadratic equation research extends into automatic summarization, convolution, algorithms, context and elements.
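The recurring pairing of language models with convolution refers to the gated convolutional networks listed below, whose central building block is the gated linear unit (GLU): one linear map of the input is modulated element-wise by a sigmoid gate computed from a second linear map. A minimal NumPy sketch of the gating, with illustrative parameter names; in the actual model the two branches are causal convolutions over the token sequence:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def glu(x, W, b, V, c):
        """Gated linear unit: h(x) = (xW + b) * sigmoid(xV + c).
        The sigmoid branch acts as a learned element-wise gate on the
        linear branch, standing in for recurrence in the language model."""
        return (x @ W + b) * sigmoid(x @ V + c)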
This overview was generated by a machine learning system which analysed the scientist’s body of work.
Convolutional Sequence to Sequence Learning
Jonas Gehring;Michael Auli;David Grangier;Denis Yarats.
International Conference on Machine Learning (2017)
Theano: A Python framework for fast computation of mathematical expressions
Rami Al-Rfou;Guillaume Alain;Amjad Almahairi.
arXiv: Symbolic Computation (2016)
Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization
Yann N. Dauphin;Razvan Pascanu;Caglar Gulcehre;Kyunghyun Cho.
Neural Information Processing Systems (2014)
mixup: Beyond Empirical Risk Minimization
Hongyi Zhang;Moustapha Cisse;Yann N. Dauphin;David Lopez-Paz.
International Conference on Learning Representations (2017)
Language Modeling with Gated Convolutional Networks
Yann N. Dauphin;Angela Fan;Michael Auli;David Grangier.
International Conference on Machine Learning (2017)
Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding
Grégoire Mesnil;Yann Dauphin;Kaisheng Yao;Yoshua Bengio.
IEEE Transactions on Audio, Speech, and Language Processing (2015)
Hierarchical Neural Story Generation
Angela Fan;Mike Lewis;Yann N. Dauphin.
Meeting of the Association for Computational Linguistics (2018)
Parseval Networks: Improving Robustness to Adversarial Examples
Moustapha Cisse;Piotr Bojanowski;Edouard Grave;Yann Dauphin.
International Conference on Machine Learning (2017)
EmoNets: Multimodal deep learning approaches for emotion recognition in video
Samira Ebrahimi Kahou;Xavier Bouthillier;Pascal Lamblin;Çaglar Gülçehre.
Journal on Multimodal User Interfaces (2016)
Pay Less Attention with Lightweight and Dynamic Convolutions
Felix Wu;Angela Fan;Alexei Baevski;Yann N. Dauphin.
International Conference on Learning Representations (2019)
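The Lipschitz continuity theme in the overview relates to the Parseval networks entry above: each weight matrix is kept close to a Parseval tight frame (rows approximately orthonormal), so every layer stays roughly 1-Lipschitz and adversarial perturbations cannot be amplified from layer to layer. A sketch of the retraction step applied after each gradient update, assuming a 2-D weight matrix and an illustrative function name:

    import numpy as np

    def parseval_retraction(W, beta=1e-3):
        """Pull W toward a Parseval tight frame (W @ W.T close to I),
        bounding the layer's spectral norm near 1:
        W <- (1 + beta) * W - beta * (W @ W.T @ W)."""
        return (1.0 + beta) * W - beta * (W @ W.T @ W)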