His scientific interests lie mainly in Artificial intelligence, Reinforcement learning, Machine learning, Mathematical optimization and Human–computer interaction. Olivier Pietquin combines subjects such as Field and Natural language processing with his study of Artificial intelligence. His Reinforcement learning research integrates issues from Parametric statistics, Kalman filter, Entropy, Sample and the Exploration problem.
His work on Temporal difference learning, as part of his general Machine learning research, is frequently linked to User modeling, bridging the gap between disciplines. His Mathematical optimization study combines topics in areas such as Markov decision process and Function approximation. His research also spans a wide range of topics, including Context and Error-driven learning.
His primary areas of investigation include Artificial intelligence, Reinforcement learning, Machine learning, Human–computer interaction and Markov decision process. Within Artificial intelligence, his main area of study is Natural language. His Reinforcement learning study combines topics from a wide range of disciplines, such as Mathematical optimization, Bellman equation, State space, State and Function approximation.
His study of Machine learning integrates issues of Control and Markov process. The areas that Olivier Pietquin examines in his Human–computer interaction study include Field and Focus. His study of Markov decision process is interdisciplinary, drawing on both Regularization and Propagation of uncertainty.
His main research concerns Reinforcement learning, Artificial intelligence, Mathematical optimization, Markov decision process and Human–computer interaction. Within Reinforcement learning, Olivier Pietquin narrows his focus to issues related to Regularization and, often, Propagation of uncertainty. His Artificial intelligence research incorporates elements of Domain, Sample and Machine learning.
His Domain research includes elements of Q-learning, Speech processing, Speaker recognition, Utterance and Robot. His studies in Sample integrate themes in fields such as Proxy, Variety, Imitation learning and Bellman equation. His Deep learning research includes themes of Speech recognition, Random function and Monte Carlo method.
Olivier Pietquin mainly focuses on Reinforcement learning, Artificial intelligence, Mathematical optimization, Fictitious play and Entropy. Within Reinforcement learning, his studies deal with areas such as Regularization, Field, Markov decision process, Empirical research and Representation. In investigating Empirical research, Olivier Pietquin interconnects Control, Machine learning and Algorithm.
His Artificial intelligence study incorporates disciplines such as Value and Human–computer interaction. His work explores how Fictitious play and Best response are connected with Robust optimization, Classifier, Contextual image classification, Robustness and other disciplines. In his Entropy study, Olivier Pietquin includes themes such as Gradient descent, Parametric statistics, Propagation of uncertainty and Range.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Deep Q-learning From Demonstrations.
Todd Hester;Matej Vecerík;Olivier Pietquin;Marc Lanctot.
National Conference on Artificial Intelligence (2018)
Noisy Networks For Exploration
Meire Fortunato;Mohammad Gheshlaghi Azar;Bilal Piot;Jacob Menick.
International Conference on Learning Representations (2018)
Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards
Matej Vecerík;Todd Hester;Jonathan Scholz;Fumin Wang.
arXiv: Artificial Intelligence (2017)
GuessWhat?! Visual Object Discovery through Multi-modal Dialogue
Harm de Vries;Florian Strub;Sarath Chandar;Olivier Pietquin.
Computer Vision and Pattern Recognition (2017)
Modulating early visual processing by language
Harm de Vries;Florian Strub;Jeremie Mary;Hugo Larochelle.
Neural Information Processing Systems (2017)
A probabilistic framework for dialog simulation and optimal strategy learning
O. Pietquin;T. Dutoit.
IEEE Transactions on Audio, Speech, and Language Processing (2006)
Deep Q-learning from Demonstrations
Todd Hester;Matej Vecerik;Olivier Pietquin;Marc Lanctot.
arXiv: Artificial Intelligence (2017)
Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation
Alexandre Bérard;Olivier Pietquin;Laurent Besacier;Christophe Servan.
Neural Information Processing Systems (2016)
Learning from Demonstrations for Real World Reinforcement Learning
Todd Hester;Matej Vecerík;Olivier Pietquin;Marc Lanctot.
(2017)
Machine learning for spoken dialogue systems
Oliver Lemon;Olivier Pietquin.
Conference of the International Speech Communication Association (2007)
University of Montreal
DeepMind (United Kingdom)
Grenoble Alpes University
Heriot-Watt University
University of Mons
Google (United States)
University of Cambridge