His primary scientific interests are in Artificial intelligence, Reinforcement learning, Machine learning, Video game and Representation. His Artificial intelligence research is multidisciplinary, drawing on both Pattern recognition and the Bellman equation. His study of Reinforcement learning algorithms, within the broader realm of Reinforcement learning, intersects with subjects such as Zero.
His Machine learning study incorporates themes from Scalability, Theoretical computer science, Message passing, Data mining and Bayesian probability. His Unsupervised learning research includes elements of Artificial neural network, Supervised learning, Monte Carlo tree search and Computer Go. His research in Computer Go focuses on subjects such as General video game playing, which in turn connects to Search algorithm.
His scientific interests lie mostly in Artificial intelligence, Reinforcement learning, Machine learning, Algorithm and Data mining. Much of his study explores the relationship between Artificial intelligence and Pattern recognition. In his work, Incentive is closely linked to Social dilemma, which falls within the broad field of Reinforcement learning.
His research in Machine learning intersects with topics in Probabilistic logic and Inference. His study of Algorithm focuses on Mathematical optimization and Gradient descent. His Data mining research also examines Probability distribution and Message passing.
Thore Graepel spends much of his time researching Reinforcement learning, Artificial intelligence, Human–computer interaction, Game theory and Nash equilibrium. He combines subjects such as Cooperative game theory, Communication, Cognitive psychology and Social dilemma with his study of Reinforcement learning. His work in Artificial intelligence brings together topics such as Class and Video game.
His Game theory study combines areas such as Counterfactual thinking and Analytics. His Nash equilibrium research incorporates elements of Artificial neural network, Solver, Equilibrium selection and Oracle. Within Artificial neural network, he focuses on matters related to Scalability and, in some cases, Principal component analysis.
His primary areas of investigation include Reinforcement learning, Human–computer interaction, Artificial intelligence, Video game and Order. He has researched Reinforcement learning in several fields, including Artificial general intelligence and Microeconomics. The concepts of his Human–computer interaction study are interwoven with issues in Scheme, Control, Competition and Game theory.
His research investigates the connection between Artificial intelligence and areas like the Bellman equation, which intersect with concerns in State. He also carries out multidisciplinary research spanning Video game and Population based. Throughout his Order studies, he incorporates elements of other fields such as SIMPLE, Exploration problem, Development, Theoretical computer science and Resource allocation.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Mastering the game of Go with deep neural networks and tree search
David Silver;Aja Huang;Christopher J. Maddison;Arthur Guez.
Nature (2016)
Mastering the game of Go without human knowledge
David Silver;Julian Schrittwieser;Karen Simonyan;Ioannis Antonoglou.
Nature (2017)
Private traits and attributes are predictable from digital records of human behavior
Michal Kosinski;David Stillwell;Thore Graepel.
Proceedings of the National Academy of Sciences of the United States of America (2013)
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.
David Silver;Thomas Hubert;Julian Schrittwieser;Ioannis Antonoglou.
Science (2018)
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
David Silver;Thomas Hubert;Julian Schrittwieser;Ioannis Antonoglou.
arXiv: Artificial Intelligence (2017)
Hasso Plattner Institute
DeepMind (United Kingdom)
Technical University of Berlin
University College London
Microsoft (United States)
Google (United States)