2017 - IJCAI Award for Research Excellence for his pioneering work in the theory of reinforcement learning.
2006 - IEEE Fellow, for contributions to reinforcement learning methods and their neural network implementations.
2004 - Neural Networks Pioneer Award, IEEE Computational Intelligence Society
1989 - Fellow of the American Association for the Advancement of Science (AAAS)
Andrew G. Barto works mainly in Artificial intelligence, Reinforcement learning, Machine learning, Artificial neural networks and Temporal difference learning. His work in Robot learning, Unsupervised learning and Error-driven learning overlaps with other areas such as Construct. Within Reinforcement learning, his primary interests are Learning classifier systems and Q-learning.
His Q-learning research draws on a wide range of disciplines, including Field, AIXI and Cognitive science. His Machine learning research intersects with Domain, Sequence, Knowledge transfer and State space. In his investigations of Temporal difference learning, he interconnects Dynamical systems, Stochastic programming, Linear least squares, Least squares and Decision problems.
His investigations cover issues in Artificial intelligence, Reinforcement learning, Machine learning, Markov decision processes and Robot learning. His research examines the link between Artificial intelligence and Reinforcement, touching on problems in Relation. His work on Temporal difference learning and Learning classifier systems can be considered part of Reinforcement learning.
His study of Unsupervised learning is often connected to Function as part of a broader study of Machine learning; within Unsupervised learning, he specializes in Computational learning theory. His Markov decision process research is interwoven with Intelligent decision support systems, Field and Mathematical optimization.
Artificial intelligence, Reinforcement learning, Machine learning, Mobile manipulators and Cognitive science are his primary areas of study. His work on Representation, part of his general Artificial intelligence research, is frequently linked to Training period, thereby connecting diverse disciplines of science. His Reinforcement learning research is multidisciplinary, incorporating perspectives from Active learning, State and Human–computer interaction.
His Machine learning studies incorporate disciplines such as Robot learning, Bayesian probability, Hidden Markov models and Pattern recognition. He has researched Error-driven learning in several fields, including Temporal difference learning and Genetic programming. His research on Instance-based learning and Learning classifier systems overlaps with other disciplines such as Preference learning.
His primary areas of investigation include Artificial intelligence, Machine learning, Reinforcement learning, Mobile manipulators and Robot learning. He examines the relationship between Artificial intelligence and Structure, which overlaps with Subroutines, Hierarchy and Linear models. His work on Active learning, part of his general Machine learning research, is frequently linked to Ranging, connecting diverse disciplines of science.
His study of Active learning is interdisciplinary, drawing on Semi-supervised learning, Learning classifier systems and Unsupervised learning. In Reinforcement learning, he works on issues such as Robotic arms, which are connected to Underactuation. He also carries out multidisciplinary research on Robot learning and Abstraction.
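Since Q-learning and temporal difference learning recur throughout Barto's research profile, a minimal sketch of tabular Q-learning may help illustrate the idea: the value of a state–action pair is nudged toward the observed reward plus the discounted value of the best next action. The five-state chain environment below is a hypothetical toy example, not drawn from Barto's papers.

```python
import random

# Tabular Q-learning on a toy 5-state chain (hypothetical example):
# the agent moves left or right and earns reward 1 on reaching state 4.
N_STATES, ACTIONS = 5, (0, 1)      # action 0 = left, action 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment transition: returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

random.seed(0)
for _ in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy should move right from every
# non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

The temporal-difference error `target - Q[(s, a)]` is the quantity that distinguishes this family of methods: learning proceeds from the difference between successive predictions rather than waiting for a final outcome.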
This overview was generated by a machine learning system that analysed the scientist's body of work.
Reinforcement Learning: An Introduction
Richard S. Sutton;Andrew G. Barto.
(1998)
Introduction to Reinforcement Learning
Richard S. Sutton;Andrew G. Barto.
(1998)
Neuronlike adaptive elements that can solve difficult learning control problems
Andrew G. Barto;Richard S. Sutton;Charles W. Anderson.
IEEE Transactions on Systems, Man, and Cybernetics (1983)
Toward a modern theory of adaptive networks: Expectation and prediction.
Richard S. Sutton;Andrew G. Barto.
Psychological Review (1981)
Learning to act using real-time dynamic programming
Andrew G. Barto;Steven J. Bradtke;Satinder P. Singh.
Artificial Intelligence (1995)
Recent Advances in Hierarchical Reinforcement Learning
Andrew G. Barto;Sridhar Mahadevan.
Discrete Event Dynamic Systems (2003)
Handbook of Learning and Approximate Dynamic Programming
Jennie Si;Andrew G. Barto;Warren Buckler Powell;Donald C. Wunsch.
(2004)
Improving Elevator Performance Using Reinforcement Learning
Robert H. Crites;Andrew G. Barto.
Neural Information Processing Systems (1995)
Linear least-squares algorithms for temporal difference learning
Steven J. Bradtke;Andrew G. Barto.
Machine Learning (1996)
Time-Derivative Models of Pavlovian Reinforcement
Richard S. Sutton;Andrew G. Barto.
(1990)
Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.
University of Alberta
Brown University
Northwestern University
University of Alabama at Birmingham
University of Massachusetts Amherst
Princeton University
Colorado State University
University College London
University of Rochester