2014 - ACM Fellow: For contributions to machine learning, artificial intelligence, algorithmic game theory, and computational social science.
2012 - Fellow of the American Academy of Arts and Sciences
2003 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI): For significant contributions to computational learning theory, to reinforcement learning and stochastic planning, to dialogue agents, and to the theory of multi-agent systems.
His scientific interests lie mainly in artificial intelligence, reinforcement learning, stability, machine learning, and probably approximately correct (PAC) learning. Within artificial intelligence, his work focuses on semi-supervised learning, unsupervised learning, and concept classes. His semi-supervised learning research centers on algorithmic learning theory and its connections to instance-based learning and online machine learning.
His reinforcement learning research combines dialogue management, human-computer interaction, Markov decision processes, and mathematical optimization. His work on generalization error, part of a broader line of research on algorithmic stability, is frequently linked to sanity-check bounds, bridging disciplines. His study of probably approximately correct learning is interwoven with theoretical computer science and computation.
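The probably approximately correct learning mentioned above has a classic quantitative core. As a generic illustration of the framework, not drawn from any specific Kearns paper, the standard realizable-case sample-complexity bound for a finite hypothesis class can be sketched:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic realizable-case PAC bound for a finite hypothesis class H:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) labeled examples suffice for
    any consistent learner to output a hypothesis with true error at most
    epsilon, with probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Illustrative numbers: 2**20 hypotheses, 5% target error, 1% failure rate.
m = pac_sample_bound(2 ** 20, epsilon=0.05, delta=0.01)
print(m)  # 370
```

The bound grows only logarithmically in the size of the hypothesis class, which is why even very large finite classes remain learnable from modest samples.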
His primary areas of study are artificial intelligence, mathematical optimization, algorithms, theoretical computer science, and discrete mathematics. His artificial intelligence research engages machine learning, which in turn touches on probabilistic logic. His work in mathematical optimization is interdisciplinary, drawing on both mathematical economics and regret.
In his theoretical computer science work, Michael Kearns examines computation, game theory, and stochastic games. His research on computational learning theory narrows to algorithmic learning theory and, often, instance-based learning. His semi-supervised learning studies incorporate themes of stability and unsupervised learning.
Mathematical economics, mathematical optimization, algorithms, constraints, and regret are his primary areas of study. His mathematical economics research is multidisciplinary, incorporating perspectives on state, differential privacy, and bounding. In his work on algorithms he combines distributions, empirical risk minimization, groups, and minimax.
His work on constraints brings together time complexity, theoretical computer science, heuristics, and reinforcement learning. His regret studies integrate polynomials, dimension, Mahalanobis distance, and artificial intelligence; in his work, artificial intelligence is closely tied to computation.
Michael Kearns deals mostly with regret, mathematical economics, constraints, classes, and theoretical computer science. His regret research includes themes of Mahalanobis distance and artificial intelligence, and within artificial intelligence he interconnects structure and state.
His mathematical economics studies also incorporate total cost, payment, incentives, and principals. Themes of time complexity, testing, differential privacy, and reinforcement learning intersect with his work on constraints. His research links theoretical computer science to computational problems that cross into statistics, heuristics, and oracles.
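The notion of regret recurring in the paragraphs above can be made concrete with a standard no-regret algorithm. This is a generic multiplicative-weights (Hedge) sketch, not a reconstruction of any specific Kearns paper; the learning rate and the toy loss sequence are illustrative assumptions.

```python
import math

def hedge(loss_rounds, eta=0.1):
    """Generic multiplicative-weights (Hedge) update over n actions.

    loss_rounds: list of per-round loss vectors with entries in [0, 1].
    Returns (algorithm's expected cumulative loss, best fixed action's
    cumulative loss); their difference is the regret, which Hedge keeps
    bounded by ln(n)/eta + eta*T over T rounds.
    """
    n = len(loss_rounds[0])
    weights = [1.0] * n
    alg_loss = 0.0
    cum = [0.0] * n
    for losses in loss_rounds:
        total = sum(weights)
        probs = [w / total for w in weights]            # play actions in proportion to weight
        alg_loss += sum(p * l for p, l in zip(probs, losses))
        for i, l in enumerate(losses):
            cum[i] += l
            weights[i] *= math.exp(-eta * l)            # downweight lossy actions
    return alg_loss, min(cum)

# Toy sequence: action 0 always incurs loss 1, action 1 always loss 0.
rounds = [[1.0, 0.0]] * 50
alg, best = hedge(rounds)
```

On this toy sequence the algorithm quickly shifts its weight to the lossless action, so its cumulative loss stays well under the worst-case regret bound.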
This overview was generated by a machine learning system which analysed the scientist’s body of work.
An Introduction to Computational Learning Theory
Michael J. Kearns;Umesh V. Vazirani.
Cryptographic limitations on learning Boolean formulae and finite automata
Michael Kearns;Leslie Valiant.
Journal of the ACM (1994)
Efficient noise-tolerant learning from statistical queries
Michael Kearns.
Journal of the ACM (1998)
Near-Optimal Reinforcement Learning in Polynomial Time
Michael Kearns;Satinder Singh.
Machine Learning (2002)
A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes
Michael Kearns;Yishay Mansour;Andrew Y. Ng.
Machine Learning (2002)
Learning in the presence of malicious errors
Michael Kearns;Ming Li.
SIAM Journal on Computing (1993)
Algorithmic stability and sanity-check bounds for leave-one-out cross-validation
Michael Kearns;Dana Ron.
Neural Computation (1999)
Graphical models for game theory
Michael J. Kearns;Michael L. Littman;Satinder P. Singh.
Uncertainty in Artificial Intelligence (2001)
Efficient distribution-free learning of probabilistic concepts
Michael J. Kearns;Robert E. Schapire.
Journal of Computer and System Sciences (1994)
Toward efficient agnostic learning
Michael J. Kearns;Robert E. Schapire;Linda M. Sellie.
Conference on Learning Theory (1992)