2018 - ACM Fellow: For contributions to the design and analysis of sequential decision making algorithms in artificial intelligence.
2010 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI): For significant contributions to the fields of reinforcement learning, decision making under uncertainty, and statistical language applications.
Michael L. Littman spends much of his time researching artificial intelligence, Markov decision processes, reinforcement learning, mathematical optimization, and partially observable Markov decision processes. Within artificial intelligence, his work draws on machine learning, Markov chains, and natural language processing, while his study of Markov decision processes brings in concerns from algorithms, state representation, and polynomial complexity.
His work on representation addresses factored models. The Bellman equation research he carries out within mathematical optimization frequently connects to questions of basis functions, linking otherwise separate areas. His research on partially observable Markov decision processes also encompasses robots and issues of scale.
His primary areas of investigation include artificial intelligence, reinforcement learning, Markov decision processes, mathematical optimization, and machine learning. His artificial intelligence research integrates issues of domain structure and natural language processing, and his reinforcement learning work incorporates perspectives from algorithms, state representation, and the Bellman equation.
His work on Markov decision processes brings together Q-learning and Markov chains, and his research on mathematical optimization often connects to related topics such as sets. His probabilistic logic research includes elements of theoretical computer science, the Boolean satisfiability problem, and complexity classes.
Michael L. Littman also focuses on reinforcement learning, artificial intelligence, machine learning, state, and representation. His reinforcement learning research intersects with artificial neural networks, state spaces, task structure, and the Bellman equation, and his artificial intelligence work frequently draws connections to related topics such as processes.
His machine learning research is interdisciplinary, drawing on variety, outcomes, and visualization. His work on representation incorporates themes from linear temporal logic and robotics. Within the same family of topics, he studies Markov decision processes, concentrating on subspace topology and intersecting with theoretical computer science.
His main research concerns reinforcement learning, artificial intelligence, context, human–computer interaction, and the Bellman equation. His reinforcement learning study combines topics such as intelligent agents, theoretical computer science, and mathematics education, while his research on representation and Q-learning overlaps with other areas such as generative models.
His work on context incorporates elements of domain knowledge, salience, quality, and task analysis. His human–computer interaction research is multidisciplinary, spanning control flow, debugging, programming paradigms, and curriculum design. His work also deals with themes such as the Wasserstein metric, applied mathematics, and constants, which intersect with the Bellman equation.
This overview was generated by a machine learning system that analysed the scientist's body of work.
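Several of the topics above (reinforcement learning, Markov decision processes, Q-learning, the Bellman equation) revolve around the same one-step value update. The sketch below is a minimal illustration of tabular Q-learning on a toy chain problem; it is not drawn from any of the publications listed on this page, and the environment, parameter values, and names are assumptions made purely for illustration.

# Minimal tabular Q-learning sketch (illustrative only; the toy environment and
# hyperparameters are assumptions, not taken from any paper listed here).
import random

N_STATES = 5          # chain of states 0..4; state 4 is terminal and rewarding
ACTIONS = (-1, +1)    # step left or right along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table indexed as Q[state][action_index]
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Toy dynamics: move along the chain; reward 1 only on reaching state 4."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection with random tie-breaking
        if random.random() < EPSILON:
            a_idx = random.randrange(len(ACTIONS))
        else:
            best = max(Q[state])
            a_idx = random.choice([i for i, q in enumerate(Q[state]) if q == best])
        next_state, reward, done = step(state, ACTIONS[a_idx])
        # Q-learning update: move Q(s, a) toward the one-step Bellman target
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][a_idx] += ALPHA * (target - Q[state][a_idx])
        state = next_state

print([[round(q, 2) for q in row] for row in Q])

With these settings, the learned values should come to favour the rightward action in every non-terminal state, which is the optimal policy for this toy chain.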
Reinforcement learning: a survey
Leslie Pack Kaelbling;Michael L. Littman;Andrew W. Moore.
Journal of Artificial Intelligence Research (1996)
Planning and Acting in Partially Observable Stochastic Domains
Leslie Pack Kaelbling;Michael L. Littman;Anthony R. Cassandra.
Artificial Intelligence (1998)
Markov games as a framework for multi-agent reinforcement learning
Michael L. Littman.
International Conference on Machine Learning (1994)
Measuring praise and criticism: Inference of semantic orientation from association
Peter D. Turney;Michael L. Littman.
ACM Transactions on Information Systems (2003)
Activity recognition from accelerometer data
Nishkam Ravi;Nikhil Dandekar;Preetham Mysore;Michael L. Littman.
Innovative Applications of Artificial Intelligence (2005)
Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach
Justin A. Boyan;Michael L. Littman.
Neural Information Processing Systems (1993)
Learning policies for partially observable environments: scaling up
Michael L. Littman;Anthony R. Cassandra;Leslie Pack Kaelbling.
International Conference on Machine Learning (1997)
Acting Optimally in Partially Observable Stochastic Domains
Anthony R. Cassandra;Leslie Pack Kaelbling;Michael L. Littman.
National Conference on Artificial Intelligence (1994)
Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms
Satinder Singh;Tommi Jaakkola;Michael L. Littman;Csaba Szepesvári.
Machine Learning (2000)
Predictive Representations of State
Michael L. Littman;Richard S. Sutton.
Neural Information Processing Systems (2001)