2023 - Research.com Best Scientist Award
2023 - Research.com Computer Science in United States Leader Award
2022 - Research.com Best Scientist Award
2022 - Research.com Computer Science in United States Leader Award
2020 - IEEE John von Neumann Medal “For contributions to machine learning and data science.”
2015 - David E. Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition
2012 - SIAM Fellow “For contributions to machine learning, in particular variational approaches to statistical inference.”
2011 - Fellow of the American Academy of Arts and Sciences
2010 - Member of the National Academy of Engineering “For contributions to the foundations and applications of machine learning.”
2010 - Member of the National Academy of Sciences
2010 - ACM Fellow “For contributions to the theory and application of machine learning.”
2009 - ACM AAAI Allen Newell Award “For fundamental advances in machine learning, particularly his groundbreaking work on graphical models and nonparametric Bayesian statistics, and the broad application of this work across computer science, statistics, and the biological sciences.”
2007 - Fellow of the American Statistical Association (ASA)
2006 - Fellow of the American Association for the Advancement of Science (AAAS)
2005 - IEEE Fellow “For contributions to probabilistic graphical models and neural information processing systems.”
2002 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) “For significant contributions to reasoning under uncertainty, machine learning, and human motor control.”
His primary areas of investigation include artificial intelligence, machine learning, algorithms, mathematical optimization, and artificial neural networks. His artificial-intelligence research is multidisciplinary, incorporating elements of data mining and pattern recognition; within data mining, his work brings together probabilistic latent semantic analysis and nonparametric statistics.
His machine-learning studies incorporate training sets and Bayesian probability. His algorithms research integrates approximate inference, correlation clustering, cluster analysis, fuzzy clustering, and reproducing kernel Hilbert spaces, while his work in mathematical optimization draws on estimators, kernel methods, applied mathematics, and nonlinear systems.
Michael I. Jordan spends much of his time researching artificial intelligence, algorithms, machine learning, mathematical optimization, and applied mathematics. His artificial-intelligence study includes themes such as data mining and pattern recognition, and his algorithms research frequently connects to the adjacent area of cluster analysis.
His machine-learning work has expanded to include thematically related topics such as sets, and within mathematical optimization he focuses in particular on minimax problems. His inference research is multidisciplinary, incorporating perspectives from graphical models and Markov chain Monte Carlo.
Michael I. Jordan focuses on algorithms, applied mathematics, artificial intelligence, mathematical optimization, and machine learning. His algorithms study is interwoven with entropy, upper and lower bounds, false discovery rates, and mirror descent, while his applied-mathematics work interconnects discretization, convergence, distributions, and symplectic geometry.
His artificial-intelligence study is interdisciplinary, drawing from both domains and sets. His research links mathematical optimization with reinforcement learning, crossing into problems involving states, function approximation, Markov decision processes, discrete mathematics, and polynomials, and his machine-learning study frequently links to adjacent areas such as adversarial systems.
His primary areas of investigation include artificial intelligence, machine learning, applied mathematics, gradient descent, and algorithms. Artificial intelligence and domain questions are frequently intertwined in his study, and his machine-learning work combines adversarial systems, statistical inference, normalization, and range.
His applied-mathematics research includes elements of linear approximation, acceleration, infinitesimals, automatic differentiation, and discretization. His gradient-descent study integrates stochastic gradient descent, convergence, saddle points, convex optimization, and stationary points, while his algorithms research relies on mirror descent, proximal point methods, multiclass classification, minimax, and upper and lower bounds.
This overview was generated by a machine-learning system that analysed the scientist’s body of work.
Latent Dirichlet Allocation
David M. Blei;Andrew Y. Ng;Michael I. Jordan.
Journal of Machine Learning Research (2003)
On Spectral Clustering: Analysis and an algorithm
Andrew Y. Ng;Michael I. Jordan;Yair Weiss.
Neural Information Processing Systems (2001)
Adaptive mixtures of local experts
Robert A. Jacobs;Michael I. Jordan;Steven J. Nowlan;Geoffrey E. Hinton.
Neural Computation (1991)
Graphical Models, Exponential Families, and Variational Inference
Martin J. Wainwright;Michael I. Jordan.
(2008)
Sharing Clusters among Related Groups: Hierarchical Dirichlet Processes
Yee W. Teh;Michael I. Jordan;Matthew J. Beal;David M. Blei.
Neural Information Processing Systems (2004)
Trust Region Policy Optimization
John Schulman;Sergey Levine;Pieter Abbeel;Michael Jordan.
International Conference on Machine Learning (2015)
Machine learning: Trends, perspectives, and prospects
M. I. Jordan;T. M. Mitchell.
Science (2015)
Hierarchical mixtures of experts and the EM algorithm
Michael I. Jordan;Robert A. Jacobs.
Neural Computation (1994)
An Internal Model for Sensorimotor Integration
Daniel M. Wolpert;Zoubin Ghahramani;Michael I. Jordan.
Science (1995)
Distance Metric Learning with Application to Clustering with Side-Information
Eric P. Xing;Michael I. Jordan;Stuart J. Russell;Andrew Y. Ng.
Neural Information Processing Systems (2002)
University of California, Berkeley
Google (United States)
École Normale Supérieure
University of California, Berkeley
Stanford University
Tsinghua University
Columbia University
Carnegie Mellon University
University of California, Berkeley
Tsinghua University
Information Technologies Institute, Greece
Pennsylvania State University
Shanghai Jiao Tong University
Los Alamos National Laboratory
University of New South Wales
University of Tromsø - The Arctic University of Norway
University of Calgary
Federal University of Rio de Janeiro
University College London
McGill University
Oslo University Hospital
Osaka University
University of Crete
University Health Network
University of Melbourne
Tampere University