His primary areas of investigation are mathematical optimization, algorithms, regret analysis, reinforcement learning, and artificial intelligence. Sham M. Kakade's papers bring mathematical optimization together with system identification, and his algorithmic research incorporates themes from mixture models, unsupervised learning, implicit functions, and hidden Markov models.
His regret studies range over the curse of dimensionality, coding, upper and lower bounds, and compressed sensing. His reinforcement learning work is interdisciplinary, drawing on both robotics and state-space methods, and he combines machine learning and pattern recognition in his study of artificial intelligence.
His scientific interests lie mostly in mathematical optimization, algorithms, artificial intelligence, applied mathematics, and reinforcement learning. His mathematical optimization research spans a wide range of topics, including stochastic gradient descent, gradient descent, Markov decision processes, and regret, while his algorithmic work is interwoven with mixture models, matrix methods, unsupervised learning, and hidden Markov models.
His artificial intelligence research incorporates themes from machine learning and pattern recognition, and his applied mathematics integrates regularization, covariance, convexity, estimation, and generalization. In reinforcement learning, his work deals with parameterized complexity, supervised learning, sample complexity, polynomial methods, and function approximation.
Sham M. Kakade spends much of his time researching mathematical optimization, reinforcement learning, applied mathematics, algorithms, and Markov decision processes. He interconnects sampling, control, and sample-size determination in his mathematical optimization work, and his reinforcement learning research spans upper and lower bounds, sample complexity, function approximation, and the curse of dimensionality.
His applied mathematics incorporates regularization, stochastic gradient descent, and regret. His algorithmic study combines entropy, generative grammar, and pruning, while his work on Markov decision processes covers state-space topics related to generative models.
The scientist's investigation covers regret, mathematical optimization, artificial intelligence, linear dynamical systems, and applied mathematics. His work on intelligent decision support systems connects thematically to his regret analysis, and his research brings together Markov decision processes and mathematical optimization.
His artificial intelligence research includes themes of contrast and sampling. His studies of linear dynamical systems examine control and statistical noise, and his applied mathematics draws on measure theory, double descent, and convexity.
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Niranjan Srinivas;Andreas Krause;Matthias Seeger;Sham M. Kakade.
international conference on machine learning (2010)
Tensor decompositions for learning latent variable models
Animashree Anandkumar;Rong Ge;Daniel Hsu;Sham M. Kakade.
Journal of Machine Learning Research (2014)
Cover trees for nearest neighbor
Alina Beygelzimer;Sham Kakade;John Langford.
international conference on machine learning (2006)
A Natural Policy Gradient
Sham M. Kakade.
neural information processing systems (2001)
Opponent interactions between serotonin and dopamine
Nathaniel D. Daw;Sham Kakade;Peter Dayan.
Neural Networks (2002)
Multi-view clustering via canonical correlation analysis
Kamalika Chaudhuri;Sham M. Kakade;Karen Livescu;Karthik Sridharan.
international conference on machine learning (2009)
Approximately Optimal Approximate Reinforcement Learning
Sham Kakade;John Langford.
international conference on machine learning (2002)
Stochastic Linear Optimization Under Bandit Feedback
Varsha Dani;Thomas P. Hayes;Sham M. Kakade.
conference on learning theory (2008)
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
Niranjan Srinivas;Andreas Krause;Sham M. Kakade;Matthias Seeger.
IEEE Transactions on Information Theory (2012)
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Niranjan Srinivas;Andreas Krause;Sham M. Kakade;Matthias Seeger.
arXiv: Learning (2009)
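Several of the works listed above (the ICML 2010 paper and its IEEE Transactions on Information Theory version) analyze the GP-UCB rule for Gaussian process bandit optimization: at each round, query the point maximizing the posterior mean plus a scaled posterior standard deviation. The following is a minimal one-dimensional sketch, not the paper's implementation; it assumes a squared-exponential kernel and a fixed confidence multiplier `beta` (the paper's beta_t grows with t), and the function names are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel between 1-D point arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, grid, noise=1e-3):
    # Standard GP regression posterior mean and std at each grid point.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, grid)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    # diag of Kss - Ks^T K^-1 Ks; the RBF prior variance is 1 on the diagonal.
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def gp_ucb(f, grid, rounds=30, beta=2.0, noise=1e-3, seed=0):
    # GP-UCB: repeatedly query the maximizer of mean + sqrt(beta) * std.
    rng = np.random.default_rng(seed)
    X = [grid[len(grid) // 2]]                    # arbitrary first query
    y = [f(X[0]) + noise * rng.standard_normal()]
    for _ in range(rounds):
        mu, sigma = gp_posterior(np.array(X), np.array(y), grid, noise)
        x_next = grid[np.argmax(mu + np.sqrt(beta) * sigma)]  # optimism
        X.append(x_next)
        y.append(f(x_next) + noise * rng.standard_normal())
    return np.array(X), np.array(y)

# Example: maximize a smooth unknown function on [0, 1].
X, y = gp_ucb(lambda x: -(x - 0.3) ** 2, np.linspace(0, 1, 101))
best = X[np.argmax(y)]  # should land near the true maximizer 0.3
```

The acquisition step is the heart of the "no regret" analysis: the sqrt(beta) * sigma bonus forces exploration of uncertain regions until their confidence bounds fall below the value of the incumbent.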