Alekh Agarwal's research centers on mathematical optimization, artificial intelligence, convex optimization, machine learning, and regret analysis. His work in mathematical optimization spans several areas, including regularization, function approximation, and reinforcement learning. Within artificial intelligence, his studies of online machine learning, computational learning theory, instance-based learning, and algorithmic learning theory are frequently connected to the notion of components, bridging disciplines.
His convex optimization research combines algorithm design, models of computation, convex functions, and minimax analysis. His machine learning work incorporates elements of classification and search algorithms, and his research on minimization draws on algorithms, least squares, and neural coding.
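For context on the regret and online convex optimization themes above, the standard textbook notion of regret over T rounds, with convex losses f_1, ..., f_T and a convex decision set \mathcal{X} (the notation here is generic, not taken from any specific paper of his), is
\[
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\]
where x_t is the learner's decision at round t; an algorithm is considered good when R_T grows sublinearly in T, so that its average loss approaches that of the best fixed decision in hindsight.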
Alekh Agarwal also works extensively on artificial intelligence, mathematical optimization, reinforcement learning, machine learning, and regret. His artificial intelligence research involving leverage is frequently linked to notions of space, oracles, and baselines, connecting diverse disciplines. His work in mathematical optimization addresses convex optimization, with connections to applied mathematics.
In his studies, computation is strongly linked to theoretical computer science within the broader field of reinforcement learning. His work on regret brings together algorithms, realizability, and combinatorics, and his research on stochastic optimization links convex functions to problems in distributed algorithms.
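As an illustration of how the distributed and stochastic optimization threads meet, a simplified sketch of the dual averaging update analyzed in the paper "Dual Averaging for Distributed Optimization" listed below (the symbols z_t, \psi, and \alpha_t are standard but introduced here only for exposition) is
\[
z_{t+1} = z_t + g_t, \qquad x_{t+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \langle z_{t+1}, x \rangle + \tfrac{1}{\alpha_t}\, \psi(x) \Big\},
\]
where g_t is a (sub)gradient of the loss at x_t, \psi is a strongly convex prox-function, and \alpha_t is a step size; in the distributed variant, each node averages the dual variables z of its network neighbors before applying the update.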
His scientific interests lie mostly in reinforcement learning, artificial intelligence, machine learning, mathematical optimization, and theoretical computer science. His reinforcement learning research includes themes such as state and the Bellman equation. His artificial intelligence work involving leverage is often related to baselines, imitation learning, weighting, and distributions, linking different fields of science.
His machine learning research integrates concerns from other disciplines, such as classification and face-related problems. His mathematical optimization work frequently draws on the adjacent field of function approximation, and his theoretical computer science research incorporates cluster analysis, decoding methods, identifiability, and complements.
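For reference, the Bellman (optimality) equation mentioned above, for a discounted Markov decision process with reward r, transition kernel P, and discount factor \gamma (again, generic textbook notation rather than notation from a particular paper), reads
\[
V^*(s) \;=\; \max_{a} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \Big],
\]
and reinforcement learning with function approximation broadly concerns how well such value functions and the associated optimal policies can be approximated and learned.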
Artificial intelligence, mathematical optimization, reinforcement learning, machine learning, and Markov decision processes are his primary areas of study. His work on leverage overlaps with areas such as bias correction and is multidisciplinary, relying on supervised learning, algorithm design, regret, and realizability.
His mathematical optimization research is interwoven with regularization and the softmax function. His reinforcement learning work deals with themes such as gradient descent and function approximation, and his machine learning research also encompasses classification.
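In reinforcement learning, the softmax function is most often encountered through softmax policy parameterizations; a standard form (the feature map \phi and parameter vector \theta below are illustrative, not drawn from a specific paper) is
\[
\pi_\theta(a \mid s) \;=\; \frac{\exp\big(\theta^\top \phi(s,a)\big)}{\sum_{a'} \exp\big(\theta^\top \phi(s,a')\big)},
\]
where \phi(s, a) is a feature representation of a state-action pair (function approximation) and \theta is typically updated by gradient ascent on the expected return, which is where gradient methods and regularization enter the analysis.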
This overview was generated by a machine learning system which analysed the scientist's body of work.
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
J. C. Duchi;A. Agarwal;M. J. Wainwright.
IEEE Transactions on Automatic Control (2012)
Distributed delayed stochastic optimization
Alekh Agarwal;John C. Duchi.
Conference on Decision and Control (2012)
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
Alekh Agarwal;Sahand N. Negahban;Martin J. Wainwright.
Annals of Statistics (2012)
A Reductions Approach to Fair Classification
Alekh Agarwal;Alina Beygelzimer;Miroslav Dudík;John Langford.
International Conference on Machine Learning (2018)
A reliable effective terascale linear learning system
Alekh Agarwal;Olivier Chapelle;Miroslav Dudík;John Langford.
Journal of Machine Learning Research (2014)
Information-theoretic lower bounds on the oracle complexity of convex optimization
Alekh Agarwal;Peter L. Bartlett;Pradeep Ravikumar;Martin J. Wainwright.
(2010)
Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits
Alekh Agarwal;Daniel Hsu;Satyen Kale;John Langford.
International Conference on Machine Learning (2014)
Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback
Alekh Agarwal;Ofer Dekel;Lin Xiao.
Conference on Learning Theory (2010)
Fast global convergence of gradient methods for high-dimensional statistical recovery
Alekh Agarwal;Sahand N. Negahban;Martin J. Wainwright.
Annals of Statistics (2012)
Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
A. Agarwal;P. L. Bartlett;P. Ravikumar;M. J. Wainwright.
IEEE Transactions on Information Theory (2012)