Ohad Shamir's primary areas of study are Mathematical optimization, Artificial intelligence, Convex optimization, Algorithm and Stochastic optimization. His Mathematical optimization research intersects with topics in Stochastic gradient descent, Learnability and Regret, while his Artificial intelligence work integrates Machine learning, Theoretical computer science and Euclidean space.
His Convex optimization study also incorporates Function and Greedy algorithm. His Algorithm research is multidisciplinary, drawing on Artificial neural network, Norm and Time series, and his Stochastic optimization work combines Distributed algorithm, Type and Applied mathematics.
His primary areas of study are Mathematical optimization, Artificial intelligence, Algorithm, Machine learning and Applied mathematics. His Mathematical optimization research integrates issues from Sampling, Stochastic gradient descent, Regret and Convex optimization, and his Artificial intelligence study includes themes like Theoretical computer science and Pattern recognition.
In his Algorithm research, Function is closely connected to Artificial neural network, Dimension, Lipschitz continuity and Kernel. In Machine learning, his study of Semi-supervised learning overlaps with subjects such as Collaborative filtering. His Applied mathematics study incorporates themes from Regularization, Gradient descent, Optimization problem, Upper and lower bounds and Stationary point.
Ohad Shamir mainly focuses on Artificial neural network, Applied mathematics, Function, Upper and lower bounds and Stochastic gradient descent. His Artificial neural network research draws on Exponential growth, Algorithm, Bounded function and Theoretical computer science, while his Applied mathematics research includes elements of Regularization, Non-convex optimization, Gradient descent, Norm and Stationary point.
His work spans a wide range of further topics, including Manifold, Mathematical optimization, Grassmannian and Convex optimization, with Minimax as the focus of his Mathematical optimization research. In his Stochastic gradient descent research he interconnects Optimization problem and Combinatorics.
Ohad Shamir mainly investigates Artificial neural network, Applied mathematics, Upper and lower bounds, Algorithm and Stochastic gradient descent. His Artificial neural network study intersects with topics such as Exponential growth, Gradient descent and Polynomial, and his Upper and lower bounds research incorporates themes from Logarithm, Stochastic optimization, Minimax and Convex optimization.
His Convex optimization study combines topics from a wide range of disciplines, such as Regularization, Convex function, Numerical analysis and Mathematical optimization. Within Algorithm, his work on Residual neural network, Residual and Linear prediction often relates to Network architecture, connecting several areas of interest. His Stochastic gradient descent work deals with themes such as Optimization problem, Stationary point and Maxima and minima.
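A recurring thread across these areas is first-order stochastic optimization: iterating with noisy gradients and analysing convergence to optima or stationary points. As a purely illustrative sketch (not drawn from any particular paper; the data, step-size schedule and variable names below are hypothetical), the snippet runs stochastic gradient descent with uniform iterate averaging on a simple strongly convex least-squares problem.

```python
import numpy as np

# Purely illustrative sketch (hypothetical problem and parameters):
# stochastic gradient descent with uniform iterate averaging on the
# strongly convex least-squares objective f(w) = E[(x^T w - y)^2] / 2.
rng = np.random.default_rng(0)
d, n = 5, 10_000
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)       # current iterate
w_avg = np.zeros(d)   # running average of iterates
for t in range(1, n + 1):
    i = rng.integers(n)                   # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]       # stochastic gradient at that point
    w = w - grad / t                      # O(1/t) step size (strong convexity)
    w_avg += (w - w_avg) / t              # uniform averaging of iterates

print("||averaged iterate - w_true|| =", np.linalg.norm(w_avg - w_true))
```

This is only a toy example of the kind of setting studied in the papers listed below, where step-size and averaging schemes are analysed far more carefully.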
Optimal distributed online prediction using mini-batches
Ofer Dekel;Ran Gilad-Bachrach;Ohad Shamir;Lin Xiao.
Journal of Machine Learning Research (2012)
Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization
Alexander Rakhlin;Ohad Shamir;Karthik Sridharan.
International Conference on Machine Learning (2012)
The Power of Depth for Feedforward Neural Networks
Ronen Eldan;Ohad Shamir.
Conference on Learning Theory (2016)
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir;Tong Zhang.
International Conference on Machine Learning (2013)
Communication-Efficient Distributed Optimization using an Approximate Newton-type Method
Ohad Shamir;Nati Srebro;Tong Zhang.
International Conference on Machine Learning (2014)
Learnability, Stability and Uniform Convergence
Shai Shalev-Shwartz;Ohad Shamir;Nathan Srebro;Karthik Sridharan.
Journal of Machine Learning Research (2010)
Stochastic Convex Optimization
Shai Shalev-Shwartz;Ohad Shamir;Nathan Srebro;Karthik Sridharan.
Conference on Learning Theory (2009)
On the Computational Efficiency of Training Neural Networks
Roi Livni;Shai Shalev-Shwartz;Ohad Shamir.
Neural Information Processing Systems (2014)
Adaptively Learning the Crowd Kernel
Omer Tamuz;Ce Liu;Serge Belongie;Ohad Shamir.
arXiv: Learning (2011)
Learning and generalization with the information bottleneck
Ohad Shamir;Sivan Sabato;Naftali Tishby.
Theoretical Computer Science (2010)