His primary areas of study are Rate of convergence, Mathematical optimization, Artificial intelligence, Convexity and Machine learning. His Rate of convergence research draws on Polytope, Submodular set function, Subgradient method and Applied mathematics. Within Mathematical optimization, he connects Convergence, Segmentation, Structured prediction and Markov chain.
His work on Convergence touches on Algorithm, which in turn connects to Reproducing kernel Hilbert space, Flow network and Video tracking. His Artificial intelligence research, including Cluster analysis and Unsupervised learning, intersects with The Internet and Set. His Machine learning research incorporates Training set and Robustness.
Simon Lacoste-Julien mainly focuses on Artificial intelligence, Mathematical optimization, Algorithm, Rate of convergence and Machine learning. His Artificial intelligence study integrates Sequence and Pattern recognition. Within Mathematical optimization, his work on the Subgradient method also relates to Quadratic equation, connecting several areas of interest.
His Algorithm research integrates Pairwise comparison, Moment and Reproducing kernel Hilbert space. His Rate of convergence research includes themes of Overhead, Gradient method, Asynchronous communication, Applied mathematics and Speedup. His Machine learning study combines Base, Adaptation, Robustness and Benchmark.
Simon Lacoste-Julien mainly investigates Artificial intelligence, Machine learning, Applied mathematics, Convergence and Artificial neural network. His Artificial intelligence study draws on Causal model, Stochastic optimization and Categorical variable. His Machine learning work on Proxy is frequently linked to Variable, connecting diverse disciplines.
His Applied mathematics study of Bregman divergence is often connected to Convex optimization and Operator. His Convergence research includes themes of Mathematical optimization and Constraint, and his Mathematical optimization work combines Stochastic gradient descent and Constant.
Simon Lacoste-Julien mainly focuses on Applied mathematics, Gradient descent, Bilinear interpolation, Convex optimization and Convergence. His Applied mathematics work also draws on Convex function. His Gradient descent research brings together Real line, Adversarial machine learning, Complex plane and Strongly monotone.
His Bilinear interpolation study combines Class, Stationary point and Hamiltonian. His Convergence research spans Mathematical optimization, Stochastic gradient descent, Newton's method and Training set. His Interpolation research integrates Binary classification, Rate of convergence, the Broyden–Fletcher–Goldfarb–Shanno algorithm and Hessian matrix.
This overview was generated by a machine learning system which analysed the scientist's body of work.
A closer look at memorization in deep networks
Devansh Arpit;Stanisław Jastrzębski;Nicolas Ballas;David Krueger.
International Conference on Machine Learning (2017)
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio;Francis Bach;Simon Lacoste-Julien.
Neural Information Processing Systems (2014)
DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification
Simon Lacoste-Julien;Fei Sha;Michael I. Jordan.
Neural Information Processing Systems (2008)
Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
Simon Lacoste-Julien;Martin Jaggi;Mark Schmidt;Patrick Pletscher.
International Conference on Machine Learning (2013)
A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien;Mark W. Schmidt;Francis R. Bach.
arXiv: Learning (2012)
École Normale Supérieure
Facebook (United States)
French Institute for Research in Computer Science and Automation - INRIA
Czech Technical University in Prague
University of British Columbia
University of Montreal
University of California, Berkeley
Polytechnique Montréal
Google (United States)
University of Cambridge
French Institute for Research in Computer Science and Automation - INRIA
Publications: 20