2019 - Fellow of the Alfred P. Sloan Foundation
His primary scientific interests are in Artificial intelligence, Reinforcement learning, Robot, Artificial neural network and Machine learning. His research in Artificial intelligence tackles topics such as Computer vision, which relate to areas like Animation. His Reinforcement learning research is multidisciplinary, relying on the Principle of maximum entropy, Representation, Control, Benchmark and Function.
His Robot study combines topics in areas such as Object, Motion, Pixel and Human–computer interaction. In his work, Trust region is inextricably linked to Nonlinear system, which falls within the broad field of Artificial neural network. His studies in Machine learning integrate themes from fields like Adversarial system and Adaptation.
The scientist’s investigation covers issues in Reinforcement learning, Artificial intelligence, Robot, Machine learning and Human–computer interaction. In his work, Mathematical optimization is strongly linked to Artificial neural network, which falls under the umbrella field of Reinforcement learning. His research on Artificial intelligence often connects to related areas such as Computer vision.
His Robot study incorporates themes from Object, Control engineering, Task and Set. His study of Leverage, within the domain of Machine learning, overlaps with other disciplines such as Space. His work in the field of Human–computer interaction brings together fields of science such as Motion and Imitation.
His main research concerns Reinforcement learning, Artificial intelligence, Machine learning, Robot and Human–computer interaction. His research investigates the link between Reinforcement learning and topics such as Set that intersect with problems in Structure. He is involved in several facets of Artificial intelligence study, as seen in his work on Robotics, Variety, Benchmark, Range and Deep learning.
His Machine learning study also extends into related fields.
Sergey Levine spends much of his time researching Artificial intelligence, Reinforcement learning, Machine learning, Robot and Human–computer interaction. His Artificial intelligence study deals with Code, intersecting with Representation. His Reinforcement learning research includes themes of Supervised learning, Robotics and Set.
His research in Machine learning intersects with topics in Control, Training set and Benchmark. Sergey Levine interconnects Artificial neural network, Modality and Computer vision in investigating issues within Robot. His Artificial neural network study incorporates themes from Motion planning, Image translation, Domain knowledge, Cognitive map and Pattern recognition.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
Trust Region Policy Optimization
John Schulman;Sergey Levine;Pieter Abbeel;Michael Jordan.
International Conference on Machine Learning (2015)
End-to-end training of deep visuomotor policies
Sergey Levine;Chelsea Finn;Trevor Darrell;Pieter Abbeel.
Journal of Machine Learning Research (2016)
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection
Sergey Levine;Peter Pastor;Alex Krizhevsky;Julian Ibarz.
The International Journal of Robotics Research (2018)
Model-agnostic meta-learning for fast adaptation of deep networks
Chelsea Finn;Pieter Abbeel;Sergey Levine.
International Conference on Machine Learning (2017)
Guided Policy Search
Sergey Levine;Vladlen Koltun.
International Conference on Machine Learning (2013)
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
Shixiang Gu;Ethan Holly;Timothy Lillicrap;Sergey Levine.
International Conference on Robotics and Automation (2017)
High-Dimensional Continuous Control Using Generalized Advantage Estimation
John Schulman;Philipp Moritz;Sergey Levine;Michael Jordan.
arXiv: Learning (2015)
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Tuomas Haarnoja;Aurick Zhou;Pieter Abbeel;Sergey Levine.
arXiv: Learning (2018)
Soft Actor-Critic Algorithms and Applications
Tuomas Haarnoja;Aurick Zhou;Kristian Hartikainen;George Tucker.
arXiv: Learning (2018)
Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.
University of California, Berkeley
Google (United States)
University of California, Berkeley
University of California, Berkeley
University of Montreal
University of California, Berkeley
University College London
Intel (United States)
University of Illinois at Urbana-Champaign
University of California, Berkeley