
H-Index & Metrics

Discipline: Computer Science
D-index: 116
Citations: 59,392
Publications: 375
World Ranking: 64
National Ranking: 41
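The D-index reported above is Research.com's discipline-specific variant of the h-index. As a rough illustration only (not Research.com's exact methodology), a minimal sketch of the standard h-index computation from a list of per-paper citation counts might look like this; the function name `h_index` is chosen here for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    # Sort citation counts from highest to lowest.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        # The paper at position `rank` still has at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h
```

For example, a researcher whose five papers have 10, 8, 5, 4 and 3 citations has an h-index of 4: four papers with at least four citations each.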

Research.com Recognitions

Awards & Achievements

2019 - Research Fellow, Alfred P. Sloan Foundation

Overview

What is he best known for?

The fields of study he is best known for:

  • Artificial intelligence
  • Machine learning
  • Artificial neural network

His primary scientific interests are Artificial intelligence, Reinforcement learning, Robot, Artificial neural network and Machine learning. His research in Artificial intelligence tackles topics such as Computer vision, which relates to areas like Animation. His Reinforcement learning research is multidisciplinary, drawing on Principle of maximum entropy, Representation, Control, Benchmark and Function.

His Robot study combines topics in areas such as Object, Motion, Pixel and Human–computer interaction. In his study, Trust region is inextricably linked to Nonlinear system, which falls within the broad field of Artificial neural network. His studies in Machine learning integrate themes in fields like Adversarial system and Adaptation.

His most cited works include:

  • Model-agnostic meta-learning for fast adaptation of deep networks (1969 citations)
  • Trust Region Policy Optimization (1849 citations)
  • End-to-end training of deep visuomotor policies (1663 citations)

What are the main themes of his work throughout his whole career to date?

The scientist’s investigation covers issues in Reinforcement learning, Artificial intelligence, Robot, Machine learning and Human–computer interaction. In his study, Mathematical optimization is strongly linked to Artificial neural network, which falls under the umbrella field of Reinforcement learning. His research on Artificial intelligence often connects related areas such as Computer vision.

His Robot study incorporates themes from Object, Control engineering, Task and Set. His study in the fields of Leverage under the domain of Machine learning overlaps with other disciplines such as Space. His work carried out in the field of Human–computer interaction brings together such families of science as Motion and Imitation.

He most often published in these fields:

  • Reinforcement learning (63.62%)
  • Artificial intelligence (64.30%)
  • Robot (35.03%)

What were the highlights of his more recent work (between 2019 and 2021)?

In recent papers he focused on the following fields of study:

  • Reinforcement learning (63.62%)
  • Artificial intelligence (64.30%)
  • Machine learning (32.99%)

His main research concerns Reinforcement learning, Artificial intelligence, Machine learning, Robot and Human–computer interaction. His research investigates the link between Reinforcement learning and topics such as Set that cross with problems in Structure. He is involved in several facets of Artificial intelligence study, as seen in his work on Robotics, Variety, Benchmark, Range and Deep learning.

His Machine learning study also includes fields such as Training set, which has a strong connection to Meta learning, and Robustness, which has a strong connection to Minimax. His work deals with themes such as Artificial neural network, Adaptation and Image translation, which intersect with Robot. Sergey Levine has included themes like Object and Mobile robot in his Human–computer interaction study.

Between 2019 and 2021, his most popular works were:

  • Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems (128 citations)
  • Gradient Surgery for Multi-Task Learning (57 citations)
  • D4RL: Datasets for Deep Data-Driven Reinforcement Learning (55 citations)

In his most recent research, the most cited papers focused on:

  • Artificial intelligence
  • Machine learning
  • Algebra

Sergey Levine spends much of his time researching Artificial intelligence, Reinforcement learning, Machine learning, Robot and Human–computer interaction. His Artificial intelligence study deals with Code, intersecting with Representation. His Reinforcement learning research includes themes of Supervised learning, Robotics and Set.

His research in Machine learning intersects with topics in Control, Training set and Benchmark. Sergey Levine interconnects Artificial neural network, Modality and Computer vision in the investigation of issues within Robot. His Artificial neural network study incorporates themes from Motion planning, Image translation, Domain knowledge, Cognitive map and Pattern recognition.

This overview was generated by a machine learning system which analysed the scientist’s body of work.

Best Publications

Trust Region Policy Optimization

John Schulman;Sergey Levine;Pieter Abbeel;Michael Jordan.
international conference on machine learning (2015)

2675 Citations

Trust Region Policy Optimization

John Schulman;Sergey Levine;Philipp Moritz;Michael I. Jordan.
arXiv: Learning (2015)

2095 Citations

End-to-end training of deep visuomotor policies

Sergey Levine;Chelsea Finn;Trevor Darrell;Pieter Abbeel.
Journal of Machine Learning Research (2016)

1742 Citations

Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection

Sergey Levine;Peter Pastor;Alex Krizhevsky;Julian Ibarz.
The International Journal of Robotics Research (2018)

1505 Citations

Model-agnostic meta-learning for fast adaptation of deep networks

Chelsea Finn;Pieter Abbeel;Sergey Levine.
international conference on machine learning (2017)

1250 Citations

Guided Policy Search

Sergey Levine;Vladlen Koltun.
international conference on machine learning (2013)

867 Citations

Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates

Shixiang Gu;Ethan Holly;Timothy Lillicrap;Sergey Levine.
international conference on robotics and automation (2017)

821 Citations

High-Dimensional Continuous Control Using Generalized Advantage Estimation

John Schulman;Philipp Moritz;Sergey Levine;Michael Jordan.
arXiv: Learning (2015)

623 Citations

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

Tuomas Haarnoja;Aurick Zhou;Pieter Abbeel;Sergey Levine.
arXiv: Learning (2018)

561 Citations

Soft Actor-Critic Algorithms and Applications

Tuomas Haarnoja;Aurick Zhou;Kristian Hartikainen;George Tucker.
arXiv: Learning (2018)

519 Citations


Best Scientists Citing Sergey Levine

Pieter Abbeel
University of California, Berkeley
Publications: 105

Nicolas Heess
Google (United States)
Publications: 69

Dieter Fox
University of Washington
Publications: 57

Ken Goldberg
University of California, Berkeley
Publications: 56

Jan Peters
TU Darmstadt
Publications: 55

Yoshua Bengio
University of Montreal
Publications: 47

Mykel J. Kochenderfer
Stanford University
Publications: 43

Abhinav Gupta
Facebook (United States)
Publications: 42

Li Fei-Fei
Stanford University
Publications: 42

Joshua B. Tenenbaum
MIT
Publications: 40

Anca D. Dragan
University of California, Berkeley
Publications: 40

Honglak Lee
University of Michigan–Ann Arbor
Publications: 39

Shimon Whiteson
University of Oxford
Publications: 38

Masayoshi Tomizuka
University of California, Berkeley
Publications: 36

David Silver
Google (United States)
Publications: 35

Dinesh Manocha
University of Maryland, College Park
Publications: 33
