2004 - Fellow of the Alfred P. Sloan Foundation
Emanuel Todorov's research centers on Control theory, Optimal control, Mathematical optimization, Reinforcement learning and Neuroscience. His Control theory work incorporates elements of Humanoid robot, Linear model and Task research, and his study of Humanoid robot brings together Generalized coordinates, Compiler, Physics engine and Computation.
Within Optimal control, his research concentrates on Linear-quadratic-Gaussian control, and his work on the Bellman equation also touches on topics such as Torso. In Neuroscience, his studies of Motor cortex and Muscle activation are frequently linked to Population, Control and Movement, connecting several areas of study.
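For readers unfamiliar with the terminology above, the following is a standard discrete-time linear-quadratic-Gaussian (LQG) formulation with full state observation, written in generic notation rather than taken from any particular paper: the optimal controller is a linear feedback law obtained from a backward Riccati recursion.

\[
x_{t+1} = A x_t + B u_t + w_t, \qquad w_t \sim \mathcal{N}(0, \Sigma_w),
\]
\[
J = \mathbb{E}\Big[\, x_T^\top Q_T x_T + \sum_{t=0}^{T-1} \big( x_t^\top Q x_t + u_t^\top R u_t \big) \Big],
\]
\[
u_t^{*} = -L_t x_t, \qquad L_t = \big(R + B^\top S_{t+1} B\big)^{-1} B^\top S_{t+1} A,
\]
\[
S_t = Q + A^\top S_{t+1} \big(A - B L_t\big), \qquad S_T = Q_T.
\]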
Emanuel Todorov also devotes much of his research to Control theory, Optimal control, Mathematical optimization, Artificial intelligence and Robot. His Control theory research centers on Trajectory, while his Optimal control work combines Iterative method, Control theory, Task and Nonlinear system.
His Task research draws on Motor synergies, Face, Sensorimotor control, Neuroscience and Range. Within Mathematical optimization, his work on Stochastic control, the Bellman equation and Optimization problem is frequently connected to Convex optimization and Markov decision process. His Artificial intelligence research spans Work, Machine learning and Trajectory optimization.
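As context for the Markov decision process and Bellman equation topics mentioned above, here is the generic Bellman optimality equation for a discrete-time stochastic control problem, written in cost-to-go form with generic symbols (not a formulation from a specific paper):

\[
v^{*}(x) = \min_{u} \Big[\, \ell(x, u) + \gamma \sum_{x'} p(x' \mid x, u)\, v^{*}(x') \Big], \qquad \gamma \in (0, 1],
\]

where \(\ell(x,u)\) is the immediate cost, \(p(x' \mid x, u)\) the transition probability, and the optimal policy selects the minimizing control \(u\) at each state.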
Emanuel Todorov further focuses on Artificial intelligence, Robot, Reinforcement learning, Trajectory optimization and Machine learning. His work on Humanoid robot, part of his broader Robot research, is typically connected to Motion capture, and his Trajectory optimization research falls under the wider topic of Trajectory.
His Control engineering research incorporates Solver and Optimal control. His biological studies touch on Control theory, which in turn connects to Invertible matrix, and his Optimal control work is interwoven with Linear model, Face and Robot control.
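The trajectory-optimization theme above can be illustrated with a very small example. The sketch below uses single shooting on a made-up double-integrator system with a generic SciPy solver; the dynamics, cost weights and horizon are all assumptions chosen for illustration, not a reproduction of any published method.

```python
# Minimal single-shooting trajectory optimization sketch (illustrative only):
# find an open-loop control sequence that drives a point mass to a target.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 0.1, 30            # time step and number of control steps (assumed)
target = np.array([1.0, 0.0])    # desired final [position, velocity]

def rollout(controls):
    """Simulate double-integrator dynamics under a control sequence."""
    x = np.zeros(2)              # state: [position, velocity]
    states = [x]
    for u in controls:
        x = x + dt * np.array([x[1], u])   # Euler step: pos += v*dt, vel += u*dt
        states.append(x)
    return np.array(states)

def cost(controls):
    """Quadratic penalty on final-state error plus control effort."""
    err = rollout(controls)[-1] - target
    return 100.0 * err @ err + 0.1 * controls @ controls

# Optimize the control sequence with a generic gradient-based solver.
result = minimize(cost, np.zeros(horizon), method="L-BFGS-B")
print("final state:", rollout(result.x)[-1])
```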
Emanuel Todorov mainly investigates Robot, Artificial intelligence, Reinforcement learning, Trajectory optimization and Machine learning. His Robot research focuses on Simulation and, on occasion, Software framework, Software, Robot kinematics and ODE, while his Reinforcement learning research integrates Dexterous manipulation and Human-computer interaction.
He combines Work and Control with his study of Trajectory optimization. In Machine learning, he concentrates on Dynamical systems theory and, often, Recurrent neural network. His Optimal control study fits into the bigger picture of Control theory.
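Much of the work above involves physics simulation of robots (the MuJoCo engine appears in the publication list below). As a purely illustrative sketch, the following loads a toy one-joint model with the open-source `mujoco` Python bindings, which postdate the text above, and steps it forward under a constant control; the XML model is an assumption made up for this example.

```python
# Illustrative sketch: step a toy model with the open-source `mujoco` bindings.
import mujoco

TOY_XML = """
<mujoco>
  <worldbody>
    <body name="cart">
      <joint name="slide_x" type="slide" axis="1 0 0"/>
      <geom type="box" size="0.1 0.1 0.1" mass="1.0"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="slide_x" gear="1"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TOY_XML)  # compile the model
data = mujoco.MjData(model)                      # simulation state

data.ctrl[0] = 1.0                 # constant force on the single actuator
for _ in range(100):               # advance 100 steps of model.opt.timestep
    mujoco.mj_step(model, data)

print("position:", data.qpos[0], "velocity:", data.qvel[0])
```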
Optimal feedback control as a theory of motor coordination.
Emanuel Todorov;Michael I. Jordan.
Nature Neuroscience (2002)
MuJoCo: A physics engine for model-based control
Emanuel Todorov;Tom Erez;Yuval Tassa.
Intelligent Robots and Systems (2012)
Optimality principles in sensorimotor control
Emanuel Todorov.
Nature Neuroscience (2004)
A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems
E. Todorov;Weiwei Li.
American Control Conference (2005)
Synthesis and stabilization of complex behaviors through online trajectory optimization
Yuval Tassa;Tom Erez;Emanuel Todorov.
Intelligent Robots and Systems (2012)
Iterative Linear Quadratic Regulator Design for Nonlinear Biological Movement Systems
Weiwei Li;Emanuel Todorov.
International Conference on Informatics in Control, Automation and Robotics (2004)
Direct cortical control of muscle activation in voluntary arm movements: a model.
Emanuel Todorov.
Nature Neuroscience (2000)
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran;Vikash Kumar;Abhishek Gupta;Giulia Vezzani.
Robotics: Science and Systems (2018)
Evidence for the Flexible Sensorimotor Strategies Predicted by Optimal Feedback Control
Dan Liu;Emanuel Todorov.
The Journal of Neuroscience (2007)
Discovery of complex behaviors through contact-invariant optimization
Igor Mordatch;Emanuel Todorov;Zoran Popović.
International Conference on Computer Graphics and Interactive Techniques (2012)