Martin Riedmiller's research focuses on Artificial intelligence, Reinforcement learning, Machine learning, Convolutional neural network and Artificial neural network. Much of his work on Artificial intelligence also applies to Pattern recognition. His Reinforcement learning studies span Task, Function, Control, Set and Robot.
His Set research includes themes of Variety, Temporal difference learning and Sensory processing. His Machine learning studies connect the Bellman equation with Learning environment, and his Artificial neural network research centres mainly on Supervised learning.
His primary areas of investigation include Artificial intelligence, Reinforcement learning, Machine learning, Robot and Artificial neural network. His Artificial intelligence research incorporates perspectives from Task and Computer vision, and his Reinforcement learning work intersects with Control, Markov decision process, Mathematical optimization and Set.
His Machine learning research incorporates elements of Function and Benchmark, while his work on Robot touches on Range, Simulation and Human–computer interaction. His Artificial neural network studies draw on Intelligent control, Control theory and the Bellman equation.
His primary scientific interests are Reinforcement learning, Artificial intelligence, Mathematical optimization, Robot and Control. His Reinforcement learning research spans Kullback–Leibler divergence, Set, Human–computer interaction and Hyperparameter, and his Artificial intelligence studies combine Machine learning, Scratch and Task.
His Mathematical optimization work deals with themes of Sample and Robustness. Within Robot, he focuses on Inference and, on occasion, Hindsight bias, Backpropagation, Dynamic programming and Embedding. His Control research incorporates Tree, Computation, Local search and Adaptation.
His investigations also cover Reinforcement learning, Artificial intelligence, Inference, Kullback–Leibler divergence and Maximum a posteriori estimation. His Reinforcement learning research integrates Programming language and Physics engine, and his Artificial intelligence studies draw on Scratch and Set.
His Set research intersects with Latent variable and Parameterized complexity. His Inference work combines Graph, Theoretical computer science, Trajectory optimization and System identification. His Kullback–Leibler divergence research involves Parametric statistics, Mathematical optimization, Premature convergence, Hyperparameter and Robustness.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Human-level control through deep reinforcement learning
Volodymyr Mnih;Koray Kavukcuoglu;David Silver;Andrei A. Rusu.
Nature (2015)
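The paper above introduced the deep Q-network (DQN). As a rough illustration (not the authors' implementation), the core update regresses the online Q-function toward a bootstrapped target computed with a frozen copy of the parameters; the sketch below stands in a hypothetical linear Q-function and synthetic transitions for the paper's convolutional network and replay memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny setting: 4-dim states, 2 actions, linear Q(s, a) = s @ W[:, a].
n_features, n_actions = 4, 2
W = rng.normal(scale=0.1, size=(n_features, n_actions))  # online weights
W_target = W.copy()                                      # frozen target-network copy
gamma, lr = 0.99, 0.05

def q_values(states, weights):
    return states @ weights                              # shape (batch, n_actions)

# One DQN-style update on a synthetic batch of transitions (s, a, r, s').
s = rng.normal(size=(32, n_features))
a = rng.integers(0, n_actions, size=32)
r = rng.normal(size=32)
s_next = rng.normal(size=(32, n_features))

# Bootstrapped target: r + gamma * max_a' Q_target(s', a').
target = r + gamma * q_values(s_next, W_target).max(axis=1)
pred = q_values(s, W)[np.arange(32), a]
td_error = target - pred

# Semi-gradient step on the squared TD error: only the chosen
# action's weight column receives each sample's error signal.
for i in range(32):
    W[:, a[i]] += lr * td_error[i] * s[i] / 32
```

In the full method the target weights are refreshed from the online weights only every few thousand updates, which stabilises the bootstrapped regression.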
Playing Atari with Deep Reinforcement Learning
Volodymyr Mnih;Koray Kavukcuoglu;David Silver;Alex Graves.
arXiv: Learning (2013)
A direct adaptive method for faster backpropagation learning: the RPROP algorithm
Martin Riedmiller;Heinrich Braun.
IEEE International Conference on Neural Networks (1993)
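RPROP, introduced in the paper above, adapts an individual step size for every weight using only the sign of the partial derivative: the step grows while the sign stays stable and shrinks when it flips. A minimal sketch of the Rprop− variant on a toy quadratic (the function names and toy objective are illustrative, not from the paper):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop- update: grow each weight's step where the gradient sign
    is stable, shrink it where the sign flips, then move by -sign(grad)*step."""
    same_sign = grad * prev_grad
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(same_sign < 0, 0.0, grad)  # Rprop-: skip the move after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy problem: minimize f(w) = 0.5 * ||w - target||^2.
target = np.array([3.0, -2.0])
w = np.zeros(2)
prev_grad = np.zeros(2)
step = np.full(2, 0.1)
for _ in range(100):
    grad = w - target          # gradient of the toy objective
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
```

Because only the sign of the gradient is used, the update is insensitive to the scale of the error function, which is what made RPROP a popular drop-in replacement for plain backpropagation.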
Deterministic Policy Gradient Algorithms
David Silver;Guy Lever;Nicolas Heess;Thomas Degris.
International Conference on Machine Learning (2014)
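The deterministic policy gradient result in the paper above can be stated compactly: for a deterministic policy $\mu_\theta$ with action-value function $Q^{\mu}$, the gradient of the expected return $J$ moves the policy parameters in the direction that increases $Q$:

```latex
\nabla_\theta J(\theta)
  = \mathbb{E}_{s \sim \rho^{\mu}}
    \left[ \nabla_\theta \mu_\theta(s)\,
           \nabla_a Q^{\mu}(s, a)\big|_{a=\mu_\theta(s)} \right]
```

where $\rho^{\mu}$ denotes the discounted state distribution induced by following $\mu_\theta$.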
Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg;Alexey Dosovitskiy;Thomas Brox;Martin A. Riedmiller.
International Conference on Learning Representations (2015)
Neural Fitted Q Iteration – First Experiences with a Data Efficient Neural Reinforcement Learning Method
Martin Riedmiller.
European Conference on Machine Learning (2005)
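Neural Fitted Q Iteration is batch-mode reinforcement learning: a fixed set of experienced transitions is reused, and in each iteration the Q-function is refit by supervised learning against bootstrapped targets. The sketch below substitutes a lookup table for the paper's neural network (which NFQ trains with Rprop) and uses a hypothetical five-state chain task:

```python
import numpy as np

gamma = 0.95
n_states, n_actions = 5, 2        # chain 0..4; action 0 = left, 1 = right
terminal = n_states - 1           # reaching state 4 ends the episode with reward 1

def transition(s, a):
    s2 = min(s + 1, terminal) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == terminal else 0.0
    return s2, r

# Fixed batch of experienced transitions: every (state, action) pair once.
D = [(s, a, *transition(s, a)) for s in range(terminal) for a in range(n_actions)]

# Fitted Q iteration: repeatedly refit the whole Q-function against
# bootstrapped targets computed from the previous fit.
Q = np.zeros((n_states, n_actions))
for _ in range(50):
    Q_new = np.zeros_like(Q)
    for s, a, s2, r in D:
        target = r if s2 == terminal else r + gamma * Q[s2].max()
        Q_new[s, a] = target      # table = exact "regression", one sample per (s, a)
    Q = Q_new
```

After convergence the greedy policy walks right toward the terminal state; in the paper the tabular refit is replaced by retraining a multilayer perceptron on the full pattern set each iteration, which is what makes the method data efficient.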
University of Freiburg
DeepMind (United Kingdom)
Google (United States)