Her main research concerns Artificial intelligence, Reinforcement learning, Machine learning, Variation and Feature. Within Artificial intelligence, Doina Precup examines areas including Curiosity and Pattern recognition. Her work deals with themes such as Information theory, Markov decision process, Algorithm, Randomness and Coding, which intersect with Reinforcement learning.
Her work on Markov decision processes, including the Partially observable Markov decision process, intersects with areas such as the Dirichlet distribution. She has researched Machine learning in several fields, including Reproducibility, Key and Benchmark, and she combines subjects such as Representation and Measure with her study of Variation.
Doina Precup spends much of her time researching Artificial intelligence, Reinforcement learning, Machine learning, Mathematical optimization and Markov decision process. Her work in Artificial intelligence brings together Markov process, State, Computer vision and Pattern recognition. Her Pattern recognition study is interwoven with issues in Probabilistic logic and Feature.
Her study examines the relationship between Reinforcement learning and topics such as Theoretical computer science, which overlap with Graph. Her Dynamic programming study, within Mathematical optimization, interacts with subjects such as Set. In her work on Markov decision processes, she concentrates on the Bellman equation, frequently in connection with Q-learning.
Doina Precup mostly deals with Reinforcement learning, Artificial intelligence, Machine learning, Theoretical computer science and Artificial neural network. Within Reinforcement learning, her work on Temporal difference learning is often linked to Variance, combining diverse domains of study. Her Machine learning research incorporates elements of Segmentation, Sample and Realization.
Her Theoretical computer science research includes themes of Graph, Embedding, Value and Generalization error. Her Artificial neural network research incorporates themes from Inference, Training set, Simulation and Affine transformation. Her Mathematical optimization study combines topics such as Upper and lower bounds and Multi-agent system.
Her investigations cover Reinforcement learning, Artificial intelligence, Theoretical computer science, Artificial neural network and Leverage. Her Reinforcement learning study covers State space, which intersects with Invariant. Her research combines Machine learning and Artificial intelligence.
Her Theoretical computer science study includes themes such as Value, Krylov subspace, Activation function and Graph. In her work on Artificial neural networks, she concentrates on Robustness, intersecting with Regularization. Her Leverage research focuses on subjects such as Deep learning, which are linked to Decision problem and Linear combination.
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
Bjoern H. Menze;Andras Jakab;Stefan Bauer;Jayashree Kalpathy-Cramer.
IEEE Transactions on Medical Imaging (2015)
Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning
Richard S. Sutton;Doina Precup;Satinder Singh.
Artificial Intelligence (1999)
Deep Reinforcement Learning That Matters
Peter Henderson;Riashat Islam;Philip Bachman;Joelle Pineau.
National Conference on Artificial Intelligence (2017)
The Option-Critic Architecture
Pierre-Luc Bacon;Jean Harb;Doina Precup.
National Conference on Artificial Intelligence (2016)
Eligibility Traces for Off-Policy Policy Evaluation
Doina Precup;Richard S. Sutton;Satinder P. Singh.
International Conference on Machine Learning (2000)
Fast gradient-descent methods for temporal-difference learning with linear function approximation
Richard S. Sutton;Hamid Reza Maei;Doina Precup;Shalabh Bhatnagar.
International Conference on Machine Learning (2009)
Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction
Richard S. Sutton;Joseph Modayil;Michael Delp;Thomas Degris.
Adaptive Agents and Multi-Agent Systems (2011)
Off-Policy Temporal Difference Learning with Function Approximation
Doina Precup;Richard S. Sutton;Sanjoy Dasgupta.
International Conference on Machine Learning (2001)
Off-Policy Deep Reinforcement Learning without Exploration
Scott Fujimoto;David Meger;Doina Precup.
International Conference on Machine Learning (2019)
Learning Options in Reinforcement Learning
Martin Stolle;Doina Precup.
Symposium on Abstraction, Reformulation and Approximation (2002)
McGill University
University of Alberta
University of Michigan–Ann Arbor
Technion – Israel Institute of Technology
University of Montreal
Montreal Neurological Institute and Hospital
DeepMind (United Kingdom)
Facebook (United States)
London South Bank University
Absolute Ventures
European Bioinformatics Institute
Universidade Federal de Santa Catarina
University of Basel
University of Copenhagen
Agricultural Research Service
University of Nebraska–Lincoln
University of Wisconsin–Milwaukee
Leiden University Medical Center
University of Orléans
Norwegian School of Sport Sciences
University of California, Santa Barbara
Bangor University
University of Oxford
East Carolina University