2007 - Fellow of the American Association for the Advancement of Science (AAAS)
2002 - ACM Fellow, for contributions to machine learning.
1994 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), for contributions to the science and practice of machine learning, the methodology of machine learning research, and service to the AI community.
His scientific interests lie mainly in Artificial intelligence, Machine learning, Instance-based learning, Generalization and Pattern recognition. Within Artificial intelligence, his work focuses on Algorithmic learning theory, Reinforcement learning, Overfitting, Boosting and Robustness. His research on Boosting connects with related methods such as BrownBoost, Gradient boosting and Decision trees.
His work in Machine learning addresses topics such as Classifiers, which are closely tied to Training sets. His studies of Instance-based learning deal with areas such as Objects, Inductive transfer, Multi-instance multi-label learning and Feature vectors. His work in Pattern recognition, including Euclidean distance, intersects with areas such as Gaussian distributions.
His primary scientific interests are in Artificial intelligence, Machine learning, Reinforcement learning, Pattern recognition and Data mining. His Artificial intelligence research is multidisciplinary, drawing on both Computer vision and Natural language processing. Many of his Machine learning studies also apply to Robot learning.
His Reinforcement learning research incorporates elements of Mathematical optimization, the Bellman equation and State representations. Within Mathematical optimization, Thomas G. Dietterich concentrates on Markov decision processes and, in some cases, Markov chains. His work in Pattern recognition is interdisciplinary, drawing on Contextual image classification, Histograms and the Cognitive neuroscience of visual object recognition.
Artificial intelligence, Anomaly detection, Machine learning, Robustness and Markov decision processes are his primary areas of study. His Artificial intelligence studies center on Artificial neural networks and Variety. His Anomaly detection research is multidisciplinary, incorporating elements of Deep learning, Anomalies and Outliers.
Much of his Machine learning work explores its relationship to Benchmarks. His Benchmark studies combine topics such as Training sets, Baselines and Test sets. His Markov decision process research is interwoven with Visualization, Monte Carlo methods, Mathematical optimization and Reinforcement learning.
Artificial intelligence, Robustness, Anomaly detection, Machine learning and Artificial neural networks are his primary areas of study. His work investigates how Anomaly detection and Deep learning connect with Pattern recognition, Outliers, Data modeling, Field and Relation, and other disciplines. His Machine learning work on Test sets is often related to Fraction, linking different fields of science.
His work on Artificial neural networks intersects with themes such as Variety, Benchmarking, Generative grammar and Data science. In his Benchmarking research, Classifiers are strongly related to Residual neural networks. He has also included themes such as Baselines and Training sets in his Benchmark studies.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Ensemble Methods in Machine Learning
Thomas G. Dietterich.
Multiple Classifier Systems (2000)
Approximate statistical tests for comparing supervised classification learning algorithms
Thomas G. Dietterich.
Neural Computation (1998)
Solving multiclass learning problems via error-correcting output codes
Thomas G. Dietterich; Ghulum Bakiri.
Journal of Artificial Intelligence Research (1994)
An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization
Thomas G. Dietterich.
Machine Learning (2000)
Solving the multiple instance problem with axis-parallel rectangles
Thomas G. Dietterich; Richard H. Lathrop; Tomás Lozano-Pérez.
Artificial Intelligence (1997)
Introduction to Semi-Supervised Learning
Xiaojin Zhu; Andrew B. Goldberg; Ronald Brachman; Thomas Dietterich.
(2009)
Machine-Learning Research
Thomas G. Dietterich.
AI Magazine (1997)
Hierarchical reinforcement learning with the MAXQ value function decomposition
Thomas G. Dietterich.
Journal of Artificial Intelligence Research (2000)
Learning with many irrelevant features
Hussein Almuallim; Thomas G. Dietterich.
National Conference on Artificial Intelligence (1991)
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks; Thomas G. Dietterich.
International Conference on Learning Representations (2019)
Oregon State University
University of Washington
University of California, Santa Cruz
Imperial College London
Oregon State University
KU Leuven
George Mason University
University of North Texas
University of Tehran
Ludwig-Maximilians-Universität München
The Ohio State University
Dalhousie University
University of Maryland, College Park
University of Prince Edward Island
Institut Pasteur
Keio University
Pacific Northwest National Laboratory
Copenhagen University Hospital
Institut Universitaire de France
Royal Netherlands Meteorological Institute
National Institute of Allergy and Infectious Diseases
Johns Hopkins University
University of California, Irvine