2023 - Research.com Computer Science in Canada Leader Award
2022 - Research.com Best Scientist Award
2022 - Research.com Computer Science in Canada Leader Award
2018 - A. M. Turing Award, for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
2016 - Member of the National Academy of Engineering, for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision.
2016 - IEEE/RSE Wolfson James Clerk Maxwell Medal, for pioneering and sustained contributions to machine learning, including developments in deep neural networks.
2014 - IEEE Frank Rosenblatt Award
2003 - Fellow of the American Academy of Arts and Sciences
1998 - Fellow of the Royal Society, United Kingdom
1996 - Fellow of the Royal Society of Canada Academy of Science
1990 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI)
His primary areas of investigation include Artificial intelligence, Artificial neural network, Machine learning, Pattern recognition and Speech recognition. Artificial intelligence and Layer are two areas in which he pursues interdisciplinary research. His Artificial neural network research is multidisciplinary, incorporating elements of Dropout, Task, Mixture model, Convolutional neural network and Generative model.
His work on Convolutional neural network tackles topics such as Regularization, which relate to areas like Similarity learning, Vanishing gradient problem, Feature learning and Fisher vector. His study of Bayesian networks spans a wide range of topics, including Variable-order Bayesian network, Bayesian statistics, Sigmoid function and Frequentist inference. His research in Speech recognition focuses on subjects like Time delay neural network, which connect to Frame and Margin.
His main research concerns Artificial intelligence, Artificial neural network, Pattern recognition, Machine learning and Algorithm. His Artificial intelligence work frequently involves adjacent topics such as Computer vision, and his Artificial neural network study combines Speech recognition and Generalization.
His Pattern recognition studies deal with areas such as Image and Deep belief network. His Machine learning research incorporates perspectives from Structure, Probabilistic logic and Inference, while his Algorithm study draws on a wide range of disciplines, including Latent variable, Simple, Theoretical computer science and Mathematical optimization.
Geoffrey E. Hinton focuses on Artificial intelligence, Artificial neural network, Pattern recognition, Machine learning and Algorithm. Within Artificial intelligence, his primary interests are Image, Representation, MNIST database, Deep learning and Class; his Deep learning research incorporates elements of Backpropagation and Scalability.
His research in Artificial neural network intersects with Language model, Salient, Cortex and Decision tree. His Pattern recognition study is interwoven with issues of Smoothing, Data point and Contextual image classification, and his Machine learning work combines subjects such as Simple and Fraction.
His primary areas of study are Artificial intelligence, Artificial neural network, Machine learning, Pattern recognition and Hyperparameter. His work in Artificial intelligence is not limited to one discipline; it also encompasses Key, and his study of Backpropagation is linked to topics like Layer.
His Machine learning study examines areas including Test and Training set. His work on Convolutional neural network, as part of general Pattern recognition research, is frequently linked to Test data, connecting diverse disciplines. His Deep learning research relies on both Routing and Differential.
This overview was generated by a machine learning system which analysed the scientist's body of work.
ImageNet classification with deep convolutional neural networks
Alex Krizhevsky;Ilya Sutskever;Geoffrey E. Hinton.
Communications of The ACM (2017)
Deep learning
Yann LeCun;Yoshua Bengio;Geoffrey Hinton.
Nature (2015)
Dropout: a simple way to prevent neural networks from overfitting
Nitish Srivastava;Geoffrey Hinton;Alex Krizhevsky;Ilya Sutskever.
Journal of Machine Learning Research (2014)
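The dropout paper listed above proposes randomly zeroing hidden units during training to prevent co-adaptation. A minimal sketch of the idea follows, using the common "inverted dropout" formulation that rescales surviving units at training time (the paper itself instead scales the weights at test time); the function name and `rng` parameter are illustrative, not from the paper:

```python
import random

def dropout(activations, p=0.5, train=True, rng=random.random):
    """Inverted dropout: during training, drop each unit with probability p
    and scale survivors by 1/(1-p); at test time, pass activations through
    unchanged so expected activations match training."""
    if not train:
        return list(activations)
    keep = 1.0 - p
    # rng() draws a uniform sample in [0, 1); a unit survives if it falls below keep.
    return [a / keep if rng() < keep else 0.0 for a in activations]
```

Passing a deterministic `rng` (e.g. `lambda: 0.0`) makes the behaviour reproducible for testing; in practice the default `random.random` is used.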
Learning representations by back-propagating errors
David E. Rumelhart;Geoffrey E. Hinton;Ronald J. Williams.
Nature (1986)
Visualizing Data using t-SNE
Laurens van der Maaten;Geoffrey Hinton.
Journal of Machine Learning Research (2008)
Reducing the Dimensionality of Data with Neural Networks
G. E. Hinton;R. R. Salakhutdinov.
Science (2006)
A fast learning algorithm for deep belief nets
Geoffrey E. Hinton;Simon Osindero;Yee-Whye Teh.
Neural Computation (2006)
Rectified Linear Units Improve Restricted Boltzmann Machines
Vinod Nair;Geoffrey E. Hinton.
international conference on machine learning (2010)
Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton;Oriol Vinyals;Jeffrey Dean.
arXiv: Machine Learning (2015)
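The distillation paper above trains a small "student" network on soft targets produced by a large "teacher" network, where the teacher's logits are passed through a softmax with a raised temperature. A minimal sketch of that temperature-scaled softmax follows; the function name is illustrative, not from the paper:

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Softmax over logits/T. T=1 recovers the ordinary softmax;
    higher T yields a softer distribution that exposes the teacher's
    relative confidence across wrong classes."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Raising `T` spreads probability mass away from the argmax class, which is what makes the soft targets more informative than hard labels for the student.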
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups
G. Hinton;Li Deng;Dong Yu;G. E. Dahl.
IEEE Signal Processing Magazine (2012)