Honglak Lee focuses mainly on Artificial intelligence, Machine learning, Pattern recognition, Deep learning and Unsupervised learning. His work brings together Computer vision and Artificial intelligence and spans a wide range of topics, including Visual Word, Training set and Automatic image annotation.
In Pattern recognition he combines subjects such as Object and Boltzmann machine, his Deep learning research integrates Classifier, Speech recognition, Representation and Robustness, and his Unsupervised learning studies draw on themes from Semi-supervised learning and Feature extraction.
His scientific interests lie mostly in Artificial intelligence, Machine learning, Pattern recognition, Reinforcement learning and Artificial neural network. His work on Artificial intelligence extends to related topics such as Computer vision, and his Machine learning research incorporates elements of Pixel and Representation.
His Pattern recognition study combines topics from disciplines such as Contextual image classification and Inference, his Reinforcement learning research deals with areas such as Recurrent neural network, Generalization, Mathematical optimization and Control theory, and his Deep learning research includes elements of Unsupervised learning, Generative model and Robustness.
His research also centres on Artificial intelligence, Reinforcement learning, Machine learning, Artificial neural network and Algorithm. His work in Artificial intelligence is not limited to one discipline; it also encompasses Pattern recognition, where he connects Smoothing, Autoencoder, Deep learning and Contextual image classification.
His Reinforcement learning research includes themes of Generalization, Mathematical optimization and Inference, his Machine learning study combines topics such as Adversarial system, Pixel and Generative grammar, and his Artificial neural network research incorporates themes from Flow, Labeled data, Leverage and Word error rate.
His primary areas of study are Artificial intelligence, Artificial neural network, Generalization, Reinforcement learning and Regularization. His Artificial intelligence studies frequently connect with Machine learning, in particular Generative grammar, often together with Interpretability and Feature learning.
His Artificial neural network work deals with themes such as Radiology, Deep learning, Convolutional neural network and Encoding, his Reinforcement learning study combines topics such as Stability, Supervised learning, Flow and Manifold, and his Regularization research integrates issues from Robotics and Inference.
This overview was generated by a machine learning system that analysed the scientist’s body of work.
Efficient sparse coding algorithms
Honglak Lee; Alexis Battle; Rajat Raina; Andrew Y. Ng.
Neural Information Processing Systems (2006)
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
Honglak Lee; Roger Grosse; Rajesh Ranganath; Andrew Y. Ng.
International Conference on Machine Learning (2009)
Multimodal Deep Learning
Jiquan Ngiam; Aditya Khosla; Mingyu Kim; Juhan Nam.
International Conference on Machine Learning (2011)
An analysis of single-layer networks in unsupervised feature learning
Adam Coates; Andrew Y. Ng; Honglak Lee.
International Conference on Artificial Intelligence and Statistics (2011)
Generative adversarial text to image synthesis
Scott Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran.
International Conference on Machine Learning (2016)
Self-taught learning: transfer learning from unlabeled data
Rajat Raina; Alexis Battle; Honglak Lee; Benjamin Packer.
International Conference on Machine Learning (2007)
Learning structured output representation using deep conditional generative models
Kihyuk Sohn; Xinchen Yan; Honglak Lee.
Neural Information Processing Systems (2015)
Unsupervised feature learning for audio classification using convolutional deep belief networks
Honglak Lee; Peter Pham; Yan Largman; Andrew Y. Ng.
Neural Information Processing Systems (2009)
Deep learning for detecting robotic grasps
Ian Lenz; Honglak Lee; Ashutosh Saxena.
The International Journal of Robotics Research (2015)
Sparse deep belief net model for visual area V2
Honglak Lee; Chaitanya Ekanadham; Andrew Y. Ng.
Neural Information Processing Systems (2007)