His primary areas of investigation include Artificial intelligence, Machine learning, Deep learning, Convolutional neural network and Artificial neural network. His research in the fields of Hebbian theory, Caffe and Variety overlaps with other disciplines such as Mobile device and Matching. His study of Hebbian theory also examines topics such as Object detection, which overlap with Feature extraction.
His Caffe research is multidisciplinary, incorporating perspectives from Embedding, Computer architecture and Theano. His Variety study combines topics in areas such as Range, Visual recognition, Feature and Cognitive neuroscience of visual object recognition. Yangqing Jia interconnects Distributed computing, Robotics and Inference in his investigation of Artificial neural network.
Yangqing Jia mostly deals with Artificial intelligence, Machine learning, Pattern recognition, Deep learning and Artificial neural network. His Artificial intelligence study frequently draws connections to related disciplines such as Computer vision. His research integrates issues of Contextual image classification, Inference and Automatic image annotation in his study of Machine learning.
His Deep learning studies also examine issues in Computer architecture, with regard to Bottleneck. His Artificial neural network research includes elements of Bayesian optimization, Computer engineering and Reinforcement learning. His work on Hebbian theory also deals with fields such as Residual neural network.
Yangqing Jia focuses on Artificial intelligence, Deep learning, Computer engineering, Speedup and Artificial neural network. As part of his studies on Artificial intelligence, he frequently links adjacent subjects like Machine learning. Within Machine learning more broadly, his work on Support vector machine is often linked to Scheme, Matching and Generalization, connecting many areas of study.
His work in the field of Deep learning brings together such areas as Computer hardware and Computational science. His research in Computer engineering intersects with topics in Bayesian optimization and Frame rate. Artificial neural network and Reinforcement learning are commonly linked in his work.
Artificial intelligence, Machine learning, Inference, Deep learning and Artificial neural network are his primary areas of study. His work on Edge computing and Support vector machine, as part of his broader Artificial intelligence research, is frequently linked to FLOPS and Generalization, thereby connecting diverse disciplines. His Support vector machine study frequently draws connections to adjacent fields such as Perspective.
His FLOPS research includes elements of Speedup, Mobile device, Frame rate and Computer engineering. His research on Generalization combines other fields of study such as Scheme and Matching.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
Going deeper with convolutions
Christian Szegedy;Wei Liu;Yangqing Jia;Pierre Sermanet.
Computer Vision and Pattern Recognition (2015)
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia;Evan Shelhamer;Jeff Donahue;Sergey Karayev.
ACM Multimedia (2014)
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi;Ashish Agarwal;Paul Barham;Eugene Brevdo.
arXiv: Distributed, Parallel, and Cluster Computing (2015)
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
Jeff Donahue;Yangqing Jia;Oriol Vinyals;Judy Hoffman.
International Conference on Machine Learning (2014)
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Priya Goyal;Piotr Dollár;Ross B. Girshick;Pieter Noordhuis.
arXiv: Computer Vision and Pattern Recognition (2017)
University of California, Berkeley
Tsinghua University
DeepMind (United Kingdom)
Boston University
Baidu (China)
Alibaba Group (China)
Georgia Institute of Technology
Northwestern Polytechnical University
Facebook (United States)
Google (United States)
California Institute of Technology
National Institute of Standards and Technology
Kyushu Institute of Technology
Missouri University of Science and Technology
King Abdullah University of Science and Technology
University of Toronto
China Agricultural University
University of Udine
University of Burgundy
Princeton University
University of Bristol
University of Barcelona
University of Oslo
Medical University of Vienna
Copenhagen University Hospital
Icahn School of Medicine at Mount Sinai