His primary scientific interests are in Embedded system, Non-volatile memory, Computer hardware, Static random-access memory and Field-programmable gate array. His study in the field of High-level synthesis is also linked to topics like Automation. His Non-volatile memory research is multidisciplinary, incorporating perspectives in Block, Overhead, Tag RAM and Cache.
His Static random-access memory studies deal with areas such as DRAM, CPU cache and Memory architecture. His Field-programmable gate array research integrates issues from Computer architecture, Field, Flexibility, Design flow and Machine translation. His Magnetoresistive random-access memory research is multidisciplinary, relying on both Page cache and Write buffer.
His scientific interests lie mostly in Embedded system, Cache, Computer hardware, Parallel computing and Static random-access memory. His Embedded system study combines topics from a wide range of disciplines, such as DRAM, Non-volatile memory and Overhead. His research is interdisciplinary, bridging Efficient energy use and Cache.
In the subject of general Parallel computing, his work in Cache-only memory architecture and Instruction set is often linked to Register file and Locality, thereby combining diverse domains of study. His research on Static random-access memory integrates issues of Block, Magnetoresistive random-access memory, Tag RAM and Backup. His Cache pollution study examines Cache coloring as it intersects with Page cache.
Guangyu Sun spends much of his time researching Artificial intelligence, Convolutional neural network, Edge computing, Deep learning and Inference. His Convolutional neural network research includes elements of Convolution, Computation, Overhead and Pruning. Guangyu Sun interconnects Bayesian probability and Computational science in the investigation of issues within Deep learning.
His study deals with issues like Computer architecture, which overlap with fields such as Field-programmable gate array. He works mostly in the field of Field-programmable gate array, narrowing it down to concerns involving Xeon and, occasionally, Computer engineering. His Non-volatile memory research includes themes of Random access memory, Quantization, Embedded system and SIMD.
The scientist’s investigation covers issues in Deep learning, Artificial intelligence, Convolutional neural network, Generalization and Bayesian probability. His research in Deep learning intersects with topics in Computation, Dynamic network analysis and Reinforcement learning. His Convolutional neural network studies deal with areas such as Process, Pruning, Inference, Computational science and Efficient energy use.
His Generalization research encompasses a variety of disciplines, including Bounded function, Regularization, Generative grammar, Information leakage and Theoretical computer science. Guangyu Sun has included themes like Langevin dynamics, Sample, Training set and Markov chain Monte Carlo in his Machine learning study. His Overfitting research incorporates elements of Contextual image classification, Noise and Differential privacy.
Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks
Chen Zhang;Peng Li;Guangyu Sun;Yijin Guan.
Field Programmable Gate Arrays (2015)
A novel architecture of the 3D stacked MRAM L2 cache for CMPs
Guangyu Sun;Xiangyu Dong;Yuan Xie;Jian Li.
High-Performance Computer Architecture (2009)
Caffeine: Toward Uniformed Representation and Acceleration for Deep Convolutional Neural Networks
Chen Zhang;Guangyu Sun;Zhenman Fang;Peipei Zhou.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2019)
Circuit and microarchitecture evaluation of 3D stacking magnetic RAM (MRAM) as a universal memory replacement
Xiangyu Dong;Xiaoxia Wu;Guangyu Sun;Yuan Xie.
Design Automation Conference (2008)
FP-DNN: An Automated Framework for Mapping Deep Neural Networks onto FPGAs with RTL-HLS Hybrid Templates
Yijin Guan;Hao Liang;Ningyi Xu;Wenqiang Wang.
Field Programmable Custom Computing Machines (2017)