The scientist’s investigation covers issues in Parallel computing, Field-programmable gate array, Embedded system, Computer architecture and Page cache. His work often combines Parallel computing with Efficient energy use studies. His Field-programmable gate array research deals with themes such as Design space exploration, Convolutional neural network and Speedup.
Yun Liang combines subjects such as Artificial intelligence and FLOPS with his study of Embedded system. His Page cache research incorporates themes from Cache pollution, Cache-oblivious algorithm, Bus sniffing and Cache coloring; his Cache pollution study additionally incorporates Smart Cache and Cache invalidation.
His primary areas of investigation include Parallel computing, Field-programmable gate array, Speedup, Cache and Embedded system. His Parallel computing study integrates concerns from other disciplines, such as Scalability and Thread. His research in Field-programmable gate array intersects with topics in Computer architecture, Design space exploration and Convolutional neural network.
In his research on Speedup, Computer hardware and Artificial neural network are strongly related to Overhead. He has researched Cache in several fields, including Real-time computing, High memory and Task. His Embedded system study combines topics from a wide range of disciplines, such as Software and Programming paradigm.
His scientific interests lie mostly in Field-programmable gate array, Parallel computing, Speedup, Efficient energy use and Computation. His Field-programmable gate array study combines topics in areas such as Design space exploration, Convolutional neural network, Computer engineering and Dataflow. The concepts of his Convolutional neural network study are interwoven with issues in Convolution and Pipeline.
His CUDA study, within the realm of Parallel computing, connects with subjects such as Throughput. The areas that Yun Liang examines in his Speedup study include Thread, Overhead, Memory hierarchy, Deep learning and General-purpose computing on graphics processing units. His Computation research integrates issues from Fast Fourier transform, Data transmission and Computer architecture.
His primary scientific interests are in Field-programmable gate array, Parallel computing, Speedup, Convolutional neural network and Hardware acceleration. His Field-programmable gate array research falls within the broader category of Computer hardware. His Parallel computing work, including studies of Xeon, intersects with areas such as Sparse matrix.
His Speedup research is multidisciplinary, incorporating perspectives in Image, CUDA, Heuristic and General-purpose computing on graphics processing units. Yun Liang focuses mostly on Convolutional neural network, narrowing it down to matters related to Pipeline and, in some cases, Residual neural network and Algorithm. His Hardware acceleration research is multidisciplinary, relying on both Operand and Design space exploration.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
Automated Systolic Array Architecture Synthesis for High Throughput CNN Inference on FPGAs
Xuechao Wei;Cody Hao Yu;Peng Zhang;Youxiang Chen.
design automation conference (2017)
Chronos: A timing analyzer for embedded software
Xianfeng Li;Yun Liang;Tulika Mitra;Abhik Roychoudhury.
Science of Computer Programming (2007)
Evaluating Fast Algorithms for Convolutional Neural Networks on FPGAs
Liqiang Lu;Yun Liang;Qingcheng Xiao;Shengen Yan.
field programmable custom computing machines (2017)
Timing analysis of concurrent programs running on shared cache multi-cores
Yun Liang;Huping Ding;Tulika Mitra;Abhik Roychoudhury.
Real-time Systems (2012)
Timing Analysis of Concurrent Programs Running on Shared Cache Multi-Cores
Yan Li;Vivy Suhendra;Yun Liang;Tulika Mitra.
real-time systems symposium (2009)
C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs
Shuo Wang;Zhe Li;Caiwen Ding;Bo Yuan.
field programmable gate arrays (2018)
Exploring Heterogeneous Algorithms for Accelerating Deep Convolutional Neural Networks on FPGAs
Qingcheng Xiao;Yun Liang;Liqiang Lu;Shengen Yan.
design automation conference (2017)
Coordinated static and dynamic cache bypassing for GPUs
Xiaolong Xie;Yun Liang;Yu Wang;Guangyu Sun.
high-performance computer architecture (2015)
An efficient compiler framework for cache bypassing on GPUs
Xiaolong Xie;Yun Liang;Guangyu Sun;Deming Chen.
international conference on computer aided design (2013)
Lin-analyzer: a high-level performance analysis tool for FPGA-based accelerators
Guanwen Zhong;Alok Prakash;Yun Liang;Tulika Mitra.
design automation conference (2016)