2002 - ACM Fellow For technical contributions and leadership in computer architecture.
1999 - ACM Grace Murray Hopper Award For the design and implementation of the IMPACT compiler infrastructure, which has been used extensively both by the microprocessor industry as a baseline for product development and by academia as a basis for advanced research and development in computer architecture and compiler design.
Wen-mei W. Hwu's research focuses mainly on Parallel computing, Compiler, CUDA, General-purpose computing on graphics processing units, and Computer architecture. His Parallel computing work frequently connects to related topics such as Scheduling, and his Compiler research covers Instruction set and Microarchitecture.
His CUDA research combines topics such as SPMD, Programming paradigm, Shared memory, and CUDA pinned memory. His General-purpose computing on graphics processing units research incorporates themes from Image processing, Electronic design automation, Computer vision, and Massively parallel computing, while his Computer architecture work integrates Profile-guided optimization and Concurrent computing.
Hwu's work also deals with Parallel computing, Compiler, Computer architecture, CUDA, and Artificial intelligence. His Scheduling research extends into Parallel computing, and his Compiler studies are interwoven with Superscalar, Speculative execution, and Microarchitecture.
Within Computer architecture, his work narrows to Field-programmable gate arrays and, occasionally, Artificial neural networks. His CUDA research is interdisciplinary, drawing on Computational science, Programming paradigm, General-purpose computing on graphics processing units, and Massively parallel computing, while his Artificial intelligence studies integrate Machine learning, Software, and Computer vision.
He also investigates Artificial intelligence, Field-programmable gate arrays, Deep learning, Computer architecture, and Artificial neural networks.
His Artificial neural network research combines subjects such as Domain and Compiler, and his Compiler research intersects with Instruction set and Resistive random-access memory. To a larger extent, he studies Parallel computing with the aim of understanding Shared memory.
His primary areas of investigation further include Artificial intelligence, Field-programmable gate arrays, Computer architecture, Efficient energy use, and Machine learning. His Artificial intelligence work, including Deep learning, Segmentation, and Reinforcement learning, overlaps with areas such as Fluency. His Computer architecture research spans High-level synthesis, Software development, Artificial neural networks, Memory bandwidth, and Central processing units.
His studies also connect Domain and Compiler with Artificial neural networks, and span a wide range of topics including Thread, Computation, and Memristor. His Vectorization research deepens his understanding of Parallel computing.
Programming Massively Parallel Processors: A Hands-on Approach
David B. Kirk;Wen-mei W. Hwu.
(2012)
Optimization principles and application performance evaluation of a multithreaded GPU using CUDA
Shane Ryoo;Christopher I. Rodrigues;Sara S. Baghsorkhi;Sam S. Stone.
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (2008)
A power controlled multiple access protocol for wireless packet networks
J.P. Monks;V. Bharghavan;W.-M.W. Hwu.
International Conference on Computer Communications (2001)
The superblock: an effective technique for VLIW and superscalar compilation
Wen-Mei W. Hwu;Scott A. Mahlke;William Y. Chen;Pohua P. Chang.
The Journal of Supercomputing (1993)
Parboil: A Revised Benchmark Suite for Scientific and Commercial Throughput Computing
John A. Stratton;Christopher Rodrigues;I-Jui Sung;Nady Obeid.
(2012)
IMPACT: an architectural framework for multiple-instruction-issue processors
Pohua P. Chang;Scott A. Mahlke;William Y. Chen;Nancy J. Warter.
International Symposium on Computer Architecture (1991)
Accelerating advanced MRI reconstructions on GPUs
S. S. Stone;J. P. Haldar;S. C. Tsao;W.-M. W. Hwu.
Journal of Parallel and Distributed Computing (2008)
An adaptive performance modeling tool for GPU architectures
Sara S. Baghsorkhi;Matthieu Delahaye;Sanjay J. Patel;William D. Gropp.
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (2010)
Program optimization space pruning for a multithreaded GPU
Shane Ryoo;Christopher I. Rodrigues;Sam S. Stone;Sara S. Baghsorkhi.
Symposium on Code Generation and Optimization (2008)
MCUDA: An Efficient Implementation of CUDA Kernels for Multi-core CPUs
John A. Stratton;Sam S. Stone;Wen-Mei W. Hwu.
Languages and Compilers for Parallel Computing (2008)
University of Illinois at Urbana-Champaign
University of Michigan–Ann Arbor
The University of Texas at Austin
Georgia Institute of Technology
Princeton University
Purdue University West Lafayette
University of Stuttgart
Singapore Management University
University of Granada
Nagoya University
Johannes Kepler University of Linz
Stanford University
North Carolina State University
Universidade Federal do Ceará
University of Sydney
New York University
Columbia University
KU Leuven
Queen Mary University of London
Tokyo Institute of Technology