2007 - ACM Fellow, for contributions to compiler support for parallel computing.
2000 - IEEE Fellow, for contributions to compiler technology for parallel computing.
Parallel computing, Compiler, Programming language, Fortran and Automatic parallelization are his primary areas of study. His work deals with themes such as Control flow and Control flow graph, which intersect with Parallel computing. His research on Compiler focuses in particular on Optimizing compiler.
In his research, Programming language is closely tied to Petascale computing. His work in Fortran tackles topics such as Loop optimization, which relate to areas like Data synchronization, Loop fusion, Synchronization and LOOP. His work in Automatic parallelization addresses Dependence analysis, which relates to Program parallelization.
David Padua focuses on Parallel computing, Compiler, Programming language, Fortran and Theoretical computer science. He has researched Parallel computing in several fields, including Optimizing compiler and Automatic parallelization. He focuses mostly on Compiler, narrowing it down to Multiprocessing and, in certain cases, Supercomputer.
His study connects Benchmark and Programming language. His Fortran research incorporates disciplines such as Synchronization and Parallel processing. His Shared memory research integrates issues from Computer architecture, Uniform memory access and Cache-only memory architecture.
David Padua spends much of his time researching Compiler, Parallel computing, Programming language, Theoretical computer science and Programmer. Specifically, his work in Compiler is concerned with the study of Program optimization. His Parallel computing study combines topics from a wide range of disciplines, such as Optimizing compiler and Overhead.
His Theoretical computer science research focuses on subjects like Graph, which are linked to Vertex. Within Programmer, he focuses on CUDA and, on occasion, Benchmark, High-level programming language, Chapel and Multi-core processor. His Code generation research focuses on Domain-specific language and how it connects with Process and Program transformation.
His primary areas of study are Compiler, Parallel computing, Overhead, Theoretical computer science and Software engineering. His Compiler research is under the purview of Programming language. His Programming language study integrates concerns from other disciplines, such as CUDA, Multi-core processor and Benchmark.
David Padua interconnects Optimizing compiler, Data type, Object and Class in the investigation of issues within Parallel computing. His Overhead research includes elements of Reduction, External Data Representation and Machine code. His Theoretical computer science study combines topics in areas such as Set, State and Graph algorithms.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Advanced compiler optimizations for supercomputers
David A. Padua;Michael J. Wolfe.
Communications of the ACM (1986)
SPIRAL: Code Generation for DSP Transforms
M. Puschel;J.M.F. Moura;J.R. Johnson;D. Padua.
Proceedings of the IEEE (2005)
Dependence graphs and compiler optimizations
D. J. Kuck;R. H. Kuhn;D. A. Padua;B. Leasure.
Symposium on Principles of Programming Languages (1981)
The LRPD test: speculative run-time parallelization of loops with privatization and reduction parallelization
L. Rauchwerger;D.A. Padua.
IEEE Transactions on Parallel and Distributed Systems (1999)
Automatic program parallelization
U. Banerjee;R. Eigenmann;A. Nicolau;D.A. Padua.
Proceedings of the IEEE (1993)
Automatic array privatization
Peng Tu;David Padua.
Compiler optimizations for scalable parallel systems (2001)
SPL: a language and compiler for DSP algorithms
Jianxin Xiong;Jeremy Johnson;Robert Johnson;David Padua.
Programming Language Design and Implementation (2001)
Spiral: A Generator for Platform-Adapted Libraries of Signal Processing Algorithms
Markus Püschel;José M. F. Moura;Bryan Singer;Jianxin Xiong.
IEEE International Conference on High Performance Computing Data and Analytics (2004)
Encyclopedia of Parallel Computing
David Padua.
(2011)
An Evaluation of Vectorizing Compilers
Saeed Maleki;Yaoqing Gao;Maria J. Garzarán;Tommy Wong.
International Conference on Parallel Architectures and Compilation Techniques (2011)
University of Delaware
University of California, Irvine
Purdue University West Lafayette
University of Illinois at Urbana-Champaign
The University of Texas at Austin
University of Illinois at Urbana-Champaign
University of Utah
ETH Zurich
Google (United States)
Carnegie Mellon University
Federal Reserve Bank of New York
University of Florence
Zhejiang University
Grenoble Alpes University
Blackberry (United States)
Qualcomm (United Kingdom)
University of Colorado Boulder
University of Copenhagen
University of Tokyo
Wayne State University
Federal University of Rio de Janeiro
University of British Columbia
University of Toronto
National Institutes of Health
Curtin University