2015 - IEEE Fellow, for contributions to parallel programming tools for high-performance computing
His main research concerns Parallel computing, Compilers, Distributed computing, Code generation and Nested loop optimization. His work on Automatic parallelization and Stencil computations intersects with Parallel computing, and his Compiler research incorporates themes of Parallelism, SIMD and Source code analysis.
His Distributed computing research combines topics such as Scheduling, Queueing theory, Scalability and Supercomputing, while his Code generation work draws on Tensor contraction, Computation and Computer engineering. He has also studied nested loop optimization in several contexts, including Deadlock, Loop transformations and Affine transformations.
Parallel computing, Distributed computing, Computation, Algorithms and Compilers are his primary areas of study. His work in Parallel computing brings together Sparse matrix methods and Code generation, and his Code generation research focuses on CUDA and General-purpose computing on graphics processing units.
His Distributed computing research integrates Scalability with Job scheduling, including Rate-monotonic scheduling and Fair-share scheduling. His work on Algorithms addresses Loop fusion, which connects to Loop tiling, and his broader study spans topics including Stencil computations and SIMD.
P. Sadayappan mainly investigates Parallel computing, Computation, Compilers, Algorithms and Sparse matrices. His Parallel computing research integrates concerns from Kernels and Stencil computations, and his work on Computation combines Computational complexity theory, Tensor representations, Representation and Affine transformations.
His Compiler research includes elements of Class analysis, Redundancy and Error detection and correction, while his Algorithm research incorporates Memory hierarchy, Permutation, Sequences, Bottlenecks and Speedup. Within Sparse matrix research, he focuses on Multiplication and, on occasion, Matrix multiplication.
P. Sadayappan also investigates Parallel computing, Stencil computations, Computation, Code generation and Compilers. His research in Parallel computing intersects with Sparse matrices and Kernels, and his Stencil work combines topics such as Statements, Data parallelism, Loop unrolling and Shared memory.
His study of Computation incorporates Degree of parallelism, Tensor representations, Fibers and Finite volume methods. His Code generation research incorporates Scalability, Dataflow, Concurrency and Adaptive mesh refinement, and his Compiler study draws on Retiming, Associativity, Pointers, Dependence analysis and Classes.
This overview was generated by a machine learning system which analysed the scientist's body of work.
A practical automatic polyhedral parallelizer and locality optimizer
Uday Bondhugula;Albert Hartono;J. Ramanujam;P. Sadayappan.
Programming Language Design and Implementation (2008)
Gaining insights into multicore cache partitioning: Bridging the gap between simulation and real systems
Jiang Lin;Qingda Lu;Xiaoning Ding;Zhao Zhang.
High-Performance Computer Architecture (2008)
Scalable work stealing
James Dinan;D. Brian Larkins;P. Sadayappan;Sriram Krishnamoorthy.
IEEE International Conference on High Performance Computing, Data and Analytics (2009)
Automatic transformations for communication-minimized parallelization and locality optimization in the polyhedral model
Uday Bondhugula;Muthu Baskaran;Sriram Krishnamoorthy;J. Ramanujam.
Compiler Construction (2008)
Automatic C-to-CUDA code generation for affine programs
Muthu Manikandan Baskaran;J. Ramanujam;P. Sadayappan.
Compiler Construction (2010)
High-performance code generation for stencil computations on GPU architectures
Justin Holewinski;Louis-Noël Pouchet;P. Sadayappan.
International Conference on Supercomputing (2012)
A compiler framework for optimization of affine loop nests for gpgpus
Muthu Manikandan Baskaran;Uday Bondhugula;Sriram Krishnamoorthy;J. Ramanujam.
International Conference on Supercomputing (2008)
Effective automatic parallelization of stencil computations
Sriram Krishnamoorthy;Muthu Baskaran;Uday Bondhugula;J. Ramanujam.
Programming Language Design and Implementation (2007)
NWChem: Past, present, and future
E. Aprà;E. J. Bylaska;W. A. de Jong;N. Govind.
The Journal of Chemical Physics (2020)
Compile-time techniques for data distribution in distributed memory machines
J. Ramanujam;P. Sadayappan.
IEEE Transactions on Parallel and Distributed Systems (1991)
Louisiana State University
The Ohio State University
Georgia Institute of Technology
The Ohio State University
University of California, Santa Barbara
Stony Brook University
Stony Brook University
Google (United States)
The Ohio State University
Arizona State University
National Taiwan University of Science and Technology
Humboldt-Universität zu Berlin
Alfréd Rényi Institute of Mathematics
Intel (United States)
Qualcomm (United States)
Nanjing Normal University
Martin Luther University Halle-Wittenberg
MSD (United States)
University of Melbourne
University of Washington
California Institute of Technology
Radboud University Nijmegen
Emory University
University of Michigan–Ann Arbor
University of Cambridge
University Hospital of North Norway