J. Ramanujam works mainly on Parallel computing, Compilers, Optimizing compilers, nested loops and Automatic parallelization. His Parallel computing research spans several areas, including Loop tiling, Stencil computations and Code generation. His Loop tiling work combines topics such as Loop fission, Loop nest optimization and General-purpose computing on graphics processing units.
His Compiler research is multidisciplinary, incorporating elements of Computer architecture, CUDA and Parallelism. His Optimizing compiler work is interwoven with Memory architecture, Embedded systems and Implementation. His work on nested loops brings together Deadlock, Shared memory and Affine transformations.
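As a point of reference for the loop transformations named above, the following is a minimal, illustrative C sketch of loop tiling (blocking); the matrix size, tile size and function name are hypothetical and are not drawn from any of Ramanujam's papers.

```c
/* Illustrative sketch only: classic loop tiling of a matrix-multiply loop nest.
 * N and TILE are hypothetical parameters chosen so that N is divisible by TILE. */
#define N    1024
#define TILE 32

void matmul_tiled(double A[N][N], double B[N][N], double C[N][N])
{
    /* Outer loops step over tiles; inner loops stay inside one tile,
     * so the data touched by the inner loops fits in cache and is reused. */
    for (int ii = 0; ii < N; ii += TILE)
        for (int jj = 0; jj < N; jj += TILE)
            for (int kk = 0; kk < N; kk += TILE)
                for (int i = ii; i < ii + TILE; i++)
                    for (int j = jj; j < jj + TILE; j++)
                        for (int k = kk; k < kk + TILE; k++)
                            C[i][j] += A[i][k] * B[k][j];
}
```

Tiling restructures the iteration space so that the innermost loops operate on a small, cache-resident block of data at a time, which is the locality effect the work described above targets.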
J. Ramanujam mainly investigates Parallel computing, Compilers, Algorithms, Computation and Optimizing compilers. His Parallel computing research also draws on Code, Code generation and Loop nest optimization. His Compiler studies touch on Memory address, Multiprocessing, CUDA and Stencil computations.
His work on Algorithms intersects with Tensor contraction, Loop tiling and Loop fusion. His Optimizing compiler research integrates Program transformation, Program optimization and Embedded systems. His work also spans Scheduling and Affine transformations.
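The loop fusion mentioned above can be shown with a small, hypothetical C sketch; the array names and bounds are invented for illustration only.

```c
/* Illustrative sketch only: loop fusion.
 * Before fusion:
 *   for (i) b[i] = a[i] * 2.0;
 *   for (i) c[i] = b[i] + 1.0;
 * After fusion, b[i] is produced and consumed in the same iteration,
 * while it is still in a register or cache line. */
void fused(const double *a, double *b, double *c, int n)
{
    for (int i = 0; i < n; i++) {
        b[i] = a[i] * 2.0;
        c[i] = b[i] + 1.0;
    }
}
```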
His primary scientific interests are Parallel computing, Compilers, Computation, Theoretical computer science and Directed acyclic graphs. His Parallel computing research incorporates themes from Solvers and Code. His Compiler studies integrate themes from Affine transformation, Class, Stencil computations, Sequence and SIMD.
His work on Affine transformation focuses on Loop optimization, Program optimization, nested loops and Transformation. His study of Computation draws on Bytes, Linear solvers and Benchmarks. He also investigates the link between Theoretical computer science and Data access, where problems intersect with the CPU cache.
J. Ramanujam spends much of his time researching Parallel computing, Compilers, Affine transformation, Stencil computations and Computation. Within Parallel computing, his work on Speedup overlaps with topics such as parametric arrays. His Compiler research draws on Dependence analysis, Intrinsics and SIMD.
Under the umbrella of Affine transformation, his work ties Loop optimization to Program optimization, Theoretical computer science, nested loops and Transformation. He combines Class, Parallel algorithms, Retiming and the Associative property with his study of Stencil computations. His Computation research combines topics such as Spin glass, CUDA and Graphics processing units.
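Stencil computations, which recur throughout the topics above, can be sketched with a simple, hypothetical 1-D Jacobi-style example in C; the function, array names and parameters are illustrative only and do not come from the publications listed below.

```c
/* Illustrative sketch only: a 1-D Jacobi-style stencil.
 * Each point is updated from its neighbours in the previous time step. */
void jacobi_1d(double *cur, double *next, int n, int timesteps)
{
    for (int t = 0; t < timesteps; t++) {
        for (int i = 1; i < n - 1; i++)
            next[i] = (cur[i - 1] + cur[i] + cur[i + 1]) / 3.0;
        for (int i = 1; i < n - 1; i++)
            cur[i] = next[i];   /* copy back so cur always holds the latest values */
    }
}
```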
A practical automatic polyhedral parallelizer and locality optimizer
Uday Bondhugula;Albert Hartono;J. Ramanujam;P. Sadayappan.
Programming Language Design and Implementation (2008)
Dynamic management of scratch-pad memory space
M. Kandemir;J. Ramanujam;J. Irwin;N. Vijaykrishnan.
Design Automation Conference (2001)
Automatic transformations for communication-minimized parallelization and locality optimization in the polyhedral model
Uday Bondhugula;Muthu Baskaran;Sriram Krishnamoorthy;J. Ramanujam.
Compiler Construction (2008)
Automatic C-to-CUDA code generation for affine programs
Muthu Manikandan Baskaran;J. Ramanujam;P. Sadayappan.
Compiler Construction (2010)
A compiler framework for optimization of affine loop nests for gpgpus
Muthu Manikandan Baskaran;Uday Bondhugula;Sriram Krishnamoorthy;J. Ramanujam.
International Conference on Supercomputing (2008)
Effective automatic parallelization of stencil computations
Sriram Krishnamoorthy;Muthu Baskaran;Uday Bondhugula;J. Ramanujam.
Programming Language Design and Implementation (2007)
Compile-time techniques for data distribution in distributed memory machines
J. Ramanujam;P. Sadayappan.
IEEE Transactions on Parallel and Distributed Systems (1991)
Synthesis of High-Performance Parallel Programs for a Class of ab Initio Quantum Chemistry Models
G. Baumgartner;A. Auer;D.E. Bernholdt;A. Bibireata.
Proceedings of the IEEE (2005)
Tiling multidimensional iteration spaces for multicomputers
J. Ramanujam;P. Sadayappan.
Journal of Parallel and Distributed Computing (1992)
Cluster partitioning approaches to mapping parallel programs onto a hypercube
P. Sadayappan;Fikret Erçal;J. Ramanujam.
Parallel Computing (1990)