His main research concerns parallel computing, compilers, programming languages, automatic parallelization, and optimizing compilers. His parallel computing research includes themes of program transformation and operating systems, while his compiler work draws on related areas such as supercomputing, automatic programming, speedup, algorithms, and programming paradigms.
His work on Fortran overlaps with the Polaris parallelizing compiler. In his research, code and program parallelization are strongly intertwined with dependence analysis, a subfield of automatic parallelization. His optimizing compiler research incorporates multiprocessing, orchestration of optimizations, Pentium processors, and code generation.
He also investigates issues in parallel computing, compilers, programming languages, automatic parallelization, and distributed computing. Within parallel computing, his studies deal with areas such as Spec# and programming paradigms, and his research spans further topics including threads and code generation.
His studies of Fortran and compiler construction fall within the broader field of programming languages. His automatic parallelization research is interdisciplinary, drawing on symbolic analysis, automatic programming, and dependence analysis. His distributed computing work centers on resources, connecting to hosts, the Internet, and resource management.
His primary scientific interests are parallel computing, compilers, distributed computing, automatic parallelization, and optimizing compilers. Rudolf Eigenmann frequently studies issues relating programming paradigms to parallel computing, and his compiler work incorporates computer architecture, sets, shared memory, and code generation.
He has researched distributed computing across several areas, including scalability, instruction cycles, application software, resources, and hosts. His automatic parallelization studies combine program transformation, automatic programming, multi-core processors, and parallel processing, while his optimizing compiler research falls within programming languages.
His primary areas of investigation also include parallel computing, compilers, optimizing compilers, code generation, and automatic parallelization. His parallel computing research incorporates perspectives from operating systems and computational science, and he has included themes such as schemes and data-flow analysis in his compiler studies.
His optimizing compiler research focuses on automatic programming and how it relates to Java, parallel processing, multiprocessing, and high-level programming languages. His code generation studies incorporate computer architecture, CUDA, programming paradigms, and general-purpose computing on graphics processing units (GPGPU). His automatic parallelization research relies on both program transformation and multi-core processors.
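As a concrete illustration of the kind of code behind these topics, the sketch below shows a minimal OpenMP-annotated loop in C, the sort of shared-memory parallel program that an automatic parallelizer or an OpenMP-to-GPU compiler framework would take as input. The loop and variable names are illustrative assumptions, not code from his publications.

    #include <stdio.h>

    #define N 1000000   /* illustrative problem size */

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        /* Serial initialization of the input array. */
        for (int i = 0; i < N; i++)
            b[i] = (double)i;

        /* OpenMP work-sharing loop with a reduction: every iteration is
           independent, so the runtime can spread iterations across CPU
           threads; an OpenMP-to-GPU translator could map the same loop
           onto GPU threads instead. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 2.0 * b[i];
            sum += a[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop iterations are distributed across CPU threads; without the flag the pragma is ignored and the program still runs serially.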
This overview was generated by a machine learning system which analysed the scientist's body of work.
OpenMP to GPGPU: a compiler framework for automatic translation and optimization
Seyong Lee;Seung-Jai Min;Rudolf Eigenmann.
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (2009)
Parallel programming with Polaris
W. Blume;R. Doallo;R. Eigenmann;J. Grout.
IEEE Computer (1996)
Automatic program parallelization
U. Banerjee;R. Eigenmann;A. Nicolau;D.A. Padua.
Proceedings of the IEEE (1993)
OpenMPC: Extended OpenMP Programming and Tuning for GPUs
Seyong Lee;Rudolf Eigenmann.
IEEE International Conference on High Performance Computing, Data and Analytics (2010)
SPEComp: A New Benchmark Suite for Measuring Parallel Computer Performance
Vishal Aslot;Max J. Domeika;Rudolf Eigenmann;Greg Gaertner.
International Workshop on OpenMP (2001)
Cetus: A Source-to-Source Compiler Infrastructure for Multicores
C. Dave;Hansang Bae;Seung-Jai Min;Seyong Lee.
IEEE Computer (2009)
Cetus – An Extensible Compiler Infrastructure for Source-to-Source Transformation
Sang-Ik Lee;Troy A. Johnson;Rudolf Eigenmann.
Languages and Compilers for Parallel Computing (2003)
Fast and Effective Orchestration of Compiler Optimizations for Automatic Performance Tuning
Zhelong Pan;Rudolf Eigenmann.
Symposium on Code Generation and Optimization (2006)
Performance analysis of parallelizing compilers on the Perfect Benchmarks programs
W. Blume;R. Eigenmann.
IEEE Transactions on Parallel and Distributed Systems (1992)