The scientist’s investigation covers issues in Parallel computing, Linear algebra, Iterative refinement, HPC Challenge Benchmark and Supercomputer. His studies in Parallel computing integrate themes in fields like Scalability and Matrix multiplication. His Linear algebra research includes themes of Hybrid system, Algorithm, Numerical linear algebra and Cholesky decomposition.
His study of Iterative refinement is interdisciplinary in nature, drawing from Floating point, Double-precision floating-point format and Single-precision floating-point format. His HPC Challenge Benchmark study integrates concerns from other disciplines, such as Petascale computing, TOP500 and Locality of reference. His Supercomputer research is multidisciplinary, incorporating elements of Programming language, Task and Distributed memory systems.
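The interplay of single- and double-precision arithmetic in this line of work follows the classic mixed-precision iterative-refinement pattern: solve cheaply in single precision, then refine the solution with residuals computed in double precision. The following is a minimal NumPy sketch of that idea, not the authors' tuned implementation; the function name and iteration count are illustrative assumptions.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve A x = b by mixed-precision iterative refinement (illustrative sketch).

    The system is first solved in cheap single precision; the answer is then
    refined using residuals accumulated in full double precision.
    """
    # Low-precision copy used for the (cheap) solves.
    A32 = A.astype(np.float32)
    # Initial solve entirely in single precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        # Residual computed in full double precision.
        r = b - A @ x
        # Correction solved in single precision against the same matrix.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For well-conditioned systems, a handful of refinement steps recovers double-precision accuracy while the expensive solves run at single-precision speed, which is the performance argument the mixed-precision papers below make.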
His main research concerns Parallel computing, Linear algebra, Multi-core processor, Software and Numerical linear algebra. His study of CUDA is a part of Parallel computing.
His work in Multi-core processor addresses subjects such as Factorization, which connect to System of linear equations. His Software study combines topics from a wide range of disciplines, such as Numerical analysis and Software engineering. Piotr Luszczek also works on LU decomposition, focusing on the Pivot element and, often, Gaussian elimination.
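LU decomposition with a pivot element chosen by partial pivoting is the standard stable form of Gaussian elimination. Below is a textbook NumPy sketch of that factorization for square matrices, included for illustration only; it is not the tuned library kernel these projects ship.

```python
import numpy as np

def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting (textbook sketch).

    Returns P, L, U with P @ A = L @ U, L unit lower triangular,
    U upper triangular.
    """
    n = A.shape[0]
    U = A.astype(np.float64).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Pivot: bring the largest-magnitude entry in column k to the diagonal.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate the entries below the pivot.
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], U[k, k + 1:])
        U[k + 1:, k] = 0.0
    return P, L, U
```

Row-wise pivoting keeps the multipliers in L bounded by 1 in magnitude, which is what makes the elimination numerically reliable in floating point.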
His scientific interests lie mostly in Parallel computing, Linear algebra, Multi-core processor, Software and Supercomputer. His Parallel computing study combines topics in areas such as Scalability, Programming paradigm and Computational science. His Linear algebra research integrates issues from Xeon Phi, Scheduling, Matrix and Cholesky decomposition.
His Multi-core processor research includes elements of Coprocessor, Computer engineering and Generalized minimal residual method. His Software studies deal with areas such as Data type, Singular value decomposition, Solver and Profiling. His Supercomputer research is multidisciplinary, drawing on Machine learning, Ranking, Software engineering and Arithmetic.
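Cholesky decomposition, a recurring topic in the Linear algebra work above, factors a symmetric positive-definite matrix as A = L Lᵀ. A minimal column-by-column NumPy sketch of the algorithm (for illustration; production libraries use blocked, parallel variants):

```python
import numpy as np

def cholesky_lower(A):
    """Return lower-triangular L with A = L @ L.T for symmetric
    positive-definite A (unblocked textbook sketch)."""
    n = A.shape[0]
    L = np.zeros((n, n), dtype=np.float64)
    for j in range(n):
        # Diagonal entry: subtract contributions of earlier columns.
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        # Entries below the diagonal in column j.
        L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
    return L
```

The blocked versions in libraries such as PLASMA and MAGMA apply this same recurrence to matrix tiles, which is what lets the factorization scale across multi-core CPUs and GPUs.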
Piotr Luszczek mostly deals with Parallel computing, Linear algebra, Multi-core processor, Computation and Computational science. His research integrates issues of Artificial neural network and Software portability in his study of Parallel computing. The concepts of his Linear algebra study are interwoven with issues in Xeon Phi, Matrix, Cholesky decomposition, Porting and Scheduling.
His Multi-core processor research incorporates themes from Multithreading, Coprocessor, Theory of computation and Programming paradigm. His Computation study combines topics in areas such as Divide and conquer algorithms, Pipeline, Singular value decomposition, Instruction set and Generalized minimal residual method. In Computational science, his studies deal with areas such as Floating point, IEEE floating point, Computer engineering and Benchmarks.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
The LINPACK Benchmark: past, present and future
Jack J. Dongarra;Piotr Luszczek;Antoine Petitet.
Concurrency and Computation: Practice and Experience (2003)
Numerical linear algebra on emerging architectures: The PLASMA and MAGMA projects
Emmanuel Agullo;Jim Demmel;Jack Dongarra;Bilel Hadri.
Journal of Physics: Conference Series (2009)
From CUDA to OpenCL: Towards a performance-portable solution for multi-platform GPU programming
Peng Du;Rick Weber;Piotr Luszczek;Stanimire Tomov.
Parallel Computing (2012)
The HPC Challenge (HPCC) benchmark suite
Piotr R Luszczek;David H Bailey;Jack J Dongarra;Jeremy Kepner.
Conference on High Performance Computing (Supercomputing) (2006)
Introduction to the HPC Challenge Benchmark Suite
Piotr Luszczek;Jack J. Dongarra;David Koester;Rolf Rabenseifner.
SC2005, Seattle, WA, Nov 12-18, 2005 (2005)
Measuring Energy and Power with PAPI
Vincent M. Weaver;Matt Johnson;Kiran Kasichayanula;James Ralph.
International Conference on Parallel Processing (2012)
Accelerating Scientific Computations with Mixed Precision Algorithms
Marc Baboulin;Alfredo Buttari;Jack J. Dongarra;Jakub Kurzak.
Computer Physics Communications (2009)
The impact of multicore on math software
Alfredo Buttari;Jack Dongarra;Jakub Kurzak;Julien Langou.
Parallel Computing (2006)
Exploiting the performance of 32 bit floating point arithmetic in obtaining 64 bit accuracy (revisiting iterative refinement for linear systems)
Julie Langou;Julien Langou;Piotr Luszczek;Jakub Kurzak.
Conference on High Performance Computing (Supercomputing) (2006)
Mixed Precision Iterative Refinement Techniques for the Solution of Dense Linear Systems
Alfredo Buttari;Jack Dongarra;Julie Langou;Julien Langou.
IEEE International Conference on High Performance Computing, Data and Analytics (2007)
University of Tennessee at Knoxville
Sandia National Laboratories
Oak Ridge National Laboratory
University of Manchester
University of California, Berkeley
École Normale Supérieure de Lyon
MIT
University of California, Davis
Beihang University
University of Stuttgart
Sichuan University
Chimie ParisTech
University of Kentucky
University of Maryland, College Park
University of Wisconsin–Madison
University of Oklahoma
University of California, San Francisco
Northern Arizona University
University of Michigan–Ann Arbor
Texas A&M University
University of Southern California
University of Cagliari
University of Akron
University of Tokyo