2023 - Member of the National Academy of Sciences
2023 - Research.com Computer Science in United States Leader Award
2021 - A. M. Turing Award
2019 - Fellow of the Royal Society, United Kingdom
2019 - SIAM/ACM Prize in Computational Science and Engineering, for his key role in the development of software and software standards, software repositories, and performance and benchmarking software, and in community efforts to prepare for the challenges of exascale computing, especially in adapting linear algebra infrastructure to emerging architectures.
2013 - ACM-IEEE CS Ken Kennedy Award, for influential contributions to mathematical software, performance measurement, and parallel programming, and for significant leadership and service within the HPC community.
2009 - SIAM Fellow, for contributions to numerical linear algebra, including EISPACK, LINPACK, and LAPACK, and to high-performance computing.
2001 - Member of the National Academy of Engineering, for contributions to numerical software, parallel and distributed computation, and problem-solving environments.
2001 - ACM Fellow, for contributions in the field of scientific computing, the development of mathematical software, parallel methods, and enabling technologies for high-performance computing.
2000 - IEEE Fellow, for contributions and leadership in the field of computational mathematics.
1994 - Fellow of the American Association for the Advancement of Science (AAAS)
His primary scientific interests are parallel computing, linear algebra, distributed computing, software, and supercomputing. Within parallel computing, his work deals in particular with distributed memory. His linear algebra research integrates linear systems, numerical linear algebra, the Basic Linear Algebra Subprograms (BLAS), algorithms, and iterative refinement.
His distributed computing research focuses on grid computing and its connection with suspension. His software research relates computational science to graphics, and his supercomputing research draws on computer architecture and concurrent computing.
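The iterative refinement mentioned above is closely associated with Dongarra's mixed-precision work: solve a linear system cheaply in low precision, then correct it using residuals computed in high precision. A minimal sketch of the idea (our own NumPy illustration, not code from any of his libraries; the function name and parameters are ours):

```python
import numpy as np

def iterative_refinement(A, b, iters=3):
    """Solve Ax = b: factor and solve in float32, refine with float64 residuals."""
    A32 = A.astype(np.float32)
    # Initial low-precision solve, promoted back to double precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                       # residual computed in double precision
        # Correction solved cheaply in single precision.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                              # apply the correction
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = iterative_refinement(A, b)
print(np.max(np.abs(A @ x - b)))  # residual shrinks toward double-precision level
```

In production libraries the single-precision solve would reuse one LU factorization across all refinement steps; `np.linalg.solve` refactors each time, so this sketch shows only the numerical idea, not the performance benefit.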
He also studies multi-core processors as a component of parallel computing, combining topics such as QR decomposition, CUDA, and shared memory. In his study of linear algebra he brings together linear systems, numerical linear algebra, distributed memory, and computational science. His distributed computing research intersects with grids, grid computing, and scheduling, and his studies of matrices integrate algorithms with eigenvalues and eigenvectors.
His investigation of matrices spans factorization, algorithms, multiplication, and preconditioners. His linear algebra study takes in the Xeon Phi, software, porting, vectorization, and numerical linear algebra; his software research connects supercomputing with distributed computing; and his computational science work examines energy consumption, including problems involving coprocessors.
Much of his recent research concerns parallel computing, matrices, linear systems, linear algebra, and computational science, including interdisciplinary work connecting parallel computing with kernels. His linear system research relies on both positive-definite matrices and algorithms, and his linear algebra work interconnects scheduling, matrix multiplication, and kernels.
This overview was generated by a machine learning system that analysed the scientist's body of work.
LINPACK Users' Guide
J. J. Dongarra;C. B. Moler;J. R. Bunch;G. W. Stewart.
PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing
Al Geist;Adam Beguelin;Jack Dongarra;Weicheng Jiang.
Computers in Physics (1995)
A set of level 3 basic linear algebra subprograms
J. J. Dongarra;Jeremy Du Croz;Sven Hammarling;I. S. Duff.
ACM Transactions on Mathematical Software (1990)
ScaLAPACK Users' Guide
L. S. Blackford;J. Choi;A. Cleary;E. D'Azevedo.
MPI: The Complete Reference
Marc Snir;Steve W. Otto;David W. Walker;Jack Dongarra.
Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation
Edgar Gabriel;Graham E. Fagg;George Bosilca;Thara Angskun.
Lecture Notes in Computer Science (2004)
Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide
James Demmel;Jack Dongarra;Axel Ruhe;Henk van der Vorst.
Automatically Tuned Linear Algebra Software
R. Clint Whaley;Jack J. Dongarra.
Conference on High Performance Computing (Supercomputing) (1998)
The LINPACK Benchmark: past, present and future
Jack J. Dongarra;Piotr Luszczek;Antoine Petitet.
Concurrency and Computation: Practice and Experience (2003)
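The LINPACK benchmark named in the last entry above measures the rate, in floating-point operations per second, at which a machine solves a dense linear system. A toy version of that measurement (our own sketch using NumPy; the real benchmark, HPL, uses a tuned distributed LU factorization):

```python
import time
import numpy as np

def linpack_toy(n=1000, seed=1):
    """Toy LINPACK-style measurement: time a dense solve, report Gflop/s and residual."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)                # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard LINPACK operation count
    residual = np.max(np.abs(A @ x - b))     # sanity check on the solution
    return flops / elapsed / 1e9, residual

gflops, resid = linpack_toy()
print(f"{gflops:.2f} Gflop/s, max residual {resid:.2e}")
```

The real benchmark also normalizes the residual by the matrix norm and problem size before declaring a run valid; this sketch keeps only the time-a-solve-and-count-flops core of the idea.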