2020 - ACM Senior Member
2019 - ACM Gordon Bell Prize for "A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations"
His main research concerns distributed computing, parallel computing, the Message Passing Interface (MPI), scalability, and supercomputing. Within distributed computing, Torsten Hoefler focuses on message passing in particular. His parallel computing research includes elements of conjugate gradient solvers, computation, stencil codes, and programming paradigms, and his study of MPI also reaches into related disciplines.
His primary scientific interests are distributed computing, parallel computing, scalability, computer networks, and network topology. His distributed computing research includes themes of implementation, computation, and InfiniBand, while his parallel computing work concentrates on programming paradigms and, on occasion, theoretical computer science. His scalability studies combine computer architecture with supercomputing; within computer networks, his main interests are network packets and multipath routing; and his MPI research frequently connects to adjacent fields such as interfaces.
His primary areas of study are parallel computing, artificial intelligence, scalability, graphs, and deep learning. Within artificial intelligence he concentrates on machine learning, often in connection with sets. His scalability research intersects with computer architecture, compilers, middleware, distributed computing, and latency. His compiler work integrates concerns from supercomputing and solvers; his distributed computing studies draw on interconnects, the Ethernet protocol, and interoperability; and his graph research narrows to graph theory, heuristics, and algorithms.
His primary areas of investigation include artificial intelligence, sets, deep learning, artificial neural networks, and compilers. In artificial intelligence he focuses on machine learning and, on occasion, scalability and forecast skill. His research on sets incorporates perspectives from software, reusability, sparse matrices, distributed algorithms, and systems on a chip. His deep learning work incorporates themes from stochastic gradient descent, convergence, asynchronous communication, training, and phrases; his compiler studies bring together topics such as source lines of code and code; and his work on parallel computing extends to related topics such as stencils.
This overview was generated by a machine learning system that analysed the scientist's body of work.
Demystifying Parallel and Distributed Deep Learning: An In-depth Concurrency Analysis
Tal Ben-Nun;Torsten Hoefler.
ACM Computing Surveys (2019)
Characterizing the Influence of System Noise on Large-Scale Applications by Simulation
Torsten Hoefler;Timo Schneider;Andrew Lumsdaine.
IEEE International Conference on High Performance Computing, Data and Analytics (2010)
Generic topology mapping strategies for large-scale parallel architectures
Torsten Hoefler;Marc Snir.
International Conference on Supercomputing (2011)
The PERCS High-Performance Interconnect
Baba Arimilli;Ravi Arimilli;Vicente Chung;Scott Clark.
High Performance Interconnects (2010)
Slim Fly: a cost effective low-diameter network topology
Maciej Besta;Torsten Hoefler.
IEEE International Conference on High Performance Computing, Data and Analytics (2014)
Implementation and performance analysis of non-blocking collective operations for MPI
Torsten Hoefler;Andrew Lumsdaine;Wolfgang Rehm.
Conference on High Performance Computing (Supercomputing) (2007)
The Convergence of Sparsified Gradient Methods
Dan Alistarh;Torsten Hoefler;Mikael Johansson;Nikola Konstantinov.
Neural Information Processing Systems (2018)
Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results
Torsten Hoefler;Roberto Belli.
IEEE International Conference on High Performance Computing, Data and Analytics (2015)
LogGOPSim: simulating large-scale applications in the LogGOPS model
Torsten Hoefler;Timo Schneider;Andrew Lumsdaine.
High Performance Distributed Computing (2010)
Enabling highly scalable remote memory access programming with MPI-3 one sided
Robert Gerstenberger;Maciej Besta;Torsten Hoefler.
Communications of the ACM (2018)