John Shalf's primary areas of study are Parallel computing, Supercomputer, Multi-core processor, Grid computing and Distributed computing. His work on Cache, as part of his general Parallel computing research, is frequently linked to Sparse matrix, connecting diverse disciplines of science. His Supercomputer study also incorporates Dennard scaling, Computer cluster, Scalability, Microprocessor and Benchmark.
His Microprocessor research focuses on Parallelism and its connection with Exascale computing. His Multi-core processor research includes themes of Computer architecture and Key. His research spans a wide range of topics, including Data type, System software, Data parallelism and Task parallelism.
His main research concerns Parallel computing, Supercomputer, Distributed computing, Computer architecture and Software. He combines subjects such as Scalability, Computation and Stencil with his study of Parallel computing. His Supercomputer research incorporates themes from Microprocessor, Programming paradigm and Benchmark.
His research in Distributed computing intersects with Visualization and Connectionless communication. John Shalf has researched Visualization in several fields, including Computational science, Adaptive mesh refinement, Computer graphics and Data science. His Computer architecture research is multidisciplinary, drawing on Field-programmable gate array, Embedded system and Efficient energy use.
His investigation covers issues in Software, Computer architecture, Photonics, Parallel computing and Supercomputer. His Software architecture research, part of his broader Software study, is frequently linked to other disciplines such as Solid-state, Design space and Reuse, creating a link between diverse domains of science. His Computer architecture study combines topics such as Design space exploration, Efficient energy use, Resource, Field-programmable gate array and Systems architecture.
Within Parallel computing, his studies deal with areas such as Random number generation, Asynchronous communication and Adaptive mesh refinement. His Supercomputer research is multidisciplinary, incorporating elements of Distributed database, Energy consumption, Technology roadmap, Software portability and Data science. In his study, Distributed computing is closely linked to Parallel processing, which falls within the broad field of Solid-state drive.
John Shalf mostly deals with Scalability, Computation, Distributed computing, Photonics and Computer architecture. His Scalability study integrates concerns from other disciplines, such as Semantics, Instruction set, Fat tree and Encoding. His work deals with themes such as Provisioning, Byte, Logic gate and FLOPS, which intersect with Computation.
Within Distributed computing, his Supercomputer research is closely connected to Memory hierarchy. John Shalf focuses mostly on the field of Photonics, narrowing it down to topics relating to Interconnection and, in certain cases, Efficient energy use, Optical switch, Data center, Bandwidth and Latency. He has included themes like Field-programmable gate array, Circuit switching and Tapering in his Computer architecture study.
This overview was generated by a machine learning system that analysed the scientist's body of work.
The Landscape of Parallel Computing Research: A View from Berkeley
Krste Asanovic;Ras Bodik;Bryan Christopher Catanzaro;Joseph James Gebis.
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
Samuel Williams;Leonid Oliker;Richard Vuduc;John Shalf.
Conference on High Performance Computing (Supercomputing) (2007)
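The kernel studied in this paper, sparse matrix-vector multiplication, can be illustrated with a minimal sketch using the common CSR (compressed sparse row) storage format. This is an illustrative example only; the function and variable names are not from the paper's code, and the paper's actual contribution is optimizing this kernel for multicore hardware.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR format.

    values  -- nonzero entries of A, listed row by row
    col_idx -- column index of each nonzero
    row_ptr -- offset of each row's first nonzero (length = nrows + 1)
    """
    nrows = len(row_ptr) - 1
    y = [0.0] * nrows
    for i in range(nrows):
        # Accumulate the dot product of row i with x, touching only nonzeros.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x2 example: A = [[10, 0], [3, 4]], x = [1, 2]
print(spmv_csr([10.0, 3.0, 4.0], [0, 0, 1], [0, 1, 3], [1.0, 2.0]))
# -> [10.0, 11.0]
```

The irregular, indirect access to `x` through `col_idx` is what makes this kernel memory-bound and sensitive to cache behavior, which is why it is a frequent optimization target.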
The International Exascale Software Project roadmap
Jack Dongarra;Pete Beckman;Terry Moore;Patrick Aerts.
IEEE International Conference on High Performance Computing Data and Analytics (2011)
Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud
Keith R. Jackson;Lavanya Ramakrishnan;Krishna Muriki;Shane Canon.
IEEE International Conference on Cloud Computing Technology and Science (2010)
Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures
Kaushik Datta;Mark Murphy;Vasily Volkov;Samuel Williams.
IEEE International Conference on High Performance Computing Data and Analytics (2008)
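A stencil computation, the subject of this paper, updates each grid point from a fixed pattern of neighbors. A minimal sketch of one sweep of a 2D 5-point Jacobi stencil follows; the names are illustrative, and the paper itself is concerned with tuning such loops (blocking, vectorization, etc.) for multicore architectures rather than this naive form.

```python
def jacobi_step(grid):
    """One sweep of a 2D 5-point Jacobi stencil: each interior point
    becomes the average of its four neighbors; boundaries are held fixed."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # copy so all reads see the old grid
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

g = [[1.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 1.0]]
print(jacobi_step(g)[1][1])  # -> 1.0 (average of the four neighbors)
```

Each update does few arithmetic operations per grid point loaded, so stencil performance is dominated by memory traffic, which is what auto-tuning of blocking strategies targets.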
Exascale computing technology challenges
John Shalf;Sudip Dosanjh;John Morrison.
IEEE International Conference on High Performance Computing Data and Analytics (2010)
The potential of the cell processor for scientific computing
Samuel Williams;John Shalf;Leonid Oliker;Shoaib Kamil.
Computing Frontiers (2006)
The cactus framework and toolkit: design and applications
Tom Goodale;Gabrielle Allen;Gerd Lanfermann;Joan Masso.
IEEE International Conference on High Performance Computing Data and Analytics (2002)
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Kaushik Datta;Shoaib Kamil;Samuel Williams;Leonid Oliker.
SIAM Review (2009)
An auto-tuning framework for parallel multicore stencil computations
Shoaib Kamil;Cy Chan;Leonid Oliker;John Shalf.
International Parallel and Distributed Processing Symposium (2010)
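The core idea of empirical auto-tuning, as in frameworks like the one above, is to time a kernel over a space of candidate parameters and keep the fastest. The following is a hypothetical miniature under that assumption; the toy kernel and all names are illustrative, not part of the framework described in the paper.

```python
import time

def blocked_sum(data, block):
    """Toy kernel: sum `data` in chunks of `block` elements.
    Stands in for a real kernel whose speed depends on a tuning parameter."""
    total = 0.0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(data, candidates, trials=3):
    """Time the kernel at each candidate block size and return the fastest,
    mimicking the empirical search an auto-tuner performs."""
    timings = {}
    for block in candidates:
        best = float("inf")
        for _ in range(trials):  # take the best of several runs to reduce noise
            t0 = time.perf_counter()
            blocked_sum(data, block)
            best = min(best, time.perf_counter() - t0)
        timings[block] = best
    return min(timings, key=timings.get)

data = list(range(100_000))
best_block = autotune(data, [64, 512, 4096])
print("fastest block size:", best_block)
```

Real auto-tuners search far larger parameter spaces (blocking in multiple dimensions, unrolling, prefetching) and often generate code variants, but the measure-and-select loop is the same.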