The scientist’s investigation covers issues in Parallel computing, Programming language, Fortran, Shared memory and Multi-core processor. Her Parallel computing research incorporates themes from Compiler and Computational science. Her Programming language research is multidisciplinary, drawing on SPMD, Task parallelism and Asynchronous communication.
Barbara Chapman combines subjects such as Uniform memory access and Distributed shared memory with her study of Shared memory. Her Multi-core processor research integrates issues from Node, Petascale computing, Software and Computer architecture. Her Fortran study spans a wide range of topics, including Interface, Source code, Troubleshooting, Software engineering and Language construct.
Her primary scientific interests are in Parallel computing, Compiler, Programming paradigm, Programming language and Shared memory. Her Benchmark, Supercomputer, Multi-core processor and CUDA study, which is part of a larger body of work in Parallel computing, is frequently linked to General-purpose computing on graphics processing units, bridging the gap between disciplines. Her Compiler study covers Thread, which intersects with Multiprocessing and CPU cache.
Her Programming paradigm research is multidisciplinary, incorporating perspectives in Computer architecture, Software, Porting and Distributed computing. Her research in Programming language intersects with topics in SPMD and Asynchronous communication. Her study in the field of Distributed memory is also linked to topics like Locality.
Barbara Chapman spends much of her time researching Parallel computing, Programming paradigm, Compiler, Distributed computing and Benchmark. Within the general subject of Parallel computing, her work on CUDA and Shared memory is often linked to General-purpose computing on graphics processing units, thereby combining diverse domains of study. Her Programming paradigm research incorporates elements of Software, Porting, Directive and Interface.
Her study of Compiler integrates issues of Supercomputer, Overhead, Memory management, Multi-core processor and Speedup. Barbara Chapman has researched Supercomputer in several fields, including Software portability and Software engineering. The concepts of her Distributed computing study are interwoven with issues in Scheduling, Instruction cycle, Scalability and Asynchronous communication.
Barbara Chapman focuses on Parallel computing, Compiler, Programming paradigm, Benchmark and Partitioned global address space. In her articles, Barbara Chapman combines various disciplines, including Parallel computing and General-purpose computing on graphics processing units. Her Compiler research is included under the broader classification of Programming language.
Her Programming paradigm study incorporates disciplines such as Data transmission, Initialization, Directive, Message Passing Interface and Kernel. Her research in Benchmark tackles topics such as System software, which is related to areas like Uniform memory access. In her research, Distributed computing, Scheme and Fault tolerance are intimately related to Software, which falls under the overarching field of Partitioned global address space.
This overview was generated by a machine learning system that analysed the scientist’s body of work.
Using OpenMP: Portable Shared Memory Parallel Programming
Barbara Chapman;Gabriele Jost;Ruud van der Pas.
(2007)
Supercompilers for parallel and vector computers
Hans Zima;Barbara Chapman.
(1990)
The International Exascale Software Project roadmap
Jack Dongarra;Pete Beckman;Terry Moore;Patrick Aerts.
IEEE International Conference on High Performance Computing, Data and Analytics (2011)
Professional CUDA C Programming
John Cheng;Max Grossman;Ty McKercher;Barbara Chapman.
(2014)
Programming in Vienna Fortran
Barbara Chapman;Piyush Mehrotra;Hans Zima.
Scientific Programming (1992)
High performance computing using MPI and OpenMP on multi-core parallel systems
Haoqiang Jin;Dennis Jespersen;Piyush Mehrotra;Rupak Biswas.
Parallel Computing (2011)
Introducing OpenSHMEM: SHMEM for the PGAS community
Barbara Chapman;Tony Curtis;Swaroop Pophale;Stephen Poole.
Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model (2010)
Compiling for distributed-memory systems
Hans P. Zima;Barbara M. Chapman.
Proceedings of the IEEE (1993)
OpenUH: an optimizing, portable OpenMP compiler
Chunhua Liao;Oscar R. Hernandez;Barbara M. Chapman;Wenguang Chen.
Concurrency and Computation: Practice and Experience (2007)
Lawrence Livermore National Laboratory
University of Oregon
University of Delaware
University of Tennessee at Knoxville
Tsinghua University
St. Francis Xavier University
Georgia Institute of Technology
Oak Ridge National Laboratory
Barcelona Supercomputing Center
Virginia Tech
Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking d-index is inferred from publications deemed to belong to the discipline under consideration.