Distributed computing, Supercomputing, Fault tolerance, Message passing and Programming paradigms are his primary areas of study. George Bosilca combines subjects such as Input/output, Threads and Tasks with his study of Distributed computing. His Supercomputing studies integrate themes from fields such as Parallel algorithms and Scalability.
His Fault tolerance research is multidisciplinary, incorporating perspectives from Concurrent computing, Processes and Overhead. In his Message passing work he concentrates on Virtual machines, intersecting with Implementation, Components, Software engineering, Software and Software architecture. His Programming paradigm study includes themes such as Theoretical computer science, POSIX Threads and Interoperability.
His primary areas of study are Distributed computing, Parallel computing, Supercomputing, Scalability and Fault tolerance. His Distributed computing research includes themes of Nodes, Network topology and Programming paradigms. He has researched Parallel computing in several fields, including Algorithms and Cholesky decomposition.
Within Scalability, George Bosilca focuses on topics pertaining to Software and sometimes addresses concerns connected to Software engineering. His Fault tolerance study incorporates themes from Sets, Overhead and Protocols. His Message passing studies deal with areas such as Virtual machines and Operating systems.
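The Cholesky decomposition mentioned above can be sketched in a few lines. This is an illustrative serial version only; the research in question targets tiled, task-parallel variants of such factorizations:

```python
# Illustrative serial Cholesky factorization: A = L @ L.T for a
# symmetric positive-definite matrix A, given as a list of rows.
# A toy sketch of the numerics, not the parallel algorithms studied.
import math

def cholesky(a):
    """Return the lower-triangular L with A = L L^T."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: sqrt of the remaining pivot
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                # Off-diagonal entry: scaled by the earlier pivot
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)  # L = [[2.0, 0.0], [1.0, sqrt(2)]]
```

Tiled parallel versions partition A into blocks and express the same arithmetic as a DAG of block-level tasks, which is the formulation runtimes like the ones described here schedule.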
George Bosilca focuses on Parallel computing, Distributed computing, Supercomputing, Scalability and the Message Passing Interface. Within Parallel computing, his work on Exascale computing, Runtime systems and Multi-core processors often relates to PaRSEC, thereby connecting several areas of interest. He is also interested in Fault tolerance, a branch of Distributed computing.
His Supercomputing study combines topics in areas such as Computer engineering, Tasks, Sparse matrices, Parallel programming models and Computation. His Scalability research incorporates elements of Context switching, Embedded systems, Nodes, Execution models and Software. His Message Passing Interface research includes elements of Consistency, Interfaces, Programming paradigms and Resilience.
His primary areas of investigation include Distributed computing, Scalability, the Message Passing Interface, Overhead and Programming paradigms. His study in Distributed computing focuses on Fault tolerance in particular. His Fault tolerance research spans a wide range of topics, including Network topology, Software, Overhead and Fault management.
In investigating issues within Scalability, George Bosilca interconnects Supercomputing, Computer engineering, Group method of data handling, Tasks and Nodes. His Message Passing Interface study integrates concerns from other disciplines, such as Interfaces and Resilience. His research investigates the connection between Programming paradigms and topics such as Threads that intersect with issues in Parallel computing.
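The algorithm-based fault tolerance theme above can be illustrated with a small checksum sketch. This is a toy example assuming a single corrupted element at a known position; the published techniques embed such checksums in matrix rows and columns during dense factorizations:

```python
# Toy algorithm-based fault tolerance (ABFT): append a checksum to a
# block of data so that one corrupted element can be reconstructed
# from the checksum and the surviving elements.

def encode(block):
    """Append a checksum element equal to the sum of the block."""
    return block + [sum(block)]

def repair(encoded, bad_index):
    """Recompute element `bad_index` from the checksum and the rest."""
    checksum = encoded[-1]
    data = encoded[:-1]
    # The lost value is the checksum minus the sum of the intact values.
    data[bad_index] = checksum - sum(
        v for i, v in enumerate(data) if i != bad_index
    )
    return data

block = [3.0, 1.0, 4.0, 1.0]
enc = encode(block)   # [3.0, 1.0, 4.0, 1.0, 9.0]
enc[2] = -999.0       # simulate a fault in element 2
print(repair(enc, 2)) # [3.0, 1.0, 4.0, 1.0]
```

In a real ABFT scheme, comparing the stored checksum against a recomputed one is what detects the fault in the first place; the recovery step then proceeds as above.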
This overview was generated by a machine learning system which analysed the scientist's body of work.
Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation
Edgar Gabriel;Graham E. Fagg;George Bosilca;Thara Angskun.
Lecture Notes in Computer Science (2004)
MPICH-V: Toward a Scalable Fault Tolerant MPI for Volatile Nodes
George Bosilca;Aurelien Bouteiller;Franck Cappello;Samir Djilali.
Conference on High Performance Computing (Supercomputing) (2002)
DAGuE: A generic distributed DAG engine for High Performance Computing
George Bosilca;Aurelien Bouteiller;Anthony Danalis;Thomas Herault.
Parallel Computing (2012)
Performance analysis of MPI collective operations
Jelena Pješivac-Grbović;Thara Angskun;George Bosilca;Graham E. Fagg.
Cluster Computing (2007)
PaRSEC: Exploiting Heterogeneity to Enhance Scalability
George Bosilca;Aurelien Bouteiller;Anthony Danalis;Mathieu Faverge.
Computational Science and Engineering (2013)
Algorithm-based fault tolerance applied to high performance computing
George Bosilca;Rémi Delmas;Jack Dongarra;Julien Langou.
Journal of Parallel and Distributed Computing (2009)
Post-failure recovery of MPI communication capability: Design and rationale
Wesley Bland;Aurelien Bouteiller;Thomas Herault;George Bosilca.
IEEE International Conference on High Performance Computing, Data and Analytics (2013)
Open MPI: A High-Performance, Heterogeneous MPI
R.L. Graham;G.M. Shipman;B.W. Barrett;R.H. Castain.
International Conference on Cluster Computing (2006)
Algorithm-based fault tolerance for dense matrix factorizations
Peng Du;Aurelien Bouteiller;George Bosilca;Thomas Herault.
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (2012)
Flexible Development of Dense Linear Algebra Algorithms on Massively Parallel Architectures with DPLASMA
George Bosilca;Aurelien Bouteiller;Anthony Danalis;Mathieu Faverge.
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (2011)