
H-Index & Metrics

Discipline: Computer Science
D-index (Discipline H-index): 53
Citations: 11,262
Publications: 355
World Ranking: 2501
National Ranking: 29

Research.com Recognitions

Awards & Achievements

2017 - ACM-IEEE CS Ken Kennedy Award, for his contributions to programming models and performance analysis tools for High Performance Computing.

Overview

What is he best known for?

The fields of study he is best known for:

  • Operating system
  • Programming language
  • Artificial intelligence

Jesús Labarta mostly deals with Parallel computing, Programming paradigm, Scheduling, Distributed computing and Supercomputer. His Parallel computing study incorporates themes from Thread, Compiler and Software portability. The various areas that he examines in his Programming paradigm study include Computer architecture, Theory of computation, Multi-core processor and Programmer.

His studies in Scheduling integrate themes in fields like Runtime library, Page fault, Demand paging, Page replacement algorithm and Source code. His Distributed computing research includes elements of Range, Dynamic priority scheduling, Job shop scheduling and Fair-share scheduling. His Supercomputer research integrates issues from Software, Embedded system and Computer engineering.

His most cited works include:

  • The International Exascale Software Project roadmap (580 citations)
  • OmpSs: A proposal for programming heterogeneous multi-core architectures (441 citations)
  • CellSs: a programming model for the cell BE architecture (269 citations)

What are the main themes of his work throughout his whole career to date?

Jesús Labarta focuses on Parallel computing, Distributed computing, Programming paradigm, Scheduling and Supercomputer. Jesús Labarta interconnects Scalability, Compiler and Programmer in the investigation of issues within Parallel computing. His work investigates the relationship between Distributed computing and topics such as Instrumentation that intersect with problems in Computer engineering.

He combines subjects such as Computer architecture, Task, Asynchronous communication and Runtime system with his study of Programming paradigm. Jesús Labarta works in the field of Scheduling, namely Processor scheduling. Jesús Labarta integrates many fields, such as Supercomputer and Efficient energy use, in his works.

He most often published in these fields:

  • Parallel computing (48.23%)
  • Distributed computing (29.08%)
  • Programming paradigm (24.59%)

What were the highlights of his more recent work (2014-2021)?

In recent papers he was focusing on the following fields of study:

His primary areas of study are Parallel computing, Distributed computing, Programming paradigm, Scalability and Supercomputer. His Parallel computing study covers Implementation, which intersects with Volume. His Distributed computing research is multidisciplinary, incorporating perspectives from Workload, Scheduling, Instrumentation and Source code.

His research on Programming paradigm concerns the broader field of Programming language. His Scalability study incorporates themes from Algorithm, Relevance and Task parallelism. His study in Supercomputer is interdisciplinary in nature, drawing from Node, Real-time computing and Parallel processing.

Between 2014 and 2021, his most popular works were:

  • The mont-blanc prototype: an alternative approach for HPC systems (50 citations)
  • PyCOMPSs: Parallel computational workflows in Python (45 citations)
  • On the Behavior of Convolutional Nets for Feature Extraction (41 citations)

In his most recent research, the most cited papers focused on:

  • Operating system
  • Programming language
  • Artificial intelligence

Parallel computing, Distributed computing, Supercomputer, Programming paradigm and Scalability are his primary areas of study. He has researched Parallel computing in several fields, including Solver and Implementation. His Distributed computing study integrates concerns from other disciplines, such as System software, FIFO, Programmer, Node and Scheduling.

His work in Supercomputer addresses issues such as Parallel processing, which are connected to fields such as Big data, Parallelism, Cluster and Software deployment. The subject of his Programming paradigm research is within the realm of Programming language. His Scalability study combines topics in areas such as Schedule, Software, Task parallelism and Task.

This overview was generated by a machine learning system which analysed the scientist’s body of work.

Best Publications

The International Exascale Software Project roadmap

Jack Dongarra;Pete Beckman;Terry Moore;Patrick Aerts.
ieee international conference on high performance computing data and analytics (2011)

802 Citations

OmpSs: A proposal for programming heterogeneous multi-core architectures

Alejandro Duran;Eduard Ayguadé;Rosa M. Badia;Jesús Labarta.
Parallel Processing Letters (2011)

735 Citations

CellSs: a programming model for the cell BE architecture

Pieter Bellens;Josep M. Perez;Rosa M. Badia;Jesus Labarta.
conference on high performance computing (supercomputing) (2006)

459 Citations

PARAVER: A Tool to Visualize and Analyze Parallel Code

Vincent Pillet;Jesús Labarta;Toni Cortes;Sergi Girona.
(2007)

419 Citations

A Framework for Performance Modeling and Prediction

Allan Snavely;Laura Carrington;Nicole Wolter;Jesus Labarta.
conference on high performance computing (supercomputing) (2002)

365 Citations

A dependency-aware task-based programming environment for multi-core architectures

J.M. Perez;R.M. Badia;J. Labarta.
international conference on cluster computing (2008)

330 Citations

Hierarchical Task-Based Programming With StarSs

Judit Planas;Rosa M. Badia;Eduard Ayguadé;Jesus Labarta.
ieee international conference on high performance computing data and analytics (2009)

272 Citations

An Extension of the StarSs Programming Model for Platforms with Multiple GPUs

Eduard Ayguadé;Rosa M. Badia;Francisco D. Igual;Jesús Labarta.
european conference on parallel processing (2009)

218 Citations

DiP: A Parallel Program Development Environment

Jesús Labarta;Sergi Girona;Vincent Pillet;Toni Cortes.
european conference on parallel processing (1996)

217 Citations

Productive Programming of GPU Clusters with OmpSs

Javier Bueno;Judit Planas;Alejandro Duran;Rosa M. Badia.
international parallel and distributed processing symposium (2012)

201 Citations


Best Scientists Citing Jesús Labarta

Jack Dongarra
University of Tennessee at Knoxville
Publications: 110

Eduard Ayguadé
Barcelona Supercomputing Center
Publications: 95

Rosa M. Badia
Barcelona Supercomputing Center
Publications: 92

Enrique S. Quintana-Ortí
Universitat Politècnica de València
Publications: 80

Mateo Valero
Barcelona Supercomputing Center
Publications: 62

Xavier Martorell
Barcelona Supercomputing Center
Publications: 60

Dimitrios S. Nikolopoulos
Virginia Tech
Publications: 60

Martin Schulz
Technical University of Munich
Publications: 47

Torsten Hoefler
ETH Zurich
Publications: 40

Barbara Chapman
Stony Brook University
Publications: 40

Jeffrey S. Vetter
Oak Ridge National Laboratory
Publications: 39

Frank Mueller
North Carolina State University
Publications: 34

Bronis R. de Supinski
Lawrence Livermore National Laboratory
Publications: 33

Allan Snavely
University of California, San Diego
Publications: 33

Allen D. Malony
University of Oregon
Publications: 32

George Bosilca
University of Tennessee at Knoxville
Publications: 32
