D-Index & Metrics

D-index (Discipline H-index) only includes papers and citation values for the examined discipline, in contrast to the General H-index, which accounts for publications across all disciplines.

Discipline: Computer Science
D-index: 30
Citations: 7,168
Publications: 83
World Ranking: 10025
National Ranking: 4503
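
For reference, the D-index described above is the standard H-index restricted to one discipline's papers: the largest h such that h of those papers have at least h citations each. Below is a minimal, illustrative Python sketch under that assumption; the paper records, discipline tags and numbers are hypothetical and not drawn from this profile.

    # Minimal sketch of a discipline-restricted H-index (D-index).
    # Assumes the standard H-index definition: the largest h such that
    # h papers have at least h citations each. All data below is hypothetical.

    def h_index(citation_counts):
        """Largest h such that h papers have >= h citations."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    def d_index(papers, discipline):
        """H-index computed only over papers tagged with one discipline."""
        return h_index(p["citations"] for p in papers if p["discipline"] == discipline)

    # Hypothetical example records (not the researcher's actual publication list).
    papers = [
        {"discipline": "Computer Science", "citations": 1276},
        {"discipline": "Computer Science", "citations": 639},
        {"discipline": "Computer Science", "citations": 466},
        {"discipline": "Engineering", "citations": 12},
    ]
    print(d_index(papers, "Computer Science"))  # -> 3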

Overview

What is he best known for?

The fields of study he is best known for:

  • Operating system
  • Programming language
  • Algorithm

His primary areas of study are Parallel computing, Speedup, Microarchitecture, Multi-core processor and Compiler. His Parallel computing research combines topics from areas such as Artificial neural network, Unreachable code, Dead code and Computer engineering. His study of Speedup integrates issues of Stochastic gradient descent, Gradient descent, Computational learning theory, Artificial intelligence and Machine learning.

His research in Microarchitecture intersects with topics in Hardware description language, Quantization and Verilog. His Multi-core processor research incorporates themes from ARM architecture, Stratix and Memory footprint. His Compiler study draws on Redundant code, Programming paradigm and Imperative programming.

His most cited works include:

  • Dark silicon and the end of multicore scaling (1276 citations)
  • A reconfigurable fabric for accelerating large-scale datacenter services (639 citations)
  • Neural Acceleration for General-Purpose Approximate Programs (466 citations)

What are the main themes of his work throughout his whole career to date?

Hadi Esmaeilzadeh mainly focuses on Speedup, Parallel computing, Artificial neural network, Compiler and Microarchitecture. His Speedup research is multidisciplinary, incorporating elements of Overhead, Computer engineering, Xeon, Hardware acceleration and Operand. His Parallel computing studies deal with areas such as Computation and Approximate computing.

His Artificial neural network study incorporates disciplines such as Distributed computing, Algorithm, Quantization, Inference and Rendering. The concepts of his Compiler study are interwoven with issues in Cache, Memory footprint and Programmer. His Microarchitecture study combines topics from a wide range of disciplines, such as Domain, Multi-core processor and Encoding.

He most often published in these fields:

  • Speedup (33.33%)
  • Parallel computing (26.44%)
  • Artificial neural network (22.99%)
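
The percentages in the list above appear to give the share of his publications tagged with each field (a paper can carry several tags, so the shares need not sum to 100%). A minimal Python sketch of that calculation, using hypothetical paper records rather than his actual bibliography:

    # Illustrative sketch: share of papers tagged with each field.
    # The records and "fields" tags below are hypothetical.
    from collections import Counter

    def field_shares(papers):
        """Return {field: percentage of papers carrying that field tag}."""
        tag_counts = Counter(tag for paper in papers for tag in paper["fields"])
        total = len(papers)
        return {tag: round(100 * count / total, 2) for tag, count in tag_counts.items()}

    # Three hypothetical papers; each tag occurrence contributes 33.33%.
    papers = [
        {"fields": ["Speedup", "Parallel computing"]},
        {"fields": ["Speedup", "Artificial neural network"]},
        {"fields": ["Parallel computing"]},
    ]
    print(field_shares(papers))
    # {'Speedup': 66.67, 'Parallel computing': 66.67, 'Artificial neural network': 33.33}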

What were the highlights of his more recent work (between 2019 and 2021)?

In recent papers he was focusing on the following fields of study:

  • Artificial neural network (22.99%)
  • Speedup (33.33%)
  • Data mining (8.05%)

His main research concerns Artificial neural network, Speedup, Data mining, Cloud computing and Inference. His Artificial neural network research integrates issues from Compiler, Quantization, Computer engineering, Rendering and Reinforcement learning. His Speedup research is under the purview of Parallel computing.

Hadi Esmaeilzadeh's Parallel computing research touches on several topics, including Operand, Composability and Leverage. His work deals with themes such as Recommender system, Deep learning and Data science, which intersect with Cloud computing. His research also spans a wide range of topics, including Gradient descent and Optimization problem.

Between 2019 and 2021, his most popular works were:

  • Shredder: Learning Noise Distributions to Protect Inference Privacy (22 citations)
  • Privacy in Deep Learning: A Survey (15 citations)
  • Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices (8 citations)

In his most recent research, the most cited papers focused on:

  • Operating system
  • Programming language
  • Algorithm

His primary scientific interests are in Cloud computing, Inference, Edge device, Artificial neural network and Compiler. His work explores how Cloud computing and Data mining connect with Set, Gradient descent, Optimization problem and other disciplines. His research on Inference falls within the broader field of Artificial intelligence.

His work in the field of Edge device brings together topics such as Enhanced Data Rates for GSM Evolution, Edge computing and Server. He combines subjects such as Dynamic programming, Scheduling, Memory footprint and Reinforcement learning with his study of Artificial neural network. His Compiler research is multidisciplinary, drawing on Heuristics, Combinatorial optimization, Distributed computing and Computer engineering.

This overview was generated by a machine learning system which analysed the scientist's body of work.

Best Publications

Dark silicon and the end of multicore scaling

Hadi Esmaeilzadeh;Emily Blem;Renee St. Amant;Karthikeyan Sankaralingam.
International Symposium on Computer Architecture (2011)

2221 Citations

Neural acceleration for general-purpose approximate programs

Hadi Esmaeilzadeh;Adrian Sampson;Luis Ceze;Doug Burger.
Communications of the ACM (2014)

796 Citations

Architecture support for disciplined approximate programming

Hadi Esmaeilzadeh;Adrian Sampson;Luis Ceze;Doug Burger.
Architectural Support for Programming Languages and Operating Systems (2012)

550 Citations

From high-level deep neural models to FPGAs

Hardik Sharma;Jongse Park;Divya Mahajan;Emmanuel Amaro.
International Symposium on Microarchitecture (2016)

412 Citations

Dark Silicon and the End of Multicore Scaling

H. Esmaeilzadeh;E. Blem;R. St. Amant;K. Sankaralingam.
IEEE Micro (2012)

341 Citations

Bit fusion: bit-level dynamically composable architecture for accelerating deep neural networks

Hardik Sharma;Jongse Park;Naveen Suda;Liangzhen Lai.
International Symposium on Computer Architecture (2018)

326 Citations

General-purpose code acceleration with limited-precision analog computation

Renée St. Amant;Amir Yazdanbakhsh;Jongse Park;Bradley Thwaites.
International Symposium on Computer Architecture (2014)

219 Citations

AxBench: A Multiplatform Benchmark Suite for Approximate Computing

Amir Yazdanbakhsh;Divya Mahajan;Hadi Esmaeilzadeh;Pejman Lotfi-Kamran.
IEEE Design & Test of Computers (2017)

214 Citations

Power challenges may end the multicore era

Hadi Esmaeilzadeh;Emily Blem;Renée St. Amant;Karthikeyan Sankaralingam.
Communications of the ACM (2013)

208 Citations

SNNAP: Approximate computing on programmable SoCs via neural acceleration

Thierry Moreau;Mark Wyse;Jacob Nelson;Adrian Sampson.
High-Performance Computer Architecture (2015)

160 Citations


Best Scientists Citing Hadi Esmaeilzadeh

  • Muhammad Shafique (New York University Abu Dhabi), Publications: 59
  • Yuan Xie (University of California, Santa Barbara), Publications: 44
  • Luca Benini (ETH Zurich), Publications: 39
  • Jorg Henkel (Karlsruhe Institute of Technology), Publications: 39
  • David Brooks (Harvard University), Publications: 37
  • Jason Cong (University of California, Los Angeles), Publications: 31
  • G. Glenn Henry (Centaur Technology), Publications: 29
  • Gustavo Alonso (ETH Zurich), Publications: 27
  • Yu Wang (Tsinghua University), Publications: 27
  • Gu-Yeon Wei (Harvard University), Publications: 26
  • Hai Li (Duke University), Publications: 25
  • Onur Mutlu (ETH Zurich), Publications: 24
  • Tajana Rosing (University of California, San Diego), Publications: 24
  • Massoud Pedram (University of Southern California), Publications: 23
  • Pasi Liljeberg (University of Turku), Publications: 22
  • Luis Ceze (University of Washington), Publications: 22
