His primary areas of study are parallel computing, speedup, microarchitecture, multi-core processors, and compilers. His parallel computing work combines topics such as artificial neural networks, unreachable and dead code, and computer engineering. His study of speedup integrates stochastic gradient descent, computational learning theory, artificial intelligence, and machine learning.
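Stochastic gradient descent, which recurs throughout his speedup work, takes one noisy gradient step per sample rather than a full pass over the data. The following is a minimal sketch with synthetic data and illustrative hyperparameters, not code from his papers:

```python
import numpy as np

# Minimal stochastic gradient descent for least-squares regression.
# Data, model, and hyperparameters are all synthetic and illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))                    # synthetic inputs
w_true = np.array([2.0, -1.0, 0.5, 3.0])         # "unknown" parameters
y = X @ w_true + 0.01 * rng.normal(size=256)     # noisy targets

w = np.zeros(4)                                  # parameters to learn
lr = 0.05                                        # learning rate
for epoch in range(50):
    for i in rng.permutation(len(X)):            # visit samples in random order
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]    # gradient of one squared error
        w -= lr * grad                           # small step downhill
print(w)                                         # ends up close to w_true
```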
His microarchitecture research intersects with hardware description languages, quantization, and Verilog. His multi-core processor research incorporates the ARM architecture, Stratix FPGAs, and memory footprint. His compiler work takes in redundant code, programming paradigms, and imperative programming.
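Quantization, which appears in both his microarchitecture and neural network work, maps floating-point values onto a small integer range. Below is a minimal sketch of generic symmetric 8-bit uniform quantization; it is a textbook scheme shown for illustration, not the specific scheme used in his papers:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric uniform quantization of a float tensor to int8."""
    scale = max(float(np.abs(x).max()), 1e-12) / 127.0   # largest value maps to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale                  # approximate reconstruction

weights = np.random.randn(4, 4).astype(np.float32)       # stand-in for a weight tensor
q, s = quantize_int8(weights)
print(np.abs(weights - dequantize(q, s)).max())          # small quantization error
```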
Hadi Esmaeilzadeh mainly focuses on speedup, parallel computing, artificial neural networks, compilers, and microarchitecture. His speedup research incorporates overhead analysis, computer engineering, Xeon processors, hardware acceleration, and operands. His parallel computing studies also take in computation and approximate computing.
His artificial neural network research additionally draws on distributed computing, algorithms, quantization, inference, and rendering. His compiler work is interwoven with caches, memory footprint, and programmability. His microarchitecture study spans domain-specific design, multi-core processors, and encoding.
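The speedup theme running through these topics is conventionally framed by Amdahl's law, which bounds the gain from parallelizing a fraction p of a program across n cores:

```latex
S(n) = \frac{1}{(1 - p) + p/n},
\qquad \text{e.g. } p = 0.9,\; n = 16:\quad
S(16) = \frac{1}{0.1 + 0.9/16} = \frac{1}{0.15625} = 6.4
```

Even with 16 cores, a program that is 90% parallel speeds up only 6.4x; ceilings of this kind motivate the dark silicon and accelerator work listed below.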
His main research concerns artificial neural networks, speedup, data mining, cloud computing, and inference. His neural network research integrates compilers, quantization, computer engineering, rendering, and reinforcement learning. His speedup research falls under the broader umbrella of parallel computing.
Hadi Esmaeilzadeh has studied parallel computing in connection with operands, composability, and leverage. His cloud computing work intersects with recommender systems, deep learning, and data science. His optimization research spans gradient descent and optimization problems.
His primary scientific interests are cloud computing, inference, edge devices, artificial neural networks, and compilers. His work explores how cloud computing and data mining connect with sets, gradient descent, optimization problems, and other disciplines. His research on inference sits within the broader field of artificial intelligence.
His work on edge devices brings together edge computing and server-side systems. He combines dynamic programming, scheduling, memory footprint, and reinforcement learning with his study of artificial neural networks. His compiler research draws on heuristics, combinatorial optimization, distributed computing, and computer engineering.
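Dynamic programming and scheduling appear together in the neural network work above; as a generic illustration of that pairing, here is weighted interval scheduling, a textbook dynamic program, not an algorithm taken from the papers below:

```python
import bisect

def max_weight_schedule(jobs):
    """jobs: list of (start, finish, weight) tuples.
    Returns the maximum total weight of non-overlapping jobs.
    A textbook dynamic program, shown only as a generic illustration."""
    jobs = sorted(jobs, key=lambda j: j[1])        # order by finish time
    finishes = [j[1] for j in jobs]
    best = [0] * (len(jobs) + 1)                   # best[i]: optimum over first i jobs
    for i, (s, f, w) in enumerate(jobs, 1):
        # latest earlier job finishing no later than this job's start
        k = bisect.bisect_right(finishes, s, 0, i - 1)
        best[i] = max(best[i - 1], best[k] + w)    # skip job i, or take it
    return best[-1]

print(max_weight_schedule([(0, 3, 5), (2, 5, 6), (4, 7, 5), (6, 9, 4)]))  # 10
```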
Dark silicon and the end of multicore scaling
Hadi Esmaeilzadeh; Emily Blem; Renée St. Amant; Karthikeyan Sankaralingam.
International Symposium on Computer Architecture (2011)
Neural acceleration for general-purpose approximate programs
Hadi Esmaeilzadeh; Adrian Sampson; Luis Ceze; Doug Burger.
Communications of the ACM (2014)
Architecture support for disciplined approximate programming
Hadi Esmaeilzadeh; Adrian Sampson; Luis Ceze; Doug Burger.
Architectural Support for Programming Languages and Operating Systems (2012)
From high-level deep neural models to FPGAs
Hardik Sharma; Jongse Park; Divya Mahajan; Emmanuel Amaro.
International Symposium on Microarchitecture (2016)
Dark Silicon and the End of Multicore Scaling
Hadi Esmaeilzadeh; Emily Blem; Renée St. Amant; Karthikeyan Sankaralingam.
IEEE Micro (2012)
Bit fusion: bit-level dynamically composable architecture for accelerating deep neural networks
Hardik Sharma; Jongse Park; Naveen Suda; Liangzhen Lai.
International Symposium on Computer Architecture (2018)
General-purpose code acceleration with limited-precision analog computation
Renée St. Amant; Amir Yazdanbakhsh; Jongse Park; Bradley Thwaites.
International Symposium on Computer Architecture (2014)
AxBench: A Multiplatform Benchmark Suite for Approximate Computing
Amir Yazdanbakhsh; Divya Mahajan; Hadi Esmaeilzadeh; Pejman Lotfi-Kamran.
IEEE Design & Test of Computers (2017)
Power challenges may end the multicore era
Hadi Esmaeilzadeh; Emily Blem; Renée St. Amant; Karthikeyan Sankaralingam.
Communications of the ACM (2013)
SNNAP: Approximate computing on programmable SoCs via neural acceleration
Thierry Moreau; Mark Wyse; Jacob Nelson; Adrian Sampson.
International Symposium on High-Performance Computer Architecture (2015)
Microsoft (United States)
University of Illinois at Urbana-Champaign
University of California, San Diego
University of Wisconsin–Madison
University of Washington
ETH Zurich
Carnegie Mellon University
IBM (United States)
Google (United States)