2020 - ACM Fellow, for contributions to software and hardware design for power-efficient computer architectures
David Brooks spends much of his time researching Embedded system, Microprocessor, Computer architecture, Microarchitecture and Chip design. His Embedded system work draws on Wireless, Wireless sensor network, Frequency scaling, Voltage and Key distribution in wireless sensor networks, while his Microprocessor research combines Reliability engineering with CPU core voltage.
His Computer architecture work brings together Compiler, Cache coloring, Adaptation and Implementation. His Compiler studies take in Power analysis, Field, Floorplan, Operand and MMX, and his Chip research intersects with Multithreading, CPU cache, Parallel computing, Power management and Electronic engineering.
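As a rough illustration of why Frequency scaling, Voltage and Power management appear together in this power-aware work, the sketch below evaluates the textbook CMOS dynamic-power relation P ≈ alpha * C * V^2 * f at two operating points. All constants are placeholders chosen for illustration, and the code is not taken from any of the publications listed further down.

# Minimal sketch of the textbook CMOS dynamic-power relation,
# P_dyn ≈ alpha * C * V^2 * f, evaluated at two operating points.
# The numbers are placeholders chosen only for illustration.

def dynamic_power(alpha: float, capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Approximate switching power in watts."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

nominal = dynamic_power(alpha=0.2, capacitance_f=1e-9, voltage_v=1.0, freq_hz=2.0e9)
scaled = dynamic_power(alpha=0.2, capacitance_f=1e-9, voltage_v=0.8, freq_hz=1.6e9)

# Lowering voltage and frequency together (the idea behind DVFS) trades a
# linear slowdown for a roughly cubic drop in dynamic power.
print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W ({scaled / nominal:.2f}x)")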
David Brooks also focuses on Embedded system, Artificial intelligence, Electronic engineering, Voltage and Computer architecture. His Microprocessor work extends into Clock gating and links Reduction with Real-time computing.
His Artificial intelligence studies connect Machine learning and Software. His Electronic engineering research brings in Boost converter, Forward converter and Voltage regulator, and his Computer architecture studies integrate Hardware acceleration, Compiler and Microarchitecture.
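Since Boost converter and Voltage regulator topics recur in this line of work, here is a minimal sketch of the ideal steady-state conversion ratio involved; it uses the standard textbook relation with placeholder numbers and is not drawn from the papers themselves.

# Ideal steady-state output of a boost converter in continuous conduction
# mode: V_out = V_in / (1 - D), where D is the switching duty cycle.
# Values below are placeholders for illustration only.

def boost_vout(v_in: float, duty: float) -> float:
    """Ideal boost-converter output voltage for duty cycle 0 <= D < 1."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return v_in / (1.0 - duty)

# Stepping a 1.0 V supply up toward a higher rail at a few duty cycles.
for duty in (0.2, 0.4, 0.6):
    print(f"D = {duty:.1f} -> V_out = {boost_vout(1.0, duty):.2f} V")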
His primary areas of investigation include Artificial intelligence, Deep learning, Inference, Artificial neural network and Speedup. His Artificial intelligence research draws on Scalability, Hardware acceleration, Software, CUDA and Machine learning, and his Deep learning work spans Computer architecture, Recommender system, Systems design, Scheduling and Cloud computing.
His Computer architecture study links to Macro, his Inference research brings in Variety, Computer engineering and Bayesian inference, and his Speedup work is interwoven with Microarchitecture and Memory bandwidth.
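Work on low-power neural-network accelerators in this area generally trades numerical precision for energy during Inference. The sketch below shows that general idea with a generic symmetric int8 post-training quantization of a random weight matrix; the shapes and values are hypothetical, and the code is not drawn from any of the designs in the publications listed below.

import numpy as np

# Generic symmetric post-training quantization of float32 weights to int8.
# Narrower operands cut memory traffic and arithmetic energy, which is the
# broad trade-off behind low-power DNN inference hardware.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)

scale = float(np.abs(weights).max()) / 127.0                 # one scale per tensor
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale

print("footprint:", weights.nbytes, "->", q_weights.nbytes, "bytes")
print("max abs quantization error:", float(np.abs(weights - dequantized).max()))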
His investigation also covers Artificial intelligence, Deep learning, Inference, Software and Distributed computing. His Deep learning research draws on Computer architecture, Artificial neural network, Hardware acceleration, Systems design and Speedup, and his Computer architecture work incorporates CUDA and Efficient energy use.
His Speedup work connects Microarchitecture with Bottleneck and System deployment, his Inference research incorporates Recommender system, Overhead, Reduction and Cluster analysis, and his Software study combines Scalability, Software engineering and Benchmark.
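The pairing of Speedup with Memory bandwidth and Bottleneck above can be made concrete with a roofline-style bound, a standard way of reasoning in the architecture literature rather than anything specific to these papers; the peak-compute and bandwidth figures below are placeholders.

# Roofline-style sketch: attainable throughput is capped either by peak
# compute or by memory bandwidth times arithmetic intensity (FLOP/byte).
# Peak figures are placeholders, not measurements of a real machine.

PEAK_FLOPS = 10e12   # hypothetical peak compute, FLOP/s
PEAK_BW = 400e9      # hypothetical memory bandwidth, bytes/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Upper bound on FLOP/s for a kernel with the given FLOP/byte ratio."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

for ai in (0.5, 2.0, 8.0, 32.0):
    bound = attainable_flops(ai)
    limiter = "memory-bound" if bound < PEAK_FLOPS else "compute-bound"
    print(f"{ai:5.1f} FLOP/byte -> {bound / 1e12:5.2f} TFLOP/s ({limiter})")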
This overview was generated by a machine learning system which analysed the scientist's body of work.
Wattch: a framework for architectural-level power analysis and optimizations
David Brooks;Vivek Tiwari;Margaret Martonosi.
international symposium on computer architecture (2000)
Dynamic thermal management for high-performance microprocessors
D. Brooks;M. Martonosi.
high-performance computer architecture (2001)
System level analysis of fast, per-core DVFS using on-chip switching regulators
Wonyoung Kim;M.S. Gupta;Gu-Yeon Wei;D. Brooks.
high-performance computer architecture (2008)
Power-aware microarchitecture: design and modeling challenges for next-generation microprocessors
D.M. Brooks;P. Bose;S.E. Schuster;H. Jacobson.
IEEE Micro (2000)
Accurate and efficient regression modeling for microarchitectural performance and power prediction
Benjamin C. Lee;David M. Brooks.
architectural support for programming languages and operating systems (2006)
Minerva: enabling low-power, highly-accurate deep neural network accelerators
Brandon Reagen;Paul Whatmough;Robert Adolf;Saketh Rama.
international symposium on computer architecture (2016)
Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective
Kim Hazelwood;Sarah Bird;David Brooks;Soumith Chintala.
high-performance computer architecture (2018)
Dynamically exploiting narrow width operands to improve processor power and performance
D. Brooks;M. Martonosi.
high-performance computer architecture (1999)
Profiling a Warehouse-Scale Computer
Svilen Kanev;Juan Pablo Darago;Kim Hazelwood;Parthasarathy Ranganathan.
IEEE Micro (2016)
Thread motion: fine-grained power management for multi-core systems
Krishna K. Rangan;Gu-Yeon Wei;David Brooks.
international symposium on computer architecture (2009)
Harvard University
IBM (United States)
Princeton University
University of Virginia
Facebook (United States)
Cornell University
Boston University