
H-Index & Metrics

Discipline: Computer Science
D-index: 92
Citations: 75,916
Publications: 277
World Ranking: 226
National Ranking: 135

Research.com Recognitions

Awards & Achievements

2010 - ACM - IEEE CS Eckert-Mauchly Award For outstanding contributions to the architecture of interconnection networks and parallel computers.

2009 - Member of the National Academy of Engineering For contributions to the design of high-performance interconnect networks and parallel computer architectures.

2007 - Fellow of the American Academy of Arts and Sciences

2002 - ACM Fellow For contributions to the architecture and design of interconnection networks and parallel computing.

Overview

What is he best known for?

The fields of study he is best known for:

  • Operating system
  • Central processing unit
  • Computer network

William J. Dally mainly investigates Parallel computing, Computer network, Computer architecture, Artificial neural network and Artificial intelligence. Within Parallel computing, he focuses on Interconnection and, on occasion, Telecommunications network, Topology, Network packet and Integrated circuit layout. His Router, Flow control and Static routing studies, part of a larger body of work in Computer network, are frequently linked to Throughput, bridging the gap between disciplines.

His Artificial neural network studies draw on areas such as Efficient energy use, Quantization and Computer engineering. His work in Efficient energy use brings together Random access memory and Static random-access memory. His Computer engineering research looks at the intersection of Huffman coding with Cache and Theoretical computer science.

His most cited works include:

  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (3890 citations)
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size (3214 citations)
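Deep Compression, the most cited work above, combines pruning, trained quantization and Huffman coding to shrink trained networks. As a rough illustration of the pruning idea only (a hypothetical sketch, not code from the paper, which prunes trained network weights and then retrains), magnitude-based pruning zeroes out the smallest weights:

```python
def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    Toy sketch of the pruning step described in Deep Compression.
    Ties at the threshold are also pruned, so slightly more than the
    requested fraction may be dropped.
    """
    if not 0 <= sparsity <= 1:
        raise ValueError("sparsity must be in [0, 1]")
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Pruning half of a toy weight vector keeps only the larger magnitudes.
pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], 0.5)
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In the paper itself this step is followed by quantizing the surviving weights into shared clusters and entropy-coding the result; the sketch covers only the first stage.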

What are the main themes of his work throughout his whole career to date?

William J. Dally spends much of his time researching Parallel computing, Computer network, Electronic engineering, Computer architecture and Computer hardware. He studies Cache, a component of Parallel computing, and his work in Cache also encompasses Speedup.

In the subject of general Computer network, his work in Router, Flow control and Network packet is often linked to Throughput, combining diverse domains of study. He combines subjects such as Transmitter, Signal, Electrical engineering and Synchronous circuit with his study of Electronic engineering. His Latency study covers Interconnection, which intersects with Telecommunications network.

He most often published in these fields:

  • Parallel computing (24.90%)
  • Computer network (15.35%)
  • Electronic engineering (15.15%)

What were the highlights of his more recent work (between 2013 and 2021)?

In recent papers he was focusing on the following fields of study:

  • Artificial intelligence (6.22%)
  • Artificial neural network (6.02%)
  • Efficient energy use (5.81%)

His scientific interests lie mostly in Artificial intelligence, Artificial neural network, Efficient energy use, Parallel computing and Speedup. His Artificial intelligence research focuses on subjects like Machine learning, which are linked to Factor. His Artificial neural network research is multidisciplinary, incorporating perspectives in Algorithm, Quantization, Computer engineering, Dataflow and Convolutional neural network.

His Efficient energy use study incorporates themes from Random access memory, Uncompressed video, System on a chip, Embedded system and Static random-access memory. A large part of his Parallel computing studies is devoted to Cache. His Speedup research is multidisciplinary, drawing on Hardware acceleration, Darwin, Huffman coding and Theoretical computer science.

Between 2013 and 2021, his most popular works were:

  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (3890 citations)
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size (3214 citations)

In his most recent research, the most cited papers focused on:

  • Operating system
  • Central processing unit
  • Computer network

William J. Dally focuses on Artificial intelligence, Artificial neural network, Computer engineering, Machine learning and Efficient energy use. His research integrates issues of Field-programmable gate array, Hardware acceleration and Pattern recognition in his study of Artificial intelligence. His work carried out in the field of Artificial neural network brings together such families of science as Huffman coding, Quantization, Speedup and Cache.

His Huffman coding research incorporates themes from Centroid and Theoretical computer science. His Computer engineering research focuses on Language model and how it relates to Latency, Scalability and Ethernet. His work deals with themes such as Dram, Static random-access memory and Parallel computing, which intersect with Efficient energy use.

This overview was generated by a machine learning system which analysed the scientist's body of work.

Best Publications

Route packets, not wires: on-chip interconnection networks

William J. Dally;Brian Towles.
design automation conference (2001)

4302 Citations

Principles and Practices of Interconnection Networks

William James Dally;Brian Patrick Towles.
(2004)

4034 Citations

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

Song Han;Huizi Mao;William J. Dally.
international conference on learning representations (2016)

3784 Citations

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

Forrest N. Iandola;Song Han;Matthew W. Moskewicz;Khalid Ashraf.
arXiv: Computer Vision and Pattern Recognition (2017)

3563 Citations

Learning both weights and connections for efficient neural networks

Song Han;Jeff Pool;John Tran;William J. Dally.
neural information processing systems (2015)

2144 Citations

Virtual-channel flow control

W.J. Dally.
IEEE Transactions on Parallel and Distributed Systems (1992)

1816 Citations

EIE: efficient inference engine on compressed deep neural network

Song Han;Xingyu Liu;Huizi Mao;Jing Pu.
international symposium on computer architecture (2016)

1447 Citations

Performance analysis of k-ary n-cube interconnection networks

W.J. Dally.
IEEE Transactions on Computers (1990)

1361 Citations

Digital Systems Engineering

William J. Dally;John W. Poulton.
(1998)

1256 Citations

The Torus Routing Chip

William J. Dally;Charles L. Seitz.
Distributed Computing (1986)

1169 Citations


Best Scientists Citing William J. Dally

Onur Mutlu

ETH Zurich

Publications: 167

José Duato

Universitat Politècnica de València

Publications: 163

Luca Benini

University of Bologna

Publications: 136

Yuan Xie

University of California, Santa Barbara

Publications: 86

Hannu Tenhunen

Royal Institute of Technology

Publications: 85

Mark Horowitz

Stanford University

Publications: 83

Chita R. Das

Pennsylvania State University

Publications: 80

Yanzhi Wang

Northeastern University

Publications: 79

Axel Jantsch

TU Wien

Publications: 75

Jose Flich

Universitat Politècnica de València

Publications: 71

Keren Bergman

Columbia University

Publications: 70

Li-Shiuan Peh

National University of Singapore

Publications: 68

Juha Plosila

University of Turku

Publications: 67

Mahmut Kandemir

Pennsylvania State University

Publications: 66

Hideharu Amano

Keio University

Publications: 65

Pasi Liljeberg

University of Turku

Publications: 64
