Parallel computing, Computer hardware, Distributed computing, Scalability and Algorithm are his primary areas of study. His Parallel computing research combines topics such as Scheduling and Memory bank, and the areas he examines in his Computer hardware study include Electronic engineering, Filesystem-level encryption, Block cipher and Massively parallel.
His Distributed computing research is multidisciplinary, drawing on Workload, Key, Programming paradigm and Implementation. His Scalability work involves Process, Application-specific integrated circuit, Embedded system, Algorithm design and Multi-core processor. His research on Algorithm spans several fields, including Artificial neural network, Digital electronics, Very-large-scale integration and Automatic test pattern generation.
His primary areas of study are Algorithm, Parallel computing, Embedded system, Sequential logic and Computer hardware. His Algorithm research combines topics from a wide range of disciplines, such as Automatic test pattern generation, Fault coverage and Test set. Within the umbrella of Parallel computing, his work closely links Cloud computing with Distributed computing.
His research in Embedded system intersects with Data compression, Computer architecture, Software and Memory management.
Srimat T. Chakradhar mainly focuses on Parallel computing, Coprocessor, Distributed computing, Scheduling and Operating system. Within Parallel computing, he is particularly interested in CUDA.
In Distributed computing, Srimat T. Chakradhar interconnects Cloud computing, End-user computing and Key. His work on Key addresses Algorithm design, which in turn connects to Scalability and Computer hardware. His Software study incorporates themes from Computer architecture and Pipeline.
His primary scientific interests are in Distributed computing, Parallel computing, Scheduling, Coprocessor and Xeon. His Distributed computing research includes elements of Key and Cloud computing. Srimat T. Chakradhar studies CUDA, a branch of Parallel computing.
His Scheduling research integrates issues from Manycore processor and Computation. He explores the link between Coprocessor and Operating system, which intersects with problems in Processing core. His work on Xeon centers on topics linked to Server, Implementation and Set.
Massively parallel processing core with plural chains of processing elements and respective smart memory storing select data received from each chain
Srihari Cadambi;Abhinandan Majumdar;Michela Becchi;Srimat Chakradhar.
(2010)
Analysis and characterization of inherent application resilience for approximate computing
Vinay K. Chippa;Srimat T. Chakradhar;Kaushik Roy;Anand Raghunathan.
design automation conference (2013)
A dynamically configurable coprocessor for convolutional neural networks
Srimat Chakradhar;Murugan Sankaradas;Venkata Jakkula;Srihari Cadambi.
international symposium on computer architecture (2010)
Tarazu: optimizing MapReduce on heterogeneous clusters
Faraz Ahmad;Srimat T. Chakradhar;Anand Raghunathan;T. N. Vijaykumar.
architectural support for programming languages and operating systems (2012)
Quality programmable vector processors for approximate computing
Swagath Venkataramani;Vinay K. Chippa;Srimat T. Chakradhar;Kaushik Roy.
international symposium on microarchitecture (2013)
Tamper resistance mechanisms for secure embedded systems
S. Ravi;A. Raghunathan;S. Chakradhar.
international conference on vlsi design (2004)
On-chip networks: a scalable, communication-centric embedded system design paradigm
J. Henkel;W. Wolf;S. Chakradhar.
international conference on vlsi design (2004)
A Massively Parallel Coprocessor for Convolutional Neural Networks
Murugan Sankaradas;Venkata Jakkula;Srihari Cadambi;Srimat Chakradhar.
application specific systems architectures and processors (2009)
Approximate computing and the quest for computing efficiency
Swagath Venkataramani;Srimat T. Chakradhar;Kaushik Roy;Anand Raghunathan.
design automation conference (2015)
Scalable effort hardware design: exploiting algorithmic resilience for energy efficiency
Vinay K. Chippa;Debabrata Mohapatra;Anand Raghunathan;Kaushik Roy.
design automation conference (2010)