Anima Anandkumar mainly investigates Artificial neural networks, Artificial intelligence, Algorithms, Language models and Compact spaces. Her Artificial neural network research includes elements of Python, NumPy, Programming languages and Theoretical computer science. Her study of Artificial intelligence is interdisciplinary, drawing on both Machine learning and Graphs.
Her Algorithm research includes themes of Range, Sequence, Nonlinear systems and Sensitivity, and her work connects Perspective with Language models. Within the same line of research, she deals with State, concentrating on Tensors and frequently drawing on Mathematical optimization.
Her primary scientific interests are Artificial intelligence, Artificial neural networks, Algorithms, Machine learning and Pattern recognition. Her research on Deep learning, Feature learning, Convolutional neural networks, Latent variables and Leverage forms part of her Artificial intelligence work. Within Artificial neural networks, her work on Gradient descent is often linked to Partial differential equations, combining diverse domains of study.
Her Algorithm research incorporates the Curse of dimensionality, Spectral methods, Dimensionality reduction, Range and Principal component analysis. She usually treats Range in connection with Propagation of uncertainty and Nonlinear systems. Her work explores how Machine learning and Generators connect with Controllability, Face and Labeled data, among other disciplines.
Anima Anandkumar mostly deals with Artificial intelligence, Artificial neural networks, Machine learning, Mathematical optimization and Partial differential equations. Her Artificial intelligence research is multidisciplinary, incorporating perspectives from Generators and Pattern recognition. Her Artificial neural network research combines elements of Exploits, Theoretical computer science, Hierarchy and Nonlinear systems.
Within Machine learning, her work on Leverage and Transfer learning often relates to Generalization and to Weather and climate, connecting several areas of interest. She studies the relationship between Mathematical optimization and fields such as Adaptive control, including where they intersect with chemical problems. Her Deep learning study incorporates themes from Algorithms and Computer vision.
Anima Anandkumar mainly focuses on Artificial intelligence, Algorithms, Artificial neural networks, Partial differential equations and Machine learning. In her work, Artificial intelligence is closely tied to Pattern recognition. Her Algorithm research intersects with Multilinear maps, Deep learning and Contraction.
Her Deep learning study combines topics such as Computational complexity theory, Matrix decomposition and Time complexity. Her Artificial neural network research incorporates Discretization, Operators and Nonlinear systems. Her work on Early stopping and Leverage within Machine learning frequently connects to Weather and climate and to Generalization, bridging diverse disciplines.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Tensor decompositions for learning latent variable models
Animashree Anandkumar;Rong Ge;Daniel Hsu;Sham M. Kakade.
Journal of Machine Learning Research (2014)
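This line of work recovers latent-variable model parameters from low-order moment tensors. As a rough illustration of the core subroutine only (not the paper's full algorithm), the sketch below runs a tensor power iteration to extract the single component of a toy rank-one symmetric tensor; the tensor and its component are made-up example data:

```python
import numpy as np

def tensor_power_iteration(T, n_iter=50, seed=0):
    """Recover the leading component of a symmetric 3-way tensor via
    the power update v <- T(I, v, v) / ||T(I, v, v)||."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, v, v)  # contract T along two modes
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # recovered component weight
    return lam, v

# Toy rank-one symmetric tensor 2 * (a x a x a); the iteration should
# recover the weight 2 and the direction a.
a = np.array([0.6, 0.8, 0.0])
T = 2.0 * np.einsum('i,j,k->ijk', a, a, a)
lam, v = tensor_power_iteration(T)
```

For higher-rank tensors the full method deflates the recovered component and repeats, with whitening applied first; the sketch omits those steps.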
Born Again Neural Networks
Tommaso Furlanello;Zachary Chase Lipton;Michael Tschannen;Laurent Itti.
International Conference on Machine Learning (2018)
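Born-again networks retrain a student of the same architecture on the teacher's soft predictions. A minimal sketch of the distillation loss that drives such training (the temperature value and logits below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the student's and the teacher's
    temperature-softened output distributions."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(-np.sum(p_t * np.log(p_s + 1e-12), axis=-1).mean())

# The loss is smallest when the student's logits match the teacher's.
teacher = np.array([[2.0, 0.5, -1.0]])
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss(np.array([[-1.0, 0.5, 2.0]]), teacher)
```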
signSGD: Compressed Optimisation for Non-Convex Problems
Jeremy Bernstein;Yu-Xiang Wang;Kamyar Azizzadenesheli;Animashree Anandkumar.
International Conference on Machine Learning (2018)
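signSGD transmits only the sign of each gradient coordinate, compressing communication in distributed training while still making progress on non-convex objectives. A minimal single-worker sketch on a toy quadratic (the learning rate and iteration count are illustrative choices):

```python
import numpy as np

def signsgd_step(x, grad, lr):
    """One signSGD update: step by lr against the sign of the gradient,
    discarding gradient magnitudes entirely."""
    return x - lr * np.sign(grad)

# Minimise f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0, 0.5])
for _ in range(1000):
    x = signsgd_step(x, 2.0 * x, lr=0.01)
# Each coordinate walks toward 0, then oscillates within about lr of it.
```

Because the step size is fixed at lr per coordinate, the iterate cannot settle exactly at the optimum; in practice a decaying learning rate shrinks that oscillation.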
A Method of Moments for Mixture Models and Hidden Markov Models
Animashree Anandkumar;Daniel J. Hsu;Sham M. Kakade.
Conference on Learning Theory (2012)
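The method of moments estimates parameters by matching analytic moments to empirical ones. A one-dimensional toy illustration of that idea (not the paper's spectral algorithm for mixtures and HMMs): for an equal-weight mixture of N(-m, 1) and N(+m, 1), the second moment is m² + 1, so m can be read off from the empirical moment.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = 2.0
n = 200_000

# Draw from the mixture: pick a component sign, then add unit Gaussian noise.
signs = rng.choice([-1.0, 1.0], size=n)
x = signs * m_true + rng.standard_normal(n)

# E[x^2] = m^2 + 1  =>  m_hat = sqrt(mean(x^2) - 1)
m_hat = np.sqrt(max(np.mean(x**2) - 1.0, 0.0))
```

The paper's contribution is doing this kind of inversion for multivariate mixtures and HMMs via spectral decompositions of second- and third-order moment matrices, where naive moment matching is intractable.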
Stochastic Activation Pruning for Robust Adversarial Defense
Guneet S. Dhillon;Kamyar Azizzadenesheli;Zachary C. Lipton;Jeremy D. Bernstein.
International Conference on Learning Representations (2018)
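Stochastic activation pruning defends against adversarial examples by randomly sampling which activations to keep, with probability proportional to magnitude, and rescaling the survivors so the layer's output stays unbiased in expectation. A small sketch of the sampling step (the activation vector and sample count are made-up toy values):

```python
import numpy as np

def stochastic_activation_prune(h, num_samples, rng):
    """Sample activation indices (with replacement) proportionally to |h|;
    zero everything else and rescale each survivor by the inverse of its
    probability of being sampled at least once."""
    p = np.abs(h) / np.abs(h).sum()
    idx = rng.choice(h.size, size=num_samples, p=p)
    out = np.zeros_like(h)
    for i in set(idx):
        keep_prob = 1.0 - (1.0 - p[i]) ** num_samples  # P(unit i kept)
        out[i] = h[i] / keep_prob
    return out

rng = np.random.default_rng(0)
h = np.array([5.0, -0.1, 0.2, 3.0, -0.05])
pruned = stochastic_activation_prune(h, num_samples=3, rng=rng)
```

Large-magnitude activations are likely to survive, small ones are usually zeroed, and the injected randomness is what makes gradient-based attacks harder to aim.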
MIT
Columbia University
California Institute of Technology
Harvard University
Cornell University
Carnegie Mellon University
United States Army Research Laboratory
University of California, Irvine
ETH Zurich
IBM (United States)
University of Bath
Karlsruhe Institute of Technology
Tokyo Metropolitan University
Osaka University
United States Geological Survey
Oregon Health & Science University
Kantonsspital St. Gallen
Mayo Clinic
McMaster University
University of Nantes
University of Verona
Agostino Gemelli University Polyclinic
Case Western Reserve University