The scientist’s investigation covers issues in Artificial intelligence, Algorithms, Coordinate descent, Machine learning and Robustness. His Artificial intelligence study frequently involves adjacent topics such as Pattern recognition, and his Algorithms research spans several fields, including Stochastic gradient descent, Distributed memory and Heuristics.
His Machine learning study incorporates themes from Tables and Stationary points. His Robustness research draws on Artificial neural networks, the MNIST database, Contextual image classification, Upper and lower bounds and Lipschitz continuity. His research on Linear classifiers tackles topics in Data mining that relate to Sparse data sets, Logistic regression and Network representation learning.
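Coordinate descent, listed among his core topics above, minimizes a function by exactly solving along one coordinate at a time. A minimal sketch on a strongly convex quadratic; the objective, matrix, and function names below are my own illustration, not taken from any of his papers:

```python
# Illustrative coordinate-descent sketch (all names are assumptions):
# minimize f(x) = 0.5 * x^T A x - b^T x for symmetric positive-definite A
# by exactly minimizing over one coordinate at a time (Gauss-Seidel updates).

def coordinate_descent(A, b, sweeps=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # Exact minimizer along coordinate i: set df/dx_i = 0 and solve.
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = residual / A[i][i]
    return x

# Toy problem: the minimizer solves A x = b, i.e. x = (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = coordinate_descent(A, b)
```

Each inner update needs only one row of A, which is what makes coordinate methods attractive for large sparse problems such as linear SVM training.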
His scientific interests lie mostly in Artificial intelligence, Robustness, Algorithms, Machine learning and Artificial neural networks. His Artificial intelligence study frequently links to adjacent areas such as Pattern recognition. In Robustness, Cho-Jui Hsieh works on issues such as the MNIST database, which connect to Optimization problems and Theoretical computer science.
His work in Algorithms brings together Embeddings and Stochastic gradient descent. His Machine learning research focuses on Boosting in particular. His study spans a wide range of topics, including Support vector machines and Asynchronous communication.
His primary areas of investigation include Artificial intelligence, Robustness, Machine learning, Artificial neural networks and Adversarial systems. In his study, Mathematical optimization is closely linked to Smoothing, which falls within the broad field of Artificial intelligence. His Robustness work deals with themes such as Transformers, the MNIST database, Noise, Code and Algorithms.
His Machine learning study is interdisciplinary, drawing on both Limits and Influence functions. His Artificial neural network study integrates concerns from other disciplines, such as Computational complexity theory, Linear programming, Network architecture and Computer engineering. His Adversarial system studies deal with areas such as Subspace topology, Optimization problems and Leverage.
His primary areas of study are Artificial intelligence, Robustness, Machine learning, Artificial neural networks and Adversarial systems. His Artificial intelligence research is multidisciplinary, incorporating perspectives from Smoothing, Matrix decomposition and Natural language processing. His Robustness research includes elements of Algorithms, the MNIST database and Reinforcement learning.
His work on Deep learning and Collaborative filtering, as part of his general Machine learning study, is frequently linked to Detectors and Space, connecting diverse disciplines. His Artificial neural network studies integrate themes such as Network architecture, Scale, Transformers and Flexibility. His Adversarial system research includes themes of Theoretical computer science, the Gumbel distribution and Probabilistic frameworks.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
LIBLINEAR: A Library for Large Linear Classification
Rong-En Fan;Kai-Wei Chang;Cho-Jui Hsieh;Xiang-Rui Wang.
Journal of Machine Learning Research (2008)
A dual coordinate descent method for large-scale linear SVM
Cho-Jui Hsieh;Kai-Wei Chang;Chih-Jen Lin;S. Sathiya Keerthi.
International Conference on Machine Learning (2008)
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen;Huan Zhang;Yash Sharma;Jinfeng Yi.
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (2017)
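The ZOO paper above builds black-box attacks on zeroth-order optimization: gradients are estimated from function values alone, without access to the model's internals. A minimal sketch of the symmetric finite-difference estimator this family of methods rests on; the function names and toy objective are my own assumptions, not the paper's API:

```python
# Illustrative zeroth-order gradient estimate via symmetric finite differences
# (a sketch of the idea behind black-box methods like ZOO; names are assumed).

def zeroth_order_grad(f, x, h=1e-4):
    """Estimate the gradient of f at x using only function evaluations."""
    grad = []
    for i in range(len(x)):
        x_plus = list(x)
        x_minus = list(x)
        x_plus[i] += h
        x_minus[i] -= h
        # Central difference along coordinate i: (f(x+h*e_i) - f(x-h*e_i)) / 2h.
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad

# Usage: descend on a simple quadratic without ever touching its true gradient.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x = [0.0, 0.0]
for _ in range(200):
    g = zeroth_order_grad(f, x)
    x = [a - 0.1 * gi for a, gi in zip(x, g)]
```

Each coordinate costs two queries to f, which is why practical black-box attacks pair this estimator with coordinate-wise or batched updates to keep the query count manageable.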
VisualBERT: A Simple and Performant Baseline for Vision and Language.
Liunian Harold Li;Mark Yatskar;Da Yin;Cho-Jui Hsieh.
arXiv: Computer Vision and Pattern Recognition (2019)
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM
Yin-Wen Chang;Cho-Jui Hsieh;Kai-Wei Chang;Michael Ringgaard.
Journal of Machine Learning Research (2010)
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian;Ce Zhang;Huan Zhang;Cho-Jui Hsieh.
Neural Information Processing Systems (2017)
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
Wei-Lin Chiang;Xuanqing Liu;Si Si;Yang Li.
Knowledge Discovery and Data Mining (2019)
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen;Yash Sharma;Huan Zhang;Jinfeng Yi.
National Conference on Artificial Intelligence (2018)
Large Linear Classification When Data Cannot Fit in Memory
Hsiang-Fu Yu;Cho-Jui Hsieh;Kai-Wei Chang;Chih-Jen Lin.
ACM Transactions on Knowledge Discovery From Data (2012)
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Cho-jui Hsieh;Inderjit S. Dhillon;Pradeep K. Ravikumar;Mátyás A. Sustik.
Neural Information Processing Systems (2011)