Ji Liu's research centers on algorithms, asynchronous communication, speedup analysis, stochastic gradient descent, and coordinate descent. His algorithmic work draws on stochastic approximation, saddle-point problems, acceleration, convex optimization, and pattern recognition, while his convex optimization studies involve artificial neural networks, artificial intelligence, bounded functions, and shared-memory systems.
Within artificial intelligence, his work addresses themes such as events, machine learning, and data mining. His stochastic gradient descent research links support vector machines, stochastic optimization, convolutional neural networks, and computation to the communication bottleneck, and his coordinate descent results strengthen the broader literature on mathematical optimization.
His scientific interests lie mainly in artificial intelligence, algorithms, mathematical optimization, machine learning, and reinforcement learning. His research ties pattern recognition closely to artificial intelligence, and his algorithmic work on coordinate descent is frequently linked to asynchronous communication, connecting otherwise separate areas of study.
His mathematical optimization research spans regularization, bounded functions, and proximal gradient methods. In reinforcement learning he incorporates stochastic optimization and variance reduction, and his study of convergence rates draws on both stochastic gradient descent and server-based architectures.
His main research concerns artificial intelligence, regret, keys, bottlenecks, and scalability. His artificial intelligence work is frequently linked to functions, and his research on functions incorporates stochastic gradient descent, variance reduction, and reinforcement learning.
His regret research touches on the World Wide Web, social networks, Monte Carlo tree search, and federated learning. His work on embeddings connects bottleneck analysis with data mining, and his scalability research intersects with system models and feature analysis.
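Much of the work above revolves around stochastic gradient descent. As a point of reference, here is a minimal serial sketch of the method; this is illustrative only, not any specific algorithm from these papers, and the objective and noise model are invented for the example:

```python
import random

def sgd(grad_fn, x0, lr=0.1, steps=200):
    """Plain serial stochastic gradient descent on a scalar parameter.
    (Illustrative baseline; the papers listed below study parallel,
    asynchronous, and decentralized variants of this update.)"""
    x = x0
    for _ in range(steps):
        x -= lr * grad_fn(x)  # step against a noisy gradient estimate
    return x

# Minimize f(x) = (x - 3)^2 given only noisy gradients 2(x - 3) + noise.
random.seed(0)
noisy_grad = lambda x: 2.0 * (x - 3.0) + random.gauss(0.0, 0.1)
x_star = sgd(noisy_grad, x0=0.0)
```

With a small fixed step size the iterate settles in a neighbourhood of the minimizer at 3; much of the research above asks how this behaviour survives parallel and asynchronous execution.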
This overview was generated by a machine learning system that analysed the scientist's body of work.
Tensor completion for estimating missing values in visual data
Ji Liu;Przemyslaw Musialski;Peter Wonka;Jieping Ye.
International Conference on Computer Vision (2009)
Sparse reconstruction cost for abnormal event detection
Yang Cong;Junsong Yuan;Ji Liu.
Computer Vision and Pattern Recognition (2011)
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian;Ce Zhang;Huan Zhang;Cho-Jui Hsieh.
Neural Information Processing Systems (2017)
Abnormal event detection in crowded scenes using sparse representation
Yang Cong;Junsong Yuan;Ji Liu.
Pattern Recognition (2013)
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni;Jialei Wang;Ji Liu;Tong Zhang.
Neural Information Processing Systems (2018)
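The paper above reduces communication cost by transmitting only a sparse subset of gradient coordinates. Below is a minimal sketch of one common sparsification scheme, deterministic top-k; note this is an assumption-laden stand-in, since the paper itself proposes a randomized, unbiased variant:

```python
def sparsify_topk(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector,
    zeroing the rest before communication. (A common sparsification
    scheme; the paper's method instead drops coordinates randomly and
    rescales survivors to keep the estimate unbiased.)"""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(idx)
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

sparse = sparsify_topk([0.5, -2.0, 0.1, 3.0], k=2)
# keeps the two largest-magnitude entries, -2.0 and 3.0
```

Only the retained index-value pairs need to be sent over the network, trading a small perturbation of the update for a large drop in bandwidth.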
Asynchronous parallel stochastic gradient for nonconvex optimization
Xiangru Lian;Yijun Huang;Yuncheng Li;Ji Liu.
Neural Information Processing Systems (2015)
Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
Ji Liu;Stephen J. Wright.
SIAM Journal on Optimization (2015)
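For context, here is a minimal serial sketch of the cyclic coordinate descent update that the work above parallelizes. This is illustrative only; the asynchronous algorithms studied in these papers update coordinates concurrently, possibly reading stale values, and the objective here is invented for the example:

```python
def coordinate_descent(grad_i, x, lr=0.2, sweeps=100):
    """Cyclic coordinate descent: repeatedly update one coordinate at a
    time using only its partial derivative. (Serial sketch; the papers
    analyse an asynchronous parallel variant of this loop.)"""
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] -= lr * grad_i(x, i)  # gradient step on coordinate i only
    return x

# Minimize f(x) = sum_j (x_j - j)^2; the partial derivative in
# coordinate i is 2 * (x_i - i), so the minimizer is x_i = i.
grad_i = lambda x, i: 2.0 * (x[i] - i)
sol = coordinate_descent(grad_i, [0.0, 0.0, 0.0])
```

Because each update touches a single coordinate, the method is a natural candidate for parallelism: different workers can own different coordinates, which is the setting the convergence analysis above addresses.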
D²: Decentralized Training over Decentralized Data
Hanlin Tang;Xiangru Lian;Ming Yan;Ce Zhang.
arXiv: Distributed, Parallel, and Cluster Computing (2018)
An asynchronous parallel stochastic coordinate descent algorithm
Ji Liu;Stephen J. Wright;Christopher Ré;Victor Bittorf.
Journal of Machine Learning Research (2015)
Asynchronous Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian;Wei Zhang;Ce Zhang;Ji Liu.
International Conference on Machine Learning (2018)
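The decentralized algorithms above replace a central parameter server with neighbour-to-neighbour averaging over a communication graph. A minimal sketch of one synchronous gossip round, the averaging building block; the actual methods interleave this with local stochastic gradient steps, and the ring topology and mixing weights below are invented for the example:

```python
def gossip_step(params, mixing):
    """One synchronous gossip round: node i replaces its local model
    with a weighted average of its neighbours' models, using a doubly
    stochastic mixing matrix. (Sketch of the averaging step only;
    decentralized SGD alternates this with local gradient updates.)"""
    n = len(params)
    return [sum(mixing[i][j] * params[j] for j in range(n)) for i in range(n)]

# Three fully connected nodes, each holding a scalar model.
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
models = [0.0, 3.0, 6.0]
for _ in range(20):
    models = gossip_step(models, W)
# repeated mixing drives every node toward the global average, 3.0
```

Because the mixing matrix is doubly stochastic, each round preserves the global average while shrinking the disagreement between nodes, which is why no central coordinator is needed.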
Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.