Ji Liu devotes much of his research to Algorithm, Asynchronous communication, Speedup, Stochastic gradient descent and Coordinate descent. His Algorithm research incorporates themes from Stochastic approximation, Saddle point problems, Acceleration, Convex optimization and Pattern recognition. His Convex optimization work draws on Artificial neural network, Artificial intelligence, Bounded function and Shared memory.
His Artificial intelligence work intersects with themes such as Event detection, Machine learning and Data mining. In his research, Support vector machine, Stochastic optimization, Convolutional neural network and Computation are closely linked to Bottleneck, a topic that falls within the broad field of Stochastic gradient descent. His Coordinate descent studies contribute to the wider literature on Mathematical optimization.
His scientific interests lie mostly in Artificial intelligence, Algorithm, Mathematical optimization, Machine learning and Reinforcement learning. His research ties Pattern recognition and Artificial intelligence together. His work on Coordinate descent, carried out as part of a broader Algorithm research agenda, is frequently linked to Asynchronous communication, thereby connecting diverse disciplines of study.
The areas he examines in his Mathematical optimization research include Regularization, Bounded function and Proximal gradient methods. His Reinforcement learning work incorporates themes such as Stochastic optimization and Variance reduction. His study of Rate of convergence is interdisciplinary in nature, drawing from both Stochastic gradient descent and Server.
His main research concerns Artificial intelligence, Regret, Key, Bottleneck and Scalability. Artificial intelligence is frequently linked to Function in his work. His Function research is multidisciplinary, incorporating elements of Stochastic gradient descent, Variance reduction and Reinforcement learning.
His Regret research includes elements of World Wide Web, Social network, Monte Carlo tree search and Federated learning. His studies examine the connections between Bottleneck and Data mining, particularly with regard to Embedding. His work also connects Scalability with other disciplines, such as System model, which overlaps with his interest in Feature.
This overview was generated by a machine learning system which analysed the scientist's body of work.
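A recurring theme across this profile is decentralized and asynchronous stochastic optimization. As a rough illustration of the decentralized parallel SGD setting studied in several of the publications below, here is a minimal single-process simulation; the ring topology, the toy least-squares objective, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal single-process simulation of decentralized parallel SGD on a ring
# of workers: each worker averages its model with its two ring neighbours
# (a gossip step with a doubly stochastic mixing matrix), then takes a local
# stochastic gradient step. Topology, objective and names are illustrative.

rng = np.random.default_rng(0)
n_workers, dim, lr, steps = 4, 5, 0.05, 300

A = rng.normal(size=(100, dim))        # toy data (shared here for simplicity)
b = A @ np.ones(dim)                   # targets; the optimum is the all-ones vector
models = [rng.normal(size=dim) for _ in range(n_workers)]

def stochastic_grad(x):
    i = rng.integers(len(b))           # sample one data point
    return (A[i] @ x - b[i]) * A[i]    # gradient of 0.5 * (a_i . x - b_i)^2

for _ in range(steps):
    # Gossip step: mix each model with its two ring neighbours (weights 1/3 each).
    mixed = [(models[k - 1] + models[k] + models[(k + 1) % n_workers]) / 3
             for k in range(n_workers)]
    # Local update: each worker applies its own stochastic gradient.
    models = [x - lr * stochastic_grad(x) for x in mixed]

consensus = np.mean(models, axis=0)
print("distance to optimum:", np.linalg.norm(consensus - np.ones(dim)))
```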
Tensor completion for estimating missing values in visual data
Ji Liu;Przemyslaw Musialski;Peter Wonka;Jieping Ye.
International Conference on Computer Vision (2009)
Sparse reconstruction cost for abnormal event detection
Yang Cong;Junsong Yuan;Ji Liu.
Computer Vision and Pattern Recognition (2011)
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian;Ce Zhang;Huan Zhang;Cho-Jui Hsieh.
Neural Information Processing Systems (2017)
An asynchronous parallel stochastic coordinate descent algorithm
Ji Liu;Stephen J. Wright;Christopher Ré;Victor Bittorf.
Journal of Machine Learning Research (2015)
Abnormal event detection in crowded scenes using sparse representation
Yang Cong;Junsong Yuan;Ji Liu.
Pattern Recognition (2013)
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni;Jialei Wang;Ji Liu;Tong Zhang.
Neural Information Processing Systems (2018)
Asynchronous parallel stochastic gradient for nonconvex optimization
Xiangru Lian;Yijun Huang;Yuncheng Li;Ji Liu.
Neural Information Processing Systems (2015)
Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
Ji Liu;Stephen J. Wright.
SIAM Journal on Optimization (2015)
Asynchronous Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian;Wei Zhang;Ce Zhang;Ji Liu.
International Conference on Machine Learning (2018)
$D^2$: Decentralized Training over Decentralized Data
Hanlin Tang;Xiangru Lian;Ming Yan;Ce Zhang.
International Conference on Machine Learning (2018)
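The gradient sparsification entry above reduces communication by transmitting only a few gradient coordinates per iteration. A deterministic top-k selector sketches the general idea; note that the paper itself uses a randomized scheme that rescales surviving coordinates to keep the gradient estimate unbiased, so this is an illustration rather than the authors' method.

```python
import numpy as np

# Illustrative top-k gradient sparsification: keep only the k
# largest-magnitude coordinates, so each worker communicates a
# sparse vector instead of the full gradient.

def sparsify_topk(grad: np.ndarray, k: int) -> np.ndarray:
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of k largest |g_i|
    sparse[idx] = grad[idx]
    return sparse

g = np.random.default_rng(1).normal(size=10)
print(sparsify_topk(g, 3))  # only 3 nonzero entries are communicated
```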