John Shawe-Taylor's main research areas are Artificial intelligence, Machine learning, Support vector machine, Algorithm and Kernel. His Artificial intelligence research draws on Hyperplane and Pattern recognition, and his Machine learning work connects Generalization with Approximation theory.
His Support vector machine research incorporates themes from Artificial neural network and Decision support system. Within the broader topic of Kernel, his work on String kernel is often linked to Analogy, and his Statistical learning theory research draws on Semi-supervised learning, Software, GRASP and Presentation.
Much of his research concerns Artificial intelligence, Machine learning, Support vector machine, Algorithm and Kernel method. His Artificial intelligence work frequently intersects with Pattern recognition, and within Machine learning he concentrates in particular on Multiple kernel learning.
His Support vector machine research incorporates perspectives from Margin, Data mining and Model selection, while his Algorithm work brings in Function, Generalization and Perceptron, and his Kernel method studies connect naturally to the adjacent topic of Kernel.
His investigations also cover Artificial intelligence, Machine learning, Algorithm, Recommender system and Generalization. His Artificial intelligence research integrates Natural language processing and Pattern recognition, and his Machine learning work relates to Neuroimaging, where it intersects with Multiple kernel learning and Interpretability.
His Algorithm research intersects with Norm, Feature, Similarity and Lasso, and his Recommender system work brings together Open educational resources, Data science and Set. His Support vector machine studies combine areas such as Linear programming, Data mining and Feature selection.
He also devotes much of his research to Artificial intelligence, Machine learning, Support vector machine, Kernel method and Mathematical optimization. His Artificial intelligence research includes elements of Wine, Linear regression and Time series, and he has incorporated themes such as Voxel and Neuroimaging into his Machine learning work.
Within Support vector machine, he focuses in particular on Statistical learning theory. His Kernel method research combines Dynamic programming, Markov decision process, Kernel and Conditional expectation, and his Mathematical optimization work includes themes of Classifier, Linear model and Data pre-processing.
This overview was generated by a machine learning system which analysed the scientist's body of work.
An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
Nello Cristianini;John Shawe-Taylor.
(2000)
Kernel Methods for Pattern Analysis
John Shawe-Taylor;Nello Cristianini.
(2004)
An Introduction to Support Vector Machines
Nello Cristianini;John Shawe-Taylor.
Cambridge University Press (2000)
Estimating the Support of a High-Dimensional Distribution
Bernhard Schölkopf;John C. Platt;John C. Shawe-Taylor;Alex J. Smola.
Neural Computation (2001)
Large Margin DAGs for Multiclass Classification
John C. Platt;Nello Cristianini;John Shawe-Taylor.
Neural Information Processing Systems (1999)
Text classification using string kernels
Huma Lodhi;Craig Saunders;John Shawe-Taylor;Nello Cristianini.
Journal of Machine Learning Research (2002)
Support Vector Method for Novelty Detection
Bernhard Schölkopf;Robert C. Williamson;Alex J. Smola;John Shawe-Taylor.
Neural Information Processing Systems (1999)
On Kernel-Target Alignment
Nello Cristianini;John Shawe-Taylor;André Elisseeff;Jaz S. Kandola.
Neural Information Processing Systems (2001)
Structural risk minimization over data-dependent hierarchies
John Shawe-Taylor;Peter L. Bartlett;Robert C. Williamson;Martin Anthony.
IEEE Transactions on Information Theory (1998)
Challenges in Representation Learning: A Report on Three Machine Learning Contests
Ian J. Goodfellow;Dumitru Erhan;Pierre Luc Carrier;Aaron Courville.
International Conference on Neural Information Processing (2013)
University of Bristol
University College London
University of Leoben
Australian National University
Google (United States)
Max Planck Institute for Intelligent Systems
Aalto University
University of Milan
University of California, Berkeley
KU Leuven
French Institute for Research in Computer Science and Automation - INRIA
University of Rostock
Ch. Ranbir Singh State Institute of Engineering & Technology, Jhajjar
California Institute of Technology
University of Central Florida
University of Nottingham
University of Michigan–Ann Arbor
Korea Advanced Institute of Science and Technology
University of Bologna
Henkel (Germany)
University of Virginia
University of South Florida
Kibi International University
Bjerknes Centre for Climate Research
Cardiff University
Forschungszentrum Jülich