
H-Index & Metrics

Discipline         H-index   Citations   Publications   World Ranking   National Ranking
Computer Science   57        13,951      168            1959            1078
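The h-index reported above is defined as the largest h such that h of the author's publications each have at least h citations. A minimal sketch of that computation (the citation counts below are illustrative, not Shavlik's actual per-paper numbers):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Illustrative example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
```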

Research.com Recognitions

Awards & Achievements

2006 - Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) For significant contributions to machine learning, especially knowledge-intensive approaches, and the application of machine learning to problems in computational biology.

Overview

What is he best known for?

The fields of study he is best known for:

  • Artificial intelligence
  • Machine learning
  • Programming language

His primary areas of study are Artificial intelligence, Machine learning, Artificial neural network, Time delay neural network and Nervous system network models. His work in Artificial intelligence is closely tied to the topic of Set, and his Machine learning research incorporates elements of Algorithm and Task.

Within Artificial neural network, Jude W. Shavlik focuses on topics relating to Decision tree and, in certain cases, Tree, Range, Simple, Matching and Structure. His Time delay neural network research centers on Deep learning, which links to Recurrent neural network, while his Nervous system network models research includes themes of Physical neural network and Stochastic neural network.

His most cited works include:

  • Knowledge-based artificial neural networks (581 citations)
  • Extracting Refined Rules from Knowledge-Based Neural Networks (580 citations)
  • Extracting Tree-Structured Representations of Trained Networks (429 citations)

What are the main themes of his work throughout his whole career to date?

Artificial intelligence, Machine learning, Artificial neural network, Inductive logic programming and Task are his primary areas of study. His research in Artificial intelligence intersects with topics in Set and Statistical relational learning. His Machine learning study combines topics from a wide range of disciplines, such as Information extraction and Data mining.

Within Inductive logic programming, he focuses on Precision and recall and, on occasion, Field. His Task research incorporates themes from Natural language processing, Advice and Reinforcement learning, and in his work Algorithm is closely linked to Backpropagation, which falls within the broad field of Connectionism.

He most often published in these fields:

  • Artificial intelligence (66.23%)
  • Machine learning (47.81%)
  • Artificial neural network (24.12%)

What were the highlights of his more recent work (2009–2021)?

  • Artificial intelligence (66.23%)
  • Machine learning (47.81%)
  • Statistical relational learning (12.28%)

In recent papers he was focusing on the following fields of study:

His primary areas of study are Artificial intelligence, Machine learning, Statistical relational learning, Boosting and Task. His work brings together Statistical inference and Artificial intelligence, and his Support vector machine research is linked to topics like Gradient boosting.

His Statistical relational learning research incorporates disciplines such as Ensemble learning and Data-driven methods. His Boosting research integrates Gradient based algorithm, Markov logic network, Conditional probability distribution and Missing data, while his Task study draws on Relation, Set and Domain knowledge.

Between 2009 and 2021, his most popular works were:

  • Tuffy: scaling up statistical inference in Markov logic networks using an RDBMS (233 citations)
  • Corleone: hands-off crowdsourcing for entity matching (146 citations)
  • DeepDive: Web-scale Knowledge-base Construction using Statistical Learning and Inference (134 citations)

In his most recent research, the most cited papers focused on:

  • Artificial intelligence
  • Machine learning
  • Programming language

His primary scientific interests are in Artificial intelligence, Machine learning, Statistical relational learning, Markov chain and Data science. His Artificial intelligence studies deal with areas such as Simple and Big data, and his Machine learning work combines topics such as Domain, Scalability and Markov process.

His work in the field of Domain brings together Learning classifier system, Unsupervised learning, Reinforcement learning and Control theory. His Statistical relational learning study combines Ensemble learning and Boosting, and his Data science research is interdisciplinary, drawing on Crowdsourcing, Web page, World Wide Web, Knowledge base and Workflow.

This overview was generated by a machine learning system which analysed the scientist’s body of work. If you have any feedback, you can contact us here.

Top Publications

Extracting Refined Rules from Knowledge-Based Neural Networks

Geoffrey G. Towell;Jude W. Shavlik.
Machine Learning (1993)

1135 Citations

Knowledge-based artificial neural networks

Geoffrey G. Towell;Jude W. Shavlik.
Artificial Intelligence (1994)

988 Citations

Extracting Tree-Structured Representations of Trained Networks

Mark Craven;Jude W. Shavlik.
Neural Information Processing Systems (1995)

729 Citations

Refinement of approximate domain theories by knowledge-based neural networks

Geoffrey G. Towell;Jude W. Shavlik;Michiel O. Noordewier.
National Conference on Artificial Intelligence (1990)

633 Citations

Readings in Machine Learning

Jude W. Shavlik;Thomas Dietterich.
(1991)

478 Citations

Symbolic and neural learning algorithms: an experimental comparison

Jude W. Shavlik;Raymond J. Mooney;Geoffrey G. Towell.
Machine Learning (1991)

474 Citations

Actively Searching for an Effective Neural Network Ensemble

David W Opitz;Jude W Shavlik.
Connection Science (1996)

441 Citations

Generating Accurate and Diverse Members of a Neural-Network Ensemble

David W. Opitz;Jude W. Shavlik.
Neural Information Processing Systems (1995)

408 Citations

Using sampling and queries to extract rules from trained neural networks

Mark Craven;Jude W. Shavlik.
International Conference on Machine Learning (1994)

400 Citations

Using neural networks for data mining

Mark W. Craven;Jude W. Shavlik.
Future Generation Computer Systems (1997)

395 Citations

Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.

If you think any of the details on this page are incorrect, let us know.

Contact us

Top Scientists Citing Jude W. Shavlik

Raymond J. Mooney

The University of Texas at Austin

Publications: 36

Kristian Kersting

TU Darmstadt

Publications: 34

David L. Page

MIT

Publications: 34

Matthew E. Taylor

University of Alberta

Publications: 32

Rudy Setiono

National University of Singapore

Publications: 30

Christopher Ré

Stanford University

Publications: 29

Peter Stone

The University of Texas at Austin

Publications: 23

Salvatore J. Stolfo

Columbia University

Publications: 21

Huan Liu

Arizona State University

Publications: 20

Vasant Honavar

Pennsylvania State University

Publications: 20

Pedro Domingos

University of Washington

Publications: 19

William W. Cohen

Google (United States)

Publications: 18

Vítor Santos Costa

University of Porto

Publications: 18

Marco Gori

University of Siena

Publications: 17

Sankar K. Pal

Indian Statistical Institute

Publications: 17
