Explainability of Machine Learning in Methodologies and Applications

Special Issue Information

Submission Deadline: 15-09-2021
Journal Impact Score: 11.71
Journal Name: Knowledge-Based Systems
Publisher: Elsevier

Special Issue Call for Papers


Guest Editors:

Zhong Li, FernUniversität in Hagen, Germany
Frank Kirchner, University of Bremen and DFKI
Herwig Unger, FernUniversität in Hagen, Germany
Kyandoghere Kyamakya, Alpen-Adria-Universität Klagenfurt, Austria



Recent successes in machine learning (ML), particularly deep learning, have led to an upsurge of artificial intelligence (AI) applications in a wide range of fields. However, models built with ML and deep learning are often regarded as ‘black boxes’ in the sense that they can make good predictions, but it is difficult to understand the logic behind those predictions. The main reason is that the underlying structures are complex, non-linear, and extremely difficult to interpret or explain, whether by the neural network itself, by an external explanatory component, or even by the developers of the system. In many real-world applications, such as healthcare, medicine, finance, and law, it is critical to make an AI system explainable, so that users, the people affected by its decisions, and the researchers and developers who create it can understand, trust, and manage it.



Explainable machine learning aims to make a model's behavior more intelligible to humans by providing explanations. Some general principles help create effective, human-understandable AI systems: an explainable machine learning system should be able to explain its capabilities and understandings, explain what it has done, and disclose the salient information it is acting on. Explainability methods mainly focus on interpreting, and making transparent, the entire process of building an AI system, from the inputs to the outputs via the application of a learning approach that generates a model. The outcome of these methods are explanations in different formats, such as rules, numerical, textual, or visual information, or a combination of these. Such explanations can be analyzed theoretically and draw support from the discipline of Human-Computer Interaction (HCI). More recently, many researchers have been working on new explainable machine learning models and techniques, which may take the form of interpretable models and model-interpretability techniques, or of transparent models (algorithmic transparency, decomposability, and simulatability) and post-hoc explainability (text explanations, visualizations, local explanations, explanations by example, explanations by simplification, and feature relevance).
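To make one of these post-hoc techniques concrete, the following is a minimal sketch of feature relevance via permutation importance on a black-box classifier. It assumes scikit-learn is available; the dataset and model are illustrative choices, not part of this call.

```python
# Sketch: post-hoc feature relevance via permutation importance.
# Assumes scikit-learn; dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque ensemble model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out accuracy; a large drop marks a feature the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the most relevant features as a simple, human-readable explanation.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```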



This special issue aims to report the newest developments in explainable machine learning methodologies and applications, such as the production of explanations for black-box predictions with methods that extract or lift correlative structures from deep-learned models into vocabularies appropriate for user-level explanations (a sketch of one such simplification-based method appears after the topic list). Topics of interest include, but are not limited to:




  1. Theory, models, frameworks and tools of explainable machine learning
  2. Explainable machine learning methods by implementation of fuzzy sets and systems
  3. Explainable machine learning methods and algorithms by integrating rule-based learning, ontologies, Bayesian models and other related machine learning techniques
  4. Explainable machine learning security, privacy and related systems
  5. Explainable machine learning in robotics, healthcare and social science
  6. Explainable machine learning for human-machine interaction and collaboration
  7. Explainable machine learning systems with autonomous data-driven machine learning and automated reasoning
  8. Explainable machine learning models for personalised support in human thinking, learning, designing, planning and decision-making
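As referenced above, one way to lift a black-box model's learned behavior into a user-level vocabulary is to fit a global surrogate: a small, transparent decision tree trained to mimic the black box's predictions (an instance of explanation by simplification). The sketch below assumes scikit-learn; the specific model and dataset are illustrative, not prescribed by this call.

```python
# Sketch: explanation by simplification via a global surrogate tree.
# Assumes scikit-learn; model and dataset choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model: a small neural network.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
).fit(X, y)

# Surrogate: a depth-limited tree fitted to the black box's *predictions*,
# not the true labels, so its rules approximate the learned decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# The extracted rules serve as a user-level explanation of the model.
print(export_text(surrogate, feature_names=list(X.columns)))
```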



Important Dates

Submission Deadline: 15-09-2021


Closed Special Issues

Explainability of Machine Learning in Methodologies and Applications
Knowledge-Based Systems | Closing date: 15-09-2021 | G2R Score: 11.71

Robust, Explainable, and Privacy-Preserving Deep Learning
Knowledge-Based Systems | Closing date: 31-08-2021 | G2R Score: 11.71

Explainable Artificial Intelligence for Sentiment Analysis
Knowledge-Based Systems | Closing date: 25-12-2020 | G2R Score: 11.71

Intelligent Decision-Making and Consensus under Uncertainty in Inconsistent and Dynamic Environments
Knowledge-Based Systems | Closing date: 31-01-2018 | G2R Score: 11.71

Decision Support Systems in Big Data Environments
Knowledge-Based Systems | Closing date: 15-01-2017 | G2R Score: 11.71

Volume, Variety and Velocity of Data Sciences
Knowledge-Based Systems | Closing date: 15-12-2015 | G2R Score: 11.71

New Avenues in Knowledge Bases for Natural Language Processing
Knowledge-Based Systems | Closing date: 30-10-2015 | G2R Score: 11.71