Zhong Li, FernUniversität in Hagen, Germany
Frank Kirchner, University of Bremen and DFKI
Herwig Unger, FernUniversität in Hagen, Germany
Kyandoghere Kyamakya, Alpen-Adria-Universität Klagenfurt, Austria
Recent successes in machine learning (ML), particularly deep learning, have led to an upsurge of artificial intelligence (AI) applications in a wide range of fields. However, models built with ML and deep learning have been regarded as ‘black boxes’: they can make good predictions, but it is difficult to understand the logic behind those predictions. The main reason is that the underlying structures are complex, non-linear, and extremely difficult to interpret and explain, whether by the neural network itself, by an external explanatory component, or even by the developer of the system. In many real-world applications, such as healthcare, medicine, finance, and law, it is critical to make an AI system explainable so that users, the people it affects, and the researchers and developers who create it can understand, trust, and manage it.
Explainable machine learning aims to make a model's behavior more intelligible to humans by providing explanations. Some general principles help create effective, more human-understandable AI systems: an explainable machine learning system should be able to explain its capabilities and understandings, explain what it has done, and disclose the salient information it is acting on. Methods for explainability mainly focus on interpreting, and making transparent, the entire process of building an AI system, from the inputs to the outputs via the application of a learning approach to generate a model. The outcome of these methods is explanations that can take different formats, such as rules, numerical, textual, or visual information, or a combination thereof. These explanations can be theoretically analyzed and can draw support from the discipline of Human-Computer Interaction (HCI). More recently, many researchers have been working on new explainable machine learning models and techniques. These may correspond to interpretable models and model interpretability techniques, or to transparent models (algorithmic transparency, decomposability, and simulatability) and post-hoc explainability (text explanations, visualizations, local explanations, explanations by example, explanations by simplification, and feature relevance).
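To make one of the post-hoc techniques listed above concrete, the sketch below illustrates feature relevance via permutation importance: shuffle one feature at a time and measure how much the model's error grows. The data, the least-squares "black-box" model, and all function names here are illustrative assumptions, not part of any specific system discussed in this issue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A fitted "black-box" model; here simply the least-squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(data):
    return data @ w

def permutation_importance(predict, X, y, n_repeats=10):
    """Increase in mean squared error when each feature is shuffled."""
    base_error = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            errors.append(np.mean((predict(X_perm) - y) ** 2))
        importances[j] = np.mean(errors) - base_error
    return importances

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 2 is near zero
```

Because the technique only queries the model through `predict`, it applies unchanged to any opaque model, which is what makes it a post-hoc explanation method rather than a property of a transparent model class.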
This special issue aims to report the newest developments of explainable machine learning in methodologies and applications, such as the production of explanations for black-box predictions with methods that extract or lift correlative structures from deep-learned models into vocabularies appropriate for user-level explanations. Topics of interest include, but are not limited to: