By analyzing data from sensors installed in the environment and/or embedded in common mobile devices (such as smartphones and smartwatches), it is possible to automatically recognize the activities that people carry out. Activity recognition algorithms are usually based on machine learning and deep learning techniques. While these techniques achieve accurate recognition, they do not provide a human-readable explanation for the inferences the system produces. A human-understandable explanation of why the system recognized a particular activity serves a dual purpose: it helps data scientists better understand the recognition model while it is being built, and it enables more transparent and more refined context-aware services. In healthcare, for example, continuous monitoring of a patient's activities is useful for understanding her health status; a more transparent activity recognition system would allow doctors, among others, to fully understand the detailed reasons behind a monitored patient's behavior.

Recently, a new family of machine learning methods known as Explainable Artificial Intelligence (XAI) has emerged. Many current machine and deep learning applications do not reveal how they work or the logic behind their outputs: the models are, in effect, black boxes. XAI methods yield machine learning models capable of explaining every prediction the system produces. Requiring artificial intelligence algorithms to be understandable in human terms is instrumental in validating their quality and correctness, aligning them with human values and expectations, and preserving human decision-making autonomy. When machine learning models are involved in important decisions, their black-box nature also risks reproducing biases that are difficult to analyze, with serious consequences.
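To make the contrast between opaque and transparent models concrete, the sketch below trains an intrinsically interpretable decision tree on synthetic accelerometer-style features for a toy activity-recognition task and prints its human-readable decision rules. The feature names, thresholds, and data are illustrative assumptions, not part of this call; scikit-learn is used only as a convenient example library.

```python
# Illustrative sketch only: an intrinsically interpretable activity classifier.
# The features, labels, and value ranges are synthetic assumptions for this example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic "accelerometer" features: mean magnitude and variance per time window.
n = 300
still   = np.column_stack([rng.normal(1.0, 0.05, n), rng.normal(0.01, 0.005, n)])
walking = np.column_stack([rng.normal(1.2, 0.10, n), rng.normal(0.20, 0.050, n)])
running = np.column_stack([rng.normal(1.6, 0.15, n), rng.normal(0.60, 0.100, n)])

X = np.vstack([still, walking, running])
y = np.array(["still"] * n + ["walking"] * n + ["running"] * n)

# A shallow decision tree is a transparent model: its decision path is the explanation.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable rules, e.g. "if acc_variance <= 0.10 then class: still".
print(export_text(clf, feature_names=["acc_mean", "acc_variance"]))
```

Model-agnostic techniques such as LIME and SHAP pursue the same goal for black-box models by producing per-prediction feature attributions.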
The main objective of this special issue is to bring together diverse, novel, and impactful research work on Explainable Artificial Intelligence for IoT environments, thereby accelerating research in this field.
The topics of interest for this special issue include, but are not limited to:
- Explainable Artificial Intelligence
- Interpretable and Transparent Machine Learning Models
- Strategies to Explain Black Box Decision Systems
- Designing new Explanation Styles
- Evaluating Transparency and Interpretability of AI Systems
- Technical Aspects of Algorithms for Explanation
- Theoretical Aspects of Explanation and Interpretability
- Ethics in Explainable AI
- Argumentation Theory for Explainable AI
New papers, or extended versions of papers presented at related conferences, are welcome. Submissions must not be currently under review for publication elsewhere. Conference papers may be submitted only if they are substantially extended (more than 50%), and must be referenced. All submitted papers will be peer-reviewed using the normal standards of CAEE, and accepted based on quality, originality, novelty, and relevance to the theme of the special section. By submitting a paper to this issue, the authors agree to referee one paper (if asked) within the time frame of the special issue.
Before submission, authors should carefully read the journal's Guide for Authors.
Authors should submit their papers through the journal's web submission tool at https://www.editorialmanager.com/compeleceng/default.aspx by selecting “VSI-XAIOT” under the “Issues” tab.
For additional questions, contact the Main Guest Editor.
Submission of manuscript: Nov. 30, 2021
First notification: Jan. 30, 2022
Submission of revised manuscript: March 30, 2022
Final notification: May 30, 2022
Final paper due: June 30, 2022
Publication: 2022 (tentative)
Francesco Piccialli (Lead Guest Editor)
University of Naples FEDERICO II, Italy
Francesco Piccialli is currently an Assistant Professor (tenure track) of Computer Science at the Department of Mathematics and Applications “R. Caccioppoli” (DMA) of the University of Naples Federico II (UNINA), Italy. In 2018 he obtained the Italian Scientific Habilitation for Associate Professorship. He received a Laurea Degree (BSc+MSc) in Computer Science and a PhD in Computational and Computer Sciences from the University of Naples Federico II, Italy, in 2012 and 2016, respectively. He has also been a research fellow at CINI (National Interuniversity Consortium for Informatics) since 2013. He is the founder and scientific director of the M.O.D.A.L. research group, which is engaged in cutting-edge work on novel methodologies, applications, and services in the Data Science and Machine Learning fields and their emerging application domains. He has been involved in research and development projects in the areas of the Internet of Things, smart environments, data science, and mobile applications. He has authored more than 100 papers in international conferences and top-level journals (IEEE, ACM, Springer, and Elsevier).
Nik Bessis
Edge Hill University, UK
Nik Bessis is a full Professor and Head of the Department of Computer Science at Edge Hill University, UK. His research focuses on social graphs for network and big data analytics, as well as on developing data push and resource provisioning services in IoT, Future Internet, and cloud environments. He runs a number of research initiatives with several universities worldwide, and he has been a visiting scientist at ETH Zurich and a visiting professor at the University of Seville. He has led several research and knowledge transfer projects worth over £7m. He has published over 280 works and won 4 best paper awards. His latest edited book on IoT and big data has been ranked in the top 25 of Amazon's AI book list and has attracted over 100,000 downloads. Professor Bessis has served as a departmental and institutional expert evaluator for the UK QAA and the Greek and Cypriot quality assurance agencies, and as an assessor for more than 20 professorship conferments worldwide.
National Sun Yat-sen University, Taiwan
He received the Ph.D. degree in Computer Science and Engineering from National Sun Yat-sen University, Kaohsiung, Taiwan, in 2009. He was a postdoctoral fellow with the Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan, before joining the faculty of the Department of Applied Geoinformatics and the Department of Information Technology, Chia Nan University of Pharmacy & Science, Tainan, Taiwan, in 2010 and 2012, respectively. He joined the faculty of the Department of Computer Science and Information Engineering, National Ilan University, Yilan, Taiwan, and the Department of Computer Science and Engineering, National Chung-Hsing University, Taichung, Taiwan, in 2014 and 2017, respectively. In 2019, he joined the faculty of the Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan, where he is currently an Assistant Professor. He has served as a Senior Associate Editor of the Journal of Internet Technology since 2014 and as an Associate Editor of IEEE Access, IET Networks, and the IEEE Internet of Things Journal since 2017, 2018, and 2020, respectively. He has also been a member of the editorial boards of the International Journal of Internet Technology and Secured Transactions (IJITST) and the Journal of Network and Computer Applications (JNCA) since 2016 and 2017, respectively. His research interests include computational intelligence, data mining, cloud computing, and the Internet of Things. His favorite hobbies and leisure activities are painting, reading, basketball, and travelling.