Daniela Godoy (ISISTAN CONICET/UNCPBA, Argentina)
Antonela Tommasel (ISISTAN CONICET/UNCPBA, Argentina)
Arkaitz Zubiaga (Queen Mary University of London, United Kingdom)
Social media platforms have become an integral part of the everyday lives and activities of most people, providing new forms of communication and interaction. These sites allow users to freely share information and opinions (in the form of photos, short texts and comments), and foster the formation of links and social relationships (friendships, follower/followee relations). One of the most valuable features of social platforms is their potential for disseminating information widely and rapidly. The adoption of social media, however, also exposes users to risks, giving rise to what has been referred to as online harms.
Online harms are widespread on social media and can have serious damaging effects on individuals and on society at large. They take different forms, including, inter alia, the distribution of false and misleading content (such as hoaxes, conspiracy theories, fake news and even satirical content), harmful content such as abusive, discriminatory, offensive and violence-inciting comments, and the amplification of societal biases and inequalities online. The proliferation of online harms has become a serious problem with negative consequences ranging from public health issues to the disruption of democratic systems (Online Harms White Paper, 2019). Identifying harmful content online has, however, proven difficult, with not only the scientific community but also social media platforms and governments worldwide calling for support to develop effective methods.
Online harm-aware mechanisms based on intelligent methods are therefore essential to mitigate the negative effects of this unwanted content, preventing it from reaching large audiences and from being amplified by social media. Although intelligent techniques at the intersection of machine learning, natural language processing and social computing have made substantial advances in detecting harmful content and modelling its propagation in social networks, a number of open problems remain in this area. Among others, concerns have been raised about the potential social biases and unfairness of intelligent systems for detecting online harms, which also stem from the lack of explainability and transparency of learned models.
The aim of this Special Issue is to bring together a community of researchers interested in tackling online harms and mitigating their impact on social media. We seek novel research contributions on misinformation- and harm-aware intelligent systems that assist users in making informed decisions in the context of online misinformation, hate speech and other forms of online harms. Expected contributions are original works on intelligent systems that counter the negative effects of online harms on social media by improving detection methods and diffusion modelling, as well as by addressing concerns such as social biases, fairness and explainability.
Topics of interest include, but are not restricted to:
Submission deadline: December 4, 2020
Author notification: March 4, 2021
Revised papers due: April 4, 2021
Final notification: June 4, 2021
To discuss a possible contribution, please contact the special issue editors at [email protected]
Submissions should be original papers and should not be under consideration for publication elsewhere. Extended versions of high-quality conference papers already published at relevant venues may also be considered, provided the additional contribution is substantial (at least 30% new content).
Authors must follow the formatting and submission instructions of the Personal and Ubiquitous Computing journal at https://www.springer.com/journal/779.
During the first step in the submission system, Editorial Manager, please select “Original article” as the article type. In the subsequent steps, please confirm that your submission belongs to a special issue and choose the appropriate special issue title from the drop-down menu.