Risk-aware Autonomous Systems: Theory and Practice


Special Issue Information

Submission Deadline: 15 October 2021
Journal Impact Score: 6.93
Journal Name: Artificial Intelligence
Publisher: Elsevier

Special Issue Call for Papers


Aims and Scope



This special issue focuses on the theory and practice of risk-aware autonomous systems that reason about uncertainty and risk online to achieve safety, and that combine machine learning and decision making to accomplish real world tasks.



The topic of risk-aware autonomous systems has seen a dramatic increase in importance over the last few years, as autonomous systems are being deployed almost daily in safety-critical applications, including self-driving vehicles, autonomous undersea and aerospace systems, service robotics, and collaborative manufacturing. This broad adoption is a testament to the fast-paced progress of the research community across multiple areas, including planning, learning, perception, decision making, and control. At the same time, today’s widely used AI algorithms for autonomy are beginning to show fundamental limits and practical shortcomings. In particular, excessive risk taken by these algorithms can lead to catastrophic failure of the overall system and may put human life in danger. Many AI methods used today do not attempt to quantify uncertainty; they do not assess the risks that uncertainty imposes on system safety and success; they do not guarantee bounds on this risk; and they do not perform these assessments in real time.



To push the envelope of autonomous systems’ safety, this special issue will present ground-breaking research on the theory and practice of designing the next generation of risk-aware AI algorithms and autonomous systems. Key to the envisioned methods are their ability to account for uncertainty and the risk of failure during online execution, their capacity to proactively quantify and mitigate risks against task goals and safety constraints, and their ability to offer formal guarantees, such as bounds on the risk of failure. Emerging risk-bounded methods often operate on models of uncertainty, specifications of intended outcomes, and specifications of acceptable risks regarding these outcomes. These models and specifications are diverse. Uncertainty models may be probabilistic, set-bounded, or interval-based. Intended outcomes include goals achieved, deadlines met, safety constraints respected, required accuracy in model estimation and perception, and rate of false positives. Specifications of acceptable risk include risk bounds and acceptable costs of failure. These intended outcomes and acceptable risks can apply both to individual AI components, such as policy and action learners, image classifiers, and planners, and to the aggregate system as a whole.
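As a concrete, purely illustrative sketch of the kind of chance constraint such risk-bounded methods enforce: under a probabilistic (here, Gaussian) uncertainty model, one can estimate a candidate plan's probability of failure by Monte Carlo sampling and compare it against a risk bound. All function names, parameters, and the collision-style failure criterion below are hypothetical, not drawn from any particular system described in this call.

```python
import numpy as np

def estimate_failure_risk(plan_positions, obstacle_mean, obstacle_cov,
                          safe_distance, n_samples=10_000, seed=0):
    """Monte Carlo estimate of P(failure) for a candidate plan, assuming a
    Gaussian uncertainty model over an obstacle's position (illustrative)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(obstacle_mean, obstacle_cov, n_samples)
    # Pairwise distances: (T waypoints, N samples); a sample counts as a
    # failure if any waypoint comes closer than safe_distance to it.
    dists = np.linalg.norm(
        plan_positions[:, None, :] - samples[None, :, :], axis=-1)
    failures = (dists < safe_distance).any(axis=0)
    return failures.mean()

def satisfies_chance_constraint(risk_estimate, risk_bound):
    """Chance constraint: P(failure) <= risk_bound (e.g. a 1% risk bound)."""
    return risk_estimate <= risk_bound

# Hypothetical usage: a plan that stays far from the uncertain obstacle
# easily meets a 1% risk bound, while one passing through its mean does not.
plan_far = np.array([[10.0, 10.0], [11.0, 10.0]])
risk = estimate_failure_risk(plan_far, np.zeros(2), 0.1 * np.eye(2), 0.5)
print(satisfies_chance_constraint(risk, 0.01))
```

A risk-bounded planner would evaluate (or analytically bound) this failure probability online, rejecting or repairing any plan whose estimated risk exceeds the specified bound.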



This special issue is intended to represent this diversity. It aims to cover a broad set of topics related to risk-aware autonomous systems, including but not limited to:



● risk-aware task and motion planning;



● robust and adversarial learning;



● certifiable and risk-aware perception, localization and mapping;



● robust task monitoring and execution under uncertainty;



● formal methods for monitoring and verifying uncertain systems;



● constraint and mathematical programming with chance constraints;



● robust control of intelligent systems;



● system-level monitoring and risk quantification.



Submission Instructions



We welcome high-quality, original (unpublished) articles. Each submission will be peer-reviewed.



All submissions should be formatted following the AI journal's instructions for authors (https://www.elsevier.com/journals/artificial-intelligence/0004-3702/guide-for-authors) and submitted at https://www.editorialmanager.com/artint/default.aspx.



Important Dates



● Submissions open: 15 May 2021



● Submissions close: 15 October 2021



● Publication of the special issue: 15 August 2022



Depending on interest and the number of submissions, we may continue to accept papers after the submission deadline, until 15 January 2022; these will be reviewed and published on a rolling basis.



Guest Editors



● Prof. Sara Bernardini (Royal Holloway University of London, [email protected])



● Prof. Luca Carlone (Massachusetts Institute of Technology, [email protected])



● Dr. Ashkan Jasour (Massachusetts Institute of Technology, [email protected])



● Prof. Andreas Krause (ETH Zurich, [email protected])



● Prof. George Pappas (University of Pennsylvania, [email protected])



● Prof. Brian Williams (Massachusetts Institute of Technology, [email protected])



● Prof. Yisong Yue (California Institute of Technology, [email protected])
