AI-Based Safety-Critical Systems (AI-SCS) are being increasingly deployed in
the real world. These can pose a risk of harm to people and the environment.
Reducing that risk is an overarching priority during development and operation.
As more AI-SCS become autonomous, a layer of risk management via human
intervention is removed. Following an accident, it will be important to
identify the causal contributions and the responsible actors behind them, both
to learn from mistakes and to prevent similar future events. Many authors have
commented on the “responsibility gap” where it is difficult for developers and
manufacturers to be held responsible for harmful behaviour of an AI-SCS. This
is due to the complex development cycle for AI, uncertainty in AI performance,
and the dynamic operating environment. A human operator can become a “liability
sink”, absorbing blame for the consequences of AI-SCS outputs that they were
not responsible for creating and may not fully understand.

This cross-disciplinary paper considers different senses of responsibility
(role, moral, legal and causal), and how they apply in the context of AI-SCS
safety. We use a core concept (Actor(A) is responsible for Occurrence(O)) to
create role responsibility models, producing a practical method to capture
responsibility relationships and provide clarity on the previously identified
responsibility issues. Our paper demonstrates the approach with two examples: a
retrospective analysis of the fatal collision in Tempe, Arizona, involving an
autonomous vehicle, and a safety-focused predictive role-responsibility
analysis for an AI-based diabetes co-morbidity predictor. In both examples, our
primary focus is on safety, aiming to reduce unfair or disproportionate blame
being placed on operators or developers. We present a discussion and avenues
for future research.

Expert Commentary: Analysis of AI-Based Safety-Critical Systems (AI-SCS)

The deployment of AI-Based Safety-Critical Systems (AI-SCS) in the real world is becoming increasingly common. However, these systems pose an inherent risk of harm to people and the environment, so reducing that risk and ensuring safety is of paramount importance throughout their development and operation.

One significant challenge in managing the risk associated with AI-SCS is the increasing autonomy of these systems, which removes the layer of risk management previously provided by human intervention. When an accident does occur, it becomes harder to identify the causal contributions and the responsible actors behind them, because there is no longer a single human operator who can straightforwardly be held accountable for the consequences of AI-SCS outputs.

Many authors have highlighted the “responsibility gap” in holding developers and manufacturers accountable for the harmful behavior of AI-SCS. The complex development cycle of AI, uncertainty in AI performance, and the dynamic operating environment make it challenging to establish clear responsibility for any harmful outcomes. This often results in operators becoming “liability sinks,” taking the blame for consequences they did not create and may not fully understand.

This cross-disciplinary paper addresses these responsibility issues by considering different senses of responsibility, including role, moral, legal, and causal responsibility, within the context of AI-SCS safety. The paper introduces a core concept, Actor(A) is responsible for Occurrence(O), which enables the creation of role responsibility models. These models offer a practical method for capturing responsibility relationships and for providing clarity on the responsibility issues identified above.
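To make the core concept more tangible, the sketch below shows one way a relationship of the form Actor(A) is responsible for Occurrence(O), tagged with the sense of responsibility involved, could be recorded and queried. It is a minimal illustration only: the actors, occurrence, class names, and helper function are hypothetical assumptions and do not reflect the authors' actual notation or tooling.

```python
# Minimal illustrative sketch (not the authors' method): recording
# "Actor(A) is responsible for Occurrence(O)" relationships, each tagged
# with a sense of responsibility (role, moral, legal, or causal).
from dataclasses import dataclass
from enum import Enum
from typing import List


class Sense(Enum):
    ROLE = "role"
    MORAL = "moral"
    LEGAL = "legal"
    CAUSAL = "causal"


@dataclass(frozen=True)
class Actor:
    name: str  # e.g. a developer, operator, or manufacturer (hypothetical examples)


@dataclass(frozen=True)
class Occurrence:
    description: str  # the event or output the responsibility relates to


@dataclass(frozen=True)
class Responsibility:
    actor: Actor
    occurrence: Occurrence
    sense: Sense
    rationale: str = ""  # free-text justification or evidence


def responsibilities_for(occurrence: Occurrence,
                         model: List[Responsibility]) -> List[Responsibility]:
    """Return all recorded responsibility relationships for a given occurrence."""
    return [r for r in model if r.occurrence == occurrence]


if __name__ == "__main__":
    # Hypothetical actors and occurrence, loosely inspired by the autonomous
    # vehicle example; not taken from the paper's own analysis.
    developer = Actor("Perception-model developer")
    operator = Actor("In-vehicle safety operator")
    occurrence = Occurrence("Pedestrian not detected in time to brake")

    model = [
        Responsibility(developer, occurrence, Sense.ROLE,
                       "Tasked with specifying and validating detection performance"),
        Responsibility(operator, occurrence, Sense.CAUSAL,
                       "Monitoring the vehicle at the time of the occurrence"),
    ]

    for r in responsibilities_for(occurrence, model):
        print(f"{r.actor.name} ({r.sense.value} responsibility): {r.rationale}")
```

Even this toy structure makes the paper's point visible: once the sense of responsibility is recorded explicitly alongside each actor and occurrence, it becomes harder for blame to default silently to whoever happened to be operating the system.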

To demonstrate their approach, the authors present two examples: a retrospective analysis of the fatal collision in Tempe, Arizona, involving an autonomous vehicle, and a safety-focused predictive role-responsibility analysis for an AI-based diabetes co-morbidity predictor. Both examples emphasize the importance of focusing on safety and minimizing unfair or disproportionate blame placed on operators and developers.

This paper highlights the multi-disciplinary nature of AI-SCS responsibility, as it requires a combination of technical expertise, legal knowledge, and ethical considerations. By incorporating these different perspectives, the authors present a comprehensive discussion of the challenges surrounding responsibility in the context of AI-SCS. The paper also identifies potential avenues for future research, including the development of standardized frameworks and guidelines for assigning responsibility within AI-SCS.

In conclusion, the increasing deployment of AI-SCS necessitates a thorough understanding and management of responsibility. This cross-disciplinary paper offers valuable insights into the different dimensions of responsibility and provides a practical method for capturing responsibility relationships. By addressing these responsibility issues, we can better support the safe and ethical development and operation of AI-SCS in the future.
