Eliciting Explainability Requirements for Safety-Critical Systems: A Nuclear Case Study

Published: 2025 · Last Modified: 21 Jul 2025 · REFSQ 2025 · CC BY-SA 4.0
Abstract: [Context & Motivation] Explainable autonomous systems are increasingly essential for engendering trust, especially when deployed in safety-critical scenarios. [Question/Problem] Despite the robust reliability needed in critical settings, a gap remains between Explainable AI and Requirements Engineering (RE), raising the question: can current RE techniques sufficiently elicit explainability requirements, and what characteristics do these requirements have? [Principal Ideas/Results] We examine whether established RE techniques can be used to elicit explainability requirements and analyse the characteristics of such requirements. We answer these questions in the context of a nuclear robotics case study focused on navigation and task-scheduling missions. [Contribution] We contribute: (1) an experience report of eliciting explainability requirements, (2) categories of explainability requirements for explainable autonomous robotic systems, and (3) practical guidance for applying our approach in other safety-critical domains.