Simulator-based Explanation and Debugging of Hazard-triggering Events in DNN-based Safety-critical Systems

Hazem Fahmy, Fabrizio Pastore, Lionel Briand, Thomas Stifter

Research output: Contribution to journal › Article › peer-review

Abstract

When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing images to determine common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination) that are essential inputs for safety analysis. Though informative, such activity is expensive and error-prone.

To support such safety analysis practices, we propose Simulator-based Explanations for DNN failurEs (SEDE), a technique that generates readable descriptions for commonalities in failure-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. It relies on genetic algorithms to drive simulators toward the generation of images that are similar to failure-inducing, real-world images in the test set; it then employs rule learning algorithms to derive expressions that capture commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN.

With DNNs performing in-car sensing tasks, SEDE successfully characterized the hazard-triggering events leading to a DNN accuracy drop. Further, SEDE enabled retraining that led to significant improvements in DNN accuracy, up to 18 percentage points.
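The abstract outlines a pipeline of genetic search over simulator parameters followed by rule learning. Below is a minimal, hypothetical sketch of how such a pipeline could look; it is not the authors' implementation. All names are illustrative assumptions: the simulator is exposed as a render(params) function, image similarity is a simple mean-absolute-difference stand-in, and a decision tree substitutes for the rule learning algorithm used by SEDE. The learned expressions over parameter values could then drive the generation of additional images for retraining, as the abstract describes.

```python
# Minimal, hypothetical sketch of a SEDE-style pipeline (illustrative only).
import random

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

PARAM_NAMES = ["head_yaw", "illumination", "camera_noise"]   # illustrative simulator parameters
BOUNDS = {"head_yaw": (-60.0, 60.0), "illumination": (0.0, 1.0), "camera_noise": (0.0, 0.2)}


def render(params):
    # Placeholder for the simulator: a real renderer would return an image
    # generated from these parameter values.
    return np.array([params[k] for k in PARAM_NAMES], dtype=float)


def similarity(image, failure_image):
    # Placeholder distance between a simulated image and a failure-inducing
    # real-world image (lower means more similar).
    return float(np.abs(image - failure_image).mean())


def random_params():
    return {k: random.uniform(*BOUNDS[k]) for k in PARAM_NAMES}


def evolve(failure_image, pop_size=50, generations=100):
    """Genetic search: drive simulator parameters toward images that
    resemble a given failure-inducing real-world image."""
    population = [random_params() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: similarity(render(p), failure_image))
        parents = population[: pop_size // 2]                  # keep the closest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in PARAM_NAMES}  # crossover
            mutated = random.choice(PARAM_NAMES)                           # mutation
            child[mutated] = random.uniform(*BOUNDS[mutated])
            children.append(child)
        population = parents + children
    return population[:10]                                     # closest parameter sets


def learn_rules(failing_params, passing_params):
    """Rule learning step: derive a readable expression over simulator
    parameters that separates failure-like configurations from the rest."""
    X = np.array([[p[k] for k in PARAM_NAMES] for p in failing_params + passing_params])
    y = np.array([1] * len(failing_params) + [0] * len(passing_params))
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    return export_text(tree, feature_names=PARAM_NAMES)


if __name__ == "__main__":
    failure_image = render(random_params())                    # stand-in for a real failure image
    failing = evolve(failure_image)
    passing = [random_params() for _ in range(20)]
    print(learn_rules(failing, passing))                       # human-readable parameter rules
```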

Original language: English
Article number: 104
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 32
Issue number: 4
DOIs
Publication status: Published - 27 May 2023
Externally published: Yes

Keywords

  • DNN debugging
  • DNN explanation
  • DNN functional safety analysis
  • explainable AI
  • heatmaps
