Abstract

In Towards Explainability in Machine Learning: The Formal Methods Way,[1] we illustrated last year how Explainable AI can profit from formal methods in terms of explainability. Explainable AI is a new branch of AI aimed at a finer-grained understanding of how the fancy heuristics and experimental fine-tuning of hyperparameters influence the outcomes of advanced AI tools and algorithms. We discussed the concept of explanation and showed how the stronger meaning of explanation in terms of formal models leads to a precise characterization of the phenomenon under consideration. We illustrated how, following the Algebraic Decision Diagram (ADD)-based aggregation technique originally established in Gossen and Steffen's work,[2] we can produce precise information about, and an exact, deterministic prediction of, the outcome of a random forest consisting of 100 trees.
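The ADD-based aggregation behind this result can be pictured as follows (a minimal sketch, not the implementation from the cited papers): each tree in the forest is encoded as an Algebraic Decision Diagram over a fixed order of boolean split predicates, with per-class vote vectors at the terminals, and the whole forest is folded into a single diagram by the standard ADD "apply" operation with componentwise addition. One root-to-terminal walk then yields the exact, deterministic majority vote. All names below (Leaf, Node, apply_add, predict) are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Tuple, Union

@dataclass(frozen=True)
class Leaf:
    votes: Tuple[int, ...]            # per-class vote counts at this terminal

@dataclass(frozen=True)
class Node:
    var: int                          # index of the boolean split predicate tested here
    low: "ADD"                        # subdiagram for predicate == False
    high: "ADD"                       # subdiagram for predicate == True

ADD = Union[Leaf, Node]

def mk(var: int, low: ADD, high: ADD) -> ADD:
    """Reduced node constructor: drop the test when both branches coincide."""
    return low if low == high else Node(var, low, high)

def apply_add(a: ADD, b: ADD, memo=None) -> ADD:
    """Classic ADD 'apply', combining two diagrams by adding their vote vectors."""
    if memo is None:
        memo = {}
    if (a, b) in memo:
        return memo[(a, b)]
    if isinstance(a, Leaf) and isinstance(b, Leaf):
        res = Leaf(tuple(x + y for x, y in zip(a.votes, b.votes)))
    else:
        # Expand on the smallest predicate index at either root, so the
        # global variable order x0 < x1 < ... is preserved in the result.
        va = a.var if isinstance(a, Node) else float("inf")
        vb = b.var if isinstance(b, Node) else float("inf")
        v = int(min(va, vb))
        a_lo, a_hi = (a.low, a.high) if va == v else (a, a)
        b_lo, b_hi = (b.low, b.high) if vb == v else (b, b)
        res = mk(v, apply_add(a_lo, b_lo, memo), apply_add(a_hi, b_hi, memo))
    memo[(a, b)] = res
    return res

def predict(add: ADD, assignment: Dict[int, bool]) -> int:
    """Follow one root-to-terminal path and return the majority class."""
    while isinstance(add, Node):
        add = add.high if assignment[add.var] else add.low
    return max(range(len(add.votes)), key=add.votes.__getitem__)

# Two toy trees over predicates x0, x1, each casting one vote for class 0 or 1.
t1 = mk(0, Leaf((1, 0)), mk(1, Leaf((0, 1)), Leaf((1, 0))))
t2 = mk(1, Leaf((1, 0)), Leaf((0, 1)))

forest = apply_add(t1, t2)                     # one ADD holding the summed votes
print(predict(forest, {0: True, 1: False}))    # deterministic forest prediction
```

Because all trees are merged under one shared predicate order and reduced on the fly, identical subdiagrams are shared, which is what allows the aggregated diagram to stay compact while still representing the forest's exact voting semantics.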

Original language: English
Pages (from-to): 8-12
Number of pages: 5
Journal: IT Professional
Volume: 23
Issue number: 6
DOIs
Publication status: Published - 2021
