Towards Explainability in Machine Learning: The Formal Methods Way

Frederik Gossen, Tiziana Margaria, Bernhard Steffen

Research output: Contribution to journal › Article › peer-review

Abstract

Classification is a central discipline of machine learning (ML), and classifiers have become increasingly popular to support or replace human decisions. Ease of explanation is particularly important when the proposed classification is correct but apparently counter-intuitive. This is why explainability has become a hot topic in ML, and it is where formal methods can play an essential role. Popular methods towards explainability try to establish some user intuition: they may hint at the most influential input data, for example by highlighting or framing the area of a picture where a face has been identified. Such information is very helpful, and in particular it can reveal some of the ‘popular’ drastic mismatches incurred by neural networks.
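
The abstract alludes to methods that hint at the most influential input data, for example by highlighting image regions. As a minimal sketch of one common realization of this idea (occlusion-based saliency, not the authors' formal-methods approach), the following Python fragment estimates per-region influence; the `model` callable, its signature, and all parameter names are assumptions chosen for illustration.

    import numpy as np

    def occlusion_saliency(model, image, patch=8, stride=8, baseline=0.0):
        """Hypothetical sketch: score each image region by the confidence
        drop the classifier suffers when that region is occluded.

        Assumptions: `model` maps one image array of shape (H, W) or
        (H, W, C) to a float confidence for the predicted class;
        `baseline` is the value used to blank out occluded pixels.
        """
        base_score = model(image)
        height, width = image.shape[:2]
        saliency = np.zeros((height, width))
        for y in range(0, height, stride):
            for x in range(0, width, stride):
                occluded = image.copy()
                # Blank out one patch and re-classify the image.
                occluded[y:y + patch, x:x + patch] = baseline
                # A large confidence drop marks an influential region,
                # i.e. one worth highlighting or framing for the user.
                saliency[y:y + patch, x:x + patch] = base_score - model(occluded)
        return saliency

In such a sketch, the regions with the highest saliency values are the ones that would be highlighted or framed in the user-facing explanation.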

Original language: English
Article number: 9143264
Pages (from-to): 8-12
Number of pages: 5
Journal: IT Professional
Volume: 22
Issue number: 4
DOIs
Publication status: Published - 1 Jul 2020
