Abstract
Classification is a central discipline of machine learning (ML), and classifiers have become increasingly popular for supporting or replacing human decisions. Ease of explanation is particularly important when the proposed classification is correct but apparently counter-intuitive. This is why explainability is now a hot topic in ML, and it is where formal methods can play an essential role. Popular approaches to explainability try to establish some user intuition. They may hint at the most influential input data, for example by highlighting or framing the area of a picture where a face has been identified. Such information is very helpful, and in particular it helps to reveal some of the well-known drastic mismatches incurred by neural networks.
| Original language | English |
| --- | --- |
| Article number | 9143264 |
| Pages (from-to) | 8-12 |
| Number of pages | 5 |
| Journal | IT Professional |
| Volume | 22 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Jul 2020 |