LABEL AUGMENTATION FOR NEURAL NETWORKS ROBUSTNESS

Fatemeh Amerehi, Patrick Healy

Research output: Contribution to journal › Conference article › peer-review

Abstract

Out-of-distribution generalization can be categorized into two types: common perturbations arising from natural variations in the real world, and adversarial perturbations that are intentionally crafted to deceive neural networks. While deep neural networks excel in accuracy under the assumption of identical distributions between training and test data, they often encounter out-of-distribution scenarios, resulting in a significant decline in accuracy. Data augmentation methods can effectively enhance robustness against common corruptions, but they typically fall short in improving robustness against adversarial perturbations. In this study, we develop Label Augmentation (LA), which enhances robustness against both common and intentional perturbations and improves uncertainty estimation. Our findings indicate an improvement in clean error rate of up to 23.29% when employing LA in comparison to the baseline. Additionally, LA improves robustness on the common corruptions benchmark by up to 24.23%. When tested against FGSM and PGD attacks, improvements in adversarial robustness are noticeable, with gains of up to 53.18% for FGSM and 24.46% for PGD.
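
For reference, the adversarial evaluation mentioned in the abstract uses the standard FGSM and PGD attacks. Below is a minimal PyTorch sketch of both attacks, assuming a generic classifier and inputs normalized to [0, 1]; the model, epsilon budget, step size, and iteration count shown here are illustrative assumptions and do not reflect the exact settings used in the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # Fast Gradient Sign Method: one step of size eps in the
        # sign direction of the gradient of the classification loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def pgd_attack(model, x, y, eps, alpha=None, steps=10):
        # Projected Gradient Descent: repeated signed-gradient steps of
        # size alpha, projected back into the L-infinity ball of radius
        # eps around the clean input x. alpha and steps are assumptions.
        alpha = alpha if alpha is not None else eps / 4
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        return x_adv

Robustness under these attacks is then measured as the model's error rate on the perturbed inputs x_adv instead of the clean inputs x.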

Original language: English
Pages (from-to): 620-640
Number of pages: 21
Journal: Proceedings of Machine Learning Research
Volume: 274
Publication status: Published - 2024
Event: 3rd Conference on Lifelong Learning Agents, CoLLAs 2024 - Pisa, Italy
Duration: 29 Jul 2024 - 1 Aug 2024
