TY - GEN
T1 - VF-NET
T2 - 31st IEEE International Conference on Image Processing, ICIP 2024
AU - Amerehi, Fatemeh
AU - Healy, Patrick
N1 - Publisher Copyright:
© 2024 IEEE
PY - 2024
Y1 - 2024
N2 - Ensuring the secure and dependable deployment of deep neural networks hinges on their ability to withstand distributional shifts and distortions. While data augmentation enhances robustness, its effectiveness varies across different types of data corruption: it tends to excel when corruptions share perceptually similar traits or have a high-frequency nature. One response is to encompass a broad spectrum of distortions in the augmented data, yet it is impractical to incorporate every conceivable modification an image may undergo. Instead, we show that giving the model a stronger inductive bias to learn the underlying concept of "change" offers a more reliable approach. To this end, we develop Virtual Fusion (VF), a technique that treats corruptions as virtual labels. Diverging from conventional augmentation, when an image undergoes any form of transformation, its label is linked with the specific name attributed to that distortion. Our findings indicate that VF effectively enhances both clean accuracy and robustness against common corruptions. On previously unseen corruptions, it shows an 11.90% performance improvement and a 12.78% increase in accuracy. In similar corruption scenarios, it achieves a 7.83% performance gain and a significant accuracy improvement of 22.04% on robustness benchmarks.
AB - Ensuring the secure and dependable deployment of deep neural networks hinges on their ability to withstand distributional shifts and distortions. While data augmentation enhances robustness, its effectiveness varies across different types of data corruption: it tends to excel when corruptions share perceptually similar traits or have a high-frequency nature. One response is to encompass a broad spectrum of distortions in the augmented data, yet it is impractical to incorporate every conceivable modification an image may undergo. Instead, we show that giving the model a stronger inductive bias to learn the underlying concept of "change" offers a more reliable approach. To this end, we develop Virtual Fusion (VF), a technique that treats corruptions as virtual labels. Diverging from conventional augmentation, when an image undergoes any form of transformation, its label is linked with the specific name attributed to that distortion. Our findings indicate that VF effectively enhances both clean accuracy and robustness against common corruptions. On previously unseen corruptions, it shows an 11.90% performance improvement and a 12.78% increase in accuracy. In similar corruption scenarios, it achieves a 7.83% performance gain and a significant accuracy improvement of 22.04% on robustness benchmarks.
KW - Augmentation
KW - Deep Neural Networks
KW - Distribution Shifts
KW - Generalization
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=85216888420&partnerID=8YFLogxK
U2 - 10.1109/ICIP51287.2024.10647346
DO - 10.1109/ICIP51287.2024.10647346
M3 - Conference contribution
AN - SCOPUS:85216888420
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 828
EP - 834
BT - 2024 IEEE International Conference on Image Processing, ICIP 2024 - Proceedings
PB - IEEE Computer Society
Y2 - 27 October 2024 through 30 October 2024
ER -