TY - JOUR
T1 - On the Effect of Cross-Channel Normalization on Wide-Shallow Convolutional Networks
AU - Alakkari, Salaheddin
AU - Mileo, Alessandra
N1 - Publisher Copyright:
© This is an open access article published by the IET under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/)
PY - 2024
Y1 - 2024
N2 - Cross-channel Normalization (CN) was first proposed in the AlexNet paper as a biologically inspired normalization process mimicking the lateral inhibition phenomenon in biological neurons. However, its effect was not well explored in the literature due to the wide popularity of batch normalization. In this paper, we show that this type of normalization can significantly enhance network accuracy when applied to wide-shallow convolutional networks. Wide-shallow networks are advantageous compared to deep CNNs since they significantly increase parallelism and reduce sequential computation across layers. Our experiments show that using cross-channel normalization not only improves the accuracy of residual and standard convolutional networks on benchmark datasets but also significantly enhances performance when considering limited and reduced-representation examples.
AB - Cross-channel Normalization (CN) was first proposed in the AlexNet paper as a biologically inspired normalization process mimicking the lateral inhibition phenomenon in biological neurons. However, its effect was not well explored in the literature due to the wide popularity of batch normalization. In this paper, we show that this type of normalization can significantly enhance network accuracy when applied to wide-shallow convolutional networks. Wide-shallow networks are advantageous compared to deep CNNs since they significantly increase parallelism and reduce sequential computation across layers. Our experiments show that using cross-channel normalization not only improves the accuracy of residual and standard convolutional networks on benchmark datasets but also significantly enhances performance when considering limited and reduced-representation examples.
KW - Computer Vision
KW - Deep Learning
KW - Local Response Normalization
KW - Wide-Shallow CNNs
UR - https://www.scopus.com/pages/publications/85216758746
U2 - 10.1049/icp.2024.3299
DO - 10.1049/icp.2024.3299
M3 - Conference article
AN - SCOPUS:85216758746
SN - 2732-4494
VL - 2024
SP - 154
EP - 161
JO - IET Conference Proceedings
JF - IET Conference Proceedings
IS - 10
T2 - 26th Irish Machine Vision and Image Processing Conference, IMVIP 2024
Y2 - 21 August 2024 through 23 August 2024
ER -