Mathematics & Machine Learning Seminar
Group-Equivariant Convolutional Neural Networks (G-CNNs) generalize the translation equivariance of traditional CNNs to equivariance under more general symmetry groups, such as rotations, by tying weights across group transformations. For tasks such as classification, this equivariance is ultimately collapsed by pooling into group invariance, which improves classification accuracy.
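As a minimal illustration of these two ideas, consider the simplest group, the cyclic group Z_n acting by circular shifts. Group cross-correlation is then circular cross-correlation, which is shift-equivariant, and max pooling over the group collapses the result into a shift-invariant scalar. This sketch is ours, not from the talk; the function name `z_n_correlation` is a hypothetical helper.

```python
import numpy as np

def z_n_correlation(f, psi):
    # Group cross-correlation on the cyclic group Z_n (circular shifts):
    #   out[g] = sum_k f[(k + g) % n] * psi[k]
    # Equivariance: cyclically shifting f cyclically shifts the output
    # by the same amount.
    n = len(f)
    return np.array([sum(f[(k + g) % n] * psi[k] for k in range(n))
                     for g in range(n)])

f = np.array([1.0, 2.0, 0.5, -1.0, 3.0])
psi = np.array([1.0, -1.0, 0.0, 0.0, 0.0])  # a small filter on Z_5

shifted = np.roll(f, 2)
# Equivariance: correlating the shifted signal equals shifting the correlation.
assert np.allclose(z_n_correlation(shifted, psi),
                   np.roll(z_n_correlation(f, psi), 2))
# Invariance via pooling: the max over the group ignores the shift entirely.
assert z_n_correlation(shifted, psi).max() == z_n_correlation(f, psi).max()
```

The same pattern underlies G-CNNs for richer groups: the correlation index `g` runs over group elements rather than pixel shifts.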
In this talk, we argue that traditional pooling operations are excessively invariant: they are invariant not only to the group's action but to many other perturbations of the signal, resulting in a general lack of robustness to adversarial attacks in both classical CNNs and G-CNNs. We propose alternative approaches for achieving robust invariance in CNNs and G-CNNs through two computational primitives: the G-triple correlation and its G-Fourier transform, the G-Bispectrum. This talk presents their mathematical properties, with an introduction to group representation theory, and demonstrates gains in accuracy and robustness when these primitives are incorporated into neural network architectures.
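The contrast between pooling and the triple correlation can be sketched for the cyclic group Z_n, where the triple correlation is T[k1, k2] = sum_k f[k] f[k+k1] f[k+k2] (indices mod n). It is invariant to cyclic shifts, yet, unlike max pooling, it still distinguishes signals that are not shifts of one another. This is an illustrative sketch, not the talk's implementation.

```python
import numpy as np

def triple_correlation(f):
    # Triple correlation of a signal on the cyclic group Z_n:
    #   T[k1, k2] = sum_k f[k] * f[(k + k1) % n] * f[(k + k2) % n]
    # Invariant to cyclic shifts of f, but far more selective than pooling.
    n = len(f)
    T = np.zeros((n, n))
    for k1 in range(n):
        for k2 in range(n):
            T[k1, k2] = sum(f[k] * f[(k + k1) % n] * f[(k + k2) % n]
                            for k in range(n))
    return T

# Shift invariance: a cyclically shifted signal has the same triple correlation.
f = np.array([1.0, 2.0, 0.5, -1.0, 3.0])
assert np.allclose(triple_correlation(f), triple_correlation(np.roll(f, 2)))

# Excess invariance of pooling: g and h are NOT shifts of each other, yet
# max pooling cannot tell them apart; the triple correlation can.
g = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([1.0, 3.0, 2.0, 4.0, 5.0])  # same values, different order
assert g.max() == h.max()
assert not np.allclose(triple_correlation(g), triple_correlation(h))
```

Taking the Fourier transform of T over the group yields the bispectrum, the frequency-domain form of the same invariant.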