
RobustCaps: A Transformation-Robust Capsule Network for Image Classification


Affiliations
Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India

Geometric transformations of the training and test data present challenges to the use of deep neural networks for vision-based learning tasks. To address this issue, we present a deep neural network model that exhibits the desirable property of transformation robustness. Our model, termed RobustCaps, uses group-equivariant convolutions in an improved capsule network model. RobustCaps uses a global context-normalised procedure in its routing algorithm to learn transformation-invariant part-whole relationships within image data. Learning such relationships allows our model to outperform both capsule and convolutional neural network baselines on transformation-robust classification tasks. Specifically, RobustCaps achieves state-of-the-art accuracies on CIFAR-10, Fashion-MNIST, and CIFAR-100 when the images in these datasets are subjected to train- and test-time rotations and translations.
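
As a rough illustration of the routing idea summarised above, the sketch below shows one way an iterative agreement-routing step can normalise its agreement scores against the global context of all capsule votes in an image before updating the routing coefficients. This is a minimal PyTorch sketch, not the exact RobustCaps procedure: the tensor layout, the dot-product agreement measure, and the function name context_normalised_routing are assumptions made for illustration.

import torch
import torch.nn.functional as F


def context_normalised_routing(votes: torch.Tensor, num_iters: int = 3) -> torch.Tensor:
    """Toy agreement routing whose agreement scores are normalised over the whole image.

    votes: tensor of shape (B, n_in, n_out, d) holding the pose vote that each
    input capsule casts for each output capsule.
    Returns output capsule poses of shape (B, n_out, d).
    """
    B, n_in, n_out, d = votes.shape
    logits = torch.zeros(B, n_in, n_out, device=votes.device)
    out = votes.mean(dim=1)  # initial output poses: unweighted average of votes
    for _ in range(num_iters):
        # Each input capsule distributes its vote across output capsules.
        c = F.softmax(logits, dim=2)
        out = (c.unsqueeze(-1) * votes).sum(dim=1)            # (B, n_out, d)
        # Dot-product agreement between every vote and the pooled output pose.
        agreement = (votes * out.unsqueeze(1)).sum(dim=-1)    # (B, n_in, n_out)
        # Normalise agreements by statistics over the whole image (the global
        # context), so the routing update does not depend on the raw scale of
        # the agreements, which can change under input transformations.
        mean = agreement.mean(dim=(1, 2), keepdim=True)
        std = agreement.std(dim=(1, 2), keepdim=True) + 1e-6
        logits = logits + (agreement - mean) / std
    return out


# Example: 32 input capsules voting for 10 output capsules with 16-dimensional poses.
votes = torch.randn(8, 32, 10, 16)
poses = context_normalised_routing(votes)  # shape (8, 10, 16)

In this toy version, normalising against per-image rather than per-capsule statistics is what keeps the routing updates on a comparable scale across differently transformed inputs.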

Keywords

Deep Learning, Capsule Networks, Transformation Robustness, Equivariance.

Authors

Sai Raam Venkataraman
Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
S. Balasubramanian
Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
R. Raghunatha Sarma
Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
