
Test-Cost-Sensitive Convolutional Neural Networks with Expert Branches


Authors

M. Arvindhan (1), G. S. Pradeep Ghantasala (2), N. V. Kousik (1)

Affiliations

1 School of Computing Science and Engineering, Galgotias University, India
2 Department of Computer Science and Engineering, Malla Reddy Institute of Technology and Science, India
     

Abstract


It has been shown that deeper Convolutional Neural Networks (CNNs) can achieve better accuracy on many problems, but this accuracy comes at a high computational cost. Moreover, not all input instances are equally difficult. As a solution to the accuracy-versus-computational-cost dilemma, we introduce a new test-cost-sensitive method for convolutional neural networks. The method trains a CNN with a set of expert branches. Based on the difficulty of the input instance, the expert branches decide whether to use only a shallower part of the network or to continue to the deeper layers. Each expert branch learns to determine whether the current network prediction is wrong and whether passing the instance to the deeper layers would produce the right output; if not, the expert branch stops the computation. Experimental results on the standard CIFAR-10 dataset indicate that, compared with the baseline models, the proposed method trains models with lower test cost and competitive accuracy.
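To make the mechanism concrete, the sketch below shows one way such an early-exit CNN could look in PyTorch. It is a minimal illustration under our own assumptions, not the paper's implementation: the class names (ExpertBranch, EarlyExitCNN), the exit_threshold parameter, and the sigmoid "halt" score are all hypothetical stand-ins for the paper's learned stop/continue decision.

```python
# Minimal sketch of a CNN with expert branches (early exits), assuming a
# PyTorch implementation. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class ExpertBranch(nn.Module):
    """Small head attached to an intermediate stage. It emits a class
    prediction plus a 'halt' score indicating whether computation should
    stop here instead of continuing to deeper, costlier layers."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(in_channels, num_classes)
        self.halt = nn.Linear(in_channels, 1)  # score for exiting early

    def forward(self, x):
        feats = self.pool(x).flatten(1)
        return self.classifier(feats), torch.sigmoid(self.halt(feats))


class EarlyExitCNN(nn.Module):
    """Backbone split into stages, with an expert branch after each stage.
    At test time, easy instances exit at a shallow stage; hard ones go
    deeper, so the average test cost per instance drops."""

    def __init__(self, num_classes: int = 10, exit_threshold: float = 0.5):
        super().__init__()
        self.exit_threshold = exit_threshold
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        self.branches = nn.ModuleList(
            [ExpertBranch(c, num_classes) for c in (32, 64, 128)]
        )

    def forward(self, x):
        # Batch-size-1 inference: stop at the first branch whose halt score
        # clears the threshold, skipping the cost of the remaining stages.
        for stage, branch in zip(self.stages, self.branches):
            x = stage(x)
            logits, halt = branch(x)
            if halt.item() > self.exit_threshold:
                return logits
        return logits  # deepest branch's prediction if no early exit fired
```

At training time each branch's classifier would be supervised with the task labels, and its halt score with whether the deeper layers actually correct the shallow prediction (the decision the paper's expert branches learn); that loss is omitted from this sketch.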

Keywords

Test-Cost-Sensitive Learning, Deep Learning, CNN with Expert Branches, Instance-Based Cost.