
Transfer Learning: Inception-V3 based Custom Classification Approach for Food Images


Authors

Vishwanath C. Burkapalli
Department of Information Science and Engineering, Poojya Doddappa Appa College of Engineering, India

Priyadarshini C. Patil
Department of Computer Science and Engineering, Poojya Doddappa Appa College of Engineering, India
     



Abstract

Deep learning has become increasingly popular in image processing, and health-oriented applications such as food image classification have improved considerably through deep-learning methods. Transfer learning, the reuse of a pre-trained model for a new task, has become a popular technique with Inception-V3 for image classification: it requires only a small dataset, reduces training time, and improves performance. In this paper, the Google Inception-V3 model is taken as the base, and a fully connected layer is built on top of it to optimize the classification process. During model building, the convolution layers learn their own convolution kernels to produce the tensor outputs. In addition, separately obtained segmented features are concatenated with our custom model before the classification phase; this strengthens the important features used for food classification. A dataset of 16 food classes containing thousands of images is considered, and a classification accuracy of 96.27% is obtained in the testing phase, which is compared against different state-of-the-art techniques.
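The architecture described above can be outlined as a short sketch: a frozen Inception-V3 base pre-trained on ImageNet, a second input carrying the separately obtained segmented features, a concatenation of the two feature streams, and a custom fully connected head ending in a 16-way softmax. The layer sizes, the segmented-feature dimension, and the optimizer settings below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch with tf.keras (assumed settings, not the
# authors' exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 16        # 16 food classes, as stated in the abstract
SEG_FEATURE_DIM = 64    # assumed size of the separately obtained segmented features

# Pre-trained Inception-V3 base, reused without its original classifier head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the base so only the new head is trained

image_in = layers.Input(shape=(299, 299, 3), name="food_image")
seg_in = layers.Input(shape=(SEG_FEATURE_DIM,), name="segmented_features")

# Inception features pooled into a single vector per image.
x = base(image_in, training=False)
x = layers.GlobalAveragePooling2D()(x)

# Concatenate the segmented features with the Inception features
# before the fully connected classification layers.
x = layers.Concatenate()([x, seg_in])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=[image_in, seg_in], outputs=out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Food images would be resized to 299x299 and passed through `tf.keras.applications.inception_v3.preprocess_input` before training; once the head converges, the upper Inception blocks can optionally be unfrozen for fine-tuning at a lower learning rate.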

Keywords

Deep Learning, Transfer Learning, Convolutional Neural Networks (CNNs), Food Classification, Calories Estimation, South Indian Dataset, Inception Model.

