Skeleton and Joint Angle Estimation Based on MobileNet
2D pose estimation is a general problem in computer vision whose main objective is to detect a person’s body key-points and estimate a 2D skeletonized pose of the person. Skeleton estimation has become an essential part of body-part detection in many fields, such as healthcare, rehabilitation, sports and fitness, animation, gaming, augmented reality, and robotics. Such systems are based on neural networks and can provide reliable, objective, and cost-effective results. Various methods have been proposed for this task and are used to improve existing systems. In this regard, this work proposes a method for skeleton-based joint angle detection using the MobileNet model, which is built on a convolutional neural network (CNN). First, 18 key-points of the human body are generated by the model. Then, using the extracted key-points, the skeleton of the human body is constructed by connecting the key-points according to the body-part pairs. Furthermore, based on the generated skeletons, the joint angles at different key-points are estimated. A customized dataset was used to evaluate the performance of the proposed model under different environmental conditions. The approach achieves 95.37% accuracy for key-point detection, 96.11% accuracy for joint angle estimation, and 96.667% accuracy for body-part length measurement.
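The abstract does not include code, but the joint angle and body-part length computations described above follow directly from the detected 2D key-points. The following is a minimal sketch, not the authors' implementation: the `joint_angle` and `limb_length` helpers and the COCO-style key-point indices are illustrative assumptions, showing how an angle at a key-point could be obtained from three (x, y) points and a segment length from two.

```python
import numpy as np

# Hypothetical COCO-style indices for one arm; the paper's 18-point layout may differ.
SHOULDER, ELBOW, WRIST = 5, 7, 9

def joint_angle(a, b, c):
    """Angle in degrees at key-point b, formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def limb_length(p, q):
    """Euclidean distance (in pixels) between two 2D key-points."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Example: (x, y) key-points detected for one arm.
keypoints = {SHOULDER: (210, 120), ELBOW: (240, 190), WRIST: (300, 200)}
print(joint_angle(keypoints[SHOULDER], keypoints[ELBOW], keypoints[WRIST]))  # elbow angle
print(limb_length(keypoints[SHOULDER], keypoints[ELBOW]))                    # upper-arm length
```

In practice, the same two helpers would be applied over all body-part pairs of the generated skeleton to obtain the full set of joint angles and segment lengths.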
Keywords
Skeleton, Pose Estimation, MobileNet, Heatmap, Convolutional Neural Network.