
Human Activity Recognition by Analysis of Skeleton Joint Position in Internet of Things (IOT) Environment


Affiliations
1 Bilaspur University, Bilaspur - 495001, Chhattisgarh, India
2 National Institute of Technology, Raipur - 492001, Chhattisgarh, India
 

Objective: To automatically analyze and detect human activities in order to provide better support in areas such as healthcare and security. Method: We used the UT Kinect-Action 3D dataset, which contains the positions of 20 body joints captured by a Kinect sensor. We selected two joint sets, J1 and J2, formed rules for activity classification, and then applied an SVM classifier, a KNN classifier using Euclidean distance, and a KNN classifier using Minkowski distance. Findings: With joint set J1 we obtained 97.8% accuracy with the SVM classifier, 98.8% with the KNN classifier using Euclidean distance, and 98.9% with the KNN classifier using Minkowski distance; with joint set J2 we obtained 97.7%, 98.6%, and 98.7% accuracy, respectively. Application/Improvement: We classified four activities: hand waving, standing, sitting, and picking. More activities can be included in future work, and IoT combined with this activity recognition method can be used to reduce overheads.
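The KNN step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it classifies a flattened vector of 3-D joint coordinates by majority vote among the k nearest training samples under Minkowski distance (which reduces to Euclidean distance at p = 2). The joint coordinates, labels, and the choice p = 3 are illustrative assumptions, not values taken from the UT Kinect-Action 3D dataset or the paper.

```python
import math

def minkowski(a, b, p=3):
    """Minkowski distance of order p between two feature vectors.
    p=2 gives the Euclidean special case."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_classify(train, query, k=3, p=3):
    """Return the majority label among the k training samples
    nearest to `query` under Minkowski distance."""
    neighbours = sorted(train, key=lambda s: minkowski(s[0], query, p))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Toy training set: (flattened joint-position vector, activity label).
# Values are made up for illustration; a real sample would hold the
# (x, y, z) coordinates of all 20 Kinect skeleton joints.
train = [
    ([0.0, 1.6, 0.0, 0.2, 1.7, 0.0], "standing"),
    ([0.0, 1.6, 0.1, 0.2, 1.8, 0.0], "standing"),
    ([0.0, 0.9, 0.0, 0.2, 1.0, 0.1], "sitting"),
    ([0.0, 0.9, 0.1, 0.2, 1.0, 0.0], "sitting"),
]

print(knn_classify(train, [0.0, 1.0, 0.0, 0.2, 1.0, 0.1], k=3))
```

In practice the same pipeline is commonly run with a library classifier (e.g. scikit-learn's `KNeighborsClassifier`, whose `metric="minkowski"` and `p` parameters correspond to the distance used here), which also handles ties and scaling to larger joint sets.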

Keywords

Activity Recognition, IoT, Joint Set, Kinect, Skeleton.





Authors

Rashmi Shrivastava
Bilaspur University, Bilaspur - 495001, Chhattisgarh, India
Manju Pandey
National Institute of Technology, Raipur - 492001, Chhattisgarh, India




DOI: https://doi.org/10.17485/ijst/2017/v10i16/151232