Abstract
Human activity recognition (HAR) extracts action-level details about human behaviour from raw input data, enriching people's lives through applications such as elderly monitoring systems and abnormal-behaviour detection. Deep learning models such as Convolutional Neural Networks (CNNs) have outperformed traditional machine learning methods, since a CNN can learn features directly from the data while reducing computational cost. Transfer learning builds on this by reusing pre-trained models: leveraging a pre-trained ResNet-34 CNN for human activity recognition can achieve detection accuracy of up to 96.95 percent. Numerous studies have been conducted on HAR, but most papers evaluate only a few models; in general, the more data available, the better and more accurate the resulting model.
Keywords—HAR, CNN, KNN, CCTV, ROC, SGD, ReLU.