References
1. Wang, J., Chen, Z., and Wu, Y. (2011b). “Action recognition with multiscale spatio-temporal contexts,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Colorado Springs, CO), 3185–3192.
3. Lichtenauer, J., Shen, J., Valstar, M., and Pantic, M. (2011). Cost-effective solution to synchronised audio-visual data capture using multiple sensors. Image Vis. Comput. 29, 666–680. doi:10.1016/j.imavis.2011.07.004.
3. Rahmani, H., and Mian, A. (2015). “Learning a non-linear knowledge transfer model for cross-view action recognition,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Boston, MA), 2458–2466.
4. Tian, Y., Sukthankar, R., and Shah, M. (2013). “Spatiotemporal deformable part models for action detection,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Portland, OR), 2642–2649.
5. Jain, M., Jegou, H., and Bouthemy, P. (2013). “Better exploiting motion for better action recognition,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Portland, OR), 2555–2562.
6. Kulkarni, K., Evangelidis, G., Cech, J., and Horaud, R. (2015). Continuous action recognition based on sequence alignment. Int. J. Comput. Vis. 112, 90–114. doi:10.1007/s11263-014-0758-9.
7. Samanta, S., and Chanda, B. (2014). Space-time facet model for human activity classification. IEEE Trans. Multimedia 16, 1525–1535. doi:10.1109/TMM.2014.2326734
8. Jiang, Z., Lin, Z., and Davis, L. S. (2013). A unified tree-based framework for joint action localization, recognition and segmentation. Comput. Vis. Image Understand. 117, 1345–1355. doi:10.1016/j.cviu.2012.09.008.
9. Roshtkhari, M. J., and Levine, M. D. (2013). Human activity recognition in videos using a single example. Image Vis. Comput. 31, 864–876. doi:10.1016/j.imavis.2013.08.005
10. Le, Q. V., Zou, W. Y., Yeung, S. Y., and Ng, A. Y. (2011). “Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Colorado Springs, CO), 3361–3368.
11. Sadanand, S., and Corso, J. J. (2012). “Action bank: a high-level representation of activity in video,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Providence, RI), 1234–1241.
12. Wu, X., Xu, D., Duan, L., and Luo, J. (2011). “Action recognition using context and appearance distribution features,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Colorado Springs, CO), 489–496.
13. Li, R., and Zickler, T. (2012). “Discriminative virtual views for cross-view action recognition,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Providence, RI), 2855–2862.
14. Vrigkas, M., Karavasilis, V., Nikou, C., and Kakadiaris, I. A. (2014a). Matching mixtures of curves for human action recognition. Comput. Vis. Image Understand. 119, 27–40. doi:10.1016/j.cviu.2013.11.007.
15. Yu, G., Yuan, J., and Liu, Z. (2012). “Propagative Hough voting for human activity recognition,” in Proc. European Conference on Computer Vision (Florence), 693–706.
16. Jain, M., Jegou, H., and Bouthemy, P. (2013). “Better exploiting motion for better action recognition,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Portland, OR), 2555–2562.
17. Wang, H., Kläser, A., Schmid, C., and Liu, C. L. (2013). Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 103, 60–79. doi:10.1007/s11263-012-0594-8.
18. Ni, B., Moulin, P., Yang, X., and Yan, S. (2015). “Motion part regularization: improving action recognition via trajectory group selection,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Boston, MA), 3698–3706.
19. Gaidon, A., Harchaoui, Z., and Schmid, C. (2014). Activity representation with motion hierarchies. Int. J. Comput. Vis. 107, 219–238. doi:10.1007/s11263-013-0677-1.
20. Raptis, M., Kokkinos, I., and Soatto, S. (2012). “Discovering discriminative action parts from mid-level video representations,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Providence, RI), 1242–1249.
21. Yu, G., and Yuan, J. (2015). “Fast action proposals for human action detection and search,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Boston, MA), 1302–1311.
22. Li, B., Ayazoglu, M., Mao, T., Camps, O. I., and Sznaier, M. (2011). “Activity recognition using dynamic subspace angles,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Colorado Springs, CO), 3193–3200.
23. Matikainen, P., Hebert, M., and Sukthankar, R. (2009). “Trajectons: action recognition through the motion analysis of tracked features,” in Workshop on Video-Oriented Object and Event Classification, in Conjunction with ICCV (Kyoto: IEEE), 514–521.
24. Messing, R., Pal, C. J., and Kautz, H. A. (2009). “Activity recognition using the velocity histories of tracked keypoints,” in Proc. IEEE International Conference on Computer Vision (Kyoto), 104–111.
25. Tran, D., Yuan, J., and Forsyth, D. (2014a). Video event detection: from subvolume localization to spatiotemporal path search. IEEE Trans. Pattern Anal. Mach. Intell. 36, 404–416. doi:10.1109/TPAMI.2013.137.
26. Holte, M. B., Chakraborty, B., Gonzàlez, J., and Moeslund, T. B. (2012a). A local 3-D motion descriptor for multi-view human action recognition from 4-D spatio-temporal interest points. IEEE J. Sel. Top. Signal Process. 6, 553–565. doi:10.1109/JSTSP.2012.2193556.
27. Zhou, Q., and Wang, G. (2012). “Atomic action features: a new feature for action recognition,” in Proc. European Conference on Computer Vision (Florence), 291–300.
28. Sanchez-Riera, J., Cech, J., and Horaud, R. (2012). “Action recognition robust to background clutter by using stereo vision,” in Proc. European Conference on Computer Vision (Florence), 332–341.
29. Martinez, Julieta, Michael J. Black, and Javier Romero. "On human motion prediction using recurrent neural networks." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2891–2900, 2017. arXiv:1705.02445.
30. Li, Chao, Qiaoyong Zhong, Di Xie, and Shiliang Pu. "Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation." arXiv preprint arXiv:1804.06055 (2018).
31. Li, Maosen, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. (2021). Symbiotic graph neural networks for 3D skeleton-based human action recognition and motion prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence. doi:10.1109/TPAMI.2021.3053765.
32. N. Y. Hammerla, S. Halloran, and T. Ploetz, “Deep, convolutional, and recurrent models for human activity recognition using wearables,” 2016, arXiv:1604.08880. [Online]. Available: http://arxiv.org/abs/1604.08880
33. W. Jiang and Z. Yin, “Human activity recognition using wearable sensors by deep convolutional neural networks,” in Proc. 23rd ACM Int. Conf. Multimedia, Oct. 2015, pp. 1307–1310.
34. F. J. Ordóñez and D. Roggen, “Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition,” Sensors, vol. 16, no. 1, p. 115, 2016.
35. C. Hu, Y. Chen, L. Hu, and X. Peng, “A novel random forests based class incremental learning method for activity recognition,” Pattern Recognit., vol. 78, pp. 277–290, Jun. 2018.
36. H. Qian, S. J. Pan, B. Da, and C. Miao, “A novel distribution-embedded neural network for sensor-based activity recognition,” in Proc. IJCAI, 2019, pp. 5614–5620.
37. M. Zeng et al., “Convolutional neural networks for human activity recognition using mobile sensors,” in Proc. 6th Int. Conf. Mobile Comput., Appl. Services, 2014, pp. 197–205.
38. J. Yang, M. N. Nguyen, P. P. San, X. Li, and S. Krishnaswamy, “Deep convolutional neural networks on multichannel time series for human activity recognition,” in Proc. IJCAI, vol. 15, Buenos Aires, Argentina, 2015, pp. 3995–4001.
39. K. Wang, J. He, and L. Zhang, “Attention-based convolutional neural network for weakly labeled human activities’ recognition with wearable sensors,” IEEE Sensors J., vol. 19, no. 7, pp. 7598–7604, Sep. 2019.
40. Chen, Y., & Xue, Y. (2015, October). A deep learning approach to human activity recognition based on a single accelerometer. In 2015 IEEE International Conference on Systems, Man, and Cybernetics (pp. 1488–1492). IEEE.
41. Ronao, C. A., & Cho, S. B. (2016). Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications, 59, 235–244.
42. Ignatov, A. (2018). Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Applied Soft Computing, 62, 915-922.
43. Avilés-Cruz, C., Ferreyra-Ramírez, A., Zúñiga-López, A., & Villegas-Cortéz, J. (2019). Coarse-fine convolutional deep-learning strategy for human activity recognition. Sensors, 19(7), 1556.
44. Z. S. Abdallah, M. M. Gaber, B. Srinivasan, and S. Krishnaswamy, Activity recognition with evolving data streams: a review, ACM Comput. Surv. (CSUR) 51 (4) (2018) 71.
45. S. Ramasamy Ramamurthy and N. Roy, Recent trends in machine learning for human activity recognition – a survey, Wiley Interdiscip. Rev.: Data Min. Knowl. Discov. 8 (4) (2018) e1254.
46. A. Tsitsoulis and N. Bourbakis, A first stage comparative survey on vision-based human activity recognition, Int. J. Artif. Intell. Tools 24 (6) (2013).
47. O. D. Lara and M. A. Labrador, A survey on human activity recognition using wearable sensors, IEEE Commun. Surv. Tutor. 15 (3) (2012) 1192–1209.
48. Dernbach, Stefan, Barnan Das, Narayanan C. Krishnan, Brian L. Thomas, and Diane J. Cook. "Simple and complex activity recognition through smart phones." In Intelligent Environments (IE), 2012 8th International Conference on, pp. 214-221. IEEE, 2012.
49. Bayat, Akram, Marc Pomplun, and Duc A. Tran. "A study on human activity recognition using accelerometer data from smartphones." Procedia Computer Science 34 (2014): 450-457.
50. Davis, Kadian, Evans Owusu, Vahid Bastani, Lucio Marcenaro, Jun Hu, Carlo Regazzoni, and Loe Feijs. "Activity recognition based on inertial sensors for Ambient Assisted Living." In Information Fusion (FUSION), 2016 19th International Conference on, pp. 371–378. ISIF, 2016.
51. Farkas, Ioana, and Elena Doran. "Activity recognition from acceleration data collected with a tri-axial accelerometer." Acta Technica Napocensis. Electronica-Telecomunicatii 52, no. 2 (2011).
52. Schwickert, Lars, Ronald Boos, Jochen Klenk, Alan Bourke, Clemens Becker, and Wiebren Zijlstra. "Inertial Sensor Based Analysis of Lie-to-Stand Transfers in Younger and Older Adults." Sensors 16, no. 8 (2016): 1277.
53. Lopez-Nava, Irvin Hussein, and Angelica Munoz-Melendez. "Complex human action recognition on daily living environments using wearable inertial sensors." In Proc. 10th EAI Int. Conf. Pervas. Comput. Technol. Healthcare. 2016.
54. Hamm, Jihun, Benjamin Stone, Mikhail Belkin, and Simon Dennis. "Automatic annotation of daily activity from smartphone-based multisensory streams." In Mobile Computing, Applications, and Services, pp. 328-342. Springer Berlin Heidelberg, 2012.
55. Kaghyan, Sahak, and Hakob Sarukhanyan. "Accelerometer and GPS sensor combination based system for human activity recognition." In Computer Science and Information Technologies (CSIT), 2013, pp. 1–9. IEEE, 2013.
56. Beth Logan, Jennifer Healey, Matthai Philipose, Emmanuel Munguia Tapia, and Stephen Intille. "A long-term evaluation of sensing modalities for activity recognition." In Proceedings of the 9th International Conference on Ubiquitous Computing, pages 483–500, Berlin, Heidelberg, 2007. Springer-Verlag.
57. Cornacchia, M., Ozcan, K., Zheng, Y., Velipasalar, S., 2017. A survey on activity detection and classification using wearable sensors. IEEE Sensor. J. 17, 386–403.
58. Cheok, M.J., Omar, Z., Jaward, M.H., 2019. A review of hand gesture and sign language recognition techniques. Int. J. Mach. Learn. Cybernet. 10, 131–153.
59. Soliman, M.A., Alrashed, M., 2018. An RFID-based activity of daily living for elderly with Alzheimer’s. Internet of Things (IoT) Technol. HealthCare 54.
60. G. Xu and F. Huang, “Viewpoint insensitive action recognition using envelop shape,” in Computer Vision – ACCV 2007, pp. 477–486, Springer, Tokyo, Japan, 2007.
61. D. Weinland, R. Ronfard, and E. Boyer, “Free viewpoint action recognition using motion history volumes,” Computer Vision and Image Understanding, vol. 104, pp. 249–257, 2006.
62. M. D. Rodriguez, J. Ahmed, and M. Shah, “Action MACH: a spatio-temporal maximum average correlation height filter for action recognition,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Anchorage, AK, USA, 2008.
63. Senthil Kumar, V., Prasanth, K. Weighted Rendezvous Planning on Q-Learning Based Adaptive Zone Partition with PSO Based Optimal Path Selection. Wireless Personal Communications 110, 153–167 (2020). https://doi.org/10.1007/s11277-019-06717-z
64. Jaganathan, M., Sabari, A. A heuristic cloud based segmentation technique using edge and texture based two dimensional entropy. Cluster Computing 22, 12767–12776 (2019).
65. Vignesh Janarthanan, A. Viswanathan, and M. Umamaheswari, “Neural Network and Cuckoo Optimization Algorithm for Remote Sensing Image Classification,” International Journal of Recent Technology and Engineering, vol. 8, no. 4, pp. 1630–1634, Jun. 2019.
66. Vignesh Janarthanan, Venkata Reddy Medikonda, and G. Manoj Someswar, “Proposal of a Novel Approach for Stabilization of the Image from Omni-Directional System in the Case of Human Detection & Tracking,” American Journal of Engineering Research (AJER), vol. 6, no. 11, 2017.
67. V. Senthilkumar and K. Prashanth, “A Survey of Rendezvous Planning Algorithms for Wireless Sensor Networks,” International Journal of Communication and Computer Technologies, vol. 4, no. 1, 2016.
68. V. Senthil Kumar, P. Jeevanantham, A. Viswanathan, Vignesh Janarthanan, M. Umamaheswari, and S. Sivaprakash, “Improve Design and Analysis of Friend-to-Friend Content Dissemination System,” Emperor Journal of Applied Scientific Research, vol. 3, no. 3, 2021.
69. Viswanathan, A., Arunachalam, V. P., & Karthik, S. (2012). Geographical division traceback for distributed denial of service. Journal of Computer Science, 8(2), 216.
70. Anurekha, R., K. Duraiswamy, A. Viswanathan, V.P. Arunachalam and K.G. Kumar et al., 2012. Dynamic approach to defend against distributed denial of service attacks using an adaptive spin lock rate control mechanism. J. Comput. Sci., 8: 632-636.
71. Viswanathan, A., Kannan, A. R., & Kumar, K. G. (2010). A Dynamic Approach to defend against anonymous DDoS flooding Attacks. International Journal of Computer Science & Information Security.
72. Kalaivani, R., & Viswanathan, A. Hybrid cloud service composition mechanism with security and privacy for big data process. International Journal of Advanced Research in Biology Engineering Science and Technology, vol. 2, Special Issue 10, ISSN 2395-695X.
73. Ardra, S., & Viswanathan, A. (2012). A Survey On Detection And Mitigation Of Misbehavior In Disruption Tolerant Networks. IRACST-International Journal of Computer Networks and Wireless Communications (IJCNWC), 2(6).
74. Umamaheswari, M., & Rengarajan, N. (2020). Intelligent exhaustion rate and stability control on underwater WSN with fuzzy based clustering for efficient cost management strategies. Information Systems and e-Business Management, 18(3), 283–294.
75. Babu, G., & Maheswari, M. U. (2014). Bandwidth Scheduling for Content Delivery in VANET. International Journal of Innovative Research in Computer and Communication Engineering (IJIRCCE), 2(1), 1000–1007.
76. Sowmitha, V., and V. Senthilkumar. "A Cluster Based Weighted Rendezvous Planning for Efficient Mobile-Sink Path Selection in WSN." International Journal for Scientific Research & Development, vol. 2, no. 11, 2015.