Optimizing Human Activity Recognition with Ensemble Deep Learning on Wearable Sensor Data
Keywords:
Human Activity Recognition, Ensemble Deep Learning Model, Sensory Data Analysis, Wearable Devices, Time-Series Signal Processing

Abstract
In recent years, the research community has shown growing interest in the continuous temporal data gathered from motion sensors integrated into wearable devices. This type of data is highly valuable for analyzing human activities in a variety of domains, including surveillance, healthcare, and sports. Various deep-learning models have been developed to extract meaningful feature representations from temporal sensory data. Nonetheless, many of these models are constrained by their focus on a single aspect of the data, frequently overlooking the complex relationships between patterns. This paper presents a model that captures these intricate patterns by combining convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) within an ensemble framework. The ensemble approach combines multiple independent models to harness their individual strengths, resulting in a more robust and effective solution. The proposed model exploits the complementary capabilities of CNNs and LSTMs to identify both spatial and temporal features in raw sensory data. A comprehensive evaluation is conducted using two well-known benchmark datasets: UCI-HAR and WISDM. The proposed model attained recognition accuracies of 97.92% on the UCI-HAR dataset and 98.52% on the WISDM dataset, outperforming existing state-of-the-art methods.
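The abstract describes fusing independent CNN and LSTM branches into a single ensemble. One common way to realize such a fusion is soft voting: each branch emits per-class probabilities over a sensor window, and the ensemble averages them before taking the arg-max. The sketch below illustrates only that fusion step with stand-in probability arrays; the branch models, the number of activity classes, and the equal weighting are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Fuse several models' per-class probabilities by (weighted) averaging."""
    probs = np.stack(prob_list)            # shape: (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = np.tensordot(weights, probs, axes=1)   # weighted mean over models
    return fused.argmax(axis=1)            # predicted class per sample

# Stand-in outputs for 3 sensor windows and 4 activity classes.
cnn_probs = np.array([[0.7, 0.1, 0.1, 0.1],
                      [0.2, 0.5, 0.2, 0.1],
                      [0.1, 0.1, 0.2, 0.6]])
lstm_probs = np.array([[0.6, 0.2, 0.1, 0.1],
                       [0.1, 0.6, 0.2, 0.1],
                       [0.2, 0.1, 0.1, 0.6]])

labels = soft_vote([cnn_probs, lstm_probs])
print(labels)  # → [0 1 3]
```

In practice the two branches would be trained CNN and LSTM networks whose softmax outputs replace the hard-coded arrays; unequal weights can favor the stronger branch.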
License
Copyright (c) 2024 50sea
This work is licensed under a Creative Commons Attribution 4.0 International License.