Decoding Cognitive States and Emotions Using the Electroencephalogram

Authors

  • Ihsan Ullah, Faculty of Engineering and Computing, National University of Modern Languages, Islamabad, Pakistan
  • Raheel Zafar, Faculty of Engineering and Computing, National University of Modern Languages, Islamabad, Pakistan
  • Hammad Dilpazir, Faculty of Engineering and Computing, National University of Modern Languages, Islamabad, Pakistan
  • Muhammad Javvad Ur Rehman, Faculty of Engineering and Computing, National University of Modern Languages, Islamabad, Pakistan
  • Abdullah Waqas, National University of Technology, Islamabad, Pakistan
  • Rana Fayyaz Ahmad, Artificial Intelligence Technology Center, National Center for Physics, Islamabad, Pakistan

Keywords:

EEG, Text, Emotions, CNN, WT, PSD, Monte Carlo

Abstract

Emotions are essential in human communication, social interaction, and decision-making. Accurately classifying emotions, however, remains difficult, despite its many applications in domains such as psychology, psychiatry, neuroscience, and human-computer interaction. Emotion detection is a key challenge in current research, especially when emotional words serve as stimuli. It is well established that positive and negative words influence human behaviour and emotions, yet very few studies have focused on emotions elicited by words. In this study, we propose a novel approach for emotion classification based on electroencephalogram (EEG) data elicited by text stimuli, namely various English words. Text stimuli can evoke rich and diverse emotions, but they have been explored less than other modalities for emotion elicitation. EEG data from 25 participants, collected with a 128-channel EGI system, were used. The collected data were pre-processed, and features were extracted in four ways: with a Convolutional Neural Network (CNN), the Wavelet Transform (WT), and Power Spectral Density (PSD), and by using the raw data itself as features. With a Support Vector Machine (SVM) classifier, CNN features achieved an average accuracy of 80%, followed by WT with 75%, PSD with 72%, and raw data with 65%. Our study shows the feasibility and effectiveness of using CNN, PSD, and WT features with an SVM for emotion classification based on EEG data and text stimuli. Lastly, a hybrid model was proposed that combines a CNN for feature extraction with an SVM for classification.
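To illustrate the PSD feature-extraction step described above, the following is a minimal NumPy-only sketch. The band definitions, sampling rate, and epoch length are assumptions for illustration and are not taken from the paper; the authors' actual pre-processing and PSD implementation may differ.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz (not specified in the abstract)
# Conventional EEG frequency bands (assumed band edges)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epoch, fs=FS):
    """Per-channel band-power (PSD) features for one EEG epoch.

    epoch: array of shape (n_channels, n_samples)
    returns: array of shape (n_channels, n_bands)
    """
    n = epoch.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Simple periodogram estimate of the power spectral density
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2 / (fs * n)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=1))  # total power in the band
    return np.stack(feats, axis=1)

# Demo on synthetic data: one 2-channel, 2-second epoch where
# channel 0 carries a pure 10 Hz (alpha-band) oscillation.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
epoch = np.vstack([np.sin(2 * np.pi * 10 * t),
                   rng.normal(size=t.size)])
X = band_power_features(epoch)
print(X.shape)  # (2, 5): five band powers per channel
```

The resulting feature matrix (channels × bands) can then be flattened and passed to a classifier such as an SVM, mirroring the PSD-plus-SVM pipeline evaluated in the study.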


Published

2025-11-23

How to Cite

Ihsan Ullah, Zafar, R., Hammad Dilpazir, Muhammad Javvad Ur Rehman, Abdullah Waqas, & Rana Fayyaz Ahmad. (2025). Decoding Cognitive States and Emotions Using the Electroencephalogram. International Journal of Innovations in Science & Technology, 7(4), 2842–2862. Retrieved from https://journal.50sea.com/index.php/IJIST/article/view/1652