
Enhancing Sentiment Analysis on SEED-IV Dataset with Vision Transformers: A Comparative Study

Tibermacine I. E. (first author, Investigation); Napoli C. (second-to-last author, Project Administration); Russo S. (last author, Supervision)

Abstract

This paper introduces a new approach to emotion classification that applies deep learning models, specifically the Vision Transformer (ViT), to the analysis of electroencephalogram (EEG) signals. Our study implements a dual-feature extraction approach, using Power Spectral Density (PSD) and Differential Entropy (DE), to analyse the SEED-IV dataset, enabling the classification of four distinct emotional states. The ViT model, originally designed for image processing, was successfully applied to EEG signal analysis, attaining a test accuracy of 99.02% with low variance and outperforming conventional models such as GRUs, LSTMs, and CNNs in this context. Our findings indicate that the ViT model is highly effective at identifying the complex patterns present in EEG data: its precision and recall both exceed 98%, and its F1 score is approximately 98.9%. These results not only demonstrate the efficacy of transformer-based models in analysing cognitive states, but also indicate their considerable potential for improving empathetic human-computer interaction systems.
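As a rough illustration of the dual-feature extraction the abstract describes, the sketch below computes band-wise PSD (via Welch's method) and Differential Entropy (under a Gaussian assumption, DE = ½ ln(2πeσ²)) for a single EEG channel. The band edges, the 200 Hz sampling rate, and all function names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: band-wise PSD and Differential Entropy (DE) for one
# EEG channel. Band edges, 200 Hz sampling rate, and all names are assumptions,
# not the authors' implementation.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 200  # SEED-IV EEG is typically downsampled to 200 Hz (assumption)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}


def band_psd(x, fs=FS):
    """Mean Welch PSD within each frequency band."""
    freqs, pxx = welch(x, fs=fs, nperseg=2 * fs)
    return {band: pxx[(freqs >= lo) & (freqs < hi)].mean()
            for band, (lo, hi) in BANDS.items()}


def band_de(x, fs=FS):
    """DE per band; for a band-passed, roughly Gaussian signal,
    DE = 0.5 * ln(2 * pi * e * variance)."""
    de = {}
    for band, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        de[band] = 0.5 * np.log(2 * np.pi * np.e * np.var(filtfilt(b, a, x)))
    return de


# Synthetic stand-in for 4 seconds of one EEG channel
x = np.random.default_rng(0).standard_normal(4 * FS)
print(band_psd(x))
print(band_de(x))
```

Per-channel, per-band features of this kind (62 channels × 5 bands for SEED-IV) can then be arranged as a 2-D feature map and split into patches in place of image patches for the ViT; the exact arrangement used by the authors is not specified in this record.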
Year: 2023
Conference: 11th International Conference on Information Technology: IoT and Smart City, ICIT 2023
Keywords: classification; deep learning; EEG; signal analysis; vision transformer
Publication type: 04 Conference proceedings publication :: 04b Conference paper in volume
Citation: Tibermacine, I. E.; Tibermacine, A.; Guettala, W.; Napoli, C.; Russo, S. (2023). Enhancing Sentiment Analysis on SEED-IV Dataset with Vision Transformers: A Comparative Study, pp. 238-246. Paper presented at the 11th International Conference on Information Technology: IoT and Smart City, ICIT 2023, Kyoto, Japan. DOI: 10.1145/3638985.3639024.
Files attached to this product:
File: Tibermacine_Enhancing_2023.pdf (open access)
Note: https://doi.org/10.1145/3638985.3639024
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 1.07 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1707238
Citations
  • PubMed Central: not available
  • Scopus: 7
  • Web of Science: not available