
PHemoNet: A multimodal network for physiological signals

Lopez E.; Uncini A.; Comminiello D.
2024

Abstract

Emotion recognition is essential across numerous fields, including medical applications and brain-computer interfaces (BCIs). Emotional responses include behavioral reactions, such as tone of voice and body movement, and changes in physiological signals, such as the electroencephalogram (EEG). The latter are involuntary and thus provide reliable input for identifying emotions, in contrast to the former, which individuals can consciously control. Because these signals reveal true emotional states without intentional alteration, they increase the accuracy of emotion recognition models. However, multimodal deep learning methods operating on physiological signals have not been thoroughly investigated. In this paper, we introduce PHemoNet, a fully hypercomplex network for multimodal emotion recognition from physiological signals. In detail, the architecture comprises modality-specific encoders and a fusion module, both defined in the hypercomplex domain through parameterized hypercomplex multiplications (PHMs), which can capture latent relations between the different dimensions of each modality and between the modalities themselves. The proposed method outperforms current state-of-the-art models on the MAHNOB-HCI dataset in classifying valence and arousal from EEG and peripheral physiological signals. The code for this work is available at https://github.com/ispamm/MHyEEG.
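For context on the PHM layers mentioned in the abstract, the sketch below illustrates the general idea in PyTorch as introduced by Zhang et al. (ICLR 2021): the weight matrix is parameterized as a sum of n Kronecker products, W = sum_i A_i kron F_i, so the multiplication rules of the hypercomplex algebra (the A_i) are themselves learned from data. This is a minimal illustration under assumed names and hyperparameters, not the authors' implementation; their code is in the repository linked above.

# Minimal sketch of a parameterized hypercomplex multiplication (PHM)
# layer, following Zhang et al. (ICLR 2021). Class and parameter names
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Linear layer whose weight is a sum of Kronecker products,
    W = sum_i kron(A_i, F_i), so the algebra rules A_i are learned
    from data and the layer needs roughly 1/n of the parameters of
    an equivalent nn.Linear."""
    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # A_i: n learnable n x n matrices encoding the multiplication
        # rules of the (learned) hypercomplex algebra.
        self.A = nn.Parameter(0.1 * torch.randn(n, n, n))
        # F_i: n learnable (out/n) x (in/n) weight blocks.
        self.F = nn.Parameter(
            0.1 * torch.randn(n, out_features // n, in_features // n))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build the full (out x in) weight as a sum of Kronecker products.
        W = sum(torch.kron(self.A[i], self.F[i]) for i in range(self.n))
        return x @ W.T + self.bias

# Example: n = 4 gives a quaternion-like layer, as might appear in a
# modality-specific encoder; in a fusion module, n could instead match
# the number of modalities so the PHM weights mix information across them.
layer = PHMLinear(n=4, in_features=32, out_features=64)
y = layer(torch.randn(8, 32))  # shape (8, 64)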
2024
8th IEEE International Forum on Research and Technologies for Society and Industry Innovation, RTSI 2024
emotion recognition; EEG; physiological signals; hypercomplex networks; hypercomplex algebra
04 Publication in conference proceedings::04b Conference paper in volume
PHemoNet: A multimodal network for physiological signals / Lopez, E.; Uncini, A.; Comminiello, D. - (2024), pp. 260-264. (8th IEEE International Forum on Research and Technologies for Society and Industry Innovation, RTSI 2024, Milano, Italy) [10.1109/RTSI61910.2024.10761462].
Files attached to this product

File: Lopez_PHemoNet-a-multimodal_2024.pdf (access restricted to archive administrators)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 253.14 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1765671
Citations
  • PMC: n/a
  • Scopus: 6
  • Web of Science: 4