
Exploiting facial emotion recognition system for ambient assisted living technologies triggered by interpreting the user's emotional state / Russo, Samuele; Tibermacine, Imad Eddine; Randieri, Cristian; Rabehi, Abdelaziz; Alharbi, Amal H.; El-kenawy, El-Sayed M.; Napoli, Christian. - In: FRONTIERS IN NEUROSCIENCE. - ISSN 1662-453X. - 19:(2025). [10.3389/fnins.2025.1622194]

Exploiting facial emotion recognition system for ambient assisted living technologies triggered by interpreting the user's emotional state

Russo, Samuele; Tibermacine, Imad Eddine; Napoli, Christian
2025

Abstract

Introduction: Facial Emotion Recognition (FER) enables smart environments and robots to adapt their behavior to a user's affective state. Translating those recognized emotions into ambient cues, such as colored lighting, can improve comfort and engagement in Ambient Assisted Living (AAL) settings. Methods: We design a FER pipeline that combines a Spatial Transformer Network for pose-invariant region focusing with a novel Multiple Self-Attention (MSA) block comprising parallel attention heads and learned fusion weights. The MSA-enhanced block is inserted into a compact VGG-style backbone trained on the FER+ dataset using weighted sampling to counteract class imbalance. The resulting soft-max probabilities are linearly blended with prototype hues derived from a simplified Plutchik wheel to drive RGB lighting in real time. Results: The proposed VGGFac-STN-MSA model achieves 82.54% test accuracy on FER+, outperforming a CNN baseline and the reproduced Deep-Emotion architecture. Ablation shows that MSA contributes +1% accuracy. Continuous color blending yields smooth, intensity-aware lighting transitions in a proof-of-concept demo. Discussion: Our attention scheme is architecture-agnostic, adds minimal computational overhead, and markedly boosts FER accuracy on low-resolution faces. Coupling the probability distribution directly to the RGB gamut provides a fine-grained, perceptually meaningful channel for affect-adaptive AAL systems.
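The abstract describes linearly blending the model's softmax probabilities with per-emotion prototype hues to drive RGB lighting. A minimal sketch of such a probability-weighted color blend is shown below; the emotion set follows FER+, but the specific RGB values and the `blend_color` helper are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical prototype RGB hues for the FER+ emotion classes, loosely
# inspired by a simplified Plutchik wheel (illustrative values only).
PROTOTYPE_RGB = {
    "neutral":   (255, 255, 255),
    "happiness": (255, 215, 0),
    "surprise":  (0, 191, 255),
    "sadness":   (30, 60, 150),
    "anger":     (220, 20, 60),
    "disgust":   (85, 160, 60),
    "fear":      (120, 80, 200),
    "contempt":  (150, 150, 150),
}

def blend_color(probs: dict) -> tuple:
    """Linearly blend prototype hues weighted by softmax probabilities."""
    rgb = np.zeros(3)
    for emotion, p in probs.items():
        rgb += p * np.array(PROTOTYPE_RGB[emotion], dtype=float)
    # Clamp to the displayable RGB gamut and quantize to 8-bit channels.
    return tuple(int(round(c)) for c in np.clip(rgb, 0, 255))

# A confident "happiness" prediction with some residual "neutral" mass
# yields a hue between the two prototypes, scaled by the probabilities.
print(blend_color({"happiness": 0.8, "neutral": 0.2}))  # → (255, 223, 51)
```

Because the output moves continuously with the probability distribution, lighting transitions stay smooth and intensity-aware, as the abstract describes.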
ambient assisted living; facial emotion recognition; human-robot interaction; self-attention; spatial transformer network
01 Journal publication::01a Journal article
Files attached to this product
File: Russo_Exploiting-facial-emotion_2025.pdf
Access: open access
Note: DOI 10.3389/fnins.2025.1622194
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 29.89 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1747135
Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science: 2