An explainable fast deep neural network for emotion recognition / Di Luzio, Francesco; Rosato, Antonello; Panella, Massimo. - In: BIOMEDICAL SIGNAL PROCESSING AND CONTROL. - ISSN 1746-8094. - 100, Part B:(2025), pp. 1-10. [10.1016/j.bspc.2024.107177]
An explainable fast deep neural network for emotion recognition
Di Luzio, Francesco; Rosato, Antonello; Panella, Massimo
2025
Abstract
In the context of artificial intelligence, the inherent human attribute of engaging in logical reasoning to facilitate decision-making is mirrored by the concept of explainability, which pertains to the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of the input features of binary classifiers for emotion recognition, based on facial landmark detection, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper is the use of an innovative explainable artificial intelligence algorithm to identify the facial landmark movements that characterize each emotion, and the use of this information to improve the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, initially trained on a complete set of facial landmarks that is then progressively reduced according to a suitable optimization procedure. The results demonstrate the robustness of the proposed explainable approach in identifying the relevance of individual facial points for each emotion, improving classification accuracy, and reducing computational cost.
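The abstract does not spell out the paper's improved Integrated Gradients variant; purely as an illustrative sketch of the underlying idea, the snippet below implements the standard Integrated Gradients attribution (a Riemann-sum approximation of the path integral from a baseline to the input) together with a per-landmark relevance ranking of the kind that could drive the progressive landmark reduction described above. All names here (`model`, `integrated_gradients`, `rank_landmarks`, the all-zero baseline, and the interleaved (x, y) landmark layout) are assumptions for illustration, not the authors' code.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Standard Integrated Gradients (Sundararajan et al., 2017):
    IG_i(x) = (x_i - x'_i) * integral_0^1 dF(x' + a (x - x')) / dx_i da,
    approximated with a Riemann sum over `steps` points on the path.
    NOTE: this is the classic formulation, not the paper's improved variant."""
    if baseline is None:
        baseline = torch.zeros_like(x)          # assumed all-zero baseline
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # shape: (steps, *x.shape)
    path.requires_grad_(True)
    # Gradient of the summed scalar outputs yields per-point gradients in one pass.
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)   # average gradient along the path

def rank_landmarks(attributions, num_landmarks):
    """Aggregate |attribution| over the (x, y) coordinates of each landmark
    (assumed interleaved) and return indices sorted from most to least relevant."""
    per_landmark = attributions.abs().view(num_landmarks, 2).sum(dim=1)
    return per_landmark.argsort(descending=True)
```

Landmarks at the bottom of such a ranking would be candidates for removal before retraining, mirroring the progressive reduction procedure the abstract describes.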
| File | Type | License | Size | Format |
|---|---|---|---|---|
| Di-Luzio_Explainable_2024.pdf (open access) | Publisher's version (published with the publisher's layout) | Creative Commons | 1.41 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.