Towards explaining hypercomplex neural networks / Lopez, E.; Grassucci, E.; Capriotti, D.; Comminiello, D. - (2024), pp. 1-8. (Paper presented at the 2024 International Joint Conference on Neural Networks, IJCNN 2024, held in Yokohama, Japan) [10.1109/IJCNN60899.2024.10650793].

Towards explaining hypercomplex neural networks

Lopez E.; Grassucci E.; Capriotti D.; Comminiello D.
2024

Abstract

Hypercomplex neural networks are gaining increasing interest in the deep learning community. The attention directed towards hypercomplex models stems from several aspects, ranging from purely theoretical and mathematical characteristics to the practical advantage of lightweight models over conventional networks, and their unique ability to capture both global and local relations. In particular, one branch of these architectures, parameterized hypercomplex neural networks (PHNNs), has gained popularity due to its versatility across a multitude of application domains. Nonetheless, only a few attempts have been made to explain or interpret their intricacies. In this paper, we propose inherently interpretable PHNNs and quaternion-like networks that require no post-hoc explanation method. To achieve this, we define a type of cosine-similarity transform within the parameterized hypercomplex domain. This PHB-cos transform induces weight alignment with relevant input features and allows the model to be reduced to a single linear transform, rendering it directly interpretable. In this work, we begin to draw insights into how this unique branch of neural models operates. We observe that hypercomplex networks exhibit a tendency to concentrate on the shape around the main object of interest, in addition to the shape of the object itself. We provide a thorough analysis, studying single neurons of different layers and comparing them against how real-valued networks learn. The code of the paper is available at https://github.com/ispamm/HxAI.
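
As a concrete illustration of the transform the abstract describes, the sketch below combines the parameterized hypercomplex (PHM) weight construction, W = sum_i A_i ⊗ F_i, with a B-cos style forward pass that scales each output by |cos(x, w)|^(B-1), so that weights poorly aligned with the input are suppressed. This is a minimal sketch under stated assumptions: the class name, initialization scales, n = 4, B = 2, and the numerical epsilon are illustrative choices, not the authors' implementation (the actual code is in the linked repository).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PHBcosLinearSketch(nn.Module):
    """Illustrative sketch (not the authors' code): a PHM-parameterized
    linear layer with a B-cos style, alignment-sensitive forward pass."""

    def __init__(self, in_features: int, out_features: int, n: int = 4, b: float = 2.0):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n, self.b = n, b
        # PHM construction: n learned n x n "algebra" matrices A_i and
        # n learned filter blocks F_i, combined via Kronecker products.
        self.A = nn.Parameter(torch.randn(n, n, n))
        self.F_blocks = nn.Parameter(
            torch.randn(n, out_features // n, in_features // n) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build the full weight W = sum_i kron(A_i, F_i).
        W = sum(torch.kron(self.A[i], self.F_blocks[i]) for i in range(self.n))
        w_hat = F.normalize(W, dim=1)        # unit-norm rows
        lin = x @ w_hat.t()                  # <w_hat, x> for each output unit
        # cos(x, w) = <w_hat, x> / ||x|| since each row of w_hat has unit norm.
        cos = lin / (x.norm(dim=-1, keepdim=True) + 1e-6)
        # B-cos scaling: |cos|^(B-1) down-weights poorly aligned units,
        # which drives weight-input alignment during training.
        return cos.abs().pow(self.b - 1) * lin
```

With B = 2 the output reduces to ||x|| · cos(x, w) · |cos(x, w)|; once the input-dependent scaling is fixed, the forward pass is linear in the input, so the network can be summarized by a single linear map per input, which is what makes it directly interpretable.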
2024 International Joint Conference on Neural Networks, IJCNN 2024
explainability; hypercomplex neural networks; interpretability; parameterized hypercomplex neural networks
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record

Lopez_Towards_2024.pdf (restricted: archive administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 844.28 kB
Format: Adobe PDF
Contact the author for access

Lopez_Indice_Towards_2024.pdf (restricted: archive administrators only)
Note: Table of contents
Type: Other attached material
License: All rights reserved
Size: 1.2 MB
Format: Adobe PDF
Contact the author for access

Lopez_Frontespizio_Towards_2024.pdf (restricted: archive administrators only)
Note: Title page
Type: Other attached material
License: All rights reserved
Size: 3.53 MB
Format: Adobe PDF
Contact the author for access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1725064
Citations
  • Scopus 0