Regularization, early-stopping and dreaming: A Hopfield-like setup to address generalization and overfitting

Agliari, E.; Alemanno, F.; Aquaro, M.; Fachechi, A.
2024

Abstract

In this work we approach attractor neural networks from a machine-learning perspective: we look for optimal network parameters by applying gradient descent to a regularized loss function. Within this framework, the optimal neuron-interaction matrices turn out to be Hebbian kernels revised by a reiterated unlearning protocol. Remarkably, the extent of this unlearning is shown to be related to the regularization hyperparameter of the loss function and to the training time. Strategies to avoid overfitting can therefore be formulated in terms of regularization and early-stopping tuning. The generalization capabilities of these attractor networks are also investigated: analytical results are obtained for random synthetic datasets; the emerging picture is then corroborated by numerical experiments that highlight the existence of several regimes (i.e., overfitting, failure, and success) as the dataset parameters are varied.
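
To make the abstract's correspondence concrete, below is a minimal numerical sketch (not the authors' code). It assumes a quadratic pattern-reconstruction loss with L2 regularization as a stand-in for the paper's loss; the functions `dreaming_kernel` and `train`, and the identification of the unlearning time as the inverse of an effective regularization strength, are illustrative assumptions suggested by the fixed-point algebra of this particular loss.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                                # neurons, stored patterns
xi = rng.choice([-1.0, 1.0], size=(P, N))     # random binary patterns

C = xi @ xi.T / N                             # P x P pattern-overlap matrix

def dreaming_kernel(t):
    """Hebbian kernel revised by 'unlearning' (dreaming) time t:
    purely Hebbian at t = 0, pseudo-inverse projector as t -> inf."""
    A = (1.0 + t) * np.linalg.inv(np.eye(P) + t * C)
    return xi.T @ A @ xi / N

def train(lam, lr=0.01, steps=5000):
    """Gradient descent on an assumed regularized loss (a stand-in):
    L(J) = (1/2P) sum_mu |xi_mu - J xi_mu|^2 + (lam/2) ||J||_F^2."""
    J = np.zeros((N, N))
    for _ in range(steps):
        resid = xi - xi @ J.T                 # row mu: (xi_mu - J xi_mu)^T
        grad = -(resid.T @ xi) / P + lam * J
        J -= lr * grad
    return J

lam = 0.5
lam_eff = lam * P / N                         # effective regularization strength
t = 1.0 / lam_eff                             # unlearning time ~ 1/regularization
J_gd = train(lam)
J_dream = dreaming_kernel(t) / (1.0 + lam_eff)  # rescaled dreaming kernel
print(np.linalg.norm(J_gd - J_dream) / np.linalg.norm(J_dream))
# ~ 0: the trained couplings match the (rescaled) dreaming kernel
```

In this toy setting, weaker regularization (or longer training) pushes J toward the pseudo-inverse end of the dreaming line, while stronger regularization (or stopping gradient descent early) leaves J closer to the Hebbian end, mirroring the regularization/early-stopping trade-off described in the abstract.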
attractor neural networks; overfitting; spin-glasses
01 Journal publication::01a Journal article
Regularization, early-stopping and dreaming: A Hopfield-like setup to address generalization and overfitting / Agliari, E.; Alemanno, F.; Aquaro, M.; Fachechi, A. - In: NEURAL NETWORKS. - ISSN 0893-6080. - 177:(2024). [10.1016/j.neunet.2024.106389]
Files attached to this product
File: Agliari_Regularization_2024.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 1.56 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1710330