
Storing, learning and retrieving biased patterns / Agliari, E.; Leonelli, F. E.; Marullo, C.. - In: APPLIED MATHEMATICS AND COMPUTATION. - ISSN 0096-3003. - 415:(2021), p. 126716. [10.1016/j.amc.2021.126716]

Storing, learning and retrieving biased patterns

Agliari E.; Leonelli F. E.; Marullo C.
2021

Abstract

The formal equivalence between the Hopfield network (HN) and the Boltzmann Machine (BM) has been well established in the context of random, unstructured and unbiased patterns to be retrieved and recognised. Here we extend this equivalence to the case of “biased” patterns, that is, patterns which display an unbalanced count of positive neurons/pixels: starting from previous results on the bias paradigm for the HN, we construct the BM's equivalent Hamiltonian by introducing a constraint parameter for the bias correction. We show analytically and numerically that the parameters suggested by the equivalence are fixed points under contrastive-divergence evolution when the model is exposed to a dataset of blurred examples of each pattern, and that they enjoy large basins of attraction when the model suffers from a noisy initialisation. These results are also shown to be robust against increasing storage load and increasing bias in the reference patterns. This picture, together with the analytical derivation of the HN's phase diagram via self-consistency equations, allows us to enhance our mathematical control over the BM's performance when approaching more realistic datasets.
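As a rough illustration of the storage-and-retrieval setting the abstract describes (not the authors' actual construction), the following sketch stores biased ±1 patterns in a Hopfield network using a Hebbian rule with the mean activity subtracted — one standard way of implementing a bias correction — and retrieves a pattern from a noisy initialisation. All parameter values (`N`, `P`, the bias `a`, the 10% flip noise) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 500, 5        # neurons, stored patterns (illustrative sizes)
a = 0.6              # bias: P(xi = +1) = (1 + a) / 2, so patterns are unbalanced

# Biased +/-1 patterns
xi = np.where(rng.random((P, N)) < (1 + a) / 2, 1, -1)

# Hebbian couplings with bias correction: subtract the mean activity a,
# which suppresses the systematic crosstalk that plain Hebbian storage
# would suffer for biased patterns
J = ((xi - a).T @ (xi - a)) / N
np.fill_diagonal(J, 0.0)

def retrieve(s, steps=20):
    """Zero-temperature parallel dynamics s -> sign(J s)."""
    for _ in range(steps):
        s = np.sign(J @ s + 1e-12)  # tiny offset breaks exact ties toward +1
    return s

# Noisy initialisation: flip 10% of the spins of pattern 0
s0 = xi[0] * np.where(rng.random(N) < 0.1, -1, 1)

# Overlap (Mattis magnetisation) with the reference pattern after retrieval
m = retrieve(s0) @ xi[0] / N
print(f"overlap with pattern 0: {m:.2f}")
```

With the corrected couplings, the signal term on the condensed pattern dominates the residual crosstalk at this low storage load, so the dynamics recovers an overlap close to 1 despite the bias.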
Disordered systems; machine learning; neural networks
01 Journal publication::01a Journal article
Files attached to this record
Agliari_Storing_2021.pdf (access restricted to repository managers)
  Type: Publisher's version (published with the publisher's layout)
  Licence: All rights reserved
  Size: 2.29 MB
  Format: Adobe PDF
  Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1586702
Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science (ISI): 4