Stochastic gradient descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems

Angelini M. C.; Cavaliere A. G.; Marino R.; Ricci-Tersenghi F.
2024

Abstract

Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in Machine Learning, but it had received no answer until now. Here we show that, in discrete optimization and inference problems, the dynamics of an SGD-like algorithm closely resemble those of Metropolis Monte Carlo run at a properly chosen temperature, which depends on the mini-batch size. This quantitative matching holds both at equilibrium and in the out-of-equilibrium regime, despite the two algorithms having fundamental differences (e.g. SGD does not satisfy detailed balance). This equivalence allows us to use known results on the performance and limits of Monte Carlo algorithms to optimize the mini-batch size in the SGD-like algorithm, making it efficient at recovering the signal in hard inference problems.
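As a rough illustration of the two update rules compared in the abstract, the following minimal Python sketch contrasts a Metropolis move with an SGD-like mini-batch move on a toy Ising-like cost over binary spins. The model, the form of the mini-batch move, and all parameter values are assumptions made for illustration only; they are not the exact algorithms or models studied in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy cost: N binary spins, M random pairwise couplings (illustrative only).
N, M = 100, 400
pairs = np.array([rng.choice(N, size=2, replace=False) for _ in range(M)])
J = rng.choice([-1.0, 1.0], size=M)

def terms_energy(s, idx):
    # Energy carried by the interaction terms listed in idx: -J_a * s_i * s_j.
    return -np.sum(J[idx] * s[pairs[idx, 0]] * s[pairs[idx, 1]])

def metropolis_step(s, T):
    # Standard Metropolis move: flip one spin, accept with prob. min(1, exp(-dE/T)).
    k = rng.integers(N)
    idx = np.flatnonzero((pairs[:, 0] == k) | (pairs[:, 1] == k))
    dE = -2.0 * terms_energy(s, idx)          # flipping s_k reverses these terms
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[k] = -s[k]

def sgd_like_step(s, batch_size):
    # SGD-like move (assumed form): flip one spin if the energy restricted to a
    # random mini-batch of terms does not increase; the noise comes from the batch.
    k = rng.integers(N)
    batch = rng.choice(M, size=batch_size, replace=False)
    idx = batch[(pairs[batch, 0] == k) | (pairs[batch, 1] == k)]
    if -2.0 * terms_energy(s, idx) <= 0:
        s[k] = -s[k]

# Usage: relax two copies of the same initial configuration and compare energies.
s_mc = rng.choice([-1, 1], size=N)
s_sgd = s_mc.copy()
for _ in range(20000):
    metropolis_step(s_mc, T=0.5)              # temperature chosen arbitrarily here
    sgd_like_step(s_sgd, batch_size=40)       # batch size chosen arbitrarily here
print("Metropolis energy:", terms_energy(s_mc, np.arange(M)))
print("SGD-like  energy:", terms_energy(s_sgd, np.arange(M)))

In the spirit of the paper's claim, the temperature of the Metropolis step and the mini-batch size of the SGD-like step play analogous roles; the specific values above are placeholders, not the mapping derived by the authors.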
inference; Monte Carlo; stochastic gradient descent; algorithmic optimization
01 Journal publication::01a Journal article
Stochastic gradient descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems / Angelini, M. C.; Cavaliere, A. G.; Marino, R.; Ricci-Tersenghi, F.. - In: SCIENTIFIC REPORTS. - ISSN 2045-2322. - 14:1(2024), pp. 1-11. [10.1038/s41598-024-62625-8]
Files attached to this record
File: Angelini_Stochastic-gradient_2024.pdf
Access: open access
Note: journal article
Type: publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 2.06 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1711042
Citations
  • Scopus: 0