We analyze numerically the training dynamics of deep neural networks (DNNs) using methods developed in the statistical physics of glassy systems. The two main issues we address are (1) the complexity of the loss landscape and of the dynamics within it, and (2) to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and datasets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe typical glassy behavior, suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.

Comparing dynamics: deep neural networks versus glassy systems / Baity-Jesi, M.; Sagun, L.; Geiger, M.; Spigler, S.; Ben Arous, G.; Cammarota, C.; LeCun, Y.; Wyart, M.; Biroli, G. - 1:(2018), pp. 526-535. (Paper presented at the 35th International Conference on Machine Learning - ICML 2018, held in Sweden).

Comparing dynamics: deep neural networks versus glassy systems

Cammarota C.;
2018

35th International Conference on Machine Learning - ICML 2018
Machine learning; glassy dynamics; loss landscape
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item

File: Cammarota_Deep-neural-networks.pdf
Access: open access
Type: Post-print (version following peer review, accepted for publication)
License: Creative Commons
Size: 1.74 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1472305
Citations
  • PMC: not available
  • Scopus: 11
  • Web of Science: 9