Who is afraid of big bad minima? Analysis of gradient-flow in spiked matrix-tensor models / Sarao Mannelli, Stefano; Biroli, Giulio; Cammarota, Chiara; Krzakala, Florent; Zdeborová, Lenka. - (2019). (Paper presented at the 2019 Conference on Neural Information Processing Systems (NeurIPS 2019), held at the Vancouver Convention Center).

Who is afraid of big bad minima? Analysis of gradient-flow in spiked matrix-tensor models

Chiara Cammarota
2019

Abstract

Gradient-based algorithms are effective for many machine learning tasks, but despite ample recent effort and some progress, it often remains unclear why they succeed in practice at optimising high-dimensional non-convex functions and why they find good minima instead of being trapped in spurious ones. Here we present a quantitative theory explaining this behaviour in a spiked matrix-tensor model. Our framework is based on the Kac-Rice analysis of stationary points and a closed-form analysis of gradient flow originating from statistical physics. We show that there is a well-defined region of parameters where the gradient-flow algorithm finds a good global minimum despite the presence of exponentially many spurious local minima. We show that this is achieved by surfing on saddles that have a strong negative direction towards the global minimum, a phenomenon connected to a BBP-type threshold in the Hessian at the critical points of the landscape.
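To make the setup concrete, the following is a minimal numerical sketch (not the authors' code) of the model studied in the paper: spherical gradient flow on a spiked matrix-tensor loss with tensor order p = 3, integrated by forward Euler. The sizes and noise levels (N, Delta2, Delta3) and the integration parameters (dt, n_steps) are illustrative assumptions, not values from the paper, and the noise tensor is left unsymmetrized for brevity.

```python
# Hypothetical sketch: spherical gradient flow on a p = 3 spiked
# matrix-tensor loss; parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, Delta2, Delta3 = 100, 0.5, 1.0   # assumed sizes / noise variances
dt, n_steps = 0.01, 2000            # assumed Euler integration settings

# Planted signal on the sphere |x*|^2 = N.
xs = rng.standard_normal(N)
xs *= np.sqrt(N) / np.linalg.norm(xs)

# Noisy rank-one matrix observation Y and rank-one 3-tensor observation T.
noise2 = rng.standard_normal((N, N))
Y = np.outer(xs, xs) / np.sqrt(N) + np.sqrt(Delta2) * (noise2 + noise2.T) / np.sqrt(2)
c = np.sqrt(2) / N
noise3 = rng.standard_normal((N, N, N))   # unsymmetrized, for brevity
T = c * np.einsum('i,j,k->ijk', xs, xs, xs) + np.sqrt(Delta3) * noise3

def grad_H(x):
    """Gradient of the quadratic misfit on both channels (sums run over
    all index tuples, which only rescales the landscape by a constant)."""
    g2 = -(2.0 / (Delta2 * np.sqrt(N))) * (Y @ x - x * (x @ x) / np.sqrt(N))
    tx = np.einsum('ajk,j,k->a', T, x, x)
    g3 = -(3.0 * c / Delta3) * (tx - c * x * (x @ x) ** 2)
    return g2 + g3

# Random initialization on the sphere, essentially orthogonal to x*.
x = rng.standard_normal(N)
x *= np.sqrt(N) / np.linalg.norm(x)

for t in range(n_steps):
    g = grad_H(x)
    g -= (g @ x / N) * x                  # project onto the sphere's tangent space
    x = x - dt * g
    x *= np.sqrt(N) / np.linalg.norm(x)   # re-impose |x|^2 = N
    if t % 500 == 0:
        print(f"t = {t * dt:6.2f}   overlap m = {x @ xs / N:+.3f}")
```

The projection-and-renormalization step is a discrete stand-in for the spherical constraint that, in the continuous-time analysis, is enforced by a Lagrange multiplier; the printed overlap m = x·x*/N approaching ±1 signals that the flow has reached the vicinity of the good global minimum rather than a spurious one.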
2019 Conference on Neural Information Processing Systems (NeurIPS 2019)
Inference; algorithms; risk landscape
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record
File: Cammarota_Big-bad-minima.pdf
Access: open access
Type: Post-print document (version following peer review and accepted for publication)
License: Creative Commons
Size: 1.52 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1472288