
Distributed stochastic nonconvex optimization and learning based on successive convex approximation / Di Lorenzo, P.; Scardapane, S.. - 2019:(2019), pp. 2224-2228. (Intervento presentato al convegno 53rd Asilomar Conference on Circuits, Systems and Computers, ACSSC 2019 tenutosi a Pacific Grove; United States) [10.1109/IEEECONF44664.2019.9089408].

Distributed stochastic nonconvex optimization and learning based on successive convex approximation

Di Lorenzo P.;Scardapane S.
2019

Abstract

We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel algorithmic framework for the distributed minimization of the sum of the expected value of a smooth (possibly nonconvex) function, the agents' sum-utility, plus a convex (possibly nonsmooth) regularizer. The proposed method hinges on successive convex approximation (SCA) techniques, leveraging dynamic consensus as a mechanism to track the average gradient among the agents, and recursive averaging to recover the expected gradient of the sum-utility function. Almost sure convergence to (stationary) solutions of the nonconvex problem is established. Finally, the method is applied to the distributed stochastic training of neural networks. Numerical results confirm the theoretical claims and illustrate the advantages of the proposed method with respect to other methods available in the literature.
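As a reading aid, and in notation assumed here rather than taken from the paper, the problem class described in the abstract can be sketched as a composite stochastic program over N networked agents:

\[
\min_{x}\; U(x) + G(x), \qquad U(x) \triangleq \sum_{i=1}^{N} \mathbb{E}_{\xi_i}\!\big[f_i(x;\xi_i)\big],
\]

where each f_i is smooth and possibly nonconvex (agent i's expected utility) and G is convex and possibly nonsmooth (the common regularizer). In the scheme outlined in the abstract, each agent repeatedly minimizes a convex surrogate of this objective built around its local iterate, while dynamic consensus (gradient tracking) and recursive averaging supply a local estimate of the expected gradient of the sum-utility; the specific surrogates, mixing rules, and step sizes are given in the paper.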
2019
53rd Asilomar Conference on Circuits, Systems and Computers, ACSSC 2019
consensus; distributed optimization; nonconvex; stochastic optimization; successive convex approximation
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item
File: DiLorenzo_Distributed_2019.pdf (access restricted to archive administrators)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1450441
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): 0