
Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory / Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo. - In: IEEE TRANSACTIONS ON SIGNAL PROCESSING. - ISSN 1053-587X. - STAMPA. - 65:8(2017), pp. 1929-1944. [10.1109/TSP.2016.2637317]

Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory

FACCHINEI, Francisco
2017

Abstract

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints, and also consider extensions to some structured, nonsmooth problems. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization is established. Our framework is very general and flexible and unifies several existing successive convex approximation (SCA)-based algorithms. More importantly, and differently from current SCA approaches, it naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems. This Part I is devoted to the description of the framework in its generality. In Part II, we customize our general methods to several (multiagent) optimization problems in communications, networking, and machine learning; the result is a new class of centralized and distributed algorithms that compare favorably to existing ad-hoc (centralized) schemes.
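The abstract describes the core SCA idea: repeatedly minimize a strongly convex surrogate of the nonconvex objective while staying feasible. The sketch below is only a minimal illustration of that idea on a box-constrained problem, not the paper's algorithm (which handles nonconvex constraints via inner convex approximations); the function `sca_minimize` and the parameters `tau` (surrogate strong-convexity modulus) and `gamma` (step size) are illustrative choices, not notation from the paper.

```python
import numpy as np

def sca_minimize(grad_f, x0, lower, upper, tau=10.0, gamma=0.5, iters=300):
    """Illustrative SCA loop: at each iterate xk, minimize the strongly
    convex quadratic surrogate
        x -> grad_f(xk) @ (x - xk) + (tau / 2) * ||x - xk||^2
    over the box [lower, upper]. Because the surrogate is separable and
    isotropic, its constrained minimizer is the box projection of the
    unconstrained minimizer xk - grad_f(xk) / tau."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        x_hat = np.clip(x - g / tau, lower, upper)   # surrogate minimizer
        x = x + gamma * (x_hat - x)  # convex combination keeps x feasible
    return x

# Toy nonconvex objective: f(x) = sum(x_i^4 - 3 x_i^2) on the box [-2, 2]^2.
# Its interior stationary points satisfy 4x^3 - 6x = 0, i.e. x = 0, +/-sqrt(1.5).
grad = lambda x: 4 * x**3 - 6 * x
x_star = sca_minimize(grad, x0=np.array([0.5, -0.4]), lower=-2.0, upper=2.0)
```

Each iterate stays feasible by construction (a convex combination of two feasible points), mirroring the feasibility-preserving property highlighted in the abstract; convergence here is only to a stationary point, as expected for a nonconvex problem.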
Successive convex approximation; Distributed algorithms; Nonconvex optimization
01 Journal publication::01a Journal article
Files attached to this record

  • Scutari_Parallel-and-Distributed_2017.pdf (restricted: repository managers only)
    Type: Publisher's version (published with the publisher's layout)
    License: All rights reserved
    Size: 345.33 kB
    Format: Adobe PDF
    Contact the author

  • Scutari_postprint_Parallel-and-Distributed_2017.pdf (open access)
    Note: http://dx.doi.org/10.1109/TSP.2016.2637317
    Type: Post-print (version after peer review, accepted for publication)
    License: All rights reserved
    Size: 310.21 kB
    Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/937935
Citations
  • Scopus: 245
  • Web of Science: 180