
Asynchronous parallel nonconvex large-scale optimization / Cannelli, L.; Facchinei, F.; Kungurtsev, V.; Scutari, G. - PRINT. - (2017), pp. 4706-4710. (Paper presented at the 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017, held in New Orleans, United States, 2017) [10.1109/ICASSP.2017.7953049].

Asynchronous parallel nonconvex large-scale optimization

Facchinei, F.;
2017

Abstract

We propose a novel parallel asynchronous algorithmic framework for the minimization of the sum of a smooth (nonconvex) function and a convex (nonsmooth) regularizer. The framework hinges on Successive Convex Approximation (SCA) techniques and on a novel probabilistic model that describes, in a unified way, a variety of asynchronous settings more faithfully and comprehensively than state-of-the-art models. Key features of our framework are: i) it accommodates inconsistent reads, meaning that components of the variables may be written by some cores while being simultaneously read by others; ii) it covers several existing methods as special cases; and iii) it accommodates a variety of parallel computing architectures. Almost sure convergence to stationary solutions is proved for the general case, and an iteration complexity analysis is given for a specific version of our model. Numerical results show that our scheme outperforms existing asynchronous ones.
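
To make the abstract's description concrete, the following is a minimal illustrative sketch in Python (not the authors' code): several threads repeatedly take a lock-free snapshot of a shared iterate (so reads may be inconsistent), solve a strongly convex proximal-linearized surrogate for one randomly chosen block, and write a relaxed update back. The problem instance (an l1-regularized least-squares objective), the step parameters, and all names such as soft_threshold and worker are illustrative assumptions, not taken from the paper.

```python
import threading

import numpy as np

# Minimal, illustrative sketch (assumptions, not the authors' implementation):
# asynchronous block-coordinate SCA for  min_x f(x) + g(x), with
#   f(x) = 0.5 * ||A x - b||^2   (smooth; chosen convex here only for simplicity)
#   g(x) = lam * ||x||_1         (convex, nonsmooth regularizer).
# Workers read the shared iterate without locks (reads may be inconsistent),
# solve a proximal-linearized surrogate for one random block (closed form:
# soft-thresholding), and write a relaxed update back.

rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1                              # regularization weight (assumed)
tau = np.linalg.norm(A, 2) ** 2        # proximal weight >= Lipschitz constant of grad f
gamma = 0.5                            # relaxation (step) parameter (assumed)

x = np.zeros(n)                        # shared iterate, updated in place by all workers


def soft_threshold(v, t):
    """Closed-form minimizer of the l1-proximal surrogate subproblem."""
    return np.sign(v) * max(abs(v) - t, 0.0)


def worker(num_updates):
    local_rng = np.random.default_rng()            # per-thread RNG
    for _ in range(num_updates):
        x_read = x.copy()                          # lock-free snapshot: may mix old and new entries
        i = local_rng.integers(n)                  # randomly selected block (a single coordinate here)
        grad_i = A[:, i] @ (A @ x_read - b)        # partial gradient of f at the (possibly inconsistent) read
        x_hat_i = soft_threshold(x_read[i] - grad_i / tau, lam / tau)  # surrogate best response
        x[i] += gamma * (x_hat_i - x_read[i])      # relaxed asynchronous write


threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(f"final objective: {obj:.4f}")
```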
2017
2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
Asynchronous algorithms; big-data; inconsistent read; nonconvex constrained optimization; Software; Signal Processing; Electrical and Electronic Engineering
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this record
File: Cannelli_Asynchronous_2017.pdf
Access: repository administrators only (contact the author)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 520.7 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1083836
Citations
  • Scopus: 4
  • ISI (Web of Science): 4