Parallel and distributed training of neural networks via successive convex approximation

Di Lorenzo, Paolo; Scardapane, Simone
2016

Abstract

The aim of this paper is to develop a theoretical framework for training neural network (NN) models when data is distributed over a set of agents connected through a sparse network topology. The framework builds on a distributed convexification technique, while leveraging dynamic consensus to propagate information over the network. It can be customized to work with the different loss and regularization functions typically used when training NN models, while guaranteeing provable convergence to a stationary solution under mild assumptions. Interestingly, it naturally leads to distributed architectures in which agents solve local optimization problems by exploiting parallel multi-core processors. Numerical results corroborate our theoretical findings and assess the performance of parallel and distributed training of neural networks.
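The abstract describes the scheme only at a high level. The following is a minimal NumPy sketch of the general idea of combining a successive convex approximation (SCA) step with dynamic consensus (gradient tracking) over a sparse ring network of agents; the surrogate (linearization plus a proximal term), step sizes, mixing matrix, and the tiny one-hidden-layer network are illustrative assumptions and do not reproduce the paper's exact algorithm or experiments.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact algorithm):
# distributed NN training via SCA steps plus dynamic consensus over a ring.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data, split across the agents.
n_agents, n_samples, n_feat, n_hidden = 4, 200, 5, 8
X = rng.standard_normal((n_samples, n_feat))
y = np.tanh(X @ rng.standard_normal(n_feat)) + 0.1 * rng.standard_normal(n_samples)
X_parts, y_parts = np.array_split(X, n_agents), np.array_split(y, n_agents)

# Doubly stochastic mixing matrix for a ring topology (sparse network).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def unpack(theta):
    """Split the flat parameter vector into the two layers of a tiny NN."""
    W1 = theta[: n_feat * n_hidden].reshape(n_feat, n_hidden)
    return W1, theta[n_feat * n_hidden:]

def local_loss_grad(theta, Xi, yi):
    """Squared loss of a one-hidden-layer tanh network and its gradient."""
    W1, w2 = unpack(theta)
    H = np.tanh(Xi @ W1)                       # hidden activations
    r = H @ w2 - yi                            # residuals
    gW1 = Xi.T @ ((r[:, None] * w2) * (1.0 - H ** 2))
    gw2 = H.T @ r
    return 0.5 * np.mean(r ** 2), np.concatenate([gW1.ravel(), gw2]) / len(yi)

dim = n_feat * n_hidden + n_hidden
theta = np.tile(0.1 * rng.standard_normal(dim), (n_agents, 1))   # local copies

grads = np.array([local_loss_grad(theta[i], X_parts[i], y_parts[i])[1]
                  for i in range(n_agents)])
y_track = grads.copy()        # each agent tracks the network-average gradient

tau, alpha = 1.0, 0.1         # proximal weight of the surrogate, step size
for it in range(500):
    local = np.empty_like(theta)
    for i in range(n_agents):
        # SCA step: minimize the strongly convex surrogate
        #   y_i^T d + (tau/2) ||d||^2   =>   d* = -y_i / tau,
        # where y_i is the local estimate of the network-average gradient.
        local[i] = theta[i] - alpha * y_track[i] / tau
    theta = W @ local                          # consensus on the iterates
    new_grads = np.array([local_loss_grad(theta[i], X_parts[i], y_parts[i])[1]
                          for i in range(n_agents)])
    y_track = W @ y_track + new_grads - grads  # dynamic consensus update
    grads = new_grads

avg_loss = np.mean([local_loss_grad(theta[i], X, y)[0] for i in range(n_agents)])
print(f"average full-data loss after training: {avg_loss:.4f}")
```

In this sketch each agent only exchanges its iterate and its gradient-tracking variable with its two ring neighbors, which mirrors the abstract's claim that information is propagated over a sparse topology via dynamic consensus rather than through a central node.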
2016
26th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2016 - Proceedings
Artificial neural networks; distributed algorithms; nonconvex optimization; parallel algorithms; human-computer interaction; signal processing
04 Publication in conference proceedings::04b Conference paper in a volume
Parallel and distributed training of neural networks via successive convex approximation / Di Lorenzo, Paolo; Scardapane, Simone. - ELECTRONIC. - 2016-:(2016), pp. 1-6. (Paper presented at the 26th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2016 - Proceedings, held in Salerno in 2016) [10.1109/MLSP.2016.7738894].
Files attached to this item
File: DiLorenzo_Parallel-distributed-training_2016.pdf
Access: authorized users only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 309.41 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/966764
Citations
  • PMC: ND
  • Scopus: 8
  • Web of Science (ISI): 3