Distributed reservoir computing with sparse readouts / Scardapane, Simone; Panella, Massimo; Comminiello, Danilo; Hussain, Amir; Uncini, Aurelio. - In: IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE. - ISSN 1556-603X. - STAMPA. - 11:4(2016), pp. 59-70. [10.1109/MCI.2016.2601759]

Distributed reservoir computing with sparse readouts

Scardapane, Simone; Panella, Massimo; Comminiello, Danilo; Uncini, Aurelio
2016

Abstract

In a network of agents, a widespread problem is the need to estimate a common underlying function from locally distributed measurements. Real-world scenarios may not allow for centralized fusion centers, requiring the development of distributed, message-passing implementations of standard machine learning training algorithms. In this paper, we are concerned with the distributed training of a particular class of recurrent neural networks, namely echo state networks (ESNs). In the centralized case, ESNs have received considerable attention because they can be trained with standard linear regression routines. Based on this observation, in our previous work we introduced a decentralized algorithm, framed in the distributed optimization field, to train ESNs. In this paper, we focus on an additional sparsity property of the output layer of ESNs, which allows for very efficient implementations of the resulting networks. To evaluate the proposed algorithm, we test it on two well-known prediction benchmarks, namely the Mackey-Glass chaotic time series and the 10th-order nonlinear autoregressive moving average (NARMA) system.
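
As a rough illustration of the ideas in the abstract (ESN readout training reduces to a linear regression problem, and an added sparsity penalty on the output weights yields compact, efficient readouts), the Python sketch below generates the 10th-order NARMA benchmark, drives a fixed random reservoir, and fits a sparse readout with an L1-regularized least-squares solver (ISTA). This is a single-machine sketch only: the paper's actual contribution is a distributed, message-passing training scheme over a network of agents, which is not reproduced here, and every name and parameter value in the snippet (narma10, esn_states, train_sparse_readout, n_res=100, lam=1e-2, and so on) is an illustrative assumption rather than something taken from the paper.

import numpy as np

def narma10(T, seed=0):
    # 10th-order NARMA benchmark series (standard recursion).
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

def esn_states(u, n_res=100, rho=0.9, in_scale=0.5, seed=0):
    # Drive a fixed random reservoir with the input and collect its states.
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-in_scale, in_scale, size=n_res)
    w_res = rng.standard_normal((n_res, n_res))
    w_res *= rho / np.max(np.abs(np.linalg.eigvals(w_res)))  # set spectral radius
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(w_in * u_t + w_res @ x)
        states[t] = x
    return states

def train_sparse_readout(X, y, lam=1e-2, n_iter=500):
    # ISTA: proximal gradient descent on (1/2N)||Xw - y||^2 + lam*||w||_1.
    n_samples, n_feat = X.shape
    w = np.zeros(n_feat)
    step = n_samples / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n_samples
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

# Usage: fit a sparse readout on NARMA-10 and report error and sparsity.
u, y = narma10(2000)
states = esn_states(u)
washout = 100                                      # discard initial transient
w = train_sparse_readout(states[washout:], y[washout:])
y_hat = states[washout:] @ w
nrmse = np.sqrt(np.mean((y_hat - y[washout:]) ** 2)) / np.std(y[washout:])
print(f"NRMSE: {nrmse:.3f}, nonzero readout weights: {np.count_nonzero(w)}/{w.size}")

The soft-thresholding step is what drives many readout weights exactly to zero; that sparsity is the property the abstract points to as enabling very efficient implementations of the trained network.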
2016
Artificial intelligence; learning systems; message passing
01 Journal publication::01a Journal article
Files attached to this item
Scardapane_Distributed-reservoir_2016.pdf
Access: authorized users only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 6.14 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/902818
Citations
  • PMC: ND
  • Scopus: 14
  • Web of Science: 12