Recurrent neural networks with flexible gates using kernel activation functions / Scardapane, S; Van Vaerenbergh, S; Comminiello, D; Totaro, S; Uncini, A. - (2018), pp. 1-6. (Paper presented at the IEEE International Workshop on Machine Learning for Signal Processing, held in Aalborg, Denmark) [10.1109/MLSP.2018.8516994].

Recurrent neural networks with flexible gates using kernel activation functions

Scardapane, S; Comminiello, D; Uncini, A
2018

Abstract

Gated recurrent neural networks have achieved remarkable results in the analysis of sequential data. Inside these networks, gates are used to control the flow of information, making it possible to model even very long-term dependencies in the data. In this paper, we investigate whether the original gate equation (a linear projection followed by an element-wise sigmoid) can be improved. In particular, we design a more flexible architecture, with a small number of adaptable parameters, that can model a wider range of gating functions than the classical one. To this end, we replace the sigmoid function in the standard gate with a non-parametric formulation that extends the recently proposed kernel activation function (KAF), with the addition of a residual skip-connection. A set of experiments on sequential variants of the MNIST dataset shows that adopting this novel gate improves accuracy at a negligible cost in computational power, while requiring substantially fewer training iterations.
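A minimal sketch of the contrast between a standard sigmoid gate and the flexible, KAF-based gate described in the abstract is given below (in NumPy). It assumes a Gaussian kernel expansion over a fixed, uniformly spaced dictionary and wires the residual skip-connection as a sigmoid branch added to the KAF output; the function names, the kernel bandwidth, the dictionary size, and this particular residual wiring are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def sigmoid(s):
    # Standard element-wise sigmoid used by classical gates.
    return 1.0 / (1.0 + np.exp(-s))

def kaf(s, alpha, dictionary, gamma=1.0):
    # Kernel activation function (sketch): an element-wise Gaussian kernel
    # expansion f(s) = sum_i alpha_i * exp(-gamma * (s - d_i)^2), where the
    # dictionary points d_i are fixed and the mixing coefficients alpha_i
    # are the (small number of) trainable parameters.
    diff = s[..., None] - dictionary            # shape (..., D)
    return (alpha * np.exp(-gamma * diff ** 2)).sum(axis=-1)

def flexible_gate(x, h, W, U, b, alpha, dictionary):
    # Hypothetical flexible gate: the usual linear projection of input x and
    # previous state h, followed by a KAF with a sigmoid residual branch
    # (the residual wiring here is an assumption made for illustration).
    s = x @ W + h @ U + b                       # same projection as a standard gate
    return sigmoid(s) + kaf(s, alpha, dictionary)

# Toy usage with a 3-dimensional input and a 4-dimensional hidden state.
rng = np.random.default_rng(0)
x, h = rng.standard_normal(3), rng.standard_normal(4)
W, U, b = rng.standard_normal((3, 4)), rng.standard_normal((4, 4)), np.zeros(4)
dictionary = np.linspace(-2.0, 2.0, 20)         # fixed, uniformly spaced dictionary
alpha = np.zeros(20)                            # trainable coefficients, zero at init
print(flexible_gate(x, h, W, U, b, alpha, dictionary))

With the mixing coefficients initialized to zero, this sketch reduces exactly to the classical sigmoid gate, which is one way a residual skip-connection can make the flexible gate a strict generalization of the standard one.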
2018
IEEE International Workshop on Machine Learning for Signal Processing
recurrent network; LSTM; GRU; gate; kernel activation function
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this product
File: Scardapane_ Recurrent-neural_2018.pdf
Access: repository managers only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 876.76 kB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1335713
Citations
  • PMC: ND
  • Scopus: 2
  • Web of Science: 0