Group sparse regularization for deep neural networks / Scardapane, Simone; Comminiello, Danilo; Hussain, Amir; Uncini, Aurelio. - In: NEUROCOMPUTING. - ISSN 0925-2312. - PRINT. - 241:(2017), pp. 81-89. [10.1016/j.neucom.2017.02.029]

Group sparse regularization for deep neural networks

Scardapane, Simone; Comminiello, Danilo; Hussain, Amir; Uncini, Aurelio
2017

Abstract

In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation enabling their simultaneous parallel execution, using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing simultaneously all the aforementioned tasks in order to obtain a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition, and multiple realistic mid-scale classification benchmarks. Comparative results demonstrate the potential of our proposed sparse group Lasso penalty in producing extremely compact networks, with a significantly lower number of input features, with a classification accuracy which is equal or only slightly inferior to standard regularization terms.
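To make the grouping concrete, here is a minimal sketch (not the authors' released code) of the sparse group Lasso idea in PyTorch: each group collects the outgoing weights of one input or hidden unit, i.e. one column of a layer's weight matrix, so driving a whole group to zero removes that unit or input feature. The hyperparameter names lambda_group and lambda_l1 are illustrative assumptions, and the bias-unit grouping described in the paper is omitted for brevity.

import torch
import torch.nn as nn

def sparse_group_lasso(layer: nn.Linear, lambda_group=1e-4, lambda_l1=1e-4):
    W = layer.weight                       # shape: (out_features, in_features)
    # One group per incoming unit: its outgoing weights form a column of W.
    group_norms = W.norm(p=2, dim=0)       # L2 norm of each column
    group_term = (W.shape[0] ** 0.5) * group_norms.sum()  # sqrt(group size) scaling
    l1_term = W.abs().sum()                # plain Lasso term for within-group sparsity
    return lambda_group * group_term + lambda_l1 * l1_term

# Usage: add the penalty of every linear layer to the task loss before backprop.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss = loss + sum(sparse_group_lasso(m) for m in model if isinstance(m, nn.Linear))
loss.backward()

After training with such a penalty, columns whose norm falls below a small threshold can be pruned, which simultaneously removes input features (first layer) and hidden neurons (subsequent layers).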
deep networks; feature selection; group sparsity; pruning; computer science applications; computer vision and pattern recognition; cognitive neuroscience; artificial intelligence
01 Journal publication::01a Journal article
Files attached to this record
Scardapane_Group-sparse_ 2017.pdf (restricted: archive managers only; contact the author for access)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 1.18 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/947999
Citations
  • PubMed Central: not available
  • Scopus: 332
  • Web of Science: 256