
Structured ensembles: An approach to reduce the memory footprint of ensemble methods / Pomponi, J.; Scardapane, S.; Uncini, A. - In: NEURAL NETWORKS. - ISSN 0893-6080. - 144:(2021), pp. 407-418. [10.1016/j.neunet.2021.09.007]

Structured ensembles: An approach to reduce the memory footprint of ensemble methods

Pomponi J.; Scardapane S.; Uncini A.
2021

Abstract

In this paper, we propose a novel ensembling technique for deep neural networks that drastically reduces the required memory compared to alternative approaches. In particular, we propose to extract multiple sub-networks from a single, untrained neural network by solving an end-to-end optimization task that combines differentiable scaling over the original architecture with multiple regularization terms favouring the diversity of the ensemble. Since our proposal aims to detect and extract sub-structures, we call it Structured Ensemble. In a large experimental evaluation, we show that our method can achieve higher or comparable accuracy to competing methods while requiring significantly less storage. In addition, we evaluate our ensembles in terms of predictive calibration and uncertainty, showing that they compare favourably with the state-of-the-art. Finally, we draw a link with the continual learning literature, and we propose a modification of our framework to handle continuous streams of tasks with a sub-linear memory cost. We compare with a number of alternative strategies to mitigate catastrophic forgetting, highlighting advantages in terms of average accuracy and memory.
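As a rough illustration of the mechanism the abstract describes (differentiable scaling over a shared architecture, combined with a diversity-promoting regularizer), the PyTorch snippet below shows one plausible way to realize per-member channel gates. This is a minimal sketch under our own assumptions, not the authors' implementation; the names GatedConv and diversity_loss are hypothetical.

    # Hypothetical sketch: per-member differentiable channel gates over a
    # shared layer, plus a penalty discouraging overlap between members.
    # Not the paper's code; names and design choices are illustrative.
    import torch
    import torch.nn as nn

    class GatedConv(nn.Module):
        """Conv layer whose output channels are scaled by per-member gates."""
        def __init__(self, in_ch, out_ch, n_members):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            # One learnable gate vector per ensemble member (soft masks).
            self.gates = nn.Parameter(torch.randn(n_members, out_ch))

        def forward(self, x, member):
            g = torch.sigmoid(self.gates[member])      # soft mask in (0, 1)
            return self.conv(x) * g.view(1, -1, 1, 1)  # scale each channel

    def diversity_loss(gates):
        """Penalize overlapping masks so members select different channels."""
        g = torch.sigmoid(gates)                        # (n_members, out_ch)
        sim = g @ g.t()                                 # pairwise overlaps
        off_diag = sim - torch.diag(torch.diag(sim))    # drop self-similarity
        n = g.shape[0]
        return off_diag.sum() / (n * (n - 1))

    # Usage: add the diversity penalty to the per-member task losses.
    layer = GatedConv(3, 16, n_members=4)
    x = torch.randn(8, 3, 32, 32)
    outputs = [layer(x, m) for m in range(4)]
    reg = diversity_loss(layer.gates)

After training, each member's soft gates could be thresholded to prune the unselected channels, which would yield the kind of memory savings the abstract refers to.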
continual learning; deep learning; ensemble; neural networks; pruning; structured pruning; uncertainty
01 Journal publication::01a Journal article
Files attached to this product

File: Pomponi_Structured Ensembles_2021.pdf
Access: Open access
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 652.46 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1612484
Citations
  • PMC 0
  • Scopus 5
  • Web of Science 5