
Differentiable branching in deep networks for fast inference

Scardapane S.; Comminiello D.; Scarpiniti M.; Baccarelli E.; Uncini A.
2020

Abstract

In this paper, we consider the design of deep neural networks augmented with multiple auxiliary classifiers branching off the main (backbone) network. These classifiers can be used to perform early exit from the network at various layers, making them convenient for energy-constrained applications such as IoT, embedded devices, or Fog computing. However, designing an optimized early-exit strategy is a difficult task, generally requiring a large amount of manual fine-tuning. We propose a way to jointly optimize this strategy together with the branches, providing an end-to-end trainable algorithm for this emerging class of neural networks. We achieve this by replacing the original output of the branches with a 'soft', differentiable approximation. In addition, we propose a regularization approach that trades off the computational efficiency of the early-exit strategy against the overall classification accuracy. We evaluate the proposed design approach on a set of image classification benchmarks, showing significant gains in accuracy and inference time.
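
To make the idea in the abstract concrete, the following is a minimal sketch of how a multi-branch network with soft, differentiable exit gates and an efficiency regularizer could be set up in PyTorch. It is not the authors' implementation: the layer sizes, the sigmoid gate parameterization, the expected-depth regularizer, and the lambda_reg weight are all illustrative assumptions standing in for the paper's actual formulation.

```python
# A minimal sketch, not the authors' code: module names, layer sizes, the
# sigmoid gate parameterization, and lambda_reg are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchyNet(nn.Module):
    """Backbone split into stages, with an auxiliary classifier and a soft
    (differentiable) exit gate attached after each intermediate stage."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)),
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)),
        ])
        # Each gate outputs a value in (0, 1): a 'soft' probability of exiting
        # at the corresponding branch, replacing a hard confidence threshold.
        self.gates = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid()),
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid()),
        ])
        self.final_classifier = nn.Sequential(nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        logits, exit_probs = [], []
        for i, stage in enumerate(self.stages):
            x = stage(x)
            if i < len(self.branches):
                logits.append(self.branches[i](x))
                exit_probs.append(self.gates[i](x))
        logits.append(self.final_classifier(x))
        return logits, exit_probs

def soft_branching_loss(logits, exit_probs, targets, lambda_reg=0.1):
    """Weight each branch's cross-entropy by the differentiable probability of
    exiting there, and regularize the expected depth to favor early exits."""
    remaining = torch.ones(targets.shape[0], device=targets.device)
    loss = torch.zeros((), device=targets.device)
    expected_depth = torch.zeros((), device=targets.device)
    for i, out in enumerate(logits):
        if i < len(exit_probs):
            p_exit = remaining * exit_probs[i].squeeze(1)       # stop at branch i
            remaining = remaining * (1 - exit_probs[i].squeeze(1))
        else:
            p_exit = remaining                                  # last exit takes the rest
        loss = loss + (p_exit * F.cross_entropy(out, targets, reduction='none')).mean()
        expected_depth = expected_depth + (i + 1) * p_exit.mean()
    return loss + lambda_reg * expected_depth

# Usage sketch on random data:
model = BranchyNet(num_classes=10)
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
logits, exit_probs = model(x)
print(soft_branching_loss(logits, exit_probs, y))
```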
2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
deep network; energy efficiency; inference time; multi-branch architectures
04 Publication in conference proceedings::04b Conference paper in a volume
Differentiable branching in deep networks for fast inference / Scardapane, S.; Comminiello, D.; Scarpiniti, M.; Baccarelli, E.; Uncini, A. - (2020), pp. 4167-4171. (Paper presented at the 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020, held in Barcelona, Spain) [10.1109/ICASSP40776.2020.9054209].
Files attached to this record
Scardapane_Differentiable-branching_2020.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 451.49 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1435416
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science (ISI): 7