Scardapane, S.; Comminiello, D.; Scarpiniti, M.; Baccarelli, E.; Uncini, A. (2020). Differentiable branching in deep networks for fast inference. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona, Spain, pp. 4167-4171. DOI: 10.1109/ICASSP40776.2020.9054209.
Differentiable branching in deep networks for fast inference
Scardapane S.; Comminiello D.; Scarpiniti M.; Baccarelli E.; Uncini A.
2020
Abstract
In this paper, we consider the design of deep neural networks augmented with multiple auxiliary classifiers branching off from the main (backbone) network. These classifiers can be used to exit the network early at various layers, making them well suited to energy-constrained applications such as IoT, embedded devices, or Fog computing. However, designing an optimized early-exit strategy is difficult and generally requires a large amount of manual fine-tuning. In this paper, we propose a way to jointly optimize this strategy together with the branches, providing an end-to-end trainable algorithm for this emerging class of neural networks. We achieve this by replacing the original (hard) output of the branches with a 'soft', differentiable approximation. In addition, we propose a regularization approach to trade off the computational efficiency of the early-exit strategy against the overall classification accuracy. We evaluate the proposed design approach on a set of image classification benchmarks, showing significant gains in accuracy and inference time.
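To illustrate the idea described in the abstract, below is a minimal PyTorch sketch of a differentiable ('soft') early-exit mechanism: each auxiliary branch emits a sigmoid gate interpreted as the probability of exiting at that branch, so the network output becomes a convex combination over all exits and gradients can flow through the exit decisions. The module names, layer sizes, and the expected-depth penalty are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftBranchNet(nn.Module):
    """Hypothetical backbone with differentiable early-exit branches (a sketch).

    Each block is followed by an auxiliary classifier and a scalar gate;
    the sigmoid gate is read as the soft probability of exiting there,
    so the prediction is a convex combination over all exits.
    """

    def __init__(self, in_dim=32, hidden=64, n_classes=10, n_blocks=3):
        super().__init__()
        dims = [in_dim] + [hidden] * n_blocks
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(n_blocks)
        )
        self.branches = nn.ModuleList(nn.Linear(hidden, n_classes) for _ in range(n_blocks))
        self.gates = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_blocks))
        self.head = nn.Linear(hidden, n_classes)  # final (backbone) classifier

    def forward(self, x):
        h = x
        logits = 0.0
        stay = torch.ones(x.shape[0], 1, device=x.device)  # prob. of not having exited yet
        expected_depth = 0.0                               # soft proxy for inference cost
        for depth, (block, branch, gate) in enumerate(
                zip(self.blocks, self.branches, self.gates), start=1):
            h = block(h)
            g = torch.sigmoid(gate(h))                 # soft exit probability here
            logits = logits + stay * g * branch(h)
            expected_depth = expected_depth + stay * g * depth
            stay = stay * (1.0 - g)
        logits = logits + stay * self.head(h)          # leftover mass goes to the final head
        expected_depth = expected_depth + stay * (len(self.blocks) + 1)
        return logits, expected_depth.mean()

# Toy usage: cross-entropy trains accuracy, while a penalty on the expected
# depth (the weight `lam` is an illustrative choice) pushes the gates toward
# earlier exits, trading computation for accuracy.
net = SoftBranchNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
logits, depth = net(x)
lam = 0.01
loss = F.cross_entropy(logits, y) + lam * depth
loss.backward()  # gradients flow through the soft exit decisions
```

At inference time the soft gates can be thresholded (e.g., exit as soon as a gate exceeds 0.5) to recover an actual early-exit policy; the expected-depth penalty above plays the same role as the regularization mentioned in the abstract, though the exact form used in the paper may differ.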
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Scardapane_Differentiable-branching_2020.pdf | Archive administrators only (contact the author) | Publisher's version (published version with the publisher's layout) | All rights reserved | 451.49 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.