
Dynamic neural network channel execution for efficient training

Lio P.
2020

Abstract

Existing methods for reducing the computational burden of neural networks at run-time, such as parameter pruning or dynamic computational path selection, focus solely on improving computational efficiency during inference. In this work, by contrast, we propose a novel method that reduces the memory footprint and the number of computing operations required for both training and inference. Our framework efficiently integrates pruning into the training procedure by exploring and tracking the relative importance of convolutional channels. At each training step, we select only a subset of highly salient channels to execute, according to the combinatorial upper confidence bound algorithm, and run the forward and backward passes only on these activated channels, so that only their parameters are learned. This enables the efficient discovery of compact models. We validate our approach empirically on state-of-the-art CNNs (VGGNet, ResNet, and DenseNet) and on several image classification datasets. Results demonstrate that our framework for dynamic channel execution reduces computational cost by up to 4× and parameter count by up to 9×, thus reducing the memory and computational demands of discovering and training compact neural network models.
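The channel-selection step described in the abstract can be sketched as a combinatorial upper confidence bound (CUCB) rule: each channel keeps a running estimate of its saliency, and at every training step the k channels with the highest estimate-plus-exploration-bonus are executed. The sketch below is illustrative only, under stated assumptions: the function names (`cucb_select`, `cucb_update`), the bonus constant, and the per-channel reward signal are all hypothetical, since the abstract does not specify the paper's exact reward definition.

```python
import math

def cucb_select(means, counts, t, k):
    """Pick k of n channels by CUCB: empirical saliency estimate plus an
    exploration bonus that shrinks as a channel is executed more often.
    Bonus constant sqrt(3 ln t / (2 T_i)) is the standard CUCB choice,
    assumed here, not taken from the paper."""
    def ucb(i):
        if counts[i] == 0:
            return float("inf")  # never-executed channels are tried first
        return means[i] + math.sqrt(3.0 * math.log(max(t, 2)) / (2.0 * counts[i]))
    return sorted(range(len(means)), key=ucb, reverse=True)[:k]

def cucb_update(means, counts, chosen, rewards):
    """Running-mean update of saliency for the channels executed this step
    (e.g. a reward derived from the channels' contribution to the loss)."""
    for i, r in zip(chosen, rewards):
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
```

In a training loop, `cucb_select` would gate which convolutional channels run the forward and backward pass, and `cucb_update` would refresh their saliency estimates afterwards, so exploration concentrates compute on the channels that matter.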
2020
30th British Machine Vision Conference, BMVC 2019
Classification (of information); Computer vision; Neural networks
04 Conference proceedings publication::04b Conference paper in volume
Dynamic neural network channel execution for efficient training / Spasov, S. E.; Lio, P. - (2020). (Paper presented at the 30th British Machine Vision Conference, BMVC 2019, held in Cardiff, UK).
Files attached to this record

File: preprint_lio.pdf
Access: open access
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 1.08 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1720013
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: n/a