Distilled Gradual Pruning with Pruned Fine-tuning / Fontana, Federico; Lanzino, Romeo; Marini, Marco Raoul; Avola, Danilo; Cinque, Luigi; Scarcello, Francesco; Foresti, Gian Luca. - In: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE. - ISSN 2691-4581. - 5:8(2024), pp. 4269-4279. [10.1109/tai.2024.3366497]

Distilled Gradual Pruning with Pruned Fine-tuning

Fontana, Federico; Lanzino, Romeo; Marini, Marco Raoul; Avola, Danilo; Cinque, Luigi; Foresti, Gian Luca
2024

Abstract

Neural Networks (NNs) have been driving machine learning progress in recent years, but their increasingly large models present challenges in resource-limited environments. Weight pruning reduces this computational demand, but often at the cost of performance degradation and long training procedures. This work introduces Distilled Gradual Pruning with Pruned Fine-tuning (DG2PF), a comprehensive algorithm that iteratively prunes pre-trained neural networks using knowledge distillation. We employ a magnitude-based unstructured pruning function that selectively removes a specified proportion of unimportant weights from the network. This function also yields an efficient compression of the model size while minimizing the loss in classification accuracy. Additionally, we introduce a simulated pruning strategy that achieves the same effect as weight recovery while maintaining stable convergence. Furthermore, we propose a multi-step self-knowledge distillation strategy to effectively transfer the knowledge of the full, unpruned network to its pruned counterpart. We validate the performance of our algorithm through extensive experimentation on diverse benchmark datasets, including CIFAR-10 and ImageNet, and across a range of model architectures. The results highlight how our algorithm prunes and optimizes pre-trained neural networks without substantially degrading their classification accuracy while delivering significantly faster and more compact models.
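
The abstract describes three ingredients of DG2PF: magnitude-based unstructured pruning, simulated (mask-based) pruning, and self-knowledge distillation from the unpruned network. As an illustration only, the following minimal PyTorch sketch shows how such magnitude pruning and a distillation loss are commonly implemented; the names magnitude_prune and distillation_loss, the global thresholding scheme, and the temperature and weighting parameters T and alpha are assumptions made for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_prune(model: nn.Module, prune_ratio: float) -> dict:
    """Zero the prune_ratio fraction of weights with the smallest absolute
    value (global, unstructured) and return one binary mask per tensor."""
    weight_tensors = [p for p in model.parameters() if p.dim() > 1]
    magnitudes = torch.cat([p.detach().abs().flatten() for p in weight_tensors])
    k = max(int(prune_ratio * magnitudes.numel()), 1)
    threshold = torch.kthvalue(magnitudes, k).values
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune weight matrices/kernels only, keep biases intact
            mask = (p.detach().abs() > threshold).float()
            p.data.mul_(mask)  # "simulated" pruning: weights stay in place, zeroed by the mask
            masks[name] = mask
    return masks

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Blend a soft distillation term (teacher = the frozen, unpruned network)
    with the usual cross-entropy on the hard labels."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce

A gradual schedule would call magnitude_prune repeatedly with an increasing prune_ratio and re-apply the returned masks after every optimizer step, so that pruned weights remain at zero during the distilled fine-tuning phase; the exact schedule and the multi-step self-distillation procedure are those described in the paper itself.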
Keywords: training; computational modeling; knowledge engineering; artificial intelligence; computer architecture; schedules; classification algorithms
01 Journal publication :: 01a Journal article
Files attached to this item

File: Fontana_Distilled_2024.pdf
Access: open access
Note: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10438214
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 722.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1706715
Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science: n/a