Distilled Gradual Pruning with Pruned Fine-tuning

Fontana, Federico; Lanzino, Romeo; Marini, Marco Raoul; Avola, Danilo; Cinque, Luigi; Foresti, Gian Luca
2024

Abstract

Neural Networks (NNs) have been driving machine learning progress in recent years, but their increasingly large models pose challenges in resource-limited environments. Weight pruning reduces this computational demand, but often at the cost of performance degradation and long training procedures. This work introduces Distilled Gradual Pruning with Pruned Fine-tuning (DG2PF), a comprehensive algorithm that iteratively prunes pre-trained neural networks using knowledge distillation. We employ a magnitude-based unstructured pruning function that selectively removes a specified proportion of unimportant weights from the network. This function also compresses the model size efficiently while minimizing classification accuracy loss. Additionally, we introduce a simulated pruning strategy that achieves the same effect as weight recovery while maintaining stable convergence. Furthermore, we propose a multi-step self-knowledge distillation strategy to effectively transfer the knowledge of the full, unpruned network to its pruned counterpart. We validate the performance of our algorithm through extensive experimentation on diverse benchmark datasets, including CIFAR-10 and ImageNet, as well as a range of model architectures. The results highlight how our algorithm prunes and optimizes pre-trained neural networks without substantially degrading their classification accuracy while delivering significantly faster and more compact models.
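To make the two core ideas of the abstract concrete, the following is a minimal NumPy sketch of one-shot magnitude-based unstructured pruning and a temperature-softened distillation loss. This is an illustrative assumption of how such components typically look, not the paper's actual DG2PF implementation, which applies pruning gradually with pruned fine-tuning and a multi-step self-distillation schedule.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    magnitudes = np.abs(weights).ravel()
    threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
    # Keep only weights strictly above the threshold (ties at the threshold
    # are pruned as well, so at least k weights are removed).
    return weights * (np.abs(weights) > threshold)

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0) -> float:
    """KL divergence between temperature-softened teacher and student outputs
    (a standard knowledge-distillation objective; the temperature value here
    is an illustrative choice)."""
    def soften(x):
        e = np.exp((np.asarray(x, dtype=float) - np.max(x)) / temperature)
        return e / e.sum()
    p, q = soften(teacher_logits), soften(student_logits)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * temperature ** 2
```

For example, pruning `np.array([0.1, -0.5, 0.05, 2.0, -0.01, 0.3])` at 50% sparsity zeros the three smallest-magnitude entries and keeps `-0.5`, `2.0`, and `0.3`; the distillation loss is zero when student and teacher logits match and positive otherwise.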
Training; Computational modeling; Knowledge engineering; Artificial intelligence; Computer architecture; Schedules; Classification algorithms
01 Journal publication::01a Journal article
Distilled Gradual Pruning with Pruned Fine-tuning / Fontana, Federico; Lanzino, Romeo; Marini, Marco Raoul; Avola, Danilo; Cinque, Luigi; Scarcello, Francesco; Foresti, Gian Luca. - In: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE. - ISSN 2691-4581. - (2024), pp. 1-11. [10.1109/tai.2024.3366497]
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1706715
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 0