
Efficient Neural Network Reduction for AI-on-the-edge Applications through Structural Compression

Puglisi, Adriano; Monti, Flavia; Napoli, Christian; Mecella, Massimo
2025

Abstract

Modern neural networks often rely on overparameterized architectures to ensure stability and accuracy, but in many real-world scenarios large models are unnecessarily expensive to train and deploy. This is especially true in Internet of Things (IoT) and edge computing settings, where computational resources and available memory are severely limited. Reducing the size of a neural network without compromising its ability to solve the target task remains a practical challenge, especially when the goal is to simplify the architecture itself, not just the weight space. To address this problem, we introduce ImproveNet, a simple and general method that reduces the size of a neural network without compromising its ability to solve the original task. The approach requires no pre-trained model, architecture-specific knowledge, or manual tuning. Starting from a standard-sized network and a standard training configuration, ImproveNet monitors the model's performance during training. Once the performance requirements are met, it reduces the network by resizing feature maps or removing internal layers, making it ready for AI-on-the-edge deployment and execution.
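
The abstract describes a monitor-then-shrink training loop: train a standard-sized network, check performance against a requirement, and apply a structural reduction once the requirement is met. The following Python/PyTorch sketch illustrates one way such a loop could look. It is purely an assumption-based illustration, not the authors' implementation: the MLP builder, the accuracy threshold, the halve-widths-and-drop-a-layer schedule, and the retrain-from-scratch choice are all hypothetical details of this sketch.

    # Illustrative sketch of an ImproveNet-style monitor-then-shrink loop.
    # All names (build_mlp, target_acc, the shrink rule, ...) are assumptions,
    # not details taken from the paper.
    import torch
    import torch.nn as nn

    def build_mlp(hidden_sizes, in_features=784, num_classes=10):
        # Layer count and widths define the structure being compressed.
        layers = []
        for h in hidden_sizes:
            layers += [nn.Linear(in_features, h), nn.ReLU()]
            in_features = h
        layers.append(nn.Linear(in_features, num_classes))
        return nn.Sequential(*layers)

    def evaluate(model, loader):
        # Validation accuracy, used as the "performance requirement" check.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x).argmax(dim=1)
                correct += (pred == y).sum().item()
                total += y.numel()
        return correct / total

    def train_and_shrink(train_loader, val_loader, target_acc=0.95,
                         hidden_sizes=(256, 256, 128), epochs=50):
        sizes = list(hidden_sizes)
        model = build_mlp(sizes)
        opt = torch.optim.Adam(model.parameters())
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            model.train()
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
            if evaluate(model, val_loader) >= target_acc and len(sizes) > 1:
                # Structural reduction: halve feature-map widths and drop the
                # last hidden layer (one plausible reading of "resizing feature
                # maps or removing internal layers"), then continue training
                # the smaller model from scratch.
                sizes = [max(8, s // 2) for s in sizes[:-1]]
                model = build_mlp(sizes)
                opt = torch.optim.Adam(model.parameters())
        return model, sizes

Whether the reduced model inherits weights from the larger one or restarts training, as assumed here, is a design choice the abstract does not settle; the sketch only shows where the structural reduction step slots into the training loop.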
IoT; Edge AI; Deep Model Optimization; Neural Network Compression
01 Journal publication::01a Journal article
Efficient Neural Network Reduction for AI-on-the-edge Applications through Structural Compression / Puglisi, Adriano; Monti, Flavia; Napoli, Christian; Mecella, Massimo. - In: WORKS IN PROGRESS IN EMBEDDED COMPUTING JOURNAL. - ISSN 2980-7298. - 11:1(2025), pp. 20-23. [10.64552/wipiec.v11i1.89]
Files attached to this record
File: Puglisi_Efficient-Neural_2025.pdf
Access: open access
Note: DOI: https://doi.org/10.64552/wipiec.v11i1.89
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 1.39 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1752354