
Proietti Mattia, Gabriele; Beraldi, Roberto (2021). A study on real-time image processing applications with edge computing support for mobile devices, pp. 1-7. Paper presented at the IEEE/ACM 25th International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2021), Valencia, Spain. DOI: 10.1109/DS-RT52167.2021.9576139.

A study on real-time image processing applications with edge computing support for mobile devices

Proietti Mattia, Gabriele (first author); Beraldi, Roberto (second author)

2021

Abstract

Different libraries make it possible to perform computer vision tasks, e.g., object recognition, on almost every mobile device with computing capability. On modern smartphones, such tasks are compute-intensive, energy-hungry computations running on the GPU or on a dedicated Machine Learning (ML) processor embedded in the device. Task offloading is a strategy for moving compute-intensive tasks, and hence their energy consumption, to external computers located in the edge network or in the cloud. In this paper, we report an experimental study that measures, under different mobile computer vision set-ups, the energy reduction obtained when the inference stage of an image-processing task is moved to an edge node, and the capability to still meet real-time requirements. In particular, our experiments show that offloading the task, in our case real-time object recognition, to a node next to the user saves about 70% of battery consumption while maintaining the same frame rate (fps) that local processing can achieve.
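The trade-off described in the abstract, keeping the frame rate while shifting the energy cost of inference to an edge node, can be sketched as a per-frame scheduling decision. The function below is a hypothetical illustration and not the paper's implementation; the 33.3 ms budget corresponds to 30 fps, and all timing parameters are assumed inputs rather than values from the study.

```python
# Illustrative sketch (not the paper's method): decide, for one frame,
# whether to run object recognition locally or offload it to an edge node.

def choose_execution_site(local_ms: float,
                          uplink_ms: float,
                          edge_ms: float,
                          downlink_ms: float,
                          budget_ms: float = 33.3) -> str:
    """Return "edge" or "local" for one frame.

    local_ms    -- on-device inference time (GPU / ML processor)
    uplink_ms   -- time to send the frame to the edge node
    edge_ms     -- inference time on the edge node
    downlink_ms -- time to receive the recognition result
    budget_ms   -- per-frame deadline (33.3 ms sustains 30 fps)
    """
    offload_ms = uplink_ms + edge_ms + downlink_ms
    # Prefer the edge whenever it meets the deadline: the device then
    # spends energy only on the radio, not on running the model.
    if offload_ms <= budget_ms:
        return "edge"
    if local_ms <= budget_ms:
        return "local"
    # Neither option meets the deadline: pick the faster one.
    return "edge" if offload_ms < local_ms else "local"
```

For example, with a 50 ms local inference time and a 20 ms round trip to the edge, the edge keeps the 30 fps target while the device cannot.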
ISBN: 978-1-6654-3326-6
Files attached to this product

ProiettiMattia_A-Study-on-real-time_2021.pdf (restricted: archive managers only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 4.05 MB, Adobe PDF

ProiettiMattia_postprint_A-Study-on-real-time_2021.pdf.pdf (open access)
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 2.06 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11573/1582866
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science (ISI): 0