On real-time scheduling in Fog computing: A Reinforcement Learning algorithm with application to smart cities / Mattia, Gabriele Proietti; Beraldi, Roberto. - (2022), pp. 187-193. (Paper presented at the IEEE International Conference on Pervasive Computing and Communications, held in Pisa, Italy) [10.1109/PerComWorkshops53856.2022.9767498].

On real-time scheduling in Fog computing: A Reinforcement Learning algorithm with application to smart cities

Mattia, Gabriele Proietti
;
Beraldi, Roberto
2022

Abstract

Fog Computing is today a widely used paradigm that allows computation to be distributed over a geographic area. This not only makes it possible to implement time-critical applications but also opens the study of solutions that smartly organise the traffic among a set of Fog nodes, which constitute the core of the Fog Computing paradigm. A typical smart-city setting is subject to continuously changing traffic conditions: a node that was saturated can become almost completely unloaded. This creates the need for an algorithm that meets the strict deadlines of the tasks while choosing the best scheduling policy according to the current load, which can vary at any time. In this paper, we use a Reinforcement Learning approach to design such an algorithm, starting from the power-of-random-choices paradigm, used as a baseline. Through results from our delay-based simulator, we show that this distributed reinforcement learning approach maximises the rate of tasks executed within their deadline in a way that is fair to every node, both under a fixed load and in a realistic geographic scenario.
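For readers unfamiliar with the power-of-random-choices baseline mentioned in the abstract, the following is a minimal illustrative sketch, not the authors' actual algorithm: each arriving task probes d nodes chosen uniformly at random and is dispatched to the least loaded of them. All names and parameters here are hypothetical, and node load is simplified to a plain queue length.

```python
import random

def power_of_d_choices(loads, d=2, rng=random):
    """Pick the index of the least loaded node among d random probes.

    loads -- current queue length of each Fog node
    d     -- number of nodes probed per task (d=2 is the classic choice)
    rng   -- source of randomness (injectable for reproducibility)
    """
    probed = rng.sample(range(len(loads)), d)
    return min(probed, key=lambda i: loads[i])

# Example: dispatch one task over four Fog nodes.
loads = [5, 0, 3, 7]
target = power_of_d_choices(loads, d=2)
loads[target] += 1  # the chosen node enqueues the task
```

Probing only d nodes instead of all of them keeps the signalling overhead constant while still steering most tasks away from saturated nodes, which is why it serves well as a baseline for the learned policy.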
2022
IEEE International Conference on Pervasive Computing and Communications
fog computing; scheduling; real-time; reinforcement learning; smart cities
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
File | Size | Format
ProiettiMattia_On-Real-Time_2022.pdf

Access: archive administrators only (contact the author)

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.7 MB
Format: Adobe PDF
ProiettiMattia_postprint_On-Real-Time_2022.pdf

Access: open access

Note: DOI: 10.1109/PerComWorkshops53856.2022.9767498
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 1.03 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1631848
Citations
  • PMC: ND
  • Scopus: 9
  • Web of Science (ISI): 8