Leveraging Reinforcement Learning for online scheduling of real-time tasks in the Edge/Fog-to-Cloud computing continuum / Proietti Mattia, Gabriele; Beraldi, Roberto. - (2021), pp. 1-9. (Paper presented at the IEEE 20th International Symposium on Network Computing and Applications (NCA), held as a virtual conference) [10.1109/NCA53618.2021.9685413].
Leveraging Reinforcement Learning for online scheduling of real-time tasks in the Edge/Fog-to-Cloud computing continuum
Proietti Mattia, Gabriele; Beraldi, Roberto
2021
Abstract
The computing continuum model is a widely accepted and used approach that makes possible applications that are very demanding in terms of low latency and high computing power. In this three-layered model, the Fog or Edge layer can be considered the weak link in the chain: the computing nodes that compose it are generally heterogeneous, and their uptime cannot be compared with that offered by the Cloud. Taking these inherent characteristics of the continuum into account, in this paper we propose a Reinforcement Learning based scheduling algorithm that makes per-job request decisions (online scheduling) and is able to maintain acceptable performance, specifically targeting real-time applications. Through a series of simulations and comparisons with other fixed scheduling strategies, we demonstrate how the algorithm is capable of deriving the best possible scheduling policy when Fog or Edge nodes have different speeds and can unpredictably fail.

File | Access | Type | License | Size | Format
---|---|---|---|---|---
ProiettiMattia_Leveraging-Reinforcement-Learning_2021.pdf | Archive administrators only (contact the author) | Publisher's version (published with the publisher's layout) | All rights reserved | 769.03 kB | Adobe PDF
ProiettiMattia_postprint_Leveraging-Reinforcement-Learning_2021.pdf (Note: DOI: 10.1109/NCA53618.2021.9685413) | Open access | Post-print (version after peer review, accepted for publication) | All rights reserved | 779.78 kB | Adobe PDF
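To give a flavor of the kind of per-job online scheduling the abstract describes, here is a minimal sketch in Python. It is not the authors' actual algorithm: it uses a single-state (bandit-style) Q-learning rule with epsilon-greedy selection, and all node speeds, failure probabilities, and the deadline are invented for illustration.

```python
import random

# Hypothetical setup: three Fog/Edge nodes with different speeds and
# failure probabilities (all values invented for illustration).
NODE_SPEED = [1.0, 0.5, 0.25]      # jobs per time unit
NODE_FAIL_P = [0.05, 0.20, 0.40]   # chance a node drops a job
DEADLINE = 3.0                     # real-time deadline per job
EPSILON, ALPHA = 0.1, 0.1          # exploration rate, learning rate

# Single-state Q-table: estimated reward of dispatching a job to each node.
q = [0.0] * len(NODE_SPEED)

def schedule_job(rng):
    """Pick a node epsilon-greedily, observe the outcome, update Q."""
    if rng.random() < EPSILON:
        node = rng.randrange(len(q))                  # explore
    else:
        node = max(range(len(q)), key=q.__getitem__)  # exploit
    # Reward 1 only if the job finishes before its deadline on a live node.
    service_time = 1.0 / NODE_SPEED[node]
    failed = rng.random() < NODE_FAIL_P[node]
    reward = 1.0 if (not failed and service_time <= DEADLINE) else 0.0
    q[node] += ALPHA * (reward - q[node])             # bandit-style update
    return node, reward

rng = random.Random(42)
for _ in range(5000):
    schedule_job(rng)
# The scheduler tends to converge on the fastest, most reliable node.
best = max(range(len(q)), key=q.__getitem__)
```

The per-job decision is the key point: each incoming request triggers one choice-observe-update cycle, so the policy keeps adapting if a node slows down or starts failing, without any prior model of the nodes.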
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.