A Latency-levelling Load Balancing Algorithm for Fog and Edge Computing / Proietti Mattia, Gabriele; Magnani, Marco; Beraldi, Roberto. - (2022), pp. 5-14. (Paper presented at the 25th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM'22), held in Montreal, Canada) [10.1145/3551659.3559048].
A Latency-levelling Load Balancing Algorithm for Fog and Edge Computing
Proietti Mattia, Gabriele; Magnani, Marco; Beraldi, Roberto
2022
Abstract
When deploying a distributed application in Fog or Edge computing environments, the average service latency across all the involved nodes can indicate how loaded a node is with respect to the others. Considering only the average CPU time or the RAM utilisation, for example, does not give a clear picture of the load situation, because these parameters are application- and hardware-agnostic: they say nothing about how the application is performing from the user's perspective, and they cannot be used for QoS-oriented load balancing of the system. Moreover, given the dispersion of the nodes and the heterogeneity of the computing devices, the need for a load balancing algorithm is clear. In this paper, we propose a load balancing approach focused on the service latency, with the objective of levelling it across all the nodes in a fully decentralized manner, so that no user experiences a worse QoS than the others. By providing a differential model of the system and an adaptive heuristic to find the solution to the problem, we show, both in simulation and in a real-world deployment, that our approach is able to level the service latency among a set of heterogeneous nodes organized in different topologies.
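The abstract only outlines the approach. As a rough illustration of what a decentralized latency-levelling heuristic can look like, the Python sketch below has each node keep a running estimate of its own service latency, compare it with the estimates reported by its neighbours, and adapt per-neighbour forwarding probabilities so that load drifts toward less loaded nodes. All names, parameters, and the update rule here are illustrative assumptions; they are not the adaptive heuristic or the differential model presented in the paper.

# Hypothetical sketch of a decentralized latency-levelling heuristic.
# It is NOT the authors' algorithm: the class, the EMA update, and the
# probability adjustment rule are assumptions made for illustration only.

import random


class Node:
    def __init__(self, name, service_time):
        self.name = name
        self.service_time = service_time   # mean time to serve one request locally
        self.avg_latency = service_time    # running estimate of the service latency
        self.neighbors = []                # directly reachable nodes (the topology)
        self.forward_prob = {}             # per-neighbour probability of offloading

    def observe(self, latency, alpha=0.2):
        """Update the local latency estimate with an exponential moving average."""
        self.avg_latency = (1 - alpha) * self.avg_latency + alpha * latency

    def adapt(self, step=0.05):
        """Shift load toward neighbours whose reported latency is lower than ours."""
        for n in self.neighbors:
            p = self.forward_prob.get(n.name, 0.0)
            if n.avg_latency < self.avg_latency:
                p = min(1.0, p + step)     # offload more to the less loaded neighbour
            else:
                p = max(0.0, p - step)     # keep more requests locally
            self.forward_prob[n.name] = p

    def route(self):
        """Decide where an incoming request is served: locally or at a neighbour."""
        for n in self.neighbors:
            if random.random() < self.forward_prob.get(n.name, 0.0):
                return n
        return self

Each node runs observe() on every completed request, periodically exchanges avg_latency with its neighbours, and calls adapt(); over time the forwarding probabilities push the per-node latencies toward a common value, which is the levelling effect the paper targets.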