Liberati, Francesco; Atanasious, Mohab M. H.; De Santis, Emanuele; Di Giorgio, Alessandro. "A hybrid model predictive control-deep reinforcement learning algorithm with application to plug-in electric vehicles smart charging." Sustainable Energy, Grids and Networks, vol. 44, 2025. ISSN 2352-4677. doi: 10.1016/j.segan.2025.101963
A hybrid model predictive control-deep reinforcement learning algorithm with application to plug-in electric vehicles smart charging
Francesco Liberati; Mohab M. H. Atanasious; Emanuele De Santis; Alessandro Di Giorgio
2025
Abstract
This paper focuses on a novel use of deep reinforcement learning (RL) to optimally tune, in real time, a model predictive control (MPC) smart charging algorithm for plug-in electric vehicles (PEVs). The coefficients of the terminal cost function of the MPC algorithm are updated online by a neural network, which is trained offline to maximize the control performance (linked to the satisfaction of the users' charging preferences and the tracking of a power reference profile at PEV fleet level). This approach is different from and more flexible than most other approaches in the literature, which instead use deep RL to fix the MPC parametrization offline. The proposed method allows one to select a shorter MPC control window (compared to standard MPC) and/or a shorter sampling time, while improving the control performance. Simulations are presented to validate the approach: the proposed MPC-RL controller improves control performance by an average of 4.3% compared to classic MPC, while requiring less computing time.
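To make the architecture described in the abstract concrete, the following is a minimal Python sketch of one receding-horizon step under assumptions of ours: a hypothetical `policy_net` standing in for the offline-trained neural network, a quadratic tracking-plus-terminal cost built with `cvxpy`, and illustrative numerical parameters. It is not the paper's exact formulation, only an illustration of how RL-provided coefficients could enter the MPC terminal cost.

```python
# Sketch of the receding-horizon loop described in the abstract: at every
# control step a neural network (trained offline with deep RL, not shown)
# maps the current fleet state to the terminal-cost coefficients of a
# short-horizon MPC problem. All names and numbers are illustrative.
import numpy as np
import cvxpy as cp
import torch

N = 6        # short MPC horizon (steps) -- assumed
dt = 0.25    # sampling time [h] -- assumed
n_ev = 4     # PEVs in the fleet -- assumed
p_max = 7.4  # per-vehicle charging power limit [kW] -- assumed

def mpc_step(e0, e_target, p_ref, policy_net):
    """One receding-horizon step with an RL-tuned terminal cost (sketch)."""
    # 1) The neural network proposes terminal-cost coefficients from the state.
    state = torch.tensor(np.concatenate([e0, e_target, p_ref[:N]]),
                         dtype=torch.float32)
    with torch.no_grad():
        w_term = np.clip(policy_net(state).numpy(), 0.0, None)  # keep cost convex

    # 2) Short-horizon charging problem: track the fleet power reference,
    #    with each PEV's terminal deviation from its energy target weighted
    #    by the RL-provided coefficient.
    p = cp.Variable((n_ev, N), nonneg=True)                   # charging powers [kW]
    e = np.repeat(e0[:, None], N, axis=1) + dt * cp.cumsum(p, axis=1)
    tracking = cp.sum_squares(cp.sum(p, axis=0) - p_ref[:N])
    terminal = cp.sum(cp.multiply(w_term, cp.square(e_target - e[:, N - 1])))
    cp.Problem(cp.Minimize(tracking + terminal), [p <= p_max]).solve()

    # 3) Receding horizon: apply only the first control move.
    return p.value[:, 0]
```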
| File | Size | Format |
|---|---|---|
| Liberati_A-hybrid-model_2025.pdf (open access; publisher's version with the publisher's layout; Creative Commons license; note: https://doi.org/10.1016/j.segan.2025.101963) | 2.46 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


