
Optimal control of active distribution network using deep reinforcement learning

Martirano L.
2022

Abstract

Distribution network power losses are responsible for a large portion of overall system energy losses. In an active distribution network (ADN), the growing penetration of distributed generators (DGs) leads to bidirectional power flows that cause large voltage excursions. To address this problem, this paper proposes a deep reinforcement learning strategy for optimal voltage control of an ADN that incorporates a solid-state transformer (SST). The proposed scheme computes optimal setpoints for the SST's reactive power to mitigate ADN bus voltage excursions. The optimal control problem is formulated as a Markov decision process (MDP) with continuous state and action spaces. The deep deterministic policy gradient (DDPG) algorithm is used to learn the reactive power control policy, determining optimal actions from given states through data-driven deep neural networks (DNNs). Numerical simulations on a modified IEEE 33-bus system in MATLAB show that the proposed strategy keeps all bus voltages within allowable limits and reduces system power losses.
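To make the MDP formulation in the abstract concrete, the Python sketch below outlines a minimal environment for SST reactive power control. This is an illustrative assumption, not the paper's implementation: the class name, the linearized voltage sensitivities, the ±5% voltage band, and the reward weights are hypothetical placeholders for the actual 33-bus power-flow model used in the paper.

```python
import numpy as np

class SSTVoltageControlEnv:
    """Sketch of the MDP: state = bus voltage magnitudes (p.u.),
    action = continuous SST reactive power setpoint, reward penalizes
    voltage excursions and a proxy for network losses. All numbers
    are illustrative assumptions, not values from the paper."""

    def __init__(self, n_bus=33, q_max=1.0):
        self.n_bus = n_bus
        self.q_max = q_max  # SST reactive power limit (p.u.)
        # Assumed linear voltage sensitivity dV/dQ per bus, standing in
        # for an actual power-flow solution of the 33-bus feeder.
        self.dv_dq = np.linspace(0.01, 0.05, n_bus)
        self.v = np.ones(n_bus)

    def reset(self):
        # Random load disturbance pulls voltages away from 1.0 p.u.
        self.v = 1.0 + np.random.uniform(-0.08, 0.03, self.n_bus)
        return self.v.copy()

    def step(self, action):
        q = float(np.clip(action, -self.q_max, self.q_max))
        self.v = self.v + self.dv_dq * q  # linearized voltage response
        # Penalize voltage magnitudes outside the assumed 0.95-1.05 band.
        excursion = np.sum(np.maximum(np.abs(self.v - 1.0) - 0.05, 0.0))
        loss_proxy = 0.01 * q ** 2  # crude surrogate for network losses
        reward = -(10.0 * excursion + loss_proxy)
        return self.v.copy(), reward, False, {}
```

A DDPG agent, with actor and critic DNNs as described in the abstract, would interact with such an environment, mapping the continuous voltage state to a continuous reactive power action.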
2022
2022 IEEE International conference on environment and electrical engineering and 2022 IEEE Industrial and commercial power systems Europe, EEEIC / I and CPS Europe 2022
active distribution network; deep deterministic policy gradient; deep neural networks; optimal voltage control; reinforcement learning; solid state transformer
04 Conference proceedings publication::04b Conference paper in volume
Optimal control of active distribution network using deep reinforcement learning / Tahir, Y.; Nadeem Khan, M. F.; Sajjad, I. A.; Martirano, L.. - (2022), pp. 1-6. (Intervento presentato al convegno 2022 IEEE International conference on environment and electrical engineering and 2022 IEEE Industrial and commercial power systems Europe, EEEIC / I and CPS Europe 2022 tenutosi a Prague; Czech Republic) [10.1109/EEEIC/ICPSEurope54979.2022.9854795].
Files attached to this product

File: Tahir_Optimal control_2022.pdf (restricted: archive administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 646.27 kB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1668177
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 1