
Chance-Constrained Control with Lexicographic Deep Reinforcement Learning

Giuseppi A. (co-first author); Pietrabissa A. (co-first author)
2020

Abstract

This paper proposes a lexicographic Deep Reinforcement Learning (DeepRL) approach to chance-constrained Markov Decision Processes, in which the controller seeks to ensure that the probability of satisfying the constraint stays above a given threshold. Standard DeepRL approaches require i) the constraints to be included as additional weighted terms in the cost function, in a multi-objective fashion, and ii) the introduced weights to be tuned during the training phase of the Deep Neural Network (DNN) according to the probability thresholds. The proposed approach, instead, requires separately training one constraint-free DNN and one DNN associated with each constraint and then, at each time-step, selecting which DNN to use depending on the observed system state. The presented solution requires no hyper-parameter tuning beyond the standard DNN ones, even if the probability thresholds change. A lexicographic version of the well-known DeepRL algorithm DQN is also proposed and validated via simulations.
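The per-time-step selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names, the stand-in networks, and the specific switching criterion (a constraint's DNN takes over when even its best action's estimated satisfaction probability falls below the threshold) are assumptions made for the sake of the example.

```python
import numpy as np

def select_action(state, q_free, q_constraints, thresholds):
    """Choose which trained network drives the action at this time-step.

    Constraints take lexicographic priority: the first constraint whose
    satisfaction probability is at risk in the current state acts through
    its own DNN; otherwise the constraint-free DNN optimizes the main cost.
    """
    for q_c, delta in zip(q_constraints, thresholds):
        probs = q_c(state)            # per-action satisfaction estimates
        if probs.max() < delta:       # constraint at risk in this state
            return int(probs.argmax())  # act to restore the constraint
    return int(q_free(state).argmax())  # safe: pursue the main objective

# Toy stand-ins for trained DNNs over 3 actions (illustration only).
q_free = lambda s: np.array([0.2, 0.9, 0.1])
q_c1 = lambda s: (np.array([0.85, 0.4, 0.6]) if s == "risky"
                  else np.array([0.99, 0.98, 0.97]))

print(select_action("safe", q_free, [q_c1], [0.9]))   # 1: free DNN acts
print(select_action("risky", q_free, [q_c1], [0.9]))  # 0: constraint DNN acts
```

Because each network is trained separately, changing a probability threshold only alters this selection step; no cost-function weights need retuning.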
constrained control; deep reinforcement learning; Markov decision processes
01 Journal publication::01a Journal article
Chance-Constrained Control with Lexicographic Deep Reinforcement Learning / Giuseppi, A.; Pietrabissa, A. - In: IEEE CONTROL SYSTEMS LETTERS. - ISSN 2475-1456. - 4:3(2020), pp. 1-97. [10.1109/LCSYS.2020.2979635]
Files attached to this item
File / Size / Format
Giuseppi_Preprint_Chance-Constrained_2020.pdf

Open access

Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: All rights reserved
Size: 515.94 kB
Format: Adobe PDF
Giuseppi_Chance-Constrained_2020.pdf

Archive administrators only

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.07 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1382636
Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 5