
Exploiting Multiple Abstractions in Episodic RL via Reward Shaping / Cipollone, R.; De Giacomo, G.; Favorito, M.; Iocchi, L.; Patrizi, F. - 37 (2023), pp. 7227-7234. (Paper presented at the National Conference of the American Association for Artificial Intelligence, held in the USA) [10.1609/aaai.v37i6.25881].

Exploiting Multiple Abstractions in Episodic RL via Reward Shaping

Cipollone R.; De Giacomo G.; Favorito M.; Iocchi L.; Patrizi F.
2023

Abstract

One major limitation to the applicability of Reinforcement Learning (RL) in many practical domains is the large number of samples required to learn an optimal policy. To address this problem and improve learning efficiency, we consider a linear hierarchy of abstraction layers of the Markov Decision Process (MDP) underlying the target domain. Each layer is an MDP representing a coarser model of the one immediately below it in the hierarchy. In this work, we propose a novel form of Reward Shaping in which the solution obtained at the abstract level is used to offer rewards to the more concrete MDP, so that the abstract solution guides learning in the more complex domain. In contrast with other works in Hierarchical RL, our technique places few requirements on the design of the abstract models and is tolerant to modeling errors, making the proposed approach practical. We formally analyze the relationship between the abstract models and the exploration heuristic induced in the lower-level domain. Moreover, we prove that the method guarantees convergence to an optimal policy, and we demonstrate its effectiveness experimentally.
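The abstract describes shaping the rewards of a concrete MDP using the solution of a coarser, abstract MDP. As a hedged illustration only (not the paper's actual algorithm; the grid domain, names, and values below are invented for this sketch), the classical potential-based form of this idea uses the abstract value function as the shaping potential:

```python
# Hedged sketch: potential-based reward shaping where the potential is the
# value function of a coarse (abstract) MDP. Domain and numbers are
# illustrative assumptions, not taken from the paper.

GAMMA = 0.99

# Assumed value function of the abstract MDP (e.g., "rooms" in a grid world).
V_ABSTRACT = {"room_A": 0.0, "room_B": 0.5, "room_goal": 1.0}

def abstraction(concrete_state):
    """Map a concrete state (x, y) to its abstract state (its room)."""
    x, _ = concrete_state
    if x < 5:
        return "room_A"
    if x < 10:
        return "room_B"
    return "room_goal"

def shaped_reward(r, s, s_next, gamma=GAMMA):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).

    With the abstract value as the potential Phi, transitions that make
    progress in the abstract model receive a bonus, while potential-based
    shaping is known to preserve the set of optimal policies.
    """
    phi_s = V_ABSTRACT[abstraction(s)]
    phi_next = V_ABSTRACT[abstraction(s_next)]
    return r + gamma * phi_next - phi_s

# Crossing from room_A into room_B yields a positive shaping bonus,
# even though the environment reward r is zero:
bonus = shaped_reward(0.0, (4, 0), (5, 0))  # gamma * 0.5 - 0.0 = 0.495
```

In this sketch, moving within a single room leaves the reward unchanged, so the shaping signal only encodes the coarse guidance of the abstract solution, which matches the exploration-heuristic role described in the abstract.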
2023
National Conference of the American Association for Artificial Intelligence
Reinforcement Learning, Markov Decision Process, Hierarchical models
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1688756
Warning: the data displayed have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 1