Markov Abstractions for PAC Reinforcement Learning in Non-Markov Decision Processes / Ronca, A.; Paludo Licks, G.; De Giacomo, G. - In: IJCAI. - ISSN 1045-0823. - (2022), pp. 3408-3415. (Paper presented at the International Joint Conference on Artificial Intelligence, held in Wien, Austria) [10.24963/ijcai.2022/473].

Markov Abstractions for PAC Reinforcement Learning in Non-Markov Decision Processes

Ronca A.; Paludo Licks G.; De Giacomo G.
2022

Abstract

Our work aims to develop reinforcement learning algorithms that do not rely on the Markov assumption. We consider the class of Non-Markov Decision Processes whose histories can be abstracted into a finite set of states while preserving the dynamics. We call such an abstraction a Markov abstraction, since it induces a Markov Decision Process over a set of states that encode the non-Markov dynamics. This phenomenon underlies the recently introduced Regular Decision Processes (as well as POMDPs where only a finite number of belief states is reachable). In all such kinds of decision processes, an agent that uses a Markov abstraction can rely on the Markov property to achieve optimal behaviour. We show that Markov abstractions can be learned during reinforcement learning. Our approach combines automata learning and classic reinforcement learning, and standard algorithms can be employed for both tasks. We show that our approach has PAC guarantees when the employed algorithms have PAC guarantees, and we also provide an experimental evaluation.
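
To make the combination concrete, the following is a minimal Python sketch of tabular Q-learning running over a Markov abstraction of a toy non-Markov environment. It only illustrates the idea stated in the abstract, not the authors' algorithm: the names (ParityEnv, ParityAbstraction, q_learning) are hypothetical, and the two-state abstraction is hand-coded here for self-containment, whereas the paper learns the abstraction during reinforcement learning via automata learning. The point is that once histories are collapsed into automaton states, the problem becomes an ordinary MDP, so any standard (including PAC) reinforcement learning algorithm applies unchanged.

# Hypothetical sketch (not the authors' algorithm): tabular Q-learning
# over a hand-coded Markov abstraction of a toy non-Markov environment.
import random
from collections import defaultdict

class ParityEnv:
    """Toy non-Markov environment: action 1 yields reward 1 only when it
    has been taken an even number of times before, a regular property of
    the history, so a two-state abstraction makes the dynamics Markov."""
    def reset(self):
        self.parity = 0  # parity of the number of action-1's taken so far
    def step(self, action):
        reward = 1.0 if (action == 1 and self.parity == 0) else 0.0
        if action == 1:
            self.parity ^= 1
        return reward

class ParityAbstraction:
    """Two-state automaton over actions; its current state is a Markov
    abstraction of the history for ParityEnv. In the paper's setting this
    automaton would be learned, not given."""
    def reset(self):
        self.state = 0
        return self.state
    def update(self, action):
        if action == 1:
            self.state ^= 1
        return self.state

def q_learning(episodes=2000, horizon=10, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)  # (abstract state, action) -> value estimate
    env, abstraction = ParityEnv(), ParityAbstraction()
    for _ in range(episodes):
        env.reset()
        s = abstraction.reset()
        for _ in range(horizon):
            # Epsilon-greedy over the abstract state, not the raw history.
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max([0, 1], key=lambda x: Q[(s, x)])
            r = env.step(a)
            s2 = abstraction.update(a)
            target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for s in (0, 1):
        print("state", s, "-> greedy action", max([0, 1], key=lambda x: Q[(s, x)]))

Running the sketch prints action 1 as the greedy choice in both abstract states: the agent learns to keep toggling because reward is only reachable from the even-parity state, a policy no history-less agent could represent.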
2022
International Joint Conference on Artificial Intelligence
non-Markov decision processes; reinforcement learning; automata
04 Publication in conference proceedings::04c Conference paper in a journal
Files attached to this record
File: Ronca_Markov_2022.pdf
Access: open access
Note: https://www.ijcai.org/proceedings/2022/0473.pdf
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 601.84 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1728605
Citations
  • Scopus: 6