Provably Efficient Offline Reinforcement Learning in Regular Decision Processes / Cipollone, R.; Ronca, A.; Jonsson, A.; Talebi, M. S. - Vol. 36 (2023). (Paper presented at the NeurIPS conference, held at the Ernest N. Morial Convention Center, USA).

Provably Efficient Offline Reinforcement Learning in Regular Decision Processes

Cipollone R.; Ronca A.
2023

Abstract

This paper deals with offline (or batch) Reinforcement Learning (RL) in episodic Regular Decision Processes (RDPs). RDPs are the subclass of Non-Markov Decision Processes in which the dependency on the history of past events can be captured by a finite-state automaton. We consider a setting where the automaton underlying the RDP is unknown, and a learner strives to learn a near-optimal policy using pre-collected data, in the form of non-Markov sequences of observations, without further exploration. We present RegORL, an algorithm that suitably combines automata learning techniques and state-of-the-art algorithms for offline RL in MDPs. RegORL has a modular design that allows one to use any off-the-shelf offline RL algorithm for MDPs. We report a non-asymptotic high-probability sample complexity bound for RegORL to yield an ε-optimal policy, which brings out a notion of concentrability relevant for RDPs. Furthermore, we present a sample complexity lower bound for offline RL in RDPs. To the best of our knowledge, this is the first work presenting a provably efficient algorithm for offline learning in RDPs.
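The modular two-stage design described in the abstract (first learn the finite-state automaton from the offline episodes, then relabel each observation with the automaton state reached so far and hand the now-Markov data to any off-the-shelf offline RL algorithm for MDPs) can be pictured roughly as follows. This is only a conceptual Python sketch, not the authors' implementation: the routines learn_automaton and offline_mdp_rl, and the exact alphabet the automaton reads, are placeholders introduced here for illustration.

# Hypothetical sketch of a RegORL-style pipeline, assuming placeholder
# routines for automaton learning and for offline RL in MDPs.
from typing import Callable, Hashable, List, Tuple

Episode = List[Tuple[Hashable, Hashable, float]]  # (observation, action, reward)


def relabel_episodes(
    episodes: List[Episode],
    initial_state: Hashable,
    transition: Callable[[Hashable, Hashable], Hashable],
) -> List[Episode]:
    """Replace each observation with the automaton state reached so far,
    turning non-Markov observation sequences into (approximately) Markov ones."""
    relabeled = []
    for episode in episodes:
        state = initial_state
        markov_episode = []
        for observation, action, reward in episode:
            state = transition(state, observation)  # advance the automaton on the new observation
            markov_episode.append((state, action, reward))
        relabeled.append(markov_episode)
    return relabeled


def reg_orl_sketch(episodes, learn_automaton, offline_mdp_rl):
    """Conceptual two-stage pipeline: automaton learning, then offline RL in the induced MDP."""
    initial_state, transition = learn_automaton(episodes)      # stage 1: placeholder learner
    markov_dataset = relabel_episodes(episodes, initial_state, transition)
    return offline_mdp_rl(markov_dataset)                       # stage 2: any off-the-shelf offline RL algorithm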
2023
NeurIPS
Reinforcement Learning; Offline Reinforcement Learning; Regular Decision Processes; Sample complexity; Automata; Batch Reinforcement Learning
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this product
File: Cipollone_Provably_2023.pdf
Access: open access
Note: https://proceedings.neurips.cc/paper_files/paper/2023/file/7bf3e93543a612b75b6373178ba1faa4-Paper-Conference.pdf
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 644.83 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1717851
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science (ISI): 0