
ProtoCRL: Prototype-based Network for Continual Reinforcement Learning / Proietti, Michela; Wurman, Peter R.; Stone, Peter; Capobianco, Roberto. - In: REINFORCEMENT LEARNING JOURNAL. - ISSN 2996-8569. - (2025). (Paper presented at the Reinforcement Learning Conference, held in Edmonton).

ProtoCRL: Prototype-based Network for Continual Reinforcement Learning

Michela Proietti; Roberto Capobianco
2025

Abstract

The purpose of continual reinforcement learning is to train an agent on a sequence of tasks such that it learns the ones that appear later in the sequence while retaining the ability to perform the tasks that appeared earlier. Experience replay is a popular method used to make the agent remember previous tasks, but its effectiveness strongly relies on the selection of experiences to store. Kompella et al. (2023) proposed organizing the experience replay buffer into partitions, each storing transitions leading to a rare but crucial event, such that these key experiences get revisited more often during training. However, the method is sensitive to the manual selection of event states. To address this issue, we introduce ProtoCRL, a prototype-based architecture leveraging a variational Gaussian mixture model to automatically discover effective event states and build the associated partitions in the experience replay buffer. The proposed approach is tested on a sequence of MiniGrid environments, demonstrating the agent's ability to adapt and learn new skills incrementally.
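To make the idea in the abstract concrete, here is a minimal illustrative sketch (not the authors' implementation) of the two ingredients it describes: a variational Gaussian mixture model that clusters state features into candidate "event states", and a replay buffer partitioned by the resulting cluster assignments. The toy data, component count, and buffer sizes are assumptions for illustration only; scikit-learn's `BayesianGaussianMixture` is used as the variational GMM.

```python
import numpy as np
from collections import defaultdict, deque
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy state features: three loose clusters standing in for distinct events.
states = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(100, 2))
    for c in ([0.0, 0.0], [3.0, 3.0], [0.0, 3.0])
])

# The variational treatment can prune unused components, so we over-specify
# n_components and let a small weight concentration prior shrink the extras.
vgmm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=0.01,
    random_state=0,
).fit(states)
labels = vgmm.predict(states)

# Partitioned replay buffer: one bounded FIFO queue per discovered cluster,
# so transitions tied to each candidate event state are stored separately
# and can be revisited at a controlled rate during training.
buffer = defaultdict(lambda: deque(maxlen=1000))
for s, k in zip(states, labels):
    transition = (s, None, 0.0, s)  # (state, action, reward, next_state) stub
    buffer[int(k)].append(transition)

print("non-empty partitions:", len(buffer))
```

A training loop would then sample minibatches across partitions (e.g. uniformly over partitions rather than over raw transitions), which is what gives rare event transitions a higher revisit frequency than a flat buffer would.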
Reinforcement Learning Conference
reinforcement learning; continual reinforcement learning; prototype-based networks; experience replay
04 Conference proceedings publication::04c Conference paper in journal
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1742852