Improving Sample Efficiency in Behavior Learning by Using Sub-optimal Planners for Robots

Emanuele Antonioni (first author): Conceptualization;
Daniele Nardi (last author): Supervision;
Francesco Riccio (second author): Supervision

2022

Abstract

The design and implementation of behaviors for robots operating in dynamic and complex environments are becoming mandatory in today's applications. Reinforcement learning consistently shows remarkable results in learning effective action policies and in achieving super-human performance in various tasks, without exploiting prior knowledge. However, in robotics, the use of purely learning-based techniques is still subject to strong limitations, foremost among them sample efficiency: such techniques are known to require large training datasets and long training sessions in order to develop effective action policies. Hence, in this paper, to alleviate this constraint and to enable learning in such robotic scenarios, we introduce SErP (Sample Efficient robot Policies), an iterative algorithm that improves the sample efficiency of learning algorithms. SErP exploits a sub-optimal planner (here implemented with a monitor-replanning algorithm) to lead the exploration of the learning agent through its initial iterations. Intuitively, SErP exploits the planner as an expert in order to enable focused exploration and to avoid portions of the search space that are not effective for solving the robot's task. Finally, to confirm our insights and to show the improvements that SErP brings, we report the results obtained in two different robotic scenarios: (1) a cartpole scenario and (2) a soccer-robot scenario within the RoboCup@Soccer SPL environment.
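The abstract describes the method only at a high level. As a minimal illustration of the core idea (this is not the authors' implementation: the env/agent/planner interfaces, the guidance probability, and its decay schedule are all assumptions), planner-guided exploration can be written as a standard reinforcement-learning loop in which the action is drawn from the sub-optimal planner with a probability that decays across iterations, so that control gradually shifts from the "expert" planner to the learned policy:

import random

def serp_style_training(env, agent, planner, episodes=500,
                        guide_prob=1.0, decay=0.99):
    # Hypothetical sketch of planner-guided exploration: early on,
    # actions mostly come from a sub-optimal planner (the "expert");
    # control gradually shifts to the learning agent.
    for episode in range(episodes):
        state = env.reset()
        done = False
        while not done:
            if random.random() < guide_prob:
                action = planner.plan(state)   # follow the expert planner
            else:
                action = agent.act(state)      # follow the learned policy
            next_state, reward, done = env.step(action)
            agent.update(state, action, reward, next_state)
            state = next_state
        guide_prob *= decay  # hand exploration over to the learner
    return agent

Under this reading, the planner prunes unpromising regions of the search space during the early, sample-hungry phase, while the decay schedule ensures the final policy is shaped by the learning algorithm rather than by the sub-optimal plans.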
Year: 2022
Conference: 24th RoboCup International Symposium, RoboCup 2021
Keywords: robotics; reinforcement learning; automated planning; sample efficiency
Type: 04 Publication in conference proceedings::04b Conference paper in volume
Improving Sample Efficiency in Behavior Learning by Using Sub-optimal Planners for Robots / Antonioni, Emanuele; Nardi, Daniele; Riccio, Francesco. - 13132 LNAI:(2022), pp. 103-114. (Paper presented at the 24th RoboCup International Symposium, RoboCup 2021, held virtually) [10.1007/978-3-030-98682-7_9].
Files attached to this item

File: Antonioni_postprint_Improving_2021.pdf
Access: open access
Note: https://doi.org/10.1007/978-3-030-98682-7_9
Type: Post-print (version following peer review, accepted for publication)
License: All rights reserved
Size: 1.92 MB
Format: Adobe PDF

File: Antonioni_Improving_2022.pdf
Access: repository administrators only (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 3.35 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1621095
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 1