Q-CP: Learning Action Values for Cooperative Planning

Francesco Riccio (first author); Roberto Capobianco (second author); Daniele Nardi (last author)
2018

Abstract

Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among three robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
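
The abstract describes using learned action values to focus the exploration of a Monte Carlo Tree Search. As a rough illustration of that general idea only (not the authors' actual Q-CP algorithm, which is specified in the paper), the Python sketch below biases a UCB1-style selection rule with a Q-table learned offline via Q-learning; the class names, the mixing parameter beta, and the exact scoring formula are all assumptions made for this example.

import math

class Node:
    """Statistics for one search-tree node (illustrative, not from the paper)."""
    def __init__(self, state, actions):
        self.state = state
        self.actions = actions                      # actions available in this state
        self.visits = {a: 0 for a in actions}       # per-action visit counts
        self.value = {a: 0.0 for a in actions}      # per-action accumulated returns
        self.total_visits = 0

def select_action(node, q_table, c=1.4, beta=0.5):
    """UCB1-style selection mixed with a learned Q-value prior.

    q_table[(state, action)] holds action values learned beforehand;
    beta weights the prior against the in-tree estimate (hypothetical
    parameter, for illustration only).
    """
    def score(a):
        if node.visits[a] == 0:
            return float("inf")                     # expand unvisited actions first
        tree_value = node.value[a] / node.visits[a]
        prior = q_table.get((node.state, a), 0.0)
        explore = c * math.sqrt(math.log(node.total_visits) / node.visits[a])
        return (1 - beta) * tree_value + beta * prior + explore
    return max(node.actions, key=score)

def backup(node, action, reward):
    """Propagate a rollout return into the node statistics."""
    node.visits[action] += 1
    node.total_visits += 1
    node.value[action] += reward

# Usage example: pick an action at the root given Q-values learned beforehand.
root = Node(state="s0", actions=["left", "right"])
q = {("s0", "left"): 0.2, ("s0", "right"): 0.8}
a = select_action(root, q)
backup(root, a, reward=1.0)
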
2018
2018 IEEE International Conference on Robotics and Automation (ICRA)
Robot Planning; Robot Learning; Multi-Robot Systems
04 Publication in conference proceedings::04b Conference paper in volume
Q-CP: Learning Action Values for Cooperative Planning / Riccio, Francesco; Capobianco, Roberto; Nardi, Daniele. - ELECTRONIC. - (2018), pp. 6469-6475. (Paper presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA), held in Brisbane, QLD, Australia) [10.1109/ICRA.2018.8460180].
Files attached to this record

Riccio_Preprint-Q-CP_2018.pdf

Open access

Note: DOI 10.1109/ICRA.2018.8460180
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: All rights reserved
Size: 2.32 MB
Format: Adobe PDF

Riccio_Q-CP_2018.pdf

Restricted to archive administrators only

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.73 MB
Format: Adobe PDF (Contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1132267
Citations
  • PMC: N/A
  • Scopus: 6
  • Web of Science (ISI): 5