
DOP: Deep Optimistic Planning with Approximate Value Function Evaluation

Francesco Riccio (first author); Roberto Capobianco (second author); Daniele Nardi (last author)
2018

Abstract

Research on reinforcement learning has demonstrated promising results across many applications and domains. Still, efficiently learning effective robot behaviors remains difficult due to unstructured scenarios, high uncertainty, and large state dimensionality (e.g., multi-agent systems or hyper-redundant robots). To alleviate this problem, we present DOP, a deep model-based reinforcement learning algorithm that exploits action values both to (1) guide the exploration of the state space and (2) plan effective policies. Specifically, we use deep neural networks to learn Q-functions that tackle the curse of dimensionality during a Monte-Carlo tree search. Our algorithm constructs upper confidence bounds on the learned value function to select actions optimistically. We implement and evaluate DOP in different scenarios: (1) a cooperative navigation problem, (2) a fetching task for a 7-DOF KUKA robot, and (3) a human-robot handover with a humanoid robot (both in simulation and on the real robot). The results show the effectiveness of DOP in the chosen applications, where action values drive exploration and reduce the computational demand of the planning process while achieving good performance.
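The abstract's central mechanism is optimistic action selection during tree search: a learned Q-network scores the actions available in a state, and an upper-confidence bonus is added so that rarely tried actions are still explored. The following is a minimal, hypothetical Python sketch of that idea only; it is not the authors' code, and the exploration constant and the exact form of the bound are illustrative assumptions.

import numpy as np

def ucb_action(q_values, visit_counts, c=1.0):
    """Select the action maximizing Q(s, a) plus an optimism bonus.

    q_values:     Q-estimates for each action in the current state, e.g. the
                  output of a neural network (assumed to be given here).
    visit_counts: how many times each action was tried from this search node.
    c:            exploration constant (illustrative value, not from the paper).
    """
    total = np.sum(visit_counts) + 1
    # Upper confidence bound: untried actions receive a very large bonus.
    bonus = c * np.sqrt(np.log(total) / (visit_counts + 1e-8))
    return int(np.argmax(q_values + bonus))

# Toy usage: the Q-network prefers action 0, but action 2 is unvisited,
# so the optimistic bound selects it for exploration.
q = np.array([0.8, 0.3, 0.5])
n = np.array([10.0, 4.0, 0.0])
print(ucb_action(q, n, c=0.5))   # -> 2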
2018
17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018
Robot Planning; Robot Learning; Deep Learning
04 Conference proceedings publication::04b Conference paper in a volume
DOP: Deep Optimistic Planning with Approximate Value Function Evaluation / Riccio, Francesco; Capobianco, Roberto; Nardi, Daniele. - ELECTRONIC. - 3:(2018), pp. 2210-2212. (Paper presented at the 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018, held in Stockholm, Sweden).
Files attached to this record

Riccio_Postprint_DOP_2018.pdf
Access: open access
Note: https://dl.acm.org/citation.cfm?id=3238123
Type: Post-print document (version after peer review, accepted for publication)
License: All rights reserved
Size: 743.96 kB
Format: Adobe PDF

Riccio_DOP_2018.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.2 MB
Format: Adobe PDF
Contact the author

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1132276
Citations
  • PubMed Central: ND
  • Scopus: 3
  • Web of Science: 1