Improved Learning of Dynamics Models for Control / Venkatraman, Arun; Capobianco, Roberto; Pinto, Lerrel; Hebert, Martial; Nardi, Daniele; Bagnell, James. - 1:(2017), pp. 703-713. (Paper presented at the 2016 International Symposium on Experimental Robotics (ISER), held in Tokyo, Japan) [10.1007/978-3-319-50115-4_61].

Improved Learning of Dynamics Models for Control

Capobianco, Roberto; Nardi, Daniele
2017

Abstract

Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models from a small number of examples. In this paper, we present an extension to Data As Demonstrator for handling controlled dynamics, in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
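The abstract's core idea, Data As Demonstrator (DaD) extended to controlled dynamics, can be illustrated with a minimal sketch: train a one-step model on (state, action) -> next-state pairs, roll it out along recorded trajectories, and aggregate corrective pairs that map the model's own predicted states back to the true next states. The Python below is only an illustrative sketch under assumed names (fit_model, dad_train) and an assumed ridge-regression learner; it is not the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge

def fit_model(X, U, X_next):
    # Fit a one-step dynamics model on (state, action) -> next-state pairs.
    model = Ridge(alpha=1e-3)
    model.fit(np.hstack([X, U]), X_next)
    return model

def dad_train(trajectories, n_iters=5):
    # trajectories: list of (states, actions) pairs, with states shaped
    # [T+1, dx] and actions shaped [T, du].
    # Seed the dataset with the observed one-step transitions.
    X  = np.vstack([s[:-1] for s, _ in trajectories])
    U  = np.vstack([u for _, u in trajectories])
    Xn = np.vstack([s[1:] for s, _ in trajectories])
    model = fit_model(X, U, Xn)

    for _ in range(n_iters):
        new_X, new_U, new_Xn = [], [], []
        for states, actions in trajectories:
            x = states[0]
            for t in range(len(actions)):
                # Pair the rollout's current (possibly predicted) state with
                # the recorded action and the *true* next state: the ground
                # truth acts as the "demonstrator" correcting the rollout.
                new_X.append(x)
                new_U.append(actions[t])
                new_Xn.append(states[t + 1])
                # Advance the rollout with the model's own prediction.
                x = model.predict(np.hstack([x, actions[t]])[None, :])[0]
        # Aggregate the corrective pairs with the original data and refit.
        X  = np.vstack([X, np.vstack(new_X)])
        U  = np.vstack([U, np.vstack(new_U)])
        Xn = np.vstack([Xn, np.vstack(new_Xn)])
        model = fit_model(X, U, Xn)
    return model

The resulting multi-step-aware model can then be fed to a planner or controller (e.g., LQR/iLQR, as in the paper), though the controller interface above is left out of the sketch.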
Year: 2017
Conference: 2016 International Symposium on Experimental Robotics (ISER)
Keywords: Reinforcement learning; Optimal control; Dynamics learning; Sequential prediction
Publication type: 04 Publication in conference proceedings :: 04b Conference paper in volume
Files attached to this record:
File: Venkatraman_Improved-Learning_2017.pdf (access: archive administrators only)
Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 1.22 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/928064
Citations:
  • PMC: N/A
  • Scopus: 20
  • Web of Science (ISI): 3