Learning-based methods for Robotic control / Turrisi, Giulio. - (2022 May 20).

Learning-based methods for Robotic control

TURRISI, GIULIO
20/05/2022

Abstract

Robots are now deployed in increasingly complex scenarios, where far fewer simplifying assumptions can be made to ease control synthesis than in the past. Control engineers could once rely on a static-world assumption and on perfect knowledge of the system dynamics, since robots were essentially confined to controlled assembly lines where everything was predetermined. Under these premises, it was comparatively easy to synthesize control laws that solved the programmed task with high precision. Task complexity has since grown considerably, calling for controllers that can adapt continuously to unknown scenarios. Among the new methods, learning-based control is one of the most promising approaches in the literature today. This thesis investigates the use of this control paradigm in robotics. We start with background material on Machine Learning, discussing how a better dynamical model of the robot can be learned from sensor data alone, or how a control law can be synthesized directly from experience. Then, after a brief excursus on Optimal Control, we present our contributions to this field. Specifically, a learning-based feedback linearization controller is proposed to deal with model uncertainties in fully actuated robots. This technique is then extended to underactuated systems, where control is greatly complicated by the fact that such robots cannot follow arbitrary trajectories that are not dynamically feasible, i.e., not generated from an exact knowledge of their models. Finally, we present a contribution in the field of Reinforcement Learning, an approach that learns a controller for a given task directly through trial and error. As detailed in the first chapters, Reinforcement Learning does not guarantee that the final learned controller satisfies arbitrary constraints, which severely limits its applicability on real platforms. To address this, we propose an online mechanism in which Optimal Control is used to enhance the safety of the final control law.
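To make the feedback-linearization idea mentioned in the abstract concrete, the following is a minimal sketch in generic control-affine notation; the symbols f, g, their estimates, and the learned correction term are illustrative assumptions, not notation taken from the thesis itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a control-affine system
\[
  \dot{x} = f(x) + g(x)\,u,
\]
exact feedback linearization chooses the input
\[
  u = g(x)^{-1}\bigl(v - f(x)\bigr),
\]
so that the closed loop reduces to the linear system $\dot{x} = v$, for which
standard linear controllers can be designed. When only approximate models
$\hat{f} \approx f$ and $\hat{g} \approx g$ are available, the cancellation is
imperfect; a learning-based scheme can fit the residual dynamics from sensor
data and use it to refine the cancellation, e.g.
\[
  u = \hat{g}(x)^{-1}\bigl(v - \hat{f}(x) - \Delta_\theta(x)\bigr),
\]
where $\Delta_\theta$ is a correction term with parameters $\theta$ learned
from data.
\end{document}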
Files attached to this record

File: Tesi_dottorato_Turrisi.pdf
Access: open access
Type: Doctoral thesis
License: Creative Commons
Size: 11.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1636342