Transfer and continual supervised learning for robotic grasping through grasping features

Monorchio, Luca; Capotondi, Marco; Corsanici, Mario; Villa, Wilson; De Luca, Alessandro; Puja, Francesco
2022

Abstract

We present a Transfer and Continual Learning method for robotic grasping tasks, based on small vision-depth (RGBD) datasets and realized through the use of Grasping Features. Given a network architecture composed of a CNN (Convolutional Neural Network) followed by an FCC (Fully Connected Cascade Neural Network), we exploit high-level features specific to the grasping task, as extracted by the convolutional network from RGBD images. These features are more descriptive of a grasping task than purely visual ones, and thus more effective for transfer learning purposes. Since datasets for visual grasping are less common than those for image recognition, we also propose an efficient way to generate such data using only simple geometric structures. This reduces the computational burden of the FCC and yields better performance with the same amount of data. Simulation results using the collaborative UR-10 robot and a jaw gripper are reported to show the quality of the proposed method.
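
The abstract describes a CNN feature extractor followed by an FCC head, with transfer learning performed by reusing the extracted grasping features on new tasks. As a rough illustration only, the following is a minimal PyTorch-style sketch of such a pipeline under these assumptions; all class names, layer sizes, and the grasp output dimension are hypothetical and are not taken from the paper.

# Minimal sketch of the CNN + FCC pipeline described in the abstract.
# All names and dimensions are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class GraspingFeatureCNN(nn.Module):
    """CNN mapping a 4-channel RGBD image to a grasping-feature vector."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, feature_dim)

    def forward(self, rgbd):
        return self.proj(self.conv(rgbd).flatten(1))

class FullyConnectedCascade(nn.Module):
    """FCC head: each layer also receives the outputs of all previous layers."""
    def __init__(self, in_dim, hidden=64, n_layers=3, out_dim=5):
        super().__init__()
        self.layers = nn.ModuleList()
        dim = in_dim
        for _ in range(n_layers):
            self.layers.append(nn.Linear(dim, hidden))
            dim += hidden  # cascade: later layers see all earlier outputs
        self.out = nn.Linear(dim, out_dim)  # e.g. grasp pose/quality parameters

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, torch.tanh(layer(x))], dim=1)
        return self.out(x)

# Transfer learning as described in the abstract: keep the pretrained grasping-feature
# extractor fixed and retrain only the lightweight FCC head on a new, small dataset.
cnn, fcc = GraspingFeatureCNN(), FullyConnectedCascade(in_dim=128)
for p in cnn.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(fcc.parameters(), lr=1e-3)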
2022
1st International Workshop on Continual Semi-Supervised Learning, CSSL 2021
continual learning; robotic grasping; transfer learning
04 Publication in conference proceedings::04b Conference paper in a volume
Transfer and continual supervised learning for robotic grasping through grasping features / Monorchio, Luca; Capotondi, Marco; Corsanici, Mario; Villa, Wilson; De Luca, Alessandro; Puja, Francesco. - (2022), pp. 33-47. (1st International Workshop on Continual Semi-Supervised Learning, CSSL 2021, Virtual, Online) [10.1007/978-3-031-17587-9_3].
Files attached to this record

File: Monorchio_Transfer-and-Continual_2022.pdf
Access: restricted (archive managers only); contact the author
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.93 MB
Format: Adobe PDF

File: Monorchio_postprint_Transfer-and-Continual_2022.pdf
Access: open access
Type: Post-print (version after peer review, accepted for publication)
License: Creative Commons
Size: 1.46 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1746637
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0