Transfer Learning for Visual Place Classification / Costante, G.; Ciarfuglia, T.; Valigi, P.; Ricci, E. - (2013). (Paper presented at the 2nd RSS Workshop on Robots in Clutter, held in Berlin).

Transfer Learning for Visual Place Classification

Ciarfuglia T;
2013

Abstract

A fundamental challenge in mobile robotics is to provide robots with the capability to move autonomously in real-world, unconstrained scenarios. In recent years this has led to an increased interest in novel learning paradigms for domain adaptation. In this paper we specifically consider the problem of visual place recognition. Current semantic place categorization approaches typically rely on supervised learning methods, which implies a time-consuming human labeling effort. Moreover, once learning has been performed, if the environmental conditions vary or the robot is moved to another location, the learned model may no longer be useful, as the novel scenario can be very different from the old one. To avoid these issues, we propose a novel transfer learning approach for visual place recognition. With our method the robot is given only some training data, possibly collected in different scenarios by other robots, and is able to decide autonomously whether and how much this knowledge is useful in the current scenario. Unlike previous approaches, our method keeps the human annotation effort to a minimum and, thanks to the adoption of a transfer risk measure, is able to automatically quantify the similarity between the old and the novel scenario. Experimental results on publicly available datasets demonstrate the effectiveness of our approach.
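
The abstract refers to a transfer risk measure that lets the robot judge how useful old training data is in a new scenario, but does not detail it here. The snippet below is therefore only a minimal, hypothetical sketch of one common proxy for such a quantity (how easily a classifier separates source-domain from target-domain features), not the authors' actual formulation; the feature vectors and rescaling are placeholders for illustration.

# Minimal sketch (not the paper's method): estimate a rough "transfer risk" as
# the separability of old-scenario vs. new-scenario descriptors. If a domain
# classifier performs near chance, the two scenarios look alike and the old
# training data is likely reusable; near-perfect separation suggests high risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=(200, 64))  # placeholder descriptors, old scenario
target_feats = rng.normal(0.3, 1.0, size=(200, 64))  # placeholder descriptors, new scenario

X = np.vstack([source_feats, target_feats])
y = np.hstack([np.zeros(len(source_feats)), np.ones(len(target_feats))])

# Cross-validated domain-classification accuracy: ~0.5 means the scenarios are
# hard to tell apart (low risk), values near 1.0 mean they differ strongly.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
transfer_risk = max(0.0, 2.0 * (acc - 0.5))  # rescaled to [0, 1]
print(f"domain classifier accuracy: {acc:.2f}, proxy transfer risk: {transfer_risk:.2f}")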
2013
2nd RSS Workshop on Robots in Clutter
Transfer learning; Autonomous robots; Place recognition
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
File: VE_2013_11573-1494408.pdf (access restricted to archive administrators; contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 3.73 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1494408