
Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features

Ciarfuglia, Thomas Alessandro
2017

Abstract

Visual self-localization in unknown environments is a crucial capability for an autonomous robot. Real-life scenarios often present critical challenges for autonomous vision-based localization, such as robustness to viewpoint and appearance changes. To address these issues, this paper proposes a novel strategy that models the visual scene by preserving its geometric and semantic structure while improving appearance invariance through a robust visual representation. Our method relies on high-level visual landmarks, consisting of appearance-invariant descriptors extracted from image patches by a pre-trained Convolutional Neural Network (CNN). In addition, during exploration the landmarks are organized into an incremental covisibility graph that is exploited at query time to retrieve candidate matching locations, improving robustness to viewpoint changes. Through the covisibility graph, the algorithm finds location similarities more effectively by exploiting the structure of the scene, which in turn allows the construction of virtual locations, i.e., artificially augmented views of a real location that enhance the loop closure ability of the robot. The proposed approach has been thoroughly analysed and tested in challenging scenarios taken from public datasets, and it has also been compared with a state-of-the-art visual navigation algorithm.
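The abstract describes the pipeline only at a high level. As a rough illustration of the kind of bookkeeping it implies, the following is a minimal Python sketch (not the authors' implementation) of an incremental covisibility graph built over CNN patch descriptors, together with a graph-expanded loop-closure query. Names such as CovisibilityMap, add_location and query are illustrative; the descriptor matching is a plain cosine-similarity threshold, and the pre-trained CNN that produces the per-patch descriptors is assumed to run elsewhere and is not shown.

# Minimal sketch, assuming each frame arrives as a list of CNN descriptors,
# one per image patch. Landmarks are graph nodes; covisibility edges link
# landmarks observed in the same frame. Requires numpy and networkx.
import numpy as np
import networkx as nx

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class CovisibilityMap:
    def __init__(self, match_thr=0.8):
        self.graph = nx.Graph()      # nodes: landmark ids, edges: covisibility
        self.descriptors = {}        # landmark id -> CNN descriptor
        self.locations = []          # per-frame list of landmark ids
        self.match_thr = match_thr
        self._next_id = 0

    def add_location(self, patch_descriptors):
        """Incrementally add a frame: associate each patch descriptor with an
        existing landmark (if similar enough) or create a new one, then link
        all landmarks seen in this frame with covisibility edges."""
        ids = []
        for d in patch_descriptors:
            best_id, best_sim = None, self.match_thr
            for lid, ld in self.descriptors.items():
                s = cosine_similarity(d, ld)
                if s > best_sim:
                    best_id, best_sim = lid, s
            if best_id is None:
                best_id = self._next_id
                self._next_id += 1
                self.descriptors[best_id] = d
                self.graph.add_node(best_id)
            ids.append(best_id)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                self.graph.add_edge(ids[i], ids[j])
        self.locations.append(ids)
        return len(self.locations) - 1

    def query(self, patch_descriptors, expand=True):
        """Return the stored location sharing the most matched landmarks with
        the query; optionally expand matches with their graph neighbours to
        emulate a 'virtual location' (an artificially augmented view)."""
        matched = set()
        for d in patch_descriptors:
            for lid, ld in self.descriptors.items():
                if cosine_similarity(d, ld) > self.match_thr:
                    matched.add(lid)
        if expand:
            for lid in list(matched):
                matched.update(self.graph.neighbors(lid))
        scores = [len(matched & set(ids)) for ids in self.locations]
        best = int(np.argmax(scores)) if scores else -1
        return best, scores

In this sketch the expand flag is the stand-in for the virtual-location idea: matched landmarks pull in their covisible neighbours from the graph, so a query can agree with a stored location even when the two views overlap only partially.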
Place Recognition; Loop Closing; CNN Features; Graph; Semantic
01 Journal publication::01a Journal article
Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features / Cascianelli, Silvia; Costante, Gabriele; Bellocchio, Enrico; Valigi, Paolo; Fravolini, Mario Luca; Ciarfuglia, Thomas Alessandro. - In: ROBOTICS AND AUTONOMOUS SYSTEMS. - ISSN 0921-8890. - 92:(2017), pp. 53-65. [10.1016/j.robot.2017.03.004]
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1494385
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 47
  • Web of Science (ISI): 39