
Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation

Ciarfuglia, Thomas Alessandro
2016

Abstract

Visual ego-motion estimation, or, briefly, visual odometry (VO), is one of the key building blocks of modern SLAM systems. In the last decade, impressive results have been demonstrated in the context of visual navigation, reaching very high localization performance. However, all ego-motion estimation systems require careful parameter tuning for the specific environment in which they operate. Furthermore, even in ideal scenarios, most state-of-the-art approaches fail to handle image anomalies and imperfections, which results in less robust estimates. VO systems that rely on geometrical approaches extract sparse or dense features and match them to perform frame-to-frame (F2F) motion estimation. However, images contain much more information that can be used to further improve the F2F estimate. A very successful approach to learning new feature representations is the use of deep convolutional neural networks. Inspired by recent advances in deep networks and by previous work on learning methods applied to VO, we explore the use of convolutional neural networks to learn both the best visual features and the best estimator for the task of visual ego-motion estimation. With experiments on publicly available datasets, we show that our approach is robust with respect to blur, luminance, and contrast anomalies and outperforms most state-of-the-art approaches even in nominal conditions.
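
As a purely illustrative sketch (not the architecture described in the paper), the following PyTorch snippet shows one way a CNN can jointly learn visual features and a frame-to-frame motion estimator: two consecutive frames are stacked along the channel axis, passed through a small convolutional feature extractor, and regressed to a 6-DoF relative motion. The class name F2FEgoMotionNet, the layer sizes, the input resolution, and the 6-parameter pose output are all assumptions made for this example.

    # Illustrative sketch only: a minimal CNN that regresses frame-to-frame
    # ego-motion from a pair of consecutive images, in the spirit of the
    # approach described in the abstract. The architecture, input size, and
    # 6-DoF output parameterisation are assumptions, not the paper's network.
    import torch
    import torch.nn as nn


    class F2FEgoMotionNet(nn.Module):
        """Regress a 6-DoF relative pose (3 translation + 3 rotation
        parameters) from two consecutive frames stacked channel-wise."""

        def __init__(self):
            super().__init__()
            # Convolutional feature extractor over the stacked image pair.
            self.features = nn.Sequential(
                nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            # Fully connected regressor mapping features to the motion estimate.
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
                nn.Linear(256, 6),
            )

        def forward(self, frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
            # Stack the two RGB frames along the channel axis: (B, 6, H, W).
            x = torch.cat([frame_t, frame_t1], dim=1)
            return self.regressor(self.features(x))


    if __name__ == "__main__":
        net = F2FEgoMotionNet()
        # Dummy pair of RGB frames, batch size 2.
        f0 = torch.randn(2, 3, 128, 384)
        f1 = torch.randn(2, 3, 128, 384)
        pose = net(f0, f1)
        print(pose.shape)  # torch.Size([2, 6])

In practice such a network would be trained on pairs of consecutive frames with ground-truth relative poses (e.g., from a public odometry benchmark), minimising a regression loss on the predicted motion; the actual feature extractor and estimator used in the paper may differ substantially from this sketch.
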
Visual Learning; Visual-Based Navigation; Artificial Intelligence; Computer Science Applications; Computer Vision and Pattern Recognition
01 Journal publication::01a Journal article
Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation / Costante, Gabriele; Mancini, Michele; Valigi, Paolo; Ciarfuglia, Thomas Alessandro. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 1:1 (2016), pp. 18-25. [DOI: 10.1109/LRA.2015.2505717]
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1494393
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 171
  • Web of Science (ISI): 131