

Fast Sparse LiDAR Odometry Using Self-Supervised Feature Selection on Intensity Images

Tiziano Guadagnino (Conceptualization); Giorgio Grisetti

Abstract

Ego-motion estimation is a fundamental building block of any autonomous system that needs to navigate in an environment. In large-scale outdoor scenes, 3D LiDARs are often used for this task, as they provide a large number of range measurements at high precision. In this paper, we propose a novel approach that exploits the intensity channel of 3D LiDAR scans to compute an accurate odometry estimate at a high frequency. In contrast to existing methods that operate on full point clouds, our approach extracts a sparse set of salient points from intensity images using data-driven feature extraction architectures originally designed for RGB images. These salient points are then used to compute the relative pose between successive scans. Furthermore, we propose a novel self-supervised procedure to fine-tune the feature extraction network online during navigation, which exploits the estimated relative motion but does not require ground truth data. The experimental evaluation suggests that the proposed approach provides a solid ego-motion estimation at a much higher frequency than the sensor frame rate while improving its estimation accuracy online.
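The abstract outlines a pipeline that projects LiDAR intensity returns into an image, extracts sparse keypoints with a learned detector, and registers successive scans from the matched points. The Python sketch below illustrates two standard building blocks such a pipeline plausibly rests on: spherical projection of a point cloud into an intensity image, and closed-form rigid alignment of matched 3D keypoints (Arun's SVD method). This is a minimal illustration, not the authors' implementation; the function names, image resolution (64x900), and vertical field of view are assumptions, and the paper's learned feature extraction and matching are not reproduced here.

    # Minimal sketch (illustrative, not the paper's code): intensity-image
    # projection and relative pose from matched keypoints.
    import numpy as np

    def spherical_projection(points, intensity, h=64, w=900,
                             fov_up=np.deg2rad(25.0), fov_down=np.deg2rad(-25.0)):
        """Project an N x 3 point cloud into an h x w intensity image.
        Sensor geometry (h, w, FoV) is an assumed example configuration."""
        depth = np.linalg.norm(points, axis=1)
        valid = depth > 0.0                       # drop degenerate returns
        points, intensity, depth = points[valid], intensity[valid], depth[valid]
        yaw = np.arctan2(points[:, 1], points[:, 0])      # azimuth
        pitch = np.arcsin(points[:, 2] / depth)           # elevation
        u = 0.5 * (1.0 - yaw / np.pi) * w                 # column from azimuth
        v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h  # row from elevation
        u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)
        img = np.zeros((h, w), dtype=np.float32)
        img[v, u] = intensity                     # last point per pixel wins
        return img, u, v

    def relative_pose_svd(p_src, p_tgt):
        """Rigid transform (R, t) aligning matched 3D keypoints p_src -> p_tgt,
        i.e. p_tgt ~ R @ p_src + t, via Arun's closed-form SVD method."""
        mu_s, mu_t = p_src.mean(axis=0), p_tgt.mean(axis=0)
        H = (p_src - mu_s).T @ (p_tgt - mu_t)     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        return R, t

In a full system, the keypoints fed to relative_pose_svd would come from the learned detector applied to consecutive intensity images, typically inside an outlier-rejection loop such as RANSAC; the sketch assumes the correspondences are already given.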
Keywords: SLAM; Perception; Vision-Based Navigation
01 Journal publication::01a Journal article
Fast Sparse LiDAR Odometry Using Self-Supervised Feature Selection on Intensity Images / Guadagnino, Tiziano; Chen, Xieyuanli; Sodano, Matteo; Behley, Jens; Grisetti, Giorgio; Stachniss, Cyrill. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 7:3(2022), pp. 7597-7604. [10.1109/LRA.2022.3184454]
Files attached to this record

Guadagnino_Fast_2022.pdf
  Access: restricted to archive administrators (contact the author)
  Type: Publisher's version (published version with the publisher's layout)
  License: All rights reserved
  Size: 2.6 MB
  Format: Adobe PDF

Guadagnino_preprint_Fast_2022.pdf
  Access: open access
  Note: DOI 10.1109/LRA.2022.3184454
  Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
  License: All rights reserved
  Size: 5.96 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1650161
Citations
  • PMC: ND
  • Scopus: 12
  • Web of Science (ISI): 10