
MD-SLAM: Multi-cue Direct SLAM / Di Giammarino, Luca; Brizi, Leonardo; Guadagnino, Tiziano; Stachniss, Cyrill; Grisetti, Giorgio. - (2022), pp. 11047-11054. (Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, held in Kyoto) [10.1109/IROS47612.2022.9981147].

MD-SLAM: Multi-cue Direct SLAM

Di Giammarino, Luca (First); Brizi, Leonardo (Second); Guadagnino, Tiziano; Stachniss, Cyrill; Grisetti, Giorgio (Last)
2022

Abstract

Simultaneous Localization and Mapping (SLAM) systems are fundamental building blocks for any autonomous robot navigating in unknown environments. The SLAM implementation heavily depends on the sensor modality employed on the mobile platform. For this reason, assumptions on the scene's structure are often made to maximize estimation accuracy. This paper presents a novel direct 3D SLAM pipeline that works independently for RGB-D and LiDAR sensors. Building upon prior work on multi-cue photometric frame-to-frame alignment [4], our proposed approach provides an easy-to-extend and generic SLAM system. Our pipeline requires only minor adaptations within the projection model to handle different sensor modalities. We couple a position tracking system with an appearance-based relocalization mechanism that handles large loop closures. Loop closures are validated by the same direct registration algorithm used for odometry estimation. We present comparative experiments with state-of-the-art approaches on publicly available benchmarks using RGB-D cameras and 3D LiDARs. Our system performs well on heterogeneous datasets compared to other sensor-specific methods while making no assumptions about the environment. Finally, we release an open-source C++ implementation of our system.
2022
IEEE/RSJ International Conference on Intelligent Robots and Systems
SLAM; Localization; Mapping
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1673357
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: n/a
  • Web of Science: 1