VBR: A Vision Benchmark in Rome / Brizi, Leonardo; Giacomini, Emanuele; Giammarino, Luca Di; Ferrari, Simone; Salem, Omar; Rebotti, Lorenzo De; Grisetti, Giorgio. - 25:(2024), pp. 15868-15874. (Paper presented at the IEEE International Conference on Robotics and Automation (ICRA), held in Yokohama, Japan) [10.1109/icra57147.2024.10611395].
VBR: A Vision Benchmark in Rome
Brizi, Leonardo; Giacomini, Emanuele; Giammarino, Luca Di; Ferrari, Simone; Salem, Omar; Rebotti, Lorenzo De; Grisetti, Giorgio
2024
Abstract
This paper presents a vision and perception research dataset collected in Rome, featuring RGB data, 3D point clouds, IMU, and GPS data. We introduce a new benchmark targeting visual odometry and SLAM, to advance research in autonomous robotics and computer vision. This work complements existing datasets by simultaneously addressing several issues, such as environment diversity, motion patterns, and sensor frequency. It uses up-to-date devices and presents effective procedures to accurately calibrate the intrinsics and extrinsics of the sensors while addressing temporal synchronization. The recordings cover multi-floor buildings, gardens, urban and highway scenarios. By combining handheld and car-based data collections, our setup can simulate any robot (quadrupeds, quadrotors, autonomous vehicles). The dataset includes an accurate 6-DoF ground truth based on a novel methodology that refines the RTK-GPS estimate with LiDAR point clouds through Bundle Adjustment. All sequences, divided into training and testing, are accessible through our website.
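The abstract mentions refining the RTK-GPS estimate with LiDAR point clouds through Bundle Adjustment. The following is a minimal illustrative sketch of that idea, not the authors' pipeline: it fuses noisy absolute GPS fixes with accurate relative displacements (a stand-in for LiDAR scan matching) by least-squares adjustment over positions only, on synthetic data. The trajectory, noise levels, and `sigma_*` weights are assumptions made for the example; the actual method estimates full 6-DoF poses.

```python
# Illustrative sketch only (assumed setup, not the VBR ground-truth pipeline):
# refine noisy GPS positions with relative-motion constraints via least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic ground-truth trajectory (T positions in 3D).
T = 50
gt = np.cumsum(np.stack([np.full(T, 1.0),
                         np.sin(np.linspace(0, np.pi, T)) * 0.3,
                         np.zeros(T)], axis=1), axis=0)

# Simulated measurements.
gps = gt + rng.normal(0.0, 0.5, gt.shape)                             # noisy absolute fixes
lidar_rel = np.diff(gt, axis=0) + rng.normal(0.0, 0.02, (T - 1, 3))   # accurate relative motion

sigma_gps, sigma_lidar = 0.5, 0.02   # assumed measurement standard deviations

def residuals(x):
    """Stacked residuals: GPS position priors plus relative-motion constraints."""
    p = x.reshape(T, 3)
    r_gps = (p - gps).ravel() / sigma_gps
    r_rel = (np.diff(p, axis=0) - lidar_rel).ravel() / sigma_lidar
    return np.concatenate([r_gps, r_rel])

sol = least_squares(residuals, gps.ravel())   # initialize from the GPS track
refined = sol.x.reshape(T, 3)

print("mean GPS error    :", np.linalg.norm(gps - gt, axis=1).mean())
print("mean refined error:", np.linalg.norm(refined - gt, axis=1).mean())
```

Running the sketch shows the refined trajectory lying much closer to the synthetic ground truth than the raw GPS track, which is the intuition behind weighting precise relative constraints against absolute but noisier fixes.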
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Brizi_VBR_2024.pdf | Restricted (repository managers only); contact the author | Publisher's version (published version with the publisher's layout) | All rights reserved | 2.3 MB | Adobe PDF |
| Brizi_postprint_VBR_2024.pdf | Open access (DOI: 10.1109/ICRA57147.2024.10611395) | Post-print (version after peer review, accepted for publication) | All rights reserved | 8.88 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.