
Toward Domain Independence for Learning-Based Monocular Depth Estimation / Mancini, Michele; Costante, Gabriele; Valigi, Paolo; Ciarfuglia, Thomas A.; Delmerico, Jeffrey; Scaramuzza, Davide. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 2:3(2017), pp. 1778-1785. [10.1109/LRA.2017.2657002]

Toward Domain Independence for Learning-Based Monocular Depth Estimation

Ciarfuglia, Thomas A.
2017

Abstract

Modern autonomous mobile robots require a strong understanding of their surroundings in order to operate safely in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimum space and power consumption. These are highly desirable features, especially for micro aerial vehicles. To guarantee robust operation in real-world scenarios, the estimator must generalize well across diverse environments. Most existing depth estimators do not consider generalization and only benchmark their performance on publicly available datasets after dataset-specific fine-tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling are costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground-truth data. We show that this approach generalizes well across different scenarios. In addition, we show how adding long short-term memory (LSTM) layers to the network helps to alleviate, on sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. We demonstrate that the network generalizes well to different real-world environments without any fine-tuning, achieving performance comparable to state-of-the-art methods on the KITTI dataset.
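
The key architectural idea summarized in the abstract is to pair a convolutional depth estimator with long short-term memory layers so that temporal context from an image stream can inform the per-frame depth prediction. The sketch below illustrates that general idea in PyTorch; the class name, layer widths, global feature pooling, and the way the temporal state modulates the spatial features are all illustrative assumptions, not the network actually proposed in the paper.

```python
import torch
import torch.nn as nn


class SequentialDepthNet(nn.Module):
    """Illustrative CNN + LSTM depth estimator for image sequences.

    Hypothetical layout: a small convolutional encoder compresses each
    frame, an LSTM integrates per-frame features over time, and a decoder
    upsamples back to a per-pixel depth map. All sizes are placeholders.
    """

    def __init__(self, hidden_size=256):
        super().__init__()
        # Per-frame convolutional encoder (three stride-2 convs, 8x downsampling).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # LSTM over globally pooled per-frame features: the temporal context is
        # what can help resolve ambiguities such as global scale.
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden_size,
                            batch_first=True)
        self.project = nn.Linear(hidden_size, 128)
        # Decoder upsamples back to the input resolution with one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):
        # frames: (batch, time, 3, H, W), with H and W divisible by 8.
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))  # (b*t, 128, H/8, W/8)
        pooled = feats.mean(dim=(2, 3)).reshape(b, t, -1)     # one vector per frame
        temporal, _ = self.lstm(pooled)                        # (b, t, hidden_size)
        gate = torch.sigmoid(self.project(temporal)).reshape(b * t, 128, 1, 1)
        # Modulate the spatial features with the temporal context, then decode.
        depth = self.decoder(feats * gate)                     # (b*t, 1, H, W)
        return depth.reshape(b, t, 1, h, w)


# Example: a batch of 2 sequences, 4 frames each, at 96x128 resolution.
net = SequentialDepthNet()
depth_maps = net(torch.randn(2, 4, 3, 96, 128))  # -> (2, 4, 1, 96, 128)
```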
Collision avoidance; range sensing; visual-based navigation
01 Journal publication::01a Journal article
Files attached to this item
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1494349
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 59
  • Web of Science (ISI): 46