
Real-time monocular depth estimation on embedded devices: challenges and performances in terrestrial and underwater scenarios / Papa, Lorenzo; Russo, Paolo; Amerini, Irene. - (2022), pp. 50-55. (Paper presented at the IEEE International Workshop on Metrology for the Sea Learning to Measure Sea Health Parameters (MetroSea), held in Milazzo, Italy) [10.1109/MetroSea55331.2022.9950812].

Real-time monocular depth estimation on embedded devices: challenges and performances in terrestrial and underwater scenarios

Papa Lorenzo (first author; Investigation); Russo Paolo (second-to-last author; Conceptualization); Amerini Irene (last author; Conceptualization)
2022

Abstract

Knowledge of environmental depth is essential in many robotics and computer vision tasks, in both terrestrial and underwater scenarios. Recent works aim to enable depth perception from single RGB images using deep architectures, such as convolutional neural networks and vision transformers, which are generally unsuitable for real-time inference on low-power embedded hardware. Moreover, such architectures are trained to estimate depth maps mainly for terrestrial scenarios, due to the scarcity of underwater depth data. To this end, we present two lightweight architectures based on optimized MobileNetV3 encoders and a specifically designed decoder to achieve fast inference and accurate estimation on embedded devices, together with a feasibility study on predicting depth maps in underwater scenarios. Specifically, we propose the MobileNetV3_S75 configuration for inference on 32-bit ARM CPUs and the MobileNetV3_LMin configuration for 8-bit Edge TPU hardware. In underwater settings, the proposed designs achieve estimations comparable to state-of-the-art methods with faster inference. The proposed architectures are a promising approach for real-time monocular depth estimation, with the aim of improving environment perception for underwater drones, lightweight robots and Internet-of-Things devices.
2022
IEEE International Workshop on Metrology for the Sea Learning to Measure Sea Health Parameters (MetroSea)
depth estimation; embedded devices; real-time estimation; underwater; deep learning
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this product

File: Papa_postprint_Real-time_2022.pdf
Access: open access
Note: DOI 10.1109/MetroSea55331.2022.9950812
Type: Post-print document (version after peer review, accepted for publication)
License: Creative Commons
Size: 5.73 MB
Format: Adobe PDF

File: Papa_Real-time_2022.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 6.77 MB
Format: Adobe PDF (contact the author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1654255
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 1