

Real-time monocular depth estimation on embedded devices: challenges and performances in terrestrial and underwater scenarios

Papa Lorenzo (first author): Investigation; Russo Paolo (second-to-last author): Conceptualization; Amerini Irene (last author): Conceptualization

Abstract

Knowledge of environmental depth is essential in many robotics and computer vision tasks, in both terrestrial and underwater scenarios. Recent works enable depth perception from single RGB images using deep architectures such as convolutional neural networks and vision transformers, which are generally unsuitable for real-time inference on low-power embedded hardware. Moreover, such architectures are trained to estimate depth maps mainly on terrestrial scenes, owing to the scarcity of underwater depth data. To this end, we present two lightweight architectures based on optimized MobileNetV3 encoders and a specifically designed decoder, which achieve fast inference and accurate estimation on embedded devices, together with a feasibility study on predicting depth maps in underwater scenarios. Specifically, we propose the MobileNetV3_S75 configuration for inference on 32-bit ARM CPUs and the MobileNetV3_LMin configuration for 8-bit Edge TPU hardware. In underwater settings, the proposed designs achieve estimation accuracy comparable to state-of-the-art methods while offering fast inference. The proposed architectures are thus a promising approach to real-time monocular depth estimation, improving environment perception for underwater drones, lightweight robots, and Internet-of-Things devices.
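
The record carries no code, but the pipeline the abstract describes (a MobileNetV3 encoder feeding a lightweight decoder, then 8-bit post-training quantization for the Edge TPU) can be sketched as below. This is a minimal, hypothetical TensorFlow/Keras sketch, not the authors' implementation: it reads "S75" as MobileNetV3-Small with width multiplier 0.75, and the decoder layout, filter counts, input size, and calibration data are illustrative assumptions.

    import numpy as np
    import tensorflow as tf

    def build_depth_model(input_shape=(224, 224, 3), alpha=0.75):
        # Encoder: MobileNetV3-Small backbone with width multiplier 0.75
        # (our reading of "S75"); trained from scratch here for simplicity.
        encoder = tf.keras.applications.MobileNetV3Small(
            input_shape=input_shape, alpha=alpha, include_top=False, weights=None)
        x = encoder.output  # 7x7 feature map for a 224x224 input
        # Hypothetical lightweight decoder: bilinear upsampling plus separable
        # convolutions, a common choice for speed on embedded hardware.
        for filters in (128, 64, 32, 16, 8):
            x = tf.keras.layers.UpSampling2D(interpolation="bilinear")(x)
            x = tf.keras.layers.SeparableConv2D(
                filters, 3, padding="same", activation="relu")(x)
        depth = tf.keras.layers.Conv2D(1, 3, padding="same", name="depth")(x)
        return tf.keras.Model(encoder.input, depth)

    model = build_depth_model()

    # For the 8-bit Edge TPU target, full-integer post-training quantization
    # through the TFLite converter is the standard route; the calibration
    # generator below uses random placeholder images, not real data.
    def representative_data():
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()

For the 32-bit ARM CPU target (the MobileNetV3_S75 case), the float model would instead be converted to TFLite without the integer quantization steps.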
2022
IEEE International Workshop on Metrology for the Sea, 2022
depth estimation; embedded devices; real-time estimation; underwater; deep learning
04 Publication in conference proceedings::04b Conference paper in a volume
Real-time monocular depth estimation on embedded devices: challenges and performances in terrestrial and underwater scenarios / Papa, Lorenzo; Russo, Paolo; Amerini, Irene. - (2022). (Paper presented at the IEEE International Workshop on Metrology for the Sea, 2022, held in Milazzo, Messina, Italy) [10.1109/MetroSea55331.2022.9950812].
Files attached to this item
No files are associated with this item.


Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1654255

Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): 1