
S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots / Suriani, Vincenzo; Kaszuba, Sara; Sabbella, Sandeep R.; Riccio, Francesco; Nardi, Daniele. - (2021). (Paper presented at the 10th European Conference on Mobile Robots, ECMR 2021, held virtually in Bonn, Germany) [10.1109/ECMR50962.2021.9568806].

S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots

Suriani, Vincenzo; Kaszuba, Sara; Sabbella, Sandeep R.; Riccio, Francesco; Nardi, Daniele

2021

Abstract

In order to operate and to understand human commands, robots must be provided with a knowledge representation that integrates both geometric and symbolic knowledge. In the literature, such a representation is referred to as a semantic map, which enables the robot to interpret user commands by grounding them in its sensory observations. However, even though a semantic map is key to enabling cognition and high-level reasoning, building one is a complex challenge because it must generalize to a variety of scenarios. As a consequence, commonly used techniques do not always guarantee rich and accurate representations of the environment and of the objects therein. In this paper, we depart from previous approaches by attacking the problem of semantic mapping from a different perspective. While existing approaches mainly focus on generating a reliable map from sensory observations, often collected while a human user teleoperates the mobile platform, we argue that the process of semantic mapping starts at the data-gathering phase and is a combination of both perception and motion. To tackle these issues, we design a new family of approaches to semantic mapping that exploit both active vision and domain knowledge to improve overall mapping performance with respect to other map-exploration methodologies.
2021
10th European Conference on Mobile Robots, ECMR 2021
robotics; semantic mapping; active vision
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this product

Suriani_S-AvE_2021.pdf (archive administrators only)

Type: Publisher's version (published with the publisher's layout)
License: All rights reserved
Size: 1.59 MB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1619629
Citazioni
  • PMC: ND
  • Scopus: 6
  • Web of Science (ISI): 2