Suriani, Vincenzo; Kaszuba, Sara; Sabbella, Sandeep R.; Riccio, Francesco; Nardi, Daniele. "S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots." Paper presented at the 10th European Conference on Mobile Robots (ECMR 2021), held virtually, Bonn, Germany, 2021. DOI: 10.1109/ECMR50962.2021.9568806.
S-AvE: Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots
Suriani, Vincenzo (first author); Kaszuba, Sara (second author); Sabbella, Sandeep R.; Riccio, Francesco; Nardi, Daniele
2021
Abstract
In order to operate and understand human commands, robots must be provided with a knowledge representation that integrates both geometric and symbolic knowledge. In the literature, such a representation is referred to as a semantic map, which enables the robot to interpret user commands by grounding them in its sensory observations. However, even though a semantic map is key to enabling cognition and high-level reasoning, building one is a complex challenge because it must generalize across diverse scenarios. As a consequence, commonly used techniques do not always guarantee rich and accurate representations of the environment and of the objects therein. In this paper, we depart from previous approaches by attacking the problem of semantic mapping from a different perspective. While existing approaches mainly focus on generating a reliable map from sensory observations, often collected with a human user teleoperating the mobile platform, we argue that the process of semantic mapping starts at the data-gathering phase and is a combination of both perception and motion. To tackle these issues, we design a new family of approaches to semantic mapping that exploit both active vision and domain knowledge to improve overall mapping performance with respect to other map-exploration methodologies.
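To make the notion of grounding a user command in a semantic map concrete, the following is a minimal illustrative sketch, not the implementation from the paper; the names `SemanticEntry`, `SemanticMap`, and `ground` are assumptions introduced here for exposition. It pairs symbolic labels with geometric anchors in the map frame and resolves a symbol mentioned in a command to the corresponding map entries.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticEntry:
    """One map entry grounding a symbolic label to a geometric observation."""
    label: str                       # symbolic class, e.g. "chair" (illustrative)
    position: tuple                  # (x, y, z) in the map frame, metres
    confidence: float                # detector confidence for this grounding

@dataclass
class SemanticMap:
    """Symbolic layer of a semantic map; a metric layer (e.g. an occupancy
    grid) would sit alongside it but is omitted here for brevity."""
    entries: list = field(default_factory=list)

    def add(self, label, position, confidence):
        self.entries.append(SemanticEntry(label, position, confidence))

    def ground(self, label):
        """Resolve a symbol from a user command to its geometric anchors."""
        return [e for e in self.entries if e.label == label]

# Toy usage: ground the command "go to the chair" to map coordinates.
m = SemanticMap()
m.add("chair", (1.2, 0.4, 0.0), 0.91)
m.add("table", (2.0, 1.1, 0.0), 0.87)
print(m.ground("chair"))
```

In the setting described by the abstract, such entries would be produced during active exploration rather than passive teleoperated data collection, with the robot choosing where to move and look so as to improve the quality of these groundings.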
File | Type | License | Size | Format | Access
---|---|---|---|---|---
Suriani_S-AvE_2021.pdf | Publisher's version (published with the publisher's layout) | All rights reserved | 1.59 MB | Adobe PDF | Archive administrators only (contact the author)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.