
Vision-Based Holistic Scene Understanding for Context-Aware Human-Robot Interaction / DE MAGISTRIS, Giorgio; Caprari, Riccardo; Castro, Giulia; Russo, Samuele; Iocchi, Luca; Nardi, Daniele; Napoli, Christian. - 13196:(2022), pp. 310-325. (Paper presented at the 20th International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021, held online) [10.1007/978-3-031-08421-8_21].

Vision-Based Holistic Scene Understanding for Context-Aware Human-Robot Interaction

Giorgio De Magistris (first author; Methodology); Giulia Castro (Software); Samuele Russo (Conceptualization); Luca Iocchi (Writing – Review & Editing); Daniele Nardi (Writing – Review & Editing); Christian Napoli (last author; Supervision)

Abstract

Human activity recognition systems based on static images or video sequences are becoming increasingly present in our lives. Many computer vision applications, such as human-computer interaction, virtual reality, public security, smart home monitoring, and autonomous robotics, to name a few, rely heavily on human activity recognition. Basic human activities, such as “walking” and “running”, are relatively easy to recognize, whereas identifying more complex activities remains a challenging task that can be addressed by retrieving contextual information from the scene, such as objects, events, or concepts. Indeed, a careful analysis of the scene can help to recognize the human activities taking place. In this work, we address a holistic video understanding task to provide a complete semantic-level description of the scene. Our solution can bring significant improvements to human activity recognition tasks and may equip a robotic and autonomous system with contextual knowledge of the environment. In particular, we show how this vision module can be integrated into a social robot to build a more natural and realistic context-based Human-Robot Interaction. We believe that social robots must be aware of the surrounding environment in order to react in a proper and socially acceptable way, according to the different scenarios.
2022
20th International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021
Human activity recognition; Human-robot interaction; Computer vision; Image understanding; Neural Networks
04 Publication in conference proceedings::04b Conference paper in a volume
Files attached to this item

DeMagistris_postprint_Vision-Based_2022.pdf.pdf
Access: open access
Type: Post-print (version following peer review and accepted for publication)
License: All rights reserved
Size: 2.86 MB
Format: Adobe PDF

DeMagistris_Vision-Based_2022.pdf
Access: restricted to archive managers only (contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 1.13 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1651140
Citations
  • PMC: ND
  • Scopus: 7
  • Web of Science (ISI): 2