Workshop on attention models in robotics: Visual systems for better HRI / Zillich, Michael; Frintrop, Simone; Pirri Ardizzone, Maria Fiora; Potapova, Ekaterina; Vincze, Markus. - PRINT. - (2014), pp. 499-500. (Paper presented at the 9th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2014, held in Bielefeld, Germany, 3-6 March 2014) [10.1145/2559636.2563723].

Workshop on attention models in robotics: Visual systems for better HRI

Pirri Ardizzone, Maria Fiora
2014

Abstract

Attention is a concept of human perception that enables human subjects to select the potentially relevant parts of the vast amount of incoming sensory data, and that enables interaction with other humans through shared attention. These abilities are also of great interest for autonomous robots; consequently, interest in computationally modeling concepts of human attention has grown strongly in the robotics community over the last decade. Especially in human-robot interaction, the ability to detect what a human partner is attending to, and to act accordingly, is an important skill for a robotic system because it enables intuitive communication. Still, there is a gap in knowledge transfer between researchers in human attention and robotics researchers with their specific, often task-related, problems. Both communities can benefit from sharing ideas. In the workshop, researchers in visual and multi-modal attention can profit from the rapidly growing field of robotics, which offers new research questions with very concrete applicability to challenging problems, while robotics researchers can learn how to integrate attention to support natural, real-time HRI.
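To make the notion of a computational attention model concrete, the sketch below computes a simple bottom-up saliency map via center-surround differences, in the spirit of classic models such as Itti and Koch's. It is an illustrative assumption, not code from the workshop or from any of the authors' systems; the scale pairs, the synthetic test image, and the `saliency_map` helper are all hypothetical choices.

```python
# Illustrative sketch only: a crude bottom-up saliency map in the spirit of
# center-surround attention models (Itti-Koch style). Not the workshop's code;
# the sigma pairs and the test image below are assumptions for demonstration.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Return a saliency map for a grayscale image, rescaled to [0, 1]."""
    img = image.astype(np.float64)
    img /= img.max() + 1e-9  # normalize intensities
    saliency = np.zeros_like(img)
    # Differences of Gaussians at several scales approximate the
    # center-surround receptive fields assumed in biological attention models.
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        center = gaussian_filter(img, center_sigma)
        surround = gaussian_filter(img, surround_sigma)
        saliency += np.abs(center - surround)
    return saliency / (saliency.max() + 1e-9)

if __name__ == "__main__":
    # Synthetic test image: a bright patch on a noisy background should
    # attract the highest saliency, mimicking pop-out in human vision.
    rng = np.random.default_rng(0)
    img = rng.normal(0.2, 0.05, (128, 128))
    img[60:70, 60:70] += 0.8
    peak = np.unravel_index(np.argmax(saliency_map(img)), img.shape)
    print(f"most salient location: {peak}")  # expected inside the bright patch
```

In a real HRI pipeline such a map would be only the bottom-up component; task-driven, top-down cues (e.g., the partner's gaze or the current goal) would be combined with it to decide where the robot attends.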
2014
9th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2014
multi-modal attention; joint attention; visual search; 3D vision; saliency; attention models; visual attention
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/645446
Warning! The displayed data have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: 0