
The visual encoding of tool-object affordances / Natraj, N; Pella, Y. M.; Borghi, ANNA MARIA; Wheaton, L. A.. - In: NEUROSCIENCE. - ISSN 0306-4522. - STAMPA. - 310:(2015), pp. 512-527. [10.1016/j.neuroscience.2015.09.060]

The visual encoding of tool-object affordances

BORGHI, ANNA MARIA;
2015

Abstract

The perception of tool–object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool–object affordances. Eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool–object images across three contexts: correct (e.g. hammer-nail), incorrect (e.g. hammer-paper), and spatial/ambiguous (e.g. hammer-wood); and three grasp-types: no hand, functional grasp-posture (grasping the hammer-handle), and non-functional/manipulative grasp-posture (grasping the hammer-head). There were three areas of interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool–object pairs were functionally correct/incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool–object contexts and, to a lesser extent, within the incorrect tool–object context. The grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating tool–object context. Participants also primarily focused on the object and the operant tool-end, and only sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action-priming effect wherein the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture caused the greatest disruption of the object-oriented priming effect, ostensibly because it does not afford tool–object action, given its non-functional interaction with the operant tool-end that actually engages with the object (e.g., hammer-head to nail).
The enhanced attention towards the manipulative grasp-posture may serve to encode grasp-intent. These results shed new light on how an observer gathers action-information when evaluating static tool–object scenes and reveal how contextual and grasp-specific affordances directly modulate visuospatial attention.
2015
action; affordances; eye movement; pattern recognition; perception; tool; adult; attention; female; hand strength; humans; male; psychomotor performance; visual perception; young adult; saccades; neuroscience (all)
01 Journal publication::01a Journal article
Files attached to this record
File: Natraj_Visual_2015.pdf
Access: archive administrators only
Note: Main article
Type: Post-print document (version following peer review and accepted for publication)
License: All rights reserved
Size: 1.51 MB
Format: Adobe PDF
Contact the author for access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/931674
Citations
  • PMC: 4
  • Scopus: 24
  • Web of Science: 23