
Objects and affordances: An Artificial Life simulation / Tsiotas, G.; Borghi, ANNA MARIA; Parisi, D. - (2005), pp. 2212-2217. (Paper presented at the XXVII Annual Meeting of the Cognitive Science Society, held in Stresa, 21-24 July.)

Objects and affordances: An Artificial Life simulation

BORGHI, ANNA MARIA;
2005

Abstract

We simulated organisms with an arm terminating in a hand composed of two fingers, a thumb and an index finger, each made up of two segments; their behavior was guided by a nervous system simulated with an artificial neural network. The organisms, which evolved through a genetic algorithm, lived in a two-dimensional environment containing four objects, each either large or small and either grey or black. In a baseline simulation the organisms had to learn to grasp small objects with a precision grip and large objects with a power grip. In Simulation 1 the organisms learned to perform two tasks: in Task 1 they continued to grasp objects according to their size; in Task 2 they had to decide the objects' color by responding with a precision or a power grip. Learning occurred earlier when the grip required to grasp the object and the grip required to decide its color were the same than when they differed, even though object size was irrelevant to the color task. The simulation replicates the result of an experiment by Tucker & Ellis (2001) suggesting that seeing objects automatically activates motor information about how to grasp them.
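The overall evolutionary scheme described in the abstract can be sketched in miniature. The snippet below is a minimal illustration, not the authors' model: it assumes a drastically simplified "organism" (a three-weight linear unit instead of an arm, hand, and full neural controller) and an invented fitness and selection scheme (truncation selection with Gaussian mutation), to show how a genetic algorithm can evolve the baseline mapping from object size to grip type. All names and parameters here are hypothetical.

```python
import random

random.seed(0)

# Four objects as (size, color) pairs: size 1.0 = large, 0.0 = small;
# color 1.0 = black, 0.0 = grey. (Matches the abstract's four objects.)
OBJECTS = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]

def grip(weights, obj):
    """Choose a grip from a weighted sum of the inputs plus a bias."""
    s = weights[0] * obj[0] + weights[1] * obj[1] + weights[2]
    return "power" if s > 0 else "precision"

def fitness(weights):
    """Baseline task: power grip for large objects, precision for small."""
    correct = 0
    for obj in OBJECTS:
        target = "power" if obj[0] == 1.0 else "precision"
        if grip(weights, obj) == target:
            correct += 1
    return correct

def evolve(pop_size=20, generations=50, mutation=0.5):
    """Evolve a population of weight vectors by truncation selection:
    keep the best half unchanged, refill with mutated copies."""
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[w + random.gauss(0, mutation) for w in p] for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive unchanged each generation, the best fitness in the population can never decrease, so this sketch reliably reaches the size-to-grip mapping; the paper's organisms face the much harder problem of also controlling jointed finger segments.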
2005
XXVII Annual Meeting of the Cognitive Science Society
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/929145
Warning: the data displayed have not been validated by the university.
