
Perceiving affordances: a computational investigation of grasping affordances / Prevete, Roberto; Tessitore, Giovanni; Catanzariti, E.; Tamburrini, Guglielmo. - In: COGNITIVE SYSTEMS RESEARCH. - ISSN 1389-0417. - 12:2(2011), pp. 122-133. [10.1016/j.cogsys.2010.07.005]

Perceiving affordances: a computational investigation of grasping affordances

PREVETE, ROBERTO; TESSITORE, GIOVANNI; TAMBURRINI, GUGLIELMO
2011

Abstract

The Grasping Affordance Model (GAM) introduced here provides a computational account of the perceptual processes that enable one to identify grasping action possibilities from visual scenes. GAM identifies the core of affordance perception with visuo-motor transformations that associate features of visually presented objects with a collection of hand grasping configurations. This account is coherent with neuroscientific models of the relevant visuo-motor functions and their localization in the monkey brain. GAM differs from other computational models of biological grasping affordances in its modeling focus, functional account, and tested abilities. Notably, by learning to associate object features with hand shapes, GAM generalizes its grasp identification abilities to a variety of previously unseen objects. Even though GAM information processing involves neither semantic memory access nor full-fledged object recognition, perceptions of (grasping) affordances are nevertheless mediated by substantive computational mechanisms, including learning of object parts, selective analysis of visual scenes, and guessing from experience.
Files in this item:

preveteEtAl_CSR_Affordances_final.pdf (not available; request a copy)
Description: Main article
Type: Post-print document
License: Private/restricted access
Size: 683.9 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/370994
Citations
  • PMC: not available
  • Scopus: 6
  • Web of Science: 4