
A connectionist architecture for view-independent grip-aperture computation / Prevete, Roberto; Tessitore, Giovanni; Santoro, M.; Catanzariti, Ezio. - In: BRAIN RESEARCH. - ISSN 0006-8993. - PRINT. - 1225:(2008), pp. 133-145. [10.1016/j.brainres.2008.04.076]

A connectionist architecture for view-independent grip-aperture computation

PREVETE, ROBERTO; TESSITORE, GIOVANNI; CATANZARITI, EZIO
2008

Abstract

This paper addresses the problem of extracting view-invariant visual features for the recognition of object-directed actions and introduces a computational model of how these visual features are processed in the brain. In particular, in the test-bed setting of reach-to-grasp actions, grip aperture is identified as a good candidate for inclusion in a parsimonious set of high-level hand features describing overall hand movement during reach-to-grasp actions. The computational model NeGOI (neural network architecture for measuring grip aperture in an observer-independent way), which extracts grip aperture in a view-independent fashion, was developed on the basis of functional hypotheses about the cortical areas involved in visual processing. An assumption built into NeGOI is that grip aperture can be measured from the superposition of a small number of prototypical hand shapes corresponding to predefined grip-aperture sizes. The key idea underlying the NeGOI model is to introduce view-independent units (VIP units) that are selective for prototypical hand shapes, and to integrate the outputs of the VIP units in order to compute grip aperture. The distinguishing traits of the NeGOI architecture are discussed, together with test results concerning its view-independence and grip-aperture recognition properties. The overall functional organization of the NeGOI model is shown to be consistent with current functional models of the ventral visual stream, up to and including temporal area STS. Finally, the functional role of the NeGOI model is examined from the perspective of a biologically plausible architecture that provides a parsimonious set of high-level, view-independent visual features as input to mirror systems.
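The read-out idea sketched in the abstract (a few prototype-selective, view-independent units whose outputs are combined into a grip-aperture estimate) can be illustrated with a minimal population-code sketch. Everything here is an assumption for illustration: the Gaussian tuning of the units, the prototype aperture sizes, and the activation-weighted average read-out are hypothetical stand-ins, not the paper's actual NeGOI implementation.

```python
import math

# Assumed grip-aperture sizes (cm) of the prototypical hand shapes
# that the view-independent prototype (VIP) units are tuned to.
PROTOTYPE_APERTURES = [2.0, 5.0, 8.0, 11.0]
SIGMA = 2.0  # assumed tuning width of each unit


def vip_responses(observed_aperture: float) -> list[float]:
    """Gaussian response of each prototype unit to the observed hand shape,
    summarized here by a single scalar (the true aperture)."""
    return [
        math.exp(-((observed_aperture - p) ** 2) / (2 * SIGMA ** 2))
        for p in PROTOTYPE_APERTURES
    ]


def read_out_aperture(responses: list[float]) -> float:
    """Combine the unit outputs into one grip-aperture estimate:
    an activation-weighted average of the prototype apertures."""
    total = sum(responses)
    return sum(r * p for r, p in zip(responses, PROTOTYPE_APERTURES)) / total


# A hand shape between the 5 cm and 8 cm prototypes activates both,
# and the weighted average recovers the intermediate aperture.
estimate = read_out_aperture(vip_responses(6.5))
```

The point of the sketch is that only a handful of prototype units is needed: intermediate apertures are recovered by interpolation across their graded responses rather than by a dedicated unit per aperture.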
Files in this record:
File: brainResearch2008.pdf (not available)
Type: Post-print document
License: Private/restricted access
Size: 1.38 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/345927
Citations
  • PMC 3
  • Scopus 11
  • Web of Science 8