The computational model presented here, the Grasping Affordances (GA) model, provides a precise explication of the notion of affordance in the context of grasping actions carried out by monkeys. This explication is consistent with both direct perception theories and neuroscientific models of the monkey brain, insofar as the identification of grasping affordances requires, according to this model, neither object recognition processes nor access to semantic memory. Nevertheless, the model posits a cascade of complex computational processes, in the form of visuo-motor transformations, which suggests that the claim that (grasping) affordances are directly available to an acting biological system should be qualified and re-interpreted. This re-interpretation undermines the alleged dichotomy between direct and indirect perception theories, to the extent that substantive visuo-motor transformations must be posited in order to identify grasping affordances.