
An Action-tuned Neural Network Architecture for Hand Pose Estimation / Tessitore, Giovanni; Donnarumma, Francesco; Prevete, Roberto. - Print. - (2010), pp. 358-363. (Presented at the International Joint Conference on Computational Intelligence IJCCI 2010, held in Valencia, Spain, October 24-26, 2010).

An Action-tuned Neural Network Architecture for Hand Pose Estimation

Tessitore, Giovanni; Donnarumma, Francesco; Prevete, Roberto
2010

Abstract

There is growing interest in developing computational models of grasping action recognition, motivated by a wide range of applications in robotics, neuroscience, HCI, motion capture, and other research areas. In many cases a vision-based approach to grasping action recognition is particularly attractive: in HCI and robotic applications, for example, it often allows for simpler and more natural interaction. However, vision-based grasping action recognition is a challenging problem, because the large number of hand self-occlusions makes the mapping from hand visual appearance to hand pose an ill-posed inverse problem. The approach proposed here builds on the work of Santello and co-workers, which demonstrates a reduction in hand-shape variability within a given class of grasping actions. The proposed neural network architecture introduces specialized modules for each class of grasping actions and viewpoints, allowing for more robust hand pose estimation. A quantitative analysis of the proposed architecture on a synthetic data set is presented and discussed as a basis for further work.
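The abstract describes an architecture with one specialized module per grasp class. As a minimal sketch of that idea only (the class name, dimensions, and the use of linear modules below are assumptions for illustration, not details from the paper), each grasp class gets its own regressor from appearance features to hand pose, and a selected class-specific module produces the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): an appearance feature
# vector for the hand image, and a joint-angle hand pose vector.
N_FEATURES, N_JOINTS, N_CLASSES = 32, 20, 3

class ActionTunedEstimator:
    """Sketch of an action-tuned architecture: one specialized linear
    module per grasp class; the module for the recognized class maps
    appearance features to a hand pose estimate."""

    def __init__(self, n_classes, n_features, n_joints):
        # One linear regressor (weights + bias) per grasp class.
        self.W = [rng.normal(size=(n_joints, n_features))
                  for _ in range(n_classes)]
        self.b = [np.zeros(n_joints) for _ in range(n_classes)]

    def estimate(self, features, grasp_class):
        # Only the class-specific module contributes to the estimate.
        return self.W[grasp_class] @ features + self.b[grasp_class]

est = ActionTunedEstimator(N_CLASSES, N_FEATURES, N_JOINTS)
pose = est.estimate(rng.normal(size=N_FEATURES), grasp_class=1)
print(pose.shape)  # (20,)
```

The per-class specialization mirrors the paper's stated motivation: within a single grasp class, hand-shape variability is reduced, so a module tuned to that class faces a better-conditioned estimation problem than a single generic regressor would.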
ISBN: 9789898425324
Files in this record:

File: actionTuned.pdf
Description: Main article
Type: Pre-print document
License: Private/restricted access
Size: 1.35 MB
Format: Adobe PDF
Availability: not available (access restricted; a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/417545
Citations
  • Scopus: 2
  • ISI: 1