Semantic matchmaking for Kinect-based posture and gesture recognition / Ruta, Michele; Scioscia, Floriano; Di Summa, M.; Ieva, Saverio; Di Sciascio, Eugenio; Sacco, M. - ELECTRONIC. - (2014), pp. 15-22. (Paper presented at the 8th IEEE International Conference on Semantic Computing, ICSC 2014, held in Newport Beach, CA, June 16-18, 2014) [10.1109/ICSC.2014.28].
Semantic matchmaking for Kinect-based posture and gesture recognition
Ruta, Michele; Scioscia, Floriano; Ieva, Saverio; Di Sciascio, Eugenio
2014-01-01
Abstract
Innovative analysis methods applied to data extracted from off-the-shelf peripherals can provide useful results in activity recognition without requiring large computational resources. In this paper, a framework is proposed for automated posture and gesture recognition, exploiting depth data provided by a commercial tracking device. The detection problem is treated as a semantic-based resource discovery task. A general data model and the corresponding ontology provide the formal underpinning for automatic posture and gesture annotation via standard Semantic Web languages. A logic-based matchmaking process, exploiting non-standard inference services, then makes it possible to: (i) detect postures by comparing on the fly the retrieved annotations with standard posture descriptions stored as instances in a dedicated Knowledge Base; (ii) compare consecutive postures in order to recognize gestures. The framework has been implemented in a prototype tool and experimental tests have been carried out on a reference dataset. Preliminary results indicate the feasibility of the proposed approach.
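
The matchmaking step described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the posture ontology IRI, class names, joint-angle thresholds and Knowledge Base entries are invented for illustration, and a naive "missing conjuncts" count stands in for the logic-based matchmaking via non-standard inference services. It only shows how annotations derived from depth-sensor joint data might be ranked against reference postures, and how consecutive posture labels could be compared to recognize a gesture.

# Minimal sketch, NOT the paper's implementation: ontology IRI, class names,
# thresholds and KB content are hypothetical. A naive count of missing
# conjuncts replaces the non-standard inference services used in the paper.
from rdflib import Namespace

POSE = Namespace("http://example.org/posture#")  # hypothetical posture ontology

def annotate_posture(joint_angles):
    """Map raw joint angles (degrees) from the tracking device to ontology classes."""
    conjuncts = set()
    conjuncts.add(POSE.RaisedRightArm if joint_angles.get("right_shoulder", 0.0) > 70
                  else POSE.LoweredRightArm)
    conjuncts.add(POSE.BentRightElbow if joint_angles.get("right_elbow", 180.0) < 120
                  else POSE.StraightRightElbow)
    return conjuncts

# Reference posture descriptions, here reduced to flat sets of class conjuncts.
KNOWLEDGE_BASE = {
    "WaveUp": {POSE.RaisedRightArm, POSE.BentRightElbow},
    "Rest":   {POSE.LoweredRightArm, POSE.StraightRightElbow},
}

def best_match(detected, kb):
    """Rank reference postures by how many of their conjuncts are missing (0 = full match)."""
    return min(kb, key=lambda name: len(kb[name] - detected))

def recognize_gesture(posture_sequence, gesture_templates):
    """Recognize a gesture as an exact sequence of matched posture labels."""
    for gesture, template in gesture_templates.items():
        if posture_sequence == template:
            return gesture
    return None

# Usage: two consecutive frames matched to postures, then compared as a gesture.
frames = [{"right_shoulder": 20, "right_elbow": 170},
          {"right_shoulder": 85, "right_elbow": 100}]
postures = [best_match(annotate_posture(f), KNOWLEDGE_BASE) for f in frames]
print(postures)                                                       # ['Rest', 'WaveUp']
print(recognize_gesture(postures, {"RaiseArm": ["Rest", "WaveUp"]}))  # 'RaiseArm'

In the actual framework, the comparison would of course operate on full OWL annotations handled by a reasoner rather than on flat sets of classes as in this toy example.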