Innovative analysis methods applied to data extracted by off-the-shelf peripherals can provide useful results in activity recognition without requiring large computational resources. In this paper a framework is proposed for automated posture and gesture recognition, exploiting depth data provided by a commercial tracking device. The detection problem is treated as a semantic-based resource discovery task. A general data model and the corresponding ontology provide the formal underpinning for posture and gesture annotation via standard Semantic Web languages. A logic-based matchmaking procedure, exploiting non-standard inference services, then makes it possible to: (i) detect postures via on-the-fly comparison of the annotations with reference posture descriptions stored as instances of a dedicated Knowledge Base; (ii) compare subsequent postures in order to recognize gestures. The framework has been implemented in a prototype tool and experimental tests have been carried out on a reference dataset. Preliminary results indicate the feasibility of the proposed approach.
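The matchmaking step described above can be illustrated with a minimal sketch, assuming (hypothetically) that each posture annotation is reduced to a set of symbolic features and that a Knowledge Base template matches when all of its features are entailed by the observed annotation; the feature names and posture templates below are illustrative only and do not come from the paper:

```python
# Hypothetical sketch: logic-based posture matchmaking approximated
# as set containment. Features and templates are illustrative.

def matches(observed: set, template: set) -> bool:
    """A stored posture template matches if every feature it requires
    is present in (here: contained in) the observed annotation."""
    return template <= observed

# Toy Knowledge Base of reference posture descriptions
kb = {
    "standing": {"torso_vertical", "legs_straight"},
    "sitting":  {"torso_vertical", "knees_bent"},
}

# Annotation built from depth-sensor skeleton data (illustrative)
observed = {"torso_vertical", "legs_straight", "arms_down"}

detected = [name for name, tmpl in kb.items() if matches(observed, tmpl)]
# detected == ["standing"]
```

In the actual framework the comparison relies on Semantic Web languages and non-standard inference services rather than plain set containment; this sketch only conveys the general matching idea.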
|Title:||Semantic Matchmaking for Kinect-Based Posture and Gesture Recognition|
|Publication date:||2014|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.1142/S1793351X14400169|
|Appears in collections:||1.1 Journal article|