Powerful data analysis techniques are currently applied to 3D motion sensing devices like Microsoft Kinect for posture and gesture recognition. Though effective, they are computationally intensive and require complex training. This paper proposes an approach for on-the-fly automated posture and gesture recognition that exploits Kinect and treats detection as a semantic-based resource discovery problem. A purpose-built data model and ontology support the annotation of body postures and gestures. The proposed system automatically annotates Kinect data in a Semantic Web standard logic formalism and then attempts to recognize postures by applying semantic-based matchmaking between these descriptions and reference body poses stored in a Knowledge Base. Sequences of recognized postures are then compared in order to recognize gestures. The paper presents details of the prototype implementing the framework, as well as an early experimental evaluation on a public dataset to assess the feasibility of both the ideas and the algorithms.
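The pipeline summarized above can be sketched very loosely in Python. This is an illustrative assumption, not the paper's method: semantic matchmaking over logic-based descriptions is reduced here to feature-set overlap, and the posture names, features, and gesture templates are invented for the example.

```python
# Hypothetical sketch of the recognition pipeline described in the abstract.
# A posture annotation is simplified to a frozenset of feature labels; the
# Knowledge Base maps reference posture names to their feature sets.

def match_score(observed: frozenset, reference: frozenset) -> float:
    """Fraction of the reference posture's features found in the observation."""
    if not reference:
        return 1.0
    return len(observed & reference) / len(reference)

def recognize_posture(observed: frozenset, knowledge_base: dict):
    """Return the best-matching reference posture name and its score."""
    return max(
        ((name, match_score(observed, ref)) for name, ref in knowledge_base.items()),
        key=lambda pair: pair[1],
    )

def recognize_gesture(posture_sequence: list, gesture_templates: dict):
    """A gesture matches when its template postures occur in order."""
    for name, template in gesture_templates.items():
        it = iter(posture_sequence)
        if all(step in it for step in template):  # ordered-subsequence test
            return name
    return None

# Toy Knowledge Base and two annotated frames (both invented for illustration).
kb = {
    "standing": frozenset({"torso_vertical", "knees_straight"}),
    "sitting":  frozenset({"torso_vertical", "knees_bent"}),
}
frames = [
    frozenset({"torso_vertical", "knees_straight"}),
    frozenset({"torso_vertical", "knees_bent"}),
]
sequence = [recognize_posture(f, kb)[0] for f in frames]
print(recognize_gesture(sequence, {"sit_down": ["standing", "sitting"]}))
```

In the actual system the per-frame step is a semantic matchmaking between OWL-annotated skeleton data and reference poses, which also yields partial (approximate) matches rather than the crude containment score used here.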
Semantic matchmaking as a way for attitude discovery / Ruta, Michele; Scioscia, Floriano; Ieva, Saverio; Capurso, Giovanna; Di Sciascio, Eugenio. - ELECTRONIC. - (2019), pp. 85-90. (Paper presented at the IEEE 8th International Workshop on Advances in Sensors and Interfaces, IWASI 2019, held in Otranto, Italy, June 13-14, 2019) [10.1109/IWASI.2019.8791270].
Semantic matchmaking as a way for attitude discovery
Michele Ruta; Floriano Scioscia; Saverio Ieva; Giovanna Capurso; Eugenio Di Sciascio
2019-01-01