A new tool for gestural action recognition to support decisions in emotional framework

BEVILACQUA, Vitoantonio; MASTRONARDI, Giuseppe
2014-01-01

Abstract

Introduction and objective: the purpose of this work is to design and implement an innovative tool that recognizes 16 different human gestural actions and uses them to predict 7 different emotional states. The solution proposed in this paper is based on the RGB and depth information of 2D/3D images acquired from a commercial RGB-D sensor, the Kinect. Materials: the dataset is a collection of several human actions performed by different actors. Each actor performs each action three times per video; 20 actors perform 16 different actions, both seated and upright, for a total of 40 videos per actor. Methods: human gestural actions are recognized by extracting features, such as angles and distances related to the joints of the human skeleton, from the RGB and depth images. Emotions are selected according to the state of the art. Experimental results: despite the presence of very similar actions, the overall accuracy reached is approximately 80%. Conclusions and future work: the proposed approach appears to be background- and speed-independent, and it will be used in the future as part of multimodal emotion recognition software also based on facial expression and speech analysis.
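The paper itself is not attached to this record, so the sketch below is purely illustrative of the kind of feature extraction the abstract describes: angles and distances computed from the 3D skeleton joints that the Kinect tracker returns. All joint names and coordinate values here are hypothetical placeholders, not the authors' actual feature set, and the torso-length normalization is one plausible way to obtain the scale and distance independence the abstract hints at.

```python
import numpy as np

# Hypothetical 3D joint positions (metres) as a Kinect skeleton tracker
# might return them; names and values are illustrative only.
joints = {
    "shoulder_right":  np.array([0.25, 0.45, 2.10]),
    "elbow_right":     np.array([0.40, 0.20, 2.05]),
    "wrist_right":     np.array([0.35, 0.05, 1.80]),
    "hip_center":      np.array([0.00, 0.00, 2.00]),
    "shoulder_center": np.array([0.00, 0.50, 2.05]),
}

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def normalized_distance(a, b, torso_length):
    """Inter-joint distance scaled by torso length, so the feature does not
    depend on the actor's body size or on the distance to the sensor."""
    return np.linalg.norm(a - b) / torso_length

# Torso length used as the normalization unit (an assumption, not the
# paper's stated method).
torso = np.linalg.norm(joints["shoulder_center"] - joints["hip_center"])

elbow_angle = joint_angle(joints["shoulder_right"],
                          joints["elbow_right"],
                          joints["wrist_right"])
wrist_to_hip = normalized_distance(joints["wrist_right"],
                                   joints["hip_center"], torso)

print(f"right elbow angle: {elbow_angle:.1f} deg")
print(f"wrist-to-hip distance (torso units): {wrist_to_hip:.2f}")
```

Features of this kind, computed per frame and tracked over the video, are what a classifier would consume to discriminate the 16 actions; the exact joints, angles, and classifier used in the paper are not recoverable from this record.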
2014
IEEE International Symposium on Innovations in Intelligent Systems and Applications, INISTA 2014
978-1-4799-3019-7
A new tool for gestural action recognition to support decisions in emotional framework / Bevilacqua, Vitoantonio; Barone, D.; Cipriani, F.; D'Onghia, G.; Mastrandrea, G.; Mastronardi, Giuseppe; Suma, M.; D'Ambruoso, D. - (2014), pp. 184-191. (Paper presented at the IEEE International Symposium on Innovations in Intelligent Systems and Applications, INISTA 2014, held in Alberobello, Italy, June 23-25, 2014) [10.1109/INISTA.2014.6873616].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/22278
Citations
  • Scopus: 8
  • Web of Science (ISI): 3