A neural network approach for human gesture recognition with a Kinect sensor / D’Orazio, T.; Attolico, C.; Cicirelli, G.; Guaragnella, C. - (2014), pp. 741-746. (Paper presented at the 3rd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2014, held in Angers, France, March 6-8, 2014) [10.5220/0004919307410746].
A neural network approach for human gesture recognition with a Kinect sensor
C. Attolico; C. Guaragnella
2014-01-01
Abstract
Service robots are expected to be used in many households in the near future, provided that proper interfaces are developed for human-robot interaction. Gesture recognition is recognized as a natural means of communication, especially for elderly or impaired people. With the development of new technologies and the wide availability of inexpensive depth sensors, real-time gesture recognition can be addressed using depth information, avoiding the limitations caused by complex backgrounds and lighting conditions. In this paper the Kinect depth camera and the OpenNI framework are used to obtain real-time tracking of the human skeleton. Robust and significant features are then selected to discard irrelevant ones and reduce the computational cost. These features are fed to a set of neural network classifiers that recognize ten different gestures. Several experiments demonstrate that the proposed method works effectively, and real-time tests confirm its robustness for the realization of human-robot interfaces.
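As an illustration of the pipeline summarized above (skeleton tracking, feature selection, and a set of per-gesture neural network classifiers), the following Python sketch shows one plausible way such a stage could be wired together. All function names, joint labels, feature choices, and network shapes here are assumptions for illustration only; they are not taken from the paper and do not reproduce the authors' implementation.

```python
# Illustrative sketch only (not the authors' method): joint-angle features
# computed from tracked 3-D skeleton joints, scored by a small set of
# pre-trained one-vs-all feed-forward networks, one per gesture class.
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3-D points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def extract_features(joints):
    """joints: dict mapping a joint name to an np.array([x, y, z]).
    The joint names below are hypothetical placeholders."""
    return np.array([
        joint_angle(joints["shoulder_r"], joints["elbow_r"], joints["hand_r"]),
        joint_angle(joints["shoulder_l"], joints["elbow_l"], joints["hand_l"]),
        joint_angle(joints["neck"], joints["shoulder_r"], joints["elbow_r"]),
        joint_angle(joints["neck"], joints["shoulder_l"], joints["elbow_l"]),
    ])

def mlp_score(x, W1, b1, W2, b2):
    """Single-hidden-layer network; W2 is a vector and b2 a scalar,
    so the result is one sigmoid score in (0, 1) for one gesture."""
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

def classify(x, networks, threshold=0.5):
    """networks: list of (W1, b1, W2, b2) tuples, one per gesture.
    Returns the index of the best-scoring gesture, or None if no
    network is confident enough (i.e. no gesture detected)."""
    scores = np.array([mlp_score(x, *net) for net in networks])
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```

In such a one-vs-all arrangement, each network only has to separate its own gesture from everything else, and a rejection threshold lets the system report "no gesture" for unrelated movements; this is a common design choice for multi-class gesture recognition, offered here purely as a sketch under the stated assumptions.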