Correct recognition of nonverbal expressions is currently one of the most important challenges in human-computer interaction research. The ability to recognize human actions could change the way we interact with machines in several environments and contexts, or even the way we live. In this paper, we describe advances over a previous study aimed at designing, implementing, and validating an innovative recognition system already developed by some of the authors. The system was aimed at recognizing two opposite emotional conditions (resonance and dissonance) of a candidate for a job position while interacting with the recruiter during a job interview. Results in terms of accuracy, resonance rate, and dissonance rate of the three new optimized neural network-based (NN) classifiers are discussed. A comparison with previous results of three NN classifiers, each based on a single domain (facial, vocal, and gestural), is also presented.
|Title:||A Multimodal System for Nonverbal Human Feature Recognition in Emotional Framework|
|Publication date:||2015|
|Conference name:||9th ACM Conference on Recommender Systems, RecSys 2015|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.1145/2809643.2809645|
|Appears in item types:||4.1 Contribution in Conference Proceedings|