
Design and Implementation of a Novel EEG-based Brain-Computer Interface to Improve "Perception - Understanding - Action" in Humanoid Robotics

Mezzina, Giovanni
2022-01-01

Abstract

The recent global increase in average life expectancy has led to an inevitable growth of the elderly population (projected to reach 22% by 2050), together with a reduction in the young population. Longer life expectancy also brings negative consequences, since aging-related diseases cause physical and social disabilities in the elderly. This condition creates a need for hospitalization (with a consequent increase in costs borne by the healthcare system) or for 24/7 care in dedicated facilities, which often suffer from shortages of specialized personnel. Home care through dedicated caregivers is following the same trend. Ambient Assisted Living (AAL) addresses this problem by equipping domestic or assistive infrastructures with intelligent sensor networks scattered throughout the environment and with robotic platforms (i.e., AAL robots). The adoption of the latter has increased sharply (>20%) in the last two years due to the pandemic emergency. In this context, this Ph.D. thesis presents a novel robot-empowered AAL infrastructure that creates a pervasive environment, fully aware of the patient's psychophysical status and able to adapt to different degrees of disability and emotional involvement. Specifically, the features this thesis introduces into the AAL infrastructure are: (i) a Brain-Computer Interface (BCI) for the selection of services, allowing patients with severe disabilities to formalize requests; (ii) a cognitive status recognition system based on electroencephalographic (EEG) signals, providing monitoring parameters and support for remote diagnosis; (iii) an emotion recognition system capable of triggering an adaptive behavioral system in the robotic component of the architecture.
Concerning the BCI-based selection system, two neural interfaces are proposed. One BCI exploits movement-related cortical potentials (MRPs), while the other is based on event-related potentials (ERPs), and in particular on the P300 component. The former is a first-of-its-kind MRP-based neural interface that allows quick selection of services through nested binary choices and a processing algorithm of low computational complexity. Specifically, this BCI relies on a symbolization method that translates the EEG signal into a binary string, speeding up the subsequent inferential system based on a Support Vector Machine (SVM). The second BCI (i.e., the P300-based one) instead permits a greater number of choices in a single session (i.e., 12) and employs a dense neural network (NN) as its inferential system; the thesis proposes a framework for selecting an appropriate "user-tailored" NN topology through dedicated hyperparameter tuning. The cognitive status extraction system analyzes, in parallel, the EEG signals provided by the P300-based BCI: it exploits the biomarker properties of the P300 component to identify cognitive impairment. For this purpose, an algorithm for the spatio-temporal reconstruction of ERPs, named t-RIDE, is introduced; it provides a quantitative parameter useful for remote diagnostic support and patient monitoring. The EEG-based emotional state recognition system employs a user-specific selection framework: through a cascade of two grid-search-based routines, it analyzes a set of feature extraction algorithms commonly used in emotion recognition, assessing their accuracy-versus-complexity ratio. The implemented system is designed to discriminate up to 8 different emotions using a circumplex model based on three parameters for emotion definition: arousal, valence, and dominance.
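The thesis does not detail the symbolization rule here, but the idea of "binarize the EEG, then classify the binary string with an SVM" can be illustrated with a minimal sketch. The rule below (1 if a sample exceeds its local sliding-window mean, else 0) and the synthetic two-class data are hypothetical stand-ins, not the method used in the thesis:

```python
import numpy as np
from sklearn.svm import SVC

def symbolize(eeg, win=8):
    """Translate a 1-D EEG epoch into a binary string:
    1 where the sample exceeds the local sliding-window mean, else 0.
    (Hypothetical rule for illustration; the thesis method may differ.)"""
    kernel = np.ones(win) / win
    local_mean = np.convolve(eeg, kernel, mode="same")
    return (eeg > local_mean).astype(np.uint8)

# Toy training set: 40 slowly drifting epochs (class 0) vs.
# 40 epochs with a sharp amplitude burst (class 1), 128 samples each.
rng = np.random.default_rng(0)
slow_drift = np.cumsum(rng.normal(0, 1, (40, 128)), axis=1)
sharp_burst = rng.normal(0, 1, (40, 128))
sharp_burst[:, 60:70] += 5.0

X = np.vstack([symbolize(e) for e in np.vstack([slow_drift, sharp_burst])])
y = np.array([0] * 40 + [1] * 40)

# Binary features keep kernel evaluations cheap, which is the point of
# symbolizing before the SVM stage.
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

The payoff claimed in the abstract is speed: once epochs are reduced to 0/1 strings, the downstream inference works on compact, low-precision features.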
The BCIs and the cognitive/emotion recognition systems are trained/calibrated offline on a dedicated computer, while online inference runs through three different Android apps executing in the background on the Pepper robot's tablet. Since the services offered by the BCIs integrated into the AAL infrastructure also include picking up and delivering various goods, as part of the thesis work the Pepper robot has been equipped with object handling capabilities, which the platform natively lacks. To this end, a set of object manipulation routines based on the built-in RGB cameras and the 3D sensor has been realized. The routines exploit Pepper's sensing systems to (i) identify the object to be grabbed, (ii) plan the arm movement sequence, and (iii) grab the object, with a final recognition step. Programmed in Python to run in the background on the robot's operating system, the routines operate fully automatically and without an internet connection (ensuring intrinsic protection of sensitive data). To test the innovative features introduced by this thesis, in-vivo measurement sessions were carried out on 13 volunteers from Politecnico di Bari and from the company g.tec medical engineering GmbH (Austria). Overall, the MRP-based BCI discriminated two choices with an average accuracy of 84.07% and a mean Information Transfer Rate (ITR) of 11.17 commands/minute. This BCI can reach a mean ITR of 22.33 commands/minute (i.e., one binary selection roughly every 2.7 s) by simply reducing the interstimulus interval from 2 s to 1 s. In the same context, the P300-based BCI achieved a choice recognition accuracy of 96.66% after 21 s of stimulation; considering 12 possible choices, it showed a maximum ITR of 16 commands/minute.
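The abstract reports ITR in commands/minute; the standard Wolpaw definition instead expresses it in bits, combining the number of choices N, the accuracy P, and the time per selection T. As a reference point only (the thesis's commands/minute figures are a different metric), the Wolpaw ITR at the reported P300 operating point can be computed as:

```python
from math import log2

def itr_bits_per_min(n_choices, accuracy, sel_time_s):
    """Wolpaw ITR in bits/minute.
    Assumes equiprobable choices and uniformly distributed errors."""
    p = accuracy
    bits = log2(n_choices)  # perfect-accuracy information per selection
    if 0.0 < p < 1.0:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n_choices - 1))
    return bits * (60.0 / sel_time_s)

# Reported P300 operating point: 12 choices, 96.66% accuracy, 21 s/selection
print(round(itr_bits_per_min(12, 0.9666, 21), 2))  # → 9.31 bits/min
```

At 96.66% accuracy each selection carries about 3.26 of the ideal log2(12) ≈ 3.58 bits, which the 21 s selection time scales to roughly 9.3 bits/minute.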
The emotion recognition system achieves a mean multiclass accuracy of ~76% on the 8-emotion discrimination problem, rising to ~80% with a model restricted to 4 emotions. Experimental tests of the cognitive status extraction system showed that the reconstructed ERP characteristics of all participants were compatible with the healthy-subject class, to which they all belonged. Finally, concerning the object manipulation capabilities, the proposed routine, tested in real-life scenarios, returned a grabbing accuracy of ~87% across different shelf heights, demonstrating the employability of improved social robots for daily-life assistance and outpatient contexts.
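The 8-emotion figure corresponds to the 2³ octants obtained by binarizing the three circumplex dimensions (arousal, valence, dominance). The sketch below illustrates that mapping; the per-octant emotion labels follow common VAD-model conventions and are not necessarily the exact set used in the thesis:

```python
def vad_octant(valence, arousal, dominance, threshold=0.5):
    """Map continuous VAD scores in [0, 1] to one of 2^3 = 8 emotion octants.
    Labels are illustrative (typical VAD-literature conventions)."""
    labels = {
        (1, 1, 1): "happiness",  (1, 1, 0): "surprise",
        (1, 0, 1): "calmness",   (1, 0, 0): "relaxation",
        (0, 1, 1): "anger",      (0, 1, 0): "fear",
        (0, 0, 1): "disgust",    (0, 0, 0): "sadness",
    }
    key = tuple(int(x >= threshold) for x in (valence, arousal, dominance))
    return labels[key]

print(vad_octant(0.9, 0.8, 0.7))  # high V, A, D → "happiness"
print(vad_octant(0.2, 0.9, 0.3))  # low V, high A, low D → "fear"
```

Collapsing one dimension (e.g., ignoring dominance) yields the 4-quadrant model mentioned above, which is where the accuracy rises to ~80%.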
Ambient Assisted Living; Brain Computer Interface; Robotics
Files in this item:
35 ciclo-MEZZINA Giovanni.pdf — open access — Type: Doctoral thesis — License: All rights reserved — Size: 7.36 MB — Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/246121