A robust framework to investigate the reliability and stability of explainable artificial intelligence markers of Mild Cognitive Impairment and Alzheimer's Disease / Lombardi, Angela; Diacono, Domenico; Amoroso, Nicola; Biecek, Przemysław; Monaco, Alfonso; Bellantuono, Loredana; Pantaleo, Ester; Logroscino, Giancarlo; De Blasi, Roberto; Tangaro, Sabina; Bellotti, Roberto. - In: BRAIN INFORMATICS. - ISSN 2198-4026. - 9:1(2022), p. 17. [10.1186/s40708-022-00165-5]
A robust framework to investigate the reliability and stability of explainable artificial intelligence markers of Mild Cognitive Impairment and Alzheimer's Disease
Lombardi, Angela; Diacono, Domenico; Amoroso, Nicola; Biecek, Przemysław; Monaco, Alfonso; Bellantuono, Loredana; Pantaleo, Ester; Logroscino, Giancarlo; De Blasi, Roberto; Tangaro, Sabina; Bellotti, Roberto
2022-01-01
Abstract
In clinical practice, several standardized neuropsychological tests have been designed to assess and monitor the neurocognitive status of patients with neurodegenerative diseases such as Alzheimer's disease. Considerable research effort has so far been devoted to developing multivariate machine learning models that combine the different test indexes to predict the diagnosis and prognosis of cognitive decline, with remarkable results. However, less attention has been devoted to the explainability of these models. In this work, we present a robust framework to (i) perform a threefold classification between healthy control subjects, individuals with cognitive impairment, and subjects with dementia using different cognitive indexes and (ii) analyze the variability of the SHAP explainability values associated with the decisions made by the predictive models. We demonstrate that the SHAP values can accurately characterize how each index affects a patient's cognitive status. Furthermore, we show that a longitudinal analysis of SHAP values can provide effective information on Alzheimer's disease progression.
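As background to the abstract, SHAP assigns each input feature (here, each cognitive test index) a Shapley value: its weighted average marginal contribution to the model output over all feature subsets. The sketch below is a minimal, dependency-free illustration of that exact Shapley computation, not the paper's pipeline; the model `f` and its weights are hypothetical stand-ins for a trained classifier, and absent features are replaced by baseline values.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at point x.
    Features outside the coalition are set to their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                x_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                x_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(x_with) - f(x_without))
    return phi

# Hypothetical "cognitive score" model: weighted sum of two test indexes.
f = lambda v: 2.0 * v[0] + 1.0 * v[1]
phi = shapley_values(f, x=[3.0, 5.0], baseline=[1.0, 1.0])
# For a linear model, phi[i] reduces to weight_i * (x_i - baseline_i): [4.0, 4.0]
```

The values satisfy the efficiency property: they sum to `f(x) - f(baseline)`, so each index's score decomposes the model's deviation from the baseline prediction. The `shap` library computes the same quantities with model-specific approximations that avoid the exponential subset enumeration used here.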