
Detecting label noise in longitudinal Alzheimer’s data with explainable artificial intelligence / Sorino, Paolo; Lombardi, Angela; Lofu, Domenico; Colafiglio, Tommaso; Ferrara, Antonio; Narducci, Fedelucio; Di Sciascio, Eugenio; Di Noia, Tommaso. - In: BRAIN INFORMATICS. - ISSN 2198-4026. - ELETTRONICO. - 12:1(2025). [10.1186/s40708-025-00261-2]

Detecting label noise in longitudinal Alzheimer’s data with explainable artificial intelligence

Paolo Sorino; Angela Lombardi; Domenico Lofu; Tommaso Colafiglio; Antonio Ferrara; Fedelucio Narducci; Eugenio Di Sciascio; Tommaso Di Noia
2025

Abstract

Reliable classification of cognitive states in longitudinal Alzheimer’s Disease (AD) studies is critical for early diagnosis and intervention. However, inconsistencies in diagnostic labeling, arising from subjective assessments, evolving clinical criteria, and measurement variability, introduce noise that can impact machine learning (ML) model performance. This study explores the potential of explainable artificial intelligence to detect and characterize noisy labels in longitudinal datasets. A predictive model is trained using a Leave-One-Subject-Out validation strategy, ensuring robustness across subjects while enabling individual-level interpretability. By leveraging SHapley Additive exPlanations values, we analyze the temporal variations in feature importance across multiple patient visits, aiming to identify transitions that may reflect either genuine cognitive changes or inconsistencies in labeling. Using statistical thresholds derived from cognitively stable individuals, we propose an approach to flag potential misclassifications while preserving clinical labels. Rather than modifying diagnoses, this framework provides a structured way to highlight cases where diagnostic reassessment may be warranted. By integrating explainability into the assessment of cognitive state transitions, this approach enhances the reliability of longitudinal analyses and supports a more robust use of ML in AD research.
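The abstract describes a pipeline: train under Leave-One-Subject-Out (LOSO), compute per-visit SHAP values for each held-out subject, and flag visits whose explanation profiles exceed statistical thresholds derived from stable individuals. A minimal sketch of that idea is given below. All names and data are hypothetical; for a linear model with a mean background, SHAP values reduce to coef * (x - background), which is used here in place of a full SHAP library call. The z-score threshold of 2.5 is illustrative, not the paper's value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic longitudinal data: several visits per subject, hypothetical features
n_subjects, n_visits, n_features = 10, 4, 3
X = rng.normal(size=(n_subjects * n_visits, n_features))
subject = np.repeat(np.arange(n_subjects), n_visits)
y = (X[:, 0] + 0.5 * rng.normal(size=len(X)) > 0).astype(int)

def loso_shap(X, y, subject):
    """Leave-One-Subject-Out: fit on all other subjects, then compute
    per-visit linear SHAP values (coef * (x - background mean)) for the
    held-out subject's visits."""
    shap_vals = np.zeros_like(X)
    for s in np.unique(subject):
        train = subject != s
        model = LogisticRegression().fit(X[train], y[train])
        background = X[train].mean(axis=0)
        shap_vals[subject == s] = model.coef_[0] * (X[subject == s] - background)
    return shap_vals

phi = loso_shap(X, y, subject)

# Flag visits whose SHAP profile deviates beyond a z-score threshold.
# In the paper the reference distribution comes from cognitively stable
# subjects; here, for illustration, all visits serve as the reference.
mu, sd = phi.mean(axis=0), phi.std(axis=0)
z = np.abs((phi - mu) / sd)
flagged = np.where(z.max(axis=1) > 2.5)[0]
print(f"{len(flagged)} visits flagged for review")
```

Flagged visits would be reviewed against their clinical labels rather than relabeled automatically, in line with the label-preserving framing of the abstract.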
Files in this record:
2025_Detecting_label_noise_in_longitudinal_Alzheimer’s_data_with_explainable_artificial_intelligence_pdfeditoriale.pdf
Access: open access
Type: Published version
License: Creative Commons
Size: 2.22 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/288460