Comparing XAI Explanations and Synthetic Data Augmentation Strategies in Neuroimaging AI

Danese, Danilo; Fasano, Giuseppe; Lombardi, Angela; Di Sciascio, Eugenio; Di Noia, Tommaso
2026

Abstract

Brain age, a biomarker of neurological health, is widely used in neuroimaging for early detection of neurodegenerative diseases. While deep learning models have shown promise in brain age prediction from MRI, data imbalance and model interpretability remain key challenges. This study investigates the impact of data augmentation (DA) on both predictive accuracy and explanation stability in convolutional neural networks (CNNs) for brain age prediction. We compare three training strategies: (i) a baseline model, (ii) a model augmented with real MRI scans from OASIS-3, and (iii) a model trained with synthetic data generated by a diffusion model. Model performance is evaluated using mean absolute error (MAE), while interpretability is assessed through Explainable AI (XAI) methods, including DeepSHAP, Grad-CAM, and Occlusion. Our findings indicate that synthetic augmentation improves predictive accuracy, particularly for underrepresented age groups (individuals aged 40–80 years), while real-data augmentation provides more stable feature attributions. However, disagreement among the XAI methods suggests that explanation reliability varies across training strategies. These results highlight the trade-offs between accuracy and interpretability in AI-driven neuroimaging, emphasizing the need for balanced augmentation strategies to develop clinically trustworthy models.
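
The abstract outlines a two-part evaluation: predictive accuracy via MAE and interpretability via attribution methods. The sketch below illustrates such a pipeline with PyTorch and Captum; it is a minimal, hypothetical example, not the authors' implementation. The `BrainAgeCNN` architecture, tensor shapes, occlusion window sizes, and zero baselines are illustrative assumptions, and Captum's `DeepLiftShap` stands in here for DeepSHAP.

```python
# Hypothetical sketch (not the paper's code): brain-age MAE plus three
# attribution methods, using assumed shapes and an assumed toy architecture.
import torch
import torch.nn as nn
from captum.attr import DeepLiftShap, LayerGradCam, Occlusion

class BrainAgeCNN(nn.Module):
    """Minimal 3D CNN regressor; a stand-in for the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)  # predicted age in years

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BrainAgeCNN().eval()
scans = torch.randn(4, 1, 32, 32, 32)  # stand-in for preprocessed MRI volumes
ages = torch.tensor([[45.0], [62.0], [71.0], [80.0]])

# Performance metric: mean absolute error (MAE), as in the paper.
with torch.no_grad():
    mae = (model(scans) - ages).abs().mean()
print(f"MAE: {mae:.2f} years")

target = 0  # index of the single regression output

# Grad-CAM on the last convolutional layer (features[3] in this toy model).
gradcam = LayerGradCam(model, model.features[3]).attribute(scans, target=target)

# Occlusion with an assumed 8x8x8 sliding window over (C, D, H, W).
occlusion = Occlusion(model).attribute(
    scans, sliding_window_shapes=(1, 8, 8, 8), strides=(1, 4, 4, 4), target=target
)

# DeepLiftShap averages DeepLift over a batch of baselines (zeros here).
shap_vals = DeepLiftShap(model).attribute(
    scans, baselines=torch.zeros(8, 1, 32, 32, 32), target=target
)
print(gradcam.shape, occlusion.shape, shap_vals.shape)
```

Explanation stability across the three training strategies could then be quantified, for instance by correlating the attribution maps produced by models trained under each strategy.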
Explainable Artificial Intelligence, 3rd World Conference (xAI 2025), Istanbul, Turkey, July 9-11, 2025
ISBN: 978-3-032-08329-6
Comparing XAI Explanations and Synthetic Data Augmentation Strategies in Neuroimaging AI / Danese, Danilo; Fasano, Giuseppe; Lombardi, Angela; Di Sciascio, Eugenio; Di Noia, Tommaso. - PRINT. - (2026), pp. 3-26. (Explainable Artificial Intelligence, 3rd World Conference, xAI 2025, Istanbul, Turkey, July 9-11, 2025) [DOI: 10.1007/978-3-032-08330-2_1].
Files in this record:
2026_Comparing_XAI_Explanations_and_Synthetic_Data_Augmentation_Strategies_in_Neuroimaging_AI_pdfeditoriale.pdf
Access: open access
Type: editorial version
License: Creative Commons
Format: Adobe PDF
Size: 1.6 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/292620