Fasano, Giuseppe; De Bonis, Maria Luigia Natalia; Lombardi, Angela; Ardito, Carmelo Antonio; Di Sciascio, Eugenio; Di Noia, Tommaso. BRAINEX: A Systematic Framework for CNN Models Evaluation and XAI Methods Comparison in Brain Age Prediction. In: Lecture Notes in Computer Science. Springer Science and Business Media Deutschland GmbH, 2025, pp. 256-269. ISBN 9789819632930. DOI: 10.1007/978-981-96-3294-7_20
BRAINEX: A Systematic Framework for CNN Models Evaluation and XAI Methods Comparison in Brain Age Prediction
Fasano, Giuseppe; De Bonis, Maria Luigia Natalia; Lombardi, Angela; Ardito, Carmelo Antonio; Di Sciascio, Eugenio; Di Noia, Tommaso
2025
Abstract
This study presents a systematic framework for evaluating convolutional neural networks (CNNs) in the context of brain age prediction, with a focus on interpretability through explainable AI (XAI) methods. Brain age prediction has advanced significantly using 2D and 3D CNNs, which provide high predictive accuracy. However, the complexity of these models creates challenges for clinical interpretation. Our framework not only assesses model performance but also provides deeper insights into various aspects of CNN models, including how the choice of background data can influence XAI attribution values. By facilitating multisite data analysis, the framework helps identify the impact of site-specific characteristics on model behavior. The results underscore the importance of local explanations and highlight the need for careful interpretation when using population-level saliency maps.
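
The background dependence mentioned in the abstract can be made concrete with a small illustration. The sketch below is not the BRAINEX code: the toy model (TinyBrainAgeCNN), the synthetic input volume, the two baselines, and the hand-rolled Integrated Gradients approximation are all hypothetical stand-ins chosen to show, in plain PyTorch, that the same scan and model yield different attribution values under different reference backgrounds.

# Illustrative sketch only (not the authors' framework): how the choice of
# background/baseline reference changes attribution values for a CNN regressor.
import torch
import torch.nn as nn

class TinyBrainAgeCNN(nn.Module):
    """Toy 3D CNN mapping an MRI-like volume to a single predicted age."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def integrated_gradients(model, x, baseline, steps=32):
    """Approximate Integrated Gradients: average the gradients of the output
    along the straight path from the baseline to the input, then scale the
    result by (input - baseline)."""
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        output = model(point).sum()
        total_grads += torch.autograd.grad(output, point)[0]
    return (x - baseline) * total_grads / steps

torch.manual_seed(0)
model = TinyBrainAgeCNN().eval()
scan = torch.rand(1, 1, 16, 16, 16)  # synthetic stand-in for a T1-weighted volume

# Background 1: all-zeros reference. Background 2: a constant "mean-like" reference.
attr_zero_bg = integrated_gradients(model, scan, torch.zeros_like(scan))
attr_mean_bg = integrated_gradients(model, scan, torch.full_like(scan, 0.5))

# The two attribution maps generally differ: same scan, same model,
# different XAI values under different reference backgrounds.
print((attr_zero_bg - attr_mean_bg).abs().mean().item())

In a real pipeline the model would be trained and the backgrounds would be method-specific (for example, reference datasets for SHAP-style explainers rather than constant volumes); the point carried over from the abstract is only that attribution values are conditional on that choice and should be reported alongside it.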

