Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction / Fasano, Giuseppe; Lombardi, Angela; Ferrara, Antonio; Di Sciascio, Eugenio; Di Noia, Tommaso. - STAMPA. - (2026), pp. 295-315. (Explainable Artificial Intelligence, 3rd World Conference, xAI 2025, Istanbul, Turkey, July 9-11, 2025) [10.1007/978-3-032-08317-3_14].

Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction

Fasano, Giuseppe; Lombardi, Angela; Ferrara, Antonio; Di Sciascio, Eugenio; Di Noia, Tommaso
2026

Abstract

Predicting brain age from neuroimaging data is increasingly used to study aging trajectories and detect deviations linked to neurological conditions. Machine learning models trained on large datasets have shown promising results, but data privacy regulations and the challenge of sharing medical data across institutions limit the feasibility of centralized training. Federated Learning (FL) offers a solution by allowing multiple sites to collaboratively train a model without sharing raw data. However, it remains unclear how FL affects the explainability of these models, raising concerns about the consistency and reliability of their predictions. In this study, we analyze the consistency of model explanations between centralized and federated training paradigms. Using DeepSHAP, we compare feature attributions in brain age prediction models trained on the multi-site, publicly available OpenBHB dataset. We examine the impact of how data is distributed across sites (IID vs. non-IID), the number of sites participating per training round (sampling rate), and different FL aggregation methods (FedAVG, FedProx). Our findings show that federated models provide different explanations compared to centralized models, even when trained on the same data and task. Non-IID data distributions reduce the consistency of explanations, while including a larger number of sites per training round improves stability. Interestingly, some federated models trained on non-IID data capture biologically meaningful patterns of brain aging even more effectively than centralized models. These results suggest that careful choices in how data is distributed and how training is conducted in FL can impact model accuracy and interpretability.
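The FedAVG aggregation mentioned in the abstract can be sketched as a sample-size-weighted average of client parameters. This is a minimal, self-contained illustration of the general FedAVG rule, not the paper's implementation; the client parameter vectors and site sample counts below are hypothetical.

```python
# Minimal FedAVG sketch: the server averages client model parameters,
# weighting each site by its number of local training samples.
# Parameter values and sample counts are hypothetical.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Return the sample-size-weighted average of client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                 # (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Three hypothetical sites with unequal amounts of local data
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 20, 70]
global_w = fedavg(w, n)                                # -> [4.2, 5.2]
```

With non-IID splits, the larger site dominates the aggregate (here the 70-sample site pulls the average toward its parameters), which is one intuition for why explanation consistency degrades under skewed data distributions.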
Explainable Artificial Intelligence 3rd World Conference, xAI 2025
978-3-032-08316-6
Files in this record:
2026_Exploring_Explainability_in_Federated_Learning_pdfeditoriale.pdf
Access: open access
Type: Publisher's version
License: Creative Commons
Size: 3.19 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/292621