Explainable deep learning for medical image processing: computer-aided diagnosis and robot-assisted surgery / Hussain, Sardar Mehboob. - ELETTRONICO. - (2023). [10.60576/poliba/iris/hussain-sardar-mehboob_phd2023]

Explainable deep learning for medical image processing: computer-aided diagnosis and robot-assisted surgery.

Hussain, Sardar Mehboob
2023-01-01

Abstract

The recent advancements in the surging field of Deep Learning (DL) have revolutionized every sphere of life, and the healthcare domain is no exception. The enormous success of DL models, particularly with image data, has led to the development of several computer-aided diagnosis and clinical support systems. These intelligent imaging systems can assist physicians in numerous medical tasks, including the classification and staging of various diseases, image-guided surgical procedures, and many more. The proliferation of medical datasets has further facilitated the application of DL techniques in the healthcare realm. However, despite the remarkable benefits DL offers, DL architectures are typically black boxes: they hide the decision-making mechanism, so how a model arrived at a particular decision cannot be readily interpreted. Additionally, Convolutional Neural Networks (CNNs), the most widely used DL technique, are prone to adversarial examples, where small, imperceptible perturbations to the input data can cause the model to make incorrect predictions. These facts call into question the applicability of DL in the healthcare sector, where explainability is of paramount importance for building trust in machine learning. The concept of eXplainable Artificial Intelligence (XAI) brings forward the possibility of explaining the results of DL models and of revealing how the models produce those results. These techniques aim to improve the transparency and interpretability of AI models, which can enhance trust in their results and facilitate their adoption in clinical practice. XAI approaches have the potential to advance the understanding of complex medical image analysis tasks and improve the reliability of AI-based diagnosis and treatment planning. In the context of medical imaging, however, XAI methods generally produce saliency maps and compute feature importance to explain the results of DL models. The sensitive nature of the healthcare industry, with its direct bearing on human life, calls the authenticity of XAI outcomes into question and demands qualitative and quantitative measures to evaluate these explanation methods. Furthermore, heatmap visualizations alone are often insufficient to achieve the transparency and interpretability of DL models in medical imaging needed to foster synergy between AI and biomedicine. Inspired by the latest trends and contributions, and in light of the aforementioned concerns, this thesis designs, develops, and validates an interpretable and transparent intelligent clinical decision support system based on traditional machine learning and DL architectures, whose outcomes can be qualitatively and quantitatively explained with XAI methods. The thesis also comprises a segmentation and detection pipeline for image-driven surgical applications. These novel intelligent systems aim to assist physicians and clinicians in image-guided diagnostic and treatment systems. The developed interpretable diagnostic frameworks offer a wide range of applications and can be extended to several clinical scenarios. Concerning XAI, transparency and interpretability of CNN architectures are achieved through two families of XAI methods, i.e. perceptive and mathematical XAI. Furthermore, within each of these XAI families, two explanation frameworks are employed.
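As a minimal illustration of the saliency maps that perceptive XAI methods produce (a sketch only; the specific architectures, datasets, and explanation methods used in the thesis are not detailed in this abstract), the following Python/PyTorch snippet computes a vanilla gradient saliency map for a placeholder CNN. The model, input size, and random input are assumptions made purely for demonstration.

import torch
import torchvision.models as models

# Stand-in CNN; the architectures used in the thesis are not specified here.
model = models.resnet18(weights=None)
model.eval()

# Placeholder tensor standing in for a preprocessed medical image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass and selection of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input pixels.
logits[0, predicted_class].backward()

# The saliency map is the maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)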
These explanation frameworks made it possible to investigate the reliability of the features and of the learning process, to critically analyse various CNN architectures and XAI methods, and to compare the outcomes of both XAI pipelines. To further highlight the applications of DL in the image-guided surgical domain, a case study has been performed on image-guided surgical procedures and interventions. The case study encompasses a detailed investigation of public datasets and presents the legal and ethical issues of DL-driven image-guided surgery. The study additionally underlines the risks and limitations of autonomous systems and provides a future perspective. Finally, the second case study investigates the qualitative and quantitative evaluation of XAI techniques with regard to medical images. This case study also sheds light on evaluation measures and metrics for XAI, the quality of explanations, the types of explanation, and related aspects. The clinical efficacy of the developed solutions is evaluated through comparison with existing state-of-the-art methods and is further validated through consultation with physicians where feasible. The datasets used in the study are either obtained from online open-source platforms or collected from local health institutions.
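Quantitative evaluation of saliency-based explanations, of the kind discussed in the second case study, is often approached with perturbation tests. The sketch below is a hedged illustration and not the thesis's actual protocol: it progressively masks the most salient pixels and records the drop in the predicted-class probability, on the intuition that a faithful explanation should cause a steep drop. The function name, masking strategy, and step count are assumptions.

import torch

def deletion_scores(model, image, saliency, steps=10):
    """Predicted-class probability as the most salient pixels are masked.

    image:    tensor of shape (1, C, H, W)
    saliency: tensor of shape (H, W) with per-pixel importance
    """
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)
        target = probs.argmax(dim=1).item()

        order = saliency.flatten().argsort(descending=True)  # most salient first
        scores = [probs[0, target].item()]
        masked = image.clone()
        step_size = max(order.numel() // steps, 1)
        width = saliency.shape[1]

        for i in range(steps):
            idx = order[i * step_size:(i + 1) * step_size]
            rows, cols = idx // width, idx % width
            masked[0, :, rows, cols] = 0.0  # zero out the next chunk of salient pixels
            scores.append(torch.softmax(model(masked), dim=1)[0, target].item())
    return scores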
2023
Explainable artificial intelligence; explainable deep learning; artificial intelligence; machine learning; deep learning; robotic surgery; robot-assisted surgery; computer-aided diagnosis; image-guided surgery; XAI; evaluation of XAI; breast cancer classification
File in this item:
35 Ciclo HUSSAIN Sardar Mehboob.pdf
Access: open access
Type: Doctoral thesis
License: Creative Commons
Size: 7.95 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11589/249000