ORCID
- Megan Courtman: 0000-0002-8984-7798
Abstract
Machine learning is increasingly being applied to medical imaging tasks. However, the "black box" nature of techniques such as deep learning has limited the interpretability and trustworthiness of these methods, and therefore their clinical utility. In recent years, explainability methods have been developed to allow better interrogation of these approaches.

This thesis presents novel applications of explainable deep learning to several medical imaging tasks, to investigate its potential in patient safety and research. It applies explainable deep learning to the detection of aneurysm clips in CT brain scans for MRI safety, and to the detection of confounding pathology in radiology report texts for dataset curation. It also makes novel contributions to Parkinson's research, using explainable deep learning to identify progressive brain changes in MRI brain scans, and to identify differences in the brains of non-manifesting carriers of Parkinson's genetic risk variants. In each case, convolutional neural networks were developed for classification, and SHapley Additive exPlanations (SHAP) were used to explain predictions. A novel pipeline was developed to apply SHAP to volumetric medical imaging data.

The application of explainable deep learning to various types of data and task demonstrates the flexibility of combining convolutional neural networks with SHAP. These applications also highlight the importance of combining explainability with clinical expertise, both to check the viability of the models and to ensure that they meet a clinical need. Together, these novel applications represent useful new tools for safety and research, and potentially for the improvement of clinical care.
Keywords
Artificial intelligence, Machine learning, Medical imaging, MRI safety, Deep learning, Explainable artificial intelligence, SHapley Additive exPlanations, Parkinson's disease, Natural language processing, Health data science, Convolutional neural networks
Document Type
Thesis
Publication Date
2024
Embargo Period
2024-09-26
Recommended Citation
Courtman, M. (2024) Explainable Deep Learning for Medical Imaging Classification. Thesis. University of Plymouth. Retrieved from https://pearl.plymouth.ac.uk/secam-theses/539