ORCID
- Jan Stodt: 0000-0001-9115-7668
Abstract
Artificial Neural Networks (ANNs) have achieved significant success in fields like healthcare, but their "black box" nature challenges transparency and user trust. Existing Explainable AI (XAI) methods aim to interpret ANN decisions, yet many are not understandable to non-AI experts, emphasizing the need for approaches that prioritize both accuracy and usability, especially in high-stakes environments.

This thesis investigates the reliability and usability of selected existing XAI methods, evaluating how effectively they convey meaningful explanations to users with varying AI expertise. Assessments of methods like LIME, GradCAM, and FastCAM identify key limitations, such as inconsistent visual saliency maps and a lack of user-centred design. These findings underpin the need for more understandable XAI methods tailored to specific needs.

Among its various contributions, the research outlines a domain-adapted approach to XAI within healthcare by automating the integration of domain knowledge. This customization reduces manual effort, ensuring that XAI methods provide technically accurate and contextually meaningful explanations in applications like surgical tool classification.

To enhance XAI evaluation, the thesis introduces novel metrics such as Explanation Significance Assessment (ESA), Weighted Explanation Significance Assessment (WESA), and the Unified Intersection over Union (UIoU). These metrics address gaps in existing techniques by emphasizing precision and clarity, improving transparency in AI systems for both AI experts and non-AI experts.

Finally, the thesis introduces the Explainable Object Classification (EOC) framework, which integrates object parts, attributes, and domain knowledge to offer comprehensive, multimodal explanations accessible to users with varying expertise. By providing text, images, and decision paths, EOC enables users to understand AI decisions more effectively, aiding informed decision-making in critical sectors like healthcare.

This thesis contributes to advancing XAI by developing methods that bridge the gap between AI developers and users, ensuring AI outputs are interpretable and practically useful in real-world contexts.
Document Type
Thesis
Publication Date
2024
Embargo Period
2024-12-05
Recommended Citation
Stodt, J. (2024) EXPLAINABILITY AND UNDERSTANDABILITY OF ARTIFICIAL NEURAL NETWORKS. Thesis. University of Plymouth. Retrieved from https://pearl.plymouth.ac.uk/secam-theses/546