Thursday, December 5, 2024

An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection

 

 https://www.mdpi.com/2078-2489/15/12/783

 

Explainable artificial intelligence (XAI) is becoming a cornerstone of modern AI applications, especially in sensitive fields like healthcare, where the need for transparency and reliability is paramount. Our latest research focuses on enhancing the interpretability of convolutional neural networks (CNNs) used for melanoma diagnosis, a field where accurate and trustworthy tools can significantly impact clinical practice.

Melanoma is one of the most aggressive forms of skin cancer, and it is challenging to diagnose because of its visual similarity to benign lesions. While deep learning models have demonstrated remarkable diagnostic accuracy, their adoption in clinical workflows has been hindered by their "black box" nature. Physicians need to understand why a model makes specific predictions, not only to trust the results but also to integrate these tools into their decision-making processes.

In this context, our research introduces a novel workflow that combines state-of-the-art XAI techniques to provide both qualitative and quantitative insights into the decision-making process of CNNs. The uniqueness of our approach lies in the injection of additional handcrafted features, specifically Local Binary Pattern (LBP) texture features, into the CNN architecture. These handcrafted features, combined with the features the network learns automatically, allow us to analyze and interpret the network's predictions more effectively.
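To give a flavor of the handcrafted side of this pipeline, the sketch below computes a normalized LBP histogram for a single dermoscopic image with scikit-image. The radius, number of sampling points, and histogram binning are illustrative assumptions, not the exact settings used in the paper.

```python
# Minimal sketch: extract an LBP texture descriptor from a lesion image.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

def lbp_histogram(image_path, n_points=8, radius=1):
    """Compute a normalized uniform-LBP histogram for one dermoscopic image."""
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)
    # "uniform" LBP with P sampling points yields P + 2 distinct codes
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp.ravel(),
                           bins=np.arange(0, n_points + 3),
                           density=True)
    return hist  # handcrafted feature vector to inject into the network
```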

The study leverages public datasets of dermoscopic images from the ISIC archive, carefully balancing training and validation datasets to ensure robust results. The modified CNN architecture features five convolutional layers followed by dense layers to reduce dimensionality, making the network’s internal processes more interpretable. Alongside dermoscopic images, the network is fed LBP features, which are injected into the flattened layer to augment the learning process.
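The following Keras sketch shows the feature-injection idea in miniature: an image branch with five convolutional blocks is flattened and concatenated with the handcrafted LBP vector before the dense layers. Filter counts, dense-layer widths, and the LBP dimensionality are assumptions for illustration, not the exact architecture described in the paper.

```python
# Minimal sketch of a two-input CNN with LBP features injected at the flattened layer.
from tensorflow.keras import layers, models

def build_model(img_shape=(224, 224, 3), lbp_dim=10):
    img_in = layers.Input(shape=img_shape, name="dermoscopic_image")
    x = img_in
    for filters in (32, 64, 128, 128, 256):      # five convolutional blocks
        x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    lbp_in = layers.Input(shape=(lbp_dim,), name="lbp_features")
    x = layers.Concatenate()([x, lbp_in])        # feature injection point

    x = layers.Dense(128, activation="relu")(x)  # dense layers reduce dimensionality
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid", name="melanoma_probability")(x)
    return models.Model(inputs=[img_in, lbp_in], outputs=out)
```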

To explain the model's predictions, we employed two key XAI techniques: Grad-CAM and Layer-wise Relevance Propagation (LRP). Grad-CAM generates activation maps that highlight regions of the image influencing the network’s decisions, while LRP goes further by assigning relevance scores to individual pixels. Together, these methods provide a visual explanation of the decision-making process, helping to identify which areas of an image the model considers most important for classification. Interestingly, we observed that LRP was particularly effective in distinguishing clinically relevant patterns, while Grad-CAM occasionally identified spurious correlations.

For a quantitative perspective, we used the kernel SHAP method, grounded in game theory, to assess the importance of features in the network’s predictions. This analysis revealed that most of the classification power - approximately 76.6% - came from features learned by the network, while the remaining 23.4% was contributed by the handcrafted LBP features. Such insights not only validate the role of feature injection but also open avenues for integrating diagnostically meaningful features, such as lesion asymmetry or border irregularities, into future models.
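For the quantitative part, a minimal sketch of the kernel SHAP step is shown below: it explains a prediction function defined over the concatenated feature vector and splits the total absolute SHAP mass between network-learned features and injected LBP features. The function names, shapes, and grouping logic are assumptions used to illustrate the idea, not the paper's exact analysis code.

```python
# Minimal sketch: attribute prediction power to learned vs. injected features with kernel SHAP.
import numpy as np
import shap

def group_importance(predict_fn, background, samples, n_learned):
    """predict_fn maps (n_samples, n_learned + n_lbp) feature vectors to melanoma probabilities;
    background and samples are small arrays of such vectors."""
    explainer = shap.KernelExplainer(predict_fn, background)
    sv = np.abs(np.array(explainer.shap_values(samples)))
    sv = sv.reshape(len(samples), -1)            # (n_samples, n_features)
    per_feature = sv.mean(axis=0)
    learned = per_feature[:n_learned].sum()      # CNN-learned features
    injected = per_feature[n_learned:].sum()     # handcrafted LBP features
    total = learned + injected
    return learned / total, injected / total     # e.g. roughly 0.77 vs 0.23 in the paper
```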

The performance of our modified CNN surpassed both our earlier work and other state-of-the-art approaches, achieving an accuracy of 98.41% and an AUC of 98.00% on the external test set. These results underscore the effectiveness of our interpretability framework, showing that improving transparency does not have to come at the expense of accuracy; in fact, the two can go hand in hand.

While this research marks significant progress, it also highlights areas for future exploration. The use of handcrafted features with limited diagnostic value, such as LBP, points to the need for incorporating features more aligned with clinical evaluation, like the ABCDE rule used for melanoma assessment. Moreover, involving dermatologists in the evaluation process could provide valuable qualitative feedback to refine the interpretability methods further.

This work demonstrates that XAI is not only a tool for explaining AI decisions but also a critical component for building trust in AI systems, especially in high-stakes fields like medical diagnostics. By combining visual and quantitative explanations, we hope to bridge the gap between AI and clinical practice, paving the way for broader adoption of AI-assisted tools in healthcare. Through this transparent and interpretable approach, we aim to empower clinicians, enhance diagnostic accuracy, and ultimately improve patient outcomes.

Here is the paper: https://www.mdpi.com/2078-2489/15/12/783






