Explainable Deep Learning for Brain Cancer Classification: A Comparative Study of Transfer Learning and Training-from-Scratch Models Using SHAP and LIME

Authors

  • Asif Rahman, Department of Computer Science, Abdul Wali Khan University, Mardan, KPK, Pakistan
  • Maqsood Hayat, Department of Computer Science, Abdul Wali Khan University, Mardan, KPK, Pakistan
  • Nadeem Iqbal, Department of Computer Science, Abdul Wali Khan University, Mardan, KPK, Pakistan
  • Hashim Ali, Department of Computer Science, Abdul Wali Khan University, Mardan, KPK, Pakistan
  • Ishaq Ahmad, Department of Computer Science & Information Technology, University of Malakand, KPK, Pakistan

DOI:

https://doi.org/10.66021/pakmcr832

Keywords:

Brain Tumor Detection; MRI; CNN; Transfer Learning; VGG16; LIME; SHAP

Abstract

Diagnosing brain cancer from Magnetic Resonance Imaging (MRI) is a critical yet difficult and time-consuming task in medical image analysis. Deep Learning (DL) can dramatically enhance automation of this procedure, although DL models are often criticized for their limited interpretability. In this study, we first trained a convolutional neural network (CNN) from scratch on 2,000 MRI images (1,000 tumor and 1,000 non-tumor) and tested it on 600 images (300 tumor and 300 non-tumor). The scratch-trained CNN achieved 98% accuracy, 100% sensitivity, 96% precision, a 98% F1-score, and 96% specificity. To improve performance further, we applied transfer learning with four pretrained models: DenseNet121, InceptionV3, ResNet50, and VGG16. Among these architectures, VGG16 performed best, achieving perfect classification across all evaluation metrics (100% accuracy, precision, sensitivity, F1-score, and specificity). To address interpretability, we applied two model-explainability techniques, Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to visualize VGG16's decision-making. These methods provided both local and global insights, highlighting critical tumor regions, validating the model's predictions, and building trust for real-world clinical use. The results confirm that VGG16 combines the highest performance with interpretable explanations, making it a robust and trustworthy model for automating brain cancer diagnosis. Code repository: asifrahman557/Explainable-AI-using-SHAP-LIME
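The LIME procedure the abstract refers to can be sketched in miniature. The following is a minimal, self-contained illustration of LIME's core idea (perturb superpixel on/off masks, query the classifier, and fit a locally weighted linear surrogate), not the paper's actual implementation: the `black_box` function, the number of segments, and the kernel width are hypothetical stand-ins for the trained VGG16 model and an MRI superpixel segmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" split into 4 superpixels; a hypothetical black-box
# classifier whose tumor probability is driven mainly by segment 2.
N_SEGMENTS = 4

def black_box(mask):
    # mask: binary vector, 1 = superpixel visible (hypothetical model).
    return 0.1 + 0.8 * mask[2] + 0.05 * mask[0]

# 1) Sample perturbed inputs: random on/off masks around the original.
masks = rng.integers(0, 2, size=(200, N_SEGMENTS)).astype(float)
preds = np.array([black_box(m) for m in masks])

# 2) Weight each sample by proximity to the original (all-ones) mask.
dist = (N_SEGMENTS - masks.sum(axis=1)) / N_SEGMENTS
weights = np.exp(-(dist ** 2) / 0.25)

# 3) Fit a weighted linear surrogate: solve (X^T W X) b = X^T W y.
X = np.hstack([masks, np.ones((len(masks), 1))])  # add intercept column
W = np.diag(weights)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ preds)

# The surrogate's coefficients rank superpixels by local influence.
importance = coef[:N_SEGMENTS]
top_segment = int(np.argmax(importance))
print(top_segment)  # index of the most influential segment
```

Because the toy classifier is exactly linear in the mask, the surrogate recovers its coefficients and correctly ranks segment 2 as most influential; with a real CNN the surrogate is only a local approximation, which is why LIME explanations are per-image ("local") rather than global.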

Author Biographies

  • Hashim Ali, Department of Computer Science, Abdul Wali Khan University, Mardan, KPK, Pakistan
  • Ishaq Ahmad, Department of Computer Science & Information Technology, University of Malakand, KPK, Pakistan

Published

2026-04-14

How to Cite

Explainable Deep Learning for Brain Cancer Classification: A Comparative Study of Transfer Learning and Training-from-Scratch Models Using SHAP and LIME. (2026). Pakistan Journal of Medical & Cardiological Review, 5(2), 379-405. https://doi.org/10.66021/pakmcr832