XAI Using Formal Concept Lattice for Image Data


Authors

Bhaskaran Venkatsubramaniam
Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
Pallav Kumar Baruah
Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India

Abstract

A formal concept lattice can be used to generate explanations from a black-box model. This novel approach has previously been applied and validated on tabular data and compared with popular XAI techniques. In this work, we apply the approach to image data. Images are, in general, high-dimensional, which makes building a single formal concept lattice over the full data challenging. We therefore break each image into parts and build multiple sub-lattices, and by combining the sub-lattice explanations we generate a complete explanation for the entire image. We present our work beginning with a simple synthetic dataset, which gives an intuitive picture of the explanations and their credibility. This is followed by explanations of a model built on the popular MNIST dataset, demonstrating the consistency of the explanations on a real dataset. Text explanations from the lattice are converted to images for ease of visual understanding. We compare our work with DeepLIFT by viewing image masks obtained through contrastive explanation for specific digits from the MNIST dataset. This work demonstrates the feasibility of using formal concept lattices for image data.
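The Python sketch below is not the authors' implementation; it only illustrates, under our own assumptions (7x7 patches, a 0.5 binarization threshold, NumPy-only derivation operators, and illustrative names such as image_to_contexts), how an image can be split into patches, each patch turned into a binary formal context, and the standard FCA derivation operators used to read off a concept intent per patch. The paper's actual sub-lattice construction, the merging of sub-lattice explanations, and the contrastive comparison with DeepLIFT are not reproduced here.

```python
# Minimal sketch (assumptions noted above), not the paper's implementation.
import numpy as np

def image_to_contexts(images, patch=7, threshold=0.5):
    """Build one binary formal context per patch position.

    images: array of shape (n, 28, 28) with values scaled to [0, 1].
    Returns a dict mapping a patch-grid position (row, col) to an
    object-attribute matrix of shape (n, patch * patch), where an
    attribute means "this pixel of the patch is 'on'".
    """
    n, h, w = images.shape
    contexts = {}
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            block = images[:, r:r + patch, c:c + patch]
            contexts[(r // patch, c // patch)] = block.reshape(n, -1) > threshold
    return contexts

def extent(context, attributes):
    """Objects possessing all the given attributes (derivation on attributes)."""
    return context[:, attributes].all(axis=1)

def intent(context, objects):
    """Attributes shared by all the given objects (derivation on objects)."""
    return context[objects].all(axis=0)

# Toy usage on random data: the closure of image 0's 'on' pixels in the
# top-left patch is the intent of the object concept generated by that
# image; per-patch intents like this are one plausible ingredient of the
# sub-lattice explanations that are later merged into a full explanation.
rng = np.random.default_rng(0)
imgs = rng.random((100, 28, 28))
top_left = image_to_contexts(imgs)[(0, 0)]
closure = intent(top_left, extent(top_left, np.where(top_left[0])[0]))
print("attributes in image 0's object concept:", np.where(closure)[0])
```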

Keywords

Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI for Images.

References

  • C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
  • Alejandro Barredo Arrieta, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
  • M.T. Ribeiro and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
  • David Alvarez-Melis and Tommi S. Jaakkola, “On the Robustness of Interpretability Methods”, arXiv preprint arXiv:1806.08049, 2018.
  • G. Visani, Alessandro Poluzzi and Davide Capuzzo, “Statistical Stability Indices for LIME: Obtaining Reliable Explanations for Machine Learning Models”, Journal of the Operational Research Society, Vol. 73, No. 1, pp. 91-101, 2022.
  • Marzyeh Ghassemi, Luke Oakden-Rayner and Andrew L. Beam, “The False Hope of Current Approaches to Explainable Artificial Intelligence in Healthcare”, The Lancet Digital Health, Vol. 3, No. 11, pp. 745-750, 2021.
  • S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
  • R.R. Selvaraju and D. Batra, “Grad-CAM: Why did you say that?”, CoRR abs/1611.07450, pp. 1-13, 2016.
  • D. Smilkov and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-12, 2017.
  • J.T. Springenberg and M.A. Riedmiller, “Striving for Simplicity: The All-Convolutional Net”, Proceedings of International Conference on Machine Learning, 2015.
  • M.L. Leavitt and A. Morcos, “Towards Falsifiable Interpretability Research”, Proceedings of International Conference on Neural Information Processing Systems ML Retrospectives, Surveys and Meta-Analyses, pp. 1-13, 2020.
  • M. Sundararajan and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
  • J. Adebayo and B. Kim, “Sanity Checks for Saliency Maps”, Proceedings of International Conference on Neural Computing, pp. 9525-9536, 2018.
  • Venkatsubramaniam Bhaskaran and Pallav Kumar Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 36-48, 2022.
  • A. Sangroya and L. Vig, “Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models”, Proceedings of International Conference on Machine Learning, pp. 1-16, 2020.
  • A. Sangroya and M. Rastogi, “Using Formal Concept Analysis to Explain Black Box Deep Learning Classification Models”, Proceedings of International Conference on Artificial Intelligence, pp. 19-26, 2019.
  • UCI, “UC Irvine Machine Learning Repository”, Available at: https://archive.ics.uci.edu/ml/index.php, Accessed 2022.
  • R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, 1992.
  • UCI, “UCI Car Evaluation DataSet”, Available at: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation, Accessed 2022.
  • Avanti Shrikumar, Peyton Greenside and Anshul Kundaje, “Learning Important Features Through Propagating Activation Differences”, Proceedings of International Conference on Machine Learning, pp. 3145-3153, 2017.
  • Jianqing Fan, Cong Ma and Yiqiao Zhong, “A Selective Overview of Deep Learning”, Proceedings of International Conference on Machine Learning, pp. 98-104, 2019.
  • Laurens Van Der Maaten and Geoffrey Hinton, “Visualizing Data using t-SNE”, Journal of Machine Learning Research, Vol. 9, pp. 2579-2605, 2008.
  • Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation”, Proceedings of International Conference on Machine Learning, pp. 1-5, 2014.
  • Matthew D. Zeiler and Rob Fergus, “Visualizing and Understanding Convolutional Networks”, Proceedings of European Conference on Computer Vision, pp. 1-8, 2014.
  • Karen Simonyan, Andrea Vedaldi and Andrew Zisserman, “Deep Inside Convolutional Networks: Visualizing Image Classification Models and Saliency Maps”, Proceedings of International Conference on Machine Learning, pp. 1-9, 2014.
  • B. Zhou, A. Khosla, A. Lapedriza, A. Oliva and A. Torralba, “Learning Deep Features for Discriminative Localization”, Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 2921-2929, 2016.
  • R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization”, Proceedings of IEEE International Conference on Computer Vision, pp. 618-626, 2017.
  • A. Chattopadhay, A. Sarkar, P. Howlader and V.N. Balasubramanian, “Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks”, Proceedings of IEEE Winter Conference on Applications of Computer Vision, pp. 839-847, 2018.
  • Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viegas and Martin Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-9, 2017.
  • Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler and Fernanda Viegas, “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”, Proceedings of International Conference on Machine Learning, pp. 2668-2677, 2018.
