
Comparative Study of XAI Using Formal Concept Lattice and LIME


Authors

Bhaskaran Venkatsubramaniam
Master of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
Pallav Kumar Baruah
Master of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India



Abstract

Local Interpretable Model-Agnostic Explanations (LIME) is a technique for explaining a black box machine learning model through a surrogate model. While the technique is very popular, its explanations are, by design, generated from the surrogate model and not directly from the black box model. In sensitive domains such as healthcare, such explanations may not be accepted as trustworthy. LIME also assumes that features are independent and reports the feature weights of its surrogate linear model as feature importances. In real-life datasets, features may be dependent, and a combination of features with specific values, rather than individual feature importance, can be the deciding factor. Furthermore, LIME fits the surrogate model on randomly generated instances around the point of interest; these instances need not belong to the original data and may even be meaningless. In this work, we compare LIME with explanations obtained from a formal concept lattice. The lattice approach does not use a surrogate model: it is deterministic and generates synthetic data that respects the implications present in the original dataset rather than generating it at random. It identifies crucial combinations of features and their values as decision factors without presuming either dependence or independence of features. Its explanations cover not only the point of interest but also a global explanation of the model, along with similar and contrastive examples around the point of interest. The explanations are textual and hence easier to comprehend than the weights of a surrogate linear model.
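To make the contrast concrete, the two sketches below are illustrative only and are not the implementation evaluated in this paper; the dataset, package calls, and attribute names are assumptions chosen for brevity. The first shows a typical invocation of the LIME tabular explainer: random perturbations are drawn around the point of interest, a weighted linear surrogate is fitted to the black box's predictions on them, and the surrogate's coefficients are reported as feature importances.

```python
# Minimal LIME sketch (illustrative; not the paper's experimental setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box" model to be explained.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the instance at random, weights samples by proximity,
# and fits a linear surrogate; its coefficients are the explanation.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[0], black_box.predict_proba, num_features=5)
print(explanation.as_list())  # [(feature condition, surrogate weight), ...]
```

A formal concept lattice, by contrast, is built deterministically from a binarized (feature = value) context, and its concepts are closed sets of objects and attributes. A toy, from-scratch enumeration (the context below is invented for illustration, not drawn from the paper's datasets):

```python
# Brute-force enumeration of formal concepts over a toy feature=value context.
from itertools import combinations

objects = ["car1", "car2", "car3", "car4"]
attributes = ["price=low", "safety=high", "doors=4", "accepted"]
incidence = {
    "car1": {"price=low", "safety=high", "doors=4", "accepted"},
    "car2": {"price=low", "safety=high", "accepted"},
    "car3": {"price=low", "doors=4"},
    "car4": {"safety=high", "doors=4"},
}

def extent(attrs):
    """Objects that have every attribute in attrs."""
    return {o for o in objects if attrs <= incidence[o]}

def intent(objs):
    """Attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(incidence[o] for o in objs))

# A formal concept is a pair (E, I) with intent(E) == I and extent(I) == E.
# Every intent is the closure of some attribute subset, so closing all
# subsets enumerates the whole lattice (adequate for small contexts).
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        e = frozenset(extent(set(attrs)))
        concepts.add((e, frozenset(intent(e))))

# Feature-value combinations that always accompany the decision "accepted".
for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    if "accepted" in i:
        print(sorted(e), "<->", sorted(i))
```

Concepts whose intent contains the decision attribute expose the exact feature-value combinations that co-occur with that decision, which is the kind of textual, combination-based explanation the lattice approach produces; no surrogate model or random sampling is involved.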

Keywords

Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, Deterministic methods for XAI

References

  • C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
  • A. Barredo Arrieta, N. Diaz-Rodriguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila and F. Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
  • M.T. Ribeiro, S. Singh and C. Guestrin, “Why Should I Trust You? : Explaining the Predictions of Any Classifier”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
  • D. Alvarez-Melis and T.S. Jaakkola, “On the Robustness of Interpretability Methods”, Proceedings of International Conference on Machine Learning, pp. 1-7, 2018.
  • G. Visani, E. Bagli, F. Chesani, A. Poluzzi and D. Capuzzo, “Statistical Stability Indices for LIME: Obtaining Reliable Explanations for Machine Learning Models”, Journal of the Operational Research Society, Vol. 73, No. 1, pp. 91-101, 2022.
  • M. Ghassemi, L. Oakden-Rayner and A.L. Beam, “The False Hope of Current Approaches to Explainable Artificial Intelligence in Healthcare”, The Lancet Digital Health, Vol. 3, No. 11, pp. 745-750, 2021.
  • S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
  • R.R. Selvaraju, A. Das and D. Batra, “Grad-CAM: Why did you say that?”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 354-356, 2016.
  • D. Smilkov, N. Thorat, B. Kim and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1-8, 2017.
  • J.T. Springenberg and M.A. Riedmiller, “Striving for Simplicity: The All Convolutional Net”, Proceedings of International Workshop on Information Communications, pp. 1-6, 2015.
  • M.L. Leavitt and A. Morcos, “Towards Falsifiable Interpretability Research”, Proceedings of International Workshop on Neural Information Processing Systems, pp. 98-104, 2020.
  • M. Sundararajan, A. Taly and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
  • J. Adebayo, J. Gilmer, M. Muelly and B. Kim, “Sanity Checks for Saliency Maps”, Proceedings of International Conference on Neurocomputing, pp. 9525-9536, 2018.
  • B. Venkatsubramaniam and P.K. Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 1-17, 2022.
  • A. Sangroya, M. Rastogi and L. Vig, “Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models”, Proceedings of International Workshop on Computational Intelligence, pp. 1-17, 2020.
  • A. Sangroya, C. Anantaram, M. Rawat and M. Rastogi, “Using Formal Concept Analysis to Explain Black Box Deep Learning Classification Models”, Proceedings of International Workshop on Machine Learning, pp. 19-26, 2019.
  • UCI, “UC Irvine Machine Learning Repository”, Available at: https://archive.ics.uci.edu/ml/index.php, Accessed 2022.
  • R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
  • UCI, “UCI Car Evaluation Data Set”, Available at: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation, Accessed 2022.
