
Evaluation of Lattice Based XAI







Authors

Bhaskaran Venkatsubramaniam
Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India
Pallav Kumar Baruah
Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, India

Abstract


With multiple methods available to extract explanations from a black-box model, it becomes important to evaluate the correctness of these Explainable AI (XAI) techniques themselves. While many XAI evaluation methods require manual intervention, we use computable XAI evaluation methods, in order to remain objective, to test the basic nature and sanity of an XAI technique. We pick four basic axioms and three sanity tests from existing literature that XAI techniques are expected to satisfy: axioms such as Feature Sensitivity, Implementation Invariance and Symmetry Preservation, and sanity tests such as Model Parameter Randomization, Model-Outcome Relationship and Input Transformation Invariance. After reviewing the axioms and sanity tests, we apply them to existing XAI techniques to check whether they are satisfied. Thereafter, we evaluate our lattice-based XAI technique against these axioms and sanity tests using a mathematical approach. This work proves these axioms and sanity tests for our lattice-based XAI technique, thereby establishing the correctness of the explanations it extracts.
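Of the evaluation methods named above, the model parameter randomization sanity test is the most directly computable: re-initialize the trained model's weights at random and verify that the explanation changes substantially; an explanation that is insensitive to the weights cannot be reflecting what the model learned. The sketch below is only an illustration of that idea, not the paper's method: it assumes a toy logistic-regression "model" and a gradient-times-input attribution as a hypothetical stand-in for the XAI technique under test, and it uses plain Pearson correlation as the similarity measure.

import numpy as np

def predict(w, b, x):
    # Toy black-box model: logistic regression, sigmoid(w.x + b).
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def grad_times_input(w, b, x):
    # Stand-in attribution method: d(output)/d(input) scaled by the input.
    p = predict(w, b, x)
    return (p * (1.0 - p) * w) * x

def parameter_randomization_check(w, b, x, seed=0):
    """Model parameter randomization sanity test (sketch): compare the
    attribution of the trained model with that of a randomly re-initialized
    model. A correlation close to 1 means the explanation ignores the model
    parameters, i.e. the attribution method fails the sanity test."""
    rng = np.random.default_rng(seed)
    attr_trained = grad_times_input(w, b, x)
    w_rand = rng.normal(size=w.shape)   # discard the learned weights
    b_rand = rng.normal()
    attr_random = grad_times_input(w_rand, b_rand, x)
    return float(np.corrcoef(attr_trained, attr_random)[0, 1])

if __name__ == "__main__":
    w = np.array([2.0, -1.0, 0.5, 0.0])   # pretend these were learned
    b = 0.1
    x = np.array([1.0, 3.0, -2.0, 0.7])
    print("attribution correlation after randomization:",
          parameter_randomization_check(w, b, x))

In practice the same comparison would be run over many inputs and with rank-based similarity measures, but the pass/fail criterion is the same: a trustworthy attribution method must produce markedly different explanations once the model's parameters are randomized.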

Keywords


Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI Evaluation.
