Baruah, Pallav Kumar
- Comparative Study of XAI Using Formal Concept Lattice and LIME
Authors
Affiliations
1 Master of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Soft Computing, Vol. 13, No. 1 (2023), Pagination: 2782-2791

Abstract
Local Interpretable Model-Agnostic Explanation (LIME) is a technique for explaining a black-box machine learning model through a surrogate model. While the technique is very popular, its approach is inherently indirect: explanations are generated from the surrogate model, not directly from the black-box model, which may not be acceptable as trustworthy in sensitive domains such as healthcare. LIME also assumes that features are independent and reports the feature weights of the surrogate linear model as feature importance. In real-life datasets, however, features may be dependent, and a combination of features with specific values can be the deciding factor rather than individual feature importance. Furthermore, LIME fits its surrogate model on random instances generated around the point of interest; these instances need not belong to the original data distribution and may even be meaningless. In this work, we compare LIME with explanations derived from the formal concept lattice. The lattice approach uses no surrogate model; it is deterministic, generating synthetic data that respects the implications in the original dataset rather than sampling at random. It obtains the crucial feature combinations, with their values, that act as decision factors, without presuming dependence or independence of features. Its explanations cover not only the point of interest but also a global explanation of the model, together with similar and contrastive examples around the point of interest. Being textual, the explanations are easier to comprehend than the weights of a surrogate linear model.

Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, Deterministic Methods for XAI

References
- C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
- Alejandro Barredo Arrieta, Natalia Diaz-Rodriguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
- M.T. Ribeiro, S. Singh and C. Guestrin, “Why Should I Trust You? : Explaining the Predictions of Any Classifier”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
- Alvarez-Melis, David and Tommi S. Jaakkola, “On the Robustness of Interpretability Methods”, Proceedings of International Conference on Machine Learning, pp. 1-7, 2018.
- G. Visani, Enrico Bagli, Federico Chesani, Alessandro Poluzzi and Davide Capuzzo, “Statistical Stability Indices for LIME: Obtaining Reliable Explanations for Machine Learning Models”, Journal of the Operational Research Society, Vol. 73, No. 1, pp. 91-101, 2022.
- Marzyeh Ghassemi, Luke Oakden Rayner and Andrew L Beam, “The False Hope of Current Approaches to Explainable Artificial Intelligence in Healthcare”, The Lancet Digital Health, Vol. 3, No. 11, pp. 745-750, 2021.
- S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
- R.R. Selvaraju, A. Das and D. Batra, “Grad-CAM: Why did you say that?”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 354-356, 2016.
- D. Smilkov, N. Thorat, B. Kim and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1-8, 2017.
- J.T. Springenberg and M.A. Riedmiller, “Striving for Simplicity: the all Convolutional Net”, Proceedings of International Workshop on Information Communications, pp. 1-6, 2015.
- M.L. Leavitt and A. Morcos, “Towards Falsifiable Interpretability Research”, Proceedings of International Workshop on Neural Information Processing Systems, pp. 98-104, 2020.
- M. Sundararajan, A. Taly and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
- J. Adebayo, J. Gilmer, M. Muelly and B. Kim, “Sanity Checks for Saliency Maps”, Proceedings of International Conference on Neurocomputing, pp. 9525-9536, 2018.
- Venkatsubramaniam Bhaskaran and Pallav Kumar Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 1-17, 2022.
- A. Sangroya, M. Rastogi and L. Vig, “Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models”, Proceedings of International Workshop on Computational Intelligence, pp. 1-17, 2020.
- A. Sangroya, C. Anantaram, M. Rawat and M. Rastogi, “Using Formal Concept Analysis to Explain Black Box Deep Learning Classification Models”, Proceedings of International Workshop on Machine Learning, pp. 19-26, 2019.
- UCI, “UC Irvine Machine Learning Repository”, Available at: https://archive.ics.uci.edu/ml/index.php, Accessed in 2022.
- R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
- UCI, “UCI Car Evaluation Data Set”, Available at: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation, Accessed in 2022.
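The surrogate-model mechanism that the abstract critiques can be sketched in a few lines: perturb the point of interest at random, label the perturbations with the black box, and fit a proximity-weighted linear model whose coefficients serve as feature importances. The black-box function, kernel width, and sampling scale below are illustrative assumptions, not the paper's experimental setup.

```python
import math
import random

def black_box(x1, x2):
    # Opaque model standing in for any classifier: class 1 iff a nonlinear condition holds.
    return 1.0 if x1 * x1 + x2 > 1.0 else 0.0

def lime_weights(x, n_samples=500, sigma=0.75, seed=0):
    """Fit a locally weighted linear surrogate around point x; return per-feature weights."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 0.5) for xi in x]      # random instance near x
        d2 = sum((a - b) ** 2 for a, b in zip(x, z))
        X.append([1.0] + z)                              # intercept + features
        y.append(black_box(*z))                          # label from the black box
        w.append(math.exp(-d2 / sigma ** 2))             # proximity kernel
    # Solve weighted least squares (X^T W X) beta = (X^T W y) by Gaussian elimination.
    k = len(X[0])
    A = [[sum(wi * r[i] * r[j] for wi, r in zip(w, X)) for j in range(k)] for i in range(k)]
    b = [sum(wi * r[i] * yi for wi, r, yi in zip(w, X, y)) for i in range(k)]
    for col in range(k):                                 # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):                       # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta[1:]                                      # drop the intercept

print(lime_weights([0.9, 0.1]))                          # both weights positive near the boundary
```

Note how the weights depend on the random perturbations and the kernel width sigma; this sampling sensitivity is exactly the instability the abstract raises against surrogate-based explanations.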
- Blockchain-Enabled Collaborative Platform for AI as a Service
Authors
Affiliations
1 Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India, IN
2 Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, India, IN
Source
ICTACT Journal on Soft Computing, Vol. 13, No. 3 (2023), Pagination: 2909-2916

Abstract
With the advent of technology, modern human activities produce huge amounts of data. This wealth of data enables better model training and hence more accurate predictions. However, most business entities lack the facilities and resources to develop an AI system. There is a need for a platform to which businesses can outsource data collection, model development, and deployment, with models tailored to each use case. The work presented here addresses these needs using blockchain and incremental learning. Transactions and user identification on the platform are implemented using blockchain, so that ownership of models and datasets is maintained in a transparent, immutable, and decentralized manner. Incremental learning algorithms are employed to enable real-time updating of the models. All models and datasets collected on the platform are treated as resources, and the platform opens up an avenue for a marketplace of data and trained models.

Keywords
Blockchain, Incremental Learning, AI as a Service, Data Marketplace

References
- Saeedeh Parsaeefard, Iman Tabrizian and A. Leon-Garcia, “Artificial Intelligence as a Service (AI-aaS) on Software-Defined Infrastructure”, 2019.
- Overview of Amazon Web Services, Available at https://docs.aws.amazon.com/whitepapers/latest/aws-overview/introduction.html, Accessed in 2022.
- Practitioners Guide to MLOps: A Framework for Continuous Delivery and Automation of Machine Learning, Available at https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf, Accessed in 2021.
- Azure Arc-enabled Machine Learning, Available at https://azure.microsoft.com/en-gb/resources/azure-arc-enabled-machine-learning-white-paper/, Accessed in 2022.
- A. Gepperth and Barbara Hammer, “Incremental Learning Algorithms and Applications”, Proceedings of European Symposium on Artificial Neural Networks, pp. 1-13, 2016.
- D. Justin and B.W. Harris, “Decentralized and Collaborative Ai on Blockchain”, Proceedings of IEEE International Conference on Blockchain, pp. 1-7, 2019.
- J.D. Harris, “Analysis of Models for Decentralized and Collaborative AI on Blockchain”, Proceedings of IEEE International Conference on Blockchain, pp. 1-7, 2020.
- Xuhui Chen, “When Machine Learning Meets Blockchain: A Decentralized, Privacy-Preserving and Secure Design”, Proceedings of IEEE International Conference on Big Data, pp. 1-13, 2018.
- Nenad Petrovic, “Model-Driven Approach to Blockchain-Enabled MLOps”, Proceedings of 9th IEEE International Conference on IcETRAN, pp. 1-6, 2022.
- A. Marcelletti and Andrea Morichetta, “Exploring the Benefits of Blockchain Technology for MLOps Pipeline”, Proceedings of IEEE International Conference on Foundations of Consensus and Distributed Ledgers, pp. 13-17, 2022.
- D.C. Nguyen, “Federated Learning meets Blockchain in Edge Computing: Opportunities and Challenges”, IEEE Internet of Things Journal, Vol. 8, No. 16, pp. 12806-12825, 2021.
- Shiva Raj and Jinho Choi, “Federated Learning with Blockchain for Autonomous Vehicles: Analysis and Design Challenges”, IEEE Transactions on Communications, Vol. 68, No. 8, pp. 4734-4746, 2020.
- H. Kim, “Blockchained on-Device Federated Learning”, IEEE Communications Letters, Vol. 24, No. 6, pp. 1279-1283, 2019.
- Stuart Haber and W. Scott Stornetta, “How to Time-Stamp a Digital Document”, Proceedings of International Conference on the Theory and Application of Cryptography, pp. 1-6, 1990.
- Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System”, Available at https://www.ussc.gov/sites/default/files/pdf/training/annual-national-trainingseminar/2018/Emerging_Tech_Bitcoin_Crypto.pdf, Accessed in 2009.
- C. Zhang, Cangshuai Wu and Xinyi Wang, “Overview of Blockchain Consensus Mechanism”, Proceedings of International Conference on Big Data Engineering, pp. 1-13, 2020.
- Liudmila Zavolokina, Noah Zani and Gerhard Schwabe, “Why should I Trust a Blockchain Platform? Designing for Trust in the Digital Car Dossier”, Proceedings of International Conference on Design Science Research in Information Systems and Technology, pp. 1-13, 2019.
- Vitalik Buterin, “A Next-Generation Smart Contract and Decentralized Application Platform”, Available at https://blockchainlab.com/pdf/Ethereum_white_paper-a_next_generation_smart_contract_and_decentralized_application_platform-vitalik-buterin.pdf, Accessed in 2014.
- Markus Schaffer and Gernot Salzer, “Performance and Scalability of Private Ethereum Blockchains”, Proceedings of International Conference on Business Process Management, pp. 1-7, 2019.
- Jasvant Mandloi and Pratosh Bansal, “An Empirical Review on Blockchain Smart Contracts: Application and Challenges in Implementation”, International Journal of Computer Networks and Applications, Vol. 7, No. 2, pp. 43-61, 2020.
- M. Wohrer and Uwe Zdun, “Smart Contracts: Security Patterns in the Ethereum Ecosystem and Solidity”, Proceedings of International Workshop on Blockchain Oriented Software Engineering, pp. 43-56, 2018.
- Y. Liu and H. Song, “Class-Incremental Learning for Wireless Device Identification in IoT”, IEEE Internet of Things Journal, Vol. 8, No. 23, pp. 17227-17235, 2021.
- T. Li, S. Fong, R.C. Millham, J. Fiaidhi and S. Mohammed, “Fast Incremental Learning with Swarm Decision Table and Stochastic Feature Selection in an IoT Extreme Automation Environment”, IT Professional, Vol. 21, No. 2, pp. 14-26, 2019.
- Jacob Montiel, Talel Abdessalem and Albert Bifet, “River: Machine Learning for Streaming Data in Python”, Proceedings of International Conference on Machine Learning, pp. 1-8, 2020.
- Nakhoon Choi and Heeyoul Kim, “A Blockchain-Based User Authentication Model using MetaMask”, Journal of Internet Computing and Services, Vol. 20, No. 6, pp. 119-127, 2019.
- Svelte: Cybernetically Enhanced Web Apps, Available at https://svelte.dev/, Accessed in 2022.
- Mattias Levlin, “DOM Benchmark Comparison of the Front-End JavaScript Frameworks React, Angular, Vue, and Svelte”, Available at https://www.doria.fi/handle/10024/177433, Accessed in 2020.
- Mufid Mohammad Robihul, “Design an MVC Model using Python for Flask Framework Development”, Proceedings of IEEE International Symposium on Electronics, pp. 1-5, 2019.
- Andrea Perrichon-Chretien and Nicolas Herbaut, “SAIaaS: A Blockchain-based Solution for Secure Artificial Intelligence as-a-Service”, Proceedings of International Conference on Deep Learning, Big Data and Blockchain, pp. 1-7, 2022.
- OpenZeppelin, Available at https://github.com/OpenZeppelin, Accessed in 2022.
- S. Moreschini, “MLOps for Evolvable AI Intensive Software Systems”, Proceedings of IEEE International Conference on Software Analysis, Evolution and Reengineering, pp. 1-13, 2022.
- AWS IAM User Guide, Available at https://docs.aws.amazon.com/IAM/latest/UserGuide/iam-ug.pdf, Accessed in 2021.
- Elie Kapengut and Bruce Mizrach, “An Event Study of the Ethereum Transition to Proof-of-Stake”, Proceedings of European Symposium on Artificial Neural Networks, pp. 1-8, 2022.
- M. Harikrishnan and K.V. Lakshmy, “Secure Digital Service Payments using Zero Knowledge Proof in Distributed Network”, Proceedings of International Conference on Advanced Computing and Communication Systems, pp. 1-8, 2019.
- Salem Alqahtani and Murat Demirbas, “Bottlenecks in Blockchain Consensus Protocols”, Proceedings of IEEE International Conference on Omni-Layer Intelligent Systems, pp. 1-8, 2021.
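The two ingredients the abstract combines can be illustrated with a minimal sketch: a hash-chained ledger that makes recorded model updates tamper-evident, and an incremental learner that updates from one sample at a time without storing the data. The class names and the running-mean "model" are illustrative stand-ins, not the platform's actual components.

```python
import hashlib
import json

class Ledger:
    """Append-only chain of records; each block's hash covers the previous hash."""
    def __init__(self):
        self.blocks = []                          # list of (payload, hash) pairs

    def append(self, payload):
        prev = self.blocks[-1][1] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        self.blocks.append((payload, hashlib.sha256(body.encode()).hexdigest()))

    def verify(self):
        prev = "0" * 64
        for payload, h in self.blocks:            # recompute every hash in order
            body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != h:
                return False
            prev = h
        return True

class IncrementalMean:
    """Toy incremental learner: updates a running mean one sample at a time."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def learn_one(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n     # online update; no data retained

ledger, model = Ledger(), IncrementalMean()
for x in [2.0, 4.0, 6.0]:
    model.learn_one(x)
    ledger.append({"n": model.n, "mean": model.mean})

print(model.mean, ledger.verify())                # 4.0 True
# Tampering with a recorded update breaks the chain from that block onward:
ledger.blocks[1] = ({"n": 2, "mean": 99.0}, ledger.blocks[1][1])
print(ledger.verify())                            # False
```

A production platform would replace the toy learner with a streaming library and the ledger with smart contracts, but the trust property is the same: any retroactive change to a recorded model update invalidates all subsequent hashes.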
- Evaluation of Lattice Based XAI
Authors
Affiliations
1 Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Soft Computing, Vol. 14, No. 2 (2023), Pagination: 3180-3187

Abstract
With multiple methods available to extract explanations from a black-box model, it becomes important to evaluate the correctness of these Explainable AI (XAI) techniques themselves. While many XAI evaluation methods require manual intervention, we use computable evaluation methods, in the interest of objectivity, to test the basic nature and sanity of an XAI technique. From the existing literature we pick basic axioms and sanity tests that XAI techniques are expected to satisfy: axioms such as Feature Sensitivity, Implementation Invariance, and Symmetry Preservation, and sanity tests such as model parameter randomization, model-outcome relationship, and input transformation invariance. After reviewing the axioms and sanity tests, we apply them to existing XAI techniques to check whether they are satisfied. We then evaluate our lattice-based XAI technique against these axioms and sanity tests using a mathematical approach, proving that it satisfies them and thereby establishing the correctness of the explanations it extracts.

Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI Evaluation

References
- M.T. Ribeiro, S. Singh and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
- R. Selvaraju, R. Ramprasaath, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh and Dhruv Batra, “Grad-Cam: Visual Explanations from Deep Networks via Gradient-based Localization”, Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.
- D. Smilkov and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, Proceedings of the IEEE International Conference on Computer Vision, pp. 18-26, 2017.
- R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “A Novel Approach to Explainable AI using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 1-13, 2022.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “XAI using Formal Concept Lattice for Image Data”, ICTACT Journal on Image and Video Processing, Vol. 13, No. 3, pp. 2904-2913, 2023.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “Comparative Study of XAI using Formal Concept Lattice and LIME”, ICTACT Journal on Soft Computing, Vol. 13, No. 1, pp. 2782-2791, 2022.
- C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
- Alejandro Barredo Arrieta, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
- O. Biran and C. Cotton, “Explanation and Justification in Machine Learning: A Survey”, Proceedings of Workshop on Explainable Artificial Intelligence, pp. 1-6, 2017.
- R.R. Hoffman and Jordan Litman, “Metrics for Explainable AI: Challenges and Prospects”, Proceedings of the IEEE International Conference on Computer Vision, pp. 1-14, 2018.
- Sina Mohseni, Niloofar Zarei and Eric D. Ragan, “A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems”, ACM Transactions on Interactive Intelligent Systems, Vol. 11, No. 3-4, pp. 1-45, 2021.
- Andrew Slavin Ross, Michael C. Hughes and Finale Doshi-Velez, “Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations”, Proceedings of International Joint Conference on Artificial Intelligence, pp. 2662-2670, 2017.
- S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
- Avanti Shrikumar, Peyton Greenside and Anshul Kundaje, “Learning Important Features through Propagating Activation Differences”, Proceedings of International Joint Conference on Machine Learning, pp. 3145-3153, 2017.
- Oliver Zhang, Randall J. Lee, Yiran Chen and Xiao Hu, “Explainability Metrics of Deep Convolutional Networks for Photoplethysmography Quality Assessment”, IEEE Access, Vol. 9, pp. 29736-29745, 2021.
- M. Sundararajan, A. Taly and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
- J. Adebayo and B. Kim, “Sanity Checks for Saliency Maps”, Proceedings of International Joint Conference on Artificial Intelligence, pp. 9525-9536, 2018.
- M.D. Zeiler and Rob Fergus, “Visualizing and Understanding Convolutional Networks”, Proceedings of International Joint Conference on Computer Vision, pp. 818-833, 2014.
- J.T. Springenberg, Alexey Dosovitskiy, Thomas Brox and Martin Riedmiller, “Striving for Simplicity: The All Convolutional Net”, Proceedings of International Joint Conference on Computer Vision, pp. 1-8, 2014.
- Sebastian Bach, Frederick Klauschen, Klaus-Robert Muller and Wojciech Samek, “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, PloS One, Vol. 10, No. 7, pp. 1-12, 2015.
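Two of the computable checks named in the abstract, Feature Sensitivity and Implementation Invariance, can be sketched concretely: an attribution method should assign zero importance to a feature the model ignores, and identical attributions to two different implementations of the same function. The ablation-based attribution and the toy models below are illustrative assumptions, not the paper's lattice technique.

```python
def ablation_attribution(model, x, baseline):
    """Attribution of feature i = change in output when x[i] is reset to its baseline."""
    out = model(x)
    attrs = []
    for i in range(len(x)):
        z = list(x)
        z[i] = baseline[i]          # ablate one feature at a time
        attrs.append(out - model(z))
    return attrs

def model_a(x):                     # toy black box: ignores x[2] entirely
    return 3.0 * x[0] + 2.0 * x[1]

def model_b(x):                     # same function, different implementation
    return x[0] + x[0] + x[0] + x[1] * 2.0

point, base = [1.0, 1.0, 5.0], [0.0, 0.0, 0.0]
attrs_a = ablation_attribution(model_a, point, base)
attrs_b = ablation_attribution(model_b, point, base)

print(attrs_a)                      # [3.0, 2.0, 0.0]: the ignored feature gets zero
assert attrs_a == attrs_b           # Implementation Invariance holds for this method
```

A method that fails such checks, for example by attributing nonzero importance to the unused third feature, would be giving explanations unrelated to the model's behavior, which is exactly what these axioms are designed to catch.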