An Experimental Evaluation of Bayesian Classifiers Applied to Intrusion Detection
Background/Objectives: Security is gaining importance in today's highly connected world. In this paper we study the application of Bayesian classifiers to improve intrusion detection. Methods/Statistical analysis: We compared three Bayesian classifiers that can be used for intrusion detection, viz., Naïve Bayes, Naïve Bayes Updateable, and BayesNet. These classifiers were tested using the data mining tool WEKA, and the dataset used for the comparative experimental evaluation was the NSL-KDD dataset. Findings: We performed an experimental evaluation of the above-mentioned algorithms on the NSL-KDD dataset. The results showed BayesNet to be the better classifier, although it still requires some improvement. BayesNet achieved a True Positive Rate of around 95% with a False Positive Rate as low as 4.87%, whereas both Naïve Bayes and its updateable version yielded a True Positive Rate of around 80% and a False Positive Rate of 19.26%, which is poor in comparison. Similarly, BayesNet had lower error rates than its counterparts: its evaluation produced a Mean Absolute Error of around 5% and a Root Mean Squared Error of around 21%, while for Naïve Bayes and Naïve Bayes Updateable the Mean Absolute Error was around four times that of BayesNet and the Root Mean Squared Error was twice that of BayesNet. Further results and analysis are provided in sections 7 and 8, respectively. Application/Improvements: The studied classifiers need further improvements, e.g., model building time for the BayesNet classifier and classification rate for the other two classifiers.
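The metrics reported in the abstract (True Positive Rate, False Positive Rate, Mean Absolute Error, and Root Mean Squared Error) are the standard quantities WEKA computes during evaluation. As a minimal sketch of how they are defined for a binary normal-vs-intrusion task, the following pure-Python snippet computes them from hypothetical labels and classifier outputs (the label and probability values below are made up for illustration; they are not results from the paper):

```python
import math

def tp_fp_rates(actual, predicted, positive=1):
    """True Positive Rate and False Positive Rate from hard predictions."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    tpr = tp / (tp + fn) if tp + fn else 0.0   # detected intrusions / all intrusions
    fpr = fp / (fp + tn) if fp + tn else 0.0   # false alarms / all normal traffic
    return tpr, fpr

def mae_rmse(actual, prob_positive):
    """Mean Absolute Error and Root Mean Squared Error from class probabilities."""
    errors = [abs(a - p) for a, p in zip(actual, prob_positive)]
    mae = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

# Hypothetical example: 1 = intrusion, 0 = normal.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 1, 0, 1, 0]
probs     = [0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3]

tpr, fpr = tp_fp_rates(actual, predicted)
mae, rmse = mae_rmse(actual, probs)
```

A high TPR with a low FPR (as reported for BayesNet) means most intrusions are caught while raising few false alarms; MAE and RMSE instead measure how well-calibrated the predicted class probabilities are.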
Keywords
Bayes Net, Classification, Data Mining, Flex Bayes, Intrusion Detection, Naïve Bayes, Naïve Bayes Updateable, WEKA