
Experimental Study and Review of Boosting Algorithms






Authors

Harshita L. Patel
Computer Department, Dharamsinh Desai University, Nadiad, Gujarat, India
Amit P. Ganatra
Computer Department, CHARUSAT University, Changa, Gujarat, India
C. K. Bhensdadia
Computer Department, Dharamsinh Desai University, Nadiad, Gujarat, India
Y. P. Kosta
CHARUSAT University, Gujarat, India

Abstract

The use of ensembles of classifiers is currently an active research topic. An ensemble is obtained by generating base classifiers with other machine learning methods and combining their predictions. Combining the output of different models is a natural way to make decisions more reliable, and several machine learning techniques do exactly that by learning an ensemble of models and using them in combination; prominent among these are bagging, boosting, and stacking. All three can, more often than not, improve predictive performance over a single model, and they are general techniques that apply to both classification and numeric prediction tasks. Although bagging, boosting, and stacking were developed only over the past decade, their performance is often astonishingly good. In this paper, we present a comparative study of boosting algorithms and compare their performance with bagging and stacking. The machine learning algorithms implemented in WEKA (Waikato Environment for Knowledge Analysis) are used for the comparative study, and the results obtained by the different algorithms over several datasets are compared.
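To illustrate the kind of comparison described above, the following is a minimal sketch, not the authors' actual experimental code, of how WEKA's Java API can cross-validate a boosted, a bagged, and a stacked ensemble on the same dataset. The dataset path and the choice of J48, Naive Bayes, and SMO as base learners are assumptions made for this example.

    import java.util.Random;

    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;
    import weka.classifiers.meta.AdaBoostM1;
    import weka.classifiers.meta.Bagging;
    import weka.classifiers.meta.Stacking;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class EnsembleComparison {
        public static void main(String[] args) throws Exception {
            // Hypothetical dataset path; any ARFF file with a nominal class works.
            Instances data = DataSource.read("iris.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Boosting: AdaBoost.M1 over a J48 decision tree base learner.
            AdaBoostM1 boost = new AdaBoostM1();
            boost.setClassifier(new J48());

            // Bagging over the same base learner, for a like-for-like comparison.
            Bagging bag = new Bagging();
            bag.setClassifier(new J48());

            // Stacking: heterogeneous base learners combined by a J48 meta-learner.
            Stacking stack = new Stacking();
            stack.setClassifiers(new Classifier[] { new J48(), new NaiveBayes(), new SMO() });
            stack.setMetaClassifier(new J48());

            Classifier[] models = { boost, bag, stack };
            String[] names = { "Boosting (AdaBoostM1)", "Bagging", "Stacking" };

            // 10-fold cross-validation; report predictive accuracy for each ensemble.
            for (int i = 0; i < models.length; i++) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(models[i], data, 10, new Random(1));
                System.out.printf("%-22s accuracy: %.2f%%%n", names[i], eval.pctCorrect());
            }
        }
    }

Repeating such a run over several datasets and base learners yields accuracy tables of the kind compared in this study.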

Keywords

Bagging, Bayesian Network, Boosting, Classifiers, Ensemble Learning, Feature Selection, Machine Learning, MLP (Multi-Layer Perceptron), Naive Bayes Classifier, Predictive Accuracy, SMO (Sequential Minimal Optimization Algorithm for Training a Support Vector Classifier), Stacking