Impact of Weight Initialization on Training of Sigmoidal FFANN
Weight initialization is one of the most important factors affecting the training speed of a neural network. In this paper we train sigmoidal FFANNs using random and Nguyen-Widrow weight initialization alongside the proposed weight initialization method. Various data sets are used as input, five of which are taken from the UCI Machine Learning Repository. The RPROP back-propagation algorithm is used for training and testing, with different numbers of input and hidden-layer nodes and a single output node. We find that in almost all cases the proposed weight initialization method gives better results.
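For reference, the Nguyen-Widrow scheme mentioned above scales randomly drawn hidden-layer weights so each neuron's weight vector has a fixed magnitude, spreading the active regions of the sigmoids across the input range. A minimal NumPy sketch follows; the function name is ours, and it assumes inputs scaled to [-1, 1] and a single hidden layer, which is not spelled out in the abstract:

```python
import numpy as np

def nguyen_widrow_init(n_in, n_hidden, seed=None):
    """Sketch of Nguyen-Widrow initialization for one sigmoidal hidden layer.

    Assumes inputs are scaled to [-1, 1]. Returns (weights, biases).
    """
    rng = np.random.default_rng(seed)
    # Scale factor from Nguyen and Widrow (1990): beta = 0.7 * H^(1/N)
    beta = 0.7 * n_hidden ** (1.0 / n_in)
    # Draw random weights, then rescale each neuron's weight vector to norm beta
    w = rng.uniform(-1.0, 1.0, size=(n_hidden, n_in))
    w = beta * w / np.linalg.norm(w, axis=1, keepdims=True)
    # Biases spread uniformly so the sigmoid transition regions cover the input range
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b
```

Output-layer weights are typically left small and random; only the input-to-hidden layer benefits from this rescaling.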
Keywords
Feed Forward Artificial Neural Network, Back-Propagation Algorithm, Weight Initialization.
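The RPROP algorithm of Riedmiller and Braun, cited in the references, adapts an individual step size for each weight based only on the sign of successive partial derivatives, not their magnitude. A minimal sketch of one update in the RPROP- variant is given below; the function name and standard parameter values (eta+ = 1.2, eta- = 0.5) are illustrative, not taken from the paper:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP- update on weights w given the current and previous gradients.

    Per-weight step sizes grow when the gradient keeps its sign and
    shrink when it flips; updates use only the gradient's sign.
    """
    sign_change = grad * prev_grad
    # Same sign: accelerate. Sign flip: back off the step size.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # RPROP-: after a sign flip, skip the update for that weight this iteration
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step
```

In a training loop the returned `grad` and `step` arrays are carried over to the next call as `prev_grad` and `step`.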
- G.P. Drago and S. Ridella, “Statistically Controlled Activation Weight Initialization (SCAWI)”, IEEE Transactions on Neural Networks, Vol. 3, No. 4, pp. 627-631, 1992.
- Y.K. Kim and J.B. Ra, “Weight Value Initialization for Improving Training Speed in the Back Propagation Networks”, Proceedings of IEEE International Joint Conference on Neural Networks, pp. 2396-2401, 1991.
- S.S. Sodhi and P. Chandra, “Interval based Weight Initialization Method for Sigmoidal Feed Forward Artificial Neural Networks”, Proceedings of 2nd AASRI Conference on Computational Intelligence and Bioinformatics, pp. 19-25, 2014.
- D. Nguyen and B. Widrow, “Improving the Learning Speed of 2-Layer Neural Networks by Choosing Initial Values of the Adaptive Weights”, Proceedings of International Joint Conference on Neural Networks, pp. 23-29, 1990.
- Veenu, M.P.S. Bhatia and P. Chandra, “Comparison of Sigmoidal FFANN Training Algorithms for Function Approximation Problems”, Proceedings of International Conference on Computing for Sustainable Global Development, pp. 325-329, 2015.
- M. Stinchcombe and H. White, “Approximating and Learning Unknown Mappings using Multilayer Feed Forward Networks with Bounded Weights”, Proceedings of International Joint Conference on Neural Networks, Vol. 3, pp. 7-16, 1990.
- V. Cherkassky and R. Shepherd, “Regularization Effect of Weight Initialization in Back Propagation Networks”, Proceedings of IEEE International Joint Conference on Neural Networks, pp. 2258-2261, 1998.
- M. Riedmiller and H. Braun, “A Direct Adaptive Method for Faster Back Propagation Learning: The RPROP Algorithm”, Proceedings of IEEE International Conference on Neural Networks, pp. 586-591, 1993.
- Y. LeCun, L. Bottou, G. Orr and K. Muller, “Efficient BackProp”, Neural Networks: Tricks of the Trade, Springer, 1998.
- V. Cherkassky, D. Gehring and F. Mulier, “Comparison of Adaptive Methods for Function Estimation from Samples”, IEEE Transactions on Neural Networks, Vol. 7, No. 4, pp. 969-984, 1996.
- G. Thimm and E. Fiesler, “Neural Network Initialization”, Proceedings of International Workshop on Artificial Neural Networks, pp. 535-542, 2005.
- MATLAB version R2013a 8.1, Available at: https://www.mathworks.com/matlabcentral/answers/112649-matlab-student-r2013a-compatibility-with-windows-8-1.
- Neural Network Toolbox, Available at: http://www.mathworks.in/help/nnet/ref.
- Simon Haykin, “Neural Networks-A Comprehensive Foundation”, 2nd Edition, Prentice Hall, 1999.
- Simon Haykin, “Neural Networks and Learning Machines”, 3rd Edition, PHI Learning Private Limited, 2011.
- UCI Machine Learning Repository, Available at: archive.ics.uci.edu/ml/.