
An Overview of Artificial Neural Networks: Part 4 Learning Mechanism




This paper presents the concepts of the learning mechanism in Artificial Neural Networks (ANNs). The set of well-defined rules for the solution of a learning problem is called a learning algorithm. Learning algorithms differ from one another in the way the adjustment to a synaptic weight of a neuron is formulated. There are many learning rules; several of them are discussed in this paper. The Hebbian, Delta, Competitive, Memory-based, Outstar and Boltzmann learning rules are discussed in detail, tabulated and compared in terms of weight adjustment, initial weight setting and learning paradigm, i.e. supervised or unsupervised. Each learning rule is presented together with its mathematical justification and its applicability. ANNs can be trained using these rules to perform meaningful tasks such as clustering, recognition or association. The concept of the Perceptron is discussed in the next part of this paper series.
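To make the comparison concrete, the two most familiar rules can be sketched as weight-update functions. This is a minimal illustrative sketch, not code from the paper; the function names, learning rate, and toy values are assumptions for demonstration only.

```python
# Illustrative sketch of two of the learning rules compared in the paper.
# All names and numeric values here are hypothetical examples.

def hebbian_update(w, x, y, eta=0.1):
    """Hebbian rule (unsupervised): strengthen each weight in proportion
    to the product of pre-synaptic input x_i and post-synaptic output y."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

def delta_update(w, x, y, target, eta=0.1):
    """Delta rule (supervised): adjust each weight in proportion to the
    error (target - y) times the corresponding input x_i."""
    error = target - y
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Toy example with a single linear neuron.
w = [0.5, 0.2]
x = [1.0, -1.0]
y = sum(wi * xi for wi, xi in zip(w, x))  # linear output: 0.5 - 0.2

w_hebb = hebbian_update(w, x, y)                  # correlation-driven change
w_delta = delta_update(w, x, y, target=1.0)       # error-driven change
```

Note the key contrast the paper tabulates: the Hebbian update needs no target signal (unsupervised), while the Delta update requires a desired output (supervised).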


Keywords

Hebbian Learning, Delta Learning Rule, Competitive Learning Rule, Memory-Based Learning, Outstar Learning Rule, Boltzmann Learning Rule.

