
Training Neural Network Elements Created From Long Short-Term Memory


Authors

Kostantin P. Nikolic
Department of Informatics, Novi Sad, Serbia

Abstract

This paper presents the application of stochastic search algorithms to the training of artificial neural networks. The methodology developed in this work is intended primarily for training complex recurrent neural networks, which are known to be harder to train than feed-forward networks. The recurrent network is simulated to propagate the signal from input to output, while the training itself is carried out as a stochastic search in the parameter space. The performance of this type of algorithm is superior to that of most training algorithms based on the gradient concept. The efficiency of these algorithms is demonstrated by training networks built from units characterized by long-term and long short-term memory. The presented methodology is effective and relatively simple.
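
To make the training idea described above concrete, the following minimal Python sketch illustrates it under simplifying assumptions: an Elman-style recurrent cell stands in for the paper's memory units, the cost is plain mean squared error, and the stochastic direct search is a basic accept-if-better random-perturbation loop. The cell structure, cost function, search schedule, and all names and hyperparameters below are illustrative choices, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(params, inputs):
    """Propagate a sequence through a small Elman-style recurrent cell."""
    W_in, W_rec, W_out = params
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        outputs.append(W_out @ h)           # linear read-out
    return np.array(outputs)

def cost(params, inputs, targets):
    """Mean squared error over the whole output sequence."""
    return float(np.mean((rnn_forward(params, inputs) - targets) ** 2))

def stochastic_direct_search(params, inputs, targets, steps=2000, sigma=0.05):
    """Gradient-free training: keep a random perturbation only if it lowers the cost."""
    best = cost(params, inputs, targets)
    for _ in range(steps):
        trial = [w + sigma * rng.standard_normal(w.shape) for w in params]
        c = cost(trial, inputs, targets)
        if c < best:                         # accept improving moves only
            params, best = trial, c
    return params, best

# Toy task: predict the next value of a noisy sine wave.
t = np.linspace(0, 4 * np.pi, 120)
seq = np.sin(t) + 0.05 * rng.standard_normal(t.size)
inputs = seq[:-1].reshape(-1, 1)
targets = seq[1:].reshape(-1, 1)

n_hidden = 8
params = [0.3 * rng.standard_normal((n_hidden, 1)),         # input weights
          0.3 * rng.standard_normal((n_hidden, n_hidden)),  # recurrent weights
          0.3 * rng.standard_normal((1, n_hidden))]         # output weights

params, final_cost = stochastic_direct_search(params, inputs, targets)
print(f"final MSE: {final_cost:.4f}")

Because the search needs only forward simulations of the network, the same loop can be applied unchanged to units with richer internal memory, which is the property the abstract relies on to avoid gradient-based recurrent training.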

Keywords

Artificial Neural Networks (ANN), Feed-Forward Neural Networks (FNN), Recurrent Neural Networks (RNN), Simulation Graph Model (SGM) Networks, Stochastic Direct Search (SDS), Long-Term Memory (LTM), Long Short-Term Memory (LSTM) Units.


DOI: https://doi.org/10.13005/ojcst%2F10.01.01