Improving the Character Recognition Efficiency of a Feed-Forward BP Neural Network
This work focuses on improving the character recognition capability of a feed-forward back-propagation neural network by using one, two, and three hidden layers together with a modified additional momentum term. A set of 182 English letters was collected for this work, and the equivalent binary matrix form of these characters was applied to the neural network as training patterns. While the network was being trained, the connection weights were modified at each epoch of learning. For each training sample, the error surface was examined for minima by computing the gradient descent. The experiment started with one hidden layer, and the number of hidden layers was then increased to three; the accuracy of the network increased and the mean square error decreased, but at the cost of longer training time. The recognition accuracy improved further when the modified additional momentum term was used.
Keywords
Character Recognition, MLP, Hidden Layers, Back-Propagation, Momentum Term.
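For illustration, the sketch below shows back-propagation with a classical momentum term, where each weight change is the gradient step plus a fraction of the previous change. This is a minimal sketch only: the abstract does not specify the exact form of the "modified additional momentum term", the network sizes, or the character image dimensions, so the standard momentum update, a single hidden layer with sigmoid activations, and flattened binary character matrices are assumed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MomentumMLP:
    """Feed-forward network trained with back-propagation plus a
    classical momentum term (a sketch; the paper's exact 'modified
    additional momentum term' is not given in the abstract)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, momentum=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr, self.momentum = lr, momentum
        # Previous weight changes, required by the momentum term.
        self.dW1_prev = np.zeros_like(self.W1)
        self.dW2_prev = np.zeros_like(self.W2)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)       # hidden-layer activations
        self.y = sigmoid(self.h @ self.W2)  # output-layer activations
        return self.y

    def train_epoch(self, X, T):
        """One epoch of gradient descent with momentum.
        X: (n_samples, n_in) flattened binary character matrices.
        T: (n_samples, n_out) one-hot target letters."""
        y = self.forward(X)
        err = y - T                                   # output error
        delta_out = err * y * (1.0 - y)               # output-layer delta
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        # Weight change = gradient step + momentum * previous change.
        dW2 = -self.lr * self.h.T @ delta_out + self.momentum * self.dW2_prev
        dW1 = -self.lr * X.T @ delta_hid + self.momentum * self.dW1_prev
        self.W2 += dW2
        self.W1 += dW1
        self.dW1_prev, self.dW2_prev = dW1, dW2
        return float(np.mean(err ** 2))               # mean square error
```

As a hypothetical usage, 182 letters rendered as 7x5 binary grids could be flattened into 35-element input vectors and trained against 26 one-hot outputs by calling `train_epoch` repeatedly and monitoring the returned mean square error; these sizes are assumptions for illustration, not values reported in the paper.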