
Experiments with LMS Algorithm


Affiliations
1 Department of Applied Science and Humanities, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India
2 Department of Electronics and Communication, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India
 

Abstract

Objectives: This article focuses on improving the convergence rate and reducing the number of operations used to train the Least Mean Square (LMS) algorithm. Methods/Statistical Analysis: In this paper, two modifications are suggested for training an adaptive filter with the LMS algorithm; one is based on the initialization of weights and the other on early termination of the training of a sequence. Findings: The optimum weights of an adaptive filter are conventionally found by initializing the weights with zeros, providing several random sequences as input, and updating the weights according to the error. Moreover, the weights are updated throughout each entire sequence even after they have converged. In the proposed algorithm, the weights are initialized with zeros only once, for the first sequence. The optimum weights obtained for a sequence are used as the initial weights for the subsequent sequence to improve the convergence rate. Further, to reduce the number of operations, the weight update process for a sequence is terminated when the error falls below a prescribed threshold. Applications/Improvements: Results show that these modifications increase the rate of convergence and decrease the number of multiplications.
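The two proposed modifications can be sketched as follows. This is a minimal NumPy illustration under an assumed noiseless system-identification setup; the function name, step size, threshold, and test filter below are illustrative choices, not values taken from the paper:

```python
import numpy as np

def lms_train(x, d, w, mu=0.01, threshold=None):
    """One training pass of LMS over input sequence x with desired signal d.

    w         -- initial weight vector (warm-started from the previous
                 sequence in the proposed scheme; zeros for the first one)
    mu        -- step size
    threshold -- if given, stop updating once |e| falls below it
                 (the proposed early-termination rule)
    Returns the adapted weights and the number of weight updates performed.
    """
    m = len(w)
    updates = 0
    for n in range(m - 1, len(x)):
        u = x[n - m + 1:n + 1][::-1]  # most recent m samples, newest first
        e = d[n] - w @ u              # instantaneous error
        if threshold is not None and abs(e) < threshold:
            break                     # early termination: weights converged
        w = w + mu * e * u            # standard LMS weight update
        updates += 1
    return w, updates

# Illustrative experiment (all values assumed, not from the paper):
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])        # unknown filter to be identified
w = np.zeros(3)                       # zeros only for the first sequence
for seq in range(5):
    x = rng.standard_normal(500)
    d = np.convolve(x, h)[:len(x)]    # desired output of the unknown system
    w, n_updates = lms_train(x, d, w, mu=0.05, threshold=1e-4)
# w is warm-started across sequences, so later sequences need fewer updates
```

Warm-starting means each later sequence begins near the optimum, and the threshold test skips the remaining multiplications of a sequence once the error is already negligible.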

Keywords

Convergence Rate, Initialization of Weights, LMS Algorithm, Multiplications, Mean Square Error, Threshold.
Authors

Rajesh Chandrakant Sanghvi
Department of Applied Science and Humanities, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India
Himanshu B Soni
Department of Electronics and Communication, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India




DOI: https://doi.org/10.17485/ijst/2016/v9i48/138626