Binary Step Size Variations of LMS and NLMS


IOSR Journal of VLSI and Signal Processing (IOSR-JVSP)
Volume 2, Issue 4 (May-Jun. 2013), PP 07-13
e-ISSN: 2319-4200, p-ISSN: 2319-4197

Binary Step Size Variations of LMS and NLMS

C Mohan Rao 1, Dr. B Stephen Charles 2, Dr. M N Giri Prasad 3
1 Asso. Prof., Department of ECE, NBKRIST, Vidhyanagar, India
2 Principal, SSCET, Kurnool, India
3 Prof. & Head, Department of ECE, JNTUACE, Pulivendula, India

Abstract: Due to its ease of implementation, the least mean square (LMS) algorithm is one of the most well-known algorithms for mobile communication systems. Its main limitation, however, is a relatively slow convergence rate. In this paper two new variable step size least mean square (LMS) adaptive filter algorithms are proposed. In the first algorithm, two step sizes are calculated from two values that vary from iteration to iteration. This algorithm is analogous to the LMS algorithm and produces better convergence performance than LMS. In the second algorithm two step sizes are likewise calculated, based on a single variable. This algorithm is analogous to the normalised least mean square (NLMS) algorithm and produces better convergence performance than NLMS.

Keywords: LMS, NLMS, Binary Step, Channel Equalization.

I. INTRODUCTION

Consider the communication system depicted in figure 1. The communication channel is continuous, but due to matched filtering, the effective signaling path from the transmitter pulse shaping to the receiver matched filter is discrete. We can therefore model the system as a discrete one:

y[n] = \sum_{l=0}^{L-1} h_l x[n-l] + v[n]

If h_l \neq 0 for l \neq 0, the channel causes intersymbol interference (ISI): a symbol x[n] interferes with the next L-1 symbols. A matched filter alone cannot overcome this degradation; an equalizer is needed.

Figure 1. Communication System

An equalizer can be classified in several different ways, as shown in figure 2.

Figure 2. Types of Equalizers
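As a concrete illustration of the discrete channel model above, a minimal Python sketch follows; the 3-tap channel, noise level and BPSK source are assumptions made here for illustration, not taken from the paper's simulations.

```python
import numpy as np

def channel_output(x, h, noise_std, rng):
    """Discrete ISI channel: y[n] = sum_{l=0}^{L-1} h[l] x[n-l] + v[n]."""
    y = np.convolve(x, h)[:len(x)]                # causal L-tap FIR channel
    v = noise_std * rng.standard_normal(len(x))   # additive noise v[n]
    return y + v

# Example: BPSK symbols through a 3-tap channel; h[l] != 0 for l != 0, so
# each symbol x[n] interferes with the next L-1 = 2 symbols (ISI).
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=1000)
h = np.array([1.0, 0.5, 0.2])
y = channel_output(x, h, noise_std=0.05, rng=rng)
```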

1.1. Related Work

In the field of adaptive signal processing, the least mean square (LMS) algorithm is an extensively explored algorithm due to its simplicity [1-3]. The LMS algorithm has been widely used in mobile communications [4-8] for multi-user detection, adaptive antenna arrays, and equalization, to name just a few applications. The main limitation of the algorithm is its convergence rate [6, 10]: the LMS algorithm has the conflicting requirements of a small step-size parameter to reduce misadjustment and a large step-size parameter to achieve fast convergence. Methods addressing the convergence rate issue fall into two main categories: time-domain and transform-domain methods. In the time-domain category, researchers constantly look for new methods of step size selection to improve the convergence rate. The most common scheme is the gear-shifting approach [9], which uses a large step size during the transient state and then shifts to a smaller one during the steady state. Other important time-domain approaches include variable step-size LMS algorithms, which make the step size data-dependent. However, when the input signal is highly colored, LMS algorithms tend to converge slowly. Previous methods alleviate the correlation of the input signal by pre-whitening it using a number of transforms, of which the frequency-domain versions are the most common. Frequency-domain LMS algorithms can increase the convergence speed for broadband signals.

In the current paper two variable step size LMS algorithms are proposed: one, named Binary Step Size LMS, outperforms LMS, and the other, named Binary Step Size NLMS, outperforms NLMS. The rest of the paper is organized as follows. In the next two sections the LMS and NLMS algorithms are explained. In section 4 the binary step size algorithms are presented. Simulation results are given in section 5, and section 6 concludes the paper.

II. LEAST MEAN SQUARES FILTER (LMS)

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). LMS is a stochastic gradient descent method, in that the filter is adapted based only on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff [10].

Figure 3. LMS Adaptive Filtering

2.1. Relationship to the least mean squares filter

The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix X and output vector y, is

\hat{\beta} = (X^T X)^{-1} X^T y

The FIR Wiener filter is related to the least mean squares filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations; its solution converges to the Wiener filter solution. Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter \hat{h}(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); y(n), v(n) and h(n) are not directly observable. Its solution is closely related to the Wiener filter.
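For concreteness, the batch least-squares estimate quoted above can be computed directly; a minimal sketch (the function and variable names are illustrative):

```python
import numpy as np

def least_squares(X, y):
    """Batch estimate beta_hat = (X^T X)^{-1} X^T y."""
    # lstsq is numerically safer than explicitly inverting X^T X
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat
```

As the text notes, the iterative LMS filter avoids forming these correlations explicitly, yet its solution converges to the same Wiener solution.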
The basic idea behind the LMS filter is to approach the optimum filter weights (R^{-1} p, the Wiener solution) by updating the filter weights in a manner that converges to the optimum. The algorithm starts by assuming small weights (zero in most cases), and at each step the weights are updated using the gradient of the mean square error. That is, if the MSE gradient is positive, the error would keep increasing if the same weights were used for further iterations, which means we need to reduce the weights; in the same way, if the gradient is negative, we need to increase the weights. The basic weight update equation is

W_{n+1} = W_n - \mu \nabla \varepsilon[n]

where \varepsilon represents the mean-square error. The negative sign indicates that we change the weights in the direction opposite to that of the gradient slope. The mean-square error, as a function of the filter weights, is a quadratic function, which means it has only one extremum, which minimizes the mean-square error; that is the optimal weight. The LMS thus approaches the optimal weights by descending the mean-square-error vs. filter-weight curve.

The LMS algorithm for a p-th order filter can be summarized as follows.

Parameters: p = filter order, \mu = step size
Initialization: \hat{h}(0) = 0
Computation: for n = 0, 1, 2, ...
    x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
    e(n) = d(n) - \hat{h}^H(n) x(n)
    \hat{h}(n+1) = \hat{h}(n) + \mu e^*(n) x(n)

2.2. Convergence and stability in the mean

As the LMS algorithm does not use the exact values of the expectations, the weights never reach the optimal weights in the absolute sense, but convergence in the mean is possible. That is, even though the weights may change by small amounts, they change about the optimal weights. However, if the variance with which the weights change is large, convergence in the mean would be misleading. This problem may occur if the value of the step size \mu is not chosen properly.

If \mu is chosen to be large, the amount by which the weights change depends heavily on the gradient estimate, and so the weights may change by a large value, so that a gradient which was negative at the first instant may now become positive. At the second instant the weights may then change by a large amount in the opposite direction because of the negative gradient, and would thus keep oscillating with a large variance about the optimal weights. On the other hand, if \mu is chosen to be too small, the time to converge to the optimal weights will be too large. Thus, an upper bound on \mu is needed, given by

0 < \mu < 2 / \lambda_{max}

where \lambda_{max} is the greatest eigenvalue of the autocorrelation matrix R = E{x(n) x^H(n)}. If this condition is not fulfilled, the algorithm becomes unstable and \hat{h}(n) diverges. Maximum convergence speed is achieved when

\mu = 2 / (\lambda_{max} + \lambda_{min})

where \lambda_{min} is the smallest eigenvalue of R. Given that \mu is less than or equal to this optimum, the convergence speed is determined by \lambda_{min}, with a larger value yielding faster convergence. This means that faster convergence can be achieved when \lambda_{max} is close to \lambda_{min}; that is, the maximum achievable convergence speed depends on the eigenvalue spread of R.

A white-noise signal has autocorrelation matrix R = \sigma^2 I, where \sigma^2 is the variance of the signal. In this case all eigenvalues are equal, and the eigenvalue spread is the minimum over all possible matrices. The common interpretation of this result is therefore that the LMS converges quickly for white input signals, and slowly for colored input signals, such as processes with low-pass or high-pass characteristics.

It is important to note that the above upper bound on \mu only enforces stability in the mean; the coefficients of \hat{h}(n) can still grow infinitely large, i.e. divergence of the coefficients is still possible. A more practical bound is

0 < \mu < 2 / tr[R]

where tr[R] denotes the trace of R. This bound guarantees that the coefficients of \hat{h}(n) do not diverge (in practice, the value of \mu should not be chosen close to this upper bound, since it is somewhat optimistic due to the approximations and assumptions made in the derivation of the bound).
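A direct Python transcription of the LMS summary above, assuming real-valued signals (so the conjugate in e^*(n) is dropped); the names and the sample stability check are illustrative, not from the paper.

```python
import numpy as np

def lms(x, d, p, mu):
    """LMS adaptive filter of order p with fixed step size mu.

    Update per the summary above: e(n) = d(n) - h^T x_vec(n);
    h(n+1) = h(n) + mu * e(n) * x_vec(n)  (real-valued signals assumed).
    """
    h = np.zeros(p)                           # initialization: h_hat(0) = 0
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]      # [x(n), x(n-1), ..., x(n-p+1)]
        e[n] = d[n] - h @ x_vec
        h = h + mu * e[n] * x_vec
    return h, e

# The practical stability bound from the text is 0 < mu < 2/tr[R]; for a
# wide-sense stationary input, tr[R] ~ p * E[x^2], estimated from the data:
# mu_max = 2.0 / (p * np.mean(x**2))
```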

III. NORMALIZED LEAST MEAN SQUARES FILTER (NLMS)

The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate \mu that guarantees stability of the algorithm. The normalized least mean squares (NLMS) filter is a variant of the LMS algorithm that solves this problem by normalizing with the power of the input. The NLMS algorithm can be summarized as follows.

Parameters: p = filter order, \mu = step size
Initialization: \hat{h}(0) = 0
Computation: for n = 0, 1, 2, ...
    x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
    e(n) = d(n) - \hat{h}^H(n) x(n)
    \hat{h}(n+1) = \hat{h}(n) + \mu e^*(n) x(n) / (x^H(n) x(n))

3.1. Optimal learning rate

It can be shown that if there is no interference (v(n) = 0), then the optimal learning rate for the NLMS algorithm is \mu_opt = 1 and is independent of the input x(n) and the real (unknown) impulse response h(n). In the general case with interference (v(n) \neq 0), the optimal learning rate is

\mu_opt = E[|y(n) - \hat{y}(n)|^2] / E[|e(n)|^2]

The result above assumes that the signals v(n) and x(n) are uncorrelated with each other, which is generally the case in practice.

IV. BINARY STEP SIZE ALGORITHMS

4.1. Binary Step Size LMS (BSSLMS)

Here we have two step sizes calculated from two values, delta and deviation. When the error increases from its previous value, the step size is delta + deviation; when the error decreases from its previous value, the step size is delta - deviation. It has been found that this converges nearly as fast as the NLMS algorithm.

4.2. Binary Step Size NLMS (BSSNLMS)

Here we have two step sizes calculated from a single value, delta. When the error increases from its previous value, the step size is delta + 0.01; when the error decreases from its previous value, the step size is delta - 0.01. It has been found that this converges nearly as fast as the NLMS algorithm, with the added advantage that the mean square error decreases compared to NLMS. (A code sketch of both update rules is given after the simulation discussion below.)

V. SIMULATION RESULTS

The performance of the proposed algorithms as well as LMS and NLMS is presented in this section. In general the mean square error should reach zero as soon as possible, but in practice the random nature of various phenomena makes it difficult for automated systems to serve this purpose. The basic algorithm in adaptive filtering, the LMS, initializes the process of predicting the unknown system or channel; but because of its fixed step size the LMS takes a considerable amount of time and complexity, and modifications of LMS in different directions have been proposed in the literature. The performance of the LMS algorithm is shown in part (a) of figures 4, 5 and 6. In figure 4, the mean square error (MSE) is plotted against the number of iterations, which inherently represents the time to converge. As can be observed, in the case of LMS the maximum value of the MSE is around 0.45; the MSE is under 0.05 after 50 iterations, but even after 500 iterations the error is not less than 0.01. In the normalized LMS algorithm, on the other hand, the MSE has a maximum value around 0.85; the MSE is greater than 0.05 until about 100 iterations, but from that point onwards the MSE is less than even 0.01. In the case of BSSLMS, the maximum value of the MSE is around 0.45; the MSE is less than 0.05 by 25 iterations, and from 50 iterations onwards the MSE is less than 0.01. In the case of BSSNLMS, the maximum value of the MSE is around 0.38; the MSE is greater than 0.1 until about 50 iterations, but from 100 iterations onwards the MSE is less than 0.005.
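To make Section IV concrete, here is a minimal Python sketch of NLMS and of one reading of the binary step-size rules described above. The paper gives only a verbal description, so the comparison of |e(n)| against |e(n-1)|, the epsilon regularizer, and all parameter values are assumptions, not the authors' exact formulation.

```python
import numpy as np

def nlms(x, d, p, mu, eps=1e-8):
    """NLMS: LMS with the step normalized by the input power (Section III)."""
    h = np.zeros(p)
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]      # [x(n), ..., x(n-p+1)]
        e[n] = d[n] - h @ x_vec
        h = h + mu * e[n] * x_vec / (x_vec @ x_vec + eps)  # eps avoids 0/0
    return h, e

def bss(x, d, p, delta, deviation=0.01, normalized=False, eps=1e-8):
    """Binary step size LMS/NLMS, one reading of Section IV:
    step = delta + deviation when |e| grew since the previous iteration,
    step = delta - deviation when it shrank. With normalized=True this
    mimics BSSNLMS (deviation fixed at 0.01 in the paper); otherwise BSSLMS.
    """
    h = np.zeros(p)
    e = np.zeros(len(x))
    prev = 0.0                                # |e| of the previous iteration
    for n in range(p - 1, len(x)):
        x_vec = x[n - p + 1:n + 1][::-1]
        e[n] = d[n] - h @ x_vec
        mu = delta + deviation if abs(e[n]) > prev else delta - deviation
        g = x_vec / (x_vec @ x_vec + eps) if normalized else x_vec
        h = h + mu * e[n] * g
        prev = abs(e[n])
    return h, e

# Illustrative use for channel identification (parameters assumed):
# h_nlms, e_nlms = nlms(x, d, p=3, mu=0.5)
# h_bss,  e_bss  = bss(x, d, p=3, delta=0.05, deviation=0.01, normalized=True)
```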

Figure 4. Performance of Different Algorithms (MSE vs. No. of Iterations, 0-500); panels: (a) LMS, (b) NLMS, (c) BSSLMS, (d) BSSNLMS

Figure 5. Performance of Different Algorithms (MSE vs. No. of Iterations, 0-50); panels: (a) LMS, (b) NLMS, (c) BSSLMS, (d) BSSNLMS

Figure 6. Performance of Different Algorithms (MSE vs. No. of Iterations, 50-500); panels: (a) LMS, (b) NLMS, (c) BSSLMS, (d) BSSNLMS

VI. CONCLUSIONS

The two proposed algorithms produce comparatively good results when compared with LMS and NLMS. The BSSLMS shows an excellent decay of the MSE within the first 25 iterations, so its average MSE would be very low compared to all the algorithms of interest. The BSSNLMS, on the other hand, has a high MSE in the initial iterations, but after 100 iterations the MSE is almost 0.005 and still decreasing in further iterations. We can therefore conclude that the proposed algorithms outperform the well-known LMS and NLMS. The results obtained are comparable to those of [11]-[13].

REFERENCES

Journal Papers:
[1] W.-P. Ang, B. Farhang-Boroujeny, A new class of gradient adaptive step-size LMS algorithms, IEEE Trans. Signal Process. 49, 805-810 (2001).
[2] C. Gazor, Predictions in LMS-type adaptive algorithms for smoothly time-varying environments, IEEE Trans. Signal Process. 47, 1735-1739 (1999).
[3] D.G. Manolakis, V.K. Ingle, S.M. Kogon, Statistical and Adaptive Signal Processing, McGraw-Hill, New York, 2000.

[4] L.C. Godara, Applications of array antenna to mobile communications, Part I: Performance improvement, feasibility, and system considerations, Proc. IEEE 85, 1031-1060 (1997).
[5] L.C. Godara, Applications of array antenna to mobile communications, Part II: Beam forming and direction-of-arrival considerations, Proc. IEEE 85, 1195-1245 (1997).
[6] S. Haykin, Adaptive Filter Theory, Prentice Hall, New York, 1995.
[7] J.H. Winters, Signal acquisition and tracking with adaptive arrays in the digital mobile radio system IS-54 with flat fading, IEEE Trans. Veh. Technol. 42, 377-384 (1993).
[8] J.H. Winters, Smart antennas for wireless systems, IEEE Pers. Commun., 23-27 (1998).
[9] V. Solo, X. Kong, Adaptive Signal Processing Algorithms: Stability and Performance, Prentice Hall, New York, 1995.
[10] Bernard Widrow and Marcian E. Hoff, Adaptive Switching Circuits, IRE Wescon Convention Record, 1960.
[11] Qi Zhang, Yuancheng Yao, Mingwei Qin, An Uncorrelated Variable Step-size LMS Adaptive Algorithm, Journal of Emerging Trends in Computing and Information Sciences, Vol. 3, No. 11, Nov. 2012.
[12] Mohammad Pourbagher, Abdolkhalegh Mohammadi, Javad Nourinia, Changiz Ghobadi, Variable Step Variable Weights LMS (VSVWLMS) Algorithm by Modified Leaky LMS, International Journal of Modern Engineering Research (IJMER), Vol. 2, Issue 4, July-Aug. 2012, pp. 2889-2892.
[13] Zhang Jingjing, Variable Step Size LMS Algorithm, International Journal of Future Computer and Communication, Vol. 1, No. 4, December 2012.

C. Mohan Rao (India), born on 1st April 1975, is pursuing a Ph.D. at Jawaharlal Nehru Technological University Anantapur. He received the Master of Technology from Pondicherry Central University in the year 1999 and the Bachelor of Technology from S.V. University, Tirupati, in the year 1997. He started his career as a hardware engineer at Hi-com Technologies from 1999 to 2001, then worked as Assistant Professor at G.P.R.E.C. from 2001 to 2007 and as Associate Professor at N.B.K.R.I.S.T. from 2007 till date. He is a member of IETE and ISTE.

Dr. B. Stephen Charles (India) was born on the 9th of August 1965. He received the Ph.D. degree in Electronics & Communication Engineering from Jawaharlal Nehru Technological University, Hyderabad, in 2001. His area of interest is digital signal processing. He received his B.Tech. degree in Electronics and Communication Engineering from Nagarjuna University, India, in 1986. He started his career as Assistant Professor at Karunya Institute of Technology from 1989 to 1993, later joined as Associate Professor at K.S.R.M. College of Engineering from 1993 to 2001, after which he worked as Principal of St. John's College of Engineering & Technology from 2001 to 2007; he is now the Secretary, Correspondent and Principal of Stanley Stephen College of Engineering & Technology, Kurnool. He has 24 years of teaching and research experience. He has published more than 40 research papers in national and international journals and more than 30 research papers in national and international conferences. He is a member of the Institute of Engineers and ISTE.

Dr. M.N. Giri Prasad received his B.Tech. degree from J.N.T. University College of Engineering, Anantapur, Andhra Pradesh, India, in 1982, the M.Tech. degree from Sri Venkateswara University, Tirupati, Andhra Pradesh, India, in 1994, and the Ph.D. degree from J.N.T. University, Hyderabad, Andhra Pradesh, India, in 2003. Presently he is working as a Professor in the Department of Electronics and Communication at J.N.T. University College of Engineering, Anantapur, Andhra Pradesh, India.
He has more than 25 years of teaching and research experience. He has published more than 50 papers in national and international journals and more than 30 research papers in national and international conferences. His research areas are wireless communications, biomedical instrumentation, digital signal processing, VHDL coding and evolutionary computing. He is a member of ISTE, IE & NAFEN.