IS NEGATIVE STEP SIZE LMS ALGORITHM STABLE OPERATION POSSIBLE?

Dariusz Bismor
Institute of Automatic Control, Silesian University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland
e-mail: Dariusz.Bismor@polsl.pl

ICSV21, Beijing, China, July 13-17, 2014

The Least Mean Squares (LMS) algorithm and its variants are the most popular choice in many systems that require gradient-based adaptation. Examples of such applications include system identification, line enhancement, line equalization, adaptive echo cancellation and active noise cancellation. The only drawback of the LMS-family algorithms is the need for a careful step size choice. Too small a step size, although giving good excess mean squared error (MSE), results in slow convergence. Too large a step size results in large excess MSE and may lead to loss of convergence and instability. Therefore, there are many theoretical studies of the algorithm's behavior that aim to provide useful bounds on the step size. Regardless of the analytic method applied, the common result of many investigations seems to be a lower bound of zero for the convergence of LMS-like algorithms (e.g. µ > 0). In this paper we show that at least one of the LMS-family algorithms, the Leaky LMS algorithm, is capable of stable operation even if the step size has a (small) negative value. The theoretical derivation of the sufficient stability condition has been validated by a number of simulations.

1. Introduction

The history of stability and convergence analysis of the Least Mean Squares (LMS) algorithm is long. The first results were given in the 1970s by Widrow, who invented the LMS algorithm [10]. These results were obtained under many assumptions; the research therefore continued during the next decade with the aim of relaxing the assumptions and providing results useful in practice. In 1984 Gardner published a comparative study of the results obtained thus far, using the so-called independence assumption [4]. This assumption requires that the input signal sequences are independent, identically distributed (i.i.d.) sequences. It is clear that the independence assumption does not apply when the input vector is constructed from a tapped delay line: any two consecutive input vectors share the majority of their samples, and are therefore not independent. Ten years elapsed before a comprehensive analysis without the independence assumption was published; this analysis was delivered by Butterweck [3]. Butterweck used the so-called small step size assumption, meaning that the step size is small enough to treat the LMS adaptive filter as a low-pass filter with a low cutoff frequency. The small step size assumption describes sufficiently well an adaptive filter operating near its optimum, with only very slight adjustments made by the LMS algorithm, and it is common in modern analyses of the LMS algorithm. However, it does not apply to phases of rapid adaptation, e.g. during the initial phase of the filter operation.

There are many modifications of the LMS algorithm in the literature. The best known is probably the Normalized LMS algorithm [5]. Many algorithms were developed with the aim of improving the convergence rate or the excess mean square error (excess MSE), e.g. the correlation LMS [6]. Other variants were developed for specific applications, e.g. the filtered-x LMS algorithm was developed for active noise control. Usually, the authors of a modified LMS algorithm provide a theoretical analysis of the modified algorithm and give bounds on the step size that guarantee convergence. Some authors claim that the bounds are a necessary and sufficient condition for convergence; others maintain that the bounds constitute just a sufficient condition. While the upper bound varies with the algorithm, the lower bound is always given, no matter which convergence type is discussed, as µ > 0. For the majority of the algorithms this is perfectly true: the step size must be positive for the algorithm to converge. But there is at least one algorithm that can operate with a negative step size and remain convergent in the mean (stable). This algorithm is the Leaky LMS algorithm.

The Leaky LMS algorithm was first introduced by Ungerboeck in 1976 [9]. The analysis of convergence in the mean (stability) and convergence in the mean square sense was provided, under the independence assumptions, by Mayyas et al. [7]. The authors of that publication claim that for the Leaky LMS algorithm to be stable it is necessary that the step size be greater than zero. Other authors give similar stability bounds, e.g. Sayed [8].

The goal of this paper is to show that the publications that give µ > 0 as a necessary condition for the Leaky LMS algorithm stability (or convergence in the mean) are wrong. We will show that even if we do not limit the input signal to i.i.d. sequences, the Leaky LMS algorithm can operate with a (small) negative step size and remain stable. It must be emphasized that this paper deals with stability, or convergence in the mean, only.

2. Assumptions and notation

Consider the classical adaptive filtering problem [5], where the input signal, u(n), is filtered with the adaptive filter, W, to produce the output signal, y(n). The output signal is then compared with the desired signal, d(n), to produce the error signal, e(n). The problem will be dealt with under the following two assumptions:

- all the signals are discrete-time, real, and finite-valued,
- the adaptive filter W is a linear, discrete-time, transversal filter with finite impulse response (FIR) and real taps.

Using the above assumptions, the filter output can be written as:

    y(n) = \sum_{i=0}^{L-1} w_i(n) u(n-i),    (1)

where w_i(n) is the i-th filter coefficient (tap) at discrete time n, and L is the filter length. To simplify the notation it is useful to define:

    w(n) = [w_0(n), w_1(n), \ldots, w_{L-1}(n)]^T,    (2)
    u(n) = [u(n), u(n-1), \ldots, u(n-L+1)]^T,    (3)

where T denotes transposition. Then, Eq. (1) can be written as:

    y(n) = w^T(n) u(n) = u^T(n) w(n).    (4)

To keep the equations simple, we will write the Leaky LMS algorithm update equation in a less common form:

    w(n+1) = \gamma w(n) + \mu u(n) e(n),    (5)

where 0 ≤ γ < 1 is the leakage, µ is the step size, and e(n) = d(n) - y(n) is the error signal. Please note that we exclude the case γ = 1, which is treated elsewhere [1].
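For readers who prefer code to notation, the adaptation loop of Eqs. (1)-(5) can be written as a short NumPy routine. This is a minimal illustrative sketch under the two assumptions above; the function name and signature are ours, not part of the original paper.

    import numpy as np

    def leaky_lms(u, d, L, mu, gamma):
        """Leaky LMS adaptation, Eq. (5): w(n+1) = gamma*w(n) + mu*u(n)*e(n)."""
        w = np.zeros(L)                      # tap vector w(n), Eq. (2)
        e = np.zeros(len(u))
        for n in range(L - 1, len(u)):
            un = u[n - L + 1:n + 1][::-1]    # regressor u(n), Eq. (3)
            y = w @ un                       # filter output, Eq. (4)
            e[n] = d[n] - y                  # error e(n) = d(n) - y(n)
            w = gamma * w + mu * un * e[n]   # leaky update, Eq. (5)
        return w, e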

3. Stability of the Leaky LMS algorithm

Using the filter output Eq. (4), the error can be written as:

    e(n) = d(n) - u^T(n) w(n).    (6)

Substituting Eq. (6) into Eq. (5) and rearranging the terms, we have:

    w(n+1) = [\gamma I - \mu u(n) u^T(n)] w(n) + \mu u(n) d(n).    (7)

The above equation can be viewed as a discrete, nonstationary state-space equation:

    \tilde{x}(n+1) = A(n) \tilde{x}(n) + B(n) \tilde{u}(n),    (8)

with the matrices and vectors defined as:

    A(n) = \gamma I - \mu u(n) u^T(n),   \tilde{x}(n) = w(n),    (9)
    B(n) = \mu u(n),   \tilde{u}(n) = d(n).    (10)

The matrix A(n) defined in Eq. (9) will be referred to as the Leaky LMS algorithm stability matrix. For this matrix the following theorem holds.

Theorem 1 (Leaky LMS Stability Matrix Eigenvalues and Eigenvector). Assume the matrix A(n) ∈ R^{L×L} is the LMS stability matrix defined as in Eq. (9) at any discrete time n. Then the matrix has an eigenvalue:

    \lambda_1(n) = \gamma - \mu \sum_{i=0}^{L-1} u^2(n-i),    (11)

with the corresponding eigenvector u(n). The remaining eigenvalues are all equal to γ.

For the proof of this theorem see [1] and the Appendix.

Using the principle of contraction mapping [2], and considering that the Leaky LMS algorithm stability matrix defined in Eq. (9) is a symmetric matrix, we may conclude that the Leaky LMS algorithm sufficient stability condition is determined by the only non-γ eigenvalue (the remaining eigenvalues are inside the unit circle, as γ < 1). If the absolute value of λ1 is less than or equal to 1 in all the adaptation steps, the adaptive system remains stable [1]. Thus, we may write the Leaky LMS algorithm sufficient stability condition as:

    \forall n: \; |\lambda_1(n)| = \left| \gamma - \mu \sum_{i=0}^{L-1} u^2(n-i) \right| \le 1.    (12)

Solving the above inequality gives:

    \frac{\gamma - 1}{\sum_{i=0}^{L-1} u^2(n-i)} \le \mu \le \frac{\gamma + 1}{\sum_{i=0}^{L-1} u^2(n-i)},    (13)

provided \sum_{i=0}^{L-1} u^2(n-i) = \|u(n)\|^2 \ne 0. Remembering that γ < 1, it follows from Eq. (13) that the lower bound for the step size is negative. For example, if γ = 0.98, the step size should be greater than or equal to −0.02 divided by the squared norm of the input vector. This is in contradiction with the result provided by Mayyas et al. [7], where the authors claim the step size is required to be positive (a necessary condition).
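Theorem 1 is easy to check numerically: for any input vector, one eigenvalue of A(n) equals γ − µ‖u(n)‖², with u(n) as its eigenvector, and the remaining L − 1 eigenvalues equal γ. A small sketch of such a check (ours, with arbitrary example values):

    import numpy as np

    rng = np.random.default_rng(0)
    L, gamma, mu = 10, 0.98, -0.001               # small negative step size

    u = rng.standard_normal(L)                    # an input vector u(n)
    A = gamma * np.eye(L) - mu * np.outer(u, u)   # stability matrix, Eq. (9)

    lam1 = gamma - mu * (u @ u)                   # predicted eigenvalue, Eq. (11)
    eig = np.linalg.eigvalsh(A)                   # A is symmetric

    assert np.any(np.isclose(eig, lam1))              # lambda_1 is an eigenvalue
    assert np.sum(np.isclose(eig, gamma)) >= L - 1    # the rest equal gamma
    assert np.allclose(A @ u, lam1 * u)               # u is its eigenvector

With γ = 0.98 and µ = −0.001 as above, λ1 = 0.98 + 0.001‖u(n)‖², which satisfies Eq. (12) as long as ‖u(n)‖² ≤ 20, illustrating how a small negative step size can remain inside the stability bound.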

4. Simulation results

Consider the Leaky-Normalized LMS algorithm, given by:

    w(n+1) = \gamma w(n) + \mu(n) u(n) e(n),    (14)

where

    \mu(n) = \frac{\mu}{\sum_{i=0}^{L-1} u^2(n-i)}.    (15)

Combining Eqs. (13) and (15), we conclude that for this algorithm to be stable it suffices that:

    \gamma - 1 \le \mu \le \gamma + 1.    (16)

The Leaky-Normalized LMS algorithm and the above condition constitute the easiest way to check the validity of the theory developed in the previous section. It must be remembered, however, that the condition defined in Eq. (16) is a sufficient condition only; therefore stable adaptation with an even lower (or greater) step size is also possible.

4.1 Identification experiments

The first experiments are based on the system identification principle. The identified plant was a second-order, all-pole model with the transfer function:

    K(z^{-1}) = \frac{1}{(z - 0.8)(z - 0.9)}.    (17)

This plant was excited with Gaussian white noise of variance 1, while the output was disturbed with additive Gaussian noise of variance 10^{-2}. The adaptive FIR filter modeling the plant had 10 taps. The leakage factor was γ = 0.98. The simulations were repeated 100 times with different white noise sequences, and the results were averaged. The results of these experiments, for different values of the step size, are presented in Fig. 1.

Figure 1. The identification experiment with the leakage factor γ = 0.98.

Using the assumed leakage factor value and Eq. (16), we conclude that for stability of the adaptation it suffices that the normalized step size is within:

    -0.02 \le \mu \le 1.98.    (18)

From Fig. 1 it is clear that the Leaky LMS algorithm remains stable (although not convergent) for µ ≥ −0.02; moreover, it is even stable for µ = −0.03. This is all in agreement with the theory, as the developed condition is a sufficient condition only.
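A reduced version of this experiment is easy to reproduce. The sketch below is ours, not the paper's original code: it runs a single realization (no averaging over 100 runs), implements the plant of Eq. (17) as the difference equation with denominator z² − 1.7z + 0.72, and adapts a 10-tap Leaky-NLMS filter with a negative normalized step size inside the bound of Eq. (18).

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(1)
    N, L, gamma, mu = 20000, 10, 0.98, -0.01          # mu inside Eq. (18)

    u = rng.standard_normal(N)                        # white excitation, variance 1
    d = lfilter([0, 0, 1], [1, -1.7, 0.72], u)        # plant of Eq. (17)
    d = d + 0.1 * rng.standard_normal(N)              # output noise, variance 1e-2

    w = np.zeros(L)
    e = np.zeros(N)
    for n in range(L - 1, N):
        un = u[n - L + 1:n + 1][::-1]
        e[n] = d[n] - w @ un
        w = gamma * w + (mu / (un @ un)) * un * e[n]  # Leaky-NLMS, Eqs. (14)-(15)

    # With mu < 0 the filter does not converge to the plant, but every step has
    # |lambda_1(n)| = |gamma - mu| = 0.99 < 1, so the taps stay bounded (stable).
    print("final tap norm:", np.linalg.norm(w))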

4.2 Line enhancement experiments

Another set of experiments performed to verify the result in Eq. (13) concerned line enhancement. An adaptive line enhancer (ALE) is a technique that extracts highly correlated components from uncorrelated signals [5]. It may be used to remove correlated disturbances from speech signals, and this application was simulated here. The input signal to the ALE was a speech recording disturbed with four sines of different, constant frequencies. The ALE length and the decorrelation delay were both equal to 10. The filter was adapted using the Leaky-Normalized LMS algorithm, with the same leakage factor γ = 0.98; therefore the bounds for stable operation remain the same as in the previous experiment.

Figure 2. The adaptive line enhancer experiments with the leakage factor γ = 0.98.

The results of the ALE experiments are presented in Fig. 2. Similarly to the identification experiments, the Leaky-Normalized LMS algorithm remains stable for small negative values of the step size. Here, it is stable as long as the step size µ ≥ −0.05. Further experiments, not shown here for clarity of presentation, revealed that the stable region is even larger: no instability was observed for µ > −0.1 with this setup. Again, this is in agreement with Eq. (13), as the stability condition is a sufficient condition only.
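The speech recording used in the paper is not available here, so the sketch below substitutes a stand-in signal (broadband noise plus four tones); the sample rate and tone frequencies are our assumptions, while the filter length, delay, and leakage follow the description above. With a negative step size the point is not enhancement quality but bounded adaptation.

    import numpy as np

    rng = np.random.default_rng(2)
    fs, N = 8000, 40000                   # assumed sample rate (not stated in paper)
    L, delay, gamma, mu = 10, 10, 0.98, -0.02

    t = np.arange(N) / fs
    tones = sum(np.sin(2 * np.pi * f * t) for f in (400, 800, 1300, 2100))
    x = rng.standard_normal(N) + tones    # stand-in for the disturbed speech

    w = np.zeros(L)
    y = np.zeros(N)                       # ALE output: estimate of the tones
    for n in range(L + delay - 1, N):
        un = x[n - delay - L + 1:n - delay + 1][::-1]  # delayed regressor
        y[n] = w @ un
        e = x[n] - y[n]                   # decorrelated residual
        w = gamma * w + (mu / (un @ un)) * un * e      # Leaky-NLMS update

    print("final tap norm (finite => stable):", np.linalg.norm(w))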

5. Conclusions

It is common to assume that the step size in LMS-derived algorithms must be positive for the algorithm to remain stable. Such a requirement is also presented in the literature as a necessary condition for Leaky LMS algorithm stability. Although this is probably a correct condition for the majority of the algorithms derived from the LMS algorithm, the Leaky LMS algorithm is capable of stable operation even when the step size has a small negative value.

The paper shows how discrete systems theory can be used to calculate a correct sufficient stability condition for the Leaky LMS algorithm. It also presents simulation experiments which verify that, in the case of the Leaky LMS algorithm, the theoretical small negative lower bound for the step size is valid and allows stable adaptation.

Appendix

In the following proof, for clarity of presentation, the LMS stability matrix defined in Eq. (9) will be expressed as:

    A = \gamma I - \mu u u^T    (19)

(the time index n has been omitted). It is assumed that the adaptive filter length, and therefore also the input vector length as well as both dimensions of the LMS stability matrix A, are equal to L.

First, consider that the rank of the matrix u u^T is equal to one; therefore only one of its eigenvalues is non-zero. The direct result is that the LMS stability matrix defined in Eq. (19) has L − 1 eigenvalues equal to γ. Now consider right-multiplication of the Leaky LMS stability matrix defined in Eq. (19) by the vector u:

    A u = (\gamma I - \mu u u^T) u = \gamma u - \mu u u^T u = u (\gamma - \mu u^T u).    (20)

As u^T u is a scalar, being the inner product of the vector u with itself, the above equation can be expressed as:

    A u = \left( \gamma - \mu \sum_{i=0}^{L-1} u_i^2 \right) u,    (21)

where u_i denotes u(n−i). Equation (21) may also be viewed as the definition of the eigenvalue and the associated eigenvector. This concludes the proof.

Acknowledgment

This research is within a project financed by the National Science Centre, based on decision no. DEC-2012/07/B/ST7/01408.

REFERENCES

[1] D. Bismor. Extension of the LMS stability condition over wide set of signals. Submitted to Journal of Adaptive Control and Signal Processing, 2012.
[2] Z. Bubnicki. Modern Control Theory. Springer-Verlag, Berlin, 2005.
[3] H. J. Butterweck. A steady-state analysis of the LMS adaptive algorithm without use of the independence assumption. Proceedings of ICASSP, pages 1404-1407, 1995.
[4] W. A. Gardner. Learning characteristics of stochastic-gradient-descent algorithms: a general study, analysis and critique. Signal Processing, 6:113-133, 1984.
[5] S. Haykin. Adaptive Filter Theory, Fourth Edition. Prentice Hall, New York, 2002.
[6] S. M. Kuo and D. R. Morgan. Active noise control: a tutorial review. Proceedings of the IEEE, 87(6):943-973, 1999.
[7] K. Mayyas and T. Aboulnasr. Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Transactions on Signal Processing, 45(4):927-934, 1997.
[8] A. H. Sayed. Fundamentals of Adaptive Filtering. John Wiley & Sons, New York, 2003.
[9] G. Ungerboeck. Fractional tap-spacing equalizer and consequences for clock recovery in data modems. IEEE Transactions on Communications, 24(8):856-864, Aug 1976.
[10] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, Jr. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE, 64(8):1151-1162, Aug 1976.