Steady-state performance analysis of a variable tap-length LMS algorithm


Loughborough University Institutional Repository: Steady-state performance analysis of a variable tap-length LMS algorithm. This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: ZHANG, Y., LI, N., CHAMBERS, J. and SAYED, A. H., 2008. Steady-state performance analysis of a variable tap-length LMS algorithm. IEEE Transactions on Signal Processing, 56 (2), pp. 839-845. Additional Information: This article was published in the journal IEEE Transactions on Signal Processing [© IEEE] and is also available at: http://ieeexplore.ieee.org/. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Metadata Record: https://dspace.lboro.ac.uk/134/587. Version: Published. Publisher: © IEEE. Please cite the published version.

This item was submitted to Loughborough's Institutional Repository (https://dspace.lboro.ac.uk/) by the author and is made available under the following Creative Commons Licence conditions. For the full text of this licence, please go to: http://creativecommons.org/licenses/by-nc-nd/2.5/

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 56, NO. 2, FEBRUARY 2008

Steady-State Performance Analysis of a Variable Tap-Length LMS Algorithm

Yonggang Zhang, Student Member, IEEE, Ning Li, Jonathon A. Chambers, Senior Member, IEEE, and A. H. Sayed, Fellow, IEEE

Abstract: A steady-state performance analysis of the fractional tap-length (FT) variable tap-length least mean square (LMS) algorithm is presented in this correspondence. Based on the analysis, a mathematical formulation for the steady-state tap length is obtained. Some general criteria for parameter selection are also given. The analysis and the associated discussions give insight into the performance of the FT algorithm, which may potentially extend its practical applicability. Simulation results support the theoretical analysis and discussions.

Index Terms: Adaptive filters, steady-state performance analysis, variable tap-length LMS algorithm.

Manuscript received December 16, 2006; revised June 9, 2007. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. John J. Shynk. Y. Zhang is with the Centre of Digital Signal Processing, Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA, U.K., and also with the Automation School, Harbin Engineering University, Harbin, Heilongjiang 150001, China (e-mail: zhangy15@cf.ac.uk). N. Li is with the Automation School, Harbin Engineering University, Harbin, Heilongjiang 150001, China, and also with the Centre of Digital Signal Processing, Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA, U.K. (e-mail: lin3@cf.ac.uk). J. A. Chambers is with the Centre of Digital Signal Processing, Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA, U.K. (e-mail: chambersj@cf.ac.uk). A. H. Sayed is with the Department of Electrical Engineering, University of California, Los Angeles, CA 90095 USA (e-mail: sayed@ee.ucla.edu). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSP.2007.907805

I. INTRODUCTION

The least mean square (LMS) adaptive algorithm has been extensively used as a consequence of its simplicity and robustness [1], [2]. In many applications of the LMS algorithm, the tap length of the adaptive filter is kept fixed. However, in certain situations, the tap length of the optimal filter is unknown or variable. According to the analysis in [3] and [4], the mean square error (MSE) of the adaptive filter is likely to increase if the tap length is undermodeled. To avoid such a situation, a sufficiently large filter tap length is needed. However, the computational cost and the excess mean square error (EMSE) of the LMS algorithm will increase if the tap length is too large; thus, a variable tap-length LMS algorithm is needed to find a proper choice of the tap length.

Several variable tap-length LMS algorithms have been proposed in recent years. Among existing variable tap-length LMS algorithms, some are designed not only to establish a suitable steady-state tap length, but also to speed up the convergence rate [3], [5]. These methods are based on the assumption that the unknown optimal filter has an impulse response sequence with an exponentially decaying envelope, which limits their utility. Other methods are more general and are designed to search for the optimal filter tap length at steady state [6]-[9]; a summary of these works is given in [10]. As analyzed in [10], the fractional tap-length (FT) algorithm is more robust and has lower computational complexity when compared with other methods. A convex combination structure of the FT algorithm has been proposed in [11] to establish the optimal tap length in high-noise conditions, in which two filters are updated simultaneously with different parameters, so that the overall filter can obtain both a rapid convergence rate from the fast filter and a smooth curve for the steady-state tap length from the slow filter.

Although the FT method has many advantages in comparison with other methods, and can potentially be used in many applications, no steady-state performance analysis has been given in the literature, due to the nonlinearity between the output error and the tap length. Thus, the choice of the algorithm parameters is difficult. In this correspondence, we provide a steady-state performance analysis of the FT algorithm. First, the fractional tap-length update equation is rewritten, which gives insight into the fractional tap-length evolution process. Based on this equation, a mathematical formulation of the steady-state tap length is then obtained. Some general guidelines for the parameter choice are also given, which highlight the practical applicability of the FT algorithm. As will be confirmed in the simulations, the guidelines for the parameter choice are reasonable and useful.

The remainder of the correspondence is organized as follows. In Section II, we formulate the FT variable tap-length LMS algorithm. The steady-state performance analysis of the FT algorithm and some guidelines for parameter selection are given in Section III. Simulations are performed in Section IV to illustrate the analysis and discussions. Section V concludes the correspondence.

II. FT VARIABLE TAP-LENGTH LMS ALGORITHM

The FT variable tap-length LMS algorithm is designed to find the optimal filter tap length. In agreement with most approaches used to derive algorithms for adaptive filtering, the design problem is related to the optimization of a certain criterion that depends on the tap length. For convenience, we shall formulate the LMS algorithm within a system identification framework, in which the unknown filter c_{L_opt} has an unknown tap length L_opt, which is to be identified. In this model, the desired signal d(n) is represented as

d(n) = x_{L_opt}^T(n) c_{L_opt} + v(n)    (1)

where x_{L_opt}(n) is the input vector with a tap length of L_opt, v(n) is a zero-mean additive noise term uncorrelated with the input, n denotes the discrete time index, and (.)^T denotes the transpose operation. In this correspondence, all quantities are assumed to be real valued.

For convenience of description, we assume that at steady state the tap length of the adaptive filter is a fixed value denoted by L; w_L and x_L(n) are, respectively, the corresponding steady-state adaptive filter vector and input vector. Also, we define the segmented steady-state error as [10]

e_M^(L)(n) = d(n) − w_{L,1:M}^T x_{L,1:M}(n), as n → ∞    (2)

where 1 ≤ M ≤ L, and w_{L,1:M} and x_{L,1:M}(n) are, respectively, vectors consisting of the first M coefficients of the steady-state filter vector w_L and the input vector x_L(n). The mean square of this segmented steady-state error is defined as ξ_M^(L) = E{(e_M^(L)(n))^2}. The underlying basis of the FT method is to find the minimum value of L satisfying [10]

ξ_{L−Δ}^(L) − ξ_L^(L) ≤ ε    (3)

where Δ is a positive integer less than L and ε is a small positive value determined by the system requirements. The minimum L that satisfies (3) is then chosen as the optimal tap length. A detailed description of this criterion and another similar criterion can be found in [10].
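To make the selection rule in (3) concrete, the following short NumPy sketch searches for the smallest L that satisfies it when the segmented steady-state MSE is approximated by its undermodeling component only, ξ_M ≈ σ_v^2 + σ_x^2 Σ_{k≥M} c_k^2 (adaptation noise neglected). This idealized ξ_M, the function name, and the example impulse response are additions made here for illustration under stated assumptions; they are not from the correspondence.

import numpy as np

def smallest_sufficient_length(c, sigma_x2, sigma_v2, Delta, eps):
    """Search for the smallest L satisfying criterion (3), using the idealized
    segmented MSE xi_M ~ sigma_v2 + sigma_x2 * sum_{k >= M} c_k^2."""
    c = np.asarray(c, dtype=float)
    tail = np.concatenate([np.cumsum((c ** 2)[::-1])[::-1], [0.0]])  # tail[M] = sum_{k >= M} c_k^2

    def xi(M):
        return sigma_v2 + sigma_x2 * tail[min(M, len(c))]

    for L in range(Delta + 1, 2 * len(c) + 1):
        if xi(L - Delta) - xi(L) <= eps:
            return L
    return None

# Example: exponentially decaying impulse response, as assumed in [3] and [5].
c = 0.9 ** np.arange(100)
print(smallest_sufficient_length(c, sigma_x2=1.0, sigma_v2=0.01, Delta=5, eps=1e-3))

The FT algorithm described next estimates such an L adaptively from data, without assuming that the impulse response c is known.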

Gradient-based methods can be used to estimate L on the basis of (3). However, the tap length used in the adaptive filter structure must be an integer, and this constrains the adaptation of the tap length. Different approaches have been applied to solve this problem [6]-[10]. In [10], the concept of a pseudo-fractional tap length, denoted by l_f(n), is utilized to make instantaneous tap-length adaptation possible. The update of the fractional tap length is as follows:

l_f(n+1) = (l_f(n) − α) − γ[(e_{L(n)}^(L(n))(n))^2 − (e_{L(n)−Δ}^(L(n))(n))^2]    (4)

where γ is the step size for the tap-length adaptation and α is a positive leakage parameter [10]. As explained in [10], l_f(n) is no longer constrained to be an integer, and the tap length L(n+1), which will be used in the adaptation of the filter weights in the next iteration, is obtained from the fractional tap length l_f(n) as follows:

L(n+1) = ⌊l_f(n)⌋, if |L(n) − l_f(n)| ≥ δ; L(n), otherwise    (5)

where ⌊.⌋ is the floor operator, which rounds the embraced value down to the nearest integer, and δ is a small integer. Next, we will give a steady-state performance analysis based on the above formulation, which is the novel contribution of this correspondence. Some general guidelines for the parameter choice will also be provided.
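For readers who prefer code to recursions, the following NumPy sketch implements the tap-length updates (4) and (5) together with the LMS weight update given as (6) at the start of Section III. It is an illustrative reconstruction under the stated equations, not the authors' code; the function name, the warm-up handling, the offline input-power estimate, and the L_init/L_max arguments are assumptions made for this example.

import numpy as np

def ft_vt_lms(x, d, L_init, L_min, mu0, alpha, gamma, Delta, delta, L_max=400):
    """Fractional tap-length (FT) variable tap-length LMS, following (4)-(6).
    Returns the final weight vector and the tap-length trajectory L(n)."""
    L = int(L_init)            # integer tap length L(n); keep L_min > Delta
    lf = float(L_init)         # pseudo-fractional tap length l_f(n)
    w = np.zeros(L)            # adaptive weight vector w_{L(n)}(n)
    sigma_x2 = np.var(x)       # input power used in the variable step size, cf. (18)
    L_track = np.empty(len(x), dtype=int)
    for n in range(len(x)):
        L_track[n] = L
        if n + 1 < L:          # not enough past samples for a full regressor yet
            continue
        xl = x[n - L + 1:n + 1][::-1]                   # regressor x_{L(n)}(n), newest sample first
        e_full = d[n] - w @ xl                          # e^(L(n))_{L(n)}(n)
        e_seg = d[n] - w[:L - Delta] @ xl[:L - Delta]   # e^(L(n))_{L(n)-Delta}(n)
        mu = mu0 / ((L + 2) * sigma_x2)                 # variable step size, cf. (18)
        w = w + mu * e_full * xl                        # LMS weight update (6)
        lf = (lf - alpha) - gamma * (e_full ** 2 - e_seg ** 2)   # fractional update (4)
        lf = max(lf, L_min)                             # lower floor L_min, cf. assumption A.4
        if abs(L - lf) >= delta:                        # integer tap-length update (5)
            L_new = int(np.clip(np.floor(lf), L_min, L_max))
            w = np.concatenate([w, np.zeros(L_new - L)]) if L_new > L else w[:L_new]
            L = L_new
    return w, L_track

A usage sketch of this function on settings mirroring the simulations of Section IV is given after the results in that section.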
III. STEADY-STATE PERFORMANCE

In the FT algorithm, the filter coefficients are updated as

w_{L(n)}(n+1) = w_{L(n)}(n) + μ e_{L(n)}^(L(n))(n) x_{L(n)}(n)    (6)

where w_{L(n)}(n) and x_{L(n)}(n) are, respectively, the L(n)-tap adaptive filter vector and the input vector, and μ is the positive step size for the update of the coefficients. For convenience of analysis, we use a vector c_N to denote the unknown filter, where N is an integer larger than both the optimal tap length L_opt and the maximum value of the variable tap-length sequence L(n), and c_N is obtained by padding c_{L_opt} with zeros. This unknown filter vector c_N can be split into three parts, c', c'' and c''', where c' is the part modeled by w'(n), with w'(n) = w_{L(n),1:L(n)−Δ}(n); c'' is the part modeled by w''(n), with w''(n) = w_{L(n),L(n)−Δ+1:L(n)}(n); and c''' is the undermodeled part. We let g_N(n) denote the total coefficient error vector

g_N(n) = c_N − w_N(n)    (7)

where w_N(n) is obtained by padding w_{L(n)}(n) with zeros. Therefore, g_N(n) can similarly be split into g'(n), g''(n) and g'''(n). The mean square deviation (MSD) between the optimal filter vector and the adaptive filter vector is given by E{‖g_N(n)‖^2}, where ‖.‖^2 denotes the squared Euclidean norm. For convenience of description, the input vector x_N(n) is split similarly to c_N and g_N(n), into x'(n), x''(n) and x'''(n).

With the above notation, substituting (1) and (7) into (2), and padding all the vectors in (1) and (2) with zeros to make their lengths equal to N, we have

e_{L(n)}^(L(n))(n) = [x'(n); x''(n); x'''(n)]^T [g'(n); g''(n); c'''] + v(n)    (8)

e_{L(n)−Δ}^(L(n))(n) = [x'(n); x''(n); x'''(n)]^T [g'(n); c''; c'''] + v(n).    (9)

The term (e_{L(n)}^(L(n)))^2 − (e_{L(n)−Δ}^(L(n)))^2, which is the key term in the fractional tap-length update (4), can then be written as

(e_{L(n)}^(L(n)))^2 − (e_{L(n)−Δ}^(L(n)))^2 = [2v(n) + 2x'^T(n)g'(n) + x''^T(n)g''(n) + x''^T(n)c'' + 2x'''^T(n)c'''] [x''^T(n)g''(n) − x''^T(n)c''].    (10)

This term can be expanded as

(e_{L(n)}^(L(n)))^2 − (e_{L(n)−Δ}^(L(n)))^2
= 2v(n)x''^T(n)g''(n) [term A] − 2v(n)x''^T(n)c'' [term B]
+ 2x'^T(n)g'(n)x''^T(n)g''(n) [term C] − 2x'^T(n)g'(n)x''^T(n)c'' [term D]
+ [x''^T(n)g''(n)]^2 [term E] − [x''^T(n)c'']^2 [term F]
+ 2x'''^T(n)c'''x''^T(n)g''(n) [term G] − 2x'''^T(n)c'''x''^T(n)c'' [term H].    (11)

Substituting (11) into (4), the steady-state fractional tap-length update equation can be rewritten as

l_f(n+1) = l_f(n) − [α + γ(A − B + C − D + E − F + G − H)]    (12)

where the terms A, B, C, D, E, F, G and H are defined in (11). Next, a steady-state performance analysis will be given based on this update equation.

A. Steady-State Performance Analysis

Before we perform the steady-state analysis, we shall assume that the system has arrived at steady state if the quantities E{(e_{L(n)}^(L(n))(n))^2} and E{l_f(n)} tend to constants as n → ∞. To simplify the analysis, we make several further assumptions.

A.1. At steady state, the tap length will converge, or can approximately be deemed to have converged, to a fixed value L(∞). As will be shown by the simulations, if all the parameters are set properly, the tap length will fluctuate only slightly around a fixed value. We also assume that at steady state L(∞) > L_opt. We justify this by noting that in most simulations, if the parameter γ is not chosen too small, the steady-state tap length L(∞) will always be larger than L_opt. This overestimation phenomenon is also justified and discussed in [10], and can be seen in the simulations in the next section.

A.2. Both the input signal x(n) and the noise signal v(n) are statistically independent and identically distributed (i.i.d.) zero-mean Gaussian white signals with variances σ_x^2 and σ_v^2, respectively.

A.3. The tail elements of the unknown optimal filter vector c_{L_opt} can be deemed to be drawn from a random white sequence with zero mean and variance σ_c^2. This assumption is used to simplify the analysis. We note that even for a filter with a decaying impulse response structure, the tail elements can approximately be deemed to have the same variance if the tap length is long enough; thus, this assumption matches the observations in many applications.

A.4. At steady state, the vectors g'(n) and g''(n), which are due to the adaptation noise, are independent of x_N(n). The justification of this assumption is that the updates of g'(n) and g''(n) depend only on past input vectors, and from assumption A.2 we know that x_N(n) is independent of x_N(j) if j ≠ n; thus, g'(n) and g''(n) are independent of x_N(n) [12]. Also, in order to simplify the analysis, we assume that at steady state

E{[g'(n); g''(n)] [g'(n); g''(n)]^T} = σ_g^2 I    (13)

where I is the identity matrix and σ_g^2 is the variance of the elements of g'(n) and g''(n). The tap length is also constrained to be not less than a lower floor value L_min, where L_min > Δ, during its evolution; i.e., if the tap length falls below L_min, it is set to L_min. This operation is necessary since L(n) − Δ is used as a tap length in the FT algorithm, as can be seen in (4), and it should be positive.

Taking expectations of both sides of (12), at steady state we have

E{A − B + C − D + E − F + G − H} = −α/γ.    (14)

Using assumptions A.1 and A.3, we have

c''' = 0    (15)

‖c''‖^2 = (L_opt + Δ − L(∞))σ_c^2, if L_opt < L(∞) ≤ L_opt + Δ; ‖c''‖^2 = 0, if L(∞) > L_opt + Δ.    (16)

With (15), it is straightforward to see that the terms G and H in (14) disappear at steady state. Using assumptions A.1, A.2, A.3 and A.4, we know that the expectations of the terms A, B, C and D will be zero at steady state, and (14) can be written as

E{A − B + C − D + E − F} = σ_x^2 (E{‖g''(n)‖^2} − ‖c''‖^2).    (17)

It is straightforward to see that if L(∞) > L_opt + Δ, then (14) and (17) contradict each other; i.e., together with (16) we know that the right-hand side (RHS) of (17) is larger than zero if L(∞) > L_opt + Δ, whereas the RHS of (14) is a negative value. We therefore conclude that L(∞) ≤ L_opt + Δ, so that, by also exploiting assumption A.1, the condition L_opt ≤ L(∞) ≤ L_opt + Δ will always be used in the following derivations.

In a manner similar to [3] and [5], in order to speed up the convergence rate of the FT variable tap-length LMS algorithm, the step size μ is made variable rather than fixed, according to the range of μ described in [3]:

μ(n) = μ_0 / ((L(n) + 2) σ_x^2)    (18)

where μ_0 is a constant. With this variable step size, the term E{‖g''(n)‖^2} can be derived as in (33) in Appendix I. Substituting (16) and (33) into (17) yields

E{A − B + C − D + E − F} ≈ Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt) − σ_x^2 (L_opt + Δ − L(∞)) σ_c^2.    (19)

Utilizing (14) in (19), we have

−α/γ ≈ Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt) − σ_x^2 (L_opt + Δ − L(∞)) σ_c^2.    (20)

From (20), we obtain

L(∞) ≈ L_opt + Δ − Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt σ_x^2 σ_c^2) − α / (γ σ_x^2 σ_c^2).    (21)

This equation gives a mathematical formulation of the steady-state tap length. Since the steady-state tap-length value given in (21) will seldom be an integer, in practice the steady-state tap length will fluctuate around this value. This can be clearly seen in the later simulations. The steady-state MSE can then easily be obtained from the steady-state tap length given in (21) by using the analysis results provided in [3]. In practice, the last two terms on the RHS of (21) will be small, and the steady-state tap length will be close to the value L_opt + Δ, as will be shown in the simulations in the next section. To avoid the undermodeling situation, the parameters should be chosen so as to make L(∞) > L_opt and to obtain a small fluctuation of the steady-state tap length. Next, we give some guidelines for the parameter choice.

B. Guidelines for the Parameter Choice

In this section, we give some general guidelines for the parameter choice. To choose the parameters properly, we need estimates of the optimal tap length, the input variance, the noise variance, and the desired system MSE. The availability of these estimates is application dependent and therefore outside the scope of this study.
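As a quick numerical illustration of (21), the small helper below (an addition for this version, not part of the correspondence) evaluates the predicted steady-state tap length; with the parameter values of the low-noise experiment in Section IV it returns a value close to the theoretical figure reported there.

def steady_state_tap_length(L_opt, Delta, mu0, alpha, gamma, sigma_x2, sigma_v2, sigma_c2):
    """Evaluate the steady-state tap length L(inf) predicted by (21)."""
    step_bias = Delta * mu0 * sigma_v2 / ((2.0 - mu0) * L_opt * sigma_x2 * sigma_c2)
    leak_bias = alpha / (gamma * sigma_x2 * sigma_c2)
    return L_opt + Delta - step_bias - leak_bias

# Low-noise setting of Section IV-A: L_opt = 200, Delta = 20, mu0 = 0.5, alpha = 0.005,
# sigma_x2 = 1, sigma_v2 = 0.02 (20-dB SNR), sigma_c2 = 0.01, gamma = 17.954.
print(steady_state_tap_length(200, 20, 0.5, 0.005, 17.954, 1.0, 0.02, 0.01))  # approx. 219.9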

With these estimated values, the parameters used in the FT algorithm can be determined as follows.

1) The parameter δ in (5) is not a crucial parameter, since it is only used to obtain an integer value of the tap length for the coefficient adaptation, and it can easily be set to a small integer.

2) The choice of the parameter μ_0 can be determined according to the system MSE requirement. Similar to the step-size choice of the LMS algorithm, μ_0 can be a large value in low-noise conditions to obtain faster convergence of the MSE, and should be small in high-noise conditions to avoid a large MSE.

3) The parameter Δ should be as large as possible to obtain a fast convergence rate of the tap length, but also much smaller than the estimate of L_opt, so that the steady-state tap length formulated in (21) will not be significantly biased from L_opt. For example, Δ ≈ 0.1 L_opt will be a good choice for a wide range of optimal tap lengths.

4) The leakage parameter α should not be too large, so that it does not influence the initial tap-length convergence rate too much. It should not be too small either, so that once the tap length is overestimated, α can make the tap length converge close to the steady-state value as soon as possible. For example, α = 0.0001 is not a good choice, since it means that after 10 000 iterations the leakage parameter will have reduced the tap length by only one tap, which is usually too slow. Generally, values between 0.001 and 0.01 are good choices for α.

5) The parameter γ is the step-size parameter that controls the adaptation process of the variable tap length. Similar to the step size in the LMS algorithm, a large γ will speed up the convergence rate of the tap length, but will result in a large fluctuation of the steady-state tap length. On the other hand, a small γ gives a small fluctuation of the steady-state tap length but leads to a slow convergence rate. Thus, γ provides a tradeoff between the convergence rate of the tap length and the steady-state tap-length variance. The choice of this parameter is important in the FT algorithm. A detailed discussion of the choice of this parameter is as follows.

First, to avoid undermodeling the optimal tap length, the steady-state tap length L(∞) of the FT method should not be less than L_opt. Considering the fluctuation of the steady-state tap length, the parameter γ should be set so that L(∞) > L_opt + 2η, where η is a small positive integer and can be chosen according to the system requirement on the fluctuation of the steady-state tap length. For example, η = 2 is a reasonable choice. Substituting (21) into the inequality L(∞) > L_opt + 2η, we obtain the lower bound value γ_l:

γ_l = α / [(Δ − 2η) σ_x^2 σ_c^2 − Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt)].    (22)

Second, the parameter γ should not be too large, to avoid a large fluctuation of the steady-state tap length. The update process for the steady-state fractional tap length is formulated in (12). The fluctuation of the steady-state fractional tap length is brought about by the fluctuation of the term α + γ(A − B + C − D + E − F + G − H). It is straightforward to see from (12) that a large variance of this term, which is denoted by σ_f^2, will result in large fluctuations of the steady-state fractional tap length. To avoid such a situation, a simple and intuitive approach is to make the standard deviation σ_f much smaller than the parameter δ in (5), so that the probability of the steady-state fractional tap length fluctuating outside the range (L(∞) − δ, L(∞) + δ) is very small. A simple criterion to satisfy such a requirement is

σ_f < κδ    (23)

where κ is a small positive value that can be decided according to the system requirement on the fluctuation of the steady-state tap length. The derivation of the variance σ_f^2 is given in (51) in Appendix II. Using (51) in (23) and rearranging, we obtain the upper bound value γ_u:

γ_u = [−K_3 + sqrt(K_3^2 + 4(α^2 + κ^2 δ^2) K_2)] / (2 K_2)    (24)

where K_2 and K_3 are constants obtained in Appendix II. The parameter κ should be chosen so that the probability of the tap length fluctuating below L_opt is nearly zero. In general, for high-noise conditions this parameter should be chosen small, and for low-noise conditions it can be chosen larger. Examples of the choice of κ can be seen in the simulations in the next section.

With the lower bound value given in (22) and the upper bound value given in (24), the parameter γ can then easily be chosen. From the motivation of the upper bound value γ_u, we know that values close to it will be good choices to avoid a large fluctuation of the steady-state tap length while retaining as quick a convergence rate as possible; thus, in practice γ should be chosen close to γ_u and larger than γ_l. Since in practice the parameters σ_x^2, σ_v^2, σ_c^2, and especially the parameter L_opt, are unknown, approximate estimates of these parameters can be used in the calculations.

Next, we perform several simulations to confirm the above analysis and discussions.

IV. SIMULATION RESULTS

In this section, we perform two simulations to support the analysis and discussions in the previous section. In the first simulation a low-noise condition is used, while a high-noise environment is utilized in the second simulation.

A. Low-Noise Case: SNR = 20 dB

The setup of this simulation is as follows. The impulse response sequence of the unknown filter is a white Gaussian sequence with zero mean and variance 0.01. The tap length L_opt is set to 200. The input signal is another white Gaussian sequence with zero mean and unit variance. The noise signal is a zero-mean uncorrelated random Gaussian sequence scaled to make the SNR 20 dB. According to the parameter choice guidelines in Section III-B, the parameter δ is set to 2. The step size μ_0 is set to 0.5. The leakage parameter α is set to 0.005, and Δ is set to 20. To obtain the lower bound value of γ, we set the parameter η = 2, and using the above parameter settings in (22), we have γ_l = 0.0314. Similarly, we have γ_u = 17.954 by using κ = 0.5 in (24). To compare the performance with different values of γ, we use γ = 0.1γ_u, γ = γ_u, and γ = 10γ_u in the simulation, respectively. Note that all these values of γ are larger than the lower bound value γ_l.
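The lower bound just quoted can be checked directly from (22); the snippet below (an added illustration, not from the original text) evaluates it for both the low-noise setting above and the high-noise setting of Section IV-B.

def gamma_lower_bound(alpha, Delta, eta, mu0, L_opt, sigma_x2, sigma_v2, sigma_c2):
    """Lower bound gamma_l on the tap-length step size, as given by (22)."""
    return alpha / ((Delta - 2 * eta) * sigma_x2 * sigma_c2
                    - Delta * mu0 * sigma_v2 / ((2.0 - mu0) * L_opt))

print(gamma_lower_bound(0.005, 20, 2, 0.5, 200, 1.0, 0.02, 0.01))   # low noise: approx. 0.0314
print(gamma_lower_bound(0.005, 20, 2, 0.05, 200, 1.0, 2.0, 0.01))   # high noise: approx. 0.0323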

[Fig. 1. Evolution curves of the tap length with different step sizes under a low-noise condition, SNR = 20 dB.]

[Fig. 2. Evolution curves of the EMSE with different step sizes under a low-noise condition, SNR = 20 dB.]

[Fig. 3. Evolution curves of the tap length with different step sizes under a high-noise condition, SNR = 0 dB.]

[Fig. 4. Evolution curves of the EMSE with different step sizes under a high-noise condition, SNR = 0 dB.]

The evolution curves of the tap length with the different parameter values are shown in Fig. 1. The evolution curves of the EMSE with the different parameter values are shown in Fig. 2. It is clear from Fig. 1 that γ = γ_u provides a good tradeoff between the convergence rate of the tap length and the steady-state tap-length variance. The algorithm with the parameter γ = 0.1γ_u gives a very smooth curve of the steady-state tap length, but the convergence rate of both the tap length and the EMSE is very slow. The algorithm with the parameter γ = 10γ_u provides a quick convergence rate, but the tap length fluctuates greatly. When the tap length is undermodeled, i.e., L(n) < L_opt, the EMSE increases, as can be seen in Fig. 2. Substituting all the parameter sets into (21), we obtain the theoretical values of the steady-state tap length L(∞|γ = 0.1γ_u) = 219.65, L(∞|γ = γ_u) = 219.90, and L(∞|γ = 10γ_u) = 219.93, which are all close to L_opt + Δ. It can be seen from Fig. 1 that for the parameter sets γ = γ_u and γ = 0.1γ_u the simulation results of the steady-state tap length match the theoretical values very well: the steady-state tap length fluctuates around the theoretical value L(∞), which confirms (21).

B. High-Noise Case: SNR = 0 dB

In this simulation, a high-noise environment is used. The setup for this simulation is as follows. The unknown filter is the same as that in the previous simulation, i.e., a filter with an impulse response sequence drawn from a white Gaussian sequence with zero mean and a variance of 0.01, and a tap length of 200. The input signal is another white Gaussian sequence with zero mean and unit variance. The noise signal is a zero-mean uncorrelated random Gaussian sequence, scaled to make the SNR 0 dB. According to the parameter choice guidelines in Section III-B, the parameter δ is set to 2. The step size μ_0 is set to 0.05 to obtain a small EMSE. The leakage parameter α is set to 0.005, and Δ is set to 20. Similar to the first simulation, in order to obtain the lower bound value of γ, we set the parameter η = 2, and using the above parameter settings in (22), we have γ_l = 0.0323. Similarly, we have γ_u = 1.055 by using κ = 0.2 in (24). To compare the performance with different values of γ, we use γ = 0.1γ_u, γ = γ_u, and γ = 10γ_u in the simulation, respectively.

The evolution curves of the tap length with the different parameter values are shown in Fig. 3. The evolution curves of the EMSE with the different parameter values are shown in Fig. 4. Again, from Fig. 3 we can see that γ = γ_u provides a good tradeoff between the convergence rate of the tap length and the steady-state tap-length variance. The convergence rate of both the tap length and the EMSE with the parameter γ = 0.1γ_u is too slow for the algorithm. The algorithm with the parameter γ = 10γ_u provides a quick convergence rate of the tap length, but the fluctuation of the steady-state tap length is very large. Once the tap length fluctuates below L_opt, the EMSE increases, as can be seen in Fig. 4. Substituting all the parameter sets into (21), we obtain the theoretical values of the steady-state tap length L(∞|γ = 0.1γ_u) = 214.6, L(∞|γ = γ_u) = 219.0, and L(∞|γ = 10γ_u) = 219.4, which are all close to L_opt + Δ. For γ = γ_u, the simulation results of the steady-state tap length match the theoretical values very well, which confirms (21). To obtain both a fast convergence rate and a small steady-state EMSE under high-noise conditions, the convex combination approach can be considered, in which two filters are updated simultaneously with the different parameters γ = 10γ_u and γ = γ_u, so that the overall filter can obtain both a rapid convergence rate from the fast filter and a smooth curve for the steady-state tap length from the slow filter [11].
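To see qualitatively how curves such as those in Fig. 1 arise, the ft_vt_lms sketch given after Section II can be run on a synthetic setup mirroring Section IV-A. This is an added illustration with assumed initial conditions (L_init, L_min and the random seed), not the authors' experiment, and it presumes the earlier sketch has been defined.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
L_opt, sigma_c2, sigma_v2 = 200, 0.01, 0.02          # 20-dB SNR, as in Section IV-A
c = rng.normal(0.0, np.sqrt(sigma_c2), L_opt)        # unknown impulse response
x = rng.normal(0.0, 1.0, 30000)                      # unit-variance white input
d = np.convolve(x, c)[:x.size] + rng.normal(0.0, np.sqrt(sigma_v2), x.size)

for gamma in (1.7954, 17.954, 179.54):               # 0.1*gamma_u, gamma_u, 10*gamma_u
    _, L_track = ft_vt_lms(x, d, L_init=100, L_min=30, mu0=0.5,
                           alpha=0.005, gamma=gamma, Delta=20, delta=2)
    plt.plot(L_track, label=f"gamma = {gamma:g}")
plt.axhline(L_opt, color="k", linestyle="--", label="L_opt")
plt.xlabel("iteration n"); plt.ylabel("tap length L(n)"); plt.legend(); plt.show()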

V. CONCLUSION

A steady-state performance analysis of the FT variable tap-length LMS algorithm has been given in this correspondence. In this analysis, a mathematical formulation of the steady-state tap-length value is obtained, and some general guidelines for the parameter choice are also given, which aid the practical applicability of the algorithm. Simulation results show that the analysis is correct and that the parameter choice guidelines are very useful. The analysis and discussion results can be widely used in applications of the FT algorithm.

APPENDIX I

According to the analysis in [3], we know that with a fixed tap length, the steady-state value of E{‖g_N(n)‖^2} can be expressed as

E{‖g_N(n)‖^2} = [(s − r)‖c'''‖^2 + t] / (1 − r)    (25)

where

r = 1 − 2μσ_x^2 + (L(∞) + 2)μ^2 σ_x^4    (26)

s = 1 + L(∞)μ^2 σ_x^4    (27)

t = L(∞)μ^2 σ_x^2 σ_v^2.    (28)

Moreover, the MSD can be divided into three parts [3]:

E{‖g_N(n)‖^2} = E{‖g'(n)‖^2} + E{‖g''(n)‖^2} + ‖c'''‖^2.    (29)

Substituting (25) into (29), we have

E{‖g'(n)‖^2} + E{‖g''(n)‖^2} = [(s − 1)‖c'''‖^2 + t] / (1 − r).    (30)

Substituting (15), (26), (27) and (28) into (30), with the variable step size (18) and the approximation L(∞) ≈ L(∞) + 2, we have

E{‖g'(n)‖^2} + E{‖g''(n)‖^2} ≈ μ_0 σ_v^2 / ((2 − μ_0) σ_x^2).    (31)

With assumption A.4, we have E{‖g'(n)‖^2} = (L(∞) − Δ)σ_g^2 and E{‖g''(n)‖^2} = Δσ_g^2; together with (31), we have

E{‖g''(n)‖^2} ≈ Δ μ_0 σ_v^2 / ((2 − μ_0) L(∞) σ_x^2).    (32)

From the previous discussion we know that L(∞) ≤ L_opt + Δ, and in practice Δ ≪ L_opt; thus, if L(∞) > L_opt, then L(∞) will be very close to L_opt. Equation (32) can then approximately be written as

E{‖g''(n)‖^2} ≈ Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt σ_x^2).    (33)

APPENDIX II

Note that all of the following derivation is based on the condition L_opt ≤ L(∞) ≤ L_opt + Δ, so the terms G and H are equal to 0 at steady state. The variance of the term α + γ(A − B + C − D + E − F + G − H) is

σ_f^2 = E{[α + γ(A − B + C − D + E − F) − E{α + γ(A − B + C − D + E − F)}]^2}.    (34)

From (14), we have

E{α + γ(A − B + C − D + E − F)} = 0.    (35)

Substituting (14) and (35) into (34), and using assumptions A.2, A.3 and A.4 and the mathematical formulations of the terms A, B, C, D, E and F, it is straightforward to obtain

σ_f^2 = γ^2 (σ_A^2 + σ_B^2 + σ_C^2 + σ_D^2 + σ_E^2 + σ_F^2 − 2E{EF}) − α^2.    (36)

With assumptions A.2 and A.4, and using (33) in the mathematical formulation of term A, we can obtain the variance of term A as

σ_A^2 ≈ 4Δ μ_0 σ_v^4 / ((2 − μ_0) L_opt).    (37)

Similarly, with assumptions A.2 and A.3, and using (16) in the mathematical formulation of term B, we obtain the variance of term B as

σ_B^2 ≈ 4σ_v^2 (L_opt + Δ − L(∞)) σ_c^2 σ_x^2.    (38)

Substituting (21) into (38), we have

σ_B^2 ≈ 4ασ_v^2/γ + 4Δ μ_0 σ_v^4 / ((2 − μ_0) L_opt).    (39)

Using assumptions A.2 and A.4 in the mathematical formulation of term C, we can obtain the variance of term C as

σ_C^2 ≈ 4Δ (L(∞) − Δ) σ_g^4 σ_x^4.    (40)

With assumption A.4 and using (33), we have

σ_g^2 ≈ μ_0 σ_v^2 / ((2 − μ_0) L_opt σ_x^2).    (41)

Substituting (41) into (40), with the approximation L_opt ≈ L(∞) − Δ, we have

σ_C^2 ≈ 4Δ μ_0^2 σ_v^4 / ((2 − μ_0)^2 L_opt).    (42)

Similarly, with assumptions A.2, A.3 and A.4, using (16), (21) and (41) in the mathematical formulation of term D, and with the approximation L_opt ≈ L(∞) − Δ, we can obtain the variance of term D as

σ_D^2 ≈ 4α μ_0 σ_v^2 / (γ(2 − μ_0)) + 4Δ μ_0^2 σ_v^4 / ((2 − μ_0)^2 L_opt).    (43)

Since the term x''^T(n)g''(n) can approximately be deemed a sum of Δ i.i.d. random variables, from the central limit theorem we know that the probability density function (pdf) of this term will be very close to a Gaussian distribution with zero mean; thus, term E can approximately be deemed chi-squared distributed with one degree of freedom. With assumptions A.2 and A.4, and using (33), we have the mean value of term E

m_E ≈ Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt).    (44)

Thus, the variance of term E is

σ_E^2 = 2m_E^2 ≈ 2[Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt)]^2.    (45)

Similarly, the term x''^T(n)c'' can approximately be deemed Gaussian distributed; thus, term F can approximately be deemed chi-squared distributed with one degree of freedom. With assumptions A.3 and A.4, and using (16) and (21), we have the mean value of term F

m_F ≈ α/γ + Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt).    (46)

Thus, the variance of term F is

σ_F^2 = 2m_F^2 ≈ 2[α/γ + Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt)]^2.    (47)

With assumptions A.3 and A.4, the term E{EF} can be expanded as

E{EF} = E{(x''^T(n)g''(n))^2 (x''^T(n)c'')^2}
= E{x''^T(n)g''(n)g''^T(n)x''(n)x''^T(n)c''c''^T x''(n)}
≈ σ_g^2 σ_c^2 E{x''^T(n)x''(n)x''^T(n)x''(n)}    (48)

by approximating the instantaneous term c''c''^T with its statistical average. The input signal is an i.i.d. Gaussian sequence, and thus E{x^4} = 3σ_x^4. From (48) it is straightforward to obtain

E{EF} ≈ σ_g^2 σ_c^2 (Δ(Δ − 1)σ_x^4 + 3Δσ_x^2).    (49)

Substituting (41) into (49), we have

E{EF} ≈ Δ μ_0 σ_v^2 σ_c^2 ((Δ − 1)σ_x^2 + 3) / ((2 − μ_0) L_opt).    (50)

Substituting (37), (39), (42), (43), (45), (47) and (50) into (36) and collecting terms, we have

σ_f^2 = γ^2 K_2 + γ K_3 − α^2    (51)

where

K_1 = Δ μ_0 σ_v^2 / ((2 − μ_0) L_opt)    (52)

and K_2 and K_3 denote, respectively, the coefficients of γ^2 and γ that result from this substitution; both can be expressed in terms of K_1, α, Δ, σ_x^2, σ_v^2 and σ_c^2.

ACKNOWLEDGMENT

The authors would like to thank all the reviewers for improving the clarity of the presentation of this correspondence.

REFERENCES

[1] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications. New York: Wiley, 1998.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering. New York: Wiley, 2003.
[3] Y. Gu, K. Tang, H. Cui, and W. Du, "Convergence analysis of a deficient-length LMS filter and optimal-length sequence to model exponential decay impulse response," IEEE Signal Process. Lett., vol. 10, no. 1, pp. 4-7, Jan. 2003.
[4] K. Mayyas, "Performance analysis of the deficient length LMS adaptive algorithm," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2727-2734, Aug. 2005.
[5] Y. Zhang, J. A. Chambers, S. Sanei, P. Kendrick, and T. J. Cox, "A new variable tap-length LMS algorithm to model an exponential decay impulse response," IEEE Signal Process. Lett., vol. 14, no. 4, pp. 263-266, Apr. 2007.
[6] F. Riera-Palou, J. M. Noras, and D. G. M. Cruickshank, "Linear equalisers with dynamic and automatic length selection," Electron. Lett., vol. 37, no. 25, pp. 1553-1554, Dec. 2001.
[7] Y. Gu, K. Tang, H. Cui, and W. Du, "LMS algorithm with gradient descent filter length," IEEE Signal Process. Lett., vol. 11, no. 3, pp. 305-307, Mar. 2004.
[8] Y. Gong and C. F. N. Cowan, "A novel variable tap-length algorithm for linear adaptive filters," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2004, pp. 85-88.
[9] Y. Gong and C. F. N. Cowan, "Structure adaptation of linear MMSE adaptive filters," Proc. Inst. Elect. Eng. Vision, Image, Signal Process., vol. 151, no. 4, pp. 71-77, Aug. 2004.
[10] Y. Gong and C. F. N. Cowan, "An LMS style variable tap-length algorithm for structure adaptation," IEEE Trans. Signal Process., vol. 53, no. 7, pp. 2400-2407, Jul. 2005.
[11] Y. Zhang and J. A. Chambers, "Convex combination of adaptive filters for a variable tap-length LMS algorithm," IEEE Signal Process. Lett., vol. 13, no. 10, pp. 628-631, Oct. 2006.
[12] M. Tarrab and A. Feuer, "Convergence and performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data," IEEE Trans. Inf. Theory, vol. 34, no. 4, pp. 680-691, Jul. 1988.
[10] Y. Gong C. F. N. Cowan, An LMS style variable tap-length algorithm for structure adaptation, IEEE Trans. Signal Process., vol. 53, no. 7, pp. 400 407, Jul. 005. [11] Y. Zhang J. A. Chambers, Convex combination of adaptive filters for variable tap-length LMS algorithm, IEEE Signal Process. Lett., vol. 13, no. 10, pp. 68 631, Oct. 006. [1] M. Tarrab A. Feuer, Convergence performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data, IEEE Trans. Inf. Theory, vol. 34, no. 4, pp. 680 691, Jul. 1988. Authorized licensed use limited to: LOUGHBOROUGH UNIVERSITY. Downloaded on November 4, 009 at 10:31 from IEEE Xplore. Restrictions apply.