IS NEGATIVE STEP SIZE LMS ALGORITHM STABLE OPERATION POSSIBLE?
Dariusz Bismor
Institute of Automatic Control, Silesian University of Technology, ul. Akademicka 16, Gliwice, Poland

The Least Mean Squares (LMS) algorithm and its variants are the most popular choice in many systems that require gradient-based adaptation. Examples of such applications include system identification, line enhancement, line equalization, adaptive echo cancellation and active noise cancellation. The only drawback of the LMS-family algorithms is the need for a careful step size choice. Too small a step size, although giving a good excess mean squared error (MSE), results in slow convergence. Too large a step size results in a large excess MSE and may lead to loss of convergence and instability. Therefore, many theoretical studies of the algorithm behavior aim to provide useful bounds on the step size. Regardless of the analytic method applied, the common result of many investigations seems to be a lower bound of zero on the step size of LMS-like algorithms (i.e. µ > 0). In this paper we show that at least one of the LMS-family algorithms, the Leaky LMS algorithm, is capable of stable operation even if the step size has a (small) negative value. The theoretical derivation of the stability condition has been validated by a number of simulations.

1. Introduction

The history of stability and convergence analysis of the Least Mean Squares (LMS) algorithm is long. The first results were given in the 1970s by Widrow, who invented the LMS algorithm. 10 These results were obtained under many assumptions; therefore the research continued during the next decade with the aim to weaken the assumptions and provide results useful in practice. In 1984 Gardner published a comparative study of the results obtained thus far, using the so-called independence assumption. 4 This assumption requires that the input signal sequences are independent, identically distributed (i.i.d.) sequences.
It is clear that the independence assumption does not apply when the input vector is constructed from a tapped delay line: any two consecutive input vectors share the majority of their samples, and therefore are not independent. Ten years elapsed before a comprehensive analysis without the independence assumption was published; this analysis was delivered by Butterweck. 3 Butterweck used the so-called small step size assumption, meaning that the step size is small enough to treat the LMS adaptive filter as a low-pass filter with a low cutoff frequency. The small step size assumption describes sufficiently well an adaptive filter operating near its optimum value, with only very slight adjustments made by the LMS algorithm, and it is common in modern LMS algorithm analysis. However, it does not apply to phases of rapid adaptation, e.g. the initial phase of the filter operation.

ICSV21, Beijing, China, July 13-17, 2014
There are many modifications of the LMS algorithm present in the literature. The best known is probably the Normalized LMS algorithm. 5 Many algorithms are developed with the aim to improve the convergence or the excess mean square error (excess MSE), e.g. the correlation LMS. 6 Other variants are developed for specific applications, e.g. the filtered-x LMS algorithm was developed for active noise control. Usually, the authors of a modified LMS algorithm provide a theoretical analysis of the modified algorithm and give bounds on the step size that guarantee convergence. Some authors claim that the bounds are a necessary and sufficient condition for convergence; others maintain the bounds constitute just a sufficient condition. While the upper bound varies with the algorithm, the lower bound is always given as µ > 0, no matter which convergence type is discussed. For the majority of the algorithms this is perfectly true: the step size must be positive for the algorithm to converge. But there is at least one algorithm that can operate with a negative step size and remain convergent in the mean (stable). This algorithm is called the Leaky LMS algorithm. The Leaky LMS algorithm was first introduced by Ungerboeck. 9 The convergence in the mean (stability) and the convergence in the mean square sense were analyzed, under the independence assumptions, by Mayyas et al. 7 The authors of this publication claim that for the Leaky LMS algorithm to be stable it is necessary to have a step size greater than zero. Other authors give similar stability bounds, e.g. Sayed. 8 The goal of this paper is to prove that the publications that give µ > 0 as a necessary condition for the Leaky LMS algorithm stability (or convergence in the mean) are wrong. We will show that even if we do not limit the input signal to i.i.d. sequences, the Leaky LMS algorithm can operate with a (small) negative step size and remain stable.
It must be emphasized that this paper deals with stability, or convergence in the mean, only.

2. Assumptions and notation

Consider the classical adaptive filtering problem, 5 where the input signal, u(n), is filtered with the adaptive filter, W, to produce the output signal, y(n). The output signal is then compared with the desired signal, d(n), to produce the error signal, e(n). The problem will be dealt with under the following two assumptions: all the signals are discrete-time, real and finite-valued, and the adaptive filter W is a linear, discrete-time, transversal filter with finite impulse response (FIR) and real taps. Using the above assumptions, the filter output can be written as:

y(n) = Σ_{i=0}^{L-1} w_i(n) u(n-i),   (1)

where w_i(n) is the i-th filter coefficient (tap) at the discrete time n, and L is the filter length. To simplify the notation it is useful to define:

w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^T,   (2)
u(n) = [u(n), u(n-1), ..., u(n-L+1)]^T,   (3)

where T denotes transposition. Then, Eq. (1) can be written as:

y(n) = w^T(n) u(n) = u^T(n) w(n).   (4)

To keep the equations simple, we will write the Leaky LMS algorithm update equation in a less-common form:

w(n+1) = γ w(n) + µ u(n) e(n),   (5)

where 0 ≤ γ < 1 is the leakage, µ is the step size, and e(n) = d(n) - y(n) is the error signal. Please note that we exclude the case γ = 1, which is treated elsewhere. 1
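As a concrete illustration, the filtering and update steps of Eqs. (4)-(5) can be sketched in a few lines of Python; the function name and the numerical values used below are illustrative, not taken from the paper:

```python
def leaky_lms_step(w, u, d, gamma, mu):
    """One Leaky LMS iteration, Eq. (5): w(n+1) = gamma*w(n) + mu*u(n)*e(n)."""
    # Filter output, Eq. (4): y(n) = w^T(n) u(n)
    y = sum(wi * ui for wi, ui in zip(w, u))
    # Error signal: e(n) = d(n) - y(n)
    e = d - y
    # Coefficient update with leakage 0 <= gamma < 1
    w_next = [gamma * wi + mu * e * ui for wi, ui in zip(w, u)]
    return w_next, e
```

With γ = 1 this reduces to the ordinary LMS recursion; it is the leakage γ < 1 that makes a small negative µ admissible, as derived in the next section.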
3. Stability of the Leaky LMS algorithm

Using the filter output Eq. (4), the error can be written as:

e(n) = d(n) - u^T(n) w(n).   (6)

Substituting Eq. (6) into Eq. (5) and rearranging the terms, we have:

w(n+1) = [γI - µ u(n) u^T(n)] w(n) + µ u(n) d(n).   (7)

The above equation can be viewed as the state-space equation of a discrete, nonstationary system:

x(n+1) = A(n) x(n) + B(n) ũ(n),   (8)

with the matrices and vectors defined as:

A(n) = γI - µ u(n) u^T(n),   x(n) = w(n),   (9)
B(n) = µ u(n),   ũ(n) = d(n).   (10)

The matrix A(n) defined in Eq. (9) will be referred to as the Leaky LMS algorithm stability matrix. For this matrix the following theorem holds.

Theorem 1 (Leaky LMS Stability Matrix Eigenvalues and Eigenvector). Assume the matrix A(n) ∈ R^{L×L} is the LMS stability matrix defined as in Eq. (9), at any discrete time n. Then the matrix has an eigenvalue:

λ_1(n) = γ - µ Σ_{i=0}^{L-1} u²(n-i),   (11)

with the corresponding eigenvector u(n). The remaining eigenvalues are all equal to γ.

For the proof of this theorem see 1 and the Appendix. Using the principle of contraction mapping 2 and considering that the Leaky LMS algorithm stability matrix defined in Eq. (9) is symmetric, we may conclude that the sufficient stability condition for the Leaky LMS algorithm is determined by the only non-γ eigenvalue (the remaining eigenvalues are inside the unit circle, as γ < 1). If the absolute value of λ_1 is less than or equal to 1 in all the adaptation steps, the adaptive system remains stable. 1 Thus, we may write the Leaky LMS algorithm sufficient stability condition as:

∀n:  |λ_1(n)| = |γ - µ Σ_{i=0}^{L-1} u²(n-i)| ≤ 1.   (12)

Solving the above inequality gives:

(γ - 1) / Σ_{i=0}^{L-1} u²(n-i)  ≤  µ  ≤  (γ + 1) / Σ_{i=0}^{L-1} u²(n-i),   (13)

provided Σ_{i=0}^{L-1} u²(n-i) = ‖u(n)‖² ≠ 0. Remembering that γ < 1, it follows from Eq. (13) that the lower bound for the step size is negative. For example, if γ = 0.98, the step size should be greater than or equal to -0.02 divided by the squared norm of the input vector.
This is in contradiction with the result provided by Mayyas et al., 7 where the authors claim the step size is required to be positive (a necessary condition).
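The eigenvalue of Eq. (11) and the resulting step size bounds of Eq. (13) are easy to check numerically for any concrete input vector. The sketch below (function names are illustrative) multiplies the stability matrix by u explicitly, so no linear algebra library is needed:

```python
def lambda_1(u, gamma, mu):
    """Eigenvalue of A(n) = gamma*I - mu*u*u^T along u, Eq. (11)."""
    norm2 = sum(x * x for x in u)          # ||u(n)||^2
    lam = gamma - mu * norm2
    # Check A u = lam * u, using (gamma*I - mu*u*u^T) u = gamma*u - mu*(u^T u)*u
    Au = [gamma * x - mu * norm2 * x for x in u]
    assert all(abs(a - lam * x) < 1e-9 for a, x in zip(Au, u))
    return lam

def step_size_bounds(u, gamma):
    """Sufficient stability bounds on mu from Eq. (13); requires ||u||^2 != 0."""
    norm2 = sum(x * x for x in u)
    return (gamma - 1.0) / norm2, (gamma + 1.0) / norm2
```

For example, with γ = 0.98 and an input vector of squared norm 2, the bounds are -0.01 and 0.99, and the negative step size µ = -0.005 gives λ_1 = 0.99, still inside the unit circle.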
4. Simulation results

Figure 1. The identification experiment with the leakage factor γ = 0.98.

Consider the Leaky-Normalized LMS algorithm, given by:

w(n+1) = γ w(n) + µ(n) u(n) e(n),   (14)

where

µ(n) = µ / Σ_{i=0}^{L-1} u²(n-i).   (15)

Combining Eqs. (13) and (15), we conclude that for this algorithm to be stable it suffices that:

γ - 1 ≤ µ ≤ γ + 1.   (16)

The Leaky-Normalized LMS algorithm and the above condition constitute the easiest way to check the validity of the theory developed in the previous section. It must be remembered, however, that the condition defined in Eq. (16) is a sufficient condition only; therefore stable adaptation with an even lower (or greater) step size is also possible.

4.1 Identification experiments

The first experiments are based on the system identification principle. The identified plant was a second-order, all-pole model with the transfer function:

K(z) = 1 / [(z - 0.8)(z - 0.9)].   (17)

This plant was excited with a Gaussian white noise with variance 1, while the output was disturbed with an additive Gaussian noise. The adaptive FIR filter modeling the plant had 10 taps. The leakage factor was γ = 0.98. The simulations were repeated 100 times with different white noise sequences, and the results were averaged. The results of these experiments, for different values of the step size, are presented in Fig. 1. Using the assumed leakage factor value and Eq. (16), we conclude that for stability of the adaptation it suffices that the normalized step size is within:

-0.02 ≤ µ ≤ 1.98.   (18)

From Fig. 1 it is clear that the Leaky LMS algorithm remains stable (although not convergent) for µ = -0.02; moreover, it is even stable for step sizes slightly below this bound. This is all in agreement with the theory, as the developed condition is a sufficient condition only.
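The flavor of this experiment can be reproduced with the short simulation below. It is only a sketch under simplifying assumptions: the additive disturbance noise and the averaging over runs are omitted, and boundedness of the taps is used as a crude stability indicator; none of these choices are the paper's exact setup.

```python
import random

def identification_run(mu_norm, gamma=0.98, L=10, steps=2000, seed=1):
    """Identify the all-pole plant K(z) = 1/((z-0.8)(z-0.9)), Eq. (17),
    with a Leaky-Normalized LMS FIR filter, Eqs. (14)-(15).
    Returns the largest absolute tap value seen during adaptation."""
    rng = random.Random(seed)
    w = [0.0] * L               # adaptive filter taps
    u = [0.0] * L               # tapped delay line, Eq. (3)
    d1 = d2 = 0.0               # plant output history d(n-1), d(n-2)
    peak = 0.0
    for _ in range(steps):
        u = [rng.gauss(0.0, 1.0)] + u[:-1]
        # Plant difference equation: d(n) = 1.7 d(n-1) - 0.72 d(n-2) + u(n-2)
        d = 1.7 * d1 - 0.72 * d2 + u[2]
        d1, d2 = d, d1
        e = d - sum(wi * ui for wi, ui in zip(w, u))          # Eq. (6)
        mu_n = mu_norm / (sum(ui * ui for ui in u) + 1e-8)    # Eq. (15)
        w = [gamma * wi + mu_n * e * ui for wi, ui in zip(w, u)]  # Eq. (14)
        peak = max(peak, max(abs(wi) for wi in w))
    return peak
```

Running, e.g., identification_run(-0.01) with γ = 0.98 keeps the taps bounded, consistent with the bound in Eq. (16), even though the step size is negative.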
Figure 2. The adaptive line enhancer experiments with the leakage factor γ = 0.98.

4.2 Line enhancement experiments

Other experiments performed to verify the result in Eq. (13) were concerned with line enhancement. An adaptive line enhancer (ALE) is a technique that allows highly-correlated components to be extracted from non-correlated signals. 5 It may be used to remove correlated disturbances from speech signals, and this application was simulated here. The input signal to the ALE was a speech recording, disturbed with four sines with different, constant frequencies. The ALE length and the decorrelation delay were both equal to 10. The filter was adapted using the Leaky-Normalized LMS algorithm, with the same leakage factor γ = 0.98; therefore the bounds for stable operation remain the same as in the previous experiment. The results of the ALE experiments are presented in Figure 2. Similar to the identification experiments, the Leaky-Normalized LMS algorithm remains stable for small negative values of the step size. Further experiments, not shown here for clarity of the presentation, revealed that the stable region is even larger: no instability was observed for µ > -0.1 with this setup. Again, this is in agreement with Eq. (13), as the stability condition is a sufficient condition only.

5. Conclusions

It is common to assume that the step size in LMS-originated algorithms must be positive for the algorithm to remain stable. Such a requirement is also introduced in the literature as a necessary condition for the Leaky LMS algorithm stability. Although it is probably a correct condition for the majority of the algorithms derived from the LMS algorithm, the Leaky LMS algorithm is capable of stable operation even when the step size has a small negative value. The paper shows how discrete systems theory can be used to calculate a correct sufficient stability condition for the Leaky LMS algorithm.
It also presents simulation experiments which verify that, in the case of the Leaky LMS algorithm, the theoretical small negative lower bound for the step size is valid and allows for stable adaptation.

Appendix

In the following proof, for clarity of the presentation, the LMS stability matrix defined in Eq. (9) will be expressed as:

A = γI - µ u u^T,   (19)

(the time index n has been omitted). It is assumed that the adaptive filter length, and therefore also the input vector length as well as both dimensions of the LMS stability matrix A, are equal to L.
First, consider that the rank of the matrix u u^T is equal to one; therefore only one of its eigenvalues is non-zero. A direct result is that the LMS stability matrix defined in Eq. (19) has L-1 eigenvalues equal to γ. Now consider right-multiplication of the Leaky LMS stability matrix defined in Eq. (19) by the vector u:

A u = (γI - µ u u^T) u = γ u - µ u u^T u = u (γ - µ u^T u).   (20)

As u^T u is a scalar, being the inner product of the vector u with itself, the above equation can be expressed as:

A u = (γ - µ Σ_{i=0}^{L-1} u_i²) u,   (21)

where u_i denotes u(n-i). Equation (21) may also be viewed as the definition of the eigenvalue and the associated eigenvector. This concludes the proof.

Acknowledgment

This research is within a project financed by the National Science Centre, based on decision no. DEC-2012/07/B/ST7/

REFERENCES

1. Dariusz Bismor. Extension of the LMS stability condition over a wide set of signals. Submitted to Journal of Adaptive Control and Signal Processing.
2. Zdzislaw Bubnicki. Modern Control Theory. Springer-Verlag, Berlin.
3. H. J. Butterweck. A steady-state analysis of the LMS adaptive algorithm without use of the independence assumption. Proceedings of ICASSP.
4. W. A. Gardner. Learning characteristics of stochastic-gradient-descent algorithms: a general study, analysis and critique. Signal Processing, 6, 1984.
5. S. Haykin. Adaptive Filter Theory, Fourth Edition. Prentice Hall, New York.
6. S. M. Kuo and D. R. Morgan. Active noise control: a tutorial review. Proceedings of the IEEE, 87(6), 1999.
7. K. Mayyas and T. Aboulnasr. Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Transactions on Signal Processing, 45(4), 1997.
8. Ali H. Sayed. Fundamentals of Adaptive Filtering. John Wiley & Sons, New York.
9. G. Ungerboeck. Fractional tap-spacing equalizer and consequences for clock recovery in data modems. IEEE Transactions on Communications, 24(8), Aug. 1976.
10. B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, Jr. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE, 64(8), Aug. 1976.
414 IEICE TRANS. FUNDAMENTALS, VOL.E84 A, NO.2 FEBRUARY 2001 PAPER Special Section on Noise Cancellation Reduction Techniques A Low-Distortion Noise Canceller Its Learning Algorithm in Presence of Crosstalk
More informationESE 531: Digital Signal Processing
ESE 531: Digital Signal Processing Lec 22: April 10, 2018 Adaptive Filters Penn ESE 531 Spring 2018 Khanna Lecture Outline! Circular convolution as linear convolution with aliasing! Adaptive Filters Penn
More informationEfficient Use Of Sparse Adaptive Filters
Efficient Use Of Sparse Adaptive Filters Andy W.H. Khong and Patrick A. Naylor Department of Electrical and Electronic Engineering, Imperial College ondon Email: {andy.khong, p.naylor}@imperial.ac.uk Abstract
More informationH Optimal Nonparametric Density Estimation from Quantized Samples
H Optimal Nonparametric Density Estimation from Quantized Samples M. Nagahara 1, K. I. Sato 2, and Y. Yamamoto 1 1 Graduate School of Informatics, Kyoto University, 2 Graduate School of Economics, Kyoto
More informationBlind Deconvolution via Maximum Kurtosis Adaptive Filtering
Blind Deconvolution via Maximum Kurtosis Adaptive Filtering Deborah Pereg Doron Benzvi The Jerusalem College of Engineering Jerusalem, Israel doronb@jce.ac.il, deborahpe@post.jce.ac.il ABSTRACT In this
More informationEFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS. Gary A. Ybarra and S.T. Alexander
EFFECTS OF ILL-CONDITIONED DATA ON LEAST SQUARES ADAPTIVE FILTERS Gary A. Ybarra and S.T. Alexander Center for Communications and Signal Processing Electrical and Computer Engineering Department North
More informationLesson 1. Optimal signalbehandling LTH. September Statistical Digital Signal Processing and Modeling, Hayes, M:
Lesson 1 Optimal Signal Processing Optimal signalbehandling LTH September 2013 Statistical Digital Signal Processing and Modeling, Hayes, M: John Wiley & Sons, 1996. ISBN 0471594318 Nedelko Grbic Mtrl
More informationLecture Notes in Adaptive Filters
Lecture Notes in Adaptive Filters Second Edition Jesper Kjær Nielsen jkn@es.aau.dk Aalborg University Søren Holdt Jensen shj@es.aau.dk Aalborg University Last revised: September 19, 2012 Nielsen, Jesper
More informationKNOWN approaches for improving the performance of
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 58, NO. 8, AUGUST 2011 537 Robust Quasi-Newton Adaptive Filtering Algorithms Md. Zulfiquar Ali Bhotto, Student Member, IEEE, and Andreas
More informationIMPROVEMENTS IN ACTIVE NOISE CONTROL OF HELICOPTER NOISE IN A MOCK CABIN ABSTRACT
IMPROVEMENTS IN ACTIVE NOISE CONTROL OF HELICOPTER NOISE IN A MOCK CABIN Jared K. Thomas Brigham Young University Department of Mechanical Engineering ABSTRACT The application of active noise control (ANC)
More informationA new structure for nonlinear narrowband active noise control using Volterra filter
A new structure for nonlinear narrowband active noise control using Volterra filter Jian LIU 1 ; Yegui XIAO 2 ; Hui CHEN 1 ; Wenbo LIU 1 1 Nanjing University of Aeronautics and Astronautics, Nanjing, China
More informationEEL 6502: Adaptive Signal Processing Homework #4 (LMS)
EEL 6502: Adaptive Signal Processing Homework #4 (LMS) Name: Jo, Youngho Cyhio@ufl.edu) WID: 58434260 The purpose of this homework is to compare the performance between Prediction Error Filter and LMS
More informationTemporal Backpropagation for FIR Neural Networks
Temporal Backpropagation for FIR Neural Networks Eric A. Wan Stanford University Department of Electrical Engineering, Stanford, CA 94305-4055 Abstract The traditional feedforward neural network is a static
More informationinear Adaptive Inverse Control
Proceedings of the 36th Conference on Decision & Control San Diego, California USA December 1997 inear Adaptive nverse Control WM15 1:50 Bernard Widrow and Gregory L. Plett Department of Electrical Engineering,
More informationMachine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece.
Machine Learning A Bayesian and Optimization Perspective Academic Press, 2015 Sergios Theodoridis 1 1 Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens,
More informationTransient Analysis of Data-Normalized Adaptive Filters
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 51, NO 3, MARCH 2003 639 Transient Analysis of Data-Normalized Adaptive Filters Tareq Y Al-Naffouri and Ali H Sayed, Fellow, IEEE Abstract This paper develops
More informationMMSE Decision Feedback Equalization of Pulse Position Modulated Signals
SE Decision Feedback Equalization of Pulse Position odulated Signals AG Klein and CR Johnson, Jr School of Electrical and Computer Engineering Cornell University, Ithaca, NY 4853 email: agk5@cornelledu
More informationPerformance Analysis of Norm Constraint Least Mean Square Algorithm
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 5, MAY 2012 2223 Performance Analysis of Norm Constraint Least Mean Square Algorithm Guolong Su, Jian Jin, Yuantao Gu, Member, IEEE, and Jian Wang Abstract
More informationSIMON FRASER UNIVERSITY School of Engineering Science
SIMON FRASER UNIVERSITY School of Engineering Science Course Outline ENSC 810-3 Digital Signal Processing Calendar Description This course covers advanced digital signal processing techniques. The main
More informationBlind Source Separation with a Time-Varying Mixing Matrix
Blind Source Separation with a Time-Varying Mixing Matrix Marcus R DeYoung and Brian L Evans Department of Electrical and Computer Engineering The University of Texas at Austin 1 University Station, Austin,
More informationA Strict Stability Limit for Adaptive Gradient Type Algorithms
c 009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional A Strict Stability Limit for Adaptive Gradient Type Algorithms
More informationEstimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition
Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Seema Sud 1 1 The Aerospace Corporation, 4851 Stonecroft Blvd. Chantilly, VA 20151 Abstract
More informationPerformance Analysis and Enhancements of Adaptive Algorithms and Their Applications
Performance Analysis and Enhancements of Adaptive Algorithms and Their Applications SHENGKUI ZHAO School of Computer Engineering A thesis submitted to the Nanyang Technological University in partial fulfillment
More informationSparseness-Controlled Affine Projection Algorithm for Echo Cancelation
Sparseness-Controlled Affine Projection Algorithm for Echo Cancelation ei iao and Andy W. H. Khong E-mail: liao38@e.ntu.edu.sg E-mail: andykhong@ntu.edu.sg Nanyang Technological University, Singapore Abstract
More informationOptimal and Adaptive Filtering
Optimal and Adaptive Filtering Murat Üney M.Uney@ed.ac.uk Institute for Digital Communications (IDCOM) 26/06/2017 Murat Üney (IDCOM) Optimal and Adaptive Filtering 26/06/2017 1 / 69 Table of Contents 1
More informationOn Information Maximization and Blind Signal Deconvolution
On Information Maximization and Blind Signal Deconvolution A Röbel Technical University of Berlin, Institute of Communication Sciences email: roebel@kgwtu-berlinde Abstract: In the following paper we investigate
More informationRecursive Generalized Eigendecomposition for Independent Component Analysis
Recursive Generalized Eigendecomposition for Independent Component Analysis Umut Ozertem 1, Deniz Erdogmus 1,, ian Lan 1 CSEE Department, OGI, Oregon Health & Science University, Portland, OR, USA. {ozertemu,deniz}@csee.ogi.edu
More informationFAST IMPLEMENTATION OF A SUBBAND ADAPTIVE ALGORITHM FOR ACOUSTIC ECHO CANCELLATION
Journal of ELECTRICAL ENGINEERING, VOL. 55, NO. 5-6, 24, 113 121 FAST IMPLEMENTATION OF A SUBBAND ADAPTIVE ALGORITHM FOR ACOUSTIC ECHO CANCELLATION Khaled Mayyas The block subband adaptive algorithm in
More informationEIGENFILTERS FOR SIGNAL CANCELLATION. Sunil Bharitkar and Chris Kyriakakis
EIGENFILTERS FOR SIGNAL CANCELLATION Sunil Bharitkar and Chris Kyriakakis Immersive Audio Laboratory University of Southern California Los Angeles. CA 9. USA Phone:+1-13-7- Fax:+1-13-7-51, Email:ckyriak@imsc.edu.edu,bharitka@sipi.usc.edu
More informationAdaptive sparse algorithms for estimating sparse channels in broadband wireless communications systems
Wireless Signal Processing & Networking Workshop: Emerging Wireless Technologies, Sendai, Japan, 28 Oct. 2013. Adaptive sparse algorithms for estimating sparse channels in broadband wireless communications
More informationBlind Channel Equalization in Impulse Noise
Blind Channel Equalization in Impulse Noise Rubaiyat Yasmin and Tetsuya Shimamura Graduate School of Science and Engineering, Saitama University 255 Shimo-okubo, Sakura-ku, Saitama 338-8570, Japan yasmin@sie.ics.saitama-u.ac.jp
More informationELEG-636: Statistical Signal Processing
ELEG-636: Statistical Signal Processing Gonzalo R. Arce Department of Electrical and Computer Engineering University of Delaware Spring 2010 Gonzalo R. Arce (ECE, Univ. of Delaware) ELEG-636: Statistical
More informationConstrained controllability of semilinear systems with delayed controls
BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 56, No. 4, 28 Constrained controllability of semilinear systems with delayed controls J. KLAMKA Institute of Control Engineering, Silesian
More informationCONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström
PREDICTIO ERROR METHODS Torsten Söderström Department of Systems and Control, Information Technology, Uppsala University, Uppsala, Sweden Keywords: prediction error method, optimal prediction, identifiability,
More informationADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING. Information Systems Lab., EE Dep., Stanford University
ADAPTIVE INVERSE CONTROL BASED ON NONLINEAR ADAPTIVE FILTERING Bernard Widrow 1, Gregory Plett, Edson Ferreira 3 and Marcelo Lamego 4 Information Systems Lab., EE Dep., Stanford University Abstract: Many
More information3.4 Linear Least-Squares Filter
X(n) = [x(1), x(2),..., x(n)] T 1 3.4 Linear Least-Squares Filter Two characteristics of linear least-squares filter: 1. The filter is built around a single linear neuron. 2. The cost function is the sum
More information