Lecture 3: LMS and eigenvalue spread

Lecture 3

Lecture 3 includes the following:

- Eigenvalue spread of R and its influence on the convergence speed of the LMS.
- Variants of the LMS: the Normalized LMS, the Leaky LMS, the Sign LMS, the echo canceller.

LMS and eigenvalue spread 2

The convergence speed of the LMS depends on the spread of the eigenvalues of R. This spread is normally written as the ratio between the largest and the smallest eigenvalue,

    χ(R) = λ_max / λ_min.

If χ(R) > 1, the contour plot of J(n) for the two-dimensional case (M = 2) is elliptic. The convergence is fastest along the short axis of the ellipse, since the cost function J(n) grows fastest in that direction. Along the long axis the convergence is slowest, since the growth of J(n) is smallest in that direction.

LMS and eigenvalue spread 3

Example: identification of the FIR filter [b_0, b_1]^T.

[Figure: identification setup. White Gaussian noise v_1(n) is coloured by the filter 1/(1 - a z^-1) to form the input u(n); the unknown filter [b_0, b_1]^T plus the noise v_2(n) gives d(n); the adaptive filter [ŵ_0, ŵ_1]^T gives d̂(n), and the error is e(n) = d(n) - d̂(n).]

The input is the AR(1) process

    u(n) = a u(n-1) + v_1(n),   E{v_1²(n)} = σ_v1² = 1 - a²,

and the desired signal is

    d(n) = b_0 u(n) + b_1 u(n-1) + v_2(n).

Calculation of the correlations:

    E{[u(n) - a u(n-1)]²} = E{v_1²(n)} = σ_v1²   ⇒   (1 + a²) r(0) - 2a r(1) = σ_v1²
    E{u(n-1)[u(n) - a u(n-1)]} = E{u(n-1) v_1(n)} = 0   ⇒   r(1) = a r(0)

so that

    r(0) = σ_v1² / (1 - a²) = 1,   r(1) = a σ_v1² / (1 - a²) = a,

and hence

    R = [1 a; a 1],   χ(R) = (1 + a) / (1 - a),

since the eigenvalues of R are 1 + a and 1 - a.

LMS and eigenvalue spread 4

For the cross-correlation vector we get

    E{u(n) d(n)} = E{u(n)[b_0 u(n) + b_1 u(n-1) + v_2(n)]} = b_0 r(0) + b_1 r(1)
    E{u(n-1) d(n)} = E{u(n-1)[b_0 u(n) + b_1 u(n-1) + v_2(n)]} = b_0 r(1) + b_1 r(0)

so that

    p = [p(0); p(1)] = [r(0) r(1); r(1) r(0)] [b_0; b_1] = R b = [b_0 + a b_1; a b_0 + b_1],

and the variance of the desired signal is

    σ_d² = E{[b_0 u(n) + b_1 u(n-1) + v_2(n)] d(n)} = b^T R b + σ_v2².
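To make the example concrete, here is a minimal numerical check, written in Python/NumPy rather than Matlab and with illustrative values of a, b and σ_v2² that are not taken from the slides: it builds R and p for the identification example and verifies that χ(R) = (1 + a)/(1 - a).

```python
import numpy as np

# Illustrative values (not taken from the slides): AR(1) parameter and target filter
a = 0.6
b = np.array([0.5, 0.5])
sigma_v2_sq = 0.01            # variance of the additive noise v2(n)

# Autocorrelation of u(n) = a*u(n-1) + v1(n) with sigma_v1^2 = 1 - a^2
r0, r1 = 1.0, a
R = np.array([[r0, r1],
              [r1, r0]])
p = R @ b                     # cross-correlation vector p = R b

# Eigenvalue spread computed two ways
lam = np.linalg.eigvalsh(R)               # eigenvalues 1 - a and 1 + a
print(lam.max() / lam.min())              # numerical chi(R)
print((1 + a) / (1 - a))                  # closed form, equals 4 for a = 0.6

# Variance of the desired signal d(n)
sigma_d_sq = b @ R @ b + sigma_v2_sq
print(sigma_d_sq)
```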

LMS and eigenvalue spread 5

Let us assume that b = [.5, .5]^T and that σ_v2² has some small fixed value. What we now want to achieve is identification of this filter, i.e. ŵ = b, by using the LMS algorithm. We can vary the parameter a, |a| < 1, in order to get different degrees of colouring of the exciting noise. For a = 0 the exciting noise is white, and χ(R) = 1. For this structure

    J_min = σ_d² - σ_d̂² = b^T R b + σ_v2² - w_o^T R w_o = σ_v2²,

since the length of the adaptive filter matches the filter to be identified. Without additive noise in the form of v_2(n), J_min and thereby J_ex become zero!

LMS and eigenvalue spread 6-8

The effect of colouring of the noise.

[Figure: contour plots of J over the coefficient plane (ŵ_0, ŵ_1) for increasing colouring of the input: a completely white input signal (a = 0), then a = 0.2 with χ(R) = 1.5, a = 0.4, a = 0.6 with χ(R) = 4, and a = 0.8 with χ(R) = 9. The contours change from circles to increasingly elongated ellipses as χ(R) grows.]
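The experiment on these slides can be sketched in a few lines of code. The following is a rough simulation, in Python/NumPy instead of Matlab, where the values of b, µ, the noise level and the number of iterations are arbitrary choices and not the ones used in the lecture: it runs the LMS for white (a = 0) and strongly coloured (a = 0.8) input and prints how far ŵ has come towards b after the same number of iterations.

```python
import numpy as np

def lms_identify(a, b, mu=0.01, n_iter=2000, sigma_v2=0.1, seed=0):
    """LMS identification of the 2-tap filter b with AR(1) input u(n) = a*u(n-1) + v1(n)."""
    rng = np.random.default_rng(seed)
    sigma_v1 = np.sqrt(1 - a**2)                  # makes r(0) = E{u^2(n)} = 1
    u = np.zeros(n_iter)
    for n in range(1, n_iter):
        u[n] = a * u[n - 1] + rng.normal(0.0, sigma_v1)
    w_hat = np.zeros(2)
    for n in range(1, n_iter):
        u_vec = np.array([u[n], u[n - 1]])             # [u(n), u(n-1)]
        d = b @ u_vec + rng.normal(0.0, sigma_v2)      # d(n) = b0 u(n) + b1 u(n-1) + v2(n)
        e = d - w_hat @ u_vec                          # a-priori error e(n)
        w_hat += mu * u_vec * e                        # LMS update
    return w_hat

b = np.array([0.5, 0.5])                  # illustrative target filter
for a in (0.0, 0.8):                      # white input vs. strongly coloured input
    w_hat = lms_identify(a, b)
    print(f"a = {a}: ||w_hat - b|| = {np.linalg.norm(w_hat - b):.4f}")
```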

LMS and eigenvalue spread 9-10

[Figure: coefficient trajectories on the contour plots of J, and learning curves J(n) versus iteration n averaged over a number of realizations, for two different start values ŵ(0), with white input (a = 0) and coloured input (a = 0.8) and with J_min marked.]

The comparison shows that for a coloured input the convergence speed depends on the start value of the filter. Two choices of initial values ŵ(0) were investigated: one deviating from w_o mainly along the short axis of the error-surface ellipse and one deviating mainly along the long axis. When the input signal was uncorrelated, i.e. white (a = 0), there was no difference in convergence speed between the two choices, whereas when the input signal was coloured (a = 0.8) the convergence speed depended on the initial value of the filter: the start value that deviates from w_o mainly along the short axis of the ellipse converged faster.

The Normalized LMS 11

For the standard LMS, which we have looked at earlier, the step is proportional to u(n):

    ŵ(n+1) = ŵ(n) + µ u(n) e*(n).

This means that the gradient noise is amplified when the input signal is strong. One solution to this problem is to normalize the update with ‖u(n)‖² = u^H(n) u(n):

    ŵ(n+1) = ŵ(n) + µ / (a + ‖u(n)‖²) · u(n) e*(n),

where 0 < µ < 2 and a is a small positive protection constant.

LMS with time-variable adaptation step 12

One way to reduce the misadjustment M without affecting the convergence is to use a variable adaptation step,

    ŵ(n+1) = ŵ(n) + α(n) u(n) e*(n),

where α(n) can for example be chosen as α(n) = c/n. This method is useful for steady-state self-tuning, but cannot be used for tracking.
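A minimal sketch of the normalized update, with the time-variable step indicated as a comment (Python/NumPy; the function name, step size and protection constant are assumptions chosen for illustration):

```python
import numpy as np

def nlms_step(w_hat, u_vec, d, mu=0.5, a_protect=1e-6):
    """One NLMS update: w <- w + mu / (a + ||u||^2) * u * e, with 0 < mu < 2."""
    e = d - w_hat @ u_vec
    w_hat = w_hat + (mu / (a_protect + u_vec @ u_vec)) * u_vec * e
    return w_hat, e

# Time-variable step instead (steady-state self-tuning, not tracking):
#   alpha(n) = c / n,   w <- w + alpha(n) * u * e
```

The division by a_protect + ‖u(n)‖² is the normalization described above: the effective step size no longer grows with the input power.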

LMS with independent adaptation steps 13

It is also possible to choose an independent adaptation step for each filter coefficient,

    ŵ(n+1) = ŵ(n) + M u(n) e*(n),

where M is the diagonal step-size matrix

    M = diag(α_1, α_2, ..., α_M).

This method becomes more interesting when the filter coefficients are updated in the frequency domain, where different frequency components can be assigned different step sizes.

The Leaky LMS 14

As for the standard LMS we start from the method of steepest descent,

    w(n+1) = w(n) + µ [-∇J(n)],

but we now instead use the cost function

    J(n) = E{|e(n)|²} + α w^H(n) w(n).

The gradient becomes

    ∇J(n) = -p + R w(n) + α w(n).

With estimated statistics we get

    ∇̂J(n) = -p̂(n) + R̂(n) ŵ(n) + α ŵ(n).

The Leaky LMS 15

The update equation for the Leaky LMS is given by

    ŵ(n+1) = (1 - µα) ŵ(n) + µ u(n) e*(n).

The word leakage indicates that energy leaks out of the algorithm, and that its effect on the filter coefficients is allowed to decay. In systems without noise, the Leaky LMS makes the performance poorer.

The Leaky LMS 16

For the Leaky LMS it can be shown that

    lim_{n→∞} E{ŵ(n)} = (R + αI)^{-1} p,

i.e., it has the same effect as if white noise with variance α had been added at the input. This increases the uncertainty in the input signal, which holds the filter coefficients back. The phenomenon is referred to as prewhitening, and α is called the prewhitening parameter. The factor (1 - µα) is called the leakage factor, and α ≥ 0 is chosen so that 0 < 1 - µα ≤ 1.
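A small numerical sketch of the leaky update and its bias (Python/NumPy; the values of a, b, µ and α are arbitrary illustrative choices): it runs the Leaky LMS on the identification example without measurement noise and compares the time-average of ŵ(n) with (R + αI)^{-1} p.

```python
import numpy as np

def leaky_lms(a, b, mu=0.01, alpha=0.5, n_iter=20000, seed=1):
    """Leaky LMS: w <- (1 - mu*alpha) * w + mu * u * e."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_iter)
    for n in range(1, n_iter):
        u[n] = a * u[n - 1] + rng.normal(0.0, np.sqrt(1 - a**2))
    w_hat = np.zeros(2)
    w_sum = np.zeros(2)
    for n in range(1, n_iter):
        u_vec = np.array([u[n], u[n - 1]])
        e = b @ u_vec - w_hat @ u_vec                     # noise-free d(n) for clarity
        w_hat = (1 - mu * alpha) * w_hat + mu * u_vec * e # leaky update
        w_sum += w_hat
    return w_sum / (n_iter - 1)

a, b, alpha = 0.6, np.array([0.5, 0.5]), 0.5
R = np.array([[1.0, a], [a, 1.0]])
p = R @ b
print("time-average of w_hat:", leaky_lms(a, b, alpha=alpha))
print("(R + alpha*I)^-1 p:   ", np.linalg.solve(R + alpha * np.eye(2), p))
```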

The Sign LMS 17

Again starting from the method of steepest descent,

    w(n+1) = w(n) + µ [-∇J(n)],

but now with the cost function

    J(n) = E{|e(n)|},

the estimated gradient becomes

    ∇̂J(n) = -u(n) sign[e(n)].

With this gradient, the update equation becomes

    ŵ(n+1) = ŵ(n) + α u(n) sign[e(n)].

The Sign LMS is used in applications with extremely strict requirements on computational complexity.

System: Echo canceller I 18

[Figure: block diagram of echo canceller I. The speech from one person passes a hybrid, which leaks an echo into the return path; the same speech also drives an FIR filter, adjusted by the adaptive algorithm, whose output d̂(n) is subtracted from d(n) so that the other person hears as little echo as possible.]

System: Echo canceller II 19

[Figure: block diagram of echo canceller II. The same structure, but the echo path is now an acoustic impulse response (loudspeaker, room and microphone); the FIR filter, adjusted by the adaptive algorithm, again produces d̂(n), which is subtracted from d(n).]

What to read

Haykin: chap. 5.9 and chap. 6.

Exercises: 4.6, 4.7, 4.8.

Computer exercise, theme: Implement the NLMS in Matlab, and compare the NLMS to the LMS for echo cancellation of a speech signal (echo canceller I).
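In the spirit of the computer exercise, the following sketch compares the LMS and the NLMS on a system-identification version of echo canceller I. It is written in Python/NumPy instead of Matlab, and it uses a synthetic echo-path impulse response and white noise in place of a speech signal, so all signal and parameter choices here are assumptions for illustration only.

```python
import numpy as np

def run_filter(x, d, M, update):
    """Generic adaptive FIR loop; `update` returns the new w given (w, u_vec, e)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u_vec = x[n - M + 1:n + 1][::-1]          # [x(n), x(n-1), ..., x(n-M+1)]
        e[n] = d[n] - w @ u_vec
        w = update(w, u_vec, e[n])
    return e

rng = np.random.default_rng(0)
M = 16
echo_path = rng.normal(0.0, 1.0, M) * np.exp(-0.3 * np.arange(M))  # synthetic impulse response
x = rng.normal(0.0, 1.0, 5000)                                     # stand-in for far-end speech
d = np.convolve(x, echo_path)[:len(x)]                             # echo signal d(n)

mu_lms, mu_nlms, a_prot = 0.01, 0.5, 1e-6
e_lms  = run_filter(x, d, M, lambda w, u, e: w + mu_lms * u * e)
e_nlms = run_filter(x, d, M, lambda w, u, e: w + (mu_nlms / (a_prot + u @ u)) * u * e)

# Residual echo power over the last 1000 samples (lower is better)
print("LMS  residual:", np.mean(e_lms[-1000:] ** 2))
print("NLMS residual:", np.mean(e_nlms[-1000:] ** 2))
```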