Lecture

The lecture includes the following:

- Eigenvalue spread of R and its influence on the convergence speed of the LMS
- Variants of the LMS: the Normalized LMS, the Leaky LMS, the Sign LMS
- The echo canceller

LMS and eigenvalue spread

The convergence speed of the LMS depends on the spread of the eigenvalues of R. This spread is normally written as the ratio between the largest and the smallest eigenvalue,

    χ(R) = λ_max / λ_min.

If χ(R) > 1, the contour plot of J(n) for the two-dimensional case (M = 2) is elliptic. The convergence is fastest along the short axis of the ellipse, since the cost function J(n) has its fastest growth in this direction. Along the long axis the convergence is slowest, since the growth of J(n) is minimal in this direction.

Example: Identification

[Figure: block diagram — white noise v1(n) is coloured by the filter 1/(1 − a z^(−1)) to form the input u(n); the unknown system [b1, b2]^T plus additive white Gaussian noise v2(n), with variance σ_{v2}^2, produces d(n); the adaptive filter [ŵ1, ŵ2]^T produces d̂(n), and the error is e(n) = d(n) − d̂(n).]

The input is an AR(1) process,

    u(n) = a u(n−1) + v1(n),

and the desired signal is

    d(n) = b1 u(n) + b2 u(n−1) + v2(n).

Calculation of the correlations:

    E{[u(n) − a u(n−1)]^2} = E{v1^2(n)} = σ_{v1}^2
    E{u(n−1)[u(n) − a u(n−1)]} = E{u(n−1) v1(n)} = 0

which gives

    r(0) = σ_{v1}^2 / (1 − a^2),   r(1) = a σ_{v1}^2 / (1 − a^2),

so that

    R = [r(0) r(1); r(1) r(0)] = σ_{v1}^2 / (1 − a^2) · [1 a; a 1],   χ(R) = (1 + a)/(1 − a).

For the cross-correlation,

    p(0) = E{u(n) d(n)} = b1 r(0) + b2 r(1)
    p(−1) = E{u(n−1) d(n)} = b1 r(1) + b2 r(0),

i.e.

    p = [p(0); p(−1)] = [r(0) r(1); r(1) r(0)] [b1; b2] = R b,

and the variance of the desired signal is

    σ_d^2 = E{d^2(n)} = E{[b1 u(n) + b2 u(n−1) + v2(n)]^2} = b^T R b + σ_{v2}^2.
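As a quick numerical check of these expressions, here is a minimal sketch (Python/NumPy, assuming σ_{v1}^2 = 1; the function name is ours) that builds R for the two-tap case and evaluates χ(R):

```python
import numpy as np

# Autocorrelation matrix of an AR(1) process u(n) = a*u(n-1) + v1(n),
# for a two-tap filter: R = sigma_v1^2 / (1 - a^2) * [[1, a], [a, 1]].
def ar1_R(a, sigma_v1=1.0):
    return sigma_v1**2 / (1 - a**2) * np.array([[1.0, a], [a, 1.0]])

for a in [0.0, 0.2, 0.4, 0.6, 0.8]:
    lam = np.linalg.eigvalsh(ar1_R(a))
    chi = lam.max() / lam.min()
    print(f"a = {a:.1f}: chi(R) = {chi:.2f}   formula (1+a)/(1-a) = {(1 + a) / (1 - a):.2f}")
```

The printed values match the eigenvalue spreads quoted on the following slides (1, 1.5, ≈2.3, 4 and 9).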
LMS and eigenvalue spread (cont.)

Let us assume that b = [0.5, 0.5]^T and a given additive-noise variance σ_{v2}^2. What we now want to achieve is identification of this filter, i.e. ŵ = b, by using the LMS algorithm. We can vary the parameter a, |a| < 1, in order to obtain different degrees of colouring of the exciting noise. For a = 0 the exciting noise is white, and χ(R) = 1.

For this structure

    J_min = σ_d^2 − w_o^T R w_o = b^T R b + σ_{v2}^2 − b^T R b = σ_{v2}^2,

since the length of the adaptive filter matches the filter to be identified. Without additive noise in the form of v2(n), J_min, and thereby J_ex, becomes zero!

The effect of colouring of the noise:

[Figure: contour plots of the cost function over (ŵ1, ŵ2) with LMS trajectories, for a = 0 (completely white input, χ(R) = 1), a = 0.2 (χ(R) = 1.5), a = 0.4 (χ(R) ≈ 2.3), a = 0.6 (χ(R) = 4) and a = 0.8 (χ(R) = 9). The contours change from circles to increasingly elongated ellipses as χ(R) grows.]
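To reproduce the flavour of these experiments, here is a minimal LMS identification sketch (Python/NumPy; b = [0.5, 0.5]^T as above, while σ_{v2}^2, µ and the iteration count are illustrative assumptions, since the exact values behind the original figures are not recoverable):

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_identify(a, mu=0.02, n_iter=2000, b=(0.5, 0.5), sigma_v2=0.1):
    """Identify the two-tap filter b with the LMS, input coloured by AR(1)."""
    b = np.asarray(b)
    v1 = rng.standard_normal(n_iter + 1)        # exciting white noise, var 1
    u = np.zeros(n_iter + 1)
    for n in range(1, n_iter + 1):              # u(n) = a*u(n-1) + v1(n)
        u[n] = a * u[n - 1] + v1[n]
    w = np.zeros(2)                             # w_hat(0) = [0, 0]^T
    for n in range(1, n_iter + 1):
        un = np.array([u[n], u[n - 1]])         # input vector [u(n), u(n-1)]^T
        e = b @ un + sigma_v2 * rng.standard_normal() - w @ un
        w = w + mu * un * e                     # LMS update
    return w

for a in [0.0, 0.8]:
    print(f"a = {a}: w_hat after adaptation = {lms_identify(a)}")
```

For a = 0.8 the remaining error is dominated by the slow mode along the eigenvector direction [1, −1], matching the elongated contours above.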
LMS and eigenvalue spread (cont.)

[Figure: learning curves J(n) versus iteration n, averaged over a number of realizations and plotted together with J_min, for the two initial values ŵ(0) = [0, 0]^T and ŵ(0) = [0.5, 0]^T; left panel a = 0 (white input), right panel a = 0.8 (coloured input).]

The comparison shows that for a coloured input the convergence speed depends on the starting values of the filter. The two choices of initial values that were investigated were ŵ(0) = [0, 0]^T and ŵ(0) = [0.5, 0]^T. When the input signal was uncorrelated, i.e. white (a = 0), there was no difference in convergence speed between the two choices, while, when the input signal was coloured (a = 0.8), the convergence speed depended on the initial values of the filter. Since ŵ(0) = [0, 0]^T deviates from w_o mainly along the short axis of the ellipse, this case converged faster.

The Normalized LMS

For the standard LMS, which we have looked at earlier, the step is proportional to u(n):

    ŵ(n+1) = ŵ(n) + µ u(n) e*(n).

This means that the gradient noise is amplified when the signal is strong. One solution to this problem is to normalize the update of ŵ(n) with ‖u(n)‖^2 = u^H(n) u(n):

    ŵ(n+1) = ŵ(n) + µ / (a + ‖u(n)‖^2) · u(n) e*(n),

where 0 < µ < 2, and a is a small positive protection constant.

LMS with time-variable adaptation step

One way to reduce the misadjustment M without affecting the convergence is to use a variable adaptation step,

    ŵ(n+1) = ŵ(n) + α(n) u(n) e*(n),

where α(n) for example can be α(n) = c/n. This method is useful for steady-state self-tuning, but cannot be used for tracking.
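A sketch of the normalized update as a single function (Python/NumPy, real-valued signals so e*(n) = e(n); `a_prot` is our name for the protection constant a above, and the default values are illustrative):

```python
import numpy as np

def nlms_step(w, u_vec, d, mu=0.5, a_prot=1e-6):
    """One NLMS update. The step is divided by the input energy
    ||u(n)||^2, so a strong input no longer amplifies the gradient noise."""
    e = d - w @ u_vec                                   # a-priori error
    w = w + mu / (a_prot + u_vec @ u_vec) * u_vec * e   # normalized step
    return w, e
```

The bound 0 < µ < 2 now holds independently of the input power, which is what makes the NLMS attractive for signals whose level varies strongly, such as speech.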
LMS with independent adaptation steps

It is also possible to choose independent adaptation steps for each filter coefficient,

    ŵ(n+1) = ŵ(n) + M u(n) e*(n),

where M is the diagonal step-size matrix

    M = diag(α_1, α_2, …, α_M).

This method becomes more interesting when the filter coefficients are updated in the frequency domain, where different frequency components can be assigned different step sizes.

The Leaky LMS

As for the standard LMS, we start with the method of Steepest Descent,

    w(n+1) = w(n) + ½ µ [−∇J(n)],

but we now instead use the cost function

    J(n) = E{|e(n)|^2} + α ‖w(n)‖^2 = E{|e(n)|^2} + α w^H(n) w(n).

The gradient becomes

    ∇J(n) = −2p + 2R w(n) + 2α w(n).

With estimated statistics we get

    ∇̂J(n) = −2 p̂(n) + 2 R̂(n) ŵ(n) + 2α ŵ(n).

The update equation for the Leaky LMS is given by

    ŵ(n+1) = (1 − µα) ŵ(n) + µ u(n) e*(n).

The factor (1 − µα) is called the leakage factor; α is bounded so that 0 < 1 − µα < 1. The word leakage indicates that energy leaks out of the algorithm: the influence on the filter coefficients is allowed to decay.

For the Leaky LMS it can be shown that

    lim_{n→∞} E{ŵ(n)} = (R + αI)^(−1) p,

i.e., it results in the same effect as if white noise with variance α were added at the input. This increases the uncertainty in the input signal, which holds the filter coefficients back. The phenomenon is referred to as prewhitening, and α is denoted the prewhitening parameter. In systems without noise, the Leaky LMS makes the performance poorer.
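A corresponding sketch of the leaky update (Python/NumPy, real-valued signals; µ and α are illustrative values). Note how the whole coefficient vector is scaled by the leakage factor before the ordinary LMS correction is added:

```python
import numpy as np

def leaky_lms(u, d, M=2, mu=0.01, alpha=0.5):
    """Leaky LMS: w(n+1) = (1 - mu*alpha) * w(n) + mu * u(n) * e(n)."""
    w = np.zeros(M)
    for n in range(M - 1, len(u)):
        un = u[n::-1][:M]                  # [u(n), u(n-1), ..., u(n-M+1)]
        e = d[n] - w @ un                  # a-priori error
        w = (1 - mu * alpha) * w + mu * un * e
    return w
```

With α = 0 this reduces to the standard LMS; for α > 0 the mean solution is biased toward (R + αI)^(−1) p, consistent with the limit above.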
The Sign LMS

Again starting from the method of Steepest Descent,

    w(n+1) = w(n) + ½ µ [−∇J(n)],

but now with the cost function

    J(n) = E{|e(n)|},

the estimated gradient becomes

    ∇̂J(n) = −u(n) sign[e(n)].

With this gradient, the update equation becomes

    ŵ(n+1) = ŵ(n) + α u(n) sign[e(n)].

The Sign LMS is used in applications with extremely strict limits on computational complexity.

System: Echo canceller I

[Figure: block diagram of echo canceller I — the speech from one person is fed both to the hybrid and to an adaptive FIR filter; the filter output d̂(n) is subtracted from the echo d(n) returned by the hybrid, and the adaptive algorithm is driven by the residual e(n) = d(n) − d̂(n).]

System: Echo canceller II

[Figure: block diagram of echo canceller II — an echo path with its impulse response is modelled by the adaptive FIR filter, whose output d̂(n) is subtracted from d(n) before the signal is returned to the other person.]

What to read

Haykin chap. 5.9, 6.–6.

Exercises: 4.6, 4.7, 4.8

Computer exercise, theme: Implement the NLMS in Matlab, and compare the NLMS to the LMS for echo cancellation of a speech signal (Echo canceller I). A starting-point sketch is given below.
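The exercise asks for a Matlab implementation; as a language-neutral starting point, here is a hedged Python/NumPy skeleton of the comparison on purely synthetic data (the varying-level input, the decaying echo path `h` and all parameter values are our assumptions, stand-ins for the real speech signal and echo path):

```python
import numpy as np

rng = np.random.default_rng(1)

def lms(u, d, M, mu):
    """Standard LMS echo canceller; returns the residual echo e(n)."""
    w, e = np.zeros(M), np.zeros(len(u))
    for n in range(M, len(u)):
        un = u[n:n - M:-1]                    # [u(n), ..., u(n-M+1)]
        e[n] = d[n] - w @ un
        w += mu * un * e[n]
    return e

def nlms(u, d, M, mu=0.5, a_prot=1e-6):
    """NLMS: step normalized by the input energy, robust to level changes."""
    w, e = np.zeros(M), np.zeros(len(u))
    for n in range(M, len(u)):
        un = u[n:n - M:-1]
        e[n] = d[n] - w @ un
        w += mu / (a_prot + un @ un) * un * e[n]
    return e

# Synthetic stand-ins for the far-end speech and the echo path.
N, M = 20000, 32
u = rng.standard_normal(N) * (1 + np.sin(np.arange(N) / 500))  # varying level
h = rng.standard_normal(M) * np.exp(-np.arange(M) / 8)         # echo path
d = np.convolve(u, h)[:N] + 1e-3 * rng.standard_normal(N)

for name, e in [("LMS", lms(u, d, M, mu=0.002)), ("NLMS", nlms(u, d, M))]:
    print(f"{name:4s} residual power (last 2000 samples): {np.mean(e[-2000:]**2):.2e}")
```

With a real speech file one would replace `u` by the speech samples. The point of the comparison is that the LMS step must be chosen to stay stable during the loudest passages, while the NLMS adjusts its effective step automatically.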