Adaptive Filter Theory


Adaptive Filter Theory
Sung Ho Cho, Hanyang University, Seoul, Korea

Table of Contents

1. Wiener Filters
2. Gradient Search by Steepest Descent Method
3. Stochastic Gradient Adaptive Algorithms
4. Recursive Least Square (RLS) Algorithm

Wiener Filters

Filter Optimization Problem

Wiener Filtering:
- A priori knowledge of the signal statistics, or at least their estimates, is required.
- Complex and expensive hardware systems are necessary (particularly in nonstationary environments).

Adaptive Filtering:
- Complete knowledge of the signal statistics is not required.
- Filter weights eventually converge to the optimum Wiener solution for stationary processes.
- Filter weights show tracking capability in slowly time-varying nonstationary environments.
- Complex and expensive hardware systems are not, in general, necessary.

Wiener Filters (1/7)

Objectives: We want to design a filter h_i that minimizes the mean-squared estimation error E{e^2(n)}, so that the estimated signal d̂(n) best approximates the desired signal d(n).

Desired signal: d(n)
Reference signal: x(n), applied to the filter h_i, 0 ≤ i ≤ N-1
Estimated signal: d̂(n) = Σ_{i=0}^{N-1} h_i x(n-i)
Estimation error signal: e(n) = d(n) - d̂(n)

Wiener Filters (2/7)

Basic Structure: a transversal (tapped-delay-line) filter. The input x(n) passes through a chain of unit delays z^{-1}, giving x(n-1), x(n-2), ..., x(n-N+1); each tap is weighted by h_0, h_1, ..., h_{N-1} and the weighted taps are summed to form d̂(n), a linear combination of the current and past input samples.

e(n) = d(n) - Σ_{i=0}^{N-1} h_i x(n-i) = d(n) - H^T X(n)

Wiener Filters (3/7)

Basic Assumptions:
- d(n) and x(n) are zero-mean.
- d(n) and x(n) are jointly wide-sense stationary.

Notations:
- Filter coefficient vector: H = [h_0, h_1, ..., h_{N-1}]^T
- Reference input vector: X(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
- Estimation error signal: e(n) = d(n) - Σ_{i=0}^{N-1} h_i x(n-i) = d(n) - H^T X(n)
- Autocorrelation matrix: R_XX = E{X(n) X^T(n)}
- Cross-correlation vector: R_dX = E{d(n) X(n)}
- Optimum filter coefficient vector: H_opt = [h_{0,opt}, h_{1,opt}, ..., h_{N-1,opt}]^T

Wiener Filters (4/7)

Performance Measure (Cost Function):
ξ = E{e^2(n)} = E{(d(n) - H^T X(n))^2}
  = E{d^2(n)} - 2 H^T E{d(n) X(n)} + H^T E{X(n) X^T(n)} H
  = E{d^2(n)} - 2 H^T R_dX + H^T R_XX H

We now want to minimize ξ with respect to H:
∂ξ/∂H = -2 R_dX + 2 R_XX H = 0

Wiener-Hopf Solution (1931):
R_XX H_opt = R_dX,  i.e.,  H_opt = R_XX^{-1} R_dX
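As a quick numerical illustration (my own sketch, not part of the original slides), the Wiener-Hopf solution can be computed once sample estimates of R_XX and R_dX are available; the toy signal model below is assumed purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                       # number of filter taps
n_samples = 100_000

# Assumed toy model: d(n) is an unknown FIR system driven by white x(n), plus noise.
h_true = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.standard_normal(n_samples)
d = np.convolve(x, h_true)[:n_samples] + 0.05 * rng.standard_normal(n_samples)

# Reference input vectors X(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, one per row.
X = np.column_stack([np.concatenate([np.zeros(i), x[:n_samples - i]]) for i in range(N)])

# Sample estimates of R_XX = E{X X^T} and R_dX = E{d X}.
R_xx = X.T @ X / n_samples
R_dx = X.T @ d / n_samples

# Wiener-Hopf solution H_opt = R_XX^{-1} R_dX (solve a linear system rather than invert).
H_opt = np.linalg.solve(R_xx, R_dx)
print(H_opt)                # close to h_true for this toy model
```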

Wiener Filters (5/7)

Autocorrelation Matrix R_XX:
R_XX = E{X(n) X^T(n)} =
[ r_xx(0)     r_xx(1)     ...  r_xx(N-1) ]
[ r_xx(1)     r_xx(0)     ...  r_xx(N-2) ]
[   ...         ...       ...     ...    ]
[ r_xx(N-1)   r_xx(N-2)   ...  r_xx(0)   ]

R_XX is symmetric and Toeplitz.

Is R_XX invertible? Yes, almost always: R_XX is almost always a positive definite matrix.
A symmetric matrix A is called positive definite if x^T A x > 0 for every nonzero x. Then:
- All the eigenvalues of A are positive.
- The determinant of every principal submatrix of A is positive.
- Since the determinant of A is not zero, A is invertible.

Wiener Filters (6/7)

Let X_B(n) denote the vector obtained by rearranging the elements of X(n) backward, i.e.,
X_B(n) = [x(n-N+1), x(n-N+2), ..., x(n)]^T
Then E{X_B(n) X_B^T(n)} = R_XX.

Cross-correlation Vector R_dX:
R_dX = E{d(n) X(n)} = [r_dX(0), r_dX(1), ..., r_dX(N-1)]^T

Minimum Estimation Error:
e_min(n) = d(n) - H_opt^T X(n) = d(n) - X^T(n) H_opt

Wiener Filters (7/7)

Minimum Mean-Squared Estimation Error:
ξ_min = E{e_min^2(n)} = E{(d(n) - H_opt^T X(n))^2}
      = E{d^2(n)} - H_opt^T R_dX
      = E{d^2(n)} - H_opt^T R_XX H_opt

Example (error surfaces): for N = 1 the cost ξ is a parabola in h_0 with minimum ξ_min at h_{0,opt}; for N = 2 it is a paraboloid over (h_0, h_1) with minimum ξ_min at (h_{0,opt}, h_{1,opt}).

Orthogonality Principle:

Geometric picture: the desired signal d(n) is decomposed into its projection d̂(n), which lies in the plane M, and the minimum error e_min(n), which is perpendicular to M; θ is the angle between d(n) and the plane M.

- The plane M is spanned by X(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, i.e., d̂(n) = Σ_{i=0}^{N-1} h_i x(n-i) lies in M.
- The minimum error e_min(n) is orthogonal to the plane M: E{e_min(n) X(n)} = 0_N.
- Perfect estimation is possible if θ = 0, and the estimation fails if θ = π/2.
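The orthogonality principle is easy to verify numerically; the short sketch below (mine, reusing a toy setup like the earlier Wiener example) checks that the sample correlation between e_min(n) and each element of X(n) is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 4, 100_000

# Assumed toy data: white reference input and a noisy FIR-filtered desired signal.
x = rng.standard_normal(n_samples)
d = np.convolve(x, [0.6, -0.3, 0.2, 0.1])[:n_samples] + 0.05 * rng.standard_normal(n_samples)

# Reference input vectors and the Wiener solution from sample statistics.
X = np.column_stack([np.concatenate([np.zeros(i), x[:n_samples - i]]) for i in range(N)])
H_opt = np.linalg.solve(X.T @ X / n_samples, X.T @ d / n_samples)

e_min = d - X @ H_opt
print(X.T @ e_min / n_samples)   # approximately the zero vector: E{e_min(n) X(n)} = 0
```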

Some Drawbacks of the Wiener Filter:
- Signal statistics must be known a priori: we must know R_XX and R_dX, or at least their estimates.
- A matrix inversion operation is required: heavy computational load, not suitable for real-time applications.
- Situations get worse in nonstationary environments: we have to compute R_XX(n) and R_dX(n), and perform the matrix inversion, at every time n.

Gradient Search by Steepest Descent Method

Steepest Descent Method (1/5)

Objectives: We want to design a filter h_i(n) in a recursive form in order to avoid the matrix inversion required by the Wiener solution.

Desired signal: d(n)
Reference signal: x(n), applied to the time-varying filter h_i(n), 0 ≤ i ≤ N-1
Estimated signal: d̂(n) = Σ_{i=0}^{N-1} h_i(n) x(n-i)
Estimation error signal: e(n) = d(n) - d̂(n)

Steepest Descent Method (2/5)

Basic Structure: the same transversal (tapped-delay-line) filter as before, but with time-varying coefficients h_0(n), h_1(n), ..., h_{N-1}(n) applied to the delayed inputs x(n), x(n-1), ..., x(n-N+1).

e(n) = d(n) - Σ_{i=0}^{N-1} h_i(n) x(n-i) = d(n) - H^T(n) X(n)

Steepest Descent Method (3/5)

Basic Assumptions:
- d(n) and x(n) are zero-mean.
- d(n) and x(n) are jointly wide-sense stationary.

Notations:
- Filter coefficient vector: H(n) = [h_0(n), h_1(n), ..., h_{N-1}(n)]^T
- Reference input vector: X(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
- Estimation error signal: e(n) = d(n) - Σ_{i=0}^{N-1} h_i(n) x(n-i) = d(n) - H^T(n) X(n)
- Autocorrelation matrix: R_XX = E{X(n) X^T(n)}
- Cross-correlation vector: R_dX = E{d(n) X(n)}
- Optimum filter coefficient vector: H_opt = [h_{0,opt}, h_{1,opt}, ..., h_{N-1,opt}]^T

Steepest Descent Method (4/5)

The filter coefficient vector at time n+1 equals the coefficient vector at time n plus a change proportional to the negative gradient of the mean-squared error, i.e.,

H(n+1) = H(n) - (1/2) μ ∇_{H(n)}(n),   μ = adaptation step-size
H(n) = [h_0(n), h_1(n), ..., h_{N-1}(n)]^T

Performance Measure (Cost Function):
ξ(n) = E{e^2(n)} = E{d^2(n)} - 2 H^T(n) R_dX + H^T(n) R_XX H(n)

Steepest Descent Method (5/5)

The Gradient of the Mean-Squared Error:
∇_{H(n)}(n) = ∂ξ(n)/∂H(n) = -2 R_dX + 2 R_XX H(n)

Therefore, the recursive update equation for the coefficient vector becomes
H(n+1) = [I_N - μ R_XX] H(n) + μ R_dX

Misalignment Vector: V(n) = H(n) - H_opt
V(n+1) = [I_N - μ R_XX] V(n)
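A minimal sketch of this recursion (my own illustration, not from the slides), assuming R_XX and R_dX have already been estimated as in the earlier Wiener example; the numerical values below are arbitrary.

```python
import numpy as np

def steepest_descent(R_xx, R_dx, mu, n_iter=500):
    """Iterate H(n+1) = (I - mu*R_XX) H(n) + mu*R_dX starting from H(0) = 0."""
    N = len(R_dx)
    H = np.zeros(N)
    for _ in range(n_iter):
        H = (np.eye(N) - mu * R_xx) @ H + mu * R_dx
    return H

# Assumed toy correlation structure, for illustration only.
R_xx = np.array([[1.0, 0.5], [0.5, 1.0]])
R_dx = np.array([0.7, 0.3])
mu = 0.9 * 2 / np.max(np.linalg.eigvalsh(R_xx))   # safely inside 0 < mu < 2/lambda_max
print(steepest_descent(R_xx, R_dx, mu))            # converges to R_XX^{-1} R_dX
```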

Convergence of Steepest Descent Method (1/2)

Convergence (or Stability) Condition:
|1 - μ λ_i| < 1  ⟺  0 < μ < 2/λ_i for every i  ⟺  0 < μ < 2/λ_max
(λ_i = the i-th eigenvalue of R_XX)

Convergence is slow if the eigenvalue spread λ_max/λ_min is large.

Convergence of Steepest Descent Method (2/2)

Time Constant: the convergence behavior of the i-th element of the misalignment vector is
v_i(n+1) = (1 - μ λ_i) v_i(n)  ⟹  v_i(n) = (1 - μ λ_i)^n v_i(0)

Time constant for the i-th element of the misalignment vector: writing 1 - μ λ_i = exp(-1/τ_i),
τ_i = 1 / ln(1/(1 - μ λ_i)) ≈ 1/(μ λ_i) (samples) for μ λ_i << 1

Steady-State Value: H(∞) = H_opt, or V(∞) = 0_N.

We still need a priori knowledge of the signal statistics.
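To make the step-size bound and the time constants concrete, here is a small sketch (mine; the autocorrelation matrix is an assumed example value, not from the slides):

```python
import numpy as np

# Assumed example autocorrelation matrix (symmetric, Toeplitz, positive definite).
R_xx = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.8],
                 [0.6, 0.8, 1.0]])

lam = np.linalg.eigvalsh(R_xx)          # eigenvalues of R_XX
mu_max = 2.0 / lam.max()                # stability bound: 0 < mu < 2/lambda_max
mu = 0.1 * mu_max                       # a conservative choice

tau = -1.0 / np.log(1.0 - mu * lam)     # exact time constants, in samples
tau_approx = 1.0 / (mu * lam)           # approximation valid for mu*lambda_i << 1

print("eigenvalue spread:", lam.max() / lam.min())
print("mu_max:", mu_max)
print("time constants:", tau, "approx:", tau_approx)
```

Note how the smallest eigenvalue sets the slowest time constant, which is the mechanism behind the eigenvalue-spread remark on the previous slide.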

Stochastic Gradient Adaptive Algorithms

Stochastic Gradient Adaptive Filters

Motivations:
- No a priori information about signal statistics
- No matrix inversion
- Tracking capability
- Self-designing (recursive method)

The filter gradually learns the required correlation of the input signals and adjusts its coefficient vector recursively according to some suitably chosen instantaneous error criterion.

Evaluation Criteria:
- Rate of convergence
- Misadjustment (deviation from the optimum solution)
- Robustness to ill-conditioned data
- Computational costs
- Hardware implementation costs
- Numerical problems

Applications of Stochastic Gradient Adaptive Filters (1/2)

System Identification: the reference input x(n) drives both an unknown system and the adaptive filter; the unknown system output plus measurement noise ξ(n) forms the desired signal d(n), and the adaptive filter is adjusted to drive e(n) toward zero so that it models the unknown system.

Adaptive Prediction: the reference input is a delayed version of the desired signal, x(n) = d(n-Δ); the adaptive filter predicts the current sample d(n) from its past, and e(n) is the prediction error.

Applications of Stochastic Gradient Adaptive Filters (2/2)

Noise Cancellation: the primary input d(n) = y(n) + ξ(n) contains the signal of interest y(n) plus noise ξ(n); the reference input x(n) is correlated with the noise. The adaptive filter produces a noise estimate ξ̂(n), and the error e(n) = d(n) - ξ̂(n) is the cleaned signal.

Inverse Filtering: a training signal is transmitted through an unknown channel; the received signal x(n), corrupted by noise ξ(n), feeds the adaptive filter, whose output is compared with a copy of the training signal delayed by Δ to form d(n) and the error e(n). The filter converges to an (approximate) inverse of the channel.

Classification of Adaptive Filters

System Identification:
- System identification
- Layered earth modeling

Adaptive Prediction:
- Linear predictive coding
- Autoregressive spectral analysis
- ADPCM

Noise Cancellation:
- Adaptive noise cancellation
- Adaptive echo cancellation
- Active noise control
- Adaptive beamforming

Inverse Filtering:
- Adaptive equalization
- Deconvolution
- Blind equalization

Stochastic Gradient Adaptive Algorithms (1/6)

General structure: the reference input x(n) drives the adaptive filter h_i(n), 0 ≤ i ≤ N-1, producing
d̂(n) = Σ_{i=0}^{N-1} h_i(n) x(n-i),   e(n) = d(n) - H^T(n) X(n),
and an adaptive algorithm updates the coefficients as
H(n+1) = H(n) - (μ/α) ∇_{H(n)}(n),   ∇_{H(n)}(n) = ∂|e(n)|^α / ∂H(n).

Various forms arise according to the choice of the performance measure (the exponent α).
If there is no correlation between d(n) and x(n), then no estimation can be made.

Stochastic Gradient Adaptive Algorithms (2/6)

Notations:
- Filter coefficient vector: H(n) = [h_0(n), h_1(n), ..., h_{N-1}(n)]^T
- Reference input vector: X(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
- Estimation error signal: e(n) = d(n) - Σ_{i=0}^{N-1} h_i(n) x(n-i) = d(n) - H^T(n) X(n)
- Autocorrelation matrix: R_XX = E{X(n) X^T(n)}
- Cross-correlation vector: R_dX = E{d(n) X(n)}
- Optimum filter coefficient vector: H_opt = [h_{0,opt}, h_{1,opt}, ..., h_{N-1,opt}]^T
- Misalignment vector: V(n) = H(n) - H_opt
- Covariance matrix of the misalignment vector: K(n) = E{V(n) V^T(n)}

Stochastic Gradient Adaptive Algorithms (3/6)

Sign Algorithm: α = 1
The sign algorithm tries to minimize the instantaneous absolute error value at each iteration:
e(n) = d(n) - H^T(n) X(n),   ∇_{H(n)}(n) = ∂|e(n)| / ∂H(n)

Filter Coefficient Updates:
H(n+1) = H(n) + μ X(n) sign{e(n)},
sign{e(n)} = +1 if e(n) ≥ 0, -1 if e(n) < 0

Stochastic Gradient Adaptive Algorithms (4/6)

Least Mean Square (LMS) Algorithm: α = 2
The LMS algorithm tries to minimize the instantaneous squared error value at each iteration:
e(n) = d(n) - H^T(n) X(n),   ∇_{H(n)}(n) = ∂e^2(n) / ∂H(n)

Filter Coefficient Updates:
H(n+1) = H(n) + μ X(n) e(n)
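A minimal LMS sketch (my own illustration; the system-identification data below is an assumed toy setup using the same impulse response as the example later in the slides):

```python
import numpy as np

def lms(x, d, N, mu):
    """LMS: H(n+1) = H(n) + mu * X(n) * e(n), with X(n) = [x(n), ..., x(n-N+1)]^T."""
    H = np.zeros(N)
    X = np.zeros(N)                            # reference input vector X(n)
    e = np.zeros(len(x))
    for n in range(len(x)):
        X = np.concatenate(([x[n]], X[:-1]))   # shift in the newest sample
        e[n] = d[n] - H @ X                    # estimation error
        H = H + mu * X * e[n]                  # coefficient update
    return H, e

# Assumed toy system-identification setup.
rng = np.random.default_rng(1)
h_true = np.array([0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1])
x = rng.standard_normal(20_000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
H, e = lms(x, d, N=7, mu=0.01)
print(H)                                        # close to h_true
```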

Stochastic Gradient Adaptive Algorithms (5/6)

Least Mean Absolute Third (LMA) Algorithm: α = 3
The LMA algorithm tries to minimize the instantaneous absolute error value to the third power at each iteration:
e(n) = d(n) - H^T(n) X(n),   ∇_{H(n)}(n) = ∂|e(n)|^3 / ∂H(n)

Filter Coefficient Updates:
H(n+1) = H(n) + μ X(n) e^2(n) sign{e(n)}

Stochastic Gradient Adaptive Algorithms (6/6)

Least Mean Fourth (LMF) Algorithm: α = 4
The LMF algorithm tries to minimize the instantaneous error value to the fourth power at each iteration:
e(n) = d(n) - H^T(n) X(n),   ∇_{H(n)}(n) = ∂e^4(n) / ∂H(n)

Filter Coefficient Updates:
H(n+1) = H(n) + μ X(n) e^3(n)
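Since the four algorithms differ only in the update term, they can share a single loop; the sketch below (mine, not from the slides) parameterizes the coefficient update by the exponent α.

```python
import numpy as np

def stochastic_gradient_filter(x, d, N, mu, alpha=2):
    """Adaptive FIR filter with update H += mu * X * |e|^(alpha-1) * sign(e).

    alpha = 1: Sign, alpha = 2: LMS, alpha = 3: LMA, alpha = 4: LMF.
    Note: np.sign(0) = 0, a negligible difference from the slide convention sign(0) = +1.
    """
    H = np.zeros(N)
    X = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(len(x)):
        X = np.concatenate(([x[n]], X[:-1]))
        e[n] = d[n] - H @ X
        H = H + mu * X * np.abs(e[n]) ** (alpha - 1) * np.sign(e[n])
    return H, e
```

In practice the step-size must be chosen per algorithm (much smaller for LMA and LMF, since their update terms grow with the error magnitude), consistent with the robustness ranking discussed later.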

Convergence of the Adaptive Algorithms (1/2)

Basically, we need to know the mean and mean-squared behavior of the algorithms.

For the analysis of the statistical mean behavior:
- We want a set of statistical difference equations that characterizes E{H(n)} or E{V(n)}.
- We also need to check: stability conditions, convergence speed, unbiased estimation capability.

For the analysis of the statistical mean-squared behavior:
- We want a set of statistical difference equations that characterizes σ_e^2(n) = E{e^2(n)} and K(n) = E{V(n) V^T(n)}.
- We also need to check: stability conditions, convergence speed, estimation precision.

Convergence of the Adaptive Algorithms (2/2)

Basic Assumptions for the Convergence Analysis:
- The input signals d(n) and x(n) are zero-mean, jointly wide-sense stationary, and jointly Gaussian with finite variances. A consequence of this assumption is that the estimation error e(n) = d(n) - H^T(n) X(n) is also zero-mean and Gaussian when conditioned on the coefficient vector H(n).
- Independence Assumption: the input pair {d(n), X(n)} at time n is independent of {d(k), X(k)} at time k, if n ≠ k. This assumption is seldom true in practice, but is valid when the step-size μ is chosen to be sufficiently small. One direct consequence of the independence assumption is that the coefficient vector H(n) is uncorrelated with the input pair {d(n), X(n)}, since H(n) depends only on inputs at time n-1 and before.

Sign Algorithm (1/2)

Mean Behavior:
E{H(n+1)} = [I_N - μ √(2/π) (1/σ_e(n)) R_XX] E{H(n)} + μ √(2/π) (1/σ_e(n)) R_dX
E{V(n+1)} = [I_N - μ √(2/π) (1/σ_e(n)) R_XX] E{V(n)}

Mean-Squared Behavior:
σ_e^2(n) = ξ_min + tr{K(n) R_XX}
K(n+1) = K(n) - μ √(2/π) (1/σ_e(n)) [K(n) R_XX + R_XX K(n)] + μ^2 R_XX

Sign Algorithm (2/2)

Steady-State Mean-Squared Estimation Error:
σ_e^2(∞) ≈ ξ_min + (μ/2) √(π ξ_min / 2) tr{R_XX}

Convergence Condition (Weak Convergence):
The long-term time-average of the mean absolute error (MAE) is bounded for any positive value of μ.

Very robust, but slow.

LMS Algorithm (1/2)

Mean Behavior:
E{H(n+1)} = [I_N - μ R_XX] E{H(n)} + μ R_dX
E{V(n+1)} = [I_N - μ R_XX] E{V(n)}

Mean-Squared Behavior:
σ_e^2(n) = ξ_min + tr{K(n) R_XX}
K(n+1) = K(n) - μ [K(n) R_XX + R_XX K(n)] + μ^2 [σ_e^2(n) R_XX + 2 R_XX K(n) R_XX]

LMS Algorithm (2/2)

Steady-State Mean-Squared Estimation Error:
σ_e^2(∞) ≈ ξ_min + (μ/2) ξ_min tr{R_XX}

Mean Convergence: 0 < μ < 2/λ_max
Mean-Squared Convergence: 0 < μ < 2 / (3 tr{R_XX})

If μ_LMS = μ_sign √(π / (2 ξ_min)), then σ_e^2(∞)|_LMS = σ_e^2(∞)|_sign.

The convergence of the algorithm strongly depends on the input signal statistics.

LMA Algorithm (1/2)

Mean Behavior:
E{H(n+1)} = [I_N - 2√(2/π) μ σ_e(n) R_XX] E{H(n)} + 2√(2/π) μ σ_e(n) R_dX
E{V(n+1)} = [I_N - 2√(2/π) μ σ_e(n) R_XX] E{V(n)}

Mean-Squared Behavior:
σ_e^2(n) = ξ_min + tr{K(n) R_XX}
K(n+1) = K(n) - 2√(2/π) μ σ_e(n) [K(n) R_XX + R_XX K(n)] + 3 μ^2 σ_e^2(n) [σ_e^2(n) R_XX + 3 R_XX K(n) R_XX]

LMA Algorithm (2/2)

Steady-State Mean-Squared Estimation Error:
σ_e^2(∞) ≈ ξ_min + (3μ/4) √(π ξ_min / 2) ξ_min tr{R_XX}

Mean Convergence:
0 < μ < √(π/2) / (λ_max σ_e(n)), for all n

Very fast, but one must be careful: the convergence of the LMA algorithm depends on the initial choice of the coefficient vector.

If μ_LMA = (2/3) μ_LMS √(2 / (π ξ_min)), then σ_e^2(∞)|_LMA ≈ σ_e^2(∞)|_LMS.

LMF Algorithm (1/2)

Mean Behavior:
E{H(n+1)} = [I_N - 3 μ σ_e^2(n) R_XX] E{H(n)} + 3 μ σ_e^2(n) R_dX
E{V(n+1)} = [I_N - 3 μ σ_e^2(n) R_XX] E{V(n)}

Mean-Squared Behavior:
σ_e^2(n) = ξ_min + tr{K(n) R_XX}
K(n+1) = K(n) - 3 μ σ_e^2(n) [K(n) R_XX + R_XX K(n)] + 15 μ^2 σ_e^4(n) [σ_e^2(n) I_N + 6 R_XX K(n)] R_XX

LMF Algorithm (2/2)

Steady-State Mean-Squared Estimation Error: ?

Mean Convergence:
0 < μ < 2 / (3 λ_max σ_e^2(n)), for all n

Very fast, but one must also be careful: the convergence of the LMF algorithm also depends on the initial choice of the coefficient vector.

Further Observations (1/2)

Misadjustment: M = ξ_ex(∞) / ξ_min (excess mean-squared error relative to ξ_min)

- Sign Algorithm: M ≈ (μ/2) √(π / (2 ξ_min)) tr{R_XX}
- LMS Algorithm: M ≈ (μ/2) tr{R_XX}
- LMA Algorithm: M ≈ (3μ/4) √(π ξ_min / 2) tr{R_XX}
- LMF Algorithm: ?

Further Observations (2/2)

- The misadjustment M increases with the filter order N.
- The misadjustment M is directly proportional to μ.
- The convergence speed is inversely proportional to μ.

Convergence Speed: (Fast) LMA, LMF, LMS, Sign (Slow)
Robustness (or Stability): (Good) Sign, LMS, LMA, LMF (Bad)

Example: System Identification Mode (1/6)

Setup: the reference input x(n) drives both the unknown system and the adaptive filter; the unknown system output plus measurement noise ξ(n) forms the desired signal d(n), from which the adaptive filter output is subtracted to give e(n).

H_opt = [0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1]^T

Example: System Identification Mode (2/6)

Two Sets of Reference Inputs (third-order AR processes driven by ζ(n)):
- CASE 1: eigenvalue spread ratio = 5.3,  x(n) = ζ(n) - … x(n-1) - 0.1 x(n-2) - 0.2 x(n-3)
- CASE 2: eigenvalue spread ratio = …,  x(n) = ζ(n) - … x(n-1) - … x(n-2) - 0.5 x(n-3)

Measurement noise ζ(n): white Gaussian process.
Convergence parameter μ: chosen separately for the Sign, LMS, LMA, and LMF algorithms.
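The eigenvalue spread ratio quoted for each case can be checked numerically; the sketch below (my own) uses assumed AR(3) coefficients of the same form, since the exact CASE values are only partly legible here.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n_samples, N = 200_000, 7

# Assumed AR(3) coefficients of the same form as CASE 1 (illustrative values only):
# x(n) = zeta(n) - 0.4 x(n-1) - 0.1 x(n-2) - 0.2 x(n-3)
a = [1.0, 0.4, 0.1, 0.2]
zeta = rng.standard_normal(n_samples)
x = lfilter([1.0], a, zeta)

# Estimate the autocorrelation sequence and form the N x N Toeplitz matrix R_XX.
r = np.array([np.mean(x[k:] * x[:n_samples - k]) for k in range(N)])
R_xx = toeplitz(r)

lam = np.linalg.eigvalsh(R_xx)
print("eigenvalue spread ratio:", lam.max() / lam.min())
```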

Example: System Identification Mode (3/6)

CASE 1 (eigenvalue spread ratio = 5.3): MSE in dB versus number of iterations for the LMA, LMS, LMF, and Sign algorithms. [Figure: mean-squared behavior of the coefficients]

Example: System Identification Mode (4/6)

CASE 1: E{h_1(n)} versus number of iterations for the LMA, LMS, LMF, and Sign algorithms. [Figure: mean behavior of the coefficients]

Example: System Identification Mode (5/6)

CASE 2 (larger eigenvalue spread ratio): MSE in dB versus number of iterations for the LMA, LMS, LMF, and Sign algorithms. [Figure: mean-squared behavior of the coefficients]

Example: System Identification Mode (6/6)

CASE 2: E{h_1(n)} versus number of iterations for the LMA, LMS, LMF, and Sign algorithms. [Figure: mean behavior of the coefficients]

Other Algorithms (1/2)

Signed Regressor Algorithm:
H(n+1) = H(n) + μ sign{X(n)} e(n)

Sign-Sign Algorithm:
H(n+1) = H(n) + μ sign{X(n)} sign{e(n)}

Normalized LMS Algorithm:
H(n+1) = H(n) + (μ / (X^T(n) X(n))) X(n) e(n)

Complex LMS Algorithm:
H(n+1) = H(n) + μ X*(n) e(n)
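A minimal normalized-LMS sketch (mine); the small regularizer eps is a common practical addition, not something stated on the slide.

```python
import numpy as np

def nlms(x, d, N, mu, eps=1e-8):
    """Normalized LMS: H(n+1) = H(n) + mu / (X^T X + eps) * X(n) e(n).

    eps (assumed here) guards against division by zero when X(n) is all zeros.
    """
    H = np.zeros(N)
    X = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(len(x)):
        X = np.concatenate(([x[n]], X[:-1]))
        e[n] = d[n] - H @ X
        H = H + (mu / (X @ X + eps)) * X * e[n]
    return H, e
```

The normalization makes the effective step-size independent of the input power, which is why NLMS is usually preferred over plain LMS when the input level varies.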

Other Algorithms (2/2)

Hybrid Algorithm #1: LMS + LMF
Cost: ∇_{H(n)}(n) = ∂{φ e^2(n) + (1-φ) e^4(n)} / ∂H(n),  0 ≤ φ ≤ 1
Update: H(n+1) = H(n) + μ {φ X(n) e(n) + 2(1-φ) X(n) e^3(n)}

Hybrid Algorithm #2: Sign + LMA
Cost: ∇_{H(n)}(n) = ∂{φ |e(n)| + (1-φ) |e(n)|^3} / ∂H(n),  0 ≤ φ ≤ 1
Update: H(n+1) = H(n) + μ {φ X(n) + 3(1-φ) X(n) e^2(n)} sign{e(n)}

Recursive Least Square (RLS) Algorithm

RLS Algorithm (1/5)

Cost Function:
ε(n) = Σ_{i=1}^{n} β(n, i) e^2(i),  where n = length of the observable data
Error signal at time instant i: e(i) = d(i) - H^T(n) X(i)
The coefficient vector H(n) remains fixed during the observation interval 1 ≤ i ≤ n.

Weighting factor: 0 < β(n, i) ≤ 1 (normally β(n, i) = λ^{n-i}, λ = forgetting factor)

By the method of exponentially weighted least squares, we want to minimize
ε(n) = Σ_{i=1}^{n} λ^{n-i} e^2(i)

Very fast, but computationally very complex. The algorithm is useful when the number of taps required is small.

RLS Algorithm (2/5)

Normal Equation: Φ(n) H(n) = Θ(n), where
Φ(n) = Σ_{i=1}^{n} λ^{n-i} X(i) X^T(i)
Θ(n) = Σ_{i=1}^{n} λ^{n-i} d(i) X(i)

We write
Φ(n) = λ Σ_{i=1}^{n-1} λ^{n-1-i} X(i) X^T(i) + X(n) X^T(n) = λ Φ(n-1) + X(n) X^T(n)
Θ(n) = λ Θ(n-1) + d(n) X(n)

Do we need a matrix inversion? No!

RLS Algorithm (3/5)

Matrix Inversion Lemma:
If A = B^{-1} + C D^{-1} C^T, then A^{-1} = B - B C (D + C^T B C)^{-1} C^T B,
where A and B are N×N positive definite, C is N×M, and D is M×M positive definite.

Letting A = Φ(n), B^{-1} = λ Φ(n-1), C = X(n), D = 1, we express Φ^{-1}(n) in a recursive form:
Φ^{-1}(n) = λ^{-1} Φ^{-1}(n-1) - [λ^{-2} Φ^{-1}(n-1) X(n) X^T(n) Φ^{-1}(n-1)] / [1 + λ^{-1} X^T(n) Φ^{-1}(n-1) X(n)]
(the column vector λ^{-1} Φ^{-1}(n-1) X(n) / [1 + λ^{-1} X^T(n) Φ^{-1}(n-1) X(n)] will be denoted K(n), the gain vector)
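A quick numerical check of the lemma for the scalar-D case used here (my own sketch; the matrices are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random positive definite B^{-1} (so B is its inverse), a column C, and scalar D = 1.
M = rng.standard_normal((N, N))
B_inv = M @ M.T + N * np.eye(N)
B = np.linalg.inv(B_inv)
C = rng.standard_normal((N, 1))
D = np.array([[1.0]])

A = B_inv + C @ np.linalg.inv(D) @ C.T
A_inv_lemma = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B
print(np.allclose(np.linalg.inv(A), A_inv_lemma))   # True
```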

RLS Algorithm (4/5)

Define
P(n) = Φ^{-1}(n)   (N×N)
K(n) = λ^{-1} P(n-1) X(n) / [1 + λ^{-1} X^T(n) P(n-1) X(n)]   (N×1)

Rearranging,
K(n) [1 + λ^{-1} X^T(n) P(n-1) X(n)] = λ^{-1} P(n-1) X(n)
K(n) = {λ^{-1} P(n-1) - λ^{-1} K(n) X^T(n) P(n-1)} X(n)
K(n) = P(n) X(n) = Φ^{-1}(n) X(n)

Therefore,
P(n) = λ^{-1} P(n-1) - λ^{-1} K(n) X^T(n) P(n-1)

RLS Algorithm (5/5)

Time Update for H(n):
H(n) = Φ^{-1}(n) Θ(n) = P(n) Θ(n)
     = λ P(n) Θ(n-1) + d(n) P(n) X(n)
     = P(n-1) Θ(n-1) - K(n) X^T(n) P(n-1) Θ(n-1) + d(n) K(n)
     = Φ^{-1}(n-1) Θ(n-1) - K(n) X^T(n) Φ^{-1}(n-1) Θ(n-1) + d(n) K(n)
H(n) = H(n-1) + K(n) [d(n) - X^T(n) H(n-1)]

Innovation (a priori estimation error): α(n) = d(n) - X^T(n) H(n-1)
H(n) = H(n-1) + K(n) α(n)

A posteriori estimation error e(n): e(n) = d(n) - X^T(n) H(n)

Summary of the RLS Algorithm

Initialization:
- Determine the forgetting factor λ (normally 0.9 ≤ λ < 1)
- (N×N): P(0) = δ^{-1} I_N  (δ = a small positive number)
- (N×1): H(0) = 0_N

Main Iteration (for n = 1, 2, ...):
- (N×1): K(n) = λ^{-1} P(n-1) X(n) / [1 + λ^{-1} X^T(n) P(n-1) X(n)]
- (1×1): α(n) = d(n) - X^T(n) H(n-1)
- (N×1): H(n) = H(n-1) + K(n) α(n)
- (N×N): P(n) = λ^{-1} P(n-1) - λ^{-1} K(n) X^T(n) P(n-1)
- (1×1): e(n) = d(n) - X^T(n) H(n)  (if necessary)
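A direct transcription of this summary into code (my sketch; the toy data at the bottom is assumed purely for demonstration):

```python
import numpy as np

def rls(x, d, N, lam=0.99, delta=0.01):
    """RLS as summarized above: gain K(n), innovation alpha(n), then update H and P."""
    P = np.eye(N) / delta                 # P(0) = delta^{-1} I_N
    H = np.zeros(N)                       # H(0) = 0_N
    X = np.zeros(N)                       # X(n) = [x(n), ..., x(n-N+1)]^T
    e = np.zeros(len(x))
    for n in range(len(x)):
        X = np.concatenate(([x[n]], X[:-1]))
        Px = P @ X
        K = Px / (lam + X @ Px)           # gain vector K(n)
        alpha = d[n] - H @ X              # a priori error (innovation)
        H = H + K * alpha                 # coefficient update
        P = (P - np.outer(K, Px)) / lam   # P(n) = lambda^{-1}[P(n-1) - K(n) X^T(n) P(n-1)]
        e[n] = d[n] - H @ X               # a posteriori error
    return H, e

# Assumed toy system-identification data.
rng = np.random.default_rng(2)
x = rng.standard_normal(5_000)
d = np.convolve(x, [0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1])[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(rls(x, d, N=7)[0])
```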
