Optimal and Adaptive Filtering


1 Optimal and Adaptive Filtering
Murat Üney, Institute for Digital Communications (IDCOM), 26/06/2017

2 Table of Contents
1 Optimal Filtering: Optimal filter design; Application examples; Optimal solution: Wiener-Hopf equations; Example: Wiener equaliser
2 Adaptive filtering: Introduction; Recursive Least Squares Adaptation; Least Mean Square Algorithm; Applications
3 Optimal signal detection: Application examples and optimal hypothesis testing; Additive white and coloured noise
4 Summary

3 Optimal Filtering / Optimal filter design
Figure 1: Optimal filtering scenario (observation sequence, linear time-invariant system, estimate).
y(n): observation related to a stationary signal of interest x(n).
h(n): the impulse response of an LTI estimator.
$\hat{x}(n)$: estimate of x(n), given by
$\hat{x}(n) = h(n) * y(n) = \sum_{i=-\infty}^{\infty} h(i)\,y(n-i)$.

4 Optimal Filtering / Optimal filter design
Figure 1: Optimal filtering scenario.
Find h(n) with the best error performance:
$e(n) = x(n) - \hat{x}(n) = x(n) - h(n) * y(n)$
The error performance is measured by the mean squared error (MSE)
$\xi = E\left[\left(e(n)\right)^2\right]$.

5 Optimal Filtering / Optimal filter design
Figure 1: Optimal filtering scenario.
The MSE is a function of h(n), i.e., of $h = [\ldots, h(-2), h(-1), h(0), h(1), h(2), \ldots]$:
$\xi(h) = E\left[\left(e(n)\right)^2\right] = E\left[\left(x(n) - h(n) * y(n)\right)^2\right]$.

6 Optimal Filtering / Optimal filter design
Figure 1: Optimal filtering scenario.
The MSE is a function of h(n), i.e., of $h = [\ldots, h(-2), h(-1), h(0), h(1), h(2), \ldots]$:
$\xi(h) = E\left[\left(e(n)\right)^2\right] = E\left[\left(x(n) - h(n) * y(n)\right)^2\right]$.
Thus, the optimal filtering problem is
$h_{\mathrm{opt}} = \arg\min_{h} \xi(h)$

7 Optimal Filtering / Application examples
1) Prediction, interpolation and smoothing of signals

8 Optimal Filtering / Application examples
1) Prediction, interpolation and smoothing of signals
d = 1: prediction, e.g., for anti-aircraft fire control.

9 Optimal Filtering / Application examples
1) Prediction, interpolation and smoothing of signals
d = -1 (smoothing), d = -1/2 (interpolation): signal denoising applications, estimation of missing data points.

10 Optimal Filtering / Application examples
2) System identification
Figure 2: System identification using a training sequence t(n) from an ergodic and stationary ensemble.

11 Optimal Filtering / Application examples
2) System identification
Figure 2: System identification using a training sequence t(n) from an ergodic and stationary ensemble.
Example: echo cancellation in full-duplex data transmission (modem). A filter fed by the transmitted signal y(n) produces a synthesised echo $\hat{x}(n)$; subtracting it from the received signal x(n) (received signal + echo, via the hybrid transformer and two-wire line) leaves the error e(n).

12 Optimal Filtering / Application examples
3) Inverse system identification
Figure 3: Inverse system identification using x(n) as a training sequence.

13 Optimal Filtering / Application examples
3) Inverse system identification
Figure 3: Inverse system identification using x(n) as a training sequence.
Example: channel equalisation in digital communication systems.

14 Optimal Filtering / Optimal solution: Normal equations
Consider the MSE $\xi(h) = E\left[(e(n))^2\right]$.
The optimal filter satisfies $\nabla \xi(h)\big|_{h_{\mathrm{opt}}} = 0$. Equivalently, for all $j = \ldots, -2, -1, 0, 1, 2, \ldots$
$\frac{\partial \xi}{\partial h(j)} = E\left[2 e(n) \frac{\partial e(n)}{\partial h(j)}\right] = E\left[2 e(n) \frac{\partial \left(x(n) - \sum_{i=-\infty}^{\infty} h(i) y(n-i)\right)}{\partial h(j)}\right] = E\left[2 e(n) \frac{\partial \left(-h(j) y(n-j)\right)}{\partial h(j)}\right] = -2 E\left[e(n) y(n-j)\right]$

15 Optimal Filtering / Optimal solution: Normal equations
Consider the MSE $\xi(h) = E\left[(e(n))^2\right]$.
The optimal filter satisfies $\nabla \xi(h)\big|_{h_{\mathrm{opt}}} = 0$. Equivalently, for all $j = \ldots, -2, -1, 0, 1, 2, \ldots$
$\frac{\partial \xi}{\partial h(j)} = E\left[2 e(n) \frac{\partial e(n)}{\partial h(j)}\right] = -2 E\left[e(n) y(n-j)\right]$
Hence, the optimal filter solves the normal equations
$E\left[e(n)\, y(n-j)\right] = 0, \qquad j = \ldots, -2, -1, 0, 1, 2, \ldots$

16 Optimal Filtering / Optimal solution: Wiener-Hopf equations
The error of $h_{\mathrm{opt}}$ is orthogonal to its observations, i.e., for all $j \in \mathbb{Z}$
$E\left[e_{\mathrm{opt}}(n)\, y(n-j)\right] = 0$
which is known as the principle of orthogonality.

17 Optimal Filtering / Optimal solution: Wiener-Hopf equations
The error of $h_{\mathrm{opt}}$ is orthogonal to its observations (principle of orthogonality): $E\left[e_{\mathrm{opt}}(n)\, y(n-j)\right] = 0$ for all $j \in \mathbb{Z}$.
Furthermore,
$E\left[e_{\mathrm{opt}}(n) y(n-j)\right] = E\left[\left(x(n) - \sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i) y(n-i)\right) y(n-j)\right] = E\left[x(n) y(n-j)\right] - \sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, E\left[y(n-i) y(n-j)\right] = 0$

18 Optimal Filtering / Optimal solution: Wiener-Hopf equations
From the principle of orthogonality,
$E\left[e_{\mathrm{opt}}(n) y(n-j)\right] = E\left[x(n) y(n-j)\right] - \sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, E\left[y(n-i) y(n-j)\right] = 0$
Result (Wiener-Hopf equations):
$\sum_{i=-\infty}^{\infty} h_{\mathrm{opt}}(i)\, r_{yy}(i-j) = r_{xy}(j)$

19 Optimal Filtering / The Wiener filter
The Wiener-Hopf equations can be solved indirectly, in the complex spectral domain:
$h_{\mathrm{opt}}(n) * r_{yy}(n) = r_{xy}(n) \;\Longleftrightarrow\; H_{\mathrm{opt}}(z) P_{yy}(z) = P_{xy}(z)$

20 Optimal Filtering / The Wiener filter
The Wiener-Hopf equations can be solved indirectly, in the complex spectral domain:
$h_{\mathrm{opt}}(n) * r_{yy}(n) = r_{xy}(n) \;\Longleftrightarrow\; H_{\mathrm{opt}}(z) P_{yy}(z) = P_{xy}(z)$
Result (The Wiener filter):
$H_{\mathrm{opt}}(z) = \frac{P_{xy}(z)}{P_{yy}(z)}$

21 Optimal Filtering / The Wiener filter
Result (The Wiener filter):
$H_{\mathrm{opt}}(z) = \frac{P_{xy}(z)}{P_{yy}(z)}$
The optimal filter has an infinite impulse response (IIR) and is, in general, non-causal.

22 Optimal Filtering / Causal Wiener filter
We project the unconstrained solution $H_{\mathrm{opt}}(z)$ onto the set of causal and stable IIR filters by a two-step procedure:
First, factorise $P_{yy}(z)$ into a causal (right-sided) part $Q_{yy}(z)$ and an anti-causal (left-sided) part $Q_{yy}^{*}(1/z^{*})$, i.e., $P_{yy}(z) = \sigma_y^2\, Q_{yy}(z)\, Q_{yy}^{*}(1/z^{*})$.
Then select the causal (right-sided) part of $P_{xy}(z)/Q_{yy}^{*}(1/z^{*})$.
Result (Causal Wiener filter):
$H^{+}_{\mathrm{opt}}(z) = \frac{1}{\sigma_y^2 Q_{yy}(z)} \left[\frac{P_{xy}(z)}{Q_{yy}^{*}(1/z^{*})}\right]_{+}$

23 Optimal Filtering / FIR Wiener-Hopf equations
Figure 4: A finite impulse response (FIR) estimator: a tapped delay line on the received sequence {y(n)} with taps $h_0, \ldots, h_{N-1}$ producing the output $\{\hat{x}(n)\}$.
Wiener-Hopf equations for the FIR optimal filter of N taps:
Result (FIR Wiener-Hopf equations):
$\sum_{i=0}^{N-1} h_{\mathrm{opt}}(i)\, r_{yy}(i-j) = r_{xy}(j), \qquad j = 0, 1, \ldots, N-1.$

24 Optimal Filtering / FIR Wiener filter
FIR Wiener-Hopf equations in vector-matrix form:
$\underbrace{\begin{bmatrix} r_{yy}(0) & r_{yy}(1) & \cdots & r_{yy}(N-1) \\ r_{yy}(1) & r_{yy}(0) & \cdots & r_{yy}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{yy}(N-1) & r_{yy}(N-2) & \cdots & r_{yy}(0) \end{bmatrix}}_{R_{yy}} \underbrace{\begin{bmatrix} h(0) \\ h(1) \\ \vdots \\ h(N-1) \end{bmatrix}}_{h_{\mathrm{opt}}} = \underbrace{\begin{bmatrix} r_{xy}(0) \\ r_{xy}(1) \\ \vdots \\ r_{xy}(N-1) \end{bmatrix}}_{r_{xy}}$
$R_{yy}$ is the autocorrelation matrix of y(n), which is Toeplitz.

25 Optimal Filtering / FIR Wiener filter
FIR Wiener-Hopf equations in vector-matrix form: $R_{yy} h_{\mathrm{opt}} = r_{xy}$, where $R_{yy}$ is the Toeplitz autocorrelation matrix of y(n).
Result (FIR Wiener filter):
$h_{\mathrm{opt}} = R_{yy}^{-1} r_{xy}$
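
As an illustration (a hypothetical helper, not part of the slides), the Toeplitz structure of $R_{yy}$ means the FIR Wiener-Hopf system can be solved with a Levinson-type solver rather than a general matrix inverse:

```python
# Sketch: solve the FIR Wiener-Hopf equations h_opt = R_yy^{-1} r_xy
# from the first N autocorrelation lags of y(n) and the first N
# cross-correlation lags r_xy(j). Names are illustrative.
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(r_yy, r_xy):
    # R_yy is symmetric Toeplitz, so its first column
    # [r_yy(0), ..., r_yy(N-1)] defines the whole matrix;
    # solve_toeplitz exploits this structure.
    return solve_toeplitz(np.asarray(r_yy), np.asarray(r_xy))
```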

26 Optimal Filtering / MSE surface
The MSE is a quadratic function of h:
$\xi(h) = h^T R_{yy} h - 2 h^T r_{xy} + E\left[(x(n))^2\right]$
$\nabla \xi(h) = 2 R_{yy} h - 2 r_{xy}$
Figure 5: For a 2-tap Wiener filtering example: (a) the MSE surface; (b) MSE contours with gradient vectors.

27 Optimal Filtering / Example: Wiener equaliser
Figure 6: (a) The Wiener equaliser: the data {x(n)} pass through the channel C(z), noise η(n) is added to give {y(n)}, and the equaliser H(z) produces $\{\hat{x}(n-d)\}$. (b) Alternative formulation: the error $\{e(n-d)\}$ is formed against the delayed data $\{x(n-d)\}$ obtained through $z^{-d}$.

28 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
For notational convenience define:
$x'(n) = x(n-d), \qquad e'(n) = x(n-d) - \hat{x}(n-d) \qquad (1)$
Label the output of the channel filter as $y'(n)$, where $y(n) = y'(n) + \eta(n)$.

29 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Wiener filter:
$h_{\mathrm{opt}} = R_{yy}^{-1} r_{x'y} \qquad (2)$

30 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Wiener filter: $h_{\mathrm{opt}} = R_{yy}^{-1} r_{x'y}$ (2).
The (i, j)th entry of $R_{yy}$ is
$r_{yy}(j-i) = E\left[y(j) y(i)\right] = E\left[(y'(j) + \eta(j))(y'(i) + \eta(i))\right] = r_{y'y'}(j-i) + \sigma_\eta^2\, \delta(j-i)$
$\Longrightarrow P_{yy}(z) = P_{y'y'}(z) + \sigma_\eta^2$

31 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Remember $y'(n) = c(n) * x(n)$:
$r_{y'y'}(n) = c(n) * c(-n) * r_{xx}(n) \;\Longrightarrow\; P_{y'y'}(z) = C(z)\, C(z^{-1})\, P_{xx}(z)$
Consider a white data sequence x(n), i.e., $r_{xx}(n) = \sigma_x^2 \delta(n)$, $P_{xx}(z) = \sigma_x^2$. Then the complex spectrum of the autocorrelation sequence of interest is
$P_{yy}(z) = P_{y'y'}(z) + \sigma_\eta^2 = C(z) C(z^{-1}) \sigma_x^2 + \sigma_\eta^2$

32 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Wiener filter:
$h_{\mathrm{opt}} = R_{yy}^{-1} r_{x'y} \qquad (3)$

33 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Wiener filter: $h_{\mathrm{opt}} = R_{yy}^{-1} r_{x'y}$ (3).
The jth entry of $r_{x'y}$ is
$r_{x'y}(j) = E\left[x'(n)\, y(n-j)\right] = E\left[x(n-d)\left(y'(n-j) + \eta(n-j)\right)\right] = r_{xy'}(j-d)$
$\Longrightarrow P_{x'y}(z) = P_{xy'}(z)\, z^{-d} \qquad (4)$

34 Optimal Filtering / Wiener equaliser
Figure 7: Channel equalisation scenario.
Remember $y'(n) = c(n) * x(n)$:
$r_{xy'}(n) = c(-n) * r_{xx}(n) \;\Longrightarrow\; P_{xy'}(z) = C(z^{-1})\, P_{xx}(z)$
Then the complex spectrum of the cross-correlation sequence of interest is
$P_{x'y}(z) = P_{xy'}(z)\, z^{-d} = \sigma_x^2\, C(z^{-1})\, z^{-d}$

35 Optimal Filtering / Wiener equaliser
Suppose that $c = [c(0), c(1)]^T = [0.5, 1]^T$, i.e., $C(z) = 0.5 + z^{-1}$. Then
$P_{yy}(z) = C(z) C(z^{-1}) \sigma_x^2 + \sigma_\eta^2 = (0.5 + z^{-1})(0.5 + z)\,\sigma_x^2 + \sigma_\eta^2$
$P_{x'y}(z) = \sigma_x^2\, C(z^{-1})\, z^{-d} = (0.5 z^{-d} + z^{-d+1})\,\sigma_x^2$
Suppose that d = 1, $\sigma_x^2 = 1$ and $\sigma_\eta^2 = 0.1$:
$r_{yy}(0) = 1.35$, $r_{yy}(1) = 0.5$, and $r_{yy}(2) = 0$
$r_{x'y}(0) = 1$, $r_{x'y}(1) = 0.5$, and $r_{x'y}(2) = 0$
The (3-tap) Wiener filter is obtained as $h_{\mathrm{opt}} = R_{yy}^{-1} r_{x'y} \approx [0.692, 0.132, -0.049]^T$.
The MSE is found as $\xi(h_{\mathrm{opt}}) = \sigma_x^2 - h_{\mathrm{opt}}^T r_{x'y} \approx 0.242$.
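
The numbers above can be checked with a few lines of code (a sketch under the stated example values, not part of the original slides):

```python
# Verify the 3-tap Wiener equaliser example:
# c = [0.5, 1], d = 1, sigma_x^2 = 1, sigma_eta^2 = 0.1.
import numpy as np
from scipy.linalg import toeplitz

r_yy = np.array([1.35, 0.5, 0.0])    # autocorrelation lags r_yy(0..2)
r_xy = np.array([1.0, 0.5, 0.0])     # cross-correlation lags r_x'y(0..2)

R_yy = toeplitz(r_yy)                # Toeplitz autocorrelation matrix
h_opt = np.linalg.solve(R_yy, r_xy)  # approx [0.692, 0.132, -0.049]
mse = 1.0 - h_opt @ r_xy             # sigma_x^2 - h^T r_x'y, approx 0.242
```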

36 Adaptive filtering / Introduction
Figure 8: FIR adaptive filtering configuration: a tapped delay line on the observed sequence produces the estimated signal; the error against the training sequence drives an adaptation algorithm that updates the impulse response.
For notational convenience, define
$y(n) \triangleq [y(n), y(n-1), \ldots, y(n-N+1)]^T, \qquad h(n) \triangleq [h_0, h_1, \ldots, h_{N-1}]^T$
The output of the adaptive filter is $\hat{x}(n) = h^T(n)\, y(n)$.
Optimum solution: $h_{\mathrm{opt}} = R_{yy}^{-1} r_{xy}$.

37 Adaptive filtering / Recursive Least Squares Adaptation
Minimise the cost function
$\xi(n) = \sum_{k=0}^{n} \left(x(k) - \hat{x}(k)\right)^2 \qquad (5)$
Solution: $R_{yy}(n)\, h(n) = r_{xy}(n)$, with the LS autocorrelation matrix
$R_{yy}(n) = \sum_{k=0}^{n} y(k)\, y^T(k)$
and the LS cross-correlation vector
$r_{xy}(n) = \sum_{k=0}^{n} y(k)\, x(k)$

38 Adaptive filtering / Recursive Least Squares Adaptation
Recursive relationships:
$R_{yy}(n) = R_{yy}(n-1) + y(n) y^T(n)$
$r_{xy}(n) = r_{xy}(n-1) + y(n) x(n)$
Substitute for $r_{xy}$: $R_{yy}(n) h(n) = R_{yy}(n-1) h(n-1) + y(n) x(n)$
Replace $R_{yy}(n-1)$: $R_{yy}(n) h(n) = \left(R_{yy}(n) - y(n) y^T(n)\right) h(n-1) + y(n) x(n)$
Multiply both sides by $R_{yy}^{-1}(n)$:
$h(n) = h(n-1) + R_{yy}^{-1}(n)\, y(n)\, e(n), \qquad e(n) = x(n) - h^T(n-1)\, y(n)$

39 Adaptive filtering / Recursive Least Squares Adaptation
Recursive relationships: apply the Sherman-Morrison identity to $R_{yy}(n) = R_{yy}(n-1) + y(n) y^T(n)$:
$R_{yy}^{-1}(n) = R_{yy}^{-1}(n-1) - \frac{R_{yy}^{-1}(n-1)\, y(n)\, y^T(n)\, R_{yy}^{-1}(n-1)}{1 + y^T(n)\, R_{yy}^{-1}(n-1)\, y(n)}$

40 Adaptive filtering / Recursive Least Squares Adaptation: Summary
Recursive least squares (RLS) algorithm:
1: $R_{yy}^{-1}(0) = \frac{1}{\delta} I_N$ with small positive δ ▷ Initialisation 1
2: $h(0) = 0$ ▷ Initialisation 2
3: for n = 1, 2, 3, ... do ▷ Iterations
4: $\hat{x}(n) = h^T(n-1)\, y(n)$ ▷ Estimate x(n)
5: $e(n) = x(n) - \hat{x}(n)$ ▷ Find the error
6: $R_{yy}^{-1}(n) = \frac{1}{\alpha}\left(R_{yy}^{-1}(n-1) - \frac{R_{yy}^{-1}(n-1)\, y(n)\, y^T(n)\, R_{yy}^{-1}(n-1)}{\alpha + y^T(n)\, R_{yy}^{-1}(n-1)\, y(n)}\right)$ ▷ Update the inverse of the autocorrelation matrix
7: $h(n) = h(n-1) + R_{yy}^{-1}(n)\, y(n)\, e(n)$ ▷ Update the filter coefficients
8: end for
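
A direct transcription of this listing into code might look as follows (a minimal sketch; the function name, signal layout and default constants are my own choices):

```python
# RLS sketch: y is the observed sequence, x the training sequence,
# N the number of taps, alpha the forgetting factor, delta the
# initialisation constant (P(0) = R_yy^{-1}(0) = I_N / delta).
import numpy as np

def rls(y, x, N, alpha=0.99, delta=1e-2):
    P = np.eye(N) / delta                    # inverse autocorrelation estimate
    h = np.zeros(N)                          # h(0) = 0
    for n in range(N - 1, len(y)):
        yn = y[n - N + 1:n + 1][::-1]        # y(n) = [y(n), ..., y(n-N+1)]^T
        e = x[n] - h @ yn                    # a priori error e(n)
        Py = P @ yn
        P = (P - np.outer(Py, Py) / (alpha + yn @ Py)) / alpha  # step 6
        h = h + (P @ yn) * e                 # step 7: coefficient update
    return h
```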

41 Adaptive filtering / Least Mean Square Algorithm: Stochastic gradient algorithms
Figure 9: Method of steepest descent, illustrated on the MSE contours of a 2-tap example (Wiener optimum (MMSE), contours of constant MSE).

42 Adaptive filtering / Least Mean Square Algorithm: Steepest descent
(MSE contours, 2-tap example: gradient vector $\nabla_h(n)$ at the initial guess h(n).)
The gradient vector:
$\nabla_h(n) = \left[\frac{\partial \xi}{\partial h(0)}, \frac{\partial \xi}{\partial h(1)}, \ldots, \frac{\partial \xi}{\partial h(N-1)}\right]^T \Bigg|_{h = h(n)} = 2 R_{yy} h(n) - 2 r_{xy}$

43 Adaptive filtering / Least Mean Square Algorithm: Steepest descent
(MSE contours, 2-tap example.)
Update the initial guess in the direction of steepest descent:
$h(n+1) = h(n) - \mu\, \nabla_h(n)$
with step-size μ.
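
When $R_{yy}$ and $r_{xy}$ are known, this recursion can be run directly; a minimal sketch (illustrative names, not from the slides):

```python
# Steepest descent on the quadratic MSE surface: iterate
# h(n+1) = h(n) - mu * (2 R_yy h(n) - 2 r_xy); converges to
# h_opt = R_yy^{-1} r_xy for 0 < mu < 1/lambda_max.
import numpy as np

def steepest_descent(R_yy, r_xy, mu, n_iter=200):
    h = np.zeros(len(r_xy))           # initial guess
    for _ in range(n_iter):
        grad = 2 * (R_yy @ h - r_xy)  # exact gradient of xi(h)
        h = h - mu * grad             # move against the gradient
    return h
```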

44 Adaptive filtering / Least Mean Square Algorithm: Steepest descent
(MSE contours, 2-tap example.) The gradient at the new guess.

45 Adaptive filtering / Least Mean Square Algorithm: Convergence of steepest descent
(MSE contours, 2-tap example: Wiener optimum (MMSE), contour of constant MSE, initial guess h(n), gradient vector $\nabla_h(n)$.)
$h(n+1) = h(n) - \mu\, \nabla_h(n), \qquad 0 < \mu < \frac{1}{\lambda_{\max}} \qquad (6)$
$\nabla_h(n) = \left[\frac{\partial \xi}{\partial h(0)}, \frac{\partial \xi}{\partial h(1)}, \ldots, \frac{\partial \xi}{\partial h(N-1)}\right]^T \Bigg|_{h = h(n)} = 2 R_{yy} h(n) - 2 r_{xy}$

46 Adaptive filtering / Least Mean Square Algorithm: Stochastic gradient algorithms
A time recursion: $h(n+1) = h(n) - \mu\, \hat{\nabla}_h(n)$
The exact gradient: $\nabla_h(n) = -2 E\left[y(n)\left(x(n) - y^T(n)\, h(n)\right)\right] = -2 E\left[y(n)\, e(n)\right]$
A simple estimate of the gradient: $\hat{\nabla}_h(n) = -2\, y(n+1)\, e(n+1)$
The error: $e(n+1) = x(n+1) - h^T(n)\, y(n+1) \qquad (7)$

47 Adaptive filtering / Least Mean Square Algorithm
The least-mean-squares (LMS) algorithm:
1: $h(0) = 0$ ▷ Initialisation
2: for n = 1, 2, 3, ... do ▷ Iterations
3: $\hat{x}(n) = h^T(n-1)\, y(n)$ ▷ Estimate x(n)
4: $e(n) = x(n) - \hat{x}(n)$ ▷ Find the error
5: $h(n) = h(n-1) + 2\mu\, y(n)\, e(n)$ ▷ Update the filter coefficients
6: end for
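
The listing maps directly onto code; a minimal sketch with illustrative names:

```python
# LMS sketch: replace the exact gradient with the instantaneous
# estimate -2 y(n) e(n); y is the observed sequence, x the training
# sequence, N the number of taps, mu the step-size.
import numpy as np

def lms(y, x, N, mu):
    h = np.zeros(N)                          # h(0) = 0
    for n in range(N - 1, len(y)):
        yn = y[n - N + 1:n + 1][::-1]        # y(n) = [y(n), ..., y(n-N+1)]^T
        e = x[n] - h @ yn                    # error e(n)
        h = h + 2 * mu * yn * e              # coefficient update
    return h
```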

48 Adaptive filtering / Least Mean Square Algorithm: LMS block diagram
Figure 10: Least mean-square adaptive filtering: a tapped delay line with taps $h_0, h_1, \ldots, h_{N-1}$ produces $\hat{x}(n)$; the error $e(n) = x(n) - \hat{x}(n)$, scaled by 2μ, drives the tap updates.

49 Adaptive filtering / Least Mean Square Algorithm: Convergence of the LMS
(MSE contours, 2-tap example: Wiener optimum (MMSE), contour of constant MSE, initial guess h(n), gradient vector $\nabla_h(n)$.)
Convergence is governed by the eigenvalues of $R_{yy}$ (in this example, $\lambda_0$ and $\lambda_1$).
The largest time constant: $\tau_{\max} > \frac{\lambda_{\max}}{2 \lambda_{\min}}$
The eigenvalue ratio (EVR) is $\lambda_{\max}/\lambda_{\min}$.
Practical range for the step-size: $0 < \mu < \frac{1}{3 N \sigma_y^2}$

50 Adaptive filtering / Least Mean Square Algorithm: Eigenvalue ratio (EVR)
Figure 11: Eigenvectors, eigenvalues and convergence: (a) the relationship between the eigenvectors ($v_{\max}$, $v_{\min}$), the eigenvalues ($1/\lambda_{\max}$, $1/\lambda_{\min}$) and the contours of constant MSE; (b) steepest descent for an EVR of 2; (c) an EVR of 4.

51 Adaptive filtering / Least Mean Square Algorithm: Comparison of RLS and LMS
Figure 12: Adaptive system identification configuration: white noise is passed through a noise shaping filter to give y(n), which feeds both the unknown system $h_{\mathrm{opt}}$ (output x(n)) and the adaptive filter h(n) (output $\hat{x}(n)$).
Error vector norm:
$\rho(n) = E\left[(h(n) - h_{\mathrm{opt}})^T (h(n) - h_{\mathrm{opt}})\right]$

52 Adaptive filtering / Least Mean Square Algorithm: Comparison: Performance
Figure 13: Convergence plots (error vector norm in dB versus iterations, RLS and LMS) for N = 16-tap adaptive filtering in the system identification configuration: EVR = 1 (i.e., the impulse response of the noise shaping filter is δ(n)).

53 Adaptive filtering / Least Mean Square Algorithm: Comparison: Performance
Figure 14: Convergence plots (error vector norm in dB versus iterations, RLS and LMS) for N = 16-tap adaptive filtering in the system identification configuration: EVR = 11.

54 Adaptive filtering / Least Mean Square Algorithm: Comparison: Performance
Figure 15: Convergence plots (error vector norm in dB versus iterations, RLS and LMS) for N = 16-tap adaptive filtering in the system identification configuration: the EVR (and, correspondingly, the spectral coloration of the input signal) progressively increases to 68.

55 Adaptive filtering / Least Mean Square Algorithm: Comparison: Complexity
Table: Complexity comparison of N-point FIR filter algorithms.
Algorithm class | Implementation  | Multiplications | Adds/subtractions | Divisions
RLS             | fast Kalman     | 10N+1           | 9N+1              | 2
SG              | LMS             | 2N              | 2N                | -
SG              | BLMS (via FFT)  | 10log(N)+8      | 15log(N)+30       | -

56 Adaptive filtering / Applications
Adaptive filtering algorithms can be used in all application areas of optimal filtering. Some examples:
Adaptive line enhancement
Adaptive tone suppression
Echo cancellation
Channel equalisation

57 Adaptive filtering / Applications
Figure 16: Adaptive filtering configurations: (a) direct system modelling; (b) inverse system modelling; (c) linear prediction.

58 Adaptive filtering / Applications: Adaptive line enhancement
Figure 17: Adaptive line enhancement: (a) signal spectrum: a narrow-band signal at $\omega_0$ in broad-band noise; (b) system configuration: an adaptive predictor (delay line with taps $a_0, a_1, a_2$) whose prediction $\hat{x}(n)$ recovers the signal and whose prediction error e(n) contains the noise.

59 Adaptive filtering / Applications: Adaptive predictor
The predictor forms $\hat{x}(n)$ from the delayed samples x(n-1), x(n-2), x(n-3) using the taps $a_0, a_1, a_2$; the prediction error is $e(n) = x(n) - \hat{x}(n)$.
Prediction filter: $a_0 + a_1 z^{-1} + a_2 z^{-2}$ (acting on x(n-1))
Prediction error filter: $1 - a_0 z^{-1} - a_1 z^{-2} - a_2 z^{-3}$
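
A line enhancer of this form is just an LMS filter trained to predict x(n) from its own past; a minimal sketch (assumed layout, three taps as in the figure):

```python
# Adaptive predictor sketch: the prediction tracks the narrow-band
# (predictable) component, the prediction error keeps the broad-band one.
import numpy as np

def adaptive_predictor(x, N=3, mu=0.01):
    a = np.zeros(N)
    x_hat = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        xn = x[n - N:n][::-1]          # [x(n-1), ..., x(n-N)]
        x_hat[n] = a @ xn              # prediction (signal estimate)
        e[n] = x[n] - x_hat[n]         # prediction error (noise estimate)
        a = a + 2 * mu * xn * e[n]     # LMS update of a_0, ..., a_{N-1}
    return x_hat, e
```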

60 Adaptive filtering / Applications: Adaptive tone suppression
Figure 18: Adaptive tone suppression: (a) signal spectrum: a narrow-band interference at $\omega_0$ superimposed on the broad-band signal spectrum; (b) system configuration: the predictor output $\hat{x}(n)$ tracks the interference, and the prediction error e(n) retains the signal.

61 Adaptive filtering / Applications: Adaptive noise whitening
Figure 19: Adaptive noise whitening: (a) input spectrum: coloured noise; (b) system configuration: the prediction error e(n) is the whitened noise.

62 Adaptive filtering / Applications: Echo cancellation
A typical telephone connection: the transmitter and receiver of each handset (Telephone A and Telephone B) are coupled onto the two-wire line through hybrid transformers, which route the signal paths.

63 Adaptive filtering / Applications: Echo cancellation (contd)
Echo paths in a telephone system: near-end and far-end echo paths arise at the hybrid transformers of Telephone A and Telephone B, respectively.

64 Adaptive filtering / Applications: Echo cancellation (contd)
A filter fed by the transmitted signal y(n) synthesises an echo estimate $\hat{x}(n)$, which is subtracted from the received signal x(n) (received signal + echo, via the hybrid transformer and two-wire line) to give e(n).
A fixed filter? The echo path differs from connection to connection, which motivates the adaptive solution on the next slide.

65 Adaptive filtering / Applications: Echo cancellation (contd)
Figure 20: Application of adaptive echo cancellation in a telephone handset: an adaptive filter driven by the transmitted signal y(n) produces $\hat{x}(n)$, which is subtracted from the received signal x(n) to give e(n).

66 Adaptive filtering / Applications: Channel equalisation
Figure 21: Adaptive equaliser system configuration: x(n) passes through the channel filter, noise is added to give y(n), and the adaptive filter output $\hat{x}(n-d)$ is compared with the delayed input x(n-d) to form the error.
Simple channel: $y(n) = \pm h_0 + \text{noise}$
Decision circuit: if $y(n) \geq 0$ then $\hat{x}(n) = +1$, else $\hat{x}(n) = -1$.
Channel with intersymbol interference (ISI):
$y(n) = \sum_{i=0}^{2} h_i\, x(n-i) + \text{noise}$

67 Adaptive filtering / Applications: Channel equalisation
Figure 22: Decision-directed equaliser: the transversal filter output $\hat{x}(n)$ feeds a decision circuit; the error e(n) is formed either against the decision-circuit output (decision-directed mode) or against the delayed reference x(n) during training.

68 Optimal signal detection / Application examples and optimal hypothesis testing
Optimal signal detection - Introduction

69 Optimal signal detection / Application examples and optimal hypothesis testing
Optimal signal detection - Introduction
Example: Detection of gravitational waves.
Figure 23: Gravitational waves from a binary black hole merger (left; Uni. of Birmingham, Gravitational Wave Group), LIGO block diagram (middle), expected signal (right). Abbott et al., "Observation of gravitational waves from a binary black hole merger," Phys. Rev. Lett., Feb. 2016.

70 Optimal signal detection / Application examples and optimal hypothesis testing
Signal detection as 2-ary (binary) hypothesis testing:
$H_0: y(n) = \eta(n)$
$H_1: y(n) = x(n) + \eta(n) \qquad (8)$
In a sense, decide which of the two possible ensembles y(n) is generated from.
Finite-length signals, i.e., $n = 0, 1, 2, \ldots, N-1$. Vector notation:
$H_0: y = \eta$
$H_1: y = x + \eta$

71 Optimal signal detection / Application examples and optimal hypothesis testing
Example (radar): in active sensing, x(t) is the probing waveform, subject to design.
$y(n) = a_0\, x(n - n_0) + a_1\, x(n - n_1) + \cdots + \eta(n) \qquad (9)$
Figure 24: Probing waveform x(n) and returns from the surveillance region constituting y(n).

72 Optimal signal detection / Application examples and optimal hypothesis testing
Example (radar): in active sensing, x(t) is the probing waveform, subject to design.
$y(n) = a_0\, x(n - n_0) + a_1\, x(n - n_1) + \cdots + \eta(n) \qquad (9)$
Figure 24: Probing waveform x(n) and returns from the surveillance region constituting y(n).
A similar problem also arises in digital communications.

73 Optimal signal detection / Application examples and optimal hypothesis testing: Bayesian hypothesis testing
Consider a random variable H with $H = H(\zeta)$ and $H \in \{H_0, H_1\}$.
The measurement $\bar{y} = y(\zeta)$ is a realisation of y.
The measurement model specifies the likelihood $p_{y|H}(\bar{y}|H)$.
Find the probabilities of $H = H_1$ and $H = H_0$ based on the measurement vector $\bar{y}$, and decide on the hypothesis with the maximum a posteriori probability.

74 Optimal signal detection / Application examples and optimal hypothesis testing: Bayesian hypothesis testing
Decide on the hypothesis with the maximum a posteriori probability, i.e., find the maximum a posteriori (MAP) estimate of H:
$\hat{H} = \arg\max_{H' \in \{H_0, H_1\}} P_{H|y}(H'|\bar{y}) \qquad (10)$
where the a posteriori probability of H is given by
$P_{H|y}(H_i|\bar{y}) = \frac{p_{y|H}(\bar{y}|H_i)\, P_H(H_i)}{p_{y|H}(\bar{y}|H_0)\, P_H(H_0) + p_{y|H}(\bar{y}|H_1)\, P_H(H_1)} \qquad (11)$
for i = 0, 1.

75 Optimal signal detection / Application examples and optimal hypothesis testing: Bayesian hypothesis testing: The likelihood ratio test
MAP decision rule: $\hat{H} = \arg\max_{H \in \{H_0, H_1\}} P(H|\bar{y})$
MAP decision as a likelihood ratio test:
$p(H_1|\bar{y}) \underset{\hat{H}=H_0}{\overset{\hat{H}=H_1}{\gtrless}} p(H_0|\bar{y})$
$p(\bar{y}|H_1)\, P(H_1) \underset{H_0}{\overset{H_1}{\gtrless}} p(\bar{y}|H_0)\, P(H_0)$
$\frac{p(\bar{y}|H_1)}{p(\bar{y}|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P(H_0)}{P(H_1)}$

76 Optimal signal detection / Additive white and coloured noise: Bayesian hypothesis testing - AWGN example
Example: detection of deterministic signals in additive white Gaussian noise:
$H_0: y = \eta$
$H_1: y = x + \eta$
where x is a known vector and $\eta \sim \mathcal{N}(\,\cdot\,; 0, \sigma^2 I)$. The likelihoods are specified by the noise distribution:
$\frac{p(\bar{y}|H_1)}{p(\bar{y}|H_0)} = \frac{\mathcal{N}(\bar{y}; x, \sigma^2 I)}{\mathcal{N}(\bar{y}; 0, \sigma^2 I)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P(H_0)}{P(H_1)}$

77 Optimal signal detection / Additive white and coloured noise: Detection of deterministic signals - AWGN (contd)
The numerator and denominator of the likelihood ratio are
$p(\bar{y}|H_1) = \mathcal{N}(\bar{y} - x; 0, \sigma^2 I) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left(\bar{y}(n) - x(n)\right)^2\right\} \qquad (12)$
Similarly,
$p(\bar{y}|H_0) = \mathcal{N}(\bar{y}; 0, \sigma^2 I) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left(\bar{y}(n)\right)^2\right\} \qquad (13)$
Therefore
$\frac{p(\bar{y}|H_1)}{p(\bar{y}|H_0)} = \exp\left\{\frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left(\bar{y}(n) x(n) - \tfrac{1}{2} x(n)^2\right)\right\} \qquad (14)$

78 Optimal signal detection / Additive white and coloured noise: Detection of deterministic signals - AWGN (contd)
Take the logarithm of both sides of the likelihood ratio test:
$\log \exp\left\{\frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left(\bar{y}(n) x(n) - \tfrac{1}{2} x(n)^2\right)\right\} \underset{H_0}{\overset{H_1}{\gtrless}} \log \frac{P(H_0)}{P(H_1)}$
Now we have a linear statistical test:
$\sum_{n=0}^{N-1} \bar{y}(n)\, x(n) \underset{H_0}{\overset{H_1}{\gtrless}} \underbrace{\sigma^2 \log \frac{P(H_0)}{P(H_1)} + \frac{1}{2} \sum_{n=0}^{N-1} x(n)^2}_{\tau:\ \text{decision threshold}}$
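
In code, the test reduces to one inner product and a threshold comparison; a minimal sketch with illustrative names:

```python
# Matched filter (correlator) detector for a known signal x in AWGN:
# decide H_1 when sum_n y(n) x(n) exceeds the threshold tau.
import numpy as np

def matched_filter_detect(y_bar, x, sigma2, p_h0=0.5, p_h1=0.5):
    statistic = np.dot(y_bar, x)                      # sum of y(n) x(n)
    tau = sigma2 * np.log(p_h0 / p_h1) + 0.5 * np.dot(x, x)
    return statistic > tau                            # True -> decide H_1
```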

79 Optimal signal detection / Additive white and coloured noise: Detection of deterministic signals - AWGN (contd)
Matched filtering for optimal detection:

80 Optimal signal detection / Additive white and coloured noise: Detection of deterministic signals under coloured noise
For the case where the noise sequence ν(n) has an autocorrelation function $r_\nu[l]$ different from $r_\nu[l] = \sigma_\eta^2 \delta[l]$:
$H_0: y = \nu$
$H_1: y = x + \nu$
$\nu \sim \mathcal{N}\left(\,\cdot\,; 0, C_\nu\right), \qquad C_\nu = \begin{bmatrix} r_\nu(0) & r_\nu(-1) & \cdots & r_\nu(-N+1) \\ r_\nu(1) & r_\nu(0) & \cdots & r_\nu(-N+2) \\ \vdots & \vdots & \ddots & \vdots \\ r_\nu(N-1) & r_\nu(N-2) & \cdots & r_\nu(0) \end{bmatrix}$
A whitening filter is applied before matched filtering.
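
One standard way to realise the whitening step (a sketch of the usual approach, not a transcription of the slides) is to factorise $C_\nu = L L^T$ and apply $L^{-1}$ to both the measurement and the signal template before the white-noise test:

```python
# Whitened matched filter sketch: after whitening, the test statistic
# is x^T C_nu^{-1} y_bar, compared against a threshold as before.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def whitened_mf_statistic(y_bar, x, C_nu):
    L = cholesky(C_nu, lower=True)                  # C_nu = L L^T
    y_w = solve_triangular(L, y_bar, lower=True)    # whitened measurement
    x_w = solve_triangular(L, x, lower=True)        # whitened template
    return np.dot(x_w, y_w)                         # test statistic
```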

81 Optimal signal detection / Additive white and coloured noise: Detection of deterministic signals - coloured noise
Figure: (upper left) noise (amplitude) spectral density; (upper right) abstract, Abbott et al., Phys. Rev. Lett., Feb. 2016; (lower left) matched filter outputs: best MF (blue) and the expected MF (purple); (lower right) measurement, reconstructed and noise signals around the detection (horizontal axis: GPS time relative to the event, in s).

82 Summary
Optimal filtering: problem statement
General solution via the Wiener-Hopf equations
FIR Wiener filter
Adaptive filtering as an online optimal filtering strategy
Recursive least-squares (RLS) algorithm
Least mean-square (LMS) algorithm
Application examples
Optimal signal detection via matched filtering

83 Summary / Further reading
C. Therrien, Discrete Random Signals and Statistical Signal Processing, Prentice-Hall.
S. Haykin, Adaptive Filter Theory, 5th ed., Prentice-Hall.
B. Mulgrew, P. Grant, J. Thompson, Digital Signal Processing: Concepts and Applications, 2nd ed., Palgrave Macmillan.
D. Manolakis, V. Ingle, S. Kogon, Statistical and Adaptive Signal Processing, McGraw-Hill.


More information

Massachusetts Institute of Technology

Massachusetts Institute of Technology Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.011: Introduction to Communication, Control and Signal Processing QUIZ, April 1, 010 QUESTION BOOKLET Your

More information

Improved Detected Data Processing for Decision-Directed Tracking of MIMO Channels

Improved Detected Data Processing for Decision-Directed Tracking of MIMO Channels Improved Detected Data Processing for Decision-Directed Tracking of MIMO Channels Emna Eitel and Joachim Speidel Institute of Telecommunications, University of Stuttgart, Germany Abstract This paper addresses

More information

ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals

ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals ELEG 5633 Detection and Estimation Signal Detection: Deterministic Signals Jingxian Wu Department of Electrical Engineering University of Arkansas Outline Matched Filter Generalized Matched Filter Signal

More information

Square Root Raised Cosine Filter

Square Root Raised Cosine Filter Wireless Information Transmission System Lab. Square Root Raised Cosine Filter Institute of Communications Engineering National Sun Yat-sen University Introduction We consider the problem of signal design

More information

Linear Models for Regression

Linear Models for Regression Linear Models for Regression Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

Efficient Use Of Sparse Adaptive Filters

Efficient Use Of Sparse Adaptive Filters Efficient Use Of Sparse Adaptive Filters Andy W.H. Khong and Patrick A. Naylor Department of Electrical and Electronic Engineering, Imperial College ondon Email: {andy.khong, p.naylor}@imperial.ac.uk Abstract

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

EEM 409. Random Signals. Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Problem 2:

EEM 409. Random Signals. Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Problem 2: EEM 409 Random Signals Problem Set-2: (Power Spectral Density, LTI Systems with Random Inputs) Problem 1: Consider a random process of the form = + Problem 2: X(t) = b cos(2π t + ), where b is a constant,

More information

Interactions of Information Theory and Estimation in Single- and Multi-user Communications

Interactions of Information Theory and Estimation in Single- and Multi-user Communications Interactions of Information Theory and Estimation in Single- and Multi-user Communications Dongning Guo Department of Electrical Engineering Princeton University March 8, 2004 p 1 Dongning Guo Communications

More information

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver

Probability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation

More information

Discrete-time signals and systems

Discrete-time signals and systems Discrete-time signals and systems 1 DISCRETE-TIME DYNAMICAL SYSTEMS x(t) G y(t) Linear system: Output y(n) is a linear function of the inputs sequence: y(n) = k= h(k)x(n k) h(k): impulse response of the

More information

Performance Analysis and Enhancements of Adaptive Algorithms and Their Applications

Performance Analysis and Enhancements of Adaptive Algorithms and Their Applications Performance Analysis and Enhancements of Adaptive Algorithms and Their Applications SHENGKUI ZHAO School of Computer Engineering A thesis submitted to the Nanyang Technological University in partial fulfillment

More information

Adaptive Linear Filtering Using Interior Point. Optimization Techniques. Lecturer: Tom Luo

Adaptive Linear Filtering Using Interior Point. Optimization Techniques. Lecturer: Tom Luo Adaptive Linear Filtering Using Interior Point Optimization Techniques Lecturer: Tom Luo Overview A. Interior Point Least Squares (IPLS) Filtering Introduction to IPLS Recursive update of IPLS Convergence/transient

More information

Ch4: Method of Steepest Descent

Ch4: Method of Steepest Descent Ch4: Method of Steepest Descent The method of steepest descent is recursive in the sense that starting from some initial (arbitrary) value for the tap-weight vector, it improves with the increased number

More information

ACTIVE noise control (ANC) ([1], [2]) is an established

ACTIVE noise control (ANC) ([1], [2]) is an established 286 IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 13, NO. 2, MARCH 2005 Convergence Analysis of a Complex LMS Algorithm With Tonal Reference Signals Mrityunjoy Chakraborty, Senior Member, IEEE,

More information

Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Bayesian Learning. Tobias Scheffer, Niels Landwehr

Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen. Bayesian Learning. Tobias Scheffer, Niels Landwehr Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Bayesian Learning Tobias Scheffer, Niels Landwehr Remember: Normal Distribution Distribution over x. Density function with parameters

More information

Algorithms and structures for long adaptive echo cancellers

Algorithms and structures for long adaptive echo cancellers Loughborough University Institutional Repository Algorithms and structures for long adaptive echo cancellers This item was submitted to Loughborough University's Institutional Repository by the/an author.

More information

Machine Learning. Lecture 4: Regularization and Bayesian Statistics. Feng Li. https://funglee.github.io

Machine Learning. Lecture 4: Regularization and Bayesian Statistics. Feng Li. https://funglee.github.io Machine Learning Lecture 4: Regularization and Bayesian Statistics Feng Li fli@sdu.edu.cn https://funglee.github.io School of Computer Science and Technology Shandong University Fall 207 Overfitting Problem

More information

Linear and Nonlinear Iterative Multiuser Detection

Linear and Nonlinear Iterative Multiuser Detection 1 Linear and Nonlinear Iterative Multiuser Detection Alex Grant and Lars Rasmussen University of South Australia October 2011 Outline 1 Introduction 2 System Model 3 Multiuser Detection 4 Interference

More information

3.2 Complex Sinusoids and Frequency Response of LTI Systems

3.2 Complex Sinusoids and Frequency Response of LTI Systems 3. Introduction. A signal can be represented as a weighted superposition of complex sinusoids. x(t) or x[n]. LTI system: LTI System Output = A weighted superposition of the system response to each complex

More information