2.6 The optimum filtering solution is defined by the Wiener-Hopf equation

2.6 The optimum filtering solution is defined by the Wiener-Hopf equation

$$\mathbf{R}\mathbf{w}_o = \mathbf{p} \tag{1}$$

for which the minimum mean-square error equals

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o \tag{2}$$

Combine Eqs. (1) and (2) into a single relation:

$$\begin{bmatrix} \sigma_d^2 & \mathbf{p}^H \\ \mathbf{p} & \mathbf{R} \end{bmatrix}\begin{bmatrix} 1 \\ -\mathbf{w}_o \end{bmatrix} = \begin{bmatrix} J_{\min} \\ \mathbf{0} \end{bmatrix}$$

Define

$$\mathbf{A} = \begin{bmatrix} \sigma_d^2 & \mathbf{p}^H \\ \mathbf{p} & \mathbf{R} \end{bmatrix} \tag{3}$$

Since $\sigma_d^2 = E[d(n)d^*(n)]$, $\mathbf{p} = E[\mathbf{u}(n)d^*(n)]$, and $\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^H(n)]$, we may rewrite Eq. (3) as

$$\mathbf{A} = \begin{bmatrix} E[d(n)d^*(n)] & E[d(n)\mathbf{u}^H(n)] \\ E[\mathbf{u}(n)d^*(n)] & E[\mathbf{u}(n)\mathbf{u}^H(n)] \end{bmatrix}$$
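The combined relation above is easy to confirm numerically. The sketch below is illustrative only (synthetic second-order statistics, arbitrary sizes and values), not part of the solutions manual:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic statistics: R Hermitian positive definite, p and sigma_d^2 arbitrary.
M = 4
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = X @ X.conj().T + M * np.eye(M)
p = rng.standard_normal(M) + 1j * rng.standard_normal(M)
sigma_d2 = 10.0

w_o = np.linalg.solve(R, p)            # Wiener-Hopf: R w_o = p
J_min = sigma_d2 - p.conj() @ w_o      # Eq. (2)

# Block matrix A of Eq. (3); verify A [1, -w_o]^T = [J_min, 0]^T.
A = np.block([[np.array([[sigma_d2]]), p.conj()[None, :]],
              [p[:, None], R]])
lhs = A @ np.concatenate(([1.0 + 0j], -w_o))
rhs = np.concatenate(([J_min], np.zeros(M)))
print(np.allclose(lhs, rhs))           # True
```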

$$\mathbf{A} = E\left\{\begin{bmatrix} d(n) \\ \mathbf{u}(n) \end{bmatrix}\begin{bmatrix} d^*(n) & \mathbf{u}^H(n) \end{bmatrix}\right\}$$

The minimum mean-square error equals

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_o \tag{4}$$

Eliminate $\sigma_d^2$ between the cost function $J(\mathbf{w}) = \sigma_d^2 - \mathbf{p}^H\mathbf{w} - \mathbf{w}^H\mathbf{p} + \mathbf{w}^H\mathbf{R}\mathbf{w}$ and Eq. (4):

$$J(\mathbf{w}) = J_{\min} + \mathbf{p}^H\mathbf{w}_o - \mathbf{p}^H\mathbf{w} - \mathbf{w}^H\mathbf{p} + \mathbf{w}^H\mathbf{R}\mathbf{w} \tag{5}$$

Eliminate $\mathbf{p}$ between Eq. (1) and Eq. (5):

$$J(\mathbf{w}) = J_{\min} + \mathbf{w}_o^H\mathbf{R}\mathbf{w}_o - \mathbf{w}_o^H\mathbf{R}\mathbf{w} - \mathbf{w}^H\mathbf{R}\mathbf{w}_o + \mathbf{w}^H\mathbf{R}\mathbf{w} \tag{6}$$

where we have used the Hermitian property $\mathbf{R}^H = \mathbf{R}$. We may rewrite Eq. (6) simply as

$$J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_o)^H\mathbf{R}(\mathbf{w} - \mathbf{w}_o)$$

which clearly shows that $J(\mathbf{w}_o) = J_{\min}$.

2.7 The minimum mean-square error equals

$$J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{R}^{-1}\mathbf{p}$$

Using the spectral theorem, we may express the correlation matrix $\mathbf{R}$ as

$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^H = \sum_{k=1}^{M}\lambda_k\mathbf{q}_k\mathbf{q}_k^H$$

Hence, the inverse of $\mathbf{R}$ equals

$$\mathbf{R}^{-1} = \sum_{k=1}^{M}\frac{1}{\lambda_k}\mathbf{q}_k\mathbf{q}_k^H \tag{2}$$
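As a quick sanity check on Eq. (2), the sketch below rebuilds $\mathbf{R}^{-1}$ from the eigendecomposition; the matrix is an arbitrary Hermitian positive-definite example, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 5
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = X @ X.conj().T + M * np.eye(M)     # Hermitian positive definite

# Spectral theorem: R = sum_k lambda_k q_k q_k^H, hence
# R^{-1} = sum_k (1 / lambda_k) q_k q_k^H, which is Eq. (2).
lam, Q = np.linalg.eigh(R)
R_inv = sum(np.outer(Q[:, k], Q[:, k].conj()) / lam[k] for k in range(M))
print(np.allclose(R_inv, np.linalg.inv(R)))  # True
```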

… $J = r(0)(w - w_o)^2$

4.3 (a) There is a single mode with eigenvalue $\lambda_1 = r(0)$ and eigenvector $q_1 = 1$. Hence,

$$J(n) = J_{\min} + \lambda_1 v_1^2(n)$$

where $v_1(n) = q_1(w_o - w(n)) = w_o - w(n)$.

(b) $J(n) - J_{\min} = \lambda_1 v_1^2(n) = r(0)(w_o - w(n))^2$

4.4 The estimation error $e(n)$ equals

$$e(n) = d(n) - \mathbf{w}^H(n)\mathbf{u}(n)$$

where $d(n)$ is the desired response, $\mathbf{w}(n)$ is the tap-weight vector, and $\mathbf{u}(n)$ is the tap-input vector. Hence, the gradient of the instantaneous squared error equals

$$\hat{\nabla}J(n) = \frac{\partial[e(n)e^*(n)]}{\partial\mathbf{w}} = e(n)\frac{\partial e^*(n)}{\partial\mathbf{w}} + e^*(n)\frac{\partial e(n)}{\partial\mathbf{w}} = -2e^*(n)\mathbf{u}(n) = -2\mathbf{u}(n)d^*(n) + 2\mathbf{u}(n)\mathbf{u}^H(n)\mathbf{w}(n)$$

4.5 Consider the approximation to the inverse of the correlation matrix:

$$\mathbf{R}^{-1}(n+1) = \mu\sum_{k=0}^{n}(\mathbf{I} - \mu\mathbf{R})^k$$

where $\mu$ is a positive constant bounded in value as

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

where $\lambda_{\max}$ is the largest eigenvalue of $\mathbf{R}$. Note that according to this approximation, we have $\mathbf{R}^{-1}(1) = \mu\mathbf{I}$.
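The truncated series does approach $\mathbf{R}^{-1}$ for $0 < \mu < 2/\lambda_{\max}$; here is a minimal numerical sketch (sizes, seed, and iteration count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

M = 4
X = rng.standard_normal((M, M))
R = X @ X.T + M * np.eye(M)                  # symmetric positive definite
mu = 1.0 / np.linalg.eigvalsh(R).max()       # safely inside (0, 2/lambda_max)

# Accumulate mu * sum_{k=0}^{n} (I - mu R)^k for a large n.
I = np.eye(M)
approx = np.zeros_like(R)
term = I.copy()
for _ in range(5000):
    approx += mu * term
    term = term @ (I - mu * R)
print(np.allclose(approx, np.linalg.inv(R)))  # True
```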

Correspondingly, we may approximate the optimum Wiener solution as

$$\mathbf{w}(n+1) = \mathbf{R}^{-1}(n+1)\mathbf{p} = \mu\sum_{k=0}^{n}(\mathbf{I} - \mu\mathbf{R})^k\mathbf{p} = \mu\mathbf{p} + \mu\sum_{k=1}^{n}(\mathbf{I} - \mu\mathbf{R})^k\mathbf{p}$$

In the second term, put $k = i + 1$, or $i = k - 1$:

$$\mathbf{w}(n+1) = \mu\mathbf{p} + \mu(\mathbf{I} - \mu\mathbf{R})\sum_{i=0}^{n-1}(\mathbf{I} - \mu\mathbf{R})^i\mathbf{p} = \mu\mathbf{p} + (\mathbf{I} - \mu\mathbf{R})\mathbf{w}(n)$$

where, in the second line, we have used the fact that

$$\mu\sum_{i=0}^{n-1}(\mathbf{I} - \mu\mathbf{R})^i\mathbf{p} = \mathbf{w}(n)$$

Hence, rearranging:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu[\mathbf{p} - \mathbf{R}\mathbf{w}(n)]$$

which is the standard formula for the steepest-descent algorithm.

To first order, each iteration of steepest descent changes the cost by

$$J(\mathbf{w}(n+1)) = J(\mathbf{w}(n)) - \frac{\mu}{2}\|\mathbf{g}(n)\|^2$$

For stability of the steepest-descent algorithm, we therefore require

$$J(\mathbf{w}(n+1)) < J(\mathbf{w}(n))$$

To satisfy this requirement, the step-size parameter $\mu$ should be positive, since $\|\mathbf{g}(n)\|^2 > 0$. Hence, the steepest-descent algorithm becomes unstable when the step-size parameter is negative.
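A minimal steepest-descent sketch (illustrative statistics and step size, not from the text) showing the recursion converging to the Wiener solution:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 3
X = rng.standard_normal((M, M))
R = X @ X.T + M * np.eye(M)
p = rng.standard_normal(M)
mu = 1.0 / np.linalg.eigvalsh(R).max()        # positive, inside the stable range

w = np.zeros(M)
for _ in range(2000):
    w = w + mu * (p - R @ w)                  # w(n+1) = w(n) + mu [p - R w(n)]
print(np.allclose(w, np.linalg.solve(R, p)))  # True: converges to w_o
# With mu < 0 the same loop diverges, matching the stability argument above.
```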

The corresponding plot of the error-performance surface is therefore as sketched below.

[Figure: the error-performance surface $J$ plotted versus $a$ — a parabola with its minimum located at $a = -r(1)/r(0)$.]

(c) The condition on the step-size parameter is

$$0 < \mu < \frac{2}{r(0)}$$

4.10 The second-order AR process $u(n)$ is described by the difference equation

$$u(n) = 0.5u(n-1) + u(n-2) + v(n) \tag{1}$$

Hence, $w_1 = 0.5$, $w_2 = 1$, and the AR parameters equal $a_1 = -0.5$, $a_2 = -1$. Accordingly, we write the Yule-Walker equations as

$$\begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix}\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \end{bmatrix} \tag{2}$$

The variance of the white-noise process $v(n)$ satisfies

$$\sigma_v^2 = \sum_{k=0}^{2} a_k r(k) = 1, \qquad a_0 = 1$$

that is,

$$r(0) - 0.5r(1) - r(2) = 1 \tag{3}$$
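Treating $r(0)$, $r(1)$, and $r(2)$ as unknowns, the two Yule-Walker rows of Eq. (2) together with Eq. (3) form a small linear system; the sketch below (assuming $\sigma_v^2 = 1$) reproduces the values quoted next:

```python
import numpy as np

# Unknowns x = [r(0), r(1), r(2)], with sigma_v^2 = 1.
# Yule-Walker, lag 1:  0.5 r(0) + 1.0 r(1) = r(1)  ->  0.5 r(0) = 0
# Yule-Walker, lag 2:  1.0 r(0) + 0.5 r(1) = r(2)
# Variance relation:   r(0) - 0.5 r(1) - r(2) = 1
A = np.array([[0.5,  0.0,  0.0],
              [1.0,  0.5, -1.0],
              [1.0, -0.5, -1.0]])
b = np.array([0.0, 0.0, 1.0])
print(np.linalg.solve(A, b))   # [ 0.  -1.  -0.5]
```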

Eqs. (1), (2), and (3) yield

$$r(0) = 0, \quad r(1) = -1, \quad r(2) = -\tfrac{1}{2}$$

Hence,

$$\mathbf{R} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$$

The eigenvalues of $\mathbf{R}$ are $-1$ and $+1$. For convergence of the steepest-descent algorithm:

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

where $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix. Hence, with $\lambda_{\max} = 1$:

$$0 < \mu < 2$$

4.11 The process is now described by the difference equation

$$u(n-2) = 0.5u(n-1) + u(n) - v(n)$$

Hence, $w_1 = 1$ and $w_2 = 0.5$. Accordingly, we may write $\mathbf{w}_b = \mathbf{R}^{-1}\mathbf{r}^{B*}$:

$$\begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix}\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} r(2) \\ r(1) \end{bmatrix} \tag{1}$$

$$\sigma_v^2 = \sum_{k=0}^{2} a_k r(k) = 1: \qquad r(0) - r(1) - 0.5r(2) = 1 \tag{2}$$

These equations yield

$$r(0) = 0, \quad r(1) = -\tfrac{2}{3}$$

Therefore, …
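A quick check of the Problem 4.10 numbers (the 4.11 solution above breaks off at a page boundary in this excerpt):

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [-1.0, 0.0]])        # r(0) = 0, r(1) = -1
lam = np.linalg.eigvalsh(R)
print(lam)                         # [-1.  1.]
print(2.0 / lam.max())             # steepest-descent bound: 0 < mu < 2
```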

… components subtract coherently, thereby yielding the average power $(A^2/2)(1-a)^2$. Hence, we may express the cost function $J$ as

$$J = \frac{A^2}{2}(1-a)^2 + \frac{2a^2\sigma_\nu^2}{M}$$

Differentiating $J$ with respect to $a$ and setting the result equal to zero yields the optimum scale factor

$$a_{\mathrm{opt}} = \frac{A^2}{A^2 + 4(\sigma_\nu^2/M)} = \frac{[A^2/(2\sigma_\nu^2)](M/2)}{1 + [A^2/(2\sigma_\nu^2)](M/2)} = \frac{(M/2)\,\mathrm{SNR}}{1 + (M/2)\,\mathrm{SNR}}$$

where $\mathrm{SNR} = A^2/(2\sigma_\nu^2)$.

5.5 The index of performance equals

$$J(\mathbf{w}, K) = E[e^{2K}(n)], \qquad K = 1, 2, 3, \ldots$$

The estimation error $e(n)$ equals

$$e(n) = d(n) - \mathbf{w}^T(n)\mathbf{u}(n) \tag{1}$$

where $d(n)$ is the desired response, $\mathbf{w}(n)$ is the tap-weight vector of the transversal filter, and $\mathbf{u}(n)$ is the tap-input vector. In accordance with the multiple linear regression model for $d(n)$, we have

$$d(n) = \mathbf{w}_o^T\mathbf{u}(n) + v(n) \tag{2}$$

where $\mathbf{w}_o$ is the parameter vector and $v(n)$ is a white-noise process of zero mean and variance $\sigma_v^2$.
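The two closed forms for $a_{\mathrm{opt}}$ are the same expression; a small check (the parameter values below are arbitrary assumptions):

```python
import numpy as np

def a_opt(A, sigma_nu2, M):
    # a_opt = A^2 / (A^2 + 4 sigma_nu^2 / M)
    return A**2 / (A**2 + 4.0 * sigma_nu2 / M)

def a_opt_snr(snr, M):
    # Equivalent form (M/2) SNR / (1 + (M/2) SNR), SNR = A^2 / (2 sigma_nu^2)
    return (M / 2.0) * snr / (1.0 + (M / 2.0) * snr)

A, sigma_nu2, M = 1.5, 0.3, 8
snr = A**2 / (2.0 * sigma_nu2)
print(np.isclose(a_opt(A, sigma_nu2, M), a_opt_snr(snr, M)))  # True
```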

(a) The instantaneous gradient vector equals

$$\hat{\nabla}(n, K) = \frac{\partial[e^{2K}(n)]}{\partial\mathbf{w}} = 2Ke^{2K-1}(n)\frac{\partial e(n)}{\partial\mathbf{w}} = -2Ke^{2K-1}(n)\mathbf{u}(n)$$

Hence, we may express the new adaptation rule for the estimate of the tap-weight vector as

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) - \frac{1}{2}\mu\hat{\nabla}(n, K) = \hat{\mathbf{w}}(n) + \mu K\mathbf{u}(n)e^{2K-1}(n) \tag{3}$$

(b) Eliminate $d(n)$ between Eqs. (1) and (2), with the estimate $\hat{\mathbf{w}}(n)$ used in place of $\mathbf{w}(n)$:

$$e(n) = (\mathbf{w}_o - \hat{\mathbf{w}}(n))^T\mathbf{u}(n) + v(n) = \boldsymbol{\varepsilon}^T(n)\mathbf{u}(n) + v(n) = \mathbf{u}^T(n)\boldsymbol{\varepsilon}(n) + v(n) \tag{4}$$

where $\boldsymbol{\varepsilon}(n) = \mathbf{w}_o - \hat{\mathbf{w}}(n)$ is the weight-error vector. Subtract $\mathbf{w}_o$ from both sides of Eq. (3):

$$\boldsymbol{\varepsilon}(n+1) = \boldsymbol{\varepsilon}(n) - \mu K\mathbf{u}(n)e^{2K-1}(n) \tag{5}$$
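A minimal simulation of the adaptation rule (3) for $K = 2$ (the regression model, filter length, noise level, and step size below are illustrative assumptions, not values from the text). The weight-error norm decays steadily, if slowly, since the update is driven by $e^{2K-1}(n)$, which is small near convergence:

```python
import numpy as np

rng = np.random.default_rng(4)

# Regression model d(n) = w_o^T u(n) + v(n); illustrative settings.
M, K, mu = 4, 2, 1e-3
w_o = rng.standard_normal(M)
w_hat = np.zeros(M)

for n in range(1, 200001):
    u = rng.standard_normal(M)
    v = 0.01 * rng.standard_normal()
    e = (w_o @ u + v) - w_hat @ u
    w_hat += mu * K * u * e**(2 * K - 1)       # Eq. (3)
    if n % 50000 == 0:
        print(n, np.linalg.norm(w_o - w_hat))  # weight-error norm shrinking
```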

For the case when $\boldsymbol{\varepsilon}(n)$ is close to zero (i.e., $\hat{\mathbf{w}}(n)$ is close to $\mathbf{w}_o$), we may use Eq. (4) to write

$$e^{2K-1}(n) = [\mathbf{u}^T(n)\boldsymbol{\varepsilon}(n) + v(n)]^{2K-1} = v^{2K-1}(n)\left[1 + \frac{\mathbf{u}^T(n)\boldsymbol{\varepsilon}(n)}{v(n)}\right]^{2K-1}$$

$$\approx v^{2K-1}(n)\left[1 + (2K-1)\frac{\mathbf{u}^T(n)\boldsymbol{\varepsilon}(n)}{v(n)}\right] = v^{2K-1}(n) + (2K-1)\mathbf{u}^T(n)\boldsymbol{\varepsilon}(n)v^{2(K-1)}(n) \tag{6}$$

Substitute Eq. (6) into (5):

$$\boldsymbol{\varepsilon}(n+1) = [\mathbf{I} - \mu K(2K-1)v^{2(K-1)}(n)\mathbf{u}(n)\mathbf{u}^T(n)]\boldsymbol{\varepsilon}(n) - \mu Kv^{2K-1}(n)\mathbf{u}(n)$$

Taking the expectation of both sides of this relation and recognizing that (1) $\boldsymbol{\varepsilon}(n)$ is independent of $\mathbf{u}(n)$ by the low-pass filtering action of the filter, (2) $\mathbf{u}(n)$ is independent of $v(n)$ by assumption, and (3) $\mathbf{u}(n)$ has zero mean, we get

$$E[\boldsymbol{\varepsilon}(n+1)] = \{\mathbf{I} - \mu K(2K-1)E[v^{2(K-1)}(n)]\mathbf{R}\}E[\boldsymbol{\varepsilon}(n)] \tag{7}$$

where $\mathbf{R} = E[\mathbf{u}(n)\mathbf{u}^T(n)]$.

(c) Let

$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T \tag{8}$$

where $\boldsymbol{\Lambda}$ is the diagonal matrix of eigenvalues of $\mathbf{R}$, and $\mathbf{Q}$ is a matrix whose column vectors are the associated eigenvectors. Hence, substituting Eq. (8) into (7) and using the definition

$$\boldsymbol{\upsilon}(n) = \mathbf{Q}^T E[\boldsymbol{\varepsilon}(n)]$$

we get

$$\boldsymbol{\upsilon}(n+1) = \{\mathbf{I} - \mu K(2K-1)E[v^{2(K-1)}(n)]\boldsymbol{\Lambda}\}\boldsymbol{\upsilon}(n)$$

That is, the $i$th element of this equation is

$$\upsilon_i(n+1) = \left(1 - \mu K(2K-1)E[v^{2(K-1)}(n)]\lambda_i\right)\upsilon_i(n) \tag{9}$$

where $\upsilon_i(n)$ is the $i$th element of $\boldsymbol{\upsilon}(n)$, and $i = 1, 2, \ldots, M$.
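Equations (7) and (9) describe the same mean dynamics in two coordinate systems; a short sketch (arbitrary $\mathbf{R}$, step size, and noise moment, all assumptions) confirming that the modal recursion tracks the matrix recursion:

```python
import numpy as np

rng = np.random.default_rng(5)

M, K, mu = 3, 2, 0.05
X = rng.standard_normal((M, M))
R = X @ X.T / M + np.eye(M)
Ev = 0.1**2                        # E[v^{2(K-1)}] = E[v^2] for K = 2 (illustrative)
c = mu * K * (2 * K - 1) * Ev      # common scalar factor in (7) and (9)

eps = rng.standard_normal(M)       # stands for E[eps(0)]
lam, Q = np.linalg.eigh(R)
ups = Q.T @ eps                    # modal coordinates upsilon(0) = Q^T E[eps(0)]

for _ in range(200):
    eps = eps - c * (R @ eps)      # Eq. (7)
    ups = (1 - c * lam) * ups      # Eq. (9), element-wise

print(np.allclose(Q.T @ eps, ups))  # True: the two recursions agree
```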

Solving the first-order difference equation (9):

$$\upsilon_i(n) = \left(1 - \mu K(2K-1)E[v^{2(K-1)}(n)]\lambda_i\right)^n \upsilon_i(0)$$

where $\upsilon_i(0)$ is the initial value of $\upsilon_i(n)$. Hence, for $\upsilon_i(n)$ to converge, we require that

$$\left|1 - \mu K(2K-1)E[v^{2(K-1)}(n)]\lambda_{\max}\right| < 1$$

where $\lambda_{\max}$ is the largest eigenvalue of $\mathbf{R}$. This condition on $\mu$ may be rewritten as

$$0 < \mu < \frac{2}{K(2K-1)\lambda_{\max}E[v^{2(K-1)}(n)]} \tag{10}$$

When this condition is satisfied, we find that

$$\upsilon_i(\infty) = 0 \quad\text{for all } i$$

That is, $\boldsymbol{\varepsilon}(\infty) = \mathbf{0}$ and, correspondingly, $\hat{\mathbf{w}}(\infty) = \mathbf{w}_o$.

(d) For $K = 1$, the results described in Eqs. (3), (7), and (10) reduce, respectively, to

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\mathbf{u}(n)e(n)$$

$$E[\boldsymbol{\varepsilon}(n+1)] = (\mathbf{I} - \mu\mathbf{R})E[\boldsymbol{\varepsilon}(n)]$$

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

These results are recognized to be the same as those for the conventional LMS algorithm for real-valued data.

5.6 (a) We start with the equation

$$E[\boldsymbol{\varepsilon}(n+1)] = (\mathbf{I} - \mu\mathbf{R})E[\boldsymbol{\varepsilon}(n)]$$

where

$$\boldsymbol{\varepsilon}(n) = \mathbf{w}_o - \hat{\mathbf{w}}(n)$$

We note that …

11 µj min for small µ. (b) From the Lyapunov equation derived in Problem 5.10, we have K 0 ( n) + K 0 ( n) µj min, µ small where only the first term of the summation in the right-hand side of Eq. 8 in the solution to Problem 5.10 is retained. Taking the trace of both sides of this equation, and recognizing that tr[ K 0 ( n) tr[ K 0 ( n) we get for n : tr[ K 0 ( ) µj min tr[ From Eq. (5.90) of the text, J ex ( ) tr[ K 0 ( n) µ --J min tr[ ence, the misadjustment is M J ex ( ) J min µ --tr[ 5.13 The error correlation matrix K(n) equals K( n) E[ ( n) ( n) The trace of K(n) equals tr[ K( n) tr{ E[ ( n) ( n) } E{ tr[ ( n) ( n) } 141

5.13 The error correlation matrix $\mathbf{K}(n)$ equals

$$\mathbf{K}(n) = E[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)]$$

The trace of $\mathbf{K}(n)$ equals

$$\mathrm{tr}[\mathbf{K}(n)] = \mathrm{tr}\{E[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)]\} = E\{\mathrm{tr}[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)]\}$$

Since

$$\mathrm{tr}[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)] = \mathrm{tr}[\boldsymbol{\varepsilon}^H(n)\boldsymbol{\varepsilon}(n)]$$

we may express the trace of $\mathbf{K}(n)$ as

$$\mathrm{tr}[\mathbf{K}(n)] = E\{\mathrm{tr}[\boldsymbol{\varepsilon}^H(n)\boldsymbol{\varepsilon}(n)]\} = \mathrm{tr}\{E[\boldsymbol{\varepsilon}^H(n)\boldsymbol{\varepsilon}(n)]\}$$

The inner product $\boldsymbol{\varepsilon}^H(n)\boldsymbol{\varepsilon}(n)$ equals the squared norm of $\boldsymbol{\varepsilon}(n)$, which is a scalar. Hence,

$$\mathrm{tr}[\mathbf{K}(n)] = E[\|\boldsymbol{\varepsilon}(n)\|^2] \tag{1}$$

From the convergence analysis of the LMS algorithm, we have

$$\mathbf{K}(n+1) = (\mathbf{I} - \mu\mathbf{R})\mathbf{K}(n)(\mathbf{I} - \mu\mathbf{R}) + \mu^2 J_{\min}\mathbf{R} \tag{2}$$

Initially, $\boldsymbol{\varepsilon}(n)$ is so large that we may justifiably ignore the term $\mu^2 J_{\min}\mathbf{R}$, in which case Eq. (2) may be approximated as

$$\mathbf{K}(n+1) = (\mathbf{I} - \mu\mathbf{R})\mathbf{K}(n)(\mathbf{I} - \mu\mathbf{R}), \qquad n \text{ small} \tag{3}$$

Assuming that $\mathbf{R} = \sigma_u^2\mathbf{I}$, we may further reduce Eq. (3) to

$$\mathbf{K}(n+1) = (1 - \mu\sigma_u^2)^2\,\mathbf{K}(n)$$

Thus, in light of Eq. (1), we may write

$$E[\|\boldsymbol{\varepsilon}(n+1)\|^2] = (1 - \mu\sigma_u^2)^2\,E[\|\boldsymbol{\varepsilon}(n)\|^2], \qquad n \text{ small}$$

The convergence ratio is therefore approximately

$$c(n) = \frac{E[\|\boldsymbol{\varepsilon}(n+1)\|^2]}{E[\|\boldsymbol{\varepsilon}(n)\|^2]} = (1 - \mu\sigma_u^2)^2, \qquad n \text{ small}$$
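A small Monte Carlo estimate of the early-iteration convergence ratio for white input (noise-free desired response so that the $\mu^2 J_{\min}\mathbf{R}$ term really is negligible; all settings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

M, mu, runs, steps = 4, 0.02, 4000, 20        # R = I, i.e. sigma_u^2 = 1
norms = np.zeros(steps + 1)
for _ in range(runs):                          # average over independent runs
    w_o = rng.standard_normal(M)
    w = np.zeros(M)
    for n in range(steps + 1):
        norms[n] += np.sum((w_o - w)**2)       # accumulate ||eps(n)||^2
        u = rng.standard_normal(M)
        e = w_o @ u - w @ u                    # J_min = 0 here
        w += mu * u * e
print(norms[1:] / norms[:-1])                  # each ratio ~ (1 - mu)^2 = 0.9604
```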
