Adaptive Filters Part 2 (LMS variants and analysis) ECE 5/639 Statistical Signal Processing II: Linear Estimation


1 Adaptive Filters Part 2 (LMS variants and analysis) Statistical Signal Processing II: Linear Estimation Eric Wan, Ph.D. Fall

2 LMS Variants and Analysis
LMS variants:
- Normalized LMS
- Leaky LMS
- Filtered-X LMS and Adjoint LMS
- Transform and frequency domain LMS
- Block LMS
LMS analysis:
- Misadjustment
- Tracking performance

3 Normalized LMS
$c(n+1) = c(n) + \frac{\tilde{\mu}}{\|x(n)\|^2}\, e^*(n)\, x(n), \qquad 0 < \tilde{\mu} < 2$
- Time-varying step size
- Can be derived as a constrained optimization problem that minimizes the change in the coefficients from time step to time step
- Helps mitigate the effects of gradient noise when the number of coefficients is large or when the error or input values are large
- Analysis of rate of convergence and misadjustment is much more difficult than for LMS
- Rate of convergence is potentially significantly faster than standard LMS
- Only a slight increase in computational complexity
- Minor change to avoid issues when the input goes to zero: replace $\tilde{\mu}/\|x(n)\|^2$ with $\tilde{\mu}/(\|x(n)\|^2 + \delta)$
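
As a concrete reference, here is a minimal NumPy sketch of the normalized update above, written for real-valued signals (for complex data the correction term would use the conjugated error, as on the slide); the function name and default values are illustrative assumptions, not from the course materials.

```python
import numpy as np

def nlms(x, d, M, mu=0.5, delta=1e-6):
    """Normalized LMS sketch: adapt an M-tap FIR filter so c^T x(n) tracks d(n)."""
    c = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]       # x(n) = [x(n), ..., x(n-M+1)]
        e[n] = d[n] - c @ xn                # a priori error
        # time-varying step size mu / (||x(n)||^2 + delta);
        # delta avoids division by zero when the input is (near) zero
        c = c + (mu / (xn @ xn + delta)) * e[n] * xn
    return c, e
```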

4 Leaky LMS
$c(n+1) = (1 - \mu\alpha)\, c(n) + 2\mu\, e^*(n)\, x(n)$
Adds a constraint (penalty) on the coefficients, $P = E[e^2] + \alpha \|c\|^2$, so that
$\lim_{n \to \infty} E[c(n)] = (R + \alpha I)^{-1} d$
- Like adding a small amount of white noise to the input
- Keeps the coefficients small, which can be useful if the input is near zero or poorly conditioned
- May improve eigenvalue spread
- Adds a bias to the solution
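
A minimal sketch of the leaky update, under the same real-valued assumptions as the NLMS example; setting alpha = 0 recovers standard LMS.

```python
import numpy as np

def leaky_lms(x, d, M, mu=0.01, alpha=0.1):
    """Leaky LMS sketch: the (1 - mu*alpha) factor leaks the coefficients
    toward zero each step, trading a small bias for robustness."""
    c = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - c @ xn
        c = (1 - mu * alpha) * c + 2 * mu * e[n] * xn
    return c, e
```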

5 Filtered-X and Adjoint LMS
Consider the following adaptive filter setup:
[block diagram: input x drives the adaptive filter c, whose output ŷ passes through a plant P(z) to give ŷ_P; the error is e = y − ŷ_P]

6 Filtered-X and Adjoint LMS
Consider the following adaptive filter setup:
[block diagram: the desired signal is a delayed copy of the input, $H(z) = z^{-D}$, so y(n) = x(n − D); x also drives c → ŷ → P(z) → ŷ_P, with e = y − ŷ_P]
Open-loop inverse control / pre-equalization

7 Filtered-X and Adjoint LMS
Consider the following adaptive filter setup:
[block diagram: a reference model $M(z)$ generates the desired signal y from x; c acts as the controller ahead of the plant P(z), with e = y − ŷ_P]
Model-reference adaptive control

8 Filtered-X and Adjoint LMS
[block diagram: the desired signal y is the stereo input x filtered by a concert hall room impulse response; the plant P(z) is your living room impulse response; x → c → ŷ → P(z) → ŷ_P, e = y − ŷ_P]
Acoustic room equalization

9 Filtered-X and Adjoint LMS
[block diagram: same structure, x → c → ŷ → P(z) → ŷ_P, with e = y − ŷ_P]
Active noise (or vibration) cancellation

10 Filtered-X LMS
How to train? Two options:
- Re-derive stochastic gradient descent to account for P(z)
- Block diagram manipulation: move P(z) ahead of the adaptive filter and filter the input instead
[block diagrams: (left) x → c → ŷ → P(z) → ŷ_P; (right) x → P(z) → x_P → c → ŷ_P; in both, e = y − ŷ_P drives LMS]
When are these equal? (Exactly, only if c is fixed so that it commutes with P(z); approximately, when c adapts slowly.)

11 Filtered-X LMS
$x_P(n) = \hat{p} * x(n) = \hat{p}^T x(n)$
$\hat{y}(n) = c^H x(n), \qquad e(n) = y(n) - \hat{y}_P(n)$
$c(n+1) = c(n) + 2\mu\, e(n)\, x_P(n)$
[block diagram: x → c → ŷ → P(z) → ŷ_P with e = y − ŷ_P; in parallel, the plant model $\hat{P}(z)$ filters x into x_P, which drives the LMS update]
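
A single-channel filtered-X LMS simulation sketch: the true plant p is used only to synthesize the measured error, while its model p_hat generates the filtered reference x_P for the update. All names and signal conventions are illustrative assumptions.

```python
import numpy as np

def fxlms(x, y, p, p_hat, M, mu=0.001):
    """Filtered-X LMS sketch: update the controller c with the reference x
    filtered through the plant model p_hat, since P(z) follows the filter."""
    n_samp = len(x)
    c = np.zeros(M)
    u = np.zeros(n_samp)                 # controller output u = c * x
    xp = np.zeros(n_samp)                # filtered reference x_P = p_hat * x
    e = np.zeros(n_samp)
    for n in range(n_samp):
        xn = x[max(0, n - M + 1):n + 1][::-1]
        u[n] = c[:len(xn)] @ xn
        un = u[max(0, n - len(p) + 1):n + 1][::-1]
        e[n] = y[n] - p[:len(un)] @ un          # measured error e = y - P{u}
        xh = x[max(0, n - len(p_hat) + 1):n + 1][::-1]
        xp[n] = p_hat[:len(xh)] @ xh            # x_P(n) = p_hat^T x(n)
        xpn = xp[max(0, n - M + 1):n + 1][::-1]
        c[:len(xpn)] += 2 * mu * e[n] * xpn     # c(n+1) = c + 2*mu*e(n)*x_P(n)
    return c, e
```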

12 Filtered-X LMS: Multiple-Error X-LMS
Generalization to MIMO systems: K inputs $x_k$, N adaptive filters $c_{kn}$, and L error sensors reached through plant paths $p_{nl}$.
Consider one filter c with a single input $x_k$, plant paths $p_1, \ldots, p_4$ (models $\hat{p}_1, \ldots, \hat{p}_4$), and errors $e_1, \ldots, e_4$:
$c(n+1) = c(n) - \mu \frac{\partial (e^T e)}{\partial c} = c(n) - \mu \left( \frac{\partial e_1^2}{\partial c} + \frac{\partial e_2^2}{\partial c} + \frac{\partial e_3^2}{\partial c} + \frac{\partial e_4^2}{\partial c} \right)$
Same input, same filter, but each error term needs the reference filtered through its own path model, so each filter requires L filtered X-LMS implementations; the full system requires a total of $K \cdot N \cdot L$ filtered-X implementations.
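
For the multiple-error case, the per-filter update is just the sum of single-error filtered-X terms. A sketch of one such update step (a hypothetical helper, with the L filtered references precomputed):

```python
import numpy as np

def multi_error_fxlms_step(c, xp_vectors, errors, mu):
    """One multiple-error filtered-X update for a single filter c (sketch).
    xp_vectors[l]: reference filtered through the l-th path model p_hat_l,
    aligned with c's taps; errors[l]: the l-th sensor error at time n."""
    grad_term = sum(e_l * xp_l for e_l, xp_l in zip(errors, xp_vectors))
    return c + 2 * mu * grad_term
```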

13 Adjoint LMS
An alternative to Filtered-X LMS.
[block diagram: x → c (M taps) → u → P(z) (N taps) → ŷ_P; e = y − ŷ_P enters the LMS block through a signal δ, to be determined]
Assume $P(z)$ is modeled by an FIR filter, e.g. for N = 3:
$\hat{y}_P(n) = p_0 u(n) + p_1 u(n-1) + p_2 u(n-2)$
Calculate the gradient:
$\frac{\partial e^2(n)}{\partial c} = 2 e(n) \frac{\partial e(n)}{\partial c} = 2 e(n) \sum_{k=n-2}^{n} \frac{\partial e(n)}{\partial u(k)} \frac{\partial u(k)}{\partial c}$
where $\frac{\partial e(n)}{\partial u(n-i)} = -p_i$ and $\frac{\partial u(k)}{\partial c} = x(k)$.

14 Adjoint LMS
The filtered-X instantaneous gradient estimates at successive time steps (N = 3):
$\hat{\nabla}_n = -2 e(n) \left[ p_0 x(n) + p_1 x(n-1) + p_2 x(n-2) \right]$
$\hat{\nabla}_{n+1} = -2 e(n+1) \left[ p_0 x(n+1) + p_1 x(n) + p_2 x(n-1) \right]$
$\hat{\nabla}_{n+2} = -2 e(n+2) \left[ p_0 x(n+2) + p_1 x(n+1) + p_2 x(n) \right]$
Collecting the terms that multiply $x(n)$ across the three estimates gives
$-2 \left[ e(n)\, p_0 + e(n+1)\, p_1 + e(n+2)\, p_2 \right] x(n) = -2\, \delta(n)\, x(n)$,
i.e., the error filtered through the time-reversed plant $P(z^{-1})$, with update $c(n+1) = c(n) + 2\mu\, \delta(n)\, x(n)$.
Cost: filtered-X needs $N \cdot M$ multiplications per update; adjoint LMS needs $N + M$.

15 Adjoint LMS
$c(n+1) = c(n) + 2\mu\, \delta(n)\, x(n)$
[block diagram: x → c → u → P(z) → ŷ_P; e = y − ŷ_P is filtered through the adjoint plant model $\hat{P}(z^{-1})$ to form δ, which drives LMS]
- δ is not an instantaneous gradient estimate but an averaged gradient
- $P(z^{-1})$ is anticausal, so use the delayed implementation $c(n+1) = c(n) + 2\mu\, \delta(n-N)\, x(n-N)$
- Works for MIMO systems: each error $e_l$ is filtered through its own adjoint path $P_l(z^{-1})$ and the results are combined into the δ signals
- Significant computational savings
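
A sketch of the delayed adjoint update: δ(n − N) filters the most recent N errors backward through the plant model, so it uses only samples already available at time n. The indexing conventions and helper signature here are assumptions for illustration.

```python
import numpy as np

def adjoint_lms_step(c, x, e, n, p_hat, mu):
    """One delayed adjoint-LMS update at time n (sketch).
    x, e: full signal arrays with e known up to index n; p_hat: plant model."""
    N, M = len(p_hat), len(c)
    if n - N - M + 1 < 0:
        return c                                 # not enough history yet
    # delta(n-N): the error run through the adjoint filter P(1/z),
    # i.e. delta(k) = sum_i p_i e(k+i), evaluated at k = n - N
    delta = sum(p_hat[i] * e[n - N + i] for i in range(N))
    xn = x[n - N - M + 1:n - N + 1][::-1]        # input vector x(n-N)
    return c + 2 * mu * delta * xn               # delayed update
```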

16 Transform domain LMS
Consider transforming the input before the linear estimator:
[block diagram: x enters a transform block producing $u_1, u_2, \ldots, u_M$, which feed the adaptive coefficients]
What would be the ideal transform?

17 Transform domain LMS
For a white input the eigenvalue spread is 1 and LMS converges quickly (circular bowl):
$R = \begin{bmatrix} \sigma_x^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_x^2 \end{bmatrix}, \qquad \chi(R) = 1$
Consider first the transform $Q^T$, where $R = Q \Lambda Q^T$:
$u = Q^T x, \qquad E[u u^T] = Q^T E[x x^T] Q = Q^T R Q = \Lambda$
[contour plots: $Q^T$ rotates the elliptical error surface in $(c_1, c_2)$ so its principal axes align with the coordinate axes]
In image compression, $Q^T$ is known as the Karhunen-Loève Transform.
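
A quick numerical check of this diagonalization (synthetic R and sizes are illustrative):

```python
import numpy as np

# Verify that rotating by the eigenvectors of R diagonalizes the input
# correlation: E[u u^T] = Lambda when u = Q^T x (the KLT idea).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
R = A @ A.T                                    # synthetic correlation matrix
lam, Q = np.linalg.eigh(R)                     # R = Q Lambda Q^T
X = rng.multivariate_normal(np.zeros(3), R, size=200_000)
U = X @ Q                                      # rows are u^T = x^T Q
print(np.round(np.cov(U.T), 2))                # ~ diag(lam), off-diagonals ~ 0
print(np.round(lam, 2))
```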

18 Transform domain LMS
We want a circular bowl ($\chi(R) = 1$), so also power normalize:
$u = \Lambda^{-1/2} Q^T x = R^{-1/2} x, \qquad R = R^{1/2} R^{T/2}, \qquad E[u u^T] = I, \qquad R^{-1} = R^{-T/2} R^{-1/2}$ (Cholesky factorization)
Possible fixed transforms:
- DFT (sliding window, recursive): $u_i(n+1) = u_i(n)\, e^{j 2\pi i / M} + x(n+1) - x(n+1-M)$, with update $c(n+1) = c(n) + 2\mu\, e^*(n)\, u(n)$. Order-M update; complex coefficients.
- DCT: $m_{i,m} = R_i \sqrt{2/M}\, \cos\!\left(\frac{i (m + 0.5) \pi}{M}\right)$, with $R_i = 1/\sqrt{2}$ for $i = 0, M/2$ and $R_i = 1$ otherwise; $u = M^T x$ with $M^T M = I$.

19 Transform domain LMS
The last step is to power normalize each element separately:
$u_i \leftarrow u_i / \hat{\sigma}_{u_i}, \qquad c(n+1) = c(n) + 2\mu\, e^*(n)\, u(n)$
[block diagram: x → transform → $u_1, \ldots, u_M$, each scaled by $1/\hat{\sigma}_{u_i}$ before the adaptive coefficients]
Transform-domain approaches often result in faster convergence; the final MSE may be different.
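
Putting the pieces together, a DCT-LMS sketch with per-bin power normalization; the running-variance estimator (forgetting factor beta) and the default values are assumptions, and SciPy's orthonormal DCT stands in for the transform matrix of the previous slide.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, M, mu=0.05, beta=0.99, eps=1e-6):
    """Transform-domain LMS sketch: DCT the input vector, power-normalize
    each bin with a running estimate of sigma_ui^2, then do the LMS update."""
    c = np.zeros(M)
    sig2 = np.ones(M)                     # per-bin power estimates
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = dct(x[n - M + 1:n + 1][::-1], norm='ortho')   # u = T x(n)
        sig2 = beta * sig2 + (1 - beta) * u**2
        e[n] = d[n] - c @ u
        c += 2 * mu * e[n] * u / (sig2 + eps)             # per-bin normalized step
    return c, e
```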

20 Block LMS
The basic idea is to average the instantaneous gradient over a block of data. Consider a single LMS update:
$\hat{y}(n) = c^H(n-1)\, x(n), \qquad c(n) = c(n-1) + 2\mu\, e(n)\, x(n)$
One step later: $c(n+1) = c(n) + 2\mu\, e(n+1)\, x(n+1)$
Iterate for L steps:
$c(n+L) = c(n-1) + 2\mu \sum_{l=0}^{L} e(n+l)\, x(n+l)$
In block LMS the weights are only updated every L steps. The sequence of weights will not be the same. Why? (In sequential LMS each error uses the most recently updated weights; in block LMS every error in the block is computed with the weights frozen at the start of the block.)

21 Block LMS
$c(n+L) = c(n-1) + 2\mu \sum_{l=0}^{L} e(n+l)\, x(n+l)$
Convergence, stability, and time constants are all the same as for LMS. Is the misadjustment the same?
$\frac{P_{ex}}{P_o} \approx \mu L\, \mathrm{Trace}(R)$: depends on how you look at it.
So what are the advantages? A fast implementation can use the FFT to perform the convolutions/correlations $\sum_{l=0}^{L} e(n+l)\, x(n+l)$.
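
A direct (non-FFT) block LMS sketch, with assumed parameter defaults: all errors in a block use the same frozen weights, and the accumulated gradient is applied once per block.

```python
import numpy as np

def block_lms(x, d, M, L, mu=0.005):
    """Block LMS sketch: sum the instantaneous gradient over L samples,
    updating the weights only once per block."""
    c = np.zeros(M)
    e = np.zeros(len(x))
    for start in range(M - 1, len(x) - L + 1, L):
        grad = np.zeros(M)
        for n in range(start, start + L):      # same c for the whole block
            xn = x[n - M + 1:n + 1][::-1]
            e[n] = d[n] - c @ xn
            grad += e[n] * xn
        c += 2 * mu * grad                     # one update per block
    return c, e
```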

22 Fast Block LMS
Assuming the block length equals the FIR filter order M:
- LMS: $2M^2$ multiplications per block of M samples
- Fast Block LMS: $10 M \log_2(2M) + 16 M$ multiplications per block
Convergence can also be improved by power normalizing in the frequency domain. See the Haykin text for details.
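
The saving comes from computing the block gradient, a cross-correlation between the error block and the input, with FFTs (the overlap-save idea). A minimal numerical check of that identity, assuming block length equal to the filter order M; the sizes are illustrative.

```python
import numpy as np

# The block-gradient sum_l e(n+l) x(n+l-k), k = 0..M-1, is a cross-correlation,
# so all M lags can be computed at once in the frequency domain.
M = 64
x_blk = np.random.randn(2 * M)        # previous block followed by current block
e_blk = np.random.randn(M)            # errors for the current block
E = np.fft.rfft(np.concatenate([np.zeros(M), e_blk]), 2 * M)
X = np.fft.rfft(x_blk, 2 * M)
grad = np.fft.irfft(np.conj(X) * E, 2 * M)[:M]   # correlation via FFT
# Direct evaluation of the same sum, for comparison:
direct = np.array([sum(e_blk[l] * x_blk[M + l - k] for l in range(M))
                   for k in range(M)])
assert np.allclose(grad, direct)
```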

23 LMS Analysis - Misadjustment
Start by calculating the covariance of the LMS coefficients (at the bottom of the bowl):
$\mathrm{cov}[\tilde{c}] = E[\tilde{c}\, \tilde{c}^T], \qquad \tilde{c} = c - c_o$
LMS: $c(n+1) = c(n) - \mu \hat{\nabla}_n$. Write $\hat{\nabla}_n = \nabla_n + V$, where $\nabla_n = 2 R \tilde{c}$ is the true gradient and $V$ is stationary gradient noise.
Substitute into LMS:
$\tilde{c}(n+1) = (I - 2\mu R)\, \tilde{c}(n) - \mu V(n)$
Rotate with $\tilde{c}' = Q^T \tilde{c}$:
$\tilde{c}'(n+1) = (I - 2\mu\Lambda)\, \tilde{c}'(n) - \mu V'(n)$
$E\big[\tilde{c}'(n+1)\, \tilde{c}'(n+1)^T\big] = (I - 2\mu\Lambda)^2\, E\big[\tilde{c}'\, \tilde{c}'^T\big] + \mu^2\, E\big[V' V'^T\big] + \text{(cross terms)}$

24 LMS Analysis - Misadjustment
$E\big[\tilde{c}'(n+1)\, \tilde{c}'(n+1)^T\big] = (I - 2\mu\Lambda)^2\, E\big[\tilde{c}'\, \tilde{c}'^T\big] + \mu^2\, E\big[V' V'^T\big] + \text{(cross terms)}$
Assume stationarity of $\tilde{c}'$ at the bottom of the bowl and zero-mean noise $V$ uncorrelated with $\tilde{c}'$; the cross terms then go to zero. Solving
$\mathrm{cov}[\tilde{c}'] = (I - 2\mu\Lambda)^2\, \mathrm{cov}[\tilde{c}'] + \mu^2\, \mathrm{cov}[V']$
gives
$\mathrm{cov}[\tilde{c}'] = \frac{\mu}{4} \left( \Lambda - \mu \Lambda^2 \right)^{-1} \mathrm{cov}[V']$
At the bottom of the bowl $\nabla_n \approx 0$, so $V = \hat{\nabla}_n = -2 e(n)\, x(n)$ and $\mathrm{cov}[V] = 4\, E[e^2\, x x^T]$.
Near the optimum, $e$ and $x$ are uncorrelated by the principle of orthogonality:
$\mathrm{cov}[V] = 4 P_o R, \qquad \mathrm{cov}[V'] = \mathrm{cov}[Q^T V] = 4 P_o \Lambda$

25 LMS Analysis - Misadjustment
$\mathrm{cov}[\tilde{c}'] = \frac{\mu}{4} \left( \Lambda - \mu \Lambda^2 \right)^{-1} \mathrm{cov}[V'] = \frac{\mu}{4} \left( \Lambda - \mu \Lambda^2 \right)^{-1} 4 P_o \Lambda \approx \mu P_o \Lambda^{-1} \Lambda = \mu P_o I$ (assuming $\mu\Lambda \ll I$)
$\mathrm{cov}[\tilde{c}] = Q\, \mathrm{cov}[\tilde{c}']\, Q^T \approx \mu P_o I$
[learning curve: $P(n)$ versus n decays from its initial value toward the floor $P_o + P_{ex}$, with $P(n) = P_o + P_{tr}(n) + P_{ex}$ and $\mathrm{Misadjustment} = P_{ex}/P_o$]

26 LMS Analysis - Misadjustment
MSE after convergence: $P = P_o + P_{ex}$. Assume $\tilde{c}$ is stationary after convergence.
Excess MSE:
$P_{ex} = E[\tilde{c}^T R\, \tilde{c}] = E[\tilde{c}'^T \Lambda\, \tilde{c}'] = \sum_{k=0}^{M-1} \lambda_k\, E[(\tilde{c}_k')^2] \approx \mu P_o \sum_{k=0}^{M-1} \lambda_k = \mu P_o\, \mathrm{Trace}(R)$
(the $E[(\tilde{c}_k')^2]$ are the diagonal elements of $\mathrm{cov}[\tilde{c}'] \approx \mu P_o I$)
$\mathrm{Misadjustment} = \frac{P_{ex}}{P_o} \approx \mu\, \mathrm{Trace}(R)$
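
The result is easy to check empirically. A simulation sketch for white input (so Trace(R) = M) with a known FIR plant and a measurement-noise floor P_o; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, mu, P_o, n_steps = 8, 0.005, 0.01, 200_000
c_o = rng.standard_normal(M)                   # unknown optimal filter
x = rng.standard_normal(n_steps)               # white input: R = I, Trace(R) = M
c = np.zeros(M)
sq_err = np.zeros(n_steps)
for n in range(M - 1, n_steps):
    xn = x[n - M + 1:n + 1][::-1]
    d = c_o @ xn + np.sqrt(P_o) * rng.standard_normal()
    e = d - c @ xn
    c += 2 * mu * e * xn                       # LMS with the 2*mu convention
    sq_err[n] = e * e
P = sq_err[n_steps // 2:].mean()               # steady-state MSE estimate
print("measured misadjustment:", (P - P_o) / P_o)
print("predicted mu*Trace(R): ", mu * M)       # ~ 0.04
```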

27 LMS Analysis - Misadjustment
$\mathrm{Misadjustment} = \frac{P_{ex}}{P_o} \approx \mu\, \mathrm{Trace}(R)$
Shows the trade-off between learning rate and misadjustment. In terms of time constants, with $\tau_k = 1/(2\mu\lambda_k)$ and $(\tau_k)_{mse} = \tau_k / 2$:
$\mathrm{Misadj.} = \mu \sum_{k=0}^{M-1} \lambda_k = \mu \sum_{k=0}^{M-1} \frac{1}{2\mu\tau_k} = \frac{1}{4} \sum_{k=0}^{M-1} \frac{1}{(\tau_k)_{mse}} = \frac{M}{4} \left( \frac{1}{\tau} \right)_{av}$
Shows the trade-off between misadjustment and rate of convergence.
