EE482: Digital Signal Processing Applications

Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu
EE482: Digital Signal Processing Applications, Spring 2014, TTh 14:30-15:45, CBC C222
Lecture 11: Adaptive Filtering, 14/03/04
http://www.ee.unlv.edu/~b1morris/ee482/

2 Outline: Random Processes, Adaptive Filters, LMS Algorithm

3 Adaptive Filtering
FIR and IIR filters are designed for linear, time-invariant signals. How can we handle signals whose characteristics are unknown or changing? We need a way to update the filter coefficients automatically and continually so the filter can track time-varying signals and systems.

4 Random Processes
Real-world signals (e.g., speech, music, noise) are time varying and random in nature. We need to characterize such a signal even when a complete deterministic mathematical description does not exist. A random process is a sequence of random variables.

5 Autocorrelation
The autocorrelation specifies the statistical relationship of a signal at different time lags (n - k):
r_xx(n, k) = E[x(n) x(k)]
It measures the similarity of observations as a function of the time lag between them and is a basic mathematical tool for detecting signals: finding repeating patterns (e.g., a sinusoid buried in noise), measuring the time delay between signals (radar, sonar, lidar), estimating an impulse response, and so on.
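As a rough numerical sketch (in Python with numpy; the signal parameters are illustrative, not from the lecture), the sample autocorrelation of a sinusoid buried in noise still shows the underlying periodicity:

```python
import numpy as np

# A sinusoid with period 20 samples, buried in unit-variance white noise
np.random.seed(0)
N = 2000
n = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * n) + np.random.randn(N)

# Biased sample autocorrelation for non-negative lags k = 0, 1, ..., N-1
r_xx = np.correlate(x, x, mode="full")[N - 1:] / N

# r_xx oscillates with the same 20-sample period as the hidden sinusoid,
# even though the periodicity is hard to see in x(n) itself
print(np.round(r_xx[:25], 2))
```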

6 Wide Sense Stationary (WSS) Process
A random process is WSS if its statistics do not change with time: the mean is independent of time, E[x(n)] = m_x, and the autocorrelation depends only on the time lag,
r_xx(k) = E[x(n + k) x(n)]
WSS autocorrelation properties:
Even function: r_xx(-k) = r_xx(k)
Bounded by the zero-lag value: |r_xx(k)| ≤ r_xx(0) = E[x^2(n)]
For a zero-mean process, r_xx(0) = E[x^2(n)] = σ_x^2
Cross-correlation: r_xy(k) = E[x(n + k) y(n)]

7 Expected Value
The expected value is the value of a random variable expected if the random experiment is repeated an infinite number of times; it is the weighted average of all possible values. The expectation operator is
E[X] = ∫ x f(x) dx
where f(x) is the probability density function of the random variable X.

8 White Noise
White noise v(n) has zero mean and variance σ_v^2. It is a very popular random signal and a typical noise model.
Autocorrelation: r_vv(k) = σ_v^2 δ(k), i.e., samples are statistically uncorrelated except at zero time lag.
Power spectrum: P_vv(ω) = σ_v^2 for |ω| ≤ π, i.e., the power is uniformly distributed over the entire frequency range.
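A minimal numpy sketch (the variance value is illustrative) confirming that the sample autocorrelation of white noise is approximately σ_v^2 at lag 0 and approximately 0 elsewhere:

```python
import numpy as np

np.random.seed(1)
N = 10000
sigma_v2 = 0.5
v = np.sqrt(sigma_v2) * np.random.randn(N)   # white noise, zero mean, variance 0.5

# Biased sample autocorrelation for lags k = 0, 1, 2, 3
r_vv = np.correlate(v, v, mode="full")[N - 1:N + 3] / N
print(np.round(r_vv, 3))                     # approximately [0.5, 0, 0, 0]
```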

9 Example 6.2
Second-order FIR filter driven by zero-mean white noise x(n) with variance σ_x^2:
y(n) = x(n) + a x(n-1) + b x(n-2)
Mean:
E[y(n)] = E[x(n)] + a E[x(n-1)] + b E[x(n-2)] = 0 + a·0 + b·0 = 0
Autocorrelation:
r_yy(k) = E[y(n+k) y(n)]
        = E[(x(n+k) + a x(n+k-1) + b x(n+k-2)) (x(n) + a x(n-1) + b x(n-2))]
        = (1 + a^2 + b^2) r_xx(k) + (a + ab)(r_xx(k+1) + r_xx(k-1)) + b(r_xx(k+2) + r_xx(k-2))
Since r_xx(k) = σ_x^2 δ(k) for white noise:
r_yy(k) = (1 + a^2 + b^2) σ_x^2,  k = 0
          (a + ab) σ_x^2,         k = ±1
          b σ_x^2,                k = ±2
          0,                      otherwise
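A numerical check of this result (with illustrative values a = 0.5, b = 0.25, σ_x^2 = 1; these are not part of the original example):

```python
import numpy as np

np.random.seed(2)
a, b = 0.5, 0.25
N = 200000
x = np.random.randn(N)                 # white noise, zero mean, unit variance
y = np.convolve(x, [1.0, a, b])[:N]    # y(n) = x(n) + a*x(n-1) + b*x(n-2)

# Sample autocorrelation of y at lags 0, 1, 2, 3
r_yy = np.correlate(y, y, mode="full")[N - 1:N + 3] / N
theory = [1 + a**2 + b**2, a + a * b, b, 0.0]
print(np.round(r_yy, 3))               # close to the theoretical values
print(theory)                          # [1.3125, 0.625, 0.25, 0.0]
```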

10 Practical Estimation
Practical applications work with finite-length sequences.
Sample mean: m_x = (1/N) Σ_{n=0}^{N-1} x(n)
Sample autocorrelation: r_xx(k) = (1/(N-k)) Σ_{n=0}^{N-k-1} x(n+k) x(n)
This only produces a good estimate for lags smaller than about 10% of N. Use MATLAB (mean.m, xcorr.m, etc.) to calculate these quantities.
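Equivalent Python/numpy sketches of these two estimators (MATLAB's xcorr with the 'unbiased' scaling option computes the same sample autocorrelation):

```python
import numpy as np

def sample_mean(x):
    """m_x = (1/N) * sum_{n=0}^{N-1} x(n)"""
    return np.mean(x)

def sample_autocorr(x, k):
    """Unbiased sample autocorrelation
    r_xx(k) = (1/(N-k)) * sum_{n=0}^{N-k-1} x(n+k) * x(n)"""
    x = np.asarray(x, dtype=float)
    N = len(x)
    k = abs(k)                      # the autocorrelation is even in k
    return np.dot(x[k:], x[:N - k]) / (N - k)
```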

11 Adaptive Filters
Signal characteristics in practical applications are time varying and/or unknown, so the filter coefficients must be modified adaptively, in an automated fashion, to meet the design objectives.
Example: channel equalization. High-speed data communication over a physical channel (e.g., a wireless network) suffers channel distortion. Channel equalization compensates for this distortion (e.g., along the path from a WiFi router to a computer), and the channel must be continually tracked and characterized (e.g., as a user moves around a room).

12 General Adaptive Filter
An adaptive filter has two components: a digital filter, defined by its coefficients, and an adaptive algorithm that automatically updates the filter coefficients (weights). Adaptation occurs by comparing the filtered signal y(n) with a desired (reference) signal d(n) and minimizing the error e(n) under a cost function (e.g., mean-square error), continually lowering the error and driving y(n) closer to d(n).

13 FIR Adaptive Filter
y(n) = Σ_{l=0}^{L-1} w_l(n) x(n - l)
Notice the time-varying weights. In vector form,
y(n) = w^T(n) x(n) = x^T(n) w(n)
with
x(n) = [x(n), x(n-1), ..., x(n-L+1)]^T
w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^T
Error signal:
e(n) = d(n) - y(n) = d(n) - w^T(n) x(n)
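A one-sample illustration of the vector form in numpy (all numbers are made up for the example):

```python
import numpy as np

L = 4
w_n = np.array([0.5, 0.25, -0.25, 0.125])   # w(n) = [w_0(n), ..., w_{L-1}(n)]^T
x_n = np.array([1.0, 0.0, -1.0, 2.0])       # x(n) = [x(n), x(n-1), ..., x(n-L+1)]^T

y_n = w_n @ x_n                             # y(n) = w^T(n) x(n)
d_n = 0.75                                  # desired sample (hypothetical)
e_n = d_n - y_n                             # e(n) = d(n) - w^T(n) x(n)
print(y_n, e_n)                             # 1.0, -0.25
```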

14 Performance Function
Use the mean-square error (MSE) cost function:
ξ(n) = E[e^2(n)] = E[d^2(n)] - 2 p^T w(n) + w^T(n) R w(n)
where
p = E[d(n) x(n)] = [r_dx(0), r_dx(1), ..., r_dx(L-1)]^T
R = E[x(n) x^T(n)] is the autocorrelation matrix, a Toeplitz matrix that is symmetric across the main diagonal.
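A sketch of how R and p might be assembled from estimated correlation values, assuming x(n) is WSS; the numbers are illustrative. Minimizing the quadratic MSE surface above gives the standard Wiener solution w_o = R^{-1} p, the target the adaptive algorithms that follow try to reach:

```python
import numpy as np
from scipy.linalg import toeplitz

L = 3
r_xx = np.array([1.0, 0.5, 0.25])   # r_xx(0), r_xx(1), r_xx(2)  (illustrative)
r_dx = np.array([0.4, 0.2, 0.1])    # r_dx(0), r_dx(1), r_dx(2)  (illustrative)

R = toeplitz(r_xx)                  # L x L symmetric Toeplitz autocorrelation matrix
p = r_dx                            # cross-correlation vector

w_o = np.linalg.solve(R, p)         # minimizer of xi(w) = E[d^2] - 2 p^T w + w^T R w
print(R)
print(w_o)
```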

15 Steepest Descent Optimization
The error function is a quadratic surface,
ξ(n) = E[d^2(n)] - 2 p^T w(n) + w^T(n) R w(n)
so gradient descent search techniques can be used. The gradient points in the direction of greatest change, and an iterative optimization steps toward the bottom of the error surface:
w(n+1) = w(n) - (μ/2) ∇ξ(n)
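A minimal sketch of this iteration for a 2-tap example, assuming R and p are known (values are illustrative); for the quadratic surface above, ∇ξ = 2(R w - p):

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # autocorrelation matrix (illustrative)
p = np.array([0.6, 0.3])            # cross-correlation vector (illustrative)
mu = 0.1                            # step size

w = np.zeros(2)                     # w(0)
for _ in range(200):
    grad = 2.0 * (R @ w - p)        # gradient of xi at w(n)
    w = w - (mu / 2.0) * grad       # w(n+1) = w(n) - (mu/2) * grad

print(w)                            # approaches the optimum
print(np.linalg.solve(R, p))        # optimum R^{-1} p = [0.6, 0.0]
```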

16 LMS Algorithm
Practical applications do not have statistical knowledge of d(n) and x(n), so the MSE and its gradient cannot be computed directly. The LMS algorithm is a stochastic gradient algorithm: it uses the instantaneous squared error to estimate the MSE,
ξ̂(n) = e^2(n)
Gradient estimate:
∇ξ̂(n) = 2 [∇e(n)] e(n), with e(n) = d(n) - w^T(n) x(n)
∇ξ̂(n) = -2 x(n) e(n)
Substituting into the steepest descent recursion gives the LMS update
w(n+1) = w(n) + μ x(n) e(n)
LMS Steps
1. Set L, μ, and w(0): L is the filter length, μ is the step size (small, e.g., 0.01), and w(0) holds the initial filter weights.
2. Compute the filter output y(n) = w^T(n) x(n).
3. Compute the error signal e(n) = d(n) - y(n).
4. Update the weight vector: w_l(n+1) = w_l(n) + μ x(n-l) e(n), for l = 0, 1, ..., L-1.
Notice that this requires a reference signal d(n).
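A compact Python/numpy sketch of these steps, applied to a hypothetical system-identification problem (the unknown system h, the noise level, and the step size are illustrative choices, not from the lecture):

```python
import numpy as np

def lms(x, d, L, mu, w0=None):
    """LMS steps from the slide: for each n, form the input vector x(n),
    compute y(n) = w(n)^T x(n) and e(n) = d(n) - y(n), then update
    w(n+1) = w(n) + mu * x(n) * e(n)."""
    N = len(x)
    w = np.zeros(L) if w0 is None else np.asarray(w0, dtype=float).copy()
    y = np.zeros(N)
    e = np.zeros(N)
    for n in range(N):
        # x_vec = [x(n), x(n-1), ..., x(n-L+1)]^T, zero-padded before n = 0
        x_vec = np.zeros(L)
        m = min(L, n + 1)
        x_vec[:m] = x[n::-1][:m]
        y[n] = w @ x_vec
        e[n] = d[n] - y[n]
        w = w + mu * x_vec * e[n]
    return w, y, e

# Hypothetical identification example: the "unknown system" is an FIR filter h,
# d(n) is its (slightly noisy) output, and LMS adapts w toward h.
np.random.seed(3)
h = np.array([0.8, -0.4, 0.2])                  # unknown system (illustrative)
x = np.random.randn(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * np.random.randn(len(x))

w, y, e = lms(x, d, L=3, mu=0.01)
print(np.round(w, 3))                           # close to h
```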