

WIENER FILTERING Presented by N. Srikanth (Y8104060) and M. Manikanta PhaniKumar (Y8104031), Department of Electrical Engineering, Indian Institute of Technology Kanpur.

INTRODUCTION Noise is present in many everyday situations; for example, a microphone records both speech and noise. The goal is to reconstruct the original signal. Wiener filtering is a method to estimate the original signal as closely as possible from a signal degraded by additive noise. The Wiener filter is based on the linear minimum mean square error (LMMSE) criterion. Its calculation requires the assumption that the signal and noise processes are second-order (wide-sense) stationary. In this presentation the data are assumed WSS with zero mean.

Wiener filtering There are three main problems that we will study: smoothing, filtering, and prediction. To solve all three problems we use the LMMSE estimator

θ̂ = C_θx C_xx^{-1} x

and the minimum MSE matrix given by

M_θ̂ = C_θθ - C_θx C_xx^{-1} C_xθ
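As a quick sanity check of these two expressions, the sketch below (in NumPy, with illustrative covariances of my own choosing, not from the slides) draws jointly Gaussian θ and x = θ + w, applies θ̂ = C_θx C_xx^{-1} x, and compares the empirical error covariance against M_θ̂:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (values are mine, not from the slides): theta is a
# zero-mean 2-vector, x = theta + w with independent zero-mean noise w.
n_trials = 200_000
C_tt = np.array([[2.0, 0.5], [0.5, 1.0]])    # C_theta,theta
C_ww = 0.5 * np.eye(2)                        # noise covariance
theta = rng.standard_normal((n_trials, 2)) @ np.linalg.cholesky(C_tt).T
w = rng.standard_normal((n_trials, 2)) @ np.linalg.cholesky(C_ww).T
x = theta + w

# LMMSE estimator: theta_hat = C_tx C_xx^{-1} x; for this additive-noise
# model C_tx = C_tt and C_xx = C_tt + C_ww.
C_tx = C_tt
C_xx = C_tt + C_ww
W = C_tx @ np.linalg.inv(C_xx)
theta_hat = x @ W.T

# Minimum MSE matrix: M = C_tt - C_tx C_xx^{-1} C_xt
M = C_tt - C_tx @ np.linalg.inv(C_xx) @ C_tx.T

err = theta - theta_hat
M_empirical = err.T @ err / n_trials          # should be close to M
print(np.round(M, 3))
```

The empirical error covariance matches M_θ̂ to within Monte Carlo error, which is the sense in which no other linear estimator can do better.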

SMOOTHING The signal θ = s[n] is to be estimated for n = 0, 1, ..., N-1 based on the data set {x[0], x[1], ..., x[N-1]}, where x[n] = s[n] + w[n]. In this problem an estimate cannot be formed until all the data have been collected. The Wiener estimate of the signal is then

ŝ = R_ss (R_ss + R_ww)^{-1} x

The corresponding minimum MSE matrix is

M_ŝ = (I - W) R_ss

where W = R_ss (R_ss + R_ww)^{-1} is referred to as the Wiener smoothing matrix. If N = 1, we estimate s[0] based on x[0] = s[0] + w[0]. Then the Wiener smoother W is given by

W = r_ss[0] / (r_ss[0] + r_ww[0]) = η / (η + 1)

where η = r_ss[0] / r_ww[0] is the SNR.

For a high SNR, W → 1 and ŝ[0] → x[0], while for a low SNR, W → 0 and ŝ[0] → 0. The corresponding minimum MSE is

M_ŝ = (1 - W) r_ss[0] = (1 - η/(η + 1)) r_ss[0] = r_ss[0] / (η + 1)

which for these two extremes is either 0 for a high SNR or r_ss[0] for a low SNR.
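For concreteness, a minimal numeric sketch of the N = 1 case (the powers r_ss[0] and r_ww[0] below are my own illustrative choices, not from the slides):

```python
# Scalar Wiener smoother for x[0] = s[0] + w[0]; the signal and noise
# powers are illustrative values, not from the slides.
r_ss0 = 4.0                          # signal power r_ss[0]
r_ww0 = 1.0                          # noise power  r_ww[0]

eta = r_ss0 / r_ww0                  # SNR = 4
W = r_ss0 / (r_ss0 + r_ww0)          # smoother gain, equals eta/(eta+1)
M = (1 - W) * r_ss0                  # minimum MSE, equals r_ss0/(eta+1)

print(round(W, 3), round(M, 3))      # 0.8 0.8
```

At SNR = 4 the smoother keeps 80% of the observation and the residual MSE is already a fifth of the signal power; raising η pushes W toward 1 and M toward 0, exactly the limits described above.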

FILTERING Here we estimate θ = s[n] based on x[m] = s[m] + w[m] for m = 0, 1, ..., n. The problem is to filter the signal from the noise; the signal sample is estimated from the present and past data only. As before, C_xx = R_ss + R_ww, and the estimator of the signal is

ŝ[n] = r_ss^T (R_ss + R_ww)^{-1} x

Equivalently, ŝ[n] = a^T x, where a = (R_ss + R_ww)^{-1} r_ss. To obtain a filtering interpretation we let h^(n)[k] = a_{n-k}. Then

ŝ[n] = Σ_{k=0}^{n} h^(n)[k] x[n-k]

where h^(n)[k] is a time-varying FIR filter. To find the impulse response h we note that a satisfies (R_ss + R_ww) a = r_ss.

Written out, the set of linear equations becomes

Σ_{k=0}^{n} h^(n)[k] r_xx[m-k] = r_ss[m], m = 0, 1, ..., n

These are the Wiener-Hopf filtering equations. For large enough n it can be shown that the filter becomes time invariant, and the Wiener-Hopf filtering equations can be written as

Σ_{k=0}^{∞} h[k] r_xx[m-k] = r_ss[m], m ≥ 0

The same set of equations results if we attempt to estimate s[n] based on the present and infinite past. This is termed the infinite Wiener filter. Let

ŝ[n] = Σ_{k=0}^{∞} h[k] x[n-k]
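For finite n the Wiener-Hopf equations are just a Toeplitz linear system in the filter taps. A sketch in NumPy (the AR(1) signal ACF and unit-power white noise below are my own illustrative choices, not from the slides):

```python
import numpy as np

# Finite Wiener-Hopf equations: sum_k h[k] r_xx[m-k] = r_ss[m], m = 0..n,
# solved as a Toeplitz system. Signal ACF and noise power are illustrative
# choices, not from the slides.
n = 9                                   # filter uses x[n], ..., x[0]
a1 = -0.8                               # AR(1) signal parameter a[1]
sigma_u2 = 1.0
lags = np.arange(n + 1)
r_ss = (sigma_u2 / (1 - a1**2)) * (-a1) ** lags   # r_ss[k], k >= 0
r_ww = np.zeros(n + 1)
r_ww[0] = 1.0                           # white noise, unit power
r_xx = r_ss + r_ww                      # x = s + w, s and w independent

# Symmetric Toeplitz matrix with entries r_xx[|m-k|]
R_xx = r_xx[np.abs(np.subtract.outer(lags, lags))]
h = np.linalg.solve(R_xx, r_ss)         # time-invariant approximation h[k]

print(np.round(h[:3], 3))
```

A general dense solve is O(n^3); in practice the Toeplitz structure lets Levinson-Durbin-type recursions solve the same system in O(n^2), which is why those recursions appear alongside Wiener filtering in most texts.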

and use the orthogonality principle. Then

E[(s[n] - ŝ[n]) x[n-l]] = 0, l = 0, 1, ...

and therefore the equations to be solved for the infinite Wiener filter impulse response are

Σ_{k=0}^{∞} h[k] r_xx[l-k] = r_ss[l], l ≥ 0

The smoothing estimator takes the form

ŝ[n] = Σ_{k=-∞}^{∞} a_k x[k]

and by letting h[k] = a_{n-k} we have the convolution sum

ŝ[n] = Σ_{k=-∞}^{∞} h[k] x[n-k]

The Wiener equation becomes

Σ_{k=-∞}^{∞} h[k] r_xx[l-k] = r_ss[l], -∞ < l < ∞

The difference from the filtering case is that now the equations must be satisfied for all l, and there is no constraint that h[k] be causal. Hence we can use Fourier transform techniques to solve for the impulse response:

H(f) = P_ss(f) / P_xx(f) = P_ss(f) / (P_ss(f) + P_ww(f))

If we define the SNR as η(f) = P_ss(f) / P_ww(f), the optimal filter frequency response becomes

H(f) = η(f) / (η(f) + 1)

Clearly the filter response satisfies 0 ≤ H(f) ≤ 1, and the Wiener smoother response is H(f) → 0 when η(f) → 0 and H(f) → 1 when η(f) → ∞.
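The frequency-domain smoother can be applied directly with the FFT. A sketch (an AR(1) signal in unit-variance white noise, both my own illustrative choices, not from the slides) that checks H(f) stays in [0, 1] and that the smoother lowers the error relative to the raw observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noncausal Wiener smoother H(f) = P_ss(f) / (P_ss(f) + P_ww(f)) applied
# in the frequency domain. Signal and noise models are illustrative
# choices, not from the slides.
N = 4096
a1 = -0.9
u = rng.standard_normal(N)
s = np.zeros(N)
for n in range(1, N):
    s[n] = -a1 * s[n - 1] + u[n]       # AR(1): s[n] + a[1] s[n-1] = u[n]
w = rng.standard_normal(N)             # white noise, unit variance
x = s + w

f = np.fft.fftfreq(N)
P_ss = 1.0 / np.abs(1 + a1 * np.exp(-2j * np.pi * f)) ** 2   # AR(1) PSD
P_ww = np.ones(N)                                            # white PSD
H = P_ss / (P_ss + P_ww)               # 0 <= H(f) <= 1

s_hat = np.fft.ifft(H * np.fft.fft(x)).real   # smoothed estimate

mse_raw = np.mean((x - s) ** 2)        # approx noise power, about 1
mse_wiener = np.mean((s_hat - s) ** 2)
print(mse_wiener < mse_raw)            # True: smoothing reduces the error
```

Note the circular-convolution caveat: multiplying FFTs implements a periodic convolution, which is a reasonable approximation here only because the filter's effective length is much shorter than the record.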

PREDICTION In the prediction problem we estimate θ = x[N-1+l] for l ≥ 1 based on x = [x[0], x[1], ..., x[N-1]]^T. We use C_xx = R_xx, and the cross-correlation vector C_θx = E[x[N-1+l] x^T] is denoted by r_xx^T. Then

x̂[N-1+l] = r_xx^T R_xx^{-1} x

To allow a filtering interpretation we let h[N-k] = a_k, so that

x̂[N-1+l] = Σ_{k=1}^{N} h[k] x[N-k]

and the filter coefficients satisfy

R_xx h = r_xx

The minimum MSE for the l-step linear predictor is

M = r_xx[0] - r_xx^T R_xx^{-1} r_xx

Assume now that x[n] is an AR(1) process with ACF

r_xx[k] = (σ_u² / (1 - a[1]²)) (-a[1])^{|k|}

Let l = 1 and solve for the h[k]:

Σ_{k=1}^{N} h[k] r_xx[m-k] = r_xx[m], m = 1, 2, ..., N

Substituting the AR(1) ACF gives

Σ_{k=1}^{N} h[k] (-a[1])^{m-k} = (-a[1])^m, m = 1, 2, ..., N

Solving, h[1] = -a[1] and h[k] = 0 for k = 2, ..., N. The one-step linear predictor is

x̂[N] = -a[1] x[N-1]

and the minimum MSE is M = σ_u².

Similarly, the l-step predictor is found by solving

Σ_{k=1}^{N} h[k] r_xx[m-k] = r_xx[m+l-1], m = 1, 2, ..., N

Substituting the ACF for an AR(1) process, this becomes

Σ_{k=1}^{N} h[k] (-a[1])^{m-k} = (-a[1])^{m+l-1}, m = 1, 2, ..., N

The impulse response is h[1] = (-a[1])^l with h[k] = 0 for k = 2, ..., N, so the l-step predictor is

x̂[N-1+l] = (-a[1])^l x[N-1]

and the minimum MSE for the l-step predictor is

M = r_xx[0] (1 - a[1]^{2l})

The predictor decays to zero as l increases, since |a[1]| < 1. This is also reflected in the minimum MSE, which is smallest for l = 1 and increases for larger l.
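The closed-form l-step predictor and its minimum MSE can be checked by simulation. A sketch (the AR(1) parameter and driving-noise power are my own illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

# l-step prediction of an AR(1) process x[n] = -a[1] x[n-1] + u[n].
# Parameter values are illustrative choices, not from the slides.
a1 = -0.8
sigma_u2 = 1.0
r0 = sigma_u2 / (1 - a1**2)             # r_xx[0]
l = 3

# Monte Carlo: many independent realizations evolved in parallel.
n_trials, burn = 200_000, 200
x = np.zeros(n_trials)
for _ in range(burn):                    # run to approximate stationarity
    x = -a1 * x + np.sqrt(sigma_u2) * rng.standard_normal(n_trials)
x_last = x.copy()                        # this plays the role of x[N-1]
for _ in range(l):                       # evolve l further steps
    x = -a1 * x + np.sqrt(sigma_u2) * rng.standard_normal(n_trials)

x_hat = (-a1) ** l * x_last              # l-step predictor
mse = np.mean((x - x_hat) ** 2)          # empirical prediction error
M_theory = r0 * (1 - a1 ** (2 * l))      # minimum MSE from the slides

# for l = 1 the formula reduces to sigma_u^2, as derived above
assert abs(r0 * (1 - a1**2) - sigma_u2) < 1e-12
print(round(mse, 2), round(M_theory, 2))
```

The empirical MSE agrees with r_xx[0](1 - a[1]^{2l}) to within Monte Carlo error, and since |(-a[1])^l| < 1 the prediction itself shrinks toward the mean as l grows.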

THANK YOU