ECE534, Spring 2018: Solutions for Problem Set #5

1 Mean Value and Autocorrelation Functions

Consider a random process $X(t)$ such that (i) $X(t) \in \{+1, -1\}$; (ii) the number of zero crossings, $N(t)$, in the interval $(0, t)$ is described by a Poisson process with rate $\lambda$; (iii) $X(0) = 1$. Compute the following:

(a) $P(X(t) = 1)$
(b) $P(X(t) = -1)$
(c) $E[X(t)]$
(d) $R_{XX}(t_1, t_2)$, $t_1, t_2 \ge 0$

Solution. Let $N(t)$ be a Poisson process with rate $\lambda$.

(a) Since $X(0) = 1$, the event $\{X(t) = 1\}$ is equivalent to the event $\{N(t) \text{ is even}\}$:
$$P(X(t) = 1) = \sum_{k=0}^{\infty} P(N(t) = 2k) = \sum_{k=0}^{\infty} e^{-\lambda t} \frac{(\lambda t)^{2k}}{(2k)!} = e^{-\lambda t} \cdot \frac{1}{2}\left(e^{\lambda t} + e^{-\lambda t}\right) = \frac{1}{2}\left(1 + e^{-2\lambda t}\right),$$
where the last step is due to the fact that $\sum_{k=0}^{\infty} \frac{x^{2k}}{(2k)!} = \frac{1}{2}(e^{x} + e^{-x})$.

(b) The complementary event corresponds to $\{N(t) \text{ is odd}\}$:
$$P(X(t) = -1) = 1 - P(X(t) = 1) = \frac{1}{2}\left(1 - e^{-2\lambda t}\right).$$

(c) The expectation is
$$E[X(t)] = (+1)\,P(X(t) = 1) + (-1)\,P(X(t) = -1) = e^{-2\lambda t}.$$

(d) For the autocorrelation function, let $x_0 = 1$, $x_1 = -1$ and note that
$$R_{XX}(t_1, t_2) = E[X(t_1)X(t_2)] = \sum_{i=0}^{1}\sum_{j=0}^{1} x_i x_j P(X(t_1) = x_i, X(t_2) = x_j) = \sum_{i=0}^{1}\sum_{j=0}^{1} x_i x_j P(X(t_1) = x_i)\,P(X(t_2) = x_j \mid X(t_1) = x_i). \quad (1)$$

We now note that for $t_2 \ge t_1$, the number of sign changes in the interval $(t_1, t_2)$ is the number of zero crossings in the same interval. The number of zero crossings is a Poisson process and $N(t_2) - N(t_1) \sim \mathrm{Poi}(\lambda(t_2 - t_1))$. Therefore,
$$P(X(t_2) = 1 \mid X(t_1) = 1) = P(X(t_2) = -1 \mid X(t_1) = -1) = \frac{1}{2}\left(1 + e^{-2\lambda(t_2 - t_1)}\right), \quad (2)$$
$$P(X(t_2) = -1 \mid X(t_1) = 1) = P(X(t_2) = 1 \mid X(t_1) = -1) = \frac{1}{2}\left(1 - e^{-2\lambda(t_2 - t_1)}\right). \quad (3)$$
By (1)-(3) and parts (a) and (b), we have $R_{XX}(t_1, t_2) = e^{-2\lambda(t_2 - t_1)}$ for $t_2 \ge t_1$. Using the symmetry of the autocorrelation function we conclude that
$$R_{XX}(t_1, t_2) = e^{-2\lambda|t_2 - t_1|}, \qquad t_1, t_2 \ge 0.$$
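As a quick numerical sanity check (not part of the original solution), the process can be simulated directly from its zero-crossing description; the Python/NumPy sketch below uses the arbitrary values $\lambda = 1$ and $t = 0.7$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 1.0, 0.7, 200_000

# X(0) = +1 and X flips sign at each zero crossing, so
# X(t) = +1 iff N(t) is even, where N(t) ~ Poisson(lam * t).
N = rng.poisson(lam * t, size=n_paths)
X = np.where(N % 2 == 0, 1.0, -1.0)

print("P(X(t)=1): empirical", (X == 1.0).mean(),
      " theory", 0.5 * (1 + np.exp(-2 * lam * t)))
print("E[X(t)]:   empirical", X.mean(),
      " theory", np.exp(-2 * lam * t))
```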

2 Low-pass Filtering in Discrete Time

Let $X(n)$ be a discrete-time random process with autocorrelation function $R_{XX}(m) = E[X(n)X(n+m)] = \sigma^2\delta(m)$. Consider further a discrete-time process $Y(n)$ described by the following difference equation:
$$Y(n) = X(n) + aY(n-1).$$
This difference equation represents a digital filter.

(a) Find the impulse response of this filter.
(b) Show that $R_{XY}(m) = E[Y(n)X(n-m)] = a^m R_{XY}(0)$.
(c) Find the autocorrelation function of the output, $R_{YY}(m) = E[Y(n)Y(n+m)]$. Note: use the fact that $R_{YY}(-m) = R_{YY}(m)$.

Solution.

(a) Option 1: Take the z-transform of the difference equation $Y(n) = X(n) + aY(n-1)$:
$$Y(z) = X(z) + az^{-1}Y(z) \implies H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 - az^{-1}}.$$
Then the (causal) inverse z-transform is the sought impulse response:
$$h(n) = a^n u(n),$$
where $u(n)$ denotes the unit step.

Option 2: Time-domain computation of the impulse response $h(n)$, which is the output $Y(n)$ corresponding to an impulse input $X(n) = \delta(n)$. That is, $h(n) = \delta(n) + ah(n-1)$. Assuming causality, $h(n) = 0$ for all $n < 0$ (which is an initial condition for this equation), and we have:
$$h(0) = \delta(0) + ah(-1) = 1, \quad h(1) = \delta(1) + ah(0) = a, \quad h(2) = \delta(2) + ah(1) = a^2, \quad \ldots, \quad h(n) = \delta(n) + ah(n-1) = a^n, \; n > 0.$$
Therefore, $h(n) = a^n u(n)$.
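Part (a) admits a one-line check, assuming SciPy is available: the recursion $Y(n) = X(n) + aY(n-1)$ maps to `lfilter` coefficients `b = [1]` and `a = [1, -a]` (the value $a = 0.8$ is arbitrary).

```python
import numpy as np
from scipy.signal import lfilter

a = 0.8
impulse = np.zeros(10)
impulse[0] = 1.0

# Y(n) = X(n) + a*Y(n-1)  <=>  H(z) = 1 / (1 - a z^{-1})
h = lfilter([1.0], [1.0, -a], impulse)
print(np.allclose(h, a ** np.arange(10)))   # True: h(n) = a^n u(n)
```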

(b) Option 1: We have $R_{XY}(m) := E[X(n-m)Y(n)]$. We examine two cases.

For $m < 0$, the input $X(n-m)$ is uncorrelated with any historical output $Y(n)$, so that $R_{XY}(m) = 0$ for all $m < 0$. This is due to the fact that $X(n)$ is a white sequence according to its autocorrelation function.

For $m \ge 0$:
$$R_{XY}(m) = E\{X(n-m)(X(n) + aY(n-1))\} = \sigma^2\delta(m) + aR_{XY}(m-1).$$
For all $m > 0$ we have $\delta(m) = 0$ and therefore (similarly to part (a)) $R_{XY}(m) = aR_{XY}(m-1)$, so that $R_{XY}(m) = a^m R_{XY}(0)$ for all $m \ge 0$. Combining the two cases yields $R_{XY}(m) = a^m R_{XY}(0)u(m)$.

Option 2: By convolution,
$$R_{XY}(m) = E[X(n-m)Y(n)] = E\Big[X(n-m)\sum_{j} h(j)X(n-j)\Big] = \sum_{j} h(j)R_{XX}(m-j) = (h * R_{XX})(m) = \sigma^2 a^m u(m).$$
Since $R_{XY}(0) = \sigma^2$, we get $R_{XY}(m) = R_{XY}(0)\,a^m u(m)$.

(c) Option 1:
$$R_{YY}(0) = E[Y(1)^2] = E[(X(1) + aY(0))^2] = \sigma^2 + a^2 R_{YY}(0) + 2aE[X(1)Y(0)] = \sigma^2 + a^2 R_{YY}(0),$$
where $E[X(1)Y(0)] = 0$ by part (b). Hence
$$R_{YY}(0) = \frac{\sigma^2}{1 - a^2}.$$
For $m > 0$, we have:
$$R_{YY}(m) = E[Y(m)Y(0)] = E[(X(m) + aY(m-1))Y(0)] = E[X(m)Y(0)] + aR_{YY}(m-1) = aR_{YY}(m-1).$$
Therefore, $R_{YY}(m) = a^m R_{YY}(0) = \frac{\sigma^2 a^m}{1 - a^2}$. Due to the symmetry of $R_{YY}$ we conclude with
$$R_{YY}(m) = \frac{\sigma^2 a^{|m|}}{1 - a^2}.$$

Option 2: For $m > 0$, we have
$$R_{YY}(m) = E[Y(m)Y(0)] = E\Big[\sum_{i=0}^{\infty} a^i X(m-i)\sum_{j=0}^{\infty} a^j X(-j)\Big] = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} a^{i+j}E[X(m-i)X(-j)] = \sigma^2\sum_{i=0}^{\infty}\sum_{j=0}^{\infty} a^{i+j}\delta(m+j-i) = \sigma^2\sum_{j=0}^{\infty} a^{m+2j} = \frac{\sigma^2 a^m}{1 - a^2},$$
and by symmetry we have $R_{YY}(m) = \frac{\sigma^2 a^{|m|}}{1 - a^2}$ (here $|a| < 1$ is needed for the sums to converge).
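The closed form for $R_{YY}(m)$ can also be checked empirically; a minimal sketch (arbitrary $a = 0.8$, $\sigma = 1$, and a Gaussian white input chosen for convenience):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
a, sigma, n = 0.8, 1.0, 500_000

x = rng.normal(0.0, sigma, size=n)       # white input: R_XX(m) = sigma^2 d(m)
y = lfilter([1.0], [1.0, -a], x)         # Y(n) = X(n) + a Y(n-1)

for m in range(4):
    emp = np.mean(y[m:] * y[:n - m])     # empirical R_YY(m)
    print(m, round(emp, 3), round(sigma**2 * a**m / (1 - a**2), 3))
```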

3 Wide Sense Stationarity

Consider the harmonic oscillator $X(t) = A\cos(\omega_0 t + \Theta)$, where $\omega_0$ is a constant and $A$ is a random variable.

(a) Let $\Theta = \theta_0$, where $\theta_0$ is a constant. Argue that $X(t)$ is not WSS for any value of $E[A]$.
(b) Show that $X(t)$ is WSS if $\Theta \sim \mathrm{Unif}[-\pi, \pi]$, independent of $A$.
(c) Continuing part (b), argue that $X(t)$ is mean ergodic in the m.s. sense.

Solution.

(a) In this case, the process $X(t) = A\cos(\omega_0 t + \theta_0)$ has the following mean and autocorrelation functions:
$$E[X(t)] = E[A]\cos(\omega_0 t + \theta_0), \qquad E[X(s)X(t)] = E[A^2]\cos(\omega_0 s + \theta_0)\cos(\omega_0 t + \theta_0).$$
If $E[A] \ne 0$, the mean is time-varying, and hence $X(t)$ is not WSS. If $E[A] = 0$, then $E[X(t)] = 0$ for all $t$, which is time-independent; nevertheless, the autocorrelation condition fails. Let $s_1 = -\theta_0/\omega_0$ and $s_2 = (\pi/2 - \theta_0)/\omega_0$. Then
$$E[X(s_1)X(s_1)] = E[A^2]\cos^2(0) = E[A^2], \qquad E[X(s_2)X(s_2)] = E[A^2]\cos^2(\pi/2) = 0.$$
Excluding the degenerate case $A = 0$ almost surely, the above two autocorrelation values both correspond to the time difference $\tau = s_1 - s_1 = s_2 - s_2 = 0$, yet they are different. Hence, $X(t)$ is not WSS in this case either.

(b) The process $X(t) = A\cos(\omega_0 t + \Theta)$ has mean function
$$E[X(t)] = E[A]\,E[\cos(\omega_0 t + \Theta)] = E[A]\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_0 t + \theta)\,d\theta = 0,$$
using the independence of $A$ and $\Theta$, and autocorrelation
$$E[X(t)X(s)] = E[A^2]\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_0 t + \theta)\cos(\omega_0 s + \theta)\,d\theta = \frac{E[A^2]}{4\pi}\int_{-\pi}^{\pi}\left[\cos(\omega_0(t+s) + 2\theta) + \cos(\omega_0(t-s))\right]d\theta = \frac{E[A^2]}{4\pi}\left\{2\pi\cos(\omega_0(t-s)) + \int_{-\pi}^{\pi}\cos(\omega_0(t+s) + 2\theta)\,d\theta\right\} = \frac{E[A^2]}{2}\cos(\omega_0(t-s)),$$
since the remaining integral vanishes. The mean is constant and the autocorrelation depends only on $t - s$; hence, $X(t)$ is WSS in this case.

(c) We want to show that $\frac{1}{T}\int_0^T X(t)\,dt \to \mu_X = 0$ in the m.s. sense. We have:
$$E\left[\left(\frac{1}{T}\int_0^T X(t)\,dt\right)^2\right] = \frac{1}{T^2}\,E\left[\left(A\int_0^T \cos(\omega_0 t + \Theta)\,dt\right)^2\right] = \frac{E[A^2]}{\omega_0^2 T^2}\,E\big[\underbrace{(\sin(\omega_0 T + \Theta) - \sin(\Theta))^2}_{\le\, 4}\big] \le \frac{4E[A^2]}{\omega_0^2 T^2} \to 0 \quad \text{as } T \to \infty.$$
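A rough numerical illustration of mean ergodicity: for a single realization of $(A, \Theta)$, the time average of one sample path approaches $\mu_X = 0$ as $T$ grows (sketch with the arbitrary choice $\omega_0 = 2\pi$):

```python
import numpy as np

rng = np.random.default_rng(2)
w0 = 2 * np.pi
A = rng.normal()                        # one realization of the amplitude
Theta = rng.uniform(-np.pi, np.pi)      # one realization of the phase

for T in [1, 10, 100, 1000]:
    t = np.linspace(0.0, T, 200_000)
    time_avg = np.mean(A * np.cos(w0 * t + Theta))   # ~ (1/T) * integral
    print(T, round(time_avg, 5))        # -> 0 = mu_X as T grows
```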

4 Wiener Filtering

Let $X(n) = S(n) + W(n)$ be the input to a (noncausal) Wiener filter aiming at optimally approximating the target signal $S(n)$ in the MSE sense. Assume that $S(n)$ and $W(n)$ are zero-mean, uncorrelated processes such that $X(n)$ is a WSS process.

(a) Find an expression for the z-transform of the optimal filter, $H_{\mathrm{opt}}(z)$.
(b) Let $R_{SS}(k) = \frac{10}{27}\left(\frac{1}{2}\right)^{|k|}$ and $R_{WW}(k) = \frac{2}{3}\delta(k)$, i.e., $W(n)$ is additive white noise. Compute the impulse response of the optimal filter $h_{\mathrm{opt}}(n)$, $n \in \mathbb{Z}$.

Solution.

(a) Note that $R_{SX}(n) = E[S(0)(S(n) + W(n))] = R_{SS}(n) + R_{SW}(n) = R_{SS}(n)$, and likewise $R_{XX}(n) = E[X(0)X(n)] = R_{SS}(n) + R_{WW}(n) + R_{SW}(n) + R_{WS}(n) = R_{SS}(n) + R_{WW}(n)$, since $S$ and $W$ are zero mean and uncorrelated. Therefore,
$$S_{SX}(z) = S_{SS}(z), \qquad S_{XX}(z) = S_{SS}(z) + S_{WW}(z).$$
As per the formula derived in class, we have
$$H_{\mathrm{opt}}(z) = \frac{S_{SX}(z)}{S_{XX}(z)} = \frac{S_{SS}(z)}{S_{SS}(z) + S_{WW}(z)}.$$

(b) We begin with a useful z-transform of a symmetric (two-sided) geometric sequence, valid for $0 < |a| < 1$:
$$\mathcal{Z}\left(a^{|n|}\right) = \frac{(1 - a^2)z}{(az - 1)(a - z)}, \qquad \mathrm{ROC} = \{|a| < |z| < 1/|a|\}. \quad (4)$$
Using this formula,
$$S_{SS}(z) = \frac{10}{27}\,\mathcal{Z}\left(\left(\tfrac{1}{2}\right)^{|k|}\right) = \frac{10}{9}\cdot\frac{z}{(2 - z)(2z - 1)},$$
and furthermore $S_{WW}(z) = \frac{2}{3}$. We therefore obtain:
$$H_{\mathrm{opt}}(z) = \frac{S_{SS}(z)}{S_{SS}(z) + S_{WW}(z)} = \frac{\frac{10z}{9(2-z)(2z-1)}}{\frac{10z}{9(2-z)(2z-1)} + \frac{2}{3}} = \frac{10z}{10z + 6(2-z)(2z-1)} = \frac{10z}{4(3-z)(3z-1)} = \frac{5z}{2(3-z)(3z-1)}.$$
This is of the form (4) with $a = \frac{1}{3}$ (up to scaling), hence
$$h_{\mathrm{opt}}(n) = \frac{5}{16}\left(\frac{1}{3}\right)^{|n|}, \qquad n \in \mathbb{Z}.$$
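A numerical cross-check of part (b): sample the spectra on the unit circle, form $H = S_{SS}/(S_{SS} + S_{WW})$, and compare the inverse DFT with $\frac{5}{16}3^{-|n|}$ (the grid size $N = 1024$ is arbitrary; aliasing is negligible since $h_{\mathrm{opt}}$ decays geometrically):

```python
import numpy as np

N = 1024
z = np.exp(2j * np.pi * np.arange(N) / N)      # z = e^{jw} on the unit circle

S_ss = (10 / 9) * z / ((2 - z) * (2 * z - 1))  # PSD of S (real on |z| = 1)
S_ww = 2 / 3                                   # PSD of W
H = S_ss / (S_ss + S_ww)

h = np.fft.ifft(H).real                        # impulse response samples
for n in range(4):
    print(n, round(h[n], 6), round((5 / 16) * 3.0 ** (-n), 6))
```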

5 Kalman Filtering (bonus problem)

Consider an unobservable signal defined by the recursion $X_n = X_{n-1}\varepsilon_n$, where $X_0 = 1$ and $\{\varepsilon_n\}_{n \ge 1}$ is a sequence of independent random variables such that $P(\varepsilon_n = 1) = p_n$ and $P(\varepsilon_n = 0) = 1 - p_n$, with $0 < p_n < 1$ for all $n$. Assume that we observe $Y_n = X_n + \xi_n$, where $\{\xi_n\}_{n \ge 1}$ is a sequence of i.i.d. random variables, independent of $\{\varepsilon_n\}_{n \ge 1}$, with density $f_\xi(x)$. Let $E\xi_n = 0$ and $E\xi_n^2 < \infty$ for any $n$.

(a) Find a suitable state-space model for the Kalman filter.
(b) Derive the Kalman filter.

Solution.

(a) Using the provided recursion for $X_n$, we can write:
$$X_n = p_n X_{n-1} + (\varepsilon_n - p_n)X_{n-1}.$$
Let $v_n = (\varepsilon_n - p_n)X_{n-1}$. We then have:
$$Ev_n = E[(\varepsilon_n - p_n)X_{n-1}] = E(\varepsilon_n - p_n)\,EX_{n-1} = 0.$$
Also, for $m \le n - 1$, we have:
$$E[v_n X_m] = E(\varepsilon_n - p_n)\,E[X_{n-1}X_m] = 0.$$
Moreover, $EX_n^2 = E\varepsilon_n^2\,EX_{n-1}^2 = p_n EX_{n-1}^2$, and therefore $EX_n^2 = \prod_{i=1}^{n} p_i$. This leads to:
$$Ev_n^2 = E(\varepsilon_n - p_n)^2\,EX_{n-1}^2 = p_n(1 - p_n)\prod_{i=1}^{n-1} p_i = (1 - p_n)\prod_{i=1}^{n} p_i.$$
Combining the above results, we obtain the following state-space model for the Kalman filter:
$$X_n = p_n X_{n-1} + v_n, \qquad Y_n = X_n + \xi_n.$$

(b) Let $\hat{X}_n = E[X_n \mid Y_{1:n}]$, $\Sigma_n = E(X_n - \hat{X}_n)^2$, and $P_n = Ev_n^2$. Then
$$\hat{X}_n = p_n\hat{X}_{n-1} + \frac{p_n^2\Sigma_{n-1} + P_n}{p_n^2\Sigma_{n-1} + P_n + E\xi_n^2}\left(Y_n - p_n\hat{X}_{n-1}\right),$$
$$\Sigma_n = \left(p_n^2\Sigma_{n-1} + P_n\right)\left(1 - \frac{p_n^2\Sigma_{n-1} + P_n}{p_n^2\Sigma_{n-1} + P_n + E\xi_n^2}\right)$$
are the corresponding Kalman filter recursions.
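A minimal simulation sketch of the recursions in part (b), assuming constant $p_n = p$ and Gaussian observation noise (both choices are illustrative and not part of the problem statement):

```python
import numpy as np

def kalman_step(x_hat, Sigma, y, p, P, xi_var):
    """One step of the scalar Kalman recursion derived in part (b)."""
    M = p ** 2 * Sigma + P          # one-step prediction error variance
    K = M / (M + xi_var)            # Kalman gain
    x_pred = p * x_hat              # predicted state p_n * X_hat_{n-1}
    return x_pred + K * (y - x_pred), (1 - K) * M

rng = np.random.default_rng(3)
p, xi_var = 0.9, 0.25               # assumed constants, chosen arbitrarily
x, x_hat, Sigma, EX2 = 1.0, 1.0, 0.0, 1.0   # X_0 = 1 is known exactly

for n in range(50):
    x *= rng.random() < p                        # X_n = X_{n-1} * eps_n
    y = x + rng.normal(0.0, np.sqrt(xi_var))     # Y_n = X_n + xi_n
    P = p * (1 - p) * EX2                        # E[v_n^2] = p(1-p) E[X_{n-1}^2]
    EX2 *= p                                     # E[X_n^2] = prod of p_i
    x_hat, Sigma = kalman_step(x_hat, Sigma, y, p, P, xi_var)

print("true state:", x, "  estimate:", round(x_hat, 3))
```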

6 Decorrelation

Let $X = [X_1, X_2]^T$ be a random vector such that
$$\mathrm{Cov}(X) = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$
Find a matrix $A$ such that the entries of the random vector $Y = AX$ are uncorrelated random variables.

Solution. We seek a matrix $A$ such that $Y = AX$ contains uncorrelated random variables. This implies that $\mathrm{Cov}(Y)$ has to be a diagonal matrix. Such an $A$ corresponds to the transpose of the modal matrix of $\mathrm{Cov}(X)$. To see this, let the eigenvalue decomposition of this matrix be $U\Lambda U^T$. Note that
$$\mathrm{Cov}(Y) = E\left[(Y - EY)(Y - EY)^T\right] = E\left[(AX - AEX)(AX - AEX)^T\right] = A\,\mathrm{Cov}(X)\,A^T = AU\Lambda U^T A^T. \quad (5)$$
Setting $A = U^T$ and using the orthonormality of $U$, i.e., $UU^T = U^TU = I$, we obtain:
$$\mathrm{Cov}(Y) = \Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$
Here, $\lambda_1, \lambda_2$ are the eigenvalues of $\mathrm{Cov}(X)$ and also the eigenvalues of $\mathrm{Cov}(Y)$. To find $A$, we first solve the characteristic polynomial $\det(\mathrm{Cov}(X) - \lambda I) = 0$. The roots of this polynomial are $\lambda_1 = 3$, $\lambda_2 = 1$. To find the corresponding eigenvectors, which correspond to the columns of $U$, we solve the equations
$$\mathrm{Cov}(X)u_1 = \lambda_1 u_1, \qquad \mathrm{Cov}(X)u_2 = \lambda_2 u_2,$$
taking into account that $u_1 \perp u_2$. This gives $u_1 = \frac{1}{\sqrt{2}}[1, 1]^T$ and $u_2 = \frac{1}{\sqrt{2}}[1, -1]^T$, so the resulting $A$ is
$$A = U^T = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$
Note: $A$ can also be chosen to whiten $\mathrm{Cov}(X)$. The corresponding choice is $A = \Lambda^{-1/2}U^T$. Plugging this choice into (5), we verify that $\mathrm{Cov}(Y) = I$.
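The computation can be reproduced with NumPy's symmetric eigendecomposition (note that `eigh` returns eigenvalues in ascending order, so here $\Lambda = \mathrm{diag}(1, 3)$):

```python
import numpy as np

cov_X = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

lam, U = np.linalg.eigh(cov_X)     # Cov(X) = U diag(lam) U^T, lam = [1, 3]
A = U.T                            # decorrelating transform

print(np.round(A @ cov_X @ A.T, 6))        # diagonal: entries of Y uncorrelated

A_w = np.diag(lam ** -0.5) @ U.T   # whitening variant A = Lambda^{-1/2} U^T
print(np.round(A_w @ cov_X @ A_w.T, 6))    # identity
```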

7 Gaussian Processes (bonus problem)

Consider two random processes $\{X_n\}_{n \ge 0}$ and $\{Y_n\}_{n \ge 0}$ generated by the following (nonlinear) recursion:
$$X_{n+1} = aX_n + \frac{X_n\epsilon_{n+1} + Y_n\zeta_{n+1}}{\sqrt{X_n^2 + Y_n^2}}, \qquad Y_{n+1} = bX_n + \frac{X_n\zeta_{n+1} - Y_n\epsilon_{n+1}}{\sqrt{X_n^2 + Y_n^2}},$$
where $\{\epsilon_n\}_{n \ge 1}$ and $\{\zeta_n\}_{n \ge 1}$ are i.i.d. Gaussian sequences with zero mean and unit variance. Moreover, assume that $\{\epsilon_n\}_{n \ge 1}$ and $\{\zeta_n\}_{n \ge 1}$ are independent and $a, b$ are constants. The initial pair $[X_0, Y_0]^T$ is assumed to be a Gaussian vector with zero mean and invertible covariance $\Sigma$.

(a) Show that the joint process $\{(X_n, Y_n)\}_{n \ge 0}$ is Gaussian.
(b) Derive the pdf of the vector $[X_n, Y_n]^T$.
(c) Derive the pdf of the vector $[X_0, \ldots, X_n, Y_0, \ldots, Y_n]^T$.
Note: use the Markov property of the processes $\{X_n\}_{n \ge 0}$ and $\{Y_n\}_{n \ge 0}$.

Solution.

(a) We provide a short description of the solution with the main ideas. The key observation is to rewrite the recursion pair as follows:
$$\begin{pmatrix} X_{n+1} \\ Y_{n+1} \end{pmatrix} = \begin{pmatrix} a & 0 \\ b & 0 \end{pmatrix}\begin{pmatrix} X_n \\ Y_n \end{pmatrix} + B_{X_n,Y_n}\begin{pmatrix} \epsilon_{n+1} \\ \zeta_{n+1} \end{pmatrix}, \qquad B_{X_n,Y_n} = \frac{1}{\sqrt{X_n^2 + Y_n^2}}\begin{pmatrix} X_n & Y_n \\ -Y_n & X_n \end{pmatrix}.$$
We now note that the matrix $B_{X_n,Y_n}$ is orthonormal by its structure:
$$B_{X_n,Y_n}^T B_{X_n,Y_n} = B_{X_n,Y_n}B_{X_n,Y_n}^T = I.$$
To establish Gaussianity of $\{Z_n\}$ with $Z_n = [X_n, Y_n]^T$, we need to show that $\Phi_n(u_{0:n}) = E\left[e^{ju_{0:n}^T Z_{0:n}}\right]$, where $Z_{0:n} = [Z_0^T, Z_1^T, \ldots, Z_n^T]^T$ and $u_{0:n} = [u_{0,X}, u_{0,Y}, u_{1,X}, u_{1,Y}, \ldots, u_{n,X}, u_{n,Y}]^T$, has an exponent which is a quadratic function of $u_{0:n}$. This can be performed by using the Markov property of $\{Z_n\}$ and the orthonormality of $B_{X_n,Y_n}$. Starting with
$$\Phi_n(u_{0:n}) = E\left[E\left[e^{ju_{0:n}^T Z_{0:n}} \,\middle|\, Z_{0:n-1}\right]\right] = E\left[e^{ju_{0:n-1}^T Z_{0:n-1}}\,E\left[e^{j(u_{n,X}X_n + u_{n,Y}Y_n)} \,\middle|\, X_{0:n-1}, Y_{0:n-1}\right]\right],$$
one can show that, since conditionally on the past $B_{X_{n-1},Y_{n-1}}[\epsilon_n, \zeta_n]^T$ is a standard bivariate Gaussian,
$$E\left[e^{j(u_{n,X}X_n + u_{n,Y}Y_n)} \,\middle|\, X_{0:n-1}, Y_{0:n-1}\right] = e^{j(au_{n,X} + bu_{n,Y})X_{n-1}}\,e^{-\frac{1}{2}\left(u_{n,X}^2 + u_{n,Y}^2\right)}.$$
Proceeding inductively on the term $E\left[e^{ju_{0:n-1}^T Z_{0:n-1}}e^{j(au_{n,X} + bu_{n,Y})X_{n-1}}\right]$, one can establish the aforementioned desired form.

(b) Since $Z_n$ is a Gaussian vector, the corresponding probability density function is totally characterized by the corresponding mean vector and covariance matrix. Clearly, $EZ_n = 0$, since $Z_0$ is zero mean. Also,
$$P_n = E\left[Z_n Z_n^T\right] = \begin{pmatrix} a & 0 \\ b & 0 \end{pmatrix}P_{n-1}\begin{pmatrix} a & 0 \\ b & 0 \end{pmatrix}^T + I, \qquad P_0 = \Sigma.$$
Therefore,
$$f_{Z_n}(z_n) = \frac{1}{2\pi\det(P_n)^{1/2}}\,e^{-\frac{1}{2}z_n^T P_n^{-1}z_n}.$$
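The covariance recursion in part (b) can be checked by Monte Carlo simulation of the nonlinear recursion itself; a sketch with arbitrary values of $a$, $b$, and $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, n_paths = 0.7, 0.3, 200_000
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])                  # invertible initial covariance
F = np.array([[a, 0.0],
              [b, 0.0]])

Z0 = rng.multivariate_normal([0.0, 0.0], Sigma, size=n_paths)
X, Y = Z0[:, 0], Z0[:, 1]
P = Sigma.copy()

for n in range(5):
    eps, zeta = rng.normal(size=n_paths), rng.normal(size=n_paths)
    r = np.sqrt(X**2 + Y**2)
    X, Y = (a * X + (X * eps + Y * zeta) / r,   # the nonlinear recursion
            b * X + (X * zeta - Y * eps) / r)
    P = F @ P @ F.T + np.eye(2)                 # P_n = F P_{n-1} F^T + I

print(np.round(np.cov(X, Y), 3))                # empirical Cov(Z_n)
print(np.round(P, 3))                           # theoretical P_n
```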

(c) Using the orthonormality of $B_{X_n,Y_n}$, it is easy to see that the noise pair
$$\begin{pmatrix} \tilde{\epsilon}_{n+1} \\ \tilde{\zeta}_{n+1} \end{pmatrix} := \frac{1}{\sqrt{X_n^2 + Y_n^2}}\begin{pmatrix} X_n & Y_n \\ -Y_n & X_n \end{pmatrix}\begin{pmatrix} \epsilon_{n+1} \\ \zeta_{n+1} \end{pmatrix}$$
is standard Gaussian and independent of the past. Employing the Markov property of $\{Z_n\}$, one can establish that
$$f_{X_0,\ldots,X_n,Y_0,\ldots,Y_n}(x_0,\ldots,x_n,y_0,\ldots,y_n) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}}e^{-\frac{(x_i - ax_{i-1})^2}{2}}\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}}e^{-\frac{(y_i - bx_{i-1})^2}{2}}\cdot\frac{1}{2\pi\det(\Sigma)^{1/2}}e^{-\frac{1}{2}z_0^T\Sigma^{-1}z_0},$$
with $z_0 = [x_0, y_0]^T$, by following a similar approach as in part (a).

8 Markov Chains again (countable state space)

Consider a discrete-time Markov chain $X$ with state space $\mathcal{X}$. Let the probability flow from a set $A \subseteq \mathcal{X}$ to its complement $A^c$ (with respect to $\mathcal{X}$) under a distribution $\pi$ be given by:
$$F(A, A^c) = \sum_{i \in A}\sum_{j \in A^c}\pi(i)\,p_{i,j}.$$
Theorem: $\pi$ is a stationary distribution if and only if $\sum_i \pi(i) = 1$ and $F(A, A^c) = F(A^c, A)$ for all $A \subseteq \mathcal{X}$.

Consider the following Markov chain:

[Figure 1: Markov chain. A birth-death chain on the states $\{0, 1, 2, \ldots\}$, moving from $i$ to $i+1$ with probability $p$ and from $i$ to $i-1$ (or staying at $0$) with probability $q$.]

where $p + q = 1$. Find the stationary distribution of this chain and specify a condition under which this distribution exists.
Note: You can find the stationary distribution using the classical approach. Nevertheless, the provided theorem can be used to simplify the derivation.

Solution.

Option 1 (without the theorem): A stationary distribution $\pi$ ($\boldsymbol{\pi}$ in (infinite) vector form) must satisfy $\boldsymbol{\pi}P = \boldsymbol{\pi}$. We therefore obtain the following balance equation:
$$p\,\pi(j-1) + q\,\pi(j+1) = \pi(j), \qquad j > 0.$$
The characteristic polynomial for this recursion is $qz^2 - z + p = 0$, with roots $\lambda_1 = 1$ and $\lambda_2 = \frac{p}{q}$. Therefore, the stationary distribution must have the form:
$$\pi(j) = c_1\lambda_1^j + c_2\lambda_2^j = c_1 + c_2\left(\frac{p}{q}\right)^j.$$
For convergence of $\sum_j \pi(j)$ we require $c_1 = 0$ and $\frac{p}{q} < 1$, i.e., $p < 1/2$. The value of $c_2$ can be obtained by requiring $\sum_j \pi(j) = 1$ and equals $c_2 = 1 - \frac{p}{q}$.

Option 2 (using the provided theorem): Pick $A = \{0, 1, \ldots, i\}$, whose complement is $A^c = \{i+1, i+2, \ldots\}$. The respective flows from $A$ to $A^c$ and vice versa are easy to compute, since the interaction between $A$ and $A^c$ occurs only between states $i$ and $i+1$:
$$F(A, A^c) = \pi(i)\,p, \qquad F(A^c, A) = \pi(i+1)\,q.$$
Equating the two flows, $\pi(i)p = \pi(i+1)q$, yields a geometric sequence $\pi(i) = \pi(0)\left(\frac{p}{q}\right)^i$. As before, $\sum_j \pi(j)$ converges only when $\frac{p}{q} < 1$, or $p < \frac{1}{2}$ (otherwise no stationary distribution exists, as the state is more likely to drift towards infinity and never return). Obtaining the stationary distribution is easy:
$$1 = \sum_{i \ge 0}\pi(i) = \pi(0)\sum_{i \ge 0}\left(\frac{p}{q}\right)^i = \frac{\pi(0)}{1 - \frac{p}{q}} \implies \pi(0) = 1 - \frac{p}{q}, \qquad \pi(i) = \left(1 - \frac{p}{q}\right)\left(\frac{p}{q}\right)^i.$$
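As a sanity check, one can truncate the chain to finitely many states and verify that $\pi(i) = (1 - p/q)(p/q)^i$ satisfies $\pi P = \pi$ up to the exponentially small truncated tail; the value $p = 0.3$ below is arbitrary:

```python
import numpy as np

p = 0.3
q = 1.0 - p
n = 60                                   # truncation level for the check

# Truncated transition matrix: right with prob p, left (or stay at 0) with q.
P = np.zeros((n, n))
P[0, 0] = q
for i in range(n - 1):
    P[i, i + 1] = p
for i in range(1, n):
    P[i, i - 1] = q
P[n - 1, n - 1] = p                      # fold the right move back at the edge

pi = (1 - p / q) * (p / q) ** np.arange(n)
print(np.abs(pi @ P - pi).max())         # ~ 0: flow balance, pi P = pi
print(pi.sum())                          # ~ 1 up to the tiny truncated tail
```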

Note: You can find the stationary distribution using the classical approach Nevertheless, the provided theorem can be used to simplify the derivation Option : (Without the theorem) A stationary distribution π (π in (infinite) vector form) must satisfy πp π We therefore obtain the following balance equation: pπ(j ) + qπ(j + ) π(j), j > 0 The characteristic polynomial for this recursion is: qz z + p 0, with roots λ and λ p q Therefore, the stationary distribution must have the form: π(j) c λ j + c λ j c + c ( p q ) j For convergence of j π(j) we require c 0 and p q <, ie, p < / The value of c can be obtained by requiring j π(j) and equals c p q Option : (Using the provided theorem) Pick A as A {0,, i}, whose complement is A c {i +, i +,, } The respective flows from A to A c and vice versa are easy to compute, since the interaction between A and A c occurs only between states i and i + : F (A, A c ) π(i)p, F (A c, A) π(i + )q By equating the two flows, π(i)p π(i + )q yields a geometric sequence π(i) π(0) ( ) p i q As before, j π(j) converges only when p q < or p < (otherwise no stationary distribution exists, as the state is more likely to drift towards and never to return) Obtaining the stationary distribution is easy: π(i) π(i) π(0) i0 ( p ) ( ) p i q q i0 ( ) p i π(0) q p, q