Solutions to Homework Set #6 (Prepared by Lele Wang)


1. Gaussian random vector
Given a Gaussian random vector $X \sim \mathcal{N}(\mu, \Sigma)$, where $\mu = (1, 5, 2)^T$ and
$$\Sigma = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 4 & 0 \\ 0 & 0 & 9 \end{bmatrix}.$$
(a) Find the pdfs of
    i. $X_1$,
    ii. $X_2 + X_3$,
    iii. $2X_1 + X_2 + X_3$,
    iv. $X_3$ given $(X_1, X_2)$, and
    v. $(X_2, X_3)$ given $X_1$.
(b) What is $P\{2X_1 + X_2 - X_3 < 0\}$? Express your answer using the Q function.
(c) Find the joint pdf of $AX$, where
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & -1 & 1 \end{bmatrix}.$$

Solution:

(a) i. The marginal pdfs of a jointly Gaussian pdf are Gaussian. Therefore $X_1 \sim \mathcal{N}(1, 1)$.

ii. Since $X_2$ and $X_3$ are independent ($\sigma_{23} = 0$), the variance of the sum is the sum of the variances. Also, the sum of two jointly Gaussian random variables is Gaussian. Therefore $X_2 + X_3 \sim \mathcal{N}(7, 13)$.

iii. Since $2X_1 + X_2 + X_3$ is a linear transformation of a Gaussian random vector,
$$2X_1 + X_2 + X_3 = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix},$$
it is a Gaussian random variable with mean and variance
$$\mu = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 5 \\ 2 \end{bmatrix} = 9
\quad\text{and}\quad
\sigma^2 = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix} \Sigma \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix} = 21.$$
Thus $2X_1 + X_2 + X_3 \sim \mathcal{N}(9, 21)$.

iv. Since $\sigma_{13} = 0$, $X_3$ and $X_1$ are uncorrelated and hence independent since they are jointly Gaussian; similarly, since $\sigma_{23} = 0$, $X_3$ and $X_2$ are independent. Therefore the conditional pdf of $X_3$ given $(X_1, X_2)$ is the same as the pdf of $X_3$, which is $\mathcal{N}(2, 9)$.

v. We use the general formula for the conditional pdf of one block of a Gaussian random vector given another:
$$X_2 \mid \{X_1 = x_1\} \sim \mathcal{N}\big(\Sigma_{21}\Sigma_{11}^{-1}(x_1 - \mu_1) + \mu_2,\; \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\big).$$
In the case of $(X_2, X_3)$ given $X_1$,
$$\Sigma_{11} = 1, \qquad \Sigma_{21} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \Sigma_{12} = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad \Sigma_{22} = \begin{bmatrix} 4 & 0 \\ 0 & 9 \end{bmatrix}.$$
Therefore the mean and covariance matrix of $(X_2, X_3)$ given $X_1 = x_1$ are
$$\mu_{(X_2, X_3) \mid X_1 = x_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}(x_1 - 1) + \begin{bmatrix} 5 \\ 2 \end{bmatrix} = \begin{bmatrix} x_1 + 4 \\ 2 \end{bmatrix},
\qquad
\Sigma_{(X_2, X_3) \mid X_1} = \begin{bmatrix} 4 & 0 \\ 0 & 9 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 9 \end{bmatrix}.$$
Thus $X_2$ and $X_3$ are conditionally independent given $X_1$. The conditional densities are $X_2 \mid \{X_1 = x_1\} \sim \mathcal{N}(x_1 + 4, 3)$ and $X_3 \mid \{X_1 = x_1\} \sim \mathcal{N}(2, 9)$.

(b) Let $Y = 2X_1 + X_2 - X_3$. As in part (a)iii, $Y$ is a linear transformation of a Gaussian random vector,
$$Y = \begin{bmatrix} 2 & 1 & -1 \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix},$$
so it is a Gaussian random variable with mean and variance
$$\mu = \begin{bmatrix} 2 & 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 5 \\ 2 \end{bmatrix} = 5
\quad\text{and}\quad
\sigma^2 = \begin{bmatrix} 2 & 1 & -1 \end{bmatrix} \Sigma \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix} = 21,$$
i.e., $Y \sim \mathcal{N}(5, 21)$. Thus
$$P\{Y < 0\} = P\left\{ \frac{Y - 5}{\sqrt{21}} < \frac{0 - 5}{\sqrt{21}} \right\} = Q\left( \frac{5}{\sqrt{21}} \right).$$

(c) In general, $AX \sim \mathcal{N}(A\mu_X, A\Sigma_X A^T)$. For this problem,
$$A\mu_X = \begin{bmatrix} 2 & 1 & 1 \\ 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 5 \\ 2 \end{bmatrix} = \begin{bmatrix} 9 \\ -2 \end{bmatrix},
\qquad
A\Sigma_X A^T = \begin{bmatrix} 2 & 1 & 1 \\ 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 4 & 0 \\ 0 & 0 & 9 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & -1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 21 & 6 \\ 6 & 12 \end{bmatrix}.$$
Thus
$$AX \sim \mathcal{N}\left( \begin{bmatrix} 9 \\ -2 \end{bmatrix}, \begin{bmatrix} 21 & 6 \\ 6 & 12 \end{bmatrix} \right).$$
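The numbers in parts (a)v and (c) are easy to check numerically. The following is a small verification sketch (not part of the original handout) that simply evaluates the conditional-Gaussian and linear-transformation formulas with NumPy for the $\mu$, $\Sigma$, and $A$ used above; the value x1 = 2 is an arbitrary illustrative observation.

```python
import numpy as np

mu = np.array([1.0, 5.0, 2.0])
Sigma = np.array([[1.0, 1.0, 0.0],
                  [1.0, 4.0, 0.0],
                  [0.0, 0.0, 9.0]])

# Part (a)v: conditional of (X2, X3) given X1 = x1.
x1 = 2.0                               # arbitrary observed value of X1
S11 = Sigma[0, 0]                      # variance of X1
S21 = Sigma[1:, 0]                     # Cov((X2, X3), X1)
S22 = Sigma[1:, 1:]                    # Cov((X2, X3))
cond_mean = S21 / S11 * (x1 - mu[0]) + mu[1:]
cond_cov = S22 - np.outer(S21, S21) / S11
print(cond_mean)                       # [x1 + 4, 2]
print(cond_cov)                        # [[3, 0], [0, 9]]

# Part (c): distribution of AX.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, -1.0, 1.0]])
print(A @ mu)                          # [9, -2]
print(A @ Sigma @ A.T)                 # [[21, 6], [6, 12]]
```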

2. Gaussian Markov chain
Let $X$, $Y$, and $Z$ be jointly Gaussian random variables with zero mean and unit variance, i.e., $E(X) = E(Y) = E(Z) = 0$ and $E(X^2) = E(Y^2) = E(Z^2) = 1$. Let $\rho_{X,Y}$ denote the correlation coefficient between $X$ and $Y$, and let $\rho_{Y,Z}$ denote the correlation coefficient between $Y$ and $Z$. Suppose that $X$ and $Z$ are conditionally independent given $Y$.
(a) Find $\rho_{X,Z}$ in terms of $\rho_{X,Y}$ and $\rho_{Y,Z}$.
(b) Find the MMSE estimate of $Z$ given $(X, Y)$ and the corresponding MSE.

Solution:

(a) From the definition of $\rho_{X,Z}$, we have
$$\rho_{X,Z} = \frac{\mathrm{Cov}(X, Z)}{\sigma_X \sigma_Z},$$
where
$$\mathrm{Cov}(X, Z) = E(XZ) - E(X)E(Z) = E(XZ), \qquad \sigma_X^2 = E(X^2) - E(X)^2 = 1, \qquad \sigma_Z^2 = E(Z^2) - E(Z)^2 = 1.$$
Thus $\rho_{X,Z} = E(XZ)$. Moreover, since $X$ and $Z$ are conditionally independent given $Y$,
$$E(XZ) = E\big(E(XZ \mid Y)\big) = E\big[E(X \mid Y)\,E(Z \mid Y)\big].$$
Now $E(X \mid Y)$ can be calculated from the bivariate Gaussian conditional density:
$$E(X \mid Y) = E(X) + \rho_{X,Y}\frac{\sigma_X}{\sigma_Y}\big(Y - E(Y)\big) = \rho_{X,Y} Y.$$
Similarly, $E(Z \mid Y) = \rho_{Y,Z} Y$. Therefore, combining the above,
$$\rho_{X,Z} = E(XZ) = E\big[E(X \mid Y)E(Z \mid Y)\big] = E(\rho_{X,Y}\rho_{Y,Z} Y^2) = \rho_{X,Y}\rho_{Y,Z} E(Y^2) = \rho_{X,Y}\rho_{Y,Z}.$$

(b) $X$, $Y$, and $Z$ are jointly Gaussian random variables, so the minimum MSE estimate of $Z$ given $(X, Y)$ is linear. We have
$$\Sigma_{(X,Y)^T} = \begin{bmatrix} 1 & \rho_{X,Y} \\ \rho_{X,Y} & 1 \end{bmatrix},
\qquad
\Sigma_{(X,Y)^T Z} = \begin{bmatrix} E(XZ) \\ E(YZ) \end{bmatrix} = \begin{bmatrix} \rho_{X,Z} \\ \rho_{Y,Z} \end{bmatrix},
\qquad
\Sigma_{Z(X,Y)^T} = \begin{bmatrix} \rho_{X,Z} & \rho_{Y,Z} \end{bmatrix}.$$
Therefore,
$$\hat Z = \Sigma_{Z(X,Y)^T}\Sigma_{(X,Y)^T}^{-1}\begin{bmatrix} X \\ Y \end{bmatrix}
= \begin{bmatrix} \rho_{X,Z} & \rho_{Y,Z} \end{bmatrix}\frac{1}{1 - \rho_{X,Y}^2}\begin{bmatrix} 1 & -\rho_{X,Y} \\ -\rho_{X,Y} & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix}
= \frac{1}{1 - \rho_{X,Y}^2}\begin{bmatrix} \rho_{X,Z} - \rho_{X,Y}\rho_{Y,Z} & \rho_{Y,Z} - \rho_{X,Y}\rho_{X,Z} \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix}
= \begin{bmatrix} 0 & \rho_{Y,Z} \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix},$$
where the last equality follows from the result of part (a). Thus
$$\hat Z = \rho_{Y,Z}\, Y.$$
The corresponding MSE is
$$\mathrm{MSE} = \Sigma_Z - \Sigma_{Z(X,Y)^T}\Sigma_{(X,Y)^T}^{-1}\Sigma_{(X,Y)^T Z}
= 1 - \begin{bmatrix} 0 & \rho_{Y,Z} \end{bmatrix}\begin{bmatrix} \rho_{X,Z} \\ \rho_{Y,Z} \end{bmatrix}
= 1 - \rho_{Y,Z}^2.$$
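As a quick sanity check of both parts, one can simulate jointly Gaussian $(X, Y, Z)$ with the required Markov structure and compare the empirical quantities with the formulas. The sketch below is illustrative only; the correlation values $\rho_{X,Y} = 0.8$ and $\rho_{Y,Z} = 0.6$ are arbitrary choices, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
rho_xy, rho_yz = 0.8, 0.6            # example values (not from the original problem)
n = 1_000_000

# Construct (X, Y, Z) so that X and Z are conditionally independent given Y,
# each with zero mean and unit variance.
Y = rng.standard_normal(n)
X = rho_xy * Y + np.sqrt(1 - rho_xy**2) * rng.standard_normal(n)
Z = rho_yz * Y + np.sqrt(1 - rho_yz**2) * rng.standard_normal(n)

print(np.corrcoef(X, Z)[0, 1], rho_xy * rho_yz)     # part (a): rho_XZ = rho_XY * rho_YZ
Zhat = rho_yz * Y                                    # part (b): MMSE estimate ignores X
print(np.mean((Z - Zhat)**2), 1 - rho_yz**2)         # MSE = 1 - rho_YZ^2
```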

3. Prediction of an autoregressive process
Let $X = (X_1, \ldots, X_n)$ be a random vector with zero mean and covariance matrix
$$\Sigma_X = \begin{bmatrix}
1 & \alpha & \alpha^2 & \cdots & \alpha^{n-1} \\
\alpha & 1 & \alpha & \cdots & \alpha^{n-2} \\
\vdots & & \ddots & & \vdots \\
\alpha^{n-1} & \alpha^{n-2} & \cdots & \alpha & 1
\end{bmatrix}$$
for $|\alpha| < 1$. $X_1, X_2, \ldots, X_{n-1}$ are observed; find the best linear MSE estimate (predictor) of $X_n$. Compute its MSE.

Solution: Define $Y = (X_1, \ldots, X_{n-1})^T$ and partition the covariance matrix as
$$\Sigma_X = \begin{bmatrix} \Sigma_Y & \Sigma_{Y X_n} \\ \Sigma_{X_n Y} & \sigma_{X_n}^2 \end{bmatrix},$$
where $\Sigma_Y$ is the $(n-1)\times(n-1)$ matrix with entries $(\Sigma_Y)_{ij} = \alpha^{|i-j|}$,
$$\Sigma_{Y X_n} = \begin{bmatrix} \alpha^{n-1} & \alpha^{n-2} & \cdots & \alpha \end{bmatrix}^T, \qquad
\Sigma_{X_n Y} = \begin{bmatrix} \alpha^{n-1} & \alpha^{n-2} & \cdots & \alpha \end{bmatrix}, \qquad
\sigma_{X_n}^2 = 1.$$
Therefore,
$$\hat X_n = \Sigma_{X_n Y}\Sigma_Y^{-1} Y = h^T Y \quad (\text{where } h^T = \Sigma_{X_n Y}\Sigma_Y^{-1})
= \begin{bmatrix} 0 & 0 & \cdots & 0 & \alpha \end{bmatrix} Y \quad (\text{since this } h^T \text{ satisfies } h^T\Sigma_Y = \Sigma_{X_n Y})
= \alpha X_{n-1},$$
and
$$\mathrm{MSE} = \sigma_{X_n}^2 - \Sigma_{X_n Y}\Sigma_Y^{-1}\Sigma_{Y X_n} = 1 - h^T\Sigma_{Y X_n}
= 1 - \begin{bmatrix} 0 & \cdots & 0 & \alpha \end{bmatrix}\begin{bmatrix} \alpha^{n-1} \\ \vdots \\ \alpha \end{bmatrix} = 1 - \alpha^2.$$
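The claim that $h^T = \Sigma_{X_n Y}\Sigma_Y^{-1}$ reduces to $[0, \ldots, 0, \alpha]$ can be verified numerically for any particular $n$ and $\alpha$. A minimal sketch (with arbitrary example values $\alpha = 0.7$, $n = 6$, not from the original problem):

```python
import numpy as np

alpha, n = 0.7, 6                      # example values; any |alpha| < 1 works
idx = np.arange(n)
Sigma_X = alpha ** np.abs(idx[:, None] - idx[None, :])   # Sigma_X[i, j] = alpha^|i-j|

Sigma_Y = Sigma_X[:-1, :-1]            # covariance of (X_1, ..., X_{n-1})
Sigma_XnY = Sigma_X[-1, :-1]           # Cov(X_n, (X_1, ..., X_{n-1}))

h = np.linalg.solve(Sigma_Y, Sigma_XnY)          # h^T = Sigma_XnY Sigma_Y^{-1}
print(np.round(h, 10))                           # [0, ..., 0, alpha]
print(1.0 - Sigma_XnY @ h, 1.0 - alpha**2)       # MSE = 1 - alpha^2
```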

4. Noise cancellation
A classical problem in statistical signal processing involves estimating a weak signal (e.g., the heartbeat of a fetus) in the presence of a strong interference (the heartbeat of its mother) by making two observations, one with the weak signal present and one without (by placing one microphone on the mother's belly and another close to her heart). The observations can then be combined to estimate the weak signal by cancelling out the interference. The following is a simple version of this application.

Let the weak signal $X$ be a random variable with mean $\mu$ and variance $P$, and let the observations be
$$Y_1 = X + Z_1 \quad (Z_1 \text{ being the strong interference}), \qquad Y_2 = Z_1 + Z_2 \quad (Z_2 \text{ is a measurement noise}),$$
where $Z_1$ and $Z_2$ are zero mean with variances $N_1$ and $N_2$, respectively. Assume that $X$, $Z_1$, and $Z_2$ are uncorrelated. Find the best linear MSE estimate of $X$ given $Y_1$ and $Y_2$ and its MSE. Interpret the results.

Solution: This is a vector linear MSE problem. Since $Z_1$ and $Z_2$ are zero mean, $\mu_X = \mu_{Y_1} = \mu$ and $\mu_{Y_2} = 0$. We first normalize the random variables by subtracting off their means to get
$$\tilde X = X - \mu, \qquad \tilde Y = \begin{bmatrix} Y_1 - \mu \\ Y_2 \end{bmatrix}.$$

Now, using the orthogonality principle, we can find the best linear MSE estimate $\hat{\tilde X}$ of $\tilde X$. To do so we first find
$$\Sigma_{\tilde Y} = \begin{bmatrix} P + N_1 & N_1 \\ N_1 & N_1 + N_2 \end{bmatrix}
\quad\text{and}\quad
\Sigma_{\tilde Y \tilde X} = \begin{bmatrix} P \\ 0 \end{bmatrix}.$$
Thus,
$$\hat{\tilde X} = \Sigma_{\tilde Y \tilde X}^T\Sigma_{\tilde Y}^{-1}\tilde Y
= \begin{bmatrix} P & 0 \end{bmatrix}\frac{1}{(P + N_1)(N_1 + N_2) - N_1^2}\begin{bmatrix} N_1 + N_2 & -N_1 \\ -N_1 & P + N_1 \end{bmatrix}\tilde Y
= \frac{P}{P(N_1 + N_2) + N_1 N_2}\begin{bmatrix} N_1 + N_2 & -N_1 \end{bmatrix}\tilde Y.$$
The best linear MSE estimate is $\hat X = \hat{\tilde X} + \mu$. Thus,
$$\hat X = \frac{P\big((N_1 + N_2)(Y_1 - \mu) - N_1 Y_2\big)}{P(N_1 + N_2) + N_1 N_2} + \mu
= \frac{P\big((N_1 + N_2)Y_1 - N_1 Y_2\big) + N_1 N_2\,\mu}{P(N_1 + N_2) + N_1 N_2}.$$
The MSE can be calculated as
$$\mathrm{MSE} = \sigma_X^2 - \Sigma_{\tilde Y \tilde X}^T\Sigma_{\tilde Y}^{-1}\Sigma_{\tilde Y \tilde X}
= P - \frac{P}{P(N_1 + N_2) + N_1 N_2}\begin{bmatrix} N_1 + N_2 & -N_1 \end{bmatrix}\begin{bmatrix} P \\ 0 \end{bmatrix}
= P - \frac{P^2(N_1 + N_2)}{P(N_1 + N_2) + N_1 N_2}
= \frac{P N_1 N_2}{P(N_1 + N_2) + N_1 N_2}.$$
The expression for the MSE makes sense. First, note that if $N_1$ and $N_2$ are held constant but $P$ goes to infinity, the MSE tends to $N_1 N_2/(N_1 + N_2)$. Next, note that if both $N_1$ and $N_2$ go to infinity, the MSE goes to $\sigma_X^2 = P$, i.e., the estimate becomes worthless. Finally, note that if either $N_1$ or $N_2$ goes to 0, the MSE also goes to 0: if $N_1 \to 0$, the first observation is noise free ($Y_1 = X$), and if $N_2 \to 0$, the second observation reveals the interference $Z_1$ exactly, so it can be cancelled from $Y_1$.
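A short simulation can confirm the estimator and its MSE. The sketch below uses arbitrary example values for $\mu$, $P$, $N_1$, $N_2$ (not from the original problem) and Gaussian noise for convenience; only the first and second moments matter for the linear MMSE result.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, P, N1, N2 = 2.0, 4.0, 1.0, 0.5     # example values, not from the original problem
n = 1_000_000

X = mu + np.sqrt(P) * rng.standard_normal(n)
Z1 = np.sqrt(N1) * rng.standard_normal(n)
Z2 = np.sqrt(N2) * rng.standard_normal(n)
Y1, Y2 = X + Z1, Z1 + Z2

den = P * (N1 + N2) + N1 * N2
Xhat = (P * ((N1 + N2) * Y1 - N1 * Y2) + N1 * N2 * mu) / den   # best linear estimate
print(np.mean((X - Xhat)**2))                                   # empirical MSE
print(P * N1 * N2 / den)                                        # predicted MSE
```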

Solutions to Additional Exercises

1. Markov chain
Suppose $X_1$ and $X_3$ are conditionally independent given $X_2$. Show that
$$f(x_1, x_2, x_3) = f(x_1)f(x_2 \mid x_1)f(x_3 \mid x_2) = f(x_3)f(x_2 \mid x_3)f(x_1 \mid x_2).$$
In other words, if $X_1 \to X_2 \to X_3$ forms a Markov chain, then so does $X_3 \to X_2 \to X_1$.

Solution: By the definition of conditional independence, $f(x_1, x_3 \mid x_2) = f(x_1 \mid x_2)f(x_3 \mid x_2)$. Therefore, using the definition of conditional density,
$$f(x_3 \mid x_1, x_2) = \frac{f(x_1, x_2, x_3)}{f(x_1, x_2)} = \frac{f(x_1, x_3 \mid x_2)f(x_2)}{f(x_1 \mid x_2)f(x_2)} = \frac{f(x_1 \mid x_2)f(x_3 \mid x_2)}{f(x_1 \mid x_2)} = f(x_3 \mid x_2).$$
We are given that $X_1$ and $X_3$ are independent given $X_2$. Then
$$f(x_1, x_2, x_3) = f(x_1)f(x_2 \mid x_1)f(x_3 \mid x_1, x_2) = f(x_1)f(x_2 \mid x_1)f(x_3 \mid x_2);$$
in this case $X_1 \to X_2 \to X_3$ is said to form a Markov chain. Similarly, since the same argument with the roles of $x_1$ and $x_3$ exchanged gives $f(x_1 \mid x_2, x_3) = f(x_1 \mid x_2)$,
$$f(x_1, x_2, x_3) = f(x_3)f(x_2 \mid x_3)f(x_1 \mid x_2, x_3) = f(x_3)f(x_2 \mid x_3)f(x_1 \mid x_2).$$
This shows that if $X_1 \to X_2 \to X_3$ is a Markov chain, then $X_3 \to X_2 \to X_1$ is also a Markov chain.

2. Proof of Property 4
In Lecture Notes #6 it was stated that conditionals of a Gaussian random vector are Gaussian. In this problem you will prove that fact. If $[X \ Y]^T$ is a zero-mean GRV, then
$$X \mid \{Y = y\} \sim \mathcal{N}\big(\Sigma_{XY}\Sigma_Y^{-1}y,\; \sigma_X^2 - \Sigma_{XY}\Sigma_Y^{-1}\Sigma_{YX}\big).$$
Justify each of the following steps of the proof.
(a) Let $\hat X$ be the best MSE linear estimate of $X$ given $Y$. Then $\hat X$ and $X - \hat X$ are individually zero-mean Gaussians. Find their variances.
(b) $\hat X$ and $X - \hat X$ are independent.
(c) Now write $X = \hat X + (X - \hat X)$. If $Y = y$, then $X = \Sigma_{XY}\Sigma_Y^{-1}y + (X - \hat X)$.
(d) Now complete the proof.
Remark: This proof can be extended to vector $X$.

Solution:
(a) Let $\hat X$ be the best MSE linear estimate of $X$ given $Y$. In the MSE vector case section of Lecture Notes #6 it was shown that $\hat X$ and $X - \hat X$ are individually zero-mean Gaussian random variables with variances $\Sigma_{XY}\Sigma_Y^{-1}\Sigma_{YX}$ and $\sigma_X^2 - \Sigma_{XY}\Sigma_Y^{-1}\Sigma_{YX}$, respectively.

(b) The random variables $\hat X$ and $X - \hat X$ are jointly Gaussian, since they are obtained by a linear transformation of the GRV $[X \ Y]^T$. By orthogonality, $\hat X$ and $X - \hat X$ are uncorrelated, so they are also independent. By the same reasoning, $X - \hat X$ and $Y$ are independent.
(c) Now write $X = \hat X + (X - \hat X)$. Then, given $Y = y$,
$$X = \Sigma_{XY}\Sigma_Y^{-1}y + (X - \hat X),$$
since $\hat X = \Sigma_{XY}\Sigma_Y^{-1}Y$ and $X - \hat X$ is independent of $Y$ (so its distribution is unchanged by the conditioning).
(d) Thus $X \mid \{Y = y\}$ is Gaussian with mean $\Sigma_{XY}\Sigma_Y^{-1}y$ and variance $\sigma_X^2 - \Sigma_{XY}\Sigma_Y^{-1}\Sigma_{YX}$.
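Steps (a) and (b) can also be checked by simulation for a concrete zero-mean GRV. The following sketch uses an arbitrary $2\times 2$ covariance matrix (not from the lecture notes) and verifies that the error $X - \hat X$ has the stated mean and variance and is uncorrelated with $Y$.

```python
import numpy as np

rng = np.random.default_rng(2)
# Example zero-mean GRV [X, Y]^T with scalar X and scalar Y (values are illustrative).
var_x, var_y, cov_xy = 2.0, 3.0, 1.5
C = np.array([[var_x, cov_xy], [cov_xy, var_y]])
X, Y = rng.multivariate_normal([0.0, 0.0], C, size=1_000_000).T

Xhat = cov_xy / var_y * Y              # best linear MSE estimate of X given Y
err = X - Xhat                         # estimation error X - Xhat

print(np.mean(err), np.var(err), var_x - cov_xy**2 / var_y)   # step (a): zero mean, variance
print(np.corrcoef(err, Y)[0, 1])                               # step (b): uncorrelated with Y
```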

3. Additive nonwhite Gaussian noise channel
Let $Y_i = X + Z_i$ for $i = 1, 2, \ldots, n$ be $n$ observations of a signal $X \sim \mathcal{N}(0, P)$. The additive noise random variables $Z_1, Z_2, \ldots, Z_n$ are zero-mean jointly Gaussian random variables that are independent of $X$ and have correlation $E(Z_i Z_j) = N/2^{|i-j|}$ for $1 \le i, j \le n$.
(a) Find the best MSE estimate of $X$ given $Y_1, Y_2, \ldots, Y_n$.
(b) Find the MSE of the estimate in part (a).
Hint: The coefficients of the best estimate are of the form $h^T = [\,a \ b \ b \ \cdots \ b \ b \ a\,]$.

Solution:
(a) Since $X$ and the observations are jointly Gaussian, the best MSE estimate of $X$ is linear, of the form
$$\hat X = \sum_{i=1}^{n} h_i Y_i.$$
We apply the orthogonality condition $E(XY_j) = E(\hat X Y_j)$ for $1 \le j \le n$:
$$P = \sum_{i=1}^{n} h_i E(Y_i Y_j) = \sum_{i=1}^{n} h_i E\big((X + Z_i)(X + Z_j)\big) = \sum_{i=1}^{n} h_i\left(P + \frac{N}{2^{|i-j|}}\right).$$
These are $n$ equations in the $n$ unknowns $h_1, \ldots, h_n$:
$$\begin{bmatrix} P \\ P \\ \vdots \\ P \\ P \end{bmatrix}
= \begin{bmatrix}
P + N & P + N/2 & P + N/4 & \cdots & P + N/2^{n-1} \\
P + N/2 & P + N & P + N/2 & \cdots & P + N/2^{n-2} \\
\vdots & & \ddots & & \vdots \\
P + N/2^{n-2} & \cdots & P + N/2 & P + N & P + N/2 \\
P + N/2^{n-1} & \cdots & P + N/4 & P + N/2 & P + N
\end{bmatrix}
\begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_{n-1} \\ h_n \end{bmatrix}.$$
By the hint, there are only 2 degrees of freedom, $a$ and $b$. Solving this system using the first 2 rows of the matrix, we obtain
$$h_1 = h_n = \frac{2P}{3N + (n+2)P}, \qquad h_2 = \cdots = h_{n-1} = \frac{P}{3N + (n+2)P}.$$

(b) The minimum mean square error is
$$\mathrm{MSE} = E\big[(X - \hat X)X\big] = P - \sum_{i=1}^{n} h_i E(Y_i X) = P\left(1 - \sum_{i=1}^{n} h_i\right)
= P\left(1 - \frac{(n+2)P}{3N + (n+2)P}\right) = \frac{3PN}{3N + (n+2)P}.$$
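The closed-form coefficients and MSE can be checked against a direct numerical solution of the orthogonality equations. The parameter values P = 2, N = 1, n = 8 below are arbitrary examples, not part of the problem.

```python
import numpy as np

P, N, n = 2.0, 1.0, 8                  # example values; the formulas hold for any P, N > 0
i = np.arange(n)
K_Z = N / 2.0 ** np.abs(i[:, None] - i[None, :])   # E[Z_i Z_j] = N / 2^|i-j|
Sigma_Y = P + K_Z                                   # E[Y_i Y_j] = P + N / 2^|i-j|

h = np.linalg.solve(Sigma_Y, P * np.ones(n))        # orthogonality equations
D = 3 * N + (n + 2) * P
print(np.round(h, 6))                               # ends: 2P/D, middle entries: P/D
print(2 * P / D, P / D)
print(P * (1 - h.sum()), 3 * P * N / D)             # MSE computed two ways
```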

4. Sufficient statistic
The bias of a coin is a random variable $P \sim \mathrm{U}[0, 1]$. Let $Z_1, Z_2, \ldots, Z_{10}$ be the outcomes of 10 flips of the coin. Thus $Z_i \sim \mathrm{Bern}(P)$, and $Z_1, Z_2, \ldots, Z_{10}$ are conditionally independent given $P$. If $X$ is the total number of heads, then $X \mid \{P = p\} \sim \mathrm{Binom}(10, p)$. Assuming that the total number of heads is 9, show that
$$f_{P \mid Z_1, \ldots, Z_{10}}(p \mid z_1, \ldots, z_{10}) = f_{P \mid X}(p \mid 9),$$
i.e., that the conditional pdf of the bias is independent of the order of the outcomes.

Solution: The tosses of the coin are conditionally independent given the bias, that is,
$$p_{Z_1, \ldots, Z_{10} \mid P}(z_1, \ldots, z_{10} \mid p) = p_{Z_1 \mid P}(z_1 \mid p)\,p_{Z_2 \mid P}(z_2 \mid p)\cdots p_{Z_{10} \mid P}(z_{10} \mid p).$$
Suppose that the order of the outcomes is nine heads followed by one tail. Then
$$p_{Z_1, \ldots, Z_{10} \mid P}(H, H, \ldots, H, T \mid p) = p^9(1 - p).$$
We use Bayes' rule to find the conditional pdf of $P$:
$$f_{P \mid Z_1, \ldots, Z_{10}}(p \mid H, \ldots, H, T) = \frac{p_{Z_1, \ldots, Z_{10} \mid P}(H, \ldots, H, T \mid p)\, f_P(p)}{p_{Z_1, \ldots, Z_{10}}(H, H, \ldots, H, T)}.$$
This expression is zero when $p < 0$ or $p > 1$, since the a priori pdf $f_P(p)$ is zero there. For $0 \le p \le 1$,
$$f_{P \mid Z_1, \ldots, Z_{10}}(p \mid H, \ldots, H, T)
= \frac{p^9(1 - p)}{\int_0^1 p^9(1 - p)\,dp}
= \frac{p^9(1 - p)}{1/110} = 110\,p^9(1 - p).$$
Note that the result is independent of the order of heads and tails. This is because the tosses are conditionally independent given $P$, so the conditional pmf of $(Z_1, \ldots, Z_{10})$ given $P = p$ is a function only of the number of heads; any ordering with nine heads and one tail, or conditioning on $X = 9$ directly, yields the same posterior $f_{P \mid X}(p \mid 9) = 110\,p^9(1 - p)$.

5. Gambling
Let $X_1, X_2, X_3, \ldots$ be independent random variables with the same mean $\mu > 0$ and the same variance $\sigma^2$. Find the limit of
$$P\left\{\frac{1}{n}\sum_{i=1}^{n} X_i < \frac{\mu}{2}\right\}$$
as $n \to \infty$.

Solution: Let $S_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ denote the sample mean. By the weak law of large numbers, $S_n$ converges to the mean $\mu$ in probability, i.e., $P\{|S_n - \mu| \ge \epsilon\} \to 0$ as $n \to \infty$ for every $\epsilon > 0$. Since $\mu > 0$, the event $\{S_n < \mu/2\}$ implies $\{|S_n - \mu| \ge \mu/2\}$, so taking $\epsilon = \mu/2$,
$$P\{S_n < \mu/2\} \le P\{|S_n - \mu| \ge \mu/2\} \to 0.$$
Therefore the limit is 0.
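The limit in the gambling problem can be illustrated by simulation: the fraction of runs in which the sample mean falls below $\mu/2$ shrinks as $n$ grows. The distribution, $\mu$, and $\sigma$ below are arbitrary choices (the WLLN argument needs only a finite variance).

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 1.0, 2.0                   # example values with mu > 0
trials = 10_000

for n in (10, 100, 1000):
    # X_i drawn i.i.d. with mean mu and variance sigma^2 (normal chosen for convenience).
    samples = mu + sigma * rng.standard_normal((trials, n))
    frac = np.mean(samples.mean(axis=1) < mu / 2)
    print(n, frac)                     # fraction of runs with sample mean below mu/2 -> 0
```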