UCSD ECE153 Handout #30 Prof. Young-Han Kim Thursday, May 15, 2014 Homework Set #6 Due: Thursday, May 22, 2014


1. Linear estimator. Consider a channel with the observation Y = X + Z, where the signal X and the noise Z are uncorrelated Gaussian random variables. Let E[X] = 1, E[Z] = 2, σ_X² = 5, and σ_Z² = 8.

(a) Find the best MSE linear estimate of X given Y.

(b) Suppose your friend from Caltech tells you that he was able to derive an estimator with a lower MSE. Your friend from UCLA disagrees, saying that this is not possible because the signal and the noise are Gaussian, and hence the best linear MSE estimator is also the best MSE estimator. Could your UCLA friend be wrong?

2. Additive-noise channel with path gain. Consider the additive-noise channel shown in the figure below, where X and Z are zero mean and uncorrelated, and a and b are constants.

[Figure: the signal X is scaled by the path gain a, the noise Z is added, and the sum is scaled by b, so that Y = b(aX + Z).]

Find the MMSE linear estimate of X given Y and its MSE in terms only of σ_X, σ_Z, a, and b.

3. Image processing. A pixel signal X ~ U[−k, k] is digitized to obtain

X̃ = i + 1/2, if i < X ≤ i + 1, for i = −k, −k + 1, ..., k − 2, k − 1.

To improve the visual appearance, the digitized value X̃ is dithered by adding an independent noise Z with mean E(Z) = 0 and variance Var(Z) = N to obtain Y = X̃ + Z.

(a) Find the correlation of X̃ and Y.

(b) Find the best linear MSE estimate of X̃ given Y. Your answer should be in terms only of k, N, and Y.
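Problems 1 and 2 both reduce to the scalar LMMSE formula X̂ = E[X] + (Cov(X,Y)/Var(Y))(Y − E[Y]), with MSE = Var(X) − Cov(X,Y)²/Var(Y). The following is a minimal numerical sketch of this formula on the Problem 1 setup, assuming for simulation purposes that X and Z are independent (the problem only states that they are uncorrelated):

```python
import numpy as np

# Problem 1 setup. Assumption: X and Z are drawn independently here;
# the problem statement only guarantees that they are uncorrelated.
rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(1.0, np.sqrt(5.0), n)  # E[X] = 1, Var(X) = 5
Z = rng.normal(2.0, np.sqrt(8.0), n)  # E[Z] = 2, Var(Z) = 8
Y = X + Z

# Scalar LMMSE: Xhat = E[X] + Cov(X,Y)/Var(Y) * (Y - E[Y]).
# Here Cov(X,Y) = Var(X) = 5, Var(Y) = 5 + 8 = 13, and E[Y] = 3.
cov_xy, var_y = 5.0, 13.0
Xhat = 1.0 + (cov_xy / var_y) * (Y - 3.0)

mse_theory = 5.0 - cov_xy**2 / var_y      # Var(X) - Cov(X,Y)^2 / Var(Y)
mse_empirical = np.mean((X - Xhat) ** 2)
print(mse_theory, mse_empirical)          # both close to 40/13
```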

4. Covariance matrices. Which of the following matrices can be a covariance matrix? Justify your answer either by constructing a random vector X, as a function of the i.i.d. zero-mean unit-variance random variables Z₁, Z₂, and Z₃, with the given covariance matrix, or by establishing a contradiction.

(a) [ 1 2
      0 2 ]

(b) [ 2 1
      1 2 ]

(c) [ 1 1 1
      1 2 2
      1 2 3 ]

(d) [ 1 1 2
      1 2 3
      2 3 3 ]

5. Gaussian random vector. Given a Gaussian random vector X ~ N(µ, Σ), where µ = (1, 5, 2)ᵀ and

Σ = [ 1 1 0
      1 4 0
      0 0 9 ].

(a) Find the pdfs of

i. X₁,
ii. X₂ + X₃,
iii. 2X₁ + X₂ + X₃,
iv. X₃ given (X₁, X₂), and
v. (X₂, X₃) given X₁.

(b) What is P{2X₁ + X₂ − X₃ < 0}? Express your answer using the Q function.

(c) Find the joint pdf of Y = AX, where

A = [ 2 1 1
      1 1 1 ].

6. Gaussian Markov chain. Let X, Y, and Z be jointly Gaussian random variables with zero mean and unit variance, i.e., E(X) = E(Y) = E(Z) = 0 and E(X²) = E(Y²) = E(Z²) = 1. Let ρ_{X,Y} denote the correlation coefficient between X and Y, and let ρ_{Y,Z} denote the correlation coefficient between Y and Z. Suppose that X and Z are conditionally independent given Y.

(a) Find ρ_{X,Z} in terms of ρ_{X,Y} and ρ_{Y,Z}.

(b) Find the MMSE estimate of Z given (X, Y) and the corresponding MSE.

7. Prediction of an autoregressive process. Let X be a random vector with zero mean and covariance matrix

Σ_X = [ 1        α        α²       ···  α^{n−1}
        α        1        α        ···  α^{n−2}
        α²       α        1        ···  α^{n−3}
        ⋮        ⋮        ⋮        ⋱    ⋮
        α^{n−1}  α^{n−2}  α^{n−3}  ···  1 ]

for |α| < 1. Given that X₁, X₂, ..., X_{n−1} are observed, find the best linear MSE estimate (predictor) of X_n and compute its MSE.
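In Problem 7 the covariance has the form Σ_ij = α^{|i−j|}, and the orthogonality principle gives the predictor coefficients h = Σ_obs⁻¹ σ, where Σ_obs is the covariance of the observed (X₁, ..., X_{n−1}) and σ is their covariance with X_n. A small sketch for sanity-checking your derivation numerically; the values n = 5 and α = 0.7 are illustrative assumptions, not part of the problem:

```python
import numpy as np

# Problem 7 sketch. n and alpha are illustrative assumptions.
n, alpha = 5, 0.7
idx = np.arange(n)
Sigma = alpha ** np.abs(np.subtract.outer(idx, idx))  # Sigma_ij = alpha^|i-j|

Sigma_obs = Sigma[:n - 1, :n - 1]  # covariance of (X_1, ..., X_{n-1})
sigma = Sigma[:n - 1, n - 1]       # covariance of the observations with X_n

h = np.linalg.solve(Sigma_obs, sigma)  # LMMSE coefficients: Xhat_n = h^T X_obs
mse = Sigma[n - 1, n - 1] - sigma @ h  # Var(X_n) - sigma^T Sigma_obs^{-1} sigma

print(np.round(h, 6))  # which observations actually enter the predictor?
print(mse)             # compare against your closed-form answer
```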

8. Noise cancellation. A classical problem in statistical signal processing involves estimating a weak signal (e.g., the heartbeat of a fetus) in the presence of a strong interference (the heartbeat of its mother) by making two observations: one with the weak signal present and one without (by placing one microphone on the mother's belly and another close to her heart). The observations can then be combined to estimate the weak signal by cancelling out the interference. The following is a simple version of this application.

Let the weak signal X be a random variable with mean µ and variance P, and let the observations be Y₁ = X + Z₁ (Z₁ being the strong interference) and Y₂ = Z₁ + Z₂ (Z₂ being a measurement noise), where Z₁ and Z₂ are zero mean with variances N₁ and N₂, respectively. Assume that X, Z₁, and Z₂ are uncorrelated. Find the best linear MSE estimate of X given Y₁ and Y₂ and its MSE. Interpret the results.
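With two observations, the orthogonality principle gives the vector form X̂ = E[X] + Σ_{XY} Σ_Y⁻¹ (Y − E[Y]) with MSE = Var(X) − Σ_{XY} Σ_Y⁻¹ Σ_{YX}. A numerical sketch of Problem 8 under illustrative values (µ, P, N₁, N₂ below are assumptions; the problem asks for the answer in symbolic form):

```python
import numpy as np

# Problem 8 sketch. mu, P, N1, N2 are illustrative assumptions.
mu, P, N1, N2 = 1.0, 4.0, 2.0, 0.5

# Y = (Y1, Y2) with Y1 = X + Z1 and Y2 = Z1 + Z2; X, Z1, Z2 uncorrelated.
Sigma_Y = np.array([[P + N1, N1],
                    [N1, N1 + N2]])  # Cov(Y)
Sigma_XY = np.array([P, 0.0])        # Cov(X, Y1) = P, Cov(X, Y2) = 0

h = np.linalg.solve(Sigma_Y, Sigma_XY)  # LMMSE weights
mse = P - Sigma_XY @ h                  # P - Sigma_XY Sigma_Y^{-1} Sigma_YX

y = np.array([1.8, 0.6])                   # an example observation pair
xhat = mu + h @ (y - np.array([mu, 0.0]))  # E[Y1] = mu, E[Y2] = 0

print(h)    # the weight on Y2 is negative: Y2 is used to cancel Z1 out of Y1
print(mse)  # compare with P*N1/(P + N1), the MSE when using Y1 alone
print(xhat)
```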

Additional Exercises

Do not turn in solutions to these problems.

1. Worst noise distribution. Consider an additive noise channel Y = X + Z, where the signal X ~ N(0, P) and the noise Z has zero mean and variance N. Assume X and Z are independent. Find a distribution of Z that maximizes the minimum MSE of estimating X given Y, i.e., the distribution of the worst noise Z that has the given mean and variance. You need to justify your answer.

2. Jointly Gaussian random variables. Let X and Y be jointly Gaussian random variables with pdf

f_{X,Y}(x, y) = (1/(π√(3/4))) exp(−(1/2)(4x²/3 + 16y²/3 + 8xy/3 − 8x − 16y + 16)).

(a) Find E(X), E(Y), Var(X), Var(Y), and Cov(X, Y).

(b) Find the minimum MSE estimate of X given Y and its MSE.

3. Markov chain. Suppose X₁ and X₃ are conditionally independent given X₂. Show that

f(x₁, x₂, x₃) = f(x₁) f(x₂ | x₁) f(x₃ | x₂) = f(x₃) f(x₂ | x₃) f(x₁ | x₂).

In other words, if X₁ → X₂ → X₃ forms a Markov chain, then so does X₃ → X₂ → X₁.

4. Proof of Property 4. In Lecture Notes #6 it was stated that the conditionals of a Gaussian random vector are Gaussian. In this problem you will prove that fact: if (X, Y) is a zero-mean Gaussian random vector, then

X | {Y = y} ~ N(Σ_{XY} Σ_Y⁻¹ y, σ_X² − Σ_{XY} Σ_Y⁻¹ Σ_{YX}).

Justify each of the following steps of the proof.

(a) Let X̂ be the best MSE linear estimate of X given Y. Then X̂ and X − X̂ are individually zero-mean Gaussian. Find their variances.

(b) X̂ and X − X̂ are independent.

(c) Now write X = X̂ + (X − X̂). If Y = y, then X = Σ_{XY} Σ_Y⁻¹ y + (X − X̂).

(d) Now complete the proof.

Remark: This proof can be extended to vector X.
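For Additional Exercise 2, writing the exponent as −(1/2)(vᵀKv − 2bᵀv + c) with v = (x, y) identifies the precision matrix K = Σ⁻¹ and the mean µ = K⁻¹b. A short sketch for checking your hand computation, with K and b read off the pdf above:

```python
import numpy as np

# Additional Exercise 2: recover the Gaussian parameters from the exponent
# -(1/2) * (v^T K v - 2 b^T v + c), where v = (x, y).
K = np.array([[4/3, 4/3],    # x^2 coefficient 4/3; xy coefficient 8/3 = 2*K[0,1]
              [4/3, 16/3]])  # y^2 coefficient 16/3
b = np.array([4.0, 8.0])     # linear terms -8x - 16y correspond to -2 b^T v

Sigma = np.linalg.inv(K)  # covariance matrix
mu = Sigma @ b            # mean vector

print(mu)     # (E(X), E(Y)) -- compare with part (a)
print(Sigma)  # Var(X), Var(Y) on the diagonal; Cov(X, Y) off-diagonal
```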

5. Additive nonwhite Gaussian noise channel. Let Y_i = X + Z_i, i = 1, 2, ..., n, be n observations of a signal X ~ N(0, P). The additive noise random variables Z₁, Z₂, ..., Z_n are zero-mean jointly Gaussian random variables that are independent of X and have correlation E(Z_i Z_j) = N·2^{−|i−j|} for 1 ≤ i, j ≤ n.

(a) Find the best MSE estimate of X given Y₁, Y₂, ..., Y_n.

(b) Find the MSE of the estimate in part (a).

Hint: The coefficients of the best estimate are of the form hᵀ = [a b b ··· b b a].
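Since everything in this problem is jointly Gaussian, the best MSE estimate is linear: X̂ = Σ_{XY} Σ_Y⁻¹ Y with all means zero. A sketch with illustrative values (P, N, n below are assumptions, not from the problem) that can be used to check the coefficient pattern given in the hint:

```python
import numpy as np

# Additional Exercise 5 sketch; P, N, n are illustrative assumptions.
P, N, n = 1.0, 2.0, 6
i = np.arange(n)

Sigma_Z = N * 2.0 ** (-np.abs(np.subtract.outer(i, i)))  # E[Z_i Z_j]
Sigma_Y = P + Sigma_Z       # Cov(Y_i, Y_j) = P + E[Z_i Z_j]
Sigma_XY = P * np.ones(n)   # Cov(X, Y_i) = P

h = np.linalg.solve(Sigma_Y, Sigma_XY)  # Xhat = h^T Y (all means are zero)
mse = P - Sigma_XY @ h

print(np.round(h, 6))  # should exhibit the [a b ... b a] pattern in the hint
print(mse)
```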