Problem Set 7, Due March 22


EE126: Probability and Random Processes (SP 07)
Problem Set 7, due March 22
Lecturer: Jean C. Walrand    GSI: Daniel Preda, Assane Gueye

Problem 7.1. Let u and v be independent, standard normal random variables (i.e., u and v are independent Gaussian random variables with means of zero and variances of one). Let

x = u + 2v,    y = u − 2v.

1. Do x and y have a bivariate normal distribution? Explain.
2. Provide a formula for E[x | y].

Solution.
1. Recall that to write down the joint p.d.f. of a normal random vector, we need to invert its covariance matrix, so for the p.d.f. to be defined, the covariance matrix must be invertible. We also know that a jointly Gaussian random vector is characterized by its mean and covariance matrix, and (x, y) is jointly Gaussian because it is a linear transformation of the independent Gaussian pair (u, v). Hence, in this exercise we just need to verify that the covariance matrix of the vector (x, y)^T is invertible. Taking into account the fact that u and v are zero mean, we have

Σ = [ E[x²]  E[xy] ; E[xy]  E[y²] ] = [ 5  −3 ; −3  5 ],

and det(Σ) = 25 − 9 = 16 ≠ 0. Thus (x, y)^T has a joint p.d.f., i.e., x and y do have a bivariate normal distribution.

2. To compute E[x | y], we apply the formula given in the notes:

E[x | y] = (σ_xy / σ_y²) y = −(3/5) y.
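A quick numerical check of Problem 7.1 (a sketch, not part of the original solution; it assumes NumPy is available): estimate the covariance matrix of (x, y) and the slope of the best linear predictor of x from y by Monte Carlo, and compare them with [ 5  −3 ; −3  5 ] and −3/5.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    x = u + 2 * v
    y = u - 2 * v

    # Empirical covariance matrix of (x, y); should be close to [[5, -3], [-3, 5]].
    print(np.cov(np.vstack([x, y])))

    # For zero-mean jointly Gaussian (x, y), E[x | y] = (sigma_xy / sigma_y^2) * y,
    # so the least-squares slope of x on y should be close to -3/5.
    print(np.dot(x, y) / np.dot(y, y))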

Problem 7.2. Let X = (X1, X2, X3) be jointly Gaussian with joint pdf

f_X1,X2,X3(x1, x2, x3) = e^{−(x1² + x2² − √2 x1 x2 + x3²/2)} / (2π√π).

Find a transformation A such that Y = AX consists of independent Gaussian random variables.

Solution. In class we have seen that any jointly Gaussian random vector can be written as X = BY, where Y has i.i.d. standard normal components and BB^T = Σ. A formal way to compute B is to use the eigenvalue decomposition of the matrix Σ = UDU^T, where U is an orthogonal matrix and D is a diagonal matrix with non-negative diagonal entries. From this, B can be written as B = UD^{1/2}, and, if Σ is invertible, A can be chosen as A = B⁻¹ = D^{−1/2}U^T. If you are not familiar with the eigenvalue decomposition, you can still compute B by solving the matrix equation BB^T = Σ.

First let's figure out what Σ is. For that, notice that

f_X1,X2,X3(x1, x2, x3) = (1 / ((2π)^{3/2} |Σ|^{1/2})) e^{−(1/2)(x1, x2, x3) Σ⁻¹ (x1, x2, x3)^T}.

Expanding the term in the exponent and identifying it with the p.d.f. given in the exercise yields

Σ = BB^T =
    1      1/√2   0
    1/√2   1      0
    0      0      1

which gives

B =
    0.3827   0.9239   0
   −0.3827   0.9239   0
    0        0        1

and

A = B⁻¹ =
    1.3066  −1.3066   0
    0.5412   0.5412   0
    0        0        1

Note: here we have solved the exercise for Y having i.i.d. standard normal components... but independent is enough, and probably easier!
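The whitening recipe above is easy to check numerically. The sketch below (not part of the original solution; it assumes NumPy) builds Σ, forms B = U D^{1/2} and A = B⁻¹ = D^{−1/2} U^T from the eigendecomposition, and verifies that A Σ A^T is the identity, i.e., that Y = AX has independent, unit-variance components. Up to the ordering and signs of the eigenvectors, B and A match the matrices quoted in the solution.

    import numpy as np

    Sigma = np.array([[1.0, 1.0 / np.sqrt(2), 0.0],
                      [1.0 / np.sqrt(2), 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

    # Eigendecomposition Sigma = U diag(w) U^T (eigh returns ascending eigenvalues).
    w, U = np.linalg.eigh(Sigma)

    B = U @ np.diag(np.sqrt(w))              # satisfies B @ B.T == Sigma
    A = np.diag(1.0 / np.sqrt(w)) @ U.T      # A = B^{-1}

    print(np.round(B, 4))
    print(np.round(A, 4))
    print(np.round(A @ Sigma @ A.T, 8))      # 3x3 identity: the components of Y = A X are independent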

Problem 7.3. A signal of amplitude s = 2 is transmitted from a satellite but is corrupted by noise, and the received signal is Z = s + W, where W is noise. When the weather is good, W is normal with zero mean and variance 1. When the weather is bad, W is normal with zero mean and variance 4. Good and bad weather are equally likely. In the absence of any weather information:

1. Calculate the PDF of Z.
2. Calculate the probability that Z is between 1 and 3.

Solution.
1. Let G represent the event that the weather is good; we are given P(G) = 1/2. To find the PDF of Z, we first find the PDF of W, since Z = s + W = 2 + W. We know that given good weather, W ~ N(0, 1), and that given bad weather, W ~ N(0, 4). To find the unconditional PDF of W, we use the density version of the total probability theorem:

f_W(w) = P(G) f_{W|G}(w) + P(G^c) f_{W|G^c}(w) = (1/2) (1/√(2π)) e^{−w²/2} + (1/2) (1/(2√(2π))) e^{−w²/(2·4)}.

We now perform a change of variables using Z = 2 + W to find the PDF of Z:

f_Z(z) = f_W(z − 2) = (1/2) (1/√(2π)) e^{−(z−2)²/2} + (1/2) (1/(2√(2π))) e^{−(z−2)²/8}.

2. In principle, one can use the PDF determined in part 1 to compute the desired probability as ∫_1^3 f_Z(z) dz. It is much easier, however, to translate the event {1 ≤ Z ≤ 3} into a statement about W and then to apply the total probability theorem:

P(1 ≤ Z ≤ 3) = P(1 ≤ 2 + W ≤ 3) = P(−1 ≤ W ≤ 1)
             = P(G) P(−1 ≤ W ≤ 1 | G) + P(G^c) P(−1 ≤ W ≤ 1 | G^c)
             = (1/2) a + (1/2) b.

Since, conditional on either G or G^c, the random variable W is Gaussian, the conditional probabilities a and b can be expressed using the standard normal CDF Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt. Conditional on G, we have W ~ N(0, 1), so

a = Φ(1) − Φ(−1) = 2Φ(1) − 1.

(Try to show that Φ(−x) = 1 − Φ(x)!) Conditional on G^c, we have W ~ N(0, 4), so W/2 ~ N(0, 1) and

b = Φ(1/2) − Φ(−1/2) = 2Φ(1/2) − 1.

The final answer is thus

P(1 ≤ Z ≤ 3) = (1/2)(2Φ(1) − 1) + (1/2)(2Φ(1/2) − 1).
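The two-term mixture density and the final probability are easy to evaluate numerically. A small sketch (not part of the original solution; it assumes SciPy) that computes P(1 ≤ Z ≤ 3) both from the mixture CDF and from the closed form derived above:

    from scipy.stats import norm

    # Mixture PDF of Z: with probability 1/2, Z ~ N(2, 1); with probability 1/2, Z ~ N(2, 4).
    def f_Z(z):
        return 0.5 * norm.pdf(z, loc=2, scale=1) + 0.5 * norm.pdf(z, loc=2, scale=2)

    p_mixture = (0.5 * (norm.cdf(3, 2, 1) - norm.cdf(1, 2, 1))
                 + 0.5 * (norm.cdf(3, 2, 2) - norm.cdf(1, 2, 2)))
    p_closed = 0.5 * (2 * norm.cdf(1) - 1) + 0.5 * (2 * norm.cdf(0.5) - 1)
    print(p_mixture, p_closed)   # both approximately 0.533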

Problem 7.4. Suppose X, Y are independent Gaussian random variables with the same variance. Show that X − Y and X + Y are independent.

Solution. First note that, in general, to show that two random variables U and V are independent, we need to show that

P(U ∈ (u1, u2), V ∈ (v1, v2)) = P(U ∈ (u1, u2)) P(V ∈ (v1, v2))   for all u1, u2, v1, v2,

which can sometimes be very hard. Also, in general, uncorrelatedness is weaker than independence. However, for jointly Gaussian random variables we know that independence is equivalent to uncorrelatedness. So, to show that U and V are independent, it suffices to show that they are uncorrelated:

E[(U − E[U])(V − E[V])] = 0.

We apply this to U = X + Y and V = X − Y. First notice that E[U] = E[X] + E[Y] and E[V] = E[X] − E[Y]. Thus

E[(X + Y − (E[X] + E[Y]))(X − Y − (E[X] − E[Y]))]
   = E[(X − E[X])²] − E[(Y − E[Y])²] + E[(X − E[X])(Y − E[Y])] − E[(X − E[X])(Y − E[Y])]
   = E[(X − E[X])²] − E[(Y − E[Y])²]
   = 0,

because X and Y have equal variance. So X − Y and X + Y are uncorrelated; since they are jointly Gaussian (because they are linear combinations of the same independent random variables X and Y), they are independent.
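A numerical illustration of why the equal-variance assumption matters (a sketch, not part of the original solution; it assumes NumPy): with equal variances the sample correlation of X + Y and X − Y is essentially zero, while with unequal variances it is clearly not.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    def corr_sum_diff(sd_x, sd_y):
        x = rng.normal(3.0, sd_x, n)    # the means play no role
        y = rng.normal(-1.0, sd_y, n)
        return np.corrcoef(x + y, x - y)[0, 1]

    print(corr_sum_diff(2.0, 2.0))   # equal variances: ~0, so X+Y and X-Y are independent
    print(corr_sum_diff(2.0, 1.0))   # unequal variances: ~ (4 - 1) / 5 = 0.6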

Problem 7.5. Steve is trying to decide how to invest his wealth in the stock market. He decides to use a probabilistic model for the share price changes. He believes that, at the end of the day, the change in price Zi of a share of a particular company i is the sum of two components: Xi, due solely to the performance of the company, and Y, due to investor jitter. Assume that Y is a normal random variable with zero mean and variance equal to 1, independent of Xi. Find the PDF of Zi in each of the cases 1 to 3 below.

1. X1 is Gaussian with a mean of 1 dollar and variance equal to 4.
2. X2 is equal to −1 dollars with probability 0.5, and 3 dollars with probability 0.5.
3. X3 is uniformly distributed between −2.5 dollars and 4.5 dollars. (No closed-form expression is necessary.)
4. Being risk averse, Steve now decides to invest only in the first two companies. He uniformly chooses a portion V of his wealth to invest in company 1 (V is uniform between 0 and 1). Assuming that a share of company 1 or 2 costs 100 dollars, what is the expected value of the relative increase/decrease of his wealth?

Solution.
1. Because Z1 is the sum of two independent Gaussian random variables, X1 and Y, the PDF of Z1 is also Gaussian. The mean and variance of Z1 are the sum of the means and the sum of the variances of X1 and Y, respectively:

f_Z1(z1) = N(1, 5).

2. X2 is a two-valued discrete random variable, so it is convenient to use the total probability theorem and condition the PDF of Z2 on the outcome of X2. Because linear transformations of Gaussian random variables are also Gaussian, we obtain

f_Z2(z2) = (1/2) N(−1, 1) + (1/2) N(3, 1).

3. We can use convolution here to get the PDF of Z3:

f_Z3(z3) = ∫ N(0, 1)(y) f_X3(z3 − y) dy.

Using the fact that X3 is uniform from −2.5 to 4.5, this convolution reduces to

f_Z3(z3) = ∫_{z3 − 4.5}^{z3 + 2.5} N(0, 1)(y) (1/7) dy.

A normal table is needed to evaluate this integral for each value of z3.

4. Conditioning on the experimental value V = v and using iterated expectations,

E[Z] = E[E[Z | V]] = ∫_0^1 E[Z | V = v] f_V(v) dv,

with

E[Z | V = v] = v · 1 + (1 − v) [(−1)(1/2) + 3 · (1/2)] = v + (1 − v) = 1.

Plugging E[Z | V = v] = 1 and f_V(v) = 1 into the first equation, we get

E[Z] = ∫_0^1 1 · 1 dv = 1.

Since the problem asks for the relative change in wealth, we divide E[Z] by the 100-dollar share price: the expected relative change in wealth is 1 percent.
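For part 3, the integral can be written in terms of the standard normal CDF as f_Z3(z3) = (1/7)[Φ(z3 + 2.5) − Φ(z3 − 4.5)], which is easy to tabulate. A small sketch (not part of the original solution; it assumes NumPy and SciPy):

    import numpy as np
    from scipy.stats import norm

    # f_Z3(z) = (1/7) * [Phi(z + 2.5) - Phi(z - 4.5)], from the integration limits above.
    def f_Z3(z):
        return (norm.cdf(z + 2.5) - norm.cdf(z - 4.5)) / 7.0

    z = np.linspace(-10.0, 12.0, 22001)
    print(np.sum(f_Z3(z)) * (z[1] - z[0]))   # ~1: it is a valid density
    print(f_Z3(1.0))                          # ~1/7 near the middle of the support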

Problem 7.6. The Binary Phase-Shift Keying (BPSK) and Quadrature Phase-Shift Keying (QPSK) modulation schemes are shown in Figure 7.1. In both cases the symbols S are sent over an additive Gaussian noise channel, the noise having zero mean and variance σ². Assuming that the symbols are equally likely, compute the average error probability for each scheme. Which one is better?

Figure 7.1. QPSK and BPSK modulations: BPSK transmits one of the two symbols ±a on the real line; QPSK transmits one of the four symbols (±a, ±a) in the plane.

(Hint) Note that the comparison is not fair because the two schemes do not have the same rate (eggs and apples!). But let us compute the error probabilities and compare them.

Solution. In both cases, an error occurs when one symbol is sent and the receiver decides in favor of a different one. Since the symbols are equally likely, the decoding (or decision) rule is to decide in favor of the symbol Si that maximizes the likelihood of the received signal Y,

f_Y(y | Si) ∝ e^{−(y − Si)^T (y − Si) / (2σ²)},

where Si ∈ {−a, a} for BPSK and Si ∈ {(a, a), (−a, a), (−a, −a), (a, −a)} for QPSK. It is not hard to see that maximizing the likelihood is the same as minimizing ‖y − Si‖², the squared Euclidean distance between the received point and the symbol Si. Thus the decision rule is as follows (with Z ~ N(0, σ²) denoting the channel noise in each dimension).

BPSK: decide a if Y = S + Z ≥ 0 and decide −a otherwise. An error occurs if S = a and Y = a + Z < 0, i.e., Z < −a (or, symmetrically, if S = −a and Y = −a + Z ≥ 0, i.e., Z ≥ a). The corresponding conditional probabilities are P(Z < −a) = Φ(−a/σ) and P(Z ≥ a) = Φ(−a/σ), so the average error probability is

P_e^bpsk = Φ(−a/σ).

QPSK: decide (a, a) if the received point lies in the first quadrant, (−a, a) if it lies in the second quadrant, and so on. QPSK can be modeled as two independent BPSK channels, one per coordinate. Assume the signal S1 = (a, a) was sent; an error occurs if the received signal does not fall in the first quadrant. The probability of deciding correctly is P(Z1 ≥ −a) P(Z2 ≥ −a) = (1 − Φ(−a/σ))², so, given that S1 was sent, the error probability is

P_e^qpsk = 1 − (1 − Φ(−a/σ))² = 2Φ(−a/σ) − Φ(−a/σ)²,

which, by the symmetry of the problem, is also the average error probability. For a large enough, Φ(−a/σ) is small and

P_e^qpsk ≈ 2Φ(−a/σ) = 2 P_e^bpsk,

so BPSK has the smaller error probability, but it also carries half as many bits per symbol.
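A sketch to back up the closed forms (not part of the original solution; it assumes NumPy and SciPy, and the values of a and σ are just examples): compute P_e^bpsk and P_e^qpsk from Φ and check the QPSK value by Monte Carlo.

    import numpy as np
    from scipy.stats import norm

    a, sigma = 2.0, 1.0                 # example values, not specified in the problem
    q = norm.cdf(-a / sigma)            # Phi(-a/sigma)

    p_bpsk = q
    p_qpsk = 1 - (1 - q) ** 2           # = 2q - q^2

    # Monte Carlo check for QPSK: send S1 = (a, a) and declare an error whenever
    # the received point leaves the first quadrant.
    rng = np.random.default_rng(2)
    received = a + sigma * rng.standard_normal((1_000_000, 2))
    p_qpsk_mc = np.mean((received < 0).any(axis=1))

    print(p_bpsk, p_qpsk, p_qpsk_mc, p_qpsk / p_bpsk)   # the ratio approaches 2 for large a/sigma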

Problem 7.7. When using a multiple-access communication channel, a certain number of users N try to transmit information to a single receiver. If the real-valued random variable Xi represents the signal transmitted by user i, the received signal Y is

Y = X1 + X2 + ... + XN + Z,

where Z is an additive noise term that is independent of the transmitted signals and is assumed to be a zero-mean Gaussian random variable with variance σ_Z². We assume that the signals transmitted by different users are mutually independent and, furthermore, that they are identically distributed, each Gaussian with mean µ and variance σ².

1. If N is deterministically equal to 2, find the transform or the PDF of Y.
2. In most practical schemes, the number of users N is a random variable. Assume now that N is equally likely to be equal to 0, 1, ..., 10.
(a) Find the transform or the PDF of Y.
(b) Find the mean and variance of Y.
(c) Given that N ≥ 2, find the transform or the PDF of Y.

Solution.
1. Here it is easier to find the PDF of Y. Since Y is the sum of independent Gaussian random variables, Y is Gaussian with mean 2µ and variance 2σ² + σ_Z².

2. (a) The transform of N is

M_N(s) = (1/11)(1 + e^s + e^{2s} + ... + e^{10s}) = (1/11) ∑_{k=0}^{10} e^{ks}.

Since Y is the sum of a random sum of Gaussian random variables and an independent Gaussian random variable,

M_Y(s) = M_N(s)|_{e^s = M_X(s)} · M_Z(s)
       = (1/11) ∑_{k=0}^{10} (e^{sµ + s²σ²/2})^k · e^{s²σ_Z²/2}
       = (1/11) ∑_{k=0}^{10} e^{skµ + s²(kσ² + σ_Z²)/2}.

In general, this is not the transform of a Gaussian random variable.

(b) One can differentiate the transform to get the moments, but it is easier to use the laws of iterated expectation and conditional variance; with E[N] = 5 and var(N) = 10,

E[Y] = E[X] E[N] + E[Z] = 5µ,
var(Y) = E[N] var(X) + (E[X])² var(N) + var(Z) = 5σ² + 10µ² + σ_Z².

(c) Now, the transform of N given that N ≥ 2 is

M_N(s) = (1/9)(e^{2s} + ... + e^{10s}) = (1/9) ∑_{k=2}^{10} e^{ks}.

Therefore,

M_Y(s) = M_N(s)|_{e^s = M_X(s)} · M_Z(s)
       = (1/9) ∑_{k=2}^{10} (e^{sµ + s²σ²/2})^k · e^{s²σ_Z²/2}
       = (1/9) ∑_{k=2}^{10} e^{skµ + s²(kσ² + σ_Z²)/2}.
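The mean and variance formulas in part 2(b) are easy to confirm by simulating the random sum. A sketch (not part of the original solution; it assumes NumPy, and the parameter values are arbitrary examples):

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, sigma_z = 1.5, 2.0, 0.5        # example parameters, not from the problem
    n_trials = 1_000_000

    N = rng.integers(0, 11, n_trials)         # number of users, uniform on {0, 1, ..., 10}
    # Given N, the sum X1 + ... + XN is exactly N(mu*N, sigma^2 * N); add independent noise.
    x_sum = rng.normal(mu * N, sigma * np.sqrt(N))
    Y = x_sum + rng.normal(0.0, sigma_z, n_trials)

    print(Y.mean(), 5 * mu)                                      # E[Y] = 5*mu
    print(Y.var(), 5 * sigma**2 + 10 * mu**2 + sigma_z**2)       # var(Y) = 5*sigma^2 + 10*mu^2 + sigma_Z^2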