UCSD ECE153 Handout #34 Prof. Young-Han Kim Tuesday, May 27, 2014. Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei)


1. Linear estimator. Consider a channel with the observation Y = XZ, where the signal X and the noise Z are uncorrelated Gaussian random variables. Let E[X] = 1, E[Z] = 2, σ_X^2 = 5, and σ_Z^2 = 8.

(a) Find the best MSE linear estimate of X given Y.

(b) Suppose your friend from Caltech tells you that he was able to derive an estimator with a lower MSE. Your friend from UCLA disagrees, saying that this is not possible because the signal and the noise are Gaussian, and hence the best linear MSE estimator will also be the best MSE estimator. Could your UCLA friend be wrong?

Solution:

(a) We know that the best linear estimate is given by the formula

    X̂ = (Cov(X,Y)/σ_Y^2)(Y − E(Y)) + E(X).

Note that X and Z being Gaussian and uncorrelated implies that they are independent. Therefore,

    E(Y) = E(XZ) = E(X)E(Z) = 2,
    E(XY) = E(X^2 Z) = E(X^2)E(Z) = (σ_X^2 + E^2(X))E(Z) = 12,
    E(Y^2) = E(X^2 Z^2) = E(X^2)E(Z^2) = (σ_X^2 + E^2(X))(σ_Z^2 + E^2(Z)) = 72,
    Cov(X,Y) = E(XY) − E(X)E(Y) = 10,
    σ_Y^2 = E(Y^2) − E^2(Y) = 68.

Using all of the above, we get

    X̂ = (10/68)(Y − 2) + 1 = (5/34)Y + 12/17.

(b) The fact that the best linear estimate equals the best MMSE estimate when the input and the noise are independent Gaussians is only known to be true for additive channels. For multiplicative channels this need not be the case in general. In the following, we prove that Y is not Gaussian by contradiction. Suppose Y is Gaussian; then Y ~ N(2, 68), so that

    f_Y(y) = (1/√(2π·68)) e^{−(y−2)^2/(2·68)}.

On the other hand, since Y is the product of two independent random variables, its pdf is

    f_Y(y) = ∫ (1/|x|) f_X(x) f_Z(y/x) dx.

But these two expressions are not consistent. Evaluating the second expression at y = 0 gives

    f_Y(0) = ∫ (1/|x|) f_X(x) f_Z(0) dx = f_Z(0) ∫ f_X(x)/|x| dx,

where f_Z(0) = (1/√(2π·8)) e^{−(0−2)^2/(2·8)} > 0 and the integral diverges (near x = 0 the integrand behaves like f_X(0)/|x|). This cannot equal the finite value (1/√(2π·68)) e^{−(0−2)^2/(2·68)} that the Gaussian assumption requires for f_Y(0), which is a contradiction. Hence X and Y are not jointly Gaussian, and we might indeed be able to derive an estimator with a lower MSE.
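The numbers above are easy to sanity-check by simulation. The following is a small numerical sketch (not part of the original handout; the library, seed, and sample size are arbitrary choices) that estimates the moments of Y = XZ, evaluates the MSE of the linear estimate X̂ = (5/34)Y + 12/17, and confirms that Y has clearly non-Gaussian kurtosis, consistent with part (b):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6
    X = rng.normal(1.0, np.sqrt(5.0), n)   # E[X] = 1, Var(X) = 5
    Z = rng.normal(2.0, np.sqrt(8.0), n)   # E[Z] = 2, Var(Z) = 8
    Y = X * Z

    # Should be close to E(Y) = 2, Var(Y) = 68, Cov(X,Y) = 10.
    print(Y.mean(), Y.var(), np.mean(X * Y) - X.mean() * Y.mean())

    # Linear MMSE estimate and its MSE, sigma_X^2 - Cov^2/sigma_Y^2 = 5 - 100/68.
    X_hat = (5 / 34) * Y + 12 / 17
    print(np.mean((X - X_hat) ** 2), 5 - 100 / 68)

    # Excess kurtosis of Y is far from 0, so Y is not Gaussian (part (b)).
    Yc = Y - Y.mean()
    print(np.mean(Yc ** 4) / Y.var() ** 2 - 3.0)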

2. Additive-noise channel with path gain. Consider the additive-noise channel shown in the figure below, where X and Z are zero mean and uncorrelated, and a and b are constants.

[Figure: the signal X is scaled by a, the noise Z is added, and the result is scaled by b, so that Y = b(aX + Z).]

Find the MMSE linear estimate of X given Y and its MSE in terms only of σ_X^2, σ_Z^2, a, and b.

Solution: By the theorem on MMSE linear estimation, we have

    X̂ = (Cov(X,Y)/σ_Y^2)(Y − E(Y)) + E(X).

Since X and Z are zero mean and uncorrelated, we have

    E(X) = 0,
    E(Y) = b(aE(X) + E(Z)) = 0,
    Cov(X,Y) = E(XY) − E(X)E(Y) = E(X · b(aX + Z)) = ab σ_X^2,
    σ_Y^2 = E(Y^2) − (E(Y))^2 = E(b^2(aX + Z)^2) = b^2 a^2 σ_X^2 + b^2 σ_Z^2.

Hence, the best linear MSE estimate of X given Y is

    X̂ = (ab σ_X^2)/(b^2 a^2 σ_X^2 + b^2 σ_Z^2) · Y = (a σ_X^2)/(b(a^2 σ_X^2 + σ_Z^2)) · Y,

and the corresponding MSE is

    MSE = σ_X^2 − Cov^2(X,Y)/σ_Y^2 = σ_X^2 − (a^2 σ_X^4)/(a^2 σ_X^2 + σ_Z^2) = (σ_X^2 σ_Z^2)/(a^2 σ_X^2 + σ_Z^2).
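As with Problem 1, the closed-form answer can be checked numerically. This sketch (not from the handout; a, b, σ_X^2, σ_Z^2 below are arbitrary choices) compares the formulas for the LMMSE coefficient and MSE against direct sample estimates:

    import numpy as np

    rng = np.random.default_rng(2)
    a, b, var_x, var_z, n = 1.5, 3.0, 2.0, 4.0, 10**6
    X = rng.normal(0.0, np.sqrt(var_x), n)
    Z = rng.normal(0.0, np.sqrt(var_z), n)
    Y = b * (a * X + Z)

    # Empirical LMMSE coefficient vs. a*var_x / (b*(a^2*var_x + var_z)).
    coeff = np.mean(X * Y) / np.mean(Y * Y)
    print(coeff, a * var_x / (b * (a**2 * var_x + var_z)))

    # Empirical MSE vs. var_x*var_z / (a^2*var_x + var_z).
    print(np.mean((X - coeff * Y) ** 2), var_x * var_z / (a**2 * var_x + var_z))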

3. Image processing. A pixel signal X ~ U[−k, k] is digitized to obtain

    X̃ = i + 1/2,  if i < X ≤ i + 1,  for i = −k, −k+1, ..., k−2, k−1.

To improve the visual appearance, the digitized value X̃ is dithered by adding an independent noise Z with mean E(Z) = 0 and variance Var(Z) = N to obtain Y = X̃ + Z.

(a) Find the correlation of X and Y.

(b) Find the best linear MSE estimate of X given Y. Your answer should be in terms only of k, N, and Y.

Solution:

(a) From the definition of X̃, we know

    P{X̃ = i + 1/2} = P{i < X ≤ i + 1} = 1/(2k).

Since Z is independent of X with zero mean, and by the law of total expectation,

    Cov(X,Y) = E(XY) − E(X)E(Y) = E(X(X̃ + Z)) = E(XX̃)
             = Σ_{i=−k}^{k−1} E[XX̃ | i < X ≤ i+1] P{i < X ≤ i+1}
             = Σ_{i=−k}^{k−1} ∫_i^{i+1} x (i + 1/2) (1/(2k)) dx
             = Σ_{i=−k}^{k−1} (2i+1)^2/(8k)
             = (4k^2 − 1)/12,

where the last step uses Σ_{i=1}^{k} i^2 = k(k+1)(2k+1)/6.

(b) We have

    E(X) = 0,   E(Y) = E(X̃) + E(Z) = 0,
    σ_Y^2 = Var(X̃) + Var(Z) = Σ_{i=−k}^{k−1} (i + 1/2)^2 (1/(2k)) + N = (4k^2 − 1)/12 + N.

Then the best linear MMSE estimate of X given Y is

    X̂ = (Cov(X,Y)/σ_Y^2)(Y − E(Y)) + E(X)
       = [(4k^2 − 1)/12] / [(4k^2 − 1)/12 + N] · Y
       = (4k^2 − 1)/(4k^2 − 1 + 12N) · Y.
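A quick simulation of the dithered quantizer confirms the covariance (4k^2 − 1)/12 and the LMMSE coefficient. This sketch is not part of the handout, and k and N below are arbitrary illustrative values:

    import numpy as np

    rng = np.random.default_rng(3)
    k, N, n = 4, 0.5, 10**6
    X = rng.uniform(-k, k, n)
    X_tilde = np.floor(X) + 0.5            # midpoint of the unit bin containing X
    Y = X_tilde + rng.normal(0.0, np.sqrt(N), n)

    cov_xy = np.mean(X * Y) - X.mean() * Y.mean()
    print(cov_xy, (4 * k**2 - 1) / 12)                                   # Cov(X,Y)
    print(cov_xy / Y.var(), (4 * k**2 - 1) / (4 * k**2 - 1 + 12 * N))    # LMMSE coefficient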

4. Covariance matrices. Which of the following matrices can be a covariance matrix? Justify your answer either by constructing a random vector X, as a function of the i.i.d. zero-mean unit-variance random variables Z_1, Z_2, and Z_3, with the given covariance matrix, or by establishing a contradiction.

Solution:

(a) This cannot be a covariance matrix because it is not symmetric.

(b) This is a covariance matrix: it is the covariance matrix

    [ 2 1 ]
    [ 1 2 ]

of X_1 = Z_1 + Z_2 and X_2 = Z_1 + Z_3.

(c) This is a covariance matrix: it is the covariance matrix

    [ 1 1 1 ]
    [ 1 2 2 ]
    [ 1 2 3 ]

of X_1 = Z_1, X_2 = Z_1 + Z_2, and X_3 = Z_1 + Z_2 + Z_3.

(d) This cannot be a covariance matrix. Suppose it were; then σ_13^2 = 9 > σ_11 σ_33 = 6, which contradicts the Cauchy–Schwarz inequality. You can also verify this by showing that the matrix is not positive semidefinite: for example, its determinant is negative, and one of its eigenvalues is negative (λ ≈ −0.8056). Alternatively, one can directly exhibit a vector x with x^T Σ x < 0, violating the definition of positive semidefiniteness.
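The positive-semidefiniteness test used in this problem is easy to automate. The sketch below (not part of the handout; the helper-function name is an illustrative choice) checks matrices (b) and (c), the two reconstructed explicitly in the solution above, and verifies the constructions X = AZ, for which Cov(X) = AA^T:

    import numpy as np

    # A matrix is a valid covariance matrix iff it is symmetric positive semidefinite.
    def is_covariance(S, tol=1e-10):
        S = np.asarray(S, dtype=float)
        return np.allclose(S, S.T) and np.linalg.eigvalsh(S).min() >= -tol

    B = np.array([[2, 1], [1, 2]])                      # Cov of (Z1+Z2, Z1+Z3)
    C = np.array([[1, 1, 1], [1, 2, 2], [1, 2, 3]])     # Cov of (Z1, Z1+Z2, Z1+Z2+Z3)
    print(is_covariance(B), is_covariance(C))           # True True

    # The constructions X = A Z with i.i.d. unit-variance Z give Cov(X) = A A^T.
    A_b = np.array([[1, 1, 0], [1, 0, 1]])              # X1 = Z1+Z2, X2 = Z1+Z3
    A_c = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])   # X1 = Z1, X2 = Z1+Z2, X3 = Z1+Z2+Z3
    print(np.array_equal(A_b @ A_b.T, B), np.array_equal(A_c @ A_c.T, C))   # True True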

5. Gaussian random vector. Given a Gaussian random vector X ~ N(µ, Σ), where µ = (1, 5, 2)^T and

    Σ = [ 1 1 0 ]
        [ 1 4 0 ]
        [ 0 0 9 ],

(a) find the pdfs of

    i.   X_1,
    ii.  X_2 + X_3,
    iii. 2X_1 + X_2 + X_3,
    iv.  X_3 given (X_1, X_2), and
    v.   (X_2, X_3) given X_1;

(b) what is P{2X_1 + X_2 − X_3 < 0}? Express your answer using the Q function;

(c) find the joint pdf of Y = AX, where

    A = [ 2  1  1 ]
        [ 1 −1  1 ].

Solution:

(a) i. The marginal pdfs of a jointly Gaussian pdf are Gaussian. Therefore X_1 ~ N(1, 1).

ii. Since X_2 and X_3 are independent (σ_23 = 0), the variance of the sum is the sum of the variances. Also, the sum of two jointly Gaussian random variables is Gaussian. Therefore X_2 + X_3 ~ N(7, 13).

iii. Since 2X_1 + X_2 + X_3 is a linear transformation of the Gaussian random vector X,

    2X_1 + X_2 + X_3 = [2 1 1] X,

it is Gaussian with mean and variance

    µ = [2 1 1](1, 5, 2)^T = 9   and   σ^2 = [2 1 1] Σ [2 1 1]^T = 21.

Thus 2X_1 + X_2 + X_3 ~ N(9, 21).

iv. Since σ_13 = 0, X_3 and X_1 are uncorrelated and hence independent, since they are jointly Gaussian; similarly, since σ_23 = 0, X_3 and X_2 are independent. Therefore the conditional pdf of X_3 given (X_1, X_2) is the same as the pdf of X_3, which is N(2, 9).

v. We use the general formula for the conditional Gaussian pdf:

    X_2 | {X_1 = x_1} ~ N( Σ_21 Σ_11^{-1}(x_1 − µ_1) + µ_2 ,  Σ_22 − Σ_21 Σ_11^{-1} Σ_12 ).

In the case of (X_2, X_3) given X_1,

    Σ_11 = [1],   Σ_12 = [1 0],   Σ_21 = [1 0]^T,   Σ_22 = [ 4 0 ]
                                                           [ 0 9 ].

Therefore the conditional mean and covariance of (X_2, X_3) given X_1 = x_1 are

    µ_{(X_2,X_3)|X_1=x_1} = [1 0]^T (x_1 − 1) + [5 2]^T = [x_1 + 4, 2]^T,

    Σ_{(X_2,X_3)|X_1} = [ 4 0 ]  −  [1 0]^T [1 0]  =  [ 3 0 ]
                        [ 0 9 ]                       [ 0 9 ].

Thus X_2 and X_3 are conditionally independent given X_1, and the conditional densities are X_2 | {X_1 = x_1} ~ N(x_1 + 4, 3) and X_3 | {X_1 = x_1} ~ N(2, 9).

(b) Let W = 2X_1 + X_2 − X_3. As in part (a) iii, W is a linear transformation of a Gaussian random vector,

    W = [2 1 −1] X,

so it is Gaussian with mean [2 1 −1](1, 5, 2)^T = 5 and variance [2 1 −1] Σ [2 1 −1]^T = 21. Thus W ~ N(5, 21), and

    P{W < 0} = P{ (W − 5)/√21 < (0 − 5)/√21 } = Q(5/√21).

(c) In general, AX ~ N(Aµ_X, AΣ_X A^T). For this problem,

    µ_Y = Aµ_X = (9, −2)^T,
    Σ_Y = AΣ_X A^T = [ 21  6 ]
                     [  6 12 ].

Thus Y ~ N((9, −2)^T, Σ_Y) with Σ_Y as above.
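All of the computations in this problem are matrix algebra on (µ, Σ), so they can be reproduced in a few lines. The sketch below is not part of the handout (the conditioning value x_1 is an arbitrary choice) and recomputes parts (a) iii, (a) v, (b), and (c):

    import numpy as np
    from math import erfc, sqrt

    mu = np.array([1.0, 5.0, 2.0])
    Sig = np.array([[1.0, 1.0, 0.0],
                    [1.0, 4.0, 0.0],
                    [0.0, 0.0, 9.0]])

    # (a) iii: 2X1 + X2 + X3 ~ N(a^T mu, a^T Sig a) = N(9, 21).
    a = np.array([2.0, 1.0, 1.0])
    print(a @ mu, a @ Sig @ a)

    # (a) v: (X2, X3) given X1 = x1; conditional mean (x1 + 4, 2), covariance diag(3, 9).
    x1 = 0.7
    m_cond = mu[1:] + Sig[1:, :1] @ np.linalg.inv(Sig[:1, :1]) @ (np.array([x1]) - mu[:1])
    S_cond = Sig[1:, 1:] - Sig[1:, :1] @ np.linalg.inv(Sig[:1, :1]) @ Sig[:1, 1:]
    print(m_cond, S_cond)

    # (b) P{2X1 + X2 - X3 < 0} = Q(5 / sqrt(21)), with Q(x) = erfc(x / sqrt(2)) / 2.
    c = np.array([2.0, 1.0, -1.0])
    print(0.5 * erfc((c @ mu) / sqrt(c @ Sig @ c) / sqrt(2.0)))

    # (c) Y = AX ~ N(A mu, A Sig A^T) = N((9, -2), [[21, 6], [6, 12]]).
    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, -1.0, 1.0]])
    print(A @ mu, A @ Sig @ A.T)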

6. Gaussian Markov chain. Let X, Y, and Z be jointly Gaussian random variables with zero mean and unit variance, i.e., E(X) = E(Y) = E(Z) = 0 and E(X^2) = E(Y^2) = E(Z^2) = 1. Let ρ_{X,Y} denote the correlation coefficient between X and Y, and let ρ_{Y,Z} denote the correlation coefficient between Y and Z. Suppose that X and Z are conditionally independent given Y.

(a) Find ρ_{X,Z} in terms of ρ_{X,Y} and ρ_{Y,Z}.

(b) Find the MMSE estimate of Z given (X, Y) and the corresponding MSE.

Solution:

(a) From the definition of ρ_{X,Z}, we have

    ρ_{X,Z} = Cov(X,Z)/(σ_X σ_Z),

where

    Cov(X,Z) = E(XZ) − E(X)E(Z) = E(XZ),
    σ_X^2 = E(X^2) − E^2(X) = 1,
    σ_Z^2 = E(Z^2) − E^2(Z) = 1.

Thus ρ_{X,Z} = E(XZ). Moreover, since X and Z are conditionally independent given Y,

    E(XZ) = E(E(XZ | Y)) = E[E(X | Y) E(Z | Y)].

Now E(X | Y) can be easily calculated from the bivariate Gaussian conditional density:

    E(X | Y) = E(X) + (ρ_{X,Y} σ_X / σ_Y)(Y − E(Y)) = ρ_{X,Y} Y.

Similarly, E(Z | Y) = ρ_{Y,Z} Y.

Therefore, combining the above,

    ρ_{X,Z} = E(XZ) = E[E(X | Y) E(Z | Y)] = E(ρ_{X,Y} ρ_{Y,Z} Y^2) = ρ_{X,Y} ρ_{Y,Z} E(Y^2) = ρ_{X,Y} ρ_{Y,Z}.

(b) X, Y, and Z are jointly Gaussian random variables, so the minimum MSE estimate of Z given (X, Y) is linear. We have

    Σ_{(X,Y)^T} = [ 1        ρ_{X,Y} ]
                  [ ρ_{X,Y}  1       ],

    Σ_{(X,Y)^T Z} = [ E(XZ) ]  =  [ ρ_{X,Z} ]
                    [ E(YZ) ]     [ ρ_{Y,Z} ],

    Σ_{Z(X,Y)^T} = [ ρ_{X,Z}  ρ_{Y,Z} ].

Therefore,

    Ẑ = Σ_{Z(X,Y)^T} Σ_{(X,Y)^T}^{-1} [X Y]^T
      = [ρ_{X,Z}  ρ_{Y,Z}] · (1/(1 − ρ_{X,Y}^2)) [ 1         −ρ_{X,Y} ] [X Y]^T
                                                 [ −ρ_{X,Y}   1       ]
      = (1/(1 − ρ_{X,Y}^2)) [ ρ_{X,Z} − ρ_{X,Y} ρ_{Y,Z} ,  −ρ_{X,Y} ρ_{X,Z} + ρ_{Y,Z} ] [X Y]^T
      = [ 0   ρ_{Y,Z} ] [X Y]^T
      = ρ_{Y,Z} Y,

where the last simplification follows from ρ_{X,Z} = ρ_{X,Y} ρ_{Y,Z} in part (a). Thus Ẑ = ρ_{Y,Z} Y.

The corresponding MSE is

    MSE = Σ_Z − Σ_{Z(X,Y)^T} Σ_{(X,Y)^T}^{-1} Σ_{(X,Y)^T Z}
        = 1 − [ 0  ρ_{Y,Z} ] [ ρ_{X,Z} ]
                             [ ρ_{Y,Z} ]
        = 1 − ρ_{Y,Z}^2.
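The identity ρ_{X,Z} = ρ_{X,Y} ρ_{Y,Z} and the estimate Ẑ = ρ_{Y,Z} Y can be checked by simulating a Gaussian Markov chain. The construction below (an illustrative sketch with arbitrary correlation values, not part of the handout) generates X and Z from Y plus independent Gaussian innovations, which makes X and Z conditionally independent given Y:

    import numpy as np

    rng = np.random.default_rng(6)
    n, rho_xy, rho_yz = 10**6, 0.8, -0.6
    Y = rng.normal(0.0, 1.0, n)
    X = rho_xy * Y + np.sqrt(1 - rho_xy**2) * rng.normal(0.0, 1.0, n)
    Z = rho_yz * Y + np.sqrt(1 - rho_yz**2) * rng.normal(0.0, 1.0, n)

    # rho_XZ should be close to rho_XY * rho_YZ = -0.48.
    print(np.mean(X * Z), rho_xy * rho_yz)

    # MMSE estimate of Z given (X, Y) is rho_YZ * Y, with MSE 1 - rho_YZ^2.
    print(np.mean((Z - rho_yz * Y) ** 2), 1 - rho_yz**2)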

7. Prediction of an autoregressive process. Let X be a random vector with zero mean and covariance matrix

    Σ_X = [ 1        α        α^2   ...  α^{n−1} ]
          [ α        1        α     ...  α^{n−2} ]
          [ ...                                  ]
          [ α^{n−1}  α^{n−2}  ...    α    1      ]

for |α| < 1. X_1, X_2, ..., X_{n−1} are observed; find the best linear MSE estimate (predictor) of X_n. Compute its MSE.

Solution: Define Y = [X_1 X_2 ... X_{n−1}]^T. Then

    Σ_Y = [ 1        α    ...  α^{n−2} ]
          [ α        1    ...  α^{n−3} ]
          [ ...                        ]
          [ α^{n−2}  ...   α    1      ],

    Σ_{YX_n} = [ α^{n−1}  α^{n−2}  ...  α ]^T,   Σ_{X_nY} = Σ_{YX_n}^T,   σ_{X_n}^2 = 1.

Therefore,

    X̂_n = Σ_{X_nY} Σ_Y^{-1} Y = h^T Y,   where h^T = Σ_{X_nY} Σ_Y^{-1}.

Since h^T Σ_Y = Σ_{X_nY}, one can check that h^T = [0 0 ... 0 α] satisfies these equations: the j-th entry of h^T Σ_Y is α · α^{n−1−j} = α^{n−j}, which is exactly the j-th entry of Σ_{X_nY}. Hence

    X̂_n = α X_{n−1},

and the corresponding MSE is

    MSE = σ_{X_n}^2 − Σ_{X_nY} Σ_Y^{-1} Σ_{YX_n} = 1 − h^T Σ_{YX_n} = 1 − α · α = 1 − α^2.
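The claim that the predictor uses only the most recent observation can also be verified by solving the normal equations Σ_Y h = Σ_{YX_n} numerically. This is an illustrative sketch (α and n below are arbitrary), not part of the handout:

    import numpy as np

    alpha, n = 0.7, 6
    Sigma = alpha ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Sigma_Y = Sigma[:n-1, :n-1]          # covariance of Y = (X_1, ..., X_{n-1})
    Sigma_YX = Sigma[:n-1, n-1]          # cross-covariance with X_n

    h = np.linalg.solve(Sigma_Y, Sigma_YX)
    print(h)                              # approximately [0, 0, 0, 0, alpha]
    print(1.0 - Sigma_YX @ h)             # MSE = 1 - alpha^2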

8. Noise cancellation. A classical problem in statistical signal processing involves estimating a weak signal (e.g., the heartbeat of a fetus) in the presence of a strong interference (the heartbeat of its mother) by making two observations, one with the weak signal present and one without (by placing one microphone on the mother's belly and another close to her heart). The observations can then be combined to estimate the weak signal by cancelling out the interference. The following is a simple version of this application.

Let the weak signal X be a random variable with mean µ and variance P, and let the observations be Y_1 = X + Z_1 (Z_1 being the strong interference) and Y_2 = Z_1 + Z_2 (Z_2 is a measurement noise), where Z_1 and Z_2 are zero mean with variances N_1 and N_2, respectively. Assume that X, Z_1, and Z_2 are uncorrelated. Find the best linear MSE estimate of X given Y_1 and Y_2 and its MSE. Interpret the results.

Solution: This is a vector linear MSE problem. Since Z_1 and Z_2 are zero mean, µ_X = µ_{Y_1} = µ and µ_{Y_2} = 0. We first normalize the random variables by subtracting off their means to get

    X̃ = X − µ   and   Ỹ = [ Y_1 − µ ]
                           [ Y_2     ].

Now, using the orthogonality principle, we can find the best linear MSE estimate of X̃. To do so we first find

    Σ_Ỹ = [ P + N_1   N_1       ]   and   Σ_{ỸX̃} = [ P ]
          [ N_1       N_1 + N_2 ]                   [ 0 ].

Thus the best linear MSE estimate of X̃ is

    Σ_{ỸX̃}^T Σ_Ỹ^{-1} Ỹ = [ P  0 ] · (1/(P(N_1+N_2) + N_1N_2)) [ N_1 + N_2   −N_1    ] Ỹ
                                                                [ −N_1        P + N_1 ]
                        = (P/(P(N_1+N_2) + N_1N_2)) [ N_1 + N_2   −N_1 ] Ỹ.

The best linear MSE estimate of X is this expression plus µ. Thus,

    X̂ = (P/(P(N_1+N_2) + N_1N_2)) ((N_1+N_2)(Y_1 − µ) − N_1 Y_2) + µ
       = (P((N_1+N_2)Y_1 − N_1Y_2) + N_1N_2 µ) / (P(N_1+N_2) + N_1N_2).

The MSE can be calculated as

    MSE = σ_X^2 − Σ_{ỸX̃}^T Σ_Ỹ^{-1} Σ_{ỸX̃}
        = P − (P/(P(N_1+N_2) + N_1N_2)) [ N_1 + N_2   −N_1 ] [ P ]
                                                             [ 0 ]
        = P − P^2(N_1+N_2)/(P(N_1+N_2) + N_1N_2)
        = P N_1 N_2 / (P(N_1+N_2) + N_1 N_2).

The expression for the MSE makes intuitive sense. First, note that if N_1 and N_2 are held constant but P goes to infinity, the MSE tends to N_1N_2/(N_1 + N_2). Next, note that if both N_1 and N_2 go to infinity, the MSE goes to σ_X^2 = P, i.e., the estimate becomes worthless. Finally, note that if either N_1 or N_2 goes to 0, the MSE also goes to 0: if N_1 → 0 then Y_1 = X and the signal is observed directly, and if N_2 → 0 then Y_2 = Z_1, so the interference can be subtracted perfectly from Y_1 to determine X.
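A short simulation confirms the noise-cancellation estimator and its MSE. This is only a verification sketch (the values of µ, P, N_1, N_2 are arbitrary), not part of the handout:

    import numpy as np

    rng = np.random.default_rng(8)
    mu, P, N1, N2, n = 3.0, 2.0, 1.0, 0.5, 10**6
    X = rng.normal(mu, np.sqrt(P), n)
    Z1 = rng.normal(0.0, np.sqrt(N1), n)
    Z2 = rng.normal(0.0, np.sqrt(N2), n)
    Y1, Y2 = X + Z1, Z1 + Z2

    den = P * (N1 + N2) + N1 * N2
    X_hat = (P * ((N1 + N2) * Y1 - N1 * Y2) + N1 * N2 * mu) / den
    print(np.mean((X - X_hat) ** 2), P * N1 * N2 / den)   # empirical MSE vs. closed form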

Solutions to Additional Exercises

1. Worst noise distribution. Consider an additive-noise channel Y = X + Z, where the signal X ~ N(0, P) and the noise Z has zero mean and variance N. Assume X and Z are independent. Find a distribution of Z that maximizes the minimum MSE of estimating X given Y, i.e., the distribution of the worst noise Z that has the given mean and variance. You need to justify your answer.

Solution: The worst noise has a Gaussian distribution, i.e., Z ~ N(0, N). To prove this statement, we show that the MSE corresponding to any other distribution of Z is less than or equal to the MSE for Gaussian noise, i.e., MSE_NonG ≤ MSE_G.

We know that for any noise, MMSE estimation is no worse than linear MMSE estimation, so MSE_NonG ≤ LMSE. The linear MMSE estimate of X given Y is

    X̂ = (Cov(X,Y)/σ_Y^2)(Y − E(Y)) + E(X) = (P/(P+N)) Y,

with

    LMSE = σ_X^2 − Cov^2(X,Y)/σ_Y^2 = P − P^2/(P+N) = NP/(P+N).

Note that the LMSE depends only on the second moments of X and Z, so the MSE corresponding to any distribution of Z is upper bounded by this same LMSE, i.e., MSE_NonG ≤ NP/(P+N).

When Z is Gaussian and independent of X, (X, Y) are jointly Gaussian. Then MSE_G is equal to the LMSE, i.e., MSE_G = NP/(P+N). Hence,

    MSE_NonG ≤ NP/(P+N) = MSE_G,

which shows that Gaussian noise is the worst.

2. Jointly Gaussian random variables. Let X and Y be jointly Gaussian random variables with pdf

    f_{X,Y}(x,y) = (1/(π√(3/4))) e^{−(1/2)(4x^2/3 + 16y^2/3 + 8xy/3 − 8x − 16y + 16)}.

(a) Find E(X), E(Y), Var(X), Var(Y), and Cov(X,Y).

(b) Find the minimum MSE estimate of X given Y and its MSE.

Solution:

(a) We can write the joint pdf of jointly Gaussian X and Y as

    f_{X,Y}(x,y) = (1/(2πσ_Xσ_Y√(1 − ρ_{X,Y}^2))) exp( −[ a(x−µ_X)^2 + b(y−µ_Y)^2 + c(x−µ_X)(y−µ_Y) ] ),

where

    a = 1/(2(1 − ρ_{X,Y}^2)σ_X^2),   b = 1/(2(1 − ρ_{X,Y}^2)σ_Y^2),   c = −ρ_{X,Y}/((1 − ρ_{X,Y}^2)σ_Xσ_Y).

By inspection of the given f_{X,Y}(x,y) we find that

    a = 2/3,   b = 8/3,   c = 4/3,

and we get three equations in three unknowns:

    ρ_{X,Y} = −c/(2√(ab)) = −1/2,   σ_X^2 = 1/(2(1 − ρ_{X,Y}^2)a) = 1,   σ_Y^2 = 1/(2(1 − ρ_{X,Y}^2)b) = 1/4.

To find µ_X and µ_Y, we match the linear terms in the exponent and solve the equations

    2aµ_X + cµ_Y = 4,   2bµ_Y + cµ_X = 8,

and find that µ_X = 2 and µ_Y = 1. Finally,

    Cov(X,Y) = ρ_{X,Y} σ_X σ_Y = −1/4.

(b) X and Y are jointly Gaussian random variables, so the minimum MSE estimate of X given Y is linear:

    E(X | Y) = (Cov(X,Y)/σ_Y^2)(Y − µ_Y) + µ_X = −(Y − 1) + 2 = 3 − Y,

with

    MMSE = E(Var(X | Y)) = (1 − ρ_{X,Y}^2) σ_X^2 = 3/4.
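The parameters found in part (a) can be cross-checked by inverting Σ = [[1, −1/4], [−1/4, 1/4]] and comparing −(1/2)(x − µ)^T Σ^{-1}(x − µ) with the polynomial in the given pdf. The following is only a verification sketch, not part of the handout:

    import numpy as np

    mu = np.array([2.0, 1.0])
    Sigma = np.array([[1.0, -0.25],
                      [-0.25, 0.25]])
    Sinv = np.linalg.inv(Sigma)
    print(Sinv)                                   # [[4/3, 4/3], [4/3, 16/3]]

    # Exponent at a few test points vs. -(1/2)(4x^2/3 + 16y^2/3 + 8xy/3 - 8x - 16y + 16).
    for x, y in [(0.0, 0.0), (1.0, -2.0), (2.5, 0.5)]:
        v = np.array([x, y]) - mu
        lhs = -0.5 * v @ Sinv @ v
        rhs = -0.5 * (4*x**2/3 + 16*y**2/3 + 8*x*y/3 - 8*x - 16*y + 16)
        print(lhs, rhs)

    # Normalizing constant 1/(2*pi*sqrt(det(Sigma))) = 1/(pi*sqrt(3/4)).
    print(1 / (2 * np.pi * np.sqrt(np.linalg.det(Sigma))))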

3. Markov chain. Suppose X_1 and X_3 are independent given X_2. Show that

    f(x_1, x_2, x_3) = f(x_1) f(x_2 | x_1) f(x_3 | x_2) = f(x_3) f(x_2 | x_3) f(x_1 | x_2).

In other words, if X_1 → X_2 → X_3 forms a Markov chain, then so does X_3 → X_2 → X_1.

Solution: By the definition of conditional independence,

    f(x_1, x_3 | x_2) = f(x_1 | x_2) f(x_3 | x_2).

Therefore, using the definition of conditional density,

    f(x_3 | x_1, x_2) = f(x_1, x_2, x_3)/f(x_1, x_2)
                      = f(x_1, x_3 | x_2) f(x_2) / (f(x_1 | x_2) f(x_2))
                      = f(x_1 | x_2) f(x_3 | x_2) / f(x_1 | x_2)
                      = f(x_3 | x_2).

We are given that X_1 and X_3 are independent given X_2. Then

    f(x_1, x_2, x_3) = f(x_1) f(x_2 | x_1) f(x_3 | x_1, x_2) = f(x_1) f(x_2 | x_1) f(x_3 | x_2).

In this case X_1 → X_2 → X_3 is said to form a Markov chain. Similarly, since f(x_1 | x_2, x_3) = f(x_1 | x_2) by the same argument with the roles of X_1 and X_3 exchanged,

    f(x_1, x_2, x_3) = f(x_3) f(x_2 | x_3) f(x_1 | x_2, x_3) = f(x_3) f(x_2 | x_3) f(x_1 | x_2).

This shows that if X_1 → X_2 → X_3 is a Markov chain, then X_3 → X_2 → X_1 is also a Markov chain.

4. Proof of Property 4. In Lecture Notes #6 it was stated that conditionals of a Gaussian random vector are Gaussian. In this problem you will prove that fact. If [X Y]^T is a zero-mean GRV, then

    X | {Y = y} ~ N( Σ_{XY} Σ_Y^{-1} y ,  σ_X^2 − Σ_{XY} Σ_Y^{-1} Σ_{YX} ).

Justify each of the following steps of the proof.

(a) Let X̂ be the best MSE linear estimate of X given Y. Then X̂ and X − X̂ are individually zero-mean Gaussians. Find their variances.

(b) X̂ and X − X̂ are independent.

(c) Now write X = X̂ + (X − X̂). If Y = y, then X = Σ_{XY} Σ_Y^{-1} y + (X − X̂).

(d) Now complete the proof.

Remark: This proof can be extended to vector X.

Solution:

(a) Let X̂ be the best MSE linear estimate of X given Y. In the MSE vector case section of Lecture Notes #6 it was shown that X̂ and X − X̂ are individually zero-mean Gaussian random variables with variances Σ_{XY} Σ_Y^{-1} Σ_{YX} and σ_X^2 − Σ_{XY} Σ_Y^{-1} Σ_{YX}, respectively.

(b) The random variables X̂ and X − X̂ are jointly Gaussian since they are obtained by a linear transformation of the GRV [Y X]^T. By orthogonality, X̂ and X − X̂ are uncorrelated, so they are also independent. By the same reasoning, X − X̂ and Y are independent.

(c) Now write X = X̂ + (X − X̂). Then, given Y = y, since X − X̂ is independent of Y,

    X = Σ_{XY} Σ_Y^{-1} y + (X − X̂).

(d) Thus X | {Y = y} is Gaussian with mean Σ_{XY} Σ_Y^{-1} y and variance σ_X^2 − Σ_{XY} Σ_Y^{-1} Σ_{YX}.

5. Additive nonwhite Gaussian noise channel. Let Y_i = X + Z_i for i = 1, 2, ..., n be n observations of a signal X ~ N(0, P). The additive noise random variables Z_1, Z_2, ..., Z_n are zero-mean jointly Gaussian random variables that are independent of X and have correlation E(Z_iZ_j) = N/2^{|i−j|} for 1 ≤ i, j ≤ n.

(a) Find the best MSE estimate of X given Y_1, Y_2, ..., Y_n.

(b) Find the MSE of the estimate in part (a).

Hint: the coefficients of the best estimate are of the form h^T = [a b b ... b b a].

Solution:

(a) Since X and Y_1, ..., Y_n are jointly Gaussian, the best estimate of X is linear, of the form

    X̂ = Σ_{i=1}^{n} h_i Y_i.

We apply the orthogonality condition E(XY_j) = E(X̂Y_j) for 1 ≤ j ≤ n:

    P = Σ_{i=1}^{n} h_i E(Y_iY_j) = Σ_{i=1}^{n} h_i E((X + Z_i)(X + Z_j)) = Σ_{i=1}^{n} h_i (P + N/2^{|i−j|}).

This gives n equations in the n unknowns h_1, ..., h_n:

    [ P + N          P + N/2        ...   P + N/2^{n−1} ] [ h_1 ]     [ P ]
    [ P + N/2        P + N          ...   P + N/2^{n−2} ] [ h_2 ]     [ P ]
    [ ...                                               ] [ ... ]  =  [ ... ]
    [ P + N/2^{n−1}  P + N/2^{n−2}  ...   P + N         ] [ h_n ]     [ P ].

By the hint, there are only two degrees of freedom, a and b. Solving this system using the first two rows of the matrix, we obtain

    h_1 = h_n = 2P/(3N + (n+2)P)   and   h_2 = h_3 = ... = h_{n−1} = P/(3N + (n+2)P),

so the best MSE estimate is X̂ = Σ_{i=1}^{n} h_i Y_i with these coefficients.

(b) By orthogonality, the minimum mean square error is

    MSE = E[(X − X̂)X] = P − E[ (Σ_{i=1}^{n} h_i Y_i) X ] = P ( 1 − Σ_{i=1}^{n} h_i )
        = P ( 1 − (n+2)P/(3N + (n+2)P) )
        = 3PN/(3N + (n+2)P).
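The weights and the MSE can be verified by solving the n × n normal equations (P 11^T + K) h = P 1 directly, where K_{ij} = N/2^{|i−j|}. This is only a numerical sketch (P, N, and n below are arbitrary), not part of the handout:

    import numpy as np

    P, N, n = 2.0, 3.0, 7
    K = N * 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    M = P * np.ones((n, n)) + K            # E(Y_i Y_j) = P + N / 2^|i-j|
    h = np.linalg.solve(M, P * np.ones(n))

    den = 3 * N + (n + 2) * P
    print(h)                               # [2P/den, P/den, ..., P/den, 2P/den]
    print(2 * P / den, P / den)            # expected boundary and interior weights
    print(P * (1 - h.sum()), 3 * P * N / den)   # empirical MSE vs. closed form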
