EEL 5544 Noise in Linear Systems
Lecture 30

OTHER TRANSFORMS

For a continuous, nonnegative RV $X$, the Laplace transform of $X$ is
$$ X(s) = E\left[e^{-sX}\right] = \int_0^{\infty} f_X(x)\, e^{-sx}\, dx. $$
For a nonnegative RV, the Laplace transform of $X$ is the same as the moment generating function for $X$ with $s$ replaced by $-s$.

Moments can be found from the Laplace transform as
$$ E[X^n] = (-1)^n \left. \frac{d^n}{ds^n} X(s) \right|_{s=0}. $$

For a discrete, integer-valued RV $X$, the probability generating function of $X$ is
$$ G_X(z) = E\left[z^X\right] = \sum_{k=-\infty}^{\infty} p_X(k)\, z^k. $$
This is the same as the $z$-transform with $z$ replaced by $z^{-1}$. It can be found from the characteristic function as
$$ G_X(z) = \left. \Phi_X(\omega) \right|_{e^{j\omega} = z}. $$

$G_X(z)$ can be used to find factorial moments. The $n$th factorial moment of a real RV $X$ is $E[X(X-1)\cdots(X-n+1)]$, and
$$ E[X(X-1)\cdots(X-n+1)] = \left. \frac{d^n}{dz^n} G_X(z) \right|_{z=1}. $$
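As a quick numerical illustration (my own sketch, not from the lecture), the code below uses sympy to recover moments of an exponential RV from its Laplace transform and factorial moments of a Poisson RV from its PGF. The rate values `lam` and `mu` are arbitrary choices.

```python
# Sketch: moments from the Laplace transform and factorial moments from the
# PGF, via sympy. The exponential/Poisson examples and the parameter values
# lam, mu are illustrative assumptions, not from the lecture notes.
import sympy as sp

s, z, x = sp.symbols('s z x', positive=True)
lam = sp.Rational(3)   # rate of the exponential RV (arbitrary choice)

# Laplace transform of an Exponential(lam) RV: X(s) = lam / (lam + s)
X_s = sp.integrate(lam * sp.exp(-lam * x) * sp.exp(-s * x), (x, 0, sp.oo))

# E[X^n] = (-1)^n d^n/ds^n X(s) at s = 0; for Exponential, E[X^n] = n!/lam^n
for n in (1, 2, 3):
    moment = (-1)**n * sp.diff(X_s, s, n).subs(s, 0)
    print(f"E[X^{n}] =", sp.simplify(moment))

# PGF of a Poisson(mu) RV: G(z) = exp(mu*(z-1)); nth factorial moment = mu^n
mu = sp.Rational(2)
G_z = sp.exp(mu * (z - 1))
for n in (1, 2, 3):
    fact_moment = sp.diff(G_z, z, n).subs(z, 1)
    print(f"factorial moment of order {n}:", sp.simplify(fact_moment))
```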
Example

RANDOM VECTORS

The joint cumulative distribution function of $X_1, X_2, \ldots, X_n$ is
$$ F_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n) = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n). $$

We let $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ be a (column) vector of the $X_i$s, and define
$$ \{\mathbf{X} \le \mathbf{x}\} = \{X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n\}. $$
Then the joint cumulative distribution function of $\mathbf{X}$ is
$$ F_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n) = F_{\mathbf{X}}(\mathbf{x}) = P(\mathbf{X} \le \mathbf{x}). $$

The joint probability density function of $X_1, X_2, \ldots, X_n$ is
$$ f_{\mathbf{X}}(\mathbf{x}) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_{\mathbf{X}}(\mathbf{x}). $$
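A minimal sketch (my own example, not from the notes): for two independent unit-rate exponentials the joint CDF factors, and differentiating once in each variable recovers the joint pdf.

```python
# Sketch: recover a joint pdf from a joint CDF by mixed partial
# differentiation. The independent Exponential(1) pair is an
# illustrative assumption.
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Joint CDF of two independent Exponential(1) RVs
F = (1 - sp.exp(-x)) * (1 - sp.exp(-y))

# f(x, y) = d^2 F / (dx dy); expect exp(-x) * exp(-y)
f = sp.diff(F, x, y)
print(sp.simplify(f))   # exp(-x - y)
```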
The marginal pdf of $X_i$ is obtained by integrating out the other variables $x_1, x_2, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n$.

The conditional pdf of $X_i$ given all the other $X_1, \ldots, X_n$ is
$$ f_{X_i \mid X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n}(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) = \frac{f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)}{f_{X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)}. $$

One common use for these types of conditional pdfs is for a series of RVs. Let $\mathbf{X}_n = [X_1, X_2, \ldots, X_n]$. Then
$$ f_{X_n \mid X_1, X_2, \ldots, X_{n-1}}(x_n \mid x_1, x_2, \ldots, x_{n-1}) = \frac{f_{\mathbf{X}_n}(x_1, \ldots, x_n)}{f_{\mathbf{X}_{n-1}}(x_1, \ldots, x_{n-1})}, $$
so
$$ f_{\mathbf{X}_n}(x_1, \ldots, x_n) = f_{X_n \mid \mathbf{X}_{n-1}}(x_n \mid x_1, \ldots, x_{n-1})\, f_{X_{n-1} \mid \mathbf{X}_{n-2}}(x_{n-1} \mid x_1, \ldots, x_{n-2}) \cdots f_{X_2 \mid X_1}(x_2 \mid x_1)\, f_{X_1}(x_1). $$
(A sampling sketch based on this factorization appears at the end of this page.)

INDEPENDENCE FOR MULTIPLE RVS

RVs $X_1, X_2, \ldots, X_n$ are statistically independent if and only if
$$ F_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n) = F_{X_1}(x_1) F_{X_2}(x_2) \cdots F_{X_n}(x_n) $$
or, equivalently,
$$ f_{X_1,X_2,\ldots,X_n}(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1) f_{X_2}(x_2) \cdots f_{X_n}(x_n). $$

EXPECTATION VECTORS AND COVARIANCE MATRICES

Often the joint distribution and density functions for a sequence of random variables are unknown or difficult to work with. In those cases, we often work with the random vectors in terms of their moments. The most common moments used are the first moment (mean) and second central moment (covariance).
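Returning to the chain-rule factorization above: it is exactly how one simulates a sequence of dependent RVs, drawing $X_1$ from its marginal and each later $X_i$ from its conditional given the past. Below is a minimal sketch assuming a Gauss-Markov structure, where each conditional depends only on the previous variable; that structure is my illustrative choice, while the lecture's factorization is general.

```python
# Sketch: sampling a random vector via the chain-rule factorization
# f(x1,...,xn) = f(x1) f(x2|x1) ... f(xn | x1,...,x_{n-1}).
# Assumed (for illustration): X1 ~ N(0,1), X_i | X_{i-1} ~ N(a*X_{i-1}, 1),
# so each conditional depends only on the previous sample.
import numpy as np

rng = np.random.default_rng(3)
a, n, N = 0.8, 4, 200_000

X = np.zeros((n, N))
X[0] = rng.standard_normal(N)                     # draw from f(x1)
for i in range(1, n):
    X[i] = a * X[i - 1] + rng.standard_normal(N)  # draw from f(xi | x_{i-1})

# Consistency check on the joint moments this factorization implies:
# E[X1 X2] = a and E[X1 X3] = a^2.
print(np.mean(X[0] * X[1]), a)
print(np.mean(X[0] * X[2]), a**2)
```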
A special case of random vectors that we often use is Gaussian random vectors. For Gaussian random vectors, the mean and covariances completely specify the distribution.

The mean vector of a random vector $\mathbf{X}$ is a vector $\boldsymbol{\mu}$ whose elements $\mu_1, \mu_2, \ldots, \mu_n$ are given by
$$ \mu_i = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_i\, f_{\mathbf{X}}(x_1, x_2, \ldots, x_n)\, dx_1 \cdots dx_n. $$
Note that $\mu_i$ can be found directly from the marginal density for $X_i$ as
$$ \mu_i = \int_{-\infty}^{\infty} x_i\, f_{X_i}(x_i)\, dx_i. $$

The covariance matrix associated with a complex random vector $\mathbf{X}$ is
$$ \mathbf{K} = E\left[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^H\right]. $$
For complex random vectors,
$$ K_{ij} = E\left[(X_i - \mu_i)(X_j - \mu_j)^*\right] = E\left[(X_j - \mu_j)^*(X_i - \mu_i)\right] = K_{ji}^*. $$
A matrix $\mathbf{M}$ is a Hermitian matrix if $M_{ij} = M_{ji}^*$. Thus, for complex random vectors, the covariance matrix is a Hermitian matrix.

Note that the $i$th diagonal element is $K_{ii} = E\left[(X_i - \mu_i)(X_i - \mu_i)^*\right]$ (the variance for the random variable $X_i$).

For real random vectors $\mathbf{X}$,
$$ K_{ij} = E\left[(X_i - \mu_i)(X_j - \mu_j)\right] = E\left[(X_j - \mu_j)(X_i - \mu_i)\right] = K_{ji}. $$
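A short numpy sketch of my own: estimate the covariance matrix of a complex random vector from samples and confirm it is Hermitian with a real diagonal. The mixing matrix `A` and sample count are arbitrary assumptions.

```python
# Sketch: the sample covariance of a complex random vector is Hermitian.
# The mixing matrix A and the sample size N are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 200_000

# Correlated complex samples: X = A @ W, W circular complex Gaussian
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))) / np.sqrt(2)
X = A @ W

mu = X.mean(axis=1, keepdims=True)           # sample mean vector
K = (X - mu) @ (X - mu).conj().T / (N - 1)   # sample K = E[(X-mu)(X-mu)^H]

print(np.allclose(K, K.conj().T))            # True: K is Hermitian
print(np.allclose(K.diagonal().imag, 0))     # True: diagonal (variances) real
```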
A matrix $\mathbf{M}$ is symmetric if $M_{ij} = M_{ji}$. So, for real random vectors, the covariance matrix is symmetric.

The correlation matrix for a vector random variable $\mathbf{X}$ is
$$ \mathbf{R} = E\left[\mathbf{X}\mathbf{X}^H\right]. $$
Note that
$$ \mathbf{K} = \mathbf{R} - \boldsymbol{\mu}\boldsymbol{\mu}^H. $$
The correlation matrix is also a Hermitian matrix (or a symmetric matrix for a real vector random variable).

CLASSIFICATION OF RANDOM VECTORS

Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are orthogonal if and only if $E\left[\mathbf{X}\mathbf{Y}^H\right] = \mathbf{0}$.

Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are uncorrelated if and only if $E\left[\mathbf{X}\mathbf{Y}^H\right] = E[\mathbf{X}]\, E[\mathbf{Y}]^H$.

Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are independent if and only if $f_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = f_{\mathbf{X}}(\mathbf{x})\, f_{\mathbf{Y}}(\mathbf{y})$.
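A quick numerical confirmation (my own sketch) of the identity $\mathbf{K} = \mathbf{R} - \boldsymbol{\mu}\boldsymbol{\mu}^H$ on sample moments; the affine model (offset `b`, mixing `A`) is an arbitrary choice.

```python
# Sketch: verify K = R - mu mu^H on sample moments of a real random vector.
# The mixing matrix A and offset b are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 500_000

A = rng.normal(size=(n, n))
b = rng.normal(size=(n, 1))
X = A @ rng.normal(size=(n, N)) + b      # correlated samples, nonzero mean

mu = X.mean(axis=1, keepdims=True)       # mean vector estimate
R = X @ X.T / N                          # correlation matrix R = E[X X^H]
K = (X - mu) @ (X - mu).T / N            # covariance matrix

print(np.allclose(K, R - mu @ mu.T, atol=1e-2))   # True, up to sampling error
```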
THE MULTIDIMENSIONAL GAUSSIAN LAW

If $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ is a Gaussian random vector with mean $\boldsymbol{\mu}$ and covariance matrix $\mathbf{K}$, then the density function for $\mathbf{X}$ can be written as
$$ f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, [\det \mathbf{K}]^{1/2}} \exp\left[-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \mathbf{K}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right]. $$

SPECIAL CASE: BIVARIATE GAUSSIAN DISTRIBUTION

$X$, $Y$ are jointly Gaussian if and only if the joint density of $X$ and $Y$ can be written as
$$ f_{X,Y}(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho_{X,Y}^2}} \exp\left\{-\frac{1}{2(1 - \rho_{X,Y}^2)} \left[\left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho_{X,Y}\left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2\right]\right\}. $$

An equivalent condition that may be easier to work with is: $X$ and $Y$ are jointly Gaussian if and only if $aX + bY$ is a Gaussian random variable for any real $a$ and $b$.

The pdf is bell-shaped, centered at $(\mu_X, \mu_Y)$. Additional insight can be gained from considering contours of equal probability density. For equal probability:
$$ \left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho_{X,Y}\left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2 = \text{const.} \quad (1) $$
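A small sketch of my own: evaluate the multidimensional Gaussian density formula directly and compare against scipy.stats.multivariate_normal. The mean and covariance values are arbitrary test inputs.

```python
# Sketch: the multidimensional Gaussian density, checked against
# scipy.stats.multivariate_normal. mu, K, x below are arbitrary test values.
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, K):
    """f(x) = exp(-0.5 (x-mu)^T K^{-1} (x-mu)) / ((2pi)^{n/2} det(K)^{1/2})"""
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(K, d)      # (x-mu)^T K^{-1} (x-mu)
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(K))
    return np.exp(-0.5 * quad) / norm

mu = np.array([1.0, -2.0])
K = np.array([[2.0, 0.9],
              [0.9, 1.0]])
x = np.array([0.5, -1.5])

print(gaussian_pdf(x, mu, K))
print(multivariate_normal(mean=mu, cov=K).pdf(x))   # should agree
```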
Equation (1) is the equation for an ellipse.

[Figure: equal-probability contour ellipse. From Komo, Random Signal Analysis...]

When $\rho_{X,Y} = 0$, $X$ and $Y$ are statistically independent, and the equal-probability contour ellipse is aligned with the x- and y-axes:

[Figure: axis-aligned contour ellipses. (a) $\sigma_X = \sigma_Y$, $\rho_{X,Y} = 0$; (b) $\sigma_X > \sigma_Y$, $\rho_{X,Y} = 0$; (c) $\sigma_X < \sigma_Y$, $\rho_{X,Y} = 0$. From Stark and Woods, Probability and Random Processes...]

When $\rho_{X,Y} \ne 0$, the major axis is at an angle given by
$$ \theta = \frac{1}{2} \arctan\left(\frac{2\rho_{X,Y}\, \sigma_X \sigma_Y}{\sigma_X^2 - \sigma_Y^2}\right). $$
Note that $\sigma_X = \sigma_Y$ gives $\theta = 45$ degrees (the argument of the arctangent diverges, so the arctangent itself is 90 degrees).
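To sanity-check the angle formula (a sketch of my own), one can compare $\theta$ with the orientation of the covariance matrix's principal eigenvector; the $\sigma$ and $\rho$ values are arbitrary.

```python
# Sketch: major-axis angle of the equal-probability ellipse, computed two
# ways: the closed-form theta and the principal eigenvector of K.
# sigma_x, sigma_y, rho are arbitrary test values.
import numpy as np

sigma_x, sigma_y, rho = 2.0, 1.0, 0.6
K = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
              [rho * sigma_x * sigma_y, sigma_y**2]])

# Closed form: theta = 0.5 * arctan(2 rho sx sy / (sx^2 - sy^2));
# arctan2 keeps the correct branch when sigma_x < sigma_y.
theta = 0.5 * np.arctan2(2 * rho * sigma_x * sigma_y,
                         sigma_x**2 - sigma_y**2)

# Major axis = eigenvector of K with the largest eigenvalue
w, V = np.linalg.eigh(K)
v = V[:, np.argmax(w)]
theta_eig = np.arctan2(v[1], v[0])

print(np.degrees(theta), np.degrees(theta_eig))   # agree (mod 180 degrees)
```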
[Figure: (d)-(h) pdf and equiprobability contours for jointly Gaussian RVs with $\mu_X = \mu_Y = 0$, $\sigma_X = \sigma_Y = 2$, $\rho_{X,Y} = 0.9$. From Stark and Woods, Probability and Random Processes...]

SPECIAL CASE: JOINTLY GAUSSIAN RANDOM VARIABLES WITH ZERO MEAN AND UNIT VARIANCE

Two Gaussian random variables $X$ and $Y$ that each have mean 0 and variance 1 are said to be jointly Gaussian if their joint density function can be written as
$$ f_{XY}(x, y) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left\{-\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)}\right\}, \qquad -\infty < x < \infty,\ \ -\infty < y < \infty. $$
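One way to get a feel for this density (a sketch of my own): construct a zero-mean, unit-variance jointly Gaussian pair with a given $\rho$ as a linear combination of independent standard normals, then check the sample moments. The value of `rho` and the sample size are arbitrary.

```python
# Sketch: build zero-mean, unit-variance jointly Gaussian (X, Y) with
# correlation coefficient rho from independent standard normals.
# rho and N are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
rho, N = 0.7, 1_000_000

z1 = rng.standard_normal(N)
z2 = rng.standard_normal(N)
x = z1
y = rho * z1 + np.sqrt(1 - rho**2) * z2   # Var[Y] = 1, E[XY] = rho

print(x.var(), y.var())         # both close to 1
print(np.corrcoef(x, y)[0, 1])  # close to rho
```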
EX: Find the marginal densities.

EX: Under what conditions are $X$ and $Y$ independent?

Note that $X$ and $Y$ can each be Gaussian without being jointly Gaussian. For example, if the joint density of $X$ and $Y$ is given by
$$ f_{XY}(x, y) = \frac{1}{2\pi} \exp\left\{-\frac{x^2 + y^2}{2}\right\} \left(1 + xy \exp\left\{-\frac{x^2 + y^2 - 2}{2}\right\}\right), $$
then $X$ and $Y$ are each Gaussian but clearly not jointly Gaussian.
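A numerical check of this counterexample (my own sketch): integrating out $y$ on a few test points confirms that the marginal of $X$ matches the standard Gaussian density even though the joint density is not Gaussian.

```python
# Sketch: verify numerically that the non-jointly-Gaussian counterexample
# still has standard Gaussian marginals (integrate out y at test points).
import numpy as np
from scipy.integrate import quad

def f_xy(x, y):
    base = np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)
    bump = 1 + x * y * np.exp(-(x**2 + y**2 - 2) / 2)
    return base * bump

for x0 in (0.0, 0.5, 1.5):
    marginal, _ = quad(lambda y: f_xy(x0, y), -np.inf, np.inf)
    gaussian = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)
    print(x0, marginal, gaussian)   # marginal matches the N(0, 1) density
```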