ECE244a - Spring Quarter 2018
Characteristic Function

The characteristic function is the Fourier transform of the pdf (note that Goodman and Papen use different notation):

$$C_x(\omega) = \left\langle e^{i\omega x} \right\rangle = \int_{-\infty}^{\infty} f_x(x)\, e^{i\omega x}\, dx = \sum_{n=0}^{\infty} \frac{(i\omega)^n}{n!} \langle x^n \rangle$$ (1)

If the random variable is discrete with probability function $p_k(k)$, then the characteristic function is

$$C_k(\omega) = \left\langle e^{i\omega k} \right\rangle = \sum_{k=-\infty}^{\infty} e^{i\omega k}\, p_k(k)$$ (2)
Relationship between $C_x(\omega)$ and Moments of the pdf

The $n$th moment of the probability distribution, if it exists, may be determined by differentiation of the characteristic function:

$$\langle x^n \rangle = \frac{1}{i^n} \left. \frac{d^n}{d\omega^n} C_x(\omega) \right|_{\omega = 0}$$ (3)

The characteristic function is a weighted sum of the moments $\langle x^n \rangle$ of the distribution; differentiating and setting $\omega = 0$ generates the moments.
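As a quick check of (1) and (3), here is a minimal sketch (assuming SymPy is available; the exponential pdf and rate $a$ are illustrative choices, not from the notes) that computes a characteristic function symbolically and recovers the first few moments by differentiation:

```python
# Minimal sketch: recover the moments n!/a^n of an exponential pdf from its
# characteristic function, per Eqs. (1) and (3). The pdf choice is illustrative.
import sympy as sp

w = sp.symbols('omega', real=True)
u = sp.symbols('u', real=True)
a = sp.symbols('a', positive=True)

# Characteristic function C_u(omega) = <exp(i*omega*u)> from Eq. (1)
C = sp.integrate(a*sp.exp(-a*u)*sp.exp(sp.I*w*u), (u, 0, sp.oo), conds='none')

# Eq. (3): n-th moment = (1/i^n) * d^n C / d omega^n evaluated at omega = 0
for n in range(1, 4):
    moment = sp.simplify((sp.diff(C, w, n)/sp.I**n).subs(w, 0))
    print(n, moment)   # prints 1/a, 2/a**2, 6/a**3, i.e. n!/a^n
```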
Example - Gaussian Random Variable

The characteristic function of a gaussian is a gaussian with the variance reciprocally scaled:

$$f_u(u) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{(u - \langle u \rangle)^2}{2\sigma^2} \right] \quad \longrightarrow \quad C_u(\omega) = \exp\left[ i\omega \langle u \rangle - \frac{\sigma^2 \omega^2}{2} \right]$$
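The closed form can be checked numerically. A minimal sketch (assuming NumPy and SciPy; the mean, variance, and frequency values are illustrative) evaluates the Fourier transform of the gaussian pdf by quadrature and compares it with the expression above:

```python
# Minimal sketch: numerically verify the gaussian characteristic function.
import numpy as np
from scipy.integrate import quad

mu, var, w = 1.5, 0.8, 2.0          # illustrative mean, variance, frequency

f = lambda u: np.exp(-(u - mu)**2/(2*var))/np.sqrt(2*np.pi*var)   # gaussian pdf
re, _ = quad(lambda u: f(u)*np.cos(w*u), -np.inf, np.inf)
im, _ = quad(lambda u: f(u)*np.sin(w*u), -np.inf, np.inf)

print(re + 1j*im)                     # Fourier transform of the pdf at w
print(np.exp(1j*w*mu - var*w**2/2))   # closed form; the two agree
```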
Transformation of a Random Variable

Consider the transformation $z = f(u)$ where $f$ is monotonic so that $u = f^{-1}(z)$ exists. Given $p_u(u)$, what is $p_z(z)$?
Transformations (cont.)

Differential area must be preserved, or

$$p_u(u)\, |du| = p_z(z)\, |dz|$$

Substitute $u = f^{-1}(z)$:

$$p_z(z) = p_u\!\left[ f^{-1}(z) \right] \left| \frac{du}{dz} \right|$$
Example

Let $z = e^u$, with $p_u(u) = e^{-u}$ over $[0, \infty)$ (exponential distribution). Then $u = \ln z$, $dz = e^u\, du$, or $du/dz = e^{-u} = z^{-1}$. The range of $z$ is $[1, \infty)$. Substituting,

$$p_z(z) = e^{-\ln z}\, z^{-1} = \frac{1}{z^2} \quad \text{for } 1 < z < \infty$$
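A simulation makes the example concrete. This sketch (assuming NumPy and Matplotlib; the sample size and plot range are arbitrary) draws exponential samples, applies $z = e^u$, and compares the histogram against the derived pdf $1/z^2$:

```python
# Minimal sketch: histogram of z = exp(u), u ~ exponential, versus 1/z**2.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
u = rng.exponential(scale=1.0, size=200_000)   # p_u(u) = exp(-u), u >= 0
z = np.exp(u)                                  # transformed variable, z >= 1

zz = np.linspace(1, 10, 400)
plt.hist(z, bins=400, range=(1, 10), density=True, alpha=0.5, label='histogram')
plt.plot(zz, 1/zz**2, 'k', label='$p_z(z) = 1/z^2$')
plt.xlabel('z'); plt.legend(); plt.show()
```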
Sums of Random Variables

Let a new RV be given as the sum of two others: $z = u + v$. Given the joint distribution $p_{uv}(u, v)$, what is $p_z(z)$?

Start with the joint CDF $F_{uv}(u, v)$ and the line $z = u + v$. $F_z(z)$ is the probability of the region where $u + v < z$, or

$$F_z(z) = \int_{-\infty}^{\infty} dv \int_{-\infty}^{z - v} p_{uv}(u, v)\, du$$

Differentiate both sides:

$$\frac{d}{dz} F_z(z) = \frac{d}{dz} \left[ \int_{-\infty}^{\infty} dv \int_{-\infty}^{z - v} p_{uv}(u, v)\, du \right]$$

Use the result from calculus

$$\frac{d}{dz} \int_{-\infty}^{g(z)} p_u(u)\, du = p_u[g(z)]\, \frac{dg}{dz}$$

with $g(z) = z - v$. Then $dg/dz = 1$ and

$$\frac{d}{dz} \int_{-\infty}^{z - v} p_{uv}(u, v)\, du = p_{uv}(z - v, v)$$
Sums of RVs (cont. 2)

Therefore

$$p_z(z) = \int_{-\infty}^{\infty} p_{uv}(z - v, v)\, dv$$

If the random variables are independent, then $p_{uv}(z - v, v) = p_u(z - v)\, p_v(v)$ and

$$p_z(z) = \int_{-\infty}^{\infty} p_u(z - v)\, p_v(v)\, dv$$

The pdf of the sum of independent random variables is the convolution of the separate pdfs. In terms of the characteristic function,

$$C_z(\omega) = C_u(\omega)\, C_v(\omega)$$
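The convolution result is easy to verify on a grid. This sketch (assuming NumPy; the uniform pdfs and step size are illustrative) convolves two unit-uniform pdfs and recovers the triangular pdf of their sum:

```python
# Minimal sketch: pdf of the sum of two independent uniform [0,1) variables
# as the discrete convolution of their pdfs; the result is triangular on [0,2].
import numpy as np

du = 1e-3
u = np.arange(0, 1, du)
p_u = np.ones_like(u)              # uniform pdf on [0, 1)

p_z = np.convolve(p_u, p_u)*du     # discrete approximation of the convolution
z = np.arange(len(p_z))*du

print(np.interp(1.0, z, p_z))      # ~1.0: triangular pdf peaks at z = 1
print(p_z.sum()*du)                # ~1.0: the result is still normalized
```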
Gaussian Distribution

A gaussian random variable has a gaussian probability distribution defined by

$$f_x(x) \doteq \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x - \langle x \rangle)^2 / 2\sigma^2}$$ (4)

It is easy to verify that the expected value of $x$ is $\langle x \rangle$ and the variance is $\sigma^2$. A unique property of gaussian random variables is that any weighted superposition of multiple gaussian random variables, whether they are independent or dependent, is also a gaussian random variable.
Area Under the Gaussian

The probability that a unit-variance gaussian random variable exceeds a value $z$, $P\{x > z\}$, can be expressed in terms of the complementary error function, denoted erfc:

$$\frac{1}{\sqrt{2\pi}} \int_z^{\infty} e^{-x^2/2}\, dx = \frac{1}{2}\, \mathrm{erfc}\!\left( \frac{z}{\sqrt{2}} \right)$$ (5)

where $\mathrm{erfc}(z) = 1 - \mathrm{erf}(z)$, with $\mathrm{erf}(z)$ being the error function defined as

$$\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-s^2}\, ds$$

For large arguments, the erfc function can be approximated by

$$\mathrm{erfc}(x) \approx \frac{1}{x\sqrt{\pi}}\, e^{-x^2}$$ (6)
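The quality of approximation (6) can be checked directly. This sketch (assuming SciPy; the test points are arbitrary) compares the exact tail probability of (5) with the large-argument approximation:

```python
# Minimal sketch: exact gaussian tail probability, Eq. (5), versus the
# large-argument erfc approximation of Eq. (6).
import numpy as np
from scipy.special import erfc

for z in (1.0, 2.0, 4.0):
    x = z/np.sqrt(2)
    exact = 0.5*erfc(x)                            # P{x > z} for unit variance
    approx = 0.5*np.exp(-x**2)/(x*np.sqrt(np.pi))  # Eq. (6) inserted into (5)
    print(z, exact, approx)                        # approximation improves as z grows
```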
Joint gaussian probability distribution

The two-dimensional joint gaussian probability distribution for two independent random variables with the same variance is given by

$$f_{xy}(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-[(x - \langle x \rangle)^2 + (y - \langle y \rangle)^2]/2\sigma^2}$$ (7)

If the two gaussian random variables each have zero mean and are correlated so that

$$\rho_{xy} \doteq \frac{\langle xy \rangle}{\sigma^2}$$ (8)

then the joint probability distribution may be written as

$$f_{xy}(x, y) = \frac{1}{2\pi\sigma^2 \sqrt{1 - \rho_{xy}^2}} \exp\left( -\frac{x^2 - 2\rho_{xy}\, xy + y^2}{2\sigma^2 (1 - \rho_{xy}^2)} \right)$$ (9)

If $\rho_{xy} = 0$, then the probability distribution is separable and factors into $f_{xy}(x, y) = f_x(x) f_y(y)$. Therefore, uncorrelated gaussian random variables are independent.
Circularly Symmetric Gaussian Random Variables

A two-dimensional gaussian distribution is often used to model complex-baseband noise. The corresponding random variable is called a complex gaussian random variable. If the two noise components are independent and have equal variance, then the two-dimensional gaussian distribution is circularly symmetric. The corresponding random variable is called a circularly-symmetric gaussian random variable.

It is possible to have a joint two-dimensional probability distribution that has marginal gaussian distributions but is not jointly gaussian. Knowing that each marginal distribution is gaussian is not sufficient to infer that the joint distribution is gaussian.
Multivariate Gaussian Distribution

The probability distribution for the multivariate gaussian distribution is given by

$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{(2\pi)^{N/2} \sqrt{\det \mathbf{C}}}\, e^{-\frac{1}{2} (\mathbf{x} - \langle \mathbf{x} \rangle)^T \mathbf{C}^{-1} (\mathbf{x} - \langle \mathbf{x} \rangle)}$$ (10)

where $\mathbf{C}$ is the real autocovariance matrix. The matrix is defined as the expectation of the outer product of the column vector with itself after removing the mean:

$$\mathbf{C} = \left\langle (\mathbf{x} - \langle \mathbf{x} \rangle)(\mathbf{x} - \langle \mathbf{x} \rangle)^T \right\rangle$$ (11)

The on-diagonal matrix element $C_{ii}$ is the variance of the gaussian random variable $x_i$. The off-diagonal matrix element $C_{ij}$ is the autocovariance of the two gaussian random variables $x_i$ and $x_j$.
Example

As an example, let a set of $M$ uncorrelated gaussian random variables have an autocovariance matrix given by $\mathbf{C} = \sigma^2 \mathbf{I}_M$, where $\mathbf{I}_M$ is an $M$-by-$M$ identity matrix. Using the matrix identity $\det(\sigma^2 \mathbf{I}_M) = \sigma^{2M}$, the joint probability distribution is

$$f_{\mathbf{x}}(\mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{M/2}}\, e^{-\|\mathbf{x} - \langle \mathbf{x} \rangle\|^2 / 2\sigma^2}$$ (12)
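A sampling check of (11) and (12): this sketch (assuming NumPy; the dimension and $\sigma$ are illustrative) draws from the distribution with $\mathbf{C} = \sigma^2 \mathbf{I}_M$ and estimates the autocovariance matrix:

```python
# Minimal sketch: sample autocovariance of a multivariate gaussian with
# C = sigma^2 * I_M, per Eq. (11).
import numpy as np

M, sigma = 3, 0.5
rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(M), sigma**2*np.eye(M), size=100_000)

C_hat = np.cov(x, rowvar=False)   # estimate of <(x - <x>)(x - <x>)^T>
print(np.round(C_hat, 3))         # ~0.25 on the diagonal, ~0 off the diagonal
```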
Circularly-Symmetric Complex Gaussian Random Variables

A multivariate joint gaussian distribution can also be defined in which each component of the column vector is a complex gaussian random variable $z_i = \mathrm{Re}[z_i] + i\, \mathrm{Im}[z_i]$. The complex autocovariance matrix of this vector is defined as

$$\mathbf{W} \doteq \left\langle (\mathbf{z} - \langle \mathbf{z} \rangle)(\mathbf{z} - \langle \mathbf{z} \rangle)^{\dagger} \right\rangle$$ (13)

If the complex gaussian random variables are jointly gaussian and jointly circularly symmetric, then the real autocovariance matrix $\mathbf{C}$ given in (11) can be expressed in terms of the complex autocovariance matrix $\mathbf{W}$ given in (13) as

$$\mathbf{C} = \frac{1}{2} \begin{bmatrix} \mathrm{Re}\, \mathbf{W} & -\mathrm{Im}\, \mathbf{W} \\ \mathrm{Im}\, \mathbf{W} & \mathrm{Re}\, \mathbf{W} \end{bmatrix}$$ (14)

(asked as a problem)
pdf for a Vector of Circularly-Symmetric Gaussian RVs

The joint probability distribution for a circularly-symmetric multivariate gaussian distribution, expressed in terms of the complex autocovariance matrix, is

$$f_{\mathbf{z}}(\mathbf{z}) = \frac{1}{\pi^N \det \mathbf{W}}\, e^{-(\mathbf{z} - \langle \mathbf{z} \rangle)^{\dagger} \mathbf{W}^{-1} (\mathbf{z} - \langle \mathbf{z} \rangle)}$$ (15)

Using the properties of determinants, the leading term $(\pi^N \det \mathbf{W})^{-1}$ may be written as $\det(\pi \mathbf{W})^{-1}$.

If $\mathbf{W} = 2\sigma^2 \mathbf{I}_M$, then following the same steps used to derive (12), we have

$$f_{\mathbf{z}}(\mathbf{z}) = \frac{1}{(2\pi\sigma^2)^M}\, e^{-\|\mathbf{z} - \langle \mathbf{z} \rangle\|^2 / 2\sigma^2}$$ (16)
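For the scalar case $M = 1$, a circularly-symmetric sample can be built from independent real and imaginary parts. This sketch (assuming NumPy; $\sigma$ and the sample count are illustrative, and the check of $\langle zz \rangle$, sometimes called the pseudo-covariance, is an added illustration of circular symmetry not spelled out in the notes) verifies $W = \langle z z^* \rangle = 2\sigma^2$:

```python
# Minimal sketch: scalar circularly-symmetric complex gaussian with
# W = 2*sigma^2; real and imaginary parts are i.i.d. with variance sigma^2.
import numpy as np

sigma, n = 1.0, 200_000
rng = np.random.default_rng(2)
z = sigma*(rng.standard_normal(n) + 1j*rng.standard_normal(n))

print(np.mean(z*np.conj(z)))   # ~2*sigma^2: the complex autocovariance W
print(np.mean(z*z))            # ~0: vanishes for a circularly-symmetric variable
```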
Example

A new set of decorrelated gaussian random variables may be defined by using a coordinate transformation matrix $\mathbf{T}$ that diagonalizes the autocovariance matrix $\mathbf{C}$. The uncorrelated gaussian random variables in the new coordinate system are then independent, but may have changed mean values and variances.

Consider a complex gaussian random variable defined in (9) with $\rho_{xy} \neq 0$. The joint probability distribution $f_{xy}(x, y)$ for these correlated gaussian random variables is shown below.

[Figure: contours of $f_{xy}(x, y)$, (a) in the original $(x, y)$ coordinates and (b) in the rotated $(x', y')$ coordinates.]
Example (cont.)

To determine the transformation that produces decorrelated random variables, define a new set of random variables $x'$ and $y'$ such that

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \mathbf{A} \begin{bmatrix} x \\ y \end{bmatrix}$$

where $\mathbf{A}$ is the matrix that diagonalizes the autocovariance matrix $\mathbf{C}$, given as

$$\mathbf{C} = \begin{bmatrix} \sigma_x^2 & \rho_{xy} \sigma_x \sigma_y \\ \rho_{xy} \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}$$

The matrix $\mathbf{A}$ is formed from the eigenvectors of $\mathbf{C}$. If $\sigma_x^2 = \sigma_y^2 = \sigma^2$, then $x' = \frac{1}{\sqrt{2}}(x + y)$ and $y' = \frac{1}{\sqrt{2}}(x - y)$. The eigenvalues of $\mathbf{C}$ are the variances of the decorrelated gaussian random variables and are given by $\sigma_{x'}^2 = \sigma^2 (1 + \rho_{xy})$ and $\sigma_{y'}^2 = \sigma^2 (1 - \rho_{xy})$.
Decorrelated Distribution

The decorrelated joint gaussian distribution is

$$f_{x'y'}(x', y') = \left[ \frac{1}{\sqrt{2\pi\sigma^2 (1 + \rho_{xy})}}\, e^{-x'^2 / 2\sigma^2 (1 + \rho_{xy})} \right] \left[ \frac{1}{\sqrt{2\pi\sigma^2 (1 - \rho_{xy})}}\, e^{-y'^2 / 2\sigma^2 (1 - \rho_{xy})} \right]$$

where the functional form is written to show that the probability distribution in the new coordinate system is separable as $f_{x'y'}(x', y') = f_{x'}(x') f_{y'}(y')$. Therefore $x'$ and $y'$ are independent gaussian random variables.
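The eigendecomposition in this example can be reproduced numerically. This sketch (assuming NumPy; $\sigma^2$ and $\rho$ are illustrative values) diagonalizes $\mathbf{C}$ for $\sigma_x = \sigma_y$ and recovers the variances $\sigma^2(1 \pm \rho_{xy})$:

```python
# Minimal sketch: eigendecomposition of the 2x2 autocovariance matrix for
# sigma_x = sigma_y; eigenvalues are sigma^2*(1 -/+ rho), per the example.
import numpy as np

sigma2, rho = 1.0, 0.6
C = sigma2*np.array([[1.0, rho],
                     [rho, 1.0]])

eigvals, A = np.linalg.eigh(C)   # columns of A are the eigenvectors of C
print(eigvals)                   # [sigma^2*(1 - rho), sigma^2*(1 + rho)]
print(A)                         # ~ (x - y)/sqrt(2) and (x + y)/sqrt(2) directions
```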
Correlated Gaussian Random Variables

Two jointly gaussian random variables, each with zero mean, have the joint pdf

$$p_{uv}(u, v) = \frac{1}{2\pi\sigma^2 \sqrt{1 - \rho^2}} \exp\left[ -\frac{u^2 + v^2 - 2uv\rho}{2\sigma^2 (1 - \rho^2)} \right], \qquad \rho \doteq \frac{\langle uv \rangle}{\sigma^2}$$

Uncorrelated jointly gaussian random variables are independent (one of the rare times that being uncorrelated implies independence).

Sums of statistically independent gaussian random variables are also gaussian, with the total variance being the sum of the individual variances. Sums of dependent gaussian random variables are also gaussian, but the variance of the sum is not the sum of the individual variances. If $\sigma$ is the same for each distribution, the total variance varies from $2\sigma^2$ when the two are independent ($\rho = 0$) to $4\sigma^2$ when the two are completely correlated ($\rho = 1$), as the sketch below shows.
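A minimal simulation sketch (assuming NumPy; $\sigma^2$ and the $\rho$ values are illustrative) confirms how the variance of the sum moves between these limits:

```python
# Minimal sketch: variance of u + v for equal-variance jointly gaussian u, v is
# 2*sigma^2*(1 + rho), moving from 2*sigma^2 at rho = 0 to 4*sigma^2 at rho = 1.
import numpy as np

sigma2 = 1.0
rng = np.random.default_rng(3)
for rho in (0.0, 0.5, 1.0):
    C = sigma2*np.array([[1.0, rho], [rho, 1.0]])
    uv = rng.multivariate_normal([0.0, 0.0], C, size=200_000)
    print(rho, np.var(uv.sum(axis=1)))   # ~2*(1 + rho)
```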
Central Limit Theorem

Let the pdf of the normalized sum of $n$ independent random variables be defined by

$$z = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{u_i - \langle u_i \rangle}{\sigma_i}$$

where it is assumed that $z$ has unit variance and zero mean. As $n \to \infty$, the pdf of $z$ approaches a gaussian distribution regardless of the underlying pdfs, or

$$\lim_{n \to \infty} p_z(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$$
Central Limit Theorem (cont.)

The result is only true in the limit. Finite sums may not approach a gaussian, especially in the tails. The contributions from the individual distributions must be small. The simulation sketch below illustrates the tail behavior.
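This sketch (assuming NumPy and SciPy; the exponential pdf, sample counts, and threshold are illustrative) compares the empirical tail probability of the standardized sum against the gaussian tail, showing that the tails converge last:

```python
# Minimal sketch: CLT for standardized sums of exponential variables; the
# empirical tail P{z > 3} approaches the gaussian value slowly as n grows.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(4)
gauss_tail = 0.5*erfc(3/np.sqrt(2))          # ~1.35e-3
for n in (2, 10, 100):
    u = rng.exponential(size=(100_000, n))   # mean 1, variance 1
    z = (u - 1.0).sum(axis=1)/np.sqrt(n)     # zero mean, unit variance
    print(n, np.mean(z > 3), gauss_tail)     # center converges before the tail
```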
Random Phasors

Recall that a single component of the field is real:

$$U(z, t) = A(z, t) \cos[\omega t - kz + \phi(z, t)]$$

At a specific point in space $r$ and a specific time $t$, the amplitude of the field is $u = A \cos\phi$, where $A$ and $\phi$ are random variables. Let $u = \mathrm{Re}\{U\}$ where $U = A e^{j\phi}$ is a random complex phasor. Consider it as a vector in the complex plane.
Random Phasor Sum

Consider the sum of a large number of random phasors added to a constant phasor $A$ as shown below.

[Figure: (a) sum of many independent random vectors in the complex plane; (b) the resulting joint gaussian pdf. Axes: real axis (in-phase) and imaginary axis (quadrature).]
Random Phasors (cont. 1)

Write down the real and imaginary parts of the phasor:

$$r = \mathrm{Re}\{a\} = A \cos\phi + \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \alpha_k \cos\phi_k$$

$$i = \mathrm{Im}\{a\} = A \sin\phi + \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \alpha_k \sin\phi_k$$

Both $r$ and $i$ are sums of many independent contributions, and thus, as asserted by the central limit theorem, the pdf of each is gaussian; it can be shown that $r$ and $i$ are jointly gaussian.
Mean of the Phasor Sum

Now calculate the mean, variance, and correlation of the phasor sum. The mean value of the fluctuating part of the distribution is given by

$$\langle r \rangle = \left\langle \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \alpha_k \cos\phi_k \right\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} \langle \alpha_k \rangle \langle \cos\phi_k \rangle \quad \text{(from independence)}$$

$$= \sqrt{N}\, \langle \alpha \rangle \langle \cos\phi \rangle \quad \text{(from iid)} \qquad = 0 \quad (\phi \text{ uniform over } (-\pi, \pi))$$
Variance of the Phasor Sum

Start with the definition of variance:

$$\sigma_r^2 = \langle r^2 \rangle - \langle r \rangle^2$$

$\langle r \rangle = 0$ from before, so that $\sigma_r^2 = \langle r^2 \rangle$ and

$$\langle r^2 \rangle = \frac{1}{N} \sum_{k=1}^{N} \sum_{n=1}^{N} \left\langle \alpha_n \cos\phi_n\, \alpha_k \cos\phi_k \right\rangle = \frac{1}{N} \sum_{k=1}^{N} \sum_{n=1}^{N} \langle \alpha^2 \rangle \langle \cos\phi_n \cos\phi_k \rangle$$

(from independence and the same distribution).

For $n = k$: $\langle \cos\phi_n \cos\phi_k \rangle = \frac{1}{2}$ (the mean of $\cos^2\phi$ is $\frac{1}{2}$ for $\phi$ uniform).

For $n \neq k$: $\langle \cos\phi_n \cos\phi_k \rangle = 0$ (the terms add out of phase).
Variance and Correlation

The variance then becomes

$$\langle r^2 \rangle = \langle i^2 \rangle = \frac{\langle \alpha^2 \rangle}{2} = \sigma^2$$

The correlation is

$$\langle ri \rangle - \langle r \rangle \langle i \rangle = \frac{1}{N} \sum_{k=1}^{N} \sum_{n=1}^{N} \langle \alpha^2 \rangle \langle \cos\phi_n \sin\phi_k \rangle = 0$$
Joint pdf of the Phasor Sum

A sum of a large number of phasors produces a joint gaussian process:

$$f_{xy}(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-[(x - \langle x \rangle)^2 + (y - \langle y \rangle)^2]/2\sigma^2}, \qquad \sigma^2 = \langle \alpha^2 \rangle / 2$$ (17)

The variances are the same; $x$ and $y$ are zero mean, jointly gaussian, and independent. The simulation sketch below checks these moments.
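A minimal sketch (assuming NumPy; the Rayleigh amplitude distribution and the sizes are illustrative choices, not from the notes) checks the derived moments of the fluctuating part of the phasor sum:

```python
# Minimal sketch: random phasor sum with uniform phases; check zero means,
# variance <alpha^2>/2 per quadrature, and zero correlation between r and i.
import numpy as np

N, trials = 100, 50_000
rng = np.random.default_rng(5)
alpha = rng.rayleigh(scale=1.0, size=(trials, N))   # <alpha^2> = 2 for this choice
phi = rng.uniform(-np.pi, np.pi, size=(trials, N))

a = (alpha*np.exp(1j*phi)).sum(axis=1)/np.sqrt(N)   # fluctuating phasor sum
r, i = a.real, a.imag

print(np.mean(r), np.mean(i))   # ~0, ~0
print(np.var(r), np.var(i))     # ~<alpha^2>/2 = 1
print(np.mean(r*i))             # ~0: quadrature components uncorrelated
```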