Introduction to Probability and Stochastic Processes - Part I
Lecture 2

Henrik Vie Christensen
vie@control.auc.dk
Department of Control Engineering
Institute of Electronic Systems
Aalborg University, Denmark

Slides originally by: Line Ørtoft Endelt
From Experiment to Probability

- Experiment, E
- Sample space, S (containing the outcomes of the experiment)
- Events and/or a random variable are defined on the sample space
- A probability measure is found/chosen

A probabilistic model contains:
- Sample space
- Probability measure
- Class of sets forming the domain of the probability measure
Example

The joint density function of X and Y is

f_{X,Y}(x,y) = \begin{cases} axy, & 1 \le x \le 3,\; 2 \le y \le 4 \\ 0, & \text{elsewhere} \end{cases}

Find a:

1 = \int_2^4 \int_1^3 axy \, dx \, dy
  = a \int_2^4 y \left[ \frac{x^2}{2} \right]_1^3 dy
  = a \int_2^4 4y \, dy = 24a

so a = \frac{1}{24}.
Example (continued)

The marginal pdf of X:

f_X(x) = \frac{1}{24} \int_2^4 xy \, dy = \frac{x}{4}, \quad 1 \le x \le 3
f_X(x) = 0 \quad \text{elsewhere}

The distribution function of Y is

F_Y(y) = 0, \quad y \le 2
F_Y(y) = 1, \quad y > 4
F_Y(y) = \frac{1}{24} \int_2^y \int_1^3 x v \, dx \, dv
       = \frac{1}{6} \int_2^y v \, dv
       = \frac{1}{12}(y^2 - 4), \quad 2 \le y \le 4
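As a quick cross-check of this example (not from the original slides), the constant a, the marginal pdf, and F_Y can be verified numerically; the sketch below assumes SciPy is available and uses arbitrary evaluation points.

```python
# Numerical check of the example: a = 1/24, f_X(x) = x/4, F_Y(y) = (y^2 - 4)/12.
from scipy import integrate

a = 1 / 24

# Normalization: the double integral of a*x*y over [1,3] x [2,4] should be 1.
total, _ = integrate.dblquad(lambda y, x: a * x * y, 1, 3, 2, 4)
print(total)                     # ~1.0

# Marginal pdf of X at x = 2: integrate over y, compare with x/4 = 0.5.
fx, _ = integrate.quad(lambda y: a * 2 * y, 2, 4)
print(fx)                        # ~0.5

# Distribution function of Y at y = 3: compare with (3^2 - 4)/12 = 5/12.
Fy, _ = integrate.dblquad(lambda v, x: a * x * v, 1, 3, 2, 3)
print(Fy, 5 / 12)
```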
Uniform Probability Density Function

X has a uniform pdf if

f_X(x) = \begin{cases} \frac{1}{b-a}, & a \le x \le b \\ 0, & \text{elsewhere} \end{cases}

The mean and variance are

\mu_X = \frac{a+b}{2}, \qquad \sigma_X^2 = \frac{(b-a)^2}{12}
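A minimal numerical sketch of the mean and variance formulas, assuming SciPy is available; the endpoints a = 1, b = 3 are chosen purely for illustration.

```python
# Quick check of the uniform mean and variance formulas for an arbitrary interval.
from scipy import stats

a, b = 1.0, 3.0
U = stats.uniform(loc=a, scale=b - a)   # uniform distribution on [a, b]

print(U.mean(), (a + b) / 2)            # both 2.0
print(U.var(), (b - a) ** 2 / 12)       # both 1/3
```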
Figure: The uniform pdf f_X(x), of height \frac{1}{b-a} on [a,b], and the corresponding distribution function F_X(x), which rises from 0 at x = a to 1 at x = b.
Gaussian Probability Density Function

Electrical noise in communication systems is often due to the cumulative effects of a large number of randomly moving charged particles, and hence the instantaneous value of the noise will tend to have a Gaussian distribution.

The Gaussian pdf is given by

f_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\left\{ -\frac{(x-\mu_X)^2}{2\sigma_X^2} \right\}

P(X > a) = \int_a^\infty \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\left\{ -\frac{(x-\mu_X)^2}{2\sigma_X^2} \right\} dx
         = \int_{(a-\mu_X)/\sigma_X}^\infty \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{z^2}{2} \right\} dz
Gaussian Probability Density Function

The Q function is defined by

Q(y) = \frac{1}{\sqrt{2\pi}} \int_y^\infty \exp\left\{ -\frac{z^2}{2} \right\} dz, \quad y > 0

P(X > a) = Q\!\left( \frac{a - \mu_X}{\sigma_X} \right)

For the standard normal distribution (\mu = 0, \sigma = 1):

P(X \le x) = 1 - Q(x)
P(-a \le X \le a) = 2P(-a \le X \le 0) = 2P(0 \le X \le a)
P(X \le 0) = \frac{1}{2} = Q(0)
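SciPy has no routine literally named Q, but the definition above corresponds to the standard normal survival function; the sketch below defines a small helper under that assumption and checks the identities on this slide.

```python
# A minimal sketch of the Q function via SciPy's standard normal survival
# function (Q(y) = P(Z > y) for Z ~ N(0, 1)); the helper name Q is ours.
from scipy.stats import norm

def Q(y: float) -> float:
    """Upper-tail probability of the standard normal distribution."""
    return norm.sf(y)                 # sf = 1 - cdf

print(Q(0.0))                         # 0.5, matching P(X <= 0) = 1/2 = Q(0)
print(1 - Q(1.0), norm.cdf(1.0))      # P(X <= 1) = 1 - Q(1), both ~0.8413
```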
The Standard Normal Distribution

The normal distribution with \mu = 0 and \sigma_X^2 = 1 is called the standard normal distribution.

Figure: The standard normal pdf plotted for -5 \le x \le 5 (peak value approximately 0.4).
The Normal Distribution

Figure: Normal pdfs for (\mu = 0, \sigma = 0.5), (\mu = 0, \sigma = 1), (\mu = 0, \sigma = 2) and (\mu = 2, \sigma = 1), plotted for -5 \le x \le 5.
Example I

X: the voltage output of a noise generator, standard normal distribution.
Find P(X > 2.3) and P(1 \le X \le 2.3).

  y      Q(y)        y      Q(y)
  0.90   0.1841      2.20   0.0139
  0.95   0.1711      2.30   0.0107
  1.00   0.1587      2.40   0.0082

P(X > 2.3) = Q(2.3) \approx 0.0107
P(1 \le X \le 2.3) = (1 - Q(2.3)) - (1 - Q(1)) = Q(1) - Q(2.3) \approx 0.148
Example II

V: the velocity of the wind at a certain location, normally distributed with \mu = 2 and \sigma = 5.
Find P(-3 \le V \le 8).

  y      Q(y)
  1.00   0.1587
  1.20   0.1151

P(-3 \le V \le 8) = \int_{-3}^{8} \frac{1}{\sqrt{2\pi \cdot 25}} \exp\left[ -\frac{(v-2)^2}{2 \cdot 25} \right] dv
                  = \int_{(-3-2)/5}^{(8-2)/5} \frac{1}{\sqrt{2\pi}} \exp\left[ -\frac{x^2}{2} \right] dx
                  = P(X \le 1.2) - P(X \le -1)
                  = (1 - Q(1.2)) - (1 - Q(-1)) \approx 0.726

since Q(-1) = 1 - Q(1) due to symmetry.
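Both examples can be verified without the Q-table; the sketch below is an illustrative check assuming SciPy's normal distribution routines.

```python
# Numerical check of Examples I and II with SciPy.
from scipy.stats import norm

# Example I: X standard normal.
print(norm.sf(2.3))                       # P(X > 2.3) = Q(2.3) ~ 0.0107
print(norm.cdf(2.3) - norm.cdf(1.0))      # P(1 <= X <= 2.3) ~ 0.148

# Example II: V ~ N(mu = 2, sigma = 5).
V = norm(loc=2, scale=5)
print(V.cdf(8) - V.cdf(-3))               # P(-3 <= V <= 8) ~ 0.726
```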
Bivariate Gaussian pdf

The bivariate Gaussian pdf is given by

f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}
  \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x-\mu_X}{\sigma_X}\right)^2
  - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y}
  + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2 \right] \right\}

where

\rho = \rho_{XY} = \frac{E\{(X-\mu_X)(Y-\mu_Y)\}}{\sigma_X\sigma_Y} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y}
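A small sketch, assuming SciPy and NumPy, that evaluates the formula above directly and compares it with scipy.stats.multivariate_normal; all parameter values are arbitrary illustration values.

```python
# Evaluate the bivariate Gaussian pdf from the (mu, sigma, rho) parametrization
# and compare with SciPy's multivariate normal pdf.
import numpy as np
from scipy.stats import multivariate_normal

mu_x, mu_y = 1.0, -2.0
sig_x, sig_y, rho = 2.0, 0.5, 0.3

def f_xy(x, y):
    u = (x - mu_x) / sig_x
    v = (y - mu_y) / sig_y
    norm_const = 2 * np.pi * sig_x * sig_y * np.sqrt(1 - rho**2)
    return np.exp(-(u**2 - 2 * rho * u * v + v**2) / (2 * (1 - rho**2))) / norm_const

cov = [[sig_x**2, rho * sig_x * sig_y],
       [rho * sig_x * sig_y, sig_y**2]]
rv = multivariate_normal(mean=[mu_x, mu_y], cov=cov)

print(f_xy(0.5, -1.0), rv.pdf([0.5, -1.0]))   # the two values should agree
```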
Complex Random Variables

Given two random variables X and Y, a complex random variable can be defined as

Z = X + jY

and the expected value of g(Z) is defined as

E\{g(Z)\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(z) f_{X,Y}(x,y) \, dx \, dy, \quad z = x + jy

\mu_Z = E\{Z\} = E\{X\} + jE\{Y\} = \mu_X + j\mu_Y
\sigma_Z^2 = E\{|Z - \mu_Z|^2\}

and the covariance of two complex random variables is

C_{Z_m Z_n} = E\{(Z_m - \mu_{Z_m})^* (Z_n - \mu_{Z_n})\}
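A minimal NumPy sketch of these definitions using sample averages; the means, standard deviations, and sample size are arbitrary illustration values.

```python
# Sample statistics of a complex random variable Z = X + jY in NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=200_000)       # X with mean 1, std 2
y = rng.normal(-0.5, 1.0, size=200_000)      # Y with mean -0.5, std 1
z = x + 1j * y

mu_z = z.mean()                               # estimate of mu_X + j*mu_Y
var_z = np.mean(np.abs(z - mu_z) ** 2)        # estimate of E{|Z - mu_Z|^2} = 4 + 1

print(mu_z, var_z)
```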
Joint Distribution and Density

The joint probability distribution function for m random variables X_1, \ldots, X_m is given by

F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) = P[(X_1 \le x_1), \ldots, (X_m \le x_m)]

The density function for continuous random variables is the partial derivative of the distribution function,

f_{X_1,\ldots,X_m}(x_1,\ldots,x_m) = \frac{\partial^m F_{X_1,\ldots,X_m}(x_1,\ldots,x_m)}{\partial x_1 \cdots \partial x_m}
Marginal PDF

The marginal density function for X_1 is given by

f_{X_1}(x_1) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \, dx_2 \cdots dx_m

The joint marginal density function of two of the random variables is found as

f_{X_1,X_2}(x_1,x_2) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \, dx_3 \cdots dx_m
Conditional PDF

The conditional density functions are given by

f_{X_2,X_3,\ldots,X_m \mid X_1}(x_2,x_3,\ldots,x_m \mid x_1) = \frac{f_{X_1,X_2,\ldots,X_m}(x_1,x_2,\ldots,x_m)}{f_{X_1}(x_1)}

and

f_{X_3,X_4,\ldots,X_m \mid X_1,X_2}(x_3,x_4,\ldots,x_m \mid x_1,x_2) = \frac{f_{X_1,X_2,\ldots,X_m}(x_1,x_2,\ldots,x_m)}{f_{X_1,X_2}(x_1,x_2)}
Expected values

The expected value of a function g(X_1,\ldots,X_m) defined on m random variables is given by

E\{g(X_1,\ldots,X_m)\} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1,\ldots,x_m) f_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \, dx_1 \cdots dx_m

and the conditional expected value given X_1 and X_2 is

E\{g(X_1,\ldots,X_m) \mid X_1, X_2\} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1,\ldots,x_m) f_{X_3,\ldots,X_m \mid X_1,X_2}(x_3,\ldots,x_m \mid x_1,x_2) \, dx_3 \cdots dx_m
Mean value and covariance

The mean value and the covariance are

\mu_{X_i} = E\{X_i\}
\sigma_{X_i X_j} = E\{X_i X_j\} - \mu_{X_i} \mu_{X_j}

Note that when i = j, \sigma_{X_i X_i} is the variance of X_i.
Random vectors

The m random variables X_1,\ldots,X_m can be represented using a vector

X = \begin{bmatrix} X_1 \\ \vdots \\ X_m \end{bmatrix} \quad \text{or} \quad X = (X_1, X_2, \ldots, X_m)^T

A possible value of the random vector (relating to an outcome of the underlying experiment) is represented as x = (x_1, x_2, \ldots, x_m)^T, and the joint pdf (the same as on the Joint Distribution and Density slide) is denoted by

f_X(x) = f_{X_1,\ldots,X_m}(x_1,\ldots,x_m)
Mean Value and Covariance-matrix

The mean vector is defined as

\mu_X = E(X) = \begin{bmatrix} E(X_1) \\ E(X_2) \\ \vdots \\ E(X_m) \end{bmatrix}

and the covariance matrix is defined as

\Sigma_X = E\{XX^T\} - \mu_X \mu_X^T =
\begin{bmatrix}
\sigma_{X_1 X_1} & \sigma_{X_1 X_2} & \cdots & \sigma_{X_1 X_m} \\
\sigma_{X_2 X_1} & \sigma_{X_2 X_2} & \cdots & \sigma_{X_2 X_m} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{X_m X_1} & \sigma_{X_m X_2} & \cdots & \sigma_{X_m X_m}
\end{bmatrix}
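A short NumPy sketch, with arbitrary example parameters, showing how the mean vector and covariance matrix are estimated from samples.

```python
# Estimating the mean vector and covariance matrix from samples with NumPy.
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 0.8]])

X = rng.multivariate_normal(mu, Sigma, size=100_000)   # rows are samples of X

mu_hat = X.mean(axis=0)                 # estimate of E(X)
Sigma_hat = np.cov(X, rowvar=False)     # estimate of E{XX^T} - mu mu^T

print(mu_hat)
print(Sigma_hat)
```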
Independent RV

The random variables are uncorrelated when their covariances are 0, i.e.

\sigma_{X_i X_j} = \sigma_{ij} = 0, \quad \text{for } i \ne j

and independent when

f_X(x) = f_{X_1,\ldots,X_m}(x_1,\ldots,x_m) = \prod_{i=1}^m f_{X_i}(x_i)
Multivariate Gaussian Distribution

A random vector X is multivariate Gaussian if its pdf is given by

f_X(x) = \frac{1}{(2\pi)^{m/2} |\Sigma_X|^{1/2}} \exp\left[ -\frac{1}{2} (x - \mu_X)^T \Sigma_X^{-1} (x - \mu_X) \right]

where |\Sigma_X| is the determinant of \Sigma_X.
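The matrix formula can be coded directly; the sketch below, assuming NumPy and SciPy, compares it against scipy.stats.multivariate_normal for an arbitrary 3-dimensional example.

```python
# The multivariate Gaussian pdf written directly from the matrix formula above.
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, Sigma):
    m = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(Sigma, d)            # (x - mu)^T Sigma^{-1} (x - mu)
    norm_const = (2 * np.pi) ** (m / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])
x = np.array([0.5, 0.0, -0.5])

print(gaussian_pdf(x, mu, Sigma), multivariate_normal(mu, Sigma).pdf(x))
```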
Multivariate Gaussian Distribution

If X has a multivariate Gaussian distribution, then

1. If X is partitioned as

X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}, \quad
X_1 = \begin{bmatrix} X_1 \\ \vdots \\ X_k \end{bmatrix}, \quad
X_2 = \begin{bmatrix} X_{k+1} \\ \vdots \\ X_m \end{bmatrix}

and

\mu_X = \begin{bmatrix} \mu_{X_1} \\ \mu_{X_2} \end{bmatrix}, \quad
\Sigma_X = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}

then X_1 has a k-dimensional multivariate Gaussian distribution, with mean value \mu_{X_1} and covariance \Sigma_{11}.
Multivariate Gaussian Distribution

2. If \Sigma_X is diagonal, then the components of X are independent. Note that this ONLY holds for the Gaussian distribution.

3. If A is a k \times m matrix of rank k, then Y = AX has a k-variate Gaussian distribution with

\mu_Y = A\mu_X, \qquad \Sigma_Y = A\Sigma_X A^T
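Property 3 can be checked by simulation; the sketch below, with arbitrary A, mu_X and Sigma_X, compares the sample mean and covariance of Y = AX with A mu_X and A Sigma_X A^T.

```python
# Numerical check of property 3: Y = AX is formed from Gaussian samples and its
# sample mean/covariance are compared with the closed-form expressions.
import numpy as np

rng = np.random.default_rng(2)
mu_X = np.array([1.0, 0.0, -1.0])
Sigma_X = np.array([[1.0, 0.3, 0.0],
                    [0.3, 2.0, 0.4],
                    [0.0, 0.4, 1.5]])
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])          # k = 2, m = 3, rank 2

X = rng.multivariate_normal(mu_X, Sigma_X, size=200_000)
Y = X @ A.T                               # each row is y = A x

print(Y.mean(axis=0), A @ mu_X)           # sample mean vs. A mu_X
print(np.cov(Y, rowvar=False))            # sample covariance ...
print(A @ Sigma_X @ A.T)                  # ... vs. A Sigma_X A^T
```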
Multivariate Gaussian Distribution

4. If X is partitioned as in 1, the conditional density of X_1 given X_2 = x_2 is a k-dimensional multivariate Gaussian with

\mu_{X_1 \mid X_2} = E[X_1 \mid X_2 = x_2] = \mu_{X_1} + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_{X_2})
\Sigma_{X_1 \mid X_2} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}
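A short NumPy sketch of the conditioning formulas in property 4, using small arbitrary blocks for the partitioned mean and covariance.

```python
# Conditional mean and covariance of X_1 given X_2 = x_2 for a partitioned Gaussian.
import numpy as np

mu1 = np.array([0.0, 1.0])                # mean of X_1 (k = 2)
mu2 = np.array([-1.0])                    # mean of X_2
S11 = np.array([[2.0, 0.5],
                [0.5, 1.0]])
S12 = np.array([[0.3],
                [0.2]])
S21 = S12.T
S22 = np.array([[1.5]])

x2 = np.array([0.5])                      # observed value of X_2

mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)

print(mu_cond)        # E[X_1 | X_2 = x_2]
print(Sigma_cond)     # conditional covariance of X_1 given X_2
```

Using np.linalg.solve instead of an explicit matrix inverse is the usual numerically safer way to apply Sigma_22^{-1}.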