Multiple Random Variables


1 Multiple Random Variables

2 Joint Probability Density

Let X and Y be two random variables. Their joint distribution function is
$$F_{XY}(x,y) \equiv P[X \le x \cap Y \le y]$$
$0 \le F_{XY}(x,y) \le 1, \quad -\infty < x < \infty,\ -\infty < y < \infty$
$F_{XY}(-\infty,-\infty) = F_{XY}(x,-\infty) = F_{XY}(-\infty,y) = 0$
$F_{XY}(\infty,\infty) = 1$
$F_{XY}(x,y)$ does not decrease if either x or y increases or both increase
$F_{XY}(\infty,y) = F_Y(y)$ and $F_{XY}(x,\infty) = F_X(x)$

3 Joint Probability Density

Joint distribution function for tossing two dice

4 Joint Probability Density

$$f_{XY}(x,y) = \frac{\partial^2}{\partial x\,\partial y}\bigl(F_{XY}(x,y)\bigr)$$
$f_{XY}(x,y) \ge 0, \quad -\infty < x < \infty,\ -\infty < y < \infty$
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x,y)\,dx\,dy = 1
\qquad
F_{XY}(x,y) = \int_{-\infty}^{y}\int_{-\infty}^{x} f_{XY}(\alpha,\beta)\,d\alpha\,d\beta$$
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy \quad\text{and}\quad f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx$$
$$P[x_1 < X \le x_2,\ y_1 < Y \le y_2] = \int_{y_1}^{y_2}\int_{x_1}^{x_2} f_{XY}(x,y)\,dx\,dy$$
$$E\bigl(g(X,Y)\bigr) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y)\,f_{XY}(x,y)\,dx\,dy$$

5 Example Combinations of Two Random Variables

X and Y are independent, identically distributed (i.i.d.) random variables with common PDF
$$f_X(x) = e^{-x}u(x) \qquad f_Y(y) = e^{-y}u(y)$$
Find the PDF of Z = X / Y. Since X and Y are never negative, Z is never negative.
$$F_Z(z) = P[X/Y \le z]$$
$$F_Z(z) = P[X \le zY,\ Y > 0] + P[X \ge zY,\ Y < 0]$$
Since Y is never negative,
$$F_Z(z) = P[X \le zY,\ Y > 0]$$

6 Combinations of Two Random Variables

$$F_Z(z) = \int_{0}^{\infty}\int_{0}^{zy} f_{XY}(x,y)\,dx\,dy = \int_{0}^{\infty}\int_{0}^{zy} e^{-x}e^{-y}\,dx\,dy, \quad z > 0$$
Using Leibniz's formula for differentiating an integral,
$$\frac{d}{dz}\int_{a(z)}^{b(z)} g(x,z)\,dx = \frac{db(z)}{dz}g\bigl(b(z),z\bigr) - \frac{da(z)}{dz}g\bigl(a(z),z\bigr) + \int_{a(z)}^{b(z)}\frac{\partial g(x,z)}{\partial z}\,dx$$
$$f_Z(z) = \frac{d}{dz}F_Z(z) = \int_{0}^{\infty} y\,e^{-zy}e^{-y}\,dy$$
$$f_Z(z) = \frac{u(z)}{(z+1)^2}$$
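
As a numerical sanity check (an aside, not part of the original slides; it assumes numpy is available), a short Monte Carlo sketch can confirm the result $f_Z(z) = u(z)/(z+1)^2$, whose CDF is $F_Z(z) = z/(z+1)$:

```python
import numpy as np

# Monte Carlo check of F_Z(z) = z/(z+1) for Z = X/Y with X, Y iid Exp(1).
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(1.0, n)   # f_X(x) = e^{-x} u(x)
y = rng.exponential(1.0, n)
z = x / y

for z0 in (0.5, 1.0, 4.0):
    print(z0, np.mean(z <= z0), z0 / (z0 + 1))  # empirical vs. analytic CDF
```

The empirical and analytic values agree to within Monte Carlo error.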

7 Combinations of Two Random Variables

8 Combinations of Two Random Variables

Example: The joint PDF of X and Y is defined as
$$f_{XY}(x,y) = \begin{cases} 6x, & x \ge 0,\ y \ge 0,\ x+y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
Define Z = X - Y. Find the PDF of Z.

9 Combinations of Two Random Variables

Given the constraints on X and Y, $-1 \le Z \le 1$. The line $Z = X - Y$ intersects $X + Y = 1$ at
$$X = \frac{1+Z}{2}, \qquad Y = \frac{1-Z}{2}$$
For $0 \le z \le 1$,
$$F_Z(z) = 1 - \int_{0}^{(1-z)/2}\int_{y+z}^{1-y} 6x\,dx\,dy = 1 - \int_{0}^{(1-z)/2}\Bigl[3x^2\Bigr]_{y+z}^{1-y}\,dy$$
$$F_Z(z) = 1 - \frac{3}{4}(1-z)(1-z^2) \qquad f_Z(z) = \frac{3}{4}(1-z)(1+3z)$$

10 Combinations of Two Random Variables

For $-1 \le z \le 0$,
$$F_Z(z) = 2\int_{-z}^{(1-z)/2}\int_{0}^{y+z} 6x\,dx\,dy = 2\int_{-z}^{(1-z)/2}\Bigl[3x^2\Bigr]_{0}^{y+z}\,dy = 6\int_{-z}^{(1-z)/2}(y+z)^2\,dy$$
$$F_Z(z) = \frac{(1+z)^3}{4} \qquad f_Z(z) = \frac{3}{4}(1+z)^2$$
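
Again, a Monte Carlo sketch (an aside, assuming numpy) can verify both branches. One way to sample from $f_{XY}(x,y) = 6x$ on the triangle is to note that the marginal $f_X(x) = 6x(1-x)$ is a Beta(2,2) PDF and that, given $X = x$, Y is uniform on $[0, 1-x]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.beta(2, 2, n)                 # marginal f_X(x) = 6x(1-x)
y = rng.uniform(0, 1, n) * (1 - x)    # Y | X = x is uniform on [0, 1-x]
z = x - y

for z0 in (-0.5, 0.25, 0.75):
    analytic = 0.25 * (1 + z0)**3 if z0 <= 0 else 1 - 0.75 * (1 - z0) * (1 - z0**2)
    print(z0, np.mean(z <= z0), analytic)   # empirical vs. analytic F_Z
```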

11 Joint Probability Density

Let
$$f_{XY}(x,y) = \frac{1}{w_X w_Y}\,\mathrm{rect}\!\left(\frac{x - X_0}{w_X}\right)\mathrm{rect}\!\left(\frac{y - Y_0}{w_Y}\right)$$
$$E(X) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,f_{XY}(x,y)\,dx\,dy = X_0 \qquad E(Y) = Y_0$$
$$E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f_{XY}(x,y)\,dx\,dy = X_0 Y_0$$
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \frac{1}{w_X}\,\mathrm{rect}\!\left(\frac{x - X_0}{w_X}\right)$$

12 Joint Probability Density

Conditional probability:
$$F_{X|A}(x) = \frac{P[\{X \le x\} \cap A]}{P[A]}$$
Let $A = \{Y \le y\}$. Then
$$F_{X|Y \le y}(x) = \frac{P[X \le x,\ Y \le y]}{P[Y \le y]} = \frac{F_{XY}(x,y)}{F_Y(y)}$$
Let $A = \{y_1 < Y \le y_2\}$. Then
$$F_{X|y_1 < Y \le y_2}(x) = \frac{F_{XY}(x,y_2) - F_{XY}(x,y_1)}{F_Y(y_2) - F_Y(y_1)}$$

13 Joint Probability Density

Let $A = \{Y = y\}$. Then
$$F_{X|Y=y}(x) = \lim_{\Delta y \to 0}\frac{F_{XY}(x,y+\Delta y) - F_{XY}(x,y)}{F_Y(y+\Delta y) - F_Y(y)} = \frac{\dfrac{\partial}{\partial y}F_{XY}(x,y)}{\dfrac{d}{dy}F_Y(y)} = \frac{\dfrac{\partial}{\partial y}F_{XY}(x,y)}{f_Y(y)}$$
$$f_{X|Y=y}(x) = \frac{\partial}{\partial x}F_{X|Y=y}(x) = \frac{f_{XY}(x,y)}{f_Y(y)}$$
Similarly,
$$f_{Y|X=x}(y) = \frac{f_{XY}(x,y)}{f_X(x)}$$

14 Joint Probability Density

In a simplified notation,
$$f_{X|Y}(x) = \frac{f_{XY}(x,y)}{f_Y(y)} \quad\text{and}\quad f_{Y|X}(y) = \frac{f_{XY}(x,y)}{f_X(x)}$$
Bayes' theorem:
$$f_{X|Y}(x)\,f_Y(y) = f_{Y|X}(y)\,f_X(x)$$
Marginal PDF's from joint or conditional PDF's:
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{-\infty}^{\infty} f_{X|Y}(x)\,f_Y(y)\,dy$$
$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dx = \int_{-\infty}^{\infty} f_{Y|X}(y)\,f_X(x)\,dx$$

15 Joint Probability Density

Example: Let a message X with a known PDF be corrupted by additive noise N, also with a known PDF, and received as Y = X + N. Then the best estimate that can be made of the message X is the value at the peak of the conditional PDF
$$f_{X|Y}(x) = \frac{f_{Y|X}(y)\,f_X(x)}{f_Y(y)}$$

16 Joint Probability Density

Let N have a known PDF. Then, for any known value of X, the PDF of Y would be that same PDF shifted by X, since Y = X + N. Therefore, if the PDF of N is $f_N(n)$, the conditional PDF of Y given X is $f_N(y - X)$.

17 Joint Probability Density

Using Bayes' theorem,
$$f_{X|Y}(x) = \frac{f_{Y|X}(y)\,f_X(x)}{f_Y(y)} = \frac{f_N(y-x)\,f_X(x)}{f_Y(y)}$$
where
$$f_Y(y) = \int_{-\infty}^{\infty} f_{Y|X}(y)\,f_X(x)\,dx = \int_{-\infty}^{\infty} f_N(y-x)\,f_X(x)\,dx$$
Therefore
$$f_{X|Y}(x) = \frac{f_N(y-x)\,f_X(x)}{\displaystyle\int_{-\infty}^{\infty} f_N(y-x)\,f_X(x)\,dx}$$
Now the conditional PDF of X given Y can be computed.

18 Joint Probability Density

To make the example concrete let
$$f_X(x) = \frac{e^{-x/E(X)}}{E(X)}u(x) \qquad f_N(n) = \frac{1}{\sigma_N\sqrt{2\pi}}e^{-n^2/2\sigma_N^2}$$
Then the PDF of Y is found to be
$$f_Y(y) = \frac{\exp\!\left(\dfrac{\sigma_N^2}{2E^2(X)} - \dfrac{y}{E(X)}\right)}{2E(X)}\left[1 + \mathrm{erf}\!\left(\frac{y - \sigma_N^2/E(X)}{\sqrt{2}\,\sigma_N}\right)\right]$$
where erf is the error function. With $f_Y(y)$ in hand, the conditional PDF of X given Y follows from the formulas above.

19 Joint Probability Density

20 Independent Random Variables

If two random variables X and Y are independent then
$$f_{X|Y}(x) = f_X(x) = \frac{f_{XY}(x,y)}{f_Y(y)} \quad\text{and}\quad f_{Y|X}(y) = f_Y(y) = \frac{f_{XY}(x,y)}{f_X(x)}$$
Therefore $f_{XY}(x,y) = f_X(x)\,f_Y(y)$, and their correlation is the product of their expected values:
$$E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f_{XY}(x,y)\,dx\,dy = \int_{-\infty}^{\infty} y\,f_Y(y)\,dy\int_{-\infty}^{\infty} x\,f_X(x)\,dx = E(X)\,E(Y)$$

21 Independent Random Variables

Covariance:
$$\sigma_{XY} \equiv E\Bigl[\bigl(X - E(X)\bigr)\bigl(Y - E(Y)\bigr)^*\Bigr]$$
$$\sigma_{XY} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\bigl(x - E(X)\bigr)\bigl(y^* - E(Y^*)\bigr)f_{XY}(x,y)\,dx\,dy$$
$$\sigma_{XY} = E(XY^*) - E(X)\,E(Y^*)$$
If X and Y are independent,
$$\sigma_{XY} = E(X)\,E(Y^*) - E(X)\,E(Y^*) = 0$$

22 Independent Random Variables

Correlation coefficient:
$$\rho_{XY} \equiv E\left[\left(\frac{X - E(X)}{\sigma_X}\right)\left(\frac{Y - E(Y)}{\sigma_Y}\right)^{\!*}\right]$$
$$\rho_{XY} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\frac{x - E(X)}{\sigma_X}\right)\left(\frac{y^* - E(Y^*)}{\sigma_Y}\right)f_{XY}(x,y)\,dx\,dy$$
$$\rho_{XY} = \frac{E(XY^*) - E(X)\,E(Y^*)}{\sigma_X\sigma_Y} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y}$$
If X and Y are independent, $\rho = 0$. If they are perfectly positively correlated, $\rho = +1$, and if they are perfectly negatively correlated, $\rho = -1$.
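
The correlation coefficient is easy to estimate from samples. A brief sketch (assuming numpy; the construction $Y = 0.8X + 0.6W$ is an illustrative choice that gives $\rho_{XY} = 0.8$ for unit-variance X and W):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
w = rng.normal(size=n)            # independent of x
y = 0.8 * x + 0.6 * w             # unit variance, Cov(X, Y) = 0.8 by construction

sxy = np.mean((x - x.mean()) * (y - y.mean()))   # sample covariance
rho = sxy / (x.std() * y.std())                  # sample correlation coefficient
print(rho)                                       # close to 0.8
```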

23 Independent Random Variables

If two random variables are independent, their covariance is zero. However, if two random variables have a zero covariance that does not mean they are necessarily independent.

Independence $\Rightarrow$ Zero Covariance
Zero Covariance $\nRightarrow$ Independence

24 Independent Random Variables

In the traditional jargon of random variable analysis, two "uncorrelated" random variables have a covariance of zero. Unfortunately, this does not also imply that their correlation is zero. If their correlation is zero they are said to be orthogonal.

X and Y are "uncorrelated" $\iff \sigma_{XY} = 0$
X and Y are "orthogonal" $\iff E(XY^*) = 0$

25 Independent Random Variables

The variance of a sum of random variables X and Y is
$$\sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2 + 2\sigma_{XY} = \sigma_X^2 + \sigma_Y^2 + 2\rho_{XY}\sigma_X\sigma_Y$$
If Z is a linear combination of random variables $X_i$,
$$Z = a_0 + \sum_{i=1}^{N} a_i X_i$$
then
$$E(Z) = a_0 + \sum_{i=1}^{N} a_i\,E(X_i)$$
$$\sigma_Z^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} a_i a_j \sigma_{X_i X_j} = \sum_{i=1}^{N} a_i^2\sigma_{X_i}^2 + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \ne i}}^{N} a_i a_j \sigma_{X_i X_j}$$

26 Independent Random Variables

If the X's are all independent of each other, the variance of the linear combination is a linear combination of the variances:
$$\sigma_Z^2 = \sum_{i=1}^{N} a_i^2\sigma_{X_i}^2$$
If Z is simply the sum of the X's, and the X's are all independent of each other, then the variance of the sum is the sum of the variances:
$$\sigma_Z^2 = \sum_{i=1}^{N}\sigma_{X_i}^2$$
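
A quick numerical check of the linear-combination rule (a sketch assuming numpy; the distributions and coefficients are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.exponential(2.0, n)     # variance 4
x2 = rng.uniform(0, 1, n)        # variance 1/12, independent of x1
z = 3 * x1 - 2 * x2              # a_1 = 3, a_2 = -2

print(z.var(), 9 * 4 + 4 * (1 / 12))   # sample variance vs. sum of a_i^2 sigma_i^2
```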

27 One Function of Two Random Variables

Let $Z = g(X,Y)$. Find the PDF of Z.
$$F_Z(z) = P[Z \le z] = P[g(X,Y) \le z] = P[(X,Y) \in R_Z]$$
where $R_Z$ is the region in the XY plane where $g(X,Y) \le z$. For example, let $Z = X + Y$.

28 Probability Density of a Sum of Random Variables

Let $Z = X + Y$. Then for Z to be less than z, X must be less than $z - Y$. Therefore, the distribution function for Z is
$$F_Z(z) = \int_{-\infty}^{\infty}\int_{-\infty}^{z-y} f_{XY}(x,y)\,dx\,dy$$
If X and Y are independent,
$$F_Z(z) = \int_{-\infty}^{\infty} f_Y(y)\left[\int_{-\infty}^{z-y} f_X(x)\,dx\right]dy$$
and it can be shown that
$$f_Z(z) = \int_{-\infty}^{\infty} f_Y(y)\,f_X(z-y)\,dy = f_Y(z) * f_X(z)$$
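
For X and Y iid with $f_X(x) = e^{-x}u(x)$, the convolution gives $f_Z(z) = z\,e^{-z}u(z)$, which a short numerical convolution reproduces (a sketch assuming numpy; the grid spacing is arbitrary):

```python
import numpy as np

dz = 0.01
z = np.arange(0, 20, dz)
f = np.exp(-z)                            # common PDF e^{-z} u(z) on a grid

conv = np.convolve(f, f)[: len(z)] * dz   # discrete approximation of f * f
print(np.max(np.abs(conv - z * np.exp(-z))))   # ~0 up to grid error (~dz)
```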

29 Moment Generating Functions

The moment-generating function $\Phi_X(s)$ of a CV random variable X is defined by
$$\Phi_X(s) = E\bigl(e^{sX}\bigr) = \int_{-\infty}^{\infty} f_X(x)\,e^{sx}\,dx.$$
Relation to the Laplace transform:
$$\Phi_X(s) = \mathcal{L}\bigl[f_X(x)\bigr]_{s \to -s}$$
Relation to moments:
$$\frac{d}{ds}\Phi_X(s) = \int_{-\infty}^{\infty} x\,e^{sx}\,f_X(x)\,dx \quad\Rightarrow\quad \left.\frac{d}{ds}\Phi_X(s)\right|_{s=0} = \int_{-\infty}^{\infty} x\,f_X(x)\,dx = E(X)$$
In general,
$$E(X^n) = \left.\frac{d^n}{ds^n}\Phi_X(s)\right|_{s=0}$$

30 Moment Generating Functions

The moment-generating function $\Phi_X(z)$ of a DV random variable X is defined by
$$\Phi_X(z) = E\bigl(z^X\bigr) = \sum_{n=-\infty}^{\infty} P[X = n]\,z^n = \sum_{n=-\infty}^{\infty} p_n z^n.$$
Relation to the z transform:
$$\Phi_X(z) = \mathcal{Z}\bigl[P(X = n)\bigr]_{z \to z^{-1}}$$
Relation to moments:
$$\frac{d}{dz}\Phi_X(z) = E\bigl(Xz^{X-1}\bigr) \qquad \frac{d^2}{dz^2}\Phi_X(z) = E\bigl(X(X-1)z^{X-2}\bigr)$$
$$\left.\frac{d}{dz}\Phi_X(z)\right|_{z=1} = E(X) \qquad \left.\frac{d^2}{dz^2}\Phi_X(z)\right|_{z=1} = E(X^2) - E(X)$$

31 The Chebyshev Inequality

For any random variable X and any $\varepsilon > 0$,
$$P[|X - \mu_X| \ge \varepsilon] = \int_{-\infty}^{\mu_X - \varepsilon} f_X(x)\,dx + \int_{\mu_X + \varepsilon}^{\infty} f_X(x)\,dx = \int_{|x - \mu_X| \ge \varepsilon} f_X(x)\,dx$$
Also,
$$\sigma_X^2 = \int_{-\infty}^{\infty}(x - \mu_X)^2 f_X(x)\,dx \ge \int_{|x - \mu_X| \ge \varepsilon}(x - \mu_X)^2 f_X(x)\,dx \ge \varepsilon^2\int_{|x - \mu_X| \ge \varepsilon} f_X(x)\,dx$$
It then follows that
$$P[|X - \mu_X| \ge \varepsilon] \le \sigma_X^2/\varepsilon^2$$
This is known as the Chebyshev inequality. Using this we can put a bound on the probability of an event with knowledge only of the variance and no knowledge of the PMF or PDF.
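
A numerical illustration (a sketch assuming numpy; the exponential distribution is an arbitrary test case with $\mu_X = \sigma_X^2 = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 1_000_000)     # mean 1, variance 1

for eps in (1.0, 2.0, 3.0):
    actual = np.mean(np.abs(x - 1.0) >= eps)
    bound = 1.0 / eps**2                # sigma^2 / eps^2
    print(eps, actual, bound)           # the actual probability never exceeds the bound
```

The bound is loose, but it requires no knowledge of the PDF.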

32 The Markov Inequality

For any random variable X let $f_X(x) = 0$ for all $x < 0$ and let $\varepsilon$ be a positive constant. Then
$$E(X) = \int_{0}^{\infty} x\,f_X(x)\,dx \ge \int_{\varepsilon}^{\infty} x\,f_X(x)\,dx \ge \varepsilon\int_{\varepsilon}^{\infty} f_X(x)\,dx = \varepsilon\,P[X \ge \varepsilon]$$
Therefore
$$P[X \ge \varepsilon] \le \frac{E(X)}{\varepsilon}.$$
This is known as the Markov inequality. It allows us to bound the probability of certain events with knowledge only of the expected value of the random variable and no knowledge of the PMF or PDF except that it is zero for negative values.

33 The Weak Law of Large Numbers

Consider taking N independent values $\{X_1, X_2, \ldots, X_N\}$ from a random variable X in order to develop an understanding of the nature of X. They constitute a sampling of X. The sample mean is
$$\bar{X}_N = \frac{1}{N}\sum_{n=1}^{N} X_n.$$
The sample size is finite, so different sets of N values will yield different sample means. Thus $\bar{X}_N$ is itself a random variable and it is an estimator of the expected value of X, $E(X)$. A good estimator has two important qualities: it is unbiased and consistent. Unbiased means $E(\bar{X}_N) = E(X)$. Consistent means that as N is increased the variance of the estimator is decreased.

34 The Weak Law of Large Numbers

Using the Chebyshev inequality we can put a bound on the probable deviation of $\bar{X}_N$ from its expected value:
$$P\bigl[|\bar{X}_N - E(X)| \ge \varepsilon\bigr] \le \frac{\sigma_{\bar{X}_N}^2}{\varepsilon^2} = \frac{\sigma_X^2}{N\varepsilon^2}, \quad \varepsilon > 0$$
This implies that
$$P\bigl[|\bar{X}_N - E(X)| < \varepsilon\bigr] \ge 1 - \frac{\sigma_X^2}{N\varepsilon^2}, \quad \varepsilon > 0$$
The probability that $\bar{X}_N$ is within some small deviation from $E(X)$ can be made as close to one as desired by making N large enough.

35 The Weak Law of Large Numbers

Now, in
$$P\bigl[|\bar{X}_N - E(X)| < \varepsilon\bigr] \ge 1 - \frac{\sigma_X^2}{N\varepsilon^2}, \quad \varepsilon > 0,$$
let N approach infinity:
$$\lim_{N\to\infty} P\bigl[|\bar{X}_N - E(X)| < \varepsilon\bigr] = 1, \quad \varepsilon > 0$$
The Weak Law of Large Numbers states that if $\{X_1, X_2, \ldots, X_N\}$ is a sequence of iid random variable values and $E(X)$ is finite, then
$$\lim_{N\to\infty} P\bigl[|\bar{X}_N - E(X)| < \varepsilon\bigr] = 1, \quad \varepsilon > 0$$
This kind of convergence is called convergence in probability.
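
The weak law is easy to see numerically (a sketch assuming numpy; Exp(1) samples with $E(X) = 1$ are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
for n in (10, 100, 1_000, 10_000):
    # 1000 independent sample means, each computed from n values of X
    means = rng.exponential(1.0, (1_000, n)).mean(axis=1)
    print(n, np.mean(np.abs(means - 1.0) >= eps))   # -> 0 as n grows
```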

36 The Strong Law of Large Numbers

Now consider a sequence $\{X_1, X_2, \ldots\}$ of independent values of X and let X have an expected value $E(X)$ and a finite variance $\sigma_X^2$. Also consider a sequence of sample means $\{\bar{X}_1, \bar{X}_2, \ldots\}$ defined by
$$\bar{X}_N = \frac{1}{N}\sum_{n=1}^{N} X_n.$$
The Strong Law of Large Numbers says
$$P\Bigl[\lim_{N\to\infty}\bar{X}_N = E(X)\Bigr] = 1$$
This kind of convergence is called almost-sure convergence.

37 The Laws of Large Numbers

The Weak Law of Large Numbers,
$$\lim_{N\to\infty} P\bigl[|\bar{X}_N - E(X)| < \varepsilon\bigr] = 1, \quad \varepsilon > 0,$$
and the Strong Law of Large Numbers,
$$P\Bigl[\lim_{N\to\infty}\bar{X}_N = E(X)\Bigr] = 1,$$
seem to be saying about the same thing. There is a subtle difference. It can be illustrated by the following example in which a sequence converges in probability but not almost surely.

38 The Laws of Large Numbers

Let
$$X_{nk} = \begin{cases} 1, & k/n \le \zeta < (k+1)/n \\ 0, & \text{otherwise} \end{cases}, \quad 0 \le k < n,\ n = 1,2,3,\ldots$$
and let $\zeta$ be uniformly distributed between 0 and 1. As n increases from one we get this "triangular" sequence of X's:

$X_{10}$
$X_{20}\ X_{21}$
$X_{30}\ X_{31}\ X_{32}$

Now let $Y_{n(n-1)/2+k+1} = X_{nk}$, meaning that $Y = \{X_{10}, X_{20}, X_{21}, X_{30}, X_{31}, X_{32}, \ldots\}$. $X_{10}$ is one with probability one. $X_{20}$ and $X_{21}$ are each one with probability 1/2 and zero with probability 1/2. Generalizing, we can say that $X_{nk}$ is one with probability $1/n$ and zero with probability $1 - 1/n$.

39 The Laws of Large Numbers

$Y_{n(n-1)/2+k+1}$ is therefore one with probability $1/n$ and zero with probability $1 - 1/n$. For each n the probability that at least one of the n numbers in each length-n sequence is one is
$$P[\text{at least one } 1] = 1 - P[\text{no ones}] = 1 - (1 - 1/n)^n$$
In the limit as n approaches infinity this probability approaches $1 - 1/e \approx 0.632$. So no matter how large n gets there is a non-zero probability that at least one 1 will occur in any length-n sequence. This proves that the sequence Y does not converge almost surely, because there is always a non-zero probability that a length-n sequence will contain a 1, for any n.

40 The Laws of Large Numbers

The expected value of $X_{nk}$ is
$$E(X_{nk}) = P[X_{nk} = 1]\cdot 1 + P[X_{nk} = 0]\cdot 0 = 1/n$$
and is therefore independent of k and approaches zero as n approaches infinity. The expected value of $X_{nk}^2$ is
$$E(X_{nk}^2) = P[X_{nk} = 1]\cdot 1^2 + P[X_{nk} = 0]\cdot 0^2 = 1/n$$
and the variance of $X_{nk}$ is $1/n - 1/n^2 = (n-1)/n^2$. So the variance of Y approaches zero as n approaches infinity. Then, according to the Chebyshev inequality,
$$P[|Y - \mu_Y| \ge \varepsilon] \le \sigma_Y^2/\varepsilon^2 = \frac{n-1}{n^2\varepsilon^2}$$
implying that as n approaches infinity the variation of Y gets steadily smaller, and that says that Y converges in probability to zero.
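
The triangular-sequence example can be simulated directly (a sketch assuming numpy). For any single outcome ζ, every length-n block still contains exactly one 1, even though P[Y = 1] → 0 within a block:

```python
import numpy as np

rng = np.random.default_rng(0)
zeta = rng.uniform()                 # one outcome of the underlying experiment
seq = []
for n in range(1, 201):              # blocks n = 1 .. 200
    for k in range(n):
        seq.append(1 if k / n <= zeta < (k + 1) / n else 0)
y = np.array(seq)

# Convergence in probability: within block n, P[Y = 1] = 1/n -> 0.
# No almost-sure convergence: the sample path never settles at 0,
# since each block of length n contains exactly one 1.
print(y[-200:].sum())                # the last block (n = 200) still holds a single 1
```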

41 The Laws of Large Numbers

42 The Laws of Large Numbers

Consider an experiment in which we toss a fair coin and assign the value 1 to a head and the value 0 to a tail. Let $N_H$ be the number of heads, let N be the number of coin tosses, let $r_H$ be $N_H/N$ and let X be the random variable indicating a head or tail. Then
$$N_H = \sum_{n=1}^{N} X_n, \qquad E(N_H) = N/2 \qquad\text{and}\qquad E(r_H) = 1/2.$$

43 The Laws of Large Numbers

$$\sigma_{r_H}^2 = \sigma_X^2/N \quad\Rightarrow\quad \sigma_{r_H} = \sigma_X/\sqrt{N}$$
Therefore $r_H - 1/2$ generally approaches zero, but not smoothly or monotonically.
$$\sigma_{N_H}^2 = N\sigma_X^2 \quad\Rightarrow\quad \sigma_{N_H} = \sqrt{N}\,\sigma_X$$
Therefore $N_H - E(N_H)$ does not approach zero. So the variation of $N_H$ increases with N.
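
A simulation makes the contrast visible (a sketch assuming numpy): the relative frequency $r_H$ settles toward 1/2 while the absolute excess $N_H - N/2$ keeps wandering:

```python
import numpy as np

rng = np.random.default_rng(0)
tosses = rng.integers(0, 2, 1_000_000)     # 1 = head, 0 = tail
N = np.arange(1, tosses.size + 1)
NH = np.cumsum(tosses)
rH = NH / N

idx = [99, 9_999, 999_999]                 # N = 100, 10_000, 1_000_000
print(rH[idx])                             # relative frequency -> 1/2
print((NH - N / 2)[idx])                   # absolute deviation does not shrink
```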

44 Convergence of Sequences of Random Variables We have already seen two types of convergence of sequences of random variables, almost sure convergence (in the Strong Law of Large Numbers) and convergence in probability (in the Weak Law of Large Numbers). Now we will explore other types of convergence.

45 Convergence of Sequences of Random Variables

Sure Convergence

A sequence of random variables $\{X_n(\zeta)\}$ converges surely to the random variable $X(\zeta)$ if the sequence of functions $X_n(\zeta)$ converges to the function $X(\zeta)$ as $n \to \infty$ for all $\zeta$ in S. Sure convergence requires that every possible sequence converges. Different sequences may converge to different limits but all must converge.
$$X_n(\zeta) \to X(\zeta) \text{ as } n \to \infty \text{ for all } \zeta \in S$$

46 Convergence of Sequences of Random Variables

Almost-Sure Convergence

A sequence of random variables $\{X_n(\zeta)\}$ converges almost surely to the random variable $X(\zeta)$ if the sequence of functions $X_n(\zeta)$ converges to the function $X(\zeta)$ as $n \to \infty$ for all $\zeta$ in S, except possibly on a set of probability zero.
$$P\bigl[\zeta : X_n(\zeta) \to X(\zeta) \text{ as } n \to \infty\bigr] = 1$$
This is the convergence in the Strong Law of Large Numbers.

47 Convergence of Sequences of Random Variables

Mean-Square Convergence

The sequence of random variables $\{X_n(\zeta)\}$ converges in the mean-square sense to the random variable $X(\zeta)$ if
$$E\Bigl[\bigl(X_n(\zeta) - X(\zeta)\bigr)^2\Bigr] \to 0 \text{ as } n \to \infty$$
If the limiting random variable $X(\zeta)$ is not known we can use the Cauchy criterion: the sequence of random variables $\{X_n(\zeta)\}$ converges in the mean-square sense to the random variable $X(\zeta)$ if and only if
$$E\Bigl[\bigl(X_n(\zeta) - X_m(\zeta)\bigr)^2\Bigr] \to 0 \text{ as } n \to \infty \text{ and } m \to \infty$$

48 Convergence of Sequences of Random Variables

Convergence in Probability

The sequence of random variables $\{X_n(\zeta)\}$ converges in probability to the random variable $X(\zeta)$ if, for any $\varepsilon > 0$,
$$P\bigl[|X_n(\zeta) - X(\zeta)| > \varepsilon\bigr] \to 0 \text{ as } n \to \infty$$
This is the convergence in the Weak Law of Large Numbers.

49 Convergence of Sequences of Random Variables

Convergence in Distribution

The sequence of random variables $\{X_n\}$ with cumulative distribution functions $\{F_n(x)\}$ converges in distribution to the random variable X with cumulative distribution function $F(x)$ if
$$F_n(x) \to F(x) \text{ as } n \to \infty$$
for all x at which $F(x)$ is continuous. The Central Limit Theorem (coming soon) is an example of convergence in distribution.

50 Long-Term Arrival Rates

Suppose a system has a component that fails at time $X_1$; it is replaced, and that component fails at time $X_2$, and so on. Let $N(t)$ be the number of components that have failed at time t. $N(t)$ is called a renewal counting process. Let $X_j$ denote the lifetime of the jth component. Then the time when the nth component fails is
$$S_n = X_1 + X_2 + \cdots + X_n$$
where we assume that the $X_j$ are iid non-negative random variables with $0 < E(X_j) = E(X) < \infty$. We call the $X_j$'s the interarrival or cycle times.

51 Long-Term Arrival Rates

Since the average interarrival time is $E(X)$ seconds per event, one would expect intuitively that the average rate of arrivals is $1/E(X)$ events per second.
$$S_{N(t)} \le t < S_{N(t)+1}$$
Dividing through by $N(t)$,
$$\frac{S_{N(t)}}{N(t)} \le \frac{t}{N(t)} < \frac{S_{N(t)+1}}{N(t)}$$
$S_{N(t)}/N(t)$ is the average interarrival time for the first $N(t)$ arrivals.

52 Long-Term Arrival Rates

$$\frac{S_{N(t)}}{N(t)} = \frac{1}{N(t)}\sum_{j=1}^{N(t)} X_j$$
As $t \to \infty$, $N(t) \to \infty$ and $S_{N(t)}/N(t) \to E(X)$. Similarly,
$$\frac{S_{N(t)+1}}{N(t)} = \frac{S_{N(t)+1}}{N(t)+1}\cdot\frac{N(t)+1}{N(t)} \to E(X)\cdot 1 = E(X)$$
So from
$$\frac{S_{N(t)}}{N(t)} \le \frac{t}{N(t)} < \frac{S_{N(t)+1}}{N(t)}$$
we can say that
$$\lim_{t\to\infty}\frac{N(t)}{t} = \frac{1}{E(X)}.$$

53 Long-Term Time Averages

Suppose that events occur at random with iid interarrival times $X_j$ and that a cost $C_j$ is associated with each event. Let $C(t)$ be the cost accumulated up to time t. Then $C(t) = \sum_{j=1}^{N(t)} C_j$. The average cost up to time t is
$$\frac{C(t)}{t} = \frac{1}{t}\sum_{j=1}^{N(t)} C_j = \frac{N(t)}{t}\cdot\frac{1}{N(t)}\sum_{j=1}^{N(t)} C_j.$$
In the limit $t \to \infty$,
$$\frac{N(t)}{t} \to \frac{1}{E(X)} \quad\text{and}\quad \frac{1}{N(t)}\sum_{j=1}^{N(t)} C_j \to E(C).$$
Therefore
$$\lim_{t\to\infty}\frac{C(t)}{t} = \frac{E(C)}{E(X)}.$$

54 The Central Limit Theorem

Let $Y_N = \sum_{n=1}^{N} X_n$ where the $X_n$'s are an iid sequence of random variable values. Let
$$Z_N = \frac{Y_N - N\,E(X)}{\sigma_X\sqrt{N}} = \frac{1}{\sqrt{N}}\sum_{n=1}^{N}\frac{X_n - E(X)}{\sigma_X}$$
Then
$$E(Z_N) = E\left(\sum_{n=1}^{N}\frac{X_n - E(X)}{\sigma_X\sqrt{N}}\right) = \frac{1}{\sigma_X\sqrt{N}}\sum_{n=1}^{N}\underbrace{E\bigl(X_n - E(X)\bigr)}_{=0} = 0$$

55 The Central Limit Theorem

$$\sigma_{Z_N}^2 = \sum_{n=1}^{N}\left(\frac{\sigma_X}{\sigma_X\sqrt{N}}\right)^2 = \sum_{n=1}^{N}\frac{1}{N} = 1$$
The MGF of $Z_N$ is
$$\Phi_{Z_N}(s) = E\bigl(e^{sZ_N}\bigr) = E\left(\exp\left(s\sum_{n=1}^{N}\frac{X_n - E(X)}{\sigma_X\sqrt{N}}\right)\right) = E\left(\prod_{n=1}^{N}\exp\left(s\,\frac{X_n - E(X)}{\sigma_X\sqrt{N}}\right)\right)$$
Since the $X_n$'s are iid,
$$\Phi_{Z_N}(s) = \left[E\left(\exp\left(s\,\frac{X - E(X)}{\sigma_X\sqrt{N}}\right)\right)\right]^N$$

56 The Central Limit Theorem

We can expand the exponential function in an infinite series:
$$\Phi_{Z_N}(s) = \left[E\left(1 + s\,\frac{X - E(X)}{\sigma_X\sqrt{N}} + s^2\frac{\bigl(X - E(X)\bigr)^2}{2!\,\sigma_X^2 N} + s^3\frac{\bigl(X - E(X)\bigr)^3}{3!\,\sigma_X^3 N^{3/2}} + \cdots\right)\right]^N$$
$$\Phi_{Z_N}(s) = \left[1 + s\,\frac{\overbrace{E\bigl(X - E(X)\bigr)}^{=0}}{\sigma_X\sqrt{N}} + s^2\frac{\overbrace{E\bigl(\bigl(X - E(X)\bigr)^2\bigr)}^{=\sigma_X^2}}{2!\,\sigma_X^2 N} + s^3\frac{E\bigl(\bigl(X - E(X)\bigr)^3\bigr)}{3!\,\sigma_X^3 N^{3/2}} + \cdots\right]^N$$
$$\Phi_{Z_N}(s) = \left[1 + \frac{s^2}{2N} + s^3\frac{E\bigl(\bigl(X - E(X)\bigr)^3\bigr)}{3!\,\sigma_X^3 N^{3/2}} + \cdots\right]^N$$

57 The Central Limit Theorem

For large N we can neglect the higher-order terms. Then, using $\lim_{m\to\infty}(1 + z/m)^m = e^z$, we get
$$\Phi_{Z_N}(s) = \lim_{N\to\infty}\left(1 + \frac{s^2}{2N}\right)^N = e^{s^2/2} \quad\Rightarrow\quad f_{Z_N}(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}$$
Thus the PDF approaches a Gaussian shape, with no assumptions about the shapes of the PDF's of the $X_n$'s. This is convergence in distribution.
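
A simulation of $Z_N$ (a sketch assuming numpy; uniform samples are an arbitrary non-Gaussian choice with $E(X) = 1/2$ and $\sigma_X^2 = 1/12$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 200_000
x = rng.uniform(0, 1, (trials, N))
y = x.sum(axis=1)
z = (y - N * 0.5) / np.sqrt(N / 12)    # standardize the sum, as in Z_N

print(np.mean(z <= 1.0))               # ~0.8413, the standard normal CDF at 1
```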

58 The Central Limit Theorem

Comparison of the distribution functions of two different Binomial random variables and Gaussian random variables with the same expected value and variance

59 The Central Limit Theorem

Comparison of the distribution functions of two different Poisson random variables and Gaussian random variables with the same expected value and variance

60 The Central Limit Theorem

Comparison of the distribution functions of two different Erlang random variables and Gaussian random variables with the same expected value and variance

61 The Central Limit Theorem

Comparison of the distribution functions of a sum of five independent random variables from each of four distributions and a Gaussian random variable with the same expected value and variance as that sum

62 The Central Limit Theorem

The PDF of a sum of independent random variables is the convolution of their PDF's. This concept can be extended to any number of random variables. If
$$Z = \sum_{n=1}^{N} X_n \quad\text{then}\quad f_Z(z) = f_{X_1}(z) * f_{X_2}(z) * \cdots * f_{X_N}(z).$$
As the number of convolutions increases, the shape of the PDF of Z approaches the Gaussian shape.

63 The Central Limit Theorem

64 The Central Limit Theorem

The Gaussian PDF:
$$f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}}e^{-(x-\mu_X)^2/2\sigma_X^2}$$
$$\mu_X = E(X) \quad\text{and}\quad \sigma_X = \sqrt{E\Bigl(\bigl(X - E(X)\bigr)^2\Bigr)}$$

65 The Central Limit Theorem

The Gaussian PDF: Its maximum value occurs at the mean value of its argument. It is symmetrical about the mean value. The points of maximum absolute slope occur at one standard deviation above and below the mean. Its maximum value is inversely proportional to its standard deviation. The limit as the standard deviation approaches zero is a unit impulse:
$$\delta(x - \mu_X) = \lim_{\sigma_X \to 0}\frac{1}{\sigma_X\sqrt{2\pi}}e^{-(x-\mu_X)^2/2\sigma_X^2}$$

66 The Central Limit Theorem

The normal PDF is a Gaussian PDF with a mean of zero and a variance of one:
$$f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$
The central moments of the Gaussian PDF are
$$E\Bigl(\bigl(X - E(X)\bigr)^n\Bigr) = \begin{cases} 0, & n \text{ odd} \\ 1\cdot 3\cdot 5\cdots(n-1)\,\sigma_X^n, & n \text{ even} \end{cases}$$

67 The Central Limit Theorem

In computing probabilities from a Gaussian PDF it is necessary to evaluate integrals of the form
$$P[x_1 < X \le x_2] = \frac{1}{\sigma_X\sqrt{2\pi}}\int_{x_1}^{x_2} e^{-(x-\mu_X)^2/2\sigma_X^2}\,dx.$$
Define a function
$$G(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\lambda^2/2}\,d\lambda.$$
Then, using the change of variable $\lambda = (x - \mu_X)/\sigma_X$, we can convert the integral to
$$G\!\left(\frac{x_2 - \mu_X}{\sigma_X}\right) - G\!\left(\frac{x_1 - \mu_X}{\sigma_X}\right).$$
The G function is closely related to some other standard functions. For example, the "error" function is
$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^{x} e^{-\lambda^2}\,d\lambda \quad\text{and}\quad G(x) = \frac{1}{2}\Bigl(\mathrm{erf}\bigl(x/\sqrt{2}\bigr) + 1\Bigr).$$
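
In Python the G function can be built directly from math.erf (a brief sketch; the mean, standard deviation and limits below are arbitrary illustrative values):

```python
import math

def G(x):
    # Standard normal CDF via the error function: G(x) = (erf(x / sqrt(2)) + 1) / 2
    return 0.5 * (math.erf(x / math.sqrt(2.0)) + 1.0)

mu, sigma = 3.0, 2.0
x1, x2 = 1.0, 5.0                      # these happen to be mu -/+ sigma
p = G((x2 - mu) / sigma) - G((x1 - mu) / sigma)
print(p)                               # ~0.6827 = P(|X - mu| <= sigma)
```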

68 The Central Limit Theorem

Jointly Normal Random Variables
$$f_{XY}(x,y) = \frac{\exp\left(-\dfrac{\left(\dfrac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho_{XY}\left(\dfrac{x-\mu_X}{\sigma_X}\right)\left(\dfrac{y-\mu_Y}{\sigma_Y}\right) + \left(\dfrac{y-\mu_Y}{\sigma_Y}\right)^2}{2\bigl(1-\rho_{XY}^2\bigr)}\right)}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho_{XY}^2}}$$

69 The Central Limit Theorem Jointly Normal Random Variables

70 The Central Limit Theorem Jointly Normal Random Variables

71 The Central Limit Theorem Jointly Normal Random Variables

72 The Central Limit Theorem

Jointly Normal Random Variables

Any cross section of a bivariate Gaussian PDF at any value of x or y is a Gaussian. The marginal PDF's of X and Y can be found using
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy$$
which turns out to be
$$f_X(x) = \frac{e^{-(x-\mu_X)^2/2\sigma_X^2}}{\sqrt{2\pi}\,\sigma_X}$$
Similarly,
$$f_Y(y) = \frac{e^{-(y-\mu_Y)^2/2\sigma_Y^2}}{\sqrt{2\pi}\,\sigma_Y}$$

73 The Central Limit Theorem

Jointly Normal Random Variables

The conditional PDF of X given Y is
$$f_{X|Y}(x) = \frac{\exp\left(-\dfrac{\bigl(x - \mu_X - \rho_{XY}(\sigma_X/\sigma_Y)(y - \mu_Y)\bigr)^2}{2\sigma_X^2\bigl(1-\rho_{XY}^2\bigr)}\right)}{\sqrt{2\pi}\,\sigma_X\sqrt{1-\rho_{XY}^2}}$$
The conditional PDF of Y given X is
$$f_{Y|X}(y) = \frac{\exp\left(-\dfrac{\bigl(y - \mu_Y - \rho_{XY}(\sigma_Y/\sigma_X)(x - \mu_X)\bigr)^2}{2\sigma_Y^2\bigl(1-\rho_{XY}^2\bigr)}\right)}{\sqrt{2\pi}\,\sigma_Y\sqrt{1-\rho_{XY}^2}}$$

74 Transformations of Joint Probability Density Functions

If $W = g(X,Y)$ and $Z = h(X,Y)$ and both functions are invertible, then it is possible to write $X = G(W,Z)$ and $Y = H(W,Z)$ and
$$P[x < X \le x + \Delta x,\ y < Y \le y + \Delta y] = P[w < W \le w + \Delta w,\ z < Z \le z + \Delta z]$$
$$f_{XY}(x,y)\,\Delta x\,\Delta y \cong f_{WZ}(w,z)\,\Delta w\,\Delta z$$

75 Transformations of Joint Probability Density Functions

$$\Delta x\,\Delta y = |J|\,\Delta w\,\Delta z \quad\text{where}\quad J = \begin{vmatrix} \dfrac{\partial G}{\partial w} & \dfrac{\partial G}{\partial z} \\[2ex] \dfrac{\partial H}{\partial w} & \dfrac{\partial H}{\partial z} \end{vmatrix}$$
$$f_{WZ}(w,z) = |J|\,f_{XY}(x,y) = |J|\,f_{XY}\bigl(G(w,z),\,H(w,z)\bigr)$$

76 Transformations of Joint Probability Density Functions

Let $R = \sqrt{X^2 + Y^2}$ and $\Theta = \tan^{-1}(Y/X)$, $-\pi < \Theta \le \pi$, where X and Y are independent and Gaussian, with zero mean and equal variances. Then $X = R\cos(\Theta)$ and $Y = R\sin(\Theta)$ and
$$J = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[2ex] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos(\theta) & -r\sin(\theta) \\ \sin(\theta) & r\cos(\theta) \end{vmatrix} = r$$

77 Transformations of Joint Probability Density Functions

$$f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}}e^{-x^2/2\sigma_X^2} \quad\text{and}\quad f_Y(y) = \frac{1}{\sigma_Y\sqrt{2\pi}}e^{-y^2/2\sigma_Y^2}$$
Since X and Y are independent, with $\sigma^2 = \sigma_X^2 = \sigma_Y^2$,
$$f_{XY}(x,y) = \frac{1}{2\pi\sigma^2}e^{-(x^2+y^2)/2\sigma^2}$$
Applying the transformation formula,
$$f_{R\Theta}(r,\theta) = \frac{r}{2\pi\sigma^2}e^{-r^2/2\sigma^2}u(r), \quad -\pi < \theta \le \pi$$
$$f_{R\Theta}(r,\theta) = \frac{r}{\sigma^2}e^{-r^2/2\sigma^2}u(r)\cdot\frac{1}{2\pi}\,\mathrm{rect}(\theta/2\pi)$$

78 Transformations of Joint Probability Density Functions

The radius R is distributed according to the Rayleigh PDF
$$f_R(r) = \int_{-\pi}^{\pi}\frac{r}{2\pi\sigma^2}e^{-r^2/2\sigma^2}u(r)\,d\theta = \frac{r}{\sigma^2}e^{-r^2/2\sigma^2}u(r)$$
$$E(R) = \sqrt{\frac{\pi}{2}}\,\sigma \quad\text{and}\quad \sigma_R^2 = \left(2 - \frac{\pi}{2}\right)\sigma^2 \approx 0.429\sigma^2$$
The angle is uniformly distributed:
$$f_\Theta(\theta) = \int_{0}^{\infty}\frac{r}{2\pi\sigma^2}e^{-r^2/2\sigma^2}\,dr = \frac{1}{2\pi}\,\mathrm{rect}(\theta/2\pi) = \begin{cases} 1/2\pi, & -\pi < \theta \le \pi \\ 0, & \text{otherwise} \end{cases}$$
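
A Monte Carlo check of the Rayleigh radius and uniform angle (a sketch assuming numpy; σ = 2 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 2.0
x = rng.normal(0, sigma, n)
y = rng.normal(0, sigma, n)

r = np.hypot(x, y)                     # R = sqrt(X^2 + Y^2)
theta = np.arctan2(y, x)               # Theta in (-pi, pi]

print(r.mean(), np.sqrt(np.pi / 2) * sigma)    # E(R) = sqrt(pi/2) sigma
print(r.var(), (2 - np.pi / 2) * sigma**2)     # sigma_R^2 ~ 0.429 sigma^2
print(theta.min(), theta.max())                # roughly -pi and pi (uniform angle)
```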

79 Multivariate Probability Density

$$F_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N) \equiv P[X_1 \le x_1 \cap X_2 \le x_2 \cap \cdots \cap X_N \le x_N]$$
$0 \le F_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N) \le 1, \quad -\infty < x_1 < \infty,\ \ldots,\ -\infty < x_N < \infty$
$F_{X_1,X_2,\ldots,X_N}(-\infty,\ldots,-\infty) = F_{X_1,X_2,\ldots,X_N}(x_1,\ldots,-\infty,\ldots,x_N) = 0$
$F_{X_1,X_2,\ldots,X_N}(+\infty,\ldots,+\infty) = 1$
$F_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N)$ does not decrease if any number of x's increase
$F_{X_1,X_2,\ldots,X_N}(+\infty,\ldots,x_k,\ldots,+\infty) = F_{X_k}(x_k)$

80 Multivariate Probability Density

$$f_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N) = \frac{\partial^N}{\partial x_1\,\partial x_2\cdots\partial x_N}F_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N)$$
$f_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N) \ge 0, \quad -\infty < x_1 < \infty,\ \ldots,\ -\infty < x_N < \infty$
$$\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} f_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N)\,dx_1\,dx_2\cdots dx_N = 1$$
$$F_{X_1,X_2,\ldots,X_N}(x_1,x_2,\ldots,x_N) = \int_{-\infty}^{x_N}\cdots\int_{-\infty}^{x_2}\int_{-\infty}^{x_1} f_{X_1,X_2,\ldots,X_N}(\lambda_1,\lambda_2,\ldots,\lambda_N)\,d\lambda_1\,d\lambda_2\cdots d\lambda_N$$
$$f_{X_k}(x_k) = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} f_{X_1,X_2,\ldots,X_N}(x_1,\ldots,x_N)\,dx_1\cdots dx_{k-1}\,dx_{k+1}\cdots dx_N$$
$$E\bigl(g(X_1,X_2,\ldots,X_N)\bigr) = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} g(x_1,\ldots,x_N)\,f_{X_1,X_2,\ldots,X_N}(x_1,\ldots,x_N)\,dx_1\cdots dx_N$$

81 Other Important Probability Density Functions

In an ideal gas the three components of molecular velocity are all Gaussian with zero mean and equal variances of
$$\sigma_V^2 = \sigma_{V_X}^2 = \sigma_{V_Y}^2 = \sigma_{V_Z}^2 = kT/m$$
The speed of a molecule is
$$V = \sqrt{V_X^2 + V_Y^2 + V_Z^2}$$
and the PDF of the speed is called Maxwellian and is given by
$$f_V(v) = \sqrt{2/\pi}\,\frac{v^2}{\sigma_V^3}\,e^{-v^2/2\sigma_V^2}\,u(v)$$

82 Other Important Probability Density Functions

83 Other Important Probability Density Functions

If
$$\chi^2 = Y_1^2 + Y_2^2 + \cdots + Y_N^2 = \sum_{n=1}^{N} Y_n^2$$
and the random variables $Y_n$ are all mutually independent and normally distributed, then
$$f_{\chi^2}(x) = \frac{x^{N/2-1}e^{-x/2}}{2^{N/2}\,\Gamma(N/2)}\,u(x)$$
This is the chi-squared PDF.
$$E(\chi^2) = N \qquad \sigma_{\chi^2}^2 = 2N$$

84 Other Important Probability Density Functions

85 Reliability

Reliability is defined by
$$R(t) = P[T > t]$$
where T is the random variable representing the length of time after a system first begins operation that it fails.
$$F_T(t) = P[T \le t] = 1 - R(t) \qquad \frac{d}{dt}\bigl(R(t)\bigr) = -f_T(t)$$

86 Reliability

Probably the most commonly-used term in reliability analysis is mean time to failure (MTTF). MTTF is the expected value of T, which is
$$E(T) = \int_{0}^{\infty} t\,f_T(t)\,dt.$$
The conditional distribution function and PDF for the time to failure T, given the condition $T > t_0$, are
$$F_{T|T>t_0}(t) = \begin{cases} 0, & t < t_0 \\ \dfrac{F_T(t) - F_T(t_0)}{1 - F_T(t_0)}, & t \ge t_0 \end{cases} = \frac{F_T(t) - F_T(t_0)}{R(t_0)}\,u(t - t_0)$$
$$f_{T|T>t_0}(t) = \frac{f_T(t)}{R(t_0)}\,u(t - t_0)$$

87 Reliability

A very common term in reliability analysis is failure rate, which is defined by
$$\lambda(t)\,dt = P[t < T \le t + dt \mid T > t] = f_{T|T>t}(t)\,dt.$$
Failure rate is the probability per unit time that a system which has been operating properly up until time t will fail, as a function of t.
$$\lambda(t) = \frac{f_T(t)}{R(t)} = -\frac{R'(t)}{R(t)}, \quad t \ge 0$$
$$R'(t) + \lambda(t)R(t) = 0, \quad t \ge 0$$

88 Reliability

The solution of $R'(t) + \lambda(t)R(t) = 0,\ t \ge 0$, is
$$R(t) = e^{-\int_0^t \lambda(x)\,dx}, \quad t \ge 0.$$
One of the simplest models for system failure used in reliability analysis is that the failure rate is a constant. Let that constant be K. Then
$$R(t) = e^{-\int_0^t K\,dx} = e^{-Kt} \quad\text{and}\quad f_T(t) = -R'(t) = Ke^{-Kt} \quad\text{(an exponential PDF)}$$
MTTF is $1/K$.

89 Reliability

In some systems, if any of the subsystems fails, the overall system fails. If subsystem failure mechanisms are independent, the probability that the overall system is operating properly is the product of the probabilities that the subsystems are all operating properly. Let $A_k$ be the event "subsystem k is operating properly" and let $A_s$ be the event "the overall system is operating properly." Then, if there are N subsystems,
$$P[A_s] = P[A_1]\,P[A_2]\cdots P[A_N] \quad\text{and}\quad R_s(t) = R_1(t)\,R_2(t)\cdots R_N(t)$$
If the subsystems all have failure times with exponential PDF's then
$$R_s(t) = e^{-t/\tau_1}e^{-t/\tau_2}\cdots e^{-t/\tau_N} = e^{-t(1/\tau_1 + 1/\tau_2 + \cdots + 1/\tau_N)} = e^{-t/\tau}$$
where $1/\tau = 1/\tau_1 + 1/\tau_2 + \cdots + 1/\tau_N$.

90 Reliability

In some systems the overall system fails only if all of the subsystems fail. If subsystem failure mechanisms are independent, the probability that the overall system is not operating properly is the product of the probabilities that the subsystems are all not operating properly. As before, let $A_k$ be the event "subsystem k is operating properly" and let $A_s$ be the event "the overall system is operating properly." Then, if there are N subsystems,
$$P[\bar{A}_s] = P[\bar{A}_1]\,P[\bar{A}_2]\cdots P[\bar{A}_N] \quad\text{and}\quad 1 - R_s(t) = \bigl(1 - R_1(t)\bigr)\bigl(1 - R_2(t)\bigr)\cdots\bigl(1 - R_N(t)\bigr)$$
If the subsystems all have failure times with exponential PDF's then
$$R_s(t) = 1 - \bigl(1 - e^{-t/\tau_1}\bigr)\bigl(1 - e^{-t/\tau_2}\bigr)\cdots\bigl(1 - e^{-t/\tau_N}\bigr)$$
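
A small numeric comparison of the two structures (a sketch assuming numpy; the MTTFs τ are arbitrary illustrative values):

```python
import numpy as np

t = np.linspace(0, 10, 5)              # evaluation times
tau = np.array([4.0, 6.0, 12.0])       # subsystem MTTFs, exponential failure times

# Series: fails when ANY subsystem fails -> product of subsystem reliabilities.
R_series = np.exp(-t[:, None] / tau).prod(axis=1)

# Parallel: fails only when ALL subsystems fail.
R_parallel = 1 - (1 - np.exp(-t[:, None] / tau)).prod(axis=1)

print(R_series)
print(R_parallel)    # parallel redundancy keeps reliability higher at every t
```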

91 Reliability

An exponential failure rate implies that whether a system has just begun operation or has been operating properly for a long time, the probability that it will fail in the next unit of time is the same. The expected value of the additional time to failure at any arbitrary time is a constant independent of past history:
$$E(T \mid T > t_0) = t_0 + E(T)$$
This model is fairly reasonable for a wide range of times but not for all times in all systems. Many real systems experience two additional types of failure that are not indicated by an exponential PDF of failure times: infant mortality and wear-out.

92 Reliability

The Bathtub Curve

93 Reliability

The two higher-failure-rate portions of the bathtub curve are often modeled by the log-normal distribution of failure times. If a random variable X is Gaussian distributed its PDF is
$$f_X(x) = \frac{e^{-(x-\mu_X)^2/2\sigma_X^2}}{\sqrt{2\pi}\,\sigma_X}$$
If $Y = e^X$ then $dy/dx = e^x = y$, $x = \ln(y)$, and the PDF of Y is
$$f_Y(y) = \frac{f_X\bigl(\ln(y)\bigr)}{dy/dx} = \frac{e^{-(\ln(y)-\mu_X)^2/2\sigma_X^2}}{y\,\sigma_X\sqrt{2\pi}}$$
Y is log-normal distributed, with
$$E(Y) = e^{\mu_X + \sigma_X^2/2} \quad\text{and}\quad \sigma_Y^2 = e^{2\mu_X + \sigma_X^2}\bigl(e^{\sigma_X^2} - 1\bigr).$$

94 The Log-Normal Distribution

$Y = e^X$

95 The Log-Normal Distribution

Another common application of the log-normal distribution is to model the PDF of a random variable X that is formed from the product of a large number N of independent random variables $X_n$:
$$X = \prod_{n=1}^{N} X_n$$
The logarithm of X is then
$$\log(X) = \sum_{n=1}^{N}\log(X_n)$$
Since $\log(X)$ is the sum of a large number of independent random variables, its PDF tends to be Gaussian, which implies that the PDF of X is log-normal in shape.
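
This mechanism is easy to demonstrate numerically (a sketch assuming numpy; uniform factors on [0.5, 1.5] are an arbitrary positive choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 100, 100_000
x = rng.uniform(0.5, 1.5, (trials, N)).prod(axis=1)   # product of N positive iid RVs

logx = np.log(x)                          # a sum of N iid terms -> nearly Gaussian
zscore = (logx - logx.mean()) / logx.std()
print(np.mean(zscore <= 1.0))             # ~0.8413, the standard normal CDF at 1
```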
