STAT 430/510: Lecture 16
James Piette
June 24, 2010
Updates

HW4 is up on my website and is due next Mon. (June 28th). Today we pick back up at Section 6.7 and then begin Ch. 7.
Joint Distribution of Functions of Cont. R.V.s

Let $X_1$ and $X_2$ be jointly cont. r.v.s with joint pdf $f_{X_1, X_2}$. Suppose there are two r.v.s, $Y_1$ and $Y_2$, such that $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$ for some functions $g_1$ and $g_2$. There are two assumptions:

1. The equations $y_1 = g_1(x_1, x_2)$ and $y_2 = g_2(x_1, x_2)$ can be uniquely solved for $x_1$ and $x_2$ in terms of $y_1$ and $y_2$, with solutions given by $x_1 = h_1(y_1, y_2)$ and $x_2 = h_2(y_1, y_2)$.
2. The functions $g_1$ and $g_2$ have cont. partial derivatives at all points $(x_1, x_2)$ and are s.t. the $2 \times 2$ determinant
$$J(x_1, x_2) = \begin{vmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} \\ \frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} \end{vmatrix} \neq 0.$$
Joint Distribution of Functions of Cont. R.V.s (cont.)

Under the previous two assumptions, the r.v.s $Y_1$ and $Y_2$ are jointly cont. with joint density given by
$$f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}(x_1, x_2)\,|J(x_1, x_2)|^{-1}$$
where $x_1 = h_1(y_1, y_2)$ and $x_2 = h_2(y_1, y_2)$.
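Since the theorem is mechanical, it can help to see the recipe carried out symbolically. Below is a minimal sketch using sympy with a toy linear transform of my own choosing ($Y_1 = X_1 + X_2$, $Y_2 = X_1 - X_2$, with $X_1, X_2$ i.i.d. standard normal); it is an illustration, not part of the lecture.

```python
# A sketch of the change-of-variables recipe in sympy; the transform
# Y1 = X1 + X2, Y2 = X1 - X2 is a made-up toy example.
import sympy as sp

x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")

# Forward map g and its Jacobian determinant J(x1, x2).
g = sp.Matrix([x1 + x2, x1 - x2])
J = g.jacobian([x1, x2]).det()          # here J = -2, so |J| = 2

# Inverse map h: solve y = g(x) for (x1, x2).
sol = sp.solve([sp.Eq(y1, g[0]), sp.Eq(y2, g[1])], [x1, x2])
h1, h2 = sol[x1], sol[x2]               # (y1 + y2)/2, (y1 - y2)/2

# With X1, X2 i.i.d. standard normal, apply the formula
# f_Y(y1, y2) = f_X(h1, h2) * |J|^(-1).
f_x = sp.exp(-(x1**2 + x2**2) / 2) / (2 * sp.pi)
f_y = f_x.subs({x1: h1, x2: h2}) / sp.Abs(J)
print(sp.simplify(f_y))                 # exp(-(y1**2 + y2**2)/4)/(4*pi)
```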
Example 1

Let $X_1, X_2, X_3$ be independent standard normal r.v.s.

Question: If $Y_1 = X_1 + X_2 + X_3$, $Y_2 = X_1 - X_2$ and $Y_3 = X_1 - X_3$, then what is the joint density of $Y_1, Y_2, Y_3$?

Solution: Need to verify the assumptions. The first assumption...
$$X_1 = \frac{Y_1 + Y_2 + Y_3}{3}, \quad X_2 = \frac{Y_1 - 2Y_2 + Y_3}{3}, \quad X_3 = \frac{Y_1 + Y_2 - 2Y_3}{3}$$
And the second assumption...
$$J(x_1, x_2, x_3) = \begin{vmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \\ 1 & 0 & -1 \end{vmatrix} = 3$$
Example 1 (cont.)

Remember that...
$$f_{X_1 X_2 X_3}(x_1, x_2, x_3) = \frac{1}{(2\pi)^{3/2}} e^{-\sum_{i=1}^{3} x_i^2 / 2}$$
Now, all we need to do is plug the appropriate values into the formulas:
$$f_{Y_1 Y_2 Y_3}(y_1, y_2, y_3) = \frac{1}{3} f_{X_1 X_2 X_3}\left(\frac{y_1 + y_2 + y_3}{3}, \frac{y_1 - 2y_2 + y_3}{3}, \frac{y_1 + y_2 - 2y_3}{3}\right) = \frac{1}{3(2\pi)^{3/2}} e^{-Q(y_1, y_2, y_3)/2}$$
where...
$$Q(y_1, y_2, y_3) = \left(\frac{y_1 + y_2 + y_3}{3}\right)^2 + \left(\frac{y_1 - 2y_2 + y_3}{3}\right)^2 + \left(\frac{y_1 + y_2 - 2y_3}{3}\right)^2 = \frac{y_1^2}{3} + \frac{2y_2^2}{3} + \frac{2y_3^2}{3} - \frac{2y_2 y_3}{3}$$
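As a sanity check (my own, not part of the lecture), one can simulate the $X_i$, form the $Y_i$, and compare a crude empirical density estimate near the origin to the derived formula:

```python
# Monte Carlo check of the Example 1 density: the fraction of draws
# landing in a small cube around 0, divided by the cube's volume,
# should approximate f_Y(0, 0, 0).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1_000_000, 3))            # X1, X2, X3
y = np.column_stack([x.sum(axis=1),                # Y1 = X1 + X2 + X3
                     x[:, 0] - x[:, 1],            # Y2 = X1 - X2
                     x[:, 0] - x[:, 2]])           # Y3 = X1 - X3

def f_y(y1, y2, y3):
    q = y1**2 / 3 + 2 * y2**2 / 3 + 2 * y3**2 / 3 - 2 * y2 * y3 / 3
    return np.exp(-q / 2) / (3 * (2 * np.pi) ** 1.5)

h = 0.25                                           # half-width of the cube
inside = np.all(np.abs(y) < h, axis=1)
print(inside.mean() / (2 * h) ** 3)                # empirical density near 0
print(f_y(0.0, 0.0, 0.0))                          # formula: ~0.0212
```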
Recall: Expectation

The expected value of a discrete r.v. $X$ with pmf $p(x)$ is given by
$$E(X) = \sum_x x\,p(x)$$
The expected value of a continuous r.v. $X$ with pdf $f(x)$ is given by
$$E(X) = \int_{-\infty}^{\infty} x\,f(x)\,dx$$
If $P(a \leq X \leq b) = 1$, then $a \leq E(X) \leq b$.
Proposition 7.2.1

If $X$ and $Y$ have a joint pmf $p(x, y)$, then
$$E(g(X, Y)) = \sum_y \sum_x g(x, y)\,p(x, y)$$
If $X$ and $Y$ have a joint pdf $f(x, y)$, then
$$E(g(X, Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y)\,f(x, y)\,dx\,dy$$
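A minimal sketch of the discrete case, using a made-up joint pmf on $\{0,1\} \times \{0,1\}$ (a toy example of my own, not from the slides):

```python
# Proposition 7.2.1, discrete case: E[g(X, Y)] as a double sum over a
# joint pmf. The pmf below is an arbitrary example.
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def g(x, y):
    return x * y  # so E[g(X, Y)] = E[XY]

E_g = sum(g(x, y) * prob for (x, y), prob in p.items())
print(E_g)  # 0.4, since only the (1, 1) term contributes
```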
Example 2

An accident occurs at point $X$ that is uniformly dist. on a road of length $L$. At the time of the accident, an ambulance is at location $Y$ that is also uniformly dist. on the road.

Question: Assuming $X$ and $Y$ are independent, what is the expected distance between the ambulance and the point of the accident?

Solution: In this scenario, we are looking to calculate the expectation of the r.v. $|X - Y|$, because we want the distance between the two.
Example 2 (cont.)

First, we need the joint density function of $X$ and $Y$:
$$f(x, y) = \frac{1}{L^2}, \quad 0 < x < L,\; 0 < y < L$$
From Proposition 7.2.1,...
$$E(|X - Y|) = \frac{1}{L^2} \int_0^L \int_0^L |x - y|\,dy\,dx$$
Example 2 (cont.)

Let's evaluate the inner integral by splitting it up:
$$\int_0^L |x - y|\,dy = \int_0^x (x - y)\,dy + \int_x^L (y - x)\,dy = \frac{x^2}{2} + \frac{L^2}{2} - \frac{x^2}{2} - x(L - x) = \frac{L^2}{2} + x^2 - xL$$
Therefore,
$$E(|X - Y|) = \frac{1}{L^2} \int_0^L \left(\frac{L^2}{2} + x^2 - xL\right) dx = \frac{L}{3}$$
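A quick Monte Carlo check of this answer (my own verification): with $X$ and $Y$ independent Uniform$(0, L)$, the sample mean of $|X - Y|$ should be close to $L/3$.

```python
# Simulate the accident/ambulance locations and compare E|X - Y| to L/3.
import numpy as np

rng = np.random.default_rng(0)
L = 5.0
x = rng.uniform(0, L, 1_000_000)
y = rng.uniform(0, L, 1_000_000)
print(np.abs(x - y).mean(), L / 3)   # both ~1.667
```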
Properties

Let $X$ and $Y$ be continuous r.v.s.
- If $X \geq Y$, then $E(X) \geq E(Y)$.
- $E(\sum_{i=1}^n X_i) = \sum_{i=1}^n E(X_i)$.
- $E(aX + bY) = aE(X) + bE(Y)$.
Sample Mean

Let $X_1, \ldots, X_n$ be i.i.d. r.v.s having distribution $F$ and expected value $\mu$. Such a sequence of r.v.s is said to constitute a sample from the distribution $F$. The quantity
$$\bar{X} = \frac{\sum_{i=1}^n X_i}{n}$$
is called the sample mean.
Sample Mean (cont.)

The expectation of $\bar{X}$ is
$$E(\bar{X}) = E\left(\sum_{i=1}^n \frac{X_i}{n}\right) = \frac{1}{n} E\left(\sum_{i=1}^n X_i\right) = \frac{1}{n} \sum_{i=1}^n E(X_i) = \mu$$
since $E(X_i) \equiv \mu$.
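A small simulation illustrating $E(\bar{X}) = \mu$ (my own example; the Exponential with mean $\mu = 2$ is an arbitrary choice of $F$):

```python
# Draw many samples of size n, compute each sample mean, and check that
# the average of the sample means is close to mu.
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 2.0, 10, 100_000
xbars = rng.exponential(mu, (reps, n)).mean(axis=1)
print(xbars.mean())   # ~2.0 = mu
```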
Expectation of Binomial R.V.

Let $X$ be a binomial r.v. with parameters $(n, p)$. Recalling that a binomial represents the number of successes in $n$ independent trials when each trial has probability $p$ of being a success, we have that
$$X = X_1 + X_2 + \ldots + X_n$$
where
$$X_i = \begin{cases} 1 & \text{if the } i\text{th trial is a success} \\ 0 & \text{if the } i\text{th trial is a failure} \end{cases}$$
Hence, $X_i$ is a Bernoulli r.v. with expectation...
$$E(X_i) = 0(1 - p) + 1(p) = p.$$
So...
$$E(X) = E(X_1) + E(X_2) + \ldots + E(X_n) = np$$
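A sketch of the indicator decomposition in code (my own illustration): simulate the $n$ Bernoulli trials directly and check that the mean success count is close to $np$.

```python
# Each row is one binomial experiment: n Bernoulli(p) trials; summing a
# row gives one draw of X, and the average of those draws should be np.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3
trials = rng.random((100_000, n)) < p
print(trials.sum(axis=1).mean(), n * p)  # both ~6.0
```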
Expectation of Negative Binomial R.V.

Let $X$ represent the number of trials needed to get a total of $r$ successes, where the probability of each success is $p$. Then $X$ is a negative binomial r.v. with parameters $(r, p)$. Once again, $X$ can be represented by
$$X = X_1 + X_2 + \ldots + X_r$$
where $X_i$ is the number of trials required after the $(i-1)$th success until a total of $i$ successes is obtained. Thus, $X_i$ is a geometric r.v. with parameter $p$. So...
$$E(X) = E(X_1) + E(X_2) + \ldots + E(X_r) = \frac{r}{p}$$
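Again, a simulation sketch of my own (note that numpy's negative binomial counts failures rather than trials, an easy off-by-$r$ trap):

```python
# Check that the mean number of trials until the r-th success is r/p.
import numpy as np

rng = np.random.default_rng(0)
r, p, reps = 5, 0.25, 100_000
# rng.negative_binomial counts *failures* before the r-th success,
# so add r to convert to total trials.
trials = rng.negative_binomial(r, p, reps) + r
print(trials.mean(), r / p)   # both ~20.0
```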
Example 3

Suppose there are $N$ different types of coupons and each time one obtains a coupon, it is equally likely to be any one of the $N$ types.

Question: What is the expected number of coupons one needs to amass before having a complete set of at least one of each type?

Solution: Let $X$ denote the number of coupons collected before a complete set is attained. Then,
$$X = X_0 + X_1 + \ldots + X_{N-1}$$
where $X_i$ = number of additional coupons needed after $i$ distinct types have been collected in order to obtain another distinct type. How is $X_i$ distributed? Geometric with parameter $\frac{N-i}{N}$, with expectation...
$$E(X_i) = \frac{N}{N - i}.$$
Example 3 (cont.)

This implies that
$$E(X) = E(X_0) + E(X_1) + \ldots + E(X_{N-1}) = 1 + \frac{N}{N-1} + \frac{N}{N-2} + \ldots + \frac{N}{1} = N\left[1 + \frac{1}{2} + \ldots + \frac{1}{N-1} + \frac{1}{N}\right]$$
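A direct simulation of the coupon collector (my own check) agrees with $N[1 + 1/2 + \ldots + 1/N]$:

```python
# Collect coupons uniformly at random until all N types are seen, and
# compare the mean count to N times the N-th harmonic number.
import numpy as np

rng = np.random.default_rng(0)
N, reps = 10, 20_000
draws = []
for _ in range(reps):
    seen, count = set(), 0
    while len(seen) < N:
        seen.add(rng.integers(N))   # a coupon of a uniformly random type
        count += 1
    draws.append(count)
print(np.mean(draws), N * sum(1 / k for k in range(1, N + 1)))  # both ~29.3
```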
Example 4

Ten hunters are waiting for ducks to fly by. When a flock of ducks flies overhead, the hunters fire at the same time, but each chooses his target at random, independently of all the others.

Question: If each hunter independently hits his target with probability $p$, then what is the expected number of ducks that escape unhurt when a flock of 10 flies overhead?

Solution: Let $X$ denote the number of ducks that escape. Then, we can represent $X$ as
$$X = X_1 + X_2 + \ldots + X_{10}$$
where $X_i$ is an indicator (remember back?) such that
$$X_i = \begin{cases} 1 & \text{if the } i\text{th duck escapes unhurt} \\ 0 & \text{otherwise} \end{cases}$$
Example 4 (cont.)

Like with any indicator, its expected value is the probability of the event. A given hunter hits duck $i$ with probability $\frac{p}{10}$ (he picks duck $i$ with probability $\frac{1}{10}$ and hits with probability $p$), and the 10 hunters act independently, so...
$$E(X_i) = P(X_i = 1) = \left(1 - \frac{p}{10}\right)^{10}$$
Therefore,
$$E(X) = E(X_1) + E(X_2) + \ldots + E(X_{10}) = 10\left(1 - \frac{p}{10}\right)^{10}$$
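A simulation of the duck example (my own check of the formula):

```python
# Each of 10 hunters picks one of 10 ducks uniformly at random and hits
# with probability p; count the ducks hit by no one.
import numpy as np

rng = np.random.default_rng(0)
p, reps = 0.6, 50_000
escaped = []
for _ in range(reps):
    targets = rng.integers(10, size=10)   # hunter i's chosen duck
    hits = rng.random(10) < p             # whether hunter i hits his target
    hit_ducks = set(targets[hits])
    escaped.append(10 - len(hit_ducks))
print(np.mean(escaped), 10 * (1 - p / 10) ** 10)  # both ~5.39
```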
Motivation

Expected value describes the average of a r.v. Variance describes the variation of a r.v. Covariance describes the relationship between two r.v.s (i.e., how they interact).
Formalization

Def: The covariance between two r.v.s $X$ and $Y$ is defined as
$$\mathrm{Cov}(X, Y) = E[(X - E(X))(Y - E(Y))]$$
An alternative form of covariance is
$$\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y]$$
Remembering back, Cov comes up when we look at $X$ and $Y$ not independent and...
$$\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$$
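Both covariance identities are easy to check numerically. The sketch below uses a pair of r.v.s that share a common component $Z$ (a construction of my own) so that they are correlated by design.

```python
# X = Z + e1 and Y = Z + e2 share the component Z, so Cov(X, Y) = Var(Z) = 1,
# and Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) = 2 + 2 + 2 = 6.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
x = z + rng.standard_normal(1_000_000)
y = z + rng.standard_normal(1_000_000)
cov = (x * y).mean() - x.mean() * y.mean()         # E[XY] - E[X]E[Y]
print(cov)                                         # ~1.0
print((x + y).var(), x.var() + y.var() + 2 * cov)  # both ~6.0
```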
Let's cover Self-Test Problem 6.20. We've now finished Ch. 6 and Sections 7.1-7.2, plus started 7.4. Remember, HW4 is posted on my website and is due Mon., June 28th.