Problem Set #5
Econ 103

Part I: Problems from the Textbook
Chapter 4: 1, 3, 5, 7, 9, 11, 13, 15, 25, 27, 29
Chapter 5: 1, 3, 5, 9, 11, 13, 17

Part II: Additional Problems

1. Suppose $X$ is a random variable with support $\{-1, 0, 1\}$ where $p(-1) = q$ and $p(1) = p$.

(a) What is $p(0)$?

By the complement rule, $p(0) = 1 - p - q$.

(b) Calculate the CDF, $F(x_0)$, of $X$.

$$F(x_0) = \begin{cases} 0, & x_0 < -1 \\ q, & -1 \le x_0 < 0 \\ 1 - p, & 0 \le x_0 < 1 \\ 1, & x_0 \ge 1 \end{cases}$$

(c) Calculate $E[X]$.

$$E[X] = -1 \cdot q + 0 \cdot (1 - p - q) + 1 \cdot p = p - q$$

(d) What relationship must hold between $p$ and $q$ to ensure $E[X] = 0$?

$p = q$

2. Fill in the missing details from class to calculate the variance of a Bernoulli random variable directly, that is, without using the shortcut formula.
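Problem 1's pmf, CDF, and expectation are easy to spot-check numerically. Below is a minimal Python sketch, assuming illustrative values $p = 0.3$ and $q = 0.2$ (hypothetical choices; any valid probabilities would do):

```python
# Spot-check of Problem 1: pmf, CDF, and E[X] for support {-1, 0, 1}.
# p and q are illustrative values (hypothetical), not given in the problem.
p, q = 0.3, 0.2
pmf = {-1: q, 0: 1 - p - q, 1: p}

def F(x0):
    """CDF: sum the pmf over all support points at or below x0."""
    return sum(prob for x, prob in pmf.items() if x <= x0)

EX = sum(x * prob for x, prob in pmf.items())  # expectation, equals p - q

print(F(-0.5), F(0.5), EX)
```

Setting $p = q$ in the sketch makes `EX` come out to zero, matching part (d).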
$$\sigma^2 = \mathrm{Var}(X) = \sum_{x \in \{0,1\}} (x - \mu)^2 p(x) = \sum_{x \in \{0,1\}} (x - p)^2 p(x)$$
$$= (0 - p)^2 (1 - p) + (1 - p)^2 p = p^2(1 - p) + (1 - p)^2 p$$
$$= p^2 - p^3 + p - 2p^2 + p^3 = p - p^2 = p(1 - p)$$

3. Prove that the Bernoulli random variable is a special case of the Binomial random variable for which $n = 1$. (Hint: compare pmfs.)

The pmf for a Binomial$(n, p)$ random variable is
$$p(x) = \binom{n}{x} p^x (1 - p)^{n - x}$$
with support $\{0, 1, 2, \ldots, n\}$. Setting $n = 1$ gives
$$p(x) = \binom{1}{x} p^x (1 - p)^{1 - x}$$
with support $\{0, 1\}$. Plugging in each realization in the support, and recalling that $0! = 1$, we have
$$p(0) = \frac{1!}{0!(1 - 0)!} p^0 (1 - p)^{1 - 0} = 1 - p$$
and
$$p(1) = \frac{1!}{1!(1 - 1)!} p^1 (1 - p)^0 = p$$
which is exactly how we defined the Bernoulli random variable.

4. Suppose that $X$ is a random variable with support $\{1, 2\}$ and $Y$ is a random variable with support $\{0, 1\}$ where $X$ and $Y$ have the following joint distribution:
$$p_{XY}(1, 0) = 0.20, \quad p_{XY}(1, 1) = 0.30$$
$$p_{XY}(2, 0) = 0.25, \quad p_{XY}(2, 1) = 0.25$$
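Problems 2 and 3 can likewise be verified numerically: the direct variance sum equals $p(1-p)$, and the Binomial$(1, p)$ pmf reproduces the Bernoulli pmf. A sketch assuming an illustrative (hypothetical) value $p = 0.3$:

```python
from math import comb

# Illustrative success probability (hypothetical; any p in [0, 1] works).
p = 0.3

# Problem 2: variance of a Bernoulli(p) computed directly as
# the sum of (x - mu)^2 * p(x) over the support {0, 1}, with mu = p.
mu = p
var_direct = (0 - mu) ** 2 * (1 - p) + (1 - mu) ** 2 * p  # equals p(1 - p)

# Problem 3: the Binomial(n, p) pmf, which at n = 1 collapses to the Bernoulli pmf.
def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p) ** (n - x)

print(var_direct, binom_pmf(0, 1, p), binom_pmf(1, 1, p))
```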
(a) Express the joint distribution in a $2 \times 2$ table.

          X = 1    X = 2
Y = 0      0.20     0.25
Y = 1      0.30     0.25

(b) Using the table, calculate the marginal probability distributions of $X$ and $Y$.

$$p_X(1) = p_{XY}(1, 0) + p_{XY}(1, 1) = 0.20 + 0.30 = 0.50$$
$$p_X(2) = p_{XY}(2, 0) + p_{XY}(2, 1) = 0.25 + 0.25 = 0.50$$
$$p_Y(0) = p_{XY}(1, 0) + p_{XY}(2, 0) = 0.20 + 0.25 = 0.45$$
$$p_Y(1) = p_{XY}(1, 1) + p_{XY}(2, 1) = 0.30 + 0.25 = 0.55$$

(c) Calculate the conditional probability distribution of $Y|X = 1$ and $Y|X = 2$.

The distribution of $Y|X = 1$ is
$$P(Y = 0|X = 1) = \frac{p_{XY}(1, 0)}{p_X(1)} = \frac{0.2}{0.5} = 0.4$$
$$P(Y = 1|X = 1) = \frac{p_{XY}(1, 1)}{p_X(1)} = \frac{0.3}{0.5} = 0.6$$
while the distribution of $Y|X = 2$ is
$$P(Y = 0|X = 2) = \frac{p_{XY}(2, 0)}{p_X(2)} = \frac{0.25}{0.5} = 0.5$$
$$P(Y = 1|X = 2) = \frac{p_{XY}(2, 1)}{p_X(2)} = \frac{0.25}{0.5} = 0.5$$

(d) Calculate $E[Y|X]$.
Using the conditional distributions from part (c),
$$E[Y|X = 1] = 0 \cdot 0.4 + 1 \cdot 0.6 = 0.6$$
$$E[Y|X = 2] = 0 \cdot 0.5 + 1 \cdot 0.5 = 0.5$$
Hence,
$$E[Y|X] = \begin{cases} 0.6 & \text{with probability } 0.5 \\ 0.5 & \text{with probability } 0.5 \end{cases}$$
since $p_X(1) = 0.5$ and $p_X(2) = 0.5$.

(e) What is $E[E[Y|X]]$?

$$E[E[Y|X]] = 0.5 \cdot 0.6 + 0.5 \cdot 0.5 = 0.3 + 0.25 = 0.55$$
Note that this equals the expectation of $Y$ calculated from its marginal distribution, since $E[Y] = 0 \cdot 0.45 + 1 \cdot 0.55 = 0.55$. This illustrates the so-called Law of Iterated Expectations.

(f) Calculate the covariance between $X$ and $Y$ using the shortcut formula.

First, from the marginal distributions, $E[X] = 1 \cdot 0.5 + 2 \cdot 0.5 = 1.5$ and $E[Y] = 0 \cdot 0.45 + 1 \cdot 0.55 = 0.55$. Hence $E[X]E[Y] = 1.5 \cdot 0.55 = 0.825$. Second,
$$E[XY] = (0 \cdot 1) \cdot 0.2 + (0 \cdot 2) \cdot 0.25 + (1 \cdot 1) \cdot 0.3 + (1 \cdot 2) \cdot 0.25 = 0.3 + 0.5 = 0.8$$
Finally,
$$\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y] = 0.8 - 0.825 = -0.025$$

5. Let $X$ and $Y$ be discrete random variables and $a, b, c, d$ be constants. Prove the following:

(a) $\mathrm{Cov}(a + bX, c + dY) = bd\,\mathrm{Cov}(X, Y)$

Let $\mu_X = E[X]$ and $\mu_Y = E[Y]$. By the linearity of expectation,
$$E[a + bX] = a + b\mu_X$$
$$E[c + dY] = c + d\mu_Y$$
Thus, we have
$$(a + bx) - E[a + bX] = b(x - \mu_X)$$
$$(c + dy) - E[c + dY] = d(y - \mu_Y)$$
Substituting these into the formula for the covariance between two discrete random variables,
$$\mathrm{Cov}(a + bX, c + dY) = \sum_x \sum_y [b(x - \mu_X)][d(y - \mu_Y)]\, p(x, y)$$
$$= bd \sum_x \sum_y (x - \mu_X)(y - \mu_Y)\, p(x, y) = bd\,\mathrm{Cov}(X, Y)$$

(b) $\mathrm{Corr}(a + bX, c + dY) = \mathrm{Corr}(X, Y)$ provided that $b, d$ are positive.

$$\mathrm{Corr}(a + bX, c + dY) = \frac{\mathrm{Cov}(a + bX, c + dY)}{\sqrt{\mathrm{Var}(a + bX)\,\mathrm{Var}(c + dY)}} = \frac{bd\,\mathrm{Cov}(X, Y)}{\sqrt{b^2\,\mathrm{Var}(X)\, d^2\,\mathrm{Var}(Y)}} = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} = \mathrm{Corr}(X, Y)$$

6. Fill in the missing steps from lecture to prove the shortcut formula for covariance:
$$\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y]$$

By the linearity of expectation,
$$\mathrm{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E[XY - \mu_X Y - \mu_Y X + \mu_X \mu_Y]$$
$$= E[XY] - \mu_X E[Y] - \mu_Y E[X] + \mu_X \mu_Y = E[XY] - \mu_X \mu_Y - \mu_Y \mu_X + \mu_X \mu_Y$$
$$= E[XY] - \mu_X \mu_Y = E[XY] - E[X]E[Y]$$
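The joint distribution from Problem 4 gives a concrete check of Problems 4 through 6: the marginals and conditional expectations, the Law of Iterated Expectations, the covariance shortcut, and the linear-transformation rule from Problem 5(a). A sketch, where the constants `a, b, c, d` are illustrative (hypothetical) choices:

```python
# Joint pmf from Problem 4, keyed by (x, y).
joint = {(1, 0): 0.20, (1, 1): 0.30, (2, 0): 0.25, (2, 1): 0.25}

def E(f):
    """Expectation of f(x, y) under the joint pmf."""
    return sum(f(x, y) * pr for (x, y), pr in joint.items())

# Marginal of X and the conditional expectations E[Y | X = x].
pX = {x: sum(pr for (xx, _), pr in joint.items() if xx == x) for x in (1, 2)}
EY_given = {x: sum(y * joint[(x, y)] / pX[x] for y in (0, 1)) for x in (1, 2)}

# Law of Iterated Expectations: averaging E[Y | X] over X recovers E[Y].
EEY = sum(EY_given[x] * pX[x] for x in (1, 2))

# Covariance via the shortcut formula (Problem 6): E[fg] - E[f]E[g].
def cov(f, g):
    return E(lambda x, y: f(x, y) * g(x, y)) - E(f) * E(g)

X, Y = (lambda x, y: x), (lambda x, y: y)
a, b, c, d = 2.0, 3.0, -1.0, 4.0  # illustrative constants (hypothetical)
lhs = cov(lambda x, y: a + b * x, lambda x, y: c + d * y)  # Problem 5(a) LHS

print(EY_given, EEY, cov(X, Y), lhs)
```

Here `lhs` equals `b * d * cov(X, Y)`, the right-hand side of Problem 5(a), up to floating-point rounding.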
7. Let $X_1$ be a random variable denoting the returns of stock 1, and $X_2$ be a random variable denoting the returns of stock 2. Accordingly, let $\mu_1 = E[X_1]$, $\mu_2 = E[X_2]$, $\sigma_1^2 = \mathrm{Var}(X_1)$, $\sigma_2^2 = \mathrm{Var}(X_2)$ and $\rho = \mathrm{Corr}(X_1, X_2)$. A portfolio, $\Pi$, is a linear combination of $X_1$ and $X_2$ with weights that sum to one, that is,
$$\Pi(\omega) = \omega X_1 + (1 - \omega) X_2$$
indicating the proportions of stock 1 and stock 2 that an investor holds. In this example, we require $\omega \in [0, 1]$, so that negative weights are not allowed. (This rules out short-selling.)

(a) Calculate $E[\Pi(\omega)]$ in terms of $\omega$, $\mu_1$ and $\mu_2$.

$$E[\Pi(\omega)] = E[\omega X_1 + (1 - \omega) X_2] = \omega E[X_1] + (1 - \omega) E[X_2] = \omega \mu_1 + (1 - \omega) \mu_2$$

(b) If $\omega \in [0, 1]$, is it possible to have $E[\Pi(\omega)] > \mu_1$ and $E[\Pi(\omega)] > \mu_2$? What about $E[\Pi(\omega)] < \mu_1$ and $E[\Pi(\omega)] < \mu_2$? Explain.

No. When $\omega \in [0, 1]$, $E[\Pi(\omega)] = \omega \mu_1 + (1 - \omega)\mu_2$ is a weighted average of $\mu_1$ and $\mu_2$, so it must lie between them: if short-selling is disallowed, the portfolio expected return can be neither above both $\mu_1$ and $\mu_2$ nor below both.

(c) Express $\mathrm{Cov}(X_1, X_2)$ in terms of $\rho$ and $\sigma_1, \sigma_2$.

$$\mathrm{Cov}(X_1, X_2) = \rho \sigma_1 \sigma_2$$

(d) What is $\mathrm{Var}[\Pi(\omega)]$? (Your answer should be in terms of $\rho$, $\sigma_1^2$ and $\sigma_2^2$.)

$$\mathrm{Var}[\Pi(\omega)] = \mathrm{Var}[\omega X_1 + (1 - \omega) X_2] = \omega^2 \mathrm{Var}(X_1) + (1 - \omega)^2 \mathrm{Var}(X_2) + 2\omega(1 - \omega)\mathrm{Cov}(X_1, X_2)$$
$$= \omega^2 \sigma_1^2 + (1 - \omega)^2 \sigma_2^2 + 2\omega(1 - \omega)\rho \sigma_1 \sigma_2$$

(e) Using part (d), show that the value of $\omega$ that minimizes $\mathrm{Var}[\Pi(\omega)]$ is
$$\omega^* = \frac{\sigma_2^2 - \rho \sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2 - 2\rho \sigma_1 \sigma_2}$$
In other words, $\Pi(\omega^*)$ is the minimum variance portfolio.
The first order condition is:
$$2\omega \sigma_1^2 - 2(1 - \omega)\sigma_2^2 + (2 - 4\omega)\rho \sigma_1 \sigma_2 = 0$$
Dividing both sides by two and rearranging:
$$\omega \sigma_1^2 - (1 - \omega)\sigma_2^2 + (1 - 2\omega)\rho \sigma_1 \sigma_2 = 0$$
$$\omega \sigma_1^2 - \sigma_2^2 + \omega \sigma_2^2 + \rho \sigma_1 \sigma_2 - 2\omega \rho \sigma_1 \sigma_2 = 0$$
$$\omega(\sigma_1^2 + \sigma_2^2 - 2\rho \sigma_1 \sigma_2) = \sigma_2^2 - \rho \sigma_1 \sigma_2$$
So we have
$$\omega^* = \frac{\sigma_2^2 - \rho \sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2 - 2\rho \sigma_1 \sigma_2}$$

(f) If you want a challenge, check the second order condition from part (e).

The second derivative is $2\sigma_1^2 + 2\sigma_2^2 - 4\rho \sigma_1 \sigma_2$ and, since $\rho = 1$ is the largest possible value for $\rho$,
$$2\sigma_1^2 + 2\sigma_2^2 - 4\rho \sigma_1 \sigma_2 \ge 2\sigma_1^2 + 2\sigma_2^2 - 4\sigma_1 \sigma_2 = 2(\sigma_1 - \sigma_2)^2 \ge 0$$
so the second derivative is nonnegative, indicating a minimum. This is a global minimum since the problem is quadratic in $\omega$.
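The closed-form minimum-variance weight from Problem 7(e) can also be checked against a brute-force grid search over $\omega \in [0, 1]$. A sketch assuming illustrative (hypothetical) parameter values $\sigma_1 = 0.20$, $\sigma_2 = 0.10$, $\rho = 0.3$:

```python
# Illustrative (hypothetical) standard deviations and correlation.
s1, s2, rho = 0.20, 0.10, 0.3

def port_var(w):
    """Var[Pi(w)] from Problem 7(d)."""
    return w**2 * s1**2 + (1 - w) ** 2 * s2**2 + 2 * w * (1 - w) * rho * s1 * s2

# Closed-form minimizer from part (e).
w_star = (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)

# Brute-force check: evaluate the variance on a fine grid over [0, 1]
# and pick the weight with the smallest portfolio variance.
w_grid = min((i / 10000 for i in range(10001)), key=port_var)

print(w_star, w_grid)
```

The two values agree to within the grid spacing, confirming the algebra in the first order condition.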