Advanced Econometrics II (Part 1)
Dr. Mehdi Hosseinkouchack
Goethe University Frankfurt
Summer 2016
Distribution

For simplicity, we consider the three-dimensional case of real continuous random variables $X$, $Y$ and $Z$ with joint density $f_{X,Y,Z}$, so that e.g.
$$P(X \le c,\, Y \le b,\, Z \le a) = \int_{-\infty}^{a} \int_{-\infty}^{b} \int_{-\infty}^{c} f_{X,Y,Z}(x,y,z)\,dx\,dy\,dz.$$
Univariate and multivariate marginal distributions are e.g.
$$f_X(x) = \int\!\!\int f_{X,Y,Z}(x,y,z)\,dy\,dz, \qquad f_{X,Y}(x,y) = \int f_{X,Y,Z}(x,y,z)\,dz.$$
The variables are (stochastically) independent if
$$f_{X,Y,Z}(x,y,z) = f_X(x)\, f_Y(y)\, f_Z(z).$$
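As a minimal numerical sketch of marginalization, the integral $f_X(x) = \int\!\int f_{X,Y,Z}(x,y,z)\,dy\,dz$ can be checked for an assumed example (not from the slides): three independent Exp(1) variables, whose joint density is $e^{-(x+y+z)}$ on the positive octant and whose $X$-marginal is $e^{-x}$.

```python
import math

# Assumed example: joint density of three independent Exp(1) variables,
# f(x, y, z) = exp(-(x + y + z)) for x, y, z >= 0.
def f_xyz(x, y, z):
    return math.exp(-(x + y + z))

# f_X(x) = double integral of f(x, y, z) over y and z,
# approximated by a midpoint Riemann sum on [0, upper]^2.
def f_x_marginal(x, upper=20.0, n=400):
    h = upper / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            y = (i + 0.5) * h
            z = (j + 0.5) * h
            total += f_xyz(x, y, z) * h * h
    return total

approx = f_x_marginal(1.0)
exact = math.exp(-1.0)  # the Exp(1) marginal density at x = 1
```

The truncation at `upper=20` is harmless here because the integrand decays exponentially.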
Conditional distributions

Conditional distributions are defined as follows (denominators assumed to be positive):
$$f_{X|Y}(x) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, \qquad f_{X|Y,Z}(x) = \frac{f_{X,Y,Z}(x,y,z)}{f_{Y,Z}(y,z)}, \qquad f_{X,Y|Z}(x,y) = \frac{f_{X,Y,Z}(x,y,z)}{f_Z(z)}.$$
In case of independence:
$$f_{X|Y}(x) = f_X(x), \quad \text{and so on.}$$
Expectation

Sometimes we index the expectation operator to make clear with respect to which variable the expectation is computed:
$$E(X) = E_X(X) = \int x f_X(x)\,dx.$$
For $g$ with $g: \mathbb{R} \to \mathbb{R}$,
$$E[g(X)] = E_X[g(X)] = \int g(x) f_X(x)\,dx.$$
Similarly for $g: \mathbb{R}^2 \to \mathbb{R}$:
$$E_{X,Y}[g(X,Y)] = \int\!\!\int g(x,y) f_{X,Y}(x,y)\,dx\,dy.$$
In particular, the covariance is defined as
$$\mathrm{Cov}(X,Y) = E_{X,Y}\big[(X - E_X(X))(Y - E_Y(Y))\big] = E_{X,Y}(XY) - E_X(X)E_Y(Y).$$
We assume that all those integrals are finite. By construction these expected values are real numbers.
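A quick Monte Carlo sketch of the two covariance expressions, for an assumed example: $X \sim N(0,1)$ and $Y = X + \varepsilon$ with $\varepsilon \sim N(0,1)$ independent, so that $\mathrm{Cov}(X,Y) = \mathrm{Var}(X) = 1$.

```python
import random

random.seed(0)
N = 200_000
# Assumed example: X ~ N(0, 1), Y = X + independent N(0, 1) noise.
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [x + random.gauss(0, 1) for x in xs]

mean_x = sum(xs) / N
mean_y = sum(ys) / N
# The two expressions from the slide, evaluated on the sample:
cov_centred = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / N
cov_product = sum(x * y for x, y in zip(xs, ys)) / N - mean_x * mean_y
```

The two sample versions agree up to floating-point error, and both are close to the population value 1.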
Conditional expectation

Assume that $X$ and $Y$ are not independent and that the realization of $Y$ is known: $Y = y$. This will change our expectation about $X$:
$$E(X|Y = y) = E_X(X|Y = y) = \int x f_{X|Y=y}(x)\,dx.$$
Here, the marginal density of $X$ has been replaced by the conditional density given $Y = y$. Formally we may define the density conditioned on the random variable $Y$ instead of conditioned on one value $Y = y$:
$$f_{X|Y}(x) = \frac{f_{X,Y}(x,Y)}{f_Y(Y)}.$$
Note that $f_{X|Y}(x)$ is a transformation of $Y$ and hence a random variable, too. The same holds true for the corresponding conditional expectation:
$$E(X|Y) = E_X(X|Y) = \int x f_{X|Y}(x)\,dx.$$
Therefore, it does make sense to form expectations of conditional expectations!
Conditional expectation

Corresponding rules are sometimes referred to as the law of iterated expectations (LIE) in the literature. The general result is:
$$E[E(X|Y,Z)|Z] = E(X|Z).$$
In order not to get confused, it is helpful to remember with respect to which variable the expectation is taken:
$$E_Y[E_X(X|Y,Z)|Z] = E_X(X|Z).$$
We obtain as a special case of the LIE:
$$E_Y[E_X(X|Y)] = E_X(X), \quad \text{or shorter:} \quad E[E(X|Y)] = E(X).$$
Although $Y$ and $g(Y)$ are random variables, we obtain upon conditioning on $Y$, for the RV $H = h(X,Y) = X g(Y)$:
$$E(g(Y)X|Y) = E_H(g(Y)X|Y) = g(Y)E_X(X|Y) = g(Y)E(X|Y).$$
Conditional expectation

Lemma (0.1)
With the notation introduced above it holds that:
a) $E_Y[E_X(X|Y,Z)|Z] = E_X(X|Z)$,
b) $E_Y[E_X(X|Y)] = E_X(X)$,
c) $E_H(g(Y)X|Y) = g(Y)E_X(X|Y)$.

In fact, one often desires to condition on sigma fields, or information sets. The above results generalize accordingly.¹

¹ In Breiman (1992) or Davidson (1994) we find results like $E[E(X|\mathcal{F})] = E(X)$ and $E[E(X|\mathcal{F} \cup \mathcal{G})|\mathcal{F}] = E(X|\mathcal{F})$.
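Part b) of the lemma can be illustrated numerically. In an assumed example (not from the slides) with $Y$ uniform on $\{0,1,2\}$ and $X\,|\,Y = y \sim N(y, 1)$, the conditional mean $E(X|Y=y)$ is estimated by group averages, and averaging those conditional means over the draws of $Y$ recovers the overall mean of $X$, here $E(X) = E(Y) = 1$.

```python
import random

random.seed(1)
N = 100_000
# Assumed example: Y uniform on {0, 1, 2}, and X | Y = y ~ N(y, 1).
pairs = []
for _ in range(N):
    y = random.randrange(3)
    pairs.append((random.gauss(y, 1), y))

# Estimate E(X | Y = y) by the average of X within each Y-group.
cond_mean = {}
for y in range(3):
    grp = [x for x, yy in pairs if yy == y]
    cond_mean[y] = sum(grp) / len(grp)

# LIE, part b): averaging E(X | Y) over the draws of Y gives E(X).
lie_mean = sum(cond_mean[y] for _, y in pairs) / N
plain_mean = sum(x for x, _ in pairs) / N
```

In this group-average construction the two sample quantities coincide by arithmetic, which mirrors the Fubini argument hinted at in Problem 0.1.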
Problem set

0.1 Prove Lemma 0.1 b). Hint: Fubini's theorem.
0.2 Let $X$ be a normally distributed random variable with $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\{-(x-\mu)^2/(2\sigma^2)\}$.
 i. Find $E e^{\theta X}$.
 ii. Find $E(X-\mu)^k$ for $k = 1, 2, \ldots$
 iii. Find a relationship between the results of parts i and ii.
 iv. Find $\Pr\{X \in \mathbb{R}\}$.
0.3 Consider the price of an asset that each day either increases or decreases by one unit with probability $p$ or $1-p$, respectively. If the price of this asset is 1 when it is introduced in the market, and the probability that it ever reaches a value of 2 is $\alpha$, then find $\alpha$.
Convergence in Probability: I

Small o: We say that $\{X_t\}$ converges in probability to zero, written $X_t = o_p(1)$ or $X_t \xrightarrow{p} 0$, if for every $\varepsilon > 0$ it holds that
$$\Pr\{|X_t| > \varepsilon\} \to 0 \quad \text{as } t \to \infty.$$

Big O: We say that $\{X_t\}$ is bounded in probability (or tight), written $X_t = O_p(1)$, if for every $\varepsilon > 0$ there exists $\delta(\varepsilon) \in (0, \infty)$ such that
$$\Pr\{|X_t| > \delta(\varepsilon)\} < \varepsilon \quad \text{for all } t.$$
Convergence in Probability: II

For a random sequence $\{X_t\}$ we say that it converges in probability to a random variable $X$ if for every $\varepsilon > 0$ we have that $\Pr\{|X_t - X| > \varepsilon\} \to 0$ as $t \to \infty$. We write $X_t \xrightarrow{p} X$.

$X_t = o_p(a_t)$ iff $a_t^{-1} X_t = o_p(1)$. $X_t = O_p(a_t)$ iff $a_t^{-1} X_t = O_p(1)$.

If $X_t = o_p(a_t)$ and $Y_t = o_p(b_t)$, then
 i. $X_t Y_t = o_p(a_t b_t)$,
 ii. $X_t + Y_t = o_p(\max(a_t, b_t))$,
 iii. $|X_t|^r = o_p(a_t^r)$ for $r > 0$.
If $X_t = O_p(a_t)$ and $Y_t = O_p(b_t)$, then
 i. $X_t Y_t = O_p(a_t b_t)$,
 ii. $X_t + Y_t = O_p(\max(a_t, b_t))$,
 iii. $|X_t|^r = O_p(a_t^r)$ for $r > 0$.
If $X_t = o_p(a_t)$ and $Y_t = O_p(b_t)$, then $X_t Y_t = o_p(a_t b_t)$.
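A small simulation sketch of the $o_p$ definition, under the assumed example of iid $N(0,1)$ data: the sample mean $\bar X_T$ is $o_p(1)$, so $\Pr\{|\bar X_T| > \varepsilon\}$ should shrink as $T$ grows (while $\sqrt{T}\,\bar X_T$ would remain $O_p(1)$, staying approximately $N(0,1)$).

```python
import random

random.seed(2)

def sample_mean(T):
    # Mean of T iid N(0, 1) draws (assumed example data).
    return sum(random.gauss(0, 1) for _ in range(T)) / T

# Estimate Pr{|Xbar_T| > 0.1} by Monte Carlo for two sample sizes.
R = 2000
freqs = {}
for T in (25, 400):
    hits = sum(1 for _ in range(R) if abs(sample_mean(T)) > 0.1)
    freqs[T] = hits / R
```

The estimated exceedance probability drops sharply between $T = 25$ and $T = 400$, as the $o_p(1)$ property requires.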
Convergence in Probability: III

Proposition A.1: If $X_t \xrightarrow{p} X$ and $g$ is a continuous function on $\mathbb{R}$, then $g(X_t) \xrightarrow{p} g(X)$ as $t \to \infty$.

Proposition A.2: Let $\{X_t\}$ be such that $X_t = a + O_p(r_t)$, where $a$ is real and $0 < r_t \to 0$ as $t \to \infty$. If $g$ is a continuous function with $s$ derivatives at $a$, then
$$g(X_t) = \sum_{j=0}^{s} \frac{g^{(j)}(a)}{j!} (X_t - a)^j + o_p(r_t^s).$$
Convergence in r-th mean

For a random sequence $\{X_t\}$ with $E|X_t|^r < \infty$ for some $r > 0$, we say that $X_t$ converges in the $r$-th mean to a random variable $X$ if $E|X|^r < \infty$ and $E|X_t - X|^r \to 0$ as $t \to \infty$. We write $X_t \xrightarrow{r} X$.

$X_t \xrightarrow{r} X \Rightarrow X_t \xrightarrow{s} X$, for some $r > s > 0$.
$X_t \xrightarrow{r} X \Rightarrow X_t \xrightarrow{p} X$, for some $r > 0$.
$X_t \xrightarrow{2} X \Rightarrow EX_t \to EX$ and $EX_t^2 \to EX^2$.
$EX_t \to EX$ and $\mathrm{var}(X_t) \to 0 \Rightarrow X_t \xrightarrow{2} EX$.
Weak Convergence in Distribution: I

Let $\{F_t\}$ be a sequence of distribution functions. If there exists a distribution function $F$ such that, as $t \to \infty$, $F_t(x) \to F(x)$ at every point $x$ at which $F$ is continuous, we say that $F_t$ converges in law (or weakly) to $F$, and we write $F_t \xrightarrow{w} F$.

Let $\{X_t\}$ be a sequence of random variables and $\{F_t\}$ the corresponding sequence of df's. We say that $X_t$ converges in distribution (or law) to $X$ if there exists an rv $X$ with df $F$ such that $F_t \xrightarrow{w} F$. We write $X_t \xrightarrow{L} X$ or $X_t \xrightarrow{d} X$.

If $X_t \xrightarrow{d} X$ then $X_t + c \xrightarrow{d} X + c$.
If $X_t \xrightarrow{d} X$ then $cX_t \xrightarrow{d} cX$.
Weak Convergence in Distribution: II

Proposition A.3 (Cramér–Wold device): Let $\{X_t\}$ be a sequence of random $k$-vectors. Then $X_t \xrightarrow{d} X$ iff $\lambda' X_t \xrightarrow{d} \lambda' X$ for all $\lambda \in \mathbb{R}^k$.

Proposition A.4: If $X_t \xrightarrow{p} X$ then, as $t \to \infty$,
$$E\left|\exp(i t' X_t) - \exp(i t' X)\right| \to 0,$$
so that $X_t \xrightarrow{d} X$.
Weak Convergence in Distribution: III

Proposition A.5: If for the random $k$-vectors $\{X_t\}$ it holds that $X_t \xrightarrow{d} X$, and $h: \mathbb{R}^k \to \mathbb{R}^m$ represents a continuous mapping, then $h(X_t) \xrightarrow{d} h(X)$.

In particular, if $X_t \xrightarrow{d} X$ and $Y_t \xrightarrow{p} c$, then
 $X_t + Y_t \xrightarrow{d} X + c$,
 $X_t Y_t \xrightarrow{d} cX$ if $c \ne 0$,
 $X_t Y_t \xrightarrow{p} 0$ if $c = 0$,
 $X_t / Y_t \xrightarrow{d} X / c$ if $c \ne 0$.

If $Y_t$ converges in probability to a random variable, then these results need not hold.
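A simulation sketch of the product rule $X_t Y_t \xrightarrow{d} cX$, under assumed iid $N(0,1)$ data: take $X_t = \sqrt{T}\,\bar u_T$ (approximately $N(0,1)$ by the CLT) and $Y_t = 2 + \bar u_T \xrightarrow{p} 2$, so the product should be approximately $N(0,4)$.

```python
import math
import random

random.seed(3)
T, R = 500, 4000
draws = []
for _ in range(R):
    us = [random.gauss(0, 1) for _ in range(T)]
    ubar = sum(us) / T
    z = math.sqrt(T) * ubar   # X_t: approximately N(0, 1) by the CLT
    y = 2 + ubar              # Y_t: converges in probability to c = 2
    draws.append(z * y)       # product rule: approximately N(0, 4)

var_hat = sum(d * d for d in draws) / R
```

The Monte Carlo variance of the product is close to $c^2 \cdot \mathrm{Var}(X) = 4$, consistent with the limit $2X \sim N(0,4)$.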
Weak Convergence in Distribution: IV

$X_t \xrightarrow{p} X \Rightarrow X_t \xrightarrow{d} X$.
$X_t \xrightarrow{p} c \iff X_t \xrightarrow{d} c$, with $c$ being a constant.
Almost Sure Convergence

For a random sequence $\{X_t\}$ we say that it converges almost surely (a.s.) to a random variable $X$ iff
$$\Pr\{\omega : X_t(\omega) \to X(\omega) \text{ as } t \to \infty\} = 1.$$
We write $X_t \xrightarrow{a.s.} X$, or $X_t \to X$ with probability 1.

$X_t \xrightarrow{a.s.} X$ iff $\lim_{t \to \infty} \Pr\left\{\sup_{m \ge t} |X_m - X| > \varepsilon\right\} = 0$ for all $\varepsilon > 0$.
$X_t \xrightarrow{a.s.} X \Rightarrow X_t \xrightarrow{p} X$.
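The $\sup_{m \ge t}$ characterization can be illustrated along a single simulated path (an assumed example, not a proof): for running means of iid Uniform(0,1) draws, the whole tail of the path eventually stays near the limit 0.5, so the tail supremum of $|\bar X_m - 0.5|$ shrinks as the cut-off $t$ grows.

```python
import random

random.seed(6)
# One path of running means of iid Uniform(0, 1) draws;
# Xbar_t -> 0.5 almost surely by the strong law of large numbers.
T = 20_000
total = 0.0
running = []
for t in range(1, T + 1):
    total += random.random()
    running.append(total / t)

# Tail suprema sup_{m >= t} |Xbar_m - 0.5| for two cut-offs t.
sup_early = max(abs(m - 0.5) for m in running[100:])
sup_late = max(abs(m - 0.5) for m in running[5000:])
```

Note this inspects one realization only; the definition concerns the probability of the set of such paths.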
Weak Law of Large Numbers (WLLN)

Proposition A.6: For an iid sequence $\{X_t\}$ with a finite mean $\mu$ we have that $\bar{X}_T \xrightarrow{p} \mu$.

Proposition A.7: For $X_t = \sum_{j=0}^{\infty} c_j u_{t-j}$, where $u_t$ is iid with mean $\mu$ and $\sum_{j=0}^{\infty} |c_j| < \infty$, it holds that $\bar{X}_T \xrightarrow{p} \mu \sum_{j=0}^{\infty} c_j$.
Central Limit Theorem (CLT)

Proposition A.8: For an iid process $\{X_t\} \sim (\mu, \sigma^2)$, it holds that $\bar{X}_T$ is $AN(\mu, T^{-1}\sigma^2)$.

Proposition A.9: Let $\{X_t\}$ be an $AN(\mu, c_t^2 \Sigma)$ $k$-vector process, where $c_t \to 0$ as $t \to \infty$ and $\Sigma$ is a symmetric non-negative definite matrix. If $g: \mathbb{R}^k \to \mathbb{R}^m$ is such that each of its elements is continuously differentiable in a neighborhood of $\mu$, and if $D \Sigma D'$ has all its diagonal elements non-zero, where $D$ is the $m \times k$ matrix $[(\partial g_i / \partial x_j)(\mu)]$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, k$, then $g(X_t)$ is $AN(g(\mu), c_t^2 D \Sigma D')$.
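A simulation sketch of Proposition A.9 (the delta method) in the scalar case, for an assumed example: $\bar X_T$ is $AN(\mu, \sigma^2/T)$ for iid $N(\mu, \sigma^2)$ data, and with $g(x) = x^2$ the proposition predicts $g(\bar X_T)$ is $AN(\mu^2, (2\mu)^2 \sigma^2 / T)$, since $D = g'(\mu) = 2\mu$.

```python
import random

random.seed(4)
T, R = 400, 4000
mu, sigma = 2.0, 1.0
g_draws = []
for _ in range(R):
    xbar = sum(random.gauss(mu, sigma) for _ in range(T)) / T
    g_draws.append(xbar ** 2)                  # g(x) = x^2

mean_g = sum(g_draws) / R
var_g = sum((d - mean_g) ** 2 for d in g_draws) / R
# Delta-method prediction: variance (2*mu)^2 * sigma^2 / T around g(mu).
delta_var = (2 * mu) ** 2 * sigma ** 2 / T     # = 0.04 here
```

Both the Monte Carlo mean of $g(\bar X_T)$ and its variance match the asymptotic-normal prediction closely.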
Problem set

Prove all (or as many as your time schedule allows) of the results given above.
Useful inequalities

Assume that all the necessary moments exist.

Markov's inequality: For an rv $X$ with $\Pr[X \ge 0] = 1$ it holds that $\Pr[X > a] \le a^{-1} E(X)$, where $a > 0$.
Chebyshev's inequality: $\Pr[|X - \mu_X| \ge \kappa \sigma_X] \le \kappa^{-2}$.
Hölder's inequality: $E|XY| \le (E|X|^p)^{1/p} (E|Y|^q)^{1/q}$, with $p > 1$ and $q$ such that $\frac{1}{p} + \frac{1}{q} = 1$. (Cauchy–Schwarz inequality: $E|XY| \le \sqrt{EX^2 \, EY^2}$.)
Minkowski's inequality: Let $p \ge 1$, $E|X|^p < \infty$ and $E|Y|^p < \infty$; then $(E|X+Y|^p)^{1/p} \le (E|X|^p)^{1/p} + (E|Y|^p)^{1/p}$.
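Two of the inequalities can be checked on simulated data, for an assumed example with Exp(1) variables (these are sample-moment checks, not proofs — the proofs are the exercise below).

```python
import math
import random

random.seed(5)
N = 50_000
xs = [random.expovariate(1.0) for _ in range(N)]   # X ~ Exp(1), E(X) = 1

# Markov: Pr[X > a] <= E(X)/a. For Exp(1) and a = 3: exp(-3) vs roughly 1/3.
a = 3.0
p_tail = sum(1 for x in xs if x > a) / N
markov_bound = (sum(xs) / N) / a

# Cauchy-Schwarz: E|XY| <= sqrt(E X^2 * E Y^2), with Y an independent Exp(1).
ys = [random.expovariate(1.0) for _ in range(N)]
lhs = sum(abs(x * y) for x, y in zip(xs, ys)) / N
rhs = math.sqrt((sum(x * x for x in xs) / N) * (sum(y * y for y in ys) / N))
```

Here the Markov bound is loose ($e^{-3} \approx 0.05$ against $\approx 1/3$), and independence makes the Cauchy–Schwarz gap visible ($\approx 1$ against $\approx 2$).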
Problem set

Prove that all of the inequalities hold.