Advanced Econometrics II (Part 1)


Dr. Mehdi Hosseinkouchack
Goethe University Frankfurt
Summer 2016

Distribution

For simplicity, we consider the three-dimensional case of real continuous random variables X, Y and Z with joint density $f_{x,y,z}$, so that e.g.

$$\Pr(X \le c, Y \le b, Z \le a) = \int_{-\infty}^{a}\int_{-\infty}^{b}\int_{-\infty}^{c} f_{x,y,z}(x,y,z)\,dx\,dy\,dz.$$

Univariate and multivariate marginal distributions are e.g.

$$f_x(x) = \int\!\!\int f_{x,y,z}(x,y,z)\,dy\,dz, \qquad f_{x,y}(x,y) = \int f_{x,y,z}(x,y,z)\,dz.$$

The variables are (stochastically) independent if:

$$f_{x,y,z}(x,y,z) = f_x(x)\,f_y(y)\,f_z(z).$$
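To make the definitions concrete, here is a minimal numerical sketch (not from the slides); the product-exponential density is a hypothetical example, and the grid truncation at 10 is an approximation:

```python
import numpy as np

# Hypothetical joint density: three independent Exp(1) variables, so
# f(x, y, z) = exp(-x) * exp(-y) * exp(-z) on [0, oo)^3.
dx = 0.1
g = np.arange(0.0, 10.0, dx) + dx / 2          # midpoint grid, truncated at 10
X, Y, Z = np.meshgrid(g, g, g, indexing="ij", sparse=True)
joint = np.exp(-X) * np.exp(-Y) * np.exp(-Z)   # f_{x,y,z} on the grid

# Marginal f_x(x) = int int f(x, y, z) dy dz as a Riemann sum over y and z.
f_x = joint.sum(axis=(1, 2)) * dx**2
print(np.allclose(f_x, np.exp(-g), atol=1e-2))           # True: f_x(x) ~ e^{-x}

# Pr(X <= c, Y <= b, Z <= a) as a truncated triple Riemann sum; under
# independence it factorises into the three marginal probabilities.
a, b, c = 1.0, 2.0, 3.0
mask = (X <= c) & (Y <= b) & (Z <= a)
p = (joint * mask).sum() * dx**3
print(np.isclose(p, (1 - np.exp(-c)) * (1 - np.exp(-b)) * (1 - np.exp(-a)),
                 atol=1e-2))                             # True
```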

Conditional distributions

Conditional distributions are defined as follows (denominators assumed to be positive):

$$f_{x|y}(x) = \frac{f_{x,y}(x,y)}{f_y(y)}, \qquad f_{x|y,z}(x) = \frac{f_{x,y,z}(x,y,z)}{f_{y,z}(y,z)}, \qquad f_{x,y|z}(x,y) = \frac{f_{x,y,z}(x,y,z)}{f_z(z)}.$$

In case of independence:

$$f_{x|y}(x) = f_x(x), \quad \text{and so on.}$$
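A quick worked example (illustration only, not from the slides): take the hypothetical joint density $f_{x,y}(x,y) = x + y$ on the unit square. Then

```latex
% Illustration only: f_{x,y}(x,y) = x + y for 0 <= x, y <= 1.
\begin{align*}
f_y(y) &= \int_0^1 (x+y)\,dx = \tfrac{1}{2} + y,\\
f_{x|y}(x) &= \frac{f_{x,y}(x,y)}{f_y(y)} = \frac{x+y}{\tfrac{1}{2}+y},
  \qquad 0 \le x \le 1,\\
\int_0^1 f_{x|y}(x)\,dx &= \frac{\tfrac{1}{2}+y}{\tfrac{1}{2}+y} = 1.
\end{align*}
```

Each conditional density integrates to one, and since $f_{x,y}(x,y) \neq f_x(x) f_y(y)$ here, X and Y are not independent.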

Expectation

Sometimes we index the expectation operator to make clear with respect to which variable expectation is computed:

$$E(X) = E_x(X) = \int x f_x(x)\,dx.$$

For g with $g: \mathbb{R} \to \mathbb{R}$,

$$E[g(X)] = E_x[g(X)] = \int g(x) f_x(x)\,dx.$$

Similarly for $g: \mathbb{R}^2 \to \mathbb{R}$:

$$E_{x,y}[g(X,Y)] = \int\!\!\int g(x,y) f_{x,y}(x,y)\,dx\,dy.$$

In particular, the covariance is defined as

$$\mathrm{Cov}(X,Y) = E_{x,y}\{[X - E_x(X)][Y - E_y(Y)]\} = E_{x,y}(XY) - E_x(X)\,E_y(Y).$$

We assume that all those integrals are finite. By construction these expected values are real numbers.
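A small Monte Carlo sketch (not from the slides) checking that the two expressions for the covariance agree; the design Y = X + noise is a hypothetical example:

```python
import numpy as np

# Check Cov(X, Y) = E(XY) - E(X)E(Y) by simulation.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)           # Cov(X, Y) = Var(X) = 1

cov_direct = np.mean((x - x.mean()) * (y - y.mean()))
cov_moment = np.mean(x * y) - x.mean() * y.mean()
print(cov_direct, cov_moment)                  # both approximately 1.0
```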

Conditional expectation

Assume that X and Y are not independent and that the realization of Y is known: Y = y. This will change our expectation about X:

$$E(X|Y=y) = E_x(X|Y=y) = \int x f_{x|y}(x)\,dx.$$

Here, the marginal density of X has been replaced by the conditional density given Y = y. Formally, we may define the density conditioned on the random variable Y instead of conditioned on one value Y = y:

$$f_{x|Y}(x) = \frac{f_{x,y}(x, Y)}{f_y(Y)}.$$

Note that $f_{x|Y}(x)$ is a transformation of Y and hence a random variable, too. The same holds true for the corresponding conditional expectation:

$$E(X|Y) = E_x(X|Y) = \int x f_{x|Y}(x)\,dx.$$

Therefore, it does make sense to form expectations of conditional expectations!

Conditional expectation

Corresponding rules are sometimes referred to as laws of iterated expectations (LIE) in the literature. The general result is:

$$E[E(X|Y,Z)|Z] = E(X|Z).$$

In order not to get confused, it is helpful to remember with respect to which variable expectation is taken:

$$E_y[E_x(X|Y,Z)|Z] = E_x(X|Z).$$

We obtain as a special case of the LIE:

$$E_y[E_x(X|Y)] = E_x(X), \quad \text{or shorter:} \quad E[E(X|Y)] = E(X).$$

Although Y and g(Y) are random variables, we obtain upon conditioning on Y, for the RV $H = h(X,Y) = X g(Y)$:

$$E(g(Y)X|Y) = E_h(g(Y)X|Y) = g(Y)\,E_x(X|Y) = g(Y)\,E(X|Y).$$
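A small simulation sketch (not from the slides) of the LIE and of taking out what is known; the hierarchical model Y ~ N(0,1), X|Y ~ N(Y,1) and the choice g(y) = e^y are hypothetical examples:

```python
import numpy as np

# Model: Y ~ N(0,1), X | Y ~ N(Y, 1), so E(X|Y) = Y and E(X) = E(Y) = 0.
rng = np.random.default_rng(1)
n = 1_000_000
y = rng.standard_normal(n)
x = y + rng.standard_normal(n)

# LIE: E[E(X|Y)] = E(X).
print(np.mean(y), np.mean(x))                # both approximately 0

# Taking out what is known: E(g(Y)X | Y) = g(Y)E(X|Y), so unconditionally
# E[g(Y)X] = E[g(Y)E(X|Y)] = E[Y e^Y] with g(y) = e^y here.
print(np.mean(np.exp(y) * x), np.mean(np.exp(y) * y))   # approximately equal
```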

Conditional expectation

Lemma (0.1)
With the notation introduced above it holds that:
a) $E_y[E_x(X|Y,Z)|Z] = E_x(X|Z)$,
b) $E_y[E_x(X|Y)] = E_x(X)$,
c) $E_h(g(Y)X|Y) = g(Y)\,E_x(X|Y)$.

In fact, one often desires to condition on sigma fields, or information sets. The above results generalize accordingly.¹

¹ In Breiman (1992) or Davidson (1994) we find results like $E[E(X|\mathcal{F})] = E(X)$ and $E[E(X|\mathcal{F} \cup \mathcal{G})|\mathcal{F}] = E(X|\mathcal{F})$.

Problem set

0.1 Prove Lemma 0.1 b). Hint: Fubini's Theorem.

0.2 Let X be a normally distributed random variable with $f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\{-(x-\mu)^2 / 2\sigma^2\}$.
  i. Find $E e^{\theta X}$.
  ii. Find $E(X - \mu)^k$ for $k = 1, 2, \ldots$
  iii. Find a relationship between the results of parts i and ii.
  iv. Find $\Pr\{X \in \mathbb{R}\}$.

0.3 Consider the price of an asset that each day either increases or decreases by one unit with probability $p$ or $1 - p$, respectively. If the price of this asset is 1 when it is introduced in the market, and the probability that it ever reaches a value of 2 is $\alpha$, then find $\alpha$.

Convergence in Probability: I

Small o: We say that $\{X_t\}$ converges in probability to zero, written $X_t = o_p(1)$ or $X_t \xrightarrow{p} 0$, if for every $\varepsilon > 0$ it holds that

$$\Pr\{|X_t| > \varepsilon\} \to 0 \quad \text{as } t \to \infty.$$

Big O: We say that $\{X_t\}$ is bounded in probability (or tight), written $X_t = O_p(1)$, if for every $\varepsilon > 0$ there exists $\delta(\varepsilon) \in (0, \infty)$ such that

$$\Pr\{|X_t| > \delta(\varepsilon)\} < \varepsilon \quad \text{for all } t.$$

Convergence in Probability: II

For a random sequence $\{X_t\}$ we say that it converges in probability to a random variable X if for every $\varepsilon > 0$ we have that $\Pr\{|X_t - X| > \varepsilon\} \to 0$ as $t \to \infty$. We write $X_t \xrightarrow{p} X$.

$X_t = o_p(a_t)$ iff $a_t^{-1} X_t = o_p(1)$. $X_t = O_p(a_t)$ iff $a_t^{-1} X_t = O_p(1)$.

If $X_t = o_p(a_t)$ and $Y_t = o_p(b_t)$, then
i. $X_t Y_t = o_p(a_t b_t)$,
ii. $X_t + Y_t = o_p(\max(a_t, b_t))$,
iii. $|X_t|^r = o_p(a_t^r)$ for $r > 0$.

If $X_t = O_p(a_t)$ and $Y_t = O_p(b_t)$, then
i. $X_t Y_t = O_p(a_t b_t)$,
ii. $X_t + Y_t = O_p(\max(a_t, b_t))$,
iii. $|X_t|^r = O_p(a_t^r)$ for $r > 0$.

If $X_t = o_p(a_t)$ and $Y_t = O_p(b_t)$, then $X_t Y_t = o_p(a_t b_t)$.
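As an illustration of the difference between $o_p$ and $O_p$ (a sketch, not from the slides; the Exp(1) design is a hypothetical example): for an iid sample with finite variance, $\bar X_T - \mu = o_p(1)$ while $\sqrt{T}(\bar X_T - \mu) = O_p(1)$.

```python
import numpy as np

# For iid Exp(1) draws (mean 1, variance 1): the centred sample mean
# shrinks to zero (o_p(1)), while sqrt(T) * (mean - 1) is O_p(1) -- its
# 99% quantile across replications stabilises instead of drifting off.
rng = np.random.default_rng(2)
for T in [100, 10_000, 1_000_000]:
    means = np.array([rng.exponential(1.0, T).mean() for _ in range(500)])
    print(T,
          np.quantile(np.abs(means - 1.0), 0.99),                # -> 0
          np.quantile(np.sqrt(T) * np.abs(means - 1.0), 0.99))   # stays bounded
```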

Convergence in Probability: III

Proposition A.1: If $X_t \xrightarrow{p} X$ and g is a continuous function on $\mathbb{R}$, then $g(X_t) \xrightarrow{p} g(X)$ as $t \to \infty$.

Proposition A.2: Let $\{X_t\}$ be such that $X_t = a + O_p(r_t)$, where a is real and $0 < r_t \to 0$ as $t \to \infty$. If g is a continuous function with s derivatives at a, then

$$g(X_t) = \sum_{j=0}^{s} \frac{g^{(j)}(a)}{j!} (X_t - a)^j + o_p(r_t^s).$$

Convergence in r-th Mean

For a random sequence $\{X_t\}$ with $E|X_t|^r < \infty$ for some $r > 0$, we say that $X_t$ converges in the r-th mean to a random variable X if $E|X|^r < \infty$ and $E|X_t - X|^r \to 0$ as $t \to \infty$. We write $X_t \xrightarrow{r} X$.

$X_t \xrightarrow{r} X \Rightarrow X_t \xrightarrow{s} X$, for $r > s > 0$.
$X_t \xrightarrow{r} X \Rightarrow X_t \xrightarrow{p} X$, for some $r > 0$.
$X_t \xrightarrow{2} X \Rightarrow E X_t \to E X$ and $E X_t^2 \to E X^2$.
$E X_t \to E X$ and $\mathrm{var}(X_t) \to 0 \Rightarrow X_t \xrightarrow{2} E X$.

Weak Convergence in Distribution: I

Let $\{F_t\}$ be a sequence of distribution functions. If there exists a distribution function F such that, as $t \to \infty$, $F_t(x) \to F(x)$ at every point x at which F is continuous, we say that $F_t$ converges in law (or weakly) to F, and we write $F_t \xrightarrow{w} F$.

Let $\{X_t\}$ be a sequence of random variables and $\{F_t\}$ the corresponding sequence of dfs. We say that $X_t$ converges in distribution (or law) to X if there exists an rv X with df F such that $F_t \xrightarrow{w} F$. We write $X_t \xrightarrow{L} X$ or $X_t \xrightarrow{d} X$.

If $X_t \xrightarrow{d} X$, then $X_t + c \xrightarrow{d} X + c$.
If $X_t \xrightarrow{d} X$, then $c X_t \xrightarrow{d} c X$.
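A simulation sketch (not from the slides) of $F_t \xrightarrow{w} F$: the df of the standardized mean of t iid U(0,1) draws should approach the standard normal df at every point; the grid of evaluation points below is arbitrary.

```python
import numpy as np
from scipy.stats import norm

# F_t: df of the standardised mean of t iid U(0,1) draws (mean 1/2,
# variance 1/12); F: the standard normal df Phi.
rng = np.random.default_rng(3)
grid = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
for t in [2, 10, 100]:
    u = rng.uniform(size=(50_000, t))
    z = (u.mean(axis=1) - 0.5) * np.sqrt(12 * t)       # standardised mean
    F_t = (z[:, None] <= grid).mean(axis=0)            # empirical df on grid
    print(t, np.round(F_t - norm.cdf(grid), 3))        # -> 0 as t grows
```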

Weak Convergence in Distribution: II

Proposition A.3 (Cramér-Wold Device). Let $\{X_t\}$ be a sequence of random k-vectors. Then $X_t \xrightarrow{d} X$ iff $\lambda' X_t \xrightarrow{d} \lambda' X$ for all $\lambda \in \mathbb{R}^k$.

Proposition A.4. If $X_t \xrightarrow{p} X$, then as $t \to \infty$

$$E\left|\exp(i \lambda' X_t) - \exp(i \lambda' X)\right| \to 0,$$

and hence $X_t \xrightarrow{d} X$.

Weak Convergence in Distribution: III

Proposition A.5. If for random k-vectors $\{X_t\}$ it holds that $X_t \xrightarrow{d} X$ and $h : \mathbb{R}^k \to \mathbb{R}^m$ represents a continuous mapping, then $h(X_t) \xrightarrow{d} h(X)$.

In particular, if $X_t \xrightarrow{d} X$ and $Y_t \xrightarrow{p} c$, then
$X_t + Y_t \xrightarrow{d} X + c$,
$X_t Y_t \xrightarrow{d} c X$ if $c \neq 0$, and $X_t Y_t \xrightarrow{p} 0$ if $c = 0$,
$X_t / Y_t \xrightarrow{d} X / c$ if $c \neq 0$.

If $Y_t$ converges in probability to a random variable, then these results need not hold.
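A simulation sketch of the Slutsky-type results (not from the slides; the studentization design is a hypothetical example):

```python
import numpy as np

# X_t := sqrt(t)(mean - mu) -> N(0,1) in distribution and Y_t := sample
# variance -> 1 in probability, so X_t / sqrt(Y_t) should still be close
# to N(0,1) in distribution.
rng = np.random.default_rng(4)
t, reps = 500, 20_000
x = rng.exponential(1.0, size=(reps, t))       # iid draws, mean 1, variance 1
xt = np.sqrt(t) * (x.mean(axis=1) - 1.0)       # -> N(0,1) by the CLT
yt = x.var(axis=1)                             # -> 1 in probability
z = xt / np.sqrt(yt)
print(np.round(np.quantile(z, [0.025, 0.5, 0.975]), 2))
# close to the N(0,1) quantiles [-1.96, 0.00, 1.96]
```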

Weak Convergence in Distribution: IV

$X_t \xrightarrow{p} X \Rightarrow X_t \xrightarrow{d} X$.
$X_t \xrightarrow{p} c \Leftrightarrow X_t \xrightarrow{d} c$, with c being a constant.

Almost Sure Convergence

For a random sequence $\{X_t\}$ we say that it converges almost surely (a.s.) to a random variable X iff

$$\Pr\{\omega : X_t(\omega) \to X(\omega) \text{ as } t \to \infty\} = 1.$$

We write $X_t \xrightarrow{a.s.} X$ or $X_t \to X$ with probability 1.

$X_t \xrightarrow{a.s.} X$ iff $\lim_{t \to \infty} \Pr\{\sup_{m \ge t} |X_m - X| > \varepsilon\} = 0$ for all $\varepsilon > 0$.
$X_t \xrightarrow{a.s.} X \Rightarrow X_t \xrightarrow{p} X$.

Weak Law of Large Numbers (WLLN)

Proposition A.6. For an iid sequence $\{X_t\}$ with a finite mean $\mu$ we have that $\bar{X}_T \xrightarrow{p} \mu$.

Proposition A.7. For $X_t = \sum_{j=0}^{\infty} c_j u_{t-j}$, where $u_t$ is iid with mean $\mu$ and $\sum_{j=0}^{\infty} |c_j| < \infty$, it holds that $\bar{X}_T \xrightarrow{p} \mu \sum_{j=0}^{\infty} c_j$.
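A simulation sketch of Proposition A.7 (illustration only): with $c_j = 0.5^j$ and iid shocks $u_t$ with mean $\mu = 1$, we have $\sum_j |c_j| < \infty$ and $\bar X_T$ should tend to $\mu \sum_j c_j = 2$. The AR(1) recursion implements the filter exactly, up to initial-condition truncation:

```python
import numpy as np
from scipy.signal import lfilter

# X_t = sum_j 0.5^j u_{t-j}, realised via x_t = 0.5 x_{t-1} + u_t.
rng = np.random.default_rng(5)
for T in [100, 10_000, 1_000_000]:
    u = rng.exponential(1.0, T)                 # iid shocks with mean 1
    x = lfilter([1.0], [1.0, -0.5], u)          # x_t = 0.5 x_{t-1} + u_t
    print(T, x.mean())                          # -> mu / (1 - 0.5) = 2
```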

Central Limit Theorem (CLT)

Proposition A.8. For an iid process $\{X_t\} \sim (\mu, \sigma^2)$, it holds that $\bar{X}_T$ is $AN(\mu, T^{-1} \sigma^2)$.

Proposition A.9. Let $\{X_t\}$ be $AN(\mu, c_t^2 \Sigma)$, a k-vector process where $c_t \to 0$ as $t \to \infty$ and $\Sigma$ is a symmetric non-negative definite matrix. If $g : \mathbb{R}^k \to \mathbb{R}^m$ is such that each of its elements is continuously differentiable in a neighborhood of $\mu$, and if $D \Sigma D'$ has all its diagonal elements non-zero, where D is the $m \times k$ matrix $[(\partial g_i / \partial x_j)(\mu)]$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, k$, then $g(X_t)$ is $AN(g(\mu), c_t^2 D \Sigma D')$.
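A simulation sketch of Proposition A.9 in the scalar case (illustration only; the choices $X_t \sim$ Exp(1) and $g(x) = x^2$ are hypothetical): here $\mu = \sigma^2 = 1$ and $D = g'(\mu) = 2$, so $g(\bar X_T)$ should be $AN(1, 4/T)$.

```python
import numpy as np

# Delta-method check: standardising g(mean) = mean^2 by the asymptotic
# mean g(mu) = 1 and sd sqrt(4 / T) should give roughly a N(0,1) sample.
rng = np.random.default_rng(6)
T, reps = 1_000, 20_000
means = rng.exponential(1.0, size=(reps, T)).mean(axis=1)
z = np.sqrt(T) * (means**2 - 1.0) / 2.0        # standardised via Prop. A.9
print(np.round([z.mean(), z.std()], 2))        # approximately [0.0, 1.0]
```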

Problem set

Prove all (or as many as your time schedule allows) of the results given above.

Useful inequalities

Assume that all the necessary moments exist.

Markov's ineq.: For an rv X with $\Pr[X \ge 0] = 1$ it holds that $\Pr[X > a] \le a^{-1} E(X)$, where $a > 0$.

Chebyshev's ineq.: $\Pr[|X - \mu_X| \ge \kappa \sigma_X] \le \kappa^{-2}$.

Hölder's ineq.: $E|XY| \le (E|X|^p)^{1/p} (E|Y|^q)^{1/q}$, with $p > 1$ and q such that $\frac{1}{p} + \frac{1}{q} = 1$. (Cauchy-Schwarz ineq.: $E|XY| \le \sqrt{E X^2 \, E Y^2}$.)

Minkowski's ineq.: Let $p \ge 1$, $E|X|^p < \infty$ and $E|Y|^p < \infty$; then $(E|X+Y|^p)^{1/p} \le (E|X|^p)^{1/p} + (E|Y|^p)^{1/p}$.
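A numeric sanity check (not from the slides) of the first three inequalities on simulated Exp(1) draws, which are a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(1.0, 1_000_000)            # nonnegative, mean 1, sd 1

a = 3.0
print((x > a).mean(), 1.0 / a)                 # Markov: lhs <= rhs

k = 2.0
lhs = (np.abs(x - x.mean()) >= k * x.std()).mean()
print(lhs, 1.0 / k**2)                         # Chebyshev: lhs <= rhs

# Hoelder with p = q = 2, i.e. Cauchy-Schwarz:
y = rng.exponential(1.0, 1_000_000)
print(np.mean(np.abs(x * y)) <= np.sqrt(np.mean(x**2) * np.mean(y**2)))  # True
```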

Problem set

Prove that all of the inequalities hold.