
1 Economics 620, Lecture 8a: Asymptotics II

Uses of Asymptotic Distributions: Suppose $\bar X_n - \mu \to 0$ in probability. (What can be said about the distribution of $\bar X_n$?) In order to get distribution theory, we need to norm the random variable; we usually look at $n^{1/2}(\bar X_n - \mu)$. Note that the random variable sequence $\{n^{1/2}(\bar X_n - \mu),\; n \geq 1\}$ does not converge in probability. (Why not?) We might be able to make probability statements like
$$\lim_{n \to \infty} P\left(n^{1/2}(\bar X_n - \mu) < z\right) = F(z)$$
for some distribution $F$. Then we could use $F$ as an approximate distribution for $n^{1/2}(\bar X_n - \mu)$, which in turn implies an approximate distribution for $\bar X_n$. It is often easier to work with $Y_n = n^{1/2}(\bar X_n - \mu)/\sigma$.
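
A quick simulation can show why the norming is needed. This sketch assumes iid Uniform(0,1) draws (so $\mu = 0.5$, $\sigma^2 = 1/12$); the seed, sample sizes, and replication count are illustrative choices, not part of the lecture:

```python
import random
import statistics

random.seed(0)

def sample_mean(n):
    """Mean of n iid Uniform(0,1) draws: mu = 0.5, sigma^2 = 1/12."""
    return sum(random.random() for _ in range(n)) / n

reps = 500
spread_raw, spread_normed = {}, {}
for n in (100, 2500):
    raw = [sample_mean(n) - 0.5 for _ in range(reps)]           # X̄_n - mu
    spread_raw[n] = statistics.stdev(raw)                        # collapses as n grows
    spread_normed[n] = statistics.stdev([n ** 0.5 * d for d in raw])  # stays near sigma
```

The unnormed deviations $\bar X_n - \mu$ pile up at zero (their spread shrinks like $n^{-1/2}$), while the normed quantity $n^{1/2}(\bar X_n - \mu)$ keeps a stable spread near $\sigma \approx 0.289$, so it can have a nondegenerate limiting distribution.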

2 Moment Generating Function

Definition: The moment generating function for the rv $X$ (or the distribution $f$) is
$$m_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx.$$
The name comes from the fact that
$$\frac{d^r m}{dt^r} = \int_{-\infty}^{\infty} x^r e^{tx} f(x)\,dx = E(X^r)$$
when evaluated at $t = 0$. The subscript is dropped when unnecessary.
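
The moment-extraction property can be checked numerically. As an assumed example, take $X \sim$ Exponential$(\lambda)$, whose mgf has the standard closed form $m(t) = \lambda/(\lambda - t)$ for $t < \lambda$; finite differences at $t = 0$ should recover $E[X] = 1/\lambda$ and $E[X^2] = 2/\lambda^2$:

```python
# Differentiate the Exponential(lam) mgf m(t) = lam/(lam - t) at t = 0
# and compare against the known moments E[X] = 1/lam, E[X^2] = 2/lam^2.
lam = 2.0
m = lambda t: lam / (lam - t)

h = 1e-5
first = (m(h) - m(-h)) / (2 * h)             # central difference ~ m'(0) = E[X]
second = (m(h) - 2 * m(0) + m(-h)) / h ** 2  # ~ m''(0) = E[X^2]
```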

3 Moment Generating Function (cont'd)

Note the series expansion:
$$m(t) = E(e^{tX}) = E\left(1 + Xt + \tfrac{1}{2!}(Xt)^2 + \cdots\right) = 1 + \mu_1 t + \tfrac{1}{2}\mu_2 t^2 + \cdots$$
where $\mu_r = EX^r$. (For example: $\mu_1 = \mu$; $\mu_2 = \sigma^2 + \mu^2$.)

Property 1: The moment generating function of $\sum_{i=1}^n X_i$ when the $X_i$ are independent is the product of the moment generating functions of the $X_i$. (Exercise: Prove this.)

Property 2: Let $X$ and $Y$ be random variables with continuous densities $f(x)$ and $g(y)$. If the moment generating function of $X$ equals that of $Y$ in an interval $-h < t < h$, then $f = g$.
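
Property 1 can be illustrated by Monte Carlo. This sketch assumes two independent Uniform(0,1) draws (any independent pair would do); it compares $\hat E[e^{t(X+Y)}]$ against the product of the two estimated mgfs and against the exact value $\left((e^t - 1)/t\right)^2$:

```python
import random
import math

random.seed(1)
t = 0.7
N = 200_000
xs = [random.random() for _ in range(N)]  # independent Uniform(0,1) samples
ys = [random.random() for _ in range(N)]

# mgf of the sum vs. product of the individual mgfs (Property 1)
mgf_sum = sum(math.exp(t * (x + y)) for x, y in zip(xs, ys)) / N
mgf_prod = (sum(math.exp(t * x) for x in xs) / N) * \
           (sum(math.exp(t * y) for y in ys) / N)
exact = ((math.exp(t) - 1) / t) ** 2      # closed-form mgf of U + U'
```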

4 Moment Generating Function (cont'd)

Example: The moment generating function for $X \sim N(0, 1)$ is
$$m(t) = E(e^{tX}) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tx} e^{-\frac{1}{2}x^2}\,dx = e^{t^2/2}\,\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(x-t)^2}\,dx = \;?$$
(Completing the square leaves the integral of a $N(t, 1)$ density, which is 1, so $m(t) = e^{t^2/2}$.)
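
A Monte Carlo sketch of this example (seed and sample size are illustrative): the sample average of $e^{tX}$ over $N(0,1)$ draws should be close to $e^{t^2/2}$.

```python
import random
import math

random.seed(2)
N = 200_000
t = 0.5
# Estimate E[e^{tX}] for X ~ N(0,1) and compare with e^{t^2/2}.
est = sum(math.exp(t * random.gauss(0.0, 1.0)) for _ in range(N)) / N
exact = math.exp(t ** 2 / 2)
```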

5 Characteristic Function

The mgf does not always exist. The function $\phi_X(t) = E e^{itX}$ always exists and is continuous and bounded. We know that $X_n \xrightarrow{d} X \Rightarrow \phi_{X_n}(t) \to \phi_X(t)$. The converse, $\phi_{X_n}(t) \to \phi_X(t)$ (for all $t$) $\Rightarrow X_n \xrightarrow{d} X$, is also true. If $E|X| < \infty$ then $\phi'(0) = iEX$; similarly, if $EX^2 < \infty$ then $\phi''(0) = -EX^2$, etc. CFs combine like mgfs.

6 Properties of c.f.'s

$$\phi_X(t) = E e^{itX} = E\cos tX + iE\sin tX$$
Thus $\phi(0) = 1$, $|\phi(t)| \leq 1$, and $\phi(-t) = \overline{\phi(t)}$, where the bar indicates the complex conjugate. If $X$ is symmetrically distributed, $\phi$ is real-valued.

Some cfs:
Degenerate at $\mu$: $\phi(t) = e^{i\mu t}$
Binomial$(n, p)$: $\phi(t) = (p e^{it} + 1 - p)^n$
$N(\mu, \sigma^2)$: $\phi(t) = e^{i\mu t - \sigma^2 t^2/2}$
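
The three listed properties can be verified directly on one of these cfs. A small sketch using the Binomial$(n, p)$ cf (the parameter values are arbitrary):

```python
import cmath

# Binomial(n, p) characteristic function: phi(t) = (p e^{it} + 1 - p)^n
n, p = 10, 0.3
phi = lambda t: (p * cmath.exp(1j * t) + 1 - p) ** n

val0 = phi(0.0)                                       # should equal 1
max_mod = max(abs(phi(k / 10)) for k in range(-30, 31))  # |phi(t)| <= 1
conj_gap = abs(phi(-1.3) - phi(1.3).conjugate())      # phi(-t) = conj(phi(t))
```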

7 Using $\phi$: Application 1, WLLN

Suppose $\{X_i\}$ are iid with mean $\mu$, each with cf $\phi(t)$. Then
$$E e^{it\bar X_n} = \phi^n(t/n) = \left(1 + \frac{i\mu t}{n} + o(1/n)\right)^n \to e^{i\mu t},$$
the cf of the constant $\mu$. Application 2 will give a central limit theorem.
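
The WLLN conclusion, $\bar X_n \to \mu$, is easy to see in simulation. This sketch assumes Exponential(1) draws, so $\mu = 1$ (the distribution and seed are illustrative):

```python
import random

random.seed(3)
# |X̄_n - mu| for two sample sizes; the deviation shrinks as n grows.
devs = {}
for n in (100, 10_000):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    devs[n] = abs(xbar - 1.0)
```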

8 Using $\phi$: Application 2, CLT

Central Limit Theorem (CLT) (Lindeberg–Lévy): The limiting distribution of $Y_n = n^{1/2}(\bar X_n - \mu)/\sigma$ as $n \to \infty$ is
$$\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-x^2/2}\,dx$$
(standard normal).

Proof: Let $\phi_{X_i}(t)$ be the characteristic function of $(X_i - \mu)$. That is,
$$\phi_{X_i}(t) = 1 - \frac{\sigma^2 t^2}{2} + o(t^2),$$
where $o(t^2)$ is a remainder term such that $o(t^2)/t^2 \to 0$ as $t \to 0$.

9 CLT

We know that
$$Y_n = \sqrt{n}\,\frac{\bar X_n - \mu}{\sigma} = \frac{\sum_{i=1}^n (X_i - \mu)}{\sigma\sqrt{n}}.$$
Hence
$$\phi_{Y_n}(t) = \left[\phi_{X_i}\!\left(\frac{t}{\sigma\sqrt{n}}\right)\right]^n = \left[1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right)\right]^n$$
$$\Rightarrow\ \ln \phi_{Y_n}(t) = n \ln\left[1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right)\right] \approx n \ln\left[1 - \frac{t^2}{2n}\right].$$

10 CLT

$$\ln \phi_{Y_n}(t) \approx n \ln\left[1 - \frac{t^2}{2n}\right] \approx -\frac{t^2}{2} \quad\Rightarrow\quad \phi_{Y_n}(t) \to e^{-t^2/2} \text{ as } n \to \infty$$
(using $\ln(1 + x) \approx x$ for small $x$), which is the cf of a standard normal random variable.

Point of the Central Limit Theorem: The distribution function of $\bar X_n$ for large $n$ can be approximated by that of a normal with mean $\mu$ and variance $\sigma^2/n$.
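
The theorem's conclusion can be illustrated numerically. This sketch assumes Uniform(0,1) draws ($\mu = 0.5$, $\sigma^2 = 1/12$), standardizes the sample means, and checks that the resulting empirical CDF is close to $\Phi$ at two points:

```python
import random
import math

random.seed(4)
n, reps = 400, 2000
sigma = math.sqrt(1 / 12)
ys = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    ys.append(math.sqrt(n) * (xbar - 0.5) / sigma)   # Y_n, approx N(0,1)

frac_below_0 = sum(y < 0 for y in ys) / reps          # Phi(0) = 0.5
frac_below_1 = sum(y < 1 for y in ys) / reps          # Phi(1) ~ 0.8413
```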

11 Notes:

1. Identical means and variances can be dropped straightforwardly, though we need some restrictions on the variance sequence. In this case, we work with
$$Y_n = \frac{\sum_{i=1}^n (X_i - \mu_i)}{\left(\sum_{i=1}^n \sigma_i^2\right)^{1/2}}.$$
2. Versions of the Central Limit Theorem for random vectors are also available: just apply the univariate theorems to all linear combinations.
3. The basic requirement is that each term in the sum should make a negligible contribution.

12 Examples:

1. Estimation of the mean from a sample of normal random variables: In this case we estimate $\mu$ by $\bar X$, and the asymptotic approximation for the distribution of $\bar X$ (or $\bar X - \mu$) is exact.
2. Consider $n^{1/2}(\hat\beta - \beta)$, where $\hat\beta$ is the LS estimator:
$$n^{1/2}(\hat\beta - \beta) = n^{1/2}(X'X)^{-1}X'\varepsilon = [X'X/n]^{-1}\, n^{1/2}[X'\varepsilon/n],$$
where $[X'X/n]$ is the sample second moment matrix of the regressors. $[X'X/n]$ is $O(1)$, or maybe $O_p(1)$, depending on assumptions; its $\lim$ or $\operatorname{plim}$ is $Q$, a $K \times K$ p.d. matrix.

13 Regression Example (cont'd)

What about $n^{1/2}[X'\varepsilon/n] = \sqrt{n}\,(1/n)\sum x_i'\varepsilon_i$? This is $\sqrt{n}$ times a sample mean of the $x_i'\varepsilon_i$, which have $E x_i'\varepsilon_i = 0$ and $V x_i'\varepsilon_i = \sigma^2 Q$ (discuss). Under the assumption that the regressors are well-behaved (i.e., the contribution of any particular observation to $[X'\varepsilon/n]$ is negligible), we can apply a Central Limit Theorem and conclude that
$$n^{1/2}(\hat\beta - \beta) = [X'X/n]^{-1}\, n^{1/2}[X'\varepsilon/n] \xrightarrow{D} N(0, \sigma^2 Q^{-1}).$$
Consistent with previous results?
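
A simulation sketch of this result for the simplest case, a single regressor with no intercept (an assumed design, not the lecture's): with $x_i \sim N(0,1)$ and $\varepsilon_i \sim N(0,1)$ we have $Q = E[x_i^2] = 1$ and $\sigma^2 = 1$, so $n^{1/2}(\hat\beta - \beta)$ should be approximately $N(0, 1)$.

```python
import random
import math
import statistics

random.seed(5)
beta, n, reps = 2.0, 500, 1000
draws = []
for _ in range(reps):
    xs = [random.gauss(0, 1) for _ in range(n)]
    es = [random.gauss(0, 1) for _ in range(n)]
    ys = [beta * x + e for x, e in zip(xs, es)]
    # LS estimator for one regressor, no intercept: sum(xy)/sum(x^2)
    bhat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    draws.append(math.sqrt(n) * (bhat - beta))

mean_hat = statistics.mean(draws)   # should be near 0
sd_hat = statistics.stdev(draws)    # should be near sigma / sqrt(Q) = 1
```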

14 The Delta Method

The delta method is a "trick" for approximating the limiting distribution of a function of a statistic whose limiting distribution is known. From the CMT we know that $X_n \xrightarrow{d} X \Rightarrow g(X_n) \xrightarrow{d} g(X)$. Suppose $X_n = \sqrt{n}(\bar X_n - \mu)$. What can we say about $\sqrt{n}(g(\bar X_n) - g(\mu))$? Expand:
$$g(\bar X_n) \approx g(\mu) + g'(\mu)(\bar X_n - \mu),$$
so
$$\sqrt{n}(g(\bar X_n) - g(\mu)) \approx g'(\mu)\,\sqrt{n}(\bar X_n - \mu).$$
Hence if $\sqrt{n}(\bar X_n - \mu) \to N(0, \sigma^2)$, then $\sqrt{n}(g(\bar X_n) - g(\mu)) \to N(0, g'(\mu)^2 \sigma^2)$.
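
A worked sketch of the delta method with an assumed example: $g(x) = x^2$ applied to means of Uniform(0,1) draws, so $\mu = 0.5$, $\sigma^2 = 1/12$, $g'(\mu) = 2\mu = 1$, and $\sqrt{n}(\bar X_n^2 - 0.25)$ should be approximately $N(0, g'(\mu)^2\sigma^2) = N(0, 1/12)$:

```python
import random
import math
import statistics

random.seed(6)
n, reps = 1000, 1500
vals = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    # sqrt(n) * (g(xbar) - g(mu)) with g(x) = x^2, g(0.5) = 0.25
    vals.append(math.sqrt(n) * (xbar ** 2 - 0.25))

sd_hat = statistics.stdev(vals)
sd_theory = abs(2 * 0.5) * math.sqrt(1 / 12)   # |g'(mu)| * sigma ~ 0.2887
```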

15 Multivariate CLT and Delta Suppose fx i g are K-variate rv s, EX = ; V X = Then we consider Y = t 0 X t 0 ; univariate with EY = 0; V Y = t 0 t and apply our CLT to conclude p n(xn )! N(0; ) For the Delta method, suppose g : R K suppose p n(x n )! N(0; )! R m and Write g(x n ) g() + g 0 ()(X n ) where g 0 () is the mxk matrix of derivatives and conclude p n(g(xn ) g())! N(0; g 0 ()(g 0 ()) T )