A Martingale Central Limit Theorem


Sunder Sethuraman

We present a proof of a martingale central limit theorem (Theorem 2) due to McLeish (1974). Then, an application to Markov chains is given.

Lemma 1. For $n \geq 1$, let $U_n$, $T_n$ be random variables such that
1. $U_n \to a$ in probability,
2. $\{T_n\}$ is uniformly integrable,
3. $\{T_n U_n\}$ is uniformly integrable,
4. $E(T_n) \to 1$.
Then $E(T_n U_n) \to a$.

Proof. Write $T_n U_n = T_n(U_n - a) + aT_n$. As $E[T_n] \to 1$, we need only show that $E[T_n(U_n - a)] \to 0$ to finish. Since $\{T_n\}$ is uniformly integrable, the sequence $\{T_n\}$ is tight, and so $T_n(U_n - a) \to 0$ in probability. Also, both $T_n U_n$ and $aT_n$ are uniformly integrable, and so the combination $T_n(U_n - a)$ is uniformly integrable. Hence, $E[T_n(U_n - a)] \to 0$.

A key observation for the following is the expansion
\[ \exp(ix) = (1 + ix)\exp\Big(-\frac{x^2}{2} + r(x)\Big), \]
where $|r(x)| \leq |x|^3$ for all real $x$.
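The remainder $r(x)$ is explicit: solving the expansion for $r$ gives $r(x) = ix + x^2/2 - \log(1 + ix)$, with the principal branch of the complex logarithm. As a quick numerical sanity check (a sketch added here, not part of the original notes), the following Python snippet confirms the bound $|r(x)| \leq |x|^3$ on a grid; near $x = 0$ the ratio approaches $1/3$, consistent with $r(x) = ix^3/3 + O(x^4)$.

```python
# Numerical sanity check of exp(ix) = (1 + ix) exp(-x^2/2 + r(x)),
# with r(x) = ix + x^2/2 - Log(1 + ix) and the claim |r(x)| <= |x|^3.
import numpy as np

x = np.linspace(-10, 10, 100001)
x = x[x != 0]                                  # exclude the trivial point x = 0
r = 1j * x + x**2 / 2 - np.log(1 + 1j * x)     # np.log is the principal complex log
ratio = np.abs(r) / np.abs(x) ** 3
print("max |r(x)| / |x|^3 on the grid:", ratio.max())   # stays below 1
```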

Theorem 1. Let $\{X_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ be a triangular array of (any) random variables. Let $S_n = \sum_{k \leq k_n} X_{nk}$, $T_n = \prod_{k \leq k_n}(1 + itX_{nk})$, and $U_n = \exp\big(-\frac{t^2}{2}\sum_{k \leq k_n} X_{nk}^2 + \sum_{k \leq k_n} r(tX_{nk})\big)$. Suppose that
1. $E(T_n) \to 1$,
2. $\{T_n\}$ is uniformly integrable,
3. $\sum_{k \leq k_n} X_{nk}^2 \to 1$ in probability,
4. $\max_{k \leq k_n} |X_{nk}| \to 0$ in probability.
Then $E(\exp(itS_n)) \to \exp(-\frac{t^2}{2})$.

Proof. Let $t$ be fixed. From conditions (3) and (4), bound
\[ \Big|\sum_{k \leq k_n} r(tX_{nk})\Big| \leq |t|^3 \sum_{k \leq k_n} |X_{nk}|^3 \leq |t|^3 \max_{k \leq k_n}|X_{nk}| \sum_{k \leq k_n} X_{nk}^2 = o_P(1). \]
Then,
\[ U_n = \exp\Big(-\frac{t^2}{2}\sum_{k \leq k_n} X_{nk}^2 + \sum_{k \leq k_n} r(tX_{nk})\Big) = \exp\Big(-\frac{t^2}{2} + o_P(1)\Big). \]
This verifies condition (1) of Lemma 1 with $a = \exp(-t^2/2)$. Conditions (2) and (4) of Lemma 1 are our present conditions (2) and (1), respectively. Condition (3) of Lemma 1 follows from the fact that
\[ |T_n U_n| = |\exp(itS_n)| = \Big|\exp\Big(it \sum_{k \leq k_n} X_{nk}\Big)\Big| = 1. \]
Thus $E(\exp(itS_n)) = E(T_n U_n) \to \exp(-t^2/2)$.
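Theorem 1 already covers dependent arrays. As an illustration (this array is my own construction, not from the notes), let $\epsilon_0, \epsilon_1, \dots$ be i.i.d. symmetric signs and put $Z_k = \sqrt{2}\,\epsilon_k I(\epsilon_{k-1} = 1)$, $X_{nk} = Z_k/\sqrt{n}$, $k_n = n$. This is a genuinely dependent martingale difference array for which all four conditions hold: $E(T_n) = 1$ by successive conditioning, $|T_n| \leq (1 + 2t^2/n)^{n/2} \leq e^{t^2}$ so $\{T_n\}$ is uniformly integrable, $\sum_k X_{nk}^2 = (2/n)\#\{k : \epsilon_{k-1} = 1\} \to 1$ by the law of large numbers, and $\max_k |X_{nk}| \leq \sqrt{2/n} \to 0$. A minimal Monte Carlo sketch of the conclusion:

```python
# Monte Carlo sketch (illustration only) of Theorem 1's conclusion
# E[exp(itS_n)] -> exp(-t^2/2) for the dependent martingale difference
# array X_{nk} = Z_k / sqrt(n), Z_k = sqrt(2) * eps_k * 1{eps_{k-1} = 1}.
import numpy as np

rng = np.random.default_rng(0)
n, reps, t = 1000, 4000, 1.7

eps = rng.choice([-1.0, 1.0], size=(reps, n + 1))
Z = np.sqrt(2.0) * eps[:, 1:] * (eps[:, :-1] == 1.0)
S = Z.sum(axis=1) / np.sqrt(n)          # S_n = sum_k X_{nk}, one value per path

phi_hat = np.exp(1j * t * S).mean()     # Monte Carlo estimate of E[exp(it S_n)]
print("Monte Carlo E[exp(itS_n)]:", phi_hat)
print("target exp(-t^2/2)       :", np.exp(-t**2 / 2))
```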

Theorem 2. Let $\{X_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ be a martingale difference array with respect to nested $\sigma$-fields $\{\mathcal{F}_{nk} : 0 \leq k \leq k_n,\ n \geq 1\}$, $\mathcal{F}_{n(k-1)} \subset \mathcal{F}_{nk}$ for $k \geq 1$, such that
1. $E(\max_{k \leq k_n} |X_{nk}|) \to 0$,
2. $\sum_{k \leq k_n} X_{nk}^2 \to 1$ in probability.
Then $S_n = \sum_{k \leq k_n} X_{nk} \to N(0,1)$ in distribution.

Proof. Define $Z_{n1} = X_{n1}$, and $Z_{nk} = X_{nk}\, I\big(\sum_{r \leq k-1} X_{nr}^2 \leq 2\big)$ for $2 \leq k \leq k_n$ and $n \geq 1$. Then $\{Z_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ is also a martingale difference array with respect to $\{\mathcal{F}_{nk}\}$ because
\[ E(Z_{nk} \mid \mathcal{F}_{n(k-1)}) = I\Big(\sum_{r \leq k-1} X_{nr}^2 \leq 2\Big)\, E(X_{nk} \mid \mathcal{F}_{n(k-1)}) = 0. \]
Let now $J = \inf\{l : \sum_{r \leq l} X_{nr}^2 > 2\} \wedge k_n$. Then,
\[ P(X_{nr} \neq Z_{nr} \text{ for some } r \leq k_n) = P(J < k_n) \leq P\Big(\sum_{r \leq k_n} X_{nr}^2 > 2\Big) \to 0 \tag{1} \]
from condition (2). It is also easy to see that the variables $\{Z_{nk}\}$ satisfy the conditions of Theorem 2. We now show that $\{Z_{nk}\}$ satisfies the conditions of Theorem 1.

Let $T_n = \prod_{k \leq k_n}(1 + itZ_{nk})$. Since $|1 + itx|^2 = 1 + t^2x^2 \leq \exp(t^2x^2)$, and $\sum_{r < J} X_{nr}^2 \leq 2$ by the definition of $J$, we have
\[ |T_n| = \prod_{r < J}(1 + t^2 X_{nr}^2)^{1/2} \cdot (1 + t^2 X_{nJ}^2)^{1/2} \leq \exp\Big(\frac{t^2}{2} \sum_{r < J} X_{nr}^2\Big)\big(1 + |t|\,|X_{nJ}|\big) \leq \exp(t^2)\big(1 + |t| \max_{k \leq k_n} |X_{nk}|\big). \]
Since $E(\max_k |X_{nk}|) \to 0$, $\{\max_k |X_{nk}|\}$ is uniformly integrable, and therefore $\{T_n\}$ is uniformly integrable. Also, as $\{Z_{nr}\}$ is a martingale difference array, we have by successive conditioning that $E(T_n) = 1$. Hence, conditions (1), (2) and (4) of Theorem 1 for $\{Z_{nk}\}$ are met. Clearly condition (3) of Theorem 1 also holds for the array $\{Z_{nk}\}$ in view of (1). Thus all the conditions of Theorem 1 hold, and we conclude $\sum_{r \leq k_n} Z_{nr} \Rightarrow N(0,1)$. But, by (1), we have then that $\sum_{r \leq k_n} X_{nr} \Rightarrow N(0,1)$ also.
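The array from the sketch after Theorem 1 also gives a concrete check of Theorem 2 itself: its conditions hold as noted there (in particular $E(\max_k |X_{nk}|) \leq \sqrt{2/n} \to 0$), so $S_n \Rightarrow N(0,1)$ even though the conditional variance of each increment is random. A short simulation sketch (again an added illustration) comparing the empirical law of $S_n$ with the standard normal:

```python
# Simulation sketch (illustration only) of Theorem 2 for the array
# X_{nk} = sqrt(2) * eps_k * 1{eps_{k-1} = 1} / sqrt(n): compare the
# empirical distribution of S_n with the standard normal CDF.
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 4000

eps = rng.choice([-1.0, 1.0], size=(reps, n + 1))
Z = sqrt(2.0) * eps[:, 1:] * (eps[:, :-1] == 1.0)
S = Z.sum(axis=1) / sqrt(n)

print("sample mean and variance of S_n:", S.mean(), S.var())
for x in (-1.0, 0.0, 1.0, 2.0):
    Phi = 0.5 * (1.0 + erf(x / sqrt(2.0)))      # standard normal CDF at x
    print(f"P(S_n <= {x:+.1f}): empirical {np.mean(S <= x):.3f}, normal {Phi:.3f}")
```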

For some applications, the following corollary of Theorem 2 is convenient.

Theorem 3. Let $\{Z_l : l \geq 1\}$ be a stationary ergodic sequence such that $\sigma^2 = E[Z_1^2] < \infty$ and $E[Z_{n+1} \mid \mathcal{F}_n] = 0$, where $\mathcal{F}_n = \sigma\{Z_l : l \leq n\}$. Then, we have
\[ Y_n = \frac{1}{\sqrt{n}}\,[Z_1 + \cdots + Z_n] \Rightarrow N(0, \sigma^2). \]

Proof. We may assume $\sigma > 0$, the statement being trivial otherwise. Let $X_{nk} = Z_k/(\sigma\sqrt{n})$ and $\mathcal{F}_{nk} = \mathcal{F}_k$ for $1 \leq k \leq n$ and $n \geq 1$. Then, $\{X_{nk}\}$ is a martingale difference array with respect to $\{\mathcal{F}_{nk}\}$.

We now argue that condition (1) of Theorem 2 is satisfied. It is an exercise to show that for a sequence $\{\eta_l\}$ of identically distributed (not necessarily independent) random variables with finite mean,
\[ \lim_{n \to \infty} \frac{1}{n}\, E\Big[\max_{l \leq n} |\eta_l|\Big] = 0. \]
Given this claim, by stationarity of $\{Z_l\}$ and $E[Z_1^2] < \infty$, taking $\eta_l = Z_l^2$,
\[ E\Big(\max_{k \leq n} \frac{|Z_k|}{\sqrt{n}}\Big) \leq \Big(\frac{1}{n}\, E\Big[\max_{k \leq n} Z_k^2\Big]\Big)^{1/2} \to 0 \]
follows. Finally, by the ergodic theorem, $\sum_{k \leq n} X_{nk}^2 = (\sigma^2 n)^{-1} \sum_{k \leq n} Z_k^2 \to 1$ almost surely, which verifies condition (2) of Theorem 2. Theorem 3 thus follows from Theorem 2.

The exercise is proved as follows. Truncate: write
\[ \eta_l = \eta_l\, I[|\eta_l| \leq M] + \eta_l\, I[|\eta_l| > M] = A_l + B_l, \]
so that
\[ \max_{l \leq n} |\eta_l| \leq \max_{l \leq n} |A_l| + \max_{l \leq n} |B_l|. \]
Of course, $(1/n)E[\max_{l \leq n} |A_l|] \leq M/n \to 0$. Note, using $E[Y] = \int_0^\infty P(Y \geq x)\,dx$ for nonnegative $Y$,
\[ E\Big[\max_{l \leq n} |B_l|\Big] = \int_0^\infty P\Big(\max_{l \leq n} |B_l| \geq x\Big)\,dx = \int_0^\infty P\Big(\bigcup_{l=1}^n \{|B_l| \geq x\}\Big)\,dx \leq \sum_{l=1}^n \int_0^\infty P(|B_l| \geq x)\,dx = n\, E[|\eta_1|;\ |\eta_1| > M]. \]
Then,
\[ \limsup_{n \to \infty} \frac{1}{n}\, E\Big[\max_{l \leq n} |\eta_l|\Big] \leq E[|\eta_1|;\ |\eta_1| > M], \]
which, given the finite mean of $\eta_1$, can be made arbitrarily small since $M$ is arbitrary.
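The exercise can also be seen numerically (an added sketch; the Pareto choice is mine): variables with tail index $3/2$ have a finite mean but infinite variance, so $(1/n)E[\max_{l \leq n} |\eta_l|]$ does tend to zero, though only at a polynomial rate, here roughly $n^{-1/3}$:

```python
# Numerical look (illustration only) at the exercise: for identically
# distributed eta_l with finite mean, (1/n) E[max_{l<=n} |eta_l|] -> 0.
# Pareto(3/2) has finite mean, infinite variance: the decay is real but slow.
import numpy as np

rng = np.random.default_rng(2)
reps = 500
for n in (10, 100, 1000, 10000):
    eta = rng.pareto(1.5, size=(reps, n)) + 1.0   # classical Pareto on [1, oo)
    est = eta.max(axis=1).mean() / n              # Monte Carlo (1/n) E[max]
    print(f"n = {n:>5}: (1/n) E[max] ~ {est:.4f}")
```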

We now present an application of Theorem 3 to finite state Markov chains in discrete time.

Application. Let $\Sigma$ be a finite state space with $r$ letters, $|\Sigma| = r$. Let $\{X_i : i \geq 0\}$ be an ergodic Markov chain on $\Sigma$ with transition matrix $P$, starting under the stationary measure $\pi$. Let also $f : \Sigma \to \mathbb{R}$ be a mean-zero function with respect to $\pi$, $E_\pi[f] = 0$. Consider now the sum $S_n = \sum_{i=1}^n f(X_i)$. The aim of this application is to show that $S_n/\sqrt{n}$ converges in distribution to $N(0, \sigma^2)$ for some $\sigma^2 < \infty$ with the help of Theorem 3.

A preliminary lemma will be useful. Let $I_r$ be the $r \times r$ identity matrix. Also note that $f$ can be represented as a vector, $f = (f(i) : i \in \Sigma) \in \mathbb{R}^r$.

Lemma 2. There is a function $u : \Sigma \to \mathbb{R}$ such that $f = (I_r - P)u$.

Proof. Write $\mathbb{R}^r = \mathrm{Null}(I - P^*) \oplus \mathrm{Range}(I - P)$, where $P^*$ is the adjoint of $P$. Then, as $\pi[I - P] = 0$ and $\pi$ is unique, we have $\mathrm{Null}(I - P^*) = \{c\pi : c \in \mathbb{R}\}$, a one-parameter space. However, since $E_\pi[f] = 0$, and so $f \perp \pi$, we must have $f \in \mathrm{Range}(I - P)$.

We now approximate $S_n/\sqrt{n}$ by a martingale. For $n \geq 1$, define
\[ M_n = \sum_{i=1}^n \big[u(X_i) - (Pu)(X_{i-1})\big] \quad \text{and} \quad \mathcal{F}_n = \sigma\{X_i : i \leq n\}. \]
From the Markov property, the conditional expectation $E[u(X_i) \mid \mathcal{F}_{i-1}] = (Pu)(X_{i-1})$. Therefore, $\{M_n\}$ is a martingale sequence with respect to $\{\mathcal{F}_n\}$ with stationary ergodic $L^2(\pi)$ differences. Write
\[ \frac{1}{\sqrt{n}} \sum_{i=1}^n f(X_i) = \frac{1}{\sqrt{n}}\Big[\sum_{i=1}^n u(X_i) - \sum_{i=1}^n (Pu)(X_i)\Big] = \frac{M_n}{\sqrt{n}} + \frac{(Pu)(X_0) - (Pu)(X_n)}{\sqrt{n}}. \]
As $u$ is bounded, the error in the martingale approximation vanishes: $[(Pu)(X_0) - (Pu)(X_n)]/\sqrt{n} \to 0$. We now compute the variance $\sigma^2$, conditioning on $X_0$ in the cross term:
\[ \lim_{n \to \infty} \frac{1}{n} E_\pi[M_n^2] = E_\pi\big[\big(u(X_1) - (Pu)(X_0)\big)^2\big] = E_\pi[u^2] - E_\pi[(Pu)^2] = \sigma^2. \]
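A small Python sketch of the whole application (the $3$-state chain and the function $f$ below are invented for illustration): solve the Poisson equation $f = (I - P)u$ numerically, evaluate $\sigma^2 = E_\pi[u^2] - E_\pi[(Pu)^2]$, and compare with the empirical variance of $S_n/\sqrt{n}$ over simulated paths started from $\pi$.

```python
# Sketch of the application (the 3-state chain and f are made up): solve
# f = (I - P) u, compute sigma^2 = E_pi[u^2] - E_pi[(Pu)^2], and compare
# with the empirical variance of S_n / sqrt(n) along simulated chain paths.
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])

# Stationary measure pi: left eigenvector of P at eigenvalue 1, normalized.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

f = np.array([1.0, -1.0, 2.0])
f = f - (pi @ f)                      # center f so that E_pi[f] = 0

# I - P is singular; since f is orthogonal to pi (Lemma 2), lstsq still
# returns an exact solution u (minimum-norm; u is unique up to an additive
# constant, which does not change sigma^2).
u = np.linalg.lstsq(np.eye(3) - P, f, rcond=None)[0]
Pu = P @ u
sigma2 = pi @ u**2 - pi @ Pu**2
print("sigma^2 from the Poisson equation:", sigma2)

# Monte Carlo: chains started from pi, S_n = f(X_1) + ... + f(X_n).
n, reps = 1000, 500
vals = np.empty(reps)
for rep in range(reps):
    x = rng.choice(3, p=pi)           # X_0 ~ pi
    S = 0.0
    for _ in range(n):
        x = rng.choice(3, p=P[x])     # one transition step
        S += f[x]
    vals[rep] = S / np.sqrt(n)
print("empirical Var(S_n / sqrt(n))     :", vals.var())
```

The two printed numbers should agree up to Monte Carlo error and a boundary correction from the martingale approximation that vanishes as $n$ grows.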

As long as $f$ is non-constant, $u$ is non-constant and $\sigma^2 > 0$. Also, as $u$ is bounded, $\sigma^2 < \infty$. Hence, by Theorem 3, we have $S_n/\sqrt{n} \Rightarrow N(0, \sigma^2)$.

I would like to thank at this point T. Kurtz for pointing out a simplification in Theorem 2, and M. Balazs, G. Giacomin, and J. Sethuraman for helpful discussions.

References

1. McLeish, D. L. (1974). Dependent central limit theorems and invariance principles. Ann. Probab. 2, 620-628.
2. Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, New York.
3. Helland, I. S. (1982). Central limit theorems for martingales with discrete or continuous time. Scand. J. Statist. 9, 79-94.