A Martingale Central Limit Theorem

Sunder Sethuraman

We present a proof of a martingale central limit theorem (Theorem 2) due to McLeish (1974). Then, an application to Markov chains is given.

Lemma 1. For $n \geq 1$, let $U_n$, $T_n$ be random variables such that

1. $U_n \to a$ in probability.
2. $\{T_n\}$ is uniformly integrable.
3. $\{T_n U_n\}$ is uniformly integrable.
4. $E(T_n) \to 1$.

Then $E(T_n U_n) \to a$.

Proof. Write $T_n U_n = T_n(U_n - a) + a T_n$. As $E[T_n] \to 1$, we need only show that $E[T_n(U_n - a)] \to 0$ to finish. Since $\{T_n\}$ is uniformly integrable, it is bounded in probability, and so $T_n(U_n - a) \to 0$ in probability. Also, both $T_n U_n$ and $a T_n$ are uniformly integrable, and so the combination $T_n(U_n - a)$ is uniformly integrable. Hence, $E[T_n(U_n - a)] \to 0$.

A key observation for the following is the expansion

$$\exp(ix) = (1 + ix)\exp\Big(-\frac{x^2}{2} + r(x)\Big),$$

where $|r(x)| \leq |x|^3$ for real $x$.

Theorem 1. Let $\{X_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ be a triangular array of (any) random variables. Let $S_n = \sum_{k \leq k_n} X_{nk}$, $T_n = \prod_{k \leq k_n}(1 + itX_{nk})$, and $U_n = \exp\big(-\frac{t^2}{2}\sum_{k \leq k_n} X_{nk}^2 + \sum_{k \leq k_n} r(tX_{nk})\big)$. Suppose that

1. $E(T_n) \to 1$.
2. $\{T_n\}$ is uniformly integrable.
3. $\sum_{k \leq k_n} X_{nk}^2 \to 1$ in probability.
4. $\max_{k \leq k_n} |X_{nk}| \to 0$ in probability.

Then $E(\exp(itS_n)) \to \exp(-\frac{t^2}{2})$.

Proof. Let $t$ be fixed. From conditions (3) and (4), bound

$$\Big|\sum_{k \leq k_n} r(tX_{nk})\Big| \leq |t|^3 \sum_{k \leq k_n} |X_{nk}|^3 \leq |t|^3 \max_{k \leq k_n}|X_{nk}| \sum_{k \leq k_n} X_{nk}^2 = o_P(1).$$

Then,

$$U_n = \exp\Big(-\frac{t^2}{2}\sum_{k \leq k_n} X_{nk}^2 + \sum_{k \leq k_n} r(tX_{nk})\Big) = \exp\Big(-\frac{t^2}{2} + o_P(1)\Big).$$

This verifies condition (1) of Lemma 1 with $a = \exp(-\frac{t^2}{2})$. Conditions (2) and (4) of Lemma 1 are our present conditions (2) and (1), respectively. Condition (3) of Lemma 1 follows from the fact

$$|T_n U_n| = |\exp(itS_n)| = \Big|\exp\Big(it \sum_{k \leq k_n} X_{nk}\Big)\Big| = 1.$$

Thus $E(\exp(itS_n)) = E(T_n U_n) \to \exp(-t^2/2)$.

Theorem 2. Let $\{X_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ be a martingale difference array with respect to nested $\sigma$-fields $\{\mathcal{F}_{nk} : 0 \leq k \leq k_n,\ n \geq 1\}$, $\mathcal{F}_{n(k-1)} \subset \mathcal{F}_{nk}$ for $k \geq 1$, such that

1. $E(\max_{k \leq k_n} |X_{nk}|) \to 0$.
2. $\sum_{k \leq k_n} X_{nk}^2 \to 1$ in probability.

Then $S_n = \sum_{k \leq k_n} X_{nk} \to N(0,1)$ in distribution.
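Before proving Theorem 2, a numerical aside (not part of the original note): the expansion $\exp(ix) = (1+ix)\exp(-x^2/2 + r(x))$ underlying Theorem 1, and the cubic bound on the remainder, can be checked directly. Here $r$ is simply solved for from the expansion using the principal branch of the complex logarithm, and the grid over $(0, 1]$ is an arbitrary choice:

```python
import cmath

def r(x):
    # Remainder defined by exp(ix) = (1 + ix) * exp(-x**2/2 + r(x)),
    # i.e. r(x) = ix - Log(1 + ix) + x**2/2 (principal branch).
    return 1j * x - cmath.log(1 + 1j * x) + x ** 2 / 2

for k in range(1, 101):
    x = k / 100.0  # grid over (0, 1]
    lhs = cmath.exp(1j * x)
    rhs = (1 + 1j * x) * cmath.exp(-x ** 2 / 2 + r(x))
    assert abs(lhs - rhs) < 1e-12      # the expansion is an identity
    assert abs(r(x)) <= abs(x) ** 3    # cubic bound on the remainder
```

The bound $|r(x)| \leq |x|^3$ is what makes $\sum_k r(tX_{nk})$ negligible under conditions (3) and (4) of Theorem 1.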
Proof. Define $Z_{n1} = X_{n1}$, and $Z_{nk} = X_{nk}\, I\big(\sum_{r \leq k-1} X_{nr}^2 \leq 2\big)$ for $2 \leq k \leq k_n$ and $n \geq 1$. Then $\{Z_{nk} : 1 \leq k \leq k_n,\ n \geq 1\}$ is also a martingale difference array with respect to $\{\mathcal{F}_{nk}\}$ because

$$E(Z_{nk} \mid \mathcal{F}_{n(k-1)}) = I\Big(\sum_{r \leq k-1} X_{nr}^2 \leq 2\Big)\, E(X_{nk} \mid \mathcal{F}_{n(k-1)}) = 0.$$

Let now $J = \inf\{1 \leq k \leq k_n : \sum_{r \leq k} X_{nr}^2 > 2\} \wedge k_n$. Then,

$$P(X_{nr} \neq Z_{nr} \text{ for some } 1 \leq r \leq k_n) = P(J < k_n) \leq P\Big(\sum_{r \leq k_n} X_{nr}^2 > 2\Big) \to 0 \qquad (1)$$

from the second assumption. It is also easy to see that the variables $\{Z_{nk}\}$ themselves satisfy the conditions of Theorem 2. We now show that $\{Z_{nk}\}$ satisfies the conditions of Theorem 1. Let $T_n = \prod_{k \leq k_n}(1 + itZ_{nk})$. Since $|1 + itx|^2 = 1 + t^2x^2 \leq \exp(t^2x^2)$, we have

$$|T_n| = \prod_{r \leq J}(1 + t^2 X_{nr}^2)^{1/2} = \prod_{r \leq J-1}(1 + t^2 X_{nr}^2)^{1/2}\,(1 + t^2 X_{nJ}^2)^{1/2} \leq \exp\Big(\frac{t^2}{2}\sum_{r \leq J-1} X_{nr}^2\Big)(1 + |t|\,|X_{nJ}|) \leq \exp(t^2)\big(1 + |t| \max_{k \leq k_n}|X_{nk}|\big),$$

where the last step uses $\sum_{r \leq J-1} X_{nr}^2 \leq 2$ from the definition of $J$. Since $E(\max_{k \leq k_n}|X_{nk}|) \to 0$, $\{\max_{k \leq k_n}|X_{nk}|\}$ is uniformly integrable, and therefore $\{T_n\}$ is uniformly integrable. Also, as $\{Z_{nr}\}$ is a martingale difference array, we have by successive conditioning that $E(T_n) = 1$. Hence, conditions (1), (2) and (4) of Theorem 1 for $\{Z_{nk}\}$ are met (condition (4) holds since $\max_k |Z_{nk}| \leq \max_k |X_{nk}| \to 0$ in probability). Clearly condition (3) of Theorem 1 also holds for the array $\{Z_{nk}\}$ in view of (1). Thus all the conditions of Theorem 1 hold, and we conclude $\sum_{r \leq k_n} Z_{nr} \to N(0,1)$. But, by (1), we have then that $\sum_{r \leq k_n} X_{nr} \to N(0,1)$ also.

For some applications, the following corollary of Theorem 2 is convenient.

Theorem 3. Let $\{Z_k : k \geq 1\}$ be a stationary ergodic sequence such that $\sigma^2 = E[Z_1^2] < \infty$, and $E[Z_{n+1} \mid \mathcal{F}_n] = 0$ where $\mathcal{F}_n = \sigma\{Z_k : k \leq n\}$. Then, we have

$$Y_n = \frac{1}{\sqrt{n}}[Z_1 + \cdots + Z_n] \to N(0, \sigma^2).$$
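Theorem 2 can be illustrated by simulation. A minimal sketch with a made-up martingale difference array: $X_{nk} = s_{k-1}\varepsilon_k/\sqrt{n}$, where the $\varepsilon_k$ are fair signs and $s_{k-1} = \pm 1$ depends only on the past, so that $\sum_k X_{nk}^2 = 1$ exactly and $\max_k |X_{nk}| = 1/\sqrt{n}$, and both hypotheses of the theorem hold trivially:

```python
import numpy as np

rng = np.random.default_rng(1)

def mds_sum(n):
    # Martingale difference array X_{nk} = s_{k-1} * eps_k / sqrt(n):
    # eps_k are iid fair signs and s_{k-1} is a +/-1 function of the
    # past, so E[X_{nk} | past] = 0, while sum_k X_{nk}^2 = 1 exactly
    # and max_k |X_{nk}| = 1/sqrt(n).
    eps = rng.choice([-1.0, 1.0], size=n)
    s, total = 1.0, 0.0
    for e in eps:
        total += s * e / np.sqrt(n)
        s = 1.0 if total >= 0 else -1.0  # depends only on the past
    return total

samples = np.array([mds_sum(400) for _ in range(2000)])
print(samples.mean(), samples.var())  # close to 0 and 1
```

The empirical mean and variance of $S_n$ over many replications are close to $0$ and $1$, consistent with the $N(0,1)$ limit.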
Proof. Let $X_{nk} = Z_k/(\sigma\sqrt{n})$ and $\mathcal{F}_{nk} = \mathcal{F}_k$ for $1 \leq k \leq n$ and $n \geq 1$. Then, $\{X_{nk}\}$ is a martingale difference array with respect to $\{\mathcal{F}_{nk}\}$, and it suffices to show $\sum_{k \leq n} X_{nk} = Y_n/\sigma \to N(0,1)$.

We now argue that condition (1) of Theorem 2 is satisfied. It is an exercise to show that for a sequence of identically distributed (not necessarily independent) random variables $\{\eta_k\}$ with finite mean,

$$\lim_{n \to \infty} \frac{1}{n} E\Big[\max_{k \leq n} |\eta_k|\Big] = 0.$$

Given this claim, by stationarity of $\{Z_k\}$ and $E[Z_1^2] < \infty$, and taking $\eta_k = Z_k^2$, we get $E[\max_{k \leq n} Z_k^2]/n \to 0$; hence, by Jensen's inequality, $E(\max_{k \leq n} |Z_k|/(\sigma\sqrt{n})) \leq (E[\max_{k \leq n} Z_k^2]/(\sigma^2 n))^{1/2} \to 0$ follows. Finally, ergodicity of the sequence gives $\sum_{k \leq n} X_{nk}^2 = \frac{1}{\sigma^2 n}\sum_{k \leq n} Z_k^2 \to 1$ a.s., which verifies condition (2) of Theorem 2, and so Theorem 3 follows from Theorem 2.

The exercise is proved as follows. Truncate and write

$$\eta_k = \eta_k I(|\eta_k| \leq M) + \eta_k I(|\eta_k| > M) = A_k + B_k,$$

so that

$$\max_{k \leq n} |\eta_k| \leq \max_{k \leq n} |A_k| + \max_{k \leq n} |B_k|.$$

Of course, $(1/n)E[\max_{k \leq n}|A_k|] \leq M/n \to 0$. Note, using $E[Y] = \int_0^\infty P(Y \geq x)\,dx$ for nonnegative $Y$,

$$E\Big[\max_{k \leq n}|B_k|\Big] = \int_0^\infty P\Big(\max_{k \leq n}|B_k| \geq x\Big)\,dx = \int_0^\infty P\Big(\bigcup_{k=1}^n \{|B_k| \geq x\}\Big)\,dx \leq \int_0^\infty \sum_{k=1}^n P(|B_k| \geq x)\,dx = n\int_0^\infty P(|B_1| \geq x)\,dx = n\,E[|\eta_1|;\ |\eta_1| > M].$$

Then, $\limsup_{n \to \infty}\,(1/n)E[\max_{k \leq n}|\eta_k|] \leq E[|\eta_1|;\ |\eta_1| > M]$, which, given the finite mean of $\eta_1$, can be made arbitrarily small by taking $M$ large.

We now present an application of Theorem 3 to finite state Markov chains in discrete time.

Application. Let $\Sigma$ be a finite state space with $r$ letters, $|\Sigma| = r$. Let $\{X_i : i \geq 0\}$ be an ergodic Markov chain on $\Sigma$ with transition matrix $P$ starting under the stationary measure $\pi$.
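The exercise can also be seen numerically. A sketch, assuming Pareto-distributed variables with shape parameter $2.5$ (an illustrative choice: the mean is finite, as the exercise requires, but the tails are heavy):

```python
import numpy as np

rng = np.random.default_rng(2)

def scaled_max(n, reps=200):
    # Monte Carlo estimate of (1/n) E[ max_{k <= n} |eta_k| ] where the
    # eta_k are iid Pareto(2.5) on [1, inf): finite mean, heavy tails.
    eta = rng.pareto(2.5, size=(reps, n)) + 1.0
    return (eta.max(axis=1) / n).mean()

vals = [scaled_max(n) for n in (10**2, 10**3, 10**4)]
print(vals)  # decreasing toward 0
```

For tail exponent $\alpha$ the decay rate here is roughly $n^{1/\alpha - 1}$; the exercise itself only asserts convergence to $0$ whenever the mean is finite, with no rate.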
Let also $f : \Sigma \to \mathbb{R}$ be a mean-zero function with respect to $\pi$, $E_\pi[f] = 0$. Consider now the sum $S_n = \sum_{i=1}^n f(X_i)$. The aim of this application is to show that $S_n/\sqrt{n}$ converges in distribution to $N(0, \sigma^2)$ for some $\sigma^2 < \infty$ with the help of Theorem 3.

A preliminary lemma will be useful. Let $I_r$ be the $r \times r$ identity matrix. Also note that $f$ can be represented as a vector, $f = \langle f(i) : i \in \Sigma \rangle \in \mathbb{R}^r$.

Lemma 2. There is a function $u : \Sigma \to \mathbb{R}$ such that $f = (I_r - P)u$.

Proof. Write $\mathbb{R}^r = \mathrm{Null}(I - P^*) \oplus \mathrm{Range}(I - P)$, where $P^*$ is the adjoint of $P$. Then, as $\pi[I - P] = 0$, and $\pi$ is unique, we have $\mathrm{Null}(I - P^*) = \{c\pi : c \in \mathbb{R}\}$, a one-parameter space. However, since $E_\pi[f] = 0$ and so $f \perp \pi$, we must have $f \in \mathrm{Range}(I - P)$.

We now approximate $S_n/\sqrt{n}$ by a martingale. For $n \geq 1$, define

$$M_n = \sum_{i=1}^n [u(X_i) - (Pu)(X_{i-1})] \quad \text{and} \quad \mathcal{F}_n = \sigma\{X_i : i \leq n\}.$$

From the Markov property, the conditional expectation $E[u(X_i) \mid \mathcal{F}_{i-1}] = (Pu)(X_{i-1})$. Therefore, $\{M_n\}$ is a martingale sequence with respect to $\{\mathcal{F}_n\}$ with stationary ergodic $L^2(\pi)$ differences. Write

$$\frac{1}{\sqrt{n}}\sum_{i=1}^n f(X_i) = \frac{1}{\sqrt{n}}\Big[\sum_{i=1}^n u(X_i) - \sum_{i=1}^n (Pu)(X_i)\Big] = \frac{M_n}{\sqrt{n}} + \frac{(Pu)(X_0) - (Pu)(X_n)}{\sqrt{n}}.$$

As $u$ is bounded, the error in the martingale approximation vanishes: $[(Pu)(X_0) - (Pu)(X_n)]/\sqrt{n} \to 0$. We now compute the variance $\sigma^2$:

$$\lim_{n \to \infty} \frac{1}{n} E_\pi[M_n^2] = E_\pi[(u(X_1) - (Pu)(X_0))^2] = E_\pi[u^2] - E_\pi[(Pu)^2] = \sigma^2.$$
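The application can be made concrete. A sketch with a made-up $3$-state chain and observable (all numbers below are illustrative, not from the text): solve the Poisson equation $f = (I - P)u$ numerically, form $\sigma^2 = E_\pi[u^2] - E_\pi[(Pu)^2]$, and compare against the empirical variance of $S_n/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary ergodic chain on 3 states (illustrative only).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

# Stationary measure pi: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

# A mean-zero observable: E_pi[f] = 0.
f = np.array([1.0, -1.0, 0.5])
f = f - (pi @ f)

# Poisson equation f = (I - P) u.  Since f is centered under pi, it
# lies in Range(I - P) (Lemma 2), so the system is consistent; u is
# unique only up to additive constants.
u, *_ = np.linalg.lstsq(np.eye(3) - P, f, rcond=None)
Pu = P @ u
sigma2 = pi @ u**2 - pi @ Pu**2  # E_pi[u^2] - E_pi[(Pu)^2]

# Monte Carlo: Var(S_n / sqrt(n)) should be close to sigma^2.
def normalized_sum(n=200):
    x = rng.choice(3, p=pi)       # start from stationarity
    s = 0.0
    for _ in range(n):
        x = rng.choice(3, p=P[x])
        s += f[x]
    return s / np.sqrt(n)

samples = np.array([normalized_sum() for _ in range(1000)])
print(sigma2, samples.var())
```

Since $u$ is determined only up to an additive constant and $E_\pi[Pu] = E_\pi[u]$, the value of $\sigma^2$ does not depend on which solution `lstsq` returns.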
As long as $f$ is non-constant, $u$ is non-constant and $\sigma^2 > 0$. Also, as $u$ is bounded, $\sigma^2 < \infty$. Hence, by Theorem 3, we have $S_n/\sqrt{n} \to N(0, \sigma^2)$.

I would like to thank at this point T. Kurtz for pointing out a simplification in Theorem 2, and M. Balazs, G. Giacomin and J. Sethuraman for helpful discussions.

References.

1. McLeish, D. L. (1974) Dependent central limit theorems and invariance principles. Ann. Probab. 2, 620-628.
2. Hall, P. and Heyde, C. C. (1980) Martingale Limit Theory and Its Application. Academic Press, New York.
3. Helland, I. S. (1982) Central limit theorems for martingales with discrete or continuous time. Scand. J. Statist. 9, 79-94.