When is a Markov chain regenerative?


When is a Markov chain regenerative?

Krishna B. Athreya and Vivekananda Roy
Iowa State University, Ames, Iowa, 50011, USA

Key words and phrases: Harris recurrence, Markov chains, Monte Carlo, recurrence, regenerative sequence.

Abstract

A sequence of random variables $\{X_n\}_{n \ge 0}$ is called regenerative if it can be broken up into iid components. The problem addressed in this paper is to determine under what conditions a Markov chain is regenerative. It is shown that an irreducible Markov chain with a countable state space is regenerative for any initial distribution if and only if it is recurrent (null or positive). An extension of this to the general state space case is also discussed.

1 Introduction

A sequence of random variables $\{X_n\}_{n \ge 0}$ is regenerative (see the formal definition below) if it can be broken up into iid components. This makes it possible to apply the laws of large numbers for iid random variables to such sequences. In particular, in Athreya and Roy (2012) this idea was used to develop Monte Carlo methods for estimating integrals of functions with respect to distributions $\pi$ that may or may not be proper (that is, $\pi(S)$ can be $\infty$, where $S$ is the underlying state space). A regenerative sequence $\{X_n\}_{n \ge 0}$, in general, need not be a Markov chain. In this short paper we address the question of when a Markov chain is regenerative.

We now give a formal definition of a regenerative sequence of random variables.

Definition 1. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(S, \mathcal{S})$ a measurable space. A sequence of random variables $\{X_n\}_{n \ge 0}$ defined on $(\Omega, \mathcal{F}, P)$ with values in $(S, \mathcal{S})$ is called regenerative if there exists a sequence of (random) times $0 < T_1 < T_2 < \cdots$ such that the excursions $\{X_n : T_j \le n < T_{j+1},\ \tau_j\}_{j \ge 1}$ are iid, where $\tau_j = T_{j+1} - T_j$ for $j = 1, 2, \ldots$, that is,

$$P(\tau_j = k_j,\ X_{T_j + q} \in A_{q,j},\ 0 \le q < k_j,\ j = 1, \ldots, r) = \prod_{j=1}^{r} P(\tau_1 = k_j,\ X_{T_1 + q} \in A_{q,j},\ 0 \le q < k_j),$$

for all $k_1, \ldots, k_r \in \mathbb{N}$ (the set of positive integers), $A_{q,j} \in \mathcal{S}$, $0 \le q < k_j$, $j = 1, \ldots, r$, and $r \ge 1$. The random times $\{T_n\}_{n \ge 1}$ are called regeneration times.

From the definition of a regenerative sequence and the strong law of large numbers the next remark follows; Athreya and Roy (2012) used it as a basis for constructing Monte Carlo estimators for integrals of functions with respect to improper distributions.

Remark 1. Let $\{X_n\}_{n \ge 0}$ be a regenerative sequence with regeneration times $\{T_n\}_{n \ge 1}$. Let

$$\pi(A) = E\Big(\sum_{j=T_1}^{T_2 - 1} I_A(X_j)\Big) \quad \text{for } A \in \mathcal{S}. \tag{1}$$

(The measure $\pi$ is known as the canonical, or occupation, measure for $\{X_n\}_{n \ge 0}$.) Let $N_n = k$ if $T_k \le n < T_{k+1}$, $k, n = 1, 2, \ldots$. Suppose $f, g : S \to \mathbb{R}$ are two measurable functions such that $\int_S |f|\, d\pi < \infty$, $\int_S |g|\, d\pi < \infty$, and $\int_S g\, d\pi \ne 0$. Then, as $n \to \infty$,

(i) (regeneration estimator) $\hat{\lambda}_n = \dfrac{\sum_{j=0}^{n} f(X_j)}{N_n} \overset{a.s.}{\longrightarrow} \int_S f\, d\pi$, and

(ii) (ratio estimator) $\hat{R}_n = \dfrac{\sum_{j=0}^{n} f(X_j)}{\sum_{j=0}^{n} g(X_j)} \overset{a.s.}{\longrightarrow} \dfrac{\int_S f\, d\pi}{\int_S g\, d\pi}$.

Athreya and Roy (2012) showed that if $\pi$ happens to be improper then the standard time-average estimator, $\sum_{j=0}^{n} f(X_j)/n$, based on a Markov chain $\{X_n\}_{n \ge 0}$ with $\pi$ as its stationary distribution, converges to zero with probability 1, and hence is not appropriate. On the other hand, the regeneration and ratio estimators, namely $\hat{\lambda}_n$ and $(\int_S g\, d\pi)\, \hat{R}_n$ (assuming $\int_S g\, d\pi$ is known), defined above are strongly consistent estimators of $\int_S f\, d\pi$, and they work whether $\pi$ is proper or improper. Regeneration methods have proved useful in a number of other statistical applications. For example, in the case of a proper target, that is, when $\pi(S)$ is finite, the regenerative method has been used to construct consistent estimates of the asymptotic variance of Markov chain
Monte Carlo (MCMC) based estimates (Mykland, Tierney and Yu, 1995), to obtain quantitative bounds on the rate of convergence of Markov chains (see e.g. Roberts and Tweedie, 1999), and to construct bootstrap methods for Markov chains (see e.g. Bertail and Clémençon, 2006; Datta and McCormick, 1993). The regeneration method has also been used in nonparametric estimation for null recurrent time series (Karlsen and Tjøstheim, 2001).

2 The Results

Our first result shows that a necessary and sufficient condition for an irreducible countable state space Markov chain to be regenerative for any initial distribution is that it is recurrent.

Theorem 1. Let $\{X_n\}_{n \ge 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$. Let $\{X_n\}_{n \ge 0}$ be irreducible, that is, for all $s_i, s_j \in S$,

$$P(X_n = s_j \text{ for some } 1 \le n < \infty \mid X_0 = s_i) > 0.$$

Then for any initial distribution of $X_0$, the Markov chain $\{X_n\}_{n \ge 0}$ is regenerative if and only if it is recurrent, that is, for all $s_i \in S$,

$$P(X_n = s_i \text{ for some } 1 \le n < \infty \mid X_0 = s_i) = 1.$$

The next result gives a sufficient condition for a Markov chain $\{X_n\}_{n \ge 0}$ with countable state space to be regenerative.

Theorem 2. Let $\{X_n\}_{n \ge 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$. Suppose there exists a state $s_{i_0}$ such that

$$P(X_n = s_{i_0} \text{ for some } 1 \le n < \infty \mid X_0 = s_{i_0}) = 1, \tag{2}$$

that is, $s_{i_0}$ is a recurrent state. Then $\{X_n\}_{n \ge 0}$ is regenerative for any initial distribution $\nu$ of $X_0$ such that $P(X_n = s_{i_0} \text{ for some } 1 \le n < \infty \mid X_0 \sim \nu) = 1$.

Remark 2. It is known that a necessary and sufficient condition for (2) to hold is that $\sum_{n=1}^{\infty} P(X_n = s_{i_0} \mid X_0 = s_{i_0}) = \infty$ (see e.g. Athreya and Lahiri, 2006).

Next we give a necessary condition for a Markov chain $\{X_n\}_{n \ge 0}$ with countable state space to be regenerative.

Theorem 3. Let $\{X_n\}_{n \ge 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$ and initial distribution $\nu$ of $X_0$. Suppose $\{X_n\}_{n \ge 0}$ is regenerative. Then,
there exists a nonempty set $S_0 \subset S$ such that for all $s_i, s_j \in S_0$,

$$P(X_n = s_j \text{ for some } 1 \le n < \infty \mid X_0 = s_i) = 1.$$

Remark 3. Under the hypothesis of Theorem 3, the Markov chain $\{X_n\}_{n \ge 0}$ is regenerative for any initial distribution of $X_0$ such that $P(X_n \in S_0 \text{ for some } 1 \le n < \infty) = 1$.

Next we discuss the case when $\{X_n\}_{n \ge 0}$ is a Markov chain with a general state space $S$ that need not be countable. Our first result in this case provides a sufficient condition similar to Theorem 2.

Theorem 4. Let $\{X_n\}_{n \ge 0}$ be a Markov chain with state space $(S, \mathcal{S})$ and transition probability function $P(\cdot, \cdot)$. Suppose there exists a singleton $\Delta \in S$ with

$$P(X_n = \Delta \text{ for some } 1 \le n < \infty \mid X_0 = \Delta) = 1,$$

i.e., $\Delta$ is a recurrent singleton. Then $\{X_n\}_{n \ge 0}$ is regenerative for any initial distribution $\nu$ such that $P(X_n = \Delta \text{ for some } 1 \le n < \infty \mid X_0 \sim \nu) = 1$.

The proof of Theorem 4 is similar to that of Theorem 2 and hence is omitted.

Remark 4. Let $\Delta$ be a recurrent singleton as in Theorem 4, and let $\pi(A) \equiv E\big(\sum_{j=0}^{T-1} I_A(X_j) \mid X_0 = \Delta\big)$ for $A \in \mathcal{S}$, where $T = \min\{n : 1 \le n < \infty,\ X_n = \Delta\}$. As shown in Athreya and Roy (2012), the measure $\pi$ is stationary for the transition function $P$, that is, $\pi(A) = \int_S P(x, A)\, \pi(dx)$ for all $A \in \mathcal{S}$. Further, if $\Delta$ is accessible from everywhere in $S$, that is, if for all $x \in S$, $P(X_n = \Delta \text{ for some } 1 \le n < \infty \mid X_0 = x) = 1$, then $\pi$ is unique (up to a multiplicative constant) in the class of all measures $\tilde{\pi}$ on $(S, \mathcal{S})$ that are subinvariant with respect to $P$, that is, $\tilde{\pi}(A) \ge \int_S P(x, A)\, \tilde{\pi}(dx)$ for all $A \in \mathcal{S}$.

Remark 5. Let $\{X_n\}_{n \ge 0}$ be a Harris recurrent Markov chain with state space $S$, that is, there exists a $\sigma$-finite measure $\phi(\cdot)$ on $(S, \mathcal{S})$ such that $P(X_n \in A \text{ for some } 1 \le n < \infty \mid X_0 = x) = 1$ for all $x \in S$ and all $A \in \mathcal{S}$ such that $\phi(A) > 0$. Assume that $\mathcal{S}$ is countably generated. Then there exists $n_0 \ge 1$ such that the chain $\{X_{n n_0}\}_{n \ge 0}$ is equivalent, in a suitable sense, to a regenerative sequence. (This has been independently shown by Athreya and Ney (1978) and Nummelin (1978); see e.g. Meyn and Tweedie (1993).)
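The countable state space results above can be checked numerically. The following is a minimal Python sketch (our illustration, not part of the paper; the run length and seed are arbitrary choices). The simple symmetric random walk on the integers is irreducible and null recurrent, hence regenerative by Theorem 1, with the successive returns to 0 as regeneration times. Its occupation measure $\pi$ is counting measure on the integers (an improper target), so for $f = I_{\{-1,0,1\}}$ and $g = I_{\{0\}}$ we have $\int f\, d\pi = 3$ and $\int g\, d\pi = 1$, and both estimators of Remark 1 should be close to 3.

```python
import random

random.seed(42)

# Simple symmetric random walk on Z started at 0: irreducible and
# (null) recurrent, hence regenerative by Theorem 1. Regeneration
# times are the successive returns to 0, and the occupation measure
# pi is counting measure on Z, so for f below, integral of f dpi = 3.
n_steps = 500_000
f = lambda k: 1 if k in (-1, 0, 1) else 0   # integral f dpi = 3
g = lambda k: 1 if k == 0 else 0            # integral g dpi = 1

x = 0
sum_f, sum_g = f(x), g(x)
n_regen = 0                     # N_n: number of completed returns to 0
for _ in range(n_steps):
    x += random.choice((-1, 1))
    sum_f += f(x)
    sum_g += g(x)
    if x == 0:
        n_regen += 1

# Regeneration and ratio estimators of Remark 1 (the pre-T_1 segment
# is included in the sums; it is asymptotically negligible).
lambda_hat = sum_f / n_regen    # -> integral f dpi = 3 (a.s.)
ratio_hat = sum_f / sum_g       # -> 3 / 1 = 3 (a.s.)
print(lambda_hat, ratio_hat)
```

By contrast, the time-average estimator $\sum_{j \le n} f(X_j)/n$ tends to 0 on this chain, since $\pi$ is improper, which is exactly the phenomenon the regeneration and ratio estimators are designed to avoid.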
Our next result provides another sufficient condition for $\{X_n\}_{n \ge 0}$ to be regenerative.

Theorem 5. Let $\{X_n\}_{n \ge 0}$ be a time homogeneous Markov chain with a general state space $(S, \mathcal{S})$ and Markov transition function $P(\cdot, \cdot)$. Suppose

(i) there exists a measurable function $s(\cdot) : S \to [0, 1]$ and a probability measure $Q(\cdot)$ on $(S, \mathcal{S})$ such that

$$P(x, A) \ge s(x)\, Q(A) \quad \text{for all } x \in S \text{ and } A \in \mathcal{S}. \tag{3}$$

(This is the so-called minorisation condition.)

(ii) Also, assume that

$$\sum_{n=1}^{\infty} \int_S \Big( \int_S s(y)\, P^n(x, dy) \Big) Q(dx) = \infty. \tag{4}$$

Then $\{X_n\}_{n \ge 0}$ is regenerative if $X_0$ has distribution $Q$.

The next result gives a necessary condition for $\{X_n\}_{n \ge 0}$ to be regenerative.

Theorem 6. Let $\{X_n\}_{n \ge 0}$ be a time homogeneous Markov chain with a general state space $(S, \mathcal{S})$. Assume that

(i) $\{X_n\}_{n \ge 0}$ is regenerative, and

(ii) for all $j \ge 0$, the event $\{T_2 - T_1 > j\}$ is measurable with respect to $\sigma\{X_{T_1 + r} : 0 \le r \le j\}$, the $\sigma$-algebra generated by $\{X_{T_1}, X_{T_1 + 1}, \ldots, X_{T_1 + j}\}$.

Let $\pi$ be the occupation measure as defined in (1). Then $\pi$ is $\sigma$-finite, and for all $A$ with $\pi(A) > 0$,

$$P(X_n \in A \text{ for some } 1 \le n < \infty \mid X_0 = x) = 1 \quad \text{for } \pi\text{-almost all } x. \tag{5}$$

Definition 2. A Markov chain $\{X_n\}_{n \ge 0}$ is said to be weakly Harris recurrent if there exists a $\sigma$-finite measure $\pi$ on $(S, \mathcal{S})$ such that (5) holds for all $A$ with $\pi(A) > 0$.

Remark 6. Theorem 6 above shows that any regenerative Markov chain is necessarily weakly Harris recurrent. A natural open question is whether a weakly Harris recurrent Markov chain is necessarily regenerative for some initial distribution of $X_0$. Note that Theorem 6 is a version of Theorem 3 for general state space Markov chains.
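The split-chain recipe behind Theorem 5, spelled out in the proof in Section 3, is easy to simulate. The following Python sketch is our illustration, not part of the paper; the particular residual kernel $R$ and all constants are assumptions made for the example. With $s(x) \equiv 1/2$ and $Q = \mathrm{Uniform}(0, 1)$ on the state space $[0, 1]$, the minorisation (3) holds by construction and every term of the series (4) equals $1/2$, so the chain regenerates at each step with $\delta_n = 1$.

```python
import random

random.seed(0)

# Doeblin-type toy kernel on [0, 1]: take s(x) = 1/2 constant and
# Q = Uniform(0, 1), and (an arbitrary choice for this example) let
# the residual kernel be R(x, .) = Uniform(x/2, x/2 + 1/2). Then
#   P(x, A) = (1/2) Q(A) + (1/2) R(x, A) >= s(x) Q(A),
# so (3) holds, and each term of (4) equals 1/2, so (4) holds.

def split_chain_step(x):
    """One step of the split chain (delta_{n+1}, X_{n+1}) given X_n = x."""
    delta = random.random() < 0.5          # delta ~ Bernoulli(s(x)), s(x) = 1/2
    if delta:
        x_new = random.random()            # regenerate: X_{n+1} ~ Q
    else:
        x_new = x / 2 + random.random() / 2   # residual kernel R(x, .)
    return delta, x_new

x = random.random()                        # X_0 ~ Q
regen_times = []
for n in range(1, 100_001):
    delta, x = split_chain_step(x)
    if delta:
        regen_times.append(n)

# With s = 1/2 constant, the gaps between regenerations are iid
# Geometric(1/2) with mean 2, matching the iid cycles of Definition 1.
gaps = [t2 - t1 for t1, t2 in zip(regen_times, regen_times[1:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)
```

The constant $s$ makes the iid cycle structure transparent; for a non-constant $s(\cdot)$ the same recipe applies, but the regeneration probability then depends on the current state.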
Remark 7. Condition (ii) in Theorem 6 is satisfied in the usual construction of regeneration schemes (see the proof of Theorem 5 in Section 3).

3 Proofs of Theorems

Proof of Theorem 1

Proof. First we assume that the Markov chain $\{X_n\}_{n \ge 0}$ is irreducible and recurrent. Fix $s_{i_0} \in S$. Define $T_1 = \min\{n : n \ge 1,\ X_n = s_{i_0}\}$ and, for $j \ge 2$, $T_j = \min\{n : n > T_{j-1},\ X_n = s_{i_0}\}$. Since $\{X_n\}_{n \ge 0}$ is irreducible and recurrent, for any initial distribution, $P(T_j < \infty) = 1$ for all $j \ge 1$ (see e.g. Athreya and Lahiri, 2006, Section 14.1). Also, by the strong Markov property, $\eta_j \equiv \{X_n, T_j \le n < T_{j+1},\ T_{j+1} - T_j\}$, $j \ge 1$, are iid with $X_{T_j} = s_{i_0}$; that is, $\{X_n\}_{n \ge 0}$ is regenerative, for any initial distribution.

Next we assume that $\{X_n\}_{n \ge 0}$ is regenerative with $X_0 \sim \nu$, regeneration times $T_j$'s, and occupation measure $\pi$ as defined in (1). Note that $\pi(S) = E(T_2 - T_1) > 0$. So there exists $s_{i_0} \in S$ such that $\pi(\{s_{i_0}\}) > 0$. Define

$$\xi_r(i_0) = \sum_{j=T_r}^{T_{r+1}-1} I_{\{s_{i_0}\}}(X_j) \equiv \text{number of visits to } s_{i_0} \text{ during the } r\text{th cycle}$$

for $r = 1, 2, \ldots$. Since $\xi_r(i_0)$, $r = 1, 2, \ldots$, are iid with $E(\xi_r(i_0)) = \pi(\{s_{i_0}\})$, by the SLLN we have $\sum_{r=1}^{n} \xi_r(i_0)/n \to \pi(\{s_{i_0}\})$ with probability 1 as $n \to \infty$. Since $\pi(\{s_{i_0}\}) > 0$, this implies that $\sum_{r=1}^{\infty} \xi_r(i_0) = \infty$ with probability 1, which, in turn, implies that $P_\nu(\xi_r(i_0) > 0 \text{ for infinitely many } r) = 1$, where $P_\nu$ is the distribution of $\{X_n\}_{n \ge 0}$ with $X_0 \sim \nu$. The above implies that $P_\nu(X_n = s_{i_0} \text{ i.o.}) = 1$; that is, $s_{i_0}$ is a recurrent state. Since $\{X_n\}_{n \ge 0}$ is irreducible, it follows that all states are recurrent.

Proof of Theorem 2

Proof. If $P(X_0 = s_{i_0}) = 1$, then since $s_{i_0}$ is recurrent, $\{X_n\}_{n \ge 0}$ is regenerative with iid cycles $\eta_j$, $j \ge 1$, as defined in the proof of Theorem 1. Now if the initial distribution $\nu$ is such that $P(X_n = s_{i_0} \text{ for some } 1 \le n < \infty \mid X_0 \sim \nu) = 1$, then $\{X_n\}_{n \ge 0}$ is regenerative with the same iid cycles $\eta_j$, $j \ge 1$, as above.

Proof of Theorem 3
Proof. Let $\pi$ be as defined in (1) and let $S_0 \equiv \{s_i : s_i \in S,\ \pi(\{s_i\}) > 0\}$. Then, as shown in the second part of the proof of Theorem 1, $S_0$ is nonempty and for all $s_i \in S_0$, $P_\nu(A_i) = 1$, where $A_i \equiv \{X_n = s_i \text{ for infinitely many } n\}$. This implies that for any $s_i, s_j \in S_0$, $P_\nu(A_i \cap A_j) = 1$, which, in turn, implies by the strong Markov property that $P(X_n = s_j \text{ i.o.} \mid X_0 = s_i) = 1$ for all $s_i, s_j \in S_0$.

Proof of Theorem 5

Proof. Following Athreya and Ney (1978) and Nummelin (1978), we construct the so-called split chain $\{(\delta_n, X_n)\}_{n \ge 0}$ using (3) and the following recipe. At step $n \ge 0$, given $X_n = x_n$, generate $(\delta_{n+1}, X_{n+1})$ as follows. Generate $\delta_{n+1} \sim \text{Bernoulli}(s(x_n))$. If $\delta_{n+1} = 1$, then draw $X_{n+1} \sim Q(\cdot)$; otherwise, if $\delta_{n+1} = 0$, then draw $X_{n+1} \sim R(x_n, \cdot)$, where

$$R(x, \cdot) \equiv \{1 - s(x)\}^{-1} \{P(x, \cdot) - s(x)\, Q(\cdot)\} \quad \text{for } s(x) \ne 1,$$

and if $s(x) = 1$ define $R(x, \cdot) \equiv 0$. Note that the marginal $\{X_n\}$ sequence of the split chain $\{(\delta_n, X_n)\}_{n \ge 0}$ is a Markov chain with state space $S$ and transition function $P(\cdot, \cdot)$. We observe that for any $1 \le n < \infty$ and $A \in \mathcal{S}$,

$$P(X_n \in A \mid \delta_n = 1,\ (\delta_j, X_j),\ j \le n-1) = Q(A).$$

Let $T = \min\{n : 1 \le n < \infty,\ \delta_n = 1\}$, with $T = \infty$ if $\delta_n \ne 1$ for all $1 \le n < \infty$. Let $\theta = P(T < \infty \mid X_0 \sim Q)$. If $\theta = 1$, then the above construction shows that the Markov chain $\{X_n\}_{n \ge 0}$ with $X_0 \sim Q$ is regenerative. Now we establish that $\theta = 1$ if and only if (4) holds. (For the proof of the theorem we only need the "if" part.) Let $N = \sum_{n=1}^{\infty} \delta_n$ be the total number of regenerations. Let $\delta_0 = 1$, $X_0 \sim Q$. Then it can be shown that $P(N = 0) = 1 - \theta$ and $P(N = k) = (1 - \theta)\theta^k$ for all $k \ge 1$. Thus if $\theta < 1$, then

$$E(N \mid X_0 \sim Q) = \sum_{k=0}^{\infty} k (1 - \theta) \theta^k = \theta/(1 - \theta) < \infty.$$

On the other hand, if $\theta = 1$, then $P(N = \infty) = 1$. Hence $\theta = 1$ if and only if $E(N \mid X_0 \sim Q) = \infty$. But

$$E(N \mid X_0 \sim Q) = \sum_{n=1}^{\infty} P(\delta_n = 1) = \sum_{n=1}^{\infty} \int_S \Big( \int_S s(y)\, P^n(x, dy) \Big) Q(dx).$$

So if (4) holds then $\theta = 1$ and the Markov chain $\{X_n\}_{n \ge 0}$ is regenerative when $X_0 \sim Q$. The proof of Theorem 5 is thus complete.

Proof of Theorem 6
Proof. As noted in the proof of Theorem 1, $0 < \pi(S) = E(T_2 - T_1)$, and since $T_2 - T_1$ is an integer valued random variable, $\pi$ is a nontrivial $\sigma$-finite measure. Fix $A$ such that $\pi(A) > 0$. Then, arguing as in the proof of Theorem 1,

$$P_\nu(\xi_r(A) > 0 \text{ for infinitely many } r,\ 1 \le r < \infty) = 1, \tag{6}$$

where $X_0 \sim \nu$ and

$$\xi_r(A) = \sum_{j=T_r}^{T_{r+1}-1} I_A(X_j) \equiv \text{number of visits to } A \text{ during the } r\text{th cycle}$$

for $r = 1, 2, \ldots$. This implies $P_\nu(B) = 0$, where $B \equiv \{X_n \in A \text{ for only finitely many } n \ge T_1\}$. This, in turn, implies that for all $j \ge 0$, $P_\nu(B \cap (T_2 - T_1 > j)) = 0$. But

$$P_\nu(B \cap (T_2 - T_1 > j)) = E_\nu\big(I_B\, I_{(T_2 - T_1 > j)}\big) = E_\nu\big(I_{B_j}\, I_{(T_2 - T_1 > j)}\big),$$

where $B_j \equiv \{X_n \in A \text{ for only finitely many } n \ge T_1 + j\}$ for $j = 0, 1, \ldots$. By hypothesis (ii) and the (strong) Markov property, we have

$$E_\nu\big(I_{B_j}\, I_{(T_2 - T_1 > j)}\big) = E_\nu\Big(I_{(T_2 - T_1 > j)}\, E_\nu\big(I_{B_j} \mid \sigma\{X_{T_1 + r} : 0 \le r \le j\}\big)\Big) = E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big),$$

where $\psi(x) = P(X_n \in A \text{ for only finitely many } n \ge 1 \mid X_0 = x)$. Since $P_\nu(B \cap (T_2 - T_1 > j)) = 0$ for all $j = 0, 1, \ldots$, it follows that

$$\sum_{j=0}^{\infty} E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big) = 0.$$

But

$$\sum_{j=0}^{\infty} E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big) = E_\nu\Big(\sum_{j=T_1}^{T_2 - 1} \psi(X_j)\Big) = \int_S \psi(x)\, \pi(dx).$$

Since $\int_S \psi(x)\, \pi(dx) = 0$, it follows that $\psi(x) = 0$ for $\pi$-almost all $x$, that is,

$$P(X_n \in A \text{ for infinitely many } n \ge 1 \mid X_0 = x) = 1 \quad \text{for } \pi\text{-almost all } x.$$

This conclusion is stronger than (5).

Acknowledgments

The authors thank an anonymous reviewer and an editor for helpful comments and suggestions.

References

ATHREYA, K. B. and LAHIRI, S. N. (2006). Measure Theory and Probability Theory. Springer, New York.
ATHREYA, K. B. and NEY, P. (1978). A new approach to the limit theory of recurrent Markov chains. Transactions of the American Mathematical Society.

ATHREYA, K. B. and ROY, V. (2012). Monte Carlo methods for improper target distributions. Tech. rep., Iowa State University.

BERTAIL, P. and CLÉMENÇON, S. (2006). Regenerative block bootstrap for Markov chains. Bernoulli.

DATTA, S. and MCCORMICK, W. (1993). Regeneration-based bootstrap for Markov chains. The Canadian Journal of Statistics.

KARLSEN, H. A. and TJØSTHEIM, D. (2001). Nonparametric estimation in null recurrent time series. The Annals of Statistics.

MEYN, S. P. and TWEEDIE, R. L. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, London.

MYKLAND, P., TIERNEY, L. and YU, B. (1995). Regeneration in Markov chain samplers. Journal of the American Statistical Association.

NUMMELIN, E. (1978). A splitting technique for Harris recurrent Markov chains. Z. Wahrsch. Verw. Gebiete.

ROBERTS, G. and TWEEDIE, R. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stochastic Processes and their Applications. Corrigendum (2001) 91.