When is a Markov chain regenerative?

Krishna B. Athreya and Vivekananda Roy

Iowa State University, Ames, Iowa, 50011, USA

Abstract

A sequence of random variables $\{X_n\}_{n \geq 0}$ is called regenerative if it can be broken up into iid components. The problem addressed in this paper is to determine under what conditions a Markov chain is regenerative. It is shown that an irreducible Markov chain with a countable state space is regenerative for any initial distribution if and only if it is recurrent (null or positive). An extension of this to the general state space case is also discussed.

Key words and phrases: Harris recurrence, Markov chains, Monte Carlo, recurrence, regenerative sequence.

1 Introduction

A sequence of random variables $\{X_n\}_{n \geq 0}$ is regenerative (see the formal definition below) if it can be broken up into iid components. This makes it possible to apply the laws of large numbers for iid random variables to such sequences. In particular, in Athreya and Roy (2012) this idea was used to develop Monte Carlo methods for estimating integrals of functions with respect to distributions $\pi$ that may or may not be proper (that is, $\pi(S)$ can be $\infty$, where $S$ is the underlying state space). A regenerative sequence $\{X_n\}_{n \geq 0}$, in general, need not be a Markov chain. In this short paper we address the question of when a Markov chain is regenerative.

We now give a formal definition of a regenerative sequence of random variables.

Definition 1. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(S, \mathcal{S})$ be a measurable space. A sequence of random variables $\{X_n\}_{n \geq 0}$ defined on $(\Omega, \mathcal{F}, P)$ with values in $(S, \mathcal{S})$ is called regenerative if there exists a sequence of (random) times $0 < T_1 < T_2 < \cdots$ such that the excursions $\{X_n : T_j \leq n < T_{j+1};\ \tau_j\}_{j \geq 1}$ are iid, where $\tau_j = T_{j+1} - T_j$ for $j = 1, 2, \ldots$, that is,
\[
P(\tau_j = k_j,\ X_{T_j + q} \in A_{q,j},\ 0 \leq q < k_j,\ j = 1, \ldots, r) = \prod_{j=1}^{r} P(\tau_1 = k_j,\ X_{T_1 + q} \in A_{q,j},\ 0 \leq q < k_j),
\]
for all $k_1, \ldots, k_r \in \mathbb{N}$, the set of positive integers, $A_{q,j} \in \mathcal{S}$, $0 \leq q < k_j$, $j = 1, \ldots, r$, and $r \geq 1$. The random times $\{T_n\}_{n \geq 1}$ are called regeneration times.

The next remark follows from the definition of a regenerative sequence and the strong law of large numbers; Athreya and Roy (2012) used it as a basis for constructing Monte Carlo estimators for integrals of functions with respect to improper distributions.

Remark 1. Let $\{X_n\}_{n \geq 0}$ be a regenerative sequence with regeneration times $\{T_n\}_{n \geq 1}$. Let
\[
\pi(A) = E\Big(\sum_{j=T_1}^{T_2 - 1} I_A(X_j)\Big) \quad \text{for } A \in \mathcal{S}. \tag{1}
\]
(The measure $\pi$ is known as the canonical (or occupation) measure for $\{X_n\}_{n \geq 0}$.) Let $N_n = k$ if $T_k \leq n < T_{k+1}$, $k, n = 1, 2, \ldots$. Suppose $f, g : S \to \mathbb{R}$ are two measurable functions such that $\int_S |f|\, d\pi < \infty$ and $\int_S |g|\, d\pi < \infty$, $\int_S g\, d\pi \neq 0$. Then, as $n \to \infty$,

(i) (regeneration estimator) $\hat{\lambda}_n = \sum_{j=0}^{n} f(X_j)\,/\,N_n \to \int_S f\, d\pi$ a.s., and

(ii) (ratio estimator) $\hat{R}_n = \sum_{j=0}^{n} f(X_j)\,/\,\sum_{j=0}^{n} g(X_j) \to \int_S f\, d\pi \,\big/ \int_S g\, d\pi$ a.s.

Athreya and Roy (2012) showed that if $\pi$ happens to be improper then the standard time-average estimator $\sum_{j=0}^{n} f(X_j)/n$, based on a Markov chain $\{X_n\}_{n \geq 0}$ with $\pi$ as its stationary distribution, converges to zero with probability 1, and hence is not appropriate. On the other hand, the regeneration and ratio estimators, namely $\hat{\lambda}_n$ and $(\int_S g\, d\pi)\, \hat{R}_n$ (assuming $\int_S g\, d\pi$ is known), are strongly consistent estimators of $\int_S f\, d\pi$, and they work whether $\pi$ is proper or improper.
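As a concrete illustration of Remark 1 (our addition, not part of the original paper), consider the simple symmetric random walk on $\mathbb{Z}$ started at 0: it is null recurrent, the successive returns to 0 are regeneration times, and the canonical measure $\pi$ is counting measure on $\mathbb{Z}$, hence improper. A minimal Python sketch of the two estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple symmetric random walk on Z, started at 0 (null recurrent).
# Returns to 0 are regeneration times; the canonical measure pi is
# counting measure on Z, so pi(Z) = infinity (pi is improper).
n = 10**6
X = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=n))))

f = (np.abs(X) <= 1).astype(float)  # f = I_{-1,0,1}: integral of f dpi = 3
g = (X == 0).astype(float)          # g = I_{0}:      integral of g dpi = 1

N_n = np.count_nonzero(X[1:] == 0)  # number of completed regenerations

lam_hat = f.sum() / N_n             # regeneration estimator -> 3
R_hat = f.sum() / g.sum()           # ratio estimator        -> 3/1 = 3
print(N_n, lam_hat, R_hat)
```

By contrast, the time averages $\sum_{j \leq n} f(X_j)/n$ and $\sum_{j \leq n} g(X_j)/n$ both tend to 0 here, which is the failure of the standard estimator noted above.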

Regeneration methods have proved useful in a number of other statistical applications. For example, in the case of a proper target, that is, when $\pi(S)$ is finite, the regenerative method has been used to construct consistent estimates of the asymptotic variance of Markov chain Monte Carlo (MCMC) based estimates (Mykland, Tierney and Yu, 1995), to obtain quantitative bounds on the rate of convergence of Markov chains (see, e.g., Roberts and Tweedie, 1999), and to construct bootstrap methods for Markov chains (see, e.g., Bertail and Clémençon, 2006; Datta and McCormick, 1993). The regeneration method has also been used in nonparametric estimation for null recurrent time series (Karlsen and Tjøstheim, 2001).

2 The Results

Our first result shows that a necessary and sufficient condition for an irreducible countable state space Markov chain to be regenerative for any initial distribution is that it is recurrent.

Theorem 1. Let $\{X_n\}_{n \geq 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$. Let $\{X_n\}_{n \geq 0}$ be irreducible, that is, for all $s_i, s_j \in S$, $P(X_n = s_j \text{ for some } 1 \leq n < \infty \mid X_0 = s_i) > 0$. Then for any initial distribution of $X_0$, the Markov chain $\{X_n\}_{n \geq 0}$ is regenerative if and only if it is recurrent, that is, for all $s_i \in S$, $P(X_n = s_i \text{ for some } 1 \leq n < \infty \mid X_0 = s_i) = 1$.

The next result gives a sufficient condition for a Markov chain $\{X_n\}_{n \geq 0}$ with countable state space to be regenerative.

Theorem 2. Let $\{X_n\}_{n \geq 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$. Suppose there exists a state $s_{i_0}$ such that
\[
P(X_n = s_{i_0} \text{ for some } 1 \leq n < \infty \mid X_0 = s_{i_0}) = 1, \tag{2}
\]
that is, $s_{i_0}$ is a recurrent state. Then $\{X_n\}_{n \geq 0}$ is regenerative for any initial distribution $\nu$ of $X_0$ such that $P(X_n = s_{i_0} \text{ for some } 1 \leq n < \infty \mid X_0 \sim \nu) = 1$.

Remark 2. It is known that a necessary and sufficient condition for (2) to hold is that $\sum_{n=1}^{\infty} P(X_n = s_{i_0} \mid X_0 = s_{i_0}) = \infty$ (see, e.g., Athreya and Lahiri, 2006).
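This series criterion is easy to check numerically in simple cases. As an illustration (our addition): for the random walk on $\mathbb{Z}$ with up-step probability $p$, the return probabilities satisfy $P(X_{2m} = 0 \mid X_0 = 0) = \binom{2m}{m}(p(1-p))^m$, and the partial sums diverge (like $\sqrt{M}$) exactly when $p = 1/2$:

```python
# Partial sums of sum_n P(X_n = 0 | X_0 = 0) for a random walk on Z
# with P(step = +1) = p; only even times contribute, with
# P^{2m}(0,0) = C(2m, m) * (p(1-p))^m, computed here by recursion
# on m to avoid huge binomial coefficients.
def partial_sum(p, M):
    s, term = 0.0, 1.0
    for m in range(1, M + 1):
        term *= 2.0 * (2 * m - 1) / m * p * (1 - p)
        s += term
    return s

for M in (10**2, 10**4, 10**6):
    print(M, round(partial_sum(0.5, M), 1), round(partial_sum(0.6, M), 3))
# p = 0.5: the sums grow without bound, so 0 is recurrent;
# p = 0.6: the sums converge (to 4), so 0 is transient.
```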

Next we give a necessary condition for a Markov chain $\{X_n\}_{n \geq 0}$ with countable state space to be regenerative.

Theorem 3. Let $\{X_n\}_{n \geq 0}$ be a time homogeneous Markov chain with a countable state space $S = \{s_0, s_1, s_2, \ldots\}$ and initial distribution $\nu$ of $X_0$. Suppose $\{X_n\}_{n \geq 0}$ is regenerative. Then there exists a nonempty set $S_0 \subseteq S$ such that for all $s_i, s_j \in S_0$, $P(X_n = s_j \text{ for some } 1 \leq n < \infty \mid X_0 = s_i) = 1$.

Remark 3. Under the hypothesis of Theorem 3, the Markov chain $\{X_n\}_{n \geq 0}$ is regenerative for any initial distribution of $X_0$ such that $P(X_n \in S_0 \text{ for some } 1 \leq n < \infty) = 1$.

Next we discuss the case when $\{X_n\}_{n \geq 0}$ is a Markov chain with a general state space that need not be countable. Our first result in this case provides a sufficient condition similar to Theorem 2.

Theorem 4. Let $\{X_n\}_{n \geq 0}$ be a Markov chain with state space $(S, \mathcal{S})$ and transition probability function $P(\cdot, \cdot)$. Suppose there exists a singleton $\Delta \in \mathcal{S}$ with $P(X_n = \Delta \text{ for some } 1 \leq n < \infty \mid X_0 = \Delta) = 1$, i.e., $\Delta$ is a recurrent singleton. Then $\{X_n\}_{n \geq 0}$ is regenerative for any initial distribution $\nu$ such that $P(X_n = \Delta \text{ for some } 1 \leq n < \infty \mid X_0 \sim \nu) = 1$.

The proof of Theorem 4 is similar to that of Theorem 2 and hence is omitted.

Remark 4. Let $\Delta$ be a recurrent singleton as in Theorem 4 and let $\pi(A) \equiv E\big(\sum_{j=0}^{T_\Delta - 1} I_A(X_j) \mid X_0 = \Delta\big)$ for $A \in \mathcal{S}$, where $T_\Delta = \min\{n : 1 \leq n < \infty,\ X_n = \Delta\}$. As shown in Athreya and Roy (2012), the measure $\pi$ is stationary for the transition function $P$, that is, $\pi(A) = \int_S P(x, A)\, \pi(dx)$ for all $A \in \mathcal{S}$. Further, if $\Delta$ is accessible from everywhere in $S$, that is, if for all $x \in S$, $P(X_n = \Delta \text{ for some } 1 \leq n < \infty \mid X_0 = x) = 1$, then $\pi$ is unique (up to a multiplicative constant) in the class of all measures $\tilde{\pi}$ on $(S, \mathcal{S})$ that are subinvariant with respect to $P$, that is, $\tilde{\pi}(A) \geq \int_S P(x, A)\, \tilde{\pi}(dx)$ for all $A \in \mathcal{S}$.
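The stationarity of the occupation measure in Remark 4 can be verified directly on a small example. The following sketch (our illustration; the transition matrix is an arbitrary choice) computes the expected visit counts over one cycle from the atom $\Delta = \{0\}$ for a three-state chain and checks $\pi P = \pi$:

```python
import numpy as np

# An irreducible (hence recurrent) chain on {0, 1, 2}; the atom is
# Delta = {0}. pi(j) is the expected number of visits to j in one
# cycle from 0, so pi(0) = 1 and, writing u = P[0, 1:] and
# Q = P[1:, 1:], pi[1:] = u + uQ + uQ^2 + ... = u (I - Q)^{-1}.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])

pi = np.empty(3)
pi[0] = 1.0
pi[1:] = P[0, 1:] @ np.linalg.inv(np.eye(2) - P[1:, 1:])

print(np.allclose(pi @ P, pi))  # True: the occupation measure is stationary
print(pi / pi.sum())            # normalized, the stationary distribution
```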

Remark 5. Let $\{X_n\}_{n \geq 0}$ be a Harris recurrent Markov chain with state space $S$, that is, suppose there exists a $\sigma$-finite measure $\varphi(\cdot)$ on $(S, \mathcal{S})$ such that $P(X_n \in A \text{ for some } 1 \leq n < \infty \mid X_0 = x) = 1$ for all $x \in S$ and all $A \in \mathcal{S}$ such that $\varphi(A) > 0$. Assume that $\mathcal{S}$ is countably generated. Then there exists $n_0 \geq 1$ such that the chain $\{X_{n n_0}\}_{n \geq 0}$ is equivalent, in a suitable sense, to a regenerative sequence. (This was shown independently by Athreya and Ney (1978) and Nummelin (1978); see, e.g., Meyn and Tweedie (1993).)

Our next result provides another sufficient condition for $\{X_n\}_{n \geq 0}$ to be regenerative.

Theorem 5. Let $\{X_n\}_{n \geq 0}$ be a time homogeneous Markov chain with a general state space $(S, \mathcal{S})$ and Markov transition function $P(\cdot, \cdot)$. Suppose

(i) there exists a measurable function $s(\cdot) : S \to [0, 1]$ and a probability measure $Q(\cdot)$ on $(S, \mathcal{S})$ such that
\[
P(x, A) \geq s(x) Q(A) \quad \text{for all } x \in S \text{ and } A \in \mathcal{S} \tag{3}
\]
(this is the so-called minorisation condition);

(ii) and, in addition,
\[
\sum_{n=1}^{\infty} \int_S \Big( \int_S s(y)\, P^n(x, dy) \Big) Q(dx) = \infty. \tag{4}
\]

Then $\{X_n\}_{n \geq 0}$ is regenerative if $X_0$ has distribution $Q$.

The next result gives a necessary condition for $\{X_n\}_{n \geq 0}$ to be regenerative.

Theorem 6. Let $\{X_n\}_{n \geq 0}$ be a time homogeneous Markov chain with a general state space $(S, \mathcal{S})$. Assume that (i) $\{X_n\}_{n \geq 0}$ is regenerative, and (ii) for all $j \geq 0$, the event $\{T_2 - T_1 > j\}$ is measurable with respect to $\sigma\{X_{T_1 + r} : 0 \leq r \leq j\}$, the $\sigma$-algebra generated by $\{X_{T_1}, X_{T_1 + 1}, \ldots, X_{T_1 + j}\}$. Let $\pi$ be the occupation measure as defined in (1). Then $\pi$ is $\sigma$-finite, and for all $A \in \mathcal{S}$ with $\pi(A) > 0$,
\[
P(X_n \in A \text{ for some } 1 \leq n < \infty \mid X_0 = x) = 1 \quad \text{for } \pi\text{-almost all } x. \tag{5}
\]

Definition 2. A Markov chain $\{X_n\}_{n \geq 0}$ is said to be weakly Harris recurrent if there exists a $\sigma$-finite measure $\pi$ on $(S, \mathcal{S})$ such that (5) holds for all $A \in \mathcal{S}$ with $\pi(A) > 0$.

Remark 6. Theorem 6 shows that any regenerative Markov chain is necessarily weakly Harris recurrent. A natural open question is whether a weakly Harris recurrent Markov chain is necessarily regenerative for some initial distribution of $X_0$. Note that Theorem 6 is a version of Theorem 3 for general state space Markov chains.

Remark 7. Condition (ii) in Theorem 6 is satisfied in the usual construction of regeneration schemes (see the proof of Theorem 5 in Section 3).

3 Proofs of Theorems

Proof of Theorem 1

Proof. First we assume that the Markov chain $\{X_n\}_{n \geq 0}$ is irreducible and recurrent. Fix $s_{i_0} \in S$. Define $T_1 = \min\{n : n \geq 1,\ X_n = s_{i_0}\}$ and, for $j \geq 2$, $T_j = \min\{n : n > T_{j-1},\ X_n = s_{i_0}\}$. Since $\{X_n\}_{n \geq 0}$ is irreducible and recurrent, for any initial distribution, $P(T_j < \infty) = 1$ for all $j \geq 1$ (see, e.g., Athreya and Lahiri, 2006, Section 14.1). Also, by the strong Markov property, $\eta_j \equiv \{X_n,\ T_j \leq n < T_{j+1};\ T_{j+1} - T_j\}$, $j \geq 1$, are iid with $X_{T_j} = s_{i_0}$; that is, $\{X_n\}_{n \geq 0}$ is regenerative, for any initial distribution.

Next we assume that $\{X_n\}_{n \geq 0}$ is regenerative with $X_0 \sim \nu$, regeneration times $T_j$'s, and occupation measure $\pi$ as defined in (1). Note that $\pi(S) = E(T_2 - T_1) > 0$. So there exists $s_{i_0} \in S$ such that $\pi(\{s_{i_0}\}) > 0$. Define
\[
\xi_r(i_0) = \sum_{j=T_r}^{T_{r+1} - 1} I_{\{s_{i_0}\}}(X_j) \equiv \text{number of visits to } s_{i_0} \text{ during the } r\text{th cycle}
\]
for $r = 1, 2, \ldots$. Since $\xi_r(i_0)$, $r = 1, 2, \ldots$, are iid with $E(\xi_r(i_0)) = \pi(\{s_{i_0}\})$, by the SLLN we have $\sum_{r=1}^{n} \xi_r(i_0)/n \to \pi(\{s_{i_0}\})$ with probability 1 as $n \to \infty$. Since $\pi(\{s_{i_0}\}) > 0$, this implies that $\sum_{r=1}^{\infty} \xi_r(i_0) = \infty$ with probability 1, which, in turn, implies that $P_\nu(\xi_r(i_0) > 0 \text{ for infinitely many } r) = 1$, where $P_\nu$ is the distribution of $\{X_n\}_{n \geq 0}$ with $X_0 \sim \nu$. The above implies that $P_\nu(X_n = s_{i_0} \text{ i.o.}) = 1$, that is, $s_{i_0}$ is a recurrent state. Since $\{X_n\}_{n \geq 0}$ is irreducible, it follows that all states are recurrent.

Proof of Theorem 2

Proof. If $P(X_0 = s_{i_0}) = 1$, then since $s_{i_0}$ is recurrent, $\{X_n\}_{n \geq 0}$ is regenerative with iid cycles $\eta_j$, $j \geq 0$, as defined in the proof of Theorem 1. Now if the initial distribution $\nu$ is such that $P(X_n = s_{i_0} \text{ for some } 1 \leq n < \infty \mid X_0 \sim \nu) = 1$, then $\{X_n\}_{n \geq 0}$ is regenerative with the same iid cycles $\eta_j$, $j \geq 1$, as above.

Proof of Theorem 3

Proof. Let $\pi$ be as defined in (1) and let $S_0 \equiv \{s_i : s_i \in S,\ \pi(\{s_i\}) > 0\}$. Then, as shown in the second part of the proof of Theorem 1, $S_0$ is nonempty and, for all $s_i \in S_0$, $P_\nu(A_i) = 1$, where $A_i \equiv \{X_n = s_i \text{ for infinitely many } n\}$. This implies, for any $s_i, s_j \in S_0$, $P_\nu(A_i \cap A_j) = 1$, which, in turn, implies by the strong Markov property that $P(X_n = s_j \text{ i.o.} \mid X_0 = s_i) = 1$ for all $s_i, s_j \in S_0$.

Proof of Theorem 5

Proof. Following Athreya and Ney (1978) and Nummelin (1978), we construct the so-called split chain $\{(\delta_n, X_n)\}_{n \geq 0}$ using (3) and the following recipe. At step $n \geq 0$, given $X_n = x_n$, generate $(\delta_{n+1}, X_{n+1})$ as follows. Generate $\delta_{n+1} \sim \text{Bernoulli}(s(x_n))$. If $\delta_{n+1} = 1$, then draw $X_{n+1} \sim Q(\cdot)$; otherwise, if $\delta_{n+1} = 0$, then draw $X_{n+1} \sim R(x_n, \cdot)$, where $R(x, \cdot) \equiv \{1 - s(x)\}^{-1}\{P(x, \cdot) - s(x) Q(\cdot)\}$ for $s(x) < 1$, and $R(x, \cdot) \equiv 0$ if $s(x) = 1$. Note that the marginal $\{X_n\}$ sequence of the split chain $\{(\delta_n, X_n)\}_{n \geq 0}$ is a Markov chain with state space $S$ and transition function $P(\cdot, \cdot)$. We observe that for any $1 \leq n < \infty$ and $A \in \mathcal{S}$,
\[
P(X_n \in A \mid \delta_n = 1,\ (\delta_j, X_j),\ j \leq n - 1) = Q(A).
\]
Let $T = \min\{n : 1 \leq n < \infty,\ \delta_n = 1\}$, with $T = \infty$ if $\delta_n \neq 1$ for all $1 \leq n < \infty$. Let $\theta = P(T < \infty \mid X_0 \sim Q)$. If $\theta = 1$, then the above construction shows that the Markov chain $\{X_n\}_{n \geq 0}$ with $X_0 \sim Q$ is regenerative. Now we establish that $\theta = 1$ if and only if (4) holds. (For the proof of the theorem we only need the 'if' part.) Let $N = \sum_{n=1}^{\infty} \delta_n$ be the total number of regenerations. Let $\delta_0 = 1$, $X_0 \sim Q$. Then it can be shown that $P(N = 0) = 1 - \theta$ and $P(N = k) = (1 - \theta)\theta^k$ for all $k \geq 1$. Thus if $\theta < 1$, then
\[
E(N \mid X_0 \sim Q) = \sum_{k=0}^{\infty} k (1 - \theta) \theta^k = \theta/(1 - \theta) < \infty.
\]
On the other hand, if $\theta = 1$, then $P(N = \infty) = 1$. Hence $\theta = 1$ if and only if $E(N \mid X_0 \sim Q) = \infty$. But
\[
E(N \mid X_0 \sim Q) = \sum_{n=1}^{\infty} P(\delta_n = 1) = \sum_{n=1}^{\infty} \int_S \Big( \int_S s(y)\, P^n(x, dy) \Big) Q(dx).
\]
So if (4) holds then $\theta = 1$, and the Markov chain $\{X_n\}_{n \geq 0}$ is regenerative when $X_0 \sim Q$. The proof of Theorem 5 is thus complete.
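To make the split chain concrete, here is a minimal Python sketch (our illustration; the AR(1) kernel and the particular pair $(s, Q)$ are our choices, not from the paper). For $X_{n+1} = \rho X_n + Z_{n+1}$ with $Z_{n+1} \sim N(0,1)$ and transition density $p(x, y) = \phi(y - \rho x)$, one valid minorisation (3) takes $s(x) = \varepsilon\, I(|x| \leq 1)$ and $Q$ with density $q(y) = \phi(|y| + \rho)/\varepsilon$, where $\varepsilon = 2(1 - \Phi(\rho))$, since $\phi(y - \rho x) \geq \phi(|y| + \rho)$ whenever $|x| \leq 1$. Rather than drawing $\delta_{n+1}$ first and then $X_{n+1}$ from $Q$ or $R$, the sketch draws $X_{n+1} \sim P(x_n, \cdot)$ and then sets $\delta_{n+1} = 1$ with probability $s(x_n) q(X_{n+1})/p(x_n, X_{n+1})$; this retrospective form yields the same joint law for $(\delta_{n+1}, X_{n+1})$ as the recipe above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = 0.5
eps = 2 * (1 - norm.cdf(rho))          # total mass of the minorising kernel

def s(x):                              # s(x) = eps on the set |x| <= 1
    return eps if abs(x) <= 1.0 else 0.0

def q(y):                              # density of Q
    return norm.pdf(abs(y) + rho) / eps

def p(x, y):                           # transition density of P(x, .)
    return norm.pdf(y - rho * x)

n, x, regen = 10**5, 0.0, []
for t in range(1, n + 1):
    y = rho * x + rng.standard_normal()         # draw X_t ~ P(x, .)
    if rng.random() < s(x) * q(y) / p(x, y):    # retrospective coin: delta_t
        regen.append(t)                         # a regeneration time
    x = y

print(len(regen), "regenerations in", n, "steps")
# The tours between successive regeneration times are iid, so the
# chain started from Q is regenerative, as Theorem 5 asserts.
```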

Proof of Theorem 6

Proof. As noted in the proof of Theorem 1, $0 < \pi(S) = E(T_2 - T_1)$, and since $T_2 - T_1$ is an integer valued random variable, $\pi$ is a nontrivial $\sigma$-finite measure. Fix $A \in \mathcal{S}$ such that $\pi(A) > 0$. Then, arguing as in the proof of Theorem 1,
\[
P_\nu(\xi_r(A) > 0 \text{ for infinitely many } r,\ 1 \leq r < \infty) = 1, \tag{6}
\]
where $X_0 \sim \nu$ and
\[
\xi_r(A) = \sum_{j=T_r}^{T_{r+1} - 1} I_A(X_j) \equiv \text{number of visits to } A \text{ during the } r\text{th cycle}
\]
for $r = 1, 2, \ldots$. This implies $P_\nu(B) = 0$, where $B \equiv \{X_n \in A \text{ for only finitely many } n \geq T_1\}$. This, in turn, implies that for all $j \geq 0$, $P_\nu(B \cap \{T_2 - T_1 > j\}) = 0$. But
\[
P_\nu(B \cap \{T_2 - T_1 > j\}) = E_\nu\big(I_B\, I_{(T_2 - T_1 > j)}\big) = E_\nu\big(I_{B_j}\, I_{(T_2 - T_1 > j)}\big),
\]
where $B_j \equiv \{X_n \in A \text{ for only finitely many } n \geq T_1 + j\}$ for $j = 0, 1, \ldots$. By hypothesis (ii) and the (strong) Markov property, we have
\[
E_\nu\big(I_{B_j}\, I_{(T_2 - T_1 > j)}\big) = E_\nu\big(I_{(T_2 - T_1 > j)}\, E_\nu\big(I_{B_j} \mid \sigma\{X_{T_1 + r} : 0 \leq r \leq j\}\big)\big) = E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big),
\]
where $\psi(x) = P(X_n \in A \text{ for only finitely many } n \geq 1 \mid X_0 = x)$. Since $P_\nu(B \cap \{T_2 - T_1 > j\}) = 0$ for all $j = 0, 1, \ldots$, it follows that $\sum_{j=0}^{\infty} E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big) = 0$. But
\[
\sum_{j=0}^{\infty} E_\nu\big(I_{(T_2 - T_1 > j)}\, \psi(X_{T_1 + j})\big) = E_\nu\Big(\sum_{j=T_1}^{T_2 - 1} \psi(X_j)\Big) = \int_S \psi(x)\, \pi(dx).
\]
Since $\int_S \psi(x)\, \pi(dx) = 0$, it follows that $\psi(x) = 0$ for $\pi$-almost all $x$, that is, $P(X_n \in A \text{ for infinitely many } n \geq 1 \mid X_0 = x) = 1$ for $\pi$-almost all $x$. This conclusion is stronger than (5).

Acknowledgments

The authors thank an anonymous reviewer and an editor for helpful comments and suggestions.

References

ATHREYA, K. B. and LAHIRI, S. N. (2006). Measure Theory and Probability Theory. Springer, New York.

ATHREYA, K. B. and NEY, P. (1978). A new approach to the limit theory of recurrent Markov chains. Transactions of the American Mathematical Society.

ATHREYA, K. B. and ROY, V. (2012). Monte Carlo methods for improper target distributions. Tech. rep., Iowa State University.

BERTAIL, P. and CLÉMENÇON, S. (2006). Regenerative block bootstrap for Markov chains. Bernoulli.

DATTA, S. and MCCORMICK, W. (1993). Regeneration-based bootstrap for Markov chains. The Canadian Journal of Statistics.

KARLSEN, H. A. and TJØSTHEIM, D. (2001). Nonparametric estimation in null recurrent time series. The Annals of Statistics.

MEYN, S. P. and TWEEDIE, R. L. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, London.

MYKLAND, P., TIERNEY, L. and YU, B. (1995). Regeneration in Markov chain samplers. Journal of the American Statistical Association.

NUMMELIN, E. (1978). A splitting technique for Harris recurrent Markov chains. Z. Wahrsch. Verw. Gebiete.

ROBERTS, G. and TWEEDIE, R. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stochastic Processes and their Applications. Corrigendum (2001), vol. 91.
