Jae-Ho Lee. Markov Random Walks Driven by General Markov Chains and Their Applications to Semi-Markov Queues

Size: px
Start display at page:

Download "Jae-Ho Lee. Markov Random Walks Driven by General Markov Chains and Their Applications to Semi-Markov Queues"

Transcription

1 Jae-Ho Lee Markov Random Walks Driven by General Markov Chains and Their Applications to Semi-Markov Queues

2

3 Mathematik Markov Random Walks Driven by General Markov Chains and Their Applications to Semi-Markov Queues Inaugural-Dissertation zur rlangung des Doktorgrades der Naturwissenschaften im Fachbereich Mathematik und Informatik der Mathematisch-Naturwissenschaftlichen Fakultät der Westfälischen Wilhelms-Universität Münster vorgelegt von Jae-Ho Lee aus Tae-Jeon

4 Dekan: rster Gutachter: Zweiter Gutachter: Prof. Dr. Dr. h.c. J. Cuntz Prof. Dr. G. Alsmeyer Prof. Dr. M. Löwe Tag der mündlichen Prüfung: Tag der Promotion:

5 Contents Introduction iii Introduction to the theory of general Markov chains. Definitions and elementary properties Recurrence Harris recurrence rgodicity Markov random walks 9 2. Markov random walks Definitions The Harris recurrence of Markov modulated sequences Maximum of MRW s Time-reversal Reflected Markov random walks Reflected MRW s and their basic properties Regeneration Stationary distribution MRW s with lattice-type increments MRW s with lattice-type increments MRW s with upward skip-free increments MRW s with downward skip-free increments A duality i

6 ii CONTNTS 3 Moment conditions Moments of the first weak descending ladder epoch Rates of convergence Semi-Markov queues Single server queues The actual waiting time The busy cycle Continuous-time processes Identities between steady state distributions Multiserver queues xistence of a stationary version Regeneration of the discrete time workload process

7 Introduction Queueing theory is one of the important domains in applied probability. The basic idea has been borrowed from every-day experience of queues, for example, at the checkout counters in a supermarket, but a number of stochastic models may be formulated in queueing terms or are closely related. The great diversity of queueing problems gives rise to an enormous variety of queueing models. The simplest and the most basic model in queueing theory is the single server queue, where customers arrive at one service station, are served one at a time on the first come first server basis and leave the system when service is completed. If arrival times form a renewal process and service times are distributed identically and independently, and if arrival times and service times are independent, then the queue is denoted by GI/GI/, which is an old theme in the queueing theory. A specific feature of stable GI/GI/ queues is the regeneration of the system, which means that the system reaches an empty state infinitely often and restarts from scratch at the empty state. The regeneration of a stable GI/GI/ queue can be described in the framework of the theory of random walks. Let T n be the interarrival time between customers n and n, and U n the service time of customer n. Denote by S n n 0 the associated random walk given as n S n := X k, n 0, k=0 where X k = U k T k, k and X 0 = 0. Then the waiting time process forms the reflected random walk W n n 0, i.e., W 0 = 0 and W n+ = W n + X n+ +, n 0. Moreover, the weak descending ladder epochs are regeneration epochs of the waiting time process. GI/GI/ queues have been extensively studied, because of their tractability. Yet the i.i.d. condition, on which a GI/GI/ queue is based, is somewhat unnatural. In fact, almost everything in the world is occurred in a mutual interaction or under influence by some other things. Many efforts are made to generalize GI/GI/ queues. A generalization can be obtained replacing the i.i.d. assumption by conditional independence given a temporally homogenous Markov chain M. A semi-markove queue denoted by SM/SM/ is a generalization of GI/GI/ queue, in which the sequence M n, S n n 0 forms a Markov random walk MRW. A iii

8 iv INTRODUCTION MRW is a generalization of a random walk, in which the additive part is distributionally governed by a temporally homogenous Markov chain see Chapter 2 for the precise definition. Markov modulation offers more flexibility in the modeling of the real world, but in general it is not easy to explicitly compute queueing quantities like stationary distributions of various queueing processes. In the case of finite modulation, some special types of queues known as M/GI/ type and GI/M/ type are extensively studied by various authors like Neuts, Ramaswami, etc. They have developed matrixanalytic methods for the computation of queueing characteristics such as stationary distributions, which becomes nowadays a popular tool in the applied probabilities. Some comprehensive treatments of matrix-analytic methods can be found in Neuts [43, 44] and Latouche and Ramaswami [34]. For a brief survey see Ramaswami [5]. On the contrary, the theory of queues with general modulation chains was not well developed to the same extent. A study on semi-markov queues with general modulation chains can be found in Nummelin [45]. He showed that, if the modulation chain M is positive Harris recurrent, then under the stability condition the waiting time process is one-dependent as well as wide-sense regenerative. Alsmeyer [4] showed that a unique stationary distribution for the waiting time process can be written as an occupation measure with respect to the weak descending ladder epoch. This dissertation deals with MRW s driven by general Markov chains and semi- Markov queues. The first weak descending ladder epoch is one of the basic quantities in the theory of MRW s. In a semi-markov queue it is interpreted as the index of the customers in the first busy cycle. Making use of some corollaries of Dynkin s formula Corollary I.., I..2 in Kalashnikov [33], we first find moment conditions for the first weak descending ladder epochs of MRW s with negative drift. It should be pointed out that a similar method was used by Sharma [55] for the analysis of R/R/ queues, in which interarrival times and service times form a classical sense regenerative process. In the same manner, we get moment conditions for regeneration epochs of reflected MRW s. These results can be directly applied to the queueing theory with the corresponding queueing interpretations. In particular, for a semi-markov queue we find rates of convergence to the staionary distribution and conditions for the finiteness of moments of the stationary waiting time and workload processes. This dissertation is organized as follows: Chapter contains basic definitions and some preliminary results from the theory of general Markov chains. Harris recurrence and ergodicity are reviewed briefly. Chapter 2 deals with the basic theory of MRW s and reflected MRW s driven by general Markov chains. First we review some basic facts on the theory of MRW s. Most of the concepts and results can be found in Arjas [6, 7] and Arjas and Speed [8]. Next we are concerned with reflected MRW s and, following Nummelin [45] and Alsmeyer [4], we get a stationary distribution of a reflected MRW with negative drift as an occupation measure with respect to the first weak descending ladder epoch. The remainder of Chapter 2 deals with MRW s with lattice-type increments. In this case we obtain the joint stationary distributions of reflected MRW s in simpler forms.

9 v Chapter 3 and Chapter 4 are the main parts of this dissertation. In chapter 3 we find moment conditions for the first weak descending ladder epochs of MRW s and for the regeneration epochs of reflected MRW s with negative drift. Moments of the first weak descending ladder epoch are of particular interest in the theory of MRW s. Moment conditions for the first weak descending ladder epoch of an ordinary random walk are known see Theorem I.5. in Gut [30]. For a Markov random walk, some results on the finiteness of moments of the first weak descending ladder epoch can be found in Fuh and Lai [28] and Alsmeyer [5]. In particular, the results of Fuh and Lai can be regarded as special cases of our results. In Chapter 4 we are concerned with semi-markov queues. Throughout this chapter we assume that the stability conditions are satisfied. We first consider single server queues with general modulation chains and find rates of convergence to the stationary distribution and conditions for the finiteness of moments of stationary waiting time and workload processes. For GI/GI/ queues, rates of convergence are available in Kalashnikov [33] Chapter 5.3 and conditions for the finiteness of moments of stationary waiting time and workload processes in Asmussen [3] Theorem X.2.. Sharma [55] obtained the same results for a R/R/ queue, which can be regarded as the special case of countable modulation. Finally, we point out that for a G/G/ queue, in which interarrival times and service times form a stationary process, Daley, Foley and Rolski [26] obtained some conditions for finite moments of the stationary waiting time. In the remainder of this Chapter we examine multiserver semi-markov queues. In particular, for 2-server queues with countable modulation chains, we show that under some additional conditions the workload process is regenerative. Acknowledgements. I am grateful to my supervisor Prof. Dr. Gerold Alsmeyer for suggesting the topic of the thesis and for his many valuable comments during the work.

10 vi INTRODUCTION

11 Chapter Introduction to the theory of general Markov chains A Markov process is one of successful stochastic processes. Its success is due to the relative simplicity of its theory and to the fact that Markov models can exhibit extremely varied and complex behavior. In this chapter we provide an introduction to the theory of Markov chains with general state space. The theory of general Markov chains forms a basis of this thesis. Although the analysis of general Markov chains requires more elaborate techniques than in the discrete case, nowadays the general theory has been developed to a matured state. There are plenty of literature on general Markov chains. For the comprehensive treatments see Meyn and Tweedie [37], Nummelin [46] and references therein. We are mainly interested in Harris recurrence and stationary distribution of general Markov chains. After introducing some fundamental notions on kernels and Markov chains, we deal with Harris recurrence. The analysis of Markov chains with countable state space is based on the recurrence of individual states. However, if the state space is uncountable, one can not expect the existence of such states in general. The Harris recurrence is an extension of the notion of recurrence from individual states to sets. A Harris chain possesses a regenerative scheme based on the splitting technique, which is suggested by Athreya and Ney. From the existence of regeneration epochs one can construct a stationary measure, which is unique up to constant multiples.. Definitions and elementary properties Let, be a measurable space. A function K : [0, is called a kernel on,, if Ks, is a measure on, for all s and if K, A is a -measurable function for all A. If Ks, for any s, then the kernel K is called a transition kernel. It is known that such functions are well defined on Polish

12 2CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS spaces. Any kernel K can be interpreted as a nonnegative linear operator on the set of nonnegative measurable functions F + on by defining K fs := fs Ks, ds = Ks,, f, s for any f F +. In particular, we have Ks, A = K A s, s, A. By defining Kf = Kf + Kf, we may extend this to every measurable function on, such that Kf + and Kf are not both infinite. Similarly K acts on the class of positive measures M + on by λk = Ks, λds for any λ M +. For any fixed A, one defines a kernel I A by I A s, A := A A s, s, A. If K and K 2 are two kernels, their composition K K 2 is defined as K K 2 s, A = K 2 s, A K s, ds, s, A. The n-step iterates K n, n 0, of a transition kernel K are defined iteratively, K 0 = I and K n = K K n, n. Two kernels K and K on, are said to be adjoint with respect to a positive σ-finite measure ν on, if for any f, g F + Kf g dν = f Kg dν, which will also be written as Kf, g ν = f, Kg ν. Assume that a measure space Ω, S, IP is given, which is called the sample space. Let F n n 0 be a filtration and denote by F = σ F n the smallest σ-algebra A Polish space is a complete, separable metric space. Any locally compact space with a countable dense subset, any countable product of Polish spaces and function spaces with values in Polish space are examples of Polish spaces. There are examples in probability theory, where non-polish state spaces are required, but in general a state space is assumed to be Polish. There is a powerful and complete theory for probability on Polish spaces. For details see Appendix A in Asmussen [2] and references therein.

13 .. DFINITIONS AND LMNTARY PROPRTIS 3 generated by U F n. A sequence M = M n n 0 of, -valued random variables on Ω, S, IP is said to be F n n 0 -adapted, if M n is F n -measurable for any n 0. Letting F M n = σm k : k n, n 0, M is Fn M n 0 - adapted. The filtration Fn M n 0 is called the canonical filtration of M. An F n n 0 -adapted chain M n n 0 is called a Markov chain with respect to F n n 0, if for any n 0 IP [M n+ F n ] = IP [M n+ M n ] IP-a.s.. If, in addition, for a transition kernel P : [0, ] IP [M n+ F n ] = PM n, IP-a.s., then M is called a temporally homogeneous Markov chain with transition kernel P. The space, or is called the state space and the points of are called states. Throughout this dissertation a state space is assumed to be Polish, unless stated otherwise. The distribution λ defined by λ = IPM 0 is called an initial distribution. For any initial distribution λ on,, we define a distribution IP λ by the requirements IP λ M 0 = λ and IP λ [M n+ F n ] = PM n,, n 0. Obviously IP λ M 0 A 0,, M n A n = A 0 Ps n, ds n Ps 0, ds λds 0 A A n for any n IN 0 and A 0,, A n. If M starts at a point s, then we write IP s instead of IP δs. A σ-finite measure ξ 0 is called a stationary measure or an invariant measure for M n n 0 or P, if for any n ξ P n A = P n s, A ξds = ξa, A. If ξ is a probability measure satisfying the above equality, then it is called a stationary distribution or an invariant distribution. If ξ is a stationary distribution, then by the Markov property we have IP ξ M n n m = IP ξ M n n 0 for any m 0. Two Markov chains M and M are said to be in duality relative to ν, if their transition kernels are adjoint with respect to ν. One of the two chains is said to be the dual or time-reversed chain of the other one. If each of the two transition

14 4CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS kernels P and P is adjoint to a transition kernel P with respect to ν, then there is an ν-null set N such that Ps, = P s, for all s N c. If is countable, then the empty set is the only set of ν-measure zero and thus the duality condition is equivalent to the requirement P = ν P T ν, where P and P are the transition matrices of M and M, respectively, and ν is the diagonal matrix of ν. A IN 0 { }-valued random variable τ is called a stopping time w.r.t. the filtration F n n 0, if {τ = n} F n for all n 0. If τ is a stopping time w.r.t. the canonical filtration F M n n 0 of a Markov chain M, then it is called a stopping time for the Markov chain M n n 0. Important examples of stopping times for the Markov chain M n n 0 are the first hitting time κa and the first return time τa to a set A defined as κa := inf{n 0 : M n A} and τa := inf{n : M n A}. A random time τ is called a randomized stopping time for the Markov chain M n n 0, if for every n 0 the event {τ = n} and the post n-chain M n+, M n+2, are conditionally independent given the pre-n-chain M 0,, M n, or equivalently, IP [τ = n F M ] = IP [τ = n F M n ] IP-a.s. If τ is a stopping time w.r.t. a filtration F n n 0 and if a Markov chain M is F n n 0 - adapted, then τ is a randomized stopping time for M. Conversely, if τ is a randomized stopping time for M n n 0, then M n n 0 is adapted and τ is a stopping time w.r.t. the filtration F τ n n 0 defined as F τ n := σm k k n, {τ = k : k n}, n 0. By Pitman and Speed [47], the following are equivalent: i τ is a randomized stopping time w.r.t. F n n 0 ; ii for each n IN 0, F τ n and F are conditionally independent given F n ; iii for each n IN 0, I[X F τ n] = I[X F n ] a.s. for each integrable F -measurable random variable X. Let M n n 0 be a F n n 0 -adapted Markov chain and τ a randomized stopping time w.r.t. F n n 0. It is known that M n n 0 possesses the strong Markov property w.r.t. a randomized stopping time τ, i.e., on {τ < } IP [M τ+n M τ ] = P n M τ, IP-a.s. for any n 0. We denote by F τ the σ-algebra of events which are observed up to time τ, i.e., F τ = {A F : A {τ = n} F n for all n 0}.

15 .2. RCURRNC 5.2 Recurrence A set R is called a recurrent set, if for any s IP s M n R i.o. = IP s τr < =. Let τ n n 0 be the sequence of stopping times defined as τ 0 = τr and τ n+ = inf{k > τ n : M k R}, n 0. If R is a recurrent set, then by the strong Markov property the sequence M τ n n 0 := M τn n 0 forms a temporally homogeneous Markov chain. The following assertions are well known see Theorem I.2.3 in Borovkov [9] for example: Proposition. Suppose that a Markov chain M has a stationary distribution ξ and that there exists a recurrent set R with ξr > 0. Then the chain M τ n n 0 has a stationary distribution ξ R defined as ξ R = ξ R. ξr If, in addition, ξ is the unique stationary distribution for M, then τr ξ = I ξ RτR I ξ M R n = ξr I ξ R = R τr M n IP s M n, τr > n ξds. For fixed R, n, let R P n be the kernel defined as RP n s, A = IP [M n A, τr n M 0 = s], s, A, which is known as n-step taboo kernel with taboo set R. Obviously we have RP n s, R = IPτR = n, n, and I s τr = n R P n s, R for any s. Moreover, the transition kernel P R for Mn τ n 0 is given as P R s, R A = RP n s, R A, s R, A.

16 6CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS Note that for any A τr τr I ξ R M n A = I ξ R M n A = RP n s, A ξ R ds. Thus if ξ is the unique stationary distribution for M, then ξ can be also written as ξa = ξr RP n s, A ξ R ds = RP n s, A ξds R for any A. Remark.2 In the situation of Proposition., the cycles Z n := τ n+ τ, M k τn k<τ n+, n 0, are stationary under IP ξ R. Let τ n n 0 be a sequence of a.s. finite stopping times, such that there exists a distribution ζ with ζ = IP λ M τ n for any n 0 and for any initial distribution λ. Then the cycles Z n := τ n+ τ n, M k τ n k<τ n+, n 0, are also stationary under IP ζ. If, in addition, I ζ τ <, then it can be shown that the measure defined as τ I I ζ τ ζ M n is a stationary distribution for M. For details see Alsmeyer [3]. The following proposition is a consequence of Dynkin s formula and gives a criterion for a set to be positive recurrent. Proposition.3 For a set R, the expectation I s τr is finite for any s if and only if there exists a nonnegative measurable function V : [0,, and a constant > 0, such that i sup s/ R V s V s IP s ds ; ii V s V s IP s ds < for all s R. In this case, we have I s τr { V s + { : s / R V s + V s V s IP s ds } : s R. Proof. See Corollary 5.2. in Kalashnikov [33]. QD

17 .3. HARRIS RCURRNC 7 Moreover, we have the following criteria for the first return time to a set to have finite moments. Proposition.4 Let R be a measurable set. i Suppose that there exist a nonnegative function V : [0,, positive numbers, b and α >, and a random variable Λ defined on, such that the following relations are fulfilled: a v R := sup V s < ; b IP V M V s Λs = for all s ; c sup s/ R I Λs ; d sup s I Λs α b <. Then I s τr α for some constants a, b, α and c, b, α, v R. { α 2V s a, b, α + δ : s / R c, b, α, v R : s R, ii For some γ > 0, the expectation I s expγτr is finite for any s if and only if there exists a nonnegative function V : [,, such that a V s V s IP s ds exp γ V s for all s / R; b V s V s IP s ds < for all s R. In this case, we have { I s expγτr e γ { V s : s / R V s + V s V s IP s ds } : s R. Proof. See Theorem and Corollary in Kalashnikov [33]. QD.3 Harris recurrence Let ϕ be a nontrivial σ-finite measure on,. Definition.5 i A Markov chain M n n 0 with transition kernel P is called ϕ-irreducible, if for any s and A with ϕa > 0 there exists n with IP s M n A > 0. ii A Markov chain M is called d- periodic, if there exists a finite sequence of sets i, i =,, d, such that IP s M i+ =, if s i,

18 8CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS where we set d+ =. If d =, then it is called aperiodic. If a Markov chain is ϕ-irreducible, then the set \ d i= i is a ϕ-null set. It is known see Theorem 3. in Asmussen [2] that, if M is ϕ-irreducible and ϕa > 0, then there exist a measurable set R A, r and p > 0 such that IP s M r A p ϕr A for all s R, A. If M is ϕ-irreducible, then ϕ is called an irreducibility measure for M. A ϕ-irreducible Markov chain has many different irreducibility measures. The measure ψ defined as ψ = 2 n ϕ P n is a maximal irreducibility measure, in the sense that all other irreducibility measures are absolutely continuous w.r.t. ψ. The maximal irreducibility measures are equivalent. It is known that, if a ϕ-irreducible Markov chain M n n 0 has a stationary measure ξ, then it is unique up to constant multiples and is equivalent to maximal irreducibility measures. Definition.6 A temporally homogeneous Markov chain M n n 0 with transition kernel P is called Harris recurrent or Harris chain, if there exists a recurrent set R, such that for some p 0, ], r, and a distribution ϕ on with ϕr = P r s, A p ϕa for any s R, A.. The set R is called a regeneration set, and we say that M n n 0 satisfies the minorization condition MR, p, r, ϕ. If r =, then M is called strong aperiodic. A Markov chain M n n 0 is called ϕ-recurrent, if any A with ϕa > 0 is a recurrent set. Obviously a ϕ-recurrent Markov chain is ϕ-irreducible. Furthermore, a Markov chain is Harris recurrent if and only if it is ϕ-recurrent. In this case, any recurrent set of M contains a regeneration set of M. Remark.7 A discrete Markov chain M n n 0 is Harris recurrent if, and only if, it contains a communication class K of recurrent states such that IP i τk < = for all i. Thus M n n 0 can also possess transient states, from which K can be reached with probability. In this case, every set R = {j} with j K is a regeneration set, since M satisfies the minorization condition MR, p, r, ϕ with r {n : p n jj and ϕ = δ j. > 0}, p = p r jj If M is a discrete, irreducible Markov chain, then the successive return times to a recurrent state form an identically and independently distributed i.i.d. sequence. At each time the chain enters the state, it starts a new tour with the same distribution, regardless of the preceding sample path. This leads to a decomposition of the chain into cycles with i.i.d. distribution. Unfortunately this is not true in general, if the state space is uncountable. But any Harris chain has or can be modified to have regeneration epochs in some generalized sense. We introduce some definitions, which are known for general stochastic processes.

19 .3. HARRIS RCURRNC 9 Definition.8 Let X = X t t Γ be a discrete- or continuous-time stochastic process with state space Γ = IN 0 or IR + 0. Assume that there are random times 0 = τ 0 < τ < τ 2 < < a.s. Consider cycles Z n := τ n+ τ n, X t τn t<τ n+, n 0. i We call X or the pair τ, X wide-sense regenerative, if the cycles Z n, n, are identically distributed and the sequence Z k k n does not depend on τ 0, τ,, τ n for n : ii A wide-sense regenerative process X or τ, X is called classical-sense regenerative, if the cycles Z n, n 0, are independent: iii X or τ, X is called l-dependent regenerative, if the cycles Z n, n 0, are l-dependent and identically distributed for n. In each case of i-iii, the random times τ n, n 0, are called regeneration epochs. If further the cycles Z n, n 0, are identically distributed, then we say that X is zero-delayed. A wide-sense or l-dependent regenerative process X with regeneration epochs τ n, n 0, is called positive recurrent, if I τ 2 τ <, and null recurrent, otherwise. Note that a l-dependent regenerative process X is always one-dependent regenerative, since to a given l-dependent cycles Z n with regeneration epochs τ n, n 0, one can associate new cycles Ẑn := Z k τln k<τ ln+, n 0, which are one-dependent. If τ, X is wide-sense regenerative, then the sequence of regeneration epochs τ n n 0 forms a renewal process, which is called the embedded renewal process. Remark.9 A wide-sense regenerative process with one-dependent cycles is often called weak regenerative. Suppose that τ, X is positive recurrent, wide-sense regenerative and that there exists a distribution IP 0 such that τn τn IP 0 n 0, X t t Γ = IP τ n 0, X +t t Γ. Denote by θ t, t Γ, shift-operators defined as θ t X = X t+u u 0. Let IP be the distribution defined as IP = I 0 θ t X dt, I 0 τ 0 if Γ = IR + 0, and IP = I 0 θ n X, I 0 τ if Γ = IN 0, where I 0 means the expectation under the distribution IP 0. It is known see Kalashnikov [32] or Thorisson [63] that, if Γ = IR + 0 and the distribution IP 0 τ is spread out, then lim t IPθ tx IP = lim t IP 0 θ t X IP = 0.

20 0CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS If Γ = IN 0 and the span of τ under IP 0 is, then lim IPθ nx IP = lim IP 0 θ n X IP = 0. n n Regenerative processes play an important role in applied probability. There are plenty of literature on regenerative processes. A standard reference for regenerative processes is Thorisson [63]. Sigman and Wolf [60] give an expository survey including applications to the queueing theory. We now return to Harris chains. The following proposition says that a Harris chain is wide-sense as well as one-dependent regenerative. Proposition.0 Regeneration lemma Given a Harris chain M n n 0, there exist a filtration F n n 0 and a sequence τ n n 0 of random times, which have the following properties: i 0 = τ 0 < τ < τ 2 < < a.s. under IP λ for any distribution λ on ; ii M n n 0 is Markov-adapted and each τ k a stopping time with respect to F n n 0 ; iii under each IP s, s, the M τn are independent for n 0 and further identically distributed with common distribution ζ = IP λ M for any initial distribution λ and for n ; iv for each n 0 and s IP [τ n+j τ n, M τn+j j 0 F τn ] = IP Mτ n τ j, M j j 0 IP s -a.s.; v τ n+j τ n, M τn +j j 0 is independent of τ 0,, τ n for each n 0. Proof. The sequence of random times τ n n 0 can be obtained by the splitting technique, which was suggested by Athreya and Ney [5]. The construction requires in general enlarging the probability space to support the new Bernoulli random variables. Suppose that M satisfies the minorization condition MR, p, r, ϕ. Starting at any state, the chain M hits R eventually. Conditional on doing so, the distribution of the transition r steps later can be written as P r s, = pϕ + p P r s,, where P r s, = p P r s, pϕ. Let η n, n 0, be i.i.d. {0, }-valued random variables with IP s η n = = p. Thus if M n R, then M n+r is generated according to ϕ if η n = and according to P r s,, otherwise. The missing values of M n+,, M n+r are generated according to the conditional distribution given M n and M n+r, which exists on a Polish state space. If M n R, then M n+ is generated according to PM n,. Random times τ n, n 0, are defined recursively: τ 0 = 0 and τ n := inf{k τ n + r : M k r R, η k r = }, n.

21 .3. HARRIS RCURRNC Then the properties i-v are fulfilled with ζ := ϕ = IP λ M for any initial distribution λ. For details we refer to Alsmeyer [], Kalashnikov [32] or Lindvall [35]. QD We say that a sequence τ n n 0 forms a sequence of regeneration epochs for M n n 0, if it satisfies properties i through iv in Proposition.0. In Alsmeyer [], it is shown that a Markov chain M is Harris recurrent, if thus if and only if it possesses a sequence of regeneration epochs. Note that a Harris chain M is d-periodic, if the span of τ is d under IP ζ, where τ is a regeneration epoch constructed by the splitting technique see Proposition 3.0 in Asmussen [2]. Remark. In the proof of Proposition.0, we have considered the bivariate Markov chain M := M n, η n n 0 with state space {0, }, P{0, } to get regeneration epochs. If M is strong aperiodic, then transition kernel P of M can be given through Ps, 0, A {θ} = Ps,, A {θ} = { pθ + p θps, A : s / R pθ + p θ Ps, A : s R { pθ + p θps, A : s / R pθ + p θϕa : s R, for any A, θ {0, }, where P is defined in Proposition.0. In this case M is classical-sense regenerative. For the construction of P in the general case, see Kalashnikov [32]. Remark.2 Borovkov introduced renovative processes see Chapter 3 in Borovkov [9] or Foss and Kalashnikov [27]. Let Y n n 0 be a sequence of random variables on defined by the recursive relation Y n+ = fy n, X n, n 0, where X n n 0 is a sequence of i.i.d. random variables taking values from a Polish space and the mapping f : is supposed to be measurable. Note that the sequence Y n n 0 forms a temporally homogeneous Markov chain on. Denote by f k the kth iteration of f: For any y, x 0,, x k k+, f y, x 0 = fy, x 0 ; f k+ y, x 0, x k = ff k y, x 0,, x k, x k, k. Suppose that there exist an integer r > 0 and measurable sets B r, C such that for any y, y C and x,, x r B r Define events f r y, x 0,, x r = f r y, x 0,, x r. C n = {Y n C}, B n = {X n r,, X n B r }, n r, A n = C n r B n.

22 2CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS The events A n, n 0, are called renovative and their occurrence times the renovation times. Suppose that IPB r > 0. Let further η r be the common value of f r y, x 0,, x r for y C, and ζ a distribution defined as ζ = IP [η r B r ]. Then the chain Y n n 0 satisfies the minorization condition IP s Y r A IPB r ζa for all s C, A. Thus if C is a recurrent set of Y n n 0, then Y n n 0 is Harris recurrent and the sequence of random variables τ n n 0 given as τ 0 = 0 and τ n = inf{k τ n + r : A k = }, n, forms a sequence of regeneration epochs. From the existence of regeneration epochs for Harris chains one can construct a stationary measure, which is unique up to constant multiples. Proposition.3 With the same notations as in Proposition.0, the measure ξ := I ζ M n,.2 defines a stationary measure for P, which is unique up to constant multiples. If I ζ τ <, then ξ := I ζ τ ξ is the unique stationary distribution for P. Proof. See Satz 8.3., Satz in Alsmeyer []. QD A Harris chain M n n 0 is called positive Harris recurrent, if M has a stationary distribution. Remark.4 A continuous-time Markov process M t t 0 is called a Harris process, if it is ϕ-recurrent for some σ-finite measure ϕ, i.e., for any A with ϕa > 0, IP s M t A dt = =, s. 0 Sigman [57] showed that, if a Markov process is one-dependent regenerative, it is a Harris process. It is also known that a Harris process has a unique up to a multiplicative constant stationary measure ξ. A Harris process with a finite stationary measure ξ is called positive Harris recurrent. In Proposition.0 we have constructed regeneration epochs for a Harris chain from a minorization condition and the first return time to the regeneration set. It is thus reasonable, to expect some relations between moments of the regeneration epochs constructed by the splitting technique from a regeneration set and the first return time to the regeneration set. We need the following lemma.

23 .3. HARRIS RCURRNC 3 Lemma.5 Let X n n 0 be a sequence of real valued random variables adapted to a filtration F n n 0 and τ an a.s. finite stopping time w.r.t. F n n 0. i Let α. If there exist l > 0 and l 2 > 0 such that for all n, then for some constant c. I[ X n α F n ] l < and I[τ α F 0 ] l 2 < [ τ I α F0 ] X n c l l 2 < ii Let γ > 0. If there exists l > 0 such that [ ] I expγx n F n l < for all n, then [ γ I exp 2 τ F0 ] X n { I[l τ F 0 ] } /2. Proof. i See Theorem in Borovkov and Utev [20]. ii The assertion can be deduced from Theorem 2 in Borovkov and Utev [20], but we give a simple proof. Let R n = l n exp γ n X k. Then the sequence R n n forms a positive supermartingale, and by the optional sampling theorem the sequence R n τ n also a positive supermartingale with I [R n F 0 ] for any n, which implies I [R n τ F 0 ] for any n. Thus, by Fatou s lemma, I [R τ F 0 ], since lim n R n τ = R τ a.s. Using Hölder s inequality, we have [ γ I exp 2 τ F0 ] X n k= = I [ R /2 τ l τ/2 F 0 ] { I [l τ F 0 ] } /2. QD The following assertions may be known, but we give full proofs, because we find no adequate proofs in literature. Proposition.6 Let M be a Harris chain satisfying the minorization condition MR, p, r, ϕ and τ n n 0 a sequence of regeneration epochs constructed by the splitting technique from the minorization condition.

24 4CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS i Let α. If sup I s τr α <, then sup I s τ α <. ii Let γ > 0. If sup I s expγτr <, then sup I s expγ τ < for some γ > 0. iii If R =, then sup s I s α < for any α and sup s I s expγτ < for some γ > 0. Proof. i A proof for the case r =, α = can be found in Borovkov [20], for example. To show the assertion for r 2, α, let M = M n, η n n 0 be the Markov chain constructed in Proposition.0 see also Remark.. The transition kernel P of M satisfies P r s, θ, A {0, } = { p P r s, A pϕa : s R, θ = 0 ϕa : s R, θ =, Furthermore, if s / R, then P s, θ, A {0, } = IPs, A for any θ {0, }. Let τ n, n, be the random variables defined as τ = τr and τ n = inf{k r + τ n : Mk R {0, }}, n 2. We set τ 0 R = 0. Let further ν be a random variable defined as By the geometric trial argument ν := inf{k : M τk R {}}. IP ν = k = p p k, k. From the construction of τ, it is easy to see that for any s R since I s, α each n, I s α sup I s,θ τν + r α s,θ R {0,} = sup I s,0 τν + r α ν α sup I s,0 τn τ n + r, = r α for any s R. Let G n := σ Mk : k τ n for n. Then, for I [ τ α ] n+ τ n G n { sup r α Pr s, 0, R {0, } α + I s r + τr Pr s, 0, ds {0, } } R c { } p sup αp r α P r s, R + I s r + τr r s, ds R c p sup α I s r + τr <.

25 .3. HARRIS RCURRNC 5 Since I ν α < for any α, we have sup I s τ α ii For any s R I s expγτ sup I s,θ exp γr + τ ν s,θ R {0,} ν sup I s,0 exp As in i, for each n [ I exp γ τ ] n+ τ Gn n γ { p r + < by Lemma.5 i. τn τ n. sup I s exp γr + τ p Moreover, by assumption, there exists a γ, 0 < γ γ, such that sup I s exp γ 2r + τr < + p 2. Letting for any s R L := { p I s L ν sup I s exp γ 2r + τr } p, L n p p n <. Put γ = γ /2. Then, by Lemma.5 ii, for any s R ν I s expγ τ I s,0 exp γ r + τn τ n { sup { I s p } /2 sup I s L ν <. }. sup I s exp γ 2r + τr p } ν /2 iii Clear from the proofs of i and ii, since τr =. QD The following proposition states the strong law of large numbers SLLN for real functions of Harris chains. Proposition.7 Let M be a positive Harris chain with a stationary distribution ξ. Consider a sequence of random variables Y n n 0 with Y n := fm n, n 0, for a measurable, nonnegative real-valued function f. Then lim n n for any initial distribution λ on. n Y k = I ξ Y k=0 Proof. See Theorem in Revuz [53]. IP λ -a.s. QD

26 6CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS.4 rgodicity Definition.8 Let M be a Markov chain with transition kernel P and denote by M λ a Markov chain with transition kernel P and initial distribution λ. i We say that M admits coupling, if for any two initial distributions µ and λ there exist Markov chains M µ and M λ on a common probability space such that where T is a finite random time. M µ n = M λ n, n T, ii We say that M admits shift-coupling, if for any two initial distributions µ and λ there exist Markov chains M µ and M λ on a common probability space such that where T and T are finite random times. M µ T +n = M λ T +n, n 0, The following two propositions give characterizations of Harris chains. Proposition.9 Let M be a general Markov chain with a stationary distribution ξ. Then the following assertions are equivalent: i M admits shift-coupling; ii for any initial distribution λ the distribution of M converges to ξ in Cesaro total variation, i.e., n lim IP λ M k ξ n n + = 0; k=0 iii M is positive Harris recurrent. Proof. For the equivalence of i and ii see Theorem in Thorisson [63]. The equivalence of i and iii follows from Theorem in Thorisson [63]. QD If M is positive Harris recurrent and aperiodic, then it is called Harris ergodic. Proposition.20 Let M be a general Markov chain with a stationary distribution ξ. Then the following assertions are equivalent: i M admits coupling; ii for any initial distribution λ the distribution of M converges to ξ in total variation, i.e., lim n IP λ M n ξ = 0; iii M is Harris ergodic.

27 .4. RGODICITY 7 Proof. For the equivalence of i and ii see Theorem 6.4. in Thorisson [63]. The equivalence of i and iii follows from Proposition VII 3.3 in Asmussen [2]. QD It is known that rates of convergence of regenerative processes are closely related to moments of regeneration epoch. For details see Lindvall [35], Kalashnikov [32] and Thorisson [63]. The following assertions are known. Proposition.2 Let M be a Harris ergodic Markov chain with a stationary distribution ξ and τ the first regeneration epoch constructed by the splitting technique. Let further ϕ be the distribution defined as ϕ = IP λ M. i If I ϕ τ α+ < for some α > 0, then for some constant c IP ϕ M n ξ cn α. ii If I ϕ expγτ < for some γ > 0, then for some constants c and γ 0, γ] IP ϕ M n ξ c exp γ n. iii Let λ and µ be initial distributions on. If for some α I λ τ α <, I µ τ α and I ϕ τ α <, then lim n nα IP λ M n IP µ M n = 0. Proof. See Corollary 5.. in Kalashnikov [32] for the proof of i and ii, and Theorem in Thorisson [63] for iii. QD Corollary.22 Let M be a Harris ergodic Markov chain with a stationary distribution ξ and R a regeneration set. i Let α > 0. If sup I s τr α+ <, then for some constant c IP ϕ M n ξ cn α. ii Let γ > 0. If sup I s expγτr <, then for some constants c and γ > 0 IP ϕ M n ξ c exp γ n. Proof. All assertions follow directly from Proposition.6 and Proposition.2. QD

28 8CHAPTR. INTRODUCTION TO TH THORY OF GNRAL MARKOV CHAINS A Markov chain M is called uniformly ϕ-recurrent, if it satisfies the condition for any A with ϕa > 0. sup IP s τa > n 0 as n s Proposition.23 The following conditions are equivalent: i M is uniformly ϕ-recurrent and aperiodic; ii M is aperiodic and is a regeneration set, i.e., there exist an integer n 0, a constant α > 0 and a distribution ψ such that sup IP s M n0 αψ ; s iii there exist positive constants c < and ρ < such that for any n 0 and s. IP s M n ξ < cρ n Proof. See Theorem 6.5 in Nummelin [46]. QD If one of the conditions i through iii of Proposition.23 holds true, M is called uniformly Harris ergodic.

29 Chapter 2 Markov random walks A Markov random walk MRW is a bivariate sequence M n, S n n 0 consisting of a temporally homogeneous Markov chain M = M n n 0 with arbitrary state space, and a sequence S n n 0 of real random variables, whose increments X 0, X,, say, are distributionally governed by M. The latter means that X 0, X,, are conditionally independent given M and that the conditional distribution of X n given M depends only on M n and M n for n on M 0 alone, if n = 0. The special case, where M is constant, leads back to ordinary random walks having i.i.d. increments. Since Markov modulation, as opposed to the i.i.d. case, offers greater flexibility in the modeling of fluctuations of additive random sequences without losing too much structural homogeneity, it is not surprising that MRW s have become a popular tool to provide more flexible and thus realistic models in areas like risk theory and queueing theory. The special case of finite modulation finite has been extensively studied by various authors like Pyke, Cinlar and Arjas, and there is now a well developed theory for this case as to renewal and fluctuation theoretic aspects. Roughly speaking, if M has finite state space, then much of the theory can be obtained in an elegant manner via regenerative decomposition and subsequently resort to classical results for ordinary random walks. Unfortunately, this is not true to the same extent, when M has infinite, possibly uncountable state space, whence the theory in this case has not yet reached comparable maturity. This chapter deals with MRW s driven by general state space Markov chains including reflected MRW s. Throughout this chapter, the driving chain M is always assumed to be positive Harris recurrent with general state space, and a unique stationary distribution ξ, unless stated otherwise. 2. Markov random walks In this section we review the fundamental aspects of MRW s driven by general Markov chains. Our discussion is based on Arjas [6, 7], Arjas and Speed [8, 9]. 9

30 20 CHAPTR 2. MARKOV RANDOM WALKS 2.. Definitions A mapping K : B [0, ] is called a semi-markov transition kernel, if i s Ks, A B is bounded, B-measurable for any A B B; ii A B Ks, A B is a probability measure on B for any s. We define the composition K K 2 of semi-markov transition kernels K and K 2 as K K 2 s, A B := K 2 s, A B x K s, ds dx, s, A B B. IR The n-step iterates of a semi-markov transition kernel K, n 0, are defined recursively K 0 = I and K n = K n K, n, where the kernel I is defined as Is, A B = δ s A δ 0 B, s, A B B. A Markov random walk MRW or Markov additive process MAP Markov chain M n, S n n 0 with transition kernel Q of the form is a bivariate Qs, x, A B = Ks, A B x, s, x IR, A B B, for some semi-markov transition kernel K. For any B B, define the operator QB on the set of nonnegative measurable functions F + as QBfs := fs Ks, ds B, s, f F +. A continuous-time Markov additive process can be defined in a similar manner. Let {K t : t 0} be a family of semi-transition kernels such that K t s, A B = K t t s, A B x K t s, ds dx IR for any t < t, t, t IR + 0, s, s, x IR, A, B B. A Markov additive process M t, S t t 0 is a bivariate Markov process with transition semigroup Q t t 0 defined as Q t s, x; A B = K t s, A B x, s, x IR, A B B, t 0 for a family of semi-transition kernels {K t : t 0}. It is clear that M t t 0 is a Markov process with the transition semigroup Q M t t 0 defined as Q M t s, A = Q t s, A IR, s, A, t 0. Furthermore, it is known that, given the process M t t 0, the process S t t 0 has independent increments, that is, I [ Π n i=h i S ti S ti F ] = Π n i=i [ h i S ti S ti F ], for any n, 0 t 0 < t < < t n and bounded measurable functions h, h n on, where F denotes the canonical filtration for M t t 0.

31 2.. MARKOV RANDOM WALKS 2 In particular, we have for any A QB A s = Ks, A B, s. A MRW M n, S n n 0 is called a Markov renewal process, if the increments S n+ S n, n 0, of its additive part are a.s. positive, i.e., if Qs, 0, 0, = for all s. The Markov chain M n n 0 is called the driving chain or underlying Markov chain. Obviously a renewal process is equivalent to the special case of a Markov renewal process with a one-state driving chain. If Nt t 0 denotes the counting process for a Markov renewal process M n, S n n 0, i.e., Nt := sup{n 0 : S n t}, t 0, then the process M S t t 0 defined by M S t = M Nt, t 0, is called a semi-markov process. The sequence of increments X n := S n S n, n, of the additive part plays an important role in the theory of MRW s. Putting X 0 = S 0, the process M n, X n n 0 forms a temporally homogeneous Markov chain with transition kernel P : B [0, ] satisfying Ps, A B = IP [M A, X B M 0 = s], s, A B B. One can easily see that M n+, X n+ depends on the past only through M n for each n 0 and that M n n 0 forms a Markov chain with state space and the transition kernel P M defined as P M s, A := Ps, A IR, s, A. Given M n n 0, the X n, n 0, are conditionally independent with IP [X n B M j j 0 ] = FM n, M n, B IP a.s., for all n, B B and a kernel F : 2 B [0, ]. The process M n, X n n 0 is called a Markov modulated sequence with the driving chain M n n 0. Let throughout a canonical model be given with probability measures IP s,x, s, x IR, on Ω, S such that IP s,x M 0 = s, X 0 = x =. For any distribution λ on IR, define IP λ := IP s,x λds, dx, IR in which case M 0, X 0 has the initial distribution λ. The expectation under IP λ is denoted by I λ. For s and an initial distribution λ on, we write I s and I λ instead of I s,0 and I λ δ0, respectively. For any C B, σc denotes the first return time of M n, W n n 0 to C. For each fixed C B and n, we define the probability distributions H n C s, A; B and G n s, A; B as C H n C s, A; B = IP s M n A, S n B, σc > n; G n C s, A; B = IP s M n A, S n B, σc = n, s, A, B B.

32 22 CHAPTR 2. MARKOV RANDOM WALKS One can easily see that Qs, A B x H n C s, ds ; dx = H n+ C s, A; B + G n+ C s, A; B. IR Define the corresponding transforms Ĥα,β C Ĥ α,β C s, A = Ĝ α,β C s, A = and Ĝα,β C as α n e βx H n s, A; dx 0 = I s σc C α n e βsn ; M n A ; α n e βx G n s, A; dx for any s, A. Further, define H C s, A; B = 0 C = I s α σc e βs σc ; M σc A, σc < H n C s, A; B and G Cs, A; B = G n C s, A; B. In particular, if B = IR, then we write H C s, A and G C s, A instead of H C s, A; IR and G C s, A; IR, respectively. Obviously, we have H C s, A = Ĥ,0 C s, A and G Cs, A = Ĝ,0 C s, A, s, A The Harris recurrence of Markov modulated sequences Nummelin [45] has shown that a Markov modulated sequence M n, X n n 0 is positive Harris recurrent and that the measure ν defined as νa B := IPs, A B ξds, A B B, is a unique stationary distribution for M n, X n n 0. Furthermore, a coupling argument shows that M n, X n n 0 is also Harris ergodic, provided that M is Harris ergodic. Sometimes one needs to consider the sequence M n, X n+ n 0. In this case, it turns out that regeneration epochs for M are also regeneration epochs for M n, X n+ n 0. Proposition 2. Let τ n n 0 be a sequence of regeneration epochs for M n n 0. Then the sequence M n, X n+ n 0 is one-dependent as well as wide-sense regenerative with regeneration epochs τ n, n 0, and for any initial distribution λ IP ζ M k, X k+ k 0 = IP λ M τn+k, X τn+k+ k 0, n,

33 2.. MARKOV RANDOM WALKS 23 where ζ = IP λ M. If the state space is countable, then M n, X n+ n 0 is classical-sense regenerative. Proof. Consider the cycles Z n defined as Z n := τ n+ τ n, M k, X k+ τn k<τ n+, n 0. Obviously the sequence Z k k n does not depend on τ 0, τ,, τ n for any n. Moreover, by conditional independence of X n n 0 given M n n 0, one can easily see that for any n and any initial distribution λ IP λ Z n = IP s Z 0 IP λ M τn ds = IP ζ Z 0, thus M n, X n+ n 0 is wide-sense regenerative. On the other hand, for any initial distribution λ and for any n 0 IP λ [Z n+2 F τn+ ] = IP [ τ n+3 τ n+2, M k, X k+ τn+2 k<τ n+3 M τn+ ] = IP ζ Z 0, where F n n 0 is the canonical filtration for the sequence M n, X n n 0. Since Z n is F τn+ -measurable, the cycles Z n, n 0, are one-dependent. In particular, if M is discrete, then there exists a recurrent state i 0. Thus for any i IP i [Z n+ F τn+ ] = IP [ ] τ n+2 τ n+, M k, X k+ τn+ k<τ n+2 M τn+ = IP i0 Z 0, which means that the cycles are independent. QD Note that for any sequence of regeneration epochs τ n n 0 for M n n 0 I ξ X = I ζ X n+ = I ζ X n, I ζ τ I ζ τ where ζ = IP λ M for any initial distribution λ. The SLLN for a MRW M n, S n n 0 is a direct consequence of the Harris recurrence of M n, X n n 0 and Proposition.6. But we give a full proof, in which the structure of one-dependence in MRW s is exploited. Proposition 2.2 SLLN for MRW s Given a Markov modulated sequence M n, X n n 0, it holds that S n lim n n = I ξ X IP λ a.s. for any initial distribution λ on IR.

34 24 CHAPTR 2. MARKOV RANDOM WALKS Proof. Let τ n n 0 be a sequence of regeneration epochs of M n n 0 and let ζ = IP λ M. We note first that for any initial distribution λ S n IP λ lim n n = I S +n S S n ξx = IP λ lim = I ξ X = IP ζ lim n n n n = I ξx. Therefore it is sufficient to show the assertion only for IP ζ. By assumption, the sequence Sn n := S τn n has stationary increments Xn = τn k=τ n + X k, n, which is one-dependent under IP ζ and in turn, by Birkhoff s ergodic theorem, we have as n Sn n I ζx = I ζ X k = I ζ τ I ξ X IP ζ a.s. k= Let T n := inf{k 0 : τ k > n}. Then as n T n n I ζ τ IP ζ a.s., whence S T n n = T n n S T n T n I ξ X and S T n n I ξ X IP ζ a.s. The assertion follows from the inequality S T n n S n n S T n n. QD Let µ := I ξ X, which is called drift of the MRW M n, S n n 0. As a direct consequence of Proposition 2.2, we get for any initial distribution λ µ < 0 lim S n = n IP λ a.s.; 2. µ > 0 lim S n = + n IP λ a.s Maximum of MRW s Suppose that S 0 = 0 hereafter and put σ > := σ0,, σ := σ[0,, σ < := σ, 0 and σ := σ, 0], which are called the first strict ascending ladder epoch, the first weak ascending ladder epoch, the first strict descending ladder epoch and the first weak descending ladder epoch, respectively. If σ is a.s. finite for {, >,, <}, one can also define, in an

35 2.. MARKOV RANDOM WALKS 25 obvious manner, the nth ladder epochs σ n, n, with σ = σ. Clearly σ n, n, are stopping times. For notational convenience, we write H n >, H n H n < and H n instead of Hn 0,, Hn [0,, Hn,0 and Hn,0] notational conventions are used also for G n 0,, Gn [0,, Gn,0 transforms. Consider the maximum of the partial sums S n := max 0 k n S k.,, respectively. The same and Gn,0] and their Noting that, for 0 m n, S m is maximal among the first n partial sums if and only if S m is the last strict ascending ladder height before n, it can be easily seen that for all α, β < α n = α n e βx IP s M n A, S n dx n m=0 e βx IP [M n A, S m+ S m 0,, S n S m 0 M m = s ] IP s M m ds, S m dx, S m 0, S m > S,, S m > S m. Thus we get the following equality, which was obtained by Arjas [7]: Now consider α n I s e βsn ; M n A = S := sup S n. n 0 For any s, A and x IR, we define Ĥ α,0 > s, A Ĝα,β > n s, ds. 2.3 G 0 >s, A;, x = δ s Aδ 0, x; G n >s, A;, x = G > s, A;, x y G n > s, ds ; dy, n. 0 Proposition 2.3 If µ < 0, then S is a.s. finite. Let τ := inf{n : S n = S}. Then for any s, A and x IR IP s M τ A, S τ < x = A G > s, G n >s, ds ;, x 2.4 and I s e βs = G> s, n Ĝ,β > s, ds. 2.5

Regenerative Processes. Maria Vlasiou. June 25, 2018

Regenerative Processes. Maria Vlasiou. June 25, 2018 Regenerative Processes Maria Vlasiou June 25, 218 arxiv:144.563v1 [math.pr] 22 Apr 214 Abstract We review the theory of regenerative processes, which are processes that can be intuitively seen as comprising

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Convergence Rates for Renewal Sequences

Convergence Rates for Renewal Sequences Convergence Rates for Renewal Sequences M. C. Spruill School of Mathematics Georgia Institute of Technology Atlanta, Ga. USA January 2002 ABSTRACT The precise rate of geometric convergence of nonhomogeneous

More information

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes Lectures on Stochastic Stability Sergey FOSS Heriot-Watt University Lecture 4 Coupling and Harris Processes 1 A simple example Consider a Markov chain X n in a countable state space S with transition probabilities

More information

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M.

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M. Lecture 10 1 Ergodic decomposition of invariant measures Let T : (Ω, F) (Ω, F) be measurable, and let M denote the space of T -invariant probability measures on (Ω, F). Then M is a convex set, although

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk ANSAPW University of Queensland 8-11 July, 2013 1 Outline (I) Fluid

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information
