SUBGEOMETRIC RATES OF CONVERGENCE FOR A CLASS OF CONTINUOUS-TIME MARKOV PROCESS

J. Appl. Prob. 42, 698–712 (2005)
Printed in Israel
© Applied Probability Trust 2005

SUBGEOMETRIC RATES OF CONVERGENCE FOR A CLASS OF CONTINUOUS-TIME MARKOV PROCESS

ZHENTING HOU and YUANYUAN LIU, Central South University

HANJUN ZHANG, University of Queensland

Abstract

Let $(\Phi_t)_{t\in\mathbb{R}_+}$ be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure $\pi$. We investigate the rates of convergence of the transition function $P^t(x,\cdot)$ to $\pi$; specifically, we find conditions under which $r(t)\,\|P^t(x,\cdot)-\pi\|\to 0$ as $t\to\infty$, for suitable subgeometric rate functions $r(t)$, where $\|\cdot\|$ denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.

Keywords: Continuous-time Markov process; queueing model; birth–death process; ergodicity; subgeometric convergence

2000 Mathematics Subject Classification: Primary 60J25; Secondary 60K25

1. Introduction

In the past few decades, much effort has been made to study ergodic theory for continuous-time Markov processes. To the authors' knowledge, the literature (e.g. [1], [3], [13], and [14]) has focused on Harris ergodicity, exponential ergodicity, and strong ergodicity. For discrete-time Markov processes, there has been much work (e.g. [15] and [18]) on subgeometric ergodicity, which is, roughly speaking, a kind of convergence quicker than ordinary ergodicity and slower than exponential convergence. However, there are few works on subgeometric convergence for continuous-time Markov processes, the topic of this paper.

To obtain our results, we shall make the following assumption about the Markov process.

Assumption 1.1. There exists a state $x_0$ such that, whenever the Markov process hits $x_0$, it will sojourn there for a random time that is positive with probability 1.

It is easy to find many Markov processes satisfying Assumption 1.1, for example many queueing processes and all the totally stable continuous-time Markov chains (i.e. continuous-time Markov processes on countable state spaces).

Received 11 October 2004; revision received 2 February 2005.
Postal address: School of Mathematics, Central South University, Changsha, Hunan, 410075, P. R. China. Email address: zthou@csu.edu.cn
Email address: liuyy@csu.edu.cn
Postal address: Department of Mathematics, The University of Queensland, Queensland, 4072, Australia. Email address: hjz@maths.uq.edu.au

If a continuous-time Markov process satisfies Assumption 1.1 then, due to the Markov property and the homogeneity of the process, it can easily be proved that the sojourn time in state $x_0$ is exponentially distributed with some parameter $\lambda$, $0<\lambda<\infty$. Thus, the state $x_0$ behaves just like a state in a continuous-time Markov chain, and we can apply the skeleton chain method, used to study continuous-time Markov chains, to the continuous-time Markov process.

Throughout the paper, we denote by $\mathbb{R}_+$ the set of nonnegative real numbers, by $\mathbb{Z}_+$ the set of nonnegative integers, and by $\mathbb{N}_+$ the set of positive integers. Let $(\Phi_t)_{t\in\mathbb{R}_+}$ be a time-homogeneous continuous-time Markov process satisfying Assumption 1.1 and taking values in a general state space $X$ endowed with a countably generated $\sigma$-field $\mathcal{B}(X)$. We denote by $P^t(x,A)$, $t\in\mathbb{R}_+$, $A\in\mathcal{B}(X)$, the transition function of the Markov process:
$$P^t(x,A) = P_x[\Phi_t\in A] = E_x[1\{\Phi_t\in A\}].$$
Here, $P_x$ and $E_x$ respectively denote the probability and expectation of the Markov process under the initial condition $\Phi_0=x$, and $1\{\cdot\}$ is an indicator function.

In order to study the subgeometric rates of convergence, we suppose that $\Phi_t$ is Harris ergodic with unique invariant probability measure $\pi$, i.e. for all $x\in X$,
$$\|P^t(x,\cdot)-\pi\|\to 0,\qquad t\to\infty, \tag{1.1}$$
where $\|\mu\|=\sup_{|g|\le 1}|\mu(g)|$ denotes the usual total variation norm for a signed measure $\mu$. Obviously, if $\Phi_t$ is Harris ergodic then it is $\varphi$-irreducible and Harris recurrent.

Now we introduce a collection of subgeometric rate functions that includes, for example, all functions of the form $r(t)=t^{\alpha}$, $\alpha>0$. Here, $r(t)$ is a continuous-time version of $r(n)$ in [18], originally introduced in [16]. To define this class, we first denote by $\Lambda_0$ the family of those functions $r\colon(0,\infty)\to(0,\infty)$ such that $r(t)$ is continuous and increasing in $t$ with $r(1)\ge 2$ and
$$\frac{\log r(t)}{t}\downarrow 0\qquad\text{as } t\to\infty. \tag{1.2}$$
Denote by $\Lambda$ those functions $r$ for which both $r(t)>0$ for all $t\in\mathbb{R}_+$ and there exists an $r'\in\Lambda_0$ which is equivalent to $r$ in the sense that
$$\liminf_{t\to\infty}\frac{r(t)}{r'(t)}>0 \quad\text{and}\quad \limsup_{t\to\infty}\frac{r(t)}{r'(t)}<\infty.$$
Without loss of generality, we assume that $r(0)<1$ whenever $r\in\Lambda$. The properties of these functions $r$ that we will use most frequently below, which follow from (1.2), are
$$r(x+y)\le r(x)\,r(y)\qquad\text{for all } x,y\in[0,\infty), \tag{1.3}$$
$$\frac{r(x+a)}{r(x)}\to 1\qquad\text{as } x\to\infty,\text{ for each } a\in(0,\infty). \tag{1.4}$$
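Properties (1.3) and (1.4) are easy to probe numerically for any candidate rate function. The following sketch (not part of the paper) does so in Python for the hypothetical choice $r(t)=2(1+t)^{1/2}$, which is continuous, increasing, satisfies $r(1)\ge 2$, and has $\log r(t)/t$ decreasing to $0$; the grid and tolerance are arbitrary.

```python
import numpy as np

# Illustrative check of properties (1.3) and (1.4) for a sample rate
# function; r(t) = 2*(1+t)**0.5 is a hypothetical member of Lambda_0
# chosen only for demonstration.
def r(t):
    return 2.0 * (1.0 + t) ** 0.5

grid = np.linspace(0.01, 50.0, 200)

# Property (1.3): r(x + y) <= r(x) * r(y) for all x, y >= 0.
ok = all(r(x + y) <= r(x) * r(y) + 1e-12 for x in grid for y in grid)
print("submultiplicativity (1.3) holds on the grid:", ok)

# Property (1.4): r(x + a) / r(x) -> 1 as x -> infinity, for fixed a > 0.
for x in (1e1, 1e3, 1e5):
    print("x = %.0e:  r(x+5)/r(x) = %.6f" % (x, r(x + 5.0) / r(x)))
```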

The structure of the paper is as follows. In Section 2, we obtain the main result, finding conditions under which the subgeometric convergence
$$r(t)\,\|P^t(x,\cdot)-\pi\|\to 0,\qquad t\to\infty, \tag{1.5}$$
holds for $\pi$-almost every ($\pi$-a.e.) $x\in X$, where $r(t)$ is a subgeometric rate function. In Section 3, this result is applied to the queue length process of the M/G/1 queue with multiple vacations. In particular, when the Markov processes are stochastically ordered, we are interested in obtaining more explicit results for the subgeometric rates of convergence, i.e. in finding explicit bounds on (a bounding function) $M(x)$ such that
$$\int_0^\infty r(t)\,\|P^t(x,\cdot)-\pi\|\,dt \le M(x)$$
for $r\in\Lambda$ and every $x\in X$. In Section 4, we apply the coupling method to obtain explicit bounds on $M(x)$. In Section 5 and Section 6, we discuss the M/G/1 queue and continuous-time birth–death chains, respectively, two stochastically monotone examples in which computable bounds on $M(x)$ are given.

2. Subgeometric rates of convergence

Let $(\Phi_{nh})_{n\in\mathbb{Z}_+}$, $h\in\mathbb{R}_+$, be a skeleton chain of the Markov process $(\Phi_t)_{t\in\mathbb{R}_+}$. Theorem 1 of [17] states that if any skeleton chain $\Phi_{nh}$ is Harris ergodic, then so is $\Phi_t$. In fact, if $\Phi_t$ is Harris ergodic, by setting $t=nh$ in (1.1) we have $\|P^{nh}(x,\cdot)-\pi\|\to 0$, $n\to\infty$, i.e. the skeleton chain $\Phi_{nh}$ is also Harris ergodic. The following lemma shows the equivalence between the subgeometric convergences of $\Phi_{nh}$ and $\Phi_t$.

Lemma 2.1. A Markov process $\Phi_t$ is subgeometrically convergent if and only if so too is any skeleton chain $\Phi_{nh}$ of $\Phi_t$.

Proof. If $\Phi_t$ is subgeometrically convergent then, by setting $t=nh$ in (1.5), we know that $\Phi_{nh}$ is also subgeometrically convergent. For any $h>0$, suppose that $\Phi_{nh}$ is subgeometrically convergent with invariant probability measure $\pi$, i.e. (1.5) holds for $t=nh$. By virtue of Theorem 1 of [17], we know that, for every bounded signed measure $\nu$ on $(X,\mathcal{B}(X))$, the function $t\mapsto\|\nu P^t\|$ is decreasing and $\pi$ is the unique invariant probability measure for $P^t$. For any $t>0$, choose $k\in\mathbb{N}_+$ such that $(k-1)h\le t<kh$. By use of monotonicity and (1.3), we have
$$r(t)\,\|\mu P^t-\pi\| = r(t)\,\|(\mu-\pi)P^t\| \le r(kh)\,\|(\mu-\pi)P^{(k-1)h}\| \le r(h)\,r((k-1)h)\,\|\mu P^{(k-1)h}-\pi\| \to 0,\qquad t\to\infty, \tag{2.1}$$
for any probability measure $\mu$. In particular, if in (2.1) we let $\mu(\cdot)=1_{\{x\}}(\cdot)$ be the single-point probability measure, then we recover (1.5).
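Lemma 2.1 is what allows discrete-time skeleton arguments to be transferred to $\Phi_t$. As a purely illustrative aid (not from the paper), the sketch below shows how an $h$-skeleton $\Phi_{nh}$ is read off a simulated trajectory stored as jump times and post-jump states; the helper name and the toy trajectory are invented for the example.

```python
import bisect

def skeleton(jump_times, states, h, n_steps):
    # Extract the h-skeleton Phi_{nh}, n = 0..n_steps, from a trajectory:
    # states[k] is the state held on [jump_times[k], jump_times[k+1]).
    out = []
    for n in range(n_steps + 1):
        k = bisect.bisect_right(jump_times, n * h) - 1  # last jump time <= nh
        out.append(states[k])
    return out

# Toy path: state 0 on [0, 1.3), state 2 on [1.3, 2.0), state 1 on
# [2.0, 3.7), state 0 from 3.7 onwards; sampled every h = 0.5.
print(skeleton([0.0, 1.3, 2.0, 3.7], [0, 2, 1, 0], h=0.5, n_steps=7))
# -> [0, 0, 0, 2, 1, 1, 1, 1]
```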

Now we recall the definition of a small set: a set $C\in\mathcal{B}(X)$ is called a small set if there exist an $n\in\mathbb{N}_+$ and a nontrivial measure $\nu_n$ on $\mathcal{B}(X)$ such that, for all $x\in C$ and $B\in\mathcal{B}(X)$, we have $P^n(x,B)\ge\nu_n(B)$. Also, let $\tau_B=\inf\{n\colon\Phi_n\in B\}$, $B\in\mathcal{B}(X)$, and $r^0(n)=\sum_{k=0}^{n}r(k)$. The following lemma can be deduced directly from Theorem 2.7 of [15].

Lemma 2.2. Let $\Phi_n$ be a Harris ergodic Markov chain on $X$ with invariant probability measure $\pi$. If $B$ is a small set with $\pi(B)>0$ and $\sup_{x\in B}E_x[r^0(\tau_B)]<\infty$, then
$$r(n)\,\|P^n(x,\cdot)-\pi\|\to 0,\qquad n\to\infty, \tag{2.2}$$
for $\pi$-a.e. $x\in X$.

Proof. Since $B$ is a small set of the chain $\Phi_n$ and $\sup_{x\in B}E_x[r^0(\tau_B)]<\infty$, by virtue of Theorem 2.7(i) and Theorem 2.6 of [15] we have $E_x[r(\tau_B)]\le E_x[r^0(\tau_B)]<\infty$ for $\pi$-a.e. $x\in X$. Let $\lambda(\cdot)=1_{\{x\}}(\cdot)$ be the single-point probability measure in Theorem 2.7(ii) of [15]. We then find that (2.2) holds for $\pi$-a.e. $x\in X$.

In the following, we will reveal the relationship between the moments of the first hitting time of the continuous-time Markov process $\Phi_t$ and those of its skeleton chain $\Phi_{nh}$. We write $\hat r(t)=\int_0^t r(s)\,ds$ and $r^0(nh)=\sum_{k=0}^{n}r(kh)$. Recursively define
$$\delta_{x_0}^{n} = \inf\{s>\delta_{x_0}^{n-1}+J_1\colon \Phi_s=x_0\},$$
where $\delta_{x_0}^{0}=0$ and $J_1$ is the first jump time of the Markov process $\Phi_t$. Thus, $\delta_{x_0}^{n}$ denotes the time of the $n$th return of $\Phi_t$ to $x_0$. We write $\delta_{x_0}=\delta_{x_0}^{1}$, i.e. $\delta_{x_0}$ is the time of the first return of $\Phi_t$ to $x_0$ after the first jump. We also define $\delta_{x_0}(h)=h\inf\{n\ge 1\colon \Phi_{nh}=x_0\}$ to be the first hitting time at $x_0$ of the skeleton chain $\Phi_{nh}$.

Theorem 2.1. Let $\Phi_t$ be a continuous-time Markov process on $X$ satisfying Assumption 1.1 and let $r\in\Lambda$. The following statements are equivalent.

(i) $E_{x_0}[\hat r(\delta_{x_0})]<\infty$.

(ii) There exists some $h_0>0$ such that $E_{x_0}[r^0(\delta_{x_0}(h_0))]<\infty$.

Proof. Assumption 1.1 implies that the sojourn time in state $x_0$ is exponentially distributed with some parameter $\lambda$, $0<\lambda<\infty$. Owing to the equivalence between $\Lambda$ and $\Lambda_0$, we may suppose that $r\in\Lambda_0$.

Suppose that part (ii) holds and define $h_n=h_0/2^n$, $n\in\mathbb{Z}_+$. Observe that $\delta_{x_0}(h_n)\le\delta_{x_0}(h_0)$ for all $n$. By the assumption that $r(0)<1$, we can find an $n_0$ such that $P^{h_{n_0}}(x_0,x_0)\,r(h_{n_0})<1$ and $E_{x_0}[r^0(\delta_{x_0}(h_{n_0}))]<\infty$. For the $h_{n_0}$-skeleton chain, define $J_1(h_{n_0})=\inf\{m\in\mathbb{Z}_+\colon\Phi_{mh_{n_0}}\neq x_0\}$ and $\delta^{+}_{x_0}(h_{n_0})=h_{n_0}\inf\{m\ge J_1(h_{n_0})\colon\Phi_{mh_{n_0}}=x_0\}$. We clearly have
$$E_{x_0}[r^0(\delta^{+}_{x_0}(h_{n_0}))] = \sum_{m=1}^{\infty} E[r^0(\delta_{x_0}(h_{n_0})+mh_{n_0}) \mid J_1(h_{n_0})=m,\ \Phi_0=x_0]\; P[J_1(h_{n_0})=m \mid \Phi_0=x_0]$$
$$\le \sum_{m=1}^{\infty} [P^{h_{n_0}}(x_0,x_0)]^{m-1} \int_{X\setminus\{x_0\}} P^{h_{n_0}}(x_0,dy)\, E_y[r^0(\delta_{x_0}(h_{n_0})+mh_{n_0})]$$
$$\le \sum_{m=1}^{\infty} [P^{h_{n_0}}(x_0,x_0)]^{m-1}\, \frac{[r(h_{n_0})]^{m+1}}{r(h_{n_0})-1}\, E_{x_0}[r^0(\delta_{x_0}(h_{n_0}))] < \infty,$$
which follows from the fact that
$$r^0(\delta_{x_0}(h_{n_0})+mh_{n_0}) \le r^0(\delta_{x_0}(h_{n_0})) + \sum_{k=1}^{m} r(kh_{n_0})\,r(\delta_{x_0}(h_{n_0})) \le r^0(\delta_{x_0}(h_{n_0}))\,\frac{[r(h_{n_0})]^{m+1}}{r(h_{n_0})-1}.$$
The statement of part (i) follows from this and the fact that $\delta_{x_0}\le\delta^{+}_{x_0}(h_{n_0})$.

Now suppose that part (i) holds. It is possible that the skeleton chain can miss visits of the continuous-time process to $x_0$, and so result in $\delta_{x_0}<\delta_{x_0}(h)$. Suppose that $\Phi_0=x_0$, let $D_k$ be the $k$th sojourn time at $x_0$, and let $W_k$ be the length of the interval between the $k$th exit from $x_0$ and the next visit to $x_0$ of the continuous-time Markov process $\Phi_t$. Then $\delta^{n}_{x_0}=\sum_{k=1}^{n}(D_k+W_k)$.

Note that the $W_k$ are independent and that the $D_k$ are independent of each other and of the $W_k$. Moreover, the $D_k$ are identically exponentially distributed with parameter $\lambda$. Define
$$N=\min\{n\ge 1\colon \text{the } h\text{-skeleton chain is in state } x_0 \text{ during the interval } D_n\}.$$
Then $\delta_{x_0}(h)\le\sum_{i=1}^{N-1}(D_i+W_i)+h$ and, by using (1.3), we obtain
$$\hat r\Big(nh+\sum_{i=1}^{n-1}W_i\Big) = \int_0^{nh} r(t)\,dt + \int_0^{W_1} r(t+nh)\,dt + \cdots + \int_0^{W_{n-1}} r\Big(t+nh+\sum_{i=1}^{n-2}W_i\Big)\,dt$$
$$\le \hat r(nh) + r(nh)\,\hat r(W_1) + \sum_{k=2}^{n-1}\Big(\prod_{i=1}^{k-1} r(W_i)\Big)\, r(nh)\,\hat r(W_k). \tag{2.3}$$
Since the $W_k$ are independent and identically distributed, by virtue of (2.3) we obtain
$$h\,E_{x_0}[r^0(\delta_{x_0}(h)-h)] \le E_{x_0}[\hat r(\delta_{x_0}(h))] \le E_{x_0}\Big[\hat r\Big(\sum_{i=1}^{N-1}(D_i+W_i)+h\Big)\Big]$$
$$= \sum_{n=1}^{\infty} E_{x_0}\Big[\hat r\Big(\sum_{i=1}^{n-1}(D_i+W_i)+h\Big)\,1_{\{N=n\}}\Big] \le \sum_{n=1}^{\infty} E_{x_0}\Big[\hat r\Big(\sum_{i=1}^{n-1}W_i+nh\Big)\,1_{\bigcap_{k=1}^{n-1}\{D_k\le h\}}\Big]$$
$$\le \sum_{n=1}^{\infty} (1-e^{-\lambda h})^{n-1}\, E\Big[\hat r\Big(nh+\sum_{i=1}^{n-1}W_i\Big)\Big] \le \sum_{n=1}^{\infty} (1-e^{-\lambda h})^{n-1} \Big\{\hat r(nh) + E[\hat r(W_1)]\,r(nh)\sum_{m=0}^{n-2} (E[r(W_1)])^{m}\Big\}$$
$$\le \sum_{n=1}^{\infty} (1-e^{-\lambda h})^{n-1}\, r(nh)\, E[\hat r(W_1)]\, \frac{(E[r(W_1)])^{n-1}-1}{E[r(W_1)]-1} + \sum_{n=1}^{\infty} (1-e^{-\lambda h})^{n-1}\,\hat r(nh). \tag{2.4}$$

Now we show that we can choose a suitable $h_0$ such that $E_{x_0}[r^0(\delta_{x_0}(h_0))]<\infty$. By using (1.4), we obtain
$$\lim_{n\to\infty} \frac{r((n-1)h+h)\,(1-e^{-\lambda h})^{n}}{r((n-1)h)\,(1-e^{-\lambda h})^{n-1}} = 1-e^{-\lambda h} \to 0 \qquad\text{as } h\to 0, \tag{2.5}$$
and, by using (1.3), we have
$$\frac{(1-e^{-\lambda h})^{n}\,\hat r(nh)}{(1-e^{-\lambda h})^{n-1}\,\hat r((n-1)h)} \le (1-e^{-\lambda h})\,\frac{\hat r((n-1)h)+h\,r(nh)}{\hat r((n-1)h)} \le (1-e^{-\lambda h})(1+h\,r(2h)). \tag{2.6}$$
It is obvious that $W_1<\delta_{x_0}$; hence, $E[\hat r(W_1)]<\infty$ follows from the fact that $E_{x_0}[\hat r(\delta_{x_0})]<\infty$. By again using (1.3), we have
$$E[r(W_1)] \le E_{x_0}[r(\delta_{x_0})] \le E_{x_0}[r(\delta_{x_0}-\varepsilon)\,r(\varepsilon)] \le \frac{r(\varepsilon)}{\varepsilon}\, E_{x_0}\Big[\int_{\delta_{x_0}-\varepsilon}^{\delta_{x_0}} r(t)\,dt\Big] \le \frac{r(\varepsilon)}{\varepsilon}\, E_{x_0}[\hat r(\delta_{x_0})] < \infty \tag{2.7}$$
for some positive real number $\varepsilon$. From (2.4)–(2.7), we can choose $h_0$ to be sufficiently small that
$$E_{x_0}[r^0(\delta_{x_0}(h_0))] \le (1+r(h_0))\,E_{x_0}[r^0(\delta_{x_0}(h_0)-h_0)] < \infty.$$
Thus, we have proved that part (ii) holds. This completes the proof of the theorem.

We make the following remarks concerning Theorem 2.1.

Remark 2.1. If Assumption 1.1 holds then, by an argument similar to the proof of Theorem 2.1, we can show that $E_{x_0}[\delta_{x_0}]<\infty$ if and only if $E_{x_0}[\delta_{x_0}(h_0)]<\infty$ for some $h_0$-skeleton chain.

Remark 2.2. For an ergodic continuous-time Markov chain, if $E_j[\hat r(\delta_j)]<\infty$ for some $j\in X$ then, from Theorem 2.1, we see that $E_j[r^0(\delta_j(h_0))]<\infty$ for some $h_0$-skeleton chain. It follows, from Remark 2.8 of [15], that $E_i[r^0(\delta_j(h_0))]<\infty$ for all $i,j\in X$. Again by virtue of Theorem 2.1, we have $E_i[\hat r(\delta_i)]<\infty$ for any $i\in X$. When $j$, the initial state of $\Phi_t$, does not equal $i$, we have $\delta_i\le\delta_i(h_0)$, meaning that, for each $j\neq i$,
$$E_j[\hat r(\delta_i)] \le (1+r(2))\,E_j[\hat r(\delta_i-1)] \le (1+r(2))\,E_j[r^0(\lfloor\delta_i\rfloor)] \le (1+r(2))\,E_j[r^0(\delta_i(h_0))] < \infty,$$
where $\lfloor x\rfloor$ is the integer-part function. Thus, we see that if $E_j[\hat r(\delta_j)]<\infty$ for some $j\in X$, then $E_i[\hat r(\delta_j)]<\infty$ for all $i,j\in X$. In particular, let $r(t)=t^{l-1}$ for $l\in\mathbb{R}_+$, with $l\ge 1$. If $E_i[\delta_i^l]<\infty$ for some $i\in X$ then $E_i[\delta_j^l]<\infty$ for all $i,j\in X$.

We are now in a position to establish our main result, which gives sufficient conditions for the subgeometric convergence of a class of continuous-time Markov process.

Theorem 2.2. Let $\Phi_t$ be a Harris ergodic continuous-time Markov process satisfying Assumption 1.1, with invariant probability measure $\pi$. If $r\in\Lambda$ and $E_{x_0}[\hat r(\delta_{x_0})]<\infty$, then
$$r(t)\,\|P^t(x,\cdot)-\pi(\cdot)\|\to 0,\qquad t\to\infty,$$
for $\pi$-a.e. $x\in X$.

Proof. It follows, from Lemma 2.1, that any skeleton chain $\Phi_{nh}$ of $\Phi_t$ is Harris ergodic. Since $P^{nh}(x_0,x_0)\ge e^{-\lambda nh}$ for any $h>0$, it follows that $x_0$ is an atom and the single-point set $\{x_0\}$ is a small set of the skeleton chain $\Phi_{nh}$. Hence, $\pi(x_0)>0$. From Theorem 2.1 and Lemma 2.2, we see that there exists a skeleton chain $\Phi_{nh_0}$ that is subgeometrically convergent. The assertion therefore follows from Lemma 2.1.

Remark 2.3. For a continuous-time Markov chain $\Phi_t$, the total variation norm is such that
$$\|P^t(i,\cdot)-\pi(\cdot)\| = \sum_{j\in X} |P^t(i,j)-\pi(j)|.$$
Hence, if $E_j[\hat r(\delta_j)]<\infty$ for some $j\in X$, by using Theorem 2.2 we see that
$$r(t)\sum_{j\in X}|P^t(i,j)-\pi(j)|\to 0,\qquad t\to\infty,$$
for all $i\in X$. This result extends Theorem 1.2 of [12], which deals with the special case $r(t)=t^l$, $l\in\mathbb{Z}_+$, using a different method.

The following theorem and remark play a crucial role in bounding $M(x)$ in Section 4. The theorem is interesting in its own right and tells us that, under some conditions, subgeometrically convergent Markov chains do have stationary distributions with subgeometric tails.

Theorem 2.3. Let $\Phi_t$ be a Harris ergodic continuous-time Markov process satisfying Assumption 1.1, with invariant probability measure $\pi$. If $r\in\Lambda$ and $E_{x_0}[\hat r(\delta_{x_0})]<\infty$, then $E_\pi[r(\delta_{x_0})]<\infty$.

Proof. By virtue of Theorem 2.1, if $E_{x_0}[\hat r(\delta_{x_0})]<\infty$ we know that there exists a skeleton chain $\Phi_{nh_0}$ such that $E_{x_0}[r^0(\delta_{x_0}(h_0))]<\infty$. It follows, from Theorem 2.6 of [15], that $E_\pi[r(\delta_{x_0}(h_0))]<\infty$. If $y$, the initial state of $\Phi_t$, does not equal $x_0$, then $\delta_{x_0}\le\delta_{x_0}(h_0)$, meaning that we have $E_y[r(\delta_{x_0})]\le E_y[r(\delta_{x_0}(h_0))]$ for any $y\neq x_0$. Since $E_{x_0}[\hat r(\delta_{x_0})]<\infty$, it follows from (2.7) that $E_{x_0}[r(\delta_{x_0})]<\infty$, whence
$$E_\pi[r(\delta_{x_0})] = \pi(x_0)\,E_{x_0}[r(\delta_{x_0})] + \int_{X\setminus\{x_0\}} \pi(dy)\,E_y[r(\delta_{x_0})] \le \pi(x_0)\,E_{x_0}[r(\delta_{x_0})] + E_\pi[r(\delta_{x_0}(h_0))] < \infty.$$

Remark 2.4. Suppose that $r,\hat r\in\Lambda$. It follows from Theorem 2.3 that if $E_{x_0}[\int_0^{\delta_{x_0}}\hat r(t)\,dt]<\infty$, then $E_\pi[\hat r(\delta_{x_0})]<\infty$. Many subgeometric functions satisfy the required assumption. For example, let $r(t)=(l+1)t^l$, $l\in\mathbb{R}_+$. Then $\hat r(t)=t^{l+1}$, and it is obvious that $r,\hat r\in\Lambda$.
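The moment condition $E_{x_0}[\hat r(\delta_{x_0})]<\infty$ of Theorem 2.2 can be estimated by simulation when the process is easy to sample. The sketch below (not from the paper; the helper name and all parameter values are illustrative) does this for the queue-length chain of an M/M/1 queue with $x_0=0$ and the polynomial rate $r(t)=(l+1)t^l$ of Remark 2.4, for which $\hat r(t)=t^{l+1}$.

```python
import random

# Monte Carlo estimate of E_0[rhat(delta_0)] = E_0[delta_0**(l+1)] for
# the M/M/1 queue-length chain (arrival rate lam, service rate mu) with
# x0 = 0 and r(t) = (l+1)*t**l.  Parameters are illustrative.
lam, mu, l = 0.5, 1.0, 1

def return_time_to_zero():
    # delta_0: time of the first return to 0 after the first jump.
    t, state = random.expovariate(lam), 1   # exp(lam) sojourn at 0, jump to 1
    while state > 0:
        t += random.expovariate(lam + mu)
        state += 1 if random.random() < lam / (lam + mu) else -1
    return t

n = 200_000
moment = sum(return_time_to_zero() ** (l + 1) for _ in range(n)) / n
print("estimated E_0[delta_0^%d] = %.3f" % (l + 1, moment))
```

Since $\rho=1/2<1$ here, the return time has finite moments of all orders, so the estimate stabilises and Theorem 2.2 yields the rate $r(t)=2t$ for this toy chain.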

3. The queue length of the M/G/1 queue with multiple vacations

There is much literature (e.g. [2] and [9]) on M/G/1 queues with vacations. These queues include, for example, the M/G/1 queue with set-up time, with N-policy, with single vacation, or with multiple vacations. In this section, we consider the most complicated case: the M/G/1 queue with multiple vacations, which is denoted simply by M/G/1(E, MV). The other queues can be studied more easily via the same method. For more details about M/G/1(E, MV), see [5], [7], and [10].

This queue is obtained by introducing the strategy of exhaustive service and multiple vacations to the classical M/G/1 queue: as soon as the system has no customers, the server immediately begins a vacation of random length $V$. If, when the vacation ends, the system still has no customers, the server takes further independent, identically distributed vacations; the vacations do not end until there are customers queueing when a vacation ends. Here $V$ is always assumed to be a nonnegative random variable, with distribution function $V(x)$, that has finite first and second moments, i.e. $E[V]<\infty$ and $E[V^2]<\infty$. For M/G/1(E, MV), the customers arrive according to a Poisson process with parameter $\lambda$, $0<\lambda<\infty$, and the service time $B$ has a general distribution $B(x)$.

Let $L_t$ be the queue length process of M/G/1(E, MV). It is known that $L_t$ is not a Markov process unless $B(x)$ is exponentially distributed. We introduce a supplementary variable as follows: let $\theta_t$ be the elapsed service time of the customer being served at time $t$, with $\theta_t=0$ if the server is idle at time $t$. Then $(L_t,\theta_t)$ becomes a continuous-time Markov process on the two-dimensional state space $X=\mathbb{Z}_+\times\mathbb{R}_+$. Several types of ergodicity for the discrete-time embedded chain of the queue length process $L_t$ were studied in [8]. Here, we study the subgeometric convergence of the continuous-time queue length process $(L_t,\theta_t)$.

Let $Q_b$ be the number of customers in the system when a busy period begins. Then
$$P[Q_b=j] = \frac{v_j}{1-v_0},\qquad j\in\mathbb{N}_+,$$
where
$$v_j = \int_0^\infty e^{-\lambda t}\,\frac{(\lambda t)^j}{j!}\,dV(t),\qquad j\in\mathbb{Z}_+,$$
is the probability that $j$ customers join the queue during a vacation.

Denote by $D_v$ the busy period of M/G/1(E, MV) and by $D$ the busy period of the classical M/G/1 queue. It is easy to see that $\{D_v \mid Q_b=k\} = \{D_1+D_2+\cdots+D_k\}$, where $D_k$ is the busy period of the classical M/G/1 queue caused by the $k$th customer, and the $D_i$ are independent and identically distributed.

Let $J$ be the number of vacations during a series of consecutive vacations. Then $P[J=j]=v_0^{j-1}(1-v_0)$, $j\in\mathbb{N}_+$. Let $V_v$ be the vacation period of M/G/1(E, MV). Then $\{V_v \mid J=j\} = \{V_1+V_2+\cdots+V_j\}$, where $V_i$ denotes the $i$th vacation and the $V_i$ are independent and identically distributed.

Define $1/\mu=\int_0^\infty x\,dB(x)$ and $\rho=\lambda/\mu$. From the known result $E[D]=1/(\mu(1-\rho))$, we have
$$E[D_v] = \frac{\rho\,E[V]}{(1-\rho)(1-v_0)},\qquad E[V_v] = \frac{E[V]}{1-v_0}. \tag{3.1}$$

The following proposition is a known result [17], which is adopted to study the Harris ergodicity of the queue length process.

Proposition 3.1. Let $\Phi_n$ be a discrete-time Markov chain on a general state space $X$. Suppose that there exists a state $x\in X$ such that

(i) $\gcd\{n\ge 1\colon P^n(x,\{x\})>0\}=1$ (where gcd stands for 'greatest common divisor');

(ii) $\inf_{y\in X} P_y[S_x<\infty]>0$, where $S_x=\inf\{n\ge 1\colon \Phi_n=x\}$; and

(iii) $E_x[S_x]<\infty$.

Then $\Phi_n$ is Harris ergodic.

It is easy to see that when $(L_t,\theta_t)$ hits the state $(0,0)$, it will stay there for a random length of time that is exponentially distributed with parameter $\lambda$, meaning that $(L_t,\theta_t)$ satisfies Assumption 1.1. We first study the Harris ergodicity of $(L_t,\theta_t)$ and then investigate the subgeometric rates of convergence based on Harris ergodicity.

Theorem 3.1. For M/G/1(E, MV), $(L_t,\theta_t)$ is Harris ergodic if and only if $\rho<1$.

Proof. Suppose that $\rho<1$. From Lemma 2.1, we only need to prove that some skeleton chain $(L_{nh},\theta_{nh})$ is Harris ergodic. It follows from $P^{nh}((0,0),\{(0,0)\})\ge e^{-\lambda nh}$, $n\in\mathbb{N}_+$, that part (i) of Proposition 3.1 holds. Since $\rho<1$ the system is stable, meaning that $P_{(i,x)}[(L_{nh},\theta_{nh})=(0,0)\ \text{i.o.}]=1$ for any $(i,x)\in\mathbb{Z}_+\times\mathbb{R}_+$, where 'i.o.' means 'infinitely often'. Hence, part (ii) of Proposition 3.1 holds. From (3.1), we know that if $\rho<1$, then $E_{(0,0)}[\delta_{(0,0)}]=E[D_v]+E[V_v]<\infty$. It follows from Remark 2.1 that $E_{(0,0)}[\delta_{(0,0)}(h_0)]<\infty$ for some $h_0>0$. Thus, part (iii) of Proposition 3.1 holds, $(L_{nh_0},\theta_{nh_0})$ is Harris ergodic, and, by Lemma 2.1, it follows immediately that $(L_t,\theta_t)$ is Harris ergodic.

On the other hand, suppose that $(L_t,\theta_t)$ is Harris ergodic. Then, for any $h>0$, $(L_{nh},\theta_{nh})$ is also Harris ergodic. Since $\{(0,0)\}$ is a small set for $(L_{nh},\theta_{nh})$, we have $E_{(0,0)}[\delta_{(0,0)}(h)]<\infty$ and it follows, from Remark 2.1, that $E_{(0,0)}[\delta_{(0,0)}]<\infty$. From (3.1), we find that $\rho<1$.
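Formula (3.1) can be checked by simulating regeneration cycles. The sketch below (not from the paper) does this in the fully Markovian special case of Poisson($\lambda$) arrivals, exponential($\mu$) services, and exponential($\nu$) vacations, for which $v_0=\nu/(\lambda+\nu)$ and $E[V]=1/\nu$; the helper names and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of (3.1) for exp(nu) vacations, exp(mu) services and
# Poisson(lam) arrivals, so v0 = nu/(lam + nu).  Parameters illustrative.
lam, mu, nu = 0.5, 1.0, 0.8
rho, EV, v0 = lam / mu, 1.0 / nu, nu / (lam + nu)

def busy_period():
    # Busy period of the classical M/M/1 queue started by one customer.
    t, n = 0.0, 1
    while n > 0:
        t += rng.exponential(1.0 / (lam + mu))
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return t

def one_cycle():
    # Vacations repeat until one ends with Q_b >= 1 customers present;
    # the busy period D_v is then a sum of Q_b classical busy periods.
    vv, qb = 0.0, 0
    while qb == 0:
        v = rng.exponential(1.0 / nu)
        vv, qb = vv + v, rng.poisson(lam * v)
    return vv, sum(busy_period() for _ in range(qb))

cycles = [one_cycle() for _ in range(20_000)]
print("E[V_v]: simulated %.3f, formula %.3f"
      % (np.mean([c[0] for c in cycles]), EV / (1 - v0)))
print("E[D_v]: simulated %.3f, formula %.3f"
      % (np.mean([c[1] for c in cycles]), rho * EV / ((1 - rho) * (1 - v0))))
```

With these parameters both formulas in (3.1) evaluate to $3.25$, which the simulated averages should reproduce to within Monte Carlo error.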

Theorem 3.2. Let $l\in\mathbb{R}_+$ be such that $l\ge 1$. For the process $(L_t,\theta_t)$ of M/G/1(E, MV), $E_{(0,0)}[\delta_{(0,0)}^l]<\infty$ if and only if $\int_0^\infty t^l\,dV(t)<\infty$ and $\int_0^\infty t^l\,dB(t)<\infty$.

Proof. For a $k\in\mathbb{Z}_+$ such that $k\ge 2$, we have
$$k^l = [(k-1)+1]^l \le \max\{1,2^{l-1}\}[(k-1)^l+1] = 2^{l-1}[(k-1)^l+1] \le 2^l (k-1)^l.$$
For such a $k$, using the above inequality repeatedly we have
$$\sum_{k=\lfloor l\rfloor+1}^{\infty} k^l\,\frac{(\lambda t)^k}{k!} = \lambda t \sum_{k=\lfloor l\rfloor+1}^{\infty} k^{l-1}\,\frac{(\lambda t)^{k-1}}{(k-1)!} \le 2^{l-1}\,\lambda t \sum_{k=\lfloor l\rfloor}^{\infty} k^{l-1}\,\frac{(\lambda t)^{k}}{k!} \le \cdots \le 2^{l(l-1)/2}\,(\lambda t)^{l}\,\sum_{k=0}^{\infty}\frac{(\lambda t)^k}{k!}. \tag{3.2}$$
Let $\{u_i,\ i\in\mathbb{N}_+\}$ be a sequence of nonnegative real numbers. For a $p\in\mathbb{R}_+$ such that $p\ge 1$, by using Jensen's inequality on the convex function $x\mapsto x^p$, $x\ge 0$, we see that
$$\Big(\frac{1}{n}\sum_{i=1}^{n} u_i\Big)^p \le \frac{1}{n}\sum_{i=1}^{n} u_i^p$$
and, thus,
$$\Big(\sum_{i=1}^{n} u_i\Big)^p \le n^{p-1}\sum_{i=1}^{n} u_i^p. \tag{3.3}$$
Many research works study the moments of the busy period $D$ in the classical M/G/1 queue. From Theorem 2.1 of [6] or Example 1 of [4], we know that $E[D^l]<\infty$ if and only if $\int_0^\infty t^l\,dB(t)<\infty$.

We first prove sufficiency. If $\int_0^\infty t^l\,dV(t)<\infty$ and $\int_0^\infty t^l\,dB(t)<\infty$ then, from (3.2) and (3.3), we obtain
$$E[D_v^l] = \sum_{k=1}^{\infty} P[Q_b=k]\,E[(D_1+D_2+\cdots+D_k)^l \mid Q_b=k] \le \sum_{k=1}^{\infty} \frac{v_k}{1-v_0}\,k^l\,E[D^l]$$
$$= \frac{E[D^l]}{1-v_0}\Big[\sum_{k=1}^{\lfloor l\rfloor} \int_0^\infty k^l\,\frac{(\lambda t)^k}{k!}\,e^{-\lambda t}\,dV(t) + \sum_{k=\lfloor l\rfloor+1}^{\infty} \int_0^\infty k^l\,\frac{(\lambda t)^k}{k!}\,e^{-\lambda t}\,dV(t)\Big]$$
$$\le \frac{E[D^l]}{1-v_0}\,\big(l^l + 2^{l(l-1)/2}\,\lambda^l\,E[V^l]\big) < \infty.$$

Similarly, we have
$$E[V_v^l] = \sum_{j=1}^{\infty} P[J=j]\,E[(V_1+V_2+\cdots+V_J)^l \mid J=j] \le \sum_{j=1}^{\infty} (1-v_0)\,v_0^{j-1}\,j^l\,E[V^l] < \infty.$$
Hence,
$$E_{(0,0)}[\delta_{(0,0)}^l] = E[(D_v+V_v)^l] \le \max\{1,2^{l-1}\}\,(E[D_v^l]+E[V_v^l]) < \infty.$$
Now we prove necessity. If $E_{(0,0)}[\delta_{(0,0)}^l]<\infty$ then, owing to the fact that
$$E_{(0,0)}[\delta_{(0,0)}^l] = E[(D_v+V_v)^l] \ge E[D_v^l]+E[V_v^l],$$
we find that $\int_0^\infty t^l\,dV(t)<\infty$ and $\int_0^\infty t^l\,dB(t)<\infty$, as required.

Let $r(t)=t^l$ for $l\in\mathbb{R}_+$ such that $l\ge 1$. From Theorem 2.2 and Theorem 3.2, we have the following result.

Theorem 3.3. Suppose that the process $(L_t,\theta_t)$ of M/G/1(E, MV) is Harris ergodic. If
$$\int_0^\infty t^l\,dV(t)<\infty \quad\text{and}\quad \int_0^\infty t^l\,dB(t)<\infty,$$
then
$$t^{l-1}\,\|P^t((i,x),\cdot)-\pi(\cdot)\|\to 0,\qquad t\to\infty,$$
for $\pi$-a.e. $(i,x)\in X$.

4. Explicit rates of convergence for stochastically ordered Markov processes

In this section, we suppose that $\Phi_t$ is a pathwise-ordered Markov process on the state space $X=[0,\infty)$; i.e. a sample path of the process with a high initial state is never below a sample path of the process with a lower initial state. The taboo probability of being in the set $A$ at time $t$ without first passing through $\{0\}$ is
$${}_0P^t(x,A) = P_x[\{\Phi_t\in A\}\cap\{\delta_0>t\}]\qquad\text{for } x>0.$$
We will make the irreducibility assumption that, for some $\eta>0$, the process can travel from $0$ to $[\eta,\infty)$ with positive probability before first returning to $0$, and, for each $y$, $y>x>0$, can travel from $x$ to $[y,\infty)$ without passing through $\{0\}$; i.e. there exist $t_1,t_2>0$ such that
$${}_0P^{t_1}(0,[\eta,\infty))>0,\qquad {}_0P^{t_2}(x,[y,\infty))>0. \tag{4.1}$$
For a continuous-time Markov process as described above, the exponential convergence rates were found explicitly in [11] using the coupling method. We aim to apply the coupling method here to give explicit bounds on the subgeometric rates of convergence. The following lemma establishes a property of stochastically ordered Markov processes.

Lemma 4.1. Let $\Phi_t$ be a stochastically ordered Markov process satisfying (4.1). If $r\in\Lambda$ then $E_0[\hat r(\delta_0)]<\infty$ if and only if $E_x[\hat r(\delta_0)]<\infty$ for every $x>0$.

Proof. Obviously $\hat r(t)$ is nondecreasing in $t$, so the pathwise ordering of $\Phi_t$ shows that $E_x[\hat r(\delta_0)]$ is nondecreasing in $x$. Hence, if $E_x[\hat r(\delta_0)]<\infty$ for some $x>0$ then $E_y[\hat r(\delta_0)]\le E_x[\hat r(\delta_0)]<\infty$ for each $y<x$. Let $y>x$ and choose a $t_2>0$ such that ${}_0P^{t_2}(x,[y,\infty))>0$. By the monotonicity of $E_x[\hat r(\delta_0)]$ in $x$, we have
$$E_x[\hat r(\delta_0)] \ge \int_y^\infty {}_0P^{t_2}(x,dz)\,E_z[\hat r(\delta_0)] \ge E_y[\hat r(\delta_0)]\;{}_0P^{t_2}(x,[y,\infty)).$$
Thus, we have proved that $E_x[\hat r(\delta_0)]<\infty$ for some $x>0$ if and only if $E_x[\hat r(\delta_0)]<\infty$ for every $x>0$.

Now suppose that $E_0[\hat r(\delta_0)]<\infty$ and use the irreducibility assumption to choose a $t_1>0$ such that ${}_0P^{t_1}(0,[\eta,\infty))>0$. By the monotonicity of $E_x[\hat r(\delta_0)]$ in $x$, we now have
$$E_0[\hat r(\delta_0)] \ge \int_\eta^\infty {}_0P^{t_1}(0,dz)\,E_z[\hat r(\delta_0)] \ge E_\eta[\hat r(\delta_0)]\;{}_0P^{t_1}(0,[\eta,\infty)).$$
Thus, if $E_0[\hat r(\delta_0)]<\infty$ then $E_\eta[\hat r(\delta_0)]<\infty$ for some $\eta>0$, meaning that $E_x[\hat r(\delta_0)]<\infty$ for every $x>0$.

We introduce further notation for the first hitting time as follows: $\tau_0=\inf\{t\ge 0\colon \Phi_t=0\}$. Note the relationships between $\delta_0$ and $\tau_0$: when $\Phi_0=0$ we have $\tau_0=0\le\delta_0$, and when $\Phi_0\neq 0$ we have $\tau_0=\delta_0$.

Theorem 4.1. Let $\Phi_t$ be a stochastically ordered Markov process satisfying Assumption 1.1 and (4.1). If $r,\hat r\in\Lambda$ and $E_0[\int_0^{\delta_0}\hat r(t)\,dt]<\infty$, then
$$\int_0^\infty r(t)\,\|P^t(x,\cdot)-\pi(\cdot)\|\,dt \le M(x)$$
for every $x\ge 0$, where
$$M(x) = 2E_x[\hat r(\tau_0)] + 2E_\pi[\hat r(\tau_0)] < \infty.$$
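For a concrete stochastically ordered process, the two expectations making up $M(x)$ can be estimated by simulation. The sketch below (not from the paper) does this for the M/M/1 queue-length chain, whose stationary law is the geometric distribution $\pi_i=(1-\rho)\rho^i$, taking $r(t)=2t$ so that $\hat r(t)=t^2$; the helper names, the parameters, and the choice of $r$ are all illustrative.

```python
import math
import random

# Monte Carlo sketch of M(x) = 2*E_x[tau_0^2] + 2*E_pi[tau_0^2] in
# Theorem 4.1 for the M/M/1 queue-length chain with r(t) = 2t, so that
# rhat(t) = t**2.  Parameters are illustrative.
lam, mu = 0.4, 1.0
rho = lam / mu

def tau0(x):
    # First hitting time of state 0 from state x (tau_0 = 0 if x = 0).
    t, n = 0.0, x
    while n > 0:
        t += random.expovariate(lam + mu)
        n += 1 if random.random() < lam / (lam + mu) else -1
    return t

def draw_pi():
    # Inversion sampling from the geometric stationary law pi.
    return int(math.log(random.random()) / math.log(rho))

N, x = 50_000, 3
e_x = sum(tau0(x) ** 2 for _ in range(N)) / N
e_pi = sum(tau0(draw_pi()) ** 2 for _ in range(N)) / N
print("M(%d) ~ %.2f bounds the integral of 2t*||P^t(%d,.) - pi||"
      % (x, 2.0 * (e_x + e_pi), x))
```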

Proof. Let $\Phi^1_t$ and $\Phi^2_t$ be two copies of the process, with the initial conditions $\Phi^1_0=x$ and $\Phi^2_0=Y$, where $Y$ is a random variable on $(\Omega,\mathcal{F},P)$ with invariant probability measure $\pi$. We define $T=\inf\{t\ge 0\colon\Phi^1_t=\Phi^2_t\}$ to be the coupling time and use the coupling inequality to find that
$$\|P^t(x,\cdot)-\pi(\cdot)\| \le 2P_{x,\pi}[T>t]. \tag{4.2}$$
Owing to the pathwise ordering of $\Phi_t$, we have $P_{x,\pi}[T>t]\le P_v[\tau_0>t]$, where $v(A)=P[\max\{Y,x\}\in A]$ for $A\in\mathcal{B}(X)$. Hence, by virtue of the Markov inequality and (4.2) we obtain
$$\|P^t(x,\cdot)-\pi(\cdot)\| \le 2P_v[\tau_0>t] = 2P[Y\le x]\,P_x[\tau_0>t] + 2\int_x^\infty P_u[\tau_0>t]\,\pi(du) \le 2P_x[\tau_0>t] + 2P_\pi[\tau_0>t].$$
Since $E_0[\int_0^{\delta_0}\hat r(t)\,dt]<\infty$, we have $E_0[\hat r(\tau_0)]\le E_0[\hat r(\delta_0)]<\infty$. By applying Lemma 4.1, we find that $E_x[\hat r(\tau_0)]\le E_x[\hat r(\delta_0)]<\infty$ for every $x\in X$. From Remark 2.4, we infer that $E_\pi[\hat r(\tau_0)]\le E_\pi[\hat r(\delta_0)]<\infty$. Hence,
$$\int_0^\infty r(t)\,\|P^t(x,\cdot)-\pi(\cdot)\|\,dt \le 2\int_0^\infty r(t)\,P_x[\tau_0>t]\,dt + 2\int_0^\infty r(t)\,P_\pi[\tau_0>t]\,dt = 2E_x[\hat r(\tau_0)] + 2E_\pi[\hat r(\tau_0)] = M(x) < \infty$$
for every $x\in X$. Thus, our assertion holds.

5. The waiting time of the M/G/1 queue

Let $W_t$ be the virtual waiting time of a customer who joins an M/G/1 queue at time $t$ [11], [17]. The sufficient and necessary conditions for Harris ergodicity and exponential ergodicity were found in [17], and the best exponential convergence rates were investigated in [11]. It is known that $W_t$ is stochastically ordered and that (4.1) holds. For an M/G/1 queue, the customers arrive according to a Poisson process with some parameter $\lambda$. Once $W_t$ hits the state $0$, it will stay there for a random length of time that is exponentially distributed. Thus, $W_t$ satisfies Assumption 1.1.

Theorem 5.1. If the virtual waiting time $W_t$ of an M/G/1 queue is Harris ergodic then, for $l\in\mathbb{R}_+$, $E_0[\tau_0^l]<\infty$ if and only if $\int_0^\infty t^l\,dB(t)<\infty$. Moreover, if $\int_0^\infty t^l\,dB(t)<\infty$ for some $l\ge 2$ then, for every $x\in X$,
$$\int_0^\infty t^{l-2}\,\|P^t(x,\cdot)-\pi(\cdot)\|\,dt \le M(x),$$
where $M(x)=E_x[\tau_0^{l-1}]+E_\pi[\tau_0^{l-1}]<\infty$.

Proof. We know the result that $E[D^l]<\infty$ if and only if $\int_0^\infty t^l\,dB(t)<\infty$. Since $E_0[\tau_0^l]=E[D^l]$, it follows that $E_0[\tau_0^l]<\infty$ if and only if $\int_0^\infty t^l\,dB(t)<\infty$. Let $r(t)=t^{l-2}$ for $l\in\mathbb{R}_+$ such that $l\ge 2$. It follows from Theorem 4.1 that our assertion holds.

6. Continuous-time birth–death chains

Let $\Phi_t$ be a continuous-time irreducible birth–death chain on the countable space $X=\mathbb{Z}_+$ with Q-matrix as follows:
$$q_{i,i+1}=b_i,\quad i\in\mathbb{Z}_+;\qquad q_{i,i-1}=a_i,\quad i\in\mathbb{N}_+;\qquad q_{ij}=0,\quad |i-j|\ge 2.$$
We suppose that the Q-matrix is conservative and totally stable. From [19], we see that $\Phi_t$ is stochastically ordered. It is obvious that (4.1) and Assumption 1.1 hold. Suppose that $\Phi_t$ is ergodic. Then the invariant probability measure $\pi$ exists and can be computed as follows:
$$\pi_i=\frac{\mu_i}{\mu},\qquad \mu=\sum_{i=0}^{\infty}\mu_i,\qquad \mu_0=1,\qquad \mu_i=\frac{b_0 b_1\cdots b_{i-1}}{a_1 a_2\cdots a_i},\quad i\in\mathbb{N}_+.$$

Let $r(t)=t^l$ for $l\in\mathbb{N}_+$. According to Theorem 1.4 of [12], we can compute $E_i[\tau_0^l]$ as follows:
$$E_i[\tau_0^{1}] = \sum_{j=0}^{i-1}\frac{1}{\mu_j b_j}\sum_{k=j+1}^{\infty}\mu_k \qquad\text{for every } i\in\mathbb{N}_+,$$
$$E_i[\tau_0^{n+1}] = (n+1)\sum_{j=0}^{i-1}\frac{1}{\mu_j b_j}\sum_{k=j+1}^{\infty}\mu_k\,E_k[\tau_0^{n}] \qquad\text{for } n\in\mathbb{N}_+.$$
Since $\pi$ and $E_i[\tau_0^{1}]$ can be computed completely, so too can the bound $M(i)$.

Theorem 6.1. Let $\Phi_t$ be an irreducible, regular birth–death chain. For $l\in\mathbb{Z}_+$ such that $l\ge 2$, if $E_0[\delta_0^l]<\infty$ then, for any $i\in X$,
$$t^{l-1}\sum_{j\in X}|P^t(i,j)-\pi_j|\to 0,\qquad t\to\infty.$$
Moreover,
$$\int_0^\infty t^{l-2}\sum_{j\in X}|P^t(i,j)-\pi_j|\,dt \le M(i),$$
where $M(i)=E_i[\tau_0^{l-1}]+\sum_{k=0}^{\infty}\pi_k\,E_k[\tau_0^{l-1}]<\infty$ and $E_0[\tau_0^{l-1}]=0$.

The following example, in which we can compute the quantities $E_i[\tau_0^{l-1}]$ and $E_\pi[\tau_0^{l-1}]$, is taken from [12].

Example 6.1. Let $a_i=b_i=i^{\gamma}$, $\gamma\in(1,2]$. For any $l\ge 1$, if $\gamma>2-1/l$ then from [12] we see that $E_0[\delta_0^l]<\infty$ and
$$E_i[\tau_0^{l-1}] \asymp \sum_{j=1}^{i-1} j^{2(l-1)-1-(l-1)\gamma}.$$
Hence, from Theorem 6.1, we have
$$\int_0^\infty t^{l-2}\sum_{j\in X}|P^t(i,j)-\pi_j|\,dt \le E_i[\tau_0^{l-1}]+E_\pi[\tau_0^{l-1}] < \infty,$$
where
$$E_i[\tau_0^{l-1}] \asymp \sum_{j=1}^{i-1} j^{2(l-1)-1-(l-1)\gamma},\quad i\in\mathbb{N}_+,\qquad E_0[\tau_0^{l-1}]=0,$$
and
$$E_\pi[\tau_0^{l-1}] \asymp \sum_{j=1}^{\infty} j^{2(l-1)-1-(l-1)\gamma}\;\frac{\sum_{i=j+1}^{\infty} i^{-\gamma}}{\sum_{i=1}^{\infty} i^{-\gamma}}.$$
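The quantities in Theorem 6.1 and Example 6.1 are directly computable once the infinite sums are truncated. The following sketch (not from the paper) evaluates $\pi_i$, $E_i[\tau_0]$, and the resulting bound $M(i)$ for $l=2$ under the assumptions that $b_0=1$ (so the chain can leave state $0$) and that truncation at level $K$ is accurate; both are assumptions of this illustration, as are the helper names.

```python
import numpy as np

# Truncated evaluation of pi_i, E_i[tau_0] and M(i) (case l = 2) for
# Example 6.1 with a_i = b_i = i**gamma and the assumed b_0 = 1.
# K is a truncation level; increase it until the output stabilises.
gamma, K = 1.9, 20_000                 # gamma > 2 - 1/l = 1.5 for l = 2

i_arr = np.arange(1, K + 1, dtype=float)
b = np.concatenate(([1.0], i_arr ** gamma))          # b_0, b_1, ..., b_K
mu_i = np.concatenate(([1.0], np.cumprod(b[:-1] / i_arr ** gamma)))
pi = mu_i / mu_i.sum()                               # truncated pi_i
tail = mu_i[::-1].cumsum()[::-1]                     # tail[j] = sum_{k>=j} mu_k

def e_tau0(i):
    # E_i[tau_0] = sum_{j<i} (1/(mu_j b_j)) * sum_{k>j} mu_k  (truncated).
    j = np.arange(i)
    return float(np.sum(tail[j + 1] / (mu_i[j] * b[j])))

i = 10
M_i = e_tau0(i) + sum(pi[j] * e_tau0(j) for j in range(1, 500))
print("pi_0 = %.4f,  E_%d[tau_0] = %.3f,  M(%d) ~ %.3f"
      % (pi[0], i, e_tau0(i), i, M_i))
```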

Acknowledgements

We would like to thank the referees for helpful suggestions. The work of Hou and Liu is supported by the National Natural Science Foundation of China. The work of Zhang is supported by the Australian Research Council Centre of Excellence for Mathematics and Statistics of Complex Systems.

References

[1] Chen, M. F. (1992). From Markov Chains to Nonequilibrium Particle Systems. World Scientific, River Edge, NJ.
[2] Doshi, B. (1985). An M/G/1 queue with variable vacations. In Proc. Internat. Conf. Model. Techniques Tools Performance Anal., North-Holland, Amsterdam.
[3] Down, D., Meyn, S. P. and Tweedie, R. L. (1995). Exponential and uniform ergodicity of Markov processes. Ann. Prob. 23.
[4] Foss, S. and Sapozhnikov, A. (2004). On the existence of moments for the busy period in a single-server queue. Math. Operat. Res. 29.
[5] Fuhrmann, S. (1984). A note on the M/G/1 queue with server vacations. Operat. Res. 32.
[6] Gut, A. (1974). On the moments of some first passage times for sums of dependent random variables. Stoch. Process. Appl. 2.
[7] Harris, C. and Marchal, W. (1988). State dependence in M/G/1 server vacation models. Operat. Res. 36.
[8] Hou, Z. and Liu, Y. (2004). Explicit criteria for several types of ergodicity of the embedded M/G/1 and GI/M/n queues. J. Appl. Prob. 41.
[9] Levy, H. and Kleinrock, L. (1986). A queue with starter and a queue with vacations: delay analysis by decomposition. Operat. Res. 34.
[10] Levy, Y. and Yechiali, U. (1975). Utilization of idle time in an M/G/1 queueing system. Manag. Sci. 22.
[11] Lund, R. B., Meyn, S. P. and Tweedie, R. L. (1996). Computable exponential convergence rates for stochastically ordered Markov processes. Ann. Appl. Prob. 6.
[12] Mao, Y. H. (2004). Ergodic degrees for continuous-time Markov chains. Sci. China Ser. A 47.
[13] Meyn, S. P. and Tweedie, R. L. (1993). Stability of Markovian processes II: continuous-time processes and sampled chains. Adv. Appl. Prob. 25.
[14] Meyn, S. P. and Tweedie, R. L. (1993). Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Prob. 25.
[15] Nummelin, E. and Tuominen, P. (1983). The rate of convergence in Orey's theorem for Harris recurrent Markov chains with applications to renewal theory. Stoch. Process. Appl. 15.
[16] Thorisson, H. (1985). The queue GI/G/1: finite moments of the cycle variables and uniform rates of convergence. Stoch. Process. Appl. 19.
[17] Tuominen, P. and Tweedie, R. L. (1979). Exponential ergodicity in Markovian queueing and dam models. J. Appl. Prob. 16.
[18] Tuominen, P. and Tweedie, R. L. (1994). Subgeometric rates of convergence of f-ergodic Markov chains. Adv. Appl. Prob. 26.
[19] Van Doorn, E. A. (1981). Stochastic Monotonicity and Queueing Applications of Birth–Death Processes (Lecture Notes Statist. 4). Springer, New York.


Performance Evaluation of Queuing Systems Performance Evaluation of Queuing Systems Introduction to Queuing Systems System Performance Measures & Little s Law Equilibrium Solution of Birth-Death Processes Analysis of Single-Station Queuing Systems

More information

STATS 3U03. Sang Woo Park. March 29, Textbook: Inroduction to stochastic processes. Requirement: 5 assignments, 2 tests, and 1 final

STATS 3U03. Sang Woo Park. March 29, Textbook: Inroduction to stochastic processes. Requirement: 5 assignments, 2 tests, and 1 final STATS 3U03 Sang Woo Park March 29, 2017 Course Outline Textbook: Inroduction to stochastic processes Requirement: 5 assignments, 2 tests, and 1 final Test 1: Friday, February 10th Test 2: Friday, March

More information

Queuing Theory. Richard Lockhart. Simon Fraser University. STAT 870 Summer 2011

Queuing Theory. Richard Lockhart. Simon Fraser University. STAT 870 Summer 2011 Queuing Theory Richard Lockhart Simon Fraser University STAT 870 Summer 2011 Richard Lockhart (Simon Fraser University) Queuing Theory STAT 870 Summer 2011 1 / 15 Purposes of Today s Lecture Describe general

More information

MAT SYS 5120 (Winter 2012) Assignment 5 (not to be submitted) There are 4 questions.

MAT SYS 5120 (Winter 2012) Assignment 5 (not to be submitted) There are 4 questions. MAT 4371 - SYS 5120 (Winter 2012) Assignment 5 (not to be submitted) There are 4 questions. Question 1: Consider the following generator for a continuous time Markov chain. 4 1 3 Q = 2 5 3 5 2 7 (a) Give

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Chapter 2. Markov Chains. Introduction

Chapter 2. Markov Chains. Introduction Chapter 2 Markov Chains Introduction A Markov chain is a sequence of random variables {X n ; n = 0, 1, 2,...}, defined on some probability space (Ω, F, IP), taking its values in a set E which could be

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Integrals for Continuous-time Markov chains

Integrals for Continuous-time Markov chains Integrals for Continuous-time Markov chains P.K. Pollett Abstract This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space.

More information

Dynamic control of a tandem system with abandonments

Dynamic control of a tandem system with abandonments Dynamic control of a tandem system with abandonments Gabriel Zayas-Cabán 1, Jingui Xie 2, Linda V. Green 3, and Mark E. Lewis 4 1 Center for Healthcare Engineering and Patient Safety University of Michigan

More information

J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY

J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY J. MEDHI STOCHASTIC MODELS IN QUEUEING THEORY SECOND EDITION ACADEMIC PRESS An imprint of Elsevier Science Amsterdam Boston London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo Contents

More information

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 Name STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 There are five questions on this test. DO use calculators if you need them. And then a miracle occurs is not a valid answer. There

More information

Mean-field dual of cooperative reproduction

Mean-field dual of cooperative reproduction The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time

More information

P (L d k = n). P (L(t) = n),

P (L d k = n). P (L(t) = n), 4 M/G/1 queue In the M/G/1 queue customers arrive according to a Poisson process with rate λ and they are treated in order of arrival The service times are independent and identically distributed with

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

arxiv:math/ v4 [math.pr] 12 Apr 2007

arxiv:math/ v4 [math.pr] 12 Apr 2007 arxiv:math/612224v4 [math.pr] 12 Apr 27 LARGE CLOSED QUEUEING NETWORKS IN SEMI-MARKOV ENVIRONMENT AND ITS APPLICATION VYACHESLAV M. ABRAMOV Abstract. The paper studies closed queueing networks containing

More information

International Journal of Pure and Applied Mathematics Volume 28 No ,

International Journal of Pure and Applied Mathematics Volume 28 No , International Journal of Pure and Applied Mathematics Volume 28 No. 1 2006, 101-115 OPTIMAL PERFORMANCE ANALYSIS OF AN M/M/1/N QUEUE SYSTEM WITH BALKING, RENEGING AND SERVER VACATION Dequan Yue 1, Yan

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information