Model reversibility of a two dimensional reflecting random walk and its application to queueing network


arXiv v2 [math.PR] 11 Dec 2013

Masahiro Kobayashi, Masakiyo Miyazawa and Hiroshi Shimizu

Department of Information Sciences, Tokyo University of Science

Abstract

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. It is assumed that this reflecting random walk has skip-free transitions. We are concerned with its time reversed process, assuming that the stationary distribution exists. In general, the time reversed process may not be a reflecting random walk. In this paper, we derive necessary and sufficient conditions for the time reversed process also to be a reflecting random walk. These conditions are different from, but closely related to, the product form of the stationary distribution.

1 Introduction

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. We are interested in its stationary distribution for queueing applications. This stationary distribution is generally hard to obtain analytically, and recent research interest has been directed to its tail asymptotics. We now have a good picture of those tail asymptotics (e.g., see [7]), but many other characteristics, such as moments, are not available. In this paper, we look at this problem upside down. Namely, we aim to find a class of reflecting random walks whose stationary distributions can be obtained in closed form. For this, we use a time reversed process for the reflecting random walk, and expect that the stationary distribution can be obtained analytically when the time reversed process is also a reflecting random walk. Kelly [4] pioneered the use of this time reversal idea for deriving so-called product form solutions for various queueing network models. Here, the stationary distribution is said to be a product form solution if it is the product of marginal

stationary distributions. This approach has been further studied (see, e.g., [2, 4, 9] and references therein). However, those results are limited in their range of applications. While the traditional approaches use Markov chains that are specialized to queueing models, we here use a different class of Markov chains. Namely, motivated by [8], we take two dimensional reflecting random walks as this class. They may be interpreted as queueing models, but our intention is to consider the time reversed processes within the class of two dimensional reflecting random walks. In this way, we study a class of reflecting random walks which have tractable stationary distributions.

To make our approach clear, we introduce the notion of model reversibility. A reflecting random walk is said to be model reversible if it has a stationary distribution under which its time reversed process is a reflecting random walk, where the transition probabilities of the reversed walk need not be identical with those of the original reflecting random walk. We then derive necessary and sufficient conditions for the reflecting random walk to be model reversible. In this derivation, the stationary distribution is simultaneously obtained. This stationary distribution has product form in the interior of the quadrant, but may not be of product form on the boundary.

It is notable that our stationary distribution is closely related to the one which was recently obtained by Latouche and Miyazawa [6]. They derived it by characterizing the class of reflecting random walks whose stationary distribution has product form. In the interior of the quadrant, their two dimensional distribution has geometric marginals whose decay rates are identical with ours. We also have geometric interpretations characterizing such decay rates, similar to those of [6]. However, the characterization of model reversibility is different from that of product form.
In particular, the difference is clearly observed when the stationary distribution is not of product form.

This paper consists of four sections. We formally define our reflecting random walk and its time reversed process in Section 2. In Section 3, we give the main result, i.e., we derive conditions for model reversibility, and prove it. In Section 4, we give a queueing example which is model reversible.

2 Two dimensional random walk and its time-reversed process

In this section, we briefly introduce a two dimensional reflecting random walk and its time-reversed process. We use the following notation: $\mathbb{Z}$ is the set of all integers, $\mathbb{Z}_+ = \{i \in \mathbb{Z}; i \ge 0\}$, and $\mathbb{R}$ is the set of all real numbers.

Let $S \subseteq \mathbb{Z}_+^2$ be the state space of the reflecting random walk. We partition it into the following subsets:

$$S_0 = \{(0,0)\}, \quad S_1 = \{(i,0) \in S; i \ge 1\}, \quad S_2 = \{(0,j) \in S; j \ge 1\}, \quad S_+ = \{(i,j) \in S; i,j \ge 1\}.$$

Let $\partial S = \bigcup_{i=0}^{2} S_i$. Clearly, $S = S_+ \cup \partial S$. The subsets $S_+$ and $\partial S$ are called the interior and the boundary, and $S_i$ is called a boundary face for $i = 0,1,2$. Let $X^{(+)} \equiv (X^{(+)}_1, X^{(+)}_2)$ be a random vector taking values in $U \equiv \{-1,0,1\}^2$, and, for $i = 0,1,2$, let $X^{(i)} \equiv (X^{(i)}_1, X^{(i)}_2)$ be a random vector taking values in $U$ such that $X^{(0)} \ge 0$, $X^{(1)}_2 \ge 0$ and $X^{(2)}_1 \ge 0$. Define $\{Z_l; l \in \mathbb{Z}_+\}$ as the Markov chain with state space $S$ and the following transition probabilities:

$$P(Z_{l+1} = n' \mid Z_l = n) = \begin{cases} P(X^{(+)} = n' - n), & n \in S_+,\ n' \in S, \\ P(X^{(i)} = n' - n), & i = 0,1,2,\ n \in S_i,\ n' \in S. \end{cases} \tag{2.1}$$

By this definition, $\{Z_l\}$ is skip free, and has homogeneous transition probabilities in each boundary face $S_i$ and in the interior $S_+$. We refer to this Markov chain $\{Z_l\}$ as a two dimensional reflecting random walk. Throughout this paper, we assume that

(i) $\{Z_l\}$ is irreducible.

The modeling primitives of this reflecting random walk are given by the following four sets of probability distributions:

$$p^{(0)}_{ij} = P(X^{(0)} = (i,j)), \quad i,j = 0,1,$$
$$p^{(1)}_{ij} = P(X^{(1)} = (i,j)), \quad i = 0,\pm 1,\ j = 0,1,$$
$$p^{(2)}_{ij} = P(X^{(2)} = (i,j)), \quad i = 0,1,\ j = 0,\pm 1,$$
$$p^{(+)}_{ij} = P(X^{(+)} = (i,j)), \quad i,j = 0,\pm 1.$$

The transition diagram of $\{Z_l\}$ is illustrated in Figure 1. For the modeling parameters $p^{(+)}_{ij}$, we add an irreducibility assumption. Let $\{Y_l \in \mathbb{Z}^2; l \in \mathbb{Z}_+\}$ be the two dimensional random walk obtained from $\{Z_l\}$ by removing the boundary $\partial S$. We note that the distribution of the increments of $\{Y_l\}$ is given by $p^{(+)}_{ij}$. For this random walk, we assume the following condition:

(ii) $\{Y_l; l \in \mathbb{Z}_+\}$ is irreducible.

Of course, even if $\{Y_l\}$ is not irreducible, the irreducibility condition (i) may still be satisfied because of the reflection at the boundary. We first note the following fact.

Figure 1: Transition diagram of $\{Z_l\}$

Lemma 2.1 Under the condition (ii), at least one of $p^{(+)}_{10}$, $p^{(+)}_{(-1)0}$, $p^{(+)}_{01}$ and $p^{(+)}_{0(-1)}$ is positive.

Proof. Suppose that our claim is not true, that is, $p^{(+)}_{10} = p^{(+)}_{(-1)0} = p^{(+)}_{01} = p^{(+)}_{0(-1)} = 0$. But then, for any $n_1, n_2 \in \mathbb{Z}$, if $Y_0 = (n_1, n_2)$, then $Y_l$ does not arrive at $(n_1 + 1, n_2)$ or $(n_1, n_2 + 1)$ for any $l \in \mathbb{Z}_+$. This contradicts the irreducibility of $\{Y_l\}$.

Our problem is to find necessary and sufficient conditions on $\{Z_l\}$ under which its time reversal is a reflecting random walk on $S$. For this, we assume that $\{Z_l\}$ has a stationary distribution, which is denoted by $\pi$. Then, we can construct the stationary Markov chain $\{Z_l; l \in \mathbb{Z}\}$ with $Z_0$ subject to $\pi$. We define the time-reversed process $\{\widetilde{Z}_l; l \in \mathbb{Z}\}$ by $\widetilde{Z}_l = Z_{-l}$, $l \in \mathbb{Z}$. It is easy to see that $\{\widetilde{Z}_l\}$ is a Markov chain, and its transition probabilities are given by

$$P(\widetilde{Z}_{l+1} = n' \mid \widetilde{Z}_l = n) = \frac{\pi(n')}{\pi(n)} P(Z_{l+1} = n \mid Z_l = n'), \quad n, n' \in S. \tag{2.2}$$

The Markov chain $\{\widetilde{Z}_l\}$ is referred to as the time reversed process under $\pi$ (see, e.g., [1]).
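The reversal formula (2.2) can be checked numerically. The sketch below (our own illustration, not from the paper) builds the kernel (2.1) for toy increment distributions on a truncated quadrant, where increments that would leave the truncation box are kept at the current state; it then computes the stationary distribution and the reversed kernel.

```python
import numpy as np

# Toy skip-free increment distributions for the faces S_0, S_1, S_2 and the
# interior S_+ (our own numbers; the paper writes p^{(0)}, p^{(1)}, p^{(2)}, p^{(+)}).
p0 = {(1, 0): 0.3, (0, 1): 0.3, (0, 0): 0.4}
p1 = {(1, 0): 0.2, (0, 1): 0.2, (-1, 0): 0.4, (0, 0): 0.2}
p2 = {(0, 1): 0.2, (1, 0): 0.2, (0, -1): 0.4, (0, 0): 0.2}
pp = {(1, 0): 0.2, (0, 1): 0.2, (-1, 0): 0.3, (0, -1): 0.3}

def increments(i, j):
    """Increment distribution used at state (i, j), as in (2.1)."""
    if i == 0 and j == 0:
        return p0
    if j == 0:
        return p1
    if i == 0:
        return p2
    return pp

N = 15   # crude truncation of the quadrant, for illustration only
states = [(i, j) for i in range(N + 1) for j in range(N + 1)]
idx = {s: k for k, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for s in states:
    for (di, dj), q in increments(*s).items():
        t = (s[0] + di, s[1] + dj)
        P[idx[s], idx[t] if t in idx else idx[s]] += q

# Stationary distribution pi, then the reversed kernel from (2.2):
#   Ptilde[n, n'] = pi[n'] * P[n', n] / pi[n].
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = np.abs(pi) / np.abs(pi).sum()
Pt = (pi[None, :] * P.T) / pi[:, None]

print(np.allclose(Pt.sum(axis=1), 1.0))   # the reversed kernel is stochastic
print(np.allclose(pi @ Pt, pi))           # with the same stationary distribution
```

Whether this reversed kernel is again a *reflecting random walk*, i.e. homogeneous on each face, is exactly the question the next section answers; for generic parameters it is not.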

3 Characterization of model reversibility

A Markov chain is said to be reversible if its time reversed process is stochastically identical with the original Markov chain. However, this condition is too strong. Instead of this reversibility, we consider a weaker notion of reversibility for a reflecting random walk.

Definition 3.1 The reflecting random walk $\{Z_l\}$ is said to be model reversible if it has a stationary distribution and if its time-reversed process $\{\widetilde{Z}_l\}$ is also a reflecting random walk.

Thus, if $\{Z_l\}$ is model reversible, then the transition probabilities of $\{\widetilde{Z}_l\}$ are given by

$$P(\widetilde{Z}_{l+1} = n' \mid \widetilde{Z}_l = n) = P(\widetilde{X}^{(i)} = n' - n), \quad l \in \mathbb{Z},\ i = 0,1,2,+,\ n \in S_i, \tag{3.1}$$

for some random vectors $\widetilde{X}^{(i)} \equiv (\widetilde{X}^{(i)}_1, \widetilde{X}^{(i)}_2) \in \{-1,0,1\}^2$. From the reflection property of $\{\widetilde{Z}_l\}$, it is required that $\widetilde{X}^{(i)}_1 \ge 0$ for $i = 0,2$ and $\widetilde{X}^{(i)}_2 \ge 0$ for $i = 0,1$. We also note that the distributions of $X^{(i)}$ and $\widetilde{X}^{(i)}$ may not be the same. We are now ready to present conditions for model reversibility.

Theorem 3.1 For the reflecting random walk $\{Z_l\}$, assume the conditions (i) and (ii). Then $\{Z_l\}$ is model reversible if and only if the following conditions hold.

(C1) There exist $c^{(1+)}, c^{(2+)} > 0$ such that $c^{(1+)} p^{(+)}_{i1} = p^{(1)}_{i1}$ and $c^{(2+)} p^{(+)}_{1i} = p^{(2)}_{1i}$ for any $i = 0, \pm 1$.

(C2) There exist $c^{(10)}, c^{(20)} \ge 0$ such that $c^{(10)} p^{(1)}_{1j} = p^{(0)}_{1j}$ and $c^{(20)} p^{(2)}_{j1} = p^{(0)}_{j1}$ for any $j = 0,1$. If $c^{(10)} = 0$ (resp. $c^{(20)} = 0$), then $p^{(1)}_{1j} = 0$ (resp. $p^{(2)}_{j1} = 0$) for any $j = 0,1$.

(C3) If both $c^{(10)} > 0$ and $c^{(20)} > 0$, then $c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$.

(C4) There exist $\eta_1, \eta_2 \in (0,1)$ such that

$$\pi(n,0) = \eta_1^{n-1} \pi(1,0), \quad n \ge 1, \tag{3.2}$$
$$\pi(0,n) = \eta_2^{n-1} \pi(0,1), \quad n \ge 1, \tag{3.3}$$
$$\pi(n_1,n_2) = \eta_1^{n_1-1} \eta_2^{n_2-1} \pi(1,1), \quad n_1, n_2 \ge 1. \tag{3.4}$$

If $c^{(10)} > 0$, then

$$\pi(1,0) = c^{(10)} \eta_1 \pi(0,0), \tag{3.5}$$
$$\pi(0,1) = \frac{c^{(10)} c^{(1+)}}{c^{(2+)}} \eta_2 \pi(0,0), \tag{3.6}$$
$$\pi(1,1) = c^{(10)} c^{(1+)} \eta_1 \eta_2 \pi(0,0), \tag{3.7}$$

and if $c^{(20)} > 0$, then

$$\pi(1,0) = \frac{c^{(20)} c^{(2+)}}{c^{(1+)}} \eta_1 \pi(0,0), \tag{3.8}$$
$$\pi(0,1) = c^{(20)} \eta_2 \pi(0,0), \tag{3.9}$$
$$\pi(1,1) = c^{(20)} c^{(2+)} \eta_1 \eta_2 \pi(0,0). \tag{3.10}$$

Remark 3.1 Under the condition (C2), if $c^{(10)} = c^{(20)} = 0$, then $\{Z_l\}$ is not irreducible, that is, the condition (i) is not satisfied. Therefore, at least one of $c^{(10)} > 0$ and $c^{(20)} > 0$ holds, and we can obtain at least one of (3.5)-(3.7) and (3.8)-(3.10) since $c^{(1+)}, c^{(2+)} > 0$ under the condition (C1).

Remark 3.2 From the condition (C3), if $c^{(10)} > 0$ and $c^{(20)} > 0$, then (3.5)-(3.7) are identical with (3.8)-(3.10).

Remark 3.3 We can obtain a reversibility condition even if the condition (ii) is not satisfied. In that case, the condition (C4) is slightly changed. Such an example is given in Appendix C.

We first prove that (C1)-(C4) are necessary for model reversibility.

Lemma 3.1 Under the conditions (i) and (ii), if $\{\widetilde{Z}_l\}$ is a reflecting random walk, then the stationary distribution satisfies (3.4).

Proof. From Lemma 2.1, we divide our proof into the following three cases.

(D1) $p^{(+)}_{i0} > 0$ and $p^{(+)}_{0j} > 0$ for some $i,j = \pm 1$.

(D2) $p^{(+)}_{i0} > 0$ for some $i = \pm 1$ and $p^{(+)}_{0j} = 0$ for all $j = \pm 1$.

(D3) $p^{(+)}_{0i} > 0$ for some $i = \pm 1$ and $p^{(+)}_{j0} = 0$ for all $j = \pm 1$.

The cases (D2) and (D3) are symmetric, and therefore, we prove (3.4) for (D1) and (D2). For the case (D1), we only consider the case that $p^{(+)}_{10} > 0$ and $p^{(+)}_{01} > 0$, since the other cases are similarly proved. Consider the transition of the reversed process $\{\widetilde{Z}_l\}$ from $n$ to $n' = n - e_1$, where $e_1 = (1,0)$. Then, it follows from (2.1) and (2.2) for all $n - e_1 \in S_+$ (also $n \in S_+$) that

$$P(\widetilde{Z}_{l+1} = n - e_1 \mid \widetilde{Z}_l = n) = \frac{\pi(n - e_1)}{\pi(n)} P(X^{(+)} = (1,0)) = \frac{\pi(n - e_1)}{\pi(n)} p^{(+)}_{10}. \tag{3.11}$$

Since $p^{(+)}_{10} > 0$ and $\{\widetilde{Z}_l\}$ is a reflecting random walk, the right hand side of (3.11) must be constant for all $n, n - e_1 \in S_+$, and hence so must the ratio $\pi(n - e_1)/\pi(n)$. Denote this constant by $\eta_1^{-1}$, i.e.,

$$\eta_1^{-1} = \frac{\pi(n - e_1)}{\pi(n)}, \quad n, n - e_1 \in S_+. \tag{3.12}$$

Similarly, we can show that $\pi(n - e_2)/\pi(n)$ is a positive constant for all $n, n - e_2 \in S_+$, and denote it by $\eta_2^{-1}$. Thus, for $n = (n_1, n_2) \in S_+$, we have

$$\pi(n) = \eta_1 \pi(n - e_1) = \eta_1^{n_1 - 1} \pi(1, n_2) = \eta_1^{n_1 - 1} \eta_2^{n_2 - 1} \pi(1,1).$$

Obviously, from this equation, we have $\eta_1, \eta_2 \in (0,1)$ since $\sum_{n \in S_+} \pi(n) \le 1$.

We next consider the case (D2). Since $p^{(+)}_{i0} > 0$ for some $i \in \{-1,1\}$, we also have (3.12) in the case (D2). In what follows, we prove that $\pi(n - e_2)/\pi(n)$ must also be constant in the case (D2). First assume that both $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$. Under the condition (ii) and the case (D2) with $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$, it is clear that $p^{(+)}_{i(-1)} > 0$ and $p^{(+)}_{j1} > 0$ hold for some $i,j \in \{-1,1\}$, and we may assume $p^{(+)}_{(-1)1} > 0$ since the proofs of the other cases are similar. Then, it again follows from (2.1) and (2.2) that, for $n = (n_1, n_2) \in S_+$ and $n' = n + e_1 - e_2$, we have

$$P(\widetilde{Z}_{l+1} = n + e_1 - e_2 \mid \widetilde{Z}_l = n) = \frac{\pi(n + e_1 - e_2)}{\pi(n)} p^{(+)}_{(-1)1} = \frac{\pi(n + e_1 - e_2)}{\pi(n + e_1)} \frac{\pi(n + e_1)}{\pi(n)} p^{(+)}_{(-1)1} = \frac{\pi(m - e_2)}{\pi(m)} \eta_1 p^{(+)}_{(-1)1}, \quad m \equiv n + e_1 \in S_+,$$

where the last equality is obtained by (3.12). Since $p^{(+)}_{(-1)1} > 0$ and the left hand side must be independent of $m$, we have

$$\frac{\pi(m - e_2)}{\pi(m)} = \eta_2^{-1}, \quad m, m - e_2 \in S_+.$$

Thus, we have (3.4) in the case (D2) with $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$.

The rest of the proof for (D2) concerns the cases $p^{(+)}_{10} > 0$, $p^{(+)}_{(-1)0} = 0$ and $p^{(+)}_{10} = 0$, $p^{(+)}_{(-1)0} > 0$. These are also symmetric, so we only consider the case $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} = 0$. Again from the condition (ii), we must have $p^{(+)}_{(-1)1} > 0$ and $p^{(+)}_{(-1)(-1)} > 0$ under the conditions $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} = 0$. Thus, using the same argument as in the case $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$, we obtain (3.4).

Lemma 3.2 Under the same assumptions as in Theorem 3.1, if $\{\widetilde{Z}_l\}$ is a reflecting random walk, then we have (3.2) and (3.3).

Proof. We only derive (3.2), since (3.3) is similarly proved. We separately consider the following cases:

(E1) Either $p^{(1)}_{10} > 0$ or $p^{(1)}_{(-1)0} > 0$.

(E2) Both $p^{(1)}_{10} = 0$ and $p^{(1)}_{(-1)0} = 0$.

For (E1), adapting the proof of the case (D1) in Lemma 3.1 from $S_+$ to $S_1$, we have, for some constant $\alpha_1 \in (0,1)$,

$$\alpha_1^{-1} = \frac{\pi(n - e_1)}{\pi(n)}, \quad n, n - e_1 \in S_1. \tag{3.13}$$

From the condition (ii), $p^{(+)}_{i(-1)} > 0$ for some $i = 0,\pm 1$, and therefore, from (3.12) and (3.13), for $n = (n_1, 0) \in S_1$ and $n' = (n_1 - i, 1) \in S_+$,

$$P(\widetilde{Z}_{l+1} = (n_1 - i, 1) \mid \widetilde{Z}_l = (n_1, 0)) = \frac{\pi(n_1 - i, 1)}{\pi(n_1, 0)} p^{(+)}_{i(-1)} = \eta_1^{n_1 - i - 1} \alpha_1^{-(n_1 - 1)} \frac{\pi(1,1)}{\pi(1,0)} p^{(+)}_{i(-1)}. \tag{3.14}$$

The left hand side of this equation is independent of $n_1$ since $\{\widetilde{Z}_l\}$ is a reflecting random walk, and therefore, we have $\alpha_1 = \eta_1$, which implies (3.2).

For (E2), from the irreducibility condition (i), $p^{(1)}_{i1} > 0$ and $p^{(+)}_{j(-1)} > 0$ for some $i,j = 0,\pm 1$. Then, from (2.1), (2.2) and (3.1), we have, for $n = (n_1 + j, 0) \in S_1$ and $n' = (n_1, 1) \in S_+$,

$$P(\widetilde{Z}_{l+1} = (n_1, 1) \mid \widetilde{Z}_l = (n_1 + j, 0)) = \frac{\pi(n_1, 1)}{\pi(n_1 + j, 0)} p^{(+)}_{j(-1)}. \tag{3.15}$$

Since $\{Z_l\}$ is model reversible and $p^{(+)}_{j(-1)} > 0$, the left hand side of this equation is a positive constant which depends only on $j$; denote it by $\widetilde{p}^{(1)}_{(-j)1}$. Similarly, for $n = (n_1 + i, 1) \in S_+$ and $n' = (n_1, 0) \in S_1$,

$$P(\widetilde{Z}_{l+1} = (n_1, 0) \mid \widetilde{Z}_l = (n_1 + i, 1)) = \frac{\pi(n_1, 0)}{\pi(n_1 + i, 1)} p^{(1)}_{(-i)1} = \frac{\pi(n_1, 0)}{\pi(n_1 + i + j, 0)} \frac{\pi(n_1 + i + j, 0)}{\pi(n_1 + i, 1)} p^{(1)}_{(-i)1} = \frac{\pi(n_1, 0)}{\pi(n_1 + i + j, 0)} \frac{p^{(+)}_{j(-1)}}{\widetilde{p}^{(1)}_{(-j)1}} p^{(1)}_{(-i)1},$$

where the last equality is given by (3.15). Since the left hand side of this equation also does not depend on $n_1$, the ratio $\pi(n_1,0)/\pi(n_1 + i + j,0)$ must be constant in $n_1$. Hence, if $i + j = 1$ holds, we directly obtain, for the $\eta_1 \in (0,1)$ satisfying (3.12),

$$\frac{\pi(n_1, 0)}{\pi(n_1 + 1, 0)} = \eta_1^{-1}, \quad n_1 \ge 1, \tag{3.16}$$

where the value $\eta_1^{-1}$ is identified from (3.15): since $\pi(n_1,1)/\pi(n_1 + j,0)$ is constant in $n_1$ and $\pi(n_1,1)$ is geometric with rate $\eta_1$ by Lemma 3.1, so is $\pi(n_1,0)$; and we have (3.2).

On the other hand, from the irreducibility condition (i), if $i + j \ne 1$, then at least one of $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$ holds, and we have, for $k = 1$ or $k = -1$,

$$P(\widetilde{Z}_{l+1} = (n_1, 1) \mid \widetilde{Z}_l = (n_1 + k, 1)) = \frac{\pi(n_1, 1)}{\pi(n_1 + k, 1)} p^{(+)}_{k0}.$$

From Lemma 3.1, the left hand side of this equation is independent of $n_1$; denote it by $\widetilde{p}^{(+)}_{(-k)0}$. Thus, we obtain, from (3.15),

$$P(\widetilde{Z}_{l+1} = (n_1, 0) \mid \widetilde{Z}_l = (n_1 + i, 1)) = \frac{\pi(n_1, 0)}{\pi(n_1 + i, 1)} p^{(1)}_{(-i)1} = \frac{\pi(n_1, 0)}{\pi(n_1 + i + j + k, 0)} \frac{\pi(n_1 + i + k, 1)}{\pi(n_1 + i, 1)} \frac{\pi(n_1 + i + j + k, 0)}{\pi(n_1 + i + k, 1)} p^{(1)}_{(-i)1} = \frac{\pi(n_1, 0)}{\pi(n_1 + i + j + k, 0)} \frac{p^{(+)}_{k0}}{\widetilde{p}^{(+)}_{(-k)0}} \frac{p^{(+)}_{j(-1)}}{\widetilde{p}^{(1)}_{(-j)1}} p^{(1)}_{(-i)1}.$$

From the irreducibility condition (i) again, if $i + j \ne 1$, we must have $i + j + k = 1$, and therefore, we again obtain (3.16). This completes the proof since (3.16) implies (3.2).

Lemma 3.3 Under the conditions of Theorem 3.1, if $\{\widetilde{Z}_l\}$ is a reflecting random walk, then (C1), (C2) and (C3) hold, and the stationary distribution satisfies (3.5)-(3.10).

The proof of Lemma 3.3 is deferred to Appendix A. We are now ready to prove Theorem 3.1.

Proof of Theorem 3.1. Lemmas 3.1, 3.2 and 3.3 already prove the necessity part of Theorem 3.1. For sufficiency, suppose that the conditions (C1)-(C4) hold. Then, from (2.1) and (2.2), we have, for $i = 0,1,2,+$, $j,k = 0,\pm 1$, $n \in S_i$ and $n' = n + (j,k) \in S_i$,

$$P(\widetilde{Z}_{l+1} = n' \mid \widetilde{Z}_l = n) = \frac{\pi(n')}{\pi(n)} P(X^{(i)} = n - n') = \frac{\pi(n + (j,k))}{\pi(n)} P(X^{(i)} = (-j,-k)) = \eta_1^j \eta_2^k p^{(i)}_{(-j)(-k)}. \tag{3.17}$$

Thus, the transition probabilities of $\{\widetilde{Z}_l\}$ from $S_i$ to $S_i$ are homogeneous. We next verify that the downward transitions from $S_+$ to $S_1$ and from $S_+$ to $S_2$ are homogeneous. To this end, consider the transitions from $S_+$ to $S_1$. For $n = (n_1, 1) \in S_+$, $n' = (n_1 + i, 0) \in S_1$ and $i = 0,\pm 1$, we have, by the conditions (C1)-(C4),

$$P(\widetilde{Z}_{l+1} = (n_1 + i, 0) \mid \widetilde{Z}_l = (n_1, 1)) = \frac{\pi(n_1 + i, 0)}{\pi(n_1, 1)} P(X^{(1)} = (-i, 1)) = \eta_1^i \frac{\pi(1,0)}{\pi(1,1)} p^{(1)}_{(-i)1} = \frac{1}{c^{(1+)}} \eta_1^i \eta_2^{-1} p^{(1)}_{(-i)1} = \eta_1^i \eta_2^{-1} p^{(+)}_{(-i)1} = P(\widetilde{Z}_{l+1} = (n_1 + i, m - 1) \mid \widetilde{Z}_l = (n_1, m)), \quad m \ge 2,$$

where the last equality is obtained by (3.17). Note that $(n_1, m), (n_1 + i, m - 1) \in S_+$ since we assume $n_1, n_1 + i \ge 1$. Thus, the downward transitions in the direction of the $n_1$-axis are the same on the boundary as in the interior. By a similar argument, the other downward transitions are also homogeneous. That is, $\{\widetilde{Z}_l\}$ has homogeneous transitions on each subset $S_i$. Hence, $\{\widetilde{Z}_l\}$ is a reflecting random walk if the conditions (C1)-(C4) hold. This completes the proof.
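The proportionality conditions (C1)-(C3) are easy to test mechanically. The helper below is our own sketch, with toy numbers: it returns the constant $c$ when one increment distribution is a constant multiple of another on the required index set, and None otherwise.

```python
from math import isclose

def ratio_constant(num, den, keys):
    """Return c with num[k] == c * den[k] for all k in keys, or None if no
    such constant exists (missing keys count as probability zero)."""
    c = None
    for k in keys:
        a, b = num.get(k, 0.0), den.get(k, 0.0)
        if b == 0.0:
            if a != 0.0:
                return None        # num positive where den vanishes
            continue
        if c is None:
            c = a / b
        elif not isclose(c, a / b):
            return None
    return c

# Toy distributions p^{(+)} and p^{(1)} (our own numbers) satisfying
# p^{(1)}_{i1} = 2 * p^{(+)}_{i1} for i = -1, 0, 1, as required by (C1).
pp = {(1, 0): 0.2, (0, 1): 0.1, (-1, 1): 0.1, (-1, 0): 0.3, (0, -1): 0.2, (1, -1): 0.1}
p1 = {(1, 0): 0.2, (0, 1): 0.2, (-1, 1): 0.2, (-1, 0): 0.3, (0, 0): 0.1}

c1p = ratio_constant(p1, pp, [(-1, 1), (0, 1), (1, 1)])
print(c1p)   # 2.0: condition (C1) holds for this pair with c^{(1+)} = 2
```

The same helper applied to the pairs in (C2) yields $c^{(10)}$ and $c^{(20)}$, after which (C3) is a single equality test.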

3.1 A geometric view of the model reversibility condition

Theorem 3.1 characterizes model reversibility. However, the condition (C4) may not be easy to check because it involves the stationary distribution. Thus, we replace it by a condition that uses only the modeling primitives.

Theorem 3.2 Suppose that the conditions (C1), (C2) and (C3) hold. For $z \in \mathbb{R}^2$, let

$$\gamma_0(z) = p^{(0)}_{00} + c^{(10)} p^{(1)}_{(-1)0} z_1^{-1} + c^{(20)} p^{(2)}_{0(-1)} z_2^{-1} + c^{(0)} p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1},$$
$$\gamma_1(z) = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)} \left( p^{(+)}_{1(-1)} z_1 z_2^{-1} + p^{(+)}_{0(-1)} z_2^{-1} + p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1} \right),$$
$$\gamma_2(z) = p^{(2)}_{00} + p^{(2)}_{01} z_2 + p^{(2)}_{0(-1)} z_2^{-1} + c^{(2+)} \left( p^{(+)}_{(-1)1} z_1^{-1} z_2 + p^{(+)}_{(-1)0} z_1^{-1} + p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1} \right),$$
$$\gamma_+(z) = \sum_{i=-1}^{1} \sum_{j=-1}^{1} p^{(+)}_{ij} z_1^i z_2^j,$$

where $c^{(k+)}$ and $c^{(k0)}$ are the nonnegative constants of Theorem 3.1 for $k = 1,2$, and

$$c^{(0)} \equiv \max\left(c^{(10)} c^{(1+)}, c^{(20)} c^{(2+)}\right).$$

Then, the condition (C4) is equivalent to

(C5) There exist $\eta_1, \eta_2 \in (0,1)$ satisfying $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for all $i = 0,1,2,+$.

Remark 3.4 From the condition (C3), if $c^{(10)} > 0$ and $c^{(20)} > 0$, then we have $c^{(0)} = c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$. Note that $\gamma_+$ is the generating function of $X^{(+)}$.

From this theorem, we obtain a model reversibility condition stated purely in terms of the modeling parameters.

Corollary 3.1 Suppose the conditions (i) and (ii). Then $\{Z_l\}$ is model reversible if and only if the conditions (C1), (C2), (C3) and (C5) hold.

Remark 3.5 Latouche and Miyazawa [6] derive necessary and sufficient conditions for the stationary distribution of the two dimensional reflecting random walk to have a product form solution. Similarly to the condition (C5), their conditions have geometric interpretations.
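Condition (C5) can be explored numerically. As a minimal sketch with a toy increment distribution of our own choosing (not from the paper), the code below checks $\gamma_+(1,1) = 1$ and locates a non-trivial point of the curve $\Gamma_+$ by fixing $z_2 = 1$ and solving the resulting quadratic in $z_1$.

```python
# Toy interior increment distribution p^{(+)} of our own choosing.
pp = {(1, 0): 0.15, (0, 1): 0.15, (-1, 0): 0.35, (0, -1): 0.35}

def gamma_plus(z1, z2):
    """gamma_+(z) = sum_{i,j} p^{(+)}_{ij} z1^i z2^j as in Theorem 3.2."""
    return sum(p * z1 ** i * z2 ** j for (i, j), p in pp.items())

# gamma_+(1, 1) = 1 always, since the probabilities sum to one.
print(gamma_plus(1.0, 1.0))

# A non-trivial point of Gamma_+ = {z : gamma_+(z) = 1}: with z2 = 1,
#   0.15 z1 + 0.35 / z1 + 0.5 = 1   <=>   0.15 z1^2 - 0.5 z1 + 0.35 = 0,
# whose roots are z1 = 1 and z1 = 7/3 (so eta1 = 3/7 lies in (0,1)).
z1 = (0.5 + (0.5 ** 2 - 4 * 0.15 * 0.35) ** 0.5) / (2 * 0.15)
print(z1, gamma_plus(z1, 1.0))
```

A full check of (C5) would intersect all four curves $\Gamma_0, \Gamma_1, \Gamma_2, \Gamma_+$ at one common point $(\eta_1^{-1}, \eta_2^{-1})$; the sketch only illustrates the interior curve.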

Remark 3.6 If the conditions

$$p^{(+)}_{11} = p^{(0)}_{11} = p^{(1)}_{11} = p^{(2)}_{11}, \tag{3.18}$$
$$p^{(0)}_{10} = p^{(1)}_{10}, \quad p^{(0)}_{01} = p^{(2)}_{01}, \quad p^{(2)}_{1j} = p^{(+)}_{1j}, \quad p^{(1)}_{j1} = p^{(+)}_{j1}, \quad j = 0,-1, \tag{3.19}$$

are satisfied, then (C1), (C2) and (C3) hold, and the stationary distribution has an exactly geometric form, and therefore a product form solution. Namely, under the conditions (3.18) and (3.19), the condition (C5) is identical with the product form condition (see Theorem 3.2 in [6]).

We briefly explain the condition (C5). Obviously, $\gamma_+(1,1) = 1$. Moreover, each set $\Gamma_i \equiv \{z \in \mathbb{R}^2; \gamma_i(z) = 1\}$ is nonnegative-directed convex (see, e.g., [5]). This means that $\Gamma_i$ describes a locally convex curve in the nonnegative quadrant for $i = 0,1,2,+$. In Figure 2, we illustrate the curves $\Gamma_i$ in a case where the condition (C5) holds.

Figure 2: Example of the curves $\Gamma_i$

In what follows, we prove Theorem 3.2. We first state the following lemma.

Lemma 3.4 Suppose the condition (C1) holds. The stationary distribution satisfies (3.2), (3.3) and (3.4) if and only if $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for $i = 1,2,+$.

We defer the proof of Lemma 3.4 to Appendix B.

Proof of Theorem 3.2. By Lemma 3.4, (3.2), (3.3) and (3.4) are equivalent to $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for $i = 1,2,+$. Here, we prove that the stationary distribution

satisfies (3.5)-(3.10) if and only if $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for $i = 0,1,2$ under the conditions (C2) and (C3).

We first prove necessity, that is, if the stationary distribution satisfies (3.5)-(3.10), then $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for $i = 0,1,2$. From the stationary equation, for $n \ge 2$,

$$(1 - p^{(1)}_{00}) \pi(n,0) = p^{(1)}_{10} \pi(n-1,0) + p^{(1)}_{(-1)0} \pi(n+1,0) + p^{(+)}_{1(-1)} \pi(n-1,1) + p^{(+)}_{0(-1)} \pi(n,1) + p^{(+)}_{(-1)(-1)} \pi(n+1,1),$$

and therefore, from Lemma 3.4,

$$(1 - p^{(1)}_{00}) \pi(1,0) = p^{(1)}_{10} \eta_1^{-1} \pi(1,0) + p^{(1)}_{(-1)0} \eta_1 \pi(1,0) + p^{(+)}_{1(-1)} \eta_1^{-1} \pi(1,1) + p^{(+)}_{0(-1)} \pi(1,1) + p^{(+)}_{(-1)(-1)} \eta_1 \pi(1,1). \tag{3.20}$$

If $c^{(10)} > 0$, substituting (3.5) and (3.7) into this equation, we have $\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = 1$. Even if $c^{(10)} = 0$, we still have $\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = 1$ using (3.8) and (3.10). Similarly, from the stationary equations on the boundary, we obtain $\gamma_2(\eta_1^{-1}, \eta_2^{-1}) = 1$ and $\gamma_0(\eta_1^{-1}, \eta_2^{-1}) = 1$.

We next verify sufficiency. From (3.20), we have

$$\frac{\pi(1,1)}{\pi(1,0)} = \frac{1 - p^{(1)}_{00} - p^{(1)}_{10} \eta_1^{-1} - p^{(1)}_{(-1)0} \eta_1}{p^{(+)}_{1(-1)} \eta_1^{-1} + p^{(+)}_{0(-1)} + p^{(+)}_{(-1)(-1)} \eta_1}.$$

The condition $\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = 1$ is equivalent to

$$1 - p^{(1)}_{00} - p^{(1)}_{10} \eta_1^{-1} - p^{(1)}_{(-1)0} \eta_1 = c^{(1+)} \eta_2 \left( p^{(+)}_{1(-1)} \eta_1^{-1} + p^{(+)}_{0(-1)} + p^{(+)}_{(-1)(-1)} \eta_1 \right).$$

Thus, we obtain

$$\pi(1,1) = c^{(1+)} \eta_2 \pi(1,0). \tag{3.21}$$

Using a similar argument, if $\gamma_2(\eta_1^{-1}, \eta_2^{-1}) = 1$, we have

$$\pi(1,1) = c^{(2+)} \eta_1 \pi(0,1). \tag{3.22}$$

We now consider the stationary equation at the origin, that is,

$$(1 - p^{(0)}_{00}) \pi(0,0) = p^{(1)}_{(-1)0} \pi(1,0) + p^{(2)}_{0(-1)} \pi(0,1) + p^{(+)}_{(-1)(-1)} \pi(1,1). \tag{3.23}$$

From (3.21), (3.22) and (3.23), we have

$$(1 - p^{(0)}_{00}) \pi(0,0) = \left( p^{(1)}_{(-1)0} + \frac{c^{(1+)} \eta_2}{c^{(2+)} \eta_1} p^{(2)}_{0(-1)} + c^{(1+)} \eta_2 p^{(+)}_{(-1)(-1)} \right) \pi(1,0).$$

Moreover, from $\gamma_0(\eta_1^{-1}, \eta_2^{-1}) = 1$ and the conditions (C2) and (C3), if $c^{(10)} > 0$,

$$1 - p^{(0)}_{00} = \left( p^{(1)}_{(-1)0} + \frac{c^{(1+)} \eta_2}{c^{(2+)} \eta_1} p^{(2)}_{0(-1)} + c^{(1+)} \eta_2 p^{(+)}_{(-1)(-1)} \right) c^{(10)} \eta_1,$$

which implies $\pi(1,0) = c^{(10)} \eta_1 \pi(0,0)$. Thus, from (3.21), (3.22) and this equation, we obtain (3.5)-(3.7). This completes the proof of the theorem, since a similar argument yields (3.8)-(3.10) if $c^{(20)} > 0$.

4 Application to a queueing network

In this section, using model reversibility, we construct a discrete time queueing network whose stationary distribution is not of product form but has a closed form. For this, we modify a discrete time Jackson network, which is introduced below. We define the reflecting random walk by

$$p^{(i)}_{11} = 0, \quad p^{(i)}_{(-1)(-1)} = 0, \quad p^{(i)}_{10} = \lambda_1, \quad p^{(i)}_{01} = \lambda_2, \quad i = 0,1,2,+,$$
$$p^{(j)}_{(-1)0} = \mu_1 r_{10}, \quad p^{(j)}_{(-1)1} = \mu_1 r_{12}, \quad j = 1,+,$$
$$p^{(k)}_{0(-1)} = \mu_2 r_{20}, \quad p^{(k)}_{1(-1)} = \mu_2 r_{21}, \quad k = 2,+,$$
$$p^{(+)}_{00} = \mu_1 r_{11} + \mu_2 r_{22}, \quad p^{(1)}_{00} = \mu_1 r_{11} + \mu_2, \quad p^{(2)}_{00} = \mu_1 + \mu_2 r_{22}, \quad p^{(0)}_{00} = \mu_1 + \mu_2,$$

where all constants are positive and

$$\lambda_1 + \lambda_2 + \mu_1 + \mu_2 = 1, \quad r_{10} + r_{11} + r_{12} = 1, \quad r_{20} + r_{21} + r_{22} = 1.$$

This random walk is called a discrete time Jackson network. For $i = 1,2$, let $\alpha_i$ be the solution of the following traffic equations:

$$\alpha_1 = \lambda_1 + \alpha_2 r_{21} + \alpha_1 r_{11}, \quad \alpha_2 = \lambda_2 + \alpha_1 r_{12} + \alpha_2 r_{22}. \tag{4.1}$$

The stability condition for this model is given by

$$\rho_1 \equiv \frac{\alpha_1}{\mu_1} < 1, \quad \rho_2 \equiv \frac{\alpha_2}{\mu_2} < 1.$$
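The traffic equations (4.1) are a 2x2 linear system and can be solved directly. The parameter values below are a toy choice of our own, satisfying $\lambda_1 + \lambda_2 + \mu_1 + \mu_2 = 1$ and stochastic routing rows.

```python
import numpy as np

# Traffic equations (4.1) rearranged as a linear system:
#   (1 - r11) alpha1 - r21 alpha2 = lam1
#   -r12 alpha1 + (1 - r22) alpha2 = lam2
lam1, lam2, mu1, mu2 = 0.1, 0.1, 0.4, 0.4
r11, r12, r21, r22 = 0.1, 0.3, 0.3, 0.1   # r10 = r20 = 0.6 complete the rows

A = np.array([[1.0 - r11, -r21],
              [-r12, 1.0 - r22]])
alpha = np.linalg.solve(A, np.array([lam1, lam2]))
rho = alpha / np.array([mu1, mu2])
print(alpha, rho, bool(np.all(rho < 1.0)))   # stable: rho1, rho2 < 1
```

For this symmetric choice, $\alpha_1 = \alpha_2 = 1/6$ and $\rho_1 = \rho_2 = 5/12$, so the network is stable.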

It is well known that the stationary distribution of the discrete time Jackson network has a product form solution, that is, the stationary distribution is given by

$$\pi(n_1, n_2) = (1 - \rho_1)(1 - \rho_2) \rho_1^{n_1} \rho_2^{n_2}, \quad n_1, n_2 \ge 0.$$

We modify the discrete time Jackson network to have extra arrivals at empty nodes. We assume that the arrival rate at a node increases when that node is empty. In addition, we also assume that a departing customer which would return to its own node in the Jackson network is always sent to the other node. That is, the parameters of the Jackson network are changed as follows:

$$p^{(1)}_{01} = \lambda_2 + \lambda^{(1)}_2, \quad p^{(2)}_{10} = \lambda_1 + \lambda^{(2)}_1, \quad p^{(0)}_{10} = \lambda_1 + \lambda^{(0)}_1, \quad p^{(0)}_{01} = \lambda_2 + \lambda^{(0)}_2,$$
$$p^{(1)}_{(-1)1} = \mu_1 (r_{12} + r_{11}), \quad p^{(2)}_{1(-1)} = \mu_2 (r_{21} + r_{22}),$$
$$p^{(1)}_{00} = \mu_2 - \lambda^{(1)}_2, \quad p^{(2)}_{00} = \mu_1 - \lambda^{(2)}_1, \quad p^{(0)}_{00} = \mu_1 + \mu_2 - \lambda^{(0)}_1 - \lambda^{(0)}_2,$$

where we assume $0 \le \lambda^{(1)}_2 \le \mu_2$, $0 \le \lambda^{(2)}_1 \le \mu_1$ and $0 \le \lambda^{(0)}_1 + \lambda^{(0)}_2 \le \mu_1 + \mu_2$. We refer to this random walk as a discrete time Jackson network with extra arrivals at empty nodes. In Figure 3, we depict the transition diagram of this queueing network.

Figure 3: Transition diagram

For the discrete time Jackson network with extra arrivals at empty nodes, the condition (C2) is automatically satisfied. Assume that

$$\frac{\lambda_2 + \lambda^{(1)}_2}{\lambda_2} = \frac{r_{12} + r_{11}}{r_{12}}, \quad \frac{\lambda_1 + \lambda^{(2)}_1}{\lambda_1} = \frac{r_{21} + r_{22}}{r_{21}}. \tag{4.2}$$

Then, the condition (C1) is satisfied, and we have

$$c^{(1+)} = 1 + \frac{\lambda^{(1)}_2}{\lambda_2}, \quad c^{(2+)} = 1 + \frac{\lambda^{(2)}_1}{\lambda_1}, \quad c^{(10)} = 1 + \frac{\lambda^{(0)}_1}{\lambda_1}, \quad c^{(20)} = 1 + \frac{\lambda^{(0)}_2}{\lambda_2}. \tag{4.3}$$

We also assume the condition (C3), that is, $c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$, which is equivalent to

$$\left(1 + \frac{\lambda^{(0)}_1}{\lambda_1}\right)\left(1 + \frac{\lambda^{(1)}_2}{\lambda_2}\right) = \left(1 + \frac{\lambda^{(0)}_2}{\lambda_2}\right)\left(1 + \frac{\lambda^{(2)}_1}{\lambda_1}\right). \tag{4.4}$$

Moreover, for the condition (C5), we assume the following conditions:

$$\lambda_1 = \lambda_2 \equiv \lambda, \tag{4.5}$$
$$\frac{r_{12}}{r_{10}} = \frac{r_{21}}{r_{20}}. \tag{4.6}$$

Then, we can confirm that the condition (C5) is satisfied with $\eta_1 = \rho_1$ and $\eta_2 = \rho_2$ (see Appendix D). Thus, under the conditions (4.2)-(4.6), the reflecting random walk of the discrete time Jackson network with extra arrivals at empty nodes is model reversible. From (4.2), (4.4) and (4.5), we have

$$\frac{\lambda + \lambda^{(0)}_1}{\lambda + \lambda^{(0)}_2} = \frac{\lambda + \lambda^{(2)}_1}{\lambda + \lambda^{(1)}_2}, \tag{4.7}$$
$$\lambda^{(2)}_1 = \frac{r_{22}}{r_{21}} \lambda, \quad \lambda^{(1)}_2 = \frac{r_{11}}{r_{12}} \lambda. \tag{4.8}$$

These imply that $\lambda^{(2)}_1$, $\lambda^{(1)}_2$, $\lambda^{(0)}_1$ and $\lambda^{(0)}_2$ are determined by $\lambda$, $r_{11}$, $r_{12}$, $r_{21}$ and $r_{22}$. It is easy to see that the condition (4.4) is satisfied if

$$\lambda^{(2)}_1 = \lambda^{(0)}_1, \quad \lambda^{(1)}_2 = \lambda^{(0)}_2. \tag{4.9}$$

In addition, under the model reversibility conditions (4.2)-(4.6), the stationary distribution of this network has a product form solution if and only if (4.9) is satisfied (see Appendix E). From (4.7) and (4.8), there may be cases where (4.9) does not hold while (4.2)-(4.6) are satisfied. We give such an example below:

$$\lambda_1 = \lambda_2 = \;, \quad \mu_1 = \;, \quad \mu_2 = \;, \quad r_{10} = 0.368, \quad r_{11} = r_{12} = \;, \quad r_{20} = 0.3784, \quad r_{21} = \;, \quad r_{22} = \;,$$
$$\lambda^{(1)}_2 = \;, \quad \lambda^{(2)}_1 = \;, \quad \lambda^{(0)}_1 = \;, \quad \lambda^{(0)}_2 = \;.$$

Thus, the Jackson network with extra arrivals at empty nodes may not have a product form solution even when it is model reversible.
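The construction above can be sketched numerically. With toy routing probabilities of our own choosing, (4.8) pins down $\lambda^{(2)}_1$ and $\lambda^{(1)}_2$, while (4.7) fixes only the ratio of $\lambda^{(0)}_1$ to $\lambda^{(0)}_2$; choosing them away from $\lambda^{(2)}_1, \lambda^{(1)}_2$ violates (4.9), giving a model reversible network without product form.

```python
# Toy parameters of our own: lam1 = lam2 = lam and symmetric routing,
# so that (4.5) and (4.6) hold.
lam = 0.1
r11, r12, r21, r22 = 0.1, 0.3, 0.3, 0.1

lam2_1 = (r22 / r21) * lam      # lambda^{(2)}_1 from (4.8)
lam1_2 = (r11 / r12) * lam      # lambda^{(1)}_2 from (4.8)

# (4.7) fixes only the ratio (lam + lam0_1)/(lam + lam0_2); pick lam0_2
# freely and solve for lam0_1.
target = (lam + lam2_1) / (lam + lam1_2)
lam0_2 = 0.05
lam0_1 = target * (lam + lam0_2) - lam

print(lam2_1, lam1_2, lam0_1, lam0_2)
# (4.9) fails here (lam0_1 != lam2_1), so the stationary distribution
# is not of product form although the walk is model reversible.
print(abs(lam0_1 - lam2_1) > 1e-6)
```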

Acknowledgements

This research was supported in part by the Japan Society for the Promotion of Science under grant No. .

A The proof of Lemma 3.3

We first prove the condition (C1). For $n_2 \ge 1$, substituting $n = (1, n_2)$ and $n' = (0, n_2)$ into (3.1), we have

$$P(\widetilde{Z}_{l+1} = (0, n_2) \mid \widetilde{Z}_l = (1, n_2)) = P(\widetilde{X}^{(+)} = (-1, 0)),$$

since $(1, n_2) \in S_+$. From (2.1), (2.2) and Lemma 3.1, we have

$$P(\widetilde{X}^{(+)} = (-1, 0)) = \frac{\pi(0, n_2)}{\pi(1, n_2)} P(Z_{l+1} = n \mid Z_l = n') = \frac{\pi(0,1)}{\pi(1,1)} P(X^{(2)} = (1,0)) = \frac{\pi(0,1)}{\pi(1,1)} p^{(2)}_{10}. \tag{A.1}$$

Thus, from (3.11) and (3.12), we have

$$\eta_1^{-1} p^{(+)}_{10} = \frac{\pi(0,1)}{\pi(1,1)} p^{(2)}_{10}.$$

This equation implies that $p^{(2)}_{10} = 0$ if and only if $p^{(+)}_{10} = 0$. Conversely, if $p^{(2)}_{10}, p^{(+)}_{10} > 0$, then we must have

$$\frac{p^{(2)}_{10}}{p^{(+)}_{10}} = \frac{\pi(1,1)}{\pi(0,1)} \eta_1^{-1}. \tag{A.2}$$

On the other hand, consider the case $n = (1, n_2)$ and $n' = (0, n_2 + i)$ for $n_2 \ge 2$ and $i = \pm 1$. Since we can replace $n_2$, $(-1,0)$ and $(1,0)$ by $n_2 + i$, $(-1,i)$ and $(1,i)$ in (3.11) and (A.1) respectively, we have

$$\eta_1^{-1} p^{(+)}_{1i} = \frac{\pi(0,1)}{\pi(1,1)} p^{(2)}_{1i}, \quad i = \pm 1.$$

From the condition (ii), note that at least one of $p^{(+)}_{1i}$ for $i = 0,\pm 1$ is positive. Thus, we must have $c^{(2+)} > 0$ and $c^{(2+)} p^{(+)}_{1i} = p^{(2)}_{1i}$ if $\{Z_l\}$ is model reversible.

For $n_1 \ge 1$ and $i = 0,\pm 1$, substituting $n = (n_1, 1)$ and $n' = (n_1 + i, 0)$ into (3.1),

$$P(\widetilde{Z}_{l+1} = (n_1 + i, 0) \mid \widetilde{Z}_l = (n_1, 1)) = P(\widetilde{X}^{(+)} = (i, -1)).$$

Since $n' \in S_1$, we have, using Lemmas 3.1 and 3.2,

$$P(\widetilde{X}^{(+)} = (i, -1)) = \frac{\pi(n_1 + i, 0)}{\pi(n_1, 1)} P(Z_{l+1} = n \mid Z_l = n') = \frac{\pi(1,0)}{\pi(1,1)} p^{(1)}_{(-i)1} \eta_1^i, \quad i = 0,\pm 1.$$

On the other hand, from Lemma 3.1, for $n, n' = n + (i,-1) \in S_+$, we have

$$P(\widetilde{X}^{(+)} = (i, -1)) = \frac{\pi(n + (i,-1))}{\pi(n)} P(X^{(+)} = (-i, 1)) = p^{(+)}_{(-i)1} \eta_1^i \eta_2^{-1}, \quad i = 0,\pm 1.$$

Thus, we obtain, for all $i = 0,\pm 1$,

$$\frac{\pi(1,0)}{\pi(1,1)} p^{(1)}_{(-i)1} = p^{(+)}_{(-i)1} \eta_2^{-1}.$$

In addition, by the condition (ii), $p^{(+)}_{(-i)1} > 0$ for some $i = 0,\pm 1$, and

$$\frac{p^{(1)}_{(-i)1}}{p^{(+)}_{(-i)1}} = \frac{\pi(1,1)}{\pi(1,0)} \eta_2^{-1}. \tag{A.3}$$

Using a similar argument, we also have

$$\frac{p^{(2)}_{1(-i)}}{p^{(+)}_{1(-i)}} = \frac{\pi(1,1)}{\pi(0,1)} \eta_1^{-1}. \tag{A.4}$$

Hence, there exists $c^{(1+)} > 0$ such that $c^{(1+)} p^{(+)}_{(-i)1} = p^{(1)}_{(-i)1}$ for $i = 0,\pm 1$. This completes the proof of the condition (C1).

We next obtain the conditions (C2), (C3) and (C4). Since $\{\widetilde{Z}_l\}$ has homogeneous transitions as long as $\widetilde{Z}_l \in S_1$, we have, from (2.2), (3.1) and Lemma 3.2,

$$P(\widetilde{X}^{(1)} = (-1, 0)) = \frac{\pi(0,0)}{\pi(1,0)} p^{(0)}_{10} = \frac{\pi(n_1 - 1, 0)}{\pi(n_1, 0)} p^{(1)}_{10} = \eta_1^{-1} p^{(1)}_{10}, \quad n_1 \ge 2.$$

Similarly, for $\widetilde{X}^{(+)} = (-1, -1)$ and any $n_1 \ge 2$, we have

$$P(\widetilde{X}^{(+)} = (-1, -1)) = \frac{\pi(0,0)}{\pi(1,1)} p^{(0)}_{11} = \frac{\pi(n_1 - 1, 0)}{\pi(n_1, 1)} p^{(1)}_{11} = \frac{\pi(1,0)}{\pi(1,1)} \eta_1^{-1} p^{(1)}_{11}. \tag{A.5}$$

Hence, for any $i = 0,1$, we must have $p^{(0)}_{1i} = p^{(1)}_{1i} = 0$ if $c^{(10)} = 0$, and $c^{(10)} p^{(1)}_{1i} = p^{(0)}_{1i}$ if $c^{(10)} > 0$. In addition, if $c^{(10)} > 0$, then we have, from (A.3)-(A.5),

$$\pi(1,0) = c^{(10)} \eta_1 \pi(0,0),$$
$$\pi(1,1) = c^{(1+)} \eta_2 \pi(1,0) = c^{(10)} c^{(1+)} \eta_1 \eta_2 \pi(0,0),$$
$$\pi(0,1) = \frac{1}{c^{(2+)}} \eta_1^{-1} \pi(1,1) = \frac{c^{(10)} c^{(1+)}}{c^{(2+)}} \eta_2 \pi(0,0).$$

Similarly, we obtain (3.8)-(3.10) if $c^{(20)} > 0$. This completes the proof.

B The proof of Lemma 3.4

For each $i = 1,2$, denote the generating function of $X^{(i)}$ by $\bar{\gamma}_i$, that is, for $z = (z_1, z_2) \in \mathbb{R}^2$,

$$\bar{\gamma}_1(z) = E\left( z_1^{X^{(1)}_1} z_2^{X^{(1)}_2} \right) = \sum_{i=-1}^{1} \sum_{j=0}^{1} p^{(1)}_{ij} z_1^i z_2^j, \quad \bar{\gamma}_2(z) = E\left( z_1^{X^{(2)}_1} z_2^{X^{(2)}_2} \right) = \sum_{i=0}^{1} \sum_{j=-1}^{1} p^{(2)}_{ij} z_1^i z_2^j.$$

From Lemma 3.2 of [6], the stationary distribution satisfies (3.2)-(3.4) if and only if $\gamma_+(\eta_1^{-1}, \eta_2^{-1}) = 1$ and there exist $\zeta_1, \zeta_2 \in \mathbb{R}$ such that

$$\gamma_+(\eta_1^{-1}, \zeta_2^{-1}) = 1, \quad \bar{\gamma}_1(\eta_1^{-1}, \zeta_2^{-1}) = 1, \tag{B.6}$$
$$\gamma_+(\zeta_1^{-1}, \eta_2^{-1}) = 1, \quad \bar{\gamma}_2(\zeta_1^{-1}, \eta_2^{-1}) = 1. \tag{B.7}$$

Hence, under the condition (C1) and $\gamma_+(\eta_1^{-1}, \eta_2^{-1}) = 1$, it suffices to show that $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for $i = 1,2$ if and only if the equations (B.6) and (B.7) hold.

For $i = 0,\pm 1$ and $z \in \mathbb{R}$, let $p^{(+)}_{\cdot i}(z) = \sum_{j=-1}^{1} p^{(+)}_{ji} z^j$. Then we have

$$\gamma_+(z_1, z_2) = \left( p^{(+)}_{(-1)1} z_1^{-1} + p^{(+)}_{01} + p^{(+)}_{11} z_1 \right) z_2 + \left( p^{(+)}_{(-1)0} z_1^{-1} + p^{(+)}_{00} + p^{(+)}_{10} z_1 \right) + \left( p^{(+)}_{(-1)(-1)} z_1^{-1} + p^{(+)}_{0(-1)} + p^{(+)}_{1(-1)} z_1 \right) z_2^{-1} = p^{(+)}_{\cdot 1}(z_1) z_2 + p^{(+)}_{\cdot 0}(z_1) + p^{(+)}_{\cdot (-1)}(z_1) z_2^{-1}. \tag{B.8}$$

For fixed $z_1 = \eta_1^{-1}$, we note that $\gamma_+(\eta_1^{-1}, z_2) = 1$ has two solutions in $z_2$, counting a possible multiple root, and from the condition (C5), one of these solutions is $z_2 = \eta_2^{-1}$. Denote the other solution by $\zeta_2^{-1}$. From (B.8),

$$\eta_2 = \frac{p^{(+)}_{\cdot 1}(\eta_1^{-1})}{p^{(+)}_{\cdot (-1)}(\eta_1^{-1})} \zeta_2^{-1}. \tag{B.9}$$

Note that $p^{(+)}_{\cdot (-1)}(\eta_1^{-1}), p^{(+)}_{\cdot 1}(\eta_1^{-1}) > 0$ under the condition (ii). We first prove that

$$\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = 1 \text{ is equivalent to (B.6).} \tag{B.10}$$

Under the condition (C1), for any $z \in \mathbb{R}^2$, $\gamma_1(z)$ is given by

$$\gamma_1(z) = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)} \left( p^{(+)}_{1(-1)} z_1 + p^{(+)}_{0(-1)} + p^{(+)}_{(-1)(-1)} z_1^{-1} \right) z_2^{-1} = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)} p^{(+)}_{\cdot (-1)}(z_1) z_2^{-1}.$$

From (B.9), we can rewrite $\gamma_1(\eta_1^{-1}, \eta_2^{-1})$ as

$$\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = p^{(1)}_{00} + p^{(1)}_{10} \eta_1^{-1} + p^{(1)}_{(-1)0} \eta_1 + c^{(1+)} p^{(+)}_{\cdot (-1)}(\eta_1^{-1}) \eta_2 = p^{(1)}_{00} + p^{(1)}_{10} \eta_1^{-1} + p^{(1)}_{(-1)0} \eta_1 + c^{(1+)} p^{(+)}_{\cdot 1}(\eta_1^{-1}) \zeta_2^{-1}. \tag{B.11}$$

Under the condition (C1), if $p^{(+)}_{i1} > 0$ for all $i = 0,\pm 1$, then

$$c^{(1+)} = \frac{p^{(1)}_{11}}{p^{(+)}_{11}} = \frac{p^{(1)}_{01}}{p^{(+)}_{01}} = \frac{p^{(1)}_{(-1)1}}{p^{(+)}_{(-1)1}},$$

and therefore, we can compute the last term of (B.11) as follows:

$$c^{(1+)} p^{(+)}_{\cdot 1}(\eta_1^{-1}) \zeta_2^{-1} = c^{(1+)} p^{(+)}_{11} \eta_1^{-1} \zeta_2^{-1} + c^{(1+)} p^{(+)}_{01} \zeta_2^{-1} + c^{(1+)} p^{(+)}_{(-1)1} \eta_1 \zeta_2^{-1} = p^{(1)}_{11} \eta_1^{-1} \zeta_2^{-1} + p^{(1)}_{01} \zeta_2^{-1} + p^{(1)}_{(-1)1} \eta_1 \zeta_2^{-1}.$$

This equation also holds when $p^{(+)}_{i1} = 0$ for some $i = 0,\pm 1$, since then $p^{(1)}_{i1} = 0$ under the condition (C1). For $\zeta_2$ satisfying $\gamma_+(\eta_1^{-1}, \zeta_2^{-1}) = 1$, we obtain

$$\gamma_1(\eta_1^{-1}, \eta_2^{-1}) = p^{(1)}_{00} + p^{(1)}_{10} \eta_1^{-1} + p^{(1)}_{(-1)0} \eta_1 + c^{(1+)} p^{(+)}_{\cdot 1}(\eta_1^{-1}) \zeta_2^{-1} = \sum_{i=-1}^{1} \sum_{j=0}^{1} p^{(1)}_{ij} \eta_1^{-i} \zeta_2^{-j} = \bar{\gamma}_1(\eta_1^{-1}, \zeta_2^{-1}),$$

and therefore, we have (B.10). Similarly, $\gamma_2(\eta_1^{-1}, \eta_2^{-1}) = 1$ if and only if (B.7) holds. This completes the proof.

C The reversibility condition of a singular reflecting random walk

In this section, we derive a model reversibility condition in a special case. For this, we assume the following conditions:

$$p^{(+)}_{ij} = 0, \quad i = 0,\pm 1,\ j = \pm 1, \tag{C.1}$$
$$p^{(+)}_{i0} > 0, \quad i = 0,\pm 1. \tag{C.2}$$

Then, it is easy to see that the random walk $\{Y_l\}$ is not irreducible. This reflecting random walk is referred to as a singular reflecting random walk, which was introduced in [3]. From the irreducibility condition (i), we must have, for some $i,j = 0,1$,

$$p^{(1)}_{10} > 0, \quad p^{(1)}_{(-1)0} > 0, \tag{C.3}$$
$$p^{(2)}_{i1} > 0, \quad p^{(2)}_{j(-1)} > 0. \tag{C.4}$$

In Figure 4, we depict the transition diagram of a reflecting random walk satisfying the conditions (C.1)-(C.4). This reflecting random walk is model reversible if and only if the following conditions

Figure 4: A singular reflecting random walk satisfying the irreducibility condition (i)

hold:

$$\pi(n_1, n_2) = \eta_1^{n_1 - 1} \eta_2^{n_2 - 1} \pi(1,1), \quad n_1 \ge 1,\ n_2 \ge 1,$$
$$\pi(n_1, 0) = \alpha_1^{n_1 - 1} \pi(1,0), \quad n_1 \ge 1,$$
$$\pi(0, n_2) = \eta_2^{n_2 - 1} \pi(0,1), \quad n_2 \ge 1,$$
$$\pi(1,1) = c^{(20)} c^{(2+)} \eta_1 \eta_2 \pi(0,0), \quad \pi(1,0) = c^{(10)} \alpha_1 \pi(0,0), \quad \pi(0,1) = c^{(20)} \eta_2 \pi(0,0),$$
$$p^{(1)}_{i1} = 0, \quad p^{(2)}_{1j} = 0, \quad i = 0,\pm 1,\ j = \pm 1,$$

where $\eta_1, \eta_2, \alpha_1 \in (0,1)$ and $c^{(2+)}, c^{(10)}, c^{(20)} > 0$ are given by

$$\eta_1 = \frac{p^{(+)}_{10}}{p^{(+)}_{(-1)0}}, \quad \eta_2 = \frac{p^{(2)}_{01}}{p^{(2)}_{0(-1)}}, \quad \alpha_1 = \frac{p^{(1)}_{10}}{p^{(1)}_{(-1)0}}, \quad c^{(2+)} = \frac{p^{(2)}_{10}}{p^{(+)}_{10}}, \quad c^{(10)} = \frac{p^{(0)}_{10}}{p^{(1)}_{10}}, \quad c^{(20)} = \frac{p^{(0)}_{01}}{p^{(2)}_{01}}.$$

In what follows, we derive these conditions. Assume that $\{Z_l\}$ is model reversible. Then, for $i,j = 0,\pm 1$ and $(n_1, n_2) \in S_k$, we can define the following probabilities $\widetilde{p}^{(k)}_{ij}$:

$$\widetilde{p}^{(k)}_{ij} = P(\widetilde{Z}_{l+1} = (n_1 + i, n_2 + j) \mid \widetilde{Z}_l = (n_1, n_2)).$$

From (2.2), we have

$$\widetilde{p}^{(+)}_{10} = P(\widetilde{Z}_{l+1} = (n_1 + 1, n_2) \mid \widetilde{Z}_l = (n_1, n_2)) = \frac{\pi(n_1 + 1, n_2)}{\pi(n_1, n_2)} p^{(+)}_{(-1)0} > 0, \quad (n_1, n_2) \in S_+.$$

Thus, if $\{Z_l\}$ is model reversible, then we must have

$$\pi(n_1 + 1, n_2) = \eta_1 \pi(n_1, n_2), \quad (n_1, n_2) \in S_+, \tag{C.5}$$

for some $\eta_1 \in (0,1)$. Similarly, from (C.3), we have

$$\widetilde{p}^{(1)}_{10} = P(\widetilde{Z}_{l+1} = (n_1 + 1, 0) \mid \widetilde{Z}_l = (n_1, 0)) = \frac{\pi(n_1 + 1, 0)}{\pi(n_1, 0)} p^{(1)}_{(-1)0} > 0, \quad (n_1, 0) \in S_1.$$

Thus, for some $\alpha_1 \in (0,1)$,

$$\pi(n_1 + 1, 0) = \alpha_1 \pi(n_1, 0), \quad (n_1, 0) \in S_1. \tag{C.6}$$

On the other hand, we have, for $i = 0,\pm 1$ and $(n_1 + i, n_2 - 1), (n_1, n_2) \in S_+$,

$$\widetilde{p}^{(+)}_{i(-1)} = P(\widetilde{Z}_{l+1} = (n_1 + i, n_2 - 1) \mid \widetilde{Z}_l = (n_1, n_2)) = \frac{\pi(n_1 + i, n_2 - 1)}{\pi(n_1, n_2)} p^{(+)}_{(-i)1} = 0,$$

since we assume (C.1). For $n_1 > 1$,

$$\widetilde{p}^{(+)}_{i(-1)} = P(\widetilde{Z}_{l+1} = (n_1 + i, 0) \mid \widetilde{Z}_l = (n_1, 1)) = \frac{\pi(n_1 + i, 0)}{\pi(n_1, 1)} p^{(1)}_{(-i)1}.$$

Thus, we must have $p^{(1)}_{(-i)1} = 0$ for any $i = 0,\pm 1$. Similarly, we have, for $j = \pm 1$ and $(n_1 - 1, n_2 - j), (n_1, n_2) \in S_+$,

$$\widetilde{p}^{(+)}_{(-1)(-j)} = P(\widetilde{Z}_{l+1} = (n_1 - 1, n_2 - j) \mid \widetilde{Z}_l = (n_1, n_2)) = \frac{\pi(n_1 - 1, n_2 - j)}{\pi(n_1, n_2)} p^{(+)}_{1j} = 0,$$

from (C.1). Since

$$\widetilde{p}^{(+)}_{(-1)(-j)} = P(\widetilde{Z}_{l+1} = (0, n_2 - j) \mid \widetilde{Z}_l = (1, n_2)) = \frac{\pi(0, n_2 - j)}{\pi(1, n_2)} p^{(2)}_{1j},$$

we also have p^{(2)}_{1j} = 0 for j = ±1, and from (C.4), p^{(2)}_{01} > 0 and p^{(2)}_{0(-1)} > 0. Moreover, for n_2 ≥ 1,

  \tilde p^{(2)}_{01} = P(\tilde Z_{l+1} = (0, n_2+1) | \tilde Z_l = (0,n_2)) = (π(0,n_2+1) / π(0,n_2)) p^{(2)}_{0(-1)} > 0.

Thus, for some η_2 ∈ (0,1),

  π(0,n_2+1) = η_2 π(0,n_2), (0,n_2) ∈ S_2.

We consider the relation between π(1,n_2+1) and π(1,n_2) for n_2 ≥ 1. We have the following equation for any n_2 ≥ 1:

  \tilde p^{(+)}_{(-1)0} = P(\tilde Z_{l+1} = (0, n_2) | \tilde Z_l = (1,n_2)) = (π(0,n_2) / π(1,n_2)) p^{(2)}_{10}.

Hence,

  (π(0,n_2) / π(1,n_2)) p^{(2)}_{10} = η_1^{-1} p^{(+)}_{10},

so π(0,n_2) / π(1,n_2) does not depend on n_2, and

  π(1,n_2+1) / π(1,n_2) = (π(0,n_2) / π(1,n_2)) (π(1,n_2+1) / π(0,n_2+1)) (π(0,n_2+1) / π(0,n_2)) = π(0,n_2+1) / π(0,n_2) = η_2,

and we have

  π(n_1,n_2) = η_1^{n_1-1} η_2^{n_2-1} π(1,1), (n_1,n_2) ∈ S_+,
  π(1,1) = (p^{(2)}_{10} / p^{(+)}_{10}) η_1 π(0,1) = c^{(2+)} η_1 π(0,1).

On the other hand,

  \tilde p^{(+)}_{(-1)(-1)} = P(\tilde Z_{l+1} = (0,0) | \tilde Z_l = (1,1)) = (π(0,0) / π(1,1)) p^{(0)}_{11} = 0,
  \tilde p^{(1)}_{(-1)0} = P(\tilde Z_{l+1} = (0,0) | \tilde Z_l = (1,0)) = (π(0,0) / π(1,0)) p^{(0)}_{10} = (π(n_1-1,0) / π(n_1,0)) p^{(1)}_{10} = α_1^{-1} p^{(1)}_{10},
  \tilde p^{(2)}_{0(-1)} = P(\tilde Z_{l+1} = (0,0) | \tilde Z_l = (0,1)) = (π(0,0) / π(0,1)) p^{(0)}_{01} = (π(0,n_2-1) / π(0,n_2)) p^{(2)}_{01} = η_2^{-1} p^{(2)}_{01}.

Thus, we have p^{(0)}_{11} = 0 and

  π(1,0) = (p^{(0)}_{10} / p^{(1)}_{10}) α_1 π(0,0) = c^{(10)} α_1 π(0,0),
  π(0,1) = (p^{(0)}_{01} / p^{(2)}_{01}) η_2 π(0,0) = c^{(20)} η_2 π(0,0).

From these equations,

  π(0,0) = (1 / (c^{(10)} α_1)) π(1,0) = (1 / (c^{(20)} η_2)) π(0,1) = (1 / (c^{(20)} c^{(2+)} η_1 η_2)) π(1,1),

and therefore

  π(1,1) = (c^{(20)} c^{(2+)} / c^{(10)}) (η_1 η_2 / α_1) π(1,0).

We finally obtain η_1, η_2 and α_1. We have the following stationary equations:

  (p^{(+)}_{10} + p^{(+)}_{(-1)0}) π(n_1,n_2) = p^{(+)}_{(-1)0} π(n_1+1,n_2) + p^{(+)}_{10} π(n_1-1,n_2),
  (p^{(1)}_{10} + p^{(1)}_{(-1)0}) π(n_1,0) = p^{(1)}_{(-1)0} π(n_1+1,0) + p^{(1)}_{10} π(n_1-1,0).

From (C.5) and (C.6), we have

  (p^{(+)}_{10} + p^{(+)}_{(-1)0}) π(n_1,n_2) = p^{(+)}_{(-1)0} η_1 π(n_1,n_2) + p^{(+)}_{10} η_1^{-1} π(n_1,n_2),
  (p^{(1)}_{10} + p^{(1)}_{(-1)0}) π(n_1,0) = p^{(1)}_{(-1)0} α_1 π(n_1,0) + p^{(1)}_{10} α_1^{-1} π(n_1,0).

Thus, we obtain

  η_1 = p^{(+)}_{10} / p^{(+)}_{(-1)0} < 1,  α_1 = p^{(1)}_{10} / p^{(1)}_{(-1)0} < 1.

Repeatedly, from the stationary equations, we have

  (p^{(2)}_{01} + p^{(2)}_{0(-1)} + p^{(2)}_{10}) π(0,n_2) = p^{(2)}_{0(-1)} π(0,n_2+1) + p^{(2)}_{01} π(0,n_2-1) + p^{(+)}_{(-1)0} π(1,n_2),
  p^{(2)}_{01} + p^{(2)}_{0(-1)} + p^{(2)}_{10} = p^{(2)}_{0(-1)} η_2 + p^{(2)}_{01} η_2^{-1} + p^{(+)}_{(-1)0} (p^{(2)}_{10} / p^{(+)}_{10}) η_1 = p^{(2)}_{0(-1)} η_2 + p^{(2)}_{01} η_2^{-1} + p^{(2)}_{10}.
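The step from the reduced equations to η_1 (and, identically, to α_1) can be made explicit; the following is a sketch of that one-line computation.

```latex
% Dividing the reduced interior equation by \pi(n_1,n_2) and
% multiplying through by \eta_1 gives a quadratic in \eta_1:
p^{(+)}_{(-1)0}\,\eta_1^{2}
  -\bigl(p^{(+)}_{10}+p^{(+)}_{(-1)0}\bigr)\eta_1
  +p^{(+)}_{10}=0,
\qquad\text{with roots}\qquad
\eta_1=1,\quad
\eta_1=\frac{p^{(+)}_{10}}{p^{(+)}_{(-1)0}}.
% Only the second root can lie in (0,1), and it does precisely when
% p^{(+)}_{10}<p^{(+)}_{(-1)0}; replacing p^{(+)}_{\pm 1,0} by
% p^{(1)}_{\pm 1,0} gives the same conclusion for \alpha_1.
```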

Hence,

  η_2 = p^{(2)}_{01} / p^{(2)}_{0(-1)} < 1.

D The proof of γ_i(ρ_1^{-1}, ρ_2^{-1}) = 1

Under the assumption (4.5), we rewrite the traffic equations (4.1) as follows:

  α_1 = λ + α_2 r_{21} + α_1 r_{11},
  α_2 = λ + α_1 r_{12} + α_2 r_{22}.  (D.7)

The solutions of these equations are given by

  α_1 = (λ(1 - r_{22}) + λ r_{21}) / (1 - r_{11} - r_{22} - r_{12} r_{21} + r_{11} r_{22}),
  α_2 = (λ(1 - r_{11}) + λ r_{12}) / (1 - r_{11} - r_{22} - r_{12} r_{21} + r_{11} r_{22}).

We note that 1 - r_{11} = r_{12} + r_{10} and 1 - r_{22} = r_{21} + r_{20}. From the assumption (4.6), (1 - r_{22}) r_{12} = (1 - r_{11}) r_{21}, and therefore we have α_1 r_{12} = α_2 r_{21} and r_{12} r_{20} = r_{10} r_{21}. Substituting the former equation into (D.7), we have λ = α_1 r_{10} = α_2 r_{20}.

For i = 0, 1, substituting ρ_1 = α_1/μ_1 and ρ_2 = α_2/μ_2 and using these identities together with the assumption (4.5), every term of γ_0(ρ_1^{-1}, ρ_2^{-1}) and of γ_1(ρ_1^{-1}, ρ_2^{-1}) reduces so that each sum equals the total rate μ_1 + μ_2 + λ + λ = 1. For the interior generating function, the computation is explicit:

  γ_+(ρ_1^{-1}, ρ_2^{-1}) = μ_1 r_{11} + μ_2 r_{22} + μ_1 r_{10} + μ_2 r_{20} + λ + μ_2 r_{12} r_{20} / r_{10} + λ + μ_1 r_{21} r_{10} / r_{20} = 1,

since λ ρ_1^{-1} = μ_1 r_{10}, λ ρ_2^{-1} = μ_2 r_{20}, μ_1 ρ_1 r_{10} = λ = μ_2 ρ_2 r_{20}, and r_{12} r_{20} / r_{10} = r_{21}, r_{21} r_{10} / r_{20} = r_{12}, so that the sum equals μ_1 + μ_2 + 2λ = 1. By the symmetry of γ_1(ρ_1^{-1}, ρ_2^{-1}) = 1, we have γ_2(ρ_1^{-1}, ρ_2^{-1}) = 1.
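The consequences of the traffic equations used above are easy to check numerically. The sketch below uses a hypothetical routing matrix chosen so that (1 - r_{22}) r_{12} = (1 - r_{11}) r_{21}, the analogue of assumption (4.6); the specific rates are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical arrival rate and routing probabilities satisfying the
# analogue of (4.6): (1 - r22) r12 == (1 - r11) r21.
lam = 1.0
r11, r12, r10 = 0.1, 0.3, 0.6
r21, r22, r20 = 0.2, 0.4, 0.4
assert abs((1 - r22) * r12 - (1 - r11) * r21) < 1e-12

# Traffic equations (D.7):
#   alpha1 = lam + alpha2 r21 + alpha1 r11
#   alpha2 = lam + alpha1 r12 + alpha2 r22
A = np.array([[1 - r11, -r21],
              [-r12, 1 - r22]])
alpha1, alpha2 = np.linalg.solve(A, np.array([lam, lam]))

print(alpha1 * r12, alpha2 * r21)  # equal under (4.6)
print(alpha1 * r10, alpha2 * r20)  # both reduce to lam
```

The two printed pairs agree, confirming α_1 r_{12} = α_2 r_{21} and λ = α_1 r_{10} = α_2 r_{20} for this parameter choice.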

E Product form conditions for model reversibility

In this section, we show that, under the conditions (4.2)–(4.6), the Jackson network with extra arrivals at empty nodes has a product form stationary distribution if and only if the condition (4.9) is satisfied. Note that the stationary distribution has a product form if and only if

  π(n_1,n_2) = ν^{(1)}_{n_1} ν^{(2)}_{n_2}, n_1, n_2 ∈ Z_+,  (E.1)

for some ν^{(1)}_{n_1}, ν^{(2)}_{n_2} ∈ (0,1). From [6], we only need to check (E.1) for n_1, n_2 ∈ {0,1}. From (3.5)–(3.10) and (4.3), we have

  π(1,0) = (1 + λ^{(0)}_1/λ) η_1 π(0,0),
  π(0,1) = (1 + λ^{(0)}_2/λ) η_2 π(0,0),
  π(1,1) = (1 + λ^{(0)}_2/λ)(1 + λ^{(2)}_1/λ) η_1 η_2 π(0,0),
  π(1,1) = (1 + λ^{(0)}_1/λ)(1 + λ^{(1)}_2/λ) η_1 η_2 π(0,0).

If (4.9) is satisfied, we rewrite the last equation as

  π(1,1) = (1 + λ^{(0)}_1/λ)(1 + λ^{(0)}_2/λ) η_1 η_2 π(0,0).

Thus, for n_1, n_2 ∈ {0,1}, (E.1) holds with

  ν^{(1)}_1 = (1 + λ^{(0)}_1/λ) η_1 ν^{(1)}_0,  ν^{(2)}_1 = (1 + λ^{(0)}_2/λ) η_2 ν^{(2)}_0,  ν^{(1)}_0 ν^{(2)}_0 = π(0,0).

On the other hand, if (E.1) is satisfied, then we have

  π(1,1) / π(1,0) = π(0,1) / π(0,0) = ν^{(2)}_1 / ν^{(2)}_0,

and therefore, we must have λ^{(1)}_2 = λ^{(0)}_2. Similarly, we obtain λ^{(2)}_1 = λ^{(0)}_1.
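The necessity argument above reduces to simple arithmetic on four boundary probabilities, which the following sketch makes concrete. All rates below are hypothetical illustration values, with the extra rates chosen to satisfy the analogue of (4.9) (λ^{(1)}_2 = λ^{(0)}_2).

```python
# Product-form ratio test for the boundary probabilities written as in
# (3.5)-(3.10). Hypothetical rates; lam1_2 is set equal to lam0_2 so that
# the analogue of condition (4.9) holds.
lam, eta1, eta2 = 1.0, 0.5, 0.4
lam0_1, lam0_2 = 0.3, 0.2  # extra arrival rates at the empty origin
lam1_2 = 0.2               # extra rate on the x-axis boundary, equal to lam0_2

pi00 = 1.0
pi10 = (1 + lam0_1 / lam) * eta1 * pi00
pi01 = (1 + lam0_2 / lam) * eta2 * pi00
pi11 = (1 + lam0_1 / lam) * (1 + lam1_2 / lam) * eta1 * eta2 * pi00

# Product form requires pi11/pi10 == pi01/pi00, which forces lam1_2 == lam0_2.
print(pi11 / pi10, pi01 / pi00)
```

Replacing lam1_2 by any value different from lam0_2 makes the two printed ratios differ, which is exactly how (E.1) forces λ^{(1)}_2 = λ^{(0)}_2.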

References

[1] Asmussen, S. (2003). Applied Probability and Queues, 2nd ed., vol. 51 of Stochastic Modelling and Applied Probability. Springer-Verlag, New York.

[2] Chao, X., Miyazawa, M. and Pinedo, M. (1999). Queueing Networks: Customers, Signals and Product Form Solutions. John Wiley & Sons, New York.

[3] Fayolle, G., Iasnogorodski, R. and Malyshev, V. (1999). Random Walks in the Quarter-Plane: Algebraic Methods, Boundary Value Problems and Applications. Springer, New York.

[4] Kelly, F. P. (1979). Reversibility and Stochastic Networks. John Wiley & Sons, New York.

[5] Kobayashi, M. and Miyazawa, M. (2012). Revisit to the tail asymptotics of the double QBD process: refinement and complete solutions for the coordinate and diagonal directions. In Matrix-Analytic Methods in Stochastic Models (G. Latouche and M. S. Squillante, eds.). Springer. ArXiv:

[6] Latouche, G. and Miyazawa, M. (2013). Product form characterization for a two dimensional reflecting random walk and its applications. To appear in Queueing Systems.

[7] Miyazawa, M. (2011). Light tail asymptotics in multidimensional reflecting processes for queueing networks. TOP.

[8] Miyazawa, M. (2013). Reversibility in queueing models. In Wiley Encyclopedia of Operations Research and Management Science. Wiley, New York.

[9] Serfozo, R. (1999). Introduction to Stochastic Networks, vol. 44 of Applications of Mathematics. Springer-Verlag, New York.


More information

A tandem queue under an economical viewpoint

A tandem queue under an economical viewpoint A tandem queue under an economical viewpoint B. D Auria 1 Universidad Carlos III de Madrid, Spain joint work with S. Kanta The Basque Center for Applied Mathematics, Bilbao, Spain B. D Auria (Univ. Carlos

More information

Convergence Rate of Markov Chains

Convergence Rate of Markov Chains Convergence Rate of Markov Chains Will Perkins April 16, 2013 Convergence Last class we saw that if X n is an irreducible, aperiodic, positive recurrent Markov chain, then there exists a stationary distribution

More information

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES SANTOSH N. KABADI AND ABRAHAM P. PUNNEN Abstract. Polynomially testable characterization of cost matrices associated

More information

ISE/OR 760 Applied Stochastic Modeling

ISE/OR 760 Applied Stochastic Modeling ISE/OR 760 Applied Stochastic Modeling Topic 2: Discrete Time Markov Chain Yunan Liu Department of Industrial and Systems Engineering NC State University Yunan Liu (NC State University) ISE/OR 760 1 /

More information

arxiv: v2 [math.pr] 2 Nov 2017

arxiv: v2 [math.pr] 2 Nov 2017 Performance measures for the two-node queue with finite buffers Yanting Chen a, Xinwei Bai b, Richard J. Boucherie b, Jasper Goseling b arxiv:152.7872v2 [math.pr] 2 Nov 217 a College of Mathematics and

More information

25.1 Markov Chain Monte Carlo (MCMC)

25.1 Markov Chain Monte Carlo (MCMC) CS880: Approximations Algorithms Scribe: Dave Andrzejewski Lecturer: Shuchi Chawla Topic: Approx counting/sampling, MCMC methods Date: 4/4/07 The previous lecture showed that, for self-reducible problems,

More information

1.3 Convergence of Regular Markov Chains

1.3 Convergence of Regular Markov Chains Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

IBM Research Report. Stochastic Analysis of Resource Allocation in Parallel Processing Systems

IBM Research Report. Stochastic Analysis of Resource Allocation in Parallel Processing Systems RC23751 (W0510-126) October 19, 2005 Mathematics IBM Research Report Stochastic Analysis of Resource Allocation in Parallel Processing Systems Mark S. Squillante IBM Research Division Thomas J. Watson

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

On Successive Lumping of Large Scale Systems

On Successive Lumping of Large Scale Systems On Successive Lumping of Large Scale Systems Laurens Smit Rutgers University Ph.D. Dissertation supervised by Michael Katehakis, Rutgers University and Flora Spieksma, Leiden University April 18, 2014

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Math-Stat-491-Fall2014-Notes-V

Math-Stat-491-Fall2014-Notes-V Math-Stat-491-Fall2014-Notes-V Hariharan Narayanan November 18, 2014 Martingales 1 Introduction Martingales were originally introduced into probability theory as a model for fair betting games. Essentially

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

Modelling Complex Queuing Situations with Markov Processes

Modelling Complex Queuing Situations with Markov Processes Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field

More information

A note on stochastic context-free grammars, termination and the EM-algorithm

A note on stochastic context-free grammars, termination and the EM-algorithm A note on stochastic context-free grammars, termination and the EM-algorithm Niels Richard Hansen Department of Applied Mathematics and Statistics, University of Copenhagen, Universitetsparken 5, 2100

More information

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY J. Korean Math. Soc. 45 (2008), No. 4, pp. 1101 1111 ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY Jong-Il Baek, Mi-Hwa Ko, and Tae-Sung

More information