arXiv:1312.2746v2 [math.PR] 11 Dec 2013

Model reversibility of a two dimensional reflecting random walk and its application to queueing network

Masahiro Kobayashi, Masakiyo Miyazawa and Hiroshi Shimizu
Department of Information Sciences, Tokyo University of Science

Abstract

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. It is assumed that this reflecting random walk has skip free transitions. We are concerned with its time reversed process, assuming that the stationary distribution exists. In general, the time reversed process may not be a reflecting random walk. In this paper, we derive necessary and sufficient conditions for the time reversed process also to be a reflecting random walk. These conditions are different from but closely related to the product form of the stationary distribution.

1 Introduction

We consider a two dimensional reflecting random walk on the nonnegative integer quadrant. We are interested in its stationary distribution for queueing applications. This stationary distribution is generally hard to obtain analytically, and recent research interest has been directed to its tail asymptotics. We now have a good picture of those tail asymptotics (e.g., see [7]), but many other characteristics, such as moments, are not available. In this paper, we look at this problem upside down. Namely, we aim to find a class of reflecting random walks whose stationary distributions can be obtained in closed form. For this, we use a time reversed process of the reflecting random walk, and expect that the stationary distribution is analytically obtained when the time reversed process is also a reflecting random walk.

Kelly [4] pioneered the use of this time reversal idea for deriving so called product form solutions for various queueing network models. Here, the stationary distribution is said to be a product form solution if it is the product of marginal

stationary distributions. This approach has been further studied (see, e.g., [2, 4, 9] and references therein). However, those results are limited in use for applications. While the traditional approaches use Markov chains which are specialized to queueing models, we here use a different class of Markov chains. Namely, we take two dimensional reflecting random walks for this class, motivated by [8]. They may be interpreted as queueing models, but our intention is to consider the time reversed processes within the class of two dimensional reflecting random walks. In this way, we study a class of reflecting random walks which have tractable stationary distributions.

To make our approach clear, we introduce the notion of model reversibility. A reflecting random walk is said to be model reversible if it has a stationary distribution under which its time reversed process is a reflecting random walk, where the transition probabilities of the reversed walk need not be identical with those of the original reflecting random walk. We then derive necessary and sufficient conditions for the reflecting random walk to be model reversible. In this derivation, the stationary distribution is obtained simultaneously. This stationary distribution has product form in the interior of the quadrant but may not be of product form on the boundary.

It is notable that our stationary distribution is closely related to the one recently obtained by Latouche and Miyazawa [6]. They derived it by characterizing the class of reflecting random walks whose stationary distribution has product form. In the interior of the quadrant, their two dimensional distribution has geometric marginals whose rates are identical with ours. We also have geometric interpretations characterizing such decay rates, similar to those of [6]. However, the characterization of model reversibility is different from that of product form. In particular, the difference is clearly observed when the stationary distribution is not of product form.

This paper consists of four sections. We formally define our reflecting random walk and its time reversed process in Section 2. In Section 3, we give the main result, that is, we derive the conditions for model reversibility, and prove it. As a queueing application, we give an example of a model reversible network in Section 4.

2 Two dimensional random walk and its time-reversed process

In this section, we briefly introduce a two dimensional reflecting random walk and its time-reversed process. We use the following notation:

$\mathbb{Z}$ = the set of all integers, $\mathbb{Z}_+ = \{i \in \mathbb{Z}; i \ge 0\}$, $\mathbb{R}$ = the set of all real numbers.

Let $S \equiv \mathbb{Z}_+^2$ be the state space of the reflecting random walk. We partition it into the following subsets:

  S_0 = {(0,0)},  S_1 = {(i,0) ∈ S; i ≥ 1},  S_2 = {(0,j) ∈ S; j ≥ 1},  S_+ = {(i,j) ∈ S; i,j ≥ 1}.

Let $\partial S = \bigcup_{i=0}^{2} S_i$. Clearly, $S = S_+ \cup \partial S$. The subsets $S_+$ and $\partial S$ are called the interior and the boundary, and $S_i$ is called a boundary face for $i = 0,1,2$.

Let $X^{(+)} \equiv (X^{(+)}_1, X^{(+)}_2)$ be a random vector taking values in $U \equiv \{-1,0,1\}^2$, and, for $i = 0,1,2$, let $X^{(i)} \equiv (X^{(i)}_1, X^{(i)}_2)$ be a random vector taking values in $U$ such that $X^{(0)} \ge 0$, $X^{(1)}_2 \ge 0$ and $X^{(2)}_1 \ge 0$. Define $\{Z_l; l \in \mathbb{Z}_+\}$ as the Markov chain with state space $S$ and the following transition probabilities:

  P(Z_{l+1} = n' | Z_l = n) = P(X^{(+)} = n' − n), n ∈ S_+, n' ∈ S,
                            = P(X^{(i)} = n' − n), n ∈ S_i, i = 0,1,2, n' ∈ S.    (2.1)

By this definition, $\{Z_l\}$ is skip free and has homogeneous transition probabilities on each boundary face $S_i$ and in the interior $S_+$. We refer to this Markov chain $\{Z_l\}$ as a two dimensional reflecting random walk. Throughout this paper, we assume that

(i) $\{Z_l\}$ is irreducible.

The modeling primitives of this reflecting random walk are given by the following four sets of probability distributions:

  p^{(0)}_{ij} = P(X^{(0)} = (i,j)), i,j = 0,1,
  p^{(1)}_{ij} = P(X^{(1)} = (i,j)), i = 0,±1, j = 0,1,
  p^{(2)}_{ij} = P(X^{(2)} = (i,j)), i = 0,1, j = 0,±1,
  p^{(+)}_{ij} = P(X^{(+)} = (i,j)), i,j = 0,±1.

The transition diagram of $\{Z_l\}$ is illustrated in Figure 1.

For the modeling parameters $p^{(+)}_{ij}$, we add an irreducibility assumption. Let $\{Y_l \in \mathbb{Z}^2; l \in \mathbb{Z}_+\}$ be the two dimensional random walk obtained from $\{Z_l\}$ by removing the boundary $\partial S$. We note that the increments of $\{Y_l\}$ are distributed according to $p^{(+)}_{ij}$. For this random walk, we assume the following condition:

(ii) $\{Y_l; l \in \mathbb{Z}_+\}$ is irreducible.

Of course, even if $\{Y_l\}$ is not irreducible, the irreducibility condition (i) may still be satisfied because of the reflection at the boundary.
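The definition (2.1) lends itself to a direct numerical companion. The following Python sketch (ours, not part of the paper) builds the transition matrix of a reflecting random walk on the truncated box $\{0,\dots,N\}^2$ from four hypothetical increment dictionaries keyed by $(i,j)$, and approximates the stationary distribution by power iteration; the truncation is only an approximation device for the infinite walk.

```python
import numpy as np

def face(n1, n2):
    """Key of the face that state (n1, n2) belongs to: S_0, S_1, S_2 or S_+."""
    if n1 == 0 and n2 == 0:
        return 'p0'
    if n2 == 0:
        return 'p1'
    if n1 == 0:
        return 'p2'
    return 'p+'

def transition_matrix(dists, N):
    """Transition matrix (2.1) on {0,...,N}^2; dists maps 'p0','p1','p2','p+'
    to dicts {(i, j): probability}.  Mass that would leave the box stays put,
    a truncation artifact that fades as N grows when the walk is stable."""
    idx = lambda a, b: a * (N + 1) + b
    P = np.zeros(((N + 1) ** 2, (N + 1) ** 2))
    for n1 in range(N + 1):
        for n2 in range(N + 1):
            for (i, j), q in dists[face(n1, n2)].items():
                m1, m2 = n1 + i, n2 + j
                if 0 <= m1 <= N and 0 <= m2 <= N:
                    P[idx(n1, n2), idx(m1, m2)] += q
                else:
                    P[idx(n1, n2), idx(n1, n2)] += q
    return P

def stationary(P, iters=50_000):
    """Approximate the stationary vector by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()
```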

We first note the following fact.

[Figure 1: Transition diagram of $\{Z_l\}$]

Lemma 2.1 Under the condition (ii), at least one of $p^{(+)}_{10}$, $p^{(+)}_{(-1)0}$, $p^{(+)}_{01}$ and $p^{(+)}_{0(-1)}$ is positive.

Proof. Suppose that our claim is not true, that is, $p^{(+)}_{10} = p^{(+)}_{(-1)0} = p^{(+)}_{01} = p^{(+)}_{0(-1)} = 0$. Then all increments with positive probability are diagonal, so they preserve the parity of $n_1 + n_2$. Hence, for any $n_1, n_2 \in \mathbb{Z}$, if $Y_0 = (n_1,n_2)$, then $Y_l$ never arrives at $(n_1+1,n_2)$ or $(n_1,n_2+1)$ for any $l \in \mathbb{Z}_+$. This contradicts the irreducibility of $\{Y_l\}$.

Our problem is to find necessary and sufficient conditions on $\{Z_l\}$ under which its time reversal is a reflecting random walk on $S$. For this, we assume that $\{Z_l\}$ has a stationary distribution, denoted by $\pi$. Then, we can construct the stationary Markov chain $\{Z_l; l \in \mathbb{Z}\}$ with $Z_0$ subject to $\pi$. We define the time-reversed process $\{\tilde{Z}_l; l \in \mathbb{Z}\}$ by $\tilde{Z}_l = Z_{-l}$, $l \in \mathbb{Z}$. It is easy to see that $\{\tilde{Z}_l\}$ is a Markov chain, and the transition probabilities of $\{\tilde{Z}_l\}$ are given by

  P(\tilde{Z}_{l+1} = n' | \tilde{Z}_l = n) = (π(n')/π(n)) P(Z_{l+1} = n | Z_l = n'), n, n' ∈ S.    (2.2)

The Markov chain $\{\tilde{Z}_l\}$ is referred to as the time reversed process under $\pi$ (see, e.g., [1]).
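Formula (2.2) can be exercised directly on a finite approximation. The sketch below (again ours; it assumes the transition_matrix and stationary helpers above, and that pi is strictly positive, which irreducibility guarantees) computes the reversed kernel and extracts the increment law at a given state, so that the homogeneity of the reversed transitions, the computational content of model reversibility, can be inspected away from the truncation edge.

```python
import numpy as np

def reversed_kernel(P, pi):
    """(2.2): ~P(n, n') = pi(n') P(n', n) / pi(n); assumes pi > 0 everywhere."""
    return P.T * pi[None, :] / pi[:, None]

def increment_law(P, N, n1, n2):
    """Increment distribution of kernel P at state (n1, n2) on {0,...,N}^2;
    for a reflecting random walk it must not depend on the state within a face."""
    idx = lambda a, b: a * (N + 1) + b
    return {(i, j): P[idx(n1, n2), idx(n1 + i, n2 + j)]
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if 0 <= n1 + i <= N and 0 <= n2 + j <= N}
```

For instance, comparing increment_law(reversed_kernel(P, pi), N, 2, 2) with the law at (3, 3) tests the interior homogeneity of the reversed walk on the truncated model.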

3 Characterization of model reversibility

A Markov chain is said to be reversible if its time reversed process is stochastically identical with the original Markov chain. However, this condition is too strong for our purpose. Instead of this reversibility, we consider a weaker notion of reversibility for a reflecting random walk.

Definition 3.1 The reflecting random walk $\{Z_l\}$ is said to be model reversible if it has a stationary distribution and if its time-reversed process $\{\tilde{Z}_l\}$ is also a reflecting random walk.

Thus, if $\{Z_l\}$ is model reversible, then the transition probabilities of $\{\tilde{Z}_l\}$ are given by

  P(\tilde{Z}_{l+1} = n' | \tilde{Z}_l = n) = P(\tilde{X}^{(i)} = n' − n), l ∈ \mathbb{Z}, i = 0,1,2,+, n ∈ S_i,    (3.1)

for some random vectors $\tilde{X}^{(i)} \equiv (\tilde{X}^{(i)}_1, \tilde{X}^{(i)}_2)$ taking values in $\{0,1,-1\}^2$. From the reflection property of $\{\tilde{Z}_l\}$, it is required that $\tilde{X}^{(i)}_1 \ge 0$ for $i = 0,2$ and $\tilde{X}^{(i)}_2 \ge 0$ for $i = 0,1$. We also note that the distributions of $X^{(i)}$ and $\tilde{X}^{(i)}$ may not be the same. We are now ready to present the conditions for model reversibility.

Theorem 3.1 For the reflecting random walk $\{Z_l\}$, assume the conditions (i) and (ii). Then $\{Z_l\}$ is model reversible if and only if the following conditions hold.

(C1) There exist $c^{(1+)}, c^{(2+)} > 0$ such that $c^{(1+)} p^{(+)}_{i1} = p^{(1)}_{i1}$ and $c^{(2+)} p^{(+)}_{1i} = p^{(2)}_{1i}$ for any $i = 0,\pm 1$.

(C2) There exist $c^{(10)}, c^{(20)} \ge 0$ such that $c^{(10)} p^{(1)}_{1j} = p^{(0)}_{1j}$ and $c^{(20)} p^{(2)}_{j1} = p^{(0)}_{j1}$ for any $j = 0,1$. If $c^{(10)} = 0$ (resp. $c^{(20)} = 0$), then $p^{(1)}_{1j} = 0$ (resp. $p^{(2)}_{j1} = 0$) for any $j = 0,1$.

(C3) If both $c^{(10)} > 0$ and $c^{(20)} > 0$, then $c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$.

(C4) There exist $\eta_1, \eta_2 \in (0,1)$ such that

  π(n,0) = η_1^{n−1} π(1,0), n ≥ 1,    (3.2)
  π(0,n) = η_2^{n−1} π(0,1), n ≥ 1,    (3.3)
  π(n_1,n_2) = η_1^{n_1−1} η_2^{n_2−1} π(1,1), n_1,n_2 ≥ 1.    (3.4)

If $c^{(10)} > 0$, then

  π(1,0) = c^{(10)} η_1 π(0,0),    (3.5)
  π(0,1) = (c^{(10)} c^{(1+)}/c^{(2+)}) η_2 π(0,0),    (3.6)
  π(1,1) = c^{(10)} c^{(1+)} η_1 η_2 π(0,0),    (3.7)

and if $c^{(20)} > 0$, then

  π(1,0) = (c^{(20)} c^{(2+)}/c^{(1+)}) η_1 π(0,0),    (3.8)
  π(0,1) = c^{(20)} η_2 π(0,0),    (3.9)
  π(1,1) = c^{(20)} c^{(2+)} η_1 η_2 π(0,0).    (3.10)
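Conditions (C1)-(C3) are finite proportionality checks on the modeling primitives, so they can be tested mechanically. A minimal sketch (ours, using the same hypothetical increment dictionaries as above; degenerate all-zero faces are reported as failures for simplicity):

```python
def common_ratio(pairs, eps=1e-12):
    """Find c with c * a == b for all pairs (a, b).  Returns None if the
    supports differ or the ratios disagree, and 0.0 if all pairs are (0, 0)."""
    ratios = []
    for a, b in pairs:
        if a > eps:
            ratios.append(b / a)
        elif b > eps:          # a = 0 but b > 0: no such constant exists
            return None
    if not ratios:
        return 0.0
    return ratios[0] if max(ratios) - min(ratios) < 1e-9 else None

def check_C1_C3(p0, p1, p2, pplus):
    g = lambda d, k: d.get(k, 0.0)
    # (C1): c1p * p+_{i1} = p1_{i1} and c2p * p+_{1i} = p2_{1i}, i = 0, +-1
    c1p = common_ratio([(g(pplus, (i, 1)), g(p1, (i, 1))) for i in (-1, 0, 1)])
    c2p = common_ratio([(g(pplus, (1, i)), g(p2, (1, i))) for i in (-1, 0, 1)])
    # (C2): c10 * p1_{1j} = p0_{1j} and c20 * p2_{j1} = p0_{j1}, j = 0, 1
    c10 = common_ratio([(g(p1, (1, j)), g(p0, (1, j))) for j in (0, 1)])
    c20 = common_ratio([(g(p2, (j, 1)), g(p0, (j, 1))) for j in (0, 1)])
    if None in (c1p, c2p, c10, c20) or c1p <= 0 or c2p <= 0:
        return False, None
    # side condition of (C2): c10 = 0 forces p1_{1j} = 0 (similarly c20)
    if c10 == 0.0 and any(g(p1, (1, j)) > 0 for j in (0, 1)):
        return False, None
    if c20 == 0.0 and any(g(p2, (j, 1)) > 0 for j in (0, 1)):
        return False, None
    # (C3): binding only when both c10 and c20 are positive
    if c10 > 0 and c20 > 0 and abs(c10 * c1p - c20 * c2p) > 1e-9:
        return False, None
    return True, (c1p, c2p, c10, c20)
```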

Remark 3.1 Under the condition (C2), if $c^{(10)} = c^{(20)} = 0$, then $\{Z_l\}$ is not irreducible, that is, the condition (i) is not satisfied. Therefore, at least one of $c^{(10)} > 0$ and $c^{(20)} > 0$ holds, and we obtain at least one of (3.5)–(3.7) and (3.8)–(3.10), since $c^{(1+)}, c^{(2+)} > 0$ under the condition (C1).

Remark 3.2 From the condition (C3), if both $c^{(10)} > 0$ and $c^{(20)} > 0$, then (3.5)–(3.7) are identical with (3.8)–(3.10).

Remark 3.3 We can obtain a reversibility condition even if the condition (ii) is not satisfied. In that case, the condition (C4) is slightly changed. Such an example is given in Appendix C.

We first prove that (C1)–(C4) are necessary for model reversibility.

Lemma 3.1 Under the conditions (i) and (ii), if $\{\tilde{Z}_l\}$ is a reflecting random walk, then the stationary distribution satisfies (3.4).

Proof. From Lemma 2.1, we divide the proof into the following three cases.

(D1) $p^{(+)}_{i0} > 0$ and $p^{(+)}_{0j} > 0$ for some $i,j = \pm 1$.
(D2) $p^{(+)}_{i0} > 0$ for some $i = \pm 1$ and $p^{(+)}_{0j} = 0$ for all $j = \pm 1$.
(D3) $p^{(+)}_{0i} > 0$ for some $i = \pm 1$ and $p^{(+)}_{j0} = 0$ for all $j = \pm 1$.

The cases (D2) and (D3) are symmetric, and therefore we prove (3.4) for (D1) and (D2).

For the case (D1), we only consider the case that $p^{(+)}_{10} > 0$ and $p^{(+)}_{01} > 0$, since the other cases are proved similarly. Consider the transition of the reversed process $\{\tilde{Z}_l\}$ from $n$ to $n' = n - e_1$, where $e_1 = (1,0)$. Then, it follows from (2.1) and (2.2), for all $n - e_1 \in S_+$ (hence also $n \in S_+$), that

  P(\tilde{Z}_{l+1} = n − e_1 | \tilde{Z}_l = n) = (π(n − e_1)/π(n)) P(X^{(+)} = (1,0)) = (π(n − e_1)/π(n)) p^{(+)}_{10}.    (3.11)

Since $p^{(+)}_{10} > 0$ and $\{\tilde{Z}_l\}$ is a reflecting random walk, the right-hand side must be constant for all $n, n - e_1 \in S_+$. Denote this constant ratio by $\eta_1^{-1}$, that is,

  η_1^{-1} = π(n − e_1)/π(n), n, n − e_1 ∈ S_+.    (3.12)

Similarly, we can show that $\pi(n - e_2)/\pi(n)$ is a positive constant for all $n, n - e_2 \in S_+$, and denote it by $\eta_2^{-1}$. Thus, for $n = (n_1,n_2) \in S_+$, we have

  π(n) = η_1 π(n − e_1) = ⋯ = η_1^{n_1−1} π(1,n_2) = η_1^{n_1−1} η_2^{n_2−1} π(1,1).

Obviously, from this equation, we have $\eta_1, \eta_2 \in (0,1)$ since $\sum_{n \in S_+} \pi(n) \le 1$.

We next consider the case (D2). Since $p^{(+)}_{i0} > 0$ for some $i \in \{-1,1\}$, we also have (3.12) in the case (D2). In what follows, we prove that $\pi(n - e_2)/\pi(n)$ also must be constant in the case (D2).

First assume that both $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$. Under the condition (ii) and the case (D2) with $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$, it is clear that $p^{(+)}_{i(-1)} > 0$ and $p^{(+)}_{j1} > 0$ hold for some $i,j \in \{-1,1\}$, and we may assume $p^{(+)}_{(-1)1} > 0$ since the proof is similar in the other cases. Then, it again follows from (2.1) and (2.2), for $n = (n_1,n_2) \in S_+$ and $n' = n + e_1 - e_2$, that

  P(\tilde{Z}_{l+1} = n + e_1 − e_2 | \tilde{Z}_l = n) = (π(n + e_1 − e_2)/π(n)) p^{(+)}_{(-1)1}
    = (π(n + e_1 − e_2)/π(n + e_1)) (π(n + e_1)/π(n)) p^{(+)}_{(-1)1}
    = (π(m − e_2)/π(m)) η_1 p^{(+)}_{(-1)1}, m ≡ n + e_1 ∈ S_+,

where the third equality is obtained by (3.12). Since $p^{(+)}_{(-1)1} > 0$ and the left-hand side must be independent of $m$, we have

  π(m − e_2)/π(m) = η_2^{-1},

as long as $m, m - e_2 \in S_+$. Thus, we have (3.4) in the case (D2) with $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$.

The remaining cases of (D2) are $p^{(+)}_{10} > 0$, $p^{(+)}_{(-1)0} = 0$ and $p^{(+)}_{10} = 0$, $p^{(+)}_{(-1)0} > 0$. These are also symmetric, so we only consider the case $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} = 0$. Again from the condition (ii), we must have $p^{(+)}_{(-1)1} > 0$ and $p^{(+)}_{(-1)(-1)} > 0$ under the conditions $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} = 0$. Thus, using the same argument as in the case $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$, we obtain (3.4).

Lemma 3.2 Under the same assumptions as in Theorem 3.1, if $\{\tilde{Z}_l\}$ is a reflecting random walk, then we have (3.2) and (3.3).

Proof. We only prove (3.2), since (3.3) is proved similarly. We separately consider the following cases:

(E1) Either $p^{(1)}_{10} > 0$ or $p^{(1)}_{(-1)0} > 0$.
(E2) Both $p^{(1)}_{10} = 0$ and $p^{(1)}_{(-1)0} = 0$.

For (E1), adapting the proof of the case (D1) in Lemma 3.1 from $S_+$ to $S_1$, we have, for some constant $\alpha_1 \in (0,1)$,

  α_1^{-1} = π(n − e_1)/π(n), n, n − e_1 ∈ S_1.    (3.13)

From the condition (ii), $p^{(+)}_{i(-1)} > 0$ for some $i = 0,\pm 1$, and it follows from (3.12) and (3.13), for $n = (n_1,0) \in S_1$ and $n' = (n_1 - i,1) \in S_+$, that

  P(\tilde{Z}_{l+1} = (n_1 − i,1) | \tilde{Z}_l = (n_1,0)) = (π(n_1 − i,1)/π(n_1,0)) p^{(+)}_{i(-1)} = η_1^{n_1−i−1} α_1^{−(n_1−1)} (π(1,1)/π(1,0)) p^{(+)}_{i(-1)}.    (3.14)

The left-hand side of this equation is independent of $n_1$ since $\{\tilde{Z}_l\}$ is a reflecting random walk, and therefore we have $\alpha_1 = \eta_1$, which implies (3.2).

For (E2), from the irreducibility condition (i), $p^{(1)}_{i1} > 0$ and $p^{(+)}_{j(-1)} > 0$ for some $i,j = 0,\pm 1$. Then, from (2.1), (2.2) and (3.1), we have, for $n = (n_1+j,0) \in S_1$ and $n' = (n_1,1) \in S_+$,

  P(\tilde{Z}_{l+1} = (n_1,1) | \tilde{Z}_l = (n_1+j,0)) = (π(n_1,1)/π(n_1+j,0)) p^{(+)}_{j(-1)}.    (3.15)

Since $\{Z_l\}$ is model reversible and $p^{(+)}_{j(-1)} > 0$, the left-hand side of this equation is a positive constant which may depend on $j$; denote it by $\tilde{p}^{(1)}_{(-j)1}$. Similarly, for $n = (n_1+i,1) \in S_+$ and $n' = (n_1,0) \in S_1$,

  P(\tilde{Z}_{l+1} = (n_1,0) | \tilde{Z}_l = (n_1+i,1)) = (π(n_1,0)/π(n_1+i,1)) p^{(1)}_{i1}
    = (π(n_1,0)/π(n_1+i+j,0)) (π(n_1+i+j,0)/π(n_1+i,1)) p^{(1)}_{i1}
    = (π(n_1,0)/π(n_1+i+j,0)) (p^{(+)}_{j(-1)}/\tilde{p}^{(1)}_{(-j)1}) p^{(1)}_{i1},

where the third equality follows from (3.15). Since the left-hand side of this equation also does not depend on $n_1$, we directly obtain, for $\eta_1 \in (0,1)$ satisfying (3.12),

  π(n_1,0)/π(n_1+1,0) = η_1^{-1}, n_1 ≥ 1,    (3.16)

provided that $i+j = 1$, and we have (3.2). On the other hand, from the irreducibility condition (i), if $i+j \ne 1$, then at least one of $p^{(+)}_{10} > 0$ and $p^{(+)}_{(-1)0} > 0$ holds, and we have, for $k = 1$ or $k = -1$,

  P(\tilde{Z}_{l+1} = (n_1,1) | \tilde{Z}_l = (n_1+k,1)) = (π(n_1,1)/π(n_1+k,1)) p^{(+)}_{k0}.

From Lemma 3.1, the left-hand side of this equation is independent of $n_1$; denote it by $\tilde{p}^{(+)}_{(-k)0}$. Thus, we obtain, from (3.15),

  P(\tilde{Z}_{l+1} = (n_1,0) | \tilde{Z}_l = (n_1+i,1)) = (π(n_1,0)/π(n_1+i,1)) p^{(1)}_{i1}
    = (π(n_1,0)/π(n_1+i+j+k,0)) (π(n_1+i+j+k,0)/π(n_1+i+k,1)) (π(n_1+i+k,1)/π(n_1+i,1)) p^{(1)}_{i1}
    = (π(n_1,0)/π(n_1+i+j+k,0)) (p^{(+)}_{j(-1)}/\tilde{p}^{(1)}_{(-j)1}) (p^{(+)}_{k0}/\tilde{p}^{(+)}_{(-k)0}) p^{(1)}_{i1}.

From the irreducibility condition (i) again, if $i+j \ne 1$, we must have $i+j+k = 1$, and therefore we also have (3.16). This completes the proof since (3.16) implies (3.2).

Lemma 3.3 Under the conditions of Theorem 3.1, if $\{\tilde{Z}_l\}$ is a reflecting random walk, then (C1), (C2) and (C3) hold, and the stationary distribution satisfies (3.5)–(3.10).

The proof of Lemma 3.3 is deferred to Appendix A. We are now ready to prove Theorem 3.1.

Proof of Theorem 3.1. From Lemmas 3.1, 3.2 and 3.3, the necessity part of Theorem 3.1 is already proved. For sufficiency, suppose that the conditions (C1)–(C4) hold. Then, from (2.1) and (2.2), we have, for $i = 0,1,2,+$, $j,k = 0,\pm 1$, $n \in S_i$ and $n' = n+(j,k) \in S_i$,

  P(\tilde{Z}_{l+1} = n' | \tilde{Z}_l = n) = (π(n')/π(n)) P(X^{(i)} = n − n') = (π(n+(j,k))/π(n)) P(X^{(i)} = (−j,−k)) = η_1^j η_2^k p^{(i)}_{(-j)(-k)}.    (3.17)

Thus, the transition probabilities of $\{\tilde{Z}_l\}$ from $S_i$ to $S_i$ are homogeneous. We next verify that the transitions across different subsets, for example the downward transitions from $S_+$ to $S_1$, are homogeneous with the interior ones. To this end, consider the transitions from $S_+$ to $S_1$. For $n = (n_1,1) \in S_+$, $n' = (n_1+i,0) \in S_1$ and $i = 0,\pm 1$, we have, by the conditions (C1)–(C4),

  P(\tilde{Z}_{l+1} = (n_1+i,0) | \tilde{Z}_l = (n_1,1)) = (π(n_1+i,0)/π(n_1,1)) P(X^{(1)} = (−i,1))
    = η_1^i (π(1,0)/π(1,1)) p^{(1)}_{(-i)1}
    = (1/c^{(1+)}) η_1^i η_2^{-1} p^{(1)}_{(-i)1}
    = η_1^i η_2^{-1} p^{(+)}_{(-i)1}
    = P(\tilde{Z}_{l+1} = (n_1+i,m−1) | \tilde{Z}_l = (n_1,m)), m ≥ 2,

where the last equality is obtained by (3.17). Note that $(n_1,m), (n_1+i,m-1) \in S_+$ since $n_1, n_1+i \ge 1$. Thus, the downward transitions into the boundary face $S_1$ are homogeneous with those in the interior. Using a similar argument, we can prove that the other boundary-crossing transitions are also homogeneous. That is, $\{\tilde{Z}_l\}$ has homogeneous transitions on each subset $S_i$. Hence, $\{\tilde{Z}_l\}$ is a reflecting random walk if the conditions (C1)–(C4) hold. This completes the proof.

3.1 Geometric view of the model reversibility condition

Theorem 3.1 characterizes model reversibility. However, the condition (C4) may not be easy to check because it involves the stationary distribution. Thus, we replace it by a condition which uses only the modeling primitives.

Theorem 3.2 Suppose that the conditions (C1), (C2) and (C3) hold. For $z \in \mathbb{R}^2$, let

  γ_0(z) = p^{(0)}_{00} + c^{(10)} p^{(1)}_{(-1)0} z_1^{-1} + c^{(20)} p^{(2)}_{0(-1)} z_2^{-1} + c^{(0)} p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1},
  γ_1(z) = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)} (p^{(+)}_{1(-1)} z_1 z_2^{-1} + p^{(+)}_{0(-1)} z_2^{-1} + p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1}),
  γ_2(z) = p^{(2)}_{00} + p^{(2)}_{01} z_2 + p^{(2)}_{0(-1)} z_2^{-1} + c^{(2+)} (p^{(+)}_{(-1)1} z_1^{-1} z_2 + p^{(+)}_{(-1)0} z_1^{-1} + p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1}),
  γ_+(z) = Σ_{i=-1}^{1} Σ_{j=-1}^{1} p^{(+)}_{ij} z_1^i z_2^j,

where $c^{(k+)}$ and $c^{(k0)}$ are the nonnegative constants defined in Theorem 3.1 for $k = 1,2$, and

  c^{(0)} ≡ max(c^{(10)} c^{(1+)}, c^{(20)} c^{(2+)}).

Then, the condition (C4) is equivalent to

(C5) There exist $\eta_1, \eta_2 \in (0,1)$ satisfying $\gamma_i(\eta_1^{-1}, \eta_2^{-1}) = 1$ for all $i = 0,1,2,+$.

Remark 3.4 From the condition (C3), if $c^{(10)} > 0$ and $c^{(20)} > 0$, then we have $c^{(0)} = c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$. Note that $\gamma_+$ is the generating function of $X^{(+)}$.

From this theorem, we derive a condition for model reversibility given purely in terms of the modeling parameters.

Corollary 3.1 Suppose the conditions (i) and (ii). Then, $\{Z_l\}$ is model reversible if and only if the conditions (C1), (C2), (C3) and (C5) hold.

Remark 3.5 Latouche and Miyazawa [6] derive necessary and sufficient conditions for the stationary distribution of the two dimensional reflecting random walk to have a product form solution. Similarly to the condition (C5), their conditions have geometric interpretations.
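For concrete primitives, (C5) amounts to evaluating four scalar functions at a candidate pair $(\eta_1, \eta_2)$. The sketch below (ours; it assumes the hypothetical increment dictionaries introduced earlier and the constants $c^{(1+)}, c^{(2+)}, c^{(10)}, c^{(20)}$ of Theorem 3.1, and its $\gamma_0$ line follows our reconstruction of that definition) does exactly that. Scanning $(0,1)^2$ for a point where all four values are 1, up to tolerance, gives a crude but direct test of (C5).

```python
def gamma_values(p0, p1, p2, pplus, c1p, c2p, c10, c20, eta1, eta2):
    """Evaluate gamma_i(1/eta1, 1/eta2) for i = 0, 1, 2, +; condition (C5)
    asks for eta1, eta2 in (0,1) making all four values equal to 1."""
    g = lambda d, k: d.get(k, 0.0)
    z1, z2 = 1.0 / eta1, 1.0 / eta2
    c0 = max(c10 * c1p, c20 * c2p)
    g0 = (g(p0, (0, 0)) + c10 * g(p1, (-1, 0)) / z1
          + c20 * g(p2, (0, -1)) / z2
          + c0 * g(pplus, (-1, -1)) / (z1 * z2))
    g1 = (g(p1, (0, 0)) + g(p1, (1, 0)) * z1 + g(p1, (-1, 0)) / z1
          + c1p * (g(pplus, (1, -1)) * z1 / z2 + g(pplus, (0, -1)) / z2
                   + g(pplus, (-1, -1)) / (z1 * z2)))
    g2 = (g(p2, (0, 0)) + g(p2, (0, 1)) * z2 + g(p2, (0, -1)) / z2
          + c2p * (g(pplus, (-1, 1)) * z2 / z1 + g(pplus, (-1, 0)) / z1
                   + g(pplus, (-1, -1)) / (z1 * z2)))
    gp = sum(q * z1 ** i * z2 ** j for (i, j), q in pplus.items())
    return g0, g1, g2, gp
```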

Remark 3.6 If the conditions

  p^{(+)}_{11} = p^{(0)}_{11} = p^{(1)}_{11} = p^{(2)}_{11},    (3.18)
  p^{(0)}_{10} = p^{(1)}_{10}, p^{(0)}_{01} = p^{(2)}_{01}, p^{(2)}_{1j} = p^{(+)}_{1j}, p^{(1)}_{j1} = p^{(+)}_{j1}, j = 0,−1,    (3.19)

are satisfied, then (C1), (C2) and (C3) hold, and the stationary distribution has an exactly geometric form, and therefore a product form solution. Namely, under the conditions (3.18) and (3.19), the condition (C5) is identical with the product form condition (see Theorem 3.2 in [6]).

We briefly explain the condition (C5). Obviously, $\gamma_+(1,1) = 1$. Moreover, each set $\Gamma_i \equiv \{z \in \mathbb{R}^2; \gamma_i(z) = 1\}$ is nonnegative-directed convex (see, e.g., [5]). This means that $\Gamma_i$ describes a locally convex curve on the nonnegative quadrant for $i = 0,1,2,+$. In Figure 2, we illustrate curves $\Gamma_i$ for which the condition (C5) holds.

[Figure 2: Example of curves $\Gamma_i$ for which the condition (C5) holds]

In what follows, we prove Theorem 3.2. We first establish the following lemma.

Lemma 3.4 Suppose that the condition (C1) holds. The stationary distribution satisfies (3.2), (3.3) and (3.4) if and only if $\gamma_i(\eta_1^{-1},\eta_2^{-1}) = 1$ for $i = 1,2,+$.

We defer the proof of Lemma 3.4 to Appendix B.

Proof of Theorem 3.2. By Lemma 3.4, the conditions (3.2), (3.3) and (3.4) are equivalent to $\gamma_i(\eta_1^{-1},\eta_2^{-1}) = 1$ for $i = 1,2,+$. Here, we prove that the stationary distribution

satisfies (3.5)–(3.10) if and only if $\gamma_i(\eta_1^{-1},\eta_2^{-1}) = 1$ for $i = 0,1,2$ under the conditions (C2) and (C3).

We first prove necessity, that is, if the stationary distribution satisfies (3.5)–(3.10), then $\gamma_i(\eta_1^{-1},\eta_2^{-1}) = 1$ for $i = 0,1,2$. From the stationary equation, for $n \ge 2$,

  (1 − p^{(1)}_{00}) π(n,0) = p^{(1)}_{10} π(n−1,0) + p^{(1)}_{(-1)0} π(n+1,0) + p^{(+)}_{1(-1)} π(n−1,1) + p^{(+)}_{0(-1)} π(n,1) + p^{(+)}_{(-1)(-1)} π(n+1,1),

and therefore, using (3.2) and (3.4), we have

  (1 − p^{(1)}_{00}) π(1,0) = p^{(1)}_{10} η_1^{-1} π(1,0) + p^{(1)}_{(-1)0} η_1 π(1,0) + p^{(+)}_{1(-1)} η_1^{-1} π(1,1) + p^{(+)}_{0(-1)} π(1,1) + p^{(+)}_{(-1)(-1)} η_1 π(1,1).    (3.20)

If $c^{(10)} > 0$, substituting (3.5) and (3.7) into this equation, we have $\gamma_1(\eta_1^{-1},\eta_2^{-1}) = 1$. Even if $c^{(10)} = 0$, we still have $\gamma_1(\eta_1^{-1},\eta_2^{-1}) = 1$, using (3.8) and (3.10). Similarly, from the stationary equations on the boundary, we obtain $\gamma_2(\eta_1^{-1},\eta_2^{-1}) = 1$ and $\gamma_0(\eta_1^{-1},\eta_2^{-1}) = 1$.

We next verify sufficiency. From (3.20), we have

  π(1,1)/π(1,0) = (1 − p^{(1)}_{00} − p^{(1)}_{10} η_1^{-1} − p^{(1)}_{(-1)0} η_1)/(p^{(+)}_{1(-1)} η_1^{-1} + p^{(+)}_{0(-1)} + p^{(+)}_{(-1)(-1)} η_1).

The condition $\gamma_1(\eta_1^{-1},\eta_2^{-1}) = 1$ is equivalent to

  c^{(1+)} η_2 = (1 − p^{(1)}_{00} − p^{(1)}_{10} η_1^{-1} − p^{(1)}_{(-1)0} η_1)/(p^{(+)}_{1(-1)} η_1^{-1} + p^{(+)}_{0(-1)} + p^{(+)}_{(-1)(-1)} η_1).

Thus, we obtain

  π(1,1) = c^{(1+)} η_2 π(1,0).    (3.21)

Using a similar argument, if $\gamma_2(\eta_1^{-1},\eta_2^{-1}) = 1$, we have

  π(1,1) = c^{(2+)} η_1 π(0,1).    (3.22)

We now consider the stationary equation at the origin, that is,

  (1 − p^{(0)}_{00}) π(0,0) = p^{(1)}_{(-1)0} π(1,0) + p^{(2)}_{0(-1)} π(0,1) + p^{(+)}_{(-1)(-1)} π(1,1).    (3.23)

From (3.21), (3.22) and (3.23), we have

  (1 − p^{(0)}_{00}) π(0,0) = (p^{(1)}_{(-1)0} + (c^{(1+)} η_2/(c^{(2+)} η_1)) p^{(2)}_{0(-1)} + c^{(1+)} p^{(+)}_{(-1)(-1)} η_2) π(1,0).

Moreover, from $\gamma_0(\eta_1^{-1},\eta_2^{-1}) = 1$ and the conditions (C2) and (C3), if $c^{(10)} > 0$,

  1 − p^{(0)}_{00} = (p^{(1)}_{(-1)0} + (c^{(1+)} η_2/(c^{(2+)} η_1)) p^{(2)}_{0(-1)} + c^{(1+)} p^{(+)}_{(-1)(-1)} η_2) c^{(10)} η_1,

which implies $\pi(1,0) = c^{(10)} \eta_1 \pi(0,0)$. Thus, from (3.21), (3.22) and this equation, we obtain (3.5)–(3.7). This completes the proof of the theorem, since a similar argument yields (3.8)–(3.10) if $c^{(20)} > 0$.

4 Application to a queueing network

In this section, we construct a discrete time queueing network whose stationary distribution is not of product form but still has a closed form, using model reversibility. For this, we modify a discrete time Jackson network, which is introduced below. We define the reflecting random walk by

  p^{(i)}_{11} = 0, p^{(i)}_{(-1)(-1)} = 0, p^{(i)}_{10} = λ_1, p^{(i)}_{01} = λ_2, i = 0,1,2,+,
  p^{(j)}_{(-1)0} = μ_1 r_{10}, p^{(j)}_{(-1)1} = μ_1 r_{12}, j = 1,+,
  p^{(k)}_{0(-1)} = μ_2 r_{20}, p^{(k)}_{1(-1)} = μ_2 r_{21}, k = 2,+,
  p^{(+)}_{00} = μ_1 r_{11} + μ_2 r_{22}, p^{(1)}_{00} = μ_1 r_{11} + μ_2, p^{(2)}_{00} = μ_1 + μ_2 r_{22}, p^{(0)}_{00} = μ_1 + μ_2,

where all constants are positive and

  λ_1 + λ_2 + μ_1 + μ_2 = 1, r_{10} + r_{11} + r_{12} = 1, r_{20} + r_{21} + r_{22} = 1.

This random walk is called a discrete time Jackson network. For $i = 1,2$, let $\alpha_i$ be the solution of the following traffic equations:

  α_1 = λ_1 + α_2 r_{21} + α_1 r_{11}, α_2 = λ_2 + α_1 r_{12} + α_2 r_{22}.    (4.1)

The stability condition for this model is given by

  ρ_1 ≡ α_1/μ_1 < 1, ρ_2 ≡ α_2/μ_2 < 1.
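Since the traffic equations (4.1) form the 2x2 linear system $\alpha = \lambda + R^{\mathsf T}\alpha$, they can be solved directly; a small sketch (ours):

```python
import numpy as np

def traffic_solution(lam1, lam2, R, mu1, mu2):
    """Solve the traffic equations (4.1), alpha = lam + R^T alpha, where
    R = [[r11, r12], [r21, r22]], and return (alpha, rho)."""
    lam = np.array([lam1, lam2])
    alpha = np.linalg.solve(np.eye(2) - np.asarray(R, float).T, lam)
    rho = alpha / np.array([mu1, mu2])
    return alpha, rho   # the model is stable iff rho[0] < 1 and rho[1] < 1
```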

It is well known that the stationary distribution of the discrete time Jackson network has a product form solution, that is, the stationary distribution is given by

  π(n_1,n_2) = (1 − ρ_1)(1 − ρ_2) ρ_1^{n_1} ρ_2^{n_2}, n_1,n_2 ≥ 0.

We modify the discrete time Jackson network by allowing extra arrivals at empty nodes. We assume that the arrival rate at a node increases when the node has no customer. In addition, we also assume that a departing customer who would return to the same node in the Jackson network always goes to the other node. That is, the parameters of the Jackson network are changed as follows:

  p^{(1)}_{01} = λ_2 + λ^{(1)}_2, p^{(2)}_{10} = λ_1 + λ^{(2)}_1, p^{(0)}_{10} = λ_1 + λ^{(0)}_1, p^{(0)}_{01} = λ_2 + λ^{(0)}_2,
  p^{(1)}_{(-1)1} = μ_1 (r_{12} + r_{11}), p^{(2)}_{1(-1)} = μ_2 (r_{21} + r_{22}),
  p^{(1)}_{00} = μ_2 − λ^{(1)}_2, p^{(2)}_{00} = μ_1 − λ^{(2)}_1, p^{(0)}_{00} = μ_1 + μ_2 − λ^{(0)}_1 − λ^{(0)}_2,

where we assume $0 \le \lambda^{(1)}_2 \le \mu_2$, $0 \le \lambda^{(2)}_1 \le \mu_1$ and $0 \le \lambda^{(0)}_1 + \lambda^{(0)}_2 \le \mu_1 + \mu_2$. We refer to this random walk as a discrete time Jackson network with extra arrivals at empty nodes. In Figure 3, we depict the transition diagram of this queueing network.

[Figure 3: Transition diagram of the Jackson network with extra arrivals at empty nodes (Node 1, Node 2)]

For the discrete time Jackson network with extra arrivals at empty nodes, the condition (C2) is automatically satisfied. Assume that

  (λ_2 + λ^{(1)}_2)/λ_2 = (r_{12} + r_{11})/r_{12}, (λ_1 + λ^{(2)}_1)/λ_1 = (r_{21} + r_{22})/r_{21}.    (4.2)

Then, the condition (C1) is satisfied, and we have

  c^{(1+)} = 1 + λ^{(1)}_2/λ_2, c^{(2+)} = 1 + λ^{(2)}_1/λ_1, c^{(10)} = 1 + λ^{(0)}_1/λ_1, c^{(20)} = 1 + λ^{(0)}_2/λ_2.    (4.3)

We also assume the condition (C3), that is, $c^{(10)} c^{(1+)} = c^{(20)} c^{(2+)}$, which is equivalent to

  (1 + λ^{(0)}_1/λ_1)(1 + λ^{(1)}_2/λ_2) = (1 + λ^{(0)}_2/λ_2)(1 + λ^{(2)}_1/λ_1).    (4.4)

Moreover, for the condition (C5), we assume the following conditions:

  λ_1 = λ_2 ≡ λ,    (4.5)
  r_{12}/r_{10} = r_{21}/r_{20}.    (4.6)

Then, we can confirm that the condition (C5) is satisfied with $\eta_1 = \rho_1$ and $\eta_2 = \rho_2$ (see Appendix D). Thus, under the conditions (4.2)–(4.6), the reflecting random walk of the discrete time Jackson network with extra arrivals at empty nodes is model reversible.

From (4.2), (4.4) and (4.5), we have

  (λ + λ^{(0)}_1)/(λ + λ^{(0)}_2) = (λ + λ^{(2)}_1)/(λ + λ^{(1)}_2),    (4.7)
  λ^{(2)}_1 = (r_{22}/r_{21}) λ, λ^{(1)}_2 = (r_{11}/r_{12}) λ.    (4.8)

These imply that $\lambda^{(2)}_1$, $\lambda^{(1)}_2$, $\lambda^{(0)}_1$ and $\lambda^{(0)}_2$ are determined by $\lambda$, $r_{11}$, $r_{12}$, $r_{21}$ and $r_{22}$ through (4.7) and (4.8). It is easy to see that the condition (4.4) is satisfied if

  λ^{(2)}_1 = λ^{(0)}_1, λ^{(1)}_2 = λ^{(0)}_2.    (4.9)

In addition, under the model reversibility conditions (4.2)–(4.6), the stationary distribution of this network has a product form solution if and only if (4.9) is satisfied (see Appendix E). From (4.7) and (4.8), there may be a case where (4.9) does not hold while (4.2)–(4.6) are satisfied. We give such an example below:

  λ_1 = λ_2 = 0.0667, μ_1 = 0.4000, μ_2 = 0.4667,
  r_{10} = 0.3684, r_{11} = r_{12} = 0.3158, r_{20} = 0.3784, r_{21} = 0.3243, r_{22} = 0.2973,
  λ^{(1)}_2 = 0.0667, λ^{(2)}_1 = 0.0611, λ^{(0)}_1 = 0.2104, λ^{(0)}_2 = 0.2225.

Thus, the Jackson network with extra arrivals at empty nodes may not have a product form solution even when it is model reversible.
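The example can be checked mechanically. The sketch below (ours) verifies (4.2), (4.4) and (4.6), and the failure of (4.9), for the printed parameters; the tolerances are loose because the values are rounded to four digits.

```python
lam, mu1, mu2 = 0.0667, 0.4000, 0.4667
r10, r11, r12 = 0.3684, 0.3158, 0.3158
r20, r21, r22 = 0.3784, 0.3243, 0.2973
l21, l12 = 0.0667, 0.0611    # lambda_2^{(1)}, lambda_1^{(2)}
l10, l20 = 0.2104, 0.2225    # lambda_1^{(0)}, lambda_2^{(0)}

tol = 5e-3
assert abs((lam + l21) / lam - (r12 + r11) / r12) < tol      # (4.2), first
assert abs((lam + l12) / lam - (r21 + r22) / r21) < tol      # (4.2), second
assert abs((1 + l10 / lam) * (1 + l21 / lam)
           - (1 + l20 / lam) * (1 + l12 / lam)) < tol        # (4.4), i.e. (C3)
assert abs(r12 / r10 - r21 / r20) < tol                      # (4.6)
assert abs(l12 - l10) > tol and abs(l21 - l20) > tol         # (4.9) fails
```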

Acknowledgements

This research was supported in part by the Japan Society for the Promotion of Science under grant No. 24310115.

A The proof of Lemma 3.3

We first prove the condition (C1). For $n_2 \ge 1$, substituting $n = (1,n_2)$ and $n' = (0,n_2)$ into (3.1), we have

  P(\tilde{Z}_{l+1} = (0,n_2) | \tilde{Z}_l = (1,n_2)) = P(\tilde{X}^{(+)} = (−1,0)),

since $(1,n_2) \in S_+$. From (2.1), (2.2) and Lemmas 3.1 and 3.2, we have

  P(\tilde{X}^{(+)} = (−1,0)) = (π(0,n_2)/π(1,n_2)) P(Z_{l+1} = n | Z_l = n') = (π(0,1)/π(1,1)) P(X^{(2)} = (1,0)) = (π(0,1)/π(1,1)) p^{(2)}_{10}.    (A.1)

Thus, from (3.11) and (3.12), we have

  η_1^{-1} p^{(+)}_{10} = (π(0,1)/π(1,1)) p^{(2)}_{10}.

This equation implies that $p^{(2)}_{10} = 0$ if and only if $p^{(+)}_{10} = 0$. Conversely, if $p^{(2)}_{10}, p^{(+)}_{10} > 0$, then we must have

  p^{(2)}_{10}/p^{(+)}_{10} = (π(1,1)/π(0,1)) η_1^{-1}.    (A.2)

On the other hand, let us consider the case $n = (1,n_2)$ and $n' = (0,n_2+i)$ for $n_2 \ge 2$ and $i = \pm 1$. Replacing $n_2$, the reversed increment $(-1,0)$ and the forward increment $(1,0)$ by $n_2+i$, $(-1,i)$ and $(1,-i)$ in (3.11) and (A.1), respectively, we have

  η_1^{-1} p^{(+)}_{1i} = (π(0,1)/π(1,1)) p^{(2)}_{1i}, i = ±1.

From the condition (ii), note that at least one of $p^{(+)}_{1i}$, $i = 0,\pm 1$, is positive. Thus, we must have $c^{(2+)} > 0$ with $c^{(2+)} p^{(+)}_{1i} = p^{(2)}_{1i}$ if $\{Z_l\}$ is model reversible.

For $n_1 \ge 1$ and $i = 0,\pm 1$, substituting $n = (n_1,1)$ and $n' = (n_1+i,0)$ into (3.1),

  P(\tilde{Z}_{l+1} = (n_1+i,0) | \tilde{Z}_l = (n_1,1)) = P(\tilde{X}^{(+)} = (i,−1)).

Since $n' \in S_1$, we have, using Lemmas 3.1 and 3.2,

  P(\tilde{X}^{(+)} = (i,−1)) = (π(n_1+i,0)/π(n_1,1)) P(Z_{l+1} = n | Z_l = n') = (π(1,0)/π(1,1)) p^{(1)}_{(-i)1} η_1^i, i = 0,±1.

On the other hand, from Lemma 3.1, for $n, n' = n+(i,-1) \in S_+$, we have

  P(\tilde{X}^{(+)} = (i,−1)) = (π(n+(i,−1))/π(n)) P(X^{(+)} = (−i,1)) = p^{(+)}_{(-i)1} η_1^i η_2^{-1}, i = 0,±1.

Thus, we obtain, for all $i = 0,\pm 1$,

  (π(1,0)/π(1,1)) p^{(1)}_{(-i)1} = p^{(+)}_{(-i)1} η_2^{-1}.

In addition, by the condition (ii), $p^{(+)}_{(-i)1} > 0$ for some $i = 0,\pm 1$, and

  p^{(1)}_{(-i)1}/p^{(+)}_{(-i)1} = (π(1,1)/π(1,0)) η_2^{-1}.    (A.3)

Using a similar argument, we also have

  p^{(2)}_{1(-i)}/p^{(+)}_{1(-i)} = (π(1,1)/π(0,1)) η_1^{-1}.    (A.4)

Hence, there exists $c^{(1+)} > 0$ such that $c^{(1+)} p^{(+)}_{(-i)1} = p^{(1)}_{(-i)1}$ for $i = 0,\pm 1$. This completes the proof of the condition (C1).

We next obtain the conditions (C2), (C3) and (C4). Since $\{\tilde{Z}_l\}$ has homogeneous transitions as long as $\tilde{Z}_l \in S_1$, we have, from (2.2), (3.1) and Lemmas 3.1 and 3.2,

  P(\tilde{X}^{(1)} = (−1,0)) = (π(0,0)/π(1,0)) p^{(0)}_{10} = (π(n_1−1,0)/π(n_1,0)) p^{(1)}_{10} = η_1^{-1} p^{(1)}_{10}, n_1 > 1.

Similarly, considering $\tilde{X}^{(+)} = (-1,-1)$, we have, for any $n_1 \ge 1$,

  P(\tilde{X}^{(+)} = (−1,−1)) = (π(0,0)/π(1,1)) p^{(0)}_{11} = (π(n_1,0)/π(n_1+1,1)) p^{(1)}_{11} = (π(1,0)/π(1,1)) η_1^{-1} p^{(1)}_{11}.    (A.5)

Hence, for any $i = 0,1$, we must have $p^{(0)}_{1i} = p^{(1)}_{1i} = 0$ if $c^{(10)} = 0$, and $c^{(10)} p^{(1)}_{1i} = p^{(0)}_{1i}$ if $c^{(10)} > 0$. In addition, if $c^{(10)} > 0$, then we have, from (A.3)–(A.5),

  π(1,0) = c^{(10)} η_1 π(0,0),
  π(1,1) = c^{(1+)} η_2 π(1,0) = c^{(10)} c^{(1+)} η_1 η_2 π(0,0),
  π(0,1) = (1/c^{(2+)}) η_1^{-1} π(1,1) = (c^{(10)} c^{(1+)}/c^{(2+)}) η_2 π(0,0).

Similarly, we obtain (3.8)–(3.10) if $c^{(20)} > 0$. This completes the proof.

B The proof of Lemma 3.4

For each $i = 1,2$, denote the generating function of $X^{(i)}$ by $\bar\gamma_i$ (the bar distinguishes it from the function $\gamma_i$ of Theorem 3.2), that is, for $z = (z_1,z_2) \in \mathbb{R}^2$,

  \bar γ_1(z) = E(z_1^{X^{(1)}_1} z_2^{X^{(1)}_2}) = Σ_{i=-1}^{1} Σ_{j=0}^{1} p^{(1)}_{ij} z_1^i z_2^j,
  \bar γ_2(z) = E(z_1^{X^{(2)}_1} z_2^{X^{(2)}_2}) = Σ_{i=0}^{1} Σ_{j=-1}^{1} p^{(2)}_{ij} z_1^i z_2^j.

From Lemma 3.2 of [6], the stationary distribution satisfies (3.2)–(3.4) if and only if $\gamma_+(\eta_1^{-1},\eta_2^{-1}) = 1$ and there exist $\zeta_1, \zeta_2 \in \mathbb{R}$ such that

  γ_+(η_1^{-1}, ζ_2^{-1}) = 1, \bar γ_1(η_1^{-1}, ζ_2^{-1}) = 1,    (B.6)
  γ_+(ζ_1^{-1}, η_2^{-1}) = 1, \bar γ_2(ζ_1^{-1}, η_2^{-1}) = 1.    (B.7)

Hence, under the condition (C1) and $\gamma_+(\eta_1^{-1},\eta_2^{-1}) = 1$, we show that $\gamma_i(\eta_1^{-1},\eta_2^{-1}) = 1$ for $i = 1,2$ if and only if the equations (B.6) and (B.7) hold.

For $i = 0,\pm 1$ and $z \in \mathbb{R}$, let $p^{(+)}_{*i}(z) \equiv \sum_{j=-1}^{1} p^{(+)}_{ji} z^j$. Then we have

  γ_+(z_1,z_2) = p^{(+)}_{*1}(z_1) z_2 + p^{(+)}_{*0}(z_1) + p^{(+)}_{*(-1)}(z_1) z_2^{-1}.    (B.8)

For fixed $z_1 = \eta_1^{-1}$, we note that $\gamma_+(\eta_1^{-1},z_2) = 1$ has two solutions in $z_2$, counted with multiplicity, and, since $\gamma_+(\eta_1^{-1},\eta_2^{-1}) = 1$, one of them is $z_2 = \eta_2^{-1}$. Denote the other solution by $\zeta_2^{-1}$. From (B.8), the product of the two roots equals $p^{(+)}_{*(-1)}(\eta_1^{-1})/p^{(+)}_{*1}(\eta_1^{-1})$, and therefore

  η_2 = (p^{(+)}_{*1}(η_1^{-1})/p^{(+)}_{*(-1)}(η_1^{-1})) ζ_2^{-1}.    (B.9)

Note that $p^{(+)}_{*(-1)}(\eta_1^{-1}), p^{(+)}_{*1}(\eta_1^{-1}) > 0$ under the condition (ii).

We first prove that

  γ_1(η_1^{-1},η_2^{-1}) = 1 is equivalent to (B.6).    (B.10)

Under the condition (C1), for any $z \in \mathbb{R}^2$, $\gamma_1(z)$ is given by

  γ_1(z) = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)}(p^{(+)}_{1(-1)} z_1 z_2^{-1} + p^{(+)}_{0(-1)} z_2^{-1} + p^{(+)}_{(-1)(-1)} z_1^{-1} z_2^{-1})
       = p^{(1)}_{00} + p^{(1)}_{10} z_1 + p^{(1)}_{(-1)0} z_1^{-1} + c^{(1+)} p^{(+)}_{*(-1)}(z_1) z_2^{-1}.

From (B.9),

  γ_1(η_1^{-1},η_2^{-1}) = p^{(1)}_{00} + p^{(1)}_{10} η_1^{-1} + p^{(1)}_{(-1)0} η_1 + c^{(1+)} p^{(+)}_{*(-1)}(η_1^{-1}) η_2
                        = p^{(1)}_{00} + p^{(1)}_{10} η_1^{-1} + p^{(1)}_{(-1)0} η_1 + c^{(1+)} p^{(+)}_{*1}(η_1^{-1}) ζ_2^{-1}.    (B.11)

Under the condition (C1), if $p^{(+)}_{i1} > 0$ for all $i = 0,\pm 1$, then

  c^{(1+)} = p^{(1)}_{11}/p^{(+)}_{11} = p^{(1)}_{01}/p^{(+)}_{01} = p^{(1)}_{(-1)1}/p^{(+)}_{(-1)1},

and therefore we compute the last term of (B.11) as follows:

  c^{(1+)} p^{(+)}_{*1}(η_1^{-1}) ζ_2^{-1} = c^{(1+)} p^{(+)}_{11} η_1^{-1} ζ_2^{-1} + c^{(1+)} p^{(+)}_{01} ζ_2^{-1} + c^{(1+)} p^{(+)}_{(-1)1} η_1 ζ_2^{-1}
                                         = p^{(1)}_{11} η_1^{-1} ζ_2^{-1} + p^{(1)}_{01} ζ_2^{-1} + p^{(1)}_{(-1)1} η_1 ζ_2^{-1}.

This equation also holds when $p^{(+)}_{i1} = 0$ for some $i = 0,\pm 1$, since then $p^{(1)}_{i1} = 0$ under the condition (C1). For $\zeta_2$ satisfying $\gamma_+(\eta_1^{-1},\zeta_2^{-1}) = 1$, we obtain

  γ_1(η_1^{-1},η_2^{-1}) = p^{(1)}_{00} + p^{(1)}_{10} η_1^{-1} + p^{(1)}_{(-1)0} η_1 + c^{(1+)} p^{(+)}_{*1}(η_1^{-1}) ζ_2^{-1}
    = p^{(1)}_{00} + p^{(1)}_{10} η_1^{-1} + p^{(1)}_{(-1)0} η_1 + p^{(1)}_{11} η_1^{-1} ζ_2^{-1} + p^{(1)}_{01} ζ_2^{-1} + p^{(1)}_{(-1)1} η_1 ζ_2^{-1}
    = Σ_{i=-1}^{1} Σ_{j=0}^{1} p^{(1)}_{ij} η_1^{-i} ζ_2^{-j}
    = \bar γ_1(η_1^{-1}, ζ_2^{-1}),

and therefore we have (B.10). Similarly, $\gamma_2(\eta_1^{-1},\eta_2^{-1}) = 1$ if and only if (B.7) holds. This completes the proof.

C The reversibility condition of a singular reflecting random walk

In this section, we obtain the model reversibility condition in a special case for which the condition (ii) fails. For this, we assume the following conditions:

  p^{(+)}_{ij} = 0, i = 0,±1, j = ±1,    (C.1)
  p^{(+)}_{i0} > 0, i = 0,±1.    (C.2)

Then, it is easy to see that the random walk $\{Y_l\}$ is not irreducible. This reflecting random walk is referred to as a singular reflecting random walk, which was introduced by [3]. From the irreducibility condition (i), we must have

  p^{(1)}_{10} > 0, p^{(1)}_{(-1)0} > 0,    (C.3)
  p^{(2)}_{i1} > 0, p^{(2)}_{j(-1)} > 0, for some i,j = 0,1.    (C.4)

In Figure 4, we depict the transition diagram of a reflecting random walk satisfying the conditions (C.1)–(C.4). This reflecting random walk is model reversible if and only if the following conditions hold:

[Figure 4: Singular reflecting random walk satisfying the irreducibility condition (i)]

  π(n_1,n_2) = η_1^{n_1−1} η_2^{n_2−1} π(1,1), n_1 ≥ 1, n_2 ≥ 1,
  π(n_1,0) = α_1^{n_1−1} π(1,0), n_1 ≥ 1,
  π(0,n_2) = η_2^{n_2−1} π(0,1), n_2 ≥ 1,
  π(1,1) = c^{(20)} c^{(2+)} η_1 η_2 π(0,0),
  π(1,0) = c^{(10)} α_1 π(0,0),
  π(0,1) = c^{(20)} η_2 π(0,0),
  p^{(1)}_{i1} = 0, p^{(2)}_{1j} = 0, p^{(0)}_{11} = 0, i = 0,±1, j = ±1,

where $\eta_1, \eta_2, \alpha_1 \in (0,1)$ and $c^{(2+)}, c^{(10)}, c^{(20)} > 0$ are given by

  η_1 = p^{(+)}_{10}/p^{(+)}_{(-1)0}, η_2 = p^{(2)}_{01}/p^{(2)}_{0(-1)}, α_1 = p^{(1)}_{10}/p^{(1)}_{(-1)0},
  c^{(2+)} = p^{(2)}_{10}/p^{(+)}_{10}, c^{(10)} = p^{(0)}_{10}/p^{(1)}_{10}, c^{(20)} = p^{(0)}_{01}/p^{(2)}_{01}.

In what follows, we derive these conditions. We assume that $\{Z_l\}$ is model reversible. Then, for $i,j = 0,\pm 1$ and $(n_1,n_2) \in S_k$, we can define the following probability functions $\tilde{p}^{(k)}_{ij}$:

  \tilde{p}^{(k)}_{ij} = P(\tilde{Z}_{l+1} = (n_1+i,n_2+j) | \tilde{Z}_l = (n_1,n_2)).

From (2.2), we have

  \tilde{p}^{(+)}_{10} = P(\tilde{Z}_{l+1} = (n_1+1,n_2) | \tilde{Z}_l = (n_1,n_2)) = (π(n_1+1,n_2)/π(n_1,n_2)) p^{(+)}_{(-1)0} > 0, (n_1,n_2) ∈ S_+.

Thus, if $\{Z_l\}$ is model reversible, then we must have

  π(n_1+1,n_2) = η_1 π(n_1,n_2), (n_1,n_2) ∈ S_+,    (C.5)

for some $\eta_1 \in (0,1)$. Similarly, from (C.3), we have

  \tilde{p}^{(1)}_{10} = P(\tilde{Z}_{l+1} = (n_1+1,0) | \tilde{Z}_l = (n_1,0)) = (π(n_1+1,0)/π(n_1,0)) p^{(1)}_{(-1)0} > 0, (n_1,0) ∈ S_1.

Thus, for some $\alpha_1 \in (0,1)$,

  π(n_1+1,0) = α_1 π(n_1,0), (n_1,0) ∈ S_1.    (C.6)

On the other hand, we have, for $i = 0,\pm 1$ and $(n_1+i,n_2-1),(n_1,n_2) \in S_+$,

  \tilde{p}^{(+)}_{i(-1)} = P(\tilde{Z}_{l+1} = (n_1+i,n_2−1) | \tilde{Z}_l = (n_1,n_2)) = (π(n_1+i,n_2−1)/π(n_1,n_2)) p^{(+)}_{(-i)1} = 0,

since we assume (C.1). For $n_1 > 1$,

  \tilde{p}^{(+)}_{i(-1)} = P(\tilde{Z}_{l+1} = (n_1+i,0) | \tilde{Z}_l = (n_1,1)) = (π(n_1+i,0)/π(n_1,1)) p^{(1)}_{(-i)1}.

Thus, we must have $p^{(1)}_{i1} = 0$ for any $i = 0,\pm 1$. Similarly, we have, for $j = \pm 1$ and $(n_1-1,n_2-j),(n_1,n_2) \in S_+$,

  \tilde{p}^{(+)}_{(-1)(-j)} = P(\tilde{Z}_{l+1} = (n_1−1,n_2−j) | \tilde{Z}_l = (n_1,n_2)) = (π(n_1−1,n_2−j)/π(n_1,n_2)) p^{(+)}_{1j} = 0,

from (C.1). Since

  \tilde{p}^{(+)}_{(-1)(-j)} = P(\tilde{Z}_{l+1} = (0,n_2−j) | \tilde{Z}_l = (1,n_2)) = (π(0,n_2−j)/π(1,n_2)) p^{(2)}_{1j},

we also have $p^{(2)}_{1j} = 0$ for $j = \pm 1$, and from (C.4), $p^{(2)}_{01} > 0$ and $p^{(2)}_{0(-1)} > 0$. Moreover, for $n_2 \ge 1$,

  \tilde{p}^{(2)}_{01} = P(\tilde{Z}_{l+1} = (0,n_2+1) | \tilde{Z}_l = (0,n_2)) = (π(0,n_2+1)/π(0,n_2)) p^{(2)}_{0(-1)} > 0.

Thus, for some $\eta_2 \in (0,1)$,

  π(0,n_2+1) = η_2 π(0,n_2), (0,n_2) ∈ S_2.

We next consider the relation between $\pi(1,n_2+1)$ and $\pi(1,n_2)$ for $n_2 \ge 1$. We have, for any $n_2 \ge 1$,

  \tilde{p}^{(+)}_{(-1)0} = P(\tilde{Z}_{l+1} = (0,n_2) | \tilde{Z}_l = (1,n_2)) = (π(0,n_2)/π(1,n_2)) p^{(2)}_{10} = η_1^{-1} p^{(+)}_{10}.

Hence,

  π(1,n_2+1)/π(1,n_2) = (π(0,n_2)/π(1,n_2)) (π(1,n_2+1)/π(0,n_2+1)) (π(0,n_2+1)/π(0,n_2)) = π(0,n_2+1)/π(0,n_2) = η_2,

and we have

  π(n_1,n_2) = η_1^{n_1−1} η_2^{n_2−1} π(1,1), (n_1,n_2) ∈ S_+,
  π(1,1) = (p^{(2)}_{10}/p^{(+)}_{10}) η_1 π(0,1) = c^{(2+)} η_1 π(0,1).

On the other hand,

  \tilde{p}^{(+)}_{(-1)(-1)} = P(\tilde{Z}_{l+1} = (0,0) | \tilde{Z}_l = (1,1)) = (π(0,0)/π(1,1)) p^{(0)}_{11} = 0,
  \tilde{p}^{(1)}_{(-1)0} = P(\tilde{Z}_{l+1} = (0,0) | \tilde{Z}_l = (1,0)) = (π(0,0)/π(1,0)) p^{(0)}_{10} = (π(n_1−1,0)/π(n_1,0)) p^{(1)}_{10} = α_1^{-1} p^{(1)}_{10},
  \tilde{p}^{(2)}_{0(-1)} = P(\tilde{Z}_{l+1} = (0,0) | \tilde{Z}_l = (0,1)) = (π(0,0)/π(0,1)) p^{(0)}_{01} = (π(0,n_2−1)/π(0,n_2)) p^{(2)}_{01} = η_2^{-1} p^{(2)}_{01}.

Thus, we have $p^{(0)}_{11} = 0$ and

  π(1,0) = (p^{(0)}_{10}/p^{(1)}_{10}) α_1 π(0,0) = c^{(10)} α_1 π(0,0),
  π(0,1) = (p^{(0)}_{01}/p^{(2)}_{01}) η_2 π(0,0) = c^{(20)} η_2 π(0,0).

From these equations,

  π(0,0) = (1/(c^{(10)} α_1)) π(1,0) = (1/(c^{(20)} η_2)) π(0,1) = (1/(c^{(20)} c^{(2+)} η_1 η_2)) π(1,1),

and therefore

  π(1,1) = (c^{(20)} c^{(2+)}/c^{(10)}) (η_1 η_2/α_1) π(1,0).

We finally determine $\eta_1$, $\eta_2$ and $\alpha_1$. We have the following stationary equations:

  (p^{(+)}_{10} + p^{(+)}_{(-1)0}) π(n_1,n_2) = p^{(+)}_{(-1)0} π(n_1+1,n_2) + p^{(+)}_{10} π(n_1−1,n_2),
  (p^{(1)}_{10} + p^{(1)}_{(-1)0}) π(n_1,0) = p^{(1)}_{(-1)0} π(n_1+1,0) + p^{(1)}_{10} π(n_1−1,0).

From (C.5) and (C.6), we have

  (p^{(+)}_{10} + p^{(+)}_{(-1)0}) π(n_1,n_2) = p^{(+)}_{(-1)0} η_1 π(n_1,n_2) + p^{(+)}_{10} η_1^{-1} π(n_1,n_2),
  (p^{(1)}_{10} + p^{(1)}_{(-1)0}) π(n_1,0) = p^{(1)}_{(-1)0} α_1 π(n_1,0) + p^{(1)}_{10} α_1^{-1} π(n_1,0).

Thus, we obtain

  η_1 = p^{(+)}_{10}/p^{(+)}_{(-1)0} < 1, α_1 = p^{(1)}_{10}/p^{(1)}_{(-1)0} < 1.

Again from the stationary equations, we have

  (p^{(2)}_{01} + p^{(2)}_{0(-1)} + p^{(2)}_{10}) π(0,n_2) = p^{(2)}_{0(-1)} π(0,n_2+1) + p^{(2)}_{01} π(0,n_2−1) + p^{(+)}_{(-1)0} π(1,n_2),
  p^{(2)}_{01} + p^{(2)}_{0(-1)} + p^{(2)}_{10} = p^{(2)}_{0(-1)} η_2 + p^{(2)}_{01} η_2^{-1} + p^{(2)}_{10},

where the last term follows from $\pi(1,n_2)/\pi(0,n_2) = c^{(2+)} \eta_1$ and $p^{(+)}_{(-1)0} c^{(2+)} \eta_1 = p^{(2)}_{10}$.

Hence,

  η_2 = p^{(2)}_{01}/p^{(2)}_{0(-1)} < 1.

D The proof of γ_i(ρ_1^{-1},ρ_2^{-1}) = 1

Under the assumption (4.5), we rewrite the traffic equations (4.1) as follows:

  α_1 = λ + α_2 r_{21} + α_1 r_{11}, α_2 = λ + α_1 r_{12} + α_2 r_{22}.    (D.7)

The solutions of these equations are given by

  α_1 = λ(1 − r_{22} + r_{21})/(1 − r_{11} − r_{22} − r_{12} r_{21} + r_{11} r_{22}),
  α_2 = λ(1 − r_{11} + r_{12})/(1 − r_{11} − r_{22} − r_{12} r_{21} + r_{11} r_{22}).

We note that $1 - r_{11} = r_{12} + r_{10}$ and $1 - r_{22} = r_{21} + r_{20}$. From the assumption (4.6), $(1 - r_{22}) r_{12} = (1 - r_{11}) r_{21}$, and therefore we have $\alpha_1 r_{12} = \alpha_2 r_{21}$. Substituting this equation into (D.7), we have $\lambda = \alpha_1 r_{10} = \alpha_2 r_{20}$.

Using these identities together with $\alpha_i = \mu_i \rho_i$, we have $\mu_1 r_{10} \rho_1 = \mu_2 r_{20} \rho_2 = \lambda$, $\lambda \rho_1^{-1} = \mu_1 r_{10}$, $\lambda \rho_2^{-1} = \mu_2 r_{20}$, $\mu_1 r_{12} \rho_1 \rho_2^{-1} = \mu_2 r_{21}$ and $\mu_2 r_{21} \rho_1^{-1} \rho_2 = \mu_1 r_{12}$. Then, for $i = 0,1,+$:

  γ_0(ρ_1^{-1},ρ_2^{-1}) = μ_1 + μ_2 − λ^{(0)}_1 − λ^{(0)}_2 + (1 + λ^{(0)}_1/λ) μ_1 r_{10} ρ_1 + (1 + λ^{(0)}_2/λ) μ_2 r_{20} ρ_2
    = μ_1 + μ_2 − λ^{(0)}_1 − λ^{(0)}_2 + (λ + λ^{(0)}_1) + (λ + λ^{(0)}_2)
    = μ_1 + μ_2 + λ + λ = 1,

  γ_1(ρ_1^{-1},ρ_2^{-1}) = μ_2 − λ^{(1)}_2 + λ ρ_1^{-1} + μ_1 r_{10} ρ_1 + (1 + λ^{(1)}_2/λ)(μ_2 r_{21} ρ_1^{-1} ρ_2 + μ_2 r_{20} ρ_2)
    = μ_2 − (r_{11}/r_{12}) λ + μ_1 r_{10} + λ + (1 + r_{11}/r_{12})(μ_1 r_{12} + λ)
    = μ_2 + μ_1 r_{10} + μ_1 r_{12} + μ_1 r_{11} + 2λ
    = μ_1 + μ_2 + 2λ = 1,

  γ_+(ρ_1^{-1},ρ_2^{-1}) = μ_1 r_{11} + μ_2 r_{22} + λ ρ_1^{-1} + λ ρ_2^{-1} + μ_1 r_{10} ρ_1 + μ_2 r_{20} ρ_2 + μ_1 r_{12} ρ_1 ρ_2^{-1} + μ_2 r_{21} ρ_1^{-1} ρ_2
    = μ_1 r_{11} + μ_2 r_{22} + μ_1 r_{10} + μ_2 r_{20} + λ + λ + μ_2 r_{21} + μ_1 r_{12} = 1.

By the symmetry with $\gamma_1(\rho_1^{-1},\rho_2^{-1}) = 1$, we have $\gamma_2(\rho_1^{-1},\rho_2^{-1}) = 1$.
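This appendix can also be verified numerically for the example of Section 4: build the modified primitives, solve the traffic equations for $(\rho_1, \rho_2)$, and evaluate the four functions of Theorem 3.2. The sketch below (ours) does so; it reuses the gamma_values helper defined after Theorem 3.2, whose $\gamma_0$ line depends on our reconstruction of that definition, and the tolerance is loose because the printed parameters are rounded.

```python
import numpy as np

lam, mu1, mu2 = 0.0667, 0.4000, 0.4667
r10, r11, r12 = 0.3684, 0.3158, 0.3158
r20, r21, r22 = 0.3784, 0.3243, 0.2973
l21, l12, l10, l20 = 0.0667, 0.0611, 0.2104, 0.2225

# traffic equations (4.1) with lambda_1 = lambda_2 = lam, cf. (D.7)
alpha = np.linalg.solve(np.eye(2) - np.array([[r11, r21], [r12, r22]]),
                        np.array([lam, lam]))
rho1, rho2 = alpha[0] / mu1, alpha[1] / mu2

pplus = {(1, 0): lam, (0, 1): lam, (-1, 0): mu1 * r10, (-1, 1): mu1 * r12,
         (0, -1): mu2 * r20, (1, -1): mu2 * r21,
         (0, 0): mu1 * r11 + mu2 * r22, (1, 1): 0.0, (-1, -1): 0.0}
p1 = {(0, 0): mu2 - l21, (1, 0): lam, (-1, 0): mu1 * r10}
p2 = {(0, 0): mu1 - l12, (0, 1): lam, (0, -1): mu2 * r20}
p0 = {(0, 0): mu1 + mu2 - l10 - l20}

gs = gamma_values(p0, p1, p2, pplus,
                  1 + l21 / lam, 1 + l12 / lam,
                  1 + l10 / lam, 1 + l20 / lam, rho1, rho2)
assert all(abs(g - 1.0) < 5e-3 for g in gs)  # gamma_i(1/rho1, 1/rho2) = 1
```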

E Product form conditions for model reversibility

In this section, we show that, under the conditions (4.2)–(4.6), the Jackson network with extra arrivals at empty nodes has a product form solution if and only if the condition (4.9) is satisfied. Note that the stationary distribution has a product form if and only if

  π(n_1,n_2) = ν^{(1)}_{n_1} ν^{(2)}_{n_2}, n_1,n_2 ∈ \mathbb{Z}_+,    (E.1)

for some $\nu^{(1)}_{n_1}, \nu^{(2)}_{n_2} \in (0,1)$. From [6], it suffices to check (E.1) for $n_1,n_2 \in \{0,1\}$. From (3.5)–(3.10) and (4.3), we have

  π(1,0) = (1 + λ^{(0)}_1/λ) η_1 π(0,0),
  π(0,1) = (1 + λ^{(0)}_2/λ) η_2 π(0,0),
  π(1,1) = (1 + λ^{(0)}_1/λ)(1 + λ^{(1)}_2/λ) η_1 η_2 π(0,0).

If (4.9) is satisfied, we can rewrite the last equation as

  π(1,1) = (1 + λ^{(0)}_1/λ)(1 + λ^{(0)}_2/λ) η_1 η_2 π(0,0).

Thus, for $n_1,n_2 \in \{0,1\}$, (E.1) holds with

  ν^{(1)}_1 = (1 + λ^{(0)}_1/λ) η_1 ν^{(1)}_0, ν^{(2)}_1 = (1 + λ^{(0)}_2/λ) η_2 ν^{(2)}_0, ν^{(1)}_0 ν^{(2)}_0 = π(0,0).

On the other hand, if (E.1) is satisfied, then we have

  π(1,1)/π(1,0) = π(0,1)/π(0,0) = ν^{(2)}_1/ν^{(2)}_0,

and therefore we must have $\lambda^{(1)}_2 = \lambda^{(0)}_2$. Similarly, we obtain $\lambda^{(2)}_1 = \lambda^{(0)}_1$.

References

[1] Asmussen, S. (2003). Applied Probability and Queues, 2nd ed., Stochastic Modelling and Applied Probability 51. Springer-Verlag, New York.

[2] Chao, X., Miyazawa, M. and Pinedo, M. (1999). Queueing Networks: Customers, Signals and Product Form Solutions. John Wiley & Sons, New York.

[3] Fayolle, G., Iasnogorodski, R. and Malyshev, V. (1999). Random Walks in the Quarter-Plane: Algebraic Methods, Boundary Value Problems and Applications. Springer, New York.

[4] Kelly, F. P. (1979). Reversibility and Stochastic Networks. John Wiley & Sons, New York.

[5] Kobayashi, M. and Miyazawa, M. (2012). Revisit to the tail asymptotics of the double QBD process: refinement and complete solutions for the coordinate and diagonal directions. In Matrix-Analytic Methods in Stochastic Models (G. Latouche and M. S. Squillante, eds.), Springer, 147–181. arXiv:1201.3167.

[6] Latouche, G. and Miyazawa, M. (2013). Product form characterization for a two dimensional reflecting random walk and its applications. To appear in Queueing Systems. URL http://link.springer.com/article/10.1007/s11134-013-9381-7.

[7] Miyazawa, M. (2011). Light tail asymptotics in multidimensional reflecting processes for queueing networks. TOP, 19, 233–299.

[8] Miyazawa, M. (2013). Reversibility in queueing models. In Wiley Encyclopedia of Operations Research and Management Science. John Wiley & Sons, New York.

[9] Serfozo, R. (1999). Introduction to Stochastic Networks, Applications of Mathematics 44. Springer-Verlag, New York.