Stationary Probabilities of Markov Chains with Upper Hessenberg Transition Matrices

Y. Quennel ZHAO
Department of Mathematics and Statistics
University of Winnipeg
Winnipeg, Manitoba
CANADA R3B 2E9

Susan Li
School of Business and Banking
Adelphi University
Garden City, New York 11530
U.S.A.

September 2, 2004

Abstract: In this paper, based on probabilistic arguments, we obtain an explicit solution of the stationary distribution for a discrete time Markov chain with an upper Hessenberg time stationary transition probability matrix. Our solution then leads to a numerically stable and efficient algorithm for computing stationary probabilities. Two other expressions for the stationary distribution are also derived, which lead to two alternative algorithms. Numerical analysis of the algorithms is given, which shows the reliability and efficiency of the algorithms. Examples of applications are provided, including results for a discrete time state dependent batch arrival queueing model. The idea used in this paper can be generalized to deal with Markov chains with a more general structure.

Keywords: Stationary probabilities; Hessenberg matrices; Censored Markov chains.

Y.Q. Zhao acknowledges that this work was supported by Grant No. 4452 from the Natural Sciences and Engineering Research Council of Canada (NSERC).
1 Introduction

The transition probability matrix of a Markov chain is upper Hessenberg, i.e. p_{i,j} = 0 whenever i > j + 1, if from any state k+1 the only lower state that can be reached in the next transition is state k. These are a special type of Markov chain and they are encountered in a variety of application areas. In queueing theory, the most well-known model leading to such a Markov chain might be the M/G/1 queue. For the imbedded Markov chain of the M/G/1 queue, not only is the transition probability matrix upper Hessenberg, but the transition probability from state i+k to state j+k is also independent of k if i > 0. Unlike for the GI/M/1 queue, the stationary probability distribution for the M/G/1 queue is no longer geometric. One needs greater computational effort for a numerical solution to such a model, especially for a Markov chain with a general upper Hessenberg transition probability matrix. By general, we mean that there is no property of repeating rows in the transition probability matrix such as in that of the imbedded Markov chain of the M/G/1 queue.

As far as we know, there are no explicit solutions provided in the literature for the stationary probability distributions of Markov chains with upper Hessenberg transition probability matrices. Algorithms for computing such a stationary probability distribution usually either involve numerically unstable computations or require more computer memory to handle two-dimensional arrays.

In this paper, we obtain explicit expressions for the stationary probabilities of a Markov chain with a general upper Hessenberg transition probability matrix, based on purely probabilistic arguments. The probabilistic argument allows us to see the structure of the solution much more clearly. Our solution then naturally leads to a numerically stable and efficient algorithm for computing stationary probabilities. There are no subtraction and division operations at all involved in the algorithm.
Only one column of the transition probability matrix is needed for each iteration, and therefore only one-dimensional arrays are involved in programming. Two other expressions for the stationary distribution are obtained and the corresponding algorithms are derived.

When the state space is finite, the computational results are exact in the sense that we do not need to truncate an infinite matrix into a finite one. When the state space is infinite, we need to truncate the transition probability matrix into a finite matrix. Specifically, we use the northwest corner of the transition probability matrix and then
augment it into a stochastic matrix. The stationary probabilities of this finite Markov chain are used as approximations for the infinite Markov chain. Of course, we need to augment the northwest corner into an upper Hessenberg stochastic matrix in order to use our algorithm. There are many ways to achieve this such that the stationary distribution of the resulting finite Markov chain converges to that of the original infinite Markov chain; for example, augmenting the last column only, which is the same as the censoring operation for the Markov chains studied in this paper. For this issue, one may refer to Gibson and Seneta [2], Heyman [4] and the references therein. Since the censoring operation and augmentation of the last column only lead to the same finite transition probability matrix, and the former has been proved to be the best augmentation method by Zhao and Liu [7], this approximation gives the result with the minimal error sum. We also give a criterion for determining the truncation size.

The rest of the paper is organized as follows: In Section 2, after introducing some basic results on the censored Markov chain, we obtain the main result of the paper: the solution for the stationary probabilities of an upper Hessenberg matrix. Two other expressions are also obtained in this section. In Section 3, based on our main results, a numerically stable and efficient algorithm is obtained. Two alternative algorithms are also discussed. Finally, we include various application models, including the Geom^{X(n)}/Geom(n)/1 queue, as our examples.

2 Main results

In this section, after introducing the concept of the censored Markov chain, we give an explicit expression for the stationary probabilities of a Markov chain whose transition probability matrix is upper Hessenberg. Our solution is obtained based on purely probabilistic arguments. The technique used here is the censoring operation.
Our solution leads to a numerically stable and efficient algorithm for computing the stationary probabilities, which is discussed in the next section. Based on the above solution, two other expressions for the stationary probability distribution are derived, which lead to two alternative algorithms for computing stationary probabilities.

Consider a discrete time Markov chain X(t) with state space S = {k; 0 ≤ k < n_S + 1}, where n_S ≤ ∞. The censored process X^J(t), with censoring set J a subset of S, is defined as the stochastic process whose nth transition is the nth time that the Markov chain X(t) visits
J. In other words, the sample paths of the process X^J(t) are obtained from the sample paths of X(t) by omitting all parts in J^c, where J^c is the complement of J. Therefore, X^J(t) is the process obtained by watching X(t) only when it is in J. A rigorous definition can be found on page 13 of Freedman [1]. The following lemma is essentially Lemma 6-6 in Kemeny et al. [5] (see also Lemma 1 and Lemma 2 of Zhao and Liu [7]).

Lemma 1 Let P = (p_{i,j})_{i,j∈S} be the transition probability matrix of an arbitrary discrete time Markov chain, partitioned according to the subsets J^c and J:

                  J^c  J
        P =  J^c [ Q   D ]
             J   [ U   T ].                                    (1)

Then the censored process is a Markov chain and its transition probability matrix is given by

        P^J = T + U Q̂ D,                                       (2)

with Q̂ = Σ_{k=0}^∞ Q^k. Let π_j and π_j^{(J)} be the stationary probabilities of the original Markov chain and the censored Markov chain, respectively. Then

        π_j^{(J)} = π_j / Σ_{i∈J} π_i,   j ∈ J.                (3)

The (i,j)th element of Q̂ is the expected number of visits to state j ∈ J^c before entering J, given that the initial state is i ∈ J^c. The next lemma is simply a corollary of the (35) Proposition of Freedman [1].

Lemma 2 Let J and K be two subsets of the state space S such that K includes J. Then (P^K)^J = P^J.

This lemma simply tells us that the censored Markov chain with censoring set J can be obtained in several steps, using a smaller censoring set in each step.

Similar to the definition in Kleinrock (pp. 246-248, [6]), we define N_k(t) as the total number of visits to state k by time t, and η_k as the expected number of visits to state k between two successive visits to state k+1. It follows from the definition of η_k that

        η_k = lim_{t→∞} N_k(t) / N_{k+1}(t),   0 ≤ k < n_S.    (4)
Therefore, according to Markov chain theory, the stationary probabilities π_k are given by

        π_k = η_k π_{k+1},   0 ≤ k < n_S,

where

        π_0 = 1 / (1 + Σ_{k=0}^{n_S - 1} 1/(η_0 η_1 ⋯ η_k)).   (5)

Hence, the determination of the stationary probability distribution reduces to that of η_k, 0 ≤ k < n_S.

For a Markov chain with an upper Hessenberg transition probability matrix, since from any state k+1 the only lower state that can be reached in the next transition is state k, p_{k+1,k} is the probability that state k is visited at least once between two successive visits to state k+1. Hence,

        η_k = p_{k+1,k} F_k,   0 ≤ k < n_S,

where F_k is the conditional expected number of visits to state k between two successive visits to state k+1, given that there is at least one visit to state k. Since a transition to a lower state can only be made state by state, the probability of visiting state k again before returning to J_{k+1} = {k+1, k+2, ...} is the same as that of visiting state k again before returning to state k+1. So F_k is the expected number of visits to state k before the process returns to state k+1 for the first time, given that state k is the state entered from a higher state. According to the probabilistic meaning of the matrix Q̂_{J_{k+1}} given after Lemma 1, we have F_k = q̂_{J_{k+1}}(k+1, k+1), where q̂_{J_{k+1}}(k+1, k+1) is the (k+1, k+1)th entry of the matrix Q̂_{J_{k+1}} in (2) when the censoring set is J_{k+1}. That is, F_k is the (k+1, k+1)th entry of the matrix Σ_{i=0}^∞ Q_{J_{k+1}}^i with

        Q_{J_{k+1}} = [ p_{0,0}  p_{0,1}  p_{0,2}  ⋯  p_{0,k-1}  p_{0,k}
                        p_{1,0}  p_{1,1}  p_{1,2}  ⋯  p_{1,k-1}  p_{1,k}
                                 p_{2,1}  p_{2,2}  ⋯  p_{2,k-1}  p_{2,k}
                                          p_{3,2}  ⋯  p_{3,k-1}  p_{3,k}
                                                   ⋱      ⋮         ⋮
                                                      p_{k,k-1}  p_{k,k} ].   (6)
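When the chain leaves J^c with probability one, the series Q̂ = Σ_{i≥0} Q^i equals the fundamental matrix (I − Q)^{-1}, so the censoring formula (2) and the restriction property (3) can be checked numerically on a small chain. The following Python sketch does this with NumPy; the 4-state upper Hessenberg matrix is our own illustration, not an example from the paper:

```python
import numpy as np

# An arbitrary 4-state upper Hessenberg stochastic matrix (our own
# illustration): p[i, j] = 0 whenever i > j + 1.
P = np.array([
    [0.3, 0.3, 0.2, 0.2],
    [0.4, 0.2, 0.2, 0.2],
    [0.0, 0.5, 0.3, 0.2],
    [0.0, 0.0, 0.6, 0.4],
])

# Censor on J = {2, 3}, so J^c = {0, 1}.  Partition P as in (1).
Q, D = P[:2, :2], P[:2, 2:]
U, T = P[2:, :2], P[2:, 2:]

# Qhat = sum_{k>=0} Q^k = (I - Q)^{-1}: the fundamental matrix of the
# J^c block, whose (i, j) entry counts expected visits to j before J.
Qhat = np.linalg.inv(np.eye(2) - Q)

# Equation (2): transition matrix of the censored chain.
P_J = T + U @ Qhat @ D

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalized to sum 1."""
    w, v = np.linalg.eig(M.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return x / x.sum()

pi = stationary(P)
pi_J = stationary(P_J)
```

The rows of `P_J` sum to one, and `pi_J` matches the restriction of `pi` to J renormalized to a probability vector, as equation (3) asserts.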
Owing to the special structure of the upper Hessenberg matrix, q̂_{J_{k+1}}(k+1, k+1) can be determined explicitly and recursively by using Lemma 2 and the following lemma.

Lemma 3 Let η_k and η_k^{(J)} be, respectively, the expected numbers of visits to state k between two successive visits to state k+1 for the original Markov chain P and for the censored Markov chain P^J. Then η_k^{(J)} = η_k for every censoring set J containing both k and k+1.

Proof: Let π and π^{(J)} be, respectively, the stationary probabilities of the Markov chain P and the censored Markov chain P^J. The proof follows from Lemma 1, π_k = η_k π_{k+1} and π_k^{(J)} = η_k^{(J)} π_{k+1}^{(J)}.

We now show specifically how to use Lemma 2 and Lemma 3 to determine η_k recursively in terms of the transition probabilities p_{i,j}.

First, let J_1 = {1, 2, ...}. Then Q_{J_1} = [p_{0,0}]. Therefore, γ_0 = p_{0,0} is the probability that the process returns to state 0 before visiting state 1, and hence

        η_0 = p_{1,0} F_0 = p_{1,0} q̂_{J_1}(1, 1) = p_{1,0} Σ_{i=0}^∞ γ_0^i = p_{1,0} / (1 - γ_0).

Denote P^{J_1} = (p_{i,j}^{(J_1)})_{i,j∈J_1}.

Next, let J_2 = {2, 3, ...}. The probability γ_1 that the process returns to state 1 before visiting state 2 is equal to p_{1,1}^{(J_1)}, which is given by

        γ_1 = p_{1,1}^{(J_1)} = p_{1,1} + η_0 p_{0,1}.

Therefore,

        η_1 = p_{2,1} F_1 = p_{2,1} q̂_{J_2}(2, 2) = p_{2,1} Σ_{i=0}^∞ γ_1^i = p_{2,1} / (1 - γ_1).

Denote P^{J_2} = (p_{i,j}^{(J_2)})_{i,j∈J_2}.
Continuing the above procedure, let J_3 = {3, 4, ...}. The probability γ_2 that the process returns to state 2 before visiting state 3 is equal to p_{2,2}^{(J_2)}, which is given by

        γ_2 = p_{2,2}^{(J_2)} = p_{2,2} + η_1 p_{1,2}^{(J_1)},

where p_{1,2}^{(J_1)} = p_{1,2} + η_0 p_{0,2}. Therefore,

        η_2 = p_{3,2} F_2 = p_{3,2} q̂_{J_3}(3, 3) = p_{3,2} Σ_{i=0}^∞ γ_2^i = p_{3,2} / (1 - γ_2).

Denote P^{J_3} = (p_{i,j}^{(J_3)})_{i,j∈J_3}.

Repeatedly using the lemmas, reducing the state space by one state in each step, we have

        γ_k = p_{k,k}^{(J_k)} = p_{k,k} + η_{k-1} p_{k-1,k}^{(J_{k-1})},

where

        p_{k-1,k}^{(J_{k-1})} = p_{k-1,k} + η_{k-2} p_{k-2,k}^{(J_{k-2})}
                              = p_{k-1,k} + η_{k-2} [ p_{k-2,k} + η_{k-3} p_{k-3,k}^{(J_{k-3})} ]
                              = p_{k-1,k} + η_{k-2} p_{k-2,k} + η_{k-2} η_{k-3} p_{k-3,k}^{(J_{k-3})}
                              = ⋯
                              = p_{k-1,k} + η_{k-2} p_{k-2,k} + η_{k-2} η_{k-3} p_{k-3,k} + ⋯ + η_{k-2} η_{k-3} ⋯ η_0 p_{0,k}.

So,

        γ_k = p_{k,k} + η_{k-1} p_{k-1,k} + η_{k-1} η_{k-2} p_{k-2,k} + η_{k-1} η_{k-2} η_{k-3} p_{k-3,k} + ⋯ + η_{k-1} η_{k-2} ⋯ η_0 p_{0,k}   (7)

and

        η_k = p_{k+1,k} F_k = p_{k+1,k} q̂_{J_{k+1}}(k+1, k+1) = p_{k+1,k} Σ_{i=0}^∞ γ_k^i   (8)
            = p_{k+1,k} / (1 - γ_k).                                                        (9)

We summarize the above discussion in the following theorem.
Theorem 1 The stationary probability distribution of the Markov chain with upper Hessenberg transition probability matrix P = (p_{i,j})_{i,j∈S} is determined by

        π_k = η_k π_{k+1},   0 ≤ k < n_S,

where η_k is given by (8) or (9) with γ_k determined by (7); or by π_{k+1} = π_k / η_k, 0 ≤ k < n_S, with π_0 determined by (5).

Remark: The concept of the censored Markov chain was also used by Grassmann and Heyman [3] to study the state reduction method.

The expressions in Theorem 1 lead to a numerically stable algorithm for computing stationary probabilities, which is also very efficient. Before we start a discussion of the algorithm, two alternative expressions for the stationary distribution of the Markov chain with an upper Hessenberg transition probability matrix can be derived in a similar way, as follows. For k = 1, 2, ..., define θ_k as the expected number of visits to state k between two successive visits to state 0, and σ_k as the expected number of visits to state k between two successive visits to state k-1. A similar argument to that in (4) leads to π_k = σ_k π_{k-1} and π_k = θ_k π_0 for 1 ≤ k < n_S + 1. Therefore, θ_k = σ_k σ_{k-1} ⋯ σ_1; that is, the expected number of visits to state k between two successive visits to state 0 is the product over i = 1 to i = k of the expected number of visits to state i between two successive visits to state i-1. All θ_k, and hence all σ_k, can be determined explicitly by a probabilistic argument similar to the one used earlier. We omit the details and state the results in the following theorem.

Theorem 2 The stationary probability distribution of the Markov chain with upper Hessenberg transition probability matrix P = (p_{i,j}) is determined by

        π_k = θ_k π_0,   0 < k < n_S + 1,                     (10)
or by

        π_k = σ_k π_{k-1},   0 < k < n_S + 1,                 (11)

where θ_0 = 1 and

        σ_k = θ_k / θ_{k-1},   0 < k < n_S + 1.               (12)

In the above expressions,

        θ_1 = (1 - p_{0,0}) / p_{1,0},                         (13)

        θ_{k+1} = (θ_k - Σ_{i=0}^{k} p_{i,k} θ_i) / p_{k+1,k},   0 < k < n_S,   (14)

and

        π_0 = 1 / Σ_{i=0}^{n_S} θ_i.                           (15)

Two alternative algorithms can be derived from the above theorem in terms of either θ_k or σ_k; they are discussed in the next section.

As a final remark for this section, we mention that the expressions in Theorem 1 and Theorem 2 may also be obtained by solving the stationary equations directly. The probabilistic argument provided above is not only an alternative proof but also, and more importantly, provides insight into the solution mechanism. For example, since γ_k is a probability and the Markov chain is ergodic, γ_k < 1. Hence, the expressions in Theorem 1 lead to a numerically stable and efficient algorithm. If one solved for π_k directly from the stationary equations, this property would not be observed.

3 Algorithms and applications

In this section, we first give an algorithm, based on Theorem 1, to compute the stationary probabilities of a Markov chain with an upper Hessenberg transition matrix and a finite state space S = {0, 1, ..., K}. We then show how to use the algorithm to compute stationary probabilities when S is infinite. Two alternative algorithms based on Theorem 2 are also given. As applications, we finally provide some examples to show how our
results apply.

Algorithm 1:

        γ_0 = p_{0,0};
        η_0 = p_{1,0} / (1 - γ_0)   (or η_0 = p_{1,0} Σ_{l=0}^∞ γ_0^l);
        for j = 1, 2, ..., K-1:
            tmp = p_{1,j} + η_0 p_{0,j};
            γ_j = tmp;
            if j > 1:
                for i = 1, 2, ..., j-1:
                    γ_j = p_{i+1,j} + η_i γ_j;
            η_j = p_{j+1,j} / (1 - γ_j)   (or η_j = p_{j+1,j} Σ_{l=0}^∞ γ_j^l);
        π_0 = [1 + 1/η_0 + 1/(η_0 η_1) + ⋯ + 1/(η_0 η_1 ⋯ η_{K-1})]^{-1};
        for k = 1, 2, ..., K: π_k = π_{k-1} / η_{k-1}.

It is obvious that no subtraction or division is used in the algorithm for computing all η_j if the alternative formulas in the parentheses are used. Therefore, our algorithm is numerically stable. In fact, for all cases we have tested, there is no difference between the computational results of the two sets of formulas. In the computation, instead of computing π_k by using π_k = η_k π_{k+1}, we have reversed the procedure. This is done because π_0 has a much smaller relative error compared to π_K. The total number of operations needed for computing all stationary probabilities is estimated as 4 + K + 3(K-1)K/2. Since only one column of transition probabilities from the transition probability matrix is needed for each iteration, the algorithm saves a tremendous amount of computer memory. Therefore, the algorithm is considered to be very efficient.

Two alternative algorithms for computing stationary probabilities can easily be derived from the results in Theorem 2. They are also very reliable even though there are either subtractions or divisions involved in the computations. These algorithms may be
used as a cross check on results computed using Algorithm 1.

Algorithm 2:

        θ_0 = 1;
        for k = 1, 2, ..., K, compute θ_k according to (13) and (14);
        π_0 = 1 / Σ_{i=0}^{K} θ_i;
        π_k = θ_k π_0.

Algorithm 3:

        θ_0 = 1;
        for k = 1, 2, ..., K, compute θ_k according to (13) and (14), and σ_k = θ_k / θ_{k-1};
        π_0 = 1 / Σ_{i=0}^{K} θ_i;
        π_k = σ_k π_{k-1}.

We now discuss how to use our algorithms to compute stationary probabilities when the state space S is infinite. The idea is very simple and natural: choose a size K and compute the probabilities of the truncated chain, which are used as an approximate solution for the Markov chain with an infinite state space. There are a number of questions to answer when we use the above procedure:

a) Does the above procedure converge? More specifically, let π_k and π_k^{(K)} be, respectively, the stationary probabilities of the infinite Markov chain and the finite Markov chain. Is the following equation true?

        lim_{K→∞} π_k^{(K)} = π_k,   for all k = 0, 1, ....    (16)

b) How is the value of K chosen?

c) What is the approximation error when we use the value of K chosen in b)?

We provide the answers to questions a) to c) in the following.

The answer to question a): When we use Algorithm 1, we in fact assume that the original transition probability matrix P is truncated into a (K+1) × (K+1) finite matrix. This matrix is obtained by augmenting the last column of the northwest corner of the transition probability matrix P. The validity of (16) has been established by many
researchers, for example Gibson and Seneta [2] and the references therein. Moreover, since the transition probability matrix is upper Hessenberg, the method of augmenting the last column only and the censoring operation with censoring set {0, 1, ..., K} give the same finite matrix. Therefore, this procedure gives the best approximation in the sense that the error sum of the probabilities between the finite Markov chain and the original infinite Markov chain is minimal, according to Zhao and Liu [7].

The answer to questions b) and c): The finite Markov chain used for the approximation is indeed the censored Markov chain. Let K_1 and K_2 be two truncation sizes. It follows from Lemma 2 that

        π_k^{(K_1)} = π_k / Σ_{i=0}^{K_1} π_i   and   π_k^{(K_2)} = π_k / Σ_{i=0}^{K_2} π_i.

Also notice that η_k is independent of the truncation size when k is less than the truncation size. So π_0^{(K_1)} and π_0^{(K_2)} are the only difference. Assuming K_1 < K_2, we then have

        ratio = π_0^{(K_2)} / π_0^{(K_1)} = Σ_{i=0}^{K_1} π_i / Σ_{i=0}^{K_2} π_i.

If we fix K_1 and increase K_2, the ratio will finally approach Σ_{i=0}^{K_1} π_i. For a precision ε, if 1 - ratio < ε for different chosen values of K_2, then the relative error between the approximate probability and the exact probability is approximately

        error = (π_k^{(K_1)} - π_k) / π_k = (1 - ratio) / ratio < ε / ratio.

For some special cases, for example the M/G/1 queue, the error analysis can be accomplished much more easily. We will show this later.

We use the above algorithms to compute stationary probabilities for two application models: the M/G/1 queue and the Geom^{X(n)}/Geom(n)/1 queue.

The M/G/1 queue: We use the M/G/1 queue as our first example to show how to use the algorithms presented above to compute the stationary probabilities. Various computational issues are also discussed.
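Before turning to the examples, we note that Algorithm 1 can be transcribed almost line for line. The Python sketch below works on a finite (K+1)-state chain given as a nested list `p`; the 3-state test matrix at the end is our own illustration, not an example from the paper:

```python
def hessenberg_stationary(p, K):
    """Algorithm 1: stationary probabilities pi_0..pi_K of a finite Markov
    chain whose transition matrix p (indexed p[i][j]) is upper Hessenberg:
    p[i][j] = 0 whenever i > j + 1."""
    eta = [0.0] * K
    eta[0] = p[1][0] / (1.0 - p[0][0])          # gamma_0 = p_{0,0}
    for j in range(1, K):
        g = p[1][j] + eta[0] * p[0][j]          # tmp in the pseudocode
        for i in range(1, j):                   # Horner form of (7)
            g = p[i + 1][j] + eta[i] * g
        eta[j] = p[j + 1][j] / (1.0 - g)        # equation (9)
    # Normalization (5): pi_0 = 1 / (1 + sum_k 1/(eta_0 ... eta_k)).
    total, prod = 1.0, 1.0
    for k in range(K):
        prod *= eta[k]
        total += 1.0 / prod
    pi = [1.0 / total]
    for k in range(1, K + 1):                   # pi_k = pi_{k-1} / eta_{k-1}
        pi.append(pi[k - 1] / eta[k - 1])
    return pi

# A 3-state test chain (our own numbers, not from the paper).
P = [[0.5, 0.3, 0.2],
     [0.4, 0.4, 0.2],
     [0.0, 0.5, 0.5]]
pi = hessenberg_stationary(P, K=2)
```

Only column j of `p` is touched in iteration j, so the matrix never needs to be held in memory as a whole, in line with the one-dimensional-array remark above.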
Unlike the GI/M/1 queue, it is well known that the stationary distribution of the imbedded Markov chain of the M/G/1 queue, where B(t) is the service time distribution with mean 1/µ and λ is the arrival rate, does not have the geometric form. However, π_0 can be explicitly determined by the expression

        π_0 = 1 - ρ,
where ρ = λ/µ is the traffic intensity. Since π_0 is the exact solution, all other probabilities computed according to π_k = π_{k-1}/η_{k-1} are also exact. Algorithm 1 is now simplified to:

Algorithm 1′:

        π_0 = 1 - ρ;
        γ_0 = a_0;
        η_0 = a_0 / (1 - γ_0)   (or η_0 = a_0 Σ_{l=0}^∞ γ_0^l);
        for j = 1, 2, ..., K-1:
            tmp = a_j + η_0 a_j;
            γ_j = tmp;
            if j > 1:
                for i = 1, 2, ..., j-1:
                    γ_j = a_{j-i} + η_i γ_j;
            η_j = a_0 / (1 - γ_j)   (or η_j = a_0 Σ_{l=0}^∞ γ_j^l);
        for k = 1, 2, ..., K: π_k = π_{k-1} / η_{k-1}.

In the algorithm,

        a_k = ∫_0^∞ ((λt)^k / k!) e^{-λt} dB(t).    (17)

We tested the algorithm for deterministic and uniformly distributed service times. It computes stationary probabilities for all values of the traffic intensity ρ from 0.01 to 0.99999. The algorithm shows excellent performance in heavy traffic situations, say ρ ≥ 0.9. η_k converges to some value, which agrees with the fact that the M/G/1 queue has a geometric tail. In Table 1, for different heavy traffic conditions, we give the size K such that η_k for k ≥ K has converged to 14 significant digits, together with the value of η_K to 10 significant digits.

        ρ      0.9           0.99          0.999         0.9999
        K      18            19            19            20
        η_K    1.230162781   1.020269813   1.002002670   1.000200267

Table 1 (a). The M/D/1 queue. K is the size such that η_k for k ≥ K has converged to 14 significant digits.

        ρ      0.9           0.99          0.999         0.9999
        K      22            23            23            23
        η_K    1.171148173   1.015189661   1.001501877   1.000150019

Table 1 (b). The M/U/1 queue. K is the size such that η_k for k ≥ K has converged to 14 significant digits.

The Geom^{X(n)}/Geom(n)/1 queue: The motivation for studying the Geom^{X(n)}/Geom(n)/1 queue comes from a problem of inventory control. The flow of goods from the factory or wholesaler through a retail store or a warehouse is an operation involving variable supply and demand. At the end of each day (or week, or other time unit), the retailer checks the inventory of the goods and then decides whether a new order should be made, which is described by a probability. The time between two successive orders is therefore a geometric random variable. The amount X(n) of goods in each order is also a random variable, which further depends on the stock on the day of taking inventory. The retailer may change his selling policy (say, price, promotion plans and so forth), which in turn affects the demand for the goods. The retailer's main concerns may include the probability distribution of the inventory, the probability distribution of the demand for the goods, the optimal selling policy, and the optimal policy for making profit. The analysis of these interesting aspects can often be done by means of the techniques of queueing theory. We formulate this inventory model as the following Geom^{X(n)}/Geom(n)/1 queue.

The Geom^{X(n)}/Geom(n)/1 queue is the discrete time queueing model in which both the interarrival times and the service times are independent geometric random variables. Arrivals occur in groups or batches, and the batch size is a random variable depending on the number of customers in the system at the arrival epoch. The service rate depends on the number of customers in the system at the epoch when the service starts.
We assume that, during a time interval, the arrival, if any, occurs after the completion of the service, if any. Consider the number of customers in the system at the time epochs immediately before the possible service completions. This gives a discrete time Markov chain with state space S = {0, 1, 2, ...} and with its transition probability
matrix of upper Hessenberg form:

        P = [ a_{0,0}      a_{0,1}                     a_{0,2}                     a_{0,3}                     ⋯
              µ_1 a_{0,0}  µ_1 a_{0,1} + µ̄_1 a_{1,0}  µ_1 a_{0,2} + µ̄_1 a_{1,1}  µ_1 a_{0,3} + µ̄_1 a_{1,2}  ⋯
                           µ_2 a_{1,0}                 µ_2 a_{1,1} + µ̄_2 a_{2,0}  µ_2 a_{1,2} + µ̄_2 a_{2,1}  ⋯
                                                       µ_3 a_{2,0}                 µ_3 a_{2,1} + µ̄_3 a_{3,0}  ⋯
                                                                                   µ_4 a_{3,0}                 ⋯
                                                                                                               ⋱ ],

where µ_n is the probability that the server completes a service in one time unit, given that there are n customers in the system when the service starts, µ̄_n = 1 - µ_n, and a_{n,k} is the probability that k customers arrive during one time unit, given that there are n customers in the system at the arrival epoch. This is an upper Hessenberg matrix without the property of repeating rows. The algorithm can easily be derived from our general algorithm. We use the following values of µ_n and a_{n,k} to find the stationary probabilities and then the mean number of customers in the system. Let

        µ_n = (n / (n + 1)) µ   and   a_{n,k} = ((λ/(n+1))^k / k!) e^{-λ/(n+1)}.
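To reproduce this experiment, one can assemble the truncated transition matrix from µ_n and a_{n,k}, augment its last column as described in Section 3, and apply the γ/η recursion of Algorithm 1. The self-contained Python sketch below does this; the truncation size K = 200 is our choice, not a value from the paper:

```python
import math

def geom_queue_mean(lam, mu, K=200):
    """Mean number in system for the Geom^{X(n)}/Geom(n)/1 example with
    mu_n = (n/(n+1)) mu and Poisson a_{n,k} with mean lam/(n+1).  The
    truncation size K is our choice; the last column of the northwest
    corner is augmented so every row sums to one."""
    def a(n, k):                # P(k arrivals in one slot | n in system);
        m = lam / (n + 1.0)     # log form avoids overflow in k! for large k
        return math.exp(-m + k * math.log(m) - math.lgamma(k + 1))

    def p(i, j):                # the upper Hessenberg matrix displayed above
        if i == 0:
            return a(0, j)
        if j < i - 1:
            return 0.0
        mu_i = (i / (i + 1.0)) * mu
        s = mu_i * a(i - 1, j - i + 1)          # service completed
        if j >= i:
            s += (1.0 - mu_i) * a(i, j - i)     # service not completed
        return s

    P = [[p(i, j) for j in range(K + 1)] for i in range(K + 1)]
    for row in P:                               # augment the last column
        row[K] += 1.0 - sum(row)

    # gamma/eta recursion of Algorithm 1, then pi and its mean.
    eta = [0.0] * K
    eta[0] = P[1][0] / (1.0 - P[0][0])
    for j in range(1, K):
        g = P[1][j] + eta[0] * P[0][j]
        for i in range(1, j):
            g = P[i + 1][j] + eta[i] * g
        eta[j] = P[j + 1][j] / (1.0 - g)
    total, prod = 1.0, 1.0
    for k in range(K):
        prod *= eta[k]
        total += 1.0 / prod
    pi = [1.0 / total]
    for k in range(1, K + 1):
        pi.append(pi[k - 1] / eta[k - 1])
    return sum(k * q for k, q in enumerate(pi)), pi
```

For lam = 1 and mu = 0.5 the returned mean should be close to the corresponding Table 2 entry (2.855), provided this reconstruction of P is faithful to the one used for the table.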
In Table 2, this mean number is provided for different values of µ and λ.

        λ\µ    0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0
        1      10.96  5.921  4.229  3.375  2.855  2.503  2.247  2.051  1.894  1.766
        2      21.00  11.00  7.670  6.008  5.015  4.356  3.888  3.538  3.268  3.053
        3      31.02  16.03  11.05  8.562  7.077  6.094  5.396  4.877  4.478  4.162
        4      41.02  21.05  14.40  11.09  9.113  7.801  6.870  6.177  5.642  5.219
        5      51.03  26.06  17.75  13.61  11.14  9.495  8.329  7.461  6.790  6.258
        6      61.03  31.06  21.10  16.13  13.15  11.18  9.780  8.736  7.928  7.287
        7      71.04  36.07  24.44  18.64  15.17  12.86  11.23  10.01  9.061  8.311
        8      81.04  41.07  27.78  21.14  17.18  14.54  12.67  11.27  10.19  9.329
        9      91.04  46.08  31.11  23.65  19.18  16.22  14.11  12.53  11.31  10.35
        10     101.0  51.08  34.45  26.15  21.19  17.89  15.55  13.79  12.44  11.36
        20     201.0  101.1  67.80  51.18  41.22  34.59  29.87  26.34  23.60  21.42
        30     301.1  151.1  101.1  76.18  61.23  51.27  44.17  38.86  34.74  31.45
        40     401.1  201.1  134.5  101.2  81.24  67.95  58.47  51.37  45.86  41.46
        50     483.4  251.1  167.8  126.2  101.2  84.62  72.76  63.88  56.98  51.47
        100    499.1  483.9  334.5  251.2  201.2  168.0  144.2  126.4  112.6  101.5

Table 2. The mean number of customers in the system for the Geom^{X(n)}/Geom(n)/1 queue for different values of µ and λ.

4 Concluding Remarks

We use probabilistic arguments to obtain solutions for the stationary distributions of Markov chains with upper Hessenberg transition probability matrices. Our solution leads to a numerically stable and efficient algorithm. By using this algorithm, the performance of the computations is often improved, including the time complexity, the computer memory required, and the range of values of the model parameters which can be used. Finally, we would like to add that the ideas used in this paper can be employed or generalized to analyze other Markov chains, including Markov chains with lower Hessenberg transition matrices, and upper and lower Hessenberg-type Markov chains, which include M/G/1 and GI/M/1 type Markov chains as special cases.
References

[1] Freedman, D. (1983) Approximating Countable Markov Chains, 2nd edn, Springer-Verlag, New York.

[2] Gibson, D. and Seneta, E. (1987) Augmented truncations of infinite stochastic matrices. J. Appl. Prob. 24, 600-608.

[3] Grassmann, W.K. and Heyman, D.P. (1990) Equilibrium distribution of block-structured Markov chains with repeating rows. J. Appl. Prob. 27, 557-576.

[4] Heyman, D.P. (1991) Approximating the stationary distribution of an infinite stochastic matrix. J. Appl. Prob. 28, 96-103.

[5] Kemeny, J.G., Snell, J.L. and Knapp, A.W. (1976) Denumerable Markov Chains, 2nd edn, Springer-Verlag, New York.

[6] Kleinrock, L. (1975) Queueing Systems, Volume 1: Theory, John Wiley & Sons, New York.

[7] Zhao, Y.Q. and Liu, D. (1996) The censored Markov chain and the best augmentation. J. Appl. Prob. 33, 623-629.