Stationary Probabilities of Markov Chains with Upper Hessenberg Transition Matrices


Y. Quennel ZHAO
Department of Mathematics and Statistics
University of Winnipeg
Winnipeg, Manitoba
CANADA R3B 2E9

Susan Li
School of Business and Banking
Adelphi University
Garden City, New York
U.S.A.

September 2, 2004

Abstract: In this paper, based on probabilistic arguments, we obtain an explicit solution of the stationary distribution for a discrete time Markov chain with an upper Hessenberg time-stationary transition probability matrix. Our solution then leads to a numerically stable and efficient algorithm for computing stationary probabilities. Two other expressions for the stationary distribution are also derived, which lead to two alternative algorithms. Numerical analysis of the algorithms is given, which shows their reliability and efficiency. Examples of applications are provided, including results for a discrete time state dependent batch arrival queueing model. The idea used in this paper can be generalized to deal with Markov chains with a more general structure.

Keywords: Stationary probabilities; Hessenberg matrices; Censored Markov chains.

Y.Q. Zhao acknowledges that this work was supported by Grant No. 4452 from the Natural Sciences and Engineering Research Council of Canada (NSERC).

1 Introduction

The transition probability matrix of a Markov chain is upper Hessenberg, that is, p_{i,j} = 0 whenever i > j + 1, if from any state k + 1 the only lower state that can be reached in the next transition is state k. These are a special type of Markov chain, encountered in a variety of application areas. In queueing theory, the most well-known model leading to such a Markov chain might be the M/G/1 queue. For the imbedded Markov chain of the M/G/1 queue, not only is the transition probability matrix upper Hessenberg, but the transition probability from state i + k to state j + k is also independent of k if i > 0. Unlike for the GI/M/1 queue, the stationary probability distribution of the M/G/1 queue is no longer geometric. One needs greater computational effort for a numerical solution to such a model, especially for a Markov chain with a general upper Hessenberg transition probability matrix. By general, we mean that the transition probability matrix has no property of repeating rows such as that of the imbedded Markov chain of the M/G/1 queue.

As far as we know, no explicit solutions have been provided in the literature for the stationary probability distributions of Markov chains with upper Hessenberg transition probability matrices. Algorithms for computing such a stationary probability distribution usually either involve numerically unstable computations or require more computer memory to handle two-dimensional arrays. In this paper, we obtain explicit expressions for the stationary probabilities of a Markov chain with a general upper Hessenberg transition probability matrix, based on purely probabilistic arguments. The probabilistic argument allows us to see the structure of the solution much more clearly. Our solution then naturally leads to a numerically stable and efficient algorithm for computing stationary probabilities. There are no subtraction or division operations at all involved in the algorithm.
Only one column of the transition probability matrix is needed for each iteration, and therefore only one-dimensional arrays are involved in programming. Two other expressions for the stationary distribution are obtained and the corresponding algorithms are derived. When the state space is finite, the computational results are exact in the sense that we do not need to truncate an infinite matrix into a finite one. When the state space is infinite, we need to truncate the transition probability matrix into a finite matrix. Specifically, we use the northwest corner of the transition probability matrix and then

augment it into a stochastic matrix. The stationary probabilities of this finite Markov chain are used as approximations for the infinite Markov chain. Of course, we need to augment the northwest corner into an upper Hessenberg stochastic matrix in order to use our algorithm. There are many ways to achieve this such that the stationary distribution of the resulting finite Markov chain converges to that of the original infinite Markov chain; for example, augmenting the last column only, which is the same as the censoring operation for the Markov chains studied in this paper. For this issue, one may refer to Gibson and Seneta [2] and Heyman [4] and the references therein. Since the censoring operation and augmentation of the last column only lead to the same finite transition probability matrix, and the former has been proved to be the best augmentation method by Zhao and Liu [7], this approximation gives the result with the minimal error sum. We also give a criterion for determining the truncation size.

The rest of the paper is organized as follows: In Section 2, after introducing some basic results on the censored Markov chain, we obtain the main result of the paper: the solution for the stationary probabilities of a Markov chain with an upper Hessenberg transition matrix. Two other expressions are also obtained in this section. In Section 3, based on our main results, a numerically stable and efficient algorithm is obtained. Two alternative algorithms are also discussed. Finally, we include various application models, including the Geom^{X(n)}/Geom(n)/1 queue, as our examples.

2 Main results

In this section, after introducing the concept of the censored Markov chain, we give an explicit expression for the stationary probabilities of a Markov chain whose transition probability matrix is upper Hessenberg. Our solution is obtained based on purely probabilistic arguments. The technique used here is the censoring operation.
Our solution leads to a numerically stable and efficient algorithm for computing the stationary probabilities, which is discussed in the next section. Based on the above solution, two other expressions for the stationary probability distribution are derived, which lead to two alternative algorithms for computing stationary probabilities.

Consider a discrete time Markov chain X(t) with state space S = {k : 0 ≤ k < n_S + 1}, where n_S may be finite or infinite. The censored process X^J(t), with censoring set J a subset of S, is defined as the stochastic process whose nth transition occurs at the nth time the Markov chain X(t) visits

J. In other words, the sample paths of the process X^J(t) are obtained from the sample paths of X(t) by omitting all parts in J^c, where J^c is the complement of J. Therefore, X^J(t) is the process obtained by watching X(t) only when it is in J. A rigorous definition can be found on page 13 of Freedman [1]. The following lemma is essentially Lemma 6-6 in Kemeny et al. [5] (also see Lemma 1 and Lemma 2 of Zhao and Liu [7]).

Lemma 1 Let P = (p_{i,j})_{i,j∈S} be the transition probability matrix of an arbitrary discrete time Markov chain, partitioned according to the subsets J^c and J:

    P = ( Q  D )  J^c
        ( U  T )  J,    (1)

with the columns ordered (J^c, J). Then the censored process is a Markov chain and its transition probability matrix is given by

    P^J = T + U Q̂ D,    (2)

with Q̂ = Σ_{k=0}^∞ Q^k. Let π_k and π_k^{(J)} be the stationary probabilities of the original Markov chain and the censored Markov chain, respectively. Then,

    π_k^{(J)} = π_k / Σ_{i∈J} π_i.    (3)

The (i,j)th element of Q̂ is the expected number of visits to state j ∈ J^c before entering J, given that the initial state is i ∈ J^c. The next lemma is simply a corollary of Proposition (35) of Freedman [1].

Lemma 2 Let J and K be two subsets of the state space S such that K includes J. Then, (P^K)^J = P^J.

This lemma simply tells us that the censored Markov chain with censoring set J can be obtained in several steps, using a smaller censoring set in each step. Similar to the definitions in Kleinrock [6], we define N_k(t), the total number of visits to state k by time t, and η_k, the expected number of visits to state k between two successive visits to state k + 1. It follows from the definition of η_k that

    η_k = lim_{t→∞} N_k(t) / N_{k+1}(t),    0 ≤ k < n_S.    (4)
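As a small numerical illustration (ours, not the paper's), Lemma 1 can be checked on a hypothetical three-state chain with censoring set J = {1, 2}, so that Q is the 1×1 block [p_{0,0}] and Q̂ = Σ_k Q^k reduces to the scalar 1/(1 − p_{0,0}):

```python
# Hypothetical 3-state upper Hessenberg chain; censoring set J = {1, 2}.
# Partition per (1): Q = [[p00]], D = [p01, p02], U = [[p10], [p20]], T = 2x2 block.
P = [[0.5, 0.3, 0.2],
     [0.4, 0.1, 0.5],
     [0.0, 0.7, 0.3]]

q_hat = 1.0 / (1.0 - P[0][0])            # Q-hat = sum of Q^k, here a scalar

# Censored transition matrix P^J = T + U * q_hat * D, formula (2)
PJ = [[P[i][j] + P[i][0] * q_hat * P[0][j] for j in (1, 2)] for i in (1, 2)]

# Stationary distribution of the full chain by power iteration
pi = [1.0 / 3] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Stationary distribution of the 2-state censored chain (closed form)
a, b = PJ[0][1], PJ[1][0]                # transition probabilities 1->2 and 2->1
piJ = [b / (a + b), a / (a + b)]

# Check (3): pi^J_k equals pi_k renormalized over J
mass = pi[1] + pi[2]
err = max(abs(piJ[0] - pi[1] / mass), abs(piJ[1] - pi[2] / mass))
```

The final check verifies (3): the censored stationary probabilities are the original ones restricted to J and renormalized.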

Therefore, according to Markov chain theory, the stationary probabilities π_k are given by

    π_k = η_k π_{k+1},    0 ≤ k < n_S,

where

    π_0 = [ Σ_{k=0}^{n_S} Π_{i=0}^{k−1} (1/η_i) ]^{−1},    (5)

with the empty product (for k = 0) taken to be 1. Hence, the determination of the stationary probability distribution reduces to that of η_k, 0 ≤ k < n_S.

For a Markov chain with an upper Hessenberg transition probability matrix, since from any state k + 1 the only lower state that can be reached in the next transition is state k, p_{k+1,k} is the probability that state k is visited at least once between two successive visits to state k + 1. Hence,

    η_k = p_{k+1,k} F_k,    0 ≤ k < n_S,

where F_k is the conditional expected number of visits to state k between two successive visits to state k + 1, given that there is at least one visit to state k. Since a transition to a lower state can only be made state by state, the probability of visiting state k again before returning to J_{k+1} = {k + 1, k + 2, ...} is the same as that of visiting state k again before returning to state k + 1. So, F_k is the expected number of visits to state k before the process returns to state k + 1 from state k for the first time, given that state k is the entering state from a higher state. According to the probabilistic meaning of the matrix Q̂_{J_{k+1}} given after Lemma 1, we have F_k = q̂_{J_{k+1}}(k+1, k+1), where q̂_{J_{k+1}}(k+1, k+1) is the (k+1, k+1)th entry of the matrix Q̂_{J_{k+1}} in (2) when the censoring set is J_{k+1}. This means that F_k is the (k+1, k+1)th entry of the matrix Σ_{i=0}^∞ Q^i_{J_{k+1}} with

    Q_{J_{k+1}} = ( p_{0,0}  p_{0,1}  p_{0,2}  ...  p_{0,k−1}  p_{0,k} )
                  ( p_{1,0}  p_{1,1}  p_{1,2}  ...  p_{1,k−1}  p_{1,k} )
                  (          p_{2,1}  p_{2,2}  ...  p_{2,k−1}  p_{2,k} )
                  (                   p_{3,2}  ...  p_{3,k−1}  p_{3,k} )
                  (                            ...      ...      ...  )
                  (                                 p_{k,k−1}  p_{k,k} ).    (6)

Owing to the special structure of the upper Hessenberg matrix, q̂_{J_{k+1}}(k+1, k+1) can be determined explicitly and recursively by using Lemma 2 and the following lemma.

Lemma 3 Let η_k and η_k^{(J)} be, respectively, the expected numbers of visits to state k between two successive visits to state k + 1 for the original Markov chain P and for the censored Markov chain P^J. Then η_k^{(J)} = η_k for all J.

Proof: Let π_k and π_k^{(J)} be, respectively, the stationary probabilities of the Markov chain P and the censored Markov chain P^J. The proof follows from Lemma 1, π_k = η_k π_{k+1} and π_k^{(J)} = η_k^{(J)} π_{k+1}^{(J)}.

We now show specifically how to use Lemma 2 and Lemma 3 to determine η_k recursively in terms of the transition probabilities p_{i,j}. First, let J_1 = {1, 2, ...}. Then Q_{J_1} = [p_{0,0}]. Therefore, γ_0 = p_{0,0} is the probability that, starting from state 0, the process visits state 0 again before visiting state 1, and hence

    η_0 = p_{1,0} F_0 = p_{1,0} q̂_{J_1}(1, 1) = p_{1,0} Σ_{i=0}^∞ γ_0^i = p_{1,0} / (1 − γ_0).

Denote P^{J_1} = (p^{(J_1)}_{i,j})_{i,j∈J_1}. Next, let J_2 = {2, 3, ...}. The probability γ_1 that the process visits state 1 again between two successive visits to state 2 is equal to p^{(J_1)}_{1,1}, which is given by

    γ_1 = p^{(J_1)}_{1,1} = p_{1,1} + η_0 p_{0,1}.

Therefore,

    η_1 = p_{2,1} F_1 = p_{2,1} q̂_{J_2}(2, 2) = p_{2,1} Σ_{i=0}^∞ γ_1^i = p_{2,1} / (1 − γ_1).

Denote P^{J_2} = (p^{(J_2)}_{i,j})_{i,j∈J_2}.
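This step-by-step censoring can be run in code. The sketch below (our illustration, on a hypothetical 4-state upper Hessenberg matrix) carries the γ_k / η_k recursion through all states and checks π_k = η_k π_{k+1} against a power-iteration estimate of the stationary vector:

```python
# Hypothetical 4-state upper Hessenberg chain (rows sum to 1, p_{i,j} = 0 for i > j+1)
P = [[0.4, 0.3, 0.2, 0.1],
     [0.5, 0.2, 0.2, 0.1],
     [0.0, 0.6, 0.3, 0.1],
     [0.0, 0.0, 0.7, 0.3]]
n = len(P)

# gamma_k = p_{k,k} + eta_{k-1} p_{k-1,k} + eta_{k-1} eta_{k-2} p_{k-2,k} + ...
# eta_k   = p_{k+1,k} / (1 - gamma_k)
eta = []
for k in range(n - 1):
    gamma, prod = P[k][k], 1.0
    for i in range(k - 1, -1, -1):       # accumulate eta_{k-1} ... eta_i times p_{i,k}
        prod *= eta[i]
        gamma += prod * P[i][k]
    eta.append(P[k + 1][k] / (1.0 - gamma))

# Reference stationary vector by power iteration
pi = [1.0 / n] * n
for _ in range(5000):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Check pi_k = eta_k * pi_{k+1}
err = max(abs(pi[k] - eta[k] * pi[k + 1]) for k in range(n - 1))
```

For this matrix the recursion gives η_0 = 5/6, η_1 = 12/11 and η_2 = 7/3, matching the ratios of consecutive stationary probabilities.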

Continuing the above procedure, let J_3 = {3, 4, ...}. The probability γ_2 that the process visits state 2 again between two successive visits to state 3 is equal to p^{(J_2)}_{2,2}, which is given by

    γ_2 = p^{(J_2)}_{2,2} = p_{2,2} + η_1 p^{(J_1)}_{1,2},

where p^{(J_1)}_{1,2} = p_{1,2} + η_0 p_{0,2}. Therefore,

    η_2 = p_{3,2} F_2 = p_{3,2} q̂_{J_3}(3, 3) = p_{3,2} Σ_{i=0}^∞ γ_2^i = p_{3,2} / (1 − γ_2).

Denote P^{J_3} = (p^{(J_3)}_{i,j})_{i,j∈J_3}. Repeatedly applying the lemmas, reducing the state space by one state in each step, we have

    γ_k = p^{(J_k)}_{k,k} = p_{k,k} + η_{k−1} p^{(J_{k−1})}_{k−1,k},

where

    p^{(J_{k−1})}_{k−1,k} = p_{k−1,k} + η_{k−2} p^{(J_{k−2})}_{k−2,k}
                          = p_{k−1,k} + η_{k−2} [ p_{k−2,k} + η_{k−3} p^{(J_{k−3})}_{k−3,k} ]
                          = p_{k−1,k} + η_{k−2} p_{k−2,k} + η_{k−2} η_{k−3} p^{(J_{k−3})}_{k−3,k}
                          = ...
                          = p_{k−1,k} + η_{k−2} p_{k−2,k} + η_{k−2} η_{k−3} p_{k−3,k} + ... + η_{k−2} η_{k−3} ... η_0 p_{0,k}.

So,

    γ_k = p_{k,k} + η_{k−1} p_{k−1,k} + η_{k−1} η_{k−2} p_{k−2,k} + η_{k−1} η_{k−2} η_{k−3} p_{k−3,k} + ... + η_{k−1} η_{k−2} ... η_0 p_{0,k},    (7)

and

    η_k = p_{k+1,k} F_k = p_{k+1,k} q̂_{J_{k+1}}(k+1, k+1) = p_{k+1,k} Σ_{i=0}^∞ γ_k^i    (8)
        = p_{k+1,k} / (1 − γ_k).    (9)

We summarize the above discussion in the following theorem.

Theorem 1 The stationary probability distribution of the Markov chain with upper Hessenberg transition probability matrix P = (p_{i,j})_{i,j∈S} is determined by

    π_k = η_k π_{k+1},    0 ≤ k < n_S,

where η_k is given by (8) or (9) with γ_k determined by (7). Or,

    π_{k+1} = π_k / η_k,    0 ≤ k < n_S,

with π_0 determined by (5).

Remark: The concept of the censored Markov chain was also used by Grassmann and Heyman [3] to study the state reduction method.

The expressions in Theorem 1 lead to a numerically stable algorithm for computing stationary probabilities, which is also very efficient. Before we start the discussion of the algorithm, two alternative expressions for the stationary distribution of the Markov chain with an upper Hessenberg transition probability matrix can be derived in a similar way, as follows. For k = 1, 2, ..., define θ_k as the expected number of visits to state k between two successive visits to state 0, and σ_k as the expected number of visits to state k between two successive visits to state k − 1. An argument similar to that in (4) leads to π_k = σ_k π_{k−1} and π_k = θ_k π_0 for 0 < k < n_S + 1. Therefore, θ_k = σ_k σ_{k−1} ... σ_1; that is, the expected number of visits to state k between two successive visits to state 0 is the product of the expected numbers of visits to state i between two successive visits to state i − 1 over i = 1 to i = k. All θ_k, and hence all σ_k, can be determined explicitly by a probabilistic argument similar to the one used earlier. We omit some details and state the results in the following theorem.

Theorem 2 The stationary probability distribution of the Markov chain with upper Hessenberg transition probability matrix P = (p_{i,j}) is determined by

    π_k = θ_k π_0,    0 < k < n_S + 1,    (10)

or by

    π_k = σ_k π_{k−1},    0 < k < n_S + 1,    (11)

where

    σ_k = θ_k / θ_{k−1},    0 < k < n_S + 1.    (12)

In the above expressions, θ_0 = 1,

    θ_1 = (1 − p_{0,0}) / p_{1,0},    (13)

    θ_{k+1} = ( θ_k − Σ_{i=0}^{k} p_{i,k} θ_i ) / p_{k+1,k},    0 < k < n_S,    (14)

and

    π_0 = 1 / Σ_{i=0}^{n_S} θ_i.    (15)

Two alternative algorithms can be derived from the above theorem in terms of either θ_k or σ_k; they are discussed in the next section.

As a final remark of this section, we mention that the expressions in Theorem 1 and Theorem 2 may also be obtained by solving the stationary equations directly. The probabilistic argument provided above is not only an alternative proof; more importantly, it provides insight into the solution mechanism. For example, since γ_k is a probability and the Markov chain is ergodic, γ_k < 1. Hence, the expressions in Theorem 1 lead to a numerically stable and efficient algorithm. If one directly solved for π_k from the stationary equations, this property would not be observed.

3 Algorithms and applications

In this section, we first give an algorithm, based on Theorem 1, to compute the stationary probabilities of a Markov chain with an upper Hessenberg transition matrix and a finite state space S = {0, 1, ..., K}. We then show how to use the algorithm to compute stationary probabilities when S is infinite. Two alternative algorithms based on Theorem 2 are also given. As applications, we finally provide some examples to show how our

results apply.

Algorithm 1:

    γ_0 = p_{0,0};
    η_0 = p_{1,0} / (1 − γ_0)    ( or η_0 = p_{1,0} Σ_{l=0}^∞ γ_0^l );
    for j = 1, 2, ..., K−1,
        tmp = p_{1,j} + η_0 p_{0,j};
        γ_j = tmp;
        if j > 1,
            for i = 1, 2, ..., j−1,
                γ_j = p_{i+1,j} + η_i γ_j;
        η_j = p_{j+1,j} / (1 − γ_j)    ( or η_j = p_{j+1,j} Σ_{l=0}^∞ γ_j^l );
    π_0 = [ 1 + 1/η_0 + 1/(η_0 η_1) + ... + 1/(η_0 η_1 ... η_{K−1}) ]^{−1};
    for k = 1, 2, ..., K,
        π_k = π_{k−1} / η_{k−1}.

It is obvious that no subtraction or division is used in the algorithm for computing all η_k if the alternative formulas in the parentheses are used. Therefore, our algorithm is numerically stable. In fact, for all cases we have tested, there is no difference between the computational results of the two sets of formulas. In the computation, instead of computing π_k by using π_k = η_k π_{k+1}, we have reversed the procedure. This is done because π_0 has a much smaller relative error than π_K. The total number of computations needed for all stationary probabilities is estimated as 4 + K + 3(K−1)K/2. Since only one column of the transition probability matrix is needed for each iteration, the algorithm saves a tremendous amount of computer memory. Therefore, the algorithm is considered to be very efficient.

Two alternative algorithms for computing stationary probabilities can easily be derived from the results in Theorem 2. They are also very reliable, even though subtractions or divisions are involved in the computations. These algorithms may be

used as a cross-check on results computed using Algorithm 1.

Algorithm 2:

    θ_0 = 1;
    for k = 1, 2, ..., K, compute θ_k according to (13) and (14);
    π_0 = 1 / Σ_{i=0}^{K} θ_i;
    π_k = θ_k π_0.

Algorithm 3:

    θ_0 = 1;
    for k = 1, 2, ..., K, compute θ_k according to (13) and (14), and σ_k = θ_k / θ_{k−1};
    π_0 = 1 / Σ_{i=0}^{K} θ_i;
    π_k = σ_k π_{k−1}.

We now discuss how to use our algorithms to compute stationary probabilities when the state space S is infinite. The idea is very simple and natural: choose a truncation size K and compute the probabilities of the resulting finite chain, which are used as an approximate solution for the Markov chain with an infinite state space. A number of questions need to be answered when we use the above procedure:

a) Does the above procedure converge? More specifically, let π_k and π_k^{(K)} be, respectively, the stationary probabilities of the infinite Markov chain and the finite Markov chain. Is the following equation true?

    lim_{K→∞} π_k^{(K)} = π_k,    for all k = 0, 1, ....    (16)

b) How should the value of K be chosen?

c) What is the approximation error when we use the value of K chosen in b)?

We provide the answers to questions a) to c) in the following.

The answer to question a): When we use Algorithm 1, we in fact assume that the original transition probability matrix P is truncated into a (K+1) × (K+1) finite matrix. This matrix is obtained by augmenting the last column of the northwest corner of the transition probability matrix P. The validity of (16) has been established by many

researchers, for example Gibson and Seneta [2] and the references therein. Moreover, since the transition probability matrix is upper Hessenberg, the method of augmenting the last column only, or the censoring operation with censoring set {0, 1, ..., K}, gives the same finite matrix. Therefore, this procedure gives the best approximation in the sense that the error sum between the stationary probabilities of the finite Markov chain and those of the original infinite Markov chain is minimal, according to Zhao and Liu [7].

The answer to questions b) and c): The finite Markov chain used for the approximation is indeed the censored Markov chain. Let K_1 and K_2 be two truncation sizes. It follows from Lemma 2 that

    π_k^{(K_1)} = π_k / Σ_{i=0}^{K_1} π_i    and    π_k^{(K_2)} = π_k / Σ_{i=0}^{K_2} π_i.

Also notice that η_k is independent of the truncation size when k is less than the truncation size. So, π_0^{(K_1)} and π_0^{(K_2)} are the only difference. Assuming K_1 < K_2, we then have

    ratio = π_0^{(K_2)} / π_0^{(K_1)} = Σ_{i=0}^{K_1} π_i / Σ_{i=0}^{K_2} π_i.

If we fix K_1 and increase K_2, the ratio will eventually approach Σ_{i=0}^{K_1} π_i. For a precision ε, if 1 − ratio < ε for different chosen values of K_2, then the relative error between the approximate probability and the exact probability is approximately

    error = ( π_k^{(K_1)} − π_k ) / π_k = (1 − ratio) / ratio < ε / ratio.

For some special cases, for example the M/G/1 queue, the error analysis can be accomplished much more easily. We will show this later. We use the above algorithms to compute stationary probabilities for two application models: the M/G/1 queue and the Geom^{X(n)}/Geom(n)/1 queue.

The M/G/1 queue: We use the M/G/1 queue as our first example to show how to use the algorithms presented above to compute the stationary probabilities. Various computational issues are also discussed.
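Before working the examples, the two main recursions can be transcribed directly into code. The sketch below (ours; the paper gives only pseudocode, and the 4-state matrix is a hypothetical example) implements Algorithm 1 and cross-checks it against the θ-recursion of Algorithm 2, as suggested above. For brevity it uses the division form of η_j rather than the subtraction-free series form:

```python
def algorithm1(P):
    """Algorithm 1: gamma/eta recursion, then pi_0 and forward division."""
    K = len(P) - 1
    eta = [P[1][0] / (1.0 - P[0][0])]          # gamma_0 = p_{0,0}
    for j in range(1, K):
        g = P[1][j] + eta[0] * P[0][j]         # tmp = p_{1,j} + eta_0 p_{0,j}
        for i in range(1, j):
            g = P[i + 1][j] + eta[i] * g
        eta.append(P[j + 1][j] / (1.0 - g))
    inv, prod = [1.0], 1.0                     # 1, 1/eta_0, 1/(eta_0 eta_1), ...
    for e in eta:
        prod /= e
        inv.append(prod)
    pi = [1.0 / sum(inv)]
    for k in range(K):
        pi.append(pi[k] / eta[k])
    return pi

def algorithm2(P):
    """Algorithm 2: theta recursion (13)-(14), normalized via (15)."""
    K = len(P) - 1
    theta = [1.0, (1.0 - P[0][0]) / P[1][0]]
    for k in range(1, K):
        s = sum(P[i][k] * theta[i] for i in range(k + 1))
        theta.append((theta[k] - s) / P[k + 1][k])
    pi0 = 1.0 / sum(theta)
    return [t * pi0 for t in theta]

# Hypothetical stochastic upper Hessenberg matrix
P = [[0.4, 0.3, 0.2, 0.1],
     [0.5, 0.2, 0.2, 0.1],
     [0.0, 0.6, 0.3, 0.1],
     [0.0, 0.0, 0.7, 0.3]]
pi1, pi2 = algorithm1(P), algorithm2(P)
diff = max(abs(x - y) for x, y in zip(pi1, pi2))
```

On this example the two algorithms agree to machine precision, which is exactly the cross-check described above.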
Unlike for the GI/M/1 queue, it is well known that the stationary distribution of the imbedded Markov chain of the M/G/1 queue, where B(t) is the service time distribution with mean 1/µ and λ is the arrival rate, does not have the geometric form; however, π_0 can be explicitly determined by the expression

    π_0 = 1 − ρ,

where ρ = λ/µ is the traffic intensity. Since π_0 is the exact solution, all other probabilities computed according to π_k = π_{k−1}/η_{k−1} are also exact. Algorithm 1 is now simplified to:

Algorithm 1':

    π_0 = 1 − ρ;
    γ_0 = a_0;
    η_0 = a_0 / (1 − γ_0)    ( or η_0 = a_0 Σ_{l=0}^∞ γ_0^l );
    for j = 1, 2, ..., K−1,
        tmp = a_j + η_0 a_j;
        γ_j = tmp;
        if j > 1,
            for i = 1, 2, ..., j−1,
                γ_j = a_{j−i} + η_i γ_j;
        η_j = a_0 / (1 − γ_j)    ( or η_j = a_0 Σ_{l=0}^∞ γ_j^l );
    for k = 1, 2, ..., K,
        π_k = π_{k−1} / η_{k−1}.

In the algorithm,

    a_k = ∫_0^∞ ( (λt)^k / k! ) e^{−λt} dB(t).    (17)

We tested the algorithm for deterministic and uniform service times. It computes stationary probabilities for all values of the traffic intensity ρ starting from 0.01. The algorithm shows excellent performance in heavy traffic situations, say ρ ≥ 0.9. η_k converges to a limit, which is consistent with the fact that the M/G/1 queue has a geometric tail. In Table 1, for different heavy traffic conditions, we give the size K at which η_k for k ≥ K has converged to 14 significant digits, together with the value of η_K to 10 significant digits.
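For the deterministic-service case (M/D/1), (17) reduces to Poisson probabilities, and Algorithm 1' can be sketched as follows (our illustration; the values of ρ and K are arbitrary choices, not the paper's):

```python
from math import exp, factorial

def mg1_stationary(a, rho, K):
    """Simplified Algorithm 1' for the M/G/1 embedded chain (our sketch).

    a[k] is the probability of k arrivals during one service, as in (17);
    since pi_0 = 1 - rho is exact, every pi_k is exact up to roundoff.
    """
    eta = [a[0] / (1.0 - a[0])]           # gamma_0 = a_0
    for j in range(1, K):
        g = a[j] + eta[0] * a[j]          # tmp = a_j + eta_0 a_j
        for i in range(1, j):
            g = a[j - i] + eta[i] * g
        eta.append(a[0] / (1.0 - g))
    pi = [1.0 - rho]
    for k in range(1, K + 1):
        pi.append(pi[k - 1] / eta[k - 1])
    return pi

# M/D/1: deterministic service of length 1/mu, so (17) gives Poisson
# probabilities a_k = e^{-rho} rho^k / k! with rho = lam/mu.
rho, K = 0.8, 100
a = [exp(-rho) * rho ** k / factorial(k) for k in range(K + 1)]
pi = mg1_stationary(a, rho, K)
total = sum(pi)
```

With K = 100 the computed probabilities already sum to 1 to high accuracy, reflecting the geometric tail noted above.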

Table 1 (a). The M/D/1 queue. K is the size at which η_k for k ≥ K has converged to 14 significant digits.

Table 1 (b). The M/U/1 queue. K is the size at which η_k for k ≥ K has converged to 14 significant digits.

The Geom^{X(n)}/Geom(n)/1 queue: The motivation for studying the Geom^{X(n)}/Geom(n)/1 queue comes from a problem of inventory control. The flow of goods from the factory or wholesaler through a retail store or a warehouse is an operation involving variable supply and demand. At the end of each day (or week, or other time unit), the retailer checks the inventory of the goods and then decides whether a new order should be made, which is described by a probability. The time between two successive orders is therefore a geometric random variable. The amount X(n) of goods in each order is also a random variable, which further depends on the stock on the day of taking inventory. The retailer may change his selling policy (price, promotion plans, and so forth), which in turn affects the demand for the goods. The retailer's main concerns may include the probability distribution of the inventory, the probability distribution of the demand for the goods, an optimal selling policy, and an optimal policy for making profit. The analysis of these aspects can often be done by means of the techniques of queueing theory. We formulate this inventory model as the following Geom^{X(n)}/Geom(n)/1 queue.

The Geom^{X(n)}/Geom(n)/1 queue is a discrete time queueing model in which both the interarrival times and the service times are independent geometric random variables. Arrivals occur in groups or batches, and the batch size is a random variable depending on the number of customers in the system at the arrival epoch. The service rate depends on the number of customers in the system at the epoch when the service starts. We assume that, within a time interval, the arrival, if any, occurs after the completion of the service, if any.
Consider the number of customers in the system at the time epochs immediately before the possible service completion times. This gives a discrete time Markov chain with state space S = {0, 1, 2, ...} and with its transition probability

matrix of upper Hessenberg form:

        ( a_{0,0}      a_{0,1}                    a_{0,2}                    a_{0,3}                    ... )
        ( µ_1 a_{0,0}  µ_1 a_{0,1} + µ̄_1 a_{1,0}  µ_1 a_{0,2} + µ̄_1 a_{1,1}  µ_1 a_{0,3} + µ̄_1 a_{1,2}  ... )
    P = (              µ_2 a_{1,0}                µ_2 a_{1,1} + µ̄_2 a_{2,0}  µ_2 a_{1,2} + µ̄_2 a_{2,1}  ... )
        (                                         µ_3 a_{2,0}                µ_3 a_{2,1} + µ̄_3 a_{3,0}  ... )
        (                                                                    µ_4 a_{3,0}                ... )
        (                                                                                               ... ),

where µ_n is the probability that the server completes a service in one time unit, given that there are n customers in the system when the service starts, µ̄_n = 1 − µ_n, and a_{n,k} is the probability that k customers arrive during one time unit, given that there are n customers in the system at the arrival epoch. This is an upper Hessenberg matrix without the property of repeating rows. The algorithm can easily be derived from our general algorithm. We use the following values of µ_n and a_{n,k} to find the stationary probabilities and then the mean number of customers in the system. Let

    µ_n = ( n / (n + 1) ) µ    and    a_{n,k} = ( (λ/(n+1))^k / k! ) e^{−λ/(n+1)}.
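The transition matrix above, with these µ_n and a_{n,k}, can be assembled numerically as a sketch (ours; the truncation size K and the rates λ and µ are arbitrary choices, with the last column augmented as discussed in Section 3):

```python
from math import exp, factorial

def build_P(lam, mu, K):
    """Truncated transition matrix of the Geom^{X(n)}/Geom(n)/1 chain.

    Uses mu_n = n/(n+1)*mu and a_{n,k} = (lam/(n+1))^k e^{-lam/(n+1)} / k!,
    and augments the last column so every row sums to one.
    """
    def a(n, k):
        r = lam / (n + 1.0)
        return r ** k * exp(-r) / factorial(k)

    P = [[0.0] * (K + 1) for _ in range(K + 1)]
    for j in range(K):
        P[0][j] = a(0, j)                         # row 0: no service in progress
    for n in range(1, K + 1):
        m = n / (n + 1.0) * mu                    # service-completion probability
        for j in range(n - 1, K):
            entry = m * a(n - 1, j - n + 1)       # completion, then arrivals
            if j >= n:
                entry += (1.0 - m) * a(n, j - n)  # no completion, then arrivals
            P[n][j] = entry
    for i in range(K + 1):                        # augment the last column
        P[i][K] = 1.0 - sum(P[i][:K])
    return P

K = 40
P = build_P(lam=0.3, mu=0.8, K=K)
hessenberg = all(P[i][j] == 0.0
                 for i in range(K + 1) for j in range(K + 1) if i > j + 1)

# Mean number in system via power iteration (any of the paper's algorithms
# would do; power iteration keeps this sketch independent of them)
pi = [1.0 / (K + 1)] * (K + 1)
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(K + 1)) for j in range(K + 1)]
mean_customers = sum(k * p for k, p in enumerate(pi))
```

The structural checks confirm that the truncated matrix is stochastic and upper Hessenberg, so the general algorithm applies directly.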

In Table 2, this mean number is provided for different values of µ and λ.

Table 2. The mean number of customers in the system for the Geom^{X(n)}/Geom(n)/1 queue for different values of µ and λ.

4 Concluding Remarks

We use probabilistic arguments to obtain solutions for the stationary distributions of Markov chains with upper Hessenberg transition probability matrices. Our solution leads to a numerically stable and efficient algorithm. By using this algorithm, the performance of the computations is often improved, including the time complexity, the computer memory required, and the range of values of model parameters which can be used. Finally, we would like to add that the ideas used in this paper can be employed or generalized to analyze other Markov chains, including Markov chains with lower Hessenberg transition matrices and upper and lower Hessenberg-type Markov chains, which include M/G/1 and GI/M/1 type Markov chains as special cases.

References

[1] Freedman, D. (1983) Approximating Countable Markov Chains, 2nd edn, Springer-Verlag, New York.

[2] Gibson, D. and Seneta, E. (1987) Augmented truncations of infinite stochastic matrices. J. Appl. Prob. 24.

[3] Grassmann, W.K. and Heyman, D.P. (1990) Equilibrium distribution of block-structured Markov chains with repeating rows. J. Appl. Prob. 27.

[4] Heyman, D.P. (1991) Approximating the stationary distribution of an infinite stochastic matrix. J. Appl. Prob. 28.

[5] Kemeny, J.G., Snell, J.L. and Knapp, A.W. (1976) Denumerable Markov Chains, 2nd edn, Springer-Verlag, New York.

[6] Kleinrock, L. (1975) Queueing Systems, Volume 1: Theory, John Wiley & Sons, New York.

[7] Zhao, Y.Q. and Liu, D. (1996) The censored Markov chain and the best augmentation. J. Appl. Prob. 33.


Modelling Complex Queuing Situations with Markov Processes Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Model reversibility of a two dimensional reflecting random walk and its application to queueing network

Model reversibility of a two dimensional reflecting random walk and its application to queueing network arxiv:1312.2746v2 [math.pr] 11 Dec 2013 Model reversibility of a two dimensional reflecting random walk and its application to queueing network Masahiro Kobayashi, Masakiyo Miyazawa and Hiroshi Shimizu

More information

A monotonic property of the optimal admission control to an M/M/1 queue under periodic observations with average cost criterion

A monotonic property of the optimal admission control to an M/M/1 queue under periodic observations with average cost criterion A monotonic property of the optimal admission control to an M/M/1 queue under periodic observations with average cost criterion Cao, Jianhua; Nyberg, Christian Published in: Seventeenth Nordic Teletraffic

More information

On Tandem Blocking Queues with a Common Retrial Queue

On Tandem Blocking Queues with a Common Retrial Queue On Tandem Blocking Queues with a Common Retrial Queue K. Avrachenkov U. Yechiali Abstract We consider systems of tandem blocking queues having a common retrial queue. The model represents dynamics of short

More information

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process International Journal of Statistics and Systems ISSN 973-2675 Volume 12, Number 2 (217, pp. 293-31 Research India Publications http://www.ripublication.com Multi Stage Queuing Model in Level Dependent

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Queueing Networks and Insensitivity

Queueing Networks and Insensitivity Lukáš Adam 29. 10. 2012 1 / 40 Table of contents 1 Jackson networks 2 Insensitivity in Erlang s Loss System 3 Quasi-Reversibility and Single-Node Symmetric Queues 4 Quasi-Reversibility in Networks 5 The

More information

Session-Based Queueing Systems

Session-Based Queueing Systems Session-Based Queueing Systems Modelling, Simulation, and Approximation Jeroen Horters Supervisor VU: Sandjai Bhulai Executive Summary Companies often offer services that require multiple steps on the

More information

Exact Simulation of the Stationary Distribution of M/G/c Queues

Exact Simulation of the Stationary Distribution of M/G/c Queues 1/36 Exact Simulation of the Stationary Distribution of M/G/c Queues Professor Karl Sigman Columbia University New York City USA Conference in Honor of Søren Asmussen Monday, August 1, 2011 Sandbjerg Estate

More information

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem Wade Trappe Lecture Overview Network of Queues Introduction Queues in Tandem roduct Form Solutions Burke s Theorem What

More information

Departure Processes of a Tandem Network

Departure Processes of a Tandem Network The 7th International Symposium on perations Research and Its Applications (ISRA 08) Lijiang, China, ctober 31 Novemver 3, 2008 Copyright 2008 RSC & APRC, pp. 98 103 Departure Processes of a Tandem Network

More information

On Tandem Blocking Queues with a Common Retrial Queue

On Tandem Blocking Queues with a Common Retrial Queue On Tandem Blocking Queues with a Common Retrial Queue K. Avrachenkov U. Yechiali Abstract We consider systems of tandem blocking queues having a common retrial queue, for which explicit analytic results

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

DS-GA 1002 Lecture notes 2 Fall Random variables

DS-GA 1002 Lecture notes 2 Fall Random variables DS-GA 12 Lecture notes 2 Fall 216 1 Introduction Random variables Random variables are a fundamental tool in probabilistic modeling. They allow us to model numerical quantities that are uncertain: the

More information

Representation of doubly infinite matrices as non-commutative Laurent series

Representation of doubly infinite matrices as non-commutative Laurent series Spec. Matrices 217; 5:25 257 Research Article Open Access María Ivonne Arenas-Herrera and Luis Verde-Star* Representation of doubly infinite matrices as non-commutative Laurent series https://doi.org/1.1515/spma-217-18

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

QUEUING MODELS AND MARKOV PROCESSES

QUEUING MODELS AND MARKOV PROCESSES QUEUING MODELS AND MARKOV ROCESSES Queues form when customer demand for a service cannot be met immediately. They occur because of fluctuations in demand levels so that models of queuing are intrinsically

More information

Optimal Control of an Inventory System with Joint Production and Pricing Decisions

Optimal Control of an Inventory System with Joint Production and Pricing Decisions Optimal Control of an Inventory System with Joint Production and Pricing Decisions Ping Cao, Jingui Xie Abstract In this study, we consider a stochastic inventory system in which the objective of the manufacturer

More information

Relating Polling Models with Zero and Nonzero Switchover Times

Relating Polling Models with Zero and Nonzero Switchover Times Relating Polling Models with Zero and Nonzero Switchover Times Mandyam M. Srinivasan Management Science Program College of Business Administration The University of Tennessee Knoxville, TN 37996-0562 Shun-Chen

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

Control of Fork-Join Networks in Heavy-Traffic

Control of Fork-Join Networks in Heavy-Traffic in Heavy-Traffic Asaf Zviran Based on MSc work under the guidance of Rami Atar (Technion) and Avishai Mandelbaum (Technion) Industrial Engineering and Management Technion June 2010 Introduction Network

More information

2905 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES

2905 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES 295 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES 16 Queueing Systems with Two Types of Customers In this section, we discuss queueing systems with two types of customers.

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Queuing Theory. Richard Lockhart. Simon Fraser University. STAT 870 Summer 2011

Queuing Theory. Richard Lockhart. Simon Fraser University. STAT 870 Summer 2011 Queuing Theory Richard Lockhart Simon Fraser University STAT 870 Summer 2011 Richard Lockhart (Simon Fraser University) Queuing Theory STAT 870 Summer 2011 1 / 15 Purposes of Today s Lecture Describe general

More information

Stochastic Models. Edited by D.P. Heyman Bellcore. MJ. Sobel State University of New York at Stony Brook

Stochastic Models. Edited by D.P. Heyman Bellcore. MJ. Sobel State University of New York at Stony Brook Stochastic Models Edited by D.P. Heyman Bellcore MJ. Sobel State University of New York at Stony Brook 1990 NORTH-HOLLAND AMSTERDAM NEW YORK OXFORD TOKYO Contents Preface CHARTER 1 Point Processes R.F.

More information

Optimism in the Face of Uncertainty Should be Refutable

Optimism in the Face of Uncertainty Should be Refutable Optimism in the Face of Uncertainty Should be Refutable Ronald ORTNER Montanuniversität Leoben Department Mathematik und Informationstechnolgie Franz-Josef-Strasse 18, 8700 Leoben, Austria, Phone number:

More information

1 Markov decision processes

1 Markov decision processes 2.997 Decision-Making in Large-Scale Systems February 4 MI, Spring 2004 Handout #1 Lecture Note 1 1 Markov decision processes In this class we will study discrete-time stochastic systems. We can describe

More information

Waiting time characteristics in cyclic queues

Waiting time characteristics in cyclic queues Waiting time characteristics in cyclic queues Sanne R. Smits, Ivo Adan and Ton G. de Kok April 16, 2003 Abstract In this paper we study a single-server queue with FIFO service and cyclic interarrival and

More information

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL First published in Journal of Applied Probability 43(1) c 2006 Applied Probability Trust STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL LASSE LESKELÄ, Helsinki

More information

A Joining Shortest Queue with MAP Inputs

A Joining Shortest Queue with MAP Inputs The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp. 25 32 A Joining Shortest Queue with

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle  holds various files of this Leiden University dissertation Cover Page The handle http://hdl.handle.net/1887/39637 holds various files of this Leiden University dissertation Author: Smit, Laurens Title: Steady-state analysis of large scale systems : the successive

More information

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST J. Appl. Prob. 45, 568 574 (28) Printed in England Applied Probability Trust 28 RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST EROL A. PEKÖZ, Boston University SHELDON

More information

SIMILAR MARKOV CHAINS

SIMILAR MARKOV CHAINS SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk ANSAPW University of Queensland 8-11 July, 2013 1 Outline (I) Fluid

More information

Technical Note: Capacitated Assortment Optimization under the Multinomial Logit Model with Nested Consideration Sets

Technical Note: Capacitated Assortment Optimization under the Multinomial Logit Model with Nested Consideration Sets Technical Note: Capacitated Assortment Optimization under the Multinomial Logit Model with Nested Consideration Sets Jacob Feldman Olin Business School, Washington University, St. Louis, MO 63130, USA

More information

High-dimensional Markov Chain Models for Categorical Data Sequences with Applications Wai-Ki CHING AMACL, Department of Mathematics HKU 19 March 2013

High-dimensional Markov Chain Models for Categorical Data Sequences with Applications Wai-Ki CHING AMACL, Department of Mathematics HKU 19 March 2013 High-dimensional Markov Chain Models for Categorical Data Sequences with Applications Wai-Ki CHING AMACL, Department of Mathematics HKU 19 March 2013 Abstract: Markov chains are popular models for a modelling

More information

MDP Preliminaries. Nan Jiang. February 10, 2019

MDP Preliminaries. Nan Jiang. February 10, 2019 MDP Preliminaries Nan Jiang February 10, 2019 1 Markov Decision Processes In reinforcement learning, the interactions between the agent and the environment are often described by a Markov Decision Process

More information

Lecture 10: Semi-Markov Type Processes

Lecture 10: Semi-Markov Type Processes Lecture 1: Semi-Markov Type Processes 1. Semi-Markov processes (SMP) 1.1 Definition of SMP 1.2 Transition probabilities for SMP 1.3 Hitting times and semi-markov renewal equations 2. Processes with semi-markov

More information

Chapter 1. Introduction. 1.1 Stochastic process

Chapter 1. Introduction. 1.1 Stochastic process Chapter 1 Introduction Process is a phenomenon that takes place in time. In many practical situations, the result of a process at any time may not be certain. Such a process is called a stochastic process.

More information

On Successive Lumping of Large Scale Systems

On Successive Lumping of Large Scale Systems On Successive Lumping of Large Scale Systems Laurens Smit Rutgers University Ph.D. Dissertation supervised by Michael Katehakis, Rutgers University and Flora Spieksma, Leiden University April 18, 2014

More information

The discrete-time Geom/G/1 queue with multiple adaptive vacations and. setup/closedown times

The discrete-time Geom/G/1 queue with multiple adaptive vacations and. setup/closedown times ISSN 1750-9653, England, UK International Journal of Management Science and Engineering Management Vol. 2 (2007) No. 4, pp. 289-296 The discrete-time Geom/G/1 queue with multiple adaptive vacations and

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

Series Expansions in Queues with Server

Series Expansions in Queues with Server Series Expansions in Queues with Server Vacation Fazia Rahmoune and Djamil Aïssani Abstract This paper provides series expansions of the stationary distribution of finite Markov chains. The work presented

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

University of Twente. Faculty of Mathematical Sciences. The deviation matrix of a continuous-time Markov chain

University of Twente. Faculty of Mathematical Sciences. The deviation matrix of a continuous-time Markov chain Faculty of Mathematical Sciences University of Twente University for Technical and Social Sciences P.O. Box 217 75 AE Enschede The Netherlands Phone: +31-53-48934 Fax: +31-53-4893114 Email: memo@math.utwente.nl

More information

A.Piunovskiy. University of Liverpool Fluid Approximation to Controlled Markov. Chains with Local Transitions. A.Piunovskiy.

A.Piunovskiy. University of Liverpool Fluid Approximation to Controlled Markov. Chains with Local Transitions. A.Piunovskiy. University of Liverpool piunov@liv.ac.uk The Markov Decision Process under consideration is defined by the following elements X = {0, 1, 2,...} is the state space; A is the action space (Borel); p(z x,

More information

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES THE ROYAL STATISTICAL SOCIETY 9 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES The Society provides these solutions to assist candidates preparing

More information

Simplex Algorithm for Countable-state Discounted Markov Decision Processes

Simplex Algorithm for Countable-state Discounted Markov Decision Processes Simplex Algorithm for Countable-state Discounted Markov Decision Processes Ilbin Lee Marina A. Epelman H. Edwin Romeijn Robert L. Smith November 16, 2014 Abstract We consider discounted Markov Decision

More information

Zero-sum square matrices

Zero-sum square matrices Zero-sum square matrices Paul Balister Yair Caro Cecil Rousseau Raphael Yuster Abstract Let A be a matrix over the integers, and let p be a positive integer. A submatrix B of A is zero-sum mod p if the

More information

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems  M/M/1  M/M/m  M/M/1/K Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals

More information

A LINEAR BINOMIAL RECURRENCE AND THE BELL NUMBERS AND POLYNOMIALS

A LINEAR BINOMIAL RECURRENCE AND THE BELL NUMBERS AND POLYNOMIALS Applicable Analysis and Discrete Mathematics, 1 (27, 371 385. Available electronically at http://pefmath.etf.bg.ac.yu A LINEAR BINOMIAL RECURRENCE AND THE BELL NUMBERS AND POLYNOMIALS H. W. Gould, Jocelyn

More information

LIGHT-TAILED ASYMPTOTICS OF STATIONARY PROBABILITY VECTORS OF MARKOV CHAINS OF GI/G/1 TYPE

LIGHT-TAILED ASYMPTOTICS OF STATIONARY PROBABILITY VECTORS OF MARKOV CHAINS OF GI/G/1 TYPE Adv. Appl. Prob. 37, 1075 1093 (2005) Printed in Northern Ireland Applied Probability Trust 2005 LIGHT-TAILED ASYMPTOTICS OF STATIONARY PROBABILITY VECTORS OF MARKOV CHAINS OF GI/G/1 TYPE QUAN-LIN LI,

More information

ISyE 6761 (Fall 2016) Stochastic Processes I

ISyE 6761 (Fall 2016) Stochastic Processes I Fall 216 TABLE OF CONTENTS ISyE 6761 (Fall 216) Stochastic Processes I Prof. H. Ayhan Georgia Institute of Technology L A TEXer: W. KONG http://wwong.github.io Last Revision: May 25, 217 Table of Contents

More information

Stability of the two queue system

Stability of the two queue system Stability of the two queue system Iain M. MacPhee and Lisa J. Müller University of Durham Department of Mathematical Science Durham, DH1 3LE, UK (e-mail: i.m.macphee@durham.ac.uk, l.j.muller@durham.ac.uk)

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES JEREMY J. BECNEL Abstract. We examine the main topologies wea, strong, and inductive placed on the dual of a countably-normed space

More information

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010 Exercises Stochastic Performance Modelling Hamilton Institute, Summer Instruction Exercise Let X be a non-negative random variable with E[X ]

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Analysis of Software Artifacts

Analysis of Software Artifacts Analysis of Software Artifacts System Performance I Shu-Ngai Yeung (with edits by Jeannette Wing) Department of Statistics Carnegie Mellon University Pittsburgh, PA 15213 2001 by Carnegie Mellon University

More information

Topic 6: Projected Dynamical Systems

Topic 6: Projected Dynamical Systems Topic 6: Projected Dynamical Systems John F. Smith Memorial Professor and Director Virtual Center for Supernetworks Isenberg School of Management University of Massachusetts Amherst, Massachusetts 01003

More information

BIRTH DEATH PROCESSES AND QUEUEING SYSTEMS

BIRTH DEATH PROCESSES AND QUEUEING SYSTEMS BIRTH DEATH PROCESSES AND QUEUEING SYSTEMS Andrea Bobbio Anno Accademico 999-2000 Queueing Systems 2 Notation for Queueing Systems /λ mean time between arrivals S = /µ ρ = λ/µ N mean service time traffic

More information

Markov chains. Randomness and Computation. Markov chains. Markov processes

Markov chains. Randomness and Computation. Markov chains. Markov processes Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space

More information

Chapter 6 Queueing Models. Banks, Carson, Nelson & Nicol Discrete-Event System Simulation

Chapter 6 Queueing Models. Banks, Carson, Nelson & Nicol Discrete-Event System Simulation Chapter 6 Queueing Models Banks, Carson, Nelson & Nicol Discrete-Event System Simulation Purpose Simulation is often used in the analysis of queueing models. A simple but typical queueing model: Queueing

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Total Expected Discounted Reward MDPs: Existence of Optimal Policies

Total Expected Discounted Reward MDPs: Existence of Optimal Policies Total Expected Discounted Reward MDPs: Existence of Optimal Policies Eugene A. Feinberg Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony Brook, NY 11794-3600

More information

Inventory Ordering Control for a Retrial Service Facility System Semi- MDP

Inventory Ordering Control for a Retrial Service Facility System Semi- MDP International Journal of Engineering Science Invention (IJESI) ISS (Online): 239 6734, ISS (Print): 239 6726 Volume 7 Issue 6 Ver I June 208 PP 4-20 Inventory Ordering Control for a Retrial Service Facility

More information

A New Look at Matrix Analytic Methods

A New Look at Matrix Analytic Methods Clemson University TigerPrints All Dissertations Dissertations 8-216 A New Look at Matrix Analytic Methods Jason Joyner Clemson University Follow this and additional works at: https://tigerprints.clemson.edu/all_dissertations

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE. By Mogens Bladt National University of Mexico

THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE. By Mogens Bladt National University of Mexico The Annals of Applied Probability 1996, Vol. 6, No. 3, 766 777 THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE By Mogens Bladt National University of Mexico In this paper we consider

More information

M/M/1 Retrial Queueing System with Negative. Arrival under Erlang-K Service by Matrix. Geometric Method

M/M/1 Retrial Queueing System with Negative. Arrival under Erlang-K Service by Matrix. Geometric Method Applied Mathematical Sciences, Vol. 4, 21, no. 48, 2355-2367 M/M/1 Retrial Queueing System with Negative Arrival under Erlang-K Service by Matrix Geometric Method G. Ayyappan Pondicherry Engineering College,

More information

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis TCOM 50: Networking Theory & Fundamentals Lecture 6 February 9, 003 Prof. Yannis A. Korilis 6- Topics Time-Reversal of Markov Chains Reversibility Truncating a Reversible Markov Chain Burke s Theorem Queues

More information

Other properties of M M 1

Other properties of M M 1 Other properties of M M 1 Přemysl Bejda premyslbejda@gmail.com 2012 Contents 1 Reflected Lévy Process 2 Time dependent properties of M M 1 3 Waiting times and queue disciplines in M M 1 Contents 1 Reflected

More information

Dynamic Control of a Tandem Queueing System with Abandonments

Dynamic Control of a Tandem Queueing System with Abandonments Dynamic Control of a Tandem Queueing System with Abandonments Gabriel Zayas-Cabán 1 Jungui Xie 2 Linda V. Green 3 Mark E. Lewis 1 1 Cornell University Ithaca, NY 2 University of Science and Technology

More information