Approximating Deterministic Changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c Queueing Models

Size: px
Start display at page:

Download "Approximating Deterministic Changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c Queueing Models"

Transcription

1 Approximating Deterministic Changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c Queueing Models Aditya Umesh Kulkarni Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Industrial and Systems Engineering Michael R. Taaffe, Chair Raghu Pasupathy Ebru K. Bish May 25, 212 Blacksburg, Virginia Keywords: Queueing, phase-type, time-varying queues, Polya Eggenberger Copyright 212, Aditya Umesh Kulkarni

2 Approximating Deterministic Changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c Queueing Models Aditya Umesh Kulkarni (ABSTRACT) A deterministic change to a time-varying queueing model is described as either changing the number of entities, the queue capacity, or the number of servers in the system at selected times. We use a surrogate distribution for N(t), the number of entities in the system at time t, to approximate deterministic changes to the Ph(t)/Ph(t)/1/c and the Ph(t)/M(t)/s/c queueing models. We develop a solution technique to minimize the number of state probabilities to be approximated.

3 Dedication This work is dedicated to my fiancé, Ashima Athri, without whose constant encouragement and motivation it would not have been possible, and to my parents, Umesh Kulkarni and Tara Umesh, whose sole purpose in life has been my education. iii

4 Acknowledgments I would like to thank Dr. Michael R. Taaffe for his constant support, encouragement, his ability to put his faith in me, and for gracefully controlling my youthful audacity. Special thanks to my committee members, Dr. Raghu Pasupathy and Dr. Ebru K. Bish, who supported me throughout my thesis work. iv

5 Contents 1 Introduction Motivation M(t)/M(t)/1/c example Polya Eggenberger distribution Deterministic addition to N(t) Deterministic deletion from N(t) Deterministic increases in queue capacity c Deterministic decrease in queue capacity c Updating E[N(t), N(t) β] Algorithm SSMQ Conclusion v

6 2 Deterministic changes to the Ph(t)/Ph(t)/1/c queueing model Deterministic addition to N(t) Deterministic deletion from N(t) Deterministic increase in queue capacity c Deterministic decrease in queue capacity c Updating idle state KFEs and PMDEs Algorithm SSFC Conclusion Deterministic changes to the Ph(t)/M(t)/s/c queueing model Deterministic addition to N(t) Deterministic deletion from N(t) Deterministic increase in queue capacity c s Deterministic decrease in queue capacity c s Deterministic addition to s Deterministic deletion from s Updating required values Algorithm MSFC vi

7 3.9 Conclusion Tests and Results Ph(t)/Ph(t)/1/c queueing model results Ph(t)/M(t)/s/c queueing model results Conclusion 85 Bibliography 85 Appix I 88 Appix II 18 Appix III 259 vii

8 List of Figures 1.1 Approximating E[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 5) Approximating StdDev[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 5) Approximating E[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 15) Approximating StdDev[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 15) Approximating E[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 5) Approximating StdDev[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 5) Approximating E[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 15) Approximating StdDev[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 15) Simple decision tree for deterministic changes to M(t)/M(t)/1/c queueing model Decision tree for deterministic changes to Ph(t)/Ph(t)/1/c queueing model Decision tree for Ω 1,i, for the Ph(t)/M(t)/s/c queueing model viii

9 3.2 Decision tree for Ω 2,i, for the Ph(t)/M(t)/s/c queueing model E[N(t)] vs. t and errors for Case StdDev[N(t)] vs. t and errors for Case E[N(t)] vs. t and errors for Case SD[N(t)] vs. t and errors for Case E[N(t)] vs. t and errors SD[N(t)] vs. t and errors E[N(t)] vs. t and errors SD[N(t)] vs. t and errors ix

10 List of Tables 1.1 Number of state probabilities approximated using Method Ph(t)/Ph(t)/1/c queueing model information E[N(t)] related errors for the Ph(t)/Ph(t)/1/c queueing model StdDev[N(t)] related errors for the Ph(t)/Ph(t)/1/c queueing model Ph(t)/M(t)/s/c queueing model information E[N(t)] related errors for the Ph(t)/M(t)/s/c queueing model StdDev[N(t)] related errors for the Ph(t)/M(t)/s/c queueing model x

11 Chapter 1 Introduction The state probabilities for a time-varying Phase-type queueing model can be obtained by numerically integrating the full set of Kolmogorov forward equations (KFEs) for the queueing model. Quantitative measures, such as the expectation and variance of the number of entities in the system, are computed from the state probabilities obtained via numerical integration of KFEs. Deterministic change to a time-varying Phase-type queueing model is defined as deterministically changing the number of entities in the system, or the number of servers, or the queue capacity, at selected times. Deterministic changes made to the system at selected times causes an instantaneous change in the state probabilities, at these times. This requires the KFEs to be updated at the times of deterministic changes to the system. Mapping the state of a time-varying Phase-type queueing model, that undergoes deterministic changes, with respect to time, by using the KFEs, requires computational effort. The computational effort increases as the number of KFEs increase. We ext the work of Taaffe and Ong 1

12 2 [1, 8] and approximate the p th moment of N(t), where N(t) is a random variable that denotes the number of entities in the system at time t, when deterministic changes are made to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c queueing models. 1.1 Motivation Taaffe and Ong [1, 8] showed, that the p th moment of N(t) is well approximated using a state-space partitioning and surrogate distribution approximation (SDA) approach for the Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c queueing model, where the time between arrivals has a continuous positive valued distribution, the service time has a continuous positive valued distribution, the number of servers for each model is constant in the time interval of interest, and the queue capacity for each model is constant in the time interval of interest. Such a scenario is an extreme case. The general case is where the time between arrivals and service time have indepent continuous positive valued distribution, and deterministic changes are made to the number of entities in the system, the number of servers in the system, or the queue capacity of the system, at selected times. We developed software to approximate p th moment of N(t) for the general case. Figure 1.1 and Figure 1.2 show the approximation and associated absolute error for E[N(t)] and StdDev[N(t)], respectively, for the Ph(t)/Ph(t)/1/c queueing model, where the time between arrivals and service time are equal to zero and deterministic changes are made to the system at selected times in the time interval of interest. Figure 1.3 and Figure 1.4 show the approximation and associated

13 3 absolute error for E[N(t)] and StdDev[N(t)], respectively, for the Ph(t)/Ph(t)/1/c queueing model, where the time between arrivals and service time have indepent time-varying Phase-type distribution, and deterministic changes are made to the system at selected times in the time interval of interest. In the figures, the solid line denotes the actual value vs. t and the dotted line denotes the approximated value vs. t, where t lies in the time interval of interest. Absolute error is defined as: Absolute Error = Actual value - Approximated value. Figure 1.5 and Figure 1.6 show the approximation and associated absolute error for E[N(t)] and StdDev[N(t)], respectively, for the Ph(t)/M(t)/s/c queueing model, where the time between arrivals and service time are equal to zero, and deterministic changes are made to the system at selected times in the time interval of interest. Figure 1.7 and Figure 1.8 show the approximation and associated absolute error for E[N(t)] and StdDev[N(t)], respectively, for the Ph(t)/M(t)/s/c queueing model, where the time between arrivals has a time-varying Phase-type distribution, service time has a time-varying exponential distribution, and deterministic changes are made to the system at selected times in the time interval of interest. The SDA method is now known as the Closure Approximation [4]. We refer to it as the SDA method for reasons mentioned later. To develop intuition, we study deterministic changes made to the M(t)/M(t)/1/c queueing model. We present a mapping scheme for the state probabilities, when deterministic changes are made to the M(t)/M(t)/1/c queueing model, and use it to develop a solution technique.

14 4 8 Figure 1.1: Approximating E[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 5) E[N(t)] vs. t x 1 14 Absolute Error E[N(t)] vs. t

15 5 Figure 1.2: Approximating StdDev[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 5) 2.5 x 1 6 StdDev[N(t)] vs. t x 1 6 Absolute Error StdDev[N(t)] vs. t

16 Figure 1.3: Approximating E[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 15) E[N(t)] vs. t.5 Absolute Error E[N(t)] vs. t

17 7 Figure 1.4: Approximating StdDev[N(t)], Ph(t)/Ph(t)/1/c queueing model (Test Case 15) 8 StdDev[N(t)] vs. t Absolute Error StdDev[N(t)] vs. t

18 8 3 Figure 1.5: Approximating E[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 5) E[N(t)] vs. t x 1 15 Absolute Error E[N(t)] vs. t 3 2 1

19 9 Figure 1.6: Approximating StdDev[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 5) 5 x 1 7 StdDev[N(t)] vs. t x 1 7 Absolute Error StdDev[N(t)] vs. t

20 1 2 Figure 1.7: Approximating E[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 15) E[N(t)] vs. t Absolute Error E[N(t)] vs. t

21 11 Figure 1.8: Approximating StdDev[N(t)], Ph(t)/M(t)/s/c queueing model (Test Case 15) StdDev[N(t)] vs. t.35 Absolute Error StdDev[N(t)] vs. t M(t)/M(t)/1/c example The M(t)/M(t)/1/c queueing model is a specific case of the Ph(t)/Ph(t)/1/c queueing model, where the number of arrival phases and the number of service phases are both equal to one.

22 12 See Neuts [3], Johnson and Taaffe [1, 2], for a deeper understanding of the Phase-type distribution. Let λ(t) denote the arrival rate at time t. The service process has a timedepent exponential distribution, with rate µ(t). Let P i (t) denote P[N(t) = i], and let P i (t) denote dp[n(t) = i]/dt. Let E[N p (t)] denote the p th moment of N(t) at time t and let E [N p (t)] denote de[n p (t)]/dt. The Kolmogorov forward equations for the M(t)/M(t)/1/c queueing model are: P (t) = λ(t)p (t) + µ(t)p 1 (t) P i (t) = (λ(t) + µ(t))p i (t) + λ(t)p i 1 (t) + µ(t)p i+1 (t) for i = 1, 2,..., c 1 P c(t) = µ(t)p c (t) + λ(t)p c 1 (t). We obtain the value of E[N p (t)], where t lies in the time interval of interest, by numerically integrating the p th moment differential equation given by equation (1.1), over the time interval of interest. E [N p (t)] = c n= n p P i (t) (1.1) With simple algebra and index shifting we can simplify the right-hand side (RHS) of equation (1.1), such that the RHS of equation (1.1) contains only state probabilities and E[N q (t)], where q p. Theorem 1.1 p 1 ( ) E [N p (t)] = λ(t) p (E[N (q) (t)] c q P c (t) ) q= q p 1 ( ) +µ(t) p ( 1) p q E[N (q) (t)] ( 1) p P (t). (1.2) q q=

23 13 See Appix I for proof of equation (1.2). The SDA approach described by Taaffe and Ong [1, 8] involves using the Polya Eggenberger random variable, X t, as a surrogate random variable for N(t). We match the first two moments of N(t) to the first two moments of X t, which has a Polya Eggenberger distribution with parameters c, θ t and γ t (PE(c, θ t, γ t )). Once the parameters θ t and γ t are determined, we approximate the state probabilities P c (t) and P (t) by using P[X t = c] and P[X t = ], where X t PE(c, θ t, γ t ). The approximated probabilities are then used to close equation (1.2). Note, for the M(t)/M(t)/1/c queueing model, we numerically integrate at least the first and the second moment differential equations of N(t). The first and second moment differential equations for M(t)/M(t)/1/c queueing model are: E [N(t)] = λ(t)[1 P c (t)] µ(t)[1 P (t)] (1.3) E [N 2 (t)] = λ(t)[2e[n (1) (t) + 1 (2c + 1)P c (t)] µ(t)[2e[n (1) (t)] (1 P (t))]. (1.4) We obtain the approximation for E[N(t)] and E[N 2 (t)] by numerically integrating the moment differential equations (1.3) and (1.4), and using the approximate values of the state probabilities P (t) and P c (t) Polya Eggenberger distribution The Polya Eggenberger distribution is a special case of a Polya Urn Model. Let there be b black balls and r red balls in an urn. Define a trial as randomly choosing a ball from the urn,

24 14 noting the color of the ball, and returning the ball to the urn along with s balls of the same color. Define a success as choosing a black ball. Let X be a random variable that denotes the number of successes in n trails. The probability mass function (p.m.f) of X is P[X = i] = ( ) n i i 1 j= n i 1 (θ + jγ) j= n 1 j= (1 + jγ) (1 θ + jγ) where θ = b b + r and γ = s b + r. Thus X has a Polya Eggenberger distribution with the parameters n, γ and θ. The first two moments of X are: E[X] = cθ E[X 2 ] = cθ(c(θ + γ) + (1 θ)). (1 + γ) We match the first two moments of N(t) to the first two moments of X and evaluate θ and γ as: θ = E[N(t)] c γ = E[N(t)](E[N(t)] + (1 θ)) E[N 2 (t)]. E[N 2 (t)] ce[n(t)] P (t) is approximated by P[X = ] and P c (t) is approximated by P[X = c]. We augment this standard definition of the Polya Eggenberger distribution for the case of Var[X] = and E[X], n. If Var[X] =, then P[X = E[X]] = 1.

25 Deterministic addition to N(t) Let k (where k is a positive integer) entities be deterministically added to the system at time t ɛ, where t ɛ lies in the time interval of interest. Let the system capacity at time t ɛ be c. The system capacity, c, is finite, and we assume that deterministic additions cannot increase the number of entities in the system beyond c. Thus if k c, then k = c. Let P i (t ɛ ) denote P[N(t ɛ ) = i] and P i (t + ɛ ) denote P[N(t + ɛ ) = i]. The mapping scheme for state probabilities at time t ɛ is: P (t + ɛ ) =. P k 1 (t + ɛ ) = P k (t + ɛ ) = P (t ɛ ). P c 1 (t + ɛ ) = P c k 1 (t ɛ ) P c (t + ɛ ) = c i=c k P i (t ɛ ). The mapping scheme presented above is used to update the KFEs. This instantaneous change in the state probabilities of N(t ɛ ), leads to an instantaneous change in the moments of N(t ɛ ). The p th moment differential equation of N(t ɛ ) is updated. We update the value of E[N p (t ɛ )] as follows:

26 16 E[N p (t + ɛ )] = = = = c i p P i (t + ɛ ) i= c i p P i (t + ɛ ) i=k c i p P i k (t ɛ ) + c c p P j (t ɛ ) i=k j=c k+1 c k (i + k) p P i (t ɛ ) + c c p P j (t ɛ ). (1.5) i= j=c k+1 There are two methods to evaluate equation (1.5) without numerically integrating the KFEs to obtain the values of state probabilities at time t ɛ. For the first method we approximate the state probabilities of N(t ɛ ) with X t ɛ, where X t ɛ PE(c, θ t ɛ, γ t ɛ ). For the second method, observe that, equation (1.5) can also be written as: E[N p (t + ɛ )] = p q= ( ) p (k) p q E[N q (t ɛ ), N(t ɛ ) c k] q +c p E[N (t ɛ ), N(t ɛ ) c k + 1]. (1.6) The term E[N q (t ɛ ), N(t ɛ ) c k] is a partial-moment of N(t ɛ ), and its value is obtained by numerically integrating the partial-moment differential equations de[n q (t), N(t) c k]/dt, for q {,..., p}, over the time interval of interest. The name closure approximation suggests that we are closing differential equations by approximating state probabilities. In order to evaluate (1.5), by Method 1, we approximate the state probabilities of N(t ɛ ) with X t ɛ PE(c, θ t ɛ, γ t ɛ ). Hence we refer to the method of matching moments of N(t) to a surrogate random variable X t, Polya Eggenberger with three parameters, as the SDA method.

27 17 Method 1: Approximate state probabilities To minimize the number of approximated state probabilities, we use the values of E[N q (t ɛ )], where q {1,..., p}. E[N p (t + ɛ )] = = or E[N p (t + ɛ )] = = c (i + k) p P i (t ɛ ) i= + c [c p (j + k) p ] P j (t ɛ ) j=c k+1 ( ) p p (k) p q E[N q (t ɛ )] q= q + c [c p (j + k) p ] P j (t ɛ ) when k c k + 1 (1.7) j=c k+1 c k c [(i + k) p c p ] P i (t ɛ ) + c p P j (t ɛ ) i= j= c k i= [(i + k) p c p ] P i (t ɛ ) + c p when k > c k + 1. (1.8) Using Method 1 the maximum number of approximated state probabilities is (c + 1)/2. Method 2: Partial-Moment Differential Equations Let E [N p (t) β] denote de[n p (t) β]/dt, where < β < c. Theorem 1.2 E [N p (t) β] = δ β, λ(t) ( δ p, ( P (t)) + δ p, ) p 1 ( ) +δ β, δ β,c λ(t) p E[N q (t), N(t) β] (β + 1) p P β (t) q= q p 1 ( ) p p 1 +δ β,c λ(t) E[N (q) (t)] c q P c (t) q= q q= +δ β, µ(t) ( δ p, P 1 (t) + δ p, )

28 p 1 +δ β, µ(t) q= ( ) p ( 1) p q E[N q (t), N(t) β] q +δ β, µ(t) ( δ β,c β p P β+1 (t) δ p, ( 1) p P (t) ) (1.9) 18 where δ a,b = 1 if a = b if a b δ a,b = 1 δ a,b < β < c. Refer to Appix I for proof of Theorem 1.2. Numerical integration of equation (1.9) requires the values of P (t), P c (t), P β (t) and P β+1 (t). Note, by using the SDA approach we approximate P (t) and P c (t) in order to numerically integrate the first and second moment differential equations of N(t). To approximate P β (t), define a random variable W t such that W t PE(β, θ t1, γ t1 ). Match the conditional moments E[N(t) N(t) β] and E[N 2 (t) N(t) β] to the first and second moment of W t, respectively, and evaluate θ t1 and γ t1. The approximate value of P β (t) is given by P[W t = β]. To approximate P β+1 (t), define a random variable Y t such that Y t PE(c β 1, θ t2, γ t2 ). Match E[N(t) N(t) > β] (β + 1) and E[N 2 (t) N(t) > β] 2(β + 1)(E[N(t) N(t) > β] (β + 1)) (β + 1) 2 to the first and second moment of Y t, respectively. The approximate value of P β+1 (t) is given by P[Y t = ].

29 Deterministic deletion from N(t) Let k (where k is a positive integer) entities be deterministically deleted from the system at time t ɛ, where t ɛ lies in the time interval of interest. Let the system capacity at time t ɛ be c. There can be at most c entities in the system at time t ɛ. If k c, then k = c. Let P i (t ɛ ) denote P[N(t ɛ ) = i] and P i (t + ɛ ) denote P[N(t + ɛ ) = i]. The mapping scheme for state probabilities at time t ɛ is: P c (t + ɛ ) =. P c k (t + ɛ ) = P c (t ɛ ). P 1 (t + ɛ ) = P k+1 (t ɛ ) P (t + ɛ ) = k P i (t ɛ ). i= At time t + ɛ the p th moment of N(t + ɛ ) is E[N p (t + ɛ )] = = c i p P i (t + ɛ ) i= c k i p P i (t + ɛ ) i= k = δ p, P i (t ɛ ) + i p P i+k (t ɛ ) i= k 1 = δ p, i= k 1 = δ p, i= P i (t ɛ ) + P i (t ɛ ) + c k i=1 c k i= c i=k i p P i+k (t ɛ ) (i k) p P i (t ɛ ). (1.1)

30 2 Equation (1.1), can be approximated by either Method 1 or Method 2. Method 1 k 1 E[N p (t + c ɛ )] = δ p, P i (t ɛ ) + (i k) p P i (t ɛ ) when k > c k + 1 (1.11) i= i=k or k 1 E[N p (t + ɛ )] = δ p, i= k 1 i= k 1 = δ p, i= k 1 i= c P i (t ɛ ) + (i k) p P i+k (t ɛ ) i= (i k) p P i (t ɛ ) P i (t ɛ ) + p q= ( ) p ( k) p q E[N q (t ɛ )] q (i k) p P i (t ɛ ) when k c k + 1. (1.12) Method 2 k 1 E[N p (t + c ɛ )] = δ p, P i (t ɛ ) + (i k) p P i (t ɛ ) i= i=k k 1 c = δ p, P i (t ɛ ) + (i k) p P i (t ɛ ) i= i= k 1 (i k) p P i (t ɛ ) i= p = δ p, E[N (t ɛ ), N(t ɛ ) k 1] + q= p q= ( ) p ( k) p q E[N q (t ɛ )] q ( ) p ( k) p q E[N q (t ɛ ), N(t ɛ ) k 1] (1.13) q Next consider deterministic changes made to the queue capacity c 1.

31 Deterministic increases in queue capacity c 1 Let the system capacity at time t ɛ be c. Let the queue capacity, c 1, be increased deterministically by k (where k is a positive integer) at time t ɛ. Let P i (t ɛ ) denote P[N(t ɛ ) = i] and P i (t + ɛ ) denote P[N(t + ɛ ) = i]. The mapping scheme for state probabilities at time t ɛ is: P (t + ɛ ) = P (t ɛ ). P c (t + ɛ ) = P c (t ɛ ) P c+1 (t + ɛ ) =. P c+k (t + ɛ ) =. At time t + ɛ the p th moment of N(t + ɛ ) is given by: E[N p (t + ɛ )] = = c+k i= c i= i p P i (t + ɛ ) i p P i (t ɛ ) = E[N p (t ɛ )]. (1.14) A deterministic increase in queue capacity c 1 creates new states for the system and the state probability values, for these new states, are all equal to zero. Deterministic increase in queue capacity does not change the probability values of the existing states.

32 Deterministic decrease in queue capacity c 1 Let the system capacity at time t ɛ be c. Let the queue capacity c 1, where c 1, be decreased deterministically by k (where k is a positive integer) at time t ɛ. The queue capacity c 1 is finite. If k > c 1, then k = c 1. Let P i (t ɛ ) denote P[N(t ɛ ) = i] and P i (t + ɛ ) denote P[N(t + ɛ ) = i]. The mapping scheme for state probabilities at time t ɛ is: P c (t + ɛ ) =. P c k+1 (t + ɛ ) = P c k (t + ɛ ) = P c k (t ɛ ) + c P i (t ɛ ) P c k 1 (t + ɛ ) = P c k 1 (t ɛ ). P (t + ɛ ) = P (t ɛ ). i=c k+1 At time t + ɛ the p th moment of N(t + ɛ ) is: E[N p (t + ɛ )] = = = c i p P i (t + ɛ ) i= c k 1 i p P i (t ɛ ) + (c k) p c P i (t ɛ ) i= i=c k c k i p P i (t ɛ ) + (c k) p c P i (t ɛ ). (1.15) i= i=c k+1 Method 1 or Method 2 can be used to evaluate equation (1.15).

33 23 Method 1 E[N p (t + ɛ )] = c k c i p P i (t ɛ ) + (c k) p P i (t ɛ ) i= i=c k+1 = c i p P i (t ɛ ) + c ((c k) p i p ) P i (t ɛ ) i= i=c k+1 = E[N p (t ɛ )] or E[N p (t + ɛ )] = = c + ((c k) p i p ) P i (t ɛ ) when k c k + 1 (1.16) c k i=c k+1 c (i p (c k) p ) P i (t ɛ ) + (c k) p P i (t ɛ ) i= i= c k i= (i p (c k) p ) P i (t ɛ ) + (c k) p when k > c k + 1. (1.17) Method 2 c k E[N p (t + c ɛ )] = i p P i (t ɛ ) + (c k) p P i (t ɛ ) i= i=c k+1 = E[N p (t ɛ ), N(t ɛ ) c k] +(c k) ( p 1 P[N(t ɛ ) c k]. ) (1.18) Method 2 requires the value of the partial moment E[N p (t), N(t) β] to update E[N p (t)]. The partial moment E[N p (t), N(t) β] deps on β, which in turn deps on the type of deterministic changes to be made. Since β is not fixed for every case, the states β and β + 1, whose probabilities are approximated, are not fixed. We numerically integrate one differential equation for each unique value of β. We now consider how to suitably update E[N(t), N(t) β].

34 Updating E[N(t), N(t) β] Deterministic increases in queue capacity c 1 does not require E[N p (t), N(t) β] to be updated since the probabilities of the existing states do not change (the probabilities of the new states created are equal to zero). Excluding deterministic increases in queue capacity, we have three possible types of deterministic changes, where E[N p (t), N(t) β] needs to be updated. They are deterministic additions to N(t), deterministic deletions from N(t) and deterministic decreases in c 1. We refer to these as Addition, Deletion and Decrease respectively. Let the time interval of interest be (t, t f ). Assume an Addition occurs at times t 1, t 2 and t 3 of magnitude k 1, k 2 and k 3, where t 1, t 2, t 3 < t f and k 1 k 2 k 3. Consider the last deterministic change first and observe that E[N p (t 3 ), N(t 3 ) c k 3 ] is required to update E[N p (t 3 )]. Similarly, at time t 2 and t 1, E[N p (t 2 ), N(t 2 ) c k 2 ] and E[N p (t 1 ), N(t 1 ) c k 1 ] are required to update E[N p (t)] at times t 2 and t 1, respectively. Thus, in this case we numerically integrate de[n p (t), N(t) c k 1 ]/dt from time to t 1, de[n p (t), N(t) c k 2 ]/dt from time to t 2, and de[n p (t), N(t) c k 3 ]/dt from time to t 3. The first deterministic change occurs at time t 1, thus we do not update E[N p (t), N(t) c k 1 ], unlike E[N p (t), N(t) c k 2 ] and E[N p (t), N(t) c k 3 ], which do need to be updated. We update E[N p (t), N(t) c k 3 ] at times t 1 and t 2. Similarly, we update E[N p (t), N(t) c k 2 ] at time t 1. At time t 2, the mapping scheme for state

35 25 probabilities is: P (t + 2 ) =. P k2 1(t + 2 ) = P k2 (t + 2 ) = P (t 2 ). P c 1 (t + 2 ) = P c k2 1(t 2 ) P c (t + 2 ) = c i=c k 2 P i (t 2 ). thus E[N p (t 2 + ), N(t 2 + ) c k 3 ] = c k 3 i= i p P i (t 2 + ) = if c k 3 k 2 1 or = = = = c k 3 i=k 2 i p P i (t 2 c k 3 + ) i=k 2 i p P i k2 (t 2 c k 3 k 2 i= p q= ( p q ) (i + k 2 ) p P i (t 2 ) (1.19) ) E[N q (t 2 ), N(t 2 ) c k 3 k 2 ] if c k 3 k 2 (1.2) Observe that E[N p (t 2 + ), N(t 2 + ) c k 3 ] can either be evaluated by numerically integrating de[n q (t), N(t) c k 3 k 2 ]/dt, where q {,..., p}, or by approximating state proba-

36 26 bilities. Numerically integrating de[n q (t), N(t) c k 3 k 2 ]/dt, when c k 3 k 3 k 1 requires E[N q (t 1 ), N(t 1 ) c k 3 k 2 k 1 ] in order to update E[N q (t), N(t) c k 3 k 2 ] at time t 1. Observe that, in this example, the total number of differential equations numerically integrated deps on the number of deterministic changes made, and the type of deterministic changes made in the time interval of interest. We refer to one type of deterministic change followed by the same or another type of deterministic change as a change-pair. There are 16 change-pairs for the M(t)/M(t)/1/c model. Of these only 9 change-pairs require E[N p (t)] to be updated at the times of both deterministic changes in the change-pair. For a large number of deterministic changes in the time interval of interest, the numerical integration of a large number of partial-moment differential equations may be required, in order to update E[N p (t)] at the times of the deterministic change. The computational effort spent on approximating the p th moment of N(t), using partial-moment differential equations to update p th moment differential equation of N(t), may potentially be more than the computational effort required to numerically integrate the full set of KFEs. Thus, we approximate the probabilities on the RHS of equation (1.19), using the SDA method. We match the first two moments of the random variable Z t, where Z t = (N(t) N(t) c k 3 k 2 ), to a PE(c k 3 k 2, θ, γ), and approximate all required state probabilities. Method 1 requires approximated state probabilities using the SDA approach. For Method 1, the maximum number of state probabilities approximated is (c+1)/2. The maximum number of probabilities that are approximated using Method 2 and SDA approach for different

37 27 change-pairs, is given in Table 1.1. Observe that, numerically integrating E [N(t), N(t) β] does not result in an advantage. Thus for deterministic changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c queueing models, we use the SDA approach, where we match the first two moments of N(t) to the first two moments of X t PE(c, θ, γ), and approximate state probabilities that are required to update E[N p (t)]. Let deterministic additions to N(t) and deterministic deletion from N(t) be referred to as EA (Entity Addition) and ED (Entity Deletion), respectively. Let increases in queue capacity and decrease in queue capacity be referred to as IC (Increase in Capacity) and DC (Decrease in Capacity), respectively. We henceforth refer to all update equations as Change Equations. The method of approximating deterministic changes using the SDA approach is represented by a simple decision tree, shown in Figure (1.9), where Change Equations 1-7 are given by: Change Equation 1: E[N p (t + ɛ )] = ( ) p p (k) p q E[N q (t ɛ )] q= q + c i=c k+1 (c p (i + k) p )P i (t ɛ ). Change Equation 2: E[N p (t + ɛ )] = c k i= ((i + k) p c p )P i (t ɛ ) + c p. Change Equation 3: k 1 E[N p (t + ɛ )] = δ p, i= k 1 i= P i (t ɛ ) + p q= (i k) p P i (t ɛ ). ( ) p ( k) p q E[N q (t ɛ )] q

38 28 Table 1.1: Number of state probabilities approximated using Method 2 Current Magnitude Next Case Magnitude Relation be- Number of prob- Maximum no. of case tween k1,k2 and abilities to be probabilities to c approximated be approximated Addition k1 Addition k2 k2 c + 1 k1 Addition k1 Addition k2 k2 < c + 1 k1 c k1 k2 + 1 (c 1)/2 Addition k1 Deletion k2 k2 k1 Addition k1 Deletion k2 k2 > k1 k2 k1 (c 1)/2 Addition k1 Decrease k2 k2 c + 1 k1 Addition k1 Decrease k2 k2 < c + 1 k1 c k1 k2 + 1 (c 1)/2 Deletion k1 Addition k2 k2 k1 c k2 + 1 c/2 Deletion k1 Addition k2 k2 < k1 c k1 + 1 (c 1)/2 Deletion k1 Deletion k2 k2 c + 1 k1 k2 (c 1)/2 Deletion k1 Deletion k2 k2 > c + 1 k1 k2 1 (c 1)/2 Deletion k1 Decrease k2 k2 k1 c k2 + 1 c/2 Deletion k1 Decrease k2 k2 < k1 c k1 + 1 (c 1)/2 Decrease k1 Addition k2 N.A. c k1 k1 + 1 (c k1 + 1)/2 Decrease k1 Deletion k2 N.A. k2 (c k1 + 1)/2 Decrease k1 Decrease k2 N.A. k2 (c k1 + 1)/2

39 29 EA k c k + 1 k > c k + 1 Change Equation 1 Change Equation 2 Deterministic Change ED k c k + 1 k > c k + 1 Change Equation 3 Change Equation 4 IC Change Equation 5 DC k c k + 1 k > c k + 1 Change Equation 6 Change Equation 7 Figure 1.9: Simple decision tree for deterministic changes to M(t)/M(t)/1/c queueing model Change Equation 4: k 1 E[N p (t + c ɛ )] = δ p, P i (t ɛ ) + (i k) p P i (t ɛ ). i= i=k Change Equation 5: E[N p (t + ɛ )] = E[N p (t ɛ )]. Change Equation 6: E[N p (t + ɛ )] = E[N p (t ɛ )] + c ((c k) p i p )P i (t ɛ ). i=c k+1

40 3 Change Equation 7: E[N p (t + ɛ )] = c k i= (i p (c k) p )P i (t ɛ ) + (c k) p. 1.3 Algorithm SSMQ Let the time interval of interest be (t, t f ). For each value of time t (t, t f ), assume p 2 and assume the values of E[N q (t )] are known, where q {1,..., p}. Let the solution algorithm be labelled Single Server Markovian Queue (SSMQ). The algorithm to approximate deterministic changes to the M(t)/M(t)/1/c queueing model, using the SDA approach is given by: Algorithm SSMQ for t = t t f do if a deterministic change is to be made at time t then 1. Set E[N q (t )] E[N q (t)], for q {1,..., p} 2. Match E[N(t )] and E[N 2 (t )] to the first two moments of X t PE(c, θ t, γ t ) 3. Evaluate E[N q (t + )] for q {1,..., p}, using E[N q (t )], for q {1,..., p}, and X t as a surrogate for N(t ) 4. Update c if required 5. Set E[N q (t)] E[N q (t + )], for q {1,..., p} else

41 31 1. Match E[N(t)] and E[N 2 (t)] to the first and second moment of X t PE(c, θ, γ), respectively 2. Approximate P (t) and P c (t) using X t as a surrogate for N(t) 3. Numerically integrate E[N q (t)], where q {1,..., p}, to get approximations for E[N q (t + t)] 4. Set t t + t if for. 1.4 Conclusion We presented a technique to approximate the p th moment of N(t), using a mapping scheme for state probabilities and the SDA approach described by Taaffe and Ong ([8],[1]), when deterministic changes are made to the M(t)/M(t)/1/c queueing model. It was shown that approximating state probabilities, to update the value of p th moment of N(t), at the time of a deterministic change, requires less computational effort as compared to numerically integrating partial moment differential equations to do the same. We next study deterministic changes to the Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c queueing models. We use the SDA approach to approximate deterministic change to both queueing models.

42 Chapter 2 Deterministic changes to the Ph(t)/Ph(t)/1/c queueing model The versatility and flexibility of the Phase-type distribution is attributed to its denseness over (, ]. Johnson and Taaffe [2] showed, that a general non-negative continuous distribution can be arbitrarily closely approximated by a Phase-type distribution. Ong and Taaffe ([8]) note, that the Phase-type distribution s flexibility in approximating non-negative continuous distributions can be utilized in approximating a time-varying GI/G/1/c queueing model, with a Ph(t)/Ph(t)/1/c queueing model. The arrival and service process for the Ph(t)/Ph(t)/1/c queueing model are both described by time-varying Phase-type distributions. A stationary Phase-type distribution having m phases can be described by a continuous time Markov process with m transient states and 32

43 33 one absorbing state. An initial-state probability vector α contains the probability of starting in any of the m transient states. The vector λ contains the inverses of the mean times spent in the transient states; i.e. the vector of rates for the underlying Markov process. Given the routing probabilities among the m + 1 states, and given that the process starts in a transient state, the time until absorption random variable has a Phase-type distribution. This random variable is said to have a time-varying Phase-type distribution if the routing probability matrix, α, and λ are all time depent. For the Ph(t)/Ph(t)/1/c queueing model, let the number of phases in the arrival process be m A and let the number of phases in the service process be m B. Referring to Ong and Taaffe([8]), the time-varying phase arrival process and the time-varying phase service process are described by (A(t), λ(t)) and (B(t), µ(t)) respectively. A(t) and B(t) are the underlying Markov chains of the arrival and service process, given by A(t) = A 1 (t) A 2 (t) α(t) B(t) = B 1 (t) B 2 (t) β(t). The matrix A 1 (t) is the routing probability matrix of size m A m A, for transient-to-transient states in the underlying Markov chain for the arrival process. Similarly B 1 (t) is the routing probability matrix of size m B m B, of the transient-to-transient states in the underlying Markov chain for the service process. A 2 (t) and B 2 (t) are vectors, of size m A 1 and m B 1, respectively, representing the probabilities of being absorbed from transient states for their respective processes. The vector λ(t) is the time depent vector of rates for the arrival process, and µ(t) is the vector of rates corresponding to the service process.

44 34 An actual arrival occurs when an absorption takes place in the underlying Markov chain for the arrival process. This is similar to the service process, where an actual completion of service occurs when an absorption takes place in the underlying Markov chain for the service process. Once an absorption occurs, for either arrival or service process, the underlying Markov chain restarts. To complete the description of the arrival and service processes we define an initial-state probability vector for each process. The initial-state probability vector contains the probabilities of starting in any one of the transient states. The vector α(t) is the initial-state probability vector for the arrival process, and the vector β(t) is the initial-state probability vector for the service process, at time t. For the Ph(t)/Ph(t)/1/c queueing model, we define the state variable as a triple (N(t), I(t), J(t)), where N(t) is a random variable that denotes the number of entities in the system at time t, I(t) is a random variable that denotes the current-arrival phase of the next arrival at time t and J(t) is a random variable that denotes the current-service phase of an entity at time t. For the Ph(t)/Ph(t)/1/c queueing model, N(t) {,..., c}, I(t) {1,..., m A }, and J(t) {,..., m B }. The current-service phase at time t, J(t), is equal to zero if the system is idle at time t. The state space Ω = {(, i, ) i {1,..., m A }} {(n, i, j) n {1,..., c}, i {1,..., m A }, j {1,..., m B }} (Refer [8]). In order to approximate the p th moment of N(t) using the SDA approach, Ong and Taaffe ([8]) partitioned the state space. Define the indexing set D = {(i, j) (i, j) {1,..., m A } {1,..., m B }}.

45 35 Define a subspace, Ω i,j, as Ω i,j = {(n, i, j) n {1,..., c}, (i, j) D}. Define a subspace Ω = {(, i, ) i {1,..., m A }}. All the state probabilities Ω will be collectively referred to as idle-state probabilities. The KFEs for the Ph(t)/Ph(t)/1/c queueing model are: dp,i, (t)/dt = λ i (t)p,i, (t) + m B + l=1 m A k=1 b l,mb +1(t)µ l (t)p 1,i,l (t) dp n,i,j (t)/dt = (λ i (t) + µ j (t)) P n,i,j (t) + + m A k=1 +α i (t) a k,i (t)λ k (t)p,k, (t) m B a k,i (t)λ k (t)p n,k,j (t) + b l,j (t)µ l (t)p n,i,l (t) m A k=1 l=1 a k,ma+1(t)λ k (t) (δ n1 β j (t)p,k, (t) +δ n1 {P n 1,k,j (t) + δ nc P c,i,j (t)} ) m B +δ nc β j (t) b l,mb +1(t)µ l (t)p n+1,i,l (t) where (i, j) {1,..., m A }, j {1,..., m B } and n {1,..., c}. l=1 For each Ω i,j, where (i, j) D, define E (p) i,j (t) as E[N p (t), N(t) 1, I(t) = i, J(t) = j]. Thus de (p) i,j (t)/dt = (λ i (t) + µ j (t)) E (p) i,j (t) m A a k,i (t)λ k (t)e (p) m B k,j (t) + + k=1 l=1 m A +α i (t) a k,ma+1(t)λ k (t) β j (t)p,k, (t) k=1 ( ) m A p +α i (t) a k,ma+1(t)λ k (t) p E (q) p 1 k,j k=1 q= q (t) q= b l,j (t)µ l (t)e (p) i,l (t) ( ) p q c q P c,k,j (t)

46 ( ) m B p +β j (t) b l,mb +1(t)µ l (t) p ( 1) p q E (q) i,l l=1 q= q (t) δ p, P 1,i,l (t). (2.1) 36 Let Ω 1 = Ω i,j. (i,j) D Observe that, Ω = Ω Ω 1 and note that E[N p (t)] = or E[N p (t)] = m A m B i=1 j=1 m A m B i=1 j=1 E (p) i,j (t) when p 1 E (p) m A i,j (t) + i=1 P,i, (t) when p = 1 We refer to E (p) i,j (t) as the p th partial-moment differential equation (PMDE) of Ω i,j. Ong and Taaffe ([8]) used the SDA approach to approximate E (p) i,j (t) for each subspace Ω i,j. Numerical integration of equation (2.1), for (i, j) D, and for every q {,..., p}, requires the values of idle-state probabilities, along with the state probabilities P 1,i,j (t) and P c,i,j (t), for (i, j) D, in the time interval of interest. We numerically integrate the KFEs for the idle-state probabilities and approximate the state probabilities P 1,i,j (t) and P c,i,j (t), for (i, j) D, by using the SDA and state-space partitioning approach described by Ong and Taaffe ([8]). For each subspace Ω i,j, define a random variable X ij t PE(c 1, θ ij t, γ ij t ). Match the conditional moment (E (1) i,j (t)/e () i,j (t)) 1 to the first moment of X ij t, and match the conditional

47 37 moment (E (2) i,j (t)/e () i,j (t)) 2((E (1) i,j (t)/e () i,j (t)) 1) 1 to the second moment of X ij t, to determine the parameters θ ij t and γ ij t. The approximate values of P 1,i,j (t) and P c,i,j (t) are given by P[X ij t = ] and P[X ij t = c 1], respectively. We use the SDA and state-space partitioning approach described by Ong and Taaffe ([8]), to approximate deterministic changes to the Ph(t)/Ph(t)/1/c queueing model. To approximate the p th moment of N(t), we update m A idle-state KFEs and (p + 1)m A m B PMDEs, at the times of deterministic changes to the Ph(t)/Ph(t)/1/c queueing model. There are four possible deterministic changes that can be made to the Ph(t)/Ph(t)/1/c queueing model. They are: EA, ED, IC and DC. No deterministic change to the number of servers is allowed for the Ph(t)/Ph(t)/1/c queueing model. We present a mapping scheme for the state probabilities, in Ω and Ω i,j, where (i, j) D. Note, there are many mapping schemes for state probabilities with respect to deterministic changes made to the Ph(t)/Ph(t)/1/c queueing model. The one presented in this thesis represents one set of rules. The mapping scheme presented here is based on the rule that if a deterministic change requires an entity to be cleared from the system then it will be cleared. The system has no memory of any entity that is cleared (counter to this, is the mapping scheme where cleared customers can enter the system at a later time, and thus the system does not lose entities due to a deterministic change).

48 Deterministic addition to N(t) Let z entities be deterministically added to the system at a selected time t ɛ. Let the system capacity at time t ɛ be c. Since the system has finite capacity, if z c, then z = c. At time t ɛ the mapping scheme for the state probabilities is: P,i, (t + ɛ ) = P 1,i,j (t + ɛ ) =. P z,i,j (t + ɛ ) = β j (t + ɛ ) P,i, (t ɛ ) P z+1,i,j (t + ɛ ) = P 1,i,j (t ɛ ). P c 1,i,j (t + ɛ ) = P c z 1,i,j (t ɛ ) P c,i,j (t + ɛ ) = c n=c z P n,i,j (t ɛ ) for i {1,..., m A }, j {1,..., m A } and n {1,..., c}. 2.2 Deterministic deletion from N(t) Deleting entities in the queue is possible only when N(t) 1. Let z entities be deterministically deleted from the system at a selected time t ɛ. Let the system capacity at time t ɛ be c. Since there can be at most c entities in the queue, if z c, then z = c. At time t ɛ the

49 39 mapping scheme for the state probabilities is: P c,i,j (t + ɛ ) = P c z+1,i,j (t + ɛ ) = P c z,i,j (t + ɛ ) = P c,i,j (t ɛ ).. P 1,i,j (t + ɛ ) = P z+1,i,j (t ɛ ) P,i, (t + ɛ ) = P,i, (t m B z ɛ ) + P n,i,j (t ɛ ) j=1 n=1 for i {1,..., m A }, j {1,..., m A } and n {1,..., c}. 2.3 Deterministic increase in queue capacity c 1 Let the system capacity at time t ɛ be c. Given that the system capacity is c, let the queue capacity be deterministically increased by z at time t ɛ. The mapping scheme for the state probabilities at time t ɛ is: P,i, (t + ɛ ) = P,i, (t ɛ ) P 1,i,j (t + ɛ ) = P 1,i,j (t ɛ ). P c,i,j (t + ɛ ) = P c,i,j (t ɛ )

50 4 P c+1,i,j (t + ɛ ) = P c+z,i,j (t + ɛ ) =. for i {1,..., m A }, j {1,..., m A } and n {1,..., c}. 2.4 Deterministic decrease in queue capacity c 1 Deterministic decrease in queue capacity reduces the capacity of the system to queue potential entities. If reduction in queue capacity requires deletion of entities from the system, then entities are deleted from the system. Let the system capacity at time t ɛ be c. Given that the system capacity is c, let the queue capacity be reduced by z at time t ɛ. The mapping scheme for the state probabilities is: P c,i,j (t + ɛ ) =. P c z+1,i,j (t + ɛ ) = P c z,i,j (t + ɛ ) = P c z,i,j (t ɛ ) + c P n,i,j (t ɛ ) P c z 1,i,j (t + ɛ ) = P c z 1,i,j (t ɛ ). P 1,i,j (t + ɛ ) = P 1,i,j (t ɛ ) n=c z+1

51 41 P,i, (t + ɛ ) = P,i, (t ɛ ) for i {1,..., m A }, j {1,..., m A } and n {1,..., c}. 2.5 Updating idle state KFEs and PMDEs Using the mapping scheme presented in the last four sections, for the state probabilities, the solution technique for updating the idle state KFEs and all PMDEs, given that a deterministic change is made at time t ɛ, is represented by the decision tree given in Figure (2.1), where Change Equations 1-7 are given by Change Equation 1: For i {1,..., m A } P,i, (t + ɛ ) = (2.2) and for (i, j) D E (p) i,j (t + ɛ ) = z p β j (t + ɛ )P,i, (t ɛ ) + + c n=c z+1 p q= ( ) p z p q E (q) i,j (t ɛ ) q {c p (n + z) p }P n,i,j (t ɛ ). (2.3) Change Equation 2: For i {1,..., m A } P,i, (t + ɛ ) = (2.4)

52 42 EA z c z z > c z Change Equation 1 Change Equation 2 Deterministic Change ED z c z z > c z Change Equation 3 Change Equation 4 IC Change Equation 5 DC z c z z > c z Change Equation 6 Change Equation 7 Figure 2.1: Decision tree for deterministic changes to Ph(t)/Ph(t)/1/c queueing model and for (i, j) D E (p) c z i,j (t + ɛ ) = z p β j (t + ɛ )P,i, (t ɛ ) + {(n + z) p c p }P n,i,j (t ɛ ) + c p E () i,j (t ɛ ). (2.5) n=1 Change Equation 3: For i {1,..., m A } P,i, (t + ɛ ) = P,i, (t m B z ɛ ) + P n,i,j (t ɛ ) (2.6) j=1 n=1 and for (i, j) D E (p) i,j (t + ɛ ) = p q= ( ) p ( z) p q E (q) i,j (t ɛ ) q

53 43 z (n z) p P n,i,j (t ɛ ). (2.7) n=1 Change Equation 4: For i {1,..., m A } P,i, (t + ɛ ) = P,i, (t m B ɛ ) + E () i,j (t ɛ ) j=1 c n=z+1 P n,i,j (t ɛ ). (2.8) and for (i, j) D E (p) i,j (t + ɛ ) = c n=z+1 (n z) p P n,i,j (t ɛ ). (2.9) Change Equation 5: For i {1,..., m A } P,i, (t + ɛ ) = P,i, (t ɛ ) (2.1) and for (i, j) D E (p) i,j (t + ɛ ) = E (p) i,j (t ɛ ). (2.11) Change Equation 6: For i {1,..., m A } P,i, (t + ɛ ) = P,i, (t ɛ ) (2.12) and for (i, j) D E (p) i,j (t + ɛ ) = E (p) i,j (t ɛ ) + c n=c z+1 {(c z) p n p }P n,i,j (t ɛ ). (2.13)

54 44 Change Equation 7: For i {1,..., m A } P,i, (t + ɛ ) = P,i, (t ɛ ) (2.14) and for (i, j) D E (p) i,j (t + ɛ ) = (c z) p E () c z i,j (t ɛ ) + {n p (c z) p }P n,i,j (t ɛ ). (2.15) n=1 2.6 Algorithm SSFC Let the solution algorithm for deterministic change in the Ph(t)/Ph(t)/1/c system be labelled as SSFC (Single Server Finite Capacity) algorithm. Let the interval of interest be (t, t f ) and p 2. Assume, we know the values of P,i, (t ), where i {1,..., m A }, and E (q) i,j (t ), where j {1,..., m B } and q {,..., p}. Algorithm SSFC for t = t t f do if a deterministic change is to be made at t then 1. For each Ω i,j, match the conditional first moment and the conditional second moment (E (1) i,j (t ɛ )/E () i,j (t ɛ )) 1 (E (2) i,j (t ɛ )/E () i,j (t ɛ )) 2(E (1) i,j (t ɛ )/E () i,j (t ɛ )) 1

55 45 to the first and second moments of X ij t PE(c 1, θ ij t, γ ij t ), respectively, and approximate all required state probabilities 2. Set E () i,j (t ɛ ) E () i,j (t ɛ ), E (1) i,j (t ɛ ) E (1) i,j (t ɛ ), E (2) i,j (t ɛ ) E (2) i,j (t ɛ ), for (i, j) D, and set P,i, (t ɛ ) P,i, (t ɛ ), for i {1,..., m A } 3. Use approximated state probability values, along with values of E () i,j (t ɛ ), E (1) i,j (t ɛ ), and E (2) i,j (t ɛ ), to approximate E () i,j (t + ɛ ), E (1) i,j (t + ɛ ), E (2) i,j (t + ɛ ), for (i, j) D, and P,i, (t + ɛ ), for i {1,..., m A } 4. Set E () i,j (t ɛ ) E () i,j (t + ɛ ), E (1) i,j (t ɛ ) E (1) i,j (t + ɛ ), E (2) i,j (t ɛ ) E (2) i,j (t + ɛ ), for (i, j) D and P,i, (t ɛ ) P,i, (t + ɛ ) for each i {1,..., m A } 5. Update c if required else 1. For each Ω i,j, match the conditional first moment (E (1) i,j (t ɛ )/E () i,j (t ɛ )) 1 and the conditional second moment (E (2) i,j (t ɛ )/E () i,j (t ɛ )) 2(E (1) i,j (t ɛ )/E () i,j (t ɛ )) 1 to the first and second moments of X ij t PE(c 1, θ ij t, γ ij t ), respectively 2. For each Ω i,j, approximate P 1,i,j (t) and P c,i,j (t) 3. Use the approximated state probability values to numerically integrate PMDEs and idle state KFEs from t t + t, to get approximations for P,i, (t + t), for each i {1,..., m A }, and E () i,j (t + t), E (1) i,j (t + t), E (2) i,j (t + t), for (i, j) D

56 46 4. Set t t + t if for. 2.7 Conclusion We used the state-space partitioning and the SDA approach described by Ong and Taaffe [8] to approximate deterministic changes to the Ph(t)/Ph(t)/1/c queueing model. The solution technique along with the Change Equations presented, minimize the number of approximated state probabilities.

57 Chapter 3 Deterministic changes to the Ph(t)/M(t)/s/c queueing model The Ph(t)/Ph(t)/s/c queueing model is a flexible queueing model. The property of the Phase-type distribution to be able to approximate any non-negative continuous distribution, arbitrarily closely, ensures that the Ph(t)/Ph(t)/s/c queueing model can be used to approximate time-varying queueing models, where the time between arrivals and service time are described by non-negative continuous distributions. The Ph(t)/M(t)/s/c queueing model is a special case of the Ph(t)/Ph(t)/s/c queueing model, where the service time has a timevarying Phase-type distribution with one phase, and rate µ(t). The time between arrivals is represented by a time-varying Phase-type distribution. Let the maximum number of arrival phases be m A. The arrival process is described by (A(t), λ(t)), where A(t) represents the underlying Markov chain for the arrival process, and as defined in Chapter 2 is given by 47

58 48 A(t) = A 1 (t) A 2 (t) α(t). The vector λ(t) is the vector of rates for the arrival process. At the start of the time interval of interest, there are s servers, and the queue capacity is c s. Unlike the Ph(t)/Ph(t)/1/c queueing model, deterministic changes to the number of servers is allowed for the Ph(t)/M(t)/s/c queueing model. We define the state variable as a pair (N(t), I(t)), where N(t) is a random variable that denotes the number of entities in the system at time t, and I(t) is a random variable that denotes the current phase of the next arrival at time t. For the Ph(t)/M(t)/s/c queueing model, N(t) {,..., c} and I(t) {1,..., m A }. The state space Ω = {(m, i) m {,..., c}, i {1,..., m A }}. Let P m,i (t) denote P[N(t) = m, I(t) = i]. The KFEs for the Ph(t)/M(t)/s/c queueing model are: dp n,i (t)/dt = ( λ i (t) + δ n, nµ(t) ) P n,i (t) + a j,i (t)λ j (t)p n,j (t) m A m A j=1 +δ n, α i (t) a j,ma +1 λ j (t)p n 1,j (t) j=1 +(n + 1)µ(t)P n+1,i (t) dp k,i (t)/dt = m A (λ i (t) + sµ(t)) P k,i (t) + a j,i (t)λ j (t)p k,j (t) m A j=1 +α i (t) a j,ma +1 λ j (t) (P k 1,j (t) + δ n,c P c,j (t)) j=1 +δ n,c sµ(t)p k+1,i (t) where n {,..., s 1}, k {s,..., c} and i {1,..., m A }.

59 49 Taaffe and Ong ([1]) partitioned the state space into subspaces to approximate the p th moment of N(t). Define Ω 1,i = {(n, i) n {,..., s 1}, i {1,..., m A }} and Ω 2,i = {(k, i) k {s,..., c}, i {1,..., m A }}. Let and Ω 1 = i {1,...,m A } Ω 1,i Ω 2 = Ω 2,i. i {1,...,m A } Let E (p) i,1 (t) denote E[N p (t), N(t) s 1, I(t) = i] and E (p) i,2 (t) denote E[N p (t), N(t) s, I(t) = i]. We refer to de (p) i,1 (t)/dt as the p th PMDE of Ω 1,i and refer to de (p) i,2 (t)/dt as the p th PMDE of Ω 2,i. The p th PMDE of Ω 1,i and Ω 2,i are given by: de (p) i,1 (t)/dt = λ i (t)e (p) m A i,1 (t) + a j,i (t)λ j (t)e (p) j,1(t) j=1 ( ) m A p +α i (t) a j,ma +1(t)λ j (t) p E (q) j=1 q= q j,1(t) s p P s 1,j (t) p 1 ( ) +µ(t) p ( 1) p q E (q+1) i,1 (t) + (s 1) p s P s,i (t) (3.1) q= q

60 5 de (p) i,2 (t)/dt = λ i (t)e (p) m A i,2 (t) + a j,i (t)λ j (t)e (p) j,2(t) j=1 ( ) m A p +α i (t) a j,ma +1(t)λ j (t) p E (q) p 1 ( ) p j=1 q= q j,2(t) c q P c, j(t) q= q m A +α i (t) a j,ma +1(t)λ j (t)s p P s 1,j (t) j=1 p 1 ( ) +sµ(t) p ( 1) p q E (q) i,2 (t) (s 1) p P s,i (t) q= q (3.2) where i {1,..., m A }. Observe that Ω = Ω 1 Ω 2 and note that E[N p (t)] = m A (E (p) i=1 i,1 (t) + E (p) i,2 (t)). Taaffe and Ong ([1]) used the SDA approach to approximate the p th PMDE of Ω 1,i, and the p th PMDE of Ω 2,i, for i {1,..., m A }. Numerical integration of the p th PMDE of Ω 1,i and Ω 2,i requires the values of P s 1,i (t), P s,i (t), and P c,i (t), for i {1,..., m A }, in the time interval of interest. To approximate P s 1,i (t), for i {1,..., m A }, define a random variable Xt 1i PE(s 1, θt 1i, γt 1i ). Match the conditional moment E (1) i,1 (t)/e () i,1 (t) to the first moment of X 1i t and the conditional moment E (2) i,1 (t)/e () i,1 (t) to the second moment of Xt 1i. Once θt 1i and γ 1i t are determined, the approximate value of P s 1,i (t) is given by P[X 1i t = s 1]. To approximate P s,i (t) and P c,i (t), for i {1,..., m A }, define a random variable Yt 2i PE(c s, θt 2i, γt 2i ). Match the conditional moment (E (1) i,2 (t)/e () i,2 (t)) s

Analysis and Approximations for Time Dependant Queueing Models

Analysis and Approximations for Time Dependant Queueing Models Analysis and Approximations for Time Dependant Queueing Models Walid Nasr Dissertation submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the

More information

Continuous-Time Markov Chain

Continuous-Time Markov Chain Continuous-Time Markov Chain Consider the process {X(t),t 0} with state space {0, 1, 2,...}. The process {X(t),t 0} is a continuous-time Markov chain if for all s, t 0 and nonnegative integers i, j, x(u),

More information

Continuous Time Markov Chains

Continuous Time Markov Chains Continuous Time Markov Chains Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 2015 Outline Introduction Continuous-Time Markov

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

LECTURE #6 BIRTH-DEATH PROCESS

LECTURE #6 BIRTH-DEATH PROCESS LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

6 Continuous-Time Birth and Death Chains

6 Continuous-Time Birth and Death Chains 6 Continuous-Time Birth and Death Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology.

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Statistics 150: Spring 2007
