Strong Stability in an (R, s, S) Inventory Model


Boualem RABTA and Djamil AÏSSANI
L.A.M.O.S., Laboratory of Modelization and Optimization of Systems, University of Bejaia (Algeria)
lamos bejaia@hotmail.com

Abstract

In this paper, we prove the applicability of the strong stability method to inventory models. Real-life inventory problems are often very complicated, and they are resolved only through approximations. It is therefore very important to justify these approximations and to estimate the resulting error. We study strong stability in a periodic review inventory model with an (R, s, S) policy. After showing the strong v-stability of the underlying Markov chain with respect to perturbations of the demand distribution, we obtain quantitative stability estimates with an exact computation of the constants.

Keywords: Inventory control, (R, s, S) policy, Markov chain, Perturbation, Strong stability, Quantitative estimates.

Introduction

When modelling practical problems, one may often replace a real system by another one which is close to it in some sense but simpler in structure and/or components. This approximation is necessary because real systems are generally very complicated: their analysis either cannot lead to analytical results, or it leads to complicated results which are not

Corresponding author: Boualem RABTA, L.A.M.O.S., Laboratory of Modelization and Optimization of Systems, University of Bejaia (Algeria). Tel: (213) Fax: (213) brabta@yahoo.fr.

useful in practice. Also, all model parameters are known only imprecisely because they are obtained by means of statistical methods. Such circumstances suggest seeking qualitative properties of the real system, i.e., the manner in which the system is affected by changes in its parameters. These qualitative properties include invariance, monotonicity and stability (robustness). It is by means of qualitative properties that bounds can be obtained mathematically and approximations can be made rigorously [18]. On the other hand, in case of instability, a small perturbation may lead to at least a finite deviation [11].

The paper by F.W. Harris [8] was the first contribution to inventory research. Since then, thousands of papers have been written about inventory problems, and this field is still a wide area for research; for a recent contribution see, for example, [15]. Inventory models are the first stochastic models for which monotonicity properties were proved (see [12, 20]). In 1969, Boylan proved a robustness theorem for inventory problems by showing that the solution of the optimal inventory equation depends continuously on its parameters, including the demand distribution [5]. Recently, Chen and Zheng [6] have shown that the inventory cost in an (R, s, S) model is relatively insensitive to changes in D = S − s. Also, Tee and Rossetti [19] examine the robustness of a standard model of multi-echelon inventory systems by developing a simulation model to explore the model's ability to predict system performance for a two-echelon, one-warehouse, multiple-retailer system using (R, Q) inventory policies under conditions that violate the model's fundamental modelling assumptions.

Like many other stochastic systems, inventory systems can be entirely described by a Markov chain. We will say that the inventory system (or model) is stable if the underlying Markov chain is stable. Many methods have been elaborated for the investigation of the stability of Markov chains. Kalashnikov and Tsitsishvili [11] proposed a method inspired by Liapunov's direct method for differential equations. We can also find in the literature the weak convergence method (Stoyan [18]), the metric method (Zolotarev [22]), the renewal method (Borovkov [3]), the strong stability method (Aïssani and Kartashov [1]) and the uniform stability method (Ipsen and Meyer [10]). Some of these methods allow us to obtain quantitative estimates in addition to the qualitative affirmation of continuity.

The strong stability method, elaborated in the beginning of the 1980s, is applicable to all operations research models which can be represented by a Markov chain. It has been applied to queueing systems (see for example [2]). According to this approach, we suppose that the perturbation is small with respect to a certain norm. Such a strict condition allows us to obtain better estimates of the characteristics of the perturbed chain. For definitions and results on this method we refer to [14].

In this paper, we apply the strong stability method to the investigation of stability in an (R, s, S) periodic review inventory model and to the derivation of quantitative estimates. In Section 1, we present the model, define the underlying Markov chain and study its transient and stationary regimes. Notations are introduced in Section 2. The main results of this paper are given in Sections 3, 4 and 5. After perturbing the demand probabilities, we first prove the strong v-stability of the considered Markov chain (Section 3), estimate the deviation of its transition matrix (Section 4) and then give an upper bound on the approximation error (Section 5). A numerical example is given in Section 6 to illustrate the use of the method. The definition and basic theorems of the strong stability method are given

in the appendix.

1 The model

Consider the following single-item, single-echelon inventory problem. The inventory level is inspected every R time units. At the beginning of each period, the manager decides whether to order and, if yes, how much to order. Suppose that the outside supplier is perfectly reliable and that orders arrive immediately. During period n, n ≥ 1, the total demand is a random variable ξ_n. Assume that ξ_n, n ≥ 1, are independent and identically distributed (i.i.d.) random variables with common probabilities given by

$$a_k = P(\xi_1 = k), \quad k = 0, 1, 2, \ldots$$

So, a_k is the probability of a demand of k items during a period. For such inventory problems, the (R, s, S) policy is proved to be optimal (see [17, 9, 21]). According to this control rule, if at a review moment (t_n = nR, n ≥ 1) the inventory level X_n is below or equal to s (the order point), we order enough items to raise the inventory level to S (the order-up-to level). Thus, at the beginning of period n + 1, the size of the order is Z_n = S − X_n. Figure 1 illustrates the mechanism of this model. Note that for this model, R, s and S are decision variables whose values should be chosen optimally through a suitable algorithm (see for example [21]).

Assume that the initial inventory level satisfies s < X_0 ≤ S (if not, we raise it to S by ordering a sufficient quantity). The inventory level X_{n+1} at the end of period n + 1 is given by

$$X_{n+1} = \begin{cases} (S - \xi_{n+1})^+ & \text{if } X_n \le s, \\ (X_n - \xi_{n+1})^+ & \text{if } X_n > s, \end{cases}$$
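For concreteness, this recursion can be simulated directly. The sketch below is our own illustration (the function name and arguments are not from the paper); it traces the on-hand level X_n under the (R, s, S) policy with zero lead time:

```python
def simulate_inventory(S, s, demands, x0):
    """Trace the on-hand level X_n under the (R, s, S) policy with zero
    lead time: at each review, if X_n <= s we order up to S before the
    period's demand is served."""
    x = x0
    path = [x]
    for xi in demands:
        base = S if x <= s else x      # level after the (possible) order
        x = max(base - xi, 0)          # X_{n+1} = (base - xi_{n+1})^+
        path.append(x)
    return path
```

For instance, with S = 6, s = 3 and demands 2, 5, 1, 0, 7, the level visits 6, 4, 0, 5, 5, 0: the third demand empties the stock, which triggers an order up to S at the next review.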

Figure 1: The inventory level in the (R, s, S) inventory system with zero lead time.

where $(A)^+ = \max(A, 0)$. The random variable X_{n+1} depends only on X_n and ξ_{n+1}, where ξ_{n+1} is independent of n and of the state of the system. Thus, X = {X_n, n ≥ 0} is a homogeneous Markov chain with values in E = {0, 1, ..., S}.

1.1 Transition probabilities

Suppose that at date t_n = nR, X_n = i.

1. If 0 ≤ i ≤ s, then X_{n+1} = (S − ξ_{n+1})^+, and we have two cases:

for j = 0,
$$P(X_{n+1} = 0 \mid X_n = i) = P(S - \xi_{n+1} \le 0) = P(\xi_{n+1} \ge S) = \sum_{k=S}^{\infty} a_k;$$

for 0 < j ≤ S,
$$P(X_{n+1} = j \mid X_n = i) = P(S - \xi_{n+1} = j) = P(\xi_{n+1} = S - j) = a_{S-j}.$$

2. If s < i ≤ S, then X_{n+1} = (i − ξ_{n+1})^+, and we distinguish the following cases:

for j = 0,
$$P(X_{n+1} = 0 \mid X_n = i) = P(i - \xi_{n+1} \le 0) = P(\xi_{n+1} \ge i) = \sum_{k=i}^{\infty} a_k;$$

for 0 < j ≤ i,
$$P(X_{n+1} = j \mid X_n = i) = P(i - \xi_{n+1} = j) = P(\xi_{n+1} = i - j) = a_{i-j};$$

for j > i,
$$P(X_{n+1} = j \mid X_n = i) = 0.$$

So, we have

$$P_{ij} = \begin{cases} \sum_{k=S}^{\infty} a_k & \text{if } 0 \le i \le s \text{ and } j = 0, \\ a_{S-j} & \text{if } 0 \le i \le s \text{ and } 1 \le j \le S, \\ \sum_{k=i}^{\infty} a_k & \text{if } s+1 \le i \le S \text{ and } j = 0, \\ a_{i-j} & \text{if } s+1 \le i \le S \text{ and } 1 \le j \le i, \\ 0 & \text{if } s+1 \le i \le S \text{ and } j \ge i+1. \end{cases}$$

Thus, rows 0 to s of the transition matrix of the Markov chain X are identical to row S (the order-up-to-S rows), while row i with s < i < S ends with S − i zeros:

$$P = \begin{pmatrix}
\sum_{k=S}^{\infty} a_k & a_{S-1} & \cdots & a_1 & a_0 \\
\vdots & \vdots & & \vdots & \vdots \\
\sum_{k=S}^{\infty} a_k & a_{S-1} & \cdots & a_1 & a_0 \\
\sum_{k=s+1}^{\infty} a_k & a_s & \cdots & a_0 & 0 \cdots 0 \\
\vdots & \vdots & & & \vdots \\
\sum_{k=S-1}^{\infty} a_k & a_{S-2} & \cdots & a_0 & 0 \\
\sum_{k=S}^{\infty} a_k & a_{S-1} & \cdots & a_1 & a_0
\end{pmatrix}$$
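The matrix P_{ij} above is straightforward to build numerically. The following sketch is our own code (all names are ours; `poisson_pmf` anticipates the Poisson demand used in Section 6); it constructs P for an arbitrary demand pmf:

```python
import math

def poisson_pmf(k, lam):
    """P(xi = k) for Poisson(lam) demand (the case used in Section 6)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def transition_matrix(S, s, a):
    """Transition matrix P of the chain X on E = {0, ..., S}, where
    a(k) = P(xi = k).  Rows 0..s and row S face a full stock of S items;
    row i with s < i < S faces a stock of i items."""
    def tail(m):                       # P(xi >= m)
        return 1.0 - sum(a(k) for k in range(m))
    P = [[0.0] * (S + 1) for _ in range(S + 1)]
    for i in range(S + 1):
        m = S if i <= s else i         # stock level facing the demand
        P[i][0] = tail(m)              # a demand of m or more empties the stock
        for j in range(1, m + 1):
            P[i][j] = a(m - j)         # exactly m - j items demanded
    return P
```

Each row sums to one by construction, since the j = 0 entry absorbs the whole demand tail; rows 0 through s coincide with row S, as in the display above.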

The Markov chain X is irreducible and aperiodic. It has a unique stationary distribution π = (π_0, π_1, ..., π_S) given by the solution of

$$\pi P = \pi, \qquad \sum_{i=0}^{S} \pi_i = 1.$$

Furthermore, this stationary distribution is independent of the initial inventory level X_0.

1.2 Stationary probabilities

Consider the Markov chain Y = {Y_n, n ≥ 1} given by

Y_n = {items on hand at the beginning of period n (just after delivery of order Z_{n−1})},

with initial state Y_1 = X_0. So, we have X_n = (Y_n − ξ_n)^+ and

$$Y_{n+1} = \begin{cases} S & \text{if } Y_n - \xi_n \le s, \\ Y_n - \xi_n & \text{if } Y_n - \xi_n > s. \end{cases}$$

Also, we define the Markov chain V = {V_n, n ≥ 1} by V_n = S − Y_n. So, we have

$$V_{n+1} = \begin{cases} 0 & \text{if } V_n + \xi_n \ge S - s, \\ V_n + \xi_n & \text{if } V_n + \xi_n < S - s, \end{cases}$$

or, equivalently,

$$V_{n+1} = (V_n + \xi_n)\,\mathbb{1}_{\{V_n + \xi_n < S - s\}}. \qquad (1)$$

V is a Markov chain with states in {0, 1, ..., S − s − 1}. If we denote by V the random variable having the stationary distribution of the Markov chain V, then

$$V \stackrel{d}{=} (V + \xi_1)\,\mathbb{1}_{\{V + \xi_1 < S - s\}}. \qquad (2)$$

This relation implies that for every 0 ≤ v ≤ S − s − 1,

$$F_V(v) = P(V \le v) = C + (F_V * F_\xi)(v), \qquad (3)$$

where C = 1 − (F_V ∗ F_ξ)(S − s − 1) is a normalisation constant and F_ξ(x) = P(ξ_1 ≤ x) is the cumulative distribution function common to all ξ_n, n ≥ 1. Since (3) is a renewal-type equation, it follows (cf. [16]) that its uniquely determined solution is given by

$$F_V(v) = C\,(1 + H(v)),$$

where $H = \sum_{n=1}^{\infty} F_\xi^{*n}$ is the renewal function associated with F_ξ. The constant C can easily be determined from the condition F_V(S − s − 1) = 1; therefore, we obtain that the unique invariant distribution of the Markov chain V is given by

$$F_V(v) = \frac{1 + H(v)}{1 + H(S - s - 1)}, \quad 0 \le v \le S - s - 1.$$

Now, we can deduce the stationary distribution of the Markov chain X from the relation

$$X \stackrel{d}{=} (S - (V + \xi_1))^+,$$

where X is the random variable having the stationary distribution of the Markov chain X.
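The two routes to the stationary regime — the renewal-function formula for V and a direct computation on the chain X — can be cross-checked numerically. The sketch below is our own code, using the Section 6 parameters S = 6, s = 3 and Poisson(5) demand as an assumed test case; it computes H by repeated convolution of the demand pmf and compares P(X ≤ s) = 1/(1 + H(S − s − 1)) with the value obtained from the transition matrix of X by power iteration:

```python
import math

S, s, LAM = 6, 3, 5.0                         # Section 6 parameters (assumed)

def a(k):                                     # demand pmf: Poisson(5)
    return math.exp(-LAM) * LAM**k / math.factorial(k)

def renewal_H(v, tol=1e-12):
    """H(v) = sum_{n>=1} F^{*n}(v), built by convolving the pmf on {0..v}."""
    pmf = [a(k) for k in range(v + 1)]
    conv, H = pmf[:], 0.0                     # conv = pmf of xi_1 + ... + xi_n
    while sum(conv) > tol:
        H += sum(conv)                        # adds F^{*n}(v)
        conv = [sum(conv[i] * pmf[k - i] for i in range(k + 1))
                for k in range(v + 1)]
    return H

def transition_matrix():
    tail = lambda m: 1.0 - sum(a(k) for k in range(m))
    P = []
    for i in range(S + 1):
        m = S if i <= s else i
        P.append([tail(m)] + [a(m - j) for j in range(1, m + 1)]
                 + [0.0] * (S - m))
    return P

def stationary(P, iters=2000):
    pi = [1.0 / (S + 1)] * (S + 1)
    for _ in range(iters):                    # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(S + 1))
              for j in range(S + 1)]
    return pi

pi = stationary(transition_matrix())
C = 1.0 / (1.0 + renewal_H(S - s - 1))        # = P(X <= s) in stationarity
```

With these parameters the two computations agree to machine precision (both give P(X ≤ s) ≈ 0.887), which is a useful sanity check before the stability constants of Section 5 are evaluated.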

2 Preliminaries and notations

In this section, we introduce the necessary notations adapted to our case, i.e., the case of a finite discrete Markov chain. For a general framework, see [14] and the comments in the appendix.

Let X = {X_n, n ≥ 0} be a homogeneous discrete Markov chain with values in a finite space E = {0, 1, ..., N}, given by a transition matrix P. Assume that X admits a unique stationary vector π. For a transition matrix P and a function f : E → ℝ, we define

$$Pf(i) = \sum_{j \in E} P_{ij} f(j), \quad i \in E,$$

and for a vector μ ∈ ℝ^{N+1} we define

$$\mu f = \sum_{i \in E} \mu_i f(i).$$

Also, f ∘ μ is the matrix with entries (f ∘ μ)_{ij} = f(i) μ_j, i, j ∈ E. We introduce in ℝ^{N+1} the special family of norms

$$\|\mu\|_v = \sum_{i \in E} v(i)\,|\mu_i|, \quad \mu \in \mathbb{R}^{N+1}, \qquad (4)$$

where v : E → ℝ_+ is a function bounded below by a positive constant (not necessarily bounded above). The induced norm on the space of matrices of size (N + 1) × (N + 1) then has the form

$$\|Q\|_v = \max_{i \in E} \frac{1}{v(i)} \sum_{j \in E} |Q_{ij}|\, v(j). \qquad (5)$$
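For the geometric weight v(i) = β^i used throughout the paper, the norms (4) and (5) reduce to one-liners. A minimal sketch (the helper names are ours):

```python
def v_norm_vec(mu, beta):
    """Vector norm (4) with v(i) = beta**i."""
    return sum(beta**i * abs(m) for i, m in enumerate(mu))

def v_norm_mat(Q, beta):
    """Induced matrix norm (5) with v(i) = beta**i."""
    return max(sum(abs(q) * beta**j for j, q in enumerate(row)) / beta**i
               for i, row in enumerate(Q))
```

For example, the identity matrix has norm 1 for every β, and the basis vector e_k has norm β^k, which is why larger β penalises mass on high inventory levels more heavily.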

Also, for a function f : E → ℝ,

$$\|f\|_v = \max_{i \in E} \frac{|f(i)|}{v(i)}.$$

3 Strong v-stability of the (R, s, S) inventory model

Denote by Σ the considered inventory model. Let X = {X_n, n ≥ 0} be the Markov chain where X_n is the on-hand inventory level at date t_n = nR, n ≥ 1, and X_0 is the initial inventory level. Let Σ′ be another inventory model with the same structure but with demands ξ′_n having common probabilities a′_k = P(ξ′_1 = k), k = 0, 1, ... Let X′ = {X′_n, n ≥ 0} be the Markov chain where X′_n is the on-hand inventory level at date t_n = nR, n ≥ 1, in Σ′, and X′_0 = X_0 is the initial inventory level. Denote by P and Q the transition operators of the Markov chains X and X′ respectively.

Definition 3.1. (cf. [1]) We say that the Markov chain X verifying ‖P‖_v < ∞ is strongly v-stable if every stochastic matrix Q in the neighbourhood {Q : ‖Q − P‖_v < ε} admits a unique stationary vector ν and

$$\|\nu - \pi\|_v \to 0 \quad \text{as} \quad \|Q - P\|_v \to 0.$$

Theorem 3.1. In the (R, s, S) inventory model with instant replenishments, the Markov chain X = {X_n, n ≥ 0} is strongly v-stable for the function v(k) = β^k, for all β > 1.

Proof. To prove the strong v-stability of the Markov chain X for the function v(k) = β^k, β > 1, we check the conditions of Theorem A.1.

So, we choose the measurable function

$$h(i) = \begin{cases} 1 & \text{if } 0 \le i \le s, \\ 0 & \text{if } s < i \le S, \end{cases}$$

and the measure α = (α_0, α_1, ..., α_S), where α_j = P_{0j}. Thus, we can easily verify that

$$\pi h = \sum_{i=0}^{S} \pi_i h(i) = \sum_{i=0}^{s} \pi_i > 0, \qquad \alpha \mathbb{1} = \sum_{j=0}^{S} P_{0j} = 1, \qquad \alpha h = \sum_{j=0}^{S} P_{0j} h(j) = \sum_{j=0}^{s} P_{0j} = \sum_{i=S-s}^{\infty} a_i > 0.$$

It is obvious that

$$T_{ij} = P_{ij} - h(i)\,\alpha_j = \begin{cases} 0 & \text{if } 0 \le i \le s, \\ P_{ij} & \text{otherwise}, \end{cases}$$

is a nonnegative matrix. Now, we aim to show that there exists some constant ρ < 1 such that T v(k) ≤ ρ v(k) for all k ∈ E. Compute $T v(k) = \sum_{j=0}^{S} T_{kj} v(j)$. If 0 ≤ k ≤ s, then T v(k) = 0. If s < k ≤ S, then we have

$$T v(k) = \sum_{j=0}^{k} P_{kj} \beta^j = \sum_{i=k}^{\infty} a_i + \sum_{j=1}^{k} a_{k-j} \beta^j = \sum_{i=k}^{\infty} a_i + \sum_{i=0}^{k-1} a_i \beta^{k-i} = \beta^k \left( \frac{1}{\beta^k} \sum_{i=k}^{\infty} a_i + \sum_{i=0}^{k-1} \frac{a_i}{\beta^i} \right) \le \beta^k \left( \frac{1}{\beta^{s+1}} \sum_{i=s+1}^{\infty} a_i + \sum_{i=0}^{s} \frac{a_i}{\beta^i} \right),$$

where the last inequality holds because k ≥ s + 1 and β > 1, so every term with index i ≥ s + 1 (including the tail) is discounted by at most β^{−(s+1)}. It suffices to take

$$\rho = \frac{1}{\beta^{s+1}} \sum_{i=s+1}^{\infty} a_i + \sum_{i=0}^{s} \frac{a_i}{\beta^i}, \qquad (6)$$

which is smaller than 1 for all β > 1. Now, we verify that ‖P‖_v < ∞:

$$\|P\|_v = \max_{k \in \{0,\dots,S\}} \frac{1}{\beta^k} \sum_{j=0}^{S} P_{kj} \beta^j = \max(A, B),$$

where

$$A = \max_{k \in \{0,\dots,s\}} \frac{1}{\beta^k} \left( \sum_{i=S}^{\infty} a_i + \sum_{j=1}^{S} a_{S-j} \beta^j \right) = \sum_{i=S}^{\infty} a_i + \sum_{i=0}^{S-1} a_i \beta^{S-i} = 1 + \sum_{i=0}^{S-1} a_i \left( \beta^{S-i} - 1 \right)$$

(the maximum is attained at k = 0, since the numerator does not depend on k), and

$$B = \max_{k \in \{s+1,\dots,S\}} \frac{1}{\beta^k} \left( \sum_{i=k}^{\infty} a_i + \sum_{j=1}^{k} a_{k-j} \beta^j \right) = \max_{k \in \{s+1,\dots,S\}} \frac{1}{\beta^k} \left( 1 + \sum_{i=0}^{k-1} a_i \left( \beta^{k-i} - 1 \right) \right) \le 1 + \sum_{i=0}^{S-1} a_i \left( \beta^{S-i} - 1 \right) = A.$$

So, we have

$$\|P\|_v = 1 + \sum_{i=0}^{S-1} a_i \left( \beta^{S-i} - 1 \right) < \infty. \qquad \square$$
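The closed forms for ρ and ‖P‖_v can be verified against direct computation on the matrix P. A sketch with the Section 6 parameters (β = 1.29 is the value the paper later singles out; the variable names are ours). Note that the drift inequality T v(k) ≤ ρ β^k is attained with equality at k = s + 1:

```python
import math

S, s, LAM, BETA = 6, 3, 5.0, 1.29            # Section 6 parameters (assumed)
a = lambda k: math.exp(-LAM) * LAM**k / math.factorial(k)
tail = lambda m: 1.0 - sum(a(k) for k in range(m))     # P(xi >= m)

# closed forms from the proof of Theorem 3.1
rho = tail(s + 1) / BETA**(s + 1) + sum(a(i) / BETA**i for i in range(s + 1))
P_v = 1.0 + sum(a(i) * (BETA**(S - i) - 1.0) for i in range(S))

# the same quantities computed directly on the matrix P
def row(i):
    m = S if i <= s else i
    return [tail(m)] + [a(m - j) for j in range(1, m + 1)] + [0.0] * (S - m)

norm_direct = max(sum(row(k)[j] * BETA**j for j in range(S + 1)) / BETA**k
                  for k in range(S + 1))
# T v(k) <= rho * v(k) on the rows where T is nonzero (equality at k = s + 1)
drift_ok = all(sum(row(k)[j] * BETA**j for j in range(S + 1))
               <= rho * BETA**k + 1e-9 for k in range(s + 1, S + 1))
```

The direct induced norm matches the closed form, and ρ ≈ 0.41 < 1 for these parameters, so both conditions of the theorem hold comfortably.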

4 Deviation of the transition matrix

The considered inventory system is strongly v-stable. This means that its characteristics can approximate those of a similar inventory model having a different demand distribution, under the condition that this distribution is close (in some sense) to the demand distribution of the ideal system. To characterise this proximity, we define the quantity

$$W = \sum_{i=S}^{\infty} |a_i - a'_i| + \sum_{i=0}^{S-1} |a_i - a'_i|\, \beta^{S-i}. \qquad (7)$$

Before estimating numerically the deviation between the stationary distributions of the chains X and X′, we estimate the deviation of the transition matrix P of the chain X.

Lemma 4.1. Let P (resp. Q) be the transition matrix of the Markov chain X (resp. X′). Then ‖P − Q‖_v ≤ W.

Proof.

$$\|P - Q\|_v = \max_{k \in \{0,\dots,S\}} \frac{1}{\beta^k} \sum_{j=0}^{S} |P_{kj} - Q_{kj}|\, \beta^j = \max(C, D),$$

where

$$C = \max_{k \in \{0,\dots,s\}} \frac{1}{\beta^k} \left( \Big| \sum_{i=S}^{\infty} (a_i - a'_i) \Big| + \sum_{j=1}^{S} |a_{S-j} - a'_{S-j}|\, \beta^j \right) \le \sum_{i=S}^{\infty} |a_i - a'_i| + \sum_{i=0}^{S-1} |a_i - a'_i|\, \beta^{S-i} = W$$

and

$$D = \max_{k \in \{s+1,\dots,S\}} \frac{1}{\beta^k} \left( \Big| \sum_{i=k}^{\infty} (a_i - a'_i) \Big| + \sum_{j=1}^{k} |a_{k-j} - a'_{k-j}|\, \beta^j \right) \le \frac{1}{\beta^{s+1}} \sum_{i=s+1}^{\infty} |a_i - a'_i| + \sum_{i=0}^{s} \frac{|a_i - a'_i|}{\beta^i} \le W,$$

since each coefficient in the last expression is at most 1, while the corresponding coefficient in W is at least 1. So, we obtain

$$\|P - Q\|_v \le \sum_{i=S}^{\infty} |a_i - a'_i| + \sum_{i=0}^{S-1} |a_i - a'_i|\, \beta^{S-i} = W. \qquad \square$$

5 Stability inequalities

The purpose of this section is to obtain a numerical estimate of the deviation between the stationary distributions of the Markov chains X and X′. This can be done using Theorem A.2. First, we estimate ‖π‖_v.

Lemma 5.1. Let γ be the constant given by

$$\gamma = \frac{\sum_{i=S}^{\infty} a_i + \sum_{i=0}^{S-1} a_i \beta^{S-i}}{(1 - \rho)\,(1 + H(S - s - 1))}, \qquad (8)$$
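Lemma 4.1 can be checked numerically. The sketch below is our own code (the Poisson(5)/Poisson(5.1) pair anticipates Section 6, and the tail truncation NMAX is an implementation choice); it computes W from (7) and the induced norm ‖P − Q‖_v directly:

```python
import math

S, s, BETA, NMAX = 6, 3, 1.29, 120           # NMAX truncates the tail sums

def pois(lam):
    return lambda k: math.exp(-lam) * lam**k / math.factorial(k)

a, a_p = pois(5.0), pois(5.1)                # ideal and perturbed demand pmfs

# W from (7): weighted distance between the two demand distributions
W = (sum(abs(a(i) - a_p(i)) for i in range(S, NMAX))
     + sum(abs(a(i) - a_p(i)) * BETA**(S - i) for i in range(S)))

def trans(am):
    tail = lambda m: 1.0 - sum(am(k) for k in range(m))
    P = []
    for i in range(S + 1):
        m = S if i <= s else i
        P.append([tail(m)] + [am(m - j) for j in range(1, m + 1)]
                 + [0.0] * (S - m))
    return P

P, Q = trans(a), trans(a_p)
# ||P - Q||_v computed directly; Lemma 4.1 says it never exceeds W
dev = max(sum(abs(P[k][j] - Q[k][j]) * BETA**j for j in range(S + 1)) / BETA**k
          for k in range(S + 1))
```

For this Poisson pair the maximum is attained on the order-up-to rows, and the direct norm essentially coincides with W, so the lemma's bound is tight here.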

where ρ is given by (6) and $H = \sum_{n=1}^{\infty} F_\xi^{*n}$ is the renewal function associated with the cumulative distribution function F_ξ of the random variable ξ_1. Then

$$\|\pi\|_v \le \gamma.$$

Proof. According to Theorem A.2,

$$\|\pi\|_v \le (\alpha v)(1 - \rho)^{-1} (\pi h).$$

Since

$$\alpha v = \sum_{j=0}^{S} \alpha_j v(j) = \sum_{j=0}^{S} P_{0j} \beta^j = \sum_{i=S}^{\infty} a_i + \sum_{j=1}^{S} a_{S-j} \beta^j = \sum_{i=S}^{\infty} a_i + \sum_{i=0}^{S-1} a_i \beta^{S-i}$$

and

$$\pi h = \sum_{i=0}^{S} \pi_i h(i) = \sum_{i=0}^{s} \pi_i = P(X \le s) = P(S - (V + \xi_1) \le s) = P(V + \xi_1 \ge S - s) = 1 - (F_V * F_\xi)(S - s - 1) = 1 - \frac{H(S - s - 1)}{1 + H(S - s - 1)} = \frac{1}{1 + H(S - s - 1)},$$

it follows that

$$\|\pi\|_v \le \frac{\sum_{i=S}^{\infty} a_i + \sum_{i=0}^{S-1} a_i \beta^{S-i}}{(1 - \rho)\,(1 + H(S - s - 1))} = \gamma. \qquad \square$$

Now, we can prove the following result.

Theorem 5.1. Let π and ν be the stationary distributions of the Markov chains X and X′ respectively. Then, under the condition

$$W < \frac{1 - \rho}{1 + \gamma}, \qquad (9)$$

we have

$$\|\pi - \nu\|_v \le \frac{W \gamma (1 + \gamma)}{1 - \rho - (1 + \gamma) W}, \qquad (10)$$

where ρ is given by (6) and γ by (8).

Proof. We have ‖π‖_v ≤ γ and $\|\mathbb{1}\|_v = \max_{k \in E} \beta^{-k} = 1$. Imposing the condition W < (1 − ρ)/(1 + γ) and replacing the constants in (12) by their values, we obtain

$$\|\pi - \nu\|_v \le \frac{W \gamma (1 + \gamma)}{1 - \rho - (1 + \gamma) W}. \qquad \square$$

Remark 1. The quantitative estimate obtained by means of the strong stability method and given by the last result provides an upper bound on the deviation of the stationary distribution of the Markov chain X with respect to the perturbation of the transition matrix,

$$\|\pi - \nu\| \le C(P)\, \|P - Q\|. \qquad (11)$$

Some other stability methods can also be used to obtain such a bound. For the absolutely stable chains defined in [10], the survey of Cho and Meyer [7] collects and compares several bounds of the form (11). Note that those methods study the sensitivity of the individual stationary probabilities of a finite Markov chain.
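Putting (6)–(10) together gives a small routine. The sketch below is our own code (`stability_bound` and its arguments are hypothetical names, and the tail truncation `nmax` is an implementation choice); it returns the upper bound (10), or None when condition (9) fails:

```python
import math

def poisson(lam):
    return lambda k: math.exp(-lam) * lam**k / math.factorial(k)

def stability_bound(S, s, beta, a, a_p, nmax=120):
    """Upper bound (10) on ||pi - nu||_v for v(k) = beta**k, or None when
    condition (9) fails.  `a` and `a_p` are the ideal and perturbed demand
    pmfs; `nmax` truncates the infinite tail sums."""
    tail = lambda m: 1.0 - sum(a(k) for k in range(m))
    # rho from (6)
    rho = (tail(s + 1) / beta**(s + 1)
           + sum(a(i) / beta**i for i in range(s + 1)))
    # renewal function H(S - s - 1) by convolution of the demand pmf
    v = S - s - 1
    pmf = [a(k) for k in range(v + 1)]
    conv, H = pmf[:], 0.0
    while sum(conv) > 1e-12:
        H += sum(conv)
        conv = [sum(conv[i] * pmf[k - i] for i in range(k + 1))
                for k in range(v + 1)]
    # gamma from (8)
    alpha_v = tail(S) + sum(a(i) * beta**(S - i) for i in range(S))
    gamma = alpha_v / ((1.0 - rho) * (1.0 + H))
    # W from (7)
    W = (sum(abs(a(i) - a_p(i)) for i in range(S, nmax))
         + sum(abs(a(i) - a_p(i)) * beta**(S - i) for i in range(S)))
    if W >= (1.0 - rho) / (1.0 + gamma):      # condition (9)
        return None
    return W * gamma * (1.0 + gamma) / (1.0 - rho - (1.0 + gamma) * W)
```

For a small perturbation such as Poisson(5) versus Poisson(5.1) the routine returns a finite bound, while a gross perturbation such as Poisson(15) violates condition (9) and yields None, mirroring the restricted β-range observed in Section 6.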

6 Numerical example

Consider the (R, s, S) inventory model with Poisson distributed demand. This can correspond to the case where clients arrive according to a Poisson process and each requests exactly one item. We now apply the above results to test the sensitivity of the model when the arrival rate λ is perturbed to λ′ = λ + ε (for example, ε can be the error made when estimating λ from real data). For the model parameters we take R = 1, S = 6, s = 3, λ = 5. The transition matrix P of the Markov chain X is constructed and the computations are made by means of a computer program.

In a first step, we vary β to see the impact of the choice of the norm. The perturbed parameter is λ′ = 5.1, so ε = 0.1. For each value of β, the program computes the deviation W of the transition matrix and checks whether condition (9) of Theorem 5.1 is satisfied. If yes, the program computes the upper bound on the approximation error from relation (10). The transition matrix of the chain X and its stationary distribution π = (π_0, ..., π_6) are computed numerically from the Poisson(5) probabilities.

Table 1 gives the results of the computations for β = 1.11, 1.21, ..., 2.21, and the error e = ‖π − ν‖_v = f(β) is represented graphically in Figure 2.

Table 1: The deviation W of the transition matrix and the error e = ‖π − ν‖_v = f(β) for different values of β.

Figure 2: The error e = ‖π − ν‖_v = f(β).

Observe that condition (9) is satisfied only for 1.03 ≤ β ≤ 2.24; so the choice of the norm (the value of β) is relevant. The minimum estimate of the error, e = 0.0489, is attained for β = 1.29.

Now, let us vary the perturbation ε for a fixed value β = 1.29. The error e = ‖π − ν‖_v = g(ε) is given in Table 2.

Table 2: The error e = ‖π − ν‖_v = g(ε).

ε : 0.01   0.02   0.03   0.04   0.1    0.2    0.3    0.4    0.5
e : 0.0043 0.0087 0.0133 0.0179 0.0489 0.1150 0.2091 0.3537 0.6041

Naturally, the error is proportional to the perturbation
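The experiment can be replayed end to end. The sketch below is our own code; the recomputed bound depends on implementation details such as tail truncation and need not reproduce the table values above exactly. It computes both stationary distributions, the true deviation ‖π − ν‖_v, and checks that the bound (10) dominates it:

```python
import math

S, s, BETA, LAM, EPS = 6, 3, 1.29, 5.0, 0.1

def pois(lam):
    return lambda k: math.exp(-lam) * lam**k / math.factorial(k)

a, a_p = pois(LAM), pois(LAM + EPS)           # ideal and perturbed demands

def trans(am):
    tail = lambda m: 1.0 - sum(am(k) for k in range(m))
    P = []
    for i in range(S + 1):
        m = S if i <= s else i
        P.append([tail(m)] + [am(m - j) for j in range(1, m + 1)]
                 + [0.0] * (S - m))
    return P

def stationary(P, iters=3000):
    pi = [1.0 / (S + 1)] * (S + 1)
    for _ in range(iters):                    # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(S + 1))
              for j in range(S + 1)]
    return pi

pi, nu = stationary(trans(a)), stationary(trans(a_p))
actual = sum(abs(pi[i] - nu[i]) * BETA**i for i in range(S + 1))

# constants of Theorem 5.1
tail = lambda m: 1.0 - sum(a(k) for k in range(m))
rho = tail(s + 1) / BETA**(s + 1) + sum(a(i) / BETA**i for i in range(s + 1))
pmf = [a(k) for k in range(S - s)]            # pmf on {0, ..., S - s - 1}
conv, H = pmf[:], 0.0
while sum(conv) > 1e-12:
    H += sum(conv)                            # renewal function H(S - s - 1)
    conv = [sum(conv[i] * pmf[k - i] for i in range(k + 1))
            for k in range(S - s)]
gamma = ((tail(S) + sum(a(i) * BETA**(S - i) for i in range(S)))
         / ((1 - rho) * (1 + H)))
W = (sum(abs(a(i) - a_p(i)) for i in range(S, 120))
     + sum(abs(a(i) - a_p(i)) * BETA**(S - i) for i in range(S)))
bound = W * gamma * (1 + gamma) / (1 - rho - (1 + gamma) * W)
```

Condition (9) holds for these parameters, and the exact deviation is well below the strong stability bound, which is the qualitative message of the table above.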

and the two quantities grow together. Observe also that the model withstands a sizeable perturbation (ε = 0.5 represents 10% of the value of λ).

7 Conclusion

The Markov chain X describing the on-hand inventory level at the end of the periods in the (R, s, S) inventory system with instant replenishments is strongly v-stable with respect to perturbations of the demand distribution, for the function v(k) = β^k, β > 1. This means that its characteristics are close to those of any other inventory model with the same structure whose demand distribution is close to that of the ideal model. The last result gives a numerical estimate of the error due to the approximation.

References

[1] Aïssani, D. and N.V. Kartashov, 1983, Ergodicity and stability of Markov chains with respect to operator topology in the space of transition kernels, Dokl. Akad. Nauk. Ukr. SSR, ser. A 11, 3–5.

[2] Aïssani, D. and N.V. Kartashov, 1984, Strong stability of the imbedded Markov chain in an M/G/1 system, International Journal Theor. Probability and Math. Statist. 29 (American Mathematical Society), 1–5.

[3] Borovkov, A.A., 1972, Processus probabilistes de la théorie des files d'attente (Nauka, Moscow).

[4] Boualouche, L. and D. Aïssani, 2002, Performance evaluation of an SW communication protocol (Send and Wait), Proceedings of MCQT'02 (First Madrid International Conference on Queueing Theory), Madrid (Spain), 18.

[5] Boylan, E.S., 1969, Stability theorems for solutions to the optimal inventory equation, Journal of Applied Probability 6.

[6] Chen, F. and Y.S. Zheng, 1997, Sensitivity analysis of an (s, S) inventory model, Operations Research Letters 21(1).

[7] Cho, G. and C. Meyer, 2001, Comparison of perturbation bounds for the stationary probabilities of a finite Markov chain, Linear Algebra and its Applications 335.

[8] Harris, F.W., 1913, How many parts to make at once, Factory, The Magazine of Management 10(2).

[9] Iglehart, D., 1963, Dynamic programming and stationary analysis of inventory problems, in: H. Scarf, D. Gilford and M. Shelly, editors, Multistage Inventory Models and Techniques (Stanford University Press).

[10] Ipsen, I. and C. Meyer, 1994, Uniform stability of Markov chains, SIAM Journal on Matrix Analysis and Applications 15(4).

[11] Kalashnikov, V.V. and G.Sh. Tsitsishvili, 1971, On the stability of queueing systems with respect to disturbances of their distribution functions, Queueing Theory and Reliability.

[12] Karlin, S., 1960, Dynamic inventory policy with varying stochastic demands, Management Science 6.

[13] Kartashov, N.V., 1981, Strong stability of Markov chains, VNISSI, All-Union Seminar on Stability Problems for Stochastic Models, Moscow. (See also: 1986, J. Soviet Math. 34.)

[14] Kartashov, N.V., 1996, Strong Stable Markov Chains (VSP, Utrecht / TbiMC Scientific Publishers).

[15] Mohebbi, E., 2004, A replenishment model for the supply-uncertainty problem, International Journal of Production Economics 87(1).

[16] Ross, S.M., 1970, Applied Probability Models with Optimization Applications (Holden-Day, San Francisco).

[17] Scarf, H., 1960, The optimality of (S, s) policies in the dynamic inventory problem, in: K.J. Arrow, S. Karlin and P. Suppes, editors, Mathematical Methods in the Social Sciences (Stanford University Press, Stanford, Calif.).

[18] Stoyan, D., 1983, Comparison Methods for Queues and Other Stochastic Models, English translation, D.J. Daley, editor (J. Wiley and Sons, New York).

[19] Tee, Y.S. and M.D. Rossetti, 2002, A robustness study of a multi-echelon inventory model via simulation, International Journal of Production Economics 80(3).

[20] Veinott, A., 1965, Optimal policy in a dynamic, single product, non-stationary inventory model with several demand classes, Operations Research 13.

[21] Veinott, A. and H. Wagner, 1965, Computing optimal (s, S) policies, Management Science 11.

[22] Zolotarev, V.M., 1975, On the continuity of stochastic sequences generated by recurrent processes, Theory of Probability and Its Applications XX(4).

A Strong stability criterion

We keep the notations of Section 2. The following theorem (cf. [1]) gives sufficient conditions for the strong stability of a Harris recurrent Markov chain.

Theorem A.1. The Harris recurrent Markov chain X verifying ‖P‖_v < ∞ is strongly v-stable if the following conditions are satisfied:

1. there exist α ∈ ℝ_+^{N+1} and h : E → ℝ_+ such that πh > 0, α𝟙 = 1, αh > 0;

2. T = P − h ∘ α is a nonnegative matrix;

3. there exists ρ < 1 such that T v(x) ≤ ρ v(x) for all x ∈ E;

where 𝟙 is the function identically equal to 1.

One important particularity of the strong stability method is the possibility of obtaining quantitative estimates. The following theorem (cf. [13]) allows us to obtain a numerical estimate of the deviation of the stationary distribution of the strongly stable Markov chain X.

Theorem A.2. Under the conditions of Theorem A.1, and for Δ = Q − P verifying the condition ‖Δ‖_v < C^{-1}(1 − ρ), we have

$$\|\nu - \pi\|_v \le \|\Delta\|_v\, \|\pi\|_v\, C\, \left(1 - \rho - C \|\Delta\|_v\right)^{-1}, \qquad (12)$$

where C = 1 + ‖𝟙‖_v ‖π‖_v and

$$\|\pi\|_v \le (\alpha v)(1 - \rho)^{-1} (\pi h).$$


More information

A TANDEM QUEUEING SYSTEM WITH APPLICATIONS TO PRICING STRATEGY. Wai-Ki Ching. Tang Li. Sin-Man Choi. Issic K.C. Leung

A TANDEM QUEUEING SYSTEM WITH APPLICATIONS TO PRICING STRATEGY. Wai-Ki Ching. Tang Li. Sin-Man Choi. Issic K.C. Leung Manuscript submitted to AIMS Journals Volume X, Number 0X, XX 00X Website: http://aimsciences.org pp. X XX A TANDEM QUEUEING SYSTEM WITH APPLICATIONS TO PRICING STRATEGY WAI-KI CHING SIN-MAN CHOI TANG

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

Contents Preface The Exponential Distribution and the Poisson Process Introduction to Renewal Theory

Contents Preface The Exponential Distribution and the Poisson Process Introduction to Renewal Theory Contents Preface... v 1 The Exponential Distribution and the Poisson Process... 1 1.1 Introduction... 1 1.2 The Density, the Distribution, the Tail, and the Hazard Functions... 2 1.2.1 The Hazard Function

More information

THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE. 1. Introduction

THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE. 1. Introduction THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE HASAN AKIN Abstract. The purpose of this short paper is to compute the measure-theoretic entropy of the onedimensional

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle   holds various files of this Leiden University dissertation Cover Page The handle http://hdlhandlenet/1887/39637 holds various files of this Leiden University dissertation Author: Smit, Laurens Title: Steady-state analysis of large scale systems : the successive

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Stochastic Stability - H.J. Kushner

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Stochastic Stability - H.J. Kushner STOCHASTIC STABILITY H.J. Kushner Applied Mathematics, Brown University, Providence, RI, USA. Keywords: stability, stochastic stability, random perturbations, Markov systems, robustness, perturbed systems,

More information

THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE. By Mogens Bladt National University of Mexico

THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE. By Mogens Bladt National University of Mexico The Annals of Applied Probability 1996, Vol. 6, No. 3, 766 777 THE VARIANCE CONSTANT FOR THE ACTUAL WAITING TIME OF THE PH/PH/1 QUEUE By Mogens Bladt National University of Mexico In this paper we consider

More information

ON THE STRUCTURE OF OPTIMAL ORDERING POLICIES FOR STOCHASTIC INVENTORY SYSTEMS WITH MINIMUM ORDER QUANTITY

ON THE STRUCTURE OF OPTIMAL ORDERING POLICIES FOR STOCHASTIC INVENTORY SYSTEMS WITH MINIMUM ORDER QUANTITY Probability in the Engineering and Informational Sciences, 20, 2006, 257 270+ Printed in the U+S+A+ ON THE STRUCTURE OF OPTIMAL ORDERING POLICIES FOR STOCHASTIC INVENTORY SYSTEMS WITH MINIMUM ORDER QUANTITY

More information

SOMEWHAT STOCHASTIC MATRICES

SOMEWHAT STOCHASTIC MATRICES SOMEWHAT STOCHASTIC MATRICES BRANKO ĆURGUS AND ROBERT I. JEWETT Abstract. The standard theorem for stochastic matrices with positive entries is generalized to matrices with no sign restriction on the entries.

More information

Using Markov Chains To Model Human Migration in a Network Equilibrium Framework

Using Markov Chains To Model Human Migration in a Network Equilibrium Framework Using Markov Chains To Model Human Migration in a Network Equilibrium Framework Jie Pan Department of Mathematics and Computer Science Saint Joseph s University Philadelphia, PA 19131 Anna Nagurney School

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

CHAPTER-3 MULTI-OBJECTIVE SUPPLY CHAIN NETWORK PROBLEM

CHAPTER-3 MULTI-OBJECTIVE SUPPLY CHAIN NETWORK PROBLEM CHAPTER-3 MULTI-OBJECTIVE SUPPLY CHAIN NETWORK PROBLEM 3.1 Introduction A supply chain consists of parties involved, directly or indirectly, in fulfilling customer s request. The supply chain includes

More information

1 Continuous-time chains, finite state space

1 Continuous-time chains, finite state space Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm

More information

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient:

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient: Stochastic2010 Page 1 Long-Time Properties of Countable-State Markov Chains Tuesday, March 23, 2010 2:14 PM Homework 2: if you turn it in by 5 PM on 03/25, I'll grade it by 03/26, but you can turn it in

More information

M/M/1 Retrial Queueing System with Negative. Arrival under Erlang-K Service by Matrix. Geometric Method

M/M/1 Retrial Queueing System with Negative. Arrival under Erlang-K Service by Matrix. Geometric Method Applied Mathematical Sciences, Vol. 4, 21, no. 48, 2355-2367 M/M/1 Retrial Queueing System with Negative Arrival under Erlang-K Service by Matrix Geometric Method G. Ayyappan Pondicherry Engineering College,

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

Inventory Control with Convex Costs

Inventory Control with Convex Costs Inventory Control with Convex Costs Jian Yang and Gang Yu Department of Industrial and Manufacturing Engineering New Jersey Institute of Technology Newark, NJ 07102 yang@adm.njit.edu Department of Management

More information

Some open problems related to stability. 1 Multi-server queue with First-Come-First-Served discipline

Some open problems related to stability. 1 Multi-server queue with First-Come-First-Served discipline 1 Some open problems related to stability S. Foss Heriot-Watt University, Edinburgh and Sobolev s Institute of Mathematics, Novosibirsk I will speak about a number of open problems in queueing. Some of

More information

Mean-field dual of cooperative reproduction

Mean-field dual of cooperative reproduction The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

On Reparametrization and the Gibbs Sampler

On Reparametrization and the Gibbs Sampler On Reparametrization and the Gibbs Sampler Jorge Carlos Román Department of Mathematics Vanderbilt University James P. Hobert Department of Statistics University of Florida March 2014 Brett Presnell Department

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18.

IEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18. IEOR 6711: Stochastic Models I, Fall 23, Professor Whitt Solutions to Final Exam: Thursday, December 18. Below are six questions with several parts. Do as much as you can. Show your work. 1. Two-Pump Gas

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

Monotonicity and Aging Properties of Random Sums

Monotonicity and Aging Properties of Random Sums Monotonicity and Aging Properties of Random Sums Jun Cai and Gordon E. Willmot Department of Statistics and Actuarial Science University of Waterloo Waterloo, Ontario Canada N2L 3G1 E-mail: jcai@uwaterloo.ca,

More information

Replenishment Planning for Stochastic Inventory System with Shortage Cost

Replenishment Planning for Stochastic Inventory System with Shortage Cost Replenishment Planning for Stochastic Inventory System with Shortage Cost Roberto Rossi UCC, Ireland S. Armagan Tarim HU, Turkey Brahim Hnich IUE, Turkey Steven Prestwich UCC, Ireland Inventory Control

More information

Available online at ScienceDirect. IFAC PapersOnLine 51-7 (2018) 64 69

Available online at   ScienceDirect. IFAC PapersOnLine 51-7 (2018) 64 69 Available online at www.sciencedirect.com ScienceDirect IFAC PapersOnLine 51-7 (2018) 64 69 Efficient Updating of Node Importance in Dynamic Real-Life Networks J. Berkhout B.F. Heidergott Centrum Wiskunde

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

Waiting time characteristics in cyclic queues

Waiting time characteristics in cyclic queues Waiting time characteristics in cyclic queues Sanne R. Smits, Ivo Adan and Ton G. de Kok April 16, 2003 Abstract In this paper we study a single-server queue with FIFO service and cyclic interarrival and

More information

ISyE 6761 (Fall 2016) Stochastic Processes I

ISyE 6761 (Fall 2016) Stochastic Processes I Fall 216 TABLE OF CONTENTS ISyE 6761 (Fall 216) Stochastic Processes I Prof. H. Ayhan Georgia Institute of Technology L A TEXer: W. KONG http://wwong.github.io Last Revision: May 25, 217 Table of Contents

More information

Probability and Stochastic Processes Homework Chapter 12 Solutions

Probability and Stochastic Processes Homework Chapter 12 Solutions Probability and Stochastic Processes Homework Chapter 1 Solutions Problem Solutions : Yates and Goodman, 1.1.1 1.1.4 1.3. 1.4.3 1.5.3 1.5.6 1.6.1 1.9.1 1.9.4 1.10.1 1.10.6 1.11.1 1.11.3 1.11.5 and 1.11.9

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

A Simple Solution for the M/D/c Waiting Time Distribution

A Simple Solution for the M/D/c Waiting Time Distribution A Simple Solution for the M/D/c Waiting Time Distribution G.J.Franx, Universiteit van Amsterdam November 6, 998 Abstract A surprisingly simple and explicit expression for the waiting time distribution

More information

Inderjit Dhillon The University of Texas at Austin

Inderjit Dhillon The University of Texas at Austin Inderjit Dhillon The University of Texas at Austin ( Universidad Carlos III de Madrid; 15 th June, 2012) (Based on joint work with J. Brickell, S. Sra, J. Tropp) Introduction 2 / 29 Notion of distance

More information

On asymptotic behavior of a finite Markov chain

On asymptotic behavior of a finite Markov chain 1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong

More information

Classification of Countable State Markov Chains

Classification of Countable State Markov Chains Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive

More information

Countable state discrete time Markov Chains

Countable state discrete time Markov Chains Countable state discrete time Markov Chains Tuesday, March 18, 2014 2:12 PM Readings: Lawler Ch. 2 Karlin & Taylor Chs. 2 & 3 Resnick Ch. 1 Countably infinite state spaces are of practical utility in situations

More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

Model reversibility of a two dimensional reflecting random walk and its application to queueing network

Model reversibility of a two dimensional reflecting random walk and its application to queueing network arxiv:1312.2746v2 [math.pr] 11 Dec 2013 Model reversibility of a two dimensional reflecting random walk and its application to queueing network Masahiro Kobayashi, Masakiyo Miyazawa and Hiroshi Shimizu

More information

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER I EXAMINATION MH4702/MAS446/MTH437 Probabilistic Methods in OR

NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER I EXAMINATION MH4702/MAS446/MTH437 Probabilistic Methods in OR NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER I EXAMINATION 2013-201 MH702/MAS6/MTH37 Probabilistic Methods in OR December 2013 TIME ALLOWED: 2 HOURS INSTRUCTIONS TO CANDIDATES 1. This examination paper contains

More information

Single-stage Approximations for Optimal Policies in Serial Inventory Systems with Non-stationary Demand

Single-stage Approximations for Optimal Policies in Serial Inventory Systems with Non-stationary Demand Single-stage Approximations for Optimal Policies in Serial Inventory Systems with Non-stationary Demand Kevin H. Shang Fuqua School of Business, Duke University, Durham, North Carolina 27708, USA khshang@duke.edu

More information

Stochastic inventory system with two types of services

Stochastic inventory system with two types of services Int. J. Adv. Appl. Math. and Mech. 2() (204) 20-27 ISSN: 2347-2529 Available online at www.ijaamm.com International Journal of Advances in Applied Mathematics and Mechanics Stochastic inventory system

More information

Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands

Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands Expected Time Delay in Multi-Item Inventory Systems with Correlated Demands Rachel Q. Zhang Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109 Received

More information

Risk-Sensitive and Average Optimality in Markov Decision Processes

Risk-Sensitive and Average Optimality in Markov Decision Processes Risk-Sensitive and Average Optimality in Markov Decision Processes Karel Sladký Abstract. This contribution is devoted to the risk-sensitive optimality criteria in finite state Markov Decision Processes.

More information

Convergence Rates for Renewal Sequences

Convergence Rates for Renewal Sequences Convergence Rates for Renewal Sequences M. C. Spruill School of Mathematics Georgia Institute of Technology Atlanta, Ga. USA January 2002 ABSTRACT The precise rate of geometric convergence of nonhomogeneous

More information

M/M/1 Queueing systems with inventory

M/M/1 Queueing systems with inventory Queueing Syst 2006 54:55 78 DOI 101007/s11134-006-8710-5 M/M/1 Queueing systems with inventory Maike Schwarz Cornelia Sauer Hans Daduna Rafal Kulik Ryszard Szekli Received: 11 August 2004 / Revised: 6

More information

Non-homogeneous random walks on a semi-infinite strip

Non-homogeneous random walks on a semi-infinite strip Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains A regeneration proof of the central limit theorem for uniformly ergodic Markov chains By AJAY JASRA Department of Mathematics, Imperial College London, SW7 2AZ, London, UK and CHAO YANG Department of Mathematics,

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS

Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS Chapter 2 SOME ANALYTICAL TOOLS USED IN THE THESIS 63 2.1 Introduction In this chapter we describe the analytical tools used in this thesis. They are Markov Decision Processes(MDP), Markov Renewal process

More information

On Finding Optimal Policies for Markovian Decision Processes Using Simulation

On Finding Optimal Policies for Markovian Decision Processes Using Simulation On Finding Optimal Policies for Markovian Decision Processes Using Simulation Apostolos N. Burnetas Case Western Reserve University Michael N. Katehakis Rutgers University February 1995 Abstract A simulation

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

Consistency of the maximum likelihood estimator for general hidden Markov models

Consistency of the maximum likelihood estimator for general hidden Markov models Consistency of the maximum likelihood estimator for general hidden Markov models Jimmy Olsson Centre for Mathematical Sciences Lund University Nordstat 2012 Umeå, Sweden Collaborators Hidden Markov models

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

QUALIFYING EXAM IN SYSTEMS ENGINEERING

QUALIFYING EXAM IN SYSTEMS ENGINEERING QUALIFYING EXAM IN SYSTEMS ENGINEERING Written Exam: MAY 23, 2017, 9:00AM to 1:00PM, EMB 105 Oral Exam: May 25 or 26, 2017 Time/Location TBA (~1 hour per student) CLOSED BOOK, NO CHEAT SHEETS BASIC SCIENTIFIC

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

ISM206 Lecture, May 12, 2005 Markov Chain

ISM206 Lecture, May 12, 2005 Markov Chain ISM206 Lecture, May 12, 2005 Markov Chain Instructor: Kevin Ross Scribe: Pritam Roy May 26, 2005 1 Outline of topics for the 10 AM lecture The topics are: Discrete Time Markov Chain Examples Chapman-Kolmogorov

More information

Notes on Measure Theory and Markov Processes

Notes on Measure Theory and Markov Processes Notes on Measure Theory and Markov Processes Diego Daruich March 28, 2014 1 Preliminaries 1.1 Motivation The objective of these notes will be to develop tools from measure theory and probability to allow

More information

Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models

Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models Existence, Uniqueness and Stability of Invariant Distributions in Continuous-Time Stochastic Models Christian Bayer and Klaus Wälde Weierstrass Institute for Applied Analysis and Stochastics and University

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Long-tailed degree distribution of a random geometric graph constructed by the Boolean model with spherical grains Naoto Miyoshi,

More information

Markov decision processes and interval Markov chains: exploiting the connection

Markov decision processes and interval Markov chains: exploiting the connection Markov decision processes and interval Markov chains: exploiting the connection Mingmei Teo Supervisors: Prof. Nigel Bean, Dr Joshua Ross University of Adelaide July 10, 2013 Intervals and interval arithmetic

More information

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales

Lecture 6. 2 Recurrence/transience, harmonic functions and martingales Lecture 6 Classification of states We have shown that all states of an irreducible countable state Markov chain must of the same tye. This gives rise to the following classification. Definition. [Classification

More information

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process

Multi Stage Queuing Model in Level Dependent Quasi Birth Death Process International Journal of Statistics and Systems ISSN 973-2675 Volume 12, Number 2 (217, pp. 293-31 Research India Publications http://www.ripublication.com Multi Stage Queuing Model in Level Dependent

More information