Lecture 1: Semi-Markov Type Processes

1. Semi-Markov processes (SMP)
   1.1 Definition of SMP
   1.2 Transition probabilities for SMP
   1.3 Hitting times and semi-Markov renewal equations
2. Processes with semi-Markov modulation (PSMM)
   2.1 M/G type queuing systems
   2.2 Definition of PSMM
   2.3 Regeneration properties of PSMM
3. Miscellaneous results
   3.1 Ergodic theorems
   3.2 Coupling and ergodic theorems
1. Semi-Markov processes (SMP)

1.1 Definition of SMP

Let (J_n, T_n), n = 0, 1, ... be a homogeneous Markov chain with the phase space X × [0, ∞), where X = {1, 2, ..., m}, initial distribution p_i = P{J_0 = i, T_0 = 0}, i ∈ X, and transition probabilities, for i, j ∈ X, s, t ≥ 0,

Q_ij(t) = P{J_{n+1} = j, T_{n+1} ≤ t | J_n = i, T_n = s}.

Definition 1.1. The process (J_n, S_n = T_1 + ... + T_n), n = 0, 1, ... is called a Markov renewal process.

We assume that the following condition excluding instant renewals holds:

A: Q_ij(0) = 0, i, j ∈ X.

Let N(t) = max(n : S_n ≤ t), t ≥ 0 be a Markov renewal counting process (N(t) counts the number of renewals in the interval [0, t]). Condition A guarantees that the random variable N(t) < ∞ with probability 1 for every t ≥ 0.

Definition 1.2. The process J(t) = J_{N(t)}, t ≥ 0 is called a semi-Markov process.

(1) Semi-Markov processes were introduced independently by Takács, Lévy, and Smith in the mid-1950s, as a natural generalization of Markov chains.
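For a finite phase space, Definitions 1.1 and 1.2 translate directly into a simulation. Below is a minimal sketch in Python; the function name `simulate_smp` and the representation of the semi-Markov kernel by embedded transition probabilities `p` together with a sojourn-time sampler `sample_T` for F_ij are illustrative choices, not part of the lecture.

```python
import random

def simulate_smp(p, sample_T, j0, t_max):
    """Simulate one path of a semi-Markov process up to time t_max.

    p[i][j] are the embedded transition probabilities, sample_T(i, j)
    draws a sojourn time from F_ij.  Returns the jump states J_n and
    the jump moments S_n (with S_0 = 0)."""
    states, jumps = [j0], [0.0]
    i, s = j0, 0.0
    while s <= t_max:
        # next state j is drawn from the i-th row of the embedded matrix
        j = random.choices(range(len(p)), weights=p[i])[0]
        s += sample_T(i, j)          # sojourn time T ~ F_ij
        states.append(j)
        jumps.append(s)
        i = j
    return states, jumps

# Two-state example with exponential sojourn times
random.seed(1)
p = [[0.0, 1.0], [1.0, 0.0]]
states, jumps = simulate_smp(p, lambda i, j: random.expovariate(2.0), 0, 10.0)
```

With exponential sojourn times, as in remark (2) below, the simulated path is a path of a continuous time Markov chain.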
(2) If Q_ij(t) = p_ij (1 − e^{−λ_i t}), i, j ∈ X, t ≥ 0, then the SMP J(t) is a continuous time Markov chain.

(3) If Q_ij(t) = p_ij I(t ≥ 1), i, j ∈ X, t ≥ 0, then the SMP J(t), t = 0, 1, ... is a discrete time Markov chain.

(4) By the definition, the random variables J_n = J(S_n), n = 0, 1, ... are the states of the SMP J(t) at the moments of sequential jumps, S_n, n = 0, 1, ... are the moments of sequential jumps, and T_n, n = 1, 2, ... are the inter-jump times (also referred to as sojourn times).

Lemma 1.1. The random sequence J_n, n = 0, 1, ... is a discrete time homogeneous Markov chain (an imbedded MC) with the phase space X, initial distribution p_i = P{J_0 = i}, i ∈ X, and transition probabilities

p_ij = Q_ij(∞) = P{J_{n+1} = j | J_n = i}, i, j ∈ X. (1)

(5) The transition probabilities Q_ij(t) can always be represented in the form Q_ij(t) = p_ij F_ij(t), where F_ij(t) = Q_ij(t)/p_ij if p_ij > 0, while F_ij(t) can be an arbitrary distribution function such that F_ij(0) = 0 if p_ij = 0.

Lemma 1.2. The random sequence T_n, n = 1, 2, ... is a sequence of conditionally independent random variables with respect to the MC J_n, n = 0, 1, ..., that is, for every t_1, t_2, ... ≥ 0, i_0, i_1, ... ∈ X and n = 1, 2, ...,

P{T_k ≤ t_k, k = 1, ..., n | J_k = i_k, k = 0, ..., n} = F_{i_0 i_1}(t_1) ··· F_{i_{n−1} i_n}(t_n). (2)
(6) Let us introduce the process T(t) = t − S_{N(t)}, t ≥ 0 (T(t) measures the time between the moment of the last jump of the SMP J(t) before t and the moment t).

Theorem 1.1. The process (J(t), T(t)), t ≥ 0 is a continuous time homogeneous Markov process with the phase space X × [0, ∞).

1.2 Transition probabilities for SMP

(7) Let us introduce the transition probabilities p_ij(t) = P{J(t) = j | J(0) = i}, i, j ∈ X, t ≥ 0. Denote also F_i(t) = Σ_{l∈X} Q_il(t), i ∈ X.

Theorem 1.2. The transition probabilities p_ij(t) satisfy the following system of Markov renewal equations, for every j ∈ X,

p_ij(t) = (1 − F_i(t)) I(i = j) + Σ_{l∈X} ∫_0^t p_lj(t − s) Q_il(ds), t ≥ 0, i ∈ X. (3)

This system has a unique solution in the class of all measurable, non-negative, bounded functions, if condition A holds.

(a) One should write down the formula of total probability using as hypotheses the state of the SMP J(t) at the moment t ∧ S_1.

(b) Take into account that {J(t) = j, S_1 > t} = {J(0) = j, S_1 > t}.
(c) The proof of uniqueness of the solution is analogous to that for the renewal equation.

(8) Denote

π_ij(s) = ∫_0^∞ e^{−st} p_ij(t) dt,  ϕ_ij(s) = ∫_0^∞ e^{−st} Q_ij(dt),  ψ_i(s) = ∫_0^∞ e^{−st} (1 − F_i(t)) dt,  s ≥ 0.

Then the system of equations (3) is equivalent to the following family of systems of linear equations, for every j ∈ X,

π_ij(s) = ψ_i(s) I(j = i) + Σ_{l∈X} ϕ_il(s) π_lj(s), i ∈ X, s ≥ 0. (4)

1.3 Hitting times and semi-Markov renewal equations

(7) Let us introduce a random functional, which is the first hitting time to a state j ∈ X for the SMP J(t),

V_j = inf(t > S_1 : J(t) = j). (5)

We assume that the following communication condition holds:

B: X is a communicating class of states for the imbedded Markov chain J_n.

(8) Let us introduce the conditional distribution functions G_ij(t) = P{V_j ≤ t | J(0) = i}, i, j ∈ X, t ≥ 0.

Theorem 1.3. The distribution functions G_ij(t) satisfy the following system of Markov renewal equations, for every j ∈ X,

G_ij(t) = Q_ij(t) + Σ_{l≠j} ∫_0^t G_lj(t − s) Q_il(ds), t ≥ 0, i ∈ X. (6)
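Both the renewal system (3) and the transform system (4) of Section 1.2 are amenable to direct computation when the phase space is finite. Below is a numerical sketch for the exponential-sojourn case of remark (2), where Q_ij(t) = p_ij (1 − e^{−λ_i t}), so ϕ_il(s) = p_il λ_i/(λ_i + s) and ψ_i(s) = 1/(λ_i + s); the grid discretization of the Stieltjes integral and all function names are ad hoc choices of this sketch.

```python
import numpy as np

def renewal_solve(p, lam, t_max, h):
    """Solve system (3) on a grid of step h for exponential sojourn
    distributions, replacing the Stieltjes integral by increments of Q."""
    p, lam = np.asarray(p, float), np.asarray(lam, float)
    m, K = len(lam), int(round(t_max / h)) + 1
    t = np.arange(K) * h
    # dQ[r][i, l] = Q_il(t_{r+1}) - Q_il(t_r)
    dQ = (np.exp(-np.outer(t[:-1], lam)) -
          np.exp(-np.outer(t[1:], lam)))[:, :, None] * p[None, :, :]
    pt = np.zeros((K, m, m))
    for k in range(K):
        pt[k] = np.diag(np.exp(-lam * t[k]))     # (1 - F_i(t)) I(i = j)
        for r in range(k):                       # the convolution term of (3)
            pt[k] += dQ[r] @ pt[k - 1 - r]
    return pt

def laplace_solve(p, lam, s):
    """Solve the linear system (4): (I - phi(s)) pi(s) = psi(s)."""
    p, lam = np.asarray(p, float), np.asarray(lam, float)
    phi = p * (lam / (lam + s))[:, None]         # phi_il(s) = p_il lam_i/(lam_i+s)
    psi = np.diag(1.0 / (lam + s))               # psi_i(s) I(i = j)
    return np.linalg.solve(np.eye(len(lam)) - phi, psi)

p, lam = [[0.0, 1.0], [1.0, 0.0]], [1.0, 1.0]
pt = renewal_solve(p, lam, t_max=1.0, h=0.01)
pi_s = laplace_solve(p, lam, s=0.5)
```

For this two-state chain p_00(t) = (1 + e^{−2t})/2, which the grid solution reproduces up to the discretization error; since Σ_j p_ij(t) = 1, the rows of the grid solution sum to one and the rows of π(s) sum to 1/s, which gives convenient checks.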
(a) Analogously to the proof of Theorem 1.2, one should write down the formula of total probability using as hypotheses the state of the SMP J(t) at the moment t ∧ S_1.

(b) Take into consideration that {J(t) = j, S_1 > t} = {J(0) = j, S_1 > t}.

(c) The proof of uniqueness of the solution is analogous to that for the renewal equation.

(8) The following stochastic relation, which follows from the Markov property of the Markov renewal process (J_n, S_n), is very useful in semi-Markov computations,

V_ij =^d  T_ij,          with prob. p_ij,
          T_il + V_lj,   with prob. p_il, l ≠ j,        i ∈ X, (7)

where:

(a) V_ij is a random variable with the distribution function G_ij(t) for every i ∈ X;

(b) T_il is a random variable with the distribution function F_il(t) for every i, l ∈ X;

(c) the random variables T_il and V_lj are independent for every l ∈ X.

(9) Computation of the distribution functions of the random variables on the left and right hand sides of (7) yields the equations (6).

(10) Denote m_il = ∫_0^∞ t F_il(dt), m_i = Σ_{l∈X} p_il m_il = E_i S_1, i, l ∈ X. The following theorem can be obtained with the use of relation (7).
Theorem 1.4. If m_i < ∞, i ∈ X, then the expectations M_ij = ∫_0^∞ t G_ij(dt) < ∞, i, j ∈ X, and they are the unique solution of the following system of linear equations, for every j ∈ X,

M_ij = m_ij p_ij + Σ_{l≠j} p_il m_il + Σ_{l≠j} p_il M_lj = m_i + Σ_{l≠j} p_il M_lj, i ∈ X. (8)

Theorem 1.5. The transition probabilities p_ij(t) satisfy the following system of Markov renewal equations, for every j ∈ X,

p_ij(t) = (1 − F_i(t)) I(i = j) + ∫_0^t p_jj(t − s) G_ij(ds), t ≥ 0, i ∈ X. (9)

This system has a unique solution if condition A holds.

(a) One should write down the formula of total probability using as hypotheses the possible states of the SMP J(t) at the moment t ∧ V_j.

(b) Take into consideration that {J(t) = j, V_j > t} = {J(0) = j, S_1 > t}.

(c) The proof of uniqueness of the solution is analogous to that for the renewal equation.

(11) Note that, for i = j, equation (9) is a renewal equation for the probabilities p_jj(t), while, for i ≠ j, equation (9) is just a relation which expresses the probability p_ij(t) via the probabilities p_jj(s), s ≤ t.
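For a fixed j, system (8) is a linear system in the unknowns M_1j, ..., M_mj. The sketch below solves it with numpy; the function name is an illustrative choice.

```python
import numpy as np

def mean_hitting_times(P, m):
    """Solve system (8): M_ij = m_i + sum_{l != j} p_il M_lj, for the
    embedded transition matrix P and mean sojourn times m_i.
    Returns the matrix M = (M_ij)."""
    P, m = np.asarray(P, float), np.asarray(m, float)
    n = len(m)
    M = np.zeros((n, n))
    for j in range(n):
        Pj = P.copy()
        Pj[:, j] = 0.0                           # drop the l = j terms
        M[:, j] = np.linalg.solve(np.eye(n) - Pj, m)
    return M

# Two-state check: from state 0 the imbedded chain jumps straight to 1,
# so M_01 must equal m_0, and M_00 = m_0 + M_10.
M = mean_hitting_times([[0.0, 1.0], [1.0, 0.0]], [2.0, 3.0])
```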
2. Processes with semi-Markov modulation (PSMM)

2.1 M/G type queuing systems

(12) Let us consider a standard M/G queuing system with a bounded queue buffer. This system has one server and a bounded queue buffer of size N ≥ 2 (including the place in the server). It functions in the following way:

(a) the input flow of customers is a standard Poisson flow with a parameter λ > 0;

(b) the customers are placed in the queue buffer in the order of their arrival in the system;

(c) a new customer arriving in the system immediately starts to be served if the queue buffer is empty, is placed in the queue buffer if the number of customers in the system is less than N, or leaves the system if the queue buffer is full, i.e. the number of customers in the system equals N;

(d) the service times for different customers are independent random variables with the distribution function G(t);

(e) the processes of customers arriving in the system and the service processes are independent.

(13) Let us define as S_n the sequential moments of the beginning of service for customers or of the beginning of waiting for an empty system, J_n the number of customers in the system at the moments S_n, and T_n = S_n − S_{n−1}. Due to the above assumptions (a)–(e), (J_n, S_n) is a Markov renewal process with the phase space {0, 1, ..., N − 1} × [0, ∞) and transition probabilities given by the following relation,
Q_ij(t) = P{J_{n+1} = j, T_{n+1} ≤ t | J_n = i}

        =  1 − e^{−λt},                                                     if i = 0, j = 1,
           0,                                                               if i = 0, 1 < j ≤ N − 1,
           ∫_0^t ((λs)^{j−i+1}/(j−i+1)!) e^{−λs} G(ds),                     if 1 ≤ i ≤ N − 1, i − 1 ≤ j ≤ N − 2,
           ∫_0^t (λ^{N−i} s^{N−i−1}/(N−i−1)!) e^{−λs} (1 − G(s)) ds,        if 1 ≤ i ≤ N − 1, j = N − 1. (10)

(14) The corresponding SMP J(t) represents the number of customers in the system at the last moment before t of the beginning of service for customers or of the beginning of waiting for an empty system.

(15) However, the SMP J(t) does not give the number of customers in the system at the moment t. This can be achieved by introducing the corresponding process with semi-Markov modulation, which describes the dynamics of the queues in the system between the moments S_n.

2.2 Definition of PSMM

Let ⟨Y_{n,i}(t), t ≥ 0, J_{n,i}, T_{n,i}⟩, n = 1, 2, ..., i ∈ X = {1, ..., m}, and J_0 be a family of stochastic functions defined on a probability space ⟨Ω, F, P⟩ such that:

(a) Y_{n,i}(t), t ≥ 0 are measurable processes with a measurable phase space Y (with a σ-algebra of measurable sets B_Y);

(b) J_{n,i} are random variables taking values in the set X; T_{n,i} are positive random variables;

(c) J_0 is a random variable taking values in X.
We assume that the following condition holds:

C: (a) ⟨Y_{n,i}(t), t ≥ 0, J_{n,i}, T_{n,i}⟩, n = 1, 2, ..., i ∈ X, and J_0 form a family of independent random functions;

(b) the joint finite dimensional distributions

P{Y_{n,i}(t_r) ∈ A_{r,i}, J_{n,i} = j_i, T_{n,i} ≤ s_i, r = 1, ..., k, i ∈ X},  A_{r,i} ∈ B_Y, t_r, s_i ≥ 0, r = 1, ..., k, i ∈ X, k ≥ 1,

do not depend on n = 1, 2, ... (but can depend on i ∈ X).

Let us define recursively, for n = 1, 2, ...,

J_n = J_{n,J_{n−1}}, T_n = T_{n,J_{n−1}}, Y_n(t) = Y_{n,J_{n−1}}(t), t ≥ 0. (11)

Let also S_n = T_1 + ... + T_n, n = 0, 1, ... (S_0 = 0), N(t) = max(n : S_n ≤ t), t ≥ 0, and T(t) = t − S_{N(t)}, t ≥ 0 be, respectively, the sequential moments of renewals, the number of renewals in the interval [0, t], and the waiting time (the time between the last renewal before t and the moment t) for the Markov renewal process (J_n, S_n).

Definition 1.3. The process Y(t) = Y_{N(t)+1}(T(t)), t ≥ 0 is called a process with semi-Markov modulation (modulated by the SMP J(t) = J_{N(t)}, t ≥ 0).

(16) The SMP J(t) is a particular case of the PSMM Y(t), where Y_{n,i}(t) ≡ i, t ≥ 0. In this case, Y(t) ≡ J(t), t ≥ 0.

Example. Let us return to the M/G queuing system with a bounded queue buffer. In this case, the independent random functions ⟨Y_{n,i}(t), t ≥ 0, J_{n,i}, T_{n,i}⟩, n = 1
, 2, ..., i ∈ X = {0, 1, ..., N − 1}, and J_0 can be defined in the following way:

(e) If i = 0 then: Y_{n,0}(t) ≡ 0, t ≥ 0; T_{n,0} is an exponentially distributed random variable with parameter λ; J_{n,0} = 1.

(f) If 1 ≤ i ≤ N − 1 then: Y_{n,i}(t) = min(i + N_{n,i}(t), N), t ≥ 0, where N_{n,i}(t), t ≥ 0 is a Poisson process with parameter λ; T_{n,i} is a random variable with the distribution function G(t); J_{n,i} = Y_{n,i}(T_{n,i}) − 1 (the customer completing service leaves the system).

2.3 Regeneration properties of PSMM

(17) Let us introduce the transition probabilities

p_ij,A(t) = P{Y(t) ∈ A, J(t) = j | J_0 = i}, i, j ∈ X, A ∈ B_Y, t ≥ 0.

Let us also denote q_ij,A(t) = P{Y_{1,i}(t) ∈ A, T_{1,i} > t} I(i = j) and Q_ij(t) = P{J_{1,i} = j, T_{1,i} ≤ t}.

Theorem 1.6. The transition probabilities p_ij,A(t) satisfy the following system of Markov renewal equations, for every j ∈ X and A ∈ B_Y,

p_ij,A(t) = q_ij,A(t) + Σ_{l∈X} ∫_0^t p_lj,A(t − s) Q_il(ds), t ≥ 0, i ∈ X. (12)
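For a deterministic service time (G concentrated at a point b), both the embedded probabilities p_ij = Q_ij(∞) of (10) and the random functions of (e)-(f) become fully explicit: the number of arrivals during one service is Poisson(λb). The sketch below computes the embedded matrix and simulates the moments S_n and states J_n; deterministic G is a simplifying assumption of this sketch, and the function names are illustrative.

```python
import math
import random

def embedded_matrix(lam, b, N):
    """Embedded transition probabilities p_ij = Q_ij(infinity) from (10)
    for a deterministic service time b."""
    pois = lambda k: math.exp(-lam * b) * (lam * b) ** k / math.factorial(k)
    P = [[0.0] * N for _ in range(N)]
    P[0][1] = 1.0                              # empty system: wait for an arrival
    for i in range(1, N):
        for j in range(i - 1, N - 1):
            P[i][j] = pois(j - i + 1)          # j - i + 1 arrivals during the service
        P[i][N - 1] = 1.0 - sum(P[i][:N - 1])  # the buffer fills up: Poisson tail
    return P

def simulate_queue(lam, b, N, t_max, seed=1):
    """Simulate (S_n, J_n) for the same system, following (e)-(f)."""
    rng = random.Random(seed)
    t, y, path = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        if y == 0:
            t += rng.expovariate(lam)          # (e): wait for an arrival
            y = 1
        else:
            s = rng.expovariate(lam)
            while s <= b:                      # (f): arrivals during one service
                y = min(y + 1, N)              # a customer is lost if the buffer is full
                s += rng.expovariate(lam)
            t += b
            y -= 1                             # the served customer departs
        path.append((t, y))
    return path

P = embedded_matrix(lam=1.0, b=0.5, N=4)
path = simulate_queue(lam=1.0, b=0.5, N=4, t_max=200.0)
```

Each row of the embedded matrix sums to one, and the simulated states J_n stay in the phase space {0, 1, ..., N − 1}, in agreement with (13).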
(a) One should write down the formula of total probability using as hypotheses the state of the SMP J(t) at the moment t ∧ S_1.

(b) Take into account condition C and the corresponding Markov regeneration property determined by this condition and relation (11).

(c) The proof of uniqueness of the solution is analogous to that for the renewal equation.

Theorem 1.7. The PSMM Y(t) is a regenerative process with regeneration times which are the times of return of the modulating SMP J(t) to some fixed state j ∈ X.

Theorem 1.8. The transition probabilities p_ij,A(t) satisfy the following system of Markov renewal equations, for every j ∈ X and A ∈ B_Y,

p_ij,A(t) = q_ij,A(t) + ∫_0^t p_jj,A(t − s) G_ij(ds), t ≥ 0, i ∈ X. (13)

3. Miscellaneous results

3.1 Ergodic theorems

Theorem 1.9. Let us assume that (a) X is one class of communicating states for the MC J_n; (b) m_i < ∞, i ∈ X; (c) at least one distribution function F_i(t) is non-arithmetic; (d) q_ij,A(t) is almost everywhere continuous with respect to the Lebesgue measure
on [0, ∞). Then, for i, j ∈ X, A ∈ B_Y,

p_ij,A(t) → π_j,A = (1/M_jj) ∫_0^∞ q_jj,A(s) ds as t → ∞. (14)

(18) If A = Y, then p_ij,Y(t) = p_ij(t) = P{J(t) = j | J(0) = i}, i, j ∈ X. Let also ρ_l, l ∈ X be the stationary distribution of the imbedded Markov chain J_n.

Theorem 1.10. Let us assume that (a) X is one class of communicating states for the MC J_n; (b) m_i < ∞, i ∈ X; (c) at least one distribution function F_i(t) is non-arithmetic. Then, for i, j ∈ X,

p_ij(t) → π_j = m_j / M_jj = ρ_j m_j / Σ_{l∈X} ρ_l m_l as t → ∞. (15)

(a) In this case, ∫_0^∞ q_jj,Y(s) ds = ∫_0^∞ (1 − F_j(s)) ds = m_j.

(b) Also, it is known that M_jj = (1/ρ_j) Σ_{l∈X} ρ_l m_l.

(c) Thus, relation (14) takes the form of relation (15).

3.2 Coupling and ergodic theorems

(18) The variational distance between two probability measures P_1(A) and P_2(A) defined on a σ-algebra B of measurable subsets of some space Z is defined as

d(P_1(·), P_2(·)) = sup_{A∈B} |P_1(A) − P_2(A)|. (16)
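Returning to Theorem 1.10: the limit (15) is easy to evaluate numerically from the embedded chain and the mean sojourn times. Below is a sketch with numpy; computing ρ as a left eigenvector of the embedded matrix is one of several possible choices, and the function name is illustrative.

```python
import numpy as np

def smp_stationary(P, m):
    """Evaluate the limit (15): pi_j = rho_j m_j / sum_l rho_l m_l, where
    rho is the stationary distribution of the imbedded chain P and m_i
    are the mean sojourn times."""
    P, m = np.asarray(P, float), np.asarray(m, float)
    w, v = np.linalg.eig(P.T)                    # rho P = rho: left eigenvector
    rho = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    rho = rho / rho.sum()
    pi = rho * m
    return pi / pi.sum()

# Symmetric two-state imbedded chain: rho = (1/2, 1/2), so pi is
# proportional to the mean sojourn times.
pi = smp_stationary([[0.0, 1.0], [1.0, 0.0]], [1.0, 3.0])
```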
Let L(P 1, P 2 ) be the class of all random vectors (Z 1, Z 2 ) defined on some probability space < Ω, F, P > and taking values in the space Z Z, such that P{Z i A} = P i (A), A B, i = 1, 2. Lemma 1.3. The maximal coincidence probability, P = sup P{Z 1 = Z 2 } = 1 d(p 1 ( ), P 2 ( )). (17) (Z 1,Z 2 ) L(P 1,P 2 ) (19) Let assume that P i (A) = A p i(z)ν(dz), where p i (z) is a non-negative measurable in (x, z) function, and µ is a σ-finite measure on B. Lemma 1.4. Under the above assumption, P = 1 d(p 1 ( ), P 2 ( )) = min(p 1 (z), p 2 (z))ν(dz). (18) Z and the vector (Z1, Z2), for which the coincidence probability take its maximal value, has the following structure, { (Z, Z) with prob. P, (Z1, Z2) = (19) (Z 1, Z 2 ) with prob. 1 P, where Z is a random variable with the distribution P (A) = 1 P A min(p 1(z), p 2 (z))ν(dz), while (Z 1, Z 2 ) is a random vector with independent components, which have distributions P i (A) = A (p i(z) min(p 1 (z), p 2 (z)))ν(dz), i = 1, 2. 1 1 P (2) Let Z n, n =, 1,... be a discrete time homogeneous Markov chain with a phase space, initial distribution µ(a) and transition probabilities P (z, A). 14
The following condition is the well-known condition of exponential ergodicity:

D: d(P(x, ·), P(y, ·)) ≤ 1 − ρ < 1, x, y ∈ Z.

Lemma 1.5. Let condition D hold. Then, for any initial distributions µ_1 and µ_2, it is possible to construct a Markov chain (Z^1_n, Z^2_n), n = 0, 1, ... with the phase space Z × Z, initial distribution µ_1 × µ_2, and transition probabilities P̃((x, y), A × B) such that:

(a) P̃((x, y), A × Z) = P(x, A) and P̃((x, y), Z × B) = P(y, B);

(b) Z^1_n and Z^2_n are themselves homogeneous Markov chains with initial distributions, respectively, µ_1 and µ_2, and transition probabilities P(x, A);

(c) P{Z^1_1 = Z^2_1 | Z^1_0 = x, Z^2_0 = y} = 1 − d(P(x, ·), P(y, ·)), x, y ∈ Z.

(21) Let us now define the coupling time

T = min(n : Z^1_n = Z^2_n), (20)

and define the coupling operation defining a new Markov chain

(Z̃^1_n, Z̃^2_n) =  (Z^1_n, Z^2_n)   for n < T,
                    (Z^1_n, Z^1_n)   for n ≥ T. (21)

Lemma 1.6. (Z̃^1_n, Z̃^2_n) is a homogeneous Markov chain with the phase space Z × Z, the initial distribution µ_1 × µ_2, and transition probabilities P̂((x, y), A × B),

P̂((x, y), A × B) =  P̃((x, y), A × B)   for x ≠ y,
                     P(x, A ∩ B)         for x = y. (22)
(22) Lemmas 1.5 and 1.6 imply that Z̃^1_n and Z̃^2_n are themselves homogeneous Markov chains with the phase space Z, initial distributions, respectively, µ_1 and µ_2, and transition probabilities P(x, A).

(23) Let us introduce the probabilities, for A ∈ B, n = 0, 1, ..., i = 1, 2,

P^(n)_{µ_i}(A) = P{Z^i_n ∈ A} = P{Z̃^i_n ∈ A}.

Theorem 1.11. Let condition D hold. Then the following coupling estimate takes place, for every A ∈ B and n = 0, 1, ...,

|P^(n)_{µ_1}(A) − P^(n)_{µ_2}(A)| ≤ P{T > n} ≤ (1 − ρ)^n. (23)

(a) The following relation for random events holds, for n = 0, 1, ...,

{Z̃^1_n ∈ A} Δ {Z̃^2_n ∈ A} ⊆ {T > n}. (24)

(b) This relation implies that, for A ∈ B, n = 0, 1, ...,

|P{Z̃^1_n ∈ A} − P{Z̃^2_n ∈ A}| ≤ P{{Z̃^1_n ∈ A} Δ {Z̃^2_n ∈ A}} ≤ P{T > n}. (25)

(c) Also, for n = 0, 1, ...,

P{T > n} = P{Z̃^1_k ≠ Z̃^2_k, k ≤ n}
         = ∫_{Z×Z} P{Z̃^1_n ≠ Z̃^2_n | Z̃^1_{n−1} = x, Z̃^2_{n−1} = y} P{Z̃^1_{n−1} ∈ dx, Z̃^2_{n−1} ∈ dy, Z̃^1_k ≠ Z̃^2_k, k ≤ n − 1}
         ≤ (1 − ρ) P{Z̃^1_k ≠ Z̃^2_k, k ≤ n − 1} ≤ ··· ≤ (1 − ρ)^n. (26)
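The estimate (23) can be tested numerically for a finite chain, taking 1 − ρ = sup_{x,y} d(P(x, ·), P(y, ·)); for discrete distributions the variational distance (16) is half the L1 distance between rows. A sketch (the two-state matrix is an arbitrary illustrative choice):

```python
import numpy as np

def tv_distance(p, q):
    """Variational distance (16) between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# A strictly positive transition matrix satisfies condition D:
# sup_{x,y} d(P(x,.), P(y,.)) plays the role of 1 - rho.
P = np.array([[0.6, 0.4], [0.3, 0.7]])
one_minus_rho = max(tv_distance(P[x], P[y]) for x in range(2) for y in range(2))

mu1, mu2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
gaps = []
for n in range(1, 8):
    mu1, mu2 = mu1 @ P, mu2 @ P
    # coupling estimate (23): sup_A |P^(n)_mu1(A) - P^(n)_mu2(A)| <= (1 - rho)^n
    gaps.append((tv_distance(mu1, mu2), one_minus_rho ** n))
```

For a two-state chain the estimate is in fact attained with equality, which makes the example a sharp check of (23).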
(24) Under condition D there exists the stationary distribution π(A) of the Markov chain Z_n, which is the unique solution of the following integral equation,

π(A) = ∫_Z π(dx) P(x, A), A ∈ B. (27)

Lemma 1.7. If we choose the initial distribution µ(A) ≡ π(A), then the Markov chain Z_n is a stationary sequence, and, for any A ∈ B and n = 0, 1, ...,

P{Z_n ∈ A} = π(A). (28)

Theorem 1.12. Let the MC Z_n have an initial distribution µ and let condition D hold. Then the following coupling estimate takes place, for any A ∈ B and n = 0, 1, ...,

|P{Z_n ∈ A} − π(A)| ≤ (1 − ρ)^n. (29)

(a) Taking into account Theorem 1.11 and Lemma 1.7, we get

|P^(n)_µ(A) − P^(n)_π(A)| = |P{Z_n ∈ A} − π(A)| ≤ P{T > n} ≤ (1 − ρ)^n. (30)

(25) The coupling method can be applied to discrete and continuous time Markov chains, semi-Markov processes, processes with semi-Markov modulation, etc.
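Theorem 1.12 admits the same kind of numerical check, now against the stationary distribution (27). A sketch with numpy (the transition matrix is again an arbitrary illustrative choice):

```python
import numpy as np

P = np.array([[0.6, 0.4], [0.3, 0.7]])
one_minus_rho = 0.5 * np.abs(P[0] - P[1]).sum()   # the constant 1 - rho of condition D

# stationary distribution: the unique solution of equation (27),
# computed as the left eigenvector of P for the eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

mu, ok = np.array([1.0, 0.0]), []
for n in range(1, 10):
    mu = mu @ P
    # Theorem 1.12: sup_A |P{Z_n in A} - pi(A)| <= (1 - rho)^n
    ok.append(0.5 * np.abs(mu - pi).sum() <= one_minus_rho ** n + 1e-12)
```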
Some references are:

[1] Lindvall, T. Lectures on the Coupling Method, Wiley, 1992.

[2] Silvestrov, D. S. Coupling for Markov renewal processes and the rate of convergence in ergodic theorems for processes with semi-Markov switchings. Acta Applicandae Mathematicae, 34, 1994, 109–124.

4. LN problems

1.1. Try to prove Lemmas 1.1 and 1.2 (Hint: use the Markov property of the Markov renewal process (J_n, S_n)).

1.2. Prove that the solution of the system (3) is unique in the class of all measurable, non-negative, bounded functions, if condition A holds.

1.3. Compute the Laplace transforms in the equations (3) and get the systems of linear equations (4).

1.4. Using the stochastic relation (7), write down the system of linear equations for the moments EV^n_ij, i, j ∈ X.

1.5. Compute the functions q_ij,{l}(t) for the PSMM Y(t) describing the functioning of the M/G queuing system with a bounded queue buffer.

1.6. Try to prove Lemma 1.4.

1.7. Prove Lemma 1.7.