Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes


Front. Math. China 2015, 10(4); DOI 10.1007/s

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes

Yuanyuan LIU 1, Yuhui ZHANG 2

1 School of Mathematics and Statistics, New Campus, Central South University, Changsha 410083, China
2 School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2015

Abstract We obtain sufficient criteria for central limit theorems (CLTs) for ergodic continuous-time Markov chains (CTMCs). We apply the results to establish CLTs for continuous-time single birth processes. Moreover, we present an explicit expression of the time average variance constant for a single birth process whenever a CLT exists. Several examples are given to illustrate these results.

Keywords Markov process, single birth processes, central limit theorem (CLT), ergodicity

MSC 60J27, 60F05

1 Introduction

Let {X_t, t ∈ R_+} be a continuous-time Markov chain (CTMC) on a countable state space E with the Q-matrix Q = (q_ij) and the transition function P_t(i, j). Throughout this paper, we assume that Q is irreducible, totally stable, and regular, which implies that Q is conservative and the Q-process is unique. We further assume that X_t is positive recurrent with the unique invariant distribution π. Then we have

‖P_t(i, ·) − π‖ := Σ_{j∈E} |P_t(i, j) − π_j| → 0, t → +∞,

for any i ∈ E. Define

τ_i = inf{t > 0 : X_t = i}

Received January 26, 2015; accepted April 1, 2015
Corresponding author: Yuanyuan LIU, liuyy@csu.edu.cn

to be the first hitting time on the state i. Let J_1 be the first jump time of the process X_t, and define

δ_i = inf{t > J_1 : X_t = i}

to be the first return time on the state i. Define π(g) = Σ_{i∈E} π_i g_i for a function g on E. When π(|f|) < +∞, by [19, Proposition 4.2 (ii)], we know that the sample mean S(t) is well defined by

S(t) = (1/t) ∫_0^t f(X_s) ds, t > 0.

Moreover, it is known that the strong law of large numbers holds, i.e., S(t) → π(f) with probability 1 as t → +∞. We say that a CLT holds if there exists a scaling constant σ²(f) < +∞ such that

t^{1/2} [S(t) − π(f)] ⇒ N(0, σ²(f)), t → +∞,

for any initial distribution, where N(0, σ²(f)) denotes the normal distribution with mean 0 and variance σ²(f) (N(0, 1) being the standard normal distribution), and ⇒ means convergence in distribution. The (deterministic) constant σ²(f) is called the time average variance constant. It is a fundamental issue to find conditions on X_t and f under which a CLT holds.

Let f̄ := f − π(f). From [8], we immediately know that if π(|f|) < +∞, then a CLT holds if and only if

E_i[(∫_0^{δ_i} f̄(X_s) ds)²] < +∞

for any i ∈ E. Obviously, this condition holds whenever

E_i[(∫_0^{δ_i} |f̄(X_s)| ds)²] < +∞.  (1.1)

It is hard to check the above two conditions directly to verify a CLT for Markov models, since both expressions involve the function f and the first return time δ_i simultaneously. It is well known that the moments of the first return time are closely related with ergodicity (e.g., [6]). We will establish three easily checked criteria in Theorem 3.1 for (1.1), in terms of conditions on the function f and on ergodicity separately; the first one is new, and the other two are analogous to those for discrete-time Markov chains (DTMCs) (see [13,20]).
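To make the objects above concrete, here is a small numerical illustration (ours, not part of the paper): we simulate a toy two-state CTMC with rate a from state 0 to 1 and rate b from 1 to 0 (arbitrary choices) and check that the sample mean S(T) of f = 1_{state 1} approaches π(f).

```python
# Sketch (our own toy illustration, not from the paper): simulate a two-state
# CTMC and check the strong law of large numbers S(t) -> pi(f).
import random

random.seed(42)
a, b = 1.5, 0.5                     # assumed rates: 0 -> 1 at rate a, 1 -> 0 at rate b
pi1 = a / (a + b)                   # invariant distribution is (b, a)/(a+b)

T = 10_000.0                        # time horizon
t, state, occupation = 0.0, 0, 0.0  # occupation time spent in state 1
while t < T:
    rate = a if state == 0 else b
    hold = random.expovariate(rate) # exponential holding time in current state
    hold = min(hold, T - t)         # do not run past the horizon T
    if state == 1:
        occupation += hold
    t += hold
    state = 1 - state               # jump to the other state

S = occupation / T                  # sample mean S(T) for f = indicator of state 1
print(abs(S - pi1))                 # small: S(T) is close to pi(f) = 0.75
```

With these rates π(f) = 0.75; a longer horizon T, or averaging many replications of t^{1/2}[S(t) − π(f)], would also exhibit the CLT scaling discussed above.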

This current research is also motivated by investigating CLTs for a continuous-time single birth process (see [6]) on the state space E = Z_+, which is a CTMC with the following Q-matrix:

Q =
( q_00  q_01  0     0     ⋯ )
( q_10  q_11  q_12  0     ⋯ )
( q_20  q_21  q_22  q_23  ⋯ )
( ⋮     ⋮     ⋮     ⋮     ⋱ )

where q_{n,n+1} > 0 for all n ≥ 0. The single birth processes, also called continuous-time Markov processes skip-free to the right ([1]), are a class of important Markov processes, which cover many interesting Markov processes, such as the birth-death processes, the population processes, and the level-dependent queues. It has been shown by [18] that the single birth processes can be used to investigate generalized Markov branching processes and multidimensional CTMCs. The single birth processes have been applied to modeling real problems in queues, biology, and so on. However, to model a real-world phenomenon, we have to judge whether the model is appropriate or not. That is to say, we need to test the model through the sample path, which raises a problem of statistical hypothesis testing. To attack this problem, we apply Theorem 3.1 and the known ergodicity results to obtain sufficient criteria for a CLT for a single birth process. Moreover, we get the explicit expression of the variance constant, which is extremely important since it is a key parameter for determining the confidence interval in statistical hypothesis testing. In the literature, the variance constant has been investigated for the continuous-time birth-death process on a finite state space (see, e.g., [21]), the continuous-time queues driven by a Markovian marked point process ([3]), the discrete-time waiting time of the M/G/1 queue [7] and the PH/PH/1 queue [4], the discrete-time birth-death process ([14]) and single birth process ([12]), the continuous-time matrix-analytical models ([15]), and so on.

2 Preliminaries on ergodicity

In this section, we review the definitions and known criteria of several types of ergodicity.
We also state the known ergodicity results for continuous-time single birth processes. A positive recurrent CTMC X_t is said to be strongly ergodic if

sup_{i∈E} ‖P_t(i, ·) − π‖ → 0, t → +∞.

From [6], we know that X_t is strongly ergodic if and only if

sup_{k∈E} E_k[τ_i] < +∞

for some i ∈ E. A positive recurrent CTMC X_t is said to be exponentially ergodic if

e^{γt} ‖P_t(i, ·) − π‖ → 0, t → +∞,

for some γ > 0 and for any i ∈ E. From [6] again, we know that the chain X_t is exponentially ergodic if and only if E_i[e^{r δ_i}] < +∞ for some r > 0 and some i ∈ E. For a positive real number l with l ≥ 1, a positive recurrent CTMC X_t is said to be l-ergodic (see [11,17]) if

t^{l−1} ‖P_t(i, ·) − π‖ → 0, t → +∞,

for any i ∈ E. It follows from [16] that the chain X_t is l-ergodic if and only if E_i[δ_i^l] < +∞ for some i ∈ E. The equivalent criteria of the three types of ergodicity can be characterized in terms of different drift functions on the intensity matrix Q (see, e.g., [6,16]). The sufficient and necessary criteria for strong ergodicity, exponential ergodicity, and l-ergodicity have been established for single birth processes by [22], [18], and [24], respectively. The invariant distribution is obtained by [23]. We do not present those results here in order to avoid introducing too many notations. We note that

strong ergodicity ⟹ exponential ergodicity ⟹ l-ergodicity

for any l ∈ R_+ with l ≥ 1. Generally speaking, among the three types of ergodicity, strong ergodicity is the easiest to investigate for a CTMC. Strong ergodicity is a restrictive condition for DTMCs, since a DTMC on an infinite countable state space fails to be strongly ergodic whenever its transition matrix P = (P_ij) is a Feller transition matrix (see [10, Section 2] for details), i.e.,

lim_{i→+∞} P_{ij} = 0 for any fixed j ∈ E.

However, it is rather different for CTMCs with unbounded Q-matrices, for which strong ergodicity may hold under feasible conditions.

3 CLTs for general CTMCs

In the following, we will present sufficient criteria of a CLT for polynomially, exponentially, or strongly ergodic CTMCs. The proofs of these results are postponed to Section 5.

Theorem 3.1 Let X_t be a positive recurrent CTMC with the invariant distribution π. Let f be a function on E such that π(|f|) < +∞, and let i be any fixed state in E. Then any one of the following conditions is sufficient for a CLT to hold:
(i) X_t is l-ergodic and π(|f|^{2+η}) < +∞ for some η > 0 and l > 2 + 4/η;
(ii) X_t is exponentially ergodic and π(|f|^{2+η}) < +∞ for some η > 0;
(iii) X_t is strongly ergodic and π(f²) < +∞.
Moreover, if any of the above conditions holds, then the variance constant is given by

σ²(f) = 2π(f̄ f̂) = 2 Σ_{i∈E} π_i f̄_i f̂_i,  (3.1)

where the function f̂, given by

f̂_k = E_k[∫_0^{δ_i} f̄(X_s) ds], k ∈ E,

is a solution of the Poisson equation Qx = −f̄. Note that f̂_i = 0.

Remark 3.2 (1) The discrete-time analogues of (ii) and (iii) can be found in the survey papers [13] or [20]. However, the arguments in the proof are new.
(2) As we note in Section 2, strong ergodicity rarely holds for discrete-time Markov chains, but it is the easiest checked ergodicity condition for CTMCs. Hence, assertion (iii) has a wider application potential than the corresponding discrete-time result.
(3) Result (i) is new, and it is useful for polynomially but not exponentially ergodic Markov processes. The proof of (i) is modified from the proof of [20, Lemma 34], by extending it to the continuous-time case and weakening the geometric ergodicity condition there to a polynomial ergodicity condition.

We now give an example to illustrate Theorem 3.1 under various ergodic situations.

Example 3.3 Let X_t be a birth-death process with birth coefficients b_i and death coefficients a_i given by b_0 = 1 and b_i = a_i = i^γ, i ≥ 1. From [5,17], we know that for γ ∈ (1, 2), X_t is l-ergodic if and only if γ > 2 − 1/l; X_t is exponentially ergodic if and only if γ ≥ 2; and X_t is strongly ergodic if and only if γ > 2. Now, take f_i = i^α, where α is a positive constant to be determined. Let η be an arbitrarily given positive constant.
It is easy to obtain the following results.
(1) Let l, γ, and α be such that l > 2 + 4/η, 2 − 1/l < γ < 2, and α < (γ − 1)/(2 + η). Then Theorem 3.1 (i) holds.

(2) Let γ and α be such that γ = 2 and α < 1/(2 + η). Then Theorem 3.1 (ii) holds.
(3) Let γ and α be such that γ > 2 and α < (γ − 1)/2. Then Theorem 3.1 (iii) holds.

4 Application to single birth processes

In this section, we will apply Theorem 3.1 to single birth processes. Define

q_n^{(k)} = Σ_{j=0}^{k} q_{nj}, 0 ≤ k < n,

F_i^{(i)} = 1, F_n^{(i)} = (1/q_{n,n+1}) Σ_{k=i}^{n−1} q_n^{(k)} F_k^{(i)}, n > i ≥ 0.

Theorem 4.1 Let X_t be a positive recurrent single birth process with the invariant distribution π. Let f be a function on E such that π(|f|) < +∞. Suppose that any one of the conditions in Theorem 3.1 (i)–(iii) is satisfied. Then a CLT holds, and the variance constant is given by

σ²(f) = 2 Σ_{n=0}^{+∞} (Σ_{k=0}^{n} F_n^{(k)} f̄_k / q_{k,k+1}) (Σ_{k=0}^{n} π_k f̄_k).  (4.1)

Remark 4.2 (i) [12, Theorem 3.1] derives the explicit expression of the variance constant for discrete-time single birth processes. Note that there is a difference between the two expressions; specifically, one more term, Σ_{n∈E} π_n f̄_n², appears in the discrete-time expression.
(ii) The proof of Theorem 4.1 is a little different from the one of [12], and it can be modified to give a shorter and more complete proof of [12, Theorem 2.1].

Remark 4.3 When X_t is a birth-death process under the condition of Theorem 4.1, we obtain the variance constant

σ²(f) = 2 Σ_{n=0}^{+∞} (1/(b_n μ_n)) (Σ_{k=0}^{n} μ_k f̄_k) (Σ_{k=0}^{n} π_k f̄_k),

where

μ_0 = 1, μ_i = (b_0 b_1 ⋯ b_{i−1})/(a_1 a_2 ⋯ a_i), i ≥ 1.

Example 4.4 Let q_{n,n+1} = 1 for all n ≥ 0, q_{10} = 1, q_{n,n−2} = 1 for all n ≥ 2, and q_{ij} = 0 for other i ≠ j. The single birth process is exponentially ergodic but not strongly ergodic (refer to [18]). By computing, we know that {F_n^{(k)}}_{n≥k} are Fibonacci numbers for every k ≥ 0:

F_{k+n}^{(k)} = (1/√5) (A^{n+1} − (−B)^{n+1}), n, k ≥ 0,

where

A = (√5 + 1)/2, B = (√5 − 1)/2.

The stationary distribution is

π_n = (1 − B) B^n, n ≥ 0,

which is obtained in [23]. Let f_i = i. Then we have

π(f) = π(|f|) = A, π(|f|³) = 6A⁴ + A.

So by Theorem 4.1, (4.1) holds. Now,

Σ_{k=0}^{n} F_n^{(k)} f̄_k / q_{k,k+1} = (−B)^{n+3} + A − n − 3,
Σ_{k=0}^{n} π_k f̄_k = −(n + 1) B^{n+1}, n ≥ 0.

Hence, from (4.1), one gets σ²(f) = (88A + 54)/5. Example 4.5 below is the so-called uniform catastrophes in population models.

Example 4.5 Let q_{n,n+1} = n + 1 for all n ≥ 0, q_{nj} = 1 for all 0 ≤ j < n, and q_{ij} = 0 for other i ≠ j. The single birth process is strongly ergodic (see [25]). By computing, we know that

F_k^{(k)} = 1, F_n^{(k)} = 2^{n−k−1} (k + 1)/(n + 1), n > k.

The stationary distribution is

π_n = 2^{−(n+1)}, n ≥ 0.

Let f_i = i. Then we have

π(f) = π(|f|) = 1, π(f²) = 3.

So by Theorem 4.1, (4.1) holds. Now, we have

Σ_{k=0}^{n} F_n^{(k)} f̄_k / q_{k,k+1} = −1/(n + 1),
Σ_{k=0}^{n} π_k f̄_k = −(n + 1) 2^{−(n+1)}, n ≥ 0.

Hence, we derive that σ²(f) = 2.
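The value σ²(f) = 2 in Example 4.5 can be checked numerically by truncating the series (4.1). The sketch below is our own illustration, not part of the paper; the truncation level N and the helper names are our choices. It builds F_n^{(k)} from its recursive definition and sums the first N terms of (4.1).

```python
# Numerically check the variance constant of Example 4.5 by truncating the
# series (4.1). Illustrative sketch with assumed truncation level N.

N = 60  # truncation level; the summands decay like 2^(-n)

# Single birth rates of Example 4.5: q_{n,n+1} = n+1, q_{nj} = 1 for j < n.
def q(n, j):
    if j == n + 1:
        return float(n + 1)
    if j < n:
        return 1.0
    return 0.0

# q_n^{(k)} = sum_{j=0}^{k} q_{nj}, for 0 <= k < n.
def q_partial(n, k):
    return sum(q(n, j) for j in range(k + 1))

# F_i^{(i)} = 1, F_n^{(i)} = (1/q_{n,n+1}) sum_{k=i}^{n-1} q_n^{(k)} F_k^{(i)}.
F = [[0.0] * N for _ in range(N)]
for i in range(N):
    F[i][i] = 1.0
    for n in range(i + 1, N):
        F[n][i] = sum(q_partial(n, k) * F[k][i] for k in range(i, n)) / q(n, n + 1)

pi = [2.0 ** (-(n + 1)) for n in range(N)]  # stationary distribution
fbar = [k - 1.0 for k in range(N)]          # f_k = k, centered by pi(f) = 1

# sigma^2(f) = 2 sum_n (sum_{k<=n} F_n^{(k)} fbar_k / q_{k,k+1})
#                      (sum_{k<=n} pi_k fbar_k), truncated at N.
sigma2 = 2.0 * sum(
    sum(F[n][k] * fbar[k] / q(k, k + 1) for k in range(n + 1))
    * sum(pi[k] * fbar[k] for k in range(n + 1))
    for n in range(N)
)
print(sigma2)  # close to 2, in agreement with Example 4.5
```

The two inner sums equal −1/(n+1) and −(n+1)2^{−(n+1)}, so each summand is 2^{−(n+1)} and the truncated sum converges to 2 geometrically fast.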

5 Proof of Theorem 3.1

Note that assertion (ii) follows from (i) directly by observing that if E_i[e^{r δ_i}] < +∞ for some r > 0, then E_i[δ_i^l] < +∞ for any l > 0. Hence, we only need to prove (i), (iii), and the last part of Theorem 3.1. We prove Theorem 3.1 (i) and (iii) by verifying condition (1.1). To proceed with our arguments, we need the following lemma, which presents a functional Cauchy-Schwarz inequality for CTMCs.

Lemma 5.1 Let V be a nonnegative function. If

∫_0^{+∞} V(t, X_t) dt, ∫_0^{+∞} √(E_x[V²(t, X_t)]) dt

exist, then we have

E_x[(∫_0^{+∞} V(t, X_t) dt)²] ≤ (∫_0^{+∞} √(E_x[V²(t, X_t)]) dt)².

Proof Fix any b > 0. Divide the interval [0, b] into n parts: [s_0, s_1], [s_1, s_2], …, [s_{n−1}, s_n], where s_0 = 0 and s_n = b. Let

Δs_i = s_i − s_{i−1}, 1 ≤ i ≤ n, d = max_{1≤i≤n} Δs_i.

We simply write V_i = V(s_i, X_{s_i}). Since both

∫_0^{+∞} V(t, X_t) dt, ∫_0^{+∞} √(E_x[V²(t, X_t)]) dt

exist, we have

E_x[(∫_0^b V(t, X_t) dt)²] = E_x[(lim_{d→0} Σ_{i=1}^{n} V_i Δs_i)²]
≤ liminf_{d→0} E_x[(Σ_{i=1}^{n} V_i Δs_i)²]
≤ liminf_{d→0} (Σ_{i=1}^{n} √(E_x[V_i²]) Δs_i)²
= (∫_0^b √(E_x[V²(t, X_t)]) dt)²,  (5.1)

where the first inequality follows from the Fatou lemma, and the second inequality follows from the Cauchy-Schwarz inequality, i.e.,

E[(Σ_{n=0}^{+∞} X_n)²] ≤ (Σ_{n=0}^{+∞} √(E[X_n²]))².

We obtain the assertion by letting b → +∞ in (5.1).

5.1 Proof of Theorem 3.1 (i)

Let V(t, X_t) = I_{t<δ_i} |f̄(X_t)|. By Lemma 5.1, we have

E_i[(∫_0^{δ_i} |f̄(X_s)| ds)²] ≤ (∫_0^{+∞} √(E_i[I_{t<δ_i} f̄²(X_t)]) dt)².  (5.2)

Let

p = 1 + 2/η, q = 1 + η/2.

It follows from the Hölder inequality that

E_i[I_{t<δ_i} f̄²(X_t)] ≤ (E_i[I_{t<δ_i}])^{1/p} (E_i[|f̄(X_t)|^{2q}])^{1/q}.  (5.3)

Since π(|f̄|^{2+η}) < +∞ and 2q = 2 + η, we have

(E_i[|f̄(X_t)|^{2q}])^{1/q} ≤ ((1/π_i) E_π[|f̄(X_t)|^{2q}])^{1/q} = ((1/π_i) π(|f̄|^{2q}))^{1/q} < +∞.  (5.4)

It follows from the Markov inequality that

E_i[I_{t<δ_i}] = P_i[t < δ_i] ≤ E_i[δ_i^l] / t^l.  (5.5)

From (5.2)–(5.5), we obtain (1.1).

5.2 Proof of Theorem 3.1 (iii)

To bound (1.1), we let the process start at the state i. Recall that J_1 is the first jump time. Define T_0 = 0 and

T_k := inf{t ≥ T_{k−1} + J_1 : X_t = i}, k ≥ 1.

Then T_k denotes the k-th return time to i. Obviously, X_t is an undelayed regenerative process with T_k, k ≥ 0, as the regenerative time points, i.e., the segments {X_t, T_k ≤ t < T_{k+1}}, k ≥ 0, are independent and identically distributed. We divide the proof of this assertion into three steps.

(a) Since

E_x[τ_i] = ∫_0^{+∞} t dP_x[τ_i ≤ t] ≥ s P_x[τ_i > s], s > 0,

and sup_{x∈E} E_x[τ_i] < +∞,

then there exist constants h < +∞ and β < 1 such that

sup_{x∈E} P_x[τ_i ≥ h] < β.

(b) Now, we can use the time length h given by (a) to divide the interval [J_1, δ_i]. Let t_n = J_1 + nh, n ≥ 0. Let us define a Bernoulli random sequence {Z_n, n ≥ 0} as follows: Z_0 = 0, and for n ≥ 1,

Z_n = 1 if X_t = i for some t ∈ [t_{n−1}, t_n), and Z_n = 0 otherwise.

Let N = inf{n ≥ 1 : Z_n = 1} be the first index at which Z_n = 1. Then we have

P[N = n] = ∏_{m=1}^{n−1} P[Z_m = 0] · P[Z_n = 1] ≤ β^{n−1} · 1 = β^{n−1}, n ≥ 1,

where we make the convention that ∏_{m=1}^{0} P[Z_m = 0] = 1. Thus, we have

J_1 + (N − 1)h = t_{N−1} ≤ δ_i ≤ t_N = J_1 + Nh.  (5.6)

(c) Now, we use (5.6) to bound E_i[(∫_0^{δ_i} f̄(X_s) ds)²]:

E_i[(∫_0^{δ_i} f̄(X_s) ds)²] = E_i[(∫_0^{t_0} f̄(X_t) dt + ∫_{t_0}^{δ_i} f̄(X_t) dt)²]
≤ E_i[(∫_0^{t_0} |f̄(X_t)| dt)²] + 2 E_i[∫_0^{t_0} |f̄(X_t)| dt · ∫_{t_0}^{δ_i} |f̄(X_t)| dt] + E_i[(∫_{t_0}^{δ_i} |f̄(X_t)| dt)²]
≤ (1/λ + 1/λ²) f̄²(i) + (2|f̄(i)|/λ) · π(|f̄|)/π_i + D,  (5.7)

where λ = q_i is the rate of the exponentially distributed holding time J_1, and

D = E_i[(∫_{t_0}^{δ_i} |f̄(X_t)| dt)²].

Now, we focus on bounding D. Let

S_k = ∫_{t_{k−1}}^{t_k} |f̄(X_t)| dt, k ≥ 1.

Then we have

D ≤ E_i[(∫_{t_0}^{t_N} |f̄(X_t)| dt)²] = E_i[(Σ_{k=1}^{N} S_k)²] = E_i[Σ_{k=1}^{N} S_k²] + 2 E_i[Σ_{k=1}^{N} Σ_{m=k+1}^{N} S_k S_m].

Conditioning on J_1 and applying Lemma 5.1 gives that for any k ≥ 1,

E_π[S_k²] = ∫_0^{+∞} E_π[S_k² | J_1 = s] dP[J_1 ≤ s] ≤ h² π(f̄²),

which implies

E_i[S_k²] ≤ (1/π_i) E_π[S_k²] ≤ (1/π_i) h² π(f̄²) < +∞.

By conditioning on N, we have

E_i[Σ_{k=1}^{N} S_k²] = Σ_{n=1}^{+∞} E_i[Σ_{k=1}^{n} S_k² ; N = n] ≤ Σ_{n=1}^{+∞} n · (h² π(f̄²)/π_i) β^{n−1} < +∞.

Similarly, we have

E_i[Σ_{k=1}^{N} Σ_{m=k+1}^{N} S_k S_m] ≤ Σ_{n=1}^{+∞} Σ_{k=1}^{n} Σ_{m=k+1}^{n} √(E_i[S_k²] E_i[S_m²]) β^{n−1} ≤ Σ_{n=1}^{+∞} (n²/2) (h² π(f̄²)/π_i) β^{n−1} < +∞,  (5.8)

where we use the Hölder inequality with p = q = 2 in the second inequality. The assertion follows immediately from (5.7) and (5.8).

5.3 Proof of the last part of Theorem 3.1

It follows from [8] and arguments similar to those in the proof of [3, Lemma 3.1] that

σ²(f) = (2/E_i[δ_i]) E_i[∫_0^{δ_i} f̄(X_t) f̂(X_t) dt] = 2π(f̄ f̂).

A theorem in [9] shows that the sequence {x_k, k ∈ E}, given by

x_k = E_k[∫_0^{δ_i} g(X_t) dt],

is a solution of the following equations:

Σ_{k∈E} q_{ik} x_k = −g_i, i ∈ E,  (5.9)

where g is a nonnegative function. [2, Theorem VI.1.2] shows that

E_i[∫_0^{δ_i} f(X_t) dt] = π(f) E_i[δ_i],

which implies that f̂_i = 0. For a real number a, define

a⁺ = max{a, 0}, a⁻ = max{−a, 0}.

Obviously, a = a⁺ − a⁻. By considering

x_k⁺ = E_k[∫_0^{δ_i} f̄⁺(X_t) dt], x_k⁻ = E_k[∫_0^{δ_i} f̄⁻(X_t) dt],

we know that the sequence {x_k = x_k⁺ − x_k⁻, k ∈ E} satisfies the Poisson equation Qx = −f̄. The proof of the theorem is finished.

6 Proof of Theorem 4.1

The first part of Theorem 4.1 follows from Theorem 3.1 directly. We only need to prove the expression (4.1) of the variance constant.

Lemma 6.1 Given a function f on E, the sequence of functions {h_n, n ≥ i}, defined by

h_i = f_i / q_{i,i+1}, h_n = (1/q_{n,n+1}) [f_n + Σ_{k=i}^{n−1} q_n^{(k)} h_k], n ≥ i + 1,  (6.1)

has the following equivalent representation:

h_n = Σ_{k=i}^{n} F_n^{(k)} f_k / q_{k,k+1}, n ≥ i.  (6.2)

Proof We use induction. For n = i,

h_i = f_i / q_{i,i+1} = F_i^{(i)} f_i / q_{i,i+1} = Σ_{k=i}^{i} F_i^{(k)} f_k / q_{k,k+1}.

Assume that (6.2) holds for all n ≤ m. When n = m + 1, from (6.1), we see that

h_{m+1} = (1/q_{m+1,m+2}) [f_{m+1} + Σ_{k=i}^{m} q_{m+1}^{(k)} Σ_{l=i}^{k} F_k^{(l)} f_l / q_{l,l+1}]
= f_{m+1}/q_{m+1,m+2} + Σ_{l=i}^{m} (f_l / q_{l,l+1}) (1/q_{m+1,m+2}) Σ_{k=l}^{m} q_{m+1}^{(k)} F_k^{(l)}
= f_{m+1}/q_{m+1,m+2} + Σ_{l=i}^{m} F_{m+1}^{(l)} f_l / q_{l,l+1}
= Σ_{l=i}^{m+1} F_{m+1}^{(l)} f_l / q_{l,l+1}.

Hence, (6.2) holds for n = m + 1. By induction, we know that (6.2) holds for all n ≥ i.

Lemma 6.2 For a single birth Q-matrix Q = (q_ij), a finite solution x = (x_i, i ∈ E) of Poisson's equation Qx = −g with x_j = 0 is given by

x_i = Σ_{n=i}^{j−1} (Σ_{k=0}^{n} F_n^{(k)} g_k / q_{k,k+1}), i < j;
x_i = 0, i = j;
x_i = −Σ_{n=j}^{i−1} (Σ_{k=0}^{n} F_n^{(k)} g_k / q_{k,k+1}), i > j.  (6.3)

Proof From Poisson's equation Qx = −g, one easily sees that

x_0 − x_1 = g_0 / q_{01}, x_n − x_{n+1} = (1/q_{n,n+1}) [g_n + Σ_{k=0}^{n−1} q_n^{(k)} (x_k − x_{k+1})], n ≥ 1.

So, by Lemma 6.1, we know that

x_n − x_{n+1} = Σ_{k=0}^{n} F_n^{(k)} g_k / q_{k,k+1}, n ≥ 0.

Note that

x_i = x_0 − Σ_{n=0}^{i−1} (x_n − x_{n+1}) = x_0 − Σ_{n=0}^{i−1} (Σ_{k=0}^{n} F_n^{(k)} g_k / q_{k,k+1}), i ≥ 1.

Since x_j = 0, we know that

x_0 = Σ_{n=0}^{j−1} (Σ_{k=0}^{n} F_n^{(k)} g_k / q_{k,k+1}).

Then we obtain the assertion.

Proof of Theorem 4.1 It is known that the sequence x̃ = (x̃_i, i ∈ E), defined by

x̃_i = E_i[∫_0^{δ_j} f̄(X_t) dt],

is a solution of Poisson's equation with x̃_j = 0. Due to the special structure of the single birth processes, we know that Poisson's equation has a unique finite solution with x_j = 0. Hence, x̃ = x, where x is given by (6.3). By (3.1) (with j = 0), we obtain

σ²(f) = −2 Σ_{n=1}^{+∞} π_n f̄_n Σ_{k=0}^{n−1} (Σ_{m=0}^{k} F_k^{(m)} f̄_m / q_{m,m+1}).  (6.4)

The assertion (4.1) follows by exchanging the order of summation on the right-hand side of (6.4) and using the fact that Σ_{k=0}^{+∞} π_k f̄_k = 0.

References

1. Abate J, Whitt W. Spectral theory for skip-free Markov chains. Probab Engrg Inform Sci, 1989, 3(1)
2. Asmussen S. Applied Probability and Queues. 2nd ed. New York: Springer-Verlag, 2003
3. Asmussen S, Bladt M. Poisson's equation for queues driven by a Markovian marked point process. Queueing Syst, 1994, 17
4. Bladt M. The variance constant for the actual waiting time of the PH/PH/1 queue. Ann Appl Probab, 1996, 6(3): 766–777
5. Chen M F. Equivalence of exponential ergodicity and L²-exponential convergence for Markov chains. Stochastic Process Appl, 2000, 87
6. Chen M F. From Markov Chains to Non-equilibrium Particle Systems. 2nd ed. Singapore: World Scientific, 2004
7. Glynn P W. Poisson's equation for the recurrent M/G/1 queue. Adv Appl Probab, 1994, 26
8. Glynn P W, Whitt W. Necessary conditions for limit theorems in cumulative processes. Stochastic Process Appl, 2002, 98
9. Hou Z, Guo Q. Homogeneous Denumerable Markov Processes. New York/Beijing: Springer-Verlag/Science Press, 1988
10. Hou Z, Liu Y. Explicit criteria for several types of ergodicity of the embedded M/G/1 and GI/M/n queues. J Appl Probab, 2004, 41(3)
11. Hou Z, Liu Y, Zhang H. Subgeometric rates of convergence for a class of continuous-time Markov processes. J Appl Probab, 2005, 42(3): 698–712
12. Jiang S, Liu Y, Yao S. Poisson's equation for discrete-time single-birth processes. Statist Probab Lett, 2014, 85
13. Jones G L. On the Markov chain central limit theorem.
Probab Surv, 2004, 1
14. Liu Y. Additive functionals for discrete-time Markov chains with applications to birth-death processes. J Appl Probab, 2011, 48(4)

15. Liu Y, Wang P, Xie Y. Deviation matrix and asymptotic variance for GI/M/1-type Markov chains. Front Math China, 2014, 9(4)
16. Liu Y, Zhang H, Zhao Y. Subgeometric ergodicity for continuous-time Markov chains. J Math Anal Appl, 2010, 368
17. Mao Y H. Ergodic degrees for continuous-time Markov chains. Sci China Ser A, 2004, 47
18. Mao Y H, Zhang Y H. Exponential ergodicity for single birth processes. J Appl Probab, 2004, 41(4)
19. Meyn S P, Tweedie R L. Generalized resolvents and Harris recurrence of Markov processes. In: Cohn H, ed. Doeblin and Modern Probability: Proceedings of the Doeblin Conference. 50 Years After Doeblin: Development in the Theory of Markov Chains, Markov Processes, and Sums of Random Variables. Contemp Math, 149. Providence: Amer Math Soc, 1993
20. Roberts G O, Rosenthal J S. General state space Markov chains and MCMC algorithms. Probab Surv, 2004, 1
21. Whitt W. Asymptotic formulas for Markov processes with applications to simulations. Oper Res, 1992, 40
22. Zhang Y H. Strong ergodicity for single birth processes. J Appl Probab, 2001, 38(1)
23. Zhang Y H. The hitting time and stationary distribution for single birth processes. J Beijing Normal Univ, 2004, 40(2) (in Chinese)
24. Zhang Y H. A note on exponential ergodicity and l-ergodicity of single birth processes. J Beijing Normal Univ, 2010, 46(1): 1–5 (in Chinese)
25. Zhang Y H, Zhao Q Q. Some single birth Q-matrices (I). J Beijing Normal Univ, 2006, 42(2) (in Chinese)


More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

On Differentiability of Average Cost in Parameterized Markov Chains

On Differentiability of Average Cost in Parameterized Markov Chains On Differentiability of Average Cost in Parameterized Markov Chains Vijay Konda John N. Tsitsiklis August 30, 2002 1 Overview The purpose of this appendix is to prove Theorem 4.6 in 5 and establish various

More information

The SIS and SIR stochastic epidemic models revisited

The SIS and SIR stochastic epidemic models revisited The SIS and SIR stochastic epidemic models revisited Jesús Artalejo Faculty of Mathematics, University Complutense of Madrid Madrid, Spain jesus_artalejomat.ucm.es BCAM Talk, June 16, 2011 Talk Schedule

More information

IEOR 6711: Stochastic Models I Professor Whitt, Thursday, November 29, Weirdness in CTMC s

IEOR 6711: Stochastic Models I Professor Whitt, Thursday, November 29, Weirdness in CTMC s IEOR 6711: Stochastic Models I Professor Whitt, Thursday, November 29, 2012 Weirdness in CTMC s Where s your will to be weird? Jim Morrison, The Doors We are all a little weird. And life is a little weird.

More information

Two-sided Bounds for the Convergence Rate of Markov Chains

Two-sided Bounds for the Convergence Rate of Markov Chains UDC 519.217.2 Two-sided Bounds for the Convergence Rate of Markov Chains A. Zeifman, Ya. Satin, K. Kiseleva, V. Korolev Vologda State University, Institute of Informatics Problems of the FRC CSC RAS, ISEDT

More information

Quasi-Stationary Distributions in Linear Birth and Death Processes

Quasi-Stationary Distributions in Linear Birth and Death Processes Quasi-Stationary Distributions in Linear Birth and Death Processes Wenbo Liu & Hanjun Zhang School of Mathematics and Computational Science, Xiangtan University Hunan 411105, China E-mail: liuwenbo10111011@yahoo.com.cn

More information

Applied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system

Applied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system Applied Mathematics Letters 5 (1) 198 1985 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Stationary distribution, ergodicity

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

AARMS Homework Exercises

AARMS Homework Exercises 1 For the gamma distribution, AARMS Homework Exercises (a) Show that the mgf is M(t) = (1 βt) α for t < 1/β (b) Use the mgf to find the mean and variance of the gamma distribution 2 A well-known inequality

More information

Generalized Resolvents and Harris Recurrence of Markov Processes

Generalized Resolvents and Harris Recurrence of Markov Processes Generalized Resolvents and Harris Recurrence of Markov Processes Sean P. Meyn R.L. Tweedie March 1992 Abstract In this paper we consider a ϕ-irreducible continuous parameter Markov process Φ whose state

More information

Markov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015

Markov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015 Markov Chains Arnoldo Frigessi Bernd Heidergott November 4, 2015 1 Introduction Markov chains are stochastic models which play an important role in many applications in areas as diverse as biology, finance,

More information

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads Operations Research Letters 37 (2009) 312 316 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Instability of FIFO in a simple queueing

More information

Other properties of M M 1

Other properties of M M 1 Other properties of M M 1 Přemysl Bejda premyslbejda@gmail.com 2012 Contents 1 Reflected Lévy Process 2 Time dependent properties of M M 1 3 Waiting times and queue disciplines in M M 1 Contents 1 Reflected

More information

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015 Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of

More information

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST J. Appl. Prob. 45, 568 574 (28) Printed in England Applied Probability Trust 28 RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST EROL A. PEKÖZ, Boston University SHELDON

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

Non-homogeneous random walks on a semi-infinite strip

Non-homogeneous random walks on a semi-infinite strip Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our

More information

Bisection Ideas in End-Point Conditioned Markov Process Simulation

Bisection Ideas in End-Point Conditioned Markov Process Simulation Bisection Ideas in End-Point Conditioned Markov Process Simulation Søren Asmussen and Asger Hobolth Department of Mathematical Sciences, Aarhus University Ny Munkegade, 8000 Aarhus C, Denmark {asmus,asger}@imf.au.dk

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Censoring Technique in Studying Block-Structured Markov Chains

Censoring Technique in Studying Block-Structured Markov Chains Censoring Technique in Studying Block-Structured Markov Chains Yiqiang Q. Zhao 1 Abstract: Markov chains with block-structured transition matrices find many applications in various areas. Such Markov chains

More information

Integrals for Continuous-time Markov chains

Integrals for Continuous-time Markov chains Integrals for Continuous-time Markov chains P.K. Pollett Abstract This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space.

More information

Strong approximation for additive functionals of geometrically ergodic Markov chains

Strong approximation for additive functionals of geometrically ergodic Markov chains Strong approximation for additive functionals of geometrically ergodic Markov chains Florence Merlevède Joint work with E. Rio Université Paris-Est-Marne-La-Vallée (UPEM) Cincinnati Symposium on Probability

More information

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes Lectures on Stochastic Stability Sergey FOSS Heriot-Watt University Lecture 4 Coupling and Harris Processes 1 A simple example Consider a Markov chain X n in a countable state space S with transition probabilities

More information

Advanced Computer Networks Lecture 2. Markov Processes

Advanced Computer Networks Lecture 2. Markov Processes Advanced Computer Networks Lecture 2. Markov Processes Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/28 Outline 2/28 1 Definition

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Oscillation Criteria for Delay Neutral Difference Equations with Positive and Negative Coefficients. Contents

Oscillation Criteria for Delay Neutral Difference Equations with Positive and Negative Coefficients. Contents Bol. Soc. Paran. Mat. (3s.) v. 21 1/2 (2003): 1 12. c SPM Oscillation Criteria for Delay Neutral Difference Equations with Positive and Negative Coefficients Chuan-Jun Tian and Sui Sun Cheng abstract:

More information

University of Twente. Faculty of Mathematical Sciences. The deviation matrix of a continuous-time Markov chain

University of Twente. Faculty of Mathematical Sciences. The deviation matrix of a continuous-time Markov chain Faculty of Mathematical Sciences University of Twente University for Technical and Social Sciences P.O. Box 217 75 AE Enschede The Netherlands Phone: +31-53-48934 Fax: +31-53-4893114 Email: memo@math.utwente.nl

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Continuous-Time Markov Chain

Continuous-Time Markov Chain Continuous-Time Markov Chain Consider the process {X(t),t 0} with state space {0, 1, 2,...}. The process {X(t),t 0} is a continuous-time Markov chain if for all s, t 0 and nonnegative integers i, j, x(u),

More information

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a

N.G.Bean, D.A.Green and P.G.Taylor. University of Adelaide. Adelaide. Abstract. process of an MMPP/M/1 queue is not a MAP unless the queue is a WHEN IS A MAP POISSON N.G.Bean, D.A.Green and P.G.Taylor Department of Applied Mathematics University of Adelaide Adelaide 55 Abstract In a recent paper, Olivier and Walrand (994) claimed that the departure

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

Zdzis law Brzeźniak and Tomasz Zastawniak

Zdzis law Brzeźniak and Tomasz Zastawniak Basic Stochastic Processes by Zdzis law Brzeźniak and Tomasz Zastawniak Springer-Verlag, London 1999 Corrections in the 2nd printing Version: 21 May 2005 Page and line numbers refer to the 2nd printing

More information

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Stochastic Stability - H.J. Kushner

CONTROL SYSTEMS, ROBOTICS AND AUTOMATION Vol. XI Stochastic Stability - H.J. Kushner STOCHASTIC STABILITY H.J. Kushner Applied Mathematics, Brown University, Providence, RI, USA. Keywords: stability, stochastic stability, random perturbations, Markov systems, robustness, perturbed systems,

More information

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY J. Korean Math. Soc. 45 (2008), No. 4, pp. 1101 1111 ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY Jong-Il Baek, Mi-Hwa Ko, and Tae-Sung

More information

Asymptotic Irrelevance of Initial Conditions for Skorohod Reflection Mapping on the Nonnegative Orthant

Asymptotic Irrelevance of Initial Conditions for Skorohod Reflection Mapping on the Nonnegative Orthant Published online ahead of print March 2, 212 MATHEMATICS OF OPERATIONS RESEARCH Articles in Advance, pp. 1 12 ISSN 364-765X print) ISSN 1526-5471 online) http://dx.doi.org/1.1287/moor.112.538 212 INFORMS

More information

A Type of Shannon-McMillan Approximation Theorems for Second-Order Nonhomogeneous Markov Chains Indexed by a Double Rooted Tree

A Type of Shannon-McMillan Approximation Theorems for Second-Order Nonhomogeneous Markov Chains Indexed by a Double Rooted Tree Caspian Journal of Applied Mathematics, Ecology and Economics V. 2, No, 204, July ISSN 560-4055 A Type of Shannon-McMillan Approximation Theorems for Second-Order Nonhomogeneous Markov Chains Indexed by

More information

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk ANSAPW University of Queensland 8-11 July, 2013 1 Outline (I) Fluid

More information

Asymptotics and Simulation of Heavy-Tailed Processes

Asymptotics and Simulation of Heavy-Tailed Processes Asymptotics and Simulation of Heavy-Tailed Processes Department of Mathematics Stockholm, Sweden Workshop on Heavy-tailed Distributions and Extreme Value Theory ISI Kolkata January 14-17, 2013 Outline

More information

Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP

Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP Journal of Functional Analysis 253 (2007) 772 781 www.elsevier.com/locate/jfa Note Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP Haskell Rosenthal Department of Mathematics,

More information

Iterative common solutions of fixed point and variational inequality problems

Iterative common solutions of fixed point and variational inequality problems Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (2016), 1882 1890 Research Article Iterative common solutions of fixed point and variational inequality problems Yunpeng Zhang a, Qing Yuan b,

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

arxiv: v1 [math.pr] 1 Jan 2013

arxiv: v1 [math.pr] 1 Jan 2013 The role of dispersal in interacting patches subject to an Allee effect arxiv:1301.0125v1 [math.pr] 1 Jan 2013 1. Introduction N. Lanchier Abstract This article is concerned with a stochastic multi-patch

More information

Stein s Method for Steady-State Approximations: Error Bounds and Engineering Solutions

Stein s Method for Steady-State Approximations: Error Bounds and Engineering Solutions Stein s Method for Steady-State Approximations: Error Bounds and Engineering Solutions Jim Dai Joint work with Anton Braverman and Jiekun Feng Cornell University Workshop on Congestion Games Institute

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

Kesten s power law for difference equations with coefficients induced by chains of infinite order

Kesten s power law for difference equations with coefficients induced by chains of infinite order Kesten s power law for difference equations with coefficients induced by chains of infinite order Arka P. Ghosh 1,2, Diana Hay 1, Vivek Hirpara 1,3, Reza Rastegar 1, Alexander Roitershtein 1,, Ashley Schulteis

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

PITMAN S 2M X THEOREM FOR SKIP-FREE RANDOM WALKS WITH MARKOVIAN INCREMENTS

PITMAN S 2M X THEOREM FOR SKIP-FREE RANDOM WALKS WITH MARKOVIAN INCREMENTS Elect. Comm. in Probab. 6 (2001) 73 77 ELECTRONIC COMMUNICATIONS in PROBABILITY PITMAN S 2M X THEOREM FOR SKIP-FREE RANDOM WALKS WITH MARKOVIAN INCREMENTS B.M. HAMBLY Mathematical Institute, University

More information

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 Name STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 There are five questions on this test. DO use calculators if you need them. And then a miracle occurs is not a valid answer. There

More information

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres

Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Front. Math. China 2017, 12(6): 1409 1426 https://doi.org/10.1007/s11464-017-0644-1 Approximation algorithms for nonnegative polynomial optimization problems over unit spheres Xinzhen ZHANG 1, Guanglu

More information

Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation

Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 6), 936 93 Research Article Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation Weiwei

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

Introduction to self-similar growth-fragmentations

Introduction to self-similar growth-fragmentations Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation

More information

The Continuity of SDE With Respect to Initial Value in the Total Variation

The Continuity of SDE With Respect to Initial Value in the Total Variation Ξ44fflΞ5» ο ffi fi $ Vol.44, No.5 2015 9" ADVANCES IN MATHEMATICS(CHINA) Sep., 2015 doi: 10.11845/sxjz.2014024b The Continuity of SDE With Respect to Initial Value in the Total Variation PENG Xuhui (1.

More information

LIST OF MATHEMATICAL PAPERS

LIST OF MATHEMATICAL PAPERS LIST OF MATHEMATICAL PAPERS 1961 1999 [1] K. Sato (1961) Integration of the generalized Kolmogorov-Feller backward equations. J. Fac. Sci. Univ. Tokyo, Sect. I, Vol. 9, 13 27. [2] K. Sato, H. Tanaka (1962)

More information

A review of Continuous Time MC STA 624, Spring 2015

A review of Continuous Time MC STA 624, Spring 2015 A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

A Functional Central Limit Theorem for an ARMA(p, q) Process with Markov Switching

A Functional Central Limit Theorem for an ARMA(p, q) Process with Markov Switching Communications for Statistical Applications and Methods 2013, Vol 20, No 4, 339 345 DOI: http://dxdoiorg/105351/csam2013204339 A Functional Central Limit Theorem for an ARMAp, q) Process with Markov Switching

More information

Characterizations on Heavy-tailed Distributions by Means of Hazard Rate

Characterizations on Heavy-tailed Distributions by Means of Hazard Rate Acta Mathematicae Applicatae Sinica, English Series Vol. 19, No. 1 (23) 135 142 Characterizations on Heavy-tailed Distributions by Means of Hazard Rate Chun Su 1, Qi-he Tang 2 1 Department of Statistics

More information

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.

More information

THE GEOMETRICAL ERGODICITY OF NONLINEAR AUTOREGRESSIVE MODELS

THE GEOMETRICAL ERGODICITY OF NONLINEAR AUTOREGRESSIVE MODELS Statistica Sinica 6(1996), 943-956 THE GEOMETRICAL ERGODICITY OF NONLINEAR AUTOREGRESSIVE MODELS H. Z. An and F. C. Huang Academia Sinica Abstract: Consider the nonlinear autoregressive model x t = φ(x

More information

STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH SPACES

STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH SPACES Kragujevac Journal of Mathematics Volume 36 Number 2 (2012), Pages 237 249. STRONG CONVERGENCE OF AN IMPLICIT ITERATION PROCESS FOR ASYMPTOTICALLY NONEXPANSIVE IN THE INTERMEDIATE SENSE MAPPINGS IN BANACH

More information

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM Georgian Mathematical Journal Volume 9 (2002), Number 3, 591 600 NONEXPANSIVE MAPPINGS AND ITERATIVE METHODS IN UNIFORMLY CONVEX BANACH SPACES HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM

More information

Optimal scaling of average queue sizes in an input-queued switch: an open problem

Optimal scaling of average queue sizes in an input-queued switch: an open problem DOI 10.1007/s11134-011-9234-1 Optimal scaling of average queue sizes in an input-queued switch: an open problem Devavrat Shah John N. Tsitsiklis Yuan Zhong Received: 9 May 2011 / Revised: 9 May 2011 Springer

More information

A COMPOUND POISSON APPROXIMATION INEQUALITY

A COMPOUND POISSON APPROXIMATION INEQUALITY J. Appl. Prob. 43, 282 288 (2006) Printed in Israel Applied Probability Trust 2006 A COMPOUND POISSON APPROXIMATION INEQUALITY EROL A. PEKÖZ, Boston University Abstract We give conditions under which the

More information