Geometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process


Author manuscript, published in "Journal of Applied Probability 50, 2 (2013), 598-601"

L. Hervé and J. Ledoux

12 October 2012

Abstract

In this note, the sequence of interarrival times of a stationary Markovian Arrival Process is shown to be ρ-mixing with a geometric rate of convergence when the driving process is ρ-mixing. This provides an answer to an issue raised in the recent paper [4] on the geometric convergence of the autocorrelation function of the stationary Markovian Arrival Process.

Keywords: Markov renewal process
AMS subject classification: 60J05, 60K15

1 Introduction

We provide a positive answer to a question raised in [4] on the geometric convergence of the autocorrelation function associated with the interarrival times of a stationary m-state Markovian Arrival Process (MAP). Indeed, it is shown in [3, Prop. 3.1] that the increment sequence $\{T_n := S_n - S_{n-1}\}_{n \ge 1}$ associated with a discrete-time stationary Markov additive process $\{(X_n, S_n)\}_{n \in \mathbb{N}}$ on $\mathbb{X} \times \mathbb{R}^d$ is ρ-mixing with a geometric rate provided that the driving stationary Markov chain $\{X_n\}_{n \in \mathbb{N}}$ is ρ-mixing. There, $\mathbb{X}$ may be any measurable set. In the case where the increments $\{T_n\}_{n \ge 1}$ are non-negative random variables, $\{(X_n, S_n)\}_{n \in \mathbb{N}}$ is a Markov Renewal Process (MRP). Therefore, we obtain the expected answer to the question in [4]: such an MRP with $\{T_n\}_{n \ge 1}$ as the interarrival times can be associated with an m-state MAP, and the ρ-mixing property of $\{T_n\}_{n \ge 1}$ with geometric rate ensures the geometric convergence of the autocorrelation function of $\{T_n\}_{n \ge 1}$. We refer to [1, Chap. XI] for basic properties of MAPs and Markov additive processes.

IRMAR UMR-CNRS 6625 & INSA, 20 avenue des Buttes de Coesmes, CS 70 839, 35708 Rennes cedex 7, France
Email address: {Loic.Herve,James.Ledoux}@insa-rennes.fr

2 Geometric ρ-mixing of the sequence of interarrivals of an MAP

Let us recall the definition of the ρ-mixing property of a (strictly) stationary sequence of random variables $\{T_n\}_{n \ge 1}$ (see e.g. [2]). The ρ-mixing coefficient with time lag $k > 0$, usually denoted by $\rho(k)$, is defined by

$$\rho(k) := \sup_{n \ge 1}\ \sup_{m \in \mathbb{N}}\ \sup\Big\{ \mathrm{Corr}\big(f(T_1,\dots,T_n);\, h(T_{n+k},\dots,T_{n+k+m})\big) : f, h\ \mathbb{R}\text{-valued functions such that}$$
$$\mathbb{E}\big[f(T_1,\dots,T_n)^2\big] \text{ and } \mathbb{E}\big[h(T_{n+k},\dots,T_{n+k+m})^2\big] \text{ are finite} \Big\} \qquad (1)$$

where $\mathrm{Corr}\big(f(T_1,\dots,T_n); h(T_{n+k},\dots,T_{n+k+m})\big)$ is the correlation coefficient of the two square-integrable random variables. Note that $\{\rho(k)\}_{k \ge 1}$ is a non-increasing sequence. Then $\{T_n\}_{n \ge 1}$ is said to be ρ-mixing if $\lim_{k \to +\infty} \rho(k) = 0$.

When, for any $n \ge 1$, the random variable $T_n$ has a moment of order 2, the autocorrelation function of $\{T_n\}_{n \ge 1}$ as studied in [4], that is $\mathrm{Corr}(T_1; T_{k+1})$ as a function of the time lag $k$, clearly satisfies

$$\forall k \ge 1, \quad \mathrm{Corr}(T_1; T_{k+1}) \le \rho(k). \qquad (2)$$

Therefore, any rate of convergence of the ρ-mixing coefficients $\{\rho(k)\}_{k \ge 1}$ is a rate of convergence for the autocorrelation function.

We only outline the main steps to obtain from [3, Prop. 3.1] a geometric convergence rate of $\{\rho(k)\}_{k \ge 1}$ for the m-state MRP $\{(X_n, S_n)\}_{n \in \mathbb{N}}$ associated with an m-state MAP. In [4, Section 2], the analysis of the autocorrelation function in the two-state case is based on such an MRP (notation and background in [4] are those of [5]).

Recall that an m-state MAP is a bivariate continuous-time Markov process $\{(J_t, N_t)\}_{t \ge 0}$ on $\{1,\dots,m\} \times \mathbb{N}$, where $N_t$ represents the number of arrivals up to time $t$, while the states of the driving Markov process $\{J_t\}_{t \ge 0}$ are called phases. Let $S_n$ be the time of the $n$th arrival ($S_0 = 0$ a.s.) and let $X_n$ be the state of the driving process just after the $n$th arrival. Then $\{(X_n, S_n)\}_{n \in \mathbb{N}}$ is known to be an MRP with the following semi-Markov kernel $Q$ on $\{1,\dots,m\} \times [0,\infty)$:

$$\forall (x_1, x_2) \in \{1,\dots,m\}^2, \quad Q(x_1; \{x_2\} \times dy) := \big(e^{D_0 y} D_1\big)(x_1, x_2)\, dy \qquad (3)$$

parametrized by a pair of $m \times m$ matrices usually denoted by $D_0$ and $D_1$. The matrix $D_0 + D_1$ is the infinitesimal generator of the background Markov process $\{J_t\}_{t \ge 0}$, which is always assumed to be irreducible, and $D_0$ is stable. The process $\{X_n\}_{n \in \mathbb{N}}$ is a Markov chain with state space $\mathbb{X} := \{1,\dots,m\}$ and transition probability matrix $P$:

$$\forall (x_1, x_2) \in \mathbb{X}^2, \quad P(x_1, x_2) = Q\big(x_1; \{x_2\} \times [0,\infty)\big) = \big((-D_0)^{-1} D_1\big)(x_1, x_2). \qquad (4)$$
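As a quick numerical illustration of formulas (3)-(4), the following Python sketch builds the embedded transition matrix $P = (-D_0)^{-1} D_1$ and its invariant probability measure $\phi$ for a hypothetical pair of $2 \times 2$ matrices $D_0, D_1$; the values are chosen here only so that $D_0 + D_1$ is an irreducible generator and are not taken from the note or from [4].

    import numpy as np

    # Hypothetical 2-state MAP parameters, chosen only for illustration:
    # off-diagonal entries of D0 and all entries of D1 are non-negative,
    # and D0 + D1 is a conservative generator (zero row sums).
    D0 = np.array([[-3.0, 1.0],
                   [0.5, -2.0]])
    D1 = np.array([[1.5, 0.5],
                   [0.5, 1.0]])
    assert np.allclose((D0 + D1).sum(axis=1), 0.0)

    # Formula (4): transition matrix of the chain {X_n} observed just after arrivals.
    P = np.linalg.solve(-D0, D1)            # P = (-D0)^{-1} D1
    assert np.allclose(P.sum(axis=1), 1.0)  # P is stochastic

    # Invariant probability measure phi of P (left eigenvector for eigenvalue 1).
    w, v = np.linalg.eig(P.T)
    phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    phi = phi / phi.sum()

    print("P =\n", P)
    print("phi =", phi)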

$\{X_n\}_{n \in \mathbb{N}}$ has an invariant probability measure $\phi$ (i.e. $\phi P = \phi$). It is well known that, for $n \ge 1$, the interarrival time $T_n := S_n - S_{n-1}$ has a moment of order 2 (whatever the probability distribution of $X_0$). We refer to [1] for details about the above basic facts on an MAP and its associated MRP.

Let us introduce the $m \times m$ matrix

$$\Phi := e\,\phi \qquad (5)$$

where $e$ is the $m$-dimensional column vector with all components equal to 1, so that $e\phi$ is an $m \times m$ matrix. Any $\mathbb{R}$-valued function $v$ on $\mathbb{X}$ may be identified with a vector of $\mathbb{R}^m$. We use the subordinate matrix norm induced by the $\ell^2(\phi)$-norm $\|v\|_2 := \big(\sum_{x \in \mathbb{X}} |v(x)|^2 \phi(x)\big)^{1/2}$ on $\mathbb{R}^m$:

$$\|M\|_2 := \sup_{v:\ \|v\|_2 = 1} \|Mv\|_2.$$

Let $\mathbb{E}_\phi$ be the expectation with respect to the initial conditions $(X_0, S_0) \sim (\phi, \delta_0)$. Recall that $T_n := S_n - S_{n-1}$ for $n \ge 1$. When $X_0 \sim \phi$, we have (see [3, Section 3]):

1. If $g$ is an $\mathbb{R}$-valued function such that $\mathbb{E}\big[|g(X_1, T_1, \dots, X_n, T_n)|\big] < \infty$, then for all $k \ge 0$ and $n \ge 1$

$$\mathbb{E}\big[g(X_{k+1}, T_{k+1}, \dots, X_{k+n}, T_{k+n}) \mid \sigma(X_l, T_l : l \le k)\big] = \int_{(\mathbb{X} \times [0,\infty))^n} Q(X_k; dx_1\, dz_1) \prod_{i=2}^{n} Q(x_{i-1}; dx_i\, dz_i)\, g(x_1, z_1, \dots, x_n, z_n) =: (Q^n)(g)(X_k) \qquad (6)$$

where $Q^n$ denotes the $n$-fold kernel product $\bigotimes_{i=1}^{n} Q$ of the kernel $Q$ defined in (3).

2. Let $f$ and $h$ be two $\mathbb{R}$-valued functions such that $\mathbb{E}_\phi\big[f(T_1,\dots,T_n)^2\big] < \infty$ and $\mathbb{E}_\phi\big[h(T_{n+k},\dots,T_{n+k+m})^2\big] < \infty$ for $(k, n) \in (\mathbb{N}^*)^2$ and $m \in \mathbb{N}$. From (6) with $g(x_1, z_1, \dots, x_{n+k+m}, z_{n+k+m}) \equiv f(z_1,\dots,z_n)\, h(z_{n+k},\dots,z_{n+k+m})$, the process $\{T_n\}_{n \ge 1}$ is stationary and the following covariance formula holds (see [3, Lem. 3.3] for details):

$$\mathrm{Cov}\big(f(T_1,\dots,T_n);\, h(T_{n+k},\dots,T_{n+k+m})\big) = \mathbb{E}_\phi\Big[f(T_1,\dots,T_n)\, \big((P^{k-1} - \Phi)\, Q^{m+1}(h)\big)(X_n)\Big] \qquad (7)$$

where the matrices $P$ and $\Phi$ are defined in (4) and (5).

First, note that the random variables $f(\cdot)$ and $h(\cdot)$ in (1) may be assumed to be of $L^2$-norm 1. Thus we just have to deal with covariances.
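For readers who wish to check the above objects numerically, here is a small Python sketch of the rank-one matrix $\Phi = e\phi$ of (5) and of the subordinate $\ell^2(\phi)$ norm; the 2-state matrix $P$ and its invariant law $\phi$ are hypothetical values used only for illustration. The norm is computed through the standard identity $\|M\|_2 = \|\mathrm{diag}(\phi)^{1/2} M\, \mathrm{diag}(\phi)^{-1/2}\|$, where the right-hand side is the usual spectral norm.

    import numpy as np

    def l2_phi_norm(M, phi):
        """Subordinate norm of M induced by ||v||_2^2 = sum_x v(x)^2 * phi(x).
        It equals the spectral norm of diag(phi)^{1/2} M diag(phi)^{-1/2}."""
        d = np.sqrt(phi)
        return np.linalg.norm((M * d[:, None]) / d[None, :], ord=2)

    # Hypothetical 2-state example: phi is the invariant law of P (phi P = phi).
    phi = np.array([0.4, 0.6])
    P = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
    Phi = np.ones((2, 1)) @ phi[None, :]   # rank-one matrix e*phi of formula (5)

    # Phi maps a function v on X to the constant function with value E_phi[v]:
    v = np.array([2.0, -1.0])
    print(Phi @ v, phi @ v)                # both coordinates equal E_phi[v] = 0.2

    print(l2_phi_norm(P - Phi, phi))       # ||P - Phi||_2 = 0.5 here, strictly below 1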

Second, the Cauchy-Schwarz inequality and Formula (7) allow us to write

$$\Big|\mathrm{Cov}\big(f(T_1,\dots,T_n);\, h(T_{n+k},\dots,T_{n+k+m})\big)\Big|^2 \le \mathbb{E}_\phi\big[f(T_1,\dots,T_n)^2\big]\; \mathbb{E}_\phi\Big[\Big(\big((P^{k-1} - \Phi)\, Q^{m+1}(h)\big)(X_n)\Big)^2\Big]$$
$$= \mathbb{E}_\phi\Big[\Big(\big((P^{k-1} - \Phi)\, Q^{m+1}(h)\big)(X_0)\Big)^2\Big] \qquad (\mathbb{E}_\phi[f(T_1,\dots,T_n)^2] = 1 \text{ and } \phi \text{ is } P\text{-invariant})$$
$$= \big\|(P^{k-1} - \Phi)\, Q^{m+1}(h)\big\|_2^2$$
$$\le \big\|P^{k-1} - \Phi\big\|_2^2\; \big\|Q^{m+1}(h)\big\|_2^2 \;\le\; \big\|P^{k-1} - \Phi\big\|_2^2 \qquad (\text{since } \|Q^{m+1}(h)\|_2 \le 1).$$

Therefore, we obtain from (1) and (2) that the autocorrelation coefficient $\mathrm{Corr}(T_1; T_{k+1})$ as studied in [4] satisfies

$$\forall k \ge 1, \quad \mathrm{Corr}(T_1; T_{k+1}) \le \rho(k) \le \big\|P^{k-1} - \Phi\big\|_2. \qquad (8)$$

The convergence rate to 0 of the sequence $\{\mathrm{Corr}(T_1; T_{k+1})\}_{k \ge 1}$ is thus bounded from above by that of $\{\|P^{k-1} - \Phi\|_2\}_{k \ge 1}$. Under the usual assumptions on the MAP, $\{X_n\}_{n \in \mathbb{N}}$ is irreducible and aperiodic, so that there exists $r \in (0,1)$ such that

$$\big\|P^{k} - \Phi\big\|_2 = O(r^k) \qquad (9)$$

with $r = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } P \text{ such that } |\lambda| < 1\}$. For a stationary Markov chain $\{X_n\}_{n \in \mathbb{N}}$ with general state space, we know from [6, pp. 200, 207] that Property (9) is equivalent to the ρ-mixing property of $\{X_n\}_{n \in \mathbb{N}}$.

3 Comments on [4]

In [4], the analysis is based on a known explicit formula for the correlation function in terms of the parameters of the m-state MRP (see [4, (2.6)]). Note that this formula can be obtained by taking $n = 1$, $m = 0$ and $f(T_1) = T_1$, $h(T_{1+k}) = T_{1+k}$ in (7). When $m := 2$ and under standard assumptions on MAPs, the matrix $P$ is diagonalizable with two distinct real eigenvalues, 1 and $\lambda$ with $0 < \lambda < 1$, where $\lambda$ has an explicit form in terms of the entries of $P$. The authors can then analyze the correlation function with respect to the entries of the matrix $P$ [4, (3.4)-(3.7)]. As noted by the authors, such an analysis would be tedious and difficult for $m > 2$ because of the increasing number of parameters defining an m-state MAP. Note that Inequality (8) and Estimate (9) with $m := 2$ provide the same convergence rate as in [4], namely $\lambda$, the second eigenvalue of the matrix $P$.
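As a numerical sanity check of the two-state discussion, the Python sketch below uses a hypothetical matrix $P$ (not the one of [4]) to verify that a $2 \times 2$ stochastic matrix has eigenvalues 1 and $\lambda = \mathrm{trace}(P) - 1$, and that $P^k$ converges to $\Phi = e\phi$ geometrically at rate $|\lambda|^k$, in line with (8)-(9).

    import numpy as np

    # Hypothetical 2-state transition matrix of the embedded chain (illustration only).
    P = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
    phi = np.array([0.4, 0.6])            # invariant law: phi P = phi for these numbers
    Phi = np.ones((2, 1)) @ phi[None, :]  # rank-one limit matrix e*phi

    # A 2x2 stochastic matrix has eigenvalues 1 and lam = trace(P) - 1, which is the
    # explicit second eigenvalue referred to above.
    lam = np.trace(P) - 1.0
    print(np.linalg.eigvals(P), lam)      # eigenvalues {1, 0.5} and lam = 0.5

    # In the diagonalizable 2-state case P^k - Phi = lam^k (I - Phi), so every matrix
    # norm of P^k - Phi decays proportionally to |lam|^k, as in (9).
    for k in range(1, 6):
        gap = np.abs(np.linalg.matrix_power(P, k) - Phi).max()
        print(k, gap, abs(lam) ** k)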

References

[1] Asmussen, S. (2003) Applied Probability and Queues, 2nd edn. Springer-Verlag, New York.
[2] Bradley, R. C. (2005) Introduction to Strong Mixing Conditions (Volume I). Technical report, Indiana University.
[3] Ferré, D., Hervé, L. and Ledoux, J. (2012) Limit theorems for stationary Markov processes with $L^2$-spectral gap. Ann. Inst. H. Poincaré Probab. Statist., 48, 396-423.
[4] Ramirez-Cobo, P. and Carrizosa, E. (2012) A note on the dependence structure of the two-state Markovian arrival process. J. Appl. Probab., 49, 295-302.
[5] Ramirez-Cobo, P., Lillo, R. E. and Wiper, M. (2010) Nonidentifiability of the two-state Markovian arrival process. J. Appl. Probab., 47, 630-649.
[6] Rosenblatt, M. (1971) Markov Processes. Structure and Asymptotic Behavior. Springer-Verlag, New York.