On asymptotic behavior of a finite Markov chain


Alina Nicolae
Department of Mathematical Analysis and Probability, Transilvania University of Braşov, Romania

Keywords: convergence, weak ergodicity, strong ergodicity. AMS: 60J10.

Abstract. We give, in the finite case, three sufficient conditions, namely for convergence, weak ergodicity and strong ergodicity, respectively, of a nonhomogeneous Markov chain, in terms of similar properties of a certain chain of smaller size. We apply our results to simulated annealing to provide new results about its asymptotic behavior.

1. Preliminaries

Consider a finite Markov chain with state space $S = \{1, \dots, r\}$ and transition matrices $(P_n)_{n \geq 1}$. We shall refer to it as the finite Markov chain $(P_n)_{n \geq 1}$. For all integers $m \geq 0$, $n > m$, define
$$P_{m,n} = P_{m+1} P_{m+2} \cdots P_n = \left( (P_{m,n})_{ij} \right)_{i,j \in S}.$$
Assume that the limit
$$\lim_{n \to \infty} P_n = P \tag{1}$$
exists and that the limit matrix $P$ has $p \geq 1$ irreducible aperiodic closed classes and, perhaps, transient states; we mean that it takes the form
$$P = \begin{pmatrix}
S_1 & 0 & \cdots & 0 & 0 \\
0 & S_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & S_p & 0 \\
L_1 & L_2 & \cdots & L_p & T
\end{pmatrix}, \tag{2}$$
where $S_i$, $i = \overline{1,p}$, are the transition matrices of the $p$ irreducible aperiodic closed classes (we write $r_i$ for the number of states of $S_i$), $T$ concerns the chain as long as it stays in the $r - \sum_{t=1}^{p} r_t$ transient states, and the $L_i$ concern transitions from the transient states into the ergodic sets $S_i$, $i = \overline{1,p}$. Markov chains of this type occur in simulated annealing, a stochastic algorithm for global optimization. We refer to van Laarhoven and Aarts ([5]) for a general exposition and historical background.

Definition 1.1. We say that a probability distribution $\mu = (\mu_1, \dots, \mu_r)$ is invariant with respect to an $r \times r$ stochastic matrix $P$ if $\mu P = \mu$.
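To make the block form (2) concrete, here is a minimal numpy sketch; the 5-state matrix with $p = 2$ closed classes and one transient state is a toy example of ours, not taken from the paper.

import numpy as np

# A toy limit matrix P of the block form (2): p = 2 irreducible aperiodic
# closed classes on states {1,2} and {3,4} (0-based {0,1} and {2,3}) and
# one transient state, so r = 5 and r_1 = r_2 = 2.
S1 = np.array([[0.7, 0.3],
               [0.4, 0.6]])
S2 = np.array([[0.5, 0.5],
               [0.2, 0.8]])
P = np.zeros((5, 5))
P[0:2, 0:2] = S1
P[2:4, 2:4] = S2
P[4, :] = [0.1, 0.1, 0.2, 0.1, 0.5]      # the (L_1, L_2, T) row of the transient state
assert np.allclose(P.sum(axis=1), 1.0)   # P is stochastic

# Invariant distribution of S_1 in the sense of Definition 1.1 (mu S_1 = mu),
# computed as the left eigenvector of S_1 for the eigenvalue 1.
w, V = np.linalg.eig(S1.T)
mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
mu /= mu.sum()
print(mu)             # -> [0.5714..., 0.4285...], i.e. (4/7, 3/7)
print(mu @ S1 - mu)   # -> ~[0, 0]: mu is invariant with respect to S_1

The same toy matrix is reused in the sketches below.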

We shall need the following result.

THEOREM 1.2. Consider a finite homogeneous Markov chain on state space $S$ having the transition matrix $P$ of the form (2). Then
$$\lim_{n \to \infty} P^n = \begin{pmatrix}
\Gamma_1 & 0 & \cdots & 0 & 0 \\
0 & \Gamma_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \Gamma_p & 0 \\
\Omega_1 & \Omega_2 & \cdots & \Omega_p & 0
\end{pmatrix}, \tag{3}$$
where $\Gamma_i$ is a strictly positive matrix, each row of which equals the invariant probability vector $\mu^{(i)} = (\mu^{(i)}_1, \dots, \mu^{(i)}_{r_i})$ with respect to the matrix $S_i$, and
$$\Omega_i = \begin{pmatrix}
\mu^{(i)}_1 z_{r_1 + \cdots + r_p + 1,\,i} & \cdots & \mu^{(i)}_{r_i} z_{r_1 + \cdots + r_p + 1,\,i} \\
\vdots & & \vdots \\
\mu^{(i)}_1 z_{r,\,i} & \cdots & \mu^{(i)}_{r_i} z_{r,\,i}
\end{pmatrix}$$
is an $(r - \sum_{t=1}^{p} r_t) \times r_i$ matrix, where $z_{ji}$ is the probability that the chain will enter and, thus, will be absorbed in $S_i$ given that the initial state is the transient state $j$, $j = \overline{\sum_{t=0}^{p} r_t,\, r}$, with the convention $r_0 = 1$, $i = \overline{1,p}$.

Proof. For the form of $\Gamma_i$, $i = \overline{1,p}$, see [2, p. 123] and for $\Omega_i$, $i = \overline{1,p}$, see [4, p. 91].

Remark 1.3. Clearly,
$$z_{ji} \geq 0, \quad j = \overline{\textstyle\sum_{t=0}^{p} r_t,\, r}, \quad i = \overline{1,p}, \tag{4}$$
$$\sum_{i=1}^{p} z_{ji} = 1, \quad j = \overline{\textstyle\sum_{t=0}^{p} r_t,\, r}. \tag{5}$$
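Theorem 1.2 can be checked numerically on the toy matrix above. In the following sketch (ours, under the same toy data) the absorption probabilities $z_{ji}$ are obtained from the standard first-step formula $z_i = (I - T)^{-1} L_i \mathbf{1}$ (cf. [4]), and a high power of $P$ is compared with the limit (3).

import numpy as np
from numpy.linalg import matrix_power, solve, eig

def invariant(Si):
    # Left eigenvector for the eigenvalue 1, normalized to a probability
    # vector: the invariant distribution of Definition 1.1.
    w, V = eig(Si.T)
    mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return mu / mu.sum()

S1 = np.array([[0.7, 0.3], [0.4, 0.6]])
S2 = np.array([[0.5, 0.5], [0.2, 0.8]])
P = np.zeros((5, 5))
P[0:2, 0:2], P[2:4, 2:4] = S1, S2
P[4, :] = [0.1, 0.1, 0.2, 0.1, 0.5]
mu1, mu2 = invariant(S1), invariant(S2)

# Absorption probabilities z_{j,i} of Theorem 1.2 over the transient states,
# by first-step analysis (cf. [4]): z_i = (I - T)^{-1} L_i 1.
T = P[4:, 4:]
z1 = solve(np.eye(1) - T, P[4:, 0:2].sum(axis=1))   # -> [0.4]
z2 = solve(np.eye(1) - T, P[4:, 2:4].sum(axis=1))   # -> [0.6]

Pinf = matrix_power(P, 200)                   # numerically ~ lim P^n
print(np.allclose(Pinf[0, 0:2], mu1))         # rows of Gamma_1 equal mu^(1)
print(np.allclose(Pinf[4, 0:2], z1[0]*mu1))   # Omega_1 row: z_{5,1} mu^(1)
print(np.allclose(Pinf[4, 2:4], z2[0]*mu2))   # Omega_2 row: z_{5,2} mu^(2)
print(np.allclose(Pinf[:, 4], 0.0))           # the transient column vanishes

All four checks print True, in agreement with (3), (4) and (5).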

THEOREM 1.4 ([6]). Let $A = I_r + P$ with $P$ of the form (2). Then there exists a nonsingular $r \times r$ complex matrix $Q$ such that
$$A = Q J Q^{-1}, \tag{6}$$
where $J$ is an $r \times r$ Jordan matrix. $Q$ reads
$$Q = \begin{pmatrix}
1 & 0 & \cdots & 0 & \ast & \cdots & \ast \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
1 & 0 & \cdots & 0 & \ast & \cdots & \ast \\
0 & 1 & \cdots & 0 & \ast & \cdots & \ast \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & \ast & \cdots & \ast \\
z_{r_1+\cdots+r_p+1,\,1} & z_{r_1+\cdots+r_p+1,\,2} & \cdots & z_{r_1+\cdots+r_p+1,\,p} & \ast & \cdots & \ast \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
z_{r,\,1} & z_{r,\,2} & \cdots & z_{r,\,p} & \ast & \cdots & \ast
\end{pmatrix},$$
where the first column contains 1 in the rows $\overline{1, r_1}$, the next $p - 1$ columns contain 1 in the rows $\overline{r_1 + \cdots + r_{i-1} + 1,\ r_1 + \cdots + r_i}$, $i = \overline{2,p}$, and the entries $\ast$ of the last $r - p$ columns are complex numbers. For $z_{ji}$, $j = \overline{\sum_{t=0}^{p} r_t,\, r}$, $i = \overline{1,p}$, we use the meaning given in Theorem 1.2. The inverse $Q^{-1}$ has the form
$$Q^{-1} = \begin{pmatrix}
\mu^{(1)}_1 & \cdots & \mu^{(1)}_{r_1} & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & & & & \ddots & & & & \vdots \\
0 & \cdots & 0 & \mu^{(p)}_1 & \cdots & \mu^{(p)}_{r_p} & 0 & \cdots & 0 \\
q_{p+1,1} & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & q_{p+1,r} \\
\vdots & & & & & & & & \vdots \\
q_{r,1} & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & q_{r,r}
\end{pmatrix},$$
where $\mu^{(i)} = (\mu^{(i)}_1, \dots, \mu^{(i)}_{r_i})$ are the invariant probability vectors with respect to $S_i$, $i = \overline{1,p}$, and the entries $q_{jk}$ of the last $r - p$ rows are complex numbers.

Proof. See [6].

If $A = (A_{ij})$ is an $m \times n$ matrix, then for $M \subseteq \{1, \dots, m\}$, $N \subseteq \{1, \dots, n\}$, $M, N \neq \emptyset$, we define $A_{MN} = (A_{ij})_{i \in M,\, j \in N}$.

Definition 1.5 (see, e.g., [2, p. 217]). A sequence of stochastic matrices $(P_n)_{n \geq 1}$ is said to be weakly ergodic if and only if, for all $m \geq 0$ and all $i, j, k \in S$,
$$\lim_{n \to \infty} \left[ (P_{m,n})_{ik} - (P_{m,n})_{jk} \right] = 0.$$

Definition 1.6 (see, e.g., [2, p. 223]). A sequence of stochastic matrices $(P_n)_{n \geq 1}$ is said to be strongly ergodic if, for all $m \geq 0$ and all $i, j \in S$, the limit
$$\lim_{n \to \infty} (P_{m,n})_{ij} = (\pi_m)_j$$
exists and does not depend on $i$.
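Definition 1.5 can be illustrated numerically: weak ergodicity says precisely that, within each product $P_{m,n}$, the rows coalesce as $n \to \infty$. The sketch below uses a toy sequence of ours, chosen so that $P_n$ converges to a positive stochastic matrix, and tracks the maximal row discrepancy.

import numpy as np

def ergodicity_gap(P):
    # max over k, i, j of |P_ik - P_jk|: the quantity whose vanishing for
    # every P_{m,n} is weak ergodicity in the sense of Definition 1.5.
    return float(max(np.ptp(P[:, k]) for k in range(P.shape[1])))

base = np.array([[0.5, 0.5],
                 [0.3, 0.7]])

def P_n(n):
    # A toy nonhomogeneous sequence: base plus an O(1/n^2), row-sum-
    # preserving perturbation, so P_n stays stochastic and P_n -> base.
    E = np.array([[0.1, -0.1], [-0.1, 0.1]]) / n**2
    return base + E

m = 0
prod = np.eye(2)
for n in range(m + 1, 61):
    prod = prod @ P_n(n)                # P_{m,n} = P_{m+1} P_{m+2} ... P_n
    if n % 20 == 0:
        print(n, ergodicity_gap(prod))  # the gap tends to 0

Since the gap tends to 0 for every $m$ (not just $m = 0$), the sequence is weakly ergodic; here the rows in fact converge to a common limit, so it is strongly ergodic as well (Definition 1.6).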

Remark 1.7 (see, e.g., [2, p. 223]). It is easy to prove that if the relation above holds, then the $(\pi_m)_j$ are also independent of $m \geq 0$.

Definition 1.8 (see, e.g., [6]). We say that a finite Markov chain $(P_n)_{n \geq 1}$ converges if, for all $m \geq 0$, the sequence $(P_{m,n})_{n > m}$ converges.

2. Weak and strong ergodicity results

This section continues an earlier study of the author from [6]. We give sufficient conditions for convergence, weak ergodicity and strong ergodicity, respectively, of a class of nonhomogeneous Markov chains in terms of similar behavior of a certain Markov chain of smaller size.

In the sequel, we consider a nonhomogeneous Markov chain $(P_n)_{n \geq 1}$ on state space $S$ such that $P_n \to P$ as $n \to \infty$. Suppose that $P$ has exactly $p \geq 1$ recurrent aperiodic classes $S_i$, $i = \overline{1,p}$, and, possibly, transient states, i.e., $P$ is of the form (2). Let $\mu^{(i)}$ be the invariant probability vectors with respect to $S_i$ and let $z_{ji}$, $j = \overline{\sum_{t=0}^{p} r_t,\, r}$, $i = \overline{1,p}$, be as in Theorem 1.2. Write $P_n = P + V_n$, $n \geq 1$, where $\lim_{n \to \infty} V_n = 0$. Let $Q$ and $Q^{-1}$ be as in Theorem 1.4. Set
$$\widetilde{V}_n = Q^{-1} V_n Q, \quad n \geq 1, \tag{7}$$
and
$$C_n = I_p + (\widetilde{V}_n)_{MM}, \quad M = \{1, \dots, p\}, \quad n \geq 1. \tag{8}$$

PROPOSITION 2.1. $C_n$ is a stochastic matrix, $n \geq 1$.

THEOREM 2.2 ([6]). In the above context,
$$\lim_{n \to \infty} (P_{m,n})_{ij} = 0, \quad i \in S, \quad j = \overline{\textstyle\sum_{t=0}^{p} r_t,\, r},$$
uniformly with respect to $m \geq 0$. If, moreover,
$$\sum_{n=1}^{\infty} \left\| \left( Q^{-1} V_n Q \right)_{(S \setminus M) M} \right\| < \infty, \tag{9}$$
then the chain $(P_n)_{n \geq 1}$ is convergent.

Proof. See [6].
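The objects (7) and (8) and the summability conditions (9)-(10) can be computed explicitly on the toy chain from Section 1. The sketch below is ours: it assembles a matrix $Q$ with the structure of Theorem 1.4 (the first $p$ columns built from the class indicators and the absorption probabilities $z_{5,1} = 0.4$, $z_{5,2} = 0.6$ found earlier, the remaining columns from eigenvectors for the non-unit eigenvalues) and uses a hand-picked perturbation $V_n$ with $\|V_n\| = O(1/n^2)$, so that the summability condition clearly holds.

import numpy as np

S1 = np.array([[0.7, 0.3], [0.4, 0.6]])
S2 = np.array([[0.5, 0.5], [0.2, 0.8]])
P = np.zeros((5, 5))
P[0:2, 0:2], P[2:4, 2:4] = S1, S2
P[4, :] = [0.1, 0.1, 0.2, 0.1, 0.5]

# First p = 2 columns of Q as in Theorem 1.4: the indicator of the class on
# the ergodic rows, the absorption probabilities z_{j,i} on the transient row.
q1 = np.array([1.0, 1.0, 0.0, 0.0, 0.4])
q2 = np.array([0.0, 0.0, 1.0, 1.0, 0.6])
w, V = np.linalg.eig(P)
rest = np.real(V[:, np.abs(w - 1.0) > 1e-8])  # eigenvectors for eigenvalues != 1
Q = np.column_stack([q1, q2, rest])
Qinv = np.linalg.inv(Q)
print(Qinv[0, :])   # -> ~(4/7, 3/7, 0, 0, 0) = (mu^(1), 0, ..., 0), cf. Theorem 1.4

# A perturbation with zero row sums; P_n = P + D/n^2 is stochastic for all n >= 1.
D = np.zeros((5, 5))
D[0, 0:2], D[1, 0:2] = [0.1, -0.1], [-0.1, 0.1]
D[4, :] = [0.05, -0.05, 0.05, -0.05, 0.0]

M, Mc = [0, 1], [2, 3, 4]                     # M = {1,...,p} in 0-based indexing
total = 0.0
for n in range(1, 2001):
    Vt = Qinv @ (D / n**2) @ Q                # (7): tilde V_n = Q^{-1} V_n Q
    C_n = np.eye(2) + Vt[np.ix_(M, M)]        # (8)
    total += np.abs(Vt[np.ix_(Mc, M)]).sum()  # any fixed matrix norm works here
print(np.allclose(C_n.sum(axis=1), 1.0))      # Proposition 2.1: rows of C_n sum to 1
print(total)                                  # partial sums stay bounded: (9) holds

For this chain, Theorem 2.2 then gives convergence, and the main result below reduces weak and strong ergodicity of $(P_n)_{n \geq 1}$ to those of the $2 \times 2$ chain $(C_n)_{n \geq 1}$.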

The main result of this paper is

THEOREM 2.3. Suppose that
$$\sum_{n=1}^{\infty} \left\| (\widetilde{V}_n)_{(S \setminus M) M} \right\| < \infty. \tag{10}$$
Then:
(i) if $(C_n)_{n \geq 1}$ is weakly ergodic, then the chain $(P_n)_{n \geq 1}$ is weakly ergodic;
(ii) if $(C_n)_{n \geq 1}$ is strongly ergodic, then the chain $(P_n)_{n \geq 1}$ is strongly ergodic.

Remark 2.4. We can apply these results to simulated annealing to describe some properties of its asymptotic behavior.

Acknowledgements. The research was supported in part by the Grant CNCSIS-AT Code 61.

3. Bibliography

[1] Horn, R. and Johnson, C. (1985). Matrix Analysis. Cambridge University Press, New York.
[2] Iosifescu, M. (1980). Finite Markov Processes and Their Applications. Wiley, Chichester and Editura Tehnică, Bucureşti.
[3] Isaacson, D. L. and Madsen, R. W. (1976). Markov Chains: Theory and Applications. Wiley, New York.
[4] Karlin, S. and Taylor, H. M. (1975). A First Course in Stochastic Processes. Academic Press, New York.
[5] van Laarhoven, P. J. M. and Aarts, E. H. L. (1987). Simulated Annealing: Theory and Applications. D. Reidel Publishing Company, Dordrecht, Holland.
[6] Nicolae, A. A sufficient condition for the convergence of a finite Markov chain. To appear.