MARKOV PROCESSES

K. Grill
Institut für Statistik und Wahrscheinlichkeitstheorie, TU Wien, Austria

Keywords: Markov process, Markov chain, Markov property, stopping times, strong Markov property, transition matrix, infinitesimal generator, recurrence, transience, transition operator, infinitesimal operator, Feller process, random walk, birth and death process, Galton-Watson process, diffusion process

Contents

1. Introduction
1.1. The Markov Property and the Transition Function
2. Discrete Markov Chains
2.1. Classification of the States of a Markov Chain
2.2. Recurrence
2.3. Transient States
3. Continuous Time Markov Chains
4. Examples of Markov Chains
4.1. Simple Random Walk
4.2. Birth and Death Processes
4.3. Galton-Watson Processes
5. Stopping Times and the Strong Markov Property
6. Path Properties and Continuity
7. Transition Operators
7.1. Definition and Basic Properties
7.2. The Infinitesimal Operator
7.3. The Resolvent
7.4. Path Properties and the Infinitesimal Operator
8. Examples of Markov Processes
Glossary
Bibliography
Biographical Sketch

Summary

A Markov process is characterized by the property that, given the value of the process at a specified time, the past and the future of the process are conditionally independent. The simplest examples of Markov processes are the ones for which either the set of possible values of the parameter or the set of possible states (or both) is at most countable, the so-called Markov chains. Their discussion leads to a number of basic concepts, which are then extended to the general case. In particular, this leads to the definition of the transition function and the transition operators of a Markov process, and to the infinitesimal operator. An important set of questions is studied next, namely how path properties of the process can be deduced from properties of the transition function or the infinitesimal operator. Finally, the results obtained are applied to a few examples.
1. Introduction

1.1. The Markov Property and the Transition Function

Markov processes can be viewed as a generalization of the notion of independent random variables. They are characterized by the Markov property, which states that the conditional distributions of the past and the future, given the present, are independent. In order to give a formal definition, let T be a subset of the real line (either a finite or infinite interval or the intersection of one of those with the integers; in the sequel, it will be assumed that the left endpoint of this interval is 0); T is called the parameter space of the process. Furthermore, let (ξ(t), t ∈ T) be a stochastic process (i.e., a collection of random variables indexed by t) taking values in some subset X of a Euclidean space; usually, this space - the state space - will be the real line. One can consider more general state spaces, but this generates some additional (mostly technical) difficulties, and will be avoided in this exposition.

The stochastic process ξ(.) is called a Markov process if for all

s_1 < s_2 < ... < s_n < t < t_1 < ... < t_k   (1)

and

A_1, ..., A_n, B_1, ..., B_k ∈ ℬ   (2)

it holds with probability one that

P(ξ(s_i) ∈ A_i, i ≤ n, ξ(t_j) ∈ B_j, j ≤ k | ξ(t)) = P(ξ(s_i) ∈ A_i, i ≤ n | ξ(t)) P(ξ(t_j) ∈ B_j, j ≤ k | ξ(t)).   (3)

There is a number of equivalent definitions in the literature. All of them are just clever ways of saying that, given the value of the process at time t, an event defined in terms of the values of the process before t and one that is defined in terms of the values of the process after t are conditionally independent. One that is very important because it is the simplest to verify is the following: for s_1 < s_2 < ... < s_n < t,

P(ξ(t) ∈ B | ξ(s_1), ..., ξ(s_n)) = P(ξ(t) ∈ B | ξ(s_n)).   (4)

If the parameter space is a subset of the integers, the Markov process is traditionally called a Markov chain. A case of special interest is given by the discrete Markov chains; for these the state space, too, is at most countable, which makes their analytic treatment fairly simple.
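For a chain with finitely many states, property (4) can be checked exactly by elementary arithmetic. The following sketch uses a hypothetical two-state chain; the initial distribution and the one-step transition probabilities (a notion developed in Section 2) are illustrative numbers, not taken from the text. It computes the joint distribution of the first three values and verifies that the conditional distribution of ξ(2) given ξ(0) and ξ(1) depends only on ξ(1):

```python
# Exact check of the Markov property (4) for a hypothetical two-state chain.
# The probabilities below are illustrative choices, not from the text.
from itertools import product

states = [0, 1]
p0 = {0: 0.3, 1: 0.7}                      # initial distribution of xi(0)
P = {0: {0: 0.9, 1: 0.1},                  # P[i][j] = P(xi(n+1)=j | xi(n)=i)
     1: {0: 0.4, 1: 0.6}}

def joint(path):
    """P(xi(0)=path[0], xi(1)=path[1], ..., xi(n)=path[n])."""
    pr = p0[path[0]]
    for i, j in zip(path, path[1:]):
        pr *= P[i][j]
    return pr

# Property (4): P(xi(2)=c | xi(0)=a, xi(1)=b) should equal P(xi(2)=c | xi(1)=b).
for a, b, c in product(states, repeat=3):
    lhs = joint((a, b, c)) / joint((a, b))
    # P(xi(2)=c | xi(1)=b): sum the joint over the first coordinate, normalize.
    num = sum(joint((x, b, c)) for x in states)
    den = sum(joint((x, b)) for x in states)
    rhs = num / den
    assert abs(lhs - rhs) < 1e-12, (a, b, c)
print("Markov property (4) holds for every choice of a, b, c")
```

Both sides reduce to the one-step probability P[b][c], which is exactly what the Markov property asserts: the past value a drops out.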
Another important special case is that of a Markov process with a continuous parameter (the parameter space is a whole interval) and a discrete (i.e., at most countable) state space. These processes are often called continuous parameter (or continuous time) Markov chains (or, putting emphasis on the discreteness of the state space, discrete
Markov chains in continuous time).

The most important notion in the theory of Markov processes is the transition function of the process: a function

P(s, x, t, B),  s, t ∈ T, x ∈ X, B ∈ ℬ,   (5)

is called the transition function of the Markov process ξ(.) if the following conditions hold true:

1. For fixed s, x, t, the function P(s, x, t, .) is a probability measure on ℬ.
2. For fixed s, t, B, the function P(s, ., t, B) is measurable.
3. It holds that P(s, x, s, B) = δ_x(B), the measure that assigns mass 1 to the point x.
4. For any s < t, B ∈ ℬ, the following equation holds with probability one:

P(ξ(t) ∈ B | ξ(s)) = P(s, ξ(s), t, B).   (6)

The question of the existence of a transition function is somewhat hard, and for general Markov processes, a transition function need not exist. In the case considered here - the process taking values in some Euclidean space - there is always a transition function.

The transition function can be interpreted in the following way: P(s, x, t, B) is the probability that the Markov process will be in the set B at time t if it starts from x at time s. If this idea is followed a little further, one arrives at the notion of a family of Markov processes or, for short, a Markov family: this is just the collection of all Markov processes with a given set of transition functions, started at different times and in different states. More precisely, suppose that for each s ∈ T and x ∈ X, there is a probability measure P_{s,x} on the σ-algebra generated by the set of random variables (ξ(t), t ∈ T ∩ [s, ∞)). The collection of probability measures (P_{s,x}) is then called a Markov family with transition function P(., ., ., .) if the following three conditions are fulfilled:

1. The process (ξ(t), t ∈ T ∩ [s, ∞)) is a Markov process with respect to the probability measure P_{s,x}.
2. This process has P(., ., ., .) as its transition function.
3. It starts in x at time s:

P_{s,x}(ξ(s) = x) = 1.   (7)

Now, there is the question of whether a given function P(., ., ., .) is the transition function of a Markov family.
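As a concrete illustration of conditions 1-3, one may take the transition function of a Poisson process (a continuous-time Markov chain of the kind discussed in Section 3): starting from state x at time s, the state at time t is x plus a Poisson-distributed number of jumps with mean λ(t − s). The rate and the truncation bound in the sketch below are illustrative choices, not from the text:

```python
# A concrete transition function satisfying conditions 1-3: the Poisson
# process with rate lam. States are integers; a set B is modeled as a
# Python set. The rate lam = 2.0 is an illustrative choice.
import math

lam = 2.0

def p_point(s, x, t, y):
    """P(s, x, t, {y}): probability of being at y at time t, from x at s."""
    k = y - x
    if k < 0:
        return 0.0            # the Poisson process never moves downward
    u = lam * (t - s)
    if u == 0.0:
        return 1.0 if k == 0 else 0.0   # condition 3: delta measure at x
    return math.exp(-u) * u ** k / math.factorial(k)

def P(s, x, t, B):
    """Transition function P(s, x, t, B) for a set B of states."""
    return sum(p_point(s, x, t, y) for y in B)

# Condition 1: P(s, x, t, .) is a probability measure; the total mass is 1
# (here only approximately, since we truncate the infinite state space).
total = sum(p_point(0.0, 0, 1.5, y) for y in range(0, 60))
print(total)

# Condition 3: P(s, x, s, B) = delta_x(B).
assert P(1.0, 3, 1.0, {3}) == 1.0
assert P(1.0, 3, 1.0, {4, 5}) == 0.0
```

Condition 2 (measurability in x) is trivial here, since the state space is discrete. Note also that P depends on s and t only through t − s, anticipating the homogeneous case of equation (12).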
First, observe that the finite-dimensional distributions of the process can be calculated in terms of the transition function. In fact, for s < t_1 < ... < t_n, it holds that
P_{s,x}(ξ(t_1) ∈ A_1, ..., ξ(t_n) ∈ A_n) = ∫_{A_1} P(s, x, t_1, dx_1) ∫_{A_2} P(t_1, x_1, t_2, dx_2) ⋯ P(t_{n-1}, x_{n-1}, t_n, A_n).   (8)

Now, by Kolmogorov's extension theorem, one can construct a process with these finite-dimensional distributions if and only if they are consistent, i.e., if one adds the conditions

ξ(s_1) ∈ R, ..., ξ(s_k) ∈ R   (9)

in the probability on the left-hand side of (8), then the integral on the right-hand side must not change. This condition is equivalent to the Chapman-Kolmogorov equation

P(s, x, t, A) = ∫ P(s, x, u, dy) P(u, y, t, A)   (s < u < t).   (10)

If one considers only a single Markov process instead of a Markov family, the above argument gets just a little more involved; first, one needs to supply the distribution P_0 of ξ(0). This enters formula (8) in the following way:

P(ξ(t_1) ∈ A_1, ..., ξ(t_n) ∈ A_n) = ∫_R P_0(dx_0) ∫_{A_1} P(0, x_0, t_1, dx_1) ∫_{A_2} P(t_1, x_1, t_2, dx_2) ⋯ P(t_{n-1}, x_{n-1}, t_n, A_n).   (11)

The Chapman-Kolmogorov equations are still important; it is clear that they are sufficient for the existence of a Markov process with the given transition function. A necessary and sufficient condition is obtained by demanding that equation (10) be satisfied for almost all x (with respect to the distribution of ξ(s)).

Of particular importance are those Markov processes whose transition function is only a function of the difference t − s. In other words, one has

P(s, x, t, A) = P(s + h, x, t + h, A) = P(t − s, x, A),   (12)

which means that the dynamics of the process does not change if it is shifted in time. Such a Markov process is called a homogeneous Markov process, or a Markov process with stationary transition probabilities. This assumption is not necessarily a restriction; it is easily seen that from any Markov process one can obtain a homogeneous one by adding the parameter as an additional state variable. In particular, if

ξ(t) = (ξ_1(t), ..., ξ_d(t))   (13)

is a d-dimensional Markov process, then
η(t) = (η_1(t), ..., η_{d+1}(t)) = (ξ_1(t), ..., ξ_d(t), t)   (14)

is a homogeneous Markov process with transition function

P(y_{d+1}, (y_1, ..., y_d), y_{d+1} + t − s, {(x_1, ..., x_d) : (x_1, ..., x_d, y_{d+1} + t − s) ∈ A}).   (15)

Obviously, this depends on s and t only via t − s. In the sequel, only homogeneous Markov processes will be considered.

2. Discrete Markov Chains

The simplest examples of Markov processes are the Markov chains, both in discrete and continuous time. These will be studied next. As stated above, these are Markov processes for which both the parameter space T and the state space X are discrete; without loss of generality one can assume that they are equal to the set of natural numbers (or, for finite state chains, X may be a set of the form {1, ..., n}, in which case the infinite matrices below reduce to ordinary square matrices). Furthermore, as stated above, the Markov chains studied here will be assumed to be homogeneous.

The first observation one makes is that, since a discrete distribution is determined by the probabilities of the singletons, it is sufficient to define the transition function for those; let the transition probabilities be defined as

p_ij(t) = P(t, i, {j}).   (16)

These can be written as a matrix

P(t) = (p_ij(t))_{i,j ∈ X},   (17)

the so-called transition matrix. Using this, the Chapman-Kolmogorov equation can be written in the simple form

P(s + t) = P(s)P(t).   (18)

Denoting P(1) simply by P, one obtains from this

P(t) = P^t.   (19)

In addition, if p(t) denotes the row vector with entries p_i(t) = P(ξ(t) = i), then

p(s + t) = p(s)P(t).   (20)

Thus, the distribution of a discrete Markov chain is completely specified by its one-step transition matrix P.
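Equations (18)-(20) can be checked directly for a finite chain. The following sketch uses a hypothetical three-state one-step matrix (the numbers are illustrative, not from the text) and plain nested lists:

```python
# Sketch of equations (18)-(20): P(t) = P^t and p(s+t) = p(s) P(t),
# for a hypothetical three-state chain.

def mat_mul(A, B):
    """Matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, t):
    """P(t) = P^t, equation (19); t is a nonnegative integer."""
    n = len(P)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # P(0) = I
    for _ in range(t):
        R = mat_mul(R, P)
    return R

def row_times(p, M):
    """Row vector times matrix, as in equation (20)."""
    return [sum(p[k] * M[k][j] for k in range(len(p))) for j in range(len(M[0]))]

P = [[0.5, 0.5, 0.0],   # one-step transition matrix (illustrative numbers)
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]
p = [1.0, 0.0, 0.0]     # distribution of xi(0)

# Equation (18): P(s+t) = P(s)P(t), checked for s = 2, t = 3.
lhs = mat_pow(P, 5)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(3) for j in range(3))

# Equation (20): p(s+t) = p(s)P(t), checked the same way.
p5 = row_times(p, mat_pow(P, 5))
p2_then_3 = row_times(row_times(p, mat_pow(P, 2)), mat_pow(P, 3))
assert all(abs(a - b) < 1e-12 for a, b in zip(p5, p2_then_3))
print("Chapman-Kolmogorov equation in matrix form verified")
```

For a finite chain, (18) is nothing but associativity of matrix multiplication, which is why the one-step matrix P determines the whole dynamics.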
2.1. Classification of the States of a Markov Chain

A state i is said to be a predecessor of another state j if there is a t > 0 such that p_ij(t) > 0, i.e., if it is possible that the process visits j some time in the future if it starts at i. If j is also a predecessor of i, then i and j are said to communicate. This obviously constitutes an equivalence relation between the states of the Markov chain, and the set of states can be split up into the corresponding equivalence classes. Many of the properties that will be studied later are the same for all states in the same equivalence class. This type of property will be called a class property.

A case of particular interest is that of a Markov chain whose states all communicate, or in other words, for which there is only one equivalence class. Such a chain is called irreducible. With many important questions, the general case can be reduced to a study of irreducible Markov chains.

One instance of a class property is periodicity. The period d of a state i is the greatest common divisor of the set of all n such that p_ii(n) > 0. Now, if j and i communicate, there are numbers a and b such that p_ij(a) and p_ji(b) are positive. This implies that

p_ii(a + b) ≥ p_ij(a) p_ji(b) > 0,   (21)

and if p_jj(c) > 0, then also

p_ii(a + b + c) ≥ p_ij(a) p_jj(c) p_ji(b) > 0.   (22)

By the definition of the period, this implies that the period d of i is a divisor of both a + b and a + b + c, hence also of c. This holds for any c with p_jj(c) > 0, so the period of i is a divisor of the period of j, and by reversing the roles of i and j, one finds that both periods are equal.

If the period d of a class is different from zero (it can only be zero if the class contains only one state i with p_ii = 0), then it is readily seen that the class can be divided into d subsets S_0, ..., S_{d−1} such that p_ij is zero except for the case when i ∈ S_k and j ∈ S_{k⊕1} for some k ∈ {0, ..., d − 1}, where ⊕ denotes addition modulo d. If the period of a class equals one, the class is called aperiodic.
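For a finite chain, communicating classes and periods can be computed directly from the one-step matrix, following the definitions above. The matrix below is a hypothetical example (not from the text) with one class of period 2 and one aperiodic class; the bound n² on the return times searched in the period computation is a heuristic that suffices for this small example:

```python
# Sketch: communicating classes and periods from the one-step matrix.
# The four-state matrix is an illustrative example: states 0 and 1 swap
# deterministically (period 2), states 2 and 3 form an aperiodic class.
from math import gcd

P = [[0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 0.5, 0.5]]
n = len(P)

def reachable(i):
    """States j with p_ij(t) > 0 for some t > 0 (i is a predecessor of j)."""
    seen, stack = set(), [j for j in range(n) if P[i][j] > 0]
    while stack:
        j = stack.pop()
        if j not in seen:
            seen.add(j)
            stack.extend(k for k in range(n) if P[j][k] > 0)
    return seen

# i and j communicate iff each is a predecessor of the other.
reach = [reachable(i) for i in range(n)]
classes = []
for i in range(n):
    cls = {j for j in range(n) if (j in reach[i] and i in reach[j]) or j == i}
    if cls not in classes:
        classes.append(cls)

def period(i):
    """gcd of the return times t <= n*n with p_ii(t) > 0 (heuristic bound)."""
    d, cur = 0, {i}
    for t in range(1, n * n + 1):
        cur = {k for j in cur for k in range(n) if P[j][k] > 0}
        if i in cur:
            d = gcd(d, t)
    return d

print(classes)                         # the communicating classes
print([period(i) for i in range(n)])   # periods, equal within each class
```

Running this shows that states in the same class indeed share the same period, as proved via (21) and (22) above.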
Bibliography

Billingsley, P. (1968). Statistical Inference for Markov Processes. 160 pp. Chicago: University of Chicago Press. [This studies questions of estimation and testing for Markov processes depending on a parameter.]

Dynkin, E.B. (1965). Markov Processes I (364 pp.), II (274 pp.). Berlin: Springer. [One of the fundamental reference works.]

Itô, K., and McKean, H.P., Jr. (1974). Diffusion Processes and Their Sample Paths. Berlin: Springer. [A detailed discussion of diffusion processes, i.e., Markov processes generated by a second-order differential operator.]

Karlin, S., and Taylor, H.M. (1974). A First Course in Stochastic Processes. 557 pp. New York: Academic Press. [This is an introduction to the theory of stochastic processes and, in particular, Markov processes.]

Revuz, D. (1975). Markov Chains. 336 pp. Amsterdam: North-Holland. [This discusses Markov chains in discrete time with a general state space.]

Biographical Sketch

Karl Grill received the PhD degree in 1983 from TU Wien. He joined TU Wien in 1982 as Assistant Professor in the Department of Statistics and became an Associate Professor in 1988. During 1991-92 he was a visiting professor at the Department of Statistics, University of Arizona, USA. In 1994 he received a six-month NSERC Foreign Researcher Award at Carleton University, Ottawa, Canada.