Part IB Michaelmas 2009 YMS
MARKOV CHAINS
E-mail: yms@statslab.cam.ac.uk

7 Convergence to equilibrium. Long-run proportions

Convergence to equilibrium for irreducible, positive recurrent, aperiodic chains, and proof by coupling. Long-run proportion of time spent in a given state.

Convergence to equilibrium means that, as time progresses, the Markov chain 'forgets' about its initial distribution $\lambda$. In particular, if $\lambda = \delta_i$, the Dirac delta concentrated at $i$, the chain forgets about the initial state $i$. Clearly, this is related to properties of the $n$-step matrix $P^n$ as $n \to \infty$. Consider first the case of a finite chain.

Theorem 7.1 Suppose that, for a finite $m \times m$ transition matrix $P$, the powers $P^n$ converge, in each entry, to a limiting matrix $P^\infty = (\pi_{ij})$:
$$\lim_{n\to\infty} p^{(n)}_{ij} = \pi_{ij}, \quad \forall\, i, j \in I. \qquad (7.1)$$
Then (a) every row $\pi^{(i)}$ of $P^\infty$ is an equilibrium distribution: $\pi^{(i)} P = \pi^{(i)}$, or $\pi_{ij} = \sum_l \pi_{il}\, p_{lj}$.
(b) If $P$ is irreducible then all rows $\pi^{(i)}$ coincide: $\pi^{(1)} = \cdots = \pi^{(m)} = \pi$. In this case,
$$\lim_{n\to\infty} \mathbb{P}(X_n = j) = \pi_j \quad \forall\, j \in I \ \text{and} \ \forall \ \text{initial distribution } \lambda.$$

Proof. (a) $\forall$ state $j$ we have
$$\bigl(\pi^{(i)} P\bigr)_j = \sum_{l \in I} \pi_{il}\, p_{lj} = \sum_l \Bigl(\lim_{n\to\infty} p^{(n)}_{il}\Bigr) p_{lj} = \lim_{n\to\infty} \sum_l p^{(n)}_{il}\, p_{lj} = \lim_{n\to\infty} p^{(n+1)}_{ij} = \pi_{ij} = \bigl(\pi^{(i)}\bigr)_j. \qquad (7.2)$$
(b) If $P$ is irreducible then all rows $\pi^{(i)}$ of $P^\infty$ coincide, as there is a unique equilibrium distribution. Also,
$$\lim_{n\to\infty} \mathbb{P}(X_n = j) = \lim_{n\to\infty} \sum_i \lambda_i\, p^{(n)}_{ij} = \sum_i \lambda_i \lim_{n\to\infty} p^{(n)}_{ij} = \pi_j. \qquad (7.3)$$

For a countable chain, our argument in Eq. (7.2) requires a justification of exchanging the order of the limit and summation. I'll omit this argument: the reader can find it in the recommended literature.

We see from Theorem 7.1 that the equilibrium distribution of a chain can be identified from the limit of the matrices $P^n$ as $n \to \infty$. More precisely, if we know that $P^n$ converges to a matrix $P^\infty$ whose rows are equal to each other, then these rows give the equilibrium distribution $\pi$. We see therefore that convergence $P^n \to \Pi$, where $\Pi$ has the structure of a matrix with every row equal to $\pi$, is a crucial factor.

So when does $P^n \to \Pi$? A simple counterexample:
$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \quad \text{Here } P^n = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \ n \text{ even}, \qquad P^n = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \ n \text{ odd}. \qquad (7.4)$$
In this case the equilibrium distribution is unique: $\pi = (1/2, 1/2)$, but there is no convergence $P^n \to \Pi$, as $P$ is periodic (of period 2).
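As a quick numerical illustration of Theorem 7.1 and of the periodic counterexample (7.4) (an illustrative sketch, not part of the original notes), the following Python snippet computes powers $P^n$ for the two-state periodic matrix above and for a "lazy" aperiodic modification chosen here for the demonstration; the lazy chain has the same equilibrium distribution $\pi = (1/2, 1/2)$ and its powers converge to the matrix $\Pi$ with both rows equal to $\pi$, while the periodic one oscillates.

```python
import numpy as np

# Periodic two-state chain from (7.4): P^n oscillates between I and P.
P_periodic = np.array([[0.0, 1.0],
                       [1.0, 0.0]])

# A "lazy" aperiodic modification (stay put with probability 0.3):
# same equilibrium distribution pi = (1/2, 1/2), but now P^n converges.
P_lazy = 0.3 * np.eye(2) + 0.7 * P_periodic

for n in (1, 2, 10, 11, 50):
    print(f"n = {n}")
    print("  periodic P^n:\n", np.linalg.matrix_power(P_periodic, n))
    print("  lazy     P^n:\n", np.linalg.matrix_power(P_lazy, n))
```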

More generally, consider the $m \times m$ matrix
$$P = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{pmatrix}$$
corresponding to Fig 7.1.

[Figure 7.1: the cyclic transition diagram of $P$ on states $1, 2, 3, \dots, m$.]

Then $P^2$ will correspond to Fig 7.2. Similarly for higher powers, with the $m$th power $P^m = I$.

[Figures 7.2, 7.3: the transition diagrams for $P^2$ and for higher powers of $P$.]

The picture will then be repeated mod $m$. Again, the equilibrium distribution is unique: $\pi = (1/m, \dots, 1/m)$, but the convergence $P^n \to \Pi$ fails.

Definition 7.1 A transition matrix $P$ is called aperiodic if
$$\forall\, i \in I:\quad p^{(n)}_{ii} > 0 \ \text{ for all } n \text{ large enough}. \qquad (7.5)$$
If, in addition, $P$ is irreducible, then
$$\forall\, i, j \in I:\quad p^{(n)}_{ij} > 0 \ \text{ for all } n \text{ large enough}. \qquad (7.6)$$

Theorem 7.2 Assume $P$ is irreducible, aperiodic and positive recurrent. Then, as $n \to \infty$, $P^n \to \Pi$. The entries of the limiting matrix $\Pi$ are constant along columns. In other words, the rows of $\Pi$ are repetitions of the same vector $\pi$, which is the (unique) equilibrium distribution for $P$. Hence, the irreducible, aperiodic and positive recurrent Markov chain forgets its initial distribution: $\forall\, \lambda$ and $j \in I$,
$$\lim_{n\to\infty} \mathbb{P}(X_n = j) = \pi_j.$$

Proof: a sketch. (Non-examinable but useful in many situations.) Consider two Markov chains: $(X^{(i)}_n)$, which is $(\delta_i, P)$, and $(X^\pi_n)$, which is $(\pi, P)$. Then
$$p^{(n)}_{ij} = \mathbb{P}\bigl(X^{(i)}_n = j\bigr), \qquad \pi_j = \mathbb{P}(X^\pi_n = j).$$
To evaluate the difference between these probabilities, we will identify their common part, by coupling the two Markov chains, i.e. running them together. One way is to run both chains independently. It means that we consider the Markov chain $(Y_n)$ on $I \times I$, with states $(k, l)$ where $k, l \in I$, with the transition probabilities
$$p^Y_{(k,l)(u,v)} = p_{ku}\, p_{lv}, \quad k, l, u, v \in I, \qquad (7.7)$$
and with the initial distribution
$$\mathbb{P}(Y_0 = (k, l)) = \mathbf{1}(k = i)\,\pi_l, \quad k, l \in I.$$
However, a better way for us is to run the chain $(W_n)$ where the transition probabilities are
$$p^W_{(k,l)(u,v)} = \begin{cases} p_{ku}\, p_{lv}, & \text{if } k \ne l,\\ p_{ku}\,\mathbf{1}(u = v), & \text{if } k = l, \end{cases} \qquad k, l, u, v \in I, \qquad (7.8)$$
with the same initial distribution
$$\mathbb{P}(W_0 = (k, l)) = \mathbf{1}(k = i)\,\pi_l, \quad k, l \in I. \qquad (7.9)$$
Indeed, Eq. (7.8) determines a transition probability matrix on $I \times I$: all entries $p^W_{(k,l)(u,v)} \ge 0$ and the sum along a row equals one. In fact,
$$\sum_{u,v \in I} p^W_{(k,l)(u,v)} = \begin{cases} \sum_u p_{ku} \sum_v p_{lv} = 1, & \text{if } k \ne l,\\ \sum_u p_{ku} = 1, & \text{if } k = l. \end{cases}$$
Further, the partial summation gives the original transition probabilities $P$:
$$\sum_{v \in I} p^W_{(k,l)(u,v)} = p_{ku}, \qquad \sum_{u \in I} p^W_{(k,l)(u,v)} = p_{lv}.$$
Pictorially, the two components of the chain $(W_n)$ behave individually like $(X^{(i)}_n)$ and $(X^\pi_n)$; together they evolve independently (i.e. as $(Y_n)$) until the (random) time $T$ when they coincide,
$$T = \inf\bigl[n \ge 1 : X^{(i)}_n = X^\pi_n\bigr],$$
after which they stay together. Therefore, writing
$$\mathbb{P}^W\bigl(X^{(i)}_n = j\bigr) = \mathbb{P}^W\bigl(X^{(i)}_n = j, T \le n\bigr) + \mathbb{P}^W\bigl(X^{(i)}_n = j, T > n\bigr) \qquad (7.10)$$
and
$$\mathbb{P}^W(X^\pi_n = j) = \mathbb{P}^W(X^\pi_n = j, T \le n) + \mathbb{P}^W(X^\pi_n = j, T > n), \qquad (7.11)$$
we see that the first summands cancel each other:
$$\mathbb{P}^W\bigl(X^{(i)}_n = j, T \le n\bigr) = \mathbb{P}^W(X^\pi_n = j, T \le n),$$
as the events $\bigl\{X^{(i)}_n = j, T \le n\bigr\}$ and $\{X^\pi_n = j, T \le n\}$ coincide. Hence
$$p^{(n)}_{ij} - \pi_j = \mathbb{P}^W\bigl(X^{(i)}_n = j, T > n\bigr) - \mathbb{P}^W(X^\pi_n = j, T > n)$$
and
$$\bigl|p^{(n)}_{ij} - \pi_j\bigr| \le \mathbb{P}^W(T > n) = \mathbb{P}^Y(T > n). \qquad (7.12)$$
The last bound is called the coupling inequality. Thus, it suffices to check that $\mathbb{P}^W(T > n) \to 0$, i.e. $\mathbb{P}(T < \infty) = 1$. This is established by using the fact that the original matrix $P$ is irreducible and aperiodic. (I omit the details.)
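The coupling argument lends itself to a simple simulation (an illustrative sketch, not part of the notes): run two copies of a small chain, one started from a fixed state $i$ and one started from $\pi$, independently until they meet; the empirical frequency of $\{T > n\}$ then bounds $\max_j |p^{(n)}_{ij} - \pi_j|$ as in the coupling inequality (7.12). The particular $3 \times 3$ matrix below is an arbitrary choice made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small irreducible, aperiodic transition matrix (arbitrary example).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
# Equilibrium distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

def coupling_time(i, n_max=1000):
    """Run X from state i and X' from pi independently until they meet."""
    x = i
    y = rng.choice(3, p=pi)
    for t in range(1, n_max + 1):
        x = rng.choice(3, p=P[x])
        y = rng.choice(3, p=P[y])
        if x == y:
            return t
    return n_max  # not met within n_max steps

T = np.array([coupling_time(i=0) for _ in range(5000)])
for n in (1, 2, 5, 10):
    bound = np.mean(T > n)  # Monte Carlo estimate of P(T > n)
    exact = np.abs(np.linalg.matrix_power(P, n)[0] - pi).max()
    print(f"n={n}: coupling bound ~ {bound:.3f},  max_j |p^(n)_0j - pi_j| = {exact:.3f}")
```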

In the case of a finite irreducible aperiodic chain it is possible to establish that the rate of convergence of $p^{(n)}_{ij}$ to $\pi_j$ is geometric. In fact, in this case $\exists\, m \ge 1$ and $\rho \in (0, 1)$ such that
$$p^{(m)}_{ij} \ge \rho \quad \forall \ \text{states } i, j. \qquad (7.13)$$

Theorem 7.3 If $P$ is finite, irreducible and aperiodic then $\forall$ states $i, j$,
$$\bigl|p^{(n)}_{ij} - \pi_j\bigr| \le (1 - \rho)^{\lfloor n/m \rfloor}, \qquad (7.14)$$
where $m$ and $\rho$ are as in (7.13).

Proof. (Non-examinable but useful in many situations.) Repeat the scheme of the proof of Theorem 7.2: we have to assess $\mathbb{P}^Y(T > n)$. But in the finite case, we can write
$$\mathbb{P}^W_{(k,l)}(T \le m) \ge \sum_{u \in I} p^{(m)}_{ku}\, p^{(m)}_{lu} \ge \rho \sum_{u \in I} p^{(m)}_{lu} = \rho,$$
i.e.
$$\mathbb{P}^W_{(k,l)}(T > m) \le (1 - \rho) \quad \forall\, k, l \in I.$$
Then, by the strong Markov property,
$$\mathbb{P}^W(T > n) \le \mathbb{P}^W\Bigl(T > m \bigl\lfloor \tfrac{n}{m} \bigr\rfloor\Bigr) \le (1 - \rho)^{\lfloor n/m \rfloor},$$
and the assertion of Theorem 7.3 follows.

Examples 7.1 Consider an $m \times m$ stochastic matrix whose rows are cyclic shifts of a given stochastic vector $(p_1, \dots, p_m)$, where $p_1, \dots, p_m > 0$ and $p_1 + \cdots + p_m = 1$:
$$P = \begin{pmatrix} p_1 & p_2 & \cdots & p_m \\ p_m & p_1 & \cdots & p_{m-1} \\ \vdots & & \ddots & \vdots \\ p_2 & p_3 & \cdots & p_1 \end{pmatrix}.$$
Since all states communicate directly, this matrix is irreducible and aperiodic; moreover, the value $m_0 = \min\bigl[n \ge 1 : p^{(n)}_{ij} > 0 \ \forall\, i, j \in I\bigr] = 1$. The equilibrium distribution is unique: $\pi = (1/m, \dots, 1/m)$. By Theorems 7.1 and 7.2, $P^n \to \Pi$ geometrically fast:
$$\bigl|p^{(n)}_{ij} - \pi_j\bigr| \le (1 - \rho)^n, \quad \text{where } \rho = \min[p_1, \dots, p_m] \in (0, 1).$$

7.2 (Card shuffling) The problem of shuffling a pack of cards is important not only in gambling but in a number of other applications. See Example Sheet 2.

Remark 7.1 For a transient or null recurrent irreducible aperiodic chain, the matrix $P^n$ converges to a zero matrix: $\lim_{n\to\infty} P^n = O$. We will not give here the formal proof of this assertion. (For a transient case the proof is based on the fact that the series $\sum_n p^{(n)}_{ii} < \infty$.)

Definition 7.2 Consider the number of visits to state $i$ before time $n$:
$$V_i(n) = \sum_{k=0}^{n-1} \mathbf{1}(X_k = i). \qquad (7.17)$$
The limit (if it exists)
$$\lim_{n\to\infty} \frac{V_i(n)}{n} \qquad (7.18)$$
is called the long-run proportion of the time spent in state $i$.

Theorem 7.4 $\forall$ state $i \in I$:
$$\mathbb{P}_i\Bigl(\lim_{n\to\infty} \frac{V_i(n)}{n} = r_i\Bigr) = 1, \qquad (7.19)$$
where
$$r_i = \begin{cases} \pi_i, & \text{if } i \text{ is positive recurrent},\\ 0, & \text{if } i \text{ is null recurrent or transient}. \end{cases} \qquad (7.20)$$

Proof. First, suppose that state $i$ is transient. Then, as we know, the total number $V_i$ of visits to $i$ is finite with probability 1 (see Section 5).

Hence, $V_i/n \to 0$ as $n \to \infty$ with probability 1. As $0 \le V_i(n) \le V_i$, we deduce that $V_i(n)/n \to 0$ as $n \to \infty$ with probability 1.

Now let $i$ be recurrent. Then the times $T^{(1)}, T^{(2)}, \dots$ between successive returns to state $i$ are finite with $\mathbb{P}_i$-probability 1. By Theorem 6.5, they are IID random variables, with mean value $m_i$ equal to $1/\pi_i$ in the positive recurrent case and to $\infty$ in the null recurrent case. Obviously,
$$T^{(1)} + \cdots + T^{(V_i(n)-1)} \le n \le T^{(1)} + \cdots + T^{(V_i(n))};$$
see Fig 7.4. So, we can write:
$$\frac{1}{V_i(n)}\Bigl(T^{(1)} + \cdots + T^{(V_i(n)-1)}\Bigr) \le \frac{n}{V_i(n)} \le \frac{1}{V_i(n)}\Bigl(T^{(1)} + \cdots + T^{(V_i(n))}\Bigr). \qquad (7.21)$$

[Figure 7.4: the successive return times $T^{(1)}, T^{(2)}, T^{(3)}, \dots$ to state $i$ along the time axis, with $V_i(n)$ visits before time $n$.]

By Theorem 6.6, on an event of $\mathbb{P}_i$-probability 1, the limit $\lim_{n\to\infty} \frac{1}{n}\sum_{l=1}^{n} T^{(l)} = m_i$ holds:
$$\mathbb{P}_i\Bigl(\frac{1}{n}\sum_{l=1}^{n} T^{(l)} \to m_i, \ \text{as } n \to \infty\Bigr) = 1. \qquad (7.22)$$
Next, as $i$ is recurrent, the sequence $(V_i(n))$ increases indefinitely, again on an event of $\mathbb{P}_i$-probability 1:
$$\mathbb{P}_i\bigl(V_i(n) \nearrow \infty, \ \text{as } n \to \infty\bigr) = 1. \qquad (7.23)$$
Then we can put in (7.22) a summation up to $V_i(n)$, instead of $n$, and, correspondingly, divide by the factor $V_i(n)$:
$$\lim_{n\to\infty} \frac{1}{V_i(n)}\sum_{l=1}^{V_i(n)} T^{(l)} = m_i.$$
This relation holds on the intersection of the two aforementioned events of probability 1, which obviously has again $\mathbb{P}_i$-probability 1. On the same event,
$$\lim_{n\to\infty} \frac{1}{V_i(n)}\sum_{l=1}^{V_i(n)-1} T^{(l)} = m_i.$$
In other words, Eqs (7.22) and (7.23) together yield that
$$\mathbb{P}_i\Bigl(\frac{1}{V_i(n)}\sum_{l=1}^{V_i(n)} T^{(l)} \to m_i \ \text{ and } \ \frac{1}{V_i(n)}\sum_{l=1}^{V_i(n)-1} T^{(l)} \to m_i, \ \text{as } n \to \infty\Bigr) = 1. \qquad (7.24)$$

But then, owing to (7.21), still on the same intersection of two events of $\mathbb{P}_i$-probability 1, the ratio $n/V_i(n)$ tends to $m_i$, i.e. the inverse ratio $V_i(n)/n$ tends to $r_i = 1/m_i$. This gives (7.19), (7.20) and completes the proof of Theorem 7.4.

Remark 7.3 A careful analysis of the proof of Theorem 7.4 shows that if $P$ is irreducible and positive recurrent, then we can claim that in (7.19) the probability distribution $\mathbb{P}_i$ can be replaced by $\mathbb{P}_j$, or, in fact, by the distribution $\mathbb{P}$ generated by an arbitrary initial distribution $\lambda$. This is possible because the sums $T^{(1)} + \cdots + T^{(n)}$ still behave asymptotically as if the RVs $T^{(l)}$ were IID. (In reality, the distribution of the first RV, $T^{(1)} = T_i = H_i$, will be different and depend on the choice of the initial state.)

Theorem 7.5 Let $P$ be a finite irreducible transition matrix. Then for any initial distribution $\lambda$ and any bounded function $f$ on $I$:
$$\mathbb{P}\Bigl(\lim_{n\to\infty} \frac{V(f, n)}{n} = \pi(f)\Bigr) = 1, \qquad (7.25)$$
where
$$\pi(f) = \sum_{i \in I} \pi_i f(i). \qquad (7.26)$$

Proof. The proof of Theorem 7.5 is a refinement of that of Theorem 7.4. More precisely, (7.25) is equivalent to
$$\mathbb{P}\Bigl(\lim_{n\to\infty} \Bigl[\frac{V(f, n)}{n} - \pi(f)\Bigr] = 0\Bigr) = 1.$$
In other words, we have to check that on an event of $\mathbb{P}$-probability 1,
$$\frac{V(f, n)}{n} - \pi(f) \to 0, \ \text{as } n \to \infty. \qquad (7.27)$$
Writing $V(f, n) = \sum_{i \in I} V_i(n) f(i)$ and $\pi(f) = \sum_{i \in I} \pi_i f(i)$, we can transform and bound the left-hand side in (7.27) as follows:
$$\Bigl|\frac{V(f, n)}{n} - \pi(f)\Bigr| = \Bigl|\sum_{i \in I}\Bigl(\frac{V_i(n)}{n} - \pi_i\Bigr) f(i)\Bigr| \le \sum_{i \in I}\Bigl|\frac{V_i(n)}{n} - \pi_i\Bigr|\,|f(i)|.$$
We know that, $\forall\, i \in I$, on an event of $\mathbb{P}_i$-probability 1, $V_i(n)/n \to \pi_i$. Remark 7.3 allows us to claim the convergence $V_i(n)/n \to \pi_i$ on an event of $\mathbb{P}_j$-probability 1 (that is, regardless of the choice of the initial state), or, even stronger, on an event of $\mathbb{P}$-probability 1, where $\mathbb{P}$ is the distribution of the $(\lambda, P)$ Markov chain with any initial distribution $\lambda$. Then (7.25) follows, which completes the proof of Theorem 7.5.

Example 7.3 (Markov Chains, Part IIA, 2002) Write an essay on the long-time behaviour of discrete-time Markov chains on a finite state space. Your essay should include discussion of the convergence of probabilities as well as almost-sure behaviour. You should also explain what happens when the chain is not irreducible.

Solution. The state space splits into open classes $O_1, \dots, O_j$ and closed classes $C_{j+1}, \dots, C_{j+l}$. If $l = 1$ (a unique closed class), it is irreducible. Starting from an open class, say $O_1$, we end up in closed class $C_k$ with probability $h^k_1$. These probabilities satisfy
$$h^k_1 = \sum_{r=1}^{j+l} p_{1r}\, h^k_r.$$
Here, $p_{1r}$ is the probability that we exit class $O_1$ to class $O_r$ or $C_r$, and for $r = j+1, \dots, j+l$: $h^k_r = \delta_{r,k}$. The chain has a unique equilibrium distribution $\pi^{(r)}$ concentrated on $C_r$, $r = j+1, \dots, j+l$ (hence, a unique equilibrium distribution when $l = 1$). Any equilibrium distribution is a mixture of the equilibrium distributions $\pi^{(r)}$. Starting in $C_r$, we have, for any function $f$ on $C_r$:
$$\frac{1}{n}\sum_{t=0}^{n-1} f(X_t) \to \sum_{i \in C_r} \pi^{(r)}_i f(i) \quad \text{almost surely}.$$
Moreover, in the aperiodic case (where $\gcd\{n : p_{aa}(n) > 0\} = 1$ for some $a \in C_r$), $\forall\, i_0 \in C_r$:
$$\mathbb{P}(X_n = i \mid X_0 = i_0) \to \pi^{(r)}_i,$$
and the convergence is with a geometric speed.
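To close Section 7, here is an illustrative sketch (not part of the notes) of Theorems 7.4 and 7.5: simulate one long trajectory of a small irreducible chain and compare the long-run proportions $V_i(n)/n$ and the ergodic average $\frac{1}{n}\sum_t f(X_t)$ with $\pi_i$ and $\pi(f)$. The transition matrix and the function $f$ below are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary irreducible, aperiodic chain on I = {0, 1, 2}.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])
f = np.array([1.0, -2.0, 5.0])          # a bounded function on I

# Equilibrium distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

# Simulate one long trajectory and form V_i(n)/n and the ergodic average.
n = 200_000
x = 0
visits = np.zeros(3)
f_sum = 0.0
for _ in range(n):
    visits[x] += 1
    f_sum += f[x]
    x = rng.choice(3, p=P[x])

print("V_i(n)/n :", visits / n)
print("pi       :", pi)
print("avg of f :", f_sum / n, " vs  pi(f) =", pi @ f)
```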

8 Detailed balance and reversibility

Time reversal, detailed balance, reversibility; random walk on a graph.

Let $(X_0, X_1, \dots)$ be a Markov chain and fix $N \ge 1$. What can we say about the time reversal of $(X_n)$, i.e. the family $(X_{N-n}, \ n = 0, 1, \dots, N) = (X_N, X_{N-1}, \dots, X_0)$?

Theorem 8.1 Let $(X_n)$ be a $(\pi, P)$ Markov chain where $\pi = (\pi_i)$ is an equilibrium distribution for $P$ with $\pi_i > 0 \ \forall\, i \in I$. Then:
(a) $\forall\, N \ge 1$, the time reversal $(X_N, X_{N-1}, \dots, X_0)$ is a $(\pi, \widehat P)$ Markov chain where $\widehat P = (\widehat p_{ij})$ has
$$\widehat p_{ij} = \frac{\pi_j}{\pi_i}\, p_{ji}. \qquad (8.1)$$
(b) If $P$ is irreducible then so is $\widehat P$.

Proof. (a) First, observe that $\widehat P$ is a stochastic matrix, that is, $\widehat p_{ij} \ge 0$ and
$$\sum_j \widehat p_{ij} = \frac{1}{\pi_i}\sum_j \pi_j p_{ji} = \frac{\pi_i}{\pi_i} = 1.$$
Next, $\pi$ is $\widehat P$-invariant:
$$\sum_i \pi_i \widehat p_{ij} = \sum_i \pi_j p_{ji} = \pi_j \sum_i p_{ji} = \pi_j.$$
Now pull the factor $\pi$ through the product:
$$\mathbb{P}(X_N = i_N, \dots, X_0 = i_0) = \mathbb{P}(X_0 = i_0, \dots, X_N = i_N) = \pi_{i_0} p_{i_0 i_1} \cdots p_{i_{N-1} i_N} = \widehat p_{i_1 i_0}\,\pi_{i_1} p_{i_1 i_2} \cdots p_{i_{N-1} i_N} = \widehat p_{i_1 i_0}\,\widehat p_{i_2 i_1}\,\pi_{i_2}\, p_{i_2 i_3} \cdots = \cdots = \widehat p_{i_1 i_0} \cdots \widehat p_{i_N i_{N-1}}\,\pi_{i_N} = \pi_{i_N}\,\widehat p_{i_N i_{N-1}} \cdots \widehat p_{i_1 i_0}.$$
We see that $(X_N, X_{N-1}, \dots, X_0)$ is a $(\pi, \widehat P)$ Markov chain.

(b) If $P$ is irreducible then any pair of states $i, j$ is connected, that is, there is a path $i = i_0, i_1, \dots, i_n = j$ with $0 < p_{i_0 i_1} \cdots p_{i_{n-1} i_n}$. Then
$$p_{i_0 i_1} \cdots p_{i_{n-1} i_n} = \frac{1}{\pi_{i_0}}\,\pi_{i_0}\, p_{i_0 i_1} \cdots p_{i_{n-1} i_n} = \frac{1}{\pi_{i_0}}\,\widehat p_{i_1 i_0}\,\pi_{i_1}\, p_{i_1 i_2} \cdots p_{i_{n-1} i_n} = \cdots = \frac{1}{\pi_{i_0}}\,\widehat p_{i_1 i_0} \cdots \widehat p_{i_n i_{n-1}}\,\pi_{i_n}.$$
So $\widehat p_{i_1 i_0} \cdots \widehat p_{i_n i_{n-1}} > 0$, and $j$, $i$ are connected in $\widehat P$.

The case where the chain $(X_{N-n})$ has the same distribution as $(X_n)$ is of a particular interest.

Theorem 8.2 Let $(X_n)$ be a Markov chain. The following properties are equivalent:
(i) $\forall\, n \ge 1$ and states $i_0, \dots, i_n$:
$$\mathbb{P}(X_0 = i_0, \dots, X_n = i_n) = \mathbb{P}(X_0 = i_n, \dots, X_n = i_0). \qquad (8.2)$$
(ii) $(X_n)$ is in equilibrium, i.e. $(X_n) \sim (\pi, P)$ where $\pi$ is an equilibrium distribution for $P$, and
$$\pi_i p_{ij} = \pi_j p_{ji} \quad \forall \ \text{states } i, j \in I. \qquad (8.3)$$

Proof. (i) $\Rightarrow$ (ii): Take $n = 1$, $\mathbb{P}(X_0 = i, X_1 = j) = \mathbb{P}(X_0 = j, X_1 = i)$, and sum over $j$:
$$\sum_j \mathbb{P}(X_0 = i, X_1 = j) = \mathbb{P}(X_0 = i) = \lambda_i, \qquad \sum_j \mathbb{P}(X_0 = j, X_1 = i) = \mathbb{P}(X_1 = i) = (\lambda P)_i.$$
So, $\lambda = \lambda P$. Hence, the chain is in equilibrium and $\lambda = \pi$. Next, $\forall\, i, j$:
$$\mathbb{P}(X_0 = i, X_1 = j) = \pi_i p_{ij} = \mathbb{P}(X_0 = j, X_1 = i) = \pi_j p_{ji}.$$
(ii) $\Rightarrow$ (i): Write
$$\mathbb{P}(X_0 = i_0, \dots, X_n = i_n) = \pi_{i_0} p_{i_0 i_1} \cdots p_{i_{n-1} i_n}$$
and use Eqs (8.3) to pull $\pi$ through the product:
$$\pi_{i_0} p_{i_0 i_1} \cdots p_{i_{n-1} i_n} = p_{i_1 i_0}\,\pi_{i_1}\, p_{i_1 i_2} \cdots p_{i_{n-1} i_n} = \cdots = p_{i_1 i_0} \cdots p_{i_n i_{n-1}}\,\pi_{i_n} = \pi_{i_n}\, p_{i_n i_{n-1}} \cdots p_{i_1 i_0} = \mathbb{P}(X_0 = i_n, \dots, X_n = i_0).$$

Definition 8.1 A Markov chain $(X_n)$ satisfying (8.2) is called reversible. Eqs (8.3) are called detailed balance equations (DBEs). So, the assertion of Theorem 8.2 reads: a Markov chain is reversible if and only if it is in equilibrium and the DBEs are satisfied. The DBEs are a powerful tool for identification of an ED.

Theorem 8.3 If $\lambda$ and $P$ satisfy the DBEs
$$\lambda_i p_{ij} = \lambda_j p_{ji}, \quad \forall\, i, j \in I,$$
then $\lambda$ is an ED for $P$, that is $\lambda P = \lambda$.

Proof. Sum over $j$:
$$\sum_j \lambda_i p_{ij} = \lambda_i, \qquad \sum_j \lambda_j p_{ji} = (\lambda P)_i.$$
The two expressions are equal, hence the result.
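A small numerical check of Theorem 8.3 (an illustrative sketch, not part of the notes): for a birth-death chain the DBEs can be solved recursively along the edges, and the resulting normalised measure is indeed invariant. The jump probabilities below are arbitrary.

```python
import numpy as np

# A birth-death chain on {0, ..., 4} with arbitrary up/down probabilities.
K = 5
p_up = np.array([0.6, 0.5, 0.4, 0.3, 0.0])   # p_{i,i+1}
p_dn = np.array([0.0, 0.3, 0.3, 0.5, 0.7])   # p_{i,i-1}
P = np.zeros((K, K))
for i in range(K):
    if i + 1 < K:
        P[i, i + 1] = p_up[i]
    if i - 1 >= 0:
        P[i, i - 1] = p_dn[i]
    P[i, i] = 1.0 - P[i].sum()               # holding part keeps rows stochastic

# Solve the DBEs edge by edge: lam_{i+1} p_{i+1,i} = lam_i p_{i,i+1}.
lam = np.ones(K)
for i in range(K - 1):
    lam[i + 1] = lam[i] * P[i, i + 1] / P[i + 1, i]
pi = lam / lam.sum()

print("pi from DBEs :", pi)
print("pi P         :", pi @ P)              # equals pi, confirming pi P = pi
M = pi[:, None] * P
print("detailed balance residual:", np.max(np.abs(M - M.T)))
```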

So, for a given matrix $P$, if the DBEs can be solved (that is, a probability distribution that satisfies them can be found), then the solution will give an ED. Furthermore, the corresponding Markov chain will be reversible.

An interesting and important class of Markov chains is formed by random walks on graphs. We have seen examples of such chains: a birth-death process (a RW on $\mathbb{Z}$ or its subset), a RW on a plane square lattice $\mathbb{Z}^2$ and, more generally, a RW on a $d$-dimensional cubic lattice $\mathbb{Z}^d$. A feature of these examples is that a wandering particle can jump to any of its neighbouring sites; in a symmetric case, the probability of each jump is the same. This idea can be extended to a general graph, with directed or non-directed links (edges). Here, we focus on non-directed graphs; a graph is understood as a collection $G$ of vertices, some of which are joined by non-directed edges, or links, possibly several. Non-directed means here that the edges can be traversed in both directions; sometimes it is convenient to think that each edge is formed by a pair of opposite arrows.

[Figure 8.1: an example of a non-directed graph.]

A graph is called connected if any two distinct vertices are connected with a path formed by edges. The valency $v_i$ of a vertex $i$ is defined as the number of edges at $i$. The connectedness $v_{ij}$ is the number of edges joining vertices $i$ and $j$. The RW on the graph has the following transition matrix $P = (p_{ij})$:
$$p_{ij} = \begin{cases} v_{ij}/v_i, & \text{if } i \text{ and } j \text{ are connected},\\ 0, & \text{otherwise}. \end{cases} \qquad (8.4)$$
The matrix $P$ is irreducible if and only if the graph is connected. The vector $v = (v_i)$ satisfies the DBEs. That is, $\forall$ vertices $i, j$,
$$v_i p_{ij} = v_{ij} = v_j p_{ji}, \qquad (8.5)$$
and hence is $P$-invariant. We obtain the following straightforward result.

Theorem 8.4 The RW on a graph, with transition matrix $P$ of the form (8.4), could be of all three types: transient (viz., a symmetric nearest-neighbour RW on $\mathbb{Z}^d$ with $d \ge 3$), null recurrent (a symmetric nearest-neighbour RW on $\mathbb{Z}^2$ or $\mathbb{Z}$) or positive recurrent. It is positive recurrent if and only if the total valency $\sum_i v_i < \infty$, in which case
$$\pi_j = \frac{v_j}{\sum_i v_i}$$
is an equilibrium distribution. Furthermore, the chain with equilibrium distribution $\pi$ is reversible.

A simple but popular example of a graph is an $l$-site segment of a one-dimensional lattice: here the valency of every vertex equals 2, except for the endpoints where the valency is 1. See Fig 8.2 a).

[Figure 8.2: a) a segment with $l$ sites; b) $l$ vertices placed on a circle.]

An interesting class is formed by graphs with a constant valency: $v_i \equiv v$; again the simplest case is $v = 2$, where $l$ vertices are placed on a circle (or on a perfect polygon or any closed path). See Fig 8.2 b).
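To make (8.4), (8.5) and Theorem 8.4 concrete, here is an illustrative sketch (not part of the notes): build the RW transition matrix from the adjacency matrix of a small, arbitrarily chosen connected graph and check that $\pi_j = v_j / \sum_i v_i$ is invariant and in detailed balance.

```python
import numpy as np

# Adjacency counts v_ij of a small connected non-directed graph (arbitrary).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
v = A.sum(axis=1)                 # valencies v_i
P = A / v[:, None]                # p_ij = v_ij / v_i, as in (8.4)
pi = v / v.sum()                  # candidate ED: pi_j = v_j / sum_i v_i

print("pi       :", pi)
print("pi P     :", pi @ P)       # invariance: equals pi
M = pi[:, None] * P
print("detailed balance holds:", np.allclose(M, M.T))
```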

A popular example of a graph with a constant valency is a fully connected graph with a given number of vertices, say $\{1, \dots, m\}$: here the valency equals $m - 1$, and the graph has $m(m-1)/2$ (non-directed) edges in total. See Fig 8.3.

[Figure 8.3: complete graphs with m = 5 and m = 6 vertices.]

Another important example is a regular cube in $d$ dimensions, with $2^d$ vertices. Here the valency equals $d$, and the graph has $d\,2^{d-1}$ (still non-directed) edges joining neighbouring vertices. See Fig 8.4.

[Figure 8.4: the cubes for d = 2, d = 3 and d = 4.]

Popular examples of infinite graphs of constant valency are lattices and trees. In the case of a general finite graph of constant valency $v_i \equiv v$ $\forall$ vertex $i$, the sum $\sum_i v_i$ equals $v\,\sharp G$, where $\sharp G$ is the number of vertices. The probabilities $p_{ij} = p_{ji} = v_{ij}/v$, $\forall$ neighbouring pair $i, j$. That is, the transition matrix $P = (p_{ij})$ is Hermitian: $P = P^{\mathrm T}$. Furthermore, the equilibrium distribution $\pi = (\pi_i)$ is uniform: $\pi_i = 1/\sharp G$ $\forall\, i \in I$.

In Linear Algebra courses, it is asserted that a (complex) Hermitian matrix has an orthonormal basis of eigen-vectors, and its eigen-values are all real. This handy property is nice to retain whenever possible. For a Markov chain, even when $P$ is originally non-Hermitian, it can be converted to a Hermitian matrix by changing the scalar product. We will explore this avenue further in Sections 12–14.

Example 8.1 (Markov Chains, Part IIA, 2002)
(i) We are given a finite set of airports. Assume that between any two airports, $i$ and $j$, there are $a_{ij} = a_{ji}$ flights in each direction on every day. A confused traveller takes one flight per day, choosing at random from all available flights. Starting from $i$, how many days on average will pass until the traveller returns again to $i$? Be careful to allow for the case where there may be no flights at all between two given airports.
(ii) Consider the infinite tree $T$ with root $R$, where for all $m \ge 0$, all vertices at distance $2^m$ from $R$ have degree 3, and where all other vertices (except $R$) have degree 2. Show that the random walk on $T$ is recurrent.

Solution (i) Let $X_0 = i$ be the starting airport, $X_n$ the destination of the $n$th flight and $I$ denote the set of airports reachable from $i$. Then $(X_n)$ is an irreducible Markov chain on $I$, so the expected return time to $i$ is given by $1/\pi_i$, where $\pi$ is the unique equilibrium distribution. We will show that
$$\frac{1}{\pi_i} = \sum_{j,k \in I} a_{jk} \Big/ \sum_{k \in I} a_{ik}.$$
In fact,
$$p_{jk} = a_{jk} \Big/ \sum_{l \in I} a_{jl}$$
and
$$\Bigl(\sum_{l \in I} a_{jl}\Bigr) p_{jk} = \Bigl(\sum_{l \in I} a_{kl}\Bigr) p_{kj}.$$
So the vector $v = (v_j)$ with $v_j = \sum_{l \in I} a_{jl}$ is in detailed balance with $P$. Hence
$$\pi_j = \sum_{k \in I} a_{jk} \Big/ \sum_{k,l \in I} a_{kl}.$$
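A quick numerical sanity check of Solution (i) (an illustrative sketch with made-up flight counts, not part of the notes): the mean return time $1/\pi_i$ given by the detailed-balance formula matches a Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric flight counts a_ij between 4 airports (a_ii = 0), chosen arbitrarily.
a = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]])
row = a.sum(axis=1)
P = a / row[:, None]                 # p_jk = a_jk / sum_l a_jl
i = 0
theory = a.sum() / row[i]            # 1/pi_i = sum_{j,k} a_jk / sum_k a_ik

def return_time(start):
    """Simulate the traveller until the first return to `start`."""
    x = rng.choice(4, p=P[start])
    t = 1
    while x != start:
        x = rng.choice(4, p=P[x])
        t += 1
    return t

mc = np.mean([return_time(i) for _ in range(20000)])
print("1/pi_i (formula)    :", theory)
print("mean return time MC :", mc)
```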

Solution (ii) Consider the distance $X_n$ from the root $R$ at time $n$. Then $(X_n)_{n \ge 0}$ is a birth-death Markov chain with transition probabilities
$$q_i = p_i = 1/2, \ \text{ if } i \ne 2^m, \qquad q_i = 1/3, \ p_i = 2/3, \ \text{ if } i = 2^m.$$
By a standard argument for $h_i = \mathbb{P}_i(\text{hit } 0)$ and $u_i = h_{i-1} - h_i$:
$$h_0 = 1, \quad h_i = p_i h_{i+1} + q_i h_{i-1}, \quad p_i u_{i+1} = q_i u_i, \quad u_{i+1} = \frac{q_i}{p_i}\, u_i = \gamma_i u_1, \quad \gamma_i = \frac{q_1 \cdots q_i}{p_1 \cdots p_i},$$
$$u_1 + \cdots + u_i = h_0 - h_i, \qquad h_i = 1 - A(\gamma_0 + \cdots + \gamma_{i-1}).$$
The condition $\sum_i \gamma_i = \infty$ forces $A = 0$ and hence $h_i = 1$ for all $i$. Here, $\gamma_i = 2^{-(m+1)}$ for $2^m \le i < 2^{m+1}$, so $\sum_i \gamma_i = \infty$ and the walk is recurrent.

The DBEs are a convenient tool to find an equilibrium distribution: if a measure $\lambda \ge 0$ is in detailed balance with $P$ and has $\sum_i \lambda_i < \infty$, then $\pi_j = \lambda_j / \sum_i \lambda_i$ is an equilibrium distribution.

Example 8.2 Suppose $\pi = (\pi_i)$ forms an ED for the transition matrix $P = (p_{ij})$, with $\pi P = \pi$, but the DBEs (8.3) are not satisfied. What is the time reversal of the chain $(X_n)$ in equilibrium? Assume, for definiteness, that $P$ is irreducible, and $\pi_i > 0$ $\forall\, i \in I$. The answer comes out after we define the transition matrix $P^{\mathrm{RV}} = (p^{\mathrm{RV}}_{ij})$ by
$$\pi_i p^{\mathrm{RV}}_{ij} = \pi_j p_{ji}, \quad \forall\, i, j \in I, \qquad (8.6)$$
or
$$p^{\mathrm{RV}}_{ij} = \frac{\pi_j}{\pi_i}\, p_{ji}, \quad \forall\, i, j \in I. \qquad (8.7)$$
Eqs (8.6), (8.7) indeed determine a transition matrix, as, $\forall\, i, j \in I$, $p^{\mathrm{RV}}_{ij} \ge 0$, and
$$\sum_{j \in I} p^{\mathrm{RV}}_{ij} = \frac{1}{\pi_i}\sum_j \pi_j p_{ji} = \frac{\pi_i}{\pi_i} = 1.$$
Next, $\pi$ gives an ED for $P^{\mathrm{RV}}$: $\forall\, j \in I$,
$$\sum_{i \in I} \pi_i p^{\mathrm{RV}}_{ij} = \sum_{i \in I} \pi_j p_{ji} = \pi_j.$$
Then, repeating the argument from the proof of Theorem 8.1, we obtain that $\forall\, N \ge 1$, the time reversal $(X_{N-n}, \ 0 \le n \le N)$ is a Markov chain in equilibrium, with transition matrix $P^{\mathrm{RV}}$ and the same ED $\pi$. Symbolically,
$$(X^{\mathrm{RV}}_n) \sim (\pi, P^{\mathrm{RV}}) \ \text{Markov chain}, \qquad (8.8)$$
where $(X^{\mathrm{RV}}_n) = (X_{N-n})$ stands for the time reversal of $(X_n)$. It is instructive to remember that $P^{\mathrm{RV}}$ was proven to be a stochastic matrix because $\pi$ is an ED for $P$, while the proof that $\pi$ is an ED for $P^{\mathrm{RV}}$ used only the fact that $P$ is stochastic.
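To see Example 8.2 in action numerically (an illustrative sketch, not part of the notes), take a small non-reversible chain (a biased cycle, chosen arbitrarily), compute $P^{\mathrm{RV}}$ from (8.7), and check that it is stochastic, has the same ED $\pi$, and differs from $P$.

```python
import numpy as np

# A non-reversible chain on 3 states: a biased cycle 0 -> 1 -> 2 -> 0.
P = np.array([[0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9],
              [0.9, 0.1, 0.0]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

# Time-reversed matrix from (8.7): p^RV_ij = (pi_j / pi_i) p_ji.
P_rv = (pi[None, :] * P.T) / pi[:, None]

print("rows of P^RV sum to 1:", np.allclose(P_rv.sum(axis=1), 1.0))
print("pi is an ED for P^RV :", np.allclose(pi @ P_rv, pi))
print("P^RV differs from P  :", not np.allclose(P_rv, P))
```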

Example 8.3 The detailed balance equations have a useful geometric meaning. Suppose that the state space $I = \{1, \dots, s\}$. The matrix $P$ generates a linear transformation $\mathbb{R}^s \to \mathbb{R}^s$, where the vector $x = (x_1, \dots, x_s)^{\mathrm T}$ is taken to $Px$. Assuming $P$ irreducible, let $\pi$ be the ED, with $\pi_i > 0$, $i = 1, \dots, s$. Consider a tilted scalar product $\langle\,\cdot\,,\cdot\,\rangle_\pi$ in $\mathbb{R}^s$, where
$$\langle x, y\rangle_\pi = \sum_{i=1}^{s} x_i y_i \pi_i. \qquad (8.9)$$
The detailed balance equations (8.3) mean that $P$ is self-adjoint (or Hermitian) relative to the scalar product $\langle\,\cdot\,,\cdot\,\rangle_\pi$; that is,
$$\langle x, Py\rangle_\pi = \langle Px, y\rangle_\pi, \quad \forall\, x, y \in \mathbb{R}^s. \qquad (8.10)$$
In fact,
$$\langle x, Py\rangle_\pi = \sum_{i,j} x_i p_{ij} y_j \pi_i = \sum_{i,j} x_i p_{ji} y_j \pi_j = \langle Px, y\rangle_\pi.$$
The converse is also true: Eq. (8.10) implies (8.3), as we can take as $x$ and $y$ the vectors $\delta_i$ and $\delta_j$ with the only non-zero entries at positions $i$ and $j$, respectively, $\forall\, i, j = 1, \dots, s$. This observation yields a benefit, as Hermitian matrices have all eigenvalues real, and their eigen-vectors are mutually orthogonal (relative to the scalar product in question, in this instance $\langle\,\cdot\,,\cdot\,\rangle_\pi$). We will use this in Section 12.

Remark 8.1 The concept of reversibility and time reversal will be particularly helpful in the continuous-time setting of Part II Applied Probability.

It is now time to give a brief summary of the essential results established about the various equations emerging in the analysis of Markov chains. We have seen two sets of equations: (I) for hitting probabilities $h^A$ and mean hitting times $k^A$, and (II) for equilibrium distributions $\pi = (\pi_i)$ and expected times $\gamma^k_i$ spent in state $i$ before returning to $k$. Although they are in a sense similar, there are also differences between them which it is important to remember.

(I1) For $h^j_i = \mathbb{P}_i(\text{hit } j)$ the equations are
$$h^j_j = 1, \qquad h^j_i = \sum_{l \in I} p_{il}\, h^j_l = (h^j P^{\mathrm T})_i, \quad i \ne j,$$
where $h^j = (h^j_i, \ i \in I)$, with $h^j_j = 1$. Here, $h^j_i \equiv 1$ is always a solution, as $(\mathbf 1 P^{\mathrm T})_i = \sum_l p_{il} = 1$ $\forall\, i \in I$.

(I2) For $k^j_i = \mathbb{E}_i(\text{time to hit } j)$ the equations are
$$k^j_j = 0, \qquad k^j_i = 1 + \sum_{l \in I,\, l \ne j} p_{il}\, k^j_l = 1 + (k^j P^{\mathrm T})_i, \quad i \ne j,$$
where $k^j = (k^j_i, \ i \in I)$, with $k^j_j = 0$. Here, taking $\infty \cdot 0 = 0$, $k^j_i = \infty\,(1 - \delta_{ij})$ is always a solution when the chain is irreducible.

These equations are produced by conditioning on the first jump. The vectors $h^j$ and $k^j$ are labelled by the terminal states, while their entries $h^j_i$ and $k^j_i$ indicate the initial states. The solution we look for is identified as the minimal non-negative solution satisfying the normalisation constraints $h^j_j = 1$ and $k^j_j = 0$.

(II1) For $\gamma^k_i = \mathbb{E}_k(\text{time spent in } i \text{ before returning to } k)$ the equations are
$$\gamma^k_k = 1, \qquad \gamma^k_i = \sum_l \gamma^k_l\, p_{li}, \quad i \ne k, \quad \text{or} \quad \gamma^k = \gamma^k P, \ \text{when } k \text{ is recurrent}.$$
Here, the conditioning is on the last jump, and the vectors $\gamma^k$ are labelled by starting states. The identification of the solution is by the conditions $\gamma^k_i \ge 0$ and $\gamma^k_k = 1$.

(II2) Similarly, for an equilibrium distribution (or, more generally, an invariant measure), $\pi = \pi P$. The identification here is through the conditions $\pi_i \ge 0$ and $\sum_i \pi_i = 1$.

(II3) A solution to the detailed balance equations $\pi_i p_{ij} = \pi_j p_{ji}$, $\forall\, i, j \in I$, always produces an invariant measure. If, in addition, $\sum_i \pi_i = 1$, it gives an equilibrium distribution. As the detailed balance equations are usually easy to solve (when they have a solution), this is a powerful tool which is always worth trying when you need to find an equilibrium distribution.
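As a final illustrative sketch (not part of the notes), for a finite irreducible chain the mean hitting times $k^j_i$ of (I2) can be computed by solving the linear system directly; this also reproduces the mean return time $1/\pi_j$ via $1 + \sum_l p_{jl}\, k^j_l$. The transition matrix below is an arbitrary example.

```python
import numpy as np

# An arbitrary finite irreducible chain.
P = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4],
              [0.5, 0.1, 0.4]])
s = P.shape[0]
j = 2                                     # target state

# Solve k_i = 1 + sum_{l != j} p_il k_l for i != j  (with k_j = 0).
others = [i for i in range(s) if i != j]
Q = P[np.ix_(others, others)]             # transitions restricted to I \ {j}
k_others = np.linalg.solve(np.eye(s - 1) - Q, np.ones(s - 1))
k = np.zeros(s)
k[others] = k_others

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

print("mean hitting times k^j:", k)
print("mean return time to j :", 1 + P[j] @ k, " vs 1/pi_j =", 1 / pi[j])
```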