11.4. RECURRENCE AND TRANSIENCE

Similar arguments show that simple symmetric random walk is also recurrent in 2 dimensions but transient in 3 or more dimensions.

Proposition 11.10. If $x$ is recurrent and $r(x,y) > 0$, then $y$ is recurrent and $r(y,x) = 1$.

Proof. First we show $r(y,x) = 1$. Suppose not. Since $r(x,y) > 0$, there is a smallest $n$ and states $y_1, \ldots, y_{n-1}$ such that $p(x,y_1)\,p(y_1,y_2)\cdots p(y_{n-1},y) > 0$. Since this is the smallest $n$, none of the $y_i$ can equal $x$. Then

$$P_x(T_x = \infty) \ge p(x,y_1)\cdots p(y_{n-1},y)\,(1 - r(y,x)) > 0,$$

a contradiction to $x$ being recurrent.

Next we show that $y$ is recurrent. Since $r(y,x) > 0$, there exists $L$ such that $p^L(y,x) > 0$; since $r(x,y) > 0$, there exists $K$ such that $p^K(x,y) > 0$. Then

$$p^{L+n+K}(y,y) \ge p^L(y,x)\,p^n(x,x)\,p^K(x,y).$$

Summing over $n$,

$$\sum_n p^{L+n+K}(y,y) \ge p^L(y,x)\,p^K(x,y) \sum_n p^n(x,x) = \infty.$$

We say a subset $C$ of $S$ is closed if $x \in C$ and $r(x,y) > 0$ implies $y \in C$. A subset $D$ is irreducible if $x, y \in D$ implies $r(x,y) > 0$.
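The dichotomy in the first paragraph — recurrence in dimensions 1 and 2, transience in 3 and higher — can be suggested numerically. The following is a minimal Monte Carlo sketch (an illustration, not part of the text's development): it estimates the probability that the walk returns to the origin within a fixed horizon. The horizon and trial count are arbitrary choices; in 1 and 2 dimensions the estimates climb toward 1 as the horizon grows (slowly in 2 dimensions), while in 3 dimensions they stabilize near the true return probability, roughly 0.34.

```python
import random

def returns_to_origin(steps, d):
    """One truncated walk: does simple symmetric random walk on Z^d
    return to the origin within `steps` steps?"""
    pos = [0] * d
    for _ in range(steps):
        axis = random.randrange(d)           # choose a coordinate
        pos[axis] += random.choice((-1, 1))  # step +1 or -1 along it
        if not any(pos):                     # back at the origin
            return True
    return False

random.seed(0)
trials, steps = 1000, 2000
for d in (1, 2, 3):
    est = sum(returns_to_origin(steps, d) for _ in range(trials)) / trials
    print(f"d={d}: estimated P(return within {steps} steps) = {est:.3f}")
```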

Proposition 11.11. Let $C$ be finite and closed. Then $C$ contains a recurrent state.

From the preceding proposition, if $C$ is also irreducible, then all its states will be recurrent.

Proof. If not, for all $y$ we have $r(y,y) < 1$ and

$$E_x N(y) = \sum_{k=1}^{\infty} P_x(N(y) \ge k) = \sum_{k=1}^{\infty} P_x(T_y^k < \infty) = \sum_{k=1}^{\infty} r(x,y)\,r(y,y)^{k-1} = \frac{r(x,y)}{1 - r(y,y)} < \infty.$$

Since $C$ is finite, $\sum_y E_x N(y) < \infty$. But that is a contradiction, since

$$\sum_y E_x N(y) = \sum_y \sum_n p^n(x,y) = \sum_n \sum_y p^n(x,y) = \sum_n P_x(X_n \in C) = \sum_n 1 = \infty.$$

Theorem 11.12. Let $R = \{x : r(x,x) = 1\}$, the set of recurrent states. Then $R = \cup_{i=1}^{\infty} R_i$, where each $R_i$ is closed and irreducible.

Proof. Say $x \sim y$ if $r(x,y) > 0$. Since every state in $R$ is recurrent, $x \sim x$, and if $x \sim y$, then $y \sim x$ by Proposition 11.10. If $x \sim y$ and $y \sim z$, then $p^n(x,y) > 0$ and $p^m(y,z) > 0$ for some $n$ and $m$. Then $p^{n+m}(x,z) > 0$, or $x \sim z$. Therefore we have an equivalence relation, and we let the $R_i$ be the equivalence classes.

Looking at our examples, it is easy to see that in the Ehrenfest urn model all states are recurrent. For the branching process model, suppose $p(x,0) > 0$ for all $x$. Then $0$ is recurrent and all the other states are transient. In the renewal chain there are two cases. If $\{k : a_k > 0\}$ is unbounded, all states are recurrent. If $K = \max\{k : a_k > 0\}$, then $\{0, 1, \ldots, K-1\}$ are recurrent states and the rest are transient. For the queueing model, let $\mu = \sum_k k\,a_k$, the expected number of people arriving during one customer's service time. We may view this as a branching process by letting all the customers arriving during one person's service time be considered the progeny of that customer. It turns out that if $\mu \le 1$, then $0$ is recurrent and all other states are recurrent as well. If $\mu > 1$, all states are transient.

11.5 Stationary measures

A probability $\mu$ is a stationary distribution if

$$\sum_x \mu(x)\,p(x,y) = \mu(y). \tag{11.8}$$
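Equation (11.8) is easy to check numerically for a small chain. The sketch below (the $3 \times 3$ matrix $P$ is an arbitrary illustrative choice, not an example from the text) computes a stationary distribution as a left eigenvector of $P$ for the eigenvalue 1, anticipating the matrix form $\mu P = \mu$ discussed next.

```python
import numpy as np

# An arbitrary small transition matrix (rows sum to 1), states 0, 1, 2.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# mu P = mu with sum(mu) = 1: mu is a left eigenvector of P for
# eigenvalue 1, i.e. a right eigenvector of P transpose.
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
mu /= mu.sum()

print(mu)           # [0.25 0.5  0.25]
print(mu @ P - mu)  # ~ zero vector, so (11.8) holds
```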

In matrix notation this is $\mu P = \mu$, or $\mu$ is the left eigenvector corresponding to the eigenvalue 1. In the case of a stationary distribution, $P_\mu(X_1 = y) = \mu(y)$, which implies that $X_1, X_2, \ldots$ all have the same distribution. We can use (11.8) when $\mu$ is a measure rather than a probability, in which case it is called a stationary measure. Note $\mu P^n = (\mu P)P^{n-1} = \mu P^{n-1} = \cdots = \mu$.

If we have a random walk on the integers, $\mu(x) = 1$ for all $x$ serves as a stationary measure. In the case of an asymmetric random walk, $p(i,i+1) = p$, $p(i,i-1) = q = 1-p$ with $p \ne q$, setting $\mu(x) = (p/q)^x$ also works. To check this, note

$$\mu P(x) = \sum_i \mu(i)\,p(i,x) = \mu(x-1)\,p(x-1,x) + \mu(x+1)\,p(x+1,x) = \Big(\frac{p}{q}\Big)^{x-1} p + \Big(\frac{p}{q}\Big)^{x+1} q = \Big(\frac{p}{q}\Big)^{x} q + \Big(\frac{p}{q}\Big)^{x} p = \Big(\frac{p}{q}\Big)^{x}.$$

In the Ehrenfest urn model, $\mu(x) = 2^{-r}\binom{r}{x}$ works. One way to see this is that $\mu$ is the distribution one gets if one flips $r$ coins and puts a coin in the first urn when the coin is heads. A transition corresponds to picking a coin at random and turning it over.
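The binomial form of the Ehrenfest stationary measure can be verified numerically. A minimal sketch, assuming the standard Ehrenfest transition probabilities $p(x,x-1) = x/r$ and $p(x,x+1) = (r-x)/r$ (the coin chosen at random lies in the first urn with probability $x/r$):

```python
import numpy as np
from math import comb

r = 10  # number of coins

# Ehrenfest transition matrix on states 0..r (x = coins in first urn):
# pick one of the r coins uniformly at random and turn it over.
P = np.zeros((r + 1, r + 1))
for x in range(r + 1):
    if x > 0:
        P[x, x - 1] = x / r
    if x < r:
        P[x, x + 1] = (r - x) / r

mu = np.array([comb(r, x) / 2**r for x in range(r + 1)])
print(np.max(np.abs(mu @ P - mu)))  # ~ 1e-16: mu P = mu
```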

Proposition 11.13. Let $a$ be recurrent and let $T = T_a$. Set

$$\mu(y) = E_a \sum_{n=0}^{T-1} 1_{(X_n = y)}.$$

Then $\mu$ is a stationary measure.

The idea of the proof is that $\mu(y)$ is the expected number of visits to $y$ by the sequence $X_0, \ldots, X_{T-1}$, while $\mu P$ is the expected number of visits to $y$ by $X_1, \ldots, X_T$. These should be the same because $X_T = X_0 = a$.

Proof. Let $\bar{p}^n(a,y) = P_a(X_n = y, T > n)$. So

$$\mu(y) = \sum_{n=0}^{\infty} P_a(X_n = y, T > n) = \sum_{n=0}^{\infty} \bar{p}^n(a,y)$$

and

$$\sum_y \mu(y)\,p(y,z) = \sum_y \sum_{n=0}^{\infty} \bar{p}^n(a,y)\,p(y,z).$$

First we consider the case $z \ne a$. Then

$$\sum_y \bar{p}^n(a,y)\,p(y,z) = P_a(\text{hit some } y \text{ at time } n \text{ without first hitting } a, \text{ then go to } z \text{ in one step}) = \bar{p}^{n+1}(a,z).$$

So

$$\sum_y \mu(y)\,p(y,z) = \sum_y \sum_{n=0}^{\infty} \bar{p}^n(a,y)\,p(y,z) = \sum_{n=0}^{\infty} \bar{p}^{n+1}(a,z) = \sum_{n=0}^{\infty} \bar{p}^n(a,z) = \mu(z),$$

since $\bar{p}^0(a,z) = 0$.

Second we consider the case $z = a$. Then

$$\sum_y \bar{p}^n(a,y)\,p(y,a) = P_a(\text{hit some } y \text{ at time } n \text{ without first hitting } a, \text{ then go to } a \text{ in one step}) = P_a(T = n+1).$$

Recall $P_a(T = 0) = 0$, and since $a$ is recurrent, $T < \infty$. So

$$\sum_y \mu(y)\,p(y,a) = \sum_y \sum_{n=0}^{\infty} \bar{p}^n(a,y)\,p(y,a) = \sum_{n=0}^{\infty} P_a(T = n+1) = \sum_{n=1}^{\infty} P_a(T = n) = 1.$$

On the other hand,

$$E_a \sum_{n=0}^{T-1} 1_{(X_n = a)} = E_a 1_{(X_0 = a)} = 1,$$

hence $\mu(a) = 1$. Therefore, whether $z \ne a$ or $z = a$, we have $\mu P(z) = \mu(z)$.

Finally, we show $\mu(y) < \infty$. If $r(a,y) = 0$, then $\mu(y) = 0$. If $r(a,y) > 0$, then $r(y,a) = 1$ by Proposition 11.10, so we may choose $n$ so that $p^n(y,a) > 0$, and then

$$1 = \mu(a) = \sum_y \mu(y)\,p^n(y,a) \ge \mu(y)\,p^n(y,a),$$

which implies $\mu(y) < \infty$.

We next turn to uniqueness of stationary measures. We give the stationary measure constructed in Proposition 11.13 the name $\mu_a$; we showed $\mu_a(a) = 1$.

Proposition 11.14. If the Markov chain is irreducible and all states are recurrent, then the stationary measure is unique up to a constant multiple.

Proof. Fix $a \in S$. Let $\mu_a$ be the stationary measure constructed above and let $\nu$ be any other stationary measure. Since $\nu = \nu P$, we have

$$\nu(z) = \nu(a)\,p(a,z) + \sum_{y \ne a} \nu(y)\,p(y,z).$$

Applying $\nu = \nu P$ again to the last term,

$$\nu(z) = \nu(a)\,p(a,z) + \sum_{y \ne a} \nu(a)\,p(a,y)\,p(y,z) + \sum_{x \ne a} \sum_{y \ne a} \nu(x)\,p(x,y)\,p(y,z)$$
$$= \nu(a)\,P_a(X_1 = z) + \nu(a)\,P_a(X_1 \ne a, X_2 = z) + P_\nu(X_0 \ne a, X_1 \ne a, X_2 = z).$$

Continuing,

$$\nu(z) = \nu(a) \sum_{m=1}^{n} P_a(X_1 \ne a, X_2 \ne a, \ldots, X_{m-1} \ne a, X_m = z) + P_\nu(X_0 \ne a, X_1 \ne a, \ldots, X_{n-1} \ne a, X_n = z)$$
$$\ge \nu(a) \sum_{m=1}^{n} P_a(X_1 \ne a, X_2 \ne a, \ldots, X_{m-1} \ne a, X_m = z).$$

Letting $n \to \infty$, we obtain $\nu(z) \ge \nu(a)\,\mu_a(z)$. We have

$$\nu(a) = \sum_x \nu(x)\,p^n(x,a) \ge \nu(a) \sum_x \mu_a(x)\,p^n(x,a) = \nu(a)\,\mu_a(a) = \nu(a),$$

since $\mu_a$ is stationary and $\mu_a(a) = 1$ (see the proof of Proposition 11.13). This means that we have equality, and so for each $n$ and $x$, either $p^n(x,a) = 0$ or $\nu(x) = \nu(a)\,\mu_a(x)$. Since $r(x,a) > 0$, we have $p^n(x,a) > 0$ for some $n$. Consequently

$$\frac{\nu(x)}{\nu(a)} = \mu_a(x).$$

Proposition 11.15. If a stationary distribution $\mu$ exists, then $\mu(y) > 0$ implies $y$ is recurrent.

Proof. If $\mu(y) > 0$, then

$$\infty = \sum_{n=1}^{\infty} \mu(y) = \sum_{n=1}^{\infty} \sum_x \mu(x)\,p^n(x,y) = \sum_x \mu(x) \sum_{n=1}^{\infty} p^n(x,y)$$
$$= \sum_x \mu(x) \sum_{n=1}^{\infty} P_x(X_n = y) = \sum_x \mu(x)\,E_x N(y)$$
$$= \sum_x \mu(x)\,r(x,y)\,[1 + r(y,y) + r(y,y)^2 + \cdots].$$

Since $r(x,y) \le 1$ and $\mu$ is a probability measure, this is at most

$$\sum_x \mu(x)\,(1 + r(y,y) + r(y,y)^2 + \cdots) = 1 + r(y,y) + r(y,y)^2 + \cdots.$$

Hence $r(y,y)$ must equal 1.
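The construction of Proposition 11.13 is easy to test by simulation: start excursions at a recurrent state $a$ and average the visit counts over $X_0, \ldots, X_{T-1}$. A minimal sketch, reusing the illustrative 3-state chain from the earlier sketch (still an arbitrary example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
a, n_excursions = 0, 20_000

visits = np.zeros(3)
for _ in range(n_excursions):
    x = a
    while True:
        visits[x] += 1              # counts X_0, ..., X_{T-1}
        x = rng.choice(3, p=P[x])
        if x == a:                  # T reached: excursion over
            break

mu_a = visits / n_excursions        # estimates E_a sum_{n<T} 1_{(X_n = y)}
print(mu_a)                         # ~ [1. 2. 1.], so mu_a(a) = 1
print(mu_a / mu_a.sum())            # ~ [0.25 0.5 0.25]
```

The normalized output matches the stationary distribution found earlier, consistent with uniqueness up to a constant multiple.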

Recall that $T_x$ is the first time to hit $x$.

Proposition 11.16. If the Markov chain is irreducible and has stationary distribution $\mu$, then

$$\mu(x) = \frac{1}{E_x T_x}.$$

Proof. We have $\mu(x) > 0$ for some $x$. If $y \in S$, then $r(x,y) > 0$ and so $p^n(x,y) > 0$ for some $n$. Hence

$$\mu(y) = \sum_x \mu(x)\,p^n(x,y) > 0.$$

Hence by Proposition 11.15, all states are recurrent. By the uniqueness of the stationary measure (Proposition 11.14), $\mu_x$ is a constant multiple of $\mu$, i.e., $\mu_x = c\mu$. Recall

$$\mu_x(y) = \sum_{n=0}^{\infty} P_x(X_n = y, T_x > n),$$

and so

$$\sum_y \mu_x(y) = \sum_y \sum_{n=0}^{\infty} P_x(X_n = y, T_x > n) = \sum_{n=0}^{\infty} P_x(T_x > n) = E_x T_x.$$

Since $\mu$ is a probability, summing $\mu_x = c\mu$ over the state space gives $c = E_x T_x$. Recalling that $\mu_x(x) = 1$,

$$\mu(x) = \frac{\mu_x(x)}{c} = \frac{1}{E_x T_x}.$$

We make the following distinction for recurrent states. If $E_x T_x < \infty$, then $x$ is said to be positive recurrent. If $x$ is recurrent but $E_x T_x = \infty$, then $x$ is null recurrent.

An example of null recurrent states is the simple random walk on the integers. If we let $g(y) = E_y T_x$, the Markov property tells us that

$$g(y) = 1 + \tfrac{1}{2}\,g(y-1) + \tfrac{1}{2}\,g(y+1).$$

Some algebra translates this to

$$g(y) - g(y-1) = 2 + g(y+1) - g(y).$$

If $d(y) = g(y) - g(y-1)$, we have $d(y+1) = d(y) - 2$. If $d(y_0)$ is finite for any $y_0$, then $d(y)$ will be less than $-1$ for all $y$ larger than some $y_1$, which implies that $g(y)$ will be negative for $y$ sufficiently large, a contradiction. We conclude $g$ is identically infinite.

Proposition 11.17. Suppose a chain is irreducible.
(a) If there exists a positive recurrent state, then there is a stationary distribution.
(b) If there is a stationary distribution, all states are positive recurrent.
(c) If there exists a transient state, all states are transient.
(d) If there exists a null recurrent state, all states are null recurrent.

Proof. To show (a), suppose $x$ is positive recurrent. We have seen that

$$\mu_x(y) = E_x \sum_{n=0}^{T_x - 1} 1_{(X_n = y)}$$

is a stationary measure. Then

$$\mu_x(S) = \sum_y \mu_x(y) = E_x \sum_{n=0}^{T_x - 1} \sum_y 1_{(X_n = y)} = E_x \sum_{n=0}^{T_x - 1} 1 = E_x T_x < \infty.$$

Therefore $\mu(y) = \mu_x(y)/E_x T_x$ will be a stationary distribution. From the definition of $\mu_x$ we have $\mu_x(x) = 1$, hence $\mu(x) > 0$.

For (b), suppose $\mu(x) > 0$ for some $x$. If $y$ is another state, choose $n$ so that $p^n(x,y) > 0$; then from

$$\mu(y) = \sum_x \mu(x)\,p^n(x,y)$$

we conclude that $\mu(y) > 0$. Then $0 < \mu(y) = 1/E_y T_y$, which implies $E_y T_y < \infty$.

We showed that if $x$ is recurrent and $r(x,y) > 0$, then $y$ is recurrent. So (c) follows.

Suppose there exists a null recurrent state. If there exists a positive recurrent or transient state as well, then by (a) and (b) or by (c) all states are positive recurrent or transient, a contradiction, and (d) follows.
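Proposition 11.16 also lends itself to a quick numerical check. A minimal sketch on the Ehrenfest chain (same assumed transition probabilities as in the earlier Ehrenfest sketch): since $\mu(0) = 2^{-r}$, the proposition predicts $E_0 T_0 = 2^r$.

```python
import numpy as np

rng = np.random.default_rng(1)
r = 6
P = np.zeros((r + 1, r + 1))
for x in range(r + 1):
    if x > 0:
        P[x, x - 1] = x / r
    if x < r:
        P[x, x + 1] = (r - x) / r

def return_time(x0):
    """Sample T_{x0} = min{n >= 1 : X_n = x0} starting from X_0 = x0."""
    x, n = x0, 0
    while True:
        x = rng.choice(r + 1, p=P[x])
        n += 1
        if x == x0:
            return n

samples = [return_time(0) for _ in range(5000)]
print(np.mean(samples))  # ~ 64 = 2**r = 1 / mu(0)
```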

11.6 Convergence

Our goal is to show that under certain conditions $p^n(x,y) \to \pi(y)$, where $\pi$ is the stationary distribution. (In the null recurrent case $p^n(x,y) \to 0$.)

Consider a random walk on the set $\{0,1\}$, where with probability one on each step the chain moves to the other state. Then $p^n(x,y) = 0$ if $x \ne y$ and $n$ is even. A less trivial case is the simple random walk on the integers. We need to eliminate this periodicity.

Suppose $x$ is recurrent, let $I_x = \{n \ge 1 : p^n(x,x) > 0\}$, and let $d_x$ be the g.c.d. (greatest common divisor) of $I_x$; $d_x$ is called the period of $x$.

Proposition 11.18. If $r(x,y) > 0$, then $d_y = d_x$.

Proof. Since $x$ is recurrent, $r(y,x) > 0$. Choose $K$ and $L$ such that $p^K(x,y) > 0$ and $p^L(y,x) > 0$. Then

$$p^{K+L+n}(y,y) \ge p^L(y,x)\,p^n(x,x)\,p^K(x,y),$$

so taking $n = 0$, we have $p^{K+L}(y,y) > 0$, and hence $d_y$ divides $K+L$. Moreover, if $p^n(x,x) > 0$, then $p^{K+L+n}(y,y) > 0$, so $d_y$ divides $K+L+n$ and therefore $d_y$ divides $n$; that is, $d_y$ is a divisor of every element of $I_x$. Hence $d_y$ divides $d_x$. By symmetry $d_x$ divides $d_y$.
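For a small finite chain, the period of a state can be computed straight from the definition. A minimal sketch (the cutoff n_max is a practical assumption: for chains this small, the g.c.d. of $I_x$ stabilizes after a few terms):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, x, n_max=50):
    """d_x = gcd{n >= 1 : p^n(x,x) > 0}, with n truncated at n_max."""
    I_x, Pn = [], np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P                 # Pn is now P^n
        if Pn[x, x] > 0:
            I_x.append(n)
    return reduce(gcd, I_x)

# The two-state flip chain from the text has period 2.
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(period(flip, 0))  # 2

# Adding a holding probability makes the chain aperiodic.
lazy = np.array([[0.5, 0.5],
                 [1.0, 0.0]])
print(period(lazy, 0))  # 1
```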

Proposition 11.19. If $d_x = 1$, there exists $m_0$ such that $p^m(x,x) > 0$ whenever $m \ge m_0$.

Proof. First of all, $I_x$ is closed under addition: if $m, n \in I_x$, then

$$p^{m+n}(x,x) \ge p^m(x,x)\,p^n(x,x) > 0.$$

Secondly, if there exists $N$ such that $N, N+1 \in I_x$, let $m_0 = N^2$. If $m \ge m_0$, then $m - N^2 = kN + r$ for some $k \ge 0$ and $0 \le r < N$, and

$$m = r + N^2 + kN = r(N+1) + (N - r + k)N \in I_x.$$

Third, pick $n_0 \in I_x$ and $k > 0$ such that $n_0 + k \in I_x$. If $k = 1$, we are done. Since $d_x = 1$, there exists $n_1 \in I_x$ such that $k$ does not divide $n_1$. We have $n_1 = mk + r$ for some $0 < r < k$. Note $(m+1)(n_0 + k) \in I_x$ and $(m+1)n_0 + n_1 \in I_x$. The difference between these two numbers is

$$(m+1)(n_0 + k) - [(m+1)n_0 + n_1] = (m+1)k - n_1 = k - r \le k - 1.$$

So now we have two numbers in $I_x$ differing by at most $k-1$. Repeating at most $k$ times, we get two numbers in $I_x$ differing by at most 1, and we are done.

We write $d$ for $d_x$. A chain is aperiodic if $d = 1$. If $d > 1$, we say $x \sim y$ if $p^{kd}(x,y) > 0$ for some $k > 0$. We divide $S$ into equivalence classes $S_1, \ldots, S_d$. Every $d$ steps the chain started in $S_i$ is back in $S_i$, so we look at $p' = p^d$ on $S_i$.

Theorem 11.20. Suppose the chain is irreducible, aperiodic, and has a stationary distribution $\pi$. Then $p^n(x,y) \to \pi(y)$ as $n \to \infty$.

Proof. The idea is to take two copies of the chain with different starting distributions, let them run independently until they couple, i.e., hit each other, and then have them move together. So define

$$q((x_1,y_1),(x_2,y_2)) = \begin{cases} p(x_1,x_2)\,p(y_1,y_2) & \text{if } x_1 \ne y_1, \\ p(x_1,x_2) & \text{if } x_1 = y_1,\ x_2 = y_2, \\ 0 & \text{otherwise.} \end{cases}$$

Let $Z_n = (X_n, Y_n)$ and $T = \min\{i : X_i = Y_i\}$. We have

$$P(X_n = y) = P(X_n = y, T \le n) + P(X_n = y, T > n) = P(Y_n = y, T \le n) + P(X_n = y, T > n),$$

while

$$P(Y_n = y) = P(Y_n = y, T \le n) + P(Y_n = y, T > n).$$

Subtracting,

$$P(X_n = y) - P(Y_n = y) = P(X_n = y, T > n) - P(Y_n = y, T > n) \le P(X_n = y, T > n) \le P(T > n).$$

Using symmetry,

$$|P(X_n = y) - P(Y_n = y)| \le P(T > n).$$

Suppose we let $Y_0$ have distribution $\pi$ and $X_0 = x$. Then

$$|p^n(x,y) - \pi(y)| \le P(T > n).$$

It remains to show $P(T > n) \to 0$. To do this, consider another chain $W_n = (X_n, Y_n)$, where now we take $X_n, Y_n$ independent. Define

$$r((x_1,y_1),(x_2,y_2)) = p(x_1,x_2)\,p(y_1,y_2).$$

The chain under the transition probabilities $r$ is irreducible. To see this, note there exist $K$ and $L$ such that $p^K(x_1,x_2) > 0$ and $p^L(y_1,y_2) > 0$. If $M$ is large, then $p^{L+M}(x_2,x_2) > 0$ and $p^{K+M}(y_2,y_2) > 0$ by aperiodicity and Proposition 11.19. So $p^{K+L+M}(x_1,x_2) > 0$ and $p^{K+L+M}(y_1,y_2) > 0$, and hence

$$r^{K+L+M}((x_1,y_1),(x_2,y_2)) > 0.$$

It is easy to check that $\pi'(a,b) = \pi(a)\,\pi(b)$ is a stationary distribution for $W$. Hence $W_n$ is recurrent, and hence it will hit $(x,x)$; hence the time to hit the diagonal $\{(y,y) : y \in S\}$ is finite. However, the distribution of the time to hit the diagonal is the same as that of $T$.
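The coupling bound at the heart of the proof can be observed numerically. Below is a minimal sketch on the illustrative 3-state chain used earlier (an arbitrary irreducible aperiodic example): it estimates $P(T > n)$ for the independent coupling started from $X_0 = 0$ and $Y_0 \sim \pi$, and compares it with $\max_y |p^n(0,y) - \pi(y)|$, which the theorem's argument bounds by $P(T > n)$.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

def coupling_time(x0):
    """Run X from x0 and Y from pi independently until they first meet."""
    x, y, t = x0, rng.choice(3, p=pi), 0
    while x != y:
        x = rng.choice(3, p=P[x])
        y = rng.choice(3, p=P[y])
        t += 1
    return t

T = np.array([coupling_time(0) for _ in range(10_000)])
for n in (1, 2, 5, 10):
    err = np.abs(np.linalg.matrix_power(P, n)[0] - pi).max()
    print(f"n={n:2d}  max_y |p^n(0,y) - pi(y)| = {err:.4f}   P(T > n) ~ {(T > n).mean():.4f}")
```

Both columns decay with $n$, with the transition probabilities converging to $\pi$ at least as fast as $P(T > n)$ vanishes, as the theorem asserts.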
