MATH 56A SPRING 2008 STOCHASTIC PROCESSES

2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that $\hat a$ is the smallest positive solution of the equation $\hat a = \phi(\hat a)$. Then, to show that $a = \hat a$, it suffices to show that $a \le \hat a$. But $a = \lim_{n\to\infty} a_n$ where $a_n := P(X_n = 0 \mid X_0 = 1)$. So, it is enough to show (ETS) that $a_n \le \hat a$.

The proof of this is by induction on $n$. If $n = 0$ then $a_0 = 0$. Since $\hat a > 0$, $a_0 = 0 < \hat a$. So, the statement holds for $n = 0$. Suppose the statement holds for $n$. Then we have to show that it holds for $n + 1$. But
$$a_{n+1} = P(X_{n+1} = 0 \mid X_0 = 1) = \sum_k P(X_{n+1} = 0 \mid X_1 = k)\,P(X_1 = k \mid X_0 = 1) = \sum_k P(X_{n+1} = 0 \mid X_1 = 1)^k\, p_k = \sum_k a_n^k\, p_k$$
(given $X_1 = k$, each of the $k$ families must die out independently within the next $n$ steps, and each does so with probability $a_n$). By the assumption that $a_n \le \hat a$,
$$\sum_k a_n^k\, p_k \le \sum_k \hat a^k\, p_k = \phi(\hat a) = \hat a.$$
So, $a_{n+1} \le \hat a$. By induction this implies that $a_n \le \hat a$ for all $n$. But then, $a = \lim_{n\to\infty} a_n \le \hat a$.
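The same recursion $a_{n+1} = \phi(a_n)$, started at $a_0 = 0$, can be iterated numerically. The sketch below is an illustration only; the offspring distribution $p_0 = 1/4$, $p_1 = 1/4$, $p_2 = 1/2$ is an assumed example, not one from the course. The iterates increase to the smallest positive fixed point of $\phi$, i.e. the extinction probability.

```python
# Illustrative sketch (assumed offspring distribution, not from the notes):
# iterate a_{n+1} = phi(a_n) starting from a_0 = 0; the limit is the smallest
# positive solution of a = phi(a), i.e. the extinction probability.

def phi(s, p):
    """Generating function phi(s) = sum_k p_k s^k of the offspring distribution."""
    return sum(p_k * s**k for k, p_k in enumerate(p))

p = [0.25, 0.25, 0.5]      # p_0, p_1, p_2; mean offspring = 1.25 > 1
a = 0.0                    # a_0 = P(X_0 = 0 | X_0 = 1) = 0
for n in range(200):
    a = phi(a, p)          # a_n increases toward the smallest fixed point
print(a)                   # approximately 0.5 for this distribution
```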

2.3. Random walk on $\mathbb{Z}^d$. We talked about random walks on the $d$-dimensional integer lattice $\mathbb{Z}^d$ and discussed whether they are recurrent or transient. We take
$$S = \mathbb{Z}^d = \{(x_1,\dots,x_d) \in \mathbb{R}^d \mid x_i \in \mathbb{Z}\}.$$
This is the integer lattice in $\mathbb{R}^d$. Each point $x \in \mathbb{Z}^d$ has exactly $2d$ neighbors $y$, namely the integer points which are exactly 1 unit away from $x$. This is easiest to see when $x = (0, 0, \dots, 0)$ is the origin. Then the neighbors are
$$(\pm1, 0, \dots, 0),\ (0, \pm1, 0, \dots, 0),\ \dots,\ (0, \dots, 0, \pm1).$$
There are 2 in each of the $d$ directions. If the point moves to a neighbor with equal probability then we get
$$p(x,y) = P(X_{n+1} = y \mid X_n = x) = \begin{cases} \dfrac{1}{2d} & \text{if } |x - y| = 1 \\ 0 & \text{otherwise.}\end{cases}$$

2.3.1. recurrent or transient? I reminded you that a Markov chain is irreducible if it has only one communication class.

Definition 2.4. An irreducible countably infinite Markov chain is called recurrent if, for any starting point $X_0 = x$,
(1) you almost surely return to $x$;
(2) you a.s. return to $x$ an infinite number of times;
(3) you a.s. go to every other state $z \in S$ an infinite number of times.

The conclusion is: if a Markov chain is recurrent then you will, with probability one, visit every state in the system an infinite number of times. This is the definition. To figure out whether it holds, we turn it into an equation: you return to $x$ an infinite number of times exactly when the expected number of visits to $x$ is infinite,
$$E(\#\text{ returns to } x \mid X_0 = x) = \infty.$$
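As a concrete illustration of this transition rule (a sketch with assumed parameters, not part of the notes), one step of the simple random walk on $\mathbb{Z}^d$ just picks one of the $2d$ neighbors uniformly, and one can count visits to the origin in a finite run:

```python
# Sketch (illustrative only): simulate the simple random walk on Z^d and count
# how often it revisits the origin in a finite run.
import random

def step(x):
    """Move to one of the 2d neighbors of x, each with probability 1/(2d)."""
    d = len(x)
    i = random.randrange(d)        # which coordinate to change
    s = random.choice((-1, 1))     # direction of the move
    y = list(x)
    y[i] += s
    return tuple(y)

d = 2
x = (0,) * d
visits = 0
for n in range(100_000):           # finite run; length chosen arbitrarily
    x = step(x)
    if x == (0,) * d:
        visits += 1
print(visits)   # for d <= 2 this count keeps growing as the run gets longer
```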

2.3.2. the case $d = 1$. In the one dimensional case, we are talking about a random walk on the integers $S = \mathbb{Z}$. At each step we go either left or right with probability $\frac12$. This is periodic with period 2: we need an even number of steps to get back to the starting point. Let's say the starting point is 0. Then, we want to calculate $p_{2n}(0,0)$. But, to return to 0 after $2n$ moves, we need to take exactly $n$ moves to the right and $n$ moves to the left. There are
$$\binom{2n}{n} = \frac{(2n)!}{n!\,n!}$$
ways to do this. Each way, say RRLLRL, has probability $(1/2)^{2n}$. So, the total probability is
$$p_{2n}(0,0) = \binom{2n}{n}\left(\frac12\right)^{2n} = \frac{(2n)!}{n!\,n!}\left(\frac12\right)^{2n}.$$
We used Stirling's formula:
$$n! \sim e^{-n}\, n^n \sqrt{2\pi n}$$
where $\sim$ means the ratio of the two sides converges to 1 as $n \to \infty$. (In fact, you can check with a computer that this ratio starts at 1.084437 and slowly decreases to 1 as $n$ goes to $\infty$.) So,
$$(2n)! \sim e^{-2n}(2n)^{2n}\sqrt{4\pi n}$$
and
$$\frac{(2n)!}{n!\,n!} \sim \frac{e^{-2n}(2n)^{2n}\sqrt{4\pi n}}{e^{-2n}\, n^{2n}\, 2\pi n} = \frac{2^{2n}}{\sqrt{\pi n}},$$
which means that
$$p_{2n}(0,0) \sim \frac{1}{\sqrt{\pi n}}.$$
The expected number of return visits to 0 is
$$\sum_{n=1}^{\infty} p_{2n}(0,0) \approx \sum_{n=1}^{\infty} \frac{1}{\sqrt{\pi n}} > \sum_{n=1}^{\infty} \frac{1}{\pi n} = \infty.$$
So, 0 is recurrent in $\mathbb{Z}$. Since there is only one communication class, the entire Markov chain is recurrent by definition.
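A quick numerical check of this asymptotic (an illustration, not part of the notes) compares the exact value $p_{2n}(0,0) = \binom{2n}{n}2^{-2n}$ with the Stirling estimate $1/\sqrt{\pi n}$:

```python
# Sketch: compare the exact return probability with the Stirling estimate.
from math import comb, pi, sqrt

for n in (1, 10, 100, 1000):
    exact = comb(2 * n, n) / 4**n           # C(2n, n) * (1/2)^(2n)
    approx = 1 / sqrt(pi * n)
    print(n, exact, approx, exact / approx)  # the ratio tends to 1
```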

2.3.3. Higher dimensions. In the planar lattice $\mathbb{Z}^2$, both coordinates must be 0 at the same time in order for the particle to return to the origin. Therefore,
$$p_{2n}(0,0) \approx \frac{1}{\pi n}$$
and
$$\sum_{n=1}^{\infty} \frac{1}{\pi n} = \infty,$$
so $\mathbb{Z}^2$ is also recurrent. When $d > 2$ we get
$$p_{2n}(0,0) \approx \frac{1}{(\pi n)^{d/2}}$$
and
$$\sum_{n=1}^{\infty} \frac{1}{(\pi n)^{d/2}} < \infty,$$
which converges by the integral test. So, the expected number of visits is finite and $\mathbb{Z}^3$, $\mathbb{Z}^4$, etc. are transient.

2.3.4. criterion for transient Markov chains. Next I explained how you can tell when a Markov chain is transient. I used the following function. Take a fixed state $z \in S$. Let $\alpha(x)$ be the probability that you go from $x$ to $z$. I.e., you start at $x$ and see whether you are ever in state $z$:
$$\alpha(x) := P(X_n = z \text{ for some } n \ge 0 \mid X_0 = x).$$
Since the condition includes the case $n = 0$, we have
(1) $\alpha(z) = 1$.
Also,
(2) $0 \le \alpha(x) \le 1$ (since $\alpha(x)$ is a probability).
And:
(3) If $x \neq z$ then
$$\boxed{\;\alpha(x) = \sum_{y \in S} p(x,y)\,\alpha(y)\;}$$
(to get from $x \neq z$ to $z$ you take one step to some $y$ and then go from $y$ to $z$).

This boxed equation, when written in matrix form, is $\alpha = P\alpha$. In other words, $\alpha$ is a right eigenvector of $P$ with eigenvalue 1. But we have another eigenvector with the same eigenvalue: $\beta = P\beta$ where $\beta = (1, 1, 1, \dots)$ (actually $\beta$ is a column vector). So there is more than one eigenvector with eigenvalue 1.
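As a small worked illustration of the boxed equation (a hypothetical four-state chain, not an example from the notes), the equations $\alpha(x) = \sum_y p(x,y)\,\alpha(y)$ for $x \neq z$ can be solved directly. Note that both the true hitting probabilities and the constant function 1 satisfy conditions (1)-(3); the theorem that follows singles out the right one.

```python
# Sketch: a hypothetical 4-state chain with states 0,1,2,3, target z = 0, and an
# absorbing "trap" at state 3.  Transition probabilities (assumed example):
#   p(1,0) = p(1,2) = 1/2,   p(2,1) = p(2,3) = 1/2,   p(3,3) = 1.
# For x != z the boxed equation gives a linear system for alpha(1), alpha(2),
# with alpha(0) = 1 and, for the minimal solution, alpha(3) = 0.
import numpy as np

# alpha(1) = 1/2 * 1        + 1/2 * alpha(2)
# alpha(2) = 1/2 * alpha(1) + 1/2 * 0
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])
b = np.array([0.5, 0.0])
alpha_1, alpha_2 = np.linalg.solve(A, b)
print(alpha_1, alpha_2)   # 2/3 and 1/3: smaller than the all-ones solution
```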

The following theorem tells us which vector we want.

Theorem 2.5. The function $\alpha$ is the smallest function satisfying the three conditions above. In other words, if $\hat\alpha$ is any function satisfying these three conditions, then
$$\alpha(x) \le \hat\alpha(x)$$
for all states $x$.

The next theorem tells us why this is important.

Theorem 2.6. The Markov chain is recurrent if $\alpha = \beta$, i.e., $\alpha(x) = 1$ for all $x \in S$. The Markov chain is transient if $\alpha < \beta$, i.e., $\alpha(x) < 1$ for some state $x \in S$.

This second theorem is just the definition of recurrent! $\alpha(x) = 1$ means we almost surely go from $x$ to $z$. This is the same as saying the chain is recurrent.
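Theorem 2.6 can be checked numerically, at least roughly. The sketch below (an illustration with assumed parameters, not from the notes) estimates $\alpha(x)$ by Monte Carlo for the simple random walk started one step from the origin; because the horizon is finite it only gives a lower bound on $\alpha(x)$, but the estimate is close to 1 for $d = 1$ and well below 1 for $d = 3$.

```python
# Sketch: finite-horizon Monte Carlo lower bound on alpha(x) = P(ever hit 0 | X_0 = x)
# for the simple random walk on Z^d, started at a neighbor of the origin.
import random

def hits_origin(d, max_steps):
    x = [1] + [0] * (d - 1)            # start one step away from the origin
    for _ in range(max_steps):
        i = random.randrange(d)
        x[i] += random.choice((-1, 1))
        if all(c == 0 for c in x):
            return True
    return False                        # did not hit 0 within the horizon

for d in (1, 3):
    trials = 1000
    hits = sum(hits_origin(d, 5000) for _ in range(trials))
    print(d, hits / trials)   # near 1 for d = 1; roughly 0.3-0.35 for d = 3
```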

The proof of the first theorem is the same as the proof of the extinction lemma.

Proof. First I pointed out that
$$\alpha(x) = \lim_{n\to\infty} \alpha_n(x)$$
where
$$\alpha_n(x) := P(X_0 = z \text{ or } X_1 = z \text{ or } \cdots \text{ or } X_n = z \mid X_0 = x).$$
So, it is enough to show that $\alpha_n(x) \le \hat\alpha(x)$ for all $n \ge 0$. I proved this by induction on $n$. If $n = 0$ then
$$\alpha_0(x) = P(X_0 = z \mid X_0 = x) = \begin{cases} 1 & \text{if } x = z \\ 0 & \text{if } x \neq z.\end{cases}$$
But $\hat\alpha$ is a function that satisfies the three conditions, and condition (1) says that $\hat\alpha(z) = 1$, so $\alpha_0(z) = 1 \le \hat\alpha(z)$. Condition (2) says that $\hat\alpha(x) \ge 0$ for all $x$, so $\alpha_0(x) = 0 \le \hat\alpha(x)$ for $x \neq z$. So, the statement holds for $n = 0$.

Suppose the statement holds for $n$ (i.e., $\alpha_n(x) \le \hat\alpha(x)$ for all $x \in S$). Then, for $x \neq z$,
$$\alpha_{n+1}(x) = \sum_{y \in S} p(x,y)\,\alpha_n(y).$$
By induction, this is
$$\le \sum_{y \in S} p(x,y)\,\hat\alpha(y),$$
which is equal to $\hat\alpha(x)$ by condition (3). (For $x = z$ we simply have $\alpha_{n+1}(z) = 1 = \hat\alpha(z)$.) So, $\alpha_{n+1}(x) \le \hat\alpha(x)$ for all $x$. By induction the statement holds for all $n$. Taking limits we get that
$$\alpha(x) = \lim_{n\to\infty} \alpha_n(x) \le \hat\alpha(x).$$

In the case of a random walk on $S = \mathbb{Z}$ with unequal probabilities of going right and left, we can find the function $\alpha$ and prove that the Markov chain is transient.
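The monotone iteration $\alpha_{n+1}(x) = \sum_y p(x,y)\,\alpha_n(y)$ used in the proof can also be run numerically on a truncated state space. The sketch below uses assumed parameters (right probability $p = 0.7$, target $z = 0$, a window wide enough that the truncation cannot affect the printed values within the number of iterations used); it shows $\alpha(x) < 1$ for $x > 0$, so this biased walk is transient by Theorem 2.6.

```python
# Sketch: approximate alpha(x) = P(ever hit 0 | X_0 = x) for the biased walk on Z
# (step +1 with probability p, -1 with probability 1-p) by the iteration from the
# proof, restricted to the window [-M, M].
p = 0.7                     # assumed example; drift to the right, away from z = 0
M, iters = 400, 300         # iters < M, so the cutoff at the window edge is irrelevant
alpha = {x: (1.0 if x == 0 else 0.0) for x in range(-M, M + 1)}   # alpha_0

for _ in range(iters):
    new = {}
    for x in range(-M, M + 1):
        if x == 0:
            new[x] = 1.0                          # condition (1): alpha(z) = 1
        else:
            right = alpha.get(x + 1, 0.0)
            left = alpha.get(x - 1, 0.0)
            new[x] = p * right + (1 - p) * left   # condition (3)
    alpha = new

for x in range(1, 6):
    print(x, alpha[x])      # close to ((1-p)/p)**x < 1, so the chain is transient
```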