Stat 205B: Probability Theory (Spring 2003)                         Lecture: 2
Hitting Probabilities
Lecturer: James W. Pitman                               Scribe: Brian Milch

2.1 Hitting probabilities

Consider a Markov chain with a countable state space S and a transition matrix P. Suppose we want to find P_i(chain hits A before C) for some i ∈ S and disjoint subsets A and C of S. We define the boundary B := A ∪ C. It is obvious that P_i(chain hits A before C) for i ∈ B^c is determined by the transition probabilities P(j, k) for j ∉ B, k ∈ S. Since this probability does not involve P(b, j) for b ∈ B, j ∈ S, there is no loss of generality in assuming, as we will from now on, that B is an absorbing boundary, meaning that P(b, b) = 1 for all b ∈ B.

If we let τ := inf{n ≥ 0 : X_n ∈ B}, where B is absorbing, then we can restate our problem as finding the hitting probability P_i(X_τ ∈ A) for some A ⊆ B. Define:

    h_A(i) := P_i(X_τ ∈ A)

We can condition on X_1 and decompose to get:

    P_i(X_τ ∈ A) = Σ_j P_i(X_1 = j, X_τ ∈ A)

    h_A(i) = Σ_j P_{ij} h_A(j)        (∗)

To see that we're handling the boundary cases correctly, note that if i ∈ B, then P_{ij} = 0 for j ≠ i. And the values of h_A on the boundary are:

    h_A(j) = 1 if j ∈ A,    h_A(j) = 0 if j ∈ B \ A

The assumption that B is absorbing makes the equations (∗) hold for all i ∈ S, which simplifies the discussion. Another way to write (∗) is: h_A = P h_A. Thus h_A is a harmonic function, as defined in the previous lecture. Also, h_A satisfies the boundary condition h = 1_A on B, meaning h_A(i) = 1_A(i) for i ∈ B. Thus the function h = h_A solves the boundary value problem

    h = P h,    h = 1_A on B        (BVP)

One of the homework problems (an easy application of martingale theory) is to show that if P_i(hit B eventually) = 1 for all i ∈ S, then h = h_A is the unique solution of this (BVP). This can also be deduced from the more general characterization of h_A given in the following theorem, by consideration of both h_A and h_{B\A}.
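
On a finite state space, (BVP) is just a linear system, and when B is hit almost surely from every start it has a unique solution. Here is a minimal numerical sketch of solving it; the 5-state gambler's-ruin chain and all the names below are my own illustration, not an example from the lecture.

```python
import numpy as np

# Toy chain on S = {0, 1, 2, 3, 4} with absorbing boundary B = {0, 4};
# we compute h_A(i) = P_i(hit A = {4} before C = {0}).
p, q = 0.6, 0.4
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0                 # absorbing boundary: P(b, b) = 1
for i in range(1, 4):
    P[i, i + 1], P[i, i - 1] = p, q

interior, boundary = [1, 2, 3], [0, 4]
ind_A = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # the indicator 1_A

# On the interior, h = P h reads h_I = P_II h_I + P_IB (1_A on B),
# so h_I = (I - P_II)^(-1) P_IB (1_A on B).
P_II = P[np.ix_(interior, interior)]
P_IB = P[np.ix_(interior, boundary)]
h = ind_A.copy()
h[interior] = np.linalg.solve(np.eye(3) - P_II, P_IB @ ind_A[boundary])
print(h)   # h_A(1), ..., h_A(3) lie strictly between 0 and 1
```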

What if P_i(hit B eventually) < 1? Then consider the escape probability:

    e_B(i) := P_i(never hit B) = P_i(X_n ∉ B for all n ≥ 0)

For example, consider a biased random walk with P(i, i+1) = p and P(i, i−1) = q. Suppose p > q, and let B = {0}. Then e_B(1) > 0. By conditioning on X_1, we can show e_B = P e_B: that is, escape probabilities are also harmonic functions. Clearly, e_B = 0 on B.

Now, note that the set of solutions of h = P h is a vector space: if we add harmonic functions or take scalar multiples, we still get harmonic functions. We have seen that:

    h = h_A solves h = P h and h = 1_A on B
    h = e_B solves h = P h and h = 0 on B

Therefore for any constant c, h = h_A + c e_B solves h = P h and h = 1_A on B. So when e_B is not identically zero, h_A is not the unique h such that h = P h and h = 1_A on B. The problem now is how to characterize the hitting probability vector h_A among all solutions of the (BVP).

Theorem 2.1 h = h_A is the minimal non-negative solution of (BVP). That is, if h solves (BVP) and h(i) ≥ 0 for all i ∈ S, then h(i) ≥ h_A(i) for all i ∈ S.

Proof: First note that

    h_A(i) = P_i(X_n ∈ A for some n) = lim_{n→∞} P_i(X_n ∈ A) = lim_{n→∞} (P^n 1_A)_i

where the limit exists by the absorbing boundary assumption, because X_n ∈ A implies X_{n+1} ∈ A.

Suppose h ≥ 0 and h satisfies (BVP). Then h ≥ 1_A, so

    P^n h ≥ P^n 1_A for all n, because P_{ij} ≥ 0.

But P^n h = h because h is harmonic. So for all i ∈ S, we have:

    h(i) ≥ (P^n 1_A)_i for all n,
    h(i) ≥ lim_{n→∞} (P^n 1_A)_i = h_A(i).    □
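
The identity h_A(i) = lim_n (P^n 1_A)_i from the proof also gives a computational scheme: iterate h ← P h starting from h = 1_A, and the values increase to the minimal solution. A sketch on the same toy chain as above (again my own example, not the notes'):

```python
import numpy as np

# (P^n 1_A)_i = P_i(X_n in A) increases to h_A(i), because A is absorbing.
p, q = 0.6, 0.4
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0
for i in range(1, 4):
    P[i, i + 1], P[i, i - 1] = p, q

h = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # start from h = 1_A, A = {4}
for _ in range(500):
    h = P @ h                             # after n steps, h = P^n 1_A
print(h)   # matches the direct linear solve above
```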

2.1.1 Example: Simple random walk on {0, 1, 2, ...} with absorbing 0

Let P(0, 0) = 1, and for i ≥ 1, let P(i, i+1) = p and P(i, i−1) = q = 1 − p. Let h_0(i) = P_i(ever hit 0). Then h = h_0 solves:

    h(i) = q h(i−1) + p h(i+1) for i ≥ 1,    h(0) = 1

Rather than solving these equations directly, we can use a shortcut based on the observation that P_1(ever hit 0) = P_i(ever hit i−1) for all i ≥ 1. So by the strong Markov property:

    h(2) = h(1)^2,    and in general    h(i) = (h(1))^i

So we just have to compute h(1). Conditioning on the first step:

    h(1) = q + p h(1)^2,    i.e.    x = q + p x^2    where x = h(1).

The roots of this equation are x ∈ {1, q/p}. So we have two non-negative harmonic functions that solve (BVP):

    h(i) ≡ 1    and    h(i) = (q/p)^i

By the theorem proved above, h_0 is the smaller one. That is:

    h_0(i) = min(1, (q/p)^i) = 1 if p ≤ q,    (q/p)^i if p > q
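
As a sanity check on the formula h_0(i) = min(1, (q/p)^i), one can simulate the walk. The truncation at max_steps below is my own device for declaring that a long excursion has escaped to infinity; it biases the estimate down very slightly but is fine for a rough check.

```python
import random

def hit_zero(i, p, max_steps=10_000):
    """Simulate the walk from state i; return True if it ever hits 0."""
    for _ in range(max_steps):
        if i == 0:
            return True
        i += 1 if random.random() < p else -1
    return False   # treat a very long excursion as having escaped

p, i, trials = 0.6, 3, 20_000
est = sum(hit_zero(i, p) for _ in range(trials)) / trials
print(est, ((1 - p) / p) ** i)   # both should be near (2/3)^3 ≈ 0.296
```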

2.1.2 Example: Extinction probabilities for a branching process

Suppose we are given an offspring distribution p_0, p_1, p_2, ... with Σ_i p_i = 1, p_1 < 1, and p_i ≥ 0. A branching process models a population where each individual has a number of offspring with this distribution (that is, the probability of an individual having 0 offspring is p_0, etc.). The chain is:

    X_0 = initial number of individuals
    X_n = number of individuals in generation n

Clearly the state 0 is absorbing. So h_0(i) = P_i(population dies out) = P_i(X_n = 0 eventually). Note that if there are i individuals in a generation, then their descendants form i independent trees. These i trees must all die out in order for the entire population to die out. So h_0(i) = h_0(1)^i.

Now, by summing over X_1, we find that:

    h_0(1) = Σ_{j=0}^∞ p_j (h_0(1))^j

We now introduce a probability generating function:

    G(z) := Σ_{j=0}^∞ p_j z^j = E(z^X)

where Pr(X = j) = p_j. Note that z = h_0(1) solves z = G(z): in fact, h_0(1) is the minimal solution in [0, 1] of z = G(z).

[Figure 2.1: Plots of the curve G(z) (solid) and the 45-degree line G(z) = z (dashed) for branching processes in the subcritical (µ < 1), critical (µ = 1), and supercritical (µ > 1) cases.]

Thus we can find h_0(1) by plotting G(z) versus z over the interval [0, 1], and observing where the plot intersects the 45-degree line G(z) = z. Note that G(0) = p_0 ≥ 0, and G(1) = Σ_j p_j = 1. Also, G is convex. As illustrated in Figure 2.1, the number of roots depends on the quantity:

    µ := Σ_{j=0}^∞ j p_j = G′(1)

In the subcritical case where µ < 1, the only root is at z = 1, so h_0(1) = 1. In the critical case where µ = 1, the curve G(z) is tangent to the line G(z) = z at z = 1. Again, z = 1 is the only root, so h_0(1) = 1. But in the supercritical case where µ > 1, there are two roots, and h_0(1) is the smaller one.

The conclusion is that for µ ≤ 1, the branching process dies out almost surely. For µ > 1, the branching process started from i individuals dies out with probability (h_0(1))^i, where h_0(1) is the smaller of the two solutions of z = G(z). The case p_1 = 1 is special: then µ = 1, but G(z) = z for all z ∈ [0, 1], so h_0(1) = 0 (the smallest root).
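
Numerically, h_0(1) can be found by fixed-point iteration: G is increasing on [0, 1], so the sequence G^n(0) increases to the smallest root of z = G(z). A sketch with an offspring distribution of my own choosing, (p_0, p_1, p_2) = (1/4, 1/4, 1/2), which is supercritical with µ = 5/4:

```python
probs = [0.25, 0.25, 0.5]      # offspring law; mu = 0.25 + 2 * 0.5 = 1.25

def G(z):
    """Probability generating function G(z) = sum_j p_j z^j."""
    return sum(p * z**j for j, p in enumerate(probs))

z = 0.0
for _ in range(1000):          # G^n(0) increases to the minimal root
    z = G(z)
print(z)   # -> 0.5, the smaller root of z = 0.25 + 0.25 z + 0.5 z^2
```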

Hitting Probabilities 2-5 possible τ values. See the textbook for details.