Math 45 - Homework 5 Solutions

1. Exercise 1.3.2, textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (0, 2, 4, 6, 8, 10):

P =
[  1    0    0    0    0    0
  1/2   0   1/2   0    0    0
  1/2   0    0    0   1/2   0
   0   1/2   0    0    0   1/2
   0    0    0   1/2   0   1/2
   0    0    0    0    0    1 ]

The corresponding diagram is shown below, with the absorbing states boxed.

[Figure 1: Digraph for the gambler problem.]

To find the probability of reaching state 10 before state 0, we need to solve the following system of equations, where h_i is the hitting probability of state 10 starting at i:

h_2 = (1/2) h_4 + (1/2) h_0
h_4 = (1/2) h_8 + (1/2) h_0
h_6 = (1/2) h_10 + (1/2) h_2
h_8 = (1/2) h_6 + (1/2) h_10.

Using the values h_0 = 0 and h_10 = 1, the system has a unique solution, which is easily found by simple substitution. We obtain h_2 = 1/5.
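In detail, the substitution runs

h_2 = (1/2) h_4 = (1/4) h_8 = (1/4)(1/2 + (1/2) h_6) = 1/8 + (1/8)(1/2 + (1/2) h_2) = 3/16 + (1/16) h_2,

so (15/16) h_2 = 3/16, i.e., h_2 = 1/5.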

Now denote by µ_i the expected time before reaching either 0 or 10 given that the process starts at state i. Then µ_0 = 0, µ_10 = 0, and the following system holds:

µ_2 = 1 + (1/2) µ_4 + (1/2) µ_0
µ_4 = 1 + (1/2) µ_8 + (1/2) µ_0
µ_6 = 1 + (1/2) µ_10 + (1/2) µ_2
µ_8 = 1 + (1/2) µ_6 + (1/2) µ_10.

This is also easily solved for µ_2 by the same kind of substitution. The result is µ_2 = 2.

The program below can be used for the simulation. (See the program stop_at_set in the cat-and-mouse problem of homework 4.)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% I will enumerate the states in the following
% order: 2 0 6 10 4 8, and use the program
% stop_at_set of homework 4.
rand('seed',1234)         % seed; part of the original value was lost
p = [1 0 0 0 0 0];        % initial distribution: start at state 2
P = zeros(6,6);
P(1,[2,5]) = 1/2;         % 2 -> 0 or 4
P(3,[1,4]) = 1/2;         % 6 -> 2 or 10
P(5,[2,6]) = 1/2;         % 4 -> 0 or 8
P(6,[3,4]) = 1/2;         % 8 -> 6 or 10
P(2,2) = 1; P(4,4) = 1;   % 0 and 10 are absorbing
A = [2 4];                % indices of the absorbing states
a = 0;
for j = 1:10000           % number of runs; the original count was lost
    x = stop_at_set(p,P,A,1);   % last argument as in homework 4
    a = a + length(x) - 1;      % steps taken before absorption
end
a = a/10000
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The above gave the simulated value µ_2 = 1.976.
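As a cross-check, both linear systems can also be solved directly, without simulation. This is a minimal sketch (it does not use stop_at_set), with the states ordered 0, 2, 4, 6, 8, 10 as in the matrix P above:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Exact solve of the hitting systems; states ordered 0 2 4 6 8 10.
P = [ 1   0   0   0   0   0  ;
      1/2 0   1/2 0   0   0  ;
      1/2 0   0   0   1/2 0  ;
      0   1/2 0   0   0   1/2;
      0   0   0   1/2 0   1/2;
      0   0   0   0   0   1  ];
t = [2 3 4 5];                  % indices of the transient states 2, 4, 6, 8
Q = P(t,t);                     % transitions among the transient states
h  = (eye(4) - Q) \ P(t,6);     % probabilities of hitting 10 before 0
mu = (eye(4) - Q) \ ones(4,1);  % expected times to absorption
[h(1) mu(1)]                    % displays 0.2000 and 2, matching h_2 and mu_2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%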

2. The linear system for finding the h_i in the cat and mouse problem (using notation as in the previous problem) has the boundary values h_7 = 1 and h_9 = 0; the remaining unknowns h_1, h_2, h_3, h_4, h_5, h_6, h_8 satisfy a linear system of the same type, which can be solved numerically. The result is

h_1 = 0.4, h_3 = 0.6, h_4 = 0.3, h_2 = h_5 = h_8 = 0.5, h_6 = 0.7.

The value we want is h_3 = 0.6.

3. Let X_0, X_1, X_2, ... be a Markov chain with state space S. Decide whether the random variable T in each case below is a stopping time or not. Explain your answer.

(a) T is the time of the rth visit to state i ∈ S, where r is a positive integer.

This is a stopping time. The event {T = n} can be written as the union of events of the type E_0 ∩ E_1 ∩ ... ∩ E_n, where E_n = {X_n = i}, and each E_k for k < n is either {X_k ≠ i} or {X_k = i}, the latter occurring exactly r - 1 times (so that the visit at time n is the rth). Therefore, {T = n} is an event expressible in terms of the X_k for k up to time n.

(b) T = T_A + n_0, where A is a subset of S, T_A is the hitting time of A, and n_0 is a positive integer.

This is a stopping time. In fact {T = n} = {T_A = n - n_0}. As T_A is a stopping time, the event {T = n} only depends on X_k for k ≤ n - n_0.

(c) T = T_A - n_0, where A is a subset of S, T_A is the hitting time of A, and n_0 is a positive integer.

Essentially the same argument used in the previous item shows that this T is not a stopping time, since the event {T = n} depends on X_k for k up to n + n_0.

(d) T is the first nonnegative integer n such that X_{n+1} = i, and T = ∞ if X_{n+1} ≠ i for all n.

This T is not a stopping time, since the event {T = n} depends on knowledge of X_{n+1}.
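All four parts rest on the same criterion, which is worth stating explicitly:

T is a stopping time for (X_n)_{n≥0} if and only if, for every n ≥ 0, the event {T = n} is determined by X_0, X_1, ..., X_n alone, i.e., {T = n} ∈ σ(X_0, ..., X_n).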

4. Exercise 1.5.1. This problem refers to the diagram of Figure 2.

[Figure 2: transition diagram for the five-state chain of Exercise 1.5.1, with edge probabilities 1/2 and 1/4.]

We know that for systems with finite state space, recurrent communicating classes are precisely the closed classes. So states 1, 5 and 3 are recurrent, whereas 2 and 4 are transient.

5. Exercise 1.5.2. Let (X_n)_{n≥0} be a Markov chain on S = {0, 1, 2, ...} with transition probabilities given by

p_01 = 1,    p_{i,i+1} + p_{i,i-1} = 1,    p_{i,i+1} = ((i+1)/i)^α p_{i,i-1}    (i ≥ 1).

Find the probability that X_n → ∞ as n → ∞. (In problem 1.3.4, α = 2.)

First note that the p_{i,i-1} and p_{i,i+1}, for i ≥ 1, are all nonzero. This shows that the system is irreducible, i.e., there is only one communicating class. Therefore, either all states are recurrent or they are all transient. If all states are recurrent, the probability that X_n → ∞ as n → ∞ is 0, and if all states are transient, the same probability is one. In fact, transience means that with probability 1, after some time the system will never again return to a given state. That is, for ω in a set of probability 1, for each positive integer L (a state) we can find a positive integer N (a time step) such that X_n(ω) > L for all n ≥ N. But this is just the calculus definition of the limit X_n(ω) → ∞ as n → ∞. Therefore, the probability we wish to find is either 1, if the system is transient, or 0 if it is recurrent.

This is decided by using Theorem 1.5.3. Note that, as p_01 = 1,

P_0(T_0 < ∞) = P_1(hit 0 in finite time) = h_1,

where h_1 is the probability of hitting 0 starting from state 1. This probability can be calculated just as in the birth-and-death example. It is equal to 1 if the series C = sum_{j=0}^∞ γ_j is divergent (see pages 16-17 of the text), and if the series is convergent,

h_1 = (1/C) sum_{j=1}^∞ γ_j < 1.

(Recall that γ_0 = 1 by definition.) Thus, the probability we seek is 0 or 1 depending on whether the series C is divergent or convergent, respectively.

A simple calculation shows that, since q_i/p_i = (i/(i+1))^α, the product defining γ_j telescopes:

γ_j = prod_{i=1}^j (q_i/p_i) = prod_{i=1}^j (i/(i+1))^α = 1/(j+1)^α,

so that

C = sum_{j=0}^∞ 1/(j+1)^α.

We know that this series is divergent exactly when α ≤ 1. The conclusion is that

P(X_n → ∞ as n → ∞) = 1 if α > 1, and 0 if α ≤ 1.
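For intuition, the dichotomy can also be observed empirically. Below is a minimal simulation sketch (independent of the homework 4 code; α = 2 gives a transient case, α = 1 a recurrent one):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Simulate the chain of problem 5 for a chosen alpha.
alpha = 2;
n = 100000; x = 0; path = zeros(1,n);
for k = 1:n
    if x == 0
        x = 1;                    % p_01 = 1
    else
        r = ((x+1)/x)^alpha;      % ratio p_{i,i+1} / p_{i,i-1}
        if rand < r/(1+r)         % so p_{i,i+1} = r/(1+r)
            x = x + 1;
        else
            x = x - 1;
        end
    end
    path(k) = x;
end
plot(path)   % for alpha > 1 the path typically drifts off to infinity
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%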

6. Exercise 1.6.1. The rooted binary tree is the infinite graph indicated in Figure 3. The diagram on the right-hand side of the figure shows the transition probabilities of moving up or down along the levels of the tree. Notice that if we prove that the process on the right is transient, then the same is true for the process along the tree diagram. Thus the problem is reduced to showing transience for a birth-and-death chain. Since the chain is irreducible, we only need to show that the bottom state, denoted by 0, is transient. By Theorem 1.5.3 it suffices to show that the probability of returning to 0 in finite time is less than 1. This probability is the same as the hitting probability h_1 of 0 given that the system starts at the second state from the bottom. The calculation used in Example 1.3.4 shows that this number is less than 1 if the series C = sum_{j=0}^∞ γ_j is convergent. But (with the notation of Example 1.3.4) q_i = q = 1/3 and p_i = p = 2/3 for i ≥ 1, so that γ_j = (q/p)^j = (1/2)^j. Therefore, the series is a convergent geometric series, and the chain is transient.

[Figure 3: On the left, the rooted tree transition diagram; at each vertex the probabilities of moving to any of the neighboring vertices are equal. The transition diagram on the right shows the probabilities of moving up (2/3) or down (1/3) on the set of levels.]
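For the record, the geometric series sums explicitly, so the return probability can be written down with the formula from problem 5:

C = sum_{j=0}^∞ (1/2)^j = 2,    h_1 = (1/C) sum_{j=1}^∞ (1/2)^j = (C - 1)/C = 1/2 < 1.

So a walk started one level above the root returns to the bottom level with probability only 1/2.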