MATH 564/STAT 555 Applied Stochastic Processes. Homework 2, September 18, 2015. Due September 30, 2015.


1. The generating function of a sequence $(a_n)_{n \ge 0}$ is defined as
$$A(s) := \sum_{n \ge 0} a_n s^n$$
for all $s \ge 0$ for which the series converges. In particular, if $0 \le a_n \le 1$, then $A(s)$ is defined for all $s \in [0, 1)$. If $X$ is a non-negative, integer-valued random variable and $p_n = P(X = n)$, $n \ge 0$, then $P(s) := \sum_{n \ge 0} p_n s^n$ is called the probability generating function of $X$. Note that $P(s)$ is defined for all $s \in [0, 1]$ and $P(s) = E[s^X]$.

(a) (1 point) Let $q_n = P(X > n)$, $n \ge 0$, and let $Q(s) = \sum_{n \ge 0} q_n s^n$ be the generating function of $(q_n)_{n \ge 0}$. Prove that
$$Q(s) = \frac{1 - P(s)}{1 - s}, \qquad 0 \le s < 1.$$

(b) (1 point) Conclude that $P'(1) = Q(1) = E[X]$ (possibly $+\infty$), where $P'(1)$ denotes the left derivative of $P$ at $1$.

Solution. (a) Since $(q_n)_{n \ge 0}$ is decreasing, hence bounded, the series $\sum_{n \ge 0} q_n s^n$ converges for all $s \in [0, 1)$, implying that $Q$ is well defined on $[0, 1)$. Further, $q_n = \sum_{i \ge n+1} P(X = i) = \sum_{i \ge n+1} p_i$, and therefore
$$Q(s) = \sum_{n \ge 0} \Big( \sum_{i \ge n+1} p_i \Big) s^n = \sum_{i \ge 1} p_i \sum_{n=0}^{i-1} s^n = \sum_{i \ge 1} p_i \, \frac{1 - s^i}{1 - s} = \frac{1}{1 - s} \Big( \sum_{i \ge 0} p_i - \sum_{i \ge 0} p_i s^i \Big) = \frac{1 - P(s)}{1 - s}.$$

(b) Let $s \uparrow 1$ in the formula for $Q$. By monotone convergence,
$$\lim_{s \uparrow 1} Q(s) = \sum_{n \ge 0} q_n = \sum_{n \ge 0} P(X > n) = E[X],$$
where both sides can be $+\infty$. On the other hand,
$$\lim_{s \uparrow 1} Q(s) = \lim_{s \uparrow 1} \frac{1 - P(s)}{1 - s} = \lim_{s \uparrow 1} \frac{P(1) - P(s)}{1 - s} = P'(1^-).$$
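Remark (numerical check, not part of the graded solution). The identity in (a) and the limit in (b) are easy to test on a concrete law. The sketch below uses a geometric distribution on $\{0, 1, 2, \ldots\}$; the success probability r = 0.3 and the truncation level N are arbitrary illustrative choices.

```python
# Sanity check of Problem 1 for X ~ Geometric on {0,1,2,...} with success
# probability r, so p_n = r(1-r)^n, q_n = P(X > n) = (1-r)^(n+1), and
# E[X] = (1-r)/r. r and N are illustrative assumptions.
r, N = 0.3, 2000
p = [r * (1 - r) ** n for n in range(N)]
q = [(1 - r) ** (n + 1) for n in range(N)]      # q_n = P(X > n)

def series(coeffs, s):
    # evaluate the (truncated) generating function sum_n coeffs[n] * s**n
    return sum(c * s ** n for n, c in enumerate(coeffs))

# part (a): Q(s) = (1 - P(s)) / (1 - s) on [0, 1)
for s in (0.0, 0.25, 0.5, 0.9):
    assert abs(series(q, s) - (1 - series(p, s)) / (1 - s)) < 1e-9

# part (b): Q(1) = sum_n q_n = E[X]
print(sum(q), (1 - r) / r)   # both approximately 2.3333
```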

2. First passage decomposition. Let $(X_n)_{n \ge 0}$ be a Markov chain with transition matrix $P = (p_{ij})$. For $j \in S$ let $T_j = \min\{n > 0 : X_n = j\}$ and define $f_{ij}^{(n)} = P_i(T_j = n)$. Let
$$P_{ij}(s) = \sum_{n \ge 0} p_{ij}^{(n)} s^n, \qquad F_{ij}(s) = \sum_{n \ge 0} f_{ij}^{(n)} s^n$$
be the generating functions of $(p_{ij}^{(n)})_{n \ge 0}$ and $(f_{ij}^{(n)})_{n \ge 0}$.

(a) (1 point) Justify the identity
$$p_{ij}^{(n)} = \sum_{k=1}^{n} f_{ij}^{(k)} p_{jj}^{(n-k)}, \qquad n \ge 1.$$

(b) (1 point) Deduce that $P_{ij}(s) = \delta_{ij} + F_{ij}(s) P_{jj}(s)$.

(c) (1 point) Prove from the above that $P_i(T_i < \infty) = 1$ if and only if $\sum_n p_{ii}^{(n)} = \infty$. This gives an alternative proof of a part of the dichotomy result from class.

Solution. (a) If $X_n = j$, then $T_j = k$ for some $k \le n$. Therefore
$$p_{ij}^{(n)} = P_i(X_n = j) = \sum_{k=1}^{n} P_i(X_n = j, \, T_j = k) = \sum_{k=1}^{n} P_i(T_j = k, \, X_{T_j + (n-k)} = j).$$
By the strong Markov property, conditional on $X_{T_j} = j$, the process $(X_{T_j + l} : l \ge 0)$ is a $(\delta_j, P)$-Markov chain independent of $\mathcal{F}_{T_j}$, that is, of $X_0, X_1, \ldots, X_{T_j}$. Since $X_{T_j} = j$ on $\{T_j = k\}$, $k \le n$, it follows that
$$P_i(T_j = k, \, X_{T_j + (n-k)} = j) = P_i(T_j = k) \, P_j(X_{n-k} = j) = f_{ij}^{(k)} p_{jj}^{(n-k)}.$$

(b) By definition $f_{ij}^{(0)} = 0$ and $p_{ij}^{(0)} = \delta_{ij}$, so
$$P_{ij}(s) = \delta_{ij} + \sum_{n \ge 1} p_{ij}^{(n)} s^n = \delta_{ij} + \sum_{n \ge 1} \sum_{k=1}^{n} f_{ij}^{(k)} s^k \, p_{jj}^{(n-k)} s^{n-k} = \delta_{ij} + \sum_{k \ge 1} f_{ij}^{(k)} s^k \sum_{n \ge k} p_{jj}^{(n-k)} s^{n-k} = \delta_{ij} + F_{ij}(s) P_{jj}(s),$$
where in the second equality we used $f_{ij}^{(0)} = 0$ and part (a), and the third follows by interchanging the order of summation.

(c) Note that $\lim_{s \uparrow 1} F_{ii}(s) = \sum_n f_{ii}^{(n)} = P_i(T_i < \infty)$ and $\lim_{s \uparrow 1} P_{ii}(s) = \sum_n p_{ii}^{(n)}$. From (b) we have
$$P_{ii}(s) = \frac{1}{1 - F_{ii}(s)}, \qquad 0 \le s < 1.$$
By letting $s \uparrow 1$, it follows that $\sum_n p_{ii}^{(n)} = \infty$ if and only if $P_i(T_i < \infty) = 1$.
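Remark (numerical check, not part of the graded solution). The decomposition in (a) can be verified coefficient by coefficient on a concrete chain. The 3-state matrix and the horizon N below are arbitrary illustrative choices; the first-passage probabilities $f_{ij}^{(n)}$ are computed with a "taboo" matrix that forbids visits to $j$ before time $n$.

```python
# Verify p_{ij}^{(n)} = sum_{k=1}^n f_{ij}^{(k)} p_{jj}^{(n-k)} numerically.
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])
i, j, N = 0, 1, 30

pn  = [np.linalg.matrix_power(P, n)[i, j] for n in range(N)]   # p_{ij}^{(n)}
pjj = [np.linalg.matrix_power(P, n)[j, j] for n in range(N)]   # p_{jj}^{(n)}

# f_{ij}^{(n)}: avoid j for the first n-1 steps, then step into j
Q = P.copy(); Q[:, j] = 0                    # taboo matrix (visits to j removed)
f = [0.0]
for n in range(1, N):
    f.append(np.linalg.matrix_power(Q, n - 1)[i] @ P[:, j])

for n in range(1, N):
    rhs = sum(f[k] * pjj[n - k] for k in range(1, n + 1))
    assert abs(pn[n] - rhs) < 1e-12
print("first-passage decomposition verified up to n =", N - 1)
```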

3. Let $(Y_i)_{i \ge 1}$ be a sequence of iid random variables with $P(Y_1 = 1) = p$, $P(Y_1 = -1) = q = 1 - p$. Define the simple random walk $(S_n)_{n \ge 0}$ by $S_0 = 0$, $S_n = Y_1 + \cdots + Y_n$, $n \ge 1$. Let $T = \min\{n > 0 : S_n = 1\}$ be the hitting time of $1$, $f_n = P(T = n)$, $n \ge 0$, and $F(s) = \sum_n f_n s^n$.

(a) (1 point) Justify the equation
$$f_n = q \sum_{k=1}^{n-2} f_k f_{n-1-k}, \qquad n \ge 2.$$

(b) (2 points) Deduce that $F(s) = ps + qs F(s)^2$ and conclude that
$$F(s) = \frac{1 - \sqrt{1 - 4pqs^2}}{2qs}.$$

(c) (1 point) Derive from the above that
$$P(T < \infty) = \frac{1 - |p - q|}{2q} = \begin{cases} 1, & p \ge q, \\ p/q, & p < q. \end{cases}$$

(d) (1 point) Finally, prove that
$$E[T] = \begin{cases} +\infty, & p \le q, \\ \dfrac{1}{p - q}, & p > q. \end{cases}$$

Solution. (a) First note that $f_0 = 0$ and $f_1 = p$ (to have $T = 1$, the first step must be to the right). Let $T_0 = \min\{n > 0 : S_n = 0\}$ and note that, since $(S_n)_{n \ge 0}$ is spatially homogeneous, $P_{-1}(T_0 = k) = P_0(T_1 = k) = f_k$. Let $n \ge 2$. If $T = n$, the first step must be to the left. Therefore
$$f_n = P(T = n, \, Y_1 = -1) = q \, P(T = n \mid Y_1 = -1) = q \, P_{-1}(T_1 = n-1) = q \sum_{k=1}^{n-2} P_{-1}(T_0 = k, \, T_1 = n-1) = q \sum_{k=1}^{n-2} P_{-1}(T_0 = k) \, P_0(T_1 = n-1-k) = q \sum_{k=1}^{n-2} f_k f_{n-1-k}.$$
Here the third equality follows by the Markov property at time $1$, and the penultimate one by the strong Markov property of the random walk applied at time $T_0$.
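Remark (numerical check, not part of the graded solution). A minimal sketch testing the recursion from (a): it computes $f_n = P(T = n)$ directly, by dynamic programming on the walk killed at its first visit to $1$, and compares with the recursion; p = 0.4 and the horizon N = 40 are arbitrary choices.

```python
# Compare f_n computed two ways: direct DP vs. the convolution recursion.
from collections import defaultdict

p, q, N = 0.4, 0.6, 40

dist = {0: 1.0}            # dist[x] = P(S_{n-1} = x, T > n-1), with x <= 0
f_direct = [0.0]
for n in range(1, N):
    f_direct.append(p * dist.get(0, 0.0))   # an up-step from 0 gives T = n
    new = defaultdict(float)
    for x, m in dist.items():
        if x < 0:
            new[x + 1] += p * m             # up-steps strictly below 0 miss 1
        new[x - 1] += q * m
    dist = dict(new)

f_rec = [0.0, p]                            # f_0 = 0, f_1 = p
for n in range(2, N):
    f_rec.append(q * sum(f_rec[k] * f_rec[n - 1 - k] for k in range(1, n - 1)))

assert all(abs(a - b) < 1e-12 for a, b in zip(f_direct, f_rec))
print("recursion verified up to n =", N - 1)
```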

(b) Use $f_0 = 0$, $f_1 = p$, and part (a) for $f_n$, $n \ge 2$, to get
$$F(s) = ps + \sum_{n \ge 2} f_n s^n = ps + qs \sum_{n \ge 2} \sum_{k=1}^{n-2} (f_k s^k)(f_{n-1-k} s^{n-1-k}) = ps + qs \sum_{k \ge 1} f_k s^k \sum_{m \ge 1} f_m s^m = ps + qs F(s)^2,$$
where the third equality follows by interchanging the order of summation (substituting $m = n - 1 - k$). Solving this quadratic equation in $F(s)$ gives
$$F(s) = \frac{1 \pm \sqrt{1 - 4pqs^2}}{2qs}.$$
Since $\lim_{s \downarrow 0} F(s) = F(0) = 0$, the solution with the $+$ sign is impossible.

(c) Using $1 - 4pq = (p + q)^2 - 4pq = (p - q)^2$, we get
$$P(T < \infty) = F(1) = \frac{1 - \sqrt{1 - 4pq}}{2q} = \frac{1 - |p - q|}{2q} = \begin{cases} 1, & p \ge q, \\ p/q, & p < q. \end{cases}$$

(d) If $p < q$, then $P(T = \infty) > 0$, and therefore $E[T] = \infty$. For $p \ge q$ we use Problem 1 to calculate $E[T] = F'(1^-)$. We first find $F'(s)$ and then let $s \uparrow 1$: for $p > q$,
$$F'(1^-) = \frac{1}{2q} \Big( \frac{4pq}{p - q} - \big(1 - (p - q)\big) \Big) = \frac{2p}{p - q} - 1 = \frac{p + q}{p - q} = \frac{1}{p - q},$$
while for $p = q$ the derivative diverges. Hence
$$E[T] = \begin{cases} +\infty, & p = q, \\ \dfrac{1}{p - q}, & p > q. \end{cases}$$
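Remark (numerical check, not part of the graded solution). The closed form for $F$ and the formulas in (c) and (d) can be compared with truncated sums of the $f_n$; p = 0.6 and the truncation level N are arbitrary choices.

```python
# Compare the closed form F(s) = (1 - sqrt(1 - 4pq s^2)) / (2qs) with the
# truncated series, and check P(T < oo) and E[T] against truncated sums.
from math import sqrt

p, q, N = 0.6, 0.4, 1000
f = [0.0, p]
for n in range(2, N):
    f.append(q * sum(f[k] * f[n - 1 - k] for k in range(1, n - 1)))

def F_closed(s):
    return (1 - sqrt(1 - 4 * p * q * s * s)) / (2 * q * s)

for s in (0.3, 0.7, 0.95):
    assert abs(sum(c * s ** n for n, c in enumerate(f)) - F_closed(s)) < 1e-9

print(sum(f), (1 - abs(p - q)) / (2 * q))                 # P(T < oo) = 1 here
print(sum(n * c for n, c in enumerate(f)), 1 / (p - q))   # E[T] = 5
```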

4. (2 points) Let $(X_n)_{n \ge 0}$ be a Markov chain on $\{0, 1, 2, \ldots\}$ with transition probabilities given by
$$p_{01} = 1, \qquad p_{i,i+1} + p_{i,i-1} = 1, \qquad p_{i,i+1} = \Big( \frac{i+1}{i} \Big)^{\alpha} p_{i,i-1}, \quad i \ge 1,$$
where $\alpha \in (0, \infty)$. For each $\alpha$ compute the value of $P(\lim_n X_n = \infty)$.

Solution. First note that $X$ is irreducible. Exactly as in Problem 10 of Homework 1, we compute that
$$P_0(X_n \ge 1 \text{ for all } n \ge 1) = \frac{1}{\sum_{j \ge 1} j^{-\alpha}}.$$
Since $\sum_j j^{-\alpha}$ converges only for $\alpha > 1$, the above probability is strictly positive for $\alpha > 1$ and equal to $0$ for $\alpha \in (0, 1]$. Hence $P_0(T_0 < \infty) < 1$ for $\alpha > 1$, and $P_0(T_0 < \infty) = 1$ for $\alpha \le 1$. In the first case the chain is transient, hence $P_i(\lim_n X_n = \infty) = 1$ for all $i \ge 0$, and consequently the required probability is also $1$. In the second case, $\alpha \le 1$, the chain is recurrent, hence the required probability is equal to $0$.
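Remark (numerical illustration, not part of the graded solution). Truncating the series $\sum_j j^{-\alpha}$ shows the dichotomy concretely; the truncation level M is an arbitrary choice, and the formula coded below is the one derived above.

```python
# Approximate P_0(X_n >= 1 for all n >= 1) = 1 / sum_{j>=1} j^(-alpha)
# by truncating the sum at M terms.
def escape_prob(alpha, M=10**5):
    # gamma_i = prod_{k=1..i} p_{k,k-1}/p_{k,k+1} = (i+1)**(-alpha), and the
    # escape probability is 1 / sum_{i>=0} gamma_i = 1 / sum_{j>=1} j**(-alpha)
    return 1.0 / sum(j ** (-alpha) for j in range(1, M + 1))

for alpha in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(alpha, escape_prob(alpha))
# For alpha <= 1 the truncated sum grows without bound in M, so the escape
# probability tends to 0 (recurrence); for alpha > 1 it stabilizes at a
# positive value (transience).
```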

5. (2 points) The rooted binary tree is an infinite graph $T$ with one distinguished vertex $R$ from which comes a single edge; at every other vertex there are three edges, and there are no closed loops. The random walk on $T$ jumps from a vertex along each available edge with equal probability. Show that the random walk is transient.

Solution. Let $(X_n)_{n \ge 0}$ denote the random walk on $T$, and denote by $d$ the path distance on $T$: $d(v, w)$ is the length of the shortest path connecting $v$ and $w$. Let $Y_n = d(X_n, R)$ be the distance of $X_n$ to the root $R$. Then $(Y_n)_{n \ge 0}$ is a random process with state space $\mathbb{Z}_+ = \{0, 1, 2, \ldots\}$, and it follows from the structure of the tree $T$ that $(Y_n)_{n \ge 0}$ is a Markov chain. Moreover, since at any vertex $v \ne R$ the random walk moves away from the root with probability $2/3$ and towards the root with probability $1/3$ (there is only one neighbor of $v$ leading to $R$), the transition probabilities are
$$p_{i,i+1} = 2/3, \qquad p_{i,i-1} = 1/3, \quad i \ge 1,$$
and obviously $p_{01} = 1$. It was shown in class (cf. Example 3.3 from the textbook) that $(Y_n)_{n \ge 0}$ is transient. In fact, it is shown in the example that, for the chain $Y$,
$$P_i(T_0 < \infty) = \Big( \frac{1/3}{2/3} \Big)^i = 2^{-i}.$$
Therefore $P_0(T_0 < \infty) = P_1(T_0 < \infty) = 1/2 < 1$, so the random walk is transient.

6. (2 points) Find all invariant distributions of the transition matrix
$$P = \begin{pmatrix} 1/2 & 0 & 0 & 0 & 1/2 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1/4 & 1/4 & 1/4 & 1/4 \\ 1/2 & 0 & 0 & 0 & 1/2 \end{pmatrix}.$$

Solution. The linear system $\pi = \pi P$ reads
$$\pi_1 = \tfrac12 \pi_1 + \tfrac12 \pi_5, \qquad \pi_2 = \tfrac12 \pi_2 + \tfrac14 \pi_4, \qquad \pi_3 = \pi_3 + \tfrac14 \pi_4, \qquad \pi_4 = \tfrac12 \pi_2 + \tfrac14 \pi_4, \qquad \pi_5 = \tfrac12 \pi_1 + \tfrac14 \pi_4 + \tfrac12 \pi_5.$$
The third equation gives $\pi_4 = 0$, the second then gives $\pi_2 = 0$, and the first gives $\pi_1 = \pi_5$. Hence
$$\pi = (\alpha/2, \, 0, \, 1 - \alpha, \, 0, \, \alpha/2), \qquad 0 \le \alpha \le 1.$$
The solution can be guessed by looking at communicating classes: the class $\{2, 4\}$ is transient, hence all mass eventually escapes from it. The two other classes, $\{1, 5\}$ and $\{3\}$, are recurrent, so any stationary distribution is a convex combination of the stationary distributions of these two classes:
$$\pi = \alpha \, (1/2, 0, 0, 0, 1/2) + (1 - \alpha)(0, 0, 1, 0, 0).$$
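Remark (numerical check, not part of the graded solution). A short sketch confirming that every $\pi$ of the above form is stationary for $P$, and that the left eigenspace of $P$ at eigenvalue $1$ is two-dimensional, spanned by the two extreme stationary distributions.

```python
# Verify pi P = pi for pi = (a/2, 0, 1-a, 0, a/2) and inspect the left
# eigenvectors of P for eigenvalue 1.
import numpy as np

P = np.array([[1/2, 0,   0,   0,   1/2],
              [0,   1/2, 0,   1/2, 0  ],
              [0,   0,   1,   0,   0  ],
              [0,   1/4, 1/4, 1/4, 1/4],
              [1/2, 0,   0,   0,   1/2]])

for a in (0.0, 0.3, 1.0):
    pi = np.array([a/2, 0.0, 1 - a, 0.0, a/2])
    assert np.allclose(pi @ P, pi)

vals, vecs = np.linalg.eig(P.T)    # left eigenvectors of P
print(vecs[:, np.isclose(vals, 1)].real)
# the two columns span (1,0,0,0,1) and (0,0,1,0,0), i.e. the two recurrent
# classes {1,5} and {3}
```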

7. A particle moves on the eight vertices of a cube in the following way: at each step the particle is equally likely to move to each of the three adjacent vertices, independently of its past motion. Let $i$ be the initial vertex occupied by the particle and $o$ the vertex opposite $i$. Calculate each of the following quantities:

(a) (1 point) the expected number of steps until the particle returns to $i$;

(b) (1 point) the expected number of visits to $o$ until the first return to $i$;

(c) (1 point) the expected number of steps until the first visit to $o$.

Solution. (a) First note that by symmetry the invariant distribution $\pi$ is uniform: $\pi_v = 1/8$ at every vertex. Hence $E_i[T_i] = 1/\pi_i = 8$.

(b) From class, the expected number of visits to $o$ before the first return to $i$ is
$$\nu_o = E_i \Big[ \sum_{n=0}^{T_i - 1} \mathbf{1}_{\{X_n = o\}} \Big] = \pi_o \, E_i[T_i] = \frac{1}{8} \cdot 8 = 1.$$

(c) Denote by $x$ any of the vertices adjacent to $i$ and by $y$ any of the vertices adjacent to $o$. Let $h_i = E_i[T_o]$, $h_x = E_x[T_o]$, and $h_y = E_y[T_o]$. By first-step analysis and symmetry,
$$h_i = 1 + h_x, \qquad h_x = 1 + \tfrac13 h_i + \tfrac23 h_y, \qquad h_y = 1 + \tfrac23 h_x.$$
The solution of this system is $h_i = 10$, $h_x = 9$, $h_y = 7$.

8. (2 points) Find all invariant measures for the asymmetric random walk $X = (X_n)_{n \ge 0}$ on $\mathbb{Z}$ with transition probabilities $p_{i,i-1} = q$ and $p_{i,i+1} = p$, where $p + q = 1$ and $p \ne q$. Does uniqueness up to scalar multiplication hold? Does $X$ have an invariant distribution?

Solution. An invariant measure $\lambda$ satisfies $\lambda = \lambda P$, or in components
$$\lambda_i = \lambda_{i-1} \, p + \lambda_{i+1} \, q, \qquad i \in \mathbb{Z}.$$
The general solution of this recurrence relation is
$$\lambda_i = A + B \Big( \frac{p}{q} \Big)^i, \qquad A, B \ge 0.$$
By choosing $A = 1, B = 0$ and $A = 0, B = 1$ we see that uniqueness up to scalar multiplication does not hold. Since both series $\sum_{i \in \mathbb{Z}} 1$ and $\sum_{i \in \mathbb{Z}} (p/q)^i$ diverge, an invariant distribution does not exist.
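Remark (numerical check, not part of the graded solution). A minimal sketch verifying, for an arbitrary choice p = 0.7, that $\lambda_i = A + B(p/q)^i$ satisfies the component equations of $\lambda = \lambda P$ on a window of sites.

```python
# Check lambda_i = lambda_{i-1} p + lambda_{i+1} q for lambda_i = A + B (p/q)^i.
from math import isclose

p, q = 0.7, 0.3
for A, B in ((1.0, 0.0), (0.0, 1.0), (2.0, 3.0)):
    lam = lambda i: A + B * (p / q) ** i
    for i in range(-20, 21):
        assert isclose(lam(i), lam(i - 1) * p + lam(i + 1) * q, rel_tol=1e-9)
print("invariant measure equations verified on a window of Z")
```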