IEOR 67: Stochastic Models I

Solutions to Homework Assignment 9

Problem 4.1 Let $D_n$ be the random demand in time period $n$. The $D_n$ are i.i.d., and $D_{n+1}$ is independent of $X_k$ for all $k \le n$. We can therefore represent $X_{n+1}$ as
$$ X_{n+1} = \max\bigl\{0,\ X_n \mathbf{1}_{[s,\infty)}(X_n) + S\,\mathbf{1}_{[0,s)}(X_n) - D_{n+1}\bigr\}, $$
which depends on the history only through $X_n$, since $D_{n+1}$ is independent of everything up to time $n$. Hence $\{X_n,\ n \ge 0\}$ is a Markov chain. Writing $\alpha_k = P(D_1 = k)$, and setting $\alpha_k = 0$ for $k < 0$, it is easy to see that
$$ P_{ij} = \begin{cases} \alpha_{S-j}, & i < s,\ j > 0, \\ \sum_{k \ge S} \alpha_k, & i < s,\ j = 0, \\ \alpha_{i-j}, & i \ge s,\ j > 0, \\ \sum_{k \ge i} \alpha_k, & i \ge s,\ j = 0. \end{cases} $$

The following three problems (4.2, 4.4, 4.5) need the fact
$$ P(A \cap B \mid C) = P(A \mid B \cap C)\,P(B \mid C), $$
which requires a proof before it can be used. Try to prove it yourself.

Problem 4.2 Let $S$ be the state space and let $n_1 < n_2 < \cdots < n_k < n$. First we show that
$$ P(X_{n_k+1} = j \mid X_{n_1} = i_1, \ldots, X_{n_k} = i_k) = P(X_{n_k+1} = j \mid X_{n_k} = i_k) $$
as follows. Let $A = \{X_{n_k+1} = j\}$, $B = \{X_{n_1} = i_1, \ldots, X_{n_k} = i_k\}$, and let $B_b$, $b \in I$, be the events specifying the possible assignments of values in $S$ to the remaining coordinates $\{X_l : l \le n_k,\ l \ne n_1, \ldots, n_k\}$. Then
$$ P(A \mid B) = \sum_{b \in I} P(A \cap B_b \mid B) = \sum_{b \in I} P(A \mid B_b \cap B)\,P(B_b \mid B) = \sum_{b \in I} P(A \mid X_{n_k} = i_k)\,P(B_b \mid B) $$
$$ = P(A \mid X_{n_k} = i_k) \sum_{b \in I} P(B_b \mid B) = P(A \mid X_{n_k} = i_k)\,P(\Omega \mid B) = P(X_{n_k+1} = j \mid X_{n_k} = i_k), $$
where the third equality uses the Markov property ($B_b \cap B$ specifies the entire history up to time $n_k$, ending in $X_{n_k} = i_k$).
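As a sanity check on the transition probabilities of Problem 4.1, the following sketch builds $P_{ij}$ for an assumed small instance (the reorder point $s$, order-up-to level $S$, and Poisson demand distribution are illustrative choices, not given in the problem) and verifies that each row sums to one and that the piecewise formula agrees with computing $P_{ij}$ directly from the recursion for $X_{n+1}$.

```python
import math

s, S = 3, 8        # reorder point and order-up-to level (assumed values)
lam = 2.0          # assumed Poisson demand rate, purely for illustration
K = 60             # truncation point for the demand distribution
alpha = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(K)]

def a(k):
    """alpha_k, with alpha_k = 0 for k < 0 (and beyond the truncation)."""
    return alpha[k] if 0 <= k < len(alpha) else 0.0

def P(i, j):
    """Transition probability from the solution's piecewise formula."""
    if i < s:
        return a(S - j) if j > 0 else sum(a(k) for k in range(S, K))
    return a(i - j) if j > 0 else sum(a(k) for k in range(i, K))

def P_direct(i, j):
    """Directly from the dynamics X' = max(0, base - D), base = S if i < s else i."""
    base = S if i < s else i
    if j > 0:
        return a(base - j)
    return sum(a(k) for k in range(base, K))

states = range(S + 1)
for i in states:
    assert abs(sum(P(i, j) for j in states) - 1.0) < 1e-9       # rows sum to 1
    for j in states:
        assert abs(P(i, j) - P_direct(i, j)) < 1e-12            # formula = dynamics
```

The state space here is $\{0, 1, \ldots, S\}$, since the inventory after replenishment never exceeds $S$.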
We now proceed by mathematical induction on $l = n - n_k$. For $l = 1$ we just showed the result. Assume the statement is true for all $l \le l'$ and consider $l = l' + 1$:
$$ P(X_n = j \mid X_{n_1} = i_1, \ldots, X_{n_k} = i_k) = \sum_{i \in S} P(X_n = j,\ X_{n-1} = i \mid X_{n_1} = i_1, \ldots, X_{n_k} = i_k) $$
$$ = \sum_{i \in S} P(X_n = j \mid X_{n-1} = i,\ X_{n_1} = i_1, \ldots, X_{n_k} = i_k)\,P(X_{n-1} = i \mid X_{n_1} = i_1, \ldots, X_{n_k} = i_k) $$
$$ = \sum_{i \in S} P(X_n = j \mid X_{n-1} = i)\,P(X_{n-1} = i \mid X_{n_k} = i_k) \qquad \text{(by the cases } l \le l'\text{)} $$
$$ = \sum_{i \in S} P(X_n = j \mid X_{n-1} = i,\ X_{n_k} = i_k)\,P(X_{n-1} = i \mid X_{n_k} = i_k) $$
$$ = \sum_{i \in S} P(X_n = j,\ X_{n-1} = i \mid X_{n_k} = i_k) = P(X_n = j \mid X_{n_k} = i_k), $$
which completes the proof for the case $l = l' + 1$.

Problem 4.3 This follows from the pigeonhole principle, which says that if $n$ pigeons fly into $m\,(< n)$ holes, then at least one hole contains more than one pigeon. Consider any path of states $i_0 = i,\ i_1, \ldots, i_n = j$ such that $P_{i_k, i_{k+1}} > 0$. Call this a path from $i$ to $j$. If $j$ can be reached from $i$, then there must be a path from $i$ to $j$. Let $i_0, \ldots, i_n$ be such a path. If the values $i_0, \ldots, i_n$ are not all distinct, then there is a subpath from $i$ to $j$ having fewer elements (for instance, if $i, 1, 2, 4, 1, 3, j$ is a path, then so is $i, 1, 3, j$). Hence, if a path exists, there must be one with all distinct states.

Problem 4.4 Let $Y$ be the first passage time to state $j$ starting from state $i$ at time 0. Then
$$ P^n_{ij} = P(X_n = j \mid X_0 = i) = \sum_{k=1}^{n} P(X_n = j,\ Y = k \mid X_0 = i) = \sum_{k=1}^{n} P(X_n = j \mid Y = k,\ X_0 = i)\,P(Y = k \mid X_0 = i) $$
$$ = \sum_{k=1}^{n} P(X_n = j \mid X_k = j)\,P(Y = k \mid X_0 = i) = \sum_{k=1}^{n} P^{n-k}_{jj}\, f^{\,k}_{ij}. $$

Problem 4.5 (a) The probability that the chain, starting in state $i$, will be in state $j$ at time $n$ without ever having made a transition into state $k$.
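The identity proved in Problem 4.2 can be checked numerically on a small example. The sketch below uses an assumed two-state chain and initial distribution, computes $P(X_3 = j \mid X_1 = i_1,\ X_0 = i_0)$ by brute-force enumeration of $X_2$, and confirms that it equals the two-step transition probability $P^2_{i_1 j}$, regardless of the extra conditioning on $X_0$.

```python
import numpy as np

# Assumed two-state chain and initial law, chosen only for illustration.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])       # initial distribution of X_0 (any choice works)

def cond_prob(j, i1, i0):
    """P(X_3 = j | X_1 = i1, X_0 = i0), by enumerating the intermediate X_2."""
    num = sum(mu[i0] * P[i0, i1] * P[i1, i2] * P[i2, j] for i2 in (0, 1))
    den = mu[i0] * P[i0, i1]
    return num / den

P2 = P @ P                       # two-step transition matrix
for i0 in (0, 1):
    for i1 in (0, 1):
        for j in (0, 1):
            # Conditioning on X_0 is irrelevant: only the most recent time matters.
            assert abs(cond_prob(j, i1, i0) - P2[i1, j]) < 1e-12
```

Here $n_1 = 0$, $n_k = 1$, $n = 3$, i.e. the case $l = 2$ of the induction.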
(b) Let $Y$ be the last time the chain leaves state $i$ before first reaching state $j$, starting from state $i$ at time 0. Writing $P^{n-k}_{ij/i}$ for the probability in part (a) with taboo state $i$,
$$ P^n_{ij} = P(X_n = j \mid X_0 = i) = \sum_{k=0}^{n-1} P(X_n = j,\ Y = k \mid X_0 = i) = \sum_{k=0}^{n-1} P(X_n = j,\ Y = k,\ X_k = i \mid X_0 = i) $$
$$ = \sum_{k=0}^{n-1} P(X_n = j,\ Y = k \mid X_k = i,\ X_0 = i)\,P(X_k = i \mid X_0 = i) = \sum_{k=0}^{n-1} P(X_n = j,\ Y = k \mid X_k = i)\,P^k_{ii} $$
$$ = \sum_{k=0}^{n-1} P(X_n = j,\ X_l \ne i \text{ for } l = k+1, \ldots, n \mid X_k = i)\,P^k_{ii} = \sum_{k=0}^{n-1} P^{n-k}_{ij/i}\,P^k_{ii}. $$

Problem 4.7 (a) Here is an argument. Let $x$ be the expected number of steps required to return to the initial state (the origin). Let $y$ be the expected number of steps required to move 2 steps to the left, which by symmetry is the same as the expected number of steps required to move 2 steps to the right. Note that the expected number of steps required to move 4 steps to the left is clearly $2y$, because you first need to move 2 steps to the left, and from there you need to move 2 steps to the left again. Then, considering what happens in successive pairs of steps and using symmetry, we get
$$ x = 2 + 0 \cdot (1/2) + y \cdot (1/2) = 2 + y/2 $$
and
$$ y = 2 + 0 \cdot (1/4) + y \cdot (1/2) + (2y) \cdot (1/4). $$
If we subtract $y$ from both sides, this last equation yields $2 = 0$. Hence there is no finite solution: the quantity $y$ must be infinite, since a finite value cannot solve the equation. Since $x = 2 + y/2$, the expected return time $x$ is infinite as well.

(b) Note that the expected number of returns to the origin within $2n$ steps is the sum over $k = 1, \ldots, n$ of the probabilities of being back at the origin at time $2k$, each of which is a binomial probability. Thus, we have
$$ E[N_{2n}] = \sum_{k=1}^{n} \frac{(2k)!}{k!\,k!}\left(\frac{1}{2}\right)^{2k}, $$
which can be shown to be equal to the given expression by mathematical induction.

(c) We say that $f(n) \sim g(n)$ as $n \to \infty$ if $f(n)/g(n) \to 1$ as $n \to \infty$. By Stirling's approximation,
$$ (2n+1)\,\frac{(2n)!}{n!\,n!}\left(\frac{1}{2}\right)^{2n} \sim 2\sqrt{n/\pi}, $$
so that
$$ E[N_n] \sim \sqrt{2n/\pi} \quad \text{as } n \to \infty. $$

Problem 4.8 (a)
$$ P_{ij} = \frac{\alpha_j}{\sum_{k=i+1}^{\infty} \alpha_k}, \qquad j > i. $$

(b) $\{T_i,\ i \ge 1\}$ is not a Markov chain: the distribution of $T_i$ does depend on $R_i$. However, $\{(R_{i+1}, T_i),\ i \ge 1\}$ is a Markov chain. Writing $\bar\alpha_l = \sum_{k=l+1}^{\infty} \alpha_k$,
$$ P(R_{i+1} = j,\ T_i = n \mid R_i = l,\ T_{i-1} = m) = \frac{\alpha_j}{\bar\alpha_l}\,\bar\alpha_l\,(1 - \bar\alpha_l)^{n-1} = \alpha_j\,(1 - \bar\alpha_l)^{n-1}, \qquad j > l. $$

(c) If $S_n = j$ then the $(n+1)$st record occurred at time $j$. However, knowledge of when these $n+1$ records occurred does not yield any information about the set of values $\{X_1, \ldots, X_j\}$. Hence, the probability that the next record occurs at time $k$, $k > j$, is the probability that both $\max\{X_1, \ldots, X_j\} = \max\{X_1, \ldots, X_{k-1}\}$ and $X_k = \max\{X_1, \ldots, X_k\}$. Therefore, we see that $\{S_n\}$ is a Markov chain with
$$ P_{jk} = \frac{j}{k-1} \cdot \frac{1}{k} = \frac{j}{(k-1)k}, \qquad k > j. $$

Problem 4.11 (a)
$$ \sum_{n=1}^{\infty} P^n_{ij} = E[\text{number of visits to } j \mid X_0 = i] = E[\text{number of visits to } j \mid \text{ever visit } j,\ X_0 = i]\,f_{ij} $$
$$ = \bigl(1 + E[\text{number of visits to } j \mid X_0 = j]\bigr)\,f_{ij} = \frac{f_{ij}}{1 - f_{jj}} < \infty, $$
since $1 + (\text{number of visits to } j \mid X_0 = j)$ is geometric with mean $1/(1 - f_{jj})$.
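The identity in Problem 4.11(a) can be illustrated numerically. The sketch below uses an assumed three-state chain with an absorbing state (so that states 0 and 1 are transient), derives $f_{01}$ and $f_{11}$ by hand via first-step analysis, and checks that the partial sums of $P^n$ converge to $f_{ij}/(1 - f_{jj})$.

```python
import numpy as np

# Chain on {0, 1, 2} with absorbing state 2; states 0 and 1 are transient.
# (An assumed example, chosen so that f_01 and f_11 are easy to find by hand.)
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.2, 0.5],
              [0.0, 0.0, 1.0]])

# First-step analysis: from 0 the chain can only stay at 0 or move to 1, so it
# hits 1 with probability 1, i.e. f_01 = 1.  From 1 it returns to 1 with
# probability 0.3 * 1 + 0.2 = 0.5, i.e. f_11 = 0.5.
f01, f11 = 1.0, 0.5

# Partial sums of P^n over n >= 1 (the transient-target entries converge).
total = np.zeros_like(P)
Pn = np.eye(3)
for _ in range(400):
    Pn = Pn @ P
    total += Pn

assert abs(total[0, 1] - f01 / (1 - f11)) < 1e-6   # sum_n P^n_01 = 1/0.5 = 2
assert abs(total[1, 1] - f11 / (1 - f11)) < 1e-6   # sum_n P^n_11 = 0.5/0.5 = 1
```

The same sums can be read off exactly from the fundamental matrix $(I - Q)^{-1}$ of the transient block $Q$, which is the design behind the closed form $f_{ij}/(1-f_{jj})$.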
(b) This follows from the above, since
$$ \frac{1}{1 - f_{jj}} = 1 + E[\text{number of visits to } j \mid X_0 = j] = 1 + \sum_{n=1}^{\infty} P^n_{jj}. $$

Problem 4.12 If we add the irreducibility of $P$, it is easy to see that $(\pi_n)$ is a (and in fact the unique) limiting probability.
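For Problem 4.12, here is a quick numerical illustration (with an assumed irreducible, aperiodic three-state matrix, not taken from the problem) that the rows of $P^n$ converge to a common vector $\pi$, which is stationary and hence the unique limiting probability.

```python
import numpy as np

# An assumed irreducible, aperiodic chain on three states.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

# Limiting distribution via matrix powers: all rows of P^n converge to the same pi.
Pn = np.linalg.matrix_power(P, 200)
pi = Pn[0]
assert np.allclose(Pn, np.tile(pi, (3, 1)), atol=1e-10)   # rows agree => unique limit

# The limit is stationary (pi P = pi) and is a probability vector.
assert np.allclose(pi @ P, pi, atol=1e-8)
assert abs(pi.sum() - 1.0) < 1e-9
```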