IEOR 6711: Stochastic Models I. Solutions to Homework Assignment 9

Problem 4.1 Let D_n be the random demand in period n. The D_n are i.i.d., and D_n is independent of X_k for all k < n. Under the (s, S) ordering policy we can represent X_{n+1} as

X_{n+1} = max{0, X_n 1_{[s,∞)}(X_n) + S 1_{[0,s)}(X_n) − D_{n+1}},

which depends on the history only through X_n, since D_{n+1} is independent of all history. Hence {X_n, n ≥ 1} is a Markov chain. Writing α_k = P(D = k), and setting α_k = 0 for k < 0, it is easy to see that

P_{ij} = α_{S−j},          if i < s, j > 0,
P_{ij} = Σ_{k≥S} α_k,      if i < s, j = 0,
P_{ij} = α_{i−j},          if i ≥ s, j > 0,
P_{ij} = Σ_{k≥i} α_k,      if i ≥ s, j = 0.

The following three problems (4.2, 4.4, 4.5) need the fact

P(A ∩ B | C) = P(A | B ∩ C) P(B | C),

which requires a proof before it can be used. Try to prove it yourself.

Problem 4.2 Let S be the state space and let n_1 < n_2 < ... < n_k. First we show that

P(X_{n_k + 1} = j | X_{n_1} = i_1, ..., X_{n_k} = i_k) = P(X_{n_k + 1} = j | X_{n_k} = i_k)

by the following argument. Let A = {X_{n_k + 1} = j}, B = {X_{n_1} = i_1, ..., X_{n_k} = i_k}, and let B_b, b ∈ I, be the events specifying the remaining values {X_l : l ≤ n_k, l ≠ n_1, ..., n_k}, with each X_l ∈ S. Then

P(A | B) = Σ_{b∈I} P(A ∩ B_b | B)
         = Σ_{b∈I} P(A | B_b ∩ B) P(B_b | B)
         = Σ_{b∈I} P(A | X_{n_k} = i_k) P(B_b | B)      (Markov property)
         = P(A | X_{n_k} = i_k) Σ_{b∈I} P(B_b | B)
         = P(A | X_{n_k} = i_k) P(Ω | B)
         = P(X_{n_k + 1} = j | X_{n_k} = i_k).
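As a numerical sanity check on the Problem 4.1 transition probabilities, the sketch below builds P_ij for a hypothetical demand distribution and policy (Poisson demand with rate 1.5 and (s, S) = (2, 5) are illustrative choices, not part of the assignment) and verifies that every row sums to 1.

```python
from math import exp

s, S = 2, 5    # assumed (s, S) policy parameters (illustrative only)
lam = 1.5      # assumed Poisson demand rate (illustrative only)

# alpha[k] = P(D = k); 60 terms make the truncation error negligible.
alpha = [exp(-lam)]
for k in range(1, 60):
    alpha.append(alpha[-1] * lam / k)

def P(i, j):
    """Transition probability P_ij of the inventory chain."""
    level = S if i < s else i        # stock on hand after any ordering
    if j > level:
        return 0.0                   # demand cannot increase the stock
    if j > 0:
        return alpha[level - j]      # demand was exactly level - j
    return sum(alpha[level:])        # demand >= level, so inventory hits 0

for i in range(S + 1):
    assert abs(sum(P(i, j) for j in range(S + 1)) - 1.0) < 1e-9
print("every row of P sums to 1")
```

The four cases of the formula collapse into one expression because ordering only changes the stock level before the demand is subtracted.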

We use mathematical induction on l = n_{k+1} − n_k. For l = 1, we just showed it. Now assume that the statement is true for all l' ≤ l and consider l + 1. Writing n = n_k + l,

P(X_{n+1} = j | X_{n_1} = i_1, ..., X_{n_k} = i_k)
= Σ_{i∈S} P(X_{n+1} = j, X_n = i | X_{n_1} = i_1, ..., X_{n_k} = i_k)
= Σ_{i∈S} P(X_{n+1} = j | X_n = i, X_{n_1} = i_1, ..., X_{n_k} = i_k) P(X_n = i | X_{n_1} = i_1, ..., X_{n_k} = i_k)
= Σ_{i∈S} P(X_{n+1} = j | X_n = i) P(X_n = i | X_{n_k} = i_k)      (by the cases l' ≤ l)
= Σ_{i∈S} P(X_{n+1} = j | X_n = i, X_{n_k} = i_k) P(X_n = i | X_{n_k} = i_k)
= Σ_{i∈S} P(X_{n+1} = j, X_n = i | X_{n_k} = i_k)
= P(X_{n+1} = j | X_{n_k} = i_k),

which completes the proof for the l + 1 case.

Problem 4.3 This follows from the pigeonhole principle, which says that if n pigeons return to m (< n) homes (through holes), then at least one home contains more than one pigeon. Consider any path of states i_0 = i, i_1, ..., i_n = j such that P_{i_k, i_{k+1}} > 0; call this a path from i to j. If j can be reached from i, then there must be a path from i to j. Let i_0, ..., i_n be such a path. If the values i_0, ..., i_n are not all distinct, then there must be a subpath from i to j having fewer elements (for instance, if i, 1, 2, 4, 1, 3, j is a path, then so is i, 1, 3, j). Hence, if a path exists, there must be one in which all states are distinct.

Problem 4.4 Let Y be the first passage time to state j starting from state i at time 0. Then

P^n_{ij} = P(X_n = j | X_0 = i)
= Σ_{k=1}^n P(X_n = j, Y = k | X_0 = i)
= Σ_{k=1}^n P(X_n = j | Y = k, X_0 = i) P(Y = k | X_0 = i)
= Σ_{k=1}^n P(X_n = j | X_k = j) P(Y = k | X_0 = i)
= Σ_{k=1}^n P^{n−k}_{jj} f^k_{ij}.

Problem 4.5 (a) It is the probability that the chain, starting in state i, will be in state j at time n without ever having made a transition into state k.
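The first-passage decomposition of Problem 4.4 is easy to verify numerically. The sketch below uses a small 3-state chain made up purely for illustration: it computes f^k_{ij} by propagating the distribution while killing paths that enter j, and checks P^n_{ij} = Σ_{k=1}^n f^k_{ij} P^{n−k}_{jj}.

```python
import numpy as np

# A small 3-state chain, made up purely for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
i, j, n = 0, 2, 6

# f[k] = f^k_{ij}: distribution of the first-passage time from i to j.
f = np.zeros(n + 1)
dist = np.zeros(3)
dist[i] = 1.0                 # no visit to j yet (i != j here)
for k in range(1, n + 1):
    f[k] = dist @ P[:, j]     # first entry into j happens at step k
    dist = dist @ P           # take one step ...
    dist[j] = 0.0             # ... and kill paths that just entered j

Pm = [np.linalg.matrix_power(P, m) for m in range(n + 1)]
lhs = Pm[n][i, j]
rhs = sum(f[k] * Pm[n - k][j, j] for k in range(1, n + 1))
assert abs(lhs - rhs) < 1e-12   # the decomposition holds
```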

(b) Let Y be the time of the last visit to state i before time n, starting from state i at time 0. Writing P^m_{ij/i} for the probability, as in part (a), of going from i to j in m steps without revisiting i, we have

P^n_{ij} = P(X_n = j | X_0 = i)
= Σ_{k=0}^{n−1} P(X_n = j, Y = k | X_0 = i)
= Σ_{k=0}^{n−1} P(X_n = j, Y = k, X_k = i | X_0 = i)
= Σ_{k=0}^{n−1} P(X_n = j, Y = k | X_k = i, X_0 = i) P(X_k = i | X_0 = i)
= Σ_{k=0}^{n−1} P(X_n = j, Y = k | X_k = i) P^k_{ii}
= Σ_{k=0}^{n−1} P(X_n = j, X_l ≠ i for l = k+1, ..., n | X_k = i) P^k_{ii}
= Σ_{k=0}^{n−1} P^{n−k}_{ij/i} P^k_{ii}.

Problem 4.7 (a) Here is an argument. Let x be the expected number of steps required to return to the initial state (the origin). Let y be the expected number of steps required to move 2 steps to the left, which is the same as the expected number of steps required to move 2 steps to the right. Note that the expected number of steps required to move 4 steps to the left is clearly 2y, because you first need to move 2 steps to the left, and from there you need to move 2 steps to the left again. Then, consider what happens in successive pairs of steps. Using symmetry, we get

x = 2 + 0·(1/2) + y·(1/2) = 2 + y/2

and

y = 2 + 0·(1/4) + y·(1/2) + (2y)·(1/4) = 2 + y.

If we subtract y from both sides, this last equation yields 2 = 0. Hence there is no finite solution: the quantity y must be infinite, since a finite value cannot solve the equation, and therefore x = 2 + y/2 is infinite as well.

(b) Note that the expected number of returns in 2n steps is the sum over k = 1, ..., n of the probabilities of being back at the origin at step 2k, each term of which is binomial. Thus, we have

E[N_{2n}] = Σ_{k=1}^n (2k)!/(k! k!) (1/2)^{2k},

which can be shown to be equal to the given expression by mathematical induction.

(c) We say that f(n) ∼ g(n) as n → ∞ if f(n)/g(n) → 1 as n → ∞. By Stirling's approximation,

(2n + 1) (2n)!/(n! n!) (1/2)^{2n} ∼ 2√(n/π),

so that

E[N_{2n}] ∼ 2√(n/π) as n → ∞.

Problem 4.8 (a)

P_{ij} = α_j / Σ_{k=i+1}^∞ α_k,   for j > i.

(b) {T_i, i ≥ 1} is not a Markov chain: the distribution of T_{i+1} does depend on R_i. However, {(R_i, T_i), i ≥ 1} is a Markov chain, with

P(R_{i+1} = j, T_{i+1} = n + m | R_i = l, T_i = m) = α_j (1 − Σ_{k=l+1}^∞ α_k)^{n−1},   for j > l, n ≥ 1.

(c) If S_n = j, then the (n+1)st record occurred at time j. However, knowledge of when these n+1 records occurred does not yield any information about the set of values {X_1, ..., X_j}. Hence, the probability that the next record occurs at time k, k > j, is the probability both that max{X_1, ..., X_j} = max{X_1, ..., X_{k−1}} and that X_k > max{X_1, ..., X_{k−1}}. Therefore, we see that {S_n} is a Markov chain with

P_{jk} = j / ((k − 1) k),   for k > j.

Problem 4.11 (a)

Σ_{n=1}^∞ P^n_{ij} = E[number of visits to j | X_0 = i]
= E[number of visits to j | ever visit j, X_0 = i] f_{ij}
= (1 + E[number of visits to j | X_0 = j]) f_{ij}
= f_{ij} / (1 − f_{jj}) < ∞,

since 1 + (number of visits to j given X_0 = j) is geometric with mean 1/(1 − f_{jj}).
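The random-walk identity in Problem 4.7(b) and the Stirling asymptotics in (c) can be checked directly. In this sketch the partial sums Σ_{k=1}^n C(2k,k)(1/2)^{2k} are computed exactly with rationals and compared against the closed form (2n+1) C(2n,n) (1/2)^{2n} − 1, and the ratio E[N_{2n}] / (2√(n/π)) is evaluated for a large n.

```python
from fractions import Fraction
from math import comb, sqrt, pi

def expected_returns(n):
    """E[N_2n]: sum over k of P(walk is back at 0 at step 2k) = C(2k, k) / 4^k."""
    return sum(Fraction(comb(2 * k, k), 4**k) for k in range(1, n + 1))

def closed_form(n):
    """The closed form (2n + 1) C(2n, n) / 4^n - 1, provable by induction."""
    return (2 * n + 1) * Fraction(comb(2 * n, n), 4**n) - 1

for n in (1, 2, 5, 50):
    assert expected_returns(n) == closed_form(n)   # exact rational equality

# Stirling: E[N_2n] / (2 sqrt(n / pi)) -> 1 as n grows.
n = 10_000
print(float(closed_form(n)) / (2 * sqrt(n / pi)))   # close to 1
```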

(b) This follows from the above, since

1/(1 − f_{jj}) = 1 + E[number of visits to j | X_0 = j] = 1 + Σ_{n=1}^∞ P^n_{jj}.

Problem 4.12 If we add the irreducibility of P, it is easy to see that π is a (and indeed the unique) limiting probability.
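The visit-count identities of Problem 4.11 can be verified on a small chain with a transient state. In the sketch below, a made-up chain with absorbing state 2 is used; f_{ij} and f_{jj} are computed by summing first-passage probabilities (the truncation error is geometrically small), and the identity Σ_{n≥1} P^n_{ij} = f_{ij}/(1 − f_{jj}) is checked.

```python
import numpy as np

# Made-up chain: state 2 is absorbing, so states 0 and 1 are transient.
P = np.array([[0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4],
              [0.0, 0.0, 1.0]])
i, j = 0, 1
N = 2000   # truncation level; all series here converge geometrically

def first_passage(start, target, nmax):
    """Sum of f^k_{start,target} for k = 1..nmax (forbid target, then enter it)."""
    dist = np.zeros(3)
    dist[start] = 1.0
    total = 0.0
    for _ in range(nmax):
        total += dist @ P[:, target]   # first entry into target at this step
        dist = dist @ P
        dist[target] = 0.0             # kill paths that reached target
    return total

# Sum_{n>=1} P^n_{ij}: the expected number of visits to j starting from i.
visits, Pn = 0.0, np.eye(3)
for _ in range(N):
    Pn = Pn @ P
    visits += Pn[i, j]

f_ij = first_passage(i, j, N)
f_jj = first_passage(j, j, N)
assert abs(visits - f_ij / (1 - f_jj)) < 1e-9   # Problem 4.11(a)
print(visits, f_ij / (1 - f_jj))                # both equal 4/3 for this chain
```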