Solutions to Homework 6, 6.262 Discrete Stochastic Processes, MIT, Spring 2011

Exercise 1

Let {Y_n; n ≥ 1} be a sequence of rv's and assume that lim_{n→∞} E[|Y_n|] = 0. Show that {Y_n; n ≥ 1} converges to 0 in probability. Hint 1: Look for the easy way. Hint 2: The easy way uses the Markov inequality.

Solution: Applying the Markov inequality to |Y_n| for arbitrary n and arbitrary ε > 0, we have

    Pr{|Y_n| ≥ ε} ≤ E[|Y_n|] / ε.

Thus, going to the limit n → ∞ for the given ε,

    lim_{n→∞} Pr{|Y_n| ≥ ε} = 0.

Since this is true for every ε > 0, this satisfies the definition of convergence to 0 in probability.

Exercise 2 (4.2 in text)

The purpose of this exercise is to show that, for an arbitrary renewal process, N(t), the number of renewals in (0, t], has finite expectation.

a) Let the inter-renewal intervals have the distribution F_X(x), with, as usual, F_X(0) = 0. Using whatever combination of mathematics and common sense is comfortable for you, show that numbers ε > 0 and δ > 0 must exist such that F_X(δ) ≤ 1 − ε. In other words, you are to show that a positive rv must take on some range of positive values with positive probability.

Solution: For any ε < 1, we can look at the sequence of events {X ≥ 1/k; k ≥ 1}. The union of these events is the event {X > 0}, and since Pr{X ≤ 0} = 0, we have Pr{X > 0} = 1. The events {X ≥ 1/k} are nested in k, so that, from (1.9),

    1 = Pr{ ∪_{k≥1} {X ≥ 1/k} } = lim_{k→∞} Pr{X ≥ 1/k}.

Thus, for k large enough, Pr{X ≥ 1/k} ≥ ε. Taking δ = 1/(k+1) for that value of k gives F_X(δ) = Pr{X ≤ δ} ≤ 1 − Pr{X ≥ 1/k} ≤ 1 − ε, completing the demonstration.

b) Show that Pr{S_n ≤ δ} ≤ (1 − ε)^n.

Solution: S_n is the sum of n inter-renewal intervals, and, bounding very loosely, S_n ≤ δ implies that X_i ≤ δ for each i, 1 ≤ i ≤ n. The X_i are independent, so, since Pr{X_i ≤ δ} ≤ 1 − ε, we have Pr{S_n ≤ δ} ≤ (1 − ε)^n.

c) Show that E[N(δ)] ≤ 1/ε.
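The Markov-inequality step in Exercise 1 is easy to check numerically. The sketch below is illustrative only: the choice Y_n ~ Exponential with mean 1/n (so that E[|Y_n|] → 0) is an assumption made for the demonstration, not part of the exercise.

```python
# Empirical check that E[|Y_n|] -> 0 forces Pr{|Y_n| >= eps} -> 0,
# via the Markov inequality Pr{|Y_n| >= eps} <= E[|Y_n|]/eps.
# Assumed example: Y_n ~ Exponential with mean 1/n.
import random

random.seed(1)
eps = 0.1
for n in (1, 10, 100):
    samples = [random.expovariate(n) for _ in range(100_000)]
    tail = sum(s >= eps for s in samples) / len(samples)
    bound = (1 / n) / eps              # E[|Y_n|]/eps, with E[|Y_n|] = 1/n
    assert tail <= bound + 0.01        # Markov bound, up to sampling noise
    print(f"n = {n:3d}: Pr{{|Y_n| >= eps}} ~ {tail:.5f}, bound = {bound:.3f}")
```

For each n, the empirical tail probability sits below the bound E[|Y_n|]/ε, and both go to 0 as n grows, which is exactly convergence to 0 in probability.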

Solution: Since N(δ) is nonnegative and integer-valued,

    E[N(δ)] = Σ_{n≥1} Pr{N(δ) ≥ n} = Σ_{n≥1} Pr{S_n ≤ δ} ≤ Σ_{n≥1} (1 − ε)^n = (1 − ε)/(1 − (1 − ε)) = (1 − ε)/ε ≤ 1/ε.

d) Show that for every integer k, E[N(kδ)] ≤ k/ε, and thus that E[N(t)] ≤ (t + δ)/(εδ) for any t > 0.

Solution: The solution of part c) suggests breaking the interval (0, kδ] into k intervals, each of size δ. Letting N_i = N(iδ) − N((i−1)δ) be the number of renewals in the ith of these intervals, we have E[N(kδ)] = Σ_{i=1}^k E[N_i]. For the first of these intervals, we have shown that E[N_1] ≤ 1/ε, but that argument does not quite work for the subsequent intervals, since the first renewal in the ith interval might come at the end of an inter-renewal interval greater than δ. All the other renewals in that interval must still come at the end of an inter-renewal interval of at most δ. Thus, if we let S_n^(i) be the epoch of the nth renewal in the ith interval, we have

    Pr{S_n^(i) ≤ iδ} ≤ (1 − ε)^{n−1}.

Repeating the argument in part c), then,

    E[N_i] = Σ_{n≥1} Pr{N_i ≥ n} = Σ_{n≥1} Pr{S_n^(i) ≤ iδ} ≤ Σ_{n≥1} (1 − ε)^{n−1} = 1/(1 − (1 − ε)) = 1/ε.

Since E[N(kδ)] = Σ_{i=1}^k E[N_i], we then have E[N(kδ)] ≤ k/ε.

Since N(t) is non-decreasing in t, it can be upper bounded using the integer multiple of δ just larger than t:

    E[N(t)] ≤ E[N(δ⌈t/δ⌉)] ≤ ⌈t/δ⌉/ε ≤ (t/δ + 1)/ε = (t + δ)/(εδ).

e) Use your result here to show that N(t) is non-defective.

Solution: Since N(t), for each t, is non-negative and has finite expectation, it certainly must be non-defective. One way to see this is that E[N(t)] is the sum over n of the complementary distribution function F^c_{N(t)}(n) of N(t). Since this sum is finite, F^c_{N(t)}(n) must approach 0 with increasing n.
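The finite-expectation bound just derived can be sanity-checked by simulation. In the sketch below, the Uniform(0,1) inter-renewal distribution, the horizon t = 5, and the pair δ = ε = 1/2 (valid since F_X(1/2) = 1/2 ≤ 1 − ε) are all assumptions chosen for illustration.

```python
# Monte-Carlo check of the bound E[N(t)] <= (t + delta)/(eps * delta)
# from Exercise 2(d), using Uniform(0,1) inter-renewal intervals.
import random

random.seed(2)

def count_renewals(t):
    """Number of renewal epochs S_n = X_1 + ... + X_n that land in (0, t]."""
    s, n = 0.0, 0
    while True:
        s += random.random()           # X_i ~ Uniform(0,1)
        if s > t:
            return n
        n += 1

t, delta, eps = 5.0, 0.5, 0.5
trials = 20_000
mean_N = sum(count_renewals(t) for _ in range(trials)) / trials
bound = (t + delta) / (eps * delta)
print(f"E[N({t})] ~ {mean_N:.2f}, bound = {bound:.2f}")
assert mean_N <= bound
```

For Uniform(0,1) intervals the elementary renewal theorem gives E[N(t)] ≈ 2t for large t, comfortably inside the (deliberately loose) bound of 22 here.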

3) Exercise 4.4 in text

Is it true for a renewal process that:
a) N(t) < n if and only if S_n > t?
b) N(t) ≤ n if and only if S_n ≥ t?
c) N(t) > n if and only if S_n < t?

Solution: Part a) is true, as pointed out in (4.1). It is simply the complement of the statement that N(t) ≥ n if and only if S_n ≤ t. Parts b) and c) are false, as seen by any situation where S_n < t and S_{n+1} > t. In these cases, N(t) = n, so in b) the left side holds while the right side fails, and in c) the right side holds while the left side fails.

4) Exercise 4.5 in text

(This shows that convergence WP1 implies convergence in probability.) Let {Y_n; n ≥ 1} be a sequence of rv's that converges to 0 WP1. For any positive integers m and k, let

    A(m, k) = {ω : |Y_n(ω)| ≤ 1/k for all n ≥ m}.

a) Show that if lim_{n→∞} Y_n(ω) = 0 for some given ω, then (for any given k) ω ∈ A(m, k) for some positive integer m.

Solution: Note that for a given ω, {Y_n(ω); n ≥ 1} is simply a sequence of real numbers. Thus, by the definition of convergence of a sequence of real numbers, if lim_{n→∞} Y_n(ω) = 0, then for every integer k ≥ 1 there is an m such that |Y_n(ω)| ≤ 1/k for all n ≥ m. Thus the given ω must be in the set A(m, k) for that m and k.

b) Show that for all k ≥ 1,

    Pr{ ∪_{m=1}^∞ A(m, k) } = 1.

Solution: We saw in a) that, given k ≥ 1 and given lim_{n→∞} Y_n(ω) = 0, there is some m ≥ 1 such that ω ∈ A(m, k). Thus, given lim_{n→∞} Y_n(ω) = 0, we have ω ∈ ∪_m A(m, k). Since this convergence holds WP1, the set of such ω has probability 1, which establishes the claim.

c) Show that, for all m ≥ 1, A(m, k) ⊆ A(m+1, k). Use this (plus (1.9)) to show that

    lim_{m→∞} Pr{A(m, k)} = 1.

Solution: If ω ∈ A(m, k), then |Y_n(ω)| ≤ 1/k for all n ≥ m, and thus for all n ≥ m + 1. Thus ω ∈ A(m, k) implies that ω ∈ A(m+1, k), which means that A(m, k) ⊆ A(m+1, k). From (1.9), then,

    1 = Pr{ ∪_m A(m, k) } = lim_{m→∞} Pr{A(m, k)}.

d) Show that if ω ∈ A(m, k), then |Y_m(ω)| ≤ 1/k. Use this (plus part c)) to show that

    lim_{m→∞} Pr{|Y_m| > 1/k} = 0.

Solution: By the definition of A(m, k), ω ∈ A(m, k) means that |Y_n(ω)| ≤ 1/k for all n ≥ m, and thus certainly |Y_m(ω)| ≤ 1/k. Since lim_{m→∞} Pr{A(m, k)} = 1, it follows that lim_{m→∞} Pr{|Y_m| ≤ 1/k} = 1, i.e., lim_{m→∞} Pr{|Y_m| > 1/k} = 0.
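The sample-path equivalence in part a) of Exercise 4.4 can also be confirmed by brute force. The sketch below uses Exponential(1) inter-renewal intervals and t = 3 as illustrative assumptions; on every generated path it checks that {N(t) < n} and {S_n > t} are the same event.

```python
# Sample-path check of Exercise 4.4(a): N(t) < n  iff  S_n > t.
# Inter-renewal times are Exponential(1) here purely as an assumed example.
import random

random.seed(3)
t = 3.0
for _ in range(1000):
    # Generate renewal epochs S_1 < S_2 < ... until well past t.
    S = []
    s = 0.0
    while s <= t + 1.0:
        s += random.expovariate(1.0)
        S.append(s)
    N_t = sum(epoch <= t for epoch in S)     # N(t): number of epochs in (0, t]
    for n in range(1, len(S) + 1):
        # Part a): {N(t) < n} and {S_n > t} agree on every sample path.
        assert (N_t < n) == (S[n - 1] > t)
        # Part b) fails whenever S_n < t < S_{n+1}: then N(t) <= n but S_n < t.
print("part a) verified on 1000 sample paths")
```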

Since k ≥ 1 is arbitrary, this shows that {Y_n; n ≥ 1} converges to 0 in probability.

5) Exercise 4.8 in text

a) Since E[X] = ∫_0^∞ F_X^c(x) dx = ∞, we know from the definition of an integral over an infinite limit that

    E[X] = lim_{b→∞} ∫_0^b F_X^c(x) dx = ∞.

For X̃ = min(X, b), we see that F_{X̃}(x) = F_X(x) for x < b and F_{X̃}(x) = 1 for x ≥ b. Thus

    E[X̃] = ∫_0^b F_X^c(x) dx.

Since E[X̃] is increasing with b toward ∞, we see that for any M > 0 there is a b sufficiently large that E[X̃] ≥ M.

b) Note that X − X̃ is a non-negative rv; it is 0 for X ≤ b and greater than 0 otherwise. Thus X̃ ≤ X. It follows that for all n ≥ 1,

    S̃_n = X̃_1 + X̃_2 + ⋯ + X̃_n ≤ X_1 + X_2 + ⋯ + X_n = S_n.

Since S̃_n ≤ S_n, it follows for all t > 0 that if S_n ≤ t, then also S̃_n ≤ t. This then means that if N(t) ≥ n, then also Ñ(t) ≥ n. Since this is true for all n, Ñ(t) ≥ N(t), i.e., the reduction of the inter-renewal intervals causes an increase in the number of renewals.

c) Let M, and b < ∞ such that E[X̃] ≥ M, be fixed in what follows. Since X̃ ≤ b, we see that E[X̃] < ∞, so we can apply Theorem 4.3.1, which asserts that

    Pr{ ω : lim_{t→∞} Ñ(t, ω)/t = 1/E[X̃] } = 1.

Let A denote the set of sample points for which the above limit exists, i.e., for which lim_{t→∞} Ñ(t, ω)/t = 1/E[X̃]. We will show that, for each ω ∈ A, lim sup_{t→∞} N(t, ω)/t ≤ 2/M. We know that for any ω ∈ A, lim_{t→∞} Ñ(t, ω)/t = 1/E[X̃]. The definition of the limit of a real-valued function states that for any ε > 0, there is a τ(ε) such that

    | Ñ(t, ω)/t − 1/E[X̃] | ≤ ε   for all t ≥ τ(ε).

Note that τ(ε) depends on b and ω as well as ε, so we denote it as τ(ε, b, ω). Using only one side of this inequality, Ñ(t, ω)/t ≤ ε + 1/E[X̃] for all t ≥ τ(ε, b, ω). Since we have seen that N(t, ω) ≤ Ñ(t, ω) and E[X̃] ≥ M, we have

    N(t, ω)/t ≤ ε + 1/M   for all t ≥ τ(ε, b, ω).

Since ε is arbitrary, we can choose it as 1/M, giving N(t, ω)/t ≤ 2/M for all large enough t, which is the desired inequality for all ω ∈ A.

Now for each choice of integer M, let A(M) be the set of probability 1 above. The intersection of these sets also has probability 1, and each ω in all of these sets has

    lim_{t→∞} N(t, ω)/t = 0.

If you did this correctly, you should surely be proud of yourself!

Exercise 6

a) In order to find E[N_s(t)], we use the iterated expectation formula, first finding E[N_s(t) | X_0 = j]:

    E[N_s(t)] = E_{X_0}[ E[N_s(t) | X_0] ] = Σ_{j=1}^M Pr{X_0 = j} E[N_s(t) | X_0 = j] = Σ_{j=1}^M π_j E[N_s(t) | X_0 = j].

Knowing the initial state, we can find the expected reward up to time t:

    E[N_s(t) | X_0 = j] = r_j + Σ_{i=1}^M P_{ji} r_i + Σ_{i=1}^M (P²)_{ji} r_i + ⋯ + Σ_{i=1}^M (P^{t−1})_{ji} r_i.

Viewing E[N_s(t) | X_0] as a vector whose jth element is E[N_s(t) | X_0 = j], we can write

    E[N_s(t) | X_0] = r + [P] r + [P]² r + ⋯ + [P]^{t−1} r,

so that

    E[N_s(t)] = Σ_{j=1}^M π_j E[N_s(t) | X_0 = j]
              = π E[N_s(t) | X_0]
              = π ( r + [P] r + [P]² r + ⋯ + [P]^{t−1} r )
              = π r + π[P] r + π[P]² r + ⋯ + π[P]^{t−1} r
              = t π r.

The last equation follows from the fact that π is the steady-state probability vector of the Markov chain and thus a left eigenvector of [P] with eigenvalue λ = 1, so π[P]^k = π for every k. Choosing the rewards as described in the problem, with r_1 = 1 and r_j = 0 for j ∈ {2, …, M}, we get E[N_s(t)] = π_1 t.

From the previous part, we know that lim_{t→∞} E[N_s(t)]/t = lim_{t→∞} π_1 t / t = π_1. The difference between N_s(t) and N_1(t) is that the first process starts in steady state and the second starts in state 1. The second is a bona fide renewal counting process, and the first is what is called a delayed renewal counting process. After the first occurrence

of state 1 in N_s(t), the intervals between successive occurrences of state 1 are IID and distributed the same as in N_1(t). Thus the time to the nth renewal (starting in steady state) is S_n = Y_1 + X_2 + X_3 + ⋯ + X_n, where X_2, …, X_n are IID and Y_1 is another rv. It is not hard to believe (Section 4.8 of the text makes it precise) that lim_{n→∞} S_n/n = X̄ WP1 for the process starting in steady state, and N_s(t)/t then converges WP1 to 1/X̄. From the above analysis of the steady state, X̄ = 1/π_1.

b) The strong law for renewals says that if X_i is defined to be the ith inter-renewal interval from state 1 back to itself, then, with probability 1, lim_{t→∞} N_1(t)/t = 1/X̄. Thus the expected inter-renewal time for recurrences of state 1 is X̄ = 1/π_1.
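Both parts of Exercise 6 can be checked numerically. In the sketch below, the 3-state transition matrix [P], the horizon t = 10, and the simulated path length are all illustrative assumptions, not from the problem; the code verifies E[N_s(t)] = tπ_1 using π[P] = π, then estimates N_1(t)/t from one long simulated path.

```python
# Numeric check of Exercise 6 on an assumed small 3-state chain.
# Part a): E[N_s(t)] = pi r + pi[P]r + ... + pi[P]^{t-1} r = t*pi_1 when
# r = (1, 0, 0).  Part b): along one long path, N_1(t)/t approaches pi_1.
import random

random.seed(6)

P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
r = [1.0, 0.0, 0.0]                       # unit reward in state 1 only

def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Steady-state vector by power iteration, so that pi[P] = pi.
pi = [1 / 3] * 3
for _ in range(500):
    pi = vec_mat(pi, P)

# Part a): sum the t terms pi[P]^k r and compare with t*pi_1.
t = 10
v, expected = pi[:], 0.0
for _ in range(t):
    expected += dot(v, r)
    v = vec_mat(v, P)
assert abs(expected - t * pi[0]) < 1e-9   # E[N_s(t)] = t*pi_1

# Part b): simulate one long path started in state 1 and count visits.
steps = 200_000
state, visits = 0, 0
for _ in range(steps):
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            state = j
            break
    visits += (state == 0)
rate = visits / steps                     # N_1(t)/t for t = steps
print(f"t*pi_1 = {expected:.4f}, N_1(t)/t ~ {rate:.4f}, 1/rate ~ {1/rate:.2f}")
```

For this particular chain the stationary vector is π = (9/28, 3/7, 1/4), so the simulated visit rate should settle near 9/28 ≈ 0.321, and the estimated mean recurrence time 1/rate near 28/9 ≈ 3.11.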

MIT OpenCourseWare
http://ocw.mit.edu

6.262 Discrete Stochastic Processes
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.