Stochastic Processes


Stochastic Processes 18.445, MIT, fall 2011
Mid Term Exam Solutions, October 27, 2011
Your Name: Alberto De Sole

Exercise   Max Grade   Grade
   1           5         5
   2           5         5
   3           5         5
   4           5         5
   5           5         5
   6           5         5
Total         30        30

Problem 1: True/False questions. For each of the following statements, circle the letter T if you think the statement is True, or the letter F if you think the statement is False.

T or F: If X_1, X_2, X_3, ... is an irreducible Markov chain on a finite state space S = {1, ..., N}, then there is an equilibrium probability distribution π such that lim_{n→∞} P[X_n = j | X_0 = i] = π_j for every i.

T or F: If X_1, X_2, X_3, ... is a Markov chain on a finite state space S = {1, ..., N}, then there is an invariant probability distribution π such that πP = π.

T or F: Suppose that P is a finite stochastic matrix such that 1 is a simple eigenvalue, and all other eigenvalues λ have |λ| < 1. Then lim_{n→∞} P[X_n = j | X_0 = i] exists, and it is the same for all i.

T or F: Let X_t, t ∈ [0, ∞), be a continuous time Markov chain with generating matrix A. If A_ij > 0 for all i ≠ j, then ker(A) has dimension 1.

T or F: The Markov property for a continuous time Markov chain can be equivalently formulated by saying that, for all s < t, we have P[X_t = j | X_s = i] = P[X_t − X_s = j − i].

T or F: Let X_t be the Poisson process of rate λ. Then, for all s < t, we have P[X_t = j | X_s = i] = P[X_t − X_s = j − i].
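The spectral criterion in the third statement is easy to probe numerically. A minimal sketch in Python/numpy, using a 3-state stochastic matrix invented for this illustration (it does not appear in the exam):

```python
import numpy as np

# Illustrative stochastic matrix (strictly positive entries, rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Every stochastic matrix has 1 as an eigenvalue; for this P it is simple
# and all other eigenvalues have modulus strictly less than 1.
eigvals = sorted(np.linalg.eigvals(P), key=abs, reverse=True)
assert np.isclose(abs(eigvals[0]), 1.0) and abs(eigvals[1]) < 1.0

# Consequently P^n converges to a matrix with identical rows, i.e.
# lim_n P[X_n = j | X_0 = i] exists and does not depend on i.
Pn = np.linalg.matrix_power(P, 200)
assert np.allclose(Pn, np.tile(Pn[0], (3, 1)))
```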

Problem 2: Let X_1, X_2, X_3, ... be a Markov chain on S = {1, 2, 3, 4, 5, 6} with transition matrix

P = [ 0    0    0.5  0.5  0    0   ]
    [ 0    0    1    0    0    0   ]
    [ 0    0    0    0    0.5  0.5 ]
    [ 0    0    0    0    1    0   ]
    [ 0.5  0.5  0    0    0    0   ]
    [ 1    0    0    0    0    0   ]

Compute (approximately) the following probabilities:
(a) P[X_3000000000 = 2 | X_0 = 1],
(b) P[X_3000000001 = 2 | X_0 = 1],
(c) P[X_3000000002 = 2 | X_0 = 1].

Solution. The chain is periodic of period d = 3. Indeed, its matrix has the block form

P = [ 0  A  0 ]
    [ 0  0  A ]      where A = [ 0.5  0.5 ]
    [ A  0  0 ]                [ 1    0   ]

The subclasses accessible in one step are {1,2} → {3,4} → {5,6} → {1,2}. Therefore P^3 is block diagonal with diagonal blocks A^3 (which are irreducible aperiodic stochastic matrices).

Therefore, P[X_n = 2 | X_0 = 1] is non-zero only for n divisible by 3. In particular, P[X_3000000001 = 2 | X_0 = 1] = P[X_3000000002 = 2 | X_0 = 1] = 0, answering (b) and (c). As for (a), we have

P[X_3000000000 = 2 | X_0 = 1] ≈ lim_{n→∞} P[X_3n = 2 | X_0 = 1] = π_2,

where π is the (unique) equilibrium distribution for the irreducible aperiodic chain with transition matrix A^3, i.e. the unique solution of πA^3 = π or, equivalently, πA = π. This equation gives π_2 = (1/2)π_1, which has solution π = (2/3, 1/3). In conclusion, P[X_3000000000 = 2 | X_0 = 1] ≈ 1/3.

(a) P[X_3000000000 = 2 | X_0 = 1] ≈ 1/3
(b) P[X_3000000001 = 2 | X_0 = 1] = 0
(c) P[X_3000000002 = 2 | X_0 = 1] = 0
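The period-3 structure and the limiting value 1/3 can be confirmed numerically. A sketch (states 1–6 are indexed 0–5 here):

```python
import numpy as np

# Build P from its block-cyclic form.
A = np.array([[0.5, 0.5],
              [1.0, 0.0]])
Z = np.zeros((2, 2))
P = np.block([[Z, A, Z],
              [Z, Z, A],
              [A, Z, Z]])

# For n not divisible by 3 the chain cannot go from state 1 back to state 2.
P31 = np.linalg.matrix_power(P, 31)
P32 = np.linalg.matrix_power(P, 32)
assert P31[0, 1] == 0.0 and P32[0, 1] == 0.0

# Along multiples of 3 the (1,2) entry converges to pi_2 = 1/3.
P300 = np.linalg.matrix_power(P, 300)
assert abs(P300[0, 1] - 1/3) < 1e-9
```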

Problem 3: Let X_1, X_2, X_3, ... be a Markov chain on S = {1, 2, 3, 4, 5, 6} with transition matrix

P = [ 0.8  0.1  0    0.1  0    0   ]
    [ 0    0    1    0    0    0   ]
    [ 0    1    0    0    0    0   ]
    [ 0    0    0    0    0.5  0.5 ]
    [ 0    0    0    1    0    0   ]
    [ 0    0    0    1    0    0   ]

(a) Describe the communicating classes, specifying whether they are transient or recurrent classes.
(b) Compute, in the limit n → ∞, the following probability: P[X_n = 6 | X_0 = 1].

Solution. The matrix P has the form

P = [ Q  S        ]
    [ 0  P_1  0   ]
    [ 0  0    P_2 ]

where Q = [0.8] is the substochastic matrix of transitions among the transient states, S = [0.1 0 0.1 0 0] gives the transition probabilities from the transient state to the recurrent states, P_1 = [0 1; 1 0] is the transition matrix of the recurrent class {2, 3}, which is clearly periodic of period 2, and P_2 = [0 0.5 0.5; 1 0 0; 1 0 0] is the transition matrix of the recurrent class {4, 5, 6}. (Note that every return to state 4 within this class also takes an even number of steps, so this class too has period 2; the limit below should then be read in the long-run average sense, with P[X_n = 6 | X_0 = 1] oscillating around 1/8 for large n.)

We have lim_{n→∞} P[X_n = 6 | X_0 = 1] = α_{4,5,6}(1) · π_6^{4,5,6}, where α_R(1) is the probability that the chain, starting at X_0 = 1, eventually ends up in the recurrent class R, while π^R is the equilibrium distribution for the recurrent class R. In our case, for obvious symmetry reasons, α_{2,3}(1) = α_{4,5,6}(1) = 1/2. Alternatively, we can use the general formula

α_{4,5,6}(1) = Σ_{j=4,5,6} ((1 − Q)^{-1} S)_{1,j} = (1 − 0.8)^{-1} (0.1 + 0 + 0) = 0.1/0.2 = 1/2.

Moreover, π^{4,5,6} is the solution of π^{4,5,6} P_2 = π^{4,5,6}. This equation gives π_6^{4,5,6} = π_5^{4,5,6} = (1/2) π_4^{4,5,6}, which has solution π^{4,5,6} = (1/2, 1/4, 1/4). In conclusion, lim_{n→∞} P[X_n = 6 | X_0 = 1] = (1/2) · (1/4) = 1/8.

(a) Communicating classes: Transient class(es): {1}. Recurrent class(es): {2, 3}, {4, 5, 6}.
(b) lim_{n→∞} P[X_n = 6 | X_0 = 1] = 1/8
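A numerical sketch of the same computation (states 1–6 indexed 0–5; since returns to state 4 inside {4, 5, 6} take an even number of steps, two consecutive powers are averaged to recover the long-run value):

```python
import numpy as np

P = np.array([
    [0.8, 0.1, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
])

# Absorption probability from state 1 into {4,5,6}: (1 - Q)^{-1} S summed
# over that class, with Q = [0.8] and S = [0.1, 0, 0.1, 0, 0].
alpha = (0.1 + 0.0 + 0.0) / (1.0 - 0.8)
assert abs(alpha - 0.5) < 1e-12

# P^n[1,6] alternates between two values for large n; their average is
# the long-run value alpha * pi_6 = (1/2) * (1/4) = 1/8.
p_odd = np.linalg.matrix_power(P, 1001)[0, 5]
p_even = np.linalg.matrix_power(P, 1002)[0, 5]
assert abs((p_odd + p_even) / 2 - 1/8) < 1e-9
```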

Problem 4: Let X_1, X_2, X_3, ... be a Markov chain on S = {1, 2, 3} with transition matrix

P = [ 1/3  1/3  1/3 ]
    [ 1/3  1/3  1/3 ]
    [ 0    0    1   ]

Let T = inf{n ≥ 0 : X_n = 3} be the time of absorption at 3. Compute τ_1 = E[T | X_0 = 1].

Solution. In this chain 1 and 2 are transient states, and 3 is an absorbing state. Let τ_1 = E[T | X_0 = 1] and τ_2 = E[T | X_0 = 2]. For obvious symmetry reasons, τ_1 = τ_2. Moreover, by first step analysis,

τ_1 = E[T | X_0 = 1]
    = E[T | X_1 = 1] P[X_1 = 1 | X_0 = 1] + E[T | X_1 = 2] P[X_1 = 2 | X_0 = 1] + E[T | X_1 = 3] P[X_1 = 3 | X_0 = 1]
    = (1 + E[T | X_0 = 1]) (1/3) + (1 + E[T | X_0 = 2]) (1/3) + 1 · (1/3)
    = (1 + τ_1) (1/3) + (1 + τ_2) (1/3) + 1/3 = 1 + (2/3) τ_1.

Hence (1/3) τ_1 = 1, i.e. τ_1 = 3.

τ_1 = 3
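First step analysis for expected absorption times amounts to solving the linear system τ = 1 + Qτ over the transient states, i.e. τ = (I − Q)^{-1} 1. A sketch:

```python
import numpy as np

# Q = transition matrix restricted to the transient states {1, 2}.
Q = np.array([[1/3, 1/3],
              [1/3, 1/3]])

# Solve (I - Q) tau = 1 for the vector of expected absorption times.
tau = np.linalg.solve(np.eye(2) - Q, np.ones(2))
assert np.allclose(tau, [3.0, 3.0])
```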

Problem 5: Let X_t be a Poisson process of rate λ. Let W_1, W_2, ... be the waiting times (i.e. W_n is the time of the n-th jump). Compute P[W_1, W_2, W_3 ≤ 5 | X_7 = 3].

Solution. As proved in class, conditioned on X_7 = 3, the random variables W_1, W_2, W_3 are uniformly distributed in the region 0 ≤ w_1 < w_2 < w_3 ≤ 7, i.e. they have constant density

f_{W_1, W_2, W_3 | X_7 = 3} = 1 / Vol(0 ≤ w_1 < w_2 < w_3 ≤ 7) = 1 / (7^3/3!).

Hence,

P[W_1, W_2, W_3 ≤ 5 | X_7 = 3] = Vol(0 ≤ w_1 < w_2 < w_3 ≤ 5) / Vol(0 ≤ w_1 < w_2 < w_3 ≤ 7) = (5^3/3!) / (7^3/3!) = 5^3/7^3.

Alternatively, if we let S_1, S_2, S_3 be i.i.d. uniform random variables in [0, 7], we have, conditioned on X_7 = 3, that W_1 = min(S_1, S_2, S_3), W_2 = 2nd min(S_1, S_2, S_3), W_3 = max(S_1, S_2, S_3). Hence,

P[W_1, W_2, W_3 ≤ 5 | X_7 = 3] = P[S_1, S_2, S_3 ≤ 5] = P[S_1 ≤ 5]^3 = (5/7)^3.

P[W_1, W_2, W_3 ≤ 5 | X_7 = 3] = (5/7)^3
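The alternative argument via i.i.d. uniforms suggests a quick Monte Carlo check; a sketch, with an illustrative sample size and seed:

```python
import numpy as np

# Conditioned on X_7 = 3, the jump times are the order statistics of three
# i.i.d. Uniform[0, 7] variables, so P[all W_i <= 5] = P[max S_i <= 5].
rng = np.random.default_rng(0)
S = rng.uniform(0.0, 7.0, size=(200_000, 3))
estimate = np.mean(S.max(axis=1) <= 5.0)

exact = (5/7) ** 3  # about 0.364
assert abs(estimate - exact) < 0.01
```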

Problem 6: A radioactive source emits particles according to a Poisson process of rate 2 particles per minute.
(a) Compute the probability p_a that the first particle appears some time after 3 minutes and before 5 minutes.
(b) Compute the probability p_b that exactly one particle is emitted in the time interval from 3 to 5 minutes.

Solution. (a) Recall that the interarrival times T_1, T_2, ... of the Poisson process are independent identically distributed exponential random variables of rate λ = 2. To say that the first particle appears some time after 3 minutes and before 5 minutes is the same as to say that 3 < T_1 < 5. Hence

p_a = P[3 < T_1 < 5] = ∫_3^5 2e^{−2t} dt = −e^{−2t} |_3^5 = e^{−6} − e^{−10}.

(b) For p_b, we ask in addition that no other particle arrives in the interval [3, 5], i.e. that T_1 + T_2 > 5. Hence

p_b = P[3 < T_1 < 5, T_1 + T_2 > 5] = ∫_3^5 f_{T_1}(t) P[T_2 > 5 − t] dt = ∫_3^5 2e^{−2t} e^{−2(5−t)} dt = ∫_3^5 2e^{−10} dt = 4e^{−10}.

(a) p_a = e^{−6} − e^{−10}
(b) p_b = 4e^{−10}
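Both closed forms can be double-checked numerically; a sketch, with the midpoint-rule grid size chosen arbitrarily:

```python
import math

lam = 2.0  # rate: 2 particles per minute

# (a) p_a = integral of the density 2 e^{-2t} over (3, 5) = e^{-6} - e^{-10};
# compare the closed form with a midpoint-rule approximation.
n = 100_000
h = 2.0 / n
integral_a = sum(lam * math.exp(-lam * (3 + (k + 0.5) * h)) * h for k in range(n))
p_a = math.exp(-6) - math.exp(-10)
assert abs(integral_a - p_a) < 1e-8

# (b) the integrand 2 e^{-2t} e^{-2(5-t)} is the constant 2 e^{-10} on (3, 5),
# so the integral is just (5 - 3) times that constant, giving 4 e^{-10}.
p_b = 2 * math.exp(-10) * (5 - 3)
assert abs(p_b - 4 * math.exp(-10)) < 1e-15
```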
