MATH37012 Week 10
Dr Jonathan Bagley
Semester 2, 2018

2.18 a) Finding λ_j and µ_j for a particular category of B.D. processes.

Consider a process where the destination of the next transition is determined by two independent, competing processes; for example, birth versus death, or, in a queue, arrivals versus departures. Further suppose that, when in state j, the time to the next birth/arrival is V ∼ exp(λ_j) and the time to the next death/departure is W ∼ exp(µ_j). Then, using l.o.m. (lack of memory), the holding time in state j is min(V, W) ∼ exp(λ_j + µ_j), which gives q_j = λ_j + µ_j.

Next, observe that r_{j,j+1} = P(V < W). We have

P(V < W | min(V, W) ∈ (t, t + δt])
= P(V < W, min(V, W) ∈ (t, t + δt]) / P(min(V, W) ∈ (t, t + δt])
≈ P(V ∈ (t, t + δt], W > t + δt) / [(λ_j + µ_j) exp{−(λ_j + µ_j)t} δt]
≈ λ_j exp(−λ_j t) δt exp{−µ_j(t + δt)} / [(λ_j + µ_j) exp{−(λ_j + µ_j)t} δt]
→ λ_j / (λ_j + µ_j) as δt → 0,

which is independent of t. We have shown that the jump destination is independent of the holding time, as required for the Markov property. We have also shown that r_{j,j+1} = P(V < W) = λ_j / (λ_j + µ_j). Consequently

q_{j,j+1} = r_{j,j+1} q_j = λ_j / (λ_j + µ_j) × (λ_j + µ_j) = λ_j.

It then follows that r_{j,j−1} = µ_j / (λ_j + µ_j), and hence that

q_{j,j−1} = µ_j / (λ_j + µ_j) × (λ_j + µ_j) = µ_j.

Now observe that the λ_j and µ_j above are equal to those defined in 2.15. We will use this when modelling queues as B.D. processes.
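As a numerical sanity check of 2.18 a), here is a minimal simulation sketch; the rates λ_j = 2 and µ_j = 3 are invented purely for illustration. It confirms that min(V, W) has mean 1/(λ_j + µ_j), that P(V < W) ≈ λ_j/(λ_j + µ_j), and that the jump destination is (approximately) independent of the holding time.

    import numpy as np

    rng = np.random.default_rng(0)
    lam, mu = 2.0, 3.0                 # hypothetical birth and death rates in state j
    n = 200_000

    V = rng.exponential(1 / lam, n)    # time to next birth/arrival, exp(lam)
    W = rng.exponential(1 / mu, n)     # time to next death/departure, exp(mu)
    hold = np.minimum(V, W)            # holding time in state j

    print(hold.mean(), 1 / (lam + mu))        # exp(lam + mu) has mean 1/(lam + mu)
    print((V < W).mean(), lam / (lam + mu))   # r_{j,j+1} = lam/(lam + mu)

    # destination independent of holding time: condition on short versus long holds
    short = hold < np.quantile(hold, 0.5)
    print((V < W)[short].mean(), (V < W)[~short].mean())  # both approx lam/(lam + mu)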

For the Yule process we have r_{j,j−1} ≡ 0, which is equivalent to µ_j ≡ 0, or min(V, W) = V.

We shall also need the following.

2.18 b) The thinned Poisson process.

Suppose we have a Poisson process, rate λ, where each point is independently retained with probability p. It can be shown (perhaps you did so in Probability 2) that this results in a new Poisson process with rate pλ. The intervals between points in this new Poisson process are independent and each exp(pλ) distributed. Using l.o.m., the time from when we begin observing the process up to the next point is exp(pλ) distributed.
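A minimal sketch of the thinning result, with λ = 1.5 and p = 0.4 chosen arbitrarily for illustration: the inter-point intervals of the thinned process should have mean and standard deviation 1/(pλ), as an exp(pλ) distribution requires.

    import numpy as np

    rng = np.random.default_rng(1)
    lam, p = 1.5, 0.4                    # arbitrary rate and retention probability
    n = 500_000

    gaps = rng.exponential(1 / lam, n)   # exp(lam) inter-arrival times
    times = np.cumsum(gaps)              # points of the original Poisson process
    kept = times[rng.random(n) < p]      # retain each point independently with prob p

    new_gaps = np.diff(kept)
    print(new_gaps.mean(), 1 / (p * lam))  # mean of exp(p*lam)
    print(new_gaps.std(), 1 / (p * lam))   # std of exp(p*lam): equal to the mean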

Apply 2.18 a) and b) to the telephone conversation question on example sheet 7. This process can be viewed as a B.D. process with S = {0, 1, 2}.

Continuous time Markov chains: stationary and asymptotic behaviour.

Recall, in the discrete case, that under certain conditions, ∀ i, j ∈ S,

p_ij(n) → π_j as n → ∞,

where π is the stationary distribution. We now derive a similar result in continuous time.

2.19 Definition
X(t) is irreducible if, for any pair i, j of states, we have that p_ij(t) > 0 for some t > 0. Note that it is known that either p_ij(t) > 0 ∀ t > 0, or p_ij(t) = 0 ∀ t > 0.

2.20 Lemma
X(t) irreducible implies that, ∀ h > 0, {X(nh)}_{n≥1} is an irreducible, aperiodic, discrete time M.C.

Proof
Clearly X(nh) has the Markov property. Second, X(t) irreducible implies p_ij(nh) > 0 for all n ≥ 1 and all i, j; therefore X(nh) is also irreducible. Finally, p_ii(h) > 0 for all h > 0 gives X(nh) aperiodic.

X(nh) is called the skeleton chain (not to be confused with the jump chain defined earlier).

2.21 Definition
A distribution π is a stationary distribution for X(t) if π = πP(t) ∀ t ≥ 0.

We can now state

2.22 Theorem
Let X(t) be an irreducible continuous time finite state space M.C. or honest B.D. process. Then a distribution π is a stationary distribution for X(t) if and only if πQ = 0. Furthermore, if a stationary distribution exists, it is unique.

The proof requires the following:

2.23 Theorem
Under the hypotheses of 2.22, lim_{t→∞} p_ij(t) exists ∀ i, j ∈ S and is independent of i.

Proof
The proof is omitted. It makes use of the skeleton chain (Kingman 1963). If you are interested in the proof, there are links to Kingman's paper, and a biography of the man, on the course materials page.
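Before the proof of 2.22, a small numerical illustration. The generator below is a hypothetical three-state B.D. generator (the rates are invented; they are not those of the telephone example): we solve πQ = 0 for a distribution π and check that πP(t) = π for several t.

    import numpy as np
    from scipy.linalg import expm, null_space

    # hypothetical B.D. generator on S = {0, 1, 2}:
    # birth rates lam_0 = 2, lam_1 = 2; death rates mu_1 = 1, mu_2 = 3
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  3.0, -3.0]])

    pi = null_space(Q.T).ravel()   # left null vector of Q, i.e. pi Q = 0
    pi /= pi.sum()                 # normalise to a distribution
    print(pi)                      # (3/13, 6/13, 4/13)

    for t in (0.5, 1.0, 5.0):
        print(pi @ expm(Q * t))    # equals pi for every t, as 2.22 asserts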

Proof (of 2.22)
We consider separately the cases a) |S| < ∞ and b) honest B.D.

a) When |S| < ∞ we have

πQ = 0
⟹ πQ^n = 0 ∀ n ≥ 1
⟹ π Σ_{n=1}^∞ Q^n t^n / n! = 0 ∀ t
⟹ π { Σ_{n=0}^∞ Q^n t^n / n! − I } = 0 ∀ t
⟹ π Σ_{n=0}^∞ Q^n t^n / n! = π ∀ t,

which, by the definition in 2.14, implies πP(t) = π ∀ t ≥ 0; and so π is a stationary distribution.

b) When X(t) is a B.D. process with |S| = ∞, we cannot use the argument of 2.14. First, by construction (see later), any distribution satisfying πQ = 0 is unique. Now πQ = 0 implies πQP(t) = 0 ∀ t ≥ 0, which, by 2.13, implies πP(t)Q = 0 ∀ t ≥ 0. Since X(t) is honest, we have Σ_{j∈S} p_ij(t) = 1 ∀ i ∈ S, t ≥ 0. This, together with π being a distribution, implies that πP(t) is also a distribution. Hence, by the above uniqueness, π = πP(t) ∀ t ≥ 0; and so π is a stationary distribution for X(t).

Conversely, suppose π is a stationary distribution for X(t). Then π = πP(t) ∀ t ≥ 0 implies π = πP(nh) ∀ h > 0 and n ≥ 1; and therefore π is a stationary distribution for X(nh). Now, by the discrete theory, we have, for each i, j ∈ S,

p_ij(nh) → π_j as n → ∞.

But 2.23 tells us that lim_{t→∞} p_ij(t) exists; and so it must equal π_j. Letting t → ∞ in P′(t) = P(t)Q now gives lim_{t→∞} P′(t) = ΠQ, where each row of Π is equal to π. But if lim_{t→∞} P(t) is a matrix of constants, and all the limits lim_{t→∞} p′_ij(t) exist, then those limits must equal 0. Consequently 0 = πQ, as required.

Finally, if X(t) has more than one stationary distribution, then so does X(nh), which we know, by discrete theory, is not true.
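The key computational step in a) is the series definition P(t) = Σ_{n≥0} Q^n t^n / n! from 2.14. As a quick check, a sketch reusing the hypothetical generator above: a truncated series reproduces the matrix exponential, and πQ = 0 forces πQ^n = 0 for every power.

    import numpy as np
    from scipy.linalg import expm, null_space

    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  3.0, -3.0]])
    pi = null_space(Q.T).ravel()
    pi /= pi.sum()

    t = 1.0
    P, term = np.eye(3), np.eye(3)
    for n in range(1, 40):              # partial sums of sum_n Q^n t^n / n!
        term = term @ Q * t / n
        P += term
    print(np.max(np.abs(P - expm(Q * t))))   # ~0: truncated series matches P(t)

    print(pi @ Q)                       # zero vector
    print(pi @ Q @ Q)                   # pi Q^2 = 0 as well, and so on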

We can now state a limit theorem.

2.24 Theorem
Under the hypotheses of 2.22, if X(t) has a (unique) stationary distribution π, then ∀ i, j ∈ S,

p_ij(t) → π_j as t → ∞.

Otherwise, ∀ i, j ∈ S, p_ij(t) → 0 as t → ∞.

Proof
The proof of the first claim is contained in the above. For the second claim, let α_j = lim_{t→∞} p_ij(t). Then also, for h > 0, α_j = lim_{n→∞} p_ij(nh). Suppose now that at least one α_j is not zero. Then, by discrete theory applied to X(nh), we have that α is a distribution. Consequently, as in the above, lim_{t→∞} P′(t) = AQ, where each row of A is equal to α. Then it follows, as in the above, that 0 = αQ. That is, α is a stationary distribution, contradicting the assumption that X(t) has none. Hence α_j = 0 ∀ j ∈ S.
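To illustrate the first claim of 2.24 numerically (again with the invented generator from above), every row of P(t) = e^{Qt} approaches π as t grows:

    import numpy as np
    from scipy.linalg import expm, null_space

    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  3.0, -3.0]])
    pi = null_space(Q.T).ravel()
    pi /= pi.sum()

    for t in (0.1, 1.0, 10.0):
        P_t = expm(Q * t)
        # max over i, j of |p_ij(t) - pi_j|: shrinks towards 0 as t grows
        print(t, np.max(np.abs(P_t - pi)))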