Statistics 150: Spring 2007


April 23, 2008

1 Limiting Probabilities

If the discrete-time Markov chain with transition probabilities $p_{ij}$ is irreducible and positive recurrent, then the limiting probabilities $p_j = \lim_{t \to \infty} P_{ij}(t)$ are given by
$$p_j = \frac{\pi_j/\nu_j}{\sum_i \pi_i/\nu_i},$$
where the $\pi_j$ are the unique nonnegative solution of
$$\pi_j = \sum_i \pi_i p_{ij}, \qquad \sum_i \pi_i = 1.$$

We see that the $p_j$ are the unique nonnegative solution of
$$\nu_j p_j = \sum_i \nu_i p_i p_{ij}, \qquad \sum_j p_j = 1,$$
or, equivalently, using $q_{ij} = \nu_i p_{ij}$,
$$\nu_j p_j = \sum_{i \neq j} p_i q_{ij}, \qquad \sum_j p_j = 1.$$

Another way of obtaining the equations for the $p_j$ is by way of the forward equations
$$P'_{ij}(t) = \sum_{k \neq j} q_{kj} P_{ik}(t) - \nu_j P_{ij}(t).$$
If we assume that the limiting probabilities $p_j = \lim_{t \to \infty} P_{ij}(t)$ exist, then $P'_{ij}(t)$ necessarily converges to 0 as $t \to \infty$. Hence, assuming that we can interchange limit and summation in the above, we obtain upon letting $t \to \infty$
$$0 = \sum_{k \neq j} p_k q_{kj} - \nu_j p_j.$$
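As an illustration (not part of the original notes), the balance equations $0 = \sum_{k \neq j} p_k q_{kj} - \nu_j p_j$ can be verified numerically for a simple two-state chain; the rates below are arbitrary illustrative values.

```python
lam, mu = 2.0, 3.0

# Infinitesimal rates q_{kj} (off-diagonal) and holding rates nu_j.
q = {(0, 1): lam, (1, 0): mu}
nu = {0: lam, 1: mu}

# Closed-form stationary distribution of the two-state chain.
p = {0: mu / (lam + mu), 1: lam / (lam + mu)}

# Each term  sum_{k != j} p_k q_{kj} - nu_j p_j  should vanish.
balances = [
    sum(p[k] * q[(k, j)] for k in [0, 1] if k != j) - nu[j] * p[j]
    for j in [0, 1]
]
print(balances)  # both entries essentially zero
```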

Let us now determine the limiting probabilities for a birth and death process. The relevant equations are
$$\lambda_0 p_0 = \mu_1 p_1,$$
$$\lambda_n p_n = \mu_{n+1} p_{n+1} + (\lambda_{n-1} p_{n-1} - \mu_n p_n), \qquad n \geq 1,$$
or, equivalently,
$$\lambda_0 p_0 = \mu_1 p_1,$$
$$\lambda_1 p_1 = \mu_2 p_2 + (\lambda_0 p_0 - \mu_1 p_1) = \mu_2 p_2,$$
$$\lambda_2 p_2 = \mu_3 p_3 + (\lambda_1 p_1 - \mu_2 p_2) = \mu_3 p_3,$$
$$\vdots$$
$$\lambda_n p_n = \mu_{n+1} p_{n+1} + (\lambda_{n-1} p_{n-1} - \mu_n p_n) = \mu_{n+1} p_{n+1}.$$

Solving in terms of $p_0$ yields
$$p_1 = \frac{\lambda_0}{\mu_1} p_0,$$
$$p_2 = \frac{\lambda_1}{\mu_2} p_1 = \frac{\lambda_1 \lambda_0}{\mu_2 \mu_1} p_0,$$
$$p_3 = \frac{\lambda_2}{\mu_3} p_2 = \frac{\lambda_2 \lambda_1 \lambda_0}{\mu_3 \mu_2 \mu_1} p_0,$$
$$\vdots$$
$$p_n = \frac{\lambda_{n-1}}{\mu_n} p_{n-1} = \frac{\lambda_{n-1} \lambda_{n-2} \cdots \lambda_1 \lambda_0}{\mu_n \mu_{n-1} \cdots \mu_2 \mu_1} p_0.$$

Using $\sum_{n=0}^{\infty} p_n = 1$ we obtain
$$1 = p_0 + p_0 \sum_{n=1}^{\infty} \frac{\lambda_{n-1} \cdots \lambda_1 \lambda_0}{\mu_n \cdots \mu_2 \mu_1},$$
or
$$p_0 = \left[ 1 + \sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} \right]^{-1},$$
and hence
$$p_n = \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1} / (\mu_1 \mu_2 \cdots \mu_n)}{1 + \sum_{m=1}^{\infty} \lambda_0 \lambda_1 \cdots \lambda_{m-1} / (\mu_1 \mu_2 \cdots \mu_m)}, \qquad n \geq 1.$$
The above equations also show what condition is needed for the limiting probabilities to exist, namely
$$\sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} < \infty.$$
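The product formula translates directly into a few lines of code. The sketch below is illustrative and not from the notes (the function name bd_limits is my own); it truncates the infinite sum at a finite N, which is accurate whenever the tail of the series is negligible.

```python
def bd_limits(lam, mu, N):
    """Limiting probabilities p_0..p_N of a birth-death process,
    truncating the normalizing sum at N.
    lam(n): birth rate in state n; mu(n): death rate in state n."""
    prods = [1.0]  # prods[n] = lambda_0...lambda_{n-1} / (mu_1...mu_n)
    for n in range(1, N + 1):
        prods.append(prods[-1] * lam(n - 1) / mu(n))
    p0 = 1.0 / sum(prods)
    return [p0 * r for r in prods]

# Example: constant rates lambda_n = 1, mu_n = 2 (an M/M/1 queue),
# so p_n should be close to (1/2)^n * (1/2).
p = bd_limits(lambda n: 1.0, lambda n: 2.0, N=60)
print(p[0], p[1])  # close to 1/2 and 1/4
```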

Example 1.1 (The M/M/1 Queue). In the M/M/1 queue $\lambda_n = \lambda$, $\mu_n = \mu$, and thus
$$p_n = \frac{(\lambda/\mu)^n}{1 + \sum_{m=1}^{\infty} (\lambda/\mu)^m} = \left(\frac{\lambda}{\mu}\right)^n \left(1 - \frac{\lambda}{\mu}\right), \qquad n \geq 0,$$
provided that $\lambda/\mu < 1$. Customers arrive at rate $\lambda$ and are served at rate $\mu$, and thus if $\lambda > \mu$, they arrive at a faster rate than they can be served and the queue size goes to infinity. The case $\lambda = \mu$ is null recurrent and thus has no limiting probabilities.
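A quick numerical sanity check of the geometric form (an illustrative sketch, not part of the notes): writing $\rho = \lambda/\mu$, the probabilities should sum to 1 and give the well-known mean number in system $\rho/(1-\rho)$.

```python
lam, mu = 3.0, 5.0
rho = lam / mu  # 0.6 < 1, so the limiting probabilities exist

# Truncate the geometric distribution p_n = (1 - rho) * rho^n deep in the tail.
p = [(1 - rho) * rho**n for n in range(200)]

total = sum(p)                                          # close to 1
mean_in_system = sum(n * pn for n, pn in enumerate(p))  # close to rho/(1-rho) = 1.5
print(total, mean_in_system)
```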

2 Time Reversibility

Consider an ergodic continuous-time Markov chain and suppose that it has been in operation for an infinitely long time; that is, suppose that it started at time $-\infty$. Such a process will be stationary, and we say that it is in steady state.

Let us consider this process going backwards in time. Now, since the forward process is a continuous-time Markov chain, it follows that given the present state, call it $X(t)$, the past state $X(t-s)$ and the future states $X(y)$, $y > t$, are independent.

Therefore,
$$P\{X(t-s) = j \mid X(t) = i,\ X(y),\ y > t\} = P\{X(t-s) = j \mid X(t) = i\},$$
and so we can conclude that the reverse process is also a continuous-time Markov chain. Also, since the amount of time spent in a state is the same whether one goes forward or backward in time, it follows that the amount of time the reverse chain spends in state $i$ on a visit is exponential with the same rate $\nu_i$ as in the forward process.

The sequence of states visited by the reverse process constitutes a discrete-time Markov chain with transition probabilities $p^*_{ij}$ given by
$$p^*_{ij} = \frac{\pi_j p_{ji}}{\pi_i},$$
where $\{\pi_j,\ j \geq 0\}$ are the stationary probabilities of the embedded discrete-time Markov chain with transition probabilities $p_{ij}$. Let $q^*_{ij} = \nu_i p^*_{ij}$ denote the infinitesimal rates of the reverse chain. We see that
$$q^*_{ij} = \frac{\nu_i \pi_j p_{ji}}{\pi_i}.$$

Recalling that $p_k = \dfrac{\pi_k/\nu_k}{C}$, where $C = \sum_i \pi_i/\nu_i$, we see that
$$\frac{\pi_j}{\pi_i} = \frac{\nu_j p_j}{\nu_i p_i},$$
and so
$$q^*_{ij} = \frac{\nu_j p_j p_{ji}}{p_i} = \frac{p_j q_{ji}}{p_i}.$$
That is,
$$p_i q^*_{ij} = p_j q_{ji}.$$

Definition 2.1 (Reversibility). A stationary continuous-time Markov chain is said to be time reversible if the reverse process follows the same probabilistic law as the original process. That is, it is time reversible if
$$q^*_{ij} = q_{ij} \quad \text{for all } i, j,$$
which is equivalent to
$$p_i q_{ij} = p_j q_{ji} \quad \text{for all } i, j.$$

Proposition 2.2. Any ergodic birth and death process in steady state is time reversible.
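Proposition 2.2 can be illustrated numerically: for a finite birth and death chain, the limiting probabilities obtained from the product formula satisfy detailed balance $p_i q_{ij} = p_j q_{ji}$ across every edge. This sketch is illustrative (the rates are arbitrary values, not from the notes).

```python
# Birth rates lambda_0..lambda_2 and death rates mu_1..mu_3 on states {0,1,2,3}.
lams = [1.0, 2.0, 0.5]
mus = [3.0, 1.5, 2.5]

# p_n = p_0 * prod_{k=1}^{n} lambda_{k-1} / mu_k, then normalize.
prods = [1.0]
for n in range(1, 4):
    prods.append(prods[-1] * lams[n - 1] / mus[n - 1])
p = [r / sum(prods) for r in prods]

# Detailed balance across each edge (i, i+1): p_i lambda_i = p_{i+1} mu_{i+1}.
gaps = [p[i] * lams[i] - p[i + 1] * mus[i] for i in range(3)]
print(gaps)  # every entry essentially zero
```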

Corollary 2.3. Consider an M/M/s queue in which customers arrive in accordance with a Poisson process having rate $\lambda$ and are served by any one of $s$ servers, each having an exponentially distributed service time with rate $\mu$. If $\lambda < s\mu$, then the output process of departing customers is, in steady state, a Poisson process with rate $\lambda$.

Proof: Let $X(t)$ denote the number of customers in the system at time $t$. Since the M/M/s process is a birth and death process, it follows that $\{X(t),\ t \geq 0\}$ is time reversible. Now, going forward in time, the time points at which $X(t)$ increases by 1 constitute a Poisson process, since these are just the arrival times of customers. Hence, by time reversibility, the time points at which $X(t)$ increases by 1 when we go backwards in time also constitute a Poisson process. But these latter points are exactly the points of time when customers depart. Hence, the departure times constitute a Poisson process with rate $\lambda$.
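Corollary 2.3 can also be probed by simulation. The sketch below (illustrative, standard library only) simulates an M/M/1 queue as a continuous-time Markov chain and checks that the mean time between departures is close to $1/\lambda$, as a Poisson output process with rate $\lambda$ requires. It checks only the rate, not the full Poisson property; the independence of the inter-departure times is what time reversibility delivers.

```python
import random

random.seed(0)
lam, mu = 1.0, 2.0          # arrival and service rates, lam < mu so stable

t, n = 0.0, 0               # current time, number in system
departures = []
while len(departures) < 50000:
    if n == 0:
        t += random.expovariate(lam)     # wait for the next arrival
        n = 1
    else:
        ta = random.expovariate(lam)     # time to next arrival
        ts = random.expovariate(mu)      # time to next service completion
        if ts < ta:
            t += ts
            n -= 1
            departures.append(t)         # a customer departs
        else:
            t += ta
            n += 1

gaps = [b - a for a, b in zip(departures, departures[1:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)             # close to 1/lam = 1.0
```

Resampling fresh exponentials after every event is justified by the memoryless property, exactly as in the holding-time argument above.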

3 Exercises

1) Suppose that the state of the system can be modeled as a two-state continuous-time Markov chain with transition rates $\nu_0 = \lambda$, $\nu_1 = \mu$. When the state of the system is $i$, events occur in accordance with a Poisson process with rate $\alpha_i$, $i = 0, 1$. Let $N(t)$ denote the number of events in $(0, t)$.

(a) Find $\lim_{t \to \infty} N(t)/t$.

(b) If the initial state is state 0, find $E[N(t)]$.

2) Consider a continuous-time Markov chain with stationary probabilities $\{p_i,\ i \geq 0\}$, and let $T$ denote the first time that the chain has been in state 0 for $t$ consecutive time units. Find $E[T \mid X(0) = 0]$.

3) Find the limiting probabilities for the M/M/s system and determine the condition needed for these to exist. 18
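A numerical sketch for checking an answer to Exercise 3 (illustrative, not from the notes; the function name mms_limits is my own): the M/M/s system is a birth and death process with $\lambda_n = \lambda$ and $\mu_n = \min(n, s)\mu$, so the general product formula applies, with existence requiring $\lambda < s\mu$.

```python
def mms_limits(lam, mu, s, N=400):
    """Limiting probabilities of the M/M/s queue via the birth-death
    product formula, truncating the normalizing sum at N."""
    # Birth-death rates: lambda_n = lam, mu_n = min(n, s) * mu.
    prods = [1.0]
    for n in range(1, N + 1):
        prods.append(prods[-1] * lam / (min(n, s) * mu))
    p0 = 1.0 / sum(prods)
    return [p0 * r for r in prods]

# With s = 2, lam = 2, mu = 1.5 the closed form gives p_0 = 1/5 exactly.
p = mms_limits(lam=2.0, mu=1.5, s=2)
print(p[0])  # close to 0.2
```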

4) Consider a time-reversible continuous-time Markov chain having parameters $\nu_i$, $p_{ij}$ and having limiting probabilities $p_j$, $j \geq 0$. Choose some state, say state 0, and consider the new Markov chain which makes state 0 an absorbing state; that is, reset $\nu_0$ to equal 0. Suppose now that at time points chosen according to a Poisson process with rate $\lambda$, Markov chains, all of the above type (having 0 as an absorbing state), are started, with the initial states chosen to be $j$ with probabilities $p_{0j}$. All the existing chains are assumed to be independent. Let $N_j(t)$ denote the number of chains in state $j$, $j > 0$, at time $t$.

(a) Argue that if there are no chains alive at time 0, then the $N_j(t)$, $j > 0$, are independent Poisson random variables.

(b) Argue that the vector process $\{(N_1(t), N_2(t), \ldots)\}$ is time reversible in steady state with stationary probabilities
$$p(\mathbf{n}) = \prod_{j=1}^{\infty} e^{-\alpha_j} \frac{\alpha_j^{n_j}}{n_j!}, \qquad \mathbf{n} = (n_1, n_2, \ldots),$$
where $\alpha_j = \lambda p_j / (p_0 \nu_0)$.