
Irreducibility
- Irreducible: every state can be reached from every other state, i.e., for any $i, j$ there exists an $m_0 \ge 0$ such that $p_{ij}^{(m_0)} > 0$.
- States $i$ and $j$ communicate if the above condition holds in both directions.
- Irreducible: all states communicate with each other.
- Absorbing state: $p_{jj} = 1$. A chain with an absorbing state $i$ (and at least one other state) is reducible.
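
The reachability condition above can be checked mechanically. A minimal sketch in plain Python (the helper names `reachable` and `is_irreducible` are illustrative, not from the lecture):

```python
# Sketch: check irreducibility of a finite DTMC by testing mutual
# reachability on the directed graph of positive transition probabilities.

def reachable(P, i):
    """Set of states reachable from i via edges with p_jk > 0."""
    seen, stack = {i}, [i]
    while stack:
        j = stack.pop()
        for k, p in enumerate(P[j]):
            if p > 0 and k not in seen:
                seen.add(k)
                stack.append(k)
    return seen

def is_irreducible(P):
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

# A 3-state cycle is irreducible; a chain with an absorbing state is not.
cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
absorbing = [[0.5, 0.5], [0.0, 1.0]]   # state 1 has p_11 = 1
```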

Recurrence
- $f_{jj}^{(n)}$: the probability that the first return to $j$ occurs $n$ steps after leaving $j$.
- $f_{jj} := \sum_{n \ge 1} f_{jj}^{(n)}$: the probability that the chain ever returns to $j$.
- $j$ is recurrent if $f_{jj} = 1$; $j$ is transient if $f_{jj} < 1$.
- If $j$ is recurrent, the mean recurrence time is $m_{jj} = \sum_{n \ge 1} n f_{jj}^{(n)}$. If $m_{jj} < \infty$, $j$ is positive recurrent; if $m_{jj} = \infty$, $j$ is null recurrent.
- E.g., a random walk returning to the origin with first-return probabilities 0, 1/2, 1/4, 1/8, 1/16, ...

Periodicity
- Period of a state $i$: the greatest common divisor (gcd) $d(i) = \gcd\{n : p_{ii}^{(n)} > 0\}$.
- Aperiodic Markov chain: a DTMC (discrete-time Markov chain) is aperiodic iff all states have period 1.
- Example: for the DTMC with $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, the period is 2. The limit of $P^n$ does not exist, but the limit of $P^{2n}$ does.
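
The gcd definition can be sketched as a small computation over matrix powers (the cutoff `N` and helper names are illustrative assumptions; for small finite chains a modest cutoff suffices):

```python
# Sketch: period of state i as gcd{n : p_ii^(n) > 0}, approximated by
# taking the gcd over n up to a cutoff N.
from math import gcd

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def period(P, i, N=20):
    d, Pn = 0, P
    for n in range(1, N + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)       # gcd(0, n) == n starts the accumulation
        Pn = mat_mul(Pn, P)
    return d

flip = [[0, 1], [1, 0]]           # deterministic flip: period 2
lazy = [[0.5, 0.5], [0.5, 0.5]]   # p_ii > 0, hence aperiodic: period 1
```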

Ergodicity
- Ergodic Markov chain: positive recurrent, irreducible, and aperiodic. (A finite, irreducible, aperiodic chain is ergodic.)
- Physical meaning: if ergodic, time average = ensemble average.
- A process may be ergodic for some moments only; it is fully ergodic if it is ergodic for all moments. See page 35 of the Gross book.

Limiting distribution
- If a finite DTMC is aperiodic and irreducible, then $\lim_{n\to\infty} p_{ij}^{(n)}$ exists and is independent of $i$.
- 3 types of probabilities:
  - Limiting prob.: $l_j := \lim_{n\to\infty} p_{ij}^{(n)}$
  - Time-average prob.: $p_j := \lim_{t\to\infty} \frac{1}{t} \sum_{n=1}^{t} 1(X(n) = j)$
  - Stationary prob.: $\pi$ s.t. $\pi = \pi P$

Limiting probability
- Limiting probability of $i$: the limiting probability of being in state $i$; it may not exist (consider the periodic case).
- Time-average probability of $i$: the % of time spent in state $i$.
- Stationary probability: given that the initial state is chosen according to $\pi$, the states at all future times follow the same distribution.

Example: binary communication channel
- Two-state chain with $P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}$.
- Limiting prob.: does not exist if $a = b = 1$; otherwise $l = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$.
- Time-average prob.: $p = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$.
- Stationary prob.: solve the equations $\pi = \pi P$ and $\pi e = 1$.
- Conclusion: the limiting prob. may not exist; otherwise all three are identical. $\pi$ and $p$ are always the same.
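
A quick numerical check of the closed form (the concrete values a = 0.3, b = 0.6 are illustrative, not from the slides):

```python
# Check that pi = (b/(a+b), a/(a+b)) satisfies pi = pi P for the
# assumed binary-channel matrix P = [[1-a, a], [b, 1-b]], 0 < a, b < 1.
a, b = 0.3, 0.6
P = [[1 - a, a], [b, 1 - b]]
pi = (b / (a + b), a / (a + b))

# pi P, computed explicitly
piP = (pi[0] * P[0][0] + pi[1] * P[1][0],
       pi[0] * P[0][1] + pi[1] * P[1][1])
```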

Limiting probability
- Relations of the 3 probabilities: if the limiting probability exists, then these 3 probabilities match! If ergodic, the limiting prob. exists and the 3 probabilities match.
- How to solve? $\pi = \pi P$, $\pi e = 1$ (note $P e = e$, where $e$ is the all-ones column vector).
- $\pi$ is a left eigenvector of matrix $P$ with eigenvalue 1: the balance equation.
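
One standard numerical recipe for this: replace one (redundant) balance equation of $\pi(P - I) = 0$ with the normalization $\pi e = 1$ and solve the linear system. A NumPy sketch (the 3-state matrix is a made-up example):

```python
# Sketch: solve pi = pi P together with pi e = 1.
import numpy as np

def stationary(P):
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = P.T - np.eye(n)        # rows of A @ pi = 0 are the balance eqns
    A[-1, :] = 1.0             # replace last equation with sum(pi) = 1
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
pi = stationary(P)
```

For an irreducible chain the replaced equation is linearly dependent on the others, so the square system has a unique solution.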

Example: discrete-time birth-death process
- Models the dynamics of a population.
- Parameters: increase by 1 with birth prob. $b$; decrease by 1 with death prob. $d$; stay in the same state with prob. $1 - b - d$.
- On the board: write the transition probability matrix $P$, solve the equation for the stationary prob., and understand the balance equation.
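
A sketch of the board exercise, assuming a chain truncated at a maximum population $N$ so that $P$ is finite (the truncation, boundary handling, and parameter values are assumptions, not from the slides):

```python
# Sketch: truncated discrete-time birth-death chain with birth prob. b
# and death prob. d; boundary states keep the leftover mass on the
# diagonal.  The stationary probs then satisfy pi_{k+1}/pi_k = b/d.
import numpy as np

def bd_matrix(N, b, d):
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k < N:
            P[k, k + 1] = b          # birth
        if k > 0:
            P[k, k - 1] = d          # death
        P[k, k] = 1.0 - P[k].sum()   # stay (includes boundary correction)
    return P

def stationary(P):
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

b, d, N = 0.2, 0.5, 6
pi = stationary(bd_matrix(N, b, d))
```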

Part II: continuous-time Markov chain (CTMC)
- Continuous-time, discrete-state Markov process.
- Definition (Markov property): $X(t)$ is a CTMC if for any $n$ and any sequence $t_1 < t_2 < \cdots < t_n$, we have
  $P[X(t_n) = j \mid X(t_{n-1}) = i_{n-1}, X(t_{n-2}) = i_{n-2}, \ldots, X(t_1) = i_1] = P[X(t_n) = j \mid X(t_{n-1}) = i_{n-1}]$.

Exponentially distributed sojourn time of a CTMC
- The sojourn time at state $i$, $\tau_i$, $i = 1, 2, \ldots, S$, is exponentially distributed. Proved with the Markov property:
  $P[\tau_i > s + t \mid \tau_i > s] = P[\tau_i > t]$
  $\Rightarrow \frac{P[\tau_i > s + t]}{P[\tau_i > s]} = P[\tau_i > t]$
  $\Rightarrow P[\tau_i > s + t] = P[\tau_i > s]\, P[\tau_i > t]$
- Writing $g(t) = P[\tau_i > t]$ and differentiating in $s$ at $s = 0$ gives $g'(t) = g'(0)\, g(t)$, so $\ln g(t) = g'(0)\, t$ and, with $f_i(0) := -g'(0)$,
  $P[\tau_i > t] = e^{-f_i(0)\, t}$
- The sojourn time is exponentially distributed! $f_i(0)$ is the transition rate out of state $i$.
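
The memoryless identity at the heart of the proof can be illustrated numerically for an exponential survival function (the rate `r` is an arbitrary illustrative value):

```python
# Illustrate g(s + t) = g(s) * g(t) for g(t) = P[tau > t] = e^{-r t}.
from math import exp

r = 0.8   # hypothetical transition rate f_i(0)

def g(t):
    return exp(-r * t)

pairs = [(0.3, 1.1), (2.0, 0.5)]
errors = [abs(g(s + t) - g(s) * g(t)) for s, t in pairs]
```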

Basic elements of a CTMC
- State space: $S = \{1, 2, \ldots, S\}$; assume it is discrete and finite for simplicity.
- Transition prob.: $p_{ij}(s, t) := P[X(t) = j \mid X(s) = i]$, $s \le t$; in matrix form $P(s, t) := (p_{ij}(s, t))$.
- If homogeneous, denote $P(t) := P(s, s + t)$: $P(t)$ is independent of $s$ (the start time).
- Chapman-Kolmogorov equation: $P(s, t) = P(s, u)\, P(u, t)$, $s \le u \le t$.

Infinitesimal generator (transition rate matrix)
- Differentiating the C-K equation and letting $u \to t$:
  $\frac{\partial P(s, t)}{\partial t} = P(s, t)\, Q(t)$, $s \le t$, with solution $P(s, t) = e^{\int_s^t Q(u)\, du}$.
- $Q(t)$ is called the infinitesimal generator of the transition matrix function $P(s, t)$, since
  $Q(t) = \lim_{\Delta t \to 0} \frac{P(t, t + \Delta t) - I}{\Delta t}$.
- Its elements are
  $q_{ii}(t) = \lim_{\Delta t \to 0} \frac{p_{ii}(t, t + \Delta t) - 1}{\Delta t} \le 0$, and $q_{ij}(t) = \lim_{\Delta t \to 0} \frac{p_{ij}(t, t + \Delta t)}{\Delta t} \ge 0$, $i \ne j$, with
  $\sum_{j=1}^{S} q_{ij}(t) = 0$ for all $i$, i.e., $Q(t)\, e = 0$.
- Relation between the transition rate $q_{ij}$ and the probability $p_{ij}$: $-q_{ii}$ is the rate of flow out of $i$; $q_{ij}$ is the rate of flow from $i$ into $j$.

Infinitesimal generator
- If homogeneous, denote $Q = Q(t)$.
- Derivative of the transition probability matrix: $\frac{dP(t)}{dt} = P(t)\, Q$.
- With initial condition $P(0) = I$, we have $P(t) = e^{Qt}$, where
  $e^{Qt} := I + Qt + \frac{Q^2 t^2}{2!} + \frac{Q^3 t^3}{3!} + \cdots$
- The infinitesimal generator (transition rate matrix) $Q$ can generate the transition probability matrix $P(t)$.
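
The series definition can be used directly to compute $P(t)$ numerically. A NumPy sketch (the truncation at 60 terms and the 2-state generator are illustrative assumptions; production code would use a dedicated matrix-exponential routine):

```python
# Sketch: P(t) = e^{Qt} via the truncated series I + Qt + (Qt)^2/2! + ...
import numpy as np

def expm_series(Q, t, terms=60):
    n = Q.shape[0]
    term = np.eye(n)
    total = np.eye(n)
    for k in range(1, terms):
        term = term @ (Q * t) / k   # (Qt)^k / k!, built incrementally
        total = total + term
    return total

# Hypothetical 2-state generator: rate 1.0 out of state 0, 2.0 out of state 1.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
Pt = expm_series(Q, 0.5)
```

Because $Qe = 0$, every Taylor term beyond $I$ has zero row sums, so $P(t)$ is stochastic (rows sum to 1).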

State probability
- $\pi(t)$ is the state probability (row) vector at time $t$. We have $\pi(t) = \pi(0)\, P(t) = \pi(0)\, e^{Qt}$.
- Derivative operation: $\frac{d\pi(t)}{dt} = \pi(0)\, e^{Qt}\, Q = \pi(t)\, Q$.
- Decomposed into elements: $\frac{d\pi_i(t)}{dt} = \pi_i(t)\, q_{ii} + \sum_{j \ne i} \pi_j(t)\, q_{ji}$.

Steady-state probability
- For an ergodic Markov chain, $\pi_j = \lim_{t\to\infty} \pi_j(t) = \lim_{t\to\infty} p_{ij}(t)$ exists.
- Balance equation: $\frac{d\pi_i(t)}{dt} = \pi_i(t)\, q_{ii} + \sum_{j \ne i} \pi_j(t)\, q_{ji} = 0$.
- In matrix form: $\pi Q = 0$, $\pi e = 1$ (note $Q e = 0$).
- Compare this with the discrete case: $\pi = \pi P$, $\pi e = 1$.
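
As in the discrete case, $\pi Q = 0$, $\pi e = 1$ can be solved by replacing one redundant equation with the normalization row. A NumPy sketch (the 3-state generator is a made-up example):

```python
# Sketch: solve pi Q = 0, pi e = 1 for the steady-state vector.
import numpy as np

def steady_state(Q):
    n = Q.shape[0]
    A = Q.T.copy()             # rows of A @ pi = 0 are (pi Q)_j = 0
    A[-1, :] = 1.0             # replace last equation with sum(pi) = 1
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

# Hypothetical irreducible 3-state generator (rows sum to 0).
Q = np.array([[-1.0, 1.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 4.0, -4.0]])
pi = steady_state(Q)
```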

Birth-death process
- A special Markov chain (continuous time): state $k$ transits only to its neighbors $k-1$ and $k+1$, $k = 1, 2, 3, \ldots$
- A model for population dynamics and the basis of the fundamental queues.
- State space: $S = \{0, 1, 2, \ldots\}$; at state $k$ the population size is $k$.
- Birth rate: $\lambda_k$, $k = 0, 1, 2, \ldots$; death rate: $\mu_k$, $k = 1, 2, 3, \ldots$

Infinitesimal generator
- Elements of $Q$:
  $q_{k,k+1} = \lambda_k$, $k = 0, 1, 2, \ldots$
  $q_{k,k-1} = \mu_k$, $k = 1, 2, 3, \ldots$
  $q_{k,j} = 0$, $|k - j| > 1$
  $q_{k,k} = -\lambda_k - \mu_k$
- On the board: $Q$ is a tri-diagonal matrix.

State transition probability analysis
- In a period $\Delta t$, the transition probabilities of state $k$ are:
  $k \to k+1$: $p_{k,k+1}(\Delta t) = \lambda_k \Delta t + o(\Delta t)$
  $k \to k-1$: $p_{k,k-1}(\Delta t) = \mu_k \Delta t + o(\Delta t)$
  $k \to k$: $p_{k,k}(\Delta t) = [1 - \lambda_k \Delta t - o(\Delta t)][1 - \mu_k \Delta t - o(\Delta t)] = 1 - \lambda_k \Delta t - \mu_k \Delta t + o(\Delta t)$
  $k \to$ others: $p_{k,j}(\Delta t) = o(\Delta t)$, $|k - j| > 1$

State transition equation
- State transition equation:
  $\pi_k(t + \Delta t) = \pi_k(t)\, p_{k,k}(\Delta t) + \pi_{k-1}(t)\, p_{k-1,k}(\Delta t) + \pi_{k+1}(t)\, p_{k+1,k}(\Delta t) + o(\Delta t)$, $k = 1, 2, \ldots$
- The case $k = 0$ is omitted (homework).
- At time $t + \Delta t$, state $k$ can be reached from states $k-1$, $k$, and $k+1$ at time $t$; from states $j$ with $|k - j| > 1$ the contribution is $o(\Delta t)$.

State transition equation
- State transition equation:
  $\pi_k(t + \Delta t) = \pi_k(t)\,[1 - (\lambda_k + \mu_k)\Delta t] + \pi_{k-1}(t)\, \lambda_{k-1} \Delta t + \pi_{k+1}(t)\, \mu_{k+1} \Delta t + o(\Delta t)$
- In derivative form:
  $\frac{d\pi_k(t)}{dt} = -(\lambda_k + \mu_k)\, \pi_k(t) + \lambda_{k-1}\, \pi_{k-1}(t) + \mu_{k+1}\, \pi_{k+1}(t)$, $k = 1, 2, \ldots$
  $\frac{d\pi_0(t)}{dt} = -\lambda_0\, \pi_0(t) + \mu_1\, \pi_1(t)$, $k = 0$
- This is the equation $\frac{d\pi}{dt} = \pi Q$; it describes the transient behavior of the system states: $\pi(t) = \pi(0)\, e^{Qt}$.

Easier way to analyze a birth-death process
- State transition rate diagram:
  $0 \underset{\mu_1}{\overset{\lambda_0}{\rightleftarrows}} 1 \underset{\mu_2}{\overset{\lambda_1}{\rightleftarrows}} 2 \underset{\mu_3}{\overset{\lambda_2}{\rightleftarrows}} \cdots \rightleftarrows k-1 \underset{\mu_k}{\overset{\lambda_{k-1}}{\rightleftarrows}} k \underset{\mu_{k+1}}{\overset{\lambda_k}{\rightleftarrows}} k+1 \cdots$
- The labels are transition rates, not probabilities. (To use probabilities, multiply by $dt$ and add self-loop transitions.)
- The diagram contains the same information as the $Q$ matrix.

State transition rate diagram
- Look at state $k$ in the state transition rate diagram:
- Flow rate into state $k$: $I_k = \lambda_{k-1}\, \pi_{k-1}(t) + \mu_{k+1}\, \pi_{k+1}(t)$
- Flow rate out of state $k$: $O_k = (\lambda_k + \mu_k)\, \pi_k(t)$
- Derivative of the state probability:
  $\frac{d\pi_k(t)}{dt} = I_k - O_k = \lambda_{k-1}\, \pi_{k-1}(t) + \mu_{k+1}\, \pi_{k+1}(t) - (\lambda_k + \mu_k)\, \pi_k(t)$

State transition rate diagram
- Look at a state, or a set of neighboring states:
  change rate of state probability = incoming flow rate - outgoing flow rate.
- Balance equation: in steady state, the change rate of the state probability is 0, so
  incoming flow rate = outgoing flow rate,
  which is solved for the steady-state probabilities.
- This is commonly used to analyze queueing systems and Markov systems.
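
For a birth-death chain, balancing the flow across the cut between states $k$ and $k+1$ gives $\lambda_k \pi_k = \mu_{k+1} \pi_{k+1}$, hence the product form $\pi_k = \pi_0 \prod_{i<k} \lambda_i/\mu_{i+1}$. A sketch on a truncated chain with constant rates (the truncation at N and the rate values are illustrative assumptions):

```python
# Sketch: steady-state probs of a truncated birth-death chain from the
# cut balance lambda_k pi_k = mu_{k+1} pi_{k+1} (M/M/1-like rates).
lam, mu, N = 1.0, 2.0, 10
lambdas = [lam] * N            # lambda_0 .. lambda_{N-1}
mus = [mu] * N                 # mu_1 .. mu_N

weights = [1.0]                # unnormalized pi_k, relative to pi_0
for k in range(N):
    weights.append(weights[-1] * lambdas[k] / mus[k])
total = sum(weights)
pi = [w / total for w in weights]
```

With constant rates the distribution is geometric with ratio $\lambda/\mu$.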

Pure birth process
- Pure birth process: the death rates are $\mu_k = 0$; for simplicity, the birth rates are identical, $\lambda_k = \lambda$.
- The derivative equations of the state probabilities become:
  $\frac{d\pi_k(t)}{dt} = \lambda\, \pi_{k-1}(t) - \lambda\, \pi_k(t)$, $k = 1, 2, \ldots$
  $\frac{d\pi_0(t)}{dt} = -\lambda\, \pi_0(t)$, $k = 0$
- Assuming the initial state is $k = 0$, i.e., $\pi_0(0) = 1$, we have $\pi_0(t) = e^{-\lambda t}$.

Pure birth process
- For state $k = 1$:
  $\frac{d\pi_1(t)}{dt} = \lambda\, e^{-\lambda t} - \lambda\, \pi_1(t)$,
  which gives $\pi_1(t) = \lambda t\, e^{-\lambda t}$.
- Generally, we have
  $\pi_k(t) = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}$, $t \ge 0$, $k = 0, 1, 2, \ldots$
- The same as the Poisson distribution!
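
The closed form can be sanity-checked against the derivative equations numerically ($\lambda = 1.5$ and the finite-difference step are illustrative choices):

```python
# Check that pi_k(t) = (lam t)^k e^{-lam t} / k! satisfies the pure
# birth equations d pi_k/dt = lam*pi_{k-1} - lam*pi_k, using a central
# difference to approximate the derivative.
from math import exp, factorial

lam = 1.5

def pi_k(k, t):
    return (lam * t) ** k * exp(-lam * t) / factorial(k)

def residual(k, t, h=1e-6):
    lhs = (pi_k(k, t + h) - pi_k(k, t - h)) / (2 * h)
    rhs = (lam * pi_k(k - 1, t) if k > 0 else 0.0) - lam * pi_k(k, t)
    return abs(lhs - rhs)
```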

Pure birth process and Poisson process
- A pure birth process is exactly a Poisson process: exponential inter-arrival times with mean $1/\lambda$. For $k$ = number of arrivals during $[0, t]$:
  $P_k = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}$, $k = 0, 1, 2, \ldots$
- Z-transform of the random variable $k$: used to calculate the mean and variance of $k$.
- Laplace transform of the inter-arrival time (shown in class, or as an exercise): used to calculate the mean and variance of the inter-arrival time.

Arrival time of a Poisson process
- $X$ is the time at which exactly $k$ arrivals have occurred: $X = t_1 + t_2 + \cdots + t_k$, where $t_i$ is the inter-arrival time of the $i$-th arrival.
- $X$ is also called a $k$-Erlang distributed r.v.
- The probability density function of $X$: use the convolution property of the Laplace transform,
  $X^*(s) = [A^*(s)]^k$.
- Looking up the table of Laplace transforms:
  $f_X(x) = \frac{\lambda (\lambda x)^{k-1}}{(k-1)!}\, e^{-\lambda x}$
  (arrival rate $\lambda$ at $x$, with $k-1$ arrivals during $(0, x)$).
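
A numerical check of the $k$-Erlang density: it should integrate to 1 and have mean $k/\lambda$ (the parameter values and the integration grid are illustrative; the tail beyond the grid is negligible for these parameters):

```python
# Check f_X(x) = lam*(lam x)^{k-1} e^{-lam x} / (k-1)! by trapezoidal
# integration: total mass ~ 1, mean ~ k/lam.
from math import exp, factorial

lam, k = 2.0, 4

def f(x):
    return lam * (lam * x) ** (k - 1) * exp(-lam * x) / factorial(k - 1)

n, hi = 20000, 20.0            # grid: [0, 20] in steps of 0.001
h = hi / n
xs = [i * h for i in range(n + 1)]
area = h * (sum(f(x) for x in xs) - 0.5 * (f(0) + f(hi)))
mean = h * (sum(x * f(x) for x in xs) - 0.5 * (0 * f(0) + hi * f(hi)))
```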