Stochastic process: X, a family of random variables indexed by t

Stochastic process
X: a family of random variables indexed by t.
X = {X(t), t ≥ 0} is a continuous-time stochastic process.
X = {X(t), t = 0, 1, …} is a discrete-time stochastic process.
X(t) is the state at time t, and is itself a random variable.
If X(t) is discrete/continuous-valued, the process is called a discrete/continuous-state stochastic process.

Stochastic process, examples
The number of passengers at a bus station with respect to (w.r.t.) time: a continuous-time, discrete-state stochastic process.
The temperature of a room w.r.t. time: a continuous-time, continuous-state stochastic process.

Stochastic process, example: random walk
S_n = X_1 + X_2 + … + X_n, n = 1, 2, …
where the X_i are independent, identically distributed (i.i.d.) random variables.
E.g., X_i = 1 with p = 0.5; X_i = −1 with p = 0.5: a discrete-time random walk.
Question: how does the random walk behave if the distribution of X_i differs from the above?
Brownian motion is a continuous-time random walk: B(t) − B(s) obeys a normal distribution N(0, t − s).
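A minimal simulation sketch of the ±1 random walk above, illustrating the question about a biased step distribution: with p_up ≠ 0.5 the walk drifts, since E[S_n] = n(2·p_up − 1). The function name and parameters are illustrative, not from the slides.

```python
import random

def random_walk(n_steps, p_up=0.5, seed=0):
    """Simulate S_n = X_1 + ... + X_n with X_i = +1 (prob p_up) or -1."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n_steps):
        s += 1 if rng.random() < p_up else -1
        path.append(s)
    return path

# With p_up != 0.5 the walk is no longer symmetric: it drifts at
# average speed 2*p_up - 1 per step instead of hovering around 0.
path = random_walk(10000, p_up=0.9, seed=1)
```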

Characteristics of a stochastic process
Relationship of the random variables X(t_i), i = 1, 2, …
Joint distribution function of X = {X(t_1), X(t_2), …, X(t_n)}:
F_X(x; t) = P{X(t_1) ≤ x_1, X(t_2) ≤ x_2, …, X(t_n) ≤ x_n}, for all n.
Very complicated in general, but it can be simplified for most commonly used stochastic processes.

Stationary process and independent process
Stationary process: X(t) is stationary if F_X(x; t) is invariant to any shift in time:
F_X(x; t) = F_X(x; t + τ).
Independent process: the simplest stochastic process (white noise); the X(t) are mutually independent:
F_X(x; t) = P(X(t_1) ≤ x_1) · P(X(t_2) ≤ x_2) · … · P(X(t_n) ≤ x_n).

Markov process
Markovian property (memoryless):
P[X(t_{n+1}) ≤ x_{n+1} | X(t_n) = x_n, X(t_{n−1}) = x_{n−1}, …, X(t_1) = x_1]
= P[X(t_{n+1}) ≤ x_{n+1} | X(t_n) = x_n].
The future state depends only on the current state, independent of the historical states; this simplifies the stochastic process.
The sojourn time in a state is exponentially/geometrically distributed for a continuous-/discrete-time Markov process.
Categories: discrete/continuous-time, discrete/continuous-state Markov processes.

Point process
N(t) is a special case of a stochastic process:
N(t) is integer-valued, N(t) = 0, 1, 2, …; t ≥ 0;
N(t) is non-decreasing w.r.t. time t.
(Figure: a sample path of N(t), a staircase function of t.)

Renewal process
A renewal process N(t) is a special case of a point process: the holding (interarrival) times T_n are independent and identically distributed with a general distribution
F(x) = P[T_n ≤ x], n = 1, 2, …
where T_n is the holding time during which N(t) = n.

Poisson process
A Poisson (arrival) process N(t) is the special case of a renewal process in which the holding time (interarrival time) obeys the exponential distribution with parameter λ.
N(t): the number of arrivals in [0, t]. N(t) has a Poisson distribution with parameter λt:
P(N(t) = k) = ((λt)^k / k!) e^{−λt}, k = 0, 1, 2, …
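A quick sanity-check sketch of the definition above: generate N(t) by accumulating Exp(λ) interarrival times and verify empirically that both the mean and the variance of N(t) come out near λt. Function name, seed, and parameters are illustrative.

```python
import random

def poisson_count(lam, t, rng):
    """Count arrivals in [0, t] by accumulating Exp(lam) interarrival times."""
    n, clock = 0, 0.0
    while True:
        clock += rng.expovariate(lam)  # next exponential interarrival time
        if clock > t:
            return n
        n += 1

rng = random.Random(42)
lam, t = 2.0, 5.0
samples = [poisson_count(lam, t, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)                          # theory: lam*t = 10
var = sum((x - mean) ** 2 for x in samples) / len(samples)  # theory: lam*t = 10
```

Equal mean and variance (both λt) is the Poisson signature noted on the next slide via the coefficient of variation.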

Poisson process
Mean, variance, coefficient of variation:
E(N(t)) = λt, σ²(N(t)) = λt, c²_{N(t)} = 1/(λt).
T: the interarrival time, the interval between two successive arrivals, is a random variable. T is exponentially distributed with parameter λ:
E(T) = 1/λ, σ²(T) = 1/λ², c²_T = 1.
T has the memoryless property.

Memoryless property, interpretation
P[T > s + t | T > s] = P[T > t]: given that no arrival has occurred in the first s units of time, the remaining waiting time has the same exponential distribution as T itself.

More on Poisson process
An arrival is equally likely to happen in each small interval: a purely random process.
The Poisson process is extremely useful:
memoryless property, exponential interarrival times;
an excellent model for many practical processes, e.g., the arrival process of telephone calls;
many physical phenomena behave like a Poisson process, e.g., X-ray emission;
job sizes in computers obey an exponential distribution, while the arrival process does not affect the response time too much;
the limiting theorem.

Limiting theorem
Limiting theorem (Palm '43, Khinchin '60): merge n (n ≫ 1) i.i.d. arrival processes, each a renewal process with an arbitrary interarrival distribution and arrival rate λ/n; then the aggregated arrival process approaches a Poisson process with rate λ.
The Poisson process is therefore a widely used model.
Exceptions exist, e.g., Internet traffic (self-similar, bursty, heavy-tailed) and transportation traffic (batch arrivals).
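A rough simulation sketch of the Palm-Khinchin theorem: superpose 50 renewal streams whose interarrivals are Uniform (squared coefficient of variation 1/3, clearly non-exponential) and check that the merged gaps look exponential, i.e., their squared coefficient of variation is close to 1. The Uniform choice, stream count, and random starting offsets are all illustrative assumptions.

```python
import random

def merged_interarrivals(n_streams, rate_total, horizon, rng):
    """Superpose n i.i.d. renewal streams, each of rate rate_total/n with
    Uniform(0, 2*mean_gap) interarrivals, and return the merged gaps."""
    mean_gap = n_streams / rate_total       # per-stream mean interarrival
    arrivals = []
    for _ in range(n_streams):
        t = rng.uniform(0, 2 * mean_gap)    # random offset to de-synchronise
        while t < horizon:
            arrivals.append(t)
            t += rng.uniform(0, 2 * mean_gap)
    arrivals.sort()
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

rng = random.Random(7)
gaps = merged_interarrivals(50, rate_total=1.0, horizon=20000, rng=rng)
m = sum(gaps) / len(gaps)
v = sum((g - m) ** 2 for g in gaps) / len(gaps)
cv2 = v / m ** 2    # ~1 if the merged process is Poisson-like
```

A single Uniform stream would give cv2 ≈ 1/3; the superposition pushes it toward the exponential value 1.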

Markov modulated Poisson process (MMPP)
The arrival rate of a Poisson process is modulated by a state r on an upper level; the state r transits according to a Markov process.
E.g., r(t) obeys a Markov process with 2 states, with average sojourn times 1/r_1 and 1/r_2, respectively; in state 1 or 2, the Poisson arrival rate is λ_1 or λ_2.
MMPP is a stochastic process with a positive correlation coefficient.
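A simulation sketch of the 2-state MMPP described above. Because the exponential interarrival time is memoryless, restarting the arrival clock at each state switch is exact. The long-run arrival rate should be π_1λ_1 + π_2λ_2, where π_i ∝ 1/r_i; all numeric parameters are illustrative.

```python
import random

def mmpp2_arrival_count(lam, r, horizon, rng):
    """2-state MMPP: in state s arrivals are Poisson(lam[s]); the state
    itself holds for an Exp(r[s]) sojourn before switching."""
    t, state, count = 0.0, 0, 0
    while t < horizon:
        end = min(t + rng.expovariate(r[state]), horizon)  # sojourn end
        clock = t + rng.expovariate(lam[state])            # next arrival
        while clock < end:
            count += 1
            clock += rng.expovariate(lam[state])
        t = end
        state = 1 - state
    return count

rng = random.Random(3)
lam, r = (4.0, 1.0), (0.5, 0.5)   # equal mean sojourns -> pi = (0.5, 0.5)
horizon = 20000
rate = mmpp2_arrival_count(lam, r, horizon, rng) / horizon
# expected long-run rate: 0.5*4.0 + 0.5*1.0 = 2.5
```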

Markovian arrival process (MAP)
A MAP is defined by two matrices D_0 and D_1:
elements of D_0 represent hidden phase transitions;
elements of D_1 represent observable transitions; each transition of D_1 represents an arrival event.
Properties: (D_0)_{ii} < 0; (D_0)_{ij} ≥ 0 for i ≠ j; (D_1)_{ij} ≥ 0.
D = D_0 + D_1 is an infinitesimal generator: De = 0.
Solve ωD = 0, ωe = 1; the equivalent arrival rate is λ = ωD_1 e.
The MAP is a very general arrival process (similar to the PH distribution), covering the Poisson process, MMPP, and most point processes.
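The recipe "solve ωD = 0, ωe = 1, then λ = ωD_1e" can be sketched directly with a linear solve, replacing one redundant balance equation by the normalization row. The sanity checks below (a Poisson process as a 1-phase MAP, and an MMPP written as a MAP with D_1 = diag(λ_i)) use illustrative numbers.

```python
import numpy as np

def map_arrival_rate(D0, D1):
    """Equivalent arrival rate of a MAP: solve w(D0+D1)=0, w e=1,
    then return lambda = w D1 e."""
    D = D0 + D1                                  # generator of the phases
    n = D.shape[0]
    A = np.vstack([D.T[:-1], np.ones(n)])        # drop one balance row,
    b = np.zeros(n)                              # append normalisation
    b[-1] = 1.0
    w = np.linalg.solve(A, b)
    return float(w @ D1 @ np.ones(n))

# 1-phase MAP with D0 = [-lam], D1 = [lam] is a Poisson(lam) process
rate_poisson = map_arrival_rate(np.array([[-3.0]]), np.array([[3.0]]))

# MMPP as a MAP: phase switch rates 0.5, arrival rates (4, 1)
D0 = np.array([[-4.5, 0.5],
               [0.5, -1.5]])
D1 = np.diag([4.0, 1.0])
rate_mmpp = map_arrival_rate(D0, D1)   # pi = (0.5, 0.5) -> 2.5
```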

Markovian arrival process
Define the system state of the MAP as S(t) := (N(t), J(t)), where N(t) is the number of arrivals up to time t and J(t) is the phase state of the MAP at time t. Then S(t) is a Markov process whose rate matrix is block upper-bidiagonal, with D_0 blocks on the diagonal and D_1 blocks on the superdiagonal.
If we use a MAP to represent a Poisson arrival process with rate λ, then D_0 = −λ, D_1 = λ.

Relation map among some typical random processes
(Figure: a hierarchy relating the semi-Markov process, Markov process, random walk, birth-death process, pure birth process, Poisson process, Markov arrival process, and renewal process.)

Part I: discrete time Markov chain (DTMC)
A discrete-time, discrete-state Markov process. Notation: X = {X_1, X_2, …, X_n}.
Memoryless:
P[X_n = j | X_{n−1} = i_{n−1}, X_{n−2} = i_{n−2}, …, X_1 = i_1] = P[X_n = j | X_{n−1} = i_{n−1}].
Transition probability: p_{ij,n}.
Homogeneous: p_{ij} := P[X_n = j | X_{n−1} = i], independent of n.

Markov chain, basic elements
State space: S = {1, 2, …, S}, assumed discrete and finite.
Transition probability matrix: P = [p_{ij}], i, j ∈ S.
Some properties:
m-step transition probability: p^{(m)}_{ij} := P[X_{n+m} = j | X_n = i].
Chapman-Kolmogorov equation: p^{(m)}_{ij} = Σ_{k=1}^{S} p^{(m−1)}_{ik} p_{kj}, m = 2, 3, …
In matrix form: P^{(m)} = [p^{(m)}_{ij}] = P^m.
The sojourn time in state i is geometrically distributed with parameter p_{ii}.
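The Chapman-Kolmogorov recursion P^{(m)} = P^{(m−1)}P can be sketched as repeated matrix multiplication, which must agree with the direct matrix power P^m. The 2-state matrix below is an illustrative example, not from the slides.

```python
import numpy as np

# illustrative two-state transition probability matrix (rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def m_step(P, m):
    """m-step transition matrix via repeated C-K steps: P^(m) = P^(m-1) P."""
    Pm = np.eye(P.shape[0])
    for _ in range(m):
        Pm = Pm @ P          # one Chapman-Kolmogorov step
    return Pm

P5 = m_step(P, 5)
```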

Example 1: binary communication channel
Bit states: 0 or 1; a bit gets corrupted on the link with probabilities p_{01} = a, p_{00} = 1 − a, p_{10} = b, p_{11} = 1 − b, where a and b are the corruption probabilities.
Transition probability matrix:
P = | 1−a    a  |
    |  b   1−b  |
(Board work: draw the state transition diagram.)
Limiting probability matrix:
lim_{n→∞} P^{(n)} = 1/(a+b) · | b  a |
                              | b  a |
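The limiting matrix above can be checked numerically: for large n, P^n is already indistinguishable from the closed form with identical rows (b, a)/(a+b). The values of a and b are illustrative.

```python
import numpy as np

a, b = 0.2, 0.3                       # illustrative corruption probabilities
P = np.array([[1 - a, a],
              [b, 1 - b]])
# closed-form limit from the slide: every row equals (b, a)/(a + b)
limit_closed_form = np.array([[b, a],
                              [b, a]]) / (a + b)
P_100 = np.linalg.matrix_power(P, 100)   # converges geometrically (rate 1-a-b)
```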

Example 2: simple queue
At each discrete time t = 1, 2, 3, …:
a customer arrives with probability p (interarrival times are geometrically distributed with parameter p);
a job leaves the server with probability q, q > p (job sizes are geometrically distributed with parameter q).
State: the number of jobs in the system.
(Board work: draw the state transition map.)
The transition probability matrix is similar to that of a discrete-time birth-death process, with a slight difference.

Irreducibility
Irreducible: every state can be reached from every other state; for any i, j there exists an m_0 ≥ 0 such that p^{(m_0)}_{ij} > 0.
States i and j communicate if this condition holds in both directions.
Irreducible: all states communicate with each other.
Absorbing state: p_{jj} = 1. A chain containing an absorbing state i is reducible.

Recurrence
f^{(n)}_{jj}: the probability that the first return to j occurs n steps after leaving j.
f_{jj} := Σ_{n≥1} f^{(n)}_{jj}: the probability that the chain ever returns to j.
j is recurrent if f_{jj} = 1; j is transient if f_{jj} < 1.
If j is recurrent, the mean recurrence time is m_{jj} = Σ_{n≥1} n f^{(n)}_{jj}.
If m_{jj} < ∞, j is positive recurrent; if m_{jj} = ∞, j is null recurrent.
E.g., random walk return to origin: 0, 1/2, 1/4, 1/8, 1/16, …

Periodicity
Period of a state i: the greatest common divisor (gcd) d(i) = gcd{n : p^{(n)}_{ii} > 0}.
Aperiodic Markov chain: a DTMC (discrete-time Markov chain) is aperiodic iff all states have period 1.
Example: the period is 2 for the DTMC with
P = | 0  1 |
    | 1  0 |
The limit of P^n does not exist, but that of P^{2n} does.
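The period-2 example can be seen directly from powers of P: odd powers equal P, even powers equal I, so P^n oscillates and only the even-step subsequence converges.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # the period-2 chain from the slide
P2 = np.linalg.matrix_power(P, 2)     # = I
P3 = np.linalg.matrix_power(P, 3)     # = P again: the powers oscillate
```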

Ergodicity
Ergodic Markov chain: positive recurrent, irreducible, and aperiodic (for a finite chain: irreducible and aperiodic).
Physical meaning: if ergodic, the time average equals the ensemble average.
A process may be ergodic for some moments only; it is fully ergodic if it is ergodic for all moments.
See page 35 of the Gross book.

Limiting distribution
If a finite DTMC is aperiodic and irreducible, then lim_{n→∞} p^{(n)}_{ij} exists and is independent of i.
Three types of probabilities:
Limiting probability: l_j := lim_{n→∞} p^{(n)}_{ij}.
Time-average probability: p_j := lim_{t→∞} (1/t) Σ_{n=1}^{t} 1(X(n) = j).
Stationary probability: π, such that π = πP.

Limiting probability
Limiting probability of i: the limiting probability of entering i; it may not exist (consider the periodic case).
Time-average probability of i: the percentage of time spent in state i.
Stationary probability: given that the initial state is chosen according to π, all future states follow the same distribution.

Example: binary communication channel
Limiting probability: l = (b/(a+b), a/(a+b)); does not exist if a = b = 1.
Time-average probability: p = (b/(a+b), a/(a+b)).
Stationary probability: solve the equations π = πP and πe = 1.
Conclusion: the limiting probability may not exist; when it exists, all three probabilities are identical. π and p are always the same.

Limiting probability
Relations of the 3 probabilities: if the limiting probability exists, then these 3 probabilities match. If the chain is ergodic, the limiting probability exists and the 3 probabilities match.
How to solve: π = πP, πe = 1 (note that Pe = e).
π is the left eigenvector of the matrix P with eigenvalue 1: the balance equations.
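Solving π = πP, πe = 1 can be sketched as a small linear system: take the balance equations (P^T − I)π^T = 0, drop one redundant row, and append the normalization πe = 1. The 3-state matrix is a hypothetical example.

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P with pi e = 1: pi is the left eigenvector of P
    for eigenvalue 1 (the balance equations)."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1],   # balance rows (one dropped)
                   np.ones(n)])              # normalisation pi·e = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# hypothetical ergodic 3-state chain (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
pi = stationary(P)
```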

Example: discrete-time birth-death process
Models the dynamics of a population. Parameters:
increase by 1 with birth probability b;
decrease by 1 with death probability d;
stay in the same state with probability 1 − b − d.
(Board work: write the transition probability matrix P, solve the equations for the stationary probabilities, and understand the balance equations.)

Part II: continuous time Markov chain (CTMC)
A continuous-time, discrete-state Markov process.
Definition (Markovian property): X(t) is a CTMC if for any n and any sequence t_1 < t_2 < … < t_n, we have
P[X(t_n) = j | X(t_{n−1}) = i_{n−1}, X(t_{n−2}) = i_{n−2}, …, X(t_1) = i_1] = P[X(t_n) = j | X(t_{n−1}) = i_{n−1}].

Exponentially distributed sojourn time of a CTMC
The sojourn time in state i, τ_i, i = 1, 2, …, S, is exponentially distributed. Proof using the Markovian property:
P[τ_i > s + t | τ_i > s] = P[τ_i > t]
⇒ P[τ_i > s + t] = P[τ_i > s] · P[τ_i > t].
Writing G(t) := P[τ_i > t], this gives G(s + t) = G(s)G(t), whose only right-continuous solution is
G(t) = e^{−q_i t},
i.e., the sojourn time is exponentially distributed. The constant q_i = −G′(0) is the transition rate out of state i.

Basic elements of a CTMC
State space: S = {1, 2, …, S}, assumed discrete and finite for simplicity.
Transition probability: p_{ij}(s, t) := P[X(t) = j | X(s) = i], s ≤ t.
In matrix form: P(s, t) := [p_{ij}(s, t)].
If homogeneous, denote P(t) := P(s, s + t); P(t) is independent of s (the start time).
Chapman-Kolmogorov equation: P(s, t) = P(s, u) P(u, t), s ≤ u ≤ t.

Infinitesimal generator (transition rate matrix)
Differentiating the C-K equation and letting u → t:
∂P(s, t)/∂t = P(s, t) Q(t), s ≤ t,
where
Q(t) := lim_{Δt→0} (P(t, t + Δt) − I)/Δt
is called the infinitesimal generator of the transition matrix function P(s, t), since P(s, t) = e^{∫_s^t Q(u) du}.
Its elements are
q_{ii}(t) = lim_{Δt→0} (p_{ii}(t, t + Δt) − 1)/Δt ≤ 0,
q_{ij}(t) = lim_{Δt→0} p_{ij}(t, t + Δt)/Δt ≥ 0, i ≠ j,
with Σ_{j=1}^{S} q_{ij}(t) = 0 for all i, i.e., Q(t)e = 0.
This is the relation between the transition rates q_{ij} and the probabilities p_{ij}: −q_{ii} is the rate of flow out of i, and q_{ij} is the rate of flow from i into j.

Infinitesimal generator
If homogeneous, denote Q = Q(t). The derivative of the transition probability matrix is
dP(t)/dt = P(t) Q.
With initial condition P(0) = I, we have
P(t) = e^{Qt} := I + Qt + (Qt)²/2! + (Qt)³/3! + …
The infinitesimal generator (transition rate matrix) Q can generate the transition probability matrix P(t).
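The series P(t) = I + Qt + (Qt)²/2! + … can be sketched directly by truncating it; for small ||Qt|| this is adequate (a production code would use a scaling-and-squaring matrix exponential instead). The 2-state generator is illustrative. The result should be a stochastic matrix and should satisfy the semigroup property P(2t) = P(t)².

```python
import numpy as np

def transition_matrix(Q, t, terms=60):
    """P(t) = e^{Qt} via the truncated power series I + Qt + (Qt)^2/2! + ...
    (fine for modest ||Qt||; not a robust general-purpose expm)."""
    Qt = Q * t
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ Qt / k     # running term (Qt)^k / k!
        P = P + term
    return P

# illustrative 2-state generator: off-diagonals >= 0, rows sum to 0
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
P1 = transition_matrix(Q, 1.0)
```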

State probability
π(t) is the state probability (row) vector at time t. We have
π(t) = π(0) P(t) = π(0) e^{Qt}.
Differentiating:
dπ(t)/dt = π(0) e^{Qt} Q = π(t) Q.
Decomposed into elements:
dπ_i(t)/dt = π_i(t) q_{ii} + Σ_{j≠i} π_j(t) q_{ji}.

Steady state probability
For an ergodic Markov chain, we have π_j := lim_{t→∞} π_j(t) = lim_{t→∞} p_{ij}(t).
Balance equations:
dπ_i(t)/dt = π_i q_{ii} + Σ_{j≠i} π_j q_{ji} = 0.
In matrix form: πQ = 0, πe = 1 (and Qe = 0).
Compare this with the discrete case (π = πP, πe = 1).

Birth-death process
A special Markov chain (continuous time): state k transits only to its neighbors k − 1 and k + 1.
A model for population dynamics and the basis of the fundamental queues.
State space: S = {0, 1, 2, …}; state k is the population size.
Birth rate: λ_k, k = 0, 1, 2, …; death rate: μ_k, k = 1, 2, 3, …

Infinitesimal generator
Elements of Q:
q_{k,k+1} = λ_k, k = 0, 1, 2, …
q_{k,k−1} = μ_k, k = 1, 2, 3, …
q_{k,j} = 0, |k − j| > 1
q_{k,k} = −λ_k − μ_k (with μ_0 = 0)
(Board work: the tri-diagonal matrix Q.)
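The tri-diagonal structure can be sketched by building Q for a finite truncation of the chain; the diagonal is set so every row sums to zero, which automatically gives q_{k,k} = −(λ_k + μ_k) in the interior. Function name and the rates (an M/M/1-like λ = 1, μ = 2 example) are illustrative.

```python
import numpy as np

def birth_death_generator(birth, death):
    """Tri-diagonal generator Q for a birth-death chain truncated to states
    0..K, where birth[k] = lambda_k (k=0..K-1) and death[k] = mu_{k+1}."""
    K = len(birth)
    Q = np.zeros((K + 1, K + 1))
    for k in range(K):
        Q[k, k + 1] = birth[k]     # q_{k,k+1} = lambda_k
        Q[k + 1, k] = death[k]     # q_{k+1,k} = mu_{k+1}
    for k in range(K + 1):
        Q[k, k] = -Q[k].sum()      # diagonal = -(lambda_k + mu_k)
    return Q

Q = birth_death_generator([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
```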

State transition probability analysis
In a period Δt, the transition probabilities of state k are:
k → k+1: p_{k,k+1}(Δt) = λ_k Δt + o(Δt)
k → k−1: p_{k,k−1}(Δt) = μ_k Δt + o(Δt)
k → k: p_{k,k}(Δt) = [1 − λ_k Δt − o(Δt)][1 − μ_k Δt − o(Δt)] = 1 − λ_k Δt − μ_k Δt + o(Δt)
k → others: p_{k,j}(Δt) = o(Δt), |k − j| > 1

State transition equation
π_k(t + Δt) = π_k(t) p_{k,k}(Δt) + π_{k−1}(t) p_{k−1,k}(Δt) + π_{k+1}(t) p_{k+1,k}(Δt) + o(Δt), k = 1, 2, …
The case k = 0 is omitted (homework).
(Figure: transitions into state k at time t + Δt from states k−1, k, and k+1; transitions from states j with |k − j| > 1 contribute only o(Δt).)

State transition equation
π_k(t + Δt) = π_k(t)(1 − λ_k Δt − μ_k Δt) + π_{k−1}(t) λ_{k−1} Δt + π_{k+1}(t) μ_{k+1} Δt + o(Δt).
In derivative form:
dπ_k(t)/dt = −(λ_k + μ_k) π_k(t) + λ_{k−1} π_{k−1}(t) + μ_{k+1} π_{k+1}(t), k = 1, 2, …
dπ_0(t)/dt = −λ_0 π_0(t) + μ_1 π_1(t), k = 0.
This is the equation dπ/dt = πQ; it describes the transient behavior of the system states.

Easier way to analyze a birth-death process: the state transition rate diagram
0 ⇄ 1 ⇄ 2 ⇄ … ⇄ k−1 ⇄ k ⇄ k+1 ⇄ …
with rightward (birth) rates λ_0, λ_1, λ_2, …, λ_{k−1}, λ_k, λ_{k+1}, … and leftward (death) rates μ_1, μ_2, μ_3, …, μ_{k−1}, μ_k, μ_{k+1}, …
These are transition rates, not probabilities; to use probabilities, multiply by dt and add self-loop transitions.
The diagram contains the same information as the Q matrix.

State transition rate diagram
Look at state k in the state transition rate diagram.
Flow rate into state k: I_k = λ_{k−1} π_{k−1}(t) + μ_{k+1} π_{k+1}(t).
Flow rate out of state k: O_k = (λ_k + μ_k) π_k(t).
Derivative of the state probability:
dπ_k(t)/dt = I_k − O_k = λ_{k−1} π_{k−1}(t) + μ_{k+1} π_{k+1}(t) − (λ_k + μ_k) π_k(t).

State transition rate diagram
Look at a state, or a set of neighboring states:
change rate of the state probability = incoming flow rate − outgoing flow rate.
Balance equations: in steady state the change rate of the state probability is 0, so
incoming flow rate = outgoing flow rate,
which we solve for the steady-state probabilities. This is the standard approach for analyzing queueing systems and Markov systems.
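The balance argument can be sketched numerically: cutting the diagram between states k and k+1 gives λ_k π_k = μ_{k+1} π_{k+1}, so the steady-state probabilities follow by recursion and normalization. The truncation level and the M/M/1-like rates (λ = 1, μ = 2, so ρ = 1/2) are illustrative.

```python
def birth_death_stationary(birth, death):
    """Steady state of a finite birth-death chain from the cut balance
    lambda_k pi_k = mu_{k+1} pi_{k+1} between neighbouring states."""
    pi = [1.0]                               # unnormalised pi_0
    for lam_k, mu_next in zip(birth, death):
        pi.append(pi[-1] * lam_k / mu_next)  # pi_{k+1} = pi_k * lam_k/mu_{k+1}
    total = sum(pi)
    return [p / total for p in pi]

# truncated M/M/1-like chain with rho = 1/2: pi_k proportional to rho^k
pi = birth_death_stationary([1.0] * 10, [2.0] * 10)
```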

Pure birth process
A pure birth process has death rates μ_k = 0; take identical birth rates λ_k = λ for simplicity.
The derivative equations of the state probabilities are:
dπ_k(t)/dt = λ π_{k−1}(t) − λ π_k(t), k = 1, 2, …
dπ_0(t)/dt = −λ π_0(t), k = 0.
Assuming the initial state is k = 0, i.e., π_0(0) = 1, we have π_0(t) = e^{−λt}.

Pure birth process
For state k = 1:
dπ_1(t)/dt = λ π_0(t) − λ π_1(t) = λ e^{−λt} − λ π_1(t),
so π_1(t) = λt e^{−λt}.
Generally,
π_k(t) = ((λt)^k / k!) e^{−λt}, t ≥ 0, k = 0, 1, 2, …
the same as the Poisson distribution.

Pure birth process and Poisson process
A pure birth process (with constant rate λ) is exactly a Poisson process: exponential interarrival times with mean 1/λ, and for the number of arrivals k during [0, t],
P_k = ((λt)^k / k!) e^{−λt}, k = 0, 1, 2, …
The z-transform of the random variable k is used to calculate the mean and variance of k; the Laplace transform of the interarrival time (shown in class or as an exercise) is used to calculate the mean and variance of the interarrival time.

Arrival time of a Poisson process
X is the time at which exactly k arrivals have occurred: X = t_1 + t_2 + … + t_k, where t_i is the interarrival time of the i-th arrival.
X is called a k-Erlang distributed random variable.
The probability density function of X, using the convolution property of the Laplace transform,
X*(s) = [A*(s)]^k,
and a table of Laplace transforms, is
f_X(x) = λ ((λx)^{k−1} / (k−1)!) e^{−λx}
(the arrival rate λ times the probability of k − 1 arrivals during (0, x)).
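A simulation sketch of the k-Erlang arrival time: sample X as a sum of k independent Exp(λ) interarrivals and check the Erlang moments, mean k/λ and variance k/λ². Parameters and seed are illustrative.

```python
import random

def erlang_sample(k, lam, rng):
    """Time of the k-th Poisson arrival: a sum of k independent Exp(lam)
    interarrival times (a k-Erlang random variable)."""
    return sum(rng.expovariate(lam) for _ in range(k))

rng = random.Random(11)
k, lam = 4, 2.0
samples = [erlang_sample(k, lam, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)                          # theory: k/lam = 2.0
var = sum((x - mean) ** 2 for x in samples) / len(samples)  # theory: k/lam^2 = 1.0
```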