
Markov Chains

X(t) is a Markov process if, for arbitrary times t_1 < t_2 < ... < t_k < t_{k+1}:

If X(t) is discrete-valued:
P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_1) = x_1] = P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

If X(t) is continuous-valued:
P[a < X(t_{k+1}) ≤ b | X(t_k) = x_k, ..., X(t_1) = x_1] = P[a < X(t_{k+1}) ≤ b | X(t_k) = x_k]

i.e., the future of the process depends only on the present and not on the past.
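A minimal simulation sketch of this property (the two-state chain and its transition probabilities are hypothetical, not from the notes): simulate a long path and check empirically that the distribution of X_{k+1} given X_k does not change when we additionally condition on X_{k-1}.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],    # p_00, p_01
              [0.4, 0.6]])   # p_10, p_11

x = [0]
for _ in range(200_000):                      # simulate a long sample path
    x.append(rng.choice(2, p=P[x[-1]]))
x = np.array(x)

for prev in (0, 1):                           # condition also on X_{k-1} = prev
    mask = (x[1:-1] == 0) & (x[:-2] == prev)  # times with X_k = 0, X_{k-1} = prev
    print(prev, x[2:][mask].mean())           # est. of P(X_{k+1}=1 | X_k=0, X_{k-1}=prev)
```

Both printed estimates come out near p_01 = 0.1: once the present state is known, the extra conditioning on the past changes nothing.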


Markov Chains

Integer-valued Markov processes are called Markov chains. Examples: sum process, counting process, random walk, Poisson process. Markov chains can be discrete-time or continuous-time.

Discrete-Time Markov Chains

Initial PMF: p_j(0) ≜ P(X_0 = j), j = 0, 1, ...

Transition probability matrix (TPM): P = [p_ij], where p_ij = P(X_{n+1} = j | X_n = i).

e.g., binomial counting process: S_n = S_{n-1} + X_n, X_n ~ Bernoulli(p). From each state k the chain moves to k+1 with probability p and stays at k with probability 1-p: 0 → 1 → 2 → ... → k → k+1 → ...
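A quick sketch of this chain in motion (p and the horizon are arbitrary choices):

```python
import numpy as np

# Simulate S_n = S_{n-1} + X_n with X_n ~ Bernoulli(p): at each step the
# chain stays with probability 1-p or moves up with probability p.
rng = np.random.default_rng(1)
p, n = 0.3, 1000
S = np.cumsum(rng.random(n) < p)   # S_k after k steps
print(S[-1], n * p)                # S_n grows like n*p -- no steady state
```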

n-step transition probabilities: p_ij(n) ≜ P(X_n = j | X_0 = i). A path from i at time 0 to k at time 2 must pass through some intermediate state j at time 1; conditioning on that intermediate state gives the Chapman-Kolmogorov equations p_ik(n+m) = Σ_j p_ij(n) p_jk(m), i.e., P(n) = P^n.

State probabilities: the PMF at any time can be obtained from the initial PMF and the TPM: p_j(n) = Σ_i p_i(0) p_ij(n), i.e., p(n) = p(0) P^n with p(n) a row vector.
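In code, p(n) = p(0) P^n is one matrix power (reusing the hypothetical two-state chain from the sketch above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
p0 = np.array([1.0, 0.0])                 # start in state 0
pn = p0 @ np.linalg.matrix_power(P, 50)   # PMF after 50 steps
print(pn)                                 # approaches [0.8, 0.2] for this chain
```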

Steady-state probabilities: in some cases the probabilities p_j(n) approach a fixed value π_j as n → ∞. Not all Markov chains settle into steady state; e.g., the binomial counting process keeps drifting upward.

Classification of States (Discrete-Time Markov Chains)

* State j is accessible from state i if p_ij(n) > 0 for some n ≥ 0.
* States i and j communicate if i is accessible from j and j is accessible from i. This is denoted i ↔ j.
* i ↔ i (reflexivity).
* i ↔ j and j ↔ k ⇒ i ↔ k (transitivity).

Class: states i and j belong to the same class if i ↔ j. If S is the set of states, then for any Markov chain S is partitioned into disjoint communicating classes. If a Markov chain has only one class, it is called irreducible.
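A sketch of how the classes can be computed mechanically from a TPM (the 3-state example chain is hypothetical): i can reach j iff (I + A)^n has a positive (i, j) entry, where A is the adjacency pattern of P and n is the number of states; intersecting reachability with its transpose gives the communication relation.

```python
import numpy as np

def classes(P):
    n = len(P)
    R = np.linalg.matrix_power(np.eye(n) + (P > 0), n) > 0  # reachability
    comm = R & R.T                                          # i <-> j
    seen, out = set(), []
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if comm[i, j]}
            seen |= cls
            out.append(sorted(cls))
    return out

P = np.array([[0.5, 0.5, 0.0],    # states 0 and 1 communicate;
              [0.5, 0.25, 0.25],  # state 2 is absorbing (p_22 = 1)
              [0.0, 0.0, 1.0]])
print(classes(P))                 # [[0, 1], [2]]
```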

Recurrence properties: let f_i ≜ P(X_n ever returns to i | X_0 = i). If f_i = 1, i is termed recurrent. If f_i < 1, i is termed transient. If i is recurrent, X_0 = i implies an infinite number of returns to i. If i is transient, X_0 = i implies a finite number of returns to i.

If i is recurrent and i ∈ Class k, then all j ∈ Class k are recurrent. If i is transient, all j in its class are transient; i.e., recurrence and transience are class properties. The states of an irreducible Markov chain are therefore either all transient or all recurrent. If the number of states is finite, all states cannot be transient ⇒ all states in a finite-state irreducible Markov chain are recurrent.

Periodicity: if for state i, p_ii(n) = 0 except when n is a multiple of d, where d is the largest such integer, then i is said to have period d. Period is also a class property. An irreducible Markov chain is aperiodic if all of its states have period 1.

Example diagrams: a five-state chain whose states split into Class 1 = {1, 2, 3} (transient) and Class 2 = {4, 5} (recurrent); a five-state irreducible Markov chain; and the counting chain 0 → 1 → 2 → ... → k → k+1 → ..., which is not irreducible.

A typical periodic Markov chain (diagram): four states 0-3 linked by transitions with probabilities 1 and 1/2, arranged so that the possible recurrence times for states 0 and 1 are {2, 4, 6, 8, ...} and for states 2 and 3 are {4, 6, 8, ...}; every state has period 2.

Let X_0 = i where i is a recurrent state. Define T_i(k) ≜ the interval between the (k-1)th and kth returns to i. By the law of large numbers, the fraction of time spent in state i over the first k returns, k / [T_i(1) + ... + T_i(k)], converges to 1/E[T_i] ≜ π_i, where π_i is the long-term fraction of time spent in state i.

i positive recurrent: E[T_i] < ∞, π_i > 0.
i null recurrent: E[T_i] = ∞, π_i = 0 (e.g., all states in a random walk with p = 0.5).

i is ergodic if it is positive recurrent and aperiodic. Ergodic Markov chain: an irreducible, aperiodic, positive recurrent MC.
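A sketch checking π_i = 1/E[T_i] by simulation, again on the hypothetical two-state chain used earlier:

```python
import numpy as np

# Estimate the mean recurrence time E[T_0] of state 0 from a long sample
# path and compare 1/E[T_0] with the known pi_0 = 0.8 of this chain.
rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
x, t, returns = 0, 0, []
for _ in range(200_000):
    x = rng.choice(2, p=P[x])
    t += 1
    if x == 0:                   # a return to state 0 completes
        returns.append(t)
        t = 0
print(1 / np.mean(returns))     # approx pi_0 = 0.8
```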

Limiting probabilities: the π_j satisfy the stationarity equations

(A)  π_j = Σ_i π_i p_ij, with Σ_j π_j = 1.

This is because the long-term proportion of time in which j follows i = (long-term proportion of time in i) · P(i → j) = π_i p_ij, and the long-term proportion of time in j = Σ_i (long-term proportion of time in which j follows i) = Σ_i π_i p_ij = π_j.

Theorem: for an irreducible, aperiodic, positive recurrent Markov chain,

lim_{n→∞} p_ij(n) = π_j for all i, j,

where the π_j are the unique non-negative solution of (A). I.e., steady-state probability of j = stationary PMF = long-term fraction of time in j: ergodicity.
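System (A) is linear, so for a finite chain it can be solved directly; a common trick (sketched below) is to replace one redundant balance equation with the normalization constraint:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P, sum(pi) = 1 for a finite TPM P."""
    n = len(P)
    A = np.vstack([(P.T - np.eye(n))[:-1],   # pi (P - I) = 0, one row dropped
                   np.ones(n)])              # sum(pi) = 1
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(stationary(P))   # [0.8, 0.2], matching the simulation estimates above
```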

Continuous-Time Markov Chains

Transition probabilities: P(X(s+t) = j | X(s) = i) = P(X(t) = j | X(0) = i) ≜ p_ij(t), t ≥ 0; i.e., the transition probabilities depend only on t, not on s (time-invariant transition probabilities ⇒ homogeneous).

P(t) = TPM = matrix of p_ij(t) for all i, j. Clearly P(0) = I (identity matrix).

Ex 8.12: Poisson process. Over a short interval δ the process can only transition from j to j+1 or remain in j, because for small δ the probability of 2 or more transitions in the interval is negligible.

State occupancy times: the time T_i spent in state i before the next transition is exponentially distributed, T_i ~ exponential(ν_i), as follows from the memoryless (Markov) property.

Embedded Markov Chains

Consider a continuous-time Markov chain with state occupancy times T_i. The corresponding embedded Markov chain is a discrete-time MC with the same states as the original MC. Each time state i is entered, a T_i ~ exponential(ν_i) is chosen. After T_i has elapsed, a new state j is transitioned to with probability q_ij, which depends on the original MC: q_ij is the probability that the next state is j given that the chain leaves i (q_ii = 0, Σ_j q_ij = 1). This is very useful for generating Markov chains in simulations.
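A sketch of that simulation recipe (the two-state rates below are hypothetical): draw an exponential occupancy time with rate ν_i, then draw the next state from the embedded chain's row q_i.

```python
import numpy as np

def simulate_ctmc(nu, Q, x0, t_end, rng):
    """nu[i]: total rate out of state i; Q[i, j]: jump probability q_ij."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        t += rng.exponential(1.0 / nu[x])    # occupancy time T_x ~ exp(nu_x)
        x = rng.choice(len(nu), p=Q[x])      # next state from embedded chain
        path.append((t, x))
    return path

rng = np.random.default_rng(3)
nu = np.array([1.0, 2.0])        # leave state 0 at rate 1, state 1 at rate 2
Q = np.array([[0.0, 1.0],        # with two states, always jump to the other
              [1.0, 0.0]])
print(simulate_ctmc(nu, Q, 0, 5.0, rng)[:5])
```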

Transition rates: ν_i = total rate at which the chain leaves state i (so E[T_i] = 1/ν_i), and γ_ij ≜ ν_i q_ij = rate at which the chain moves from state i to state j, with ν_i = Σ_{j≠i} γ_ij.

State probabilities: let p_j(t) ≜ P(X(t) = j). Balancing the probability flow into and out of state j over a short interval gives, for each j,

p_j'(t) = -ν_j p_j(t) + Σ_{i≠j} γ_ij p_i(t).

This is a system of Chapman-Kolmogorov equations. They are solved for each p_j(t) using the initial PMF p(0) = [p_0(0) p_1(0) p_2(0) ...]. Note: if we start with p_i(0) = 1 and p_j(0) = 0 for all j ≠ i, then p_j(t) = p_ij(t), so the C-K equations can be used to find the TPM P(t).
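In matrix form the system is p'(t) = p(t) Q, where Q has γ_ij off the diagonal and -ν_i on it, so P(t) is a matrix exponential. A sketch with hypothetical rates:

```python
import numpy as np
from scipy.linalg import expm

# Generator Q: off-diagonal gamma_ij, diagonal -nu_i (rows sum to 0).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
p0 = np.array([1.0, 0.0])
for t in (0.1, 1.0, 10.0):
    print(t, p0 @ expm(Q * t))   # p(t) = p(0) P(t); converges to [2/3, 1/3]
```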

Steady-state probabilities: if p_j(t) → p_j for all j as t → ∞, the system reaches equilibrium (steady state). Then p_j'(t) = 0, which gives the global balance equations (GBE):

ν_j p_j = Σ_{i≠j} γ_ij p_i.

Solve these equations for all j, together with Σ_j p_j = 1, to obtain the p_j's, the equilibrium PMF. The GBE states that, at equilibrium, the rate of probability flow out of j (LHS) = the rate of probability flow into j (RHS).

Example: M/M/1 queue (Poisson arrivals / exponential service times / 1 server), arrival rate λ, service rate µ:

γ_{i,i+1} = λ, i = 0, 1, 2, ... (i customers → i+1 customers)
γ_{i,i-1} = µ, i = 1, 2, 3, ... (i customers → i-1 customers)

State diagram: 0 ⇄ 1 ⇄ 2 ⇄ ... ⇄ j ⇄ j+1 ⇄ ..., with rate λ on every upward transition and µ on every downward transition.
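The notes don't carry the solution through, but the well-known result is p_j = (1-ρ)ρ^j with ρ = λ/µ < 1. A sketch checking it against the GBE solved numerically on a chain truncated at N states (the truncation is an approximation; λ, µ, N are arbitrary):

```python
import numpy as np

lam, mu, N = 1.0, 2.0, 50
Q = np.zeros((N, N))
for i in range(N - 1):
    Q[i, i + 1] = lam                    # arrival: i -> i+1
    Q[i + 1, i] = mu                     # departure: i+1 -> i
np.fill_diagonal(Q, -Q.sum(axis=1))      # diagonal -nu_i, rows sum to 0

A = np.vstack([Q.T[:-1], np.ones(N)])    # p Q = 0 plus sum(p) = 1
b = np.zeros(N); b[-1] = 1.0
p = np.linalg.solve(A, b)

rho = lam / mu
print(p[:4])                             # numeric equilibrium PMF
print([(1 - rho) * rho**j for j in range(4)])   # (1-rho) rho^j
```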

Ex: birth-death processes. States 0, 1, 2, 3, ..., with transitions j → j+1 at rate λ_j (birth rate at state j) and j → j-1 at rate µ_j (death rate at state j):

0 ⇄ 1 ⇄ 2 ⇄ 3 ⇄ ... ⇄ j ⇄ j+1 ⇄ ..., with rates λ_0, λ_1, λ_2, ..., λ_j on the upward transitions and µ_1, µ_2, µ_3, ..., µ_{j+1} on the downward ones.
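For reference, the standard consequence of the GBE for this structure (not spelled out in these notes): balancing probability flow across the cut between states j and j+1 gives λ_j p_j = µ_{j+1} p_{j+1}, and iterating yields the closed form

p_j = p_0 · (λ_0 λ_1 ... λ_{j-1}) / (µ_1 µ_2 ... µ_j),  with p_0 chosen so that Σ_j p_j = 1.

Setting λ_j = λ and µ_j = µ for all j recovers the M/M/1 result p_j = (1-ρ)ρ^j, ρ = λ/µ.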

Theorem: given a CT MC X(t) with associated embedded MC [q_ij] having steady-state PMF π_j, if [q_ij] is irreducible and positive recurrent, then the long-term fraction of time spent by X(t) in state i is

p_i = (π_i / ν_i) / Σ_j (π_j / ν_j),

which is also the unique solution to the GBEs.