MARKOV PROCESSES Valerio Di Valerio

Stochastic Process
Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t ∈ T. Each X(t) ∈ X is a random variable that satisfies some probability law. X is usually called the state space of the process.
A realization of a stochastic process (sample path) is a specific sequence X(t_0) = x_0, X(t_1) = x_1, …

Stochastic Process
Example: toss a coin an infinite number of times, i.e., t = 1, 2, 3, …; X = {Head, Tail}
Sample path:
  t  X(t)
  1  Head
  2  Head
  3  Tail
  4  Head

A Simple Classification
A stochastic process can have:
- a continuous state space, i.e., X is a continuous set, e.g., X = R
- a discrete state space, i.e., X is a discrete set {x_1, x_2, …, x_n}, with either a finite number of states (n < ∞) or an infinite number of states (n = ∞)
The process can be either continuous time (T = [0, ∞)) or discrete time (T = N).

Examples
Example 1: the process represents the number of people queued at the post office. X = {1, 2, …}, T = [0, ∞): discrete state space, continuous time.
Example 2: the height of a person on his/her birthday. X = R, T = {1, 2, …}: continuous state space, discrete time.

Stochastic Process Dynamics
The process dynamics can be defined using the transition probabilities, which specify the stochastic evolution of the process through its states. For a discrete time process, transition probabilities can be defined as follows:
  P(X_{k+1} = x_{k+1} | X_k = x_k, X_{k-1} = x_{k-1}, …, X_0 = x_0)

Stochastic Process Dynamics
Example: we have a bag with 20 balls; 10 are red and 10 are blue. At any time t = 1, 2, …, n, we draw a ball from the bag, without replacement.
Question: what is P(X_1 = r)?
Question: what is P(X_2 = r | X_1 = r)?
Question: what is P(X_3 = b | X_2 = r, X_1 = r)?
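A quick computation: P(X_1 = r) = 10/20 = 1/2. After one red ball is removed, 9 of the remaining 19 balls are red, so P(X_2 = r | X_1 = r) = 9/19. After two red balls are removed, 10 of the remaining 18 balls are blue, so P(X_3 = b | X_2 = r, X_1 = r) = 10/18 = 5/9. Each answer depends on the whole drawing history, so drawing without replacement does not yield a memoryless process.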

Markov Property
The term Markov property refers to the memoryless property of a stochastic process. For a discrete time process, the Markov property is defined as:
  P(X_{k+1} = x_{k+1} | X_k = x_k, X_{k-1} = x_{k-1}, …, X_0 = x_0) = P(X_{k+1} = x_{k+1} | X_k = x_k)
Definition: a stochastic process that satisfies the Markov property is called a Markov process. If the state space is discrete, we refer to these processes as Markov chains.

Time-homogeneous Markov chains
A Markov chain is time-homogeneous if its transition probabilities are time-independent: P(X_{k+1} = x_{k+1} | X_k = x_k) is the same for all k. If the state space is discrete and finite, transition probabilities are usually represented using a matrix
  P =
    p_{1,1} … p_{1,n}
    …           …
    p_{n,1} … p_{n,n}
and the Markov chain can be easily represented using a graph!

Example: Student Markov Chain

Transient Analysis of a Markov Chain
We can define the state probability as π_j(k) = P(X_k = j). Definition: it is the probability of finding the process in state j at time k. Simple theory allows us to compute the next-step probabilities as
  π_j(k+1) = Σ_{i ∈ X} P(X_{k+1} = j | X_k = i) π_i(k)

Transient Analysis of a Markov Chain
If we consider all states, we can use the vector π(k) = [π_0(k), π_1(k), π_2(k), …]. In matrix notation the recursion becomes
  π(k+1) = π(k) P
and, if we know the initial probabilities π(0), then
  π(k) = π(0) P^k

A simple example
[State diagram: three states 0, 1, 2 with transition probabilities p_{0,1} = 0.4, p_{0,2} = 0.6, p_{1,0} = 0.1, p_{1,1} = 0.9, p_{2,0} = 0.7, p_{2,2} = 0.3]

A simple example
[Same three-state diagram]
Transition probabilities:
  P =
    0    0.4  0.6
    0.1  0.9  0
    0.7  0    0.3
Next-step probabilities:
  π_0(k+1) = 0.1 π_1(k) + 0.7 π_2(k)
  π_1(k+1) = 0.4 π_0(k) + 0.9 π_1(k)
  π_2(k+1) = 0.6 π_0(k) + 0.3 π_2(k)
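A minimal numpy sketch of this recursion (the starting distribution π(0) = [1, 0, 0] is an assumption made for illustration):

    import numpy as np

    # Transition matrix of the example above (rows sum to 1)
    P = np.array([
        [0.0, 0.4, 0.6],
        [0.1, 0.9, 0.0],
        [0.7, 0.0, 0.3],
    ])

    pi = np.array([1.0, 0.0, 0.0])  # assumed initial distribution pi(0)

    # Iterate pi(k+1) = pi(k) P
    for _ in range(50):
        pi = pi @ P

    print(pi)  # converges to [7/41, 28/41, 6/41] ~= [0.17, 0.68, 0.15]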

State Classification
[Classification tree]
A state is either transient or recurrent; a recurrent state is either null recurrent or non-null (positive) recurrent. Moreover, a state is either periodic or aperiodic.

State Classification
[State diagram: states 0 and 1 move to each other and into state 2, each with probability 0.5; state 2 loops to itself with probability 1]
States 0 and 1 are transient states; state 2 is a recurrent state.

State Classification
[State diagram]
Periodic state, with period d = 2.

Simple Exercise
[State diagram: five states 0-4]
Where do you expect to find the Markov chain in the long run?

Analysis of a DTMC
Let us define the stationary probability of a DTMC as π_j = lim_{k→∞} π_j(k). It is the probability of finding the DTMC, in the long run, in a certain state j.
Question 1: does this steady-state probability exist?
Question 2: if it exists, what is the stationary probability that the DTMC is in state j, i.e., how can we compute it?

Some Definitions
A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j.
A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i.
A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C.
A Markov chain is said to be irreducible if its state space is a single communicating class.

… and some useful results
Result 1: if a DTMC has a finite number of states, then at least one state is recurrent.
Result 2: if i is recurrent and i ↔ j, then state j is also recurrent.
Result 3: if X is an irreducible set of states, then its states are either all positive recurrent, all null recurrent, or all transient.
Result 4: if X' is a finite irreducible subset of the state space X, then every state in X' is positive recurrent.

Analysis of a DTMC
Theorem 1: in an irreducible and aperiodic DTMC the limits
  π_j = lim_{k→∞} π_j(k), j ∈ X
exist, and they are independent of the initial distribution π(0).
Theorem 2: in an irreducible and aperiodic DTMC in which all states are transient or null recurrent,
  π_j = lim_{k→∞} π_j(k) = 0, j ∈ X.

Existence of the steady-state distribution
Consider a time-homogeneous Markov chain that is irreducible and aperiodic. Then the following results hold:
- If the Markov chain is positive recurrent, then there exists a unique π such that π_j = lim_{k→∞} π_j(k) for all j, and π = π P.
- If there exists a positive vector π such that π = π P and Σ_{j ∈ X} π_j = 1, then it must be the stationary distribution and the Markov chain is positive recurrent.
- If there exists a positive vector π such that π = π P and Σ_{j ∈ X} π_j = ∞, then a stationary distribution does not exist and lim_{k→∞} π_j(k) = 0 for all j.

Analysis of a DTMC
To sum up: in order to compute the steady-state probabilities, we have to solve the following linear system:
  π = π P
  Σ_j π_j = 1

A simple example
[Same three-state diagram as before]

A simple example
[Same three-state diagram]
This DTMC is:
- irreducible
- aperiodic
- finite (it has a finite number of states)

A simple example
Transition probabilities:
  P =
    0    0.4  0.6
    0.1  0.9  0
    0.7  0    0.3
Linear system:
  π_0 = 0.1 π_1 + 0.7 π_2
  π_1 = 0.4 π_0 + 0.9 π_1
  π_2 = 0.6 π_0 + 0.3 π_2
  π_0 + π_1 + π_2 = 1
Solution: π_0 = 0.17, π_1 = 0.68, π_2 = 0.15
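The same system can be solved numerically; a minimal numpy sketch (the balance equations are linearly dependent, so one of them is replaced by the normalization condition):

    import numpy as np

    P = np.array([
        [0.0, 0.4, 0.6],
        [0.1, 0.9, 0.0],
        [0.7, 0.0, 0.3],
    ])
    n = P.shape[0]

    # pi = pi P  <=>  (P^T - I) pi^T = 0; replace the last balance
    # equation with pi_0 + pi_1 + pi_2 = 1
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0

    pi = np.linalg.solve(A, b)
    print(pi)  # [7/41, 28/41, 6/41] ~= [0.17, 0.68, 0.15]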

Time spent in a state
Can we characterize the time spent in each state by the DTMC? Let's focus on state 1 of the previous example.
[Diagram detail: state 1 with self-loop probability 0.9 and transition to state 0 with probability 0.1]
With p = 0.1 the DTMC will jump to state 0, while with 1 − p = 0.9 it will remain in state 1.
Question: does this remind you of something?

Time spent in a state
The time spent in a state follows a geometric distribution! The geometric distribution models the number of trials up to and including the first success (p = success, 1 − p = failure):
  P(first success at trial k) = (1 − p)^(k−1) p, k = 1, 2, …
Key feature of this distribution: the geometric distribution is memoryless!
  P(T = m + n | T > m) = P(T = n)
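A small simulation sketch (using p = 0.1 from the example above) illustrates that the sojourn time in state 1 is geometric with mean 1/p = 10 slots:

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    p = 0.1  # probability of leaving state 1 in any given slot

    def sojourn_time():
        """Slots spent in state 1, including the slot in which it leaves."""
        t = 1
        while rng.random() >= p:  # remain in state 1 with probability 1 - p
            t += 1
        return t

    samples = [sojourn_time() for _ in range(100_000)]
    print(np.mean(samples))  # close to the geometric mean 1/p = 10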

A more complex example
A discrete time birth-death process.
[State diagram: states 0, 1, 2, …, i−1, i, i+1, …; from each state the chain moves one step to the right with probability 1 − p and one step to the left (staying at 0 from state 0) with probability p]
The DTMC is irreducible and aperiodic.

Birth-death process
[Same birth-death diagram]
Do the steady-state probabilities exist? Intuitively:
- if p < ½, the DTMC will probably diverge, so maybe the states are transient
- if p > ½, the DTMC will probably remain near 0, so state 0 could be positive recurrent, and since the DTMC is irreducible, all states would be positive recurrent
- if p = ½, the DTMC will probably neither diverge nor converge, so maybe the states are null recurrent

Birth-death process solution CHECK OUT THE DASHBOARD!
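A sketch of the solution (assuming, as in the diagram, that p is the probability of moving one step towards 0): balancing the probability flow across the cut between states i and i+1 gives π_i (1 − p) = π_{i+1} p, i.e., π_{i+1} = ρ π_i with ρ = (1 − p)/p, hence π_i = ρ^i π_0. The probabilities can sum to 1 only if ρ < 1, i.e., p > ½; in that case π_0 = 1 − ρ and π_i = (1 − ρ) ρ^i. For p ≤ ½ no stationary distribution exists, in line with the intuition above.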


Continuous Time Markov Chain (CTMC)
Let s be the current time instant and τ an arbitrary time interval. The Markov property for a continuous time MC concerns
  P(X(s + τ) = j | X(s) = i)
No state memory: the next state depends only on the current state, and not on the whole history.
No age memory: the time already spent in the current state is irrelevant to determining the remaining time and the next state.

DTMC versus CTMC
The core of a discrete time MC is the probability matrix P. Remember: it defines the probability of jumping to another state in the next slot.
The core of a continuous time MC is the rate matrix Q. It defines the rate at which the process transits from one state to another. E.g., the MC transits from state 0 to state 1 at a rate of 5 times per second.

Homogeneous CTMC
A CTMC is said to be homogeneous if P(X(s + τ) = j | X(s) = i) is independent of s, i.e., only the time interval τ matters.

Design a CTMC
[State diagram: three states 0, 1, 2 with transition rates q_{0,1} = 5, q_{0,2} = 1, q_{1,0} = 3, q_{1,2} = 8, q_{2,0} = 4, q_{2,1} = 2]
  Q =
    −6    5    1
     3  −11    8
     4    2   −6


Existence of the steady-state distribution
Consider a time-homogeneous Markov chain that is irreducible and aperiodic. Then the following results hold:
- If the Markov chain is positive recurrent, then there exists a unique π such that π Q = 0 and π_j = lim_{t→∞} π_j(t) for all j.
- If there exists a positive vector π such that π Q = 0 and Σ_{j ∈ X} π_j = 1, then it must be the stationary distribution and the Markov chain is positive recurrent.

Analysis of a CTMC
To sum up: in order to compute the steady-state probabilities, we have to solve the following linear system:
  π Q = 0
  Σ_j π_j = 1

Example
[Same CTMC as before]
  Q =
    −6    5    1
     3  −11    8
     4    2   −6
Flow equilibrium equations!
  −6 π_0 + 3 π_1 + 4 π_2 = 0
  5 π_0 − 11 π_1 + 2 π_2 = 0
  π_0 + 8 π_1 − 6 π_2 = 0
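Numerically, the same trick used for the DTMC applies; a minimal numpy sketch (the three flow equations are linearly dependent, so one is replaced by the normalization Σ_j π_j = 1):

    import numpy as np

    # Rate matrix of the example (rows sum to 0, diagonal = -(exit rate))
    Q = np.array([
        [-6.0,   5.0,  1.0],
        [ 3.0, -11.0,  8.0],
        [ 4.0,   2.0, -6.0],
    ])
    n = Q.shape[0]

    # pi Q = 0  <=>  Q^T pi^T = 0; replace the last (redundant) equation
    # with the normalization pi_0 + pi_1 + pi_2 = 1
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0

    pi = np.linalg.solve(A, b)
    print(pi)  # [50/133, 32/133, 51/133] ~= [0.376, 0.241, 0.383]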

Time spent in a state
If v_j is the time spent in state j, for a CTMC it follows an exponential distribution:
  P(v_j < t) = 1 − e^(−Λ_j t)
where Λ_j is the exit rate from state j (Λ_j = −q_{j,j}).
Memoryless property: the exponential distribution is memoryless!

Exercise: model a wireless link using a DTMC
Consider a simple model of a wireless link where, due to channel conditions, either one packet or no packet can be served in each time slot. Let s[k] denote the number of packets served in time slot k and suppose that the s[k] are i.i.d. Bernoulli random variables with mean μ. Further, suppose that packets arrive at this wireless link according to a Bernoulli process with mean λ, i.e., a[k], the number of arrivals in time slot k, is Bernoulli with mean λ, and the a[k] are i.i.d. across time slots. Assume that a[k] and s[k] are independent processes. We specify the following order in which events occur in each time slot: any packet arrival occurs first in the time slot, followed by any packet departure, i.e., a packet that arrives in a time slot can be served in the same time slot. Packets that are not served in a time slot are queued in a buffer for service in a future time slot. Compute the steady-state distribution, if it exists.
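A simulation sketch of this exercise (the values λ = 0.3 and μ = 0.5 are arbitrary choices for illustration, with λ < μ; the geometric form used in the comparison is what a birth-death cut argument suggests for this queue):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, mu = 0.3, 0.5  # assumed example values, with lam < mu
    slots = 1_000_000

    q = 0  # queue length at the end of the current slot
    counts = {}
    for _ in range(slots):
        q += rng.random() < lam            # arrival happens first ...
        if q > 0 and rng.random() < mu:    # ... then at most one departure
            q -= 1
        counts[q] = counts.get(q, 0) + 1

    # Compare the empirical distribution with pi_i = (1 - rho) * rho**i,
    # where rho = lam * (1 - mu) / (mu * (1 - lam))
    rho = lam * (1 - mu) / (mu * (1 - lam))
    for i in range(5):
        print(i, counts.get(i, 0) / slots, (1 - rho) * rho ** i)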