
Markov Chains (2)

Outline: Discrete Time Markov Chain (DTMC); Continuous Time Markov Chain (CTMC)

$p_j(n)$ denotes the pmf of the random variable $X_n$:

$$p_j(n) = P(X_n = j).$$

We will only be concerned with homogeneous Markov chains. For such chains, we use the following notation for the n-step transition probabilities:

$$p_{jk}(n) = P(X_{m+n} = k \mid X_m = j).$$

The one-step transition probabilities $p_{jk}(1)$ are simply written as $p_{jk}$, thus:

$$p_{jk} = p_{jk}(1) = P(X_n = k \mid X_{n-1} = j).$$

The pmf of the random variable $X_0$, often called the initial probability vector, is specified as

$$p(0) = [p_0(0), p_1(0), \ldots].$$

The one-step transition probabilities are compactly specified in the form of a transition probability matrix

$$P = [p_{ij}] = \begin{bmatrix} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}.$$

The entries of the matrix $P$ satisfy the following two properties:

$$0 \le p_{ij} \le 1, \quad i, j \in I; \qquad \sum_{j \in I} p_{ij} = 1, \quad i \in I,$$

i.e., each row of $P$ sums to one.

An equivalent description of the one-step transition probabilities can be given by a directed graph called the state transition diagram (state diagram for short) of the Markov chain. A node labeled $i$ of the state diagram represents state $i$ of the Markov chain, and a branch labeled $p_{ij}$ from node $i$ to node $j$ indicates that the conditional probability

$$P[X_n = j \mid X_{n-1} = i] = p_{ij}.$$

Example: Two states. Suppose a person can be in one of two states, "healthy" or "sick". Let $X_n$, $n = 0, 1, 2, \ldots$, denote the state at time $n$. (The encoding of the two states and the one-step transition probabilities were defined in a figure on the original slide.)

The corresponding DTMC can be represented by a state diagram and a transition probability matrix (both shown as a figure on the original slide); a concrete numeric sketch follows.
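Since the actual probabilities appear only in the slide figure, the numbers in the sketch below are hypothetical placeholders; they are chosen only to satisfy the entry and row-sum properties stated above.

```python
import numpy as np

# Hypothetical one-step transition matrix for the healthy/sick chain.
# State 0 = "healthy", state 1 = "sick"; the real values were given
# in the slide figure, so these entries are placeholders.
P = np.array([
    [0.9, 0.1],   # healthy -> healthy, healthy -> sick
    [0.4, 0.6],   # sick -> healthy,    sick -> sick
])

# Sanity checks: entries in [0, 1] and each row sums to one.
assert np.all((P >= 0) & (P <= 1))
assert np.allclose(P.sum(axis=1), 1.0)
```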

We are interested in obtaining an expression for evaluating the n-step transition probabilities from the one-step probabilities. If we let $P(n)$ be the matrix whose $(i, j)$ entry is $p_{ij}(n)$, that is, the matrix of n-step transition probabilities, then we can write

$$P(n) = P \cdot P(n-1) = P^n.$$

Thus the matrix of n-step transition probabilities is obtained by multiplying the matrix of one-step transition probabilities by itself $n-1$ times.

We can obtain the pmf of the random variable $X_n$ from the n-step transition probabilities and the initial probability vector as follows:

$$p(n) = p(0)\, P(n) = p(0)\, P^n.$$

This implies that the step-dependent probability vector of a homogeneous Markov chain is completely determined by the one-step transition probability matrix $P$ and the initial probability vector $p(0)$.
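Continuing the hypothetical two-state matrix above, the relation $p(n) = p(0) P^n$ translates directly into code; the initial vector below (the person starts healthy) is likewise an assumption made for illustration.

```python
# n-step transition matrix P(n) = P^n.
n = 5
P_n = np.linalg.matrix_power(P, n)

# State pmf after n steps: p(n) = p(0) P^n.
p0 = np.array([1.0, 0.0])   # assume the person starts in state 0 ("healthy")
p_n = p0 @ P_n
print(P_n)
print(p_n)                  # distribution over {healthy, sick} at time n
```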

Example: Stock Exchange. (This worked example spanned three slides; its state diagram, matrix, and computations appeared only as figures and are not reproduced here.)

A state $i$ is said to be transient iff there is a positive probability that the process will never return to this state. A state $i$ is said to be recurrent iff, starting from $i$, the process eventually returns to state $i$ with probability one. For a recurrent state $i$ with $p_{ii}(n) > 0$ for some $n \ge 1$, define the period of state $i$, denoted $d_i$, as the greatest common divisor (gcd) of the set of positive integers $n$ such that $p_{ii}(n) > 0$.

A state $i$ has period $k$ if any return to state $i$ must occur in multiples of $k$ time steps. Formally, the period of a state is defined as

$$k = \gcd\{\, n : \Pr(X_n = i \mid X_0 = i) > 0 \,\}.$$

A recurrent state $i$ is said to be aperiodic if its period $d_i = 1$, and periodic if $d_i > 1$. A state $i$ is said to be an absorbing state iff $p_{ii} = 1$.
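A minimal sketch of this gcd definition in code: it uses only the zero/nonzero structure of $P$ (since only positivity of $p_{ii}(n)$ matters) and truncates the search at a finite horizon, an assumption that is adequate for small chains.

```python
from math import gcd

import numpy as np

def period(P, i, max_n=50):
    """gcd of all path lengths n <= max_n with p_ii(n) > 0 (0 if none)."""
    A = (np.asarray(P) > 0).astype(int)   # adjacency structure of the chain
    M = np.eye(len(A), dtype=int)
    d = 0
    for n in range(1, max_n + 1):
        M = (M @ A > 0).astype(int)       # (i, j) entry: is there a length-n path?
        if M[i, i]:
            d = gcd(d, n)                 # gcd(0, n) == n, so d starts correctly
    return d
```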

Two states $i$ and $j$ communicate if directed paths from $i$ to $j$ and vice versa exist. A Markov chain is said to be irreducible if every state can be reached from every other state in a finite number of steps. In other words, for all $i, j \in I$, there is an integer $n \ge 1$ such that $p_{ij}(n) > 0$.
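This reachability condition can be checked mechanically for a finite chain. The sketch below uses the standard fact that a nonnegative $m \times m$ matrix with adjacency structure $A$ is irreducible iff $(I + A)^{m-1}$ has all entries positive; integer powers are fine for small chains.

```python
import numpy as np

def is_irreducible(P):
    """True iff every state can reach every other state."""
    A = (np.asarray(P) > 0).astype(int)   # zero/nonzero structure of P
    m = len(A)
    R = np.linalg.matrix_power(np.eye(m, dtype=int) + A, m - 1)
    return bool(np.all(R > 0))
```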

Continuous Time Markov Chain (CTMC). As in DTMCs, we confine our attention to discrete-state processes. This implies that, although the parameter $t$ has a continuous range of values, the set of values of $X(t)$ is discrete. Recall the definition of a discrete-state continuous-time stochastic process stated in class, which satisfies (for $t_0 < t_1 < \cdots < t_n < t$)

$$P[X(t) = x \mid X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, \ldots, X(t_0) = x_0] = P[X(t) = x \mid X(t_n) = x_n].$$

The behavior of the process is characterized by (1) the initial state probabilities, given by the pmf of $X(t_0)$,

$$P(X(t_0) = k), \quad k = 0, 1, 2, \ldots,$$

and (2) the transition probabilities

$$p_{ij}(v, t) = P(X(t) = j \mid X(v) = i).$$

Continuous Time Markov Chain (CTMC). Let $\pi_j(t)$ denote the pmf of $X(t)$ (the state probabilities at time $t$):

$$\pi_j(t) = P(X(t) = j), \quad j = 0, 1, 2, \ldots; \; t \ge 0.$$

It is clear that $\sum_j \pi_j(t) = 1$ for any $t \ge 0$, since at any given time the process must be in some state.

Continuous Time Markov Chain (CTMC). Using the theorem of total probability, for given $t > v$, we can express the pmf of $X(t)$ in terms of the transition probabilities $p_{ij}(v, t)$ and the pmf of $X(v)$:

$$\pi_j(t) = P(X(t) = j) = \sum_i P(X(t) = j \mid X(v) = i)\, P(X(v) = i) = \sum_i p_{ij}(v, t)\, \pi_i(v).$$

If we let $v = 0$, then

$$\pi_j(t) = \sum_i p_{ij}(0, t)\, \pi_i(0).$$

Continuous Time Markov Chain (CTMC). If we let $\pi(t) = [\pi_0(t), \pi_1(t), \ldots]$, then in matrix form we have

$$\frac{d\pi(t)}{dt} = \pi(t)\, Q,$$

where $Q$ is the infinitesimal generator matrix containing the transition rates $q_{ij}$ from any state $i$ to any other state $j$, $i \ne j$, of the given CTMC. The elements $q_{ii}$ on the main diagonal of $Q$ are defined by

$$q_{ii} = -\sum_{j \ne i} q_{ij}.$$
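For a time-homogeneous CTMC this linear ODE has the closed-form solution $\pi(t) = \pi(0)\, e^{Qt}$, which a matrix exponential evaluates directly. The two-state generator below is a hypothetical placeholder, not taken from the slides.

```python
import numpy as np
from scipy.linalg import expm   # matrix exponential

# Hypothetical 2-state generator: nonnegative off-diagonal rates,
# each row summing to zero.
Q = np.array([
    [-3.0,  3.0],
    [ 1.0, -1.0],
])

pi0 = np.array([1.0, 0.0])      # assumed initial distribution
t = 0.5
pi_t = pi0 @ expm(Q * t)        # transient state probabilities at time t
print(pi_t, pi_t.sum())         # the probabilities still sum to one
```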

Continuous Time Markov Chain (CTMC). If for a given CTMC the steady-state probabilities exist (i.e., the state probabilities become independent of time), we immediately get

$$\lim_{t \to \infty} \frac{d\pi(t)}{dt} = 0.$$

Determining the unconditional state probabilities then reduces to a much simpler system of linear equations,

$$0 = \sum_i \pi_i\, q_{ij}, \quad j \in S,$$

or, in matrix form,

$$0 = \pi Q,$$

which is solved together with the normalization condition $\sum_i \pi_i = 1$.
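A minimal solver sketch: since the balance equations $\pi Q = 0$ are linearly dependent, one of them is replaced by the normalization condition. This assumes an irreducible CTMC, so that the steady-state solution is unique.

```python
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi Q = 0 subject to sum(pi) = 1."""
    Q = np.asarray(Q, dtype=float)
    m = Q.shape[0]
    A = Q.T.copy()        # pi Q = 0  <=>  Q^T pi^T = 0
    A[-1, :] = 1.0        # replace one redundant equation by sum(pi) = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```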

Continuous Time Markov Chain (CTMC). Example: discussion of the steady-state solution of a three-state CTMC in class (states 1, 2, 3; the state diagram and rates were given as a figure).
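Since the rates for this example appear only in the figure, here is the solver above applied to a hypothetical three-state generator of the same shape.

```python
# Hypothetical generator for a three-state CTMC; the actual rates from
# the slide figure are not reproduced here.
Q3 = np.array([
    [-2.0,  1.0,  1.0],
    [ 2.0, -3.0,  1.0],
    [ 1.0,  1.0, -2.0],
])

pi = ctmc_steady_state(Q3)
print(pi)          # steady-state probabilities, summing to one
print(pi @ Q3)     # numerically zero, confirming pi Q = 0
```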