Continuous Time Markov Chains


Continuous Time Markov Chains: Birth and Death Processes, Transition Probability Function, Kolmogorov Equations, Limiting Probabilities, Uniformization

Markovian Processes

                  Parameter Space (Time)
State Space       Discrete                      Continuous
-----------       --------                      ----------
Discrete          Markov chains (Chapter 4)     Continuous time Markov chains (Chapters 5, 6)
Continuous                                      Brownian motion process (Chapter 10)

Continuous Time Markov Chain

A stochastic process $\{X(t), t \ge 0\}$ is a continuous time Markov chain (CTMC) if for all $s, t \ge 0$ and nonnegative integers $i$, $j$, $x(u)$, $0 \le u < s$,

$$P\{X(s+t) = j \mid X(s) = i,\ X(u) = x(u),\ 0 \le u < s\} = P\{X(s+t) = j \mid X(s) = i\}$$

and if this probability is independent of $s$, then the CTMC has stationary transition probabilities:

$$P_{ij}(t) = P\{X(s+t) = j \mid X(s) = i\} \quad \text{for all } s$$

Alternate Definition

Each time the process enters state $i$:

- The amount of time it spends in state $i$ before making a transition to a different state is exponentially distributed with parameter $v_i$, and
- When it leaves state $i$, it next enters state $j$ with probability $P_{ij}$, where $P_{ii} = 0$ and $\sum_j P_{ij} = 1$.

Let $q_{ij} = v_i P_{ij}$; then $v_i = \sum_j q_{ij}$, and

$$\lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} = v_i \qquad \text{and} \qquad \lim_{h \to 0} \frac{P_{ij}(h)}{h} = q_{ij}$$
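This alternate definition translates directly into a simulation recipe: hold in state $i$ for an Exponential($v_i$) time, then jump according to row $i$ of the embedded chain $P$. A minimal Python sketch (NumPy assumed; the two-state rates at the bottom are illustrative, not from the slides):

```python
import numpy as np

def simulate_ctmc(v, P, x0, t_end, seed=None):
    """Simulate a CTMC path: stay in state x for an Exponential(v[x])
    holding time, then jump using row x of the embedded chain P."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        t += rng.exponential(1.0 / v[x])    # exponential holding time, rate v[x]
        if t >= t_end:
            break
        x = rng.choice(len(v), p=P[x])      # next state ~ row x of P (P[x][x] = 0)
        path.append((t, x))
    return path

# Illustrative two-state chain: v_i and P_ij are made-up numbers
v = np.array([1.0, 2.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(simulate_ctmc(v, P, x0=0, t_end=5.0, seed=42))
```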

Birth and Death Processes

If a CTMC has states $\{0, 1, \ldots\}$ and transitions from state $n$ may go only to either state $n-1$ or state $n+1$, it is called a birth and death process. The birth (death) rate in state $n$ is $\lambda_n$ ($\mu_n$), so

$$v_0 = \lambda_0, \qquad v_i = \lambda_i + \mu_i, \; i > 0$$

$$P_{01} = 1, \qquad P_{i,i+1} = \frac{\lambda_i}{\lambda_i + \mu_i}, \qquad P_{i,i-1} = \frac{\mu_i}{\lambda_i + \mu_i}, \; i > 0$$

[Transition diagram: states 0, 1, 2, ..., n-1, n, n+1, with birth rates $\lambda_0, \lambda_1, \ldots, \lambda_{n-1}, \lambda_n$ pointing right and death rates $\mu_1, \mu_2, \ldots, \mu_n, \mu_{n+1}$ pointing left.]
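As a concrete handle on these rates, here is a hedged sketch that assembles the rate matrix of a birth-death chain truncated to finitely many states (the truncation and the constant rates are assumptions for computation, not part of the slide):

```python
import numpy as np

def birth_death_generator(lam, mu):
    """Rate matrix R for states 0..N: lam[n] is the birth rate out of
    state n (n = 0..N-1), mu[n] is the death rate out of state n (n = 1..N)."""
    N = len(lam)                              # N birth rates -> states 0..N
    R = np.zeros((N + 1, N + 1))
    for n in range(N):
        R[n, n + 1] = lam[n]                  # birth: n -> n+1 at rate lambda_n
        R[n + 1, n] = mu[n + 1]               # death: n+1 -> n at rate mu_{n+1}
    np.fill_diagonal(R, -R.sum(axis=1))       # r_ii = -v_i, so rows sum to 0
    return R

# Illustrative constant rates: lambda_n = 1, mu_n = 2, truncated at N = 4
R = birth_death_generator(lam=[1.0] * 4, mu=[0.0, 2.0, 2.0, 2.0, 2.0])
print(R)
```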

Chapman-Kolmogorov Equations

In order to get from state $i$ at time 0 to state $j$ at time $t + s$, the process must be in some state $k$ at time $t$:

$$P_{ij}(t+s) = \sum_{k=0}^{\infty} P_{ik}(t) P_{kj}(s)$$

From these can be derived two sets of differential equations:

Backward: $\displaystyle P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t)$

Forward: $\displaystyle P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t)$
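A quick numerical sanity check of the Chapman-Kolmogorov identity, using the matrix-exponential form $P(t) = e^{Rt}$ that appears at the end of these slides (the two-state generator is illustrative):

```python
import numpy as np
from scipy.linalg import expm

R = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])                 # illustrative generator
t, s = 0.7, 1.3
lhs = expm(R * (t + s))                      # P(t+s)
rhs = expm(R * t) @ expm(R * s)              # sum_k P_ik(t) P_kj(s)
print(np.allclose(lhs, rhs))                 # -> True
```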

Limiting Probabilities

If

- all states of the CTMC communicate (for each pair $i$, $j$, starting in state $i$ there is a positive probability of ever being in state $j$), and
- the chain is positive recurrent (starting in any state, the expected time to return to that state is finite),

then limiting probabilities exist:

$$P_j = \lim_{t \to \infty} P_{ij}(t)$$

(and when the limiting probabilities exist, the chain is called ergodic). Can we find them by solving something like $\pi = \pi P$, as for discrete time Markov chains?

Infinitesimal Generator (Rate) Matrix

Let $R$ be a matrix with elements

$$r_{ij} = \begin{cases} q_{ij}, & \text{if } i \ne j \\ -v_i, & \text{if } i = j \end{cases}$$

(the rows of $R$ sum to 0). In the forward equations, in steady state:

$$\lim_{t \to \infty} P'_{ij}(t) = \lim_{t \to \infty} \Big[ \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t) \Big]$$

$$0 = \sum_{k \ne j} q_{kj} P_k - v_j P_j$$

These can be written in matrix form as $PR = 0$, along with $\sum_j P_j = 1$, and solved for the limiting probabilities. What do you get if you do the same with the backward equations?
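A minimal sketch of solving $PR = 0$ with $\sum_j P_j = 1$: since the rows of $R$ sum to zero, one equation is redundant and can be replaced by the normalization (the two-state rates are illustrative):

```python
import numpy as np

def limiting_probs(R):
    """Solve P R = 0 with sum_j P_j = 1: transpose to R^T P^T = 0,
    then replace one redundant equation with the normalization row."""
    n = R.shape[0]
    A = R.T.copy()
    A[-1, :] = 1.0                  # normalization: probabilities sum to 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two-state example with rates q01 = 1, q10 = 2 (illustrative)
R = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(limiting_probs(R))            # -> [2/3, 1/3]
```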

Balance Equations

The $PR = 0$ equations can also be interpreted as balancing:

$$v_j P_j = \sum_{k \ne j} q_{kj} P_k$$

rate at which process leaves $j$ = rate at which process enters $j$.

For a birth-death process, they are equivalent to level-crossing equations:

$$\lambda_n P_n = \mu_{n+1} P_{n+1}$$

rate of crossing from $n$ to $n+1$ = rate of crossing from $n+1$ to $n$, so

$$P_n = \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} P_0$$

and a steady state exists if

$$\sum_{n=1}^{\infty} \frac{\lambda_0 \lambda_1 \cdots \lambda_{n-1}}{\mu_1 \mu_2 \cdots \mu_n} < \infty$$
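For example, with constant rates $\lambda_n = \lambda$ and $\mu_n = \mu$ (the M/M/1 queue, a standard worked case not on the slide), the product telescopes to a geometric distribution, and the series converges exactly when $\lambda < \mu$:

$$P_n = \left(\frac{\lambda}{\mu}\right)^n P_0, \qquad \sum_{n=0}^{\infty}\left(\frac{\lambda}{\mu}\right)^n = \frac{1}{1 - \lambda/\mu} \;\; (\lambda < \mu), \qquad P_n = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^n.$$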

Time Reversibility

A CTMC is time-reversible if and only if $P_i q_{ij} = P_j q_{ji}$ whenever $i \ne j$. There are two important results:

1. An ergodic birth and death process is time reversible.
2. If for some set of numbers $\{P_i\}$, $\sum_i P_i = 1$ and $P_i q_{ij} = P_j q_{ji}$ whenever $i \ne j$, then the CTMC is time-reversible and $P_i$ is the limiting probability of being in state $i$.

This can be a way of finding the limiting probabilities.
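Result 2 suggests a mechanical check: compute the probability flows $P_i q_{ij}$ and test whether the flow matrix is symmetric. A hedged sketch (the generator and probabilities reuse the illustrative two-state chain from above):

```python
import numpy as np

def is_time_reversible(P, R, tol=1e-9):
    """Detailed balance check: P_i q_ij == P_j q_ji for all i != j,
    where the off-diagonal entries of R are the rates q_ij."""
    Q = R - np.diag(np.diag(R))       # keep only the q_ij, i != j
    flow = P[:, None] * Q             # flow[i, j] = P_i q_ij
    return np.allclose(flow, flow.T, atol=tol)

R = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
P = np.array([2/3, 1/3])              # its limiting probabilities
print(is_time_reversible(P, R))       # True: a birth-death chain is reversible
```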

Uniformization

Before, we assumed that $P_{ii} = 0$, i.e., when the process leaves state $i$ it always goes to a different state. Now, let $v$ be any number such that $v_i \le v$ for all $i$. Assume that all transitions occur at rate $v$, but that in state $i$ only the fraction $v_i/v$ of them are real ones that lead to a different state; the rest are fictitious transitions in which the process stays in state $i$. Using this fictitious rate, the time the process spends in state $i$ is exponential with rate $v$. When a transition occurs, it goes to state $j$ with probability

$$P^*_{ij} = \begin{cases} 1 - \dfrac{v_i}{v}, & j = i \\[4pt] \dfrac{v_i}{v}\, P_{ij}, & j \ne i \end{cases}$$

Uniformization (2)

In the uniformized process, the number of transitions up to time $t$ is a Poisson process $N(t)$ with rate $v$. Then we can compute the transition probabilities by conditioning on $N(t)$:

$$\begin{aligned}
P_{ij}(t) = P\{X(t) = j \mid X(0) = i\}
&= \sum_{n=0}^{\infty} P\{X(t) = j \mid X(0) = i,\ N(t) = n\}\, P\{N(t) = n \mid X(0) = i\} \\
&= \sum_{n=0}^{\infty} P\{X(t) = j \mid X(0) = i,\ N(t) = n\}\, e^{-vt} \frac{(vt)^n}{n!} \\
&= \sum_{n=0}^{\infty} P^{*n}_{ij}\, e^{-vt} \frac{(vt)^n}{n!}
\end{aligned}$$
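A sketch of that sum in Python, truncating the Poisson series after a fixed number of terms (the truncation length and the two-state generator are assumptions; for large $vt$ the leading weight $e^{-vt}$ can underflow, so this is illustrative rather than production code):

```python
import numpy as np

def uniformized_Pt(R, t, n_terms=200):
    """Transition matrix P(t) by uniformization:
    P(t) = sum_{n>=0} (P*)^n e^{-vt} (vt)^n / n!, with P* = I + R/v."""
    v = np.max(-np.diag(R))               # any v >= max_i v_i works
    Pstar = np.eye(R.shape[0]) + R / v    # uniformized jump chain P*
    out = np.zeros_like(R)
    term = np.eye(R.shape[0])             # (P*)^n, starting at n = 0
    weight = np.exp(-v * t)               # e^{-vt} (vt)^n / n!, n = 0
    for n in range(n_terms):
        out += weight * term
        term = term @ Pstar               # advance (P*)^n -> (P*)^{n+1}
        weight *= v * t / (n + 1)         # advance the Poisson weight
    return out

R = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])              # same illustrative generator
print(uniformized_Pt(R, t=1.0))
```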

More on the Rate Matrix

Can write the backward differential equations as $P'(t) = R P(t)$, and their solution is

$$P(t) = e^{Rt} P(0) = e^{Rt}, \quad \text{since } P(0) = I,$$

where

$$e^{Rt} = \sum_{n=0}^{\infty} \frac{R^n t^n}{n!},$$

but this computation is not very efficient. We can also approximate:

$$e^{Rt} = \lim_{n \to \infty} \left( I + \frac{t}{n} R \right)^n, \quad \text{so} \quad e^{Rt} \approx \left( I + \frac{t}{n} R \right)^n \text{ for large } n.$$
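A quick comparison of the $(I + \frac{t}{n}R)^n$ approximation against SciPy's matrix exponential as a reference (choosing $n$ a power of 2 lets `matrix_power` use repeated squaring; the generator is the same illustrative one):

```python
import numpy as np
from scipy.linalg import expm

R = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
t, n = 1.0, 2**10                            # n a power of 2: repeated squaring

approx = np.linalg.matrix_power(np.eye(2) + (t / n) * R, n)
print(approx)
print(expm(R * t))                           # reference matrix exponential
```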