
stochnotes110308 Page 1

Kolmogorov forward and backward equations and Poisson process
Monday, November 03, 2008, 11:58 AM

How can we apply the Kolmogorov equations to calculate various statistics of interest? We'll specialize from the Kolmogorov equations, which we derived for the probability transition function.

Now let's consider a different type of statistic that is also of wide interest:

u_i(t) = E[f(X(t)) | X(0) = i] = Σ_j P_ij(t) f(j).

This describes the expected "return" or "value" of the Markov chain at a future time t, given that it starts off in state i.
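For a finite chain this statistic is easy to compute numerically. A minimal sketch (the three-state generator and reward vector below are invented for illustration): writing u_i(t) = E[f(X(t)) | X(0) = i], the vector u(t) solves the backward equation du/dt = A u with u(0) = f, so u(t) = exp(tA) f.

```python
# Sketch: computing u_i(t) = E[ f(X(t)) | X(0) = i ] for a small CTMC,
# using u(t) = exp(tA) f, the solution of the backward equation u' = A u.
import numpy as np
from scipy.linalg import expm

# Hypothetical generator matrix A for a 3-state chain (rows sum to zero,
# off-diagonal entries are jump rates).
A = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
f = np.array([0.0, 1.0, 5.0])   # "reward" f(j) for being in state j
t = 0.7

u = expm(t * A) @ f             # u[i] = E[ f(X(t)) | X(0) = i ]
print(u)
```

Since exp(tA) is a stochastic matrix, each u_i(t) is a convex combination of the rewards f(j).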

For applications with random initial data, the expected value of the Markov chain after a time interval t is

E[f(X(t))] = Σ_i P(X(0) = i) E[f(X(t)) | X(0) = i],

so here one could solve either the forward or the backward equations to get the desired result. Of course, one can in principle solve any statistical problem by solving for the probability transition function with whichever equation one chooses, but this involves M^2 equations, where M is the number of states; in the infinite-state case, it is like solving an equation in two variables. On the other hand, the Kolmogorov forward and backward equations for vectors (probability distributions or expectations) involve only M equations, or in the infinite-state case behave like an equation in one variable.

Poisson (counting) process as fundamental example of a continuous-time Markov chain
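The "M equations instead of M^2" point can be seen concretely. A sketch (with an invented 3-state generator): the forward equation for the distribution row vector, dp/dt = p A, is a system of only M ODEs, and its solution agrees with the transition-function route p(t) = p(0) exp(tA).

```python
# Sketch: the forward equation dp/dt = p(t) A is M coupled ODEs for the
# distribution row vector; solving it directly avoids computing the full
# M x M transition function.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])   # hypothetical generator (rows sum to 0)
p0 = np.array([1.0, 0.0, 0.0])       # start in state 0
t_end = 1.3

sol = solve_ivp(lambda t, p: p @ A, (0.0, t_end), p0, rtol=1e-9, atol=1e-12)
p_ode = sol.y[:, -1]                 # distribution at time t_end, M ODEs
p_exact = p0 @ expm(t_end * A)       # same answer via the M^2 route
print(p_ode, p_exact)
```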

From the definition, we see that this Poisson counting process spends a random time at each value before increasing the value by 1. The amounts of time spent at each value are identically distributed, because the birth parameters are the same at each value, and they are independent because of the strong Markov property, which we will explain a little later.

Before explaining the deeper subtleties, let's do some easy calculations. What is the probability distribution for the Poisson counting process at a future time t? We'll use the Kolmogorov forward equation for evolving the probability distribution; with constant birth rate λ and p_n(t) = P(N(t) = n),

dp_0/dt = -λ p_0(t),
dp_n/dt = λ p_{n-1}(t) - λ p_n(t) for n ≥ 1,

with initial condition p_n(0) = δ_{n0}.

A few tools to keep in mind when working with the infinite systems of linear equations that tend to result from Kolmogorov equations:
- Laplace transforms
- probability generating functions

The probability generating function approach is useful when the equations have a recursive structure. Our equations do have a recursive structure, but it is so simple that just taking a Laplace transform in time already simplifies the problem enough. The Laplace transform in time converts the differential equations into algebraic equations, and the fact that the equations are constant-coefficient (they do not depend on time) means that taking the Laplace transform is straightforward.
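Before doing the transform calculation, one can sanity-check the forward system numerically. A sketch (truncating the infinite system at an invented level N, chosen large enough that the neglected probability mass is negligible): integrating dp_n/dt = λ p_{n-1} - λ p_n recovers the Poisson distribution with mean λt.

```python
# Sketch: integrate a truncated version of the Poisson forward equations
# dp_0/dt = -lam*p_0,  dp_n/dt = lam*p_{n-1} - lam*p_n  (n >= 1),
# and compare the result with the Poisson distribution of mean lam*t.
import numpy as np
from scipy.integrate import solve_ivp
from math import exp, factorial

lam, t_end, N = 2.0, 1.5, 40        # rate, final time, truncation level

def rhs(t, p):
    dp = -lam * p                    # outflow from every state
    dp[1:] += lam * p[:-1]           # inflow from the state below
    return dp

p0 = np.zeros(N); p0[0] = 1.0        # start at N(0) = 0
p = solve_ivp(rhs, (0.0, t_end), p0, rtol=1e-10, atol=1e-12).y[:, -1]

poisson = [exp(-lam * t_end) * (lam * t_end)**n / factorial(n) for n in range(N)]
print(p[:5])
```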

Invert the Laplace transform by:
- looking it up in a table
- Bromwich contour integration in the complex plane
- using tricks

More examples with exactly solvable Kolmogorov equations: Karlin & Taylor, Sec. 4.1.

What are the implications of the results we derived for the Poisson counting process?

This tells us that T_0 is an exponentially distributed random variable with mean 1/λ, where λ is the constant birth rate of the process.

In more detail, random variables that have an (absolutely) continuous probability distribution can be described in terms of a probability density function (PDF). An exponentially distributed random variable with mean 1/λ has PDF

p(t) = λ e^{-λt} for t ≥ 0,

which agrees with the formula we derived for T_0. One can show, by essentially repeating the calculation, that for any state j and any time t', the random variable defined as the amount of time one must wait after time t' until the Poisson counting process moves to a new state is also an exponentially distributed random variable with mean 1/λ.

Another key observation: if we interpret N(t) as the number of events that occur up to time t, then N(t) is given by a Poisson distribution with mean λt:

P(N(t) = n) = (λt)^n e^{-λt} / n!.
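Both facts are easy to see in simulation. A sketch (rate, horizon, and trial count are invented for illustration): build the process from independent exponential holding times and check that the mean count over [0, t] is close to λt while the mean inter-event time is close to 1/λ.

```python
# Sketch: simulate the Poisson counting process from exponential holding
# times; check E[N(t)] ~ lam*t and mean inter-event time ~ 1/lam.
import random

random.seed(0)
lam, t_end, trials = 3.0, 2.0, 20000

counts, gaps = [], []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        w = random.expovariate(lam)   # exponential holding time, mean 1/lam
        gaps.append(w)                # record every draw (unbiased sample)
        if t + w > t_end:
            break                     # next event lands past the horizon
        t += w
        n += 1
    counts.append(n)                  # n = N(t_end) for this trial

mean_count = sum(counts) / len(counts)
mean_gap = sum(gaps) / len(gaps)
print(mean_count, mean_gap)           # close to lam*t_end = 6.0 and 1/lam ~ 0.333
```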

There are deep reasons why the exponential distribution and the Poisson distribution are the ones that appear in describing the statistics of the Poisson counting process: they have direct connections to the Markov property.

Let's consider why the time to wait until the next event (jump) is exponentially distributed. Recall that we are dealing with a time-homogeneous Markov process. Again, we define T as above to be the waiting time until the next event (jump), and set f(t) = P(T > t). Consider the conditional probability

P(T > t' + s | T > s).

By the Markov property applied with the time instant s as the current time, this conditional probability must be exactly equal to P(T > t'). The Markov property (along with time-homogeneity) therefore says the function f must be a solution of the functional equation

f(t' + s) = f(t') f(s).

One can show that the only (right-continuous) solution to this equation is exponential: f(t) = e^{-λt} (Karlin and Taylor, Theorem 4.2.2).
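The functional-equation step can be written out as follows (a standard derivation, not verbatim from the notes):

```latex
% Memorylessness forces the exponential law. Let f(t) = P(T > t).
% By the definition of conditional probability,
\[
  P(T > t' + s \mid T > s) = \frac{P(T > t' + s)}{P(T > s)} = \frac{f(t'+s)}{f(s)},
\]
% while the Markov property (with time-homogeneity) gives
\[
  P(T > t' + s \mid T > s) = P(T > t') = f(t').
\]
% Equating the two yields the functional equation
\[
  f(t' + s) = f(t')\, f(s), \qquad f(0) = 1, \quad f \text{ nonincreasing},
\]
% whose only right-continuous solutions are
\[
  f(t) = e^{-\lambda t} \quad \text{for some } \lambda \ge 0,
\]
% i.e., T is exponentially distributed with rate \lambda (mean 1/\lambda).
```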