FDST Markov Chain Models

Tuesday, February 11, 2014, 2:01 PM

Homework 1 due Friday, February 21 at 2 PM.
Reading: Karlin and Taylor, Sections 2.1-2.3

Almost all of our Markov chain models will be time-homogeneous, meaning the dynamical rules are invariant with respect to the epoch. This simplifies the description of the Markov chain in that the stochastic update rule and the probability transition matrix do not depend explicitly on the epoch.

1) Two-state system (M=2), which can abstractly be thought of as an on/off system.

State 1: Off/free/unbound/tumble/rain
State 2: On/busy/bound/run/dry

When the system is on, there is a probability q for the system to turn off during the next epoch. When the system is off, there is a probability p for the system to turn on during the next epoch.

Probability transition matrix:

    P = [ 1-p    p  ]
        [  q    1-q ]

supplemented with an appropriate initial probability distribution. The stochastic update rule, as always, could be written down in principle, but it is awkward.

2) Queueing models with maximum capacity M (Karlin and Taylor, Sec. 2.2C)

We'll consider for now a queue with a single server that handles one request/demand at a time; any other pending requests are put into the queue.
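Before moving on to the queue, note that the two-state model in 1) is small enough to check numerically. A minimal Python sketch (the function names and the 0-based state labels are my own; state 0 is "off", state 1 is "on"):

```python
import random

def two_state_P(p, q):
    """Transition matrix for the on/off chain.
    Row/column 0 = off (state 1 in the notes), 1 = on (state 2)."""
    return [[1 - p, p],
            [q, 1 - q]]

def stationary(p, q):
    """Stationary distribution; for two states it has the closed
    form (q/(p+q), p/(p+q))."""
    return [q / (p + q), p / (p + q)]

def step(state, p, q, rng=random):
    """One epoch of the stochastic update rule."""
    u = rng.random()
    if state == 0:                 # off: turn on with probability p
        return 1 if u < p else 0
    return 0 if u < q else 1       # on: turn off with probability q
```

One can verify that the stationary vector is a left eigenvector of the transition matrix, i.e., pi P = pi.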

We define a state space for the queue by counting the number of requests that are either being actively served or waiting in the queue.

As for the parameter domain, what should an epoch correspond to?
- Equally spaced time intervals
- Each completion of a request
- Each arrival of a request

Let's first consider the case in which an epoch corresponds to a fixed time interval. We will assume that this time interval is short enough that it is very unlikely that two or more changes happen to the system within it (a typical and convenient, but not always necessary, assumption). Otherwise the model is much more complicated to write down.

With this simplifying assumption about the time step corresponding to the epoch, the following can happen during one epoch:
- A request is completed (with probability q)
- A new request arrives (with probability p)
- Nothing changes (with probability 1-p-q)

The queue has maximum capacity M, and rejects further requests.

We'll write down the Markov chain model in both formulations.

Probability transition matrix (again with a suitable initial distribution): for interior states 0 < i < M,

    P(i, i+1) = p,   P(i, i-1) = q,   P(i, i) = 1 - p - q,

while at the boundary states we take the same rules except we forbid leaving the state space:

    P(0, 1) = p,     P(0, 0) = 1 - p,
    P(M, M-1) = q,   P(M, M) = 1 - q.
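The whole (M+1)-by-(M+1) transition matrix for this queue can be assembled in a few lines of Python (a sketch; the function name is mine):

```python
def queue_P(M, p, q):
    """Probability transition matrix for the capacity-M queue observed
    at fixed time intervals: at most one arrival (probability p) or one
    service completion (probability q) per epoch; boundaries reflect."""
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        up = p if i < M else 0.0    # an arrival at capacity is rejected
        down = q if i > 0 else 0.0  # nothing to complete when empty
        if i < M:
            P[i][i + 1] = up
        if i > 0:
            P[i][i - 1] = down
        P[i][i] = 1.0 - up - down   # probability of no change
    return P
```

Every row sums to 1, as any probability transition matrix must.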

Stochastic update rule: intuitively, with a model like this where the state is incremented or decremented randomly, it's natural to write the role of the noise as additive,

    X_{n+1} = X_n + Z_{n+1},   where Z_{n+1} = +1, 0, -1 with probabilities p, 1-p-q, q.

But not quite... there are "reflecting" boundary conditions. Here is a compact way to handle these boundary conditions:

    X_{n+1} = min(max(X_n + Z_{n+1}, 0), M).

Now let's consider formulating a queueing model where the epochs are defined by the moments at which a service is completed. We'll look at this again from both modeling standpoints.

X_n is now the number of requests being served or in the queue after the nth request has been processed. The information needed to formulate the Markov chain model in this setting is the arrival distribution

    a_j = probability that j requests arrive during a service period.

Probability transition matrix:

    P(0, j) = a_j,   and   P(i, j) = a_{j-i+1} for i >= 1 and j >= i - 1.
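Given the arrival distribution as a finite list, this transition matrix can also be assembled mechanically. A sketch in Python; the truncation to a finite state space 0..N, with the overflow probability lumped into state N so rows still sum to 1, is my own device for keeping the matrix finite, not something in the notes:

```python
def service_epoch_P(a, N):
    """Transition matrix on states 0..N for the queue observed at
    service completions.  a[j] = probability that j requests arrive
    during one service period; the a[j] are assumed to sum to 1."""
    def arr(j):
        return a[j] if 0 <= j < len(a) else 0.0
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        # One request is completed during the epoch, unless the queue was empty.
        base = i - 1 if i >= 1 else 0
        for j in range(N):
            P[i][j] = arr(j - base)
        # Lump the probability of overshooting state N into state N.
        P[i][N] = sum(arr(m) for m in range(max(N - base, 0), len(a)))
    return P
```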

Stochastic update rule:

    X_{n+1} = X_n - 1 + A_{n+1} if X_n >= 1,   and   X_{n+1} = A_{n+1} if X_n = 0.

Or better yet,

    X_{n+1} = max(X_n - 1, 0) + A_{n+1},

where A_{n+1} is the random number of arrivals during the nth service period, and has probability distribution given by the a_j above.

3) Random Walk on a Graph (Lawler Ch. 1)

In the simplest version, when the random walker is at a given node (state) of the system, it chooses an edge with equal probability to make

its next move. But for applications it's more useful to allow general probabilities for moving along the possible edges, provided that the probabilities for the edges leaving each node add up to 1. There is no need for the probabilities corresponding to a given edge to be the same in both directions.

A general probability transition matrix for such a graph has all entries nonnegative and all row sums equal to 1. The interpretation is that P(i, j) is the probability that a random walker at node i will move to node j over the next epoch, and the diagonal entry P(i, i) is the probability that a random walker at node i stays at node i over the next epoch.

This more general framework is useful in applications:
- nodes could represent actual discrete spatial locations, i.e., patches between which an animal moves
- nodes could represent more abstract categories of state: electronic excitation states, configurations of biomolecules (see the work of Christof Schütte), financial/credit conditions of organizations/countries/individuals

The stochastic update rule is again awkward.

We could imagine that the model presented above corresponds to some system being observed at regular time intervals. One could alternatively write down a Markov chain model where the epochs are defined in terms of the times at which the state changes. This could be done from scratch; then the probability transition matrix would have the same structure except that the diagonal entries would be 0. Alternatively, one could derive such a Markov chain model from the originally posed Markov chain model (formulated in terms of regular time steps), provided we assume the original Markov chain model had a small enough time step that it didn't miss any transitions. This is what's known as deriving the embedded Markov chain from the original Markov chain.
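Going back to the random walk on a graph for a moment: both the uniform version and the general-weights version translate directly into code. A sketch in Python (the adjacency and weight representations here are my own choices):

```python
def graph_walk_P(adj):
    """Simple random walk: from node v, pick one of its neighbors
    uniformly at random.  adj maps each node to its neighbor list."""
    nodes = sorted(adj)
    index = {v: k for k, v in enumerate(nodes)}
    P = [[0.0] * len(nodes) for _ in range(len(nodes))]
    for v, nbrs in adj.items():
        for w in nbrs:
            P[index[v]][index[w]] = 1.0 / len(nbrs)
    return P

def weighted_walk_P(weights):
    """General random walk: weights[v][w] >= 0 is the relative weight of
    the directed edge v -> w; each row is normalized to sum to 1.
    Weights need not agree between the two directions of an edge."""
    nodes = sorted(weights)
    index = {v: k for k, v in enumerate(nodes)}
    P = [[0.0] * len(nodes) for _ in range(len(nodes))]
    for v, out in weights.items():
        total = sum(out.values())
        for w, wt in out.items():
            P[index[v]][index[w]] = wt / total
    return P
```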

To derive the probability transition matrix for the embedded Markov chain from the probability transition matrix of the original Markov chain, we do a conditional probability calculation: for j != i,

    P_emb(i, j) = P(X_{n+1} = j | X_n = i, X_{n+1} != i) = P(i, j) / (1 - P(i, i)),

and P_emb(i, i) = 0. This is just one particular relation that follows from the fact that any probability rule remains valid if a condition is added consistently to all probabilities appearing in it. This is because one is simply replacing the

given probability measure by the corresponding conditioned probability measure (think Bayesian).
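The conditioning recipe (zero out the diagonal, renormalize each row by the probability of actually jumping) is mechanical. A sketch in Python; the decision to leave absorbing states as self-loops, since there is no jump to condition on, is my own convention:

```python
def embedded_P(P, tol=1e-12):
    """Embedded (jump) chain of a discrete-time Markov chain: condition
    each row on the event that the state actually changes, so
    Q[i][j] = P[i][j] / (1 - P[i][i]) for j != i and Q[i][i] = 0.
    Rows with P[i][i] == 1 (absorbing states) are kept as self-loops."""
    n = len(P)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        stay = P[i][i]
        if 1.0 - stay < tol:   # absorbing state: no jump to condition on
            Q[i][i] = 1.0
            continue
        for j in range(n):
            if j != i:
                Q[i][j] = P[i][j] / (1.0 - stay)
    return Q
```

Applied to the two-state on/off chain, for example, this yields the deterministic alternation between the two states, since every actual change is a switch.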