Formulation of Finite State Markov Chains
Friday, September 23, 2011, 2:04 PM

Note: special lecture series by Emmanuel Candes on compressed sensing, Monday and Tuesday 4-5 PM (room information on rpinfo).

First, an addendum to generating functions: they can also be used to compute the probability distribution for random sums.
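For instance (a sketch, with notation supplied here rather than taken from the notes): let S = X_1 + … + X_N, where the X_i are i.i.d. and independent of the nonnegative integer-valued random variable N. Conditioning on N gives

$$\mathbb{E}\left[e^{itS}\right] = \sum_{n=0}^{\infty}\mathbb{P}(N=n)\,\varphi_X(t)^n = g_N\bigl(\varphi_X(t)\bigr),$$

where φ_X is the common characteristic function of the X_i and g_N(z) = E[z^N] is the probability generating function of N.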

Or, by the same argument, if X is a discrete random variable, then one can work purely with probability generating functions: g_S(z) = g_N(g_X(z)). This is discussed in Karlin and Taylor Sec. 1.1, and in Resnick Ch. 0 (in more detail).

Back to finite state discrete time Markov chains. We presented last time the classical definition of the Markov property for a stochastic process (in discrete time). An interesting equivalent characterization of the Markov property (proof not given here): given the current state of the stochastic process, the future is conditionally independent of the past.
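In symbols (standard formulations, supplied here): the classical definition reads

$$\mathbb{P}(X_{n+1}=j \mid X_n=i_n, X_{n-1}=i_{n-1}, \ldots, X_0=i_0) = \mathbb{P}(X_{n+1}=j \mid X_n=i_n),$$

while the equivalent characterization states that for any event A determined by the past (X_0, …, X_{n-1}) and any event B determined by the future (X_{n+1}, X_{n+2}, …),

$$\mathbb{P}(A \cap B \mid X_n = i) = \mathbb{P}(A \mid X_n = i)\,\mathbb{P}(B \mid X_n = i).$$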

This alternative characterization shows that the Markov property does not actually depend on the direction of time. A finite state discrete time Markov chain is just a finite-state, discrete-time stochastic process that has the Markov property.

To define a specific model for a Markov chain using this classical characterization, the mathematical properties suggest that we need to specify the following quantities:
1. Probability transition matrix (for time-homogeneous Markov chains, we just need one matrix)
2. Initial probability distribution
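That is, writing these out (notation supplied here):

$$P_{ij} = \mathbb{P}(X_{n+1}=j \mid X_n=i), \qquad \mu^{(0)}_i = \mathbb{P}(X_0 = i),$$

where, for a time-homogeneous chain, the matrix P = (P_{ij}) does not depend on the epoch n.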

These mathematical structures can be defined arbitrarily and give a valid finite state discrete time Markov chain model provided the following conditions are met:

$$P_{ij} \ge 0 \text{ for all } i,j, \qquad \sum_j P_{ij} = 1 \text{ for each } i, \qquad \mu^{(0)}_i \ge 0, \qquad \sum_i \mu^{(0)}_i = 1.$$

Note the structure of the probability transition matrix: each row is itself a probability distribution, namely the distribution of the next state given that the current state is the one indexing that row.

To complete the mathematical foundations of FSDT (finite-state discrete-time) Markov chains, we will show that the stochastic update rule and the probability transition matrix give equivalent characterizations of a Markov chain (Resnick Sec. 2.1). We'll do this by first showing that a stochastic process defined by the stochastic update rule satisfies the Markov property (and therefore has a well-defined formulation in terms of a probability transition matrix).

Given: a stochastic process defined by a stochastic update rule X_{n+1} = f(X_n, Z_n), where Z_0, Z_1, Z_2, … are independent random variables driving the dynamics. Do not throw away the condition! Now we claim that Z_n is independent of X_0, X_1, X_2, …, X_n.

Provided (as we will always assume) that the initial conditions are independent of the random variables Z_n determining the dynamics, we see that since X_0, Z_0, Z_1, … are all independent random variables, Z_n is independent of X_0, …, X_n. Now we will use the following consequence of independence: conditioning on random variables that are independent of Z_n has no effect on events determined by Z_n, so

P(X_{n+1} = j | X_0 = i_0, …, X_n = i_n) = P(f(i_n, Z_n) = j | X_0 = i_0, …, X_n = i_n) = P(f(i_n, Z_n) = j),

which depends only on the current state i_n; this is exactly the Markov property.

To prepare for showing the converse, that every FSDT stochastic process with the Markov property can be encoded by a stochastic update rule, we will digress briefly to discuss the simulation of finite state random variables. Typically a computational platform will at least have an algorithm that generates pseudorandom numbers which are independent and uniformly distributed on [0,1].

To generate a random variable Y with a finite state space, we have to first specify its probability distribution, say P(Y = i) = p_i for i = 1, …, M. Partition the unit interval [0,1] into disjoint subintervals I_1, …, I_M with lengths p_1, …, p_M; if U is uniformly distributed on [0,1], then setting Y = i whenever U falls in I_i produces a random variable with exactly this distribution.

We can use this idea to simulate FSDT Markov chains: sample from the initial probability distribution to get X_0, and then, given X_n = i, generate X_{n+1} by sampling from the probability distribution given by the ith row of the probability transition matrix.

We also need to introduce a useful piece of notation. The indicator function of an event A is the binary random variable 1_A that takes the value 1 when A occurs and 0 otherwise.

We will use this notation and the numerical simulation idea to show that any Markov chain specified by an initial probability distribution and a probability transition matrix can also be encoded by a stochastic update rule.
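A minimal Python sketch of this simulation scheme (the function names and the concrete numbers are illustrative choices, not from the notes):

import numpy as np

def sample_finite(p, u):
    """Map a uniform draw u in [0,1) to a state by partitioning [0,1]
    into subintervals of lengths p[0], ..., p[M-1]."""
    # The clamp guards against floating-point round-off in the cumulative sums.
    return min(int(np.searchsorted(np.cumsum(p), u, side="right")), len(p) - 1)

def simulate_chain(P, mu0, n_steps, rng=None):
    """Simulate a finite-state Markov chain with transition matrix P
    (each row sums to 1) and initial distribution mu0."""
    rng = np.random.default_rng() if rng is None else rng
    x = sample_finite(mu0, rng.uniform())       # X_0 from the initial distribution
    path = [x]
    for _ in range(n_steps):
        x = sample_finite(P[x], rng.uniform())  # X_{n+1} from row X_n of P
        path.append(x)
    return path

# Illustration with the two-state on/off system introduced below,
# taking p = 0.3 (off -> on) and q = 0.5 (on -> off); states 0 = off, 1 = on.
P = np.array([[0.7, 0.3],
              [0.5, 0.5]])
print(simulate_chain(P, mu0=[1.0, 0.0], n_steps=10))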

Given the probability transition matrix (or matrices), let U_0, U_1, U_2, … be a sequence of independent, identically distributed random variables which are uniformly distributed on the unit interval. Then the stochastic update rule

X_{n+1} = f(X_n, U_n), where f(i, u) is the state j for which u falls in the jth subinterval of the partition of [0,1] whose lengths are given by the ith row of the probability transition matrix (using the indicator notation, f(i, u) = Σ_j j · 1{ Σ_{k<j} P_ik ≤ u < Σ_{k≤j} P_ik }),

will generate a stochastic process with the same statistical properties as the Markov chain defined by the probability transition matrix. We have therefore shown that any FSDT Markov chain can equivalently be expressed in terms of a probability transition matrix or a stochastic update rule. Sometimes one formulation is much more convenient than the other.

Examples of Markov Chains

Readings: Karlin and Taylor, Sections 2.1-2.3

1) Two-state on/off system

State 1: off/free/unbound
State 2: on/busy/bound

When the system is in the off state, it has probability p in each epoch of turning on. When the system is in the on state, it has probability q in each epoch of turning off.

This system is easily encoded by a probability transition matrix:

P = [ 1-p    p  ]
    [  q    1-q ]

Specify an initial probability distribution; this gives a well-defined Markov chain model based on a probability transition matrix. One could also in principle write down a stochastic update rule, but here it would be more cumbersome than the probability transition matrix.

2) Queueing models with maximum capacity M (Karlin and Taylor, Sec. 2.2 C)

We'll consider a queueing system with a single server which can handle only one request/demand at a time; other requests or demands are put into the queue.

One can construct a state space for the queue by simply counting how many requests are either being served or waiting in the queue: the possible states are 0, 1, 2, …, M.

How should the epochs of a Markov chain model for a queue be defined? There are several choices with various merits:
- Equally spaced intervals of time
- Each completion of a request
- Each arrival of a request

Let's first consider the simplest setting, where the epochs are spaced equally in time and the time interval is so short that we will neglect the possibility that more than one change happens to the system between successive epochs. During each epoch, one of the following happens:
- A request arrives (with probability p)
- A service is completed (with probability q)
- Nothing changes (with probability 1-p-q)

Probability transition matrix:
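A plausible reconstruction of the matrix, under an assumption about the boundary behavior (no service can complete when the queue is empty, and arrivals are lost when the queue is at its maximum capacity M): with states 0, 1, …, M, the transitions above give the (M+1)×(M+1) tridiagonal matrix

$$P = \begin{pmatrix}
1-p & p & & & \\
q & 1-p-q & p & & \\
 & \ddots & \ddots & \ddots & \\
 & & q & 1-p-q & p \\
 & & & q & 1-q
\end{pmatrix}.$$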