Part I Stochastic variables and Markov chains


Random variables describe the behaviour of a phenomenon independently of any specific sample space. Distribution function (cdf, cumulative distribution function): F(x) = P{X ≤ x}. Complementary distribution function (tail distribution): P{X > x} = 1 − F(x)

Continuous random variables can be described by means of the probability density function (pdf): f(x) = dF(x)/dx, so that P{a < X ≤ b} = ∫_a^b f(x) dx

Discrete random variables The set of values a discrete random variable X can take is either finite or countably infinite, X ∈ {x_1, x_2, ...} Associated with these values is a set of point probabilities p_i = P{X = x_i} Probability mass function (pmf): p(x_i) = p_i, with Σ_i p_i = 1

Independent random variables The random variables X and Y are independent if and only if the events {X ≤ x} and {Y ≤ y} are independent for all x and y: P{X ≤ x, Y ≤ y} = P{X ≤ x} P{Y ≤ y} (equivalently, in the discrete case, P{X = x, Y = y} = P{X = x} P{Y = y}; the two conditions are equivalent)

Expectation Parameters of distributions, denoted by E[X]: E[X] = ∫ x f(x) dx (continuous distribution), E[X] = Σ_i x_i p_i (discrete distribution), E[X] = ∫ x dF(x) (in general) Properties of expectation (c constant): E[c] = c, E[cX] = c E[X], E[X + Y] = E[X] + E[Y], and E[XY] = E[X] E[Y] when X and Y are independent

Parameters of distributions Variance, denoted by V[X] or Var[X]: V[X] = E[(X − E[X])^2] = E[X^2] − E[X]^2 Standard deviation, denoted by σ: σ = √V[X] Coefficient of variation, denoted by C_v: C_v = σ / E[X] Covariance, denoted by Cov[X,Y]: Cov[X,Y] = E[(X − E[X])(Y − E[Y])]; Cov[X,X] = Var[X]; Cov[X,Y] = 0 if X and Y are independent

Parameters of distributions Properties of variance (c constant): V[c] = 0, V[cX] = c^2 V[X], and V[Σ_i X_i] = Σ_i V[X_i] when the X_i are independent Properties of covariance: Cov is bilinear, and in general V[X + Y] = V[X] + V[Y] + 2 Cov[X,Y]

Discrete variables: Bernoulli distribution X ~ Bernoulli(p): p_0 = q = 1 − p, p_1 = p; E[X] = p, V[X] = pq Example: the cell stream arriving at an input port of an ATM switch; in a time slot (cell slot) there is a cell with probability p, or the slot is empty with probability q

Discrete variables: Binomial distribution X ~ Bin(n,p): the number of successes in a sequence of n independent Bernoulli(p) trials, P{X = k} = C(n,k) p^k q^(n−k), k = 0, ..., n; E[X] = np, V[X] = npq Can be written as the sum of n Bernoulli variables
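The claim that a Bin(n,p) variable is a sum of n Bernoulli(p) variables can be checked numerically; the following is a small Python sketch (not from the slides; the helper names are ours), which builds the pmf of the sum by repeated convolution and compares it with the closed-form binomial pmf.

```python
from math import comb

def bernoulli_pmf(p):
    """pmf of a Bernoulli(p) variable: P{X=0} = 1-p, P{X=1} = p."""
    return [1 - p, p]

def convolve(pmf_a, pmf_b):
    """pmf of the sum of two independent discrete variables."""
    out = [0.0] * (len(pmf_a) + len(pmf_b) - 1)
    for i, a in enumerate(pmf_a):
        for j, b in enumerate(pmf_b):
            out[i + j] += a * b
    return out

n, p = 5, 0.3
pmf = [1.0]                       # pmf of an "empty sum" (always 0)
for _ in range(n):
    pmf = convolve(pmf, bernoulli_pmf(p))

# closed form: P{X=k} = C(n,k) p^k (1-p)^(n-k)
closed = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pmf, closed))
```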

Discrete variables: Geometric distribution X ~ Geom(p): the number of trials in a sequence of independent Bernoulli(p) trials until the first success occurs, P{X = n} = q^(n−1) p, n = 1, 2, ...; E[X] = 1/p

Discrete variables: Geometric distribution The probability that the first success needs more than n trials: P{X > n} = q^n Memoryless property: P{X > n + m | X > m} = P{X > n}
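The memoryless property follows directly from the tail P{X > n} = q^n; a quick Python check (our own sketch, not part of the slides):

```python
p = 0.25
q = 1 - p

def tail(n):
    """P{X > n}: the first n trials were all failures."""
    return q ** n

for m in range(10):
    for n in range(10):
        # memoryless: P{X > m+n | X > m} = P{X > n}
        assert abs(tail(m + n) / tail(m) - tail(n)) < 1e-12
```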

Discrete variables: Poisson distribution X ~ Poisson(a): P{X = n} = e^(−a) a^n / n!; E[X] = V[X] = a Poisson(λt) represents the number of occurrences of events (e.g. arrivals) in an interval of length t from a Poisson process (see later) with intensity λ: the probability of an event ( success ) in a small interval dt is λdt; the probability of two simultaneous events is o(dt); the numbers of events in disjoint intervals are independent

Properties of Poisson distribution The sum of two Poisson random variables X_1 ~ Poisson(a_1) and X_2 ~ Poisson(a_2) is Poisson distributed: X_1 + X_2 ~ Poisson(a_1 + a_2) If the number N of elements in a set obeys the Poisson distribution, N ~ Poisson(a), and one makes a random independent selection with probability p, then the size K of the selected set obeys K ~ Poisson(pa) If the elements of a set with size N ~ Poisson(a) are randomly assigned to one of two groups 1 and 2 with probabilities p_1 and p_2 = 1 − p_1, then the sizes of the two sets, N_1 and N_2, are independent and distributed as N_1 ~ Poisson(p_1 a), N_2 ~ Poisson(p_2 a)
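The summation property can be verified exactly by convolving the two pmfs; a small Python sketch (ours, not from the slides), truncating the infinite support at a point where the tail mass is negligible:

```python
from math import exp, factorial

def poisson_pmf(a, k):
    """P{X = k} for X ~ Poisson(a)."""
    return exp(-a) * a**k / factorial(k)

a1, a2 = 1.5, 2.5
N = 30   # truncation point; the tail mass beyond N is negligible here

# pmf of X1 + X2 by discrete convolution
conv = [sum(poisson_pmf(a1, j) * poisson_pmf(a2, k - j) for j in range(k + 1))
        for k in range(N)]

# should match Poisson(a1 + a2) term by term
for k in range(N):
    assert abs(conv[k] - poisson_pmf(a1 + a2, k)) < 1e-12
```

The random-selection and random-split properties can be checked the same way, by summing the binomial thinning over the Poisson pmf.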

Continuous variables: Deterministic distribution Takes on a single value t_0; the pdf is impulsive (a unit impulse at t_0) and the distribution function is a step function E[X] = t_0, V[X] = 0, C_v = 0

Continuous variables: Uniform distribution f(x) = 1/(b − a) for a < x < b, and 0 elsewhere

Continuous variables: Normal distribution X ~ N(µ, σ^2) Property: a linear transformation of a normal variable is again normal; in particular (X − µ)/σ ~ N(0, 1)

Continuous variables: Normal distribution Let X_1, X_2, ... be independent and identically distributed random variables with mean µ and standard deviation σ (both finite); then, as n → ∞, (X_1 + ... + X_n − nµ)/(σ√n) → N(0, 1) in distribution Nothing is assumed about the distribution of the X_i, but their arithmetic average tends to be normally distributed for large n Useful in statistical evaluation of simulation results Known as the Central Limit Theorem

Continuous variables: Exponential Distribution X ~ Exp(λ) probability density function: f(x) = λ e^(−λx) for x ≥ 0, and 0 for x < 0 cumulative distribution function: F(x) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0 mean and standard deviation: E[X] = σ = 1/λ

Continuous variables: Exponential Distribution A continuous R.V. X is exponentially distributed if and only if, for s, t ≥ 0, P{X > s + t | X > t} = P{X > s}, or equivalently P{X > s + t} = P{X > s} P{X > t} A random variable with this property is said to be memoryless.

Continuous variables: Exponential Distribution A queueing system with two servers The service times are assumed to be exponentially distributed (with the same parameter) Upon arrival of a customer, both servers are occupied but there are no other waiting customers. What is the probability that this customer will be the last to depart from the system? (By the memoryless property, when the new customer enters service, the remaining service time of the other customer in service restarts with the same Exp distribution, so the answer is 1/2.)

Continuous variables: Exponential Distribution Minimum and maximum of exponentially distributed random variables X_1, ..., X_n ~ Exp(λ), i.i.d. The minimum obeys the distribution Exp(nλ), with ending intensity equal to nλ: P{min_i X_i > t} = (e^(−λt))^n = e^(−nλt) The cdf of the maximum is P{max_i X_i ≤ t} = (1 − e^(−λt))^n The expectation of the maximum can be deduced by considering one exponential stage at a time: E[max_i X_i] = (1/λ)(1 + 1/2 + ... + 1/n)
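Both facts are easy to check by simulation; a seeded Python sketch (ours, not from the slides) compares the empirical means of the minimum and maximum of n i.i.d. Exp(λ) draws with the formulas above:

```python
import random

random.seed(1)
lam, n, N = 2.0, 5, 100_000

mins, maxes = [], []
for _ in range(N):
    draws = [random.expovariate(lam) for _ in range(n)]
    mins.append(min(draws))
    maxes.append(max(draws))

mean_min = sum(mins) / N
mean_max = sum(maxes) / N

# theory: min ~ Exp(n*lam), so E[min] = 1/(n*lam)
assert abs(mean_min - 1 / (n * lam)) < 0.005
# theory: E[max] = (1/lam) * (1 + 1/2 + ... + 1/n)
h_n = sum(1 / k for k in range(1, n + 1))
assert abs(mean_max - h_n / lam) < 0.02
```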

Continuous variables: Erlang Distribution X ~ Erl(n,λ): X = X_1 + X_2 + ... + X_n, where the X_i ~ Exp(λ) are i.i.d. f_X(t) = λ^n t^(n−1) e^(−λt)/(n − 1)! for t ≥ 0, and 0 for t < 0 E[X] = n/λ, V[X] = n/λ^2, C_v^2 = 1/n An Erlang-n distribution is a series of n independent and identically distributed exponential stages (n-stage Erlang distribution) For n = 1, Erl(n,λ) is the exponential distribution; as n → ∞ with n/λ finite, Erl(n,λ) tends to the deterministic distribution

Erlang distribution: example A system consists of two servers. Customers arrive with Exp(λ) distributed interarrival times and are alternately sent to servers 1 and 2. The interarrival time distribution of customers arriving at a given server is Erl(2,λ) Let N_t, the number of events in an interval of length t, obey the Poisson distribution: N_t ~ Poisson(λt). Then the time T_n from an arbitrary event to the nth event thereafter obeys the distribution Erl(n,λ)

Continuous variables: Gamma distribution X ~ Gamma(a,λ) generalizes the Erlang distribution: f_X(t) = λ^a t^(a−1) e^(−λt)/Γ(a) for t ≥ 0, and 0 for t < 0, where Γ(a) = ∫_0^∞ s^(a−1) e^(−s) ds E[X] = a/λ, V[X] = a/λ^2, C_v^2 = 1/a a: form parameter, λ: scale parameter; for a < 1 the distribution is more irregular than the exponential (C_v > 1)

Continuous variables: Hypo-exponential distribution Like the Erlang distribution, it is made of n successive exponential stages, X_1 + X_2 + ... + X_n, but here the exponential stages need not have the same mean With two exponential stages X_1 ~ Exp(λ_1) and X_2 ~ Exp(λ_2): E[X] = 1/λ_1 + 1/λ_2, V[X] = 1/λ_1^2 + 1/λ_2^2, and C_v ≤ 1

Continuous variables: Hyper-exponential distribution A random choice between exponential variables For example, let X_1 and X_2 be two independent exponentially distributed random variables with parameters λ_1 and λ_2 Let X be a variable that, with probability p_1, is distributed as X_1 and, with probability p_2, is distributed as X_2 (where p_1 + p_2 = 1) X has a 2-stage hyperexponential distribution: f(x) = p_1 λ_1 e^(−λ_1 x) + p_2 λ_2 e^(−λ_2 x), x ≥ 0

Continuous variables: Phase-type distributions Phase-type distributions are formed by summing exponential distributions, or by probabilistically choosing among them Some examples include the distributions seen above: Erlang distribution Hypo-exponential distribution Hyper-exponential distribution

Random variables in computer modelling In modelling of computer and communication systems one must make assumptions about the involved job service times or the packet transmission times; often the exponential, Erlang, hypo- and hyper-exponential distributions are used The exponential distribution is especially advantageous: its memoryless property simplifies the mathematics It is also useful in many practical cases: for example, extensive monitoring of telephone exchanges has revealed that telephone call durations typically are exponentially distributed, and the time between successive call arrivals obeys the exponential distribution.

Stochastic processes The systems we consider evolve in time, and we are interested in their dynamic behaviour, usually involving some randomness: the length of a queue, the temperature outside, the number of data packets in a network, etc. A stochastic process X_t, or X(t), is a family of random variables indexed by a parameter t Formally, it is a mapping from the sample space S to functions of t. Each element e of S is associated with X_t(e), a function of time t For a given value of t, X_t is a random variable For a given value e, the function X_t(e) is called a realization of the stochastic process (also trajectory or sample path).

Stochastic processes Stochastic processes can be either: Discrete-time/Discrete-state e.g., the number of jobs present in a computer system at the time of departure of the kth job Continuous-time/Discrete-state e.g., the number of jobs present in a computer system at time t Discrete-time/Continuous-state e.g., the time the kth job has to wait until its service starts Continuous-time/Continuous-state e.g., the total amount of service that needs to be done on all jobs present in the system at a given time t

Stochastic processes First-order distribution: F(x, t) = P{X(t) ≤ x} n-th order distribution: F(x_1, ..., x_n; t_1, ..., t_n) = P{X(t_1) ≤ x_1, ..., X(t_n) ≤ x_n} The process is strictly stationary when F(x, t) = F(x, t + τ) for every shift τ (and similarly for all higher-order distributions) Independent process: F(x_1, ..., x_n; t_1, ..., t_n) = Π_i P{X(t_i) ≤ x_i} Stationarity in the wide sense: the 1st and 2nd order statistics are translation invariant

Markov Process The next state taken on by a stochastic process depends only on the current state of the process; it does not depend on the states assumed previously (called first-order dependence or Markov dependence): P{X(t_(n+1)) ≤ x_(n+1) | X(t_n) = x_n, ..., X(t_0) = x_0} = P{X(t_(n+1)) ≤ x_(n+1) | X(t_n) = x_n} A Markov process can be discrete- or continuous-time Time-homogeneous Markov process: P{X(t + s) ≤ x | X(s) = x_s} = P{X(t) ≤ x | X(0) = x_s}

Discrete-Time Markov Chain Discrete-time stochastic process {X_n, n = 0, 1, 2, ...} taking values in {0, 1, 2, ...} Memoryless property: P{X_(n+1) = j | X_n = i, X_(n−1) = i_(n−1), ..., X_0 = i_0} = P{X_(n+1) = j | X_n = i} = P_ij Transition probabilities P_ij Transition probability matrix P = [P_ij] It is a stochastic matrix: P_ij ≥ 0 and Σ_j P_ij = 1 for every row i

Discrete-Time Markov Chain: an example Transition matrix and graphical representation (can both be used to represent a DTMC)

Discrete-Time Markov Chain Probability of a path: P{X_0 = i_0, X_1 = i_1, ..., X_n = i_n} = P{X_0 = i_0} P_(i_0 i_1) P_(i_1 i_2) ··· P_(i_(n−1) i_n)

Discrete-Time Markov Chain The probability that the system, initially in state i, will be in state j after two steps is P_ij(2) = Σ_k P_ik P_kj (this takes into account all paths via an intermediate state k) This summation coincides with the element {i, j} of the matrix P^2 = P·P
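The identity "two-step probabilities = entries of P^2" can be checked with any stochastic matrix; here is a Python sketch (ours, not from the slides), using the transition matrix that appears in the worked example later in these notes:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.35, 0.5, 0.15],
              [0.245, 0.455, 0.3]])

# two-step probabilities: sum over the intermediate state k
P2_manual = np.array([[sum(P[i, k] * P[k, j] for k in range(3))
                       for j in range(3)] for i in range(3)])

assert np.allclose(P2_manual, P @ P)           # element {i,j} of P^2
assert np.allclose((P @ P).sum(axis=1), 1.0)   # P^2 is again stochastic
```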

State Probabilities Stationary Distribution P^n describes how the probabilities change in n steps Define the probability distribution at time n as π_j^n = P{X_n = j}, π^n = (π_0^n, π_1^n, ...): the probability that the process is in state j is π_j^n for each j If the state of the process at time n is described by π^n, then the new probabilities after k steps are simply calculated as π^(n+k) = π^n P^k If the π^n distribution converges to a limit for n → ∞: π = lim π^n, then π = πP π is called the stationary, or equilibrium, distribution Its existence depends on the structure of the Markov chain

State Probabilities Stationary Distribution Example with p = 1/3 (figure omitted)

Kolmogorov s theorem In an irreducible, aperiodic Markov chain the stationary, or equilibrium, distribution always exists: π_j = lim_(n→∞) π_j^n, independent of the initial state. If the limit probabilities (the components of the vector π) exist, they must satisfy the equation π = πP, because π = lim π^(n+1) = lim π^n P = πP π_j defines the proportion of time (steps) the system stays in state j. Equilibrium does not mean that nothing happens in the system, but merely that the information on the initial state of the system has been forgotten

Global Balance Equations Markov chain with an infinite number of states Global Balance Equations (GBE): Σ_i π_j P_ji = Σ_i π_i P_ij, i.e., since Σ_i P_ji = 1, π_j = Σ_i π_i P_ij for all j ≥ 0 π_j P_ji is the frequency of transitions from j to i: the frequency of transitions out of j equals the frequency of transitions into j Intuition: j is visited infinitely often; for each transition out of j there must be a subsequent transition into j with probability 1

Solving the balance equations Assume here a finite state space with n states Write the balance condition for all but one of the states (n − 1 equations): the equations fix the relative values of the equilibrium probabilities, so the solution is determined up to a constant factor The last balance equation (automatically satisfied) is replaced by the normalization condition Σ_j π_j = 1

Solving the balance equations: example By the normalization, Σ_j π_j = π_0 + π_1 + π_2 = 1

Discrete-Time Markov Chain: an example Consider a two-processor computer system where time is divided into time slots, operating as follows: at most one job can arrive during any time slot an arrival in any slot can always happen with probability α = 0.5 jobs are served by whichever processor is available if both are available then the job is given to processor 1 if both processors are busy, then the job is lost when a processor is busy, it can complete the job with probability β = 0.7 during any one time slot if a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals) What is the long-term system efficiency? What is the fraction of lost jobs?

Discrete-Time Markov Chain: an example States 0, 1, 2 (number of busy processors), with transition probabilities p_00, p_01, p_10, p_11, p_12, p_20, p_21, p_22:
p_00 = 1 − α, p_01 = α, p_02 = 0
p_10 = β(1 − α), p_11 = (1 − β)(1 − α) + αβ, p_12 = α(1 − β)
p_20 = β^2(1 − α), p_21 = 2β(1 − β)(1 − α) + αβ^2, p_22 = (1 − β)^2 + 2αβ(1 − β)

Discrete-Time Markov Chain: an example With α = 0.5 and β = 0.7:
P = [p_ij] =
0.5 0.5 0
0.35 0.5 0.15
0.245 0.455 0.3

Discrete-Time Markov Chain: an example Solving π = πP together with Σ_j π_j = 1 yields: π ≈ [0.399, 0.495, 0.106]
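The stationary distribution of this example can be computed numerically, replacing one balance equation by the normalisation as described above; a small numpy sketch (ours, not from the slides):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.35, 0.5, 0.15],
              [0.245, 0.455, 0.3]])

# pi = pi P  <=>  (P.T - I) pi = 0; replace the last balance equation
# by the normalisation sum_j pi_j = 1
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

assert np.allclose(pi @ P, pi)        # stationary: pi = pi P
assert abs(pi.sum() - 1.0) < 1e-12    # normalised
assert np.allclose(pi, [0.3987, 0.4952, 0.1061], atol=5e-4)
```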

Discrete-Time Markov Chain: an example What is the long-term system efficiency? What is the fraction of lost jobs?

Continuous-time Markov chain (CTMC) The index t is now real (continuous) time By the Markov property, the probability distribution of the residence time in state n may depend on the state n, but does not depend on the time already spent in that state or in the previous ones: the only memoryless continuous distribution is the exponential Associate a rate µ_i with each state i: the state residence time is ~Exp(µ_i) As in DTMCs, use a transition matrix Q to describe the state transition behaviour CTMCs can be built from DTMCs by associating a state residence time distribution with each state

Continuous-time Markov chain (CTMC) p_i,j: as in DTMCs, the probability that, being in state i, the state after the next transition will be j p_i,i = 0: a transition always changes the process state The matrix [p_i,j] defines the DTMC embedded in the CTMC Define a generator matrix Q = [q_i,j] with q_i,j = µ_i p_i,j when i ≠ j, and q_i,i = −Σ_(j≠i) q_i,j = −µ_i (the implication of defining q_i,i as above will be clear soon) For each j ≠ i there is a distribution F_ij(t) = 1 − e^(−q_i,j t); these distributions model the delay perceived in state i when going from i to j The residence time is the minimum of these exponentials: still exponentially distributed, with rate Σ_(j≠i) q_i,j = µ_i

Continuous-time Markov chain (CTMC) Matrix Q contains the information about the transition probabilities of the embedded DTMC: given n random variables X_k ~ Exp(λ_k), X_j is the minimum of them with probability λ_j / Σ_k λ_k So, being in state i, the shortest delay (i.e. the actual next transition) will lead to state j with probability q_i,j / Σ_(k≠i) q_i,k = q_i,j / µ_i = p_i,j

Continuous-time Markov chain (CTMC) The generator matrix Q = [q_ij] also describes the infinitesimal transitions as follows: P{X(t + Δt) = j | X(t) = i} = q_ij Δt + o(Δt), i ≠ j This follows from the fact that the state residence times are negative-exponentially distributed q_ij is the rate at which state i changes to state j Denote p_i(t) = P{X(t) = i}

Continuous-time Markov chain (CTMC) Using q_i,i = −Σ_(j≠i) q_i,j: p_j(t + h) = p_j(t) + Σ_i p_i(t) q_ij h + o(h) Rearranging the terms, dividing by h and taking the limit for h → 0: dp_j(t)/dt = Σ_i p_i(t) q_ij, expressing the transient behaviour. In matrix notation: dp(t)/dt = p(t)Q, with solution p(t) = p(0) e^(Qt), where the matrix exponential is defined by the Taylor series e^(Qt) = Σ_k (Qt)^k / k!

Continuous-time Markov chain (CTMC) Steady-state probabilities: p_i = lim_(t→∞) p_i(t) If the limits exist, the derivatives vanish: lim_(t→∞) dp_i(t)/dt = 0 The p_i can then be obtained by solving the following system of linear equations: pQ = 0, Σ_i p_i = 1 Very similar to v = vP for DTMCs, since the latter yields v(P − I) = 0, so the matrix P − I can be interpreted as a generator matrix in the discrete case

Continuous-time Markov chain (CTMC) It is also possible to calculate the p_i via the associated DTMC (the embedded DTMC) Build the discrete transition matrix from the q_i,j: p_i,j = q_i,j / µ_i (i ≠ j), p_i,i = 0 Solve v = vP: the solutions v_i represent the probability that state i is visited, irrespective of the length of stay in this state Renormalize the probabilities according to the expected state residence times 1/µ_i for every state i: p_i = (v_i / µ_i) / Σ_j (v_j / µ_j) for all i
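Both routes, solving pQ = 0 directly and going through the embedded DTMC with renormalisation, give the same steady-state vector; a Python sketch (ours, with an arbitrarily chosen small generator matrix) checks this:

```python
import numpy as np

# a small generator matrix (rates chosen arbitrarily for illustration)
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [1.0, 3.0, -4.0]])

# direct method: solve p Q = 0 together with sum(p) = 1
A = np.vstack([Q.T[:-1], np.ones(3)])
p_direct = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# embedded-DTMC method: p_ij = q_ij / mu_i for i != j, p_ii = 0
mu = -np.diag(Q)                      # out-rates mu_i = -q_ii
P = Q / mu[:, None]
np.fill_diagonal(P, 0.0)
A2 = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
v = np.linalg.solve(A2, np.array([0.0, 0.0, 1.0]))  # visit probabilities v = vP
p_emb = (v / mu) / (v / mu).sum()     # reweight by mean residence times 1/mu_i

assert np.allclose(p_direct, p_emb)
```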

CTMC: example A resource (e.g. a processor) can be either available or busy; a computer system can be either operational or not The time the system is available (operational) is exponentially distributed ~Exp(λ) (i.e. the average available time is 1/λ) The time busy (not operational) is ~Exp(µ)

CTMC: example With states (available, busy), the CTMC has generator matrix Q = [[−λ, λ], [µ, −µ]] Assume initial probabilities p(0) = (0, 1), i.e. the system starts in the busy state The steady-state probability vector solves pQ = 0, Σ_(i∈S) p_i = 1, giving p = (µ/(λ+µ), λ/(λ+µ))

CTMC: example The probability vector can also be computed via the embedded DTMC, whose transition matrix is P = [[0, 1], [1, 0]] with solution v = (1/2, 1/2): this indicates that both states are visited equally often Incorporating the average state residence times (1/λ and 1/µ, respectively): p = ((1/λ)/(1/λ + 1/µ), (1/µ)/(1/λ + 1/µ)) = (µ/(λ+µ), λ/(λ+µ))

CTMC: example Transient behaviour of the CTMC Can be solved explicitly from p(t) = p(0) e^(Qt) (The slide shows a graphical representation of p_i(t) for 3λ = µ = 1)
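For the two-state example the transient can be written in closed form; the following Python sketch (ours, not from the slides; it assumes the state order (available, busy) and p(0) = (0, 1)) compares a truncated Taylor series for e^(Qt) with that closed form, for the slide's case 3λ = µ = 1:

```python
import numpy as np

lam, mu = 1.0 / 3.0, 1.0           # the slide's case 3*lam = mu = 1
Q = np.array([[-lam, lam],
              [mu, -mu]])           # states: (available, busy)
p0 = np.array([0.0, 1.0])           # p(0) = (0, 1): start in the busy state

def transient(t, terms=60):
    """p(t) = p(0) e^(Qt), matrix exponential via its Taylor series."""
    M, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ (Q * t) / k
        M = M + term
    return p0 @ M

def closed(t):
    """closed form for a two-state chain:
    p_avail(t) = mu/(lam+mu) + (p_avail(0) - mu/(lam+mu)) e^(-(lam+mu)t)"""
    s = lam + mu
    pa = mu / s + (p0[0] - mu / s) * np.exp(-s * t)
    return np.array([pa, 1.0 - pa])

for t in (0.0, 0.5, 1.0, 2.0):
    assert np.allclose(transient(t), closed(t), atol=1e-10)

# the transient converges to the steady state (mu, lam)/(lam+mu) = (0.75, 0.25)
assert np.allclose(closed(50.0), [0.75, 0.25], atol=1e-6)
```

(The plain Taylor series is fine for small Qt as here; for large t one would use the closed form or a proper matrix-exponential routine.)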

Discrete time Birth-Death Process One-dimensional DTMC with states 0, 1, 2, ..., n, n+1, ... and transitions only between neighboring states: P_ij = 0 if |i − j| > 1 (transition probabilities P_(n,n+1), P_(n,n−1) and self-loops P_(n,n)) Detailed Balance Equations (DBE): π_n P_(n,n+1) = π_(n+1) P_(n+1,n), n = 0, 1, ...

Continuous time Birth-Death Process One-dimensional CTMC with birth rates λ_n and death rates µ_n: q_(i,i+1) = λ_i, q_(i,i−1) = µ_i, q_ij = 0 for |i − j| > 1 Detailed Balance Equations: λ_n p_n = µ_(n+1) p_(n+1), n = 0, 1, ...

Continuous time Birth-Death Process Use the DBE to determine the state probabilities as a function of p_0: µ_n p_n = λ_(n−1) p_(n−1), so p_n = (λ_(n−1)/µ_n) p_(n−1) = (λ_(n−1)λ_(n−2)/(µ_n µ_(n−1))) p_(n−2) = ... = p_0 Π_(i=0)^(n−1) (λ_i/µ_(i+1)) Use the probability conservation law to find p_0: 1 = Σ_n p_n = p_0 [1 + Σ_(n=1)^∞ Π_(i=0)^(n−1) (λ_i/µ_(i+1))], so p_0 = [1 + Σ_(n=1)^∞ Π_(i=0)^(n−1) (λ_i/µ_(i+1))]^(−1), which exists if the sum is < ∞
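With constant rates λ_n = λ and µ_n = µ (an M/M/1 queue, with ρ = λ/µ < 1) the product form reduces to the geometric distribution p_n = (1 − ρ)ρ^n; a Python sketch (ours, not from the slides) evaluates the general formula with a truncated sum and checks this, together with detailed balance:

```python
lam, mu = 2.0, 5.0     # lam_n = lam, mu_n = mu for all n (an M/M/1 queue)
rho = lam / mu

# product form: p_n = p_0 * prod_{i<n} lam_i/mu_{i+1} = p_0 * rho^n
N = 200                # truncate the infinite sum (rho < 1, tail is tiny)
weights = [rho**n for n in range(N)]
p0 = 1 / sum(weights)
p = [p0 * w for w in weights]

# for constant rates this is the geometric M/M/1 distribution (1-rho) rho^n
assert abs(p0 - (1 - rho)) < 1e-12
for n in range(10):
    assert abs(p[n] - (1 - rho) * rho**n) < 1e-12
# detailed balance: lam * p_n = mu * p_{n+1}
for n in range(10):
    assert abs(lam * p[n] - mu * p[n + 1]) < 1e-12
```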

Poisson Process One of the most important models used in queueing theory; often suitable when arrivals originate from a large population of independent users N(t): the number of arrivals that have occurred in the interval (0, t) (the stochastic process we consider) N(t1, t2): the number of arrivals in the interval (t1, t2) (the increment process N(t2) − N(t1))

Poisson Process: definition A Poisson process can be characterized in different ways: as a process of independent increments; as a pure birth process with arrival intensity λ (mean arrival rate: the probability of an arrival per unit time); as the most random process with a given intensity λ

Poisson Process: definition Three equivalent definitions: 1) A Poisson process is a pure birth process: in an infinitesimal time interval dt there may occur only one arrival, and this happens with probability λdt, independent of arrivals outside the interval 2) The number of arrivals N(t) in a finite interval of length t obeys the Poisson(λt) distribution, P{N(t) = n} = e^(−λt)(λt)^n / n! Moreover, the numbers of arrivals N(t1, t2) and N(t3, t4) in non-overlapping intervals (t1 ≤ t2 ≤ t3 ≤ t4) are independent 3) The interarrival times are independent and obey the Exp(λ) distribution: P{interarrival time > t} = e^(−λt)

Interarrival and Waiting Times The times between arrivals T_1, T_2, ... are independent exponential random variables with mean 1/λ: P{T_1 > t} = P{N(t) = 0} = e^(−λt) The total waiting time S_n = T_1 + ... + T_n until the nth event has a Gamma (Erlang-n) distribution (The slide shows the arrival times S_1, ..., S_4 and interarrival times T_1, ..., T_4 on a time axis)

Poisson Process: properties Conditioning on the number of arrivals: Given that in the interval (0, t) the number of arrivals is N(t) = n, these n arrivals are independently and uniformly distributed in the interval. Superposition: The superposition of two Poisson processes with intensities λ 1 and λ 2 is a Poisson process with intensity λ = λ 1 + λ 2. Random selection: If a random selection is made from a Poisson process with intensity λ such that each arrival is selected with probability p, independently of the others, the resulting process is a Poisson process with intensity pλ.
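The superposition and random-selection properties are easy to illustrate by simulation; a seeded Python sketch (ours, not from the slides) merges two Poisson streams and then thins the result, checking the empirical rates:

```python
import random

random.seed(42)
lam1, lam2, p, T = 3.0, 2.0, 0.4, 10_000.0

def poisson_arrivals(rate, horizon):
    """arrival times on (0, horizon): i.i.d. Exp(rate) interarrival times."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

merged = sorted(poisson_arrivals(lam1, T) + poisson_arrivals(lam2, T))
# superposition: the merged stream has rate lam1 + lam2
assert abs(len(merged) / T - (lam1 + lam2)) < 0.1

kept = [t for t in merged if random.random() < p]
# random selection: the thinned stream has rate p * (lam1 + lam2)
assert abs(len(kept) / T - p * (lam1 + lam2)) < 0.1
```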

Poisson Process: properties Random split: If a Poisson process with intensity λ is randomly split into two subprocesses with probabilities p_1 and p_2, where p_1 + p_2 = 1, then the resulting processes are independent Poisson processes with intensities p_1 λ and p_2 λ. PASTA: Poisson Arrivals See Time Averages. For instance, customers with Poisson arrivals see the system as if they came into the system at a random instant of time, even though they themselves drive the evolution of the system

PASTA property Proof sketch: consider a system with a number of users X(t), and the event e = there was at least one arrival in (t−h, t] For a homogeneous Poisson process P{e} = P{N(h) ≥ 1} (for a non-Poisson process this wouldn t be true!) By memorylessness: P{N(h) ≥ 1 | X_(t−h) = i} = P{N(h) ≥ 1}, thus P{N(h) ≥ 1, X_(t−h) = i} = P{N(h) ≥ 1} P{X_(t−h) = i} Hence: P{X_(t−h) = i | N(h) ≥ 1} = P{X_(t−h) = i} Now take the limit for h → 0

PASTA property Arrivals act as random observers (this is also called the Random Observer Property, ROP) It follows from the memoryless property of the exponential distribution: the remaining time to the next arrival has the same exponential distribution irrespective of the time that has already elapsed since the previous arrival