1 Stationary point processes

Copyright © by Karl Sigman

We present here a brief introduction to stationary point processes on the real line.

1.1 Basic notation for point processes

We consider simple point processes: sequences of points on the real line, ψ = {t_n : n ≥ 0}, with 0 ≤ t_0 < t_1 < t_2 < ⋯ and lim_{n→∞} t_n = ∞. {N(t) : t ≥ 0} is the counting process: N(t) = the number of points that lie in the interval (0, t], with N(0) := 0. Interarrival times are defined by T_n = t_{n+1} − t_n, n ≥ 0, and we can alternatively describe ψ by its interarrival-time representation {t_0, {T_n : n ≥ 0}}; it completely determines ψ and vice versa.

Now suppose that t_0 = 0. If t_n ≤ t < t_{n+1}, then N(t) = n, and so in general we can write t_{N(t)} ≤ t < t_{N(t)+1}: t_{N(t)} is the last point before or at time t, and t_{N(t)+1} is the first point strictly after time t. We thus define A(t) := t_{N(t)+1} − t, the forward recurrence time; it is the time until the next point strictly after time t and is also called the excess. Similarly, we define B(t) := t − t_{N(t)}, the backwards recurrence time; it is the time since the last point before or at time t and is also called the age. (If t_0 > 0, then B(t) := t for 0 ≤ t < t_0.) Finally, S(t) := B(t) + A(t) = t_{N(t)+1} − t_{N(t)} = T_{N(t)} is called the spread; it is the length of the interarrival time covering t.

Note that graphing A(t) yields a sequence of right triangles: at each point t_n it jumps to height T_n, then decreases linearly at rate 1, reaching 0 at time t_{n+1}. B(t) is the mirror image: at time t_n it starts at 0 and increases linearly at rate 1 until time t_{n+1}, at which time it is of height T_n. Finally, graphing S(t) yields squares of height and length T_n during t ∈ [t_n, t_{n+1}).

For any s ≥ 0, ψ_s = {t_n(s) : n ≥ 0} is the point process shifted into the future so as to make time s the origin: t_0(s) = A(s), the excess or forward recurrence time, and t_n(s) = t_{N(s)+1+n} − s, n ≥ 0, with T_n(s) = t_{n+1}(s) − t_n(s).
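The quantities N(t), B(t), A(t), S(t) and the two shift operations can be computed directly from a list of points. A minimal sketch (the helper names `recurrence_times`, `shift_time`, and `shift_point` are ours, not from the notes; it assumes t_0 = 0 and 0 ≤ t strictly before the last listed point):

```python
import bisect

def recurrence_times(points, t):
    """For a simple point process with t_0 = 0 (points sorted, strictly
    increasing) and 0 <= t < points[-1], return (N(t), B(t), A(t), S(t))."""
    i = bisect.bisect_right(points, t) - 1  # index of last point <= t, i.e. N(t)
    age = t - points[i]                     # B(t): time since last point at or before t
    excess = points[i + 1] - t              # A(t): time until next point strictly after t
    spread = points[i + 1] - points[i]      # S(t) = B(t) + A(t) = T_{N(t)}
    return i, age, excess, spread

def shift_time(points, s):
    """psi_s: re-origin at time s; the first remaining point is t_0(s) = A(s)."""
    return [t - s for t in points if t > s]

def shift_point(points, j):
    """psi^(j): re-origin at the point t_j; the initial point becomes 0."""
    return [t - points[j] for t in points[j:]]

pts = [0.0, 1.0, 3.0, 6.0, 10.0]
print(recurrence_times(pts, 4.0))  # t = 4 lies in [3, 6): (2, 1.0, 2.0, 3.0)
print(shift_time(pts, 4.0))        # [2.0, 6.0]
print(shift_point(pts, 2))         # [0.0, 3.0, 7.0]
```

Note that `shift_time` keeps only points strictly after s, matching the definition of the excess.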
We can also shift to a particular point, t_j, yielding ψ^(j) = {t_n(t_j) : n ≥ 0}, which has initial point t_0(t_j) = 0 and t_n(t_j) = t_{j+n} − t_j, n ≥ 0, with T_n(t_j) = T_{j+n}, n ≥ 0. Thus ψ^(j) is the point process with a point at the origin and interarrival times {T_{j+n} : n ≥ 0}.

1.2 Random point processes

When the t_n are rvs, the point process is called a random point process; but we will just say it is a point process (pp) for short. We say that two point processes ψ_1 = {t_n^(1)} and ψ_2 = {t_n^(2)} have the same distribution (i.e., are identically distributed) if all the finite joint distributions of {t_n^(1)} are the same as those of {t_n^(2)}. For example, (t_1^(1), t_5^(1), t_9^(1)) must have the same joint distribution as (t_1^(2), t_5^(2), t_9^(2)):

P(t_1^(1) ≤ x_1, t_5^(1) ≤ x_5, t_9^(1) ≤ x_9) = P(t_1^(2) ≤ x_1, t_5^(2) ≤ x_5, t_9^(2) ≤ x_9), for all x_1, x_5, x_9.

Equivalently, all finite joint distributions of the interarrival-time representations {t_0^(1), {T_n^(1) : n ≥ 0}} and {t_0^(2), {T_n^(2) : n ≥ 0}} are the same.

Definition 1.1 ψ is called time stationary if ψ_s has the same distribution for all s ≥ 0. ψ is called point stationary if ψ^(j) has the same distribution for all j ≥ 0.

Thus, under time stationarity, shifting by any s does not change the distribution; it is the same for all s, namely the distribution of ψ itself: ψ_s =d ψ. Similarly, under point stationarity, shifting to any point t_j does not change the distribution: ψ^(j) =d ψ.

The following is immediate from the definitions of stationarity:

Proposition 1.1 A point process ψ = {t_n : n ≥ 0} is point stationary iff P(t_0 = 0) = 1 and the interarrival-time sequence {T_n : n ≥ 0} is stationary, whereas it is time stationary iff P(t_0 = 0) = 0 and the counting process {N(t) : t ≥ 0} has stationary increments.

Note that the Poisson process is point stationary if we place a point t_0 = 0 at the origin, whereas it is time stationary if we remove the point from the origin. A renewal process (with iid interarrival times distributed as F) will be point stationary if the initial delay t_0 = 0. A renewal process will be time stationary if the delay t_0 is distributed as F_e, the equilibrium distribution of F. (This fact will be proved later.)

1.3 Functionals of point processes

Let f(ψ) be any non-negative function of ψ. Examples:

1. f(ψ) = t_0, in which case f(ψ_s) = t_0(s) = A(s). (Note that f(ψ^(j)) = 0, because ψ^(j) has its initial point at the origin by definition.)
2. f(ψ) = I{t_0 > x}, in which case f(ψ_s) = I{A(s) > x}.
3. f(ψ) = I{t_0 ≤ x, t_1 ≤ y_1, …, t_k ≤ y_k}, in which case f(ψ_s) = I{A(s) ≤ x, t_1(s) ≤ y_1, …, t_k(s) ≤ y_k}.
4. f(ψ) = T_0, in which case f(ψ^(j)) = T_j.
5. f(ψ) = I{T_0 ≤ x}, in which case f(ψ^(j)) = I{T_j ≤ x}.

The indicator examples above are all special cases of f(ψ) = I{ψ ∈ E}, where E denotes a set of point processes. For example,

E = {all point processes : t_0 > x},

or

E = {all point processes : t_0 ≤ x, t_1 ≤ y_1, …, t_k ≤ y_k},

or

E = {all point processes : T_0 ≤ x}.

Taking the expected value of such an indicator yields P(ψ ∈ E), and the entire distribution of ψ is determined by considering all such E. We denote the distribution of ψ by P(ψ ∈ ·), just as, in the case of a real-valued random variable X with cdf F(x) = P(X ≤ x), we sometimes write F = F(·). This is done for simplicity and elegance of notation.

1.4 Point- and time-stationary versions

Given a point process ψ, assuming the limits exist, define the distributions of two (new) point processes ψ^0 and ψ* by

P(ψ^0 ∈ ·) := lim_{n→∞} (1/n) Σ_{j=1}^n P(ψ^(j) ∈ ·),  (1)

P(ψ* ∈ ·) := lim_{t→∞} (1/t) ∫_0^t P(ψ_s ∈ ·) ds.  (2)

P(ψ^0 ∈ ·) represents the point-stationary distribution of ψ, and P(ψ* ∈ ·) represents the time-stationary distribution of ψ. ψ^0 is called a point-stationary version of ψ, and ψ* is called a time-stationary version of ψ.

We view the distribution P(ψ^0 ∈ ·) as that obtained by randomly observing the point process way out at a point. By random we mean that we observe at point t_J and consider the shifted pp ψ^(J), where J is chosen according to the discrete uniform distribution over the integers {1, 2, …, n}, and then we let n → ∞. J has probability mass function P(J = j) = 1/n, j ∈ {1, …, n}, and thus the distribution of ψ^(J) is exactly the right-hand side of (1) before we take the limit.

We view the distribution P(ψ* ∈ ·) as that obtained by randomly observing the point process ψ way out in time. By random we mean that we observe at time S and consider the shifted pp ψ_S, where S is uniformly distributed over the interval (0, t), and then we let t → ∞. S has density function f(s) = 1/t, 0 < s < t, and thus the distribution of ψ_S is exactly the right-hand side of (2) before we take the limit.

Proposition 1.2 The distribution of ψ* is time stationary and the distribution of ψ^0 is point stationary.

Proof: We prove the time-stationary case, the point-stationary case being similar. We will show that ψ*_u has the same distribution as ψ* for any u ≥ 0. Note that

P(ψ*_u ∈ ·) = lim_{t→∞} (1/t) ∫_0^t P(ψ_{s+u} ∈ ·) ds.

So, for any fixed u, the distribution of ψ*_u is identical to the distribution obtained by randomly observing ψ_u way out in time. But this is the same as observing ψ way out in time, as is seen by using the equalities below and letting t → ∞:

(1/t) ∫_0^t P(ψ_{s+u} ∈ ·) ds = (1/t) ∫_u^{u+t} P(ψ_s ∈ ·) ds
  = (1/t) ∫_0^{u+t} P(ψ_s ∈ ·) ds − (1/t) ∫_0^u P(ψ_s ∈ ·) ds.

Since u is fixed, (1/t) ∫_0^u P(ψ_s ∈ ·) ds → 0 as t → ∞, while

(1/t) ∫_0^{u+t} P(ψ_s ∈ ·) ds = ((u+t)/t) · (1/(u+t)) ∫_0^{u+t} P(ψ_s ∈ ·) ds → P(ψ* ∈ ·).

1.5 Examples

1. Renewal process. Let Ψ = {t_n} be a renewal process with interarrival-time distribution F(x) := P(T_n ≤ x), F(0) = 0, and delay t_0 ∼ G (independent of the other T_n). Let 0 < λ := {E(T)}^{-1} < ∞. Clearly, by the iid structure, Ψ^(0) is a non-delayed version of Ψ, as is Ψ^(j) for any j; thus Ψ becomes point stationary after merely shifting to the initial point t_0 (i.e., removing the delay), so construction of a point-stationary version is immediate.

We now proceed to the time-stationary version. Consider

lim_{t→∞} (1/t) ∫_0^t P(Ψ_s ∈ E) ds.

For each s, the distribution of Ψ_s is completely determined by the distribution of the forward recurrence time A(s), because by the iid structure A(s) is independent of the future interarrival times, {T_n(s) : n ≥ 0}, which are iid ∼ F. Letting {X_n : n ≥ 1} be iid ∼ F and independent of Ψ, it follows that for each s,

(t_0(s), T(s)) =d (A(s), X_1, X_2, …);

that is, Ψ_s is a delayed renewal process with delay t_0 = A(s). As s varies, only the distribution of A(s) changes; thus we conclude that the time-stationary version is in fact a delayed renewal process with delay t_0 ∼ F_e, where F_e is the distribution obtained by randomly observing A(s) way out in time:

F_e(x) = lim_{t→∞} (1/t) ∫_0^t P(A(s) ≤ x) ds.

This distribution is referred to as the stationary forward recurrence time distribution for the renewal process. By the Renewal Reward Theorem, it can be computed as

F_e(x) = λ ∫_0^x F̄(y) dy,

where F̄(x) = 1 − F(x) = P(T > x) is the tail of F.
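The formula F_e(x) = λ ∫_0^x F̄(y) dy is easy to evaluate numerically. The sketch below (the helper `equilibrium_cdf` is ours, not from the notes; it assumes a finite mean and uses the trapezoid rule) checks the familiar fact that the exponential distribution is its own equilibrium distribution — which, by memorylessness, is why a Poisson process is time stationary when its delay is again exponential:

```python
import math

def equilibrium_cdf(tail, mean, x, steps=10000):
    """F_e(x) = (1 / E T) * integral_0^x Fbar(y) dy, via the trapezoid rule."""
    h = x / steps
    area = 0.5 * (tail(0.0) + tail(x)) + sum(tail(i * h) for i in range(1, steps))
    return area * h / mean

lam = 2.0
exp_tail = lambda y: math.exp(-lam * y)  # Fbar for the exponential with rate 2
fe = equilibrium_cdf(exp_tail, 1.0 / lam, 1.0)
print(round(fe, 4))                      # 1 - e^{-2} ≈ 0.8647, i.e. F_e(1) = F(1)
```

Any tail function with a finite mean can be plugged in; for non-exponential F, the output differs from F(x), which is exactly why a stationary delay t_0 ∼ F_e ≠ F is needed.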

In general, constructing point- and time-stationary versions is not as easy as for renewal processes. But a lot can be learned by studying the simple example in which the interarrival-time sequence is alternating: {T_n} = {1, 2, 1, 2, 1, 2, …}. For then it is immediate that Ψ^0 is given by t_0^0 = 0 and

{T_n^0} = {1, 2, 1, 2, …} w.p. 0.5;  {2, 1, 2, 1, …} w.p. 0.5.

Randomly choosing a point t_j, we would be equally likely to be at the beginning of a length-1 or a length-2 interarrival time; this yields the 0.5 mixture for Ψ^0.

For Ψ*, let U be uniformly distributed on (0, 1). Then

{t_0*, {T_n* : n ≥ 0}} = {U, 2, 1, 2, …} w.p. 1/3;  {2U, 1, 2, 1, …} w.p. 2/3.

Note how t_0* has a continuous distribution, given as a (1/3, 2/3) mixture of U and 2U. Also observe (verify) that this mixture is in fact the equilibrium distribution, F_e, of F, where F is the distribution of the T_n^0: F(x) = P(T_n^0 ≤ x), with P(T_n^0 = 1) = P(T_n^0 = 2) = 0.5. If you randomly select a time s way out in the infinite future, then you will land within some interarrival time; with probability 1/3 it will be of length 1 and have a remaining length distributed as U (the equilibrium distribution of 1), and with probability 2/3 it will be of length 2 and have a remaining length distributed as 2U ∼ Unif(0, 2) (the equilibrium distribution of 2).

2 From ψ^0 to ψ*

Our objective here is to derive the inversion formula expressing the distribution of a time-stationary (and ergodic) ψ* in terms of a point-stationary (and ergodic) ψ^0:

P(ψ* ∈ ·) = E{∫_0^T I{ψ_s^0 ∈ ·} ds} / E(T).  (3)

Analogous to renewal reward theory, this formula says that a time average can be expressed as an expected value over a cycle divided by an expected cycle length, where now a cycle length is an interarrival time. Cycles from ψ^0 are stationary and thus identically distributed, so we pick the first one (starting from t_0^0 = 0) for simplicity and denote it by T = T_0 in the formula. (Note that T_0 = t_1 − t_0 = t_1.) Since λ = 1/E(T), the inversion formula can be rewritten as

P(ψ* ∈ ·) = λ E{∫_0^T I{ψ_s^0 ∈ ·} ds}.
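As a quick sanity check of (3) on the alternating example above (our own check, not from the notes): take the event {t_0 ≤ x}. Over the first cycle of ψ^0 the excess is T − s, so ∫_0^T I{T − s ≤ x} ds = min(T, x), and the right-hand side of (3) becomes λ E{min(T, x)}; this must match P(t_0* ≤ x) computed directly from the (1/3, 2/3) mixture of U and 2U:

```python
x = 0.7                  # any x in (0, 1); both sides reduce to (2/3) * x there
lam = 2.0 / 3.0          # rate of the alternating process: E(T) = 1.5

# cycle side of (3): lambda * E{ min(T, x) }, with T = 1 or 2 each w.p. 0.5
cycle_side = lam * 0.5 * (min(1.0, x) + min(2.0, x))

# direct side: P(t_0* <= x) for the (1/3, 2/3) mixture of U and 2U
mixture_side = (1.0 / 3.0) * min(x, 1.0) + (2.0 / 3.0) * min(x, 2.0) / 2.0

print(cycle_side, mixture_side)  # agree: both ≈ 7/15 ≈ 0.4667 for x = 0.7
```

The agreement is exact here (no simulation needed), since both sides can be evaluated in closed form for this two-point interarrival distribution.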

Recall that by definition, to obtain ψ* from ψ^0, we take ψ^0 and randomly observe it way out in time:

P(ψ* ∈ ·) = lim_{t→∞} (1/t) ∫_0^t P(ψ_s^0 ∈ ·) ds,

so we need to show that the limit above yields the right-hand side of (3). To do so we use functional notation and proceed with a sample-path analysis as follows:

(1/t) ∫_0^t f(ψ_s^0) ds ≈ (1/t) Σ_{j=1}^{N^0(t)} ∫_{t_{j−1}}^{t_j} f(ψ_s^0) ds,  (4)

where the actual difference between the two sides is asymptotically negligible (as t → ∞) if, for example, f is a bounded function. Defining

Y_j = ∫_{t_{j−1}}^{t_j} f(ψ_s^0) ds,

{Y_j : j ≥ 1} forms a stationary (and ergodic) sequence, because ψ^0 is assumed point stationary (and ergodic). Thus, with probability 1,

lim_{t→∞} (1/t) ∫_0^t f(ψ_s^0) ds = lim_{t→∞} (N^0(t)/t) · (1/N^0(t)) Σ_{j=1}^{N^0(t)} ∫_{t_{j−1}}^{t_j} f(ψ_s^0) ds  (5)
  = λ E(Y_1)  (6)
  = λ E{∫_0^T f(ψ_s^0) ds}  (7)
  = E{∫_0^T f(ψ_s^0) ds} / E(T)  (8)

by the SLLN for stationary ergodic sequences. If f is a bounded function (such as an indicator, which is always bounded by 1), we can use the dominated convergence theorem, which allows us to take expected values inside the left-hand side of (5):

lim_{t→∞} (1/t) ∫_0^t E{f(ψ_s^0)} ds = E{∫_0^T f(ψ_s^0) ds} / E(T).

Using f(ψ) = I{ψ ∈ ·} then yields (3).

3 From ψ* to ψ^0

Our objective here is to derive the inversion formula expressing the distribution of a point-stationary (and ergodic) ψ^0 in terms of a time-stationary (and ergodic) ψ*:

P(ψ^0 ∈ ·) = E{Σ_{j=1}^{N*(1)} I{ψ*^(j) ∈ ·}} / E(N*(1)).  (9)

Here the interpretation is that a point average can be expressed as an expected value over a unit of time divided by the expected number of points in a unit of time. ψ* has the same distribution over any one unit of time by time stationarity, so we choose the first unit of time for simplicity in the formula.

Recall that by definition, to obtain ψ^0 from ψ*, we take ψ* and randomly observe it way out at a point:

P(ψ^0 ∈ ·) = lim_{n→∞} (1/n) Σ_{j=1}^n P(ψ*^(j) ∈ ·),

so we need to show that the limit above yields the right-hand side of (9). We use functional notation and proceed with a sample-path analysis as follows. First observe that

lim_{n→∞} (1/n) Σ_{j=1}^n f(ψ*^(j)) = lim_{t→∞} (1/N*(t)) Σ_{j=1}^{N*(t)} f(ψ*^(j)).  (10)

Taking the time limit along the integer time points n, the limit must also be given by

lim_{n→∞} (1/N*(n)) Σ_{j=1}^{N*(n)} f(ψ*^(j)).  (11)

But ψ* is time stationary; that is, {N*(t) : t ≥ 0} has stationary increments. Thus {Δ_i : i ≥ 1} forms a stationary (and ergodic) sequence, where Δ_i = N*(i) − N*(i−1), i ≥ 1, is the number of points in the unit time interval (i−1, i], and N*(0) = 0. So each Δ_i has the same distribution as Δ_1 = N*(1), and N*(n) = Δ_1 + ⋯ + Δ_n. Let

J_i = Σ_{j=N*(i−1)+1}^{N*(i)} f(ψ*^(j)).

Then, again by stationarity of ψ*, {J_i : i ≥ 1} also forms a stationary (and ergodic) sequence, with each J_i distributed as J_1 (the sum over the first unit of time). Therefore

(1/N*(n)) Σ_{j=1}^{N*(n)} f(ψ*^(j)) = (Σ_{i=1}^n J_i) / (Σ_{i=1}^n Δ_i)  (12)
  = ((1/n) Σ_{i=1}^n J_i) / ((1/n) Σ_{i=1}^n Δ_i).  (13)

Taking the limit as n → ∞ and using the SLLN for stationary ergodic sequences on both numerator and denominator yields (with probability one)

E(J_1) / E(Δ_1),

which is precisely the right-hand side of (9). Thus we conclude that wp1,

lim_{n→∞} (1/n) Σ_{j=1}^n f(ψ*^(j)) = E{Σ_{j=1}^{N*(1)} f(ψ*^(j))} / E(N*(1)).

If f is bounded, then the dominated convergence theorem yields

lim_{n→∞} (1/n) Σ_{j=1}^n E{f(ψ*^(j))} = E{Σ_{j=1}^{N*(1)} f(ψ*^(j))} / E(N*(1)),

which, when applied to f(ψ) = I{ψ ∈ ·}, yields (9).

Remark: Since E(N*(1)) = λ, the formula can also be expressed as

λ^{-1} E{Σ_{j=1}^{N*(1)} f(ψ*^(j))}.

3.1 Computing expected values

A more general way of stating the inversion formulas (and a consequence of our method of proof of them) is in terms of functions f = f(ψ):

E(f(ψ*)) = E{∫_0^T f(ψ_s^0) ds} / E(T),  (14)

E(f(ψ^0)) = E{Σ_{j=1}^{N*(1)} f(ψ*^(j))} / E(N*(1)).  (15)

For example, when f(ψ) = I{ψ ∈ ·} we get the original formulas.

3.2 Examples

Here we explore (14) and see that it is completely analogous to what happens in renewal theory. We will give an application of (15) in Section ??.
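The point-average formula above can be sanity-checked by simulation on the alternating example, whose time-stationary version is the 1, 2, 1, 2, … pattern shifted by an independent phase uniform on one period of length 3. Taking f(ψ) = T_0 (the first interarrival time), the point average is the stationary mean interarrival time E(T^0) = 1.5, and the formula says it equals the expected sum of f over the points in (0, 1] divided by the expected number of such points. A Monte Carlo sketch (our own construction, not from the notes):

```python
import random

rng = random.Random(3)

def stationary_alternating(phase):
    """Points near the origin of the time-stationary alternating process:
    gaps 1, 2, 1, 2, ... shifted by a phase that is uniform on (0, 3)."""
    pts = []
    base = -6.0
    while base < 9.0:
        pts.extend([base + phase, base + phase + 1.0])
        base += 3.0
    return pts  # already sorted, gaps alternate 1 and 2

num = 0.0   # accumulates sum over points t_j in (0, 1] of f = the gap after t_j
den = 0.0   # accumulates N*(1), the number of points in (0, 1]
trials = 60000
for _ in range(trials):
    pts = stationary_alternating(rng.uniform(0.0, 3.0))
    for i, t in enumerate(pts):
        if 0.0 < t <= 1.0:
            num += pts[i + 1] - t   # interarrival time starting at this point
            den += 1.0

print(round(num / den, 2))  # ≈ 1.5 = E(T^0); also den/trials ≈ lambda = 2/3
```

Per trial, the window (0, 1] contains one point followed by a gap of 1 (probability 1/3), one point followed by a gap of 2 (probability 1/3), or no point at all (probability 1/3), so the ratio converges to (1/3 · 1 + 1/3 · 2)/(2/3) = 1.5 as claimed.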

9 . f(ψ) = t, in which case f(ψ s ) = t (s) = A(s) and then (4) yields E(t ) = E{ T A (s)ds } E(T ) = E({T } 2 ) 2E(T ). 2. f(ψ) = I{t > x}, in which case f(ψ s ) = I{A(s) > x} and then (4) yields P (t > x) = E{ T I{A (s) > x}ds } E(T ) = E(T x) + E(T ) = F e (x), the tail of the equilibrium distribution of F (x) = P (T x). Note that the previous example computed the mean of F e. 3. Recall the interevent time representation for ψ, {t, {Tn : n }}. Let us compute the distribution of T = t t the first interarrival time. We use f(ψ) = I{T x} yielding P (T x) = E{ T I{T (s) x}ds }. E(T ) Noting that T (s) = T yields for s [, T ) and hence is constant over the first cycle and finally T I{T (s) x}ds = T I{T x}ds = T I{T x}, P (T x) = λe{t I{T x}}. (6) In general, T and T are dependent rvs so that in general we need to know the joint distribution of them inorder to compute the above expected value. But for a renewal process they are independent in which case E{T I{T x}} = E(T )P (T x) = λ F (x), and so (as we well know) P (T x) = F (x) for a renewal process: For a renewal process all interarrival times are iid distributed as F ; only the delay t is different. Let us apply (6) to the case when {T n : n } = {, 2,, 2,, 2,...}, t =. Since here, the rvs {T n} are discrete, we can consider the probability mass function analog P (T = ) = λe{t I{T = }} 9

and

P(T_0* = 2) = λ E{T_0 I{T_1 = 2}}.

Conditioning on the events {T_0 = 1} and {T_0 = 2} (each with probability 0.5) yields P(T_0* = 1) = 2/3 and P(T_0* = 2) = 1/3. For example, given T_0 = 1 we have T_1 = 2, so T_0 I{T_1 = 1} = 0, while given T_0 = 2 we have T_1 = 1, so T_0 I{T_1 = 1} = 2. Thus P(T_0* = 1) = λ(2)(0.5) = λ = 2/3.

4. The above example can be extended to all finite-dimensional distributions. For example, we can derive a formula for

P(t_0* > x, T_0* ≤ y_0, …, T_k* ≤ y_k),

which the reader can check is given by

λ E{(T_0 − x)^+ I{T_1 ≤ y_0, …, T_{k+1} ≤ y_k}}.

The beauty of this is that we can completely express all the joint distributions of {t_0*, {T_n*}} in terms of the joint distributions of {T_n^0}.
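This joint formula can also be checked on the alternating example (our own check, not from the notes). Under ψ^0, (T_0, T_1) is (1, 2) or (2, 1), each with probability 0.5, so for k = 0, x = 0.5, y_0 = 1 the formula gives λ E{(T_0 − 0.5)^+ I{T_1 ≤ 1}} = (2/3)(0.5)(1.5) = 0.5; we compare this against a direct random-time simulation of Ψ*:

```python
import random

rng = random.Random(5)
x, y0 = 0.5, 1.0

# point-stationary side: lambda * E{ (T_0 - x)^+ I{T_1 <= y0} }
lam = 2.0 / 3.0
rhs = lam * 0.5 * ((2.0 - x) * 1.0 + (1.0 - x) * 0.0)  # (T_0,T_1) = (2,1) or (1,2)

# time-stationary side: drop a uniform time into one period of 1, 2, 1, 2, ...
trials = 50000
hits = 0
for _ in range(trials):
    s = rng.uniform(0.0, 3.0)
    if s < 1.0:
        t0_star, T0_star = 1.0 - s, 2.0   # landed in the length-1 interarrival
    else:
        t0_star, T0_star = 3.0 - s, 1.0   # landed in the length-2 interarrival
    if t0_star > x and T0_star <= y0:
        hits += 1

print(round(rhs, 3), round(hits / trials, 3))  # 0.5 and ≈ 0.5
```

The event {t_0* > 0.5, T_0* ≤ 1} occurs exactly when the random time lands in a length-2 interarrival with more than 0.5 remaining, i.e. with probability (2/3)(3/4) = 0.5, matching the formula.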


More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information

Lecture Notes 7 Stationary Random Processes. Strict-Sense and Wide-Sense Stationarity. Autocorrelation Function of a Stationary Process

Lecture Notes 7 Stationary Random Processes. Strict-Sense and Wide-Sense Stationarity. Autocorrelation Function of a Stationary Process Lecture Notes 7 Stationary Random Processes Strict-Sense and Wide-Sense Stationarity Autocorrelation Function of a Stationary Process Power Spectral Density Continuity and Integration of Random Processes

More information

Fluid Heuristics, Lyapunov Bounds and E cient Importance Sampling for a Heavy-tailed G/G/1 Queue

Fluid Heuristics, Lyapunov Bounds and E cient Importance Sampling for a Heavy-tailed G/G/1 Queue Fluid Heuristics, Lyapunov Bounds and E cient Importance Sampling for a Heavy-tailed G/G/1 Queue J. Blanchet, P. Glynn, and J. C. Liu. September, 2007 Abstract We develop a strongly e cient rare-event

More information

Class 11 Non-Parametric Models of a Service System; GI/GI/1, GI/GI/n: Exact & Approximate Analysis.

Class 11 Non-Parametric Models of a Service System; GI/GI/1, GI/GI/n: Exact & Approximate Analysis. Service Engineering Class 11 Non-Parametric Models of a Service System; GI/GI/1, GI/GI/n: Exact & Approximate Analysis. G/G/1 Queue: Virtual Waiting Time (Unfinished Work). GI/GI/1: Lindley s Equations

More information

IEOR 6711: Stochastic Models I Fall 2012, Professor Whitt, Thursday, October 4 Renewal Theory: Renewal Reward Processes

IEOR 6711: Stochastic Models I Fall 2012, Professor Whitt, Thursday, October 4 Renewal Theory: Renewal Reward Processes IEOR 67: Stochastic Models I Fall 202, Professor Whitt, Thursday, October 4 Renewal Theory: Renewal Reward Processes Simple Renewal-Reward Theory Suppose that we have a sequence of i.i.d. random vectors

More information

Class 8 Review Problems 18.05, Spring 2014

Class 8 Review Problems 18.05, Spring 2014 1 Counting and Probability Class 8 Review Problems 18.05, Spring 2014 1. (a) How many ways can you arrange the letters in the word STATISTICS? (e.g. SSSTTTIIAC counts as one arrangement.) (b) If all arrangements

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

N =

N = Problem 1. The following matrix A is diagonalizable. Find e At by the diagonalization method. 3 8 10 4 8 11 4 10 13 Problem 2. The following matrix is nilpotent. Find e tn. 0 1 0 0 N = 0 0 1 0 0 0 0 1.

More information

CS261: Problem Set #3

CS261: Problem Set #3 CS261: Problem Set #3 Due by 11:59 PM on Tuesday, February 23, 2016 Instructions: (1) Form a group of 1-3 students. You should turn in only one write-up for your entire group. (2) Submission instructions:

More information

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST J. Appl. Prob. 45, 568 574 (28) Printed in England Applied Probability Trust 28 RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST EROL A. PEKÖZ, Boston University SHELDON

More information

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 2. Poisson Processes Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Outline Introduction to Poisson Processes Definition of arrival process Definition

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016

Aditya Bhaskara CS 5968/6968, Lecture 1: Introduction and Review 12 January 2016 Lecture 1: Introduction and Review We begin with a short introduction to the course, and logistics. We then survey some basics about approximation algorithms and probability. We also introduce some of

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes? IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only

More information

ECE353: Probability and Random Processes. Lecture 18 - Stochastic Processes

ECE353: Probability and Random Processes. Lecture 18 - Stochastic Processes ECE353: Probability and Random Processes Lecture 18 - Stochastic Processes Xiao Fu School of Electrical Engineering and Computer Science Oregon State University E-mail: xiao.fu@oregonstate.edu From RV

More information

EE/CpE 345. Modeling and Simulation. Fall Class 5 September 30, 2002

EE/CpE 345. Modeling and Simulation. Fall Class 5 September 30, 2002 EE/CpE 345 Modeling and Simulation Class 5 September 30, 2002 Statistical Models in Simulation Real World phenomena of interest Sample phenomena select distribution Probabilistic, not deterministic Model

More information

1 Acceptance-Rejection Method

1 Acceptance-Rejection Method Copyright c 2016 by Karl Sigman 1 Acceptance-Rejection Method As we already know, finding an explicit formula for F 1 (y), y [0, 1], for the cdf of a rv X we wish to generate, F (x) = P (X x), x R, is

More information

STAT 380 Markov Chains

STAT 380 Markov Chains STAT 380 Markov Chains Richard Lockhart Simon Fraser University Spring 2016 Richard Lockhart (Simon Fraser University) STAT 380 Markov Chains Spring 2016 1 / 38 1/41 PoissonProcesses.pdf (#2) Poisson Processes

More information

DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition

DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition DISCRETE STOCHASTIC PROCESSES Draft of 2nd Edition R. G. Gallager January 31, 2011 i ii Preface These notes are a draft of a major rewrite of a text [9] of the same name. The notes and the text are outgrowths

More information

Solution: The process is a compound Poisson Process with E[N (t)] = λt/p by Wald's equation.

Solution: The process is a compound Poisson Process with E[N (t)] = λt/p by Wald's equation. Solutions Stochastic Processes and Simulation II, May 18, 217 Problem 1: Poisson Processes Let {N(t), t } be a homogeneous Poisson Process on (, ) with rate λ. Let {S i, i = 1, 2, } be the points of the

More information

11. Further Issues in Using OLS with TS Data

11. Further Issues in Using OLS with TS Data 11. Further Issues in Using OLS with TS Data With TS, including lags of the dependent variable often allow us to fit much better the variation in y Exact distribution theory is rarely available in TS applications,

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Stat 150 Practice Final Spring 2015

Stat 150 Practice Final Spring 2015 Stat 50 Practice Final Spring 205 Instructor: Allan Sly Name: SID: There are 8 questions. Attempt all questions and show your working - solutions without explanation will not receive full credit. Answer

More information

T. Liggett Mathematics 171 Final Exam June 8, 2011

T. Liggett Mathematics 171 Final Exam June 8, 2011 T. Liggett Mathematics 171 Final Exam June 8, 2011 1. The continuous time renewal chain X t has state space S = {0, 1, 2,...} and transition rates (i.e., Q matrix) given by q(n, n 1) = δ n and q(0, n)

More information

Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions

Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions 1 / 36 Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions Leonardo Rojas-Nandayapa The University of Queensland ANZAPW March, 2015. Barossa Valley, SA, Australia.

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

Homework 4 due on Thursday, December 15 at 5 PM (hard deadline).

Homework 4 due on Thursday, December 15 at 5 PM (hard deadline). Large-Time Behavior for Continuous-Time Markov Chains Friday, December 02, 2011 10:58 AM Homework 4 due on Thursday, December 15 at 5 PM (hard deadline). How are formulas for large-time behavior of discrete-time

More information

Random Number Generators

Random Number Generators 1/18 Random Number Generators Professor Karl Sigman Columbia University Department of IEOR New York City USA 2/18 Introduction Your computer generates" numbers U 1, U 2, U 3,... that are considered independent

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Stochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS

Stochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Stochastic Processes Theory for Applications Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv Swgg&sfzoMj ybr zmjfr%cforj owf fmdy xix Acknowledgements xxi 1 Introduction and review

More information

Computer Science, Informatik 4 Communication and Distributed Systems. Simulation. Discrete-Event System Simulation. Dr.

Computer Science, Informatik 4 Communication and Distributed Systems. Simulation. Discrete-Event System Simulation. Dr. Simulation Discrete-Event System Simulation Chapter 4 Statistical Models in Simulation Purpose & Overview The world the model-builder sees is probabilistic rather than deterministic. Some statistical model

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

Solutions 2017 AB Exam

Solutions 2017 AB Exam 1. Solve for x : x 2 = 4 x. Solutions 2017 AB Exam Texas A&M High School Math Contest October 21, 2017 ANSWER: x = 3 Solution: x 2 = 4 x x 2 = 16 8x + x 2 x 2 9x + 18 = 0 (x 6)(x 3) = 0 x = 6, 3 but x

More information

The Kemeny constant of a Markov chain

The Kemeny constant of a Markov chain The Kemeny constant of a Markov chain Peter Doyle Version 1.0 dated 14 September 009 GNU FDL Abstract Given an ergodic finite-state Markov chain, let M iw denote the mean time from i to equilibrium, meaning

More information

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014 Statistics 253/317 Introduction to Probability Models Winter 2014 - Midterm Exam Monday, Feb 10, 2014 Student Name (print): (a) Do not sit directly next to another student. (b) This is a closed-book, closed-note

More information

7.1 Coupling from the Past

7.1 Coupling from the Past Georgia Tech Fall 2006 Markov Chain Monte Carlo Methods Lecture 7: September 12, 2006 Coupling from the Past Eric Vigoda 7.1 Coupling from the Past 7.1.1 Introduction We saw in the last lecture how Markov

More information

Modeling Recurrent Events in Panel Data Using Mixed Poisson Models

Modeling Recurrent Events in Panel Data Using Mixed Poisson Models Modeling Recurrent Events in Panel Data Using Mixed Poisson Models V. Savani and A. Zhigljavsky Abstract This paper reviews the applicability of the mixed Poisson process as a model for recurrent events

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

FRACTIONAL BROWNIAN MOTION WITH H < 1/2 AS A LIMIT OF SCHEDULED TRAFFIC

FRACTIONAL BROWNIAN MOTION WITH H < 1/2 AS A LIMIT OF SCHEDULED TRAFFIC Applied Probability Trust ( April 20) FRACTIONAL BROWNIAN MOTION WITH H < /2 AS A LIMIT OF SCHEDULED TRAFFIC VICTOR F. ARAMAN, American University of Beirut PETER W. GLYNN, Stanford University Keywords:

More information

Poisson processes and their properties

Poisson processes and their properties Poisson processes and their properties Poisson processes. collection {N(t) : t [, )} of rando variable indexed by tie t is called a continuous-tie stochastic process, Furtherore, we call N(t) a Poisson

More information

This paper investigates the impact of dependence among successive service times on the transient and

This paper investigates the impact of dependence among successive service times on the transient and MANUFACTURING & SERVICE OPERATIONS MANAGEMENT Vol. 14, No. 2, Spring 212, pp. 262 278 ISSN 1523-4614 (print) ISSN 1526-5498 (online) http://dx.doi.org/1.1287/msom.111.363 212 INFORMS The Impact of Dependent

More information

Math 180A. Lecture 16 Friday May 7 th. Expectation. Recall the three main probability density functions so far (1) Uniform (2) Exponential.

Math 180A. Lecture 16 Friday May 7 th. Expectation. Recall the three main probability density functions so far (1) Uniform (2) Exponential. Math 8A Lecture 6 Friday May 7 th Epectation Recall the three main probability density functions so far () Uniform () Eponential (3) Power Law e, ( ), Math 8A Lecture 6 Friday May 7 th Epectation Eample

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information