Homework set 4 - Solutions


Math 495, Renato Feres

Probability background for continuous time Markov chains

This long introduction is in part about topics likely to have been covered in Math 3200 or Math 493, such as exponential and Poisson random variables. I hope I've provided enough detail to make the assignment self-contained. I also want to make up for the fact that our textbook is a bit terse on some of these topics. You'll need to read this introduction carefully in order to work on the problems below (page 18). At the same time, there is more information here than is really needed for this assignment.

The sequence of random variables of a Markov chain may represent the states of a random system recorded at a succession of time steps. For a full description of the process we often need to specify another sequence of random variables, T_0 < T_1 < ... < T_N, giving the random times at which the state transitions occur. A continuous time Markov chain with state space S consists of a sequence X_0, X_1, ..., X_N taking values in S, together with a sequence T_0, T_1, ..., T_N in [0, ∞), where T_i is the random time when the chain jumps to the state X_i. A more precise definition will be given shortly.

It will be helpful to keep in mind the following diagram as we discuss some of the main ideas related to continuous time Markov chains, at least in the case of discrete S. The nodes (circles) stand for the elements of S. In this discrete case, they may be labeled by integers: S = {1, 2, ..., k}. The black dot (a token) indicates which state is presently occupied, and the arrows represent state transitions. The significance of the labels λ_ij, called transition rates (they are not probabilities; they can take on any positive value, possibly greater than 1), will be explained below in the context of Theorem 1. They are numbers that encode information specifying both the transition probabilities and the waiting times between state transitions. The resulting process can be imagined as a kind of random walk of the token around the diagram. The walk starts at X_0 ∈ S at time T_0, jumping at time T_i to state X_i for each i = 1, 2, ...

Figure 1: Diagram representing a continuous time Markov chain system

A useful identity

The following general observation will be used in the proof of Theorem 1. Suppose that X and Y are random variables where Y is of the continuous type, having probability density function f_Y(y). Then

  P(X ∈ A) = ∫ P(X ∈ A | Y = y) f_Y(y) dy.

This can be derived as follows. First observe that P(X ∈ A) = E(1_A(X)), where 1_A(x) is the indicator function of the set A. (Recall that, by definition, 1_A(x) equals 1 if x ∈ A and 0 if x ∉ A.) This is clear since

  E(1_A(X)) = 0·P(1_A(X) = 0) + 1·P(1_A(X) = 1) = P(1_A(X) = 1) = P(X ∈ A).

On the other hand, we know that E[X_2] = E[E[X_2 | X_1]] for any random variables X_1, X_2. Therefore,

  P(X ∈ A) = E(1_A(X)) = E[E[1_A(X) | Y]] = ∫ E[1_A(X) | Y = y] f_Y(y) dy = ∫ P(X ∈ A | Y = y) f_Y(y) dy.

The same argument shows, more generally, that given another random variable Z,

  P(X ∈ A | Z = z) = ∫ P(X ∈ A | Z = z, Y = y) f_Y(y) dy.   (1)

An example of the use of this identity will appear in the proof of Theorem 1, below.

Exponential random variables

Exponential random variables will be used as probabilistic models of waiting times between successive events in a random sequence. We recall here their main properties.

1. The exponential distribution

A random variable T of the continuous type is said to be exponential with parameter λ if its probability density function is f(t) = λe^{-λt} for all t ≥ 0, and 0 elsewhere. By simple integration it follows that

  F_T(t) = 1 - e^{-λt},  E(T) = 1/λ,  Var(T) = 1/λ².

The four functions in R associated with the exponential distribution (random numbers, pdf, cdf and quantile) are rexp, dexp, pexp, and qexp. The graph of Figure 2 was obtained with the following script:

x=seq(0,5,by=0.05)
y0=dexp(x,0.5)
y1=dexp(x,1.0)
y2=dexp(x,2.0)
plot(x,y0,type="l",lty=1,ylim=range(c(0,2)),xlab="waiting time",ylab="probability density")
lines(x,y1,lty=2)
lines(x,y2,lty=4)
legend(x=3.0,y=2, #place a legend at an appropriate place on the graph
  c("lambda=0.5","lambda=1.0","lambda=2.0"), #legend text
  lty=c(1,2,4), #define the line types (dashed, solid, etc.)
  bty="n")
grid()
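As a quick sanity check on these formulas, the density can be integrated numerically and compared with the closed forms F_T(t) = 1 − e^{−λt} and E(T) = 1/λ. (This is an illustrative sketch in Python rather than the handout's R, since the logic is language-independent; the integration grid and tolerances are arbitrary choices.)

```python
import math

def exp_pdf(t, lam):
    """Density f(t) = lam * exp(-lam * t) for t >= 0, and 0 elsewhere."""
    return lam * math.exp(-lam * t) if t >= 0 else 0.0

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 2.0

# CDF check: the integral of the pdf from 0 to t should equal 1 - exp(-lam*t)
t = 1.3
cdf_num = integrate(lambda s: exp_pdf(s, lam), 0.0, t)
cdf_exact = 1.0 - math.exp(-lam * t)
assert abs(cdf_num - cdf_exact) < 1e-4

# Mean check: the integral of t*f(t) over [0, 40] should be close to 1/lam
mean_num = integrate(lambda s: s * exp_pdf(s, lam), 0.0, 40.0)
assert abs(mean_num - 1.0 / lam) < 1e-4
```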

Figure 2: Graphs of exponential densities for different values of the parameter λ

If T is an exponentially distributed random waiting time for some random event to happen, we will think of P(T ≤ t) = 1 - e^{-λt} as the probability that the event will have happened by time t.

2. The memoryless property

In all the examples discussed in this assignment, random events of some kind will happen in succession at random times. I will use the term waiting time to refer to the time difference between a random event and its successor. (I am using the term event here in its ordinary sense, as some sort of discrete occurrence, and not in the technical sense as a set in the σ-algebra of a probability space.) Waiting times will be modeled using exponential random variables.

The property of exponential random variables that makes them useful models of waiting time for some (but not all!) purposes is that they have the memoryless property. This means that if T is an exponentially distributed waiting time, then

  P(T > s + t | T > s) = P(T > t).

In words, if the random event with exponentially distributed waiting time has not happened by time s (T > s), and if the distribution of T has parameter λ, then your best prediction of how much longer you have to wait is still E[T] = 1/λ. The underlying mechanism that causes the event to happen cannot have any sort of internal clock telling the time s already elapsed. For example, if the random time for a kernel of popcorn to pop after being added to a heated pot were exponentially distributed with mean time 30 seconds, then the probability that it would not have yet popped by 30 ln(2) ≈ 20.8 seconds (the kernel's half-life) would be 0.5; but if a kernel didn't pop after a full two minutes and the exponential distribution assumption were valid, then it would still only have probability 0.5 of popping during the next 20.8 seconds, which is not very realistic. If instead of popcorn kernels we are interested in the time of radioactive decay of an unstable atom with a given decay rate λ, then the assumption that the time of decay is exponential with mean time 1/λ is believed to be theoretically exact. Exponential random times also appear naturally in stochastic models of chemical reactions. Even when the exponential assumption does not hold exactly, it is often used in applications as a convenient approximation.

To see that the memoryless property holds for an exponential random variable T, set A = {T > t + s} and B = {T > s}, so A ∩ B = A. Therefore, P(A | B) = P(A ∩ B)/P(B) = P(A)/P(B), and the proof reduces to showing

  P(T > t + s) = P(T > t) P(T > s).

Note that P(T > t) = 1 - P(T ≤ t) = 1 - F_T(t) = e^{-λt}. Therefore, the memoryless property reduces to the relation e^{-λ(t+s)} = e^{-λt} e^{-λs}, which is, of course, a property of the exponential function. It is useful to keep in mind this characterization of an exponential random variable: the probability that the random event has not happened by time t decreases exponentially as e^{-λt} if the waiting time is exponential with parameter λ.

3. Sums of independent exponential random variables

Suppose you meet on campus at random and independently with each of 3 friends once every two hours on average. The respective waiting times T_1, T_2, T_3 of meeting each friend are assumed exponentially distributed. The waiting time to meet any one of the three friends is naturally M = min{T_1, T_2, T_3}. What is the expected value of M?

It may be intuitively clear that the answer should be two thirds of an hour. This is indeed true due to property (a) in the following theorem.

Theorem 1 (Independent exponential random variables). Let T_1, ..., T_k be independent, exponentially distributed random variables with parameters λ_1, ..., λ_k, respectively. Let M = min{T_1, ..., T_k}. Then:

(a) M is exponentially distributed with parameter λ_1 + ... + λ_k.
(b) M is independent of which T_i is minimum. In other words, P(M > t | T_i = M) = P(M > t).
(c) The probability that T_i is the minimum is P(T_i = M) = λ_i/(λ_1 + ... + λ_k).

Proof. Due to independence, and noting that P(T_i > t) = 1 - F_{T_i}(t) = e^{-λ_i t}, we have

  P(M > t) = P(T_1 > t, ..., T_k > t) = P(T_1 > t) ··· P(T_k > t) = e^{-λ_1 t} ··· e^{-λ_k t} = e^{-(λ_1 + ... + λ_k)t}.

But this means that M is exponentially distributed with parameter λ_1 + ... + λ_k, proving (a).

For property (b), note that P(M > t | T_i = M) = P(T_i > t | T_i = M). We use here identity (1) given at the beginning of this tutorial. Now,

  P(T_i > t | T_i = s, M = s) = 1 if s > t, and 0 if s ≤ t.

Therefore,

  P(M > t | T_i = M) = ∫_0^∞ P(T_i > t | T_i = s, M = s) f_M(s) ds = ∫_t^∞ f_M(s) ds = P(M > t).

But this is the claim of part (b).

Now for part (c), using identity (1) again,

  P(T_i = M) = ∫_0^∞ P(T_i = M | T_i = t) f_{T_i}(t) dt = ∫_0^∞ P(T_j ≥ t for all j ≠ i | T_i = t) λ_i e^{-λ_i t} dt.   (2)

Using the assumption that T_1, ..., T_k are independent,

  P(T_j ≥ t for all j ≠ i | T_i = t) = P(T_j ≥ t for all j ≠ i) = ∏_{j≠i} P(T_j ≥ t) = ∏_{j≠i} e^{-λ_j t} = e^{-(λ_1 + ... + λ_k)t + λ_i t}.   (3)

Putting together identities (2) and (3),

  P(T_i = M) = ∫_0^∞ λ_i e^{-λ_i t} e^{-(λ_1 + ... + λ_k)t + λ_i t} dt = ∫_0^∞ λ_i e^{-(λ_1 + ... + λ_k)t} dt = λ_i/(λ_1 + ... + λ_k).

This is what we wanted to prove.

4. Interpretation of the theorem and the diagram

We are now ready to interpret diagram 1 shown above. In diagram 3, below, I did not draw the nodes that are not connected to node i containing the token, nor the arrows that are not directed from i to one of the other nodes. This is the part of the diagram needed for determining the next state transition.

Figure 3: Part of diagram 1 indicating the current state i (which contains the moving token) and the states to which the token can jump in the next step. Only the arrows issuing from i are shown.

The mechanism of state transition is as follows. Suppose that there are k arrows issuing from i, which I indicate by the pairs (i,1), ..., (i,k). Among these is the pair (i,j) with the label λ_ij shown on the diagram. Think of each arrow (i,j) as a possible action, or event, happening at an exponentially distributed waiting time T_ij with parameter λ_ij. The times T_i1, ..., T_ik are assumed to be independent. Then, according to Theorem 1, the transition goes as follows:

(a) Generate a random number t ≥ 0 from an exponential distribution with parameter λ_i = λ_i1 + ... + λ_ik. This is the time when the next transition will happen. (Part (a) of the theorem.)

(b) Now, independently of the value obtained for the minimal time M = t, generate an integer j between 1 and k with pmf p(j) = λ_ij/(λ_i1 + ... + λ_ik). This integer is the index of the node to which the token is moved, and t is the new time. (Part (c) of the theorem.)

5. Example: random switching

The simplest example is defined by diagram 4.

Figure 4: Diagram representing an exponential random variable with parameter λ
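The two-step transition mechanism above can be sketched in a few lines. (This is an illustrative Python translation with made-up rates; the handout itself uses R. The empirical checks at the end are what Theorem 1 predicts.)

```python
import random

def next_transition(rates, rng):
    """One transition step of a continuous time Markov chain.

    rates: the rates lambda_i1, ..., lambda_ik out of the current state.
    Returns (dt, j): the waiting time and the index of the chosen arrow.
    """
    total = sum(rates)
    # Part (a) of Theorem 1: waiting time ~ Exp(lambda_i1 + ... + lambda_ik)
    dt = rng.expovariate(total)
    # Part (c) of Theorem 1: arrow j is chosen with probability lambda_ij / total
    j = rng.choices(range(len(rates)), weights=rates)[0]
    return dt, j

rng = random.Random(0)
rates = [1.0, 3.0]          # two arrows out of the current state (arbitrary values)
n = 100_000
picks = [next_transition(rates, rng)[1] for _ in range(n)]
frac_second = picks.count(1) / n
mean_dt = sum(next_transition(rates, rng)[0] for _ in range(n)) / n

# Theorem 1 predicts P(second arrow) = 3/4 and E[waiting time] = 1/4
assert abs(frac_second - 0.75) < 0.01
assert abs(mean_dt - 0.25) < 0.01
```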

The set of states is {0,1} and the system is initially in state 0. After an exponentially distributed waiting time with parameter λ, the state switches to 1 and remains there forever. Here the chain has only two steps: X_0 = 0 and X_1 = 1. The only quantity of interest is the random time when the switch occurs. So, in a sense, the diagram represents an exponential random variable with parameter λ.

6. Example: branching out

In this case the chain starts at X_0 = 0, jumps to X_1 ∈ {1,2,3}, then stops there. The information of interest here is the transition time T and the value of the state X_1.

Figure 5: Switching to one of several possible states in one step

According to Theorem 1, the transition time is exponentially distributed with parameter λ_1 + λ_2 + λ_3, and the new state is taken from {1,2,3} with probabilities

  p(i) = λ_i/(λ_1 + λ_2 + λ_3).

7. Example: reversible switching

This process is described in Figure 6. The chain is X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, ... The states alternate between 0 and 1 in a perfectly deterministic way. The quantities of interest are the random times of the back and forth state jumps: if the current state is 0, the system will next switch to 1 after an exponentially distributed waiting time with parameter λ; and if the current state is 1, the next switch to 0 will happen after an exponentially distributed waiting time with parameter µ.

Figure 6: A reversible switching process

For a concrete example, suppose λ = 2 and µ = 1. A typical sample history of the process from time 0 to tmax = 20 can be obtained as follows. (The use of stepfun is somewhat tricky; you should look at ?stepfun for the details. Note that the vector times has length one unit less than the vector states.)

#We generate a sample history of the process up to time tmax
tmax=20
#The exponential parameters are:
lambda=2
mu=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states,
#which may be either 0 or 1
states=matrix(0,1,1)
#The step index is n
n=1
while (t<tmax) {
  if (states[n]==0) {
    dt=rexp(1,lambda)
    states[n+1]=1
  } else {
    dt=rexp(1,mu)
    states[n+1]=0
  }
  t=t+dt
  times[n]=t
  n=n+1
}
#We can visualize the history by plotting states
#against times. Note the use of the step function operation.
#Check ?stepfun for the details of how to use it.
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="A sample history")
grid()

8. Example: Poisson processes

The Poisson process is introduced in Section 3.2 of the textbook. It can be defined by the diagram of Figure 8. The token is initially in the circle node representing state X_0 = 0; if at any given time t the token is in state n, it will jump to state n + 1 at time t + T, where T is an exponentially distributed waiting time with parameter λ. Let N(t) denote the state of the process at time t. This is a piecewise constant, discontinuous function of t that counts the total number of transitions up to time t. The following graph shows one sample history of a Poisson process with rate λ = 1 (per minute, say) over 10 minutes. The graph was generated by the following R script.

#Simulate up to time tmax
tmax=10
lambda=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states
#from the set {0, 1, 2, ...}
states=matrix(0,1,1)

Figure 7: A sample history of the Markov chain defined by the diagram of Figure 6. Each circle indicates the value of the state (0 or 1) over the time interval to the right of the circle.

Figure 8: Diagram defining a Poisson process with rate parameter λ

Figure 9: A sample history of the Poisson process with λ = 1 over the time interval [0,10]

#The step index is n
n=1
while (t<tmax) {
  dt=rexp(1,lambda)
  t=t+dt
  times[n]=t
  states[n+1]=n #The states vector is one unit longer than times
  n=n+1
}
#We now plot the sample history
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="Sample history of a Poisson process")
grid()

The following definition gives an alternative way to introduce the Poisson process.

Definition 2. A Poisson process with rate λ > 0 is a random process N(t) ∈ {0, 1, 2, ...}, which we interpret as the number of random events occurring between times 0 and t, such that:

(a) The process starts at 0. That is, N(0) = 0.

(b) The numbers of events occurring over disjoint time intervals are independent. That is, if (a, b] and (c, d] are disjoint time intervals then N(b) - N(a) and N(d) - N(c) are independent random variables.

(c) The process is time homogeneous. This means that N(b) - N(a) and N(b + s) - N(a + s), which are the numbers of events over an interval (a, b] and over this interval's time translate (a + s, b + s], have the same probability distribution for all s ≥ 0.

(d) The probability of transition from N(0) = 0 to N(h) = 1 for a small time h > 0 is, up to first order in h, given by λh. More precisely,

  lim_{h→0} P(N(h) = 1)/h = λ.

(e) The probability of more than one transition over a small time interval is 0 up to first order in the length of the interval. That is,

  lim_{h→0} P(N(h) ≥ 2)/h = 0.

From this definition, we can recover the characterization of the Poisson process in terms of the diagram. This is shown in the next theorem. (The details of the proof of Theorem 3 are not needed for solving the homework problems. It is OK to simply skim through it, at least for now. I hope to return to some of this in class when covering chapter 3.)

Theorem 3. The Poisson process N(t) with rate λ > 0 satisfies the following properties:

(a) For each non-negative integer n,

  P(N(t) = n) = (λt)^n e^{-λt}/n!

(b) Let T_1 < T_2 < ... be the state transition times of N(t) and D_j = T_j - T_{j-1}, for j = 1, 2, ..., the waiting times between transitions. Then D_1, D_2, ... are independent, exponentially distributed random variables with parameter λ.
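Before turning to the proof, both claims of the theorem can be illustrated empirically. The sketch below (Python rather than R, purely for illustration; the seed, sample size and tolerances are arbitrary choices) builds N(t) from independent Exp(λ) gaps, as in the diagram, and compares the simulated distribution of N(1) with the pmf (λt)^n e^{−λt}/n!:

```python
import math
import random

rng = random.Random(42)
lam, t_end, trials = 2.0, 1.0, 100_000

def count_events(lam, t_end, rng):
    """Number of Exp(lam) inter-arrival gaps that fit in [0, t_end]."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > t_end:
            return n
        n += 1

counts = [count_events(lam, t_end, rng) for _ in range(trials)]

# Part (a): the empirical frequencies should match the Poisson pmf
for n in range(5):
    pmf = (lam * t_end) ** n * math.exp(-lam * t_end) / math.factorial(n)
    freq = counts.count(n) / trials
    assert abs(freq - pmf) < 0.01

# The mean of N(1) should be close to lam * t_end = 2
mean = sum(counts) / trials
assert abs(mean - lam * t_end) < 0.05
```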

Proof. We begin by observing that, by properties (a) and (b),

  P(N(s + t) = n | N(s) = m) = P(N(s + t) - N(s) = n - m | N(s) - N(0) = m) = P(N(s + t) - N(s) = n - m).

By property (c),

  P(N(s + t) = n | N(s) = m) = P(N(t) - N(0) = n - m) = P(N(t) = n - m).

Defining p_i(t) = P(N(t) = i), we obtain

  p_n(t + s) = P(N(s + t) = n) = Σ_{m=0}^n P(N(s + t) = n | N(s) = m) P(N(s) = m) = Σ_{m=0}^n p_{n-m}(t) p_m(s).

Note: if we define a matrix P(t) = (p_ij(t)) whose elements are p_ij(t) = p_{j-i}(t), the expression just proved can be written in matrix form as P(t + s) = P(t)P(s). Note that P(t)P(s) = P(t + s) = P(s + t) = P(s)P(t).

Properties (a), (d), and (e) imply that P(t) converges toward the identity matrix as t approaches 0. That is, as t → 0,

  p_mn(t) = P(N(t) = n - m) → 1 if m = n, and 0 if m ≠ n.

Let I denote the identity matrix. (Since the number of states is infinite, P(t) is an infinite matrix. This, however, does not create any essential difficulties.) We have shown that P(s) → I as s converges to 0. The matrix-valued function P(t) can then be shown to be continuous:

  P(t + s) - P(t) = P(t)P(s) - P(t) = P(t)[P(s) - I] → 0,

therefore P(t + s) → P(t) as s converges to 0.

We can also show that P(t) is a differentiable function of t. First observe the following consequence of properties (d) and (e):

  lim_{h→0} [p_mn(h) - p_mn(0)]/h =
    0                                          if n ≥ m + 2,
    lim_{h→0} p_1(h)/h = λ                     if n = m + 1,
    lim_{h→0} [p_0(h) - 1]/h = -lim_{h→0} P(N(h) ≥ 1)/h = -λ   if n = m.

Therefore, p'_mn(0) = λ_mn, where

  λ_mn = 0 if n ≥ m + 2,  λ if n = m + 1,  -λ if n = m,  0 if n < m.

We conclude that

  P'(t) = lim_{h→0} [P(t + h) - P(t)]/h = lim_{h→0} [P(h)P(t) - P(t)]/h = lim_{h→0} [(P(h) - I)/h] P(t) = ΛP(t),

where Λ = (λ_mn). The matrix differential equation P'(t) = ΛP(t) can be written out explicitly as a system of ordinary differential equations in the entries of P(t). Each equation has the form

  p'_i(t) = Σ_j λ_ij p_j(t).

As most entries of Λ are zero, the above sum only has finitely many terms for each i. Explicitly,

  p'_0(t) = -λ p_0(t)
  p'_1(t) = -λ p_1(t) + λ p_0(t)
  p'_2(t) = -λ p_2(t) + λ p_1(t)
  ...
  p'_n(t) = -λ p_n(t) + λ p_{n-1}(t)

It is now a simple induction argument to check that

  p_n(t) = (λt)^n e^{-λt}/n!

is a solution of the system of ordinary differential equations with the correct initial condition. We can finally appeal to the uniqueness property of solutions of systems of differential equations to conclude that part (a) of the theorem holds.

It remains to show part (b): the waiting times D_1, D_2, ... are independent and exponentially distributed with parameter λ. Note the equivalences of events (keep in mind that N(T_n) = n):

  N(T_{n-1} + t) - N(T_{n-1}) = 0  ⟺  N(T_{n-1} + t) = n - 1  ⟺  T_{n-1} + t < T_n  ⟺  D_n > t.

Therefore,

  P(D_n > t) = P(N(T_{n-1} + t) - N(T_{n-1}) = 0) = P(N(t) = 0) = p_0(t) = e^{-λt}.

This means that the D_n are all exponentially distributed with parameter λ. To prove independence, let m < n and suppose that D_m = s. Then T_{m-1} + s = T_m ≤ T_{n-1}. So the intervals (T_{m-1}, T_{m-1} + s] and (T_{n-1}, ∞) are disjoint. Thus for all t ≥ 0, we have by property (b) that the random variables N(T_{n-1} + t) - N(T_{n-1}) and N(T_{m-1} + s) - N(T_{m-1}) are independent. Independence of D_m and D_n now results from the observation that (1) the event D_n > t is the same as N(T_{n-1} + t) - N(T_{n-1}) = 0, and (2) D_m = s is the same as N(T_{m-1} + s) - N(T_{m-1}) = 1 and N(T_{m-1} + s') - N(T_{m-1}) = 0 for all s' < s.

9. General continuous time Markov chains

Arguments used in the proof of Theorem 3 also prove a much more general result. Going back to the diagram of Figure 1, let λ_ij be the rate constant for the arrow connecting state i to state j. Define λ_ii = -Σ_{j≠i} λ_ij and the n-by-n matrix Λ = (λ_ij). Note that the elements of each row of Λ add up to 0. For the Poisson process this matrix is

  Λ =
  [ -λ    λ    0    0  ... ]
  [  0   -λ    λ    0  ... ]
  [  0    0   -λ    λ  ... ]
  [ ...                    ]

Also define the matrix-valued function P(t) = (p_ij(t)), where each element p_ij(t) gives the probability that the process started at time 0 in state i will be in state j at time t. Then it can be shown that:

(a) P(0) = I, where I is the identity matrix;
(b) P(t + s) = P(t)P(s) for all non-negative t and s;
(c) P'(t) = ΛP(t) = P(t)Λ, where P'(t) is the derivative of P(t) in t.

If you have taken a course in ordinary differential equations or matrix algebra, you may have learned that these conditions characterize the matrix exponential: P(t) = e^{Λt}. In the proof of Theorem 3 we have effectively computed a matrix exponential by solving a system of differential equations. The resulting matrix was in that case

  e^{Λt} =
  [ e^{-λt}   λt e^{-λt}   (λt)² e^{-λt}/2!  ... ]
  [   0        e^{-λt}      λt e^{-λt}       ... ]
  [   0          0           e^{-λt}         ... ]
  [  ...                                         ]

Methods for finding matrix exponentials are studied in matrix algebra courses. I simply note here that the Taylor series expansion

  P(t) = I + Λt + ... + Λⁿtⁿ/n! + ...

makes sense for matrices in general, and indeed holds true for finite matrices. It is possible to prove this fact for infinitely many states in many cases.

10. The Poisson distribution

A random variable Y is said to have the Poisson distribution with parameter λ if Y = N(1), where N(t) is the Poisson process with rate parameter λ discussed above. From Theorem 3 it follows that Y has probability mass function supported on the non-negative integers 0, 1, 2, ... and given by the function

  p(x) = λ^x e^{-λ}/x!

Therefore, a Poisson random variable gives the random number of transition events occurring in the time interval [0,1] for the process described by the diagram of Figure 8. The R functions associated to Poisson random variables are

  dpois(x, lambda)
  ppois(q, lambda)
  qpois(p, lambda)
  rpois(n, lambda)

More generally, Theorem 3 says that the random variable N(t), t > 0, is also supported on the set of non-negative integers, and has pmf

  P(N(t) = x) = (λt)^x e^{-λt}/x!

11. The gamma distribution

Recall that T_n is the time of the nth transition event for the process represented by the diagram of Figure 8. From Theorem 3 we obtain the cumulative distribution function of T_n:

  F_{T_n}(t) = P(T_n ≤ t) = P(N(t) ≥ n) = Σ_{j=n}^∞ p_j(t) = Σ_{j=n}^∞ (λt)^j e^{-λt}/j!

The probability density function of T_n is obtained by taking the derivative of F_{T_n}(t) with respect to t. In
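The Taylor-series characterization can be illustrated numerically. The sketch below (Python, for illustration only; the truncation size and tolerances are arbitrary choices) sums I + Λt + Λ²t²/2! + ... for a truncated Poisson generator and recovers the entries e^{−λt}(λt)^n/n! of the first row of e^{Λt}:

```python
import math

def mat_mul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=60):
    """Truncated Taylor series exp(A t) = sum_k (A t)^k / k! for a small matrix."""
    n = len(A)
    At = [[A[i][j] * t for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, At)
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

lam, t, size = 1.5, 0.8, 12
# Truncated generator of the Poisson process: -lam on the diagonal,
# +lam on the superdiagonal (the last row is left zero by truncation)
L = [[0.0] * size for _ in range(size)]
for i in range(size - 1):
    L[i][i] = -lam
    L[i][i + 1] = lam

P = expm(L, t)
# The first row of exp(Lt) should reproduce the Poisson probabilities p_n(t)
for n in range(5):
    exact = (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)
    assert abs(P[0][n] - exact) < 1e-9
```

Because paths in the diagram only move upward, truncating the state space does not affect the first few entries of the first row, so the agreement is essentially to machine precision.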

problem 1, below, you will show that the pdf is

  f_{T_n}(t) = λ e^{-λt} (λt)^{n-1}/(n-1)!   (4)

Definition 4 (The gamma distribution). A continuous random variable X has the gamma distribution with parameters α, β (or X has the Γ(α, β) distribution), for α > 0, β > 0, if the probability density function of X is

  f_X(x) = x^{α-1} e^{-x/β}/(Γ(α) β^α),

supported on the interval (0, ∞). The parameter α is called the shape and β the scale of the gamma distribution; 1/β is called the rate. Recall that the gamma function Γ(α) equals (α-1)! if α is a positive integer. For values of α > 0 that are not necessarily integer, the gamma function is defined by

  Γ(α) = ∫_0^∞ y^{α-1} e^{-y} dy.

It follows from the above discussion that T_n is a gamma random variable with a Γ(n, 1/λ) distribution. This shows the close relationship among the exponential, Poisson, and gamma distributions. The associated R functions are

  dgamma(x, shape=a, scale=b)
  pgamma(q, shape=a, scale=b)
  qgamma(p, shape=a, scale=b)
  rgamma(n, shape=a, scale=b)

12. Example: birth-and-death processes

A widely studied Markov chain goes by the name birth-and-death chain. It may be defined by the following diagram.

Figure 10: The continuous time birth-and-death Markov chain, with birth rate λ and death rate µ

This chain can serve as a crude probabilistic model of a queueing system, in which new customers join the queue at a rate λ and are served at a rate µ. The random process N(t), giving the number of customers in line at time t, has time dependent pmf p_n(t) = P(N(t) = n) that can be computed using matrix exponentiation. Writing P(t) = (p_ij(t)), where p_ij(t) is the probability of going from state i to state j in time t, we have, as indicated in the remark about general continuous time Markov chains, that P(t) = e^{Λt}. Here Λ is the matrix

  Λ =
  [ -λ       λ       0      0  ... ]
  [  µ   -(µ+λ)      λ      0  ... ]
  [  0       µ   -(µ+λ)     λ  ... ]
  [ ...                           ]

You will simulate this chain in one of the problems below.

A random service line

In an idealized waiting line (or queue), the time for a new person to join the line is exponentially distributed with parameter λ. The service time for the person at the front of the line is exponential with parameter µ. (I assume that µ and λ are measured in reciprocal minutes.) We wish to simulate the process for 30 minutes with λ = 1 and µ = 1 and graph the length of the line as a function of time.

Figure 11: Sample history of the birth-and-death process of problem 3. It was produced by the following script.

#Birth and death chain
#We generate a sample history of the process up to time tmax
tmax=30
#The parameters are:
lambda=1
mu=1
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=matrix(0,1,1)
#The vector states will record the sequence of states
#from the set {0, 1, 2, ...}
states=matrix(0,1,1) #I will let the first state be 0
#The step index is n
n=1
while (t<tmax) {
  if (states[n]==0) {
    dt=rexp(1,lambda)
    states[n+1]=1
  } else {
    dt=rexp(1,lambda+mu)
    s=states[n]
    s=sample(c(s-1,s+1),1,prob=c(mu/(mu+lambda),lambda/(mu+lambda)))
    states[n+1]=s #The states vector is one unit longer than times
  }
  t=t+dt
  times[n]=t
  n=n+1
}
#We now plot the sample history
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="Sample history of a birth and death process")
grid()

Growth in a cell culture

This is an example of a branching process. We wish to simulate the growth in the number of cells by the following algorithm. When a cell is born, draw sample exponential random times T_b and T_d with rates λ and µ, respectively. If T_b < T_d, then the simulated cell divides at T_b into two new cells (and T_d is discarded). If T_d < T_b, then the cell dies at T_d (and T_b is discarded). We simulate this process for 20 minutes, starting from a single cell, with µ = 1 (per minute) and

1. λ = 1.00 per minute
2. λ = 1.05 per minute
3. λ = 1.10 per minute

Observe that if there are n cells presently in the culture, the rates at which a new division or death happens are nλ and nµ, respectively. This is due to Theorem 1.

Figure 12: Diagram for the problem

The R function NumberCells(tmax,lambda,mu,k) (see below) gives the number of cells at the end of tmax (here k is the initial number of cells). Before looking at the program, think about the following questions:

1. If the current number of cells is n, what is the probability distribution of the waiting time until the next event (division or death)?

2. Let p_b and p_d be the probabilities that the next event is a birth (cell division) or a death. What are the values of p_b and p_d in terms of λ and µ?
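For a linear birth-and-death process of this kind, a standard fact (not proved in this handout) is that the expected population satisfies E[N(t)] = k e^{(λ−µ)t}. The sketch below (Python, illustrative) computes this benchmark for the three rates in the problem, which is useful for judging the simulated means:

```python
import math

def expected_cells(k, lam, mu, t):
    """Mean of a linear birth-death process: E[N(t)] = k * exp((lam - mu) * t)."""
    return k * math.exp((lam - mu) * t)

k, mu, t = 1, 1.0, 20.0
for lam in (1.00, 1.05, 1.10):
    print(lam, expected_cells(k, lam, mu, t))
# lam = 1.00 gives 1, lam = 1.05 gives e^1 ~ 2.718, lam = 1.10 gives e^2 ~ 7.389
```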

Note: If the number of cells at the end of the 20 minute period is recorded in a vector Cells of length N = 10^4, then the mean and standard deviation of the data are given in R by mean(Cells) and sd(Cells). If you have some problem with the sd function, try this: sd(as.numeric(Cells)).

The function NumberCells can be written as follows.

NumberCells=function(tmax,lambda,mu,k,plotgraph) {
  #tmax is the final time
  #lambda is the division rate
  #mu is the death rate
  #k is the initial number of cells
  #plotgraph is 1 if a sample history of the process is to
  #be plotted, and 0 if not
  #Initialize the current time to 0
  t=0
  #The vector times will record the state transition times
  times=matrix(0,1,1)
  #The vector states will record the sequence of states
  #from the set {0, 1, 2, ...}
  states=matrix(0,1,1)
  #Initial state:
  states[1]=k
  #The step index is n
  n=1
  while (t<tmax) {
    if (states[n]==0) {
      dt=tmax
      states[n+1]=0
    } else {
      s=states[n]
      #The next event happens at an exponential waiting time with rate
      rate=s*(lambda+mu)
      dt=rexp(1,rate)
      #Probability that the next event is a division (birth)
      pb=lambda/(lambda+mu)
      #Probability that the next event is a death
      pd=mu/(lambda+mu)
      s=sample(c(s-1,s+1),1,prob=c(pd,pb))
      states[n+1]=s #The states vector is one unit longer than times
    }
    t=t+dt
    times[n]=t
    n=n+1
  }
  if (plotgraph==1) {

    #We now plot the sample history
    F=stepfun(times,states,f=0)
    plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="Number of cells",main="Sample history of population growth process")
    grid()
  }
  #The final number of cells is
  states[n]
}

We can now perform the experiments asked for in the problem.

N=10^4
tmax=20
lambda=1.00
mu=1
k=1
plotgraph=0
Cells=matrix(0,1,N)
for (i in 1:N){
  Cells[i]=NumberCells(tmax,lambda,mu,k,plotgraph)
}
mean(as.numeric(Cells))
sd(as.numeric(Cells))

#########################
lambda=1
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]
#########################
lambda=1.05
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]
#########################
lambda=1.1
> mean(as.numeric(Cells))
[1]
> sd(as.numeric(Cells))
[1]

Each sample history can end in extinction (number of cells equal to 0) or not. Here is a typical looking graph of the number of cells when extinction does not occur, for λ = 1.3. Note that the final number of cells can vary drastically, since the standard deviation can be very large.

Figure 13: Sample history of the cell culture process with λ = 1.3 and µ = 1

Problems

1. (Text, Exercise 2.3, page 58) Consider the Markov chain with state space S = {0, 1, 2, ...} and transition probabilities p(x, x+1) = 2/3, p(x, 0) = 1/3. Show that the chain is positive recurrent and give the limiting probability π.

Solution. The chain is clearly irreducible. To show that the chain is positive recurrent it is sufficient to show that it admits an invariant probability measure. The equation for an invariant measure is

  π(x) = Σ_{y ∈ S} π(y) p(y, x).

We obtain the equations

  π(x) = (2/3) π(x-1) = ... = (2/3)^x π(0) for x ≥ 1,  and  Σ_{x=0}^∞ π(x) = 1.

Then, since Σ_{x=0}^∞ (2/3)^x π(0) = 3 π(0) = 1, we get π(0) = 1/3 and

  π(n) = (1/3)(2/3)^n for n ≥ 0.

2. (Text, Exercise 2.13, page 60) Consider a population of animals with the following rule for (asexual) reproduction: an individual that is born has probability q of surviving long enough to produce offspring. If the individual

does produce offspring, she produces one or two offspring, each with equal probability. After this the individual no longer reproduces and eventually dies. Suppose the population starts with four individuals.

(a) For which values of q is it guaranteed that the population will eventually die out?

(b) If q = 0.9, what is the probability that the population survives forever?

Solution. (a) Let Y be the random variable giving the number of offspring of one individual. It is given that

  P(Y = k) = 1 - q if k = 0,  q/2 if k = 1,  q/2 if k = 2.

The expected value of Y is µ := E(Y) = 0·(1 - q) + 1·(q/2) + 2·(q/2) = 3q/2. We know that if µ < 1 then the population will eventually die out with probability 1. In fact, as stated in the textbook, the necessary and sufficient condition is µ ≤ 1 (here p_0 > 0). This corresponds to q ≤ 2/3.

(b) Let φ(s) be the generating function of Y. This is the function

  φ(s) = Σ_{k=0}^∞ p_k s^k = (1 - q) + (q/2)s + (q/2)s².

The extinction probability, defined by a = P_1(population eventually dies out), is the smallest positive root of φ(s) = s. Now s = (1 - q) + (q/2)(s + s²). Equivalently, a is the smallest positive root of

  s² - ((2 - q)/q) s + 2(1 - q)/q = 0.

When q = 0.9, the equation becomes

  s² - (11/9) s + 2/9 = 0.

This equation has roots s = 1 and s = 2/9. Therefore, the extinction probability starting from one individual is a = 2/9. The extinction probability for initial population size X_0 = 4 is a⁴ = (2/9)⁴, so the probability of the population surviving forever if q = 0.9 is 1 - (2/9)⁴ ≈ 0.9976.

3. (Text, Exercise 3.1, page 82) Suppose that the number of calls per hour arriving at an answering service follows a Poisson process with λ = 4.

(a) What is the probability that fewer than two calls come in the first hour?
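The quadratic above is easy to check numerically (a Python sketch; the values are taken from the solution):

```python
import math

q = 0.9
# a is the smallest positive root of s^2 - ((2-q)/q) s + 2(1-q)/q = 0
b = -(2 - q) / q
c = 2 * (1 - q) / q
disc = math.sqrt(b * b - 4 * c)
roots = sorted([(-b - disc) / 2, (-b + disc) / 2])
a = roots[0]

assert abs(roots[1] - 1.0) < 1e-9   # s = 1 is always a root of phi(s) = s
assert abs(a - 2 / 9) < 1e-9        # extinction probability from one individual
surv4 = 1 - a ** 4                  # survival probability starting from 4
assert abs(surv4 - (1 - (2 / 9) ** 4)) < 1e-9
```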

(b) Suppose that six calls arrive in the first hour. What is the probability that at least two calls will arrive in the second hour?

(c) The person answering the phones waits until fifteen phone calls have arrived before going to lunch. What is the expected amount of time that the person will wait?

(d) Suppose it is known that exactly eight calls arrived in the first two hours. What is the probability that exactly five of them arrived in the first hour?

(e) Suppose it is known that exactly k calls arrived in the first four hours. What is the probability that exactly j of them arrived in the first hour?

Solution. (a) We know that the probability of k calls by time t is

    P(X_t = k) = e^{−λt} (λt)^k / k!.

The probability of fewer than 2 calls in the first hour is

    P(X_1 = 0) + P(X_1 = 1) = e^{−4} + 4e^{−4} = 5e^{−4} ≈ 0.092.

(b) From the result of (a), the probability of at least 2 calls in the first hour is 1 − 5e^{−4}. Because of the independence of the increments X_2 − X_1 and X_1 − X_0, the probability that at least two calls arrive in the second hour is the same as in the first hour (the six calls in the first hour are irrelevant). So this probability is approximately 0.908.

(c) Let T_1, ..., T_15 be the times between consecutive calls. We know that these are independent exponentially distributed random variables with parameter λ = 4. Thus the expected time is

    E(T_1 + ··· + T_15) = 15 E(T_1) = 15/λ = 15/4.

Therefore, the expected time is 15/4 hours.

(d) We want to find the probability P(X_1 = 5 | X_2 = 8). The definition of conditional probability implies that

    P(X_1 = 5 | X_2 = 8) = P(X_1 = 5, X_2 = 8) / P(X_2 = 8) = P(X_2 − X_1 = 3) P(X_1 = 5) / P(X_2 = 8).

Since X_2 − X_1 has the same distribution as X_1, this gives

    P(X_1 = 5 | X_2 = 8) = [e^{−4} 4^3/3!] [e^{−4} 4^5/5!] / [e^{−8} 8^8/8!] = C(8, 5) (1/2)^8 = 7/32.

(e) We now want the conditional probability P(X_1 = j | X_4 = k) for k ≥ j. The argument is the same as in (d). The result is

    P(X_1 = j | X_4 = k) = C(k, j) (1/4)^j (3/4)^{k−j}.

4. This problem is not part of the homework; only the first three are. Do a stochastic simulation of the Markov

chain of Exercise 3.5, page 84 of the textbook: X_t is the Markov chain with state space {1, 2} and rates α(1, 2) = 1, α(2, 1) = 4. Then:

(a) Plot the graph of a sample history of the process over the time interval [0, 20].

(b) Give the mean (sample) waiting time.

(c) What should be the exact value of the mean waiting time for a very long sample history? (Check whether your experimentally obtained result is close to the exact value. For better precision, you may take a longer time interval, say [0, 500].)

Solution. (a) The graph of a sample history over the time interval from 0 to 20 is shown below.

Figure 14: Sample history of the process of problem 4 (state against time).

This graph can be obtained using the program

#We generate a sample history of the process up to time tmax
tmax=20
#The exponential parameters are:
lambda=1
mu=4
#Initialize the current time to 0
t=0
#The vector times will record the state transition times
times=c(0)
#The vector states will record the sequence of states,
#which may be either 0 or 1 (standing for states 1 and 2)
states=c(0)
#The step index is n
n=1
while (t<tmax) {

if (states[n]==0) {
dt=rexp(1,lambda)
states[n+1]=1
} else {
dt=rexp(1,mu)
states[n+1]=0
}
t=t+dt
times[n]=t
n=n+1
}
#We can visualize the history by plotting states
#against times. Note the use of the step function stepfun
#(check ?stepfun for the details of how to use it):
F=stepfun(times,states,f=0)
plot(F,xlab="Time",xlim=range(c(0,tmax)),ylab="State",main="A sample history")
grid()

(b) To obtain the mean waiting time, we may do as follows: run the previous program with a larger value of tmax, say tmax=500. Then

n=length(times)
times_shift=c(0,times[1:(n-1)])
difference=times-times_shift
mean(difference)

gives the mean waiting time. I obtained the value 0.621.

(c) The exact value can be obtained as follows. The mean time between transitions from 1 to 2 is 1 and from 2 to 1 is 1/4. Because these states alternate, the mean one-step time is the average (1 + 1/4)/2 = 0.625. So the value obtained in (b) is reasonable.
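The invariant distribution found in problem 1 can be sanity-checked numerically. The following is a sketch in Python (not part of the original assignment, whose code is in R); it verifies that π(x) = (1/3)(2/3)^x satisfies the invariance equations for p(x, x + 1) = 2/3, p(x, 0) = 1/3, truncating the state space at a large N:

```python
# Numerical check of problem 1: pi(x) = (1/3)*(2/3)**x is invariant for the
# chain with p(x, x+1) = 2/3 and p(x, 0) = 1/3 (state space truncated at N).
N = 200
pi = [(1 / 3) * (2 / 3)**x for x in range(N)]

# pi(0) = (1/3) * sum_y pi(y), and pi(x) = (2/3) * pi(x - 1) for x >= 1:
assert abs(pi[0] - (1 / 3) * sum(pi)) < 1e-12
for x in range(1, N):
    assert abs(pi[x] - (2 / 3) * pi[x - 1]) < 1e-15

# The invariant measure is a probability, up to truncation error:
print(round(sum(pi), 6))  # → 1.0
```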
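The root computation in problem 2(b) is easy to confirm. This Python sketch (again, not part of the original R-based assignment) solves φ(s) = s for q = 0.9 with the quadratic formula and recovers the extinction probability a = 2/9:

```python
import math

q = 0.9
# phi(s) = s is equivalent to (q/2)s^2 + (q/2 - 1)s + (1 - q) = 0
A, B, C = q / 2, q / 2 - 1, 1 - q
disc = math.sqrt(B * B - 4 * A * C)
roots = sorted([(-B - disc) / (2 * A), (-B + disc) / (2 * A)])

a = roots[0]          # smallest positive root: extinction probability from one individual
survival4 = 1 - a**4  # survival probability starting from X_0 = 4 individuals

print(round(a, 6))          # → 0.222222  (= 2/9)
print(round(survival4, 6))  # → 0.997561
```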
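The numerical answers in problem 3 can likewise be confirmed in a few lines. This Python sketch (not part of the original solutions) computes parts (a), (b), and (d) directly from the Poisson and binomial formulas used above:

```python
import math

lam = 4.0  # calls per hour

def pois(k, mean):
    # P(N = k) for N ~ Poisson(mean)
    return math.exp(-mean) * mean**k / math.factorial(k)

# (a) fewer than two calls in the first hour: e^{-4} + 4e^{-4} = 5e^{-4}
p_a = pois(0, lam) + pois(1, lam)
# (b) at least two calls in the second hour (independent increments)
p_b = 1 - p_a
# (d) P(X_1 = 5 | X_2 = 8): binomial with k = 8 calls, success probability 1/2
p_d = math.comb(8, 5) * (1 / 2)**8

print(round(p_a, 4))  # → 0.0916
print(round(p_b, 4))  # → 0.9084
print(p_d == 7 / 32)  # → True
```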
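As an independent check of the R simulation in problem 4, the same two-state chain can be simulated in Python (a sketch; the seed and variable names are arbitrary choices). The sample mean waiting time should come out close to the exact value (1 + 1/4)/2 = 0.625:

```python
import random

random.seed(1)
lam, mu = 1.0, 4.0  # jump rates: state 1 -> 2 at rate 1, state 2 -> 1 at rate 4
tmax = 5000.0

t, state = 0.0, 1
waits = []  # holding times between successive jumps
while t < tmax:
    rate = lam if state == 1 else mu
    dt = random.expovariate(rate)  # exponential holding time in the current state
    waits.append(dt)
    t += dt
    state = 2 if state == 1 else 1

mean_wait = sum(waits) / len(waits)
print(round(mean_wait, 3))  # close to 0.625, since the two states alternate
```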


More information

Probability Distributions

Probability Distributions Lecture : Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function (mgf)

More information

Why study probability? Set theory. ECE 6010 Lecture 1 Introduction; Review of Random Variables

Why study probability? Set theory. ECE 6010 Lecture 1 Introduction; Review of Random Variables ECE 6010 Lecture 1 Introduction; Review of Random Variables Readings from G&S: Chapter 1. Section 2.1, Section 2.3, Section 2.4, Section 3.1, Section 3.2, Section 3.5, Section 4.1, Section 4.2, Section

More information

EE126: Probability and Random Processes

EE126: Probability and Random Processes EE126: Probability and Random Processes Lecture 18: Poisson Process Abhay Parekh UC Berkeley March 17, 2011 1 1 Review 2 Poisson Process 2 Bernoulli Process An arrival process comprised of a sequence of

More information

1/2 1/2 1/4 1/4 8 1/2 1/2 1/2 1/2 8 1/2 6 P =

1/2 1/2 1/4 1/4 8 1/2 1/2 1/2 1/2 8 1/2 6 P = / 7 8 / / / /4 4 5 / /4 / 8 / 6 P = 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Andrei Andreevich Markov (856 9) In Example. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 P (n) = 0

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Notes on Continuous Random Variables

Notes on Continuous Random Variables Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes

More information

Renewal theory and its applications

Renewal theory and its applications Renewal theory and its applications Stella Kapodistria and Jacques Resing September 11th, 212 ISP Definition of a Renewal process Renewal theory and its applications If we substitute the Exponentially

More information

STAT 380 Continuous Time Markov Chains

STAT 380 Continuous Time Markov Chains STAT 380 Continuous Time Markov Chains Richard Lockhart Simon Fraser University Spring 2018 Richard Lockhart (Simon Fraser University)STAT 380 Continuous Time Markov Chains Spring 2018 1 / 35 Continuous

More information