
MATH 56A: HOMEWORK

From the syllabus: There will be weekly homework. The first HW might have the problem: Find a formula for the n-th Fibonacci number by solving the linear recurrence. Students are encouraged to work on their homework in groups and to access all forms of aid including expert advice, internet and other resources. The work you hand in should, however, be in your own words and in your own handwriting. And you should understand what you have written.

1. Homework 1 answers. Linear diffeq's and recursions. Four answers:

0.4. Find all functions x(t), y(t) so that

x'(t) = -x + y,    y'(t) = 3x - 3y

Find the particular solution so that x(0) = y(0) = 1/2.

The matrix is

A = [ -1   1 ]
    [  3  -3 ]

This has eigenvalues 0, -4 with corresponding eigenvectors X_1 = (1, 1), X_2 = (1, -3). So A = Q D Q^{-1} where

Q = [ 1   1 ],   D = [ 0   0 ],   Q^{-1} = (1/4) [ 3   1 ]
    [ 1  -3 ]        [ 0  -4 ]                   [ 1  -1 ]

And

e^{tA} = Q e^{tD} Q^{-1} = (1/4) [ 3 + e^{-4t}     1 - e^{-4t}  ]
                                 [ 3 - 3e^{-4t}    1 + 3e^{-4t} ]

The general solution is X = e^{tA} X_0, or

x = x_0 (3 + e^{-4t})/4 + y_0 (1 - e^{-4t})/4
y = x_0 (3 - 3e^{-4t})/4 + y_0 (1 + 3e^{-4t})/4

When x_0 = y_0 the e^{-4t} terms all cancel and we get that x, y are constant functions. In particular, x = y = 1/2 is the particular solution in the homework.

Some people found another method but didn't carry it through. Some of you noticed that the equations say y' = -3x', or

dy/dt = -3 dx/dt

Cancel the dt's and integrate: dy = -3 dx, which gives y = -3x + C_1. Now you have to continue and put it back into the original equation:

dx/dt = -x + y = -x - 3x + C_1 = -4x + C_1

dx/(-4x + C_1) = dt

-(1/4) ln|-4x + C_1| = t + C_2

So -4x + C_1 = e^{-4t - 4C_2}, and

x = C_3 e^{-4t} + C_1/4

and

y = -3x + C_1 = -3 C_3 e^{-4t} + C_1/4

When you put in the initial conditions you find C_1 = 2, C_3 = 0. Remember that you need to add +C with a new C every time you integrate.

0.5. Find all functions f from integers to real numbers so that

f(n) = (1/2) f(n+1) + (1/2) f(n-1) - 1

[Show first that f(n) = n^2 is a particular solution.]

To solve the homogeneous equation, try f = a^n. If a is a double root then the second solution is f(n) = n a^n. The homogeneous equation gives

a^2 - 2a + 1 = 0

This has only one root: a = 1. So, the solutions are f(n) = 1 and f(n) = n. Thus the general solution is

n^2 + bn + c

where b and c are constants. There is no constant in front of the particular solution.

0.6. (a) Find all functions f : R → R so that

f''(x) + f'(x) + f(x) = 0

Here you try f(x) = e^{λx} and you find that λ^2 + λ + 1 = 0, so

λ = (-1 ± i√3)/2

e^{λx} = e^{-x/2} ( cos(√3 x/2) ± i sin(√3 x/2) )

To get a real solution students correctly took a linear combination of the real and imaginary parts:

f(x) = a e^{-x/2} cos(√3 x/2) + b e^{-x/2} sin(√3 x/2)

(b) Find all functions f : Z → R so that

f(n+2) + f(n+1) + f(n) = 0

You try f(n) = a^n and you get a^2 + a + 1 = 0, so

a = (-1 ± i√3)/2

So, an arbitrary complex solution is given by

f(n) = a ( (-1 + i√3)/2 )^n + b ( (-1 - i√3)/2 )^n

In order for this to be a real number it must be equal to its complex conjugate. So, b = ā. I.e., a = c + id, b = c - id, where

c = f(0)/2,    d = -( f(1) + f(0)/2 ) √3/3
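The matrix exponential in 0.4 is easy to check numerically. A minimal sketch (my addition, not part of the original answer) using numpy and scipy; it verifies the closed form of e^{tA} and the constant particular solution:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [3.0, -3.0]])

def etA(t):
    """Closed form e^{tA} = (1/4)[[3+e^{-4t}, 1-e^{-4t}], [3-3e^{-4t}, 1+3e^{-4t}]]."""
    e = np.exp(-4 * t)
    return np.array([[3 + e, 1 - e],
                     [3 - 3 * e, 1 + 3 * e]]) / 4

for t in [0.0, 0.5, 2.0]:
    assert np.allclose(expm(t * A), etA(t))

# The particular solution with x(0) = y(0) = 1/2 stays constant:
X0 = np.array([0.5, 0.5])
print(etA(1.0) @ X0)   # -> [0.5 0.5]
```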

2. Homework 2. p. 35 #1.1, 1.2, 1.3.

1.1. This is a Markov process with states 0, 1, 2, 3, 4 representing the number of newspapers in the pile in the evening. The transition matrix is

P = [ 1/3  2/3   0    0    0  ]
    [ 1/3   0   2/3   0    0  ]
    [ 1/3   0    0   2/3   0  ]
    [ 1/3   0    0    0   2/3 ]
    [  1    0    0    0    0  ]

1.2. Given that

P = [ 1/3  2/3 ]
    [ 3/4  1/4 ]

the probability that X_3 = 1 given that X_0 = 0 is the (0,1) entry of P^3. Instead of doing this in the straightforward boring method I will use right eigenvectors: P v_i = λ_i v_i. The eigenvalues of P are 1, -5/12 with right eigenvectors v_0 = (1,1) and v_1 = (8,-9):

e_0 = (9/17) v_0 + (1/17) v_1

so

P^3 e_0 = (9/17) v_0 + (-5/12)^3 (1/17) v_1

whose 0th coordinate is p_3(0,0) = 9/17 + (8/17)(-5/12)^3 ≈ 0.4954. So

p_3(0,1) = 1 - p_3(0,0) ≈ 0.5046

1.3. Given the 3 × 3 transition matrix P of the problem, what is the long term probability of being in state 2?

(1) By directly raising this matrix to a high power. By squaring the matrix 4 times you get P^16, whose rows are all (approximately) equal to the invariant distribution.

(2) By directly computing the invariant distribution as a left eigenvector. The equation is (x, y, z)P = (x, y, z), with solution

π = (x, y, z) = (15/66, 27/66, 24/66) = (0.2273, 0.4091, 0.3636)
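A quick numerical cross-check of the eigenvector calculation in 1.2 (a sketch, my addition, not part of the original answer):

```python
import numpy as np

P = np.array([[1/3, 2/3],
              [3/4, 1/4]])

P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 1])   # -> 0.50463..., the (0,1) entry of P^3

# Same number via the spectral decomposition used above:
p3_00 = 9/17 + (8/17) * (-5/12) ** 3
print(1 - p3_00)  # agrees
```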

3. Homework 3. p. 36 #1.8, 1.9, 1.10, 1.15.

1.8. (a) π(a) = 3/12 = 1/4
(b) E(T) = 1/π(a) = 4
(c) Make A absorbing. M = (I - Q)^{-1}. The C-th row of M is given by solving the equation X(I - Q) = e_C:

X = (9/2, 4/2, 1/2, 4/2)

So, the answer is M_CB = E_C(#visits to B before A) = 9/2.
(d) Make A, C absorbing. α(B, C) = P_B(A before C) = 4/7 is the BC coordinate of MS.
(e) Make A absorbing. Then E_C(#steps to A) = 9/2 + 4/2 + 1/2 + 4/2 = 9.

1.9. (a) Irreducible.
(b) The period is 3.
(c) Since the period is 3 and 1000 ≡ 1 (mod 3), p_1000(1,x) is approximately the limit of p_{3n+1}(1,x). So p_1000(1,1) = 0, p_1000(1,2) = 0, and p_1000(1,4) = 5/12 (three times π(4)).
(d) The return time T_1 = 3 is constant. So, π(1) = 1/3.
(e) π = (1/3, 1/9, 2/9, 5/36, 7/36)

1.10. (a) This is irreducible and aperiodic.

π = (233, 89, 34, 13, 5, 2, 1)/377

(b) The answer is π(4) p(4,5) p(5,0).
(c) 1/144
(d) E(T) = 1/π(3) = 377/13 = 29.
(e) Make 6 absorbing. The answer is the sum of the entries M_{0x} of M = (I - Q)^{-1}.

1.15. Use the equation r = rP, where r(j) = Σ_{n=0}^{T-1} P(X_n = j) is the expected number of visits to j before the return time T to the starting state i. For each fixed T you get:

(rP)(k) = Σ_j r(j) p(j,k) = Σ_{n=0}^{T-1} Σ_j P(X_n = j) p(j,k) = Σ_{n=0}^{T-1} P(X_{n+1} = k) = Σ_{n=1}^{T} P(X_n = k) = r(k)

Since the equation holds for each fixed T, it holds when T is random. Now divide by Σ_j r(j) = E(T) to get

π(j) = r(j)/E(T)

But r(i) = 1. So, π(i) = 1/E(T).

4. Homework 4 (Chap 2). p. 59 #2.7, 2.8, 2.18.

2.7. Are these positive recurrent, null recurrent or transient?

(a) This process is null recurrent:

p(x, 0) = 1/(x+2),    p(x, x+1) = (x+1)/(x+2)

In this process, you keep coming back to state 0. However, to be recurrent you need to know that the probability of returning to 0 is 1. The probability that you will get to state n before returning to 0 is

p(0,1) p(1,2) ··· p(n-1,n) = (1/2)(2/3) ··· (n/(n+1)) = 1/(n+1)

Since this goes to 0, the probability of returning to 0 is 1. So, this is recurrent. To see if it is positive recurrent you need to find an invariant distribution or show that one exists. This is a vector solution of the matrix equation πP = π so that the entries of π add up to 1. But the matrix equation gives:

π(x+1) = π(x) p(x, x+1) = π(x)(x+1)/(x+2)

So, π(x)(x+1) = C is constant. But this is impossible since

1 = Σ π(x) = Σ C/(x+1)

is a diverging sum. So, there is no invariant distribution. So, this process is null recurrent.

(b) This one is positive recurrent:

p(x, 0) = (x+1)/(x+2),    p(x, x+1) = 1/(x+2)

This one has a higher probability of returning to 0 than the last one. So, it must also be recurrent. To see if it is positive recurrent we again look for an invariant distribution. The equation

π(x+1) = π(x) p(x, x+1) = π(x)/(x+2)

gives π(x) = π(0)/(x+1)!. The normalization Σ π(x) = 1 gives

1 = Σ_{x≥0} π(0)/(x+1)! = π(0)(e - 1)

So,

π(x) = (e - 1)^{-1}/(x+1)!

is an invariant distribution and the process is positive recurrent.
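For chain (b) the invariant distribution can be checked by computer. A small sketch (my addition) verifying that π(x) = (e-1)^{-1}/(x+1)! sums to 1 and satisfies the invariance equations:

```python
import math

# Chain (b): p(x,0) = (x+1)/(x+2), p(x,x+1) = 1/(x+2)
N = 60  # truncation; the tail beyond this is negligible
pi = [1 / ((math.e - 1) * math.factorial(x + 1)) for x in range(N)]
print(sum(pi))  # -> 1.0 (up to truncation error)

# pi(x+1) = pi(x) p(x, x+1):
for x in range(N - 1):
    assert abs(pi[x + 1] - pi[x] / (x + 2)) < 1e-15

# Balance at 0: pi(0) = sum_x pi(x) (x+1)/(x+2) = 1/(e-1)
print(sum(pi[x] * (x + 1) / (x + 2) for x in range(N)))  # -> 0.58197...
```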

(c) This one is transient:

p(x, 0) = 1/(x+1)^2,    p(x, x+1) = 1 - 1/(x+1)^2

Since the return-to-0 probabilities are summable, the process is transient:

Σ_{x≥1} 1/(x+1)^2 < Σ_{x≥1} 1/x^2 = π^2/6 < ∞

(Or use the integral test or the p-test for convergence.) To see that the process is transient, take a number n so that

Σ_{x≥n} 1/(x+1)^2 < ε

Then, once you reach state n, the probability that you will ever return to 0 is less than ε. Since there is only one communication class, you keep returning to n or higher and eventually you never return to 0.

2.8. Branching process.

(a) p_0 = .25, p_1 = .4, p_2 = .35. The extinction probability is the smallest positive solution of

a = φ(a) = Σ p_i a^i = .25 + .4a + .35a^2

a = (.6 ± .1)/.7 = 1, 5/7

The smaller number is the answer: a = 5/7.

(b) p_0 = .5, p_1 = .1, p_3 = .4. Here you get the cubic equation

.4a^3 - .9a + .5 = 0

But you can factor out a - 1 since a = 1 is always a solution. You get

4a^2 + 4a - 5 = 0

a = (√6 - 1)/2 ≈ .7247

(c) p_0 = .9, p_1 = .05, p_2 = .02, p_3 = .01, p_6 = .01, p_13 = .01. Here the average number of offspring is

Σ i p_i = .31 < 1

Therefore, the probability of extinction is one.

(d) p_i = (1-q) q^i for some 0 < q < 1. This time, the average number of offspring is

μ = Σ i p_i = (1-q) Σ i q^i = q/(1-q)

This is ≤ 1 if q ≤ 1/2. So a = 1 in that case. If q > 1/2 then the extinction probability is the solution of

Σ (1-q) q^i a^i = (1-q)/(1-qa) = a

which gives

a = (1-q)/q
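The fixed-point characterization a = φ(a) also gives a simple numerical method: iterating a ← φ(a) from a = 0 converges monotonically to the smallest nonnegative fixed point. A sketch (my addition; the offspring probabilities for (c) are as reconstructed above):

```python
def extinction(p):
    """Smallest nonnegative solution of a = phi(a) = sum_i p[i] a^i,
    found by iterating a <- phi(a) starting from a = 0."""
    a = 0.0
    for _ in range(1000):
        a = sum(pi * a ** i for i, pi in enumerate(p))
    return a

print(extinction([0.25, 0.4, 0.35]))       # (a) -> 5/7 = 0.7142...
print(extinction([0.5, 0.1, 0.0, 0.4]))    # (b) -> (sqrt(6)-1)/2 = 0.7247...
# (c): subcritical, so extinction probability is 1
pc = [0.9, 0.05, 0.02, 0.01, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0.01]
print(extinction(pc))                      # -> 1.0
```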

2.18. This is a rigorous proof of Stirling's formula

lim_{n→∞} n! / ( √(2π) n^{n+1/2} e^{-n} ) = 1

(a) The central limit theorem says:

lim_{n→∞} Σ_{n ≤ k < n+a√n} p(n,k) = ∫_0^a (1/√(2π)) e^{-x^2/2} dx

where p(n,k) = e^{-n} n^k / k! is the Poisson probability. Theorem 2.18.1 (CLT): If Y_n = X_1 + X_2 + ··· + X_n is the sum of i.i.d. random variables with mean μ and standard deviation σ, then the random variable

(Y_n - nμ)/(σ√n)

approaches a standard normal distribution in the sense that

lim_{n→∞} P( nμ + bσ√n ≤ Y_n < nμ + aσ√n ) = ∫_b^a (1/√(2π)) e^{-x^2/2} dx

For the Poisson distribution we have μ = 1 = σ^2. If we take b = 0 then the limit becomes:

lim_{n→∞} P( n ≤ Y_n < n + a√n ) = lim_{n→∞} Σ_{n ≤ k < n+a√n} p(n,k)

(b) We need to show that, for n ≤ k < n + a√n,

e^{-a^2} p(n,n) ≤ p(n,k) ≤ p(n,n)

Let δ = k - n. Then

p(n,k) = e^{-n} n^k / k! = p(n,n) · n^δ / ( (n+1)(n+2) ··· (n+δ) )

But, since 1 + j/n ≤ e^{j/n},

n^δ / ( (n+1) ··· (n+δ) ) = ∏_{j=1}^{δ} 1/(1 + j/n) ≥ e^{-(1 + 2 + ··· + δ)/n} ≥ e^{-δ^2/n} > e^{-a^2}

since δ < a√n. This shows that p(n,k) ≥ e^{-a^2} p(n,n). The other inequality is easy.

(c) Finally we are supposed to conclude that

lim_{n→∞} √n p(n,n) = 1/√(2π)

from which Stirling's formula follows. From (a) and (b) we get

a√n e^{-a^2} p(n,n) - ε ≤ ∫_0^a (1/√(2π)) e^{-x^2/2} dx ≤ a√n p(n,n) + ε

where ε > 0 is arbitrarily small. (This comes from replacing only the middle term with its limit: if a_n ≤ b_n then lim a_n ≤ lim b_n, but we can only say a_n ≤ lim b_n + ε.) Dividing by a and taking the limit as a → 0 gives

√n p(n,n) - ε ≤ 1/√(2π) ≤ √n p(n,n) + ε

which is what we wanted to prove.
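The conclusion √n p(n,n) → 1/√(2π) is easy to see numerically. A sketch (my addition, not part of the proof):

```python
import math

def p(n, k):
    """Poisson(n) probability of k, computed in logs to avoid overflow."""
    return math.exp(-n + k * math.log(n) - math.lgamma(k + 1))

for n in [10, 100, 1000, 10000]:
    print(n, math.sqrt(n) * p(n, n))   # -> 1/sqrt(2*pi) = 0.39894...
```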

5. Homework 5 (Chap 3). p. 84 #3.5, 3.8, 3.11, 3.12.

3.5. Let X_t be a Markov chain with state space S = {1, 2} and rates α(1,2) = 1, α(2,1) = 4. Find P_t.

The infinitesimal generator is

A = [ -1   1 ]
    [  4  -4 ]

This matrix has eigenvalues 0, -5 with corresponding right eigenvectors (1,1)^t, (1,-4)^t forming the matrix Q:

Q = [ 1   1 ],   D = [ 0   0 ],   Q^{-1} = [ 4/5   1/5 ]
    [ 1  -4 ]        [ 0  -5 ]             [ 1/5  -1/5 ]

P_t = e^{tA} = Q e^{tD} Q^{-1} = [ 1   1 ] [ 1     0     ] [ 4/5   1/5 ]
                                 [ 1  -4 ] [ 0  e^{-5t}  ] [ 1/5  -1/5 ]

So,

P_t = (1/5) [ 4 + e^{-5t}     1 - e^{-5t}  ]
            [ 4 - 4e^{-5t}    1 + 4e^{-5t} ]

3.8. The infinitesimal generator is the 4 × 4 rate matrix A given in the problem.

(a) Find the invariant distribution π. This is the solution of πA = 0 normalized so that the sum of the coordinates is 1:

π = (1/38)(3, 7, 9, 19) ≈ (.079, .184, .237, .5)

(b) If X_0 = 1, what is the expected amount of time until the first jump? The change rate at state 1 is 3 times per unit time. So, the expected wait is 1/3 of a unit of time.

(c) If X_0 = 1, what is the expected time until you reach state 4? You take Ã = A with the 4th row and 4th column deleted. Then you want to solve the equation Ãb = -1 (the column vector of -1's):

Ã (b(1), b(2), b(3))^t = (-1, -1, -1)^t

The solution is easy: b(1) = b(2) = b(3) = 1. So, b(x) = (expected time to get from x to 4) = 1 for x = 1, 2, 3.
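As in Homework 1, the closed form for P_t in 3.5 can be checked against a numerical matrix exponential. A minimal sketch (my addition):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [4.0, -4.0]])

def Pt(t):
    """Closed form P_t = (1/5)[[4+e^{-5t}, 1-e^{-5t}], [4-4e^{-5t}, 1+4e^{-5t}]]."""
    e = np.exp(-5 * t)
    return np.array([[4 + e, 1 - e],
                     [4 - 4 * e, 1 + 4 * e]]) / 5

for t in [0.1, 1.0, 3.0]:
    assert np.allclose(expm(t * A), Pt(t))
print(Pt(10.0))   # rows converge to the invariant distribution (4/5, 1/5)
```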

3.11. X_t is the birth-death process with λ_n = 1 + 1/(n+1) and μ_n = 1. Is this positive recurrent, null recurrent or transient?

Since λ_n = 1 + 1/(n+1) = (n+2)/(n+1), the product collapses:

λ_{n-1} λ_{n-2} ··· λ_0 = (n+1)/n · n/(n-1) ··· 2/1 = n + 1

The sum

Σ (λ_0 λ_1 ··· λ_{n-1})/(μ_1 μ_2 ··· μ_n) = Σ (n+1) = ∞

So, the process is not positive recurrent. Also,

Σ (μ_1 ··· μ_n)/(λ_1 ··· λ_n) = Σ 2/(n+2) = ∞

So, the process is not transient. Thus, it must be null recurrent.

What about λ_n = 1 - 1/(n+2)? This is almost the same thing:

λ_n = 1 - 1/(n+2) = (n+1)/(n+2)

and the product collapses again:

λ_{n-1} ··· λ_0 = n/(n+1) · (n-1)/n ··· 1/2 = 1/(n+1)

The sum

Σ (λ_0 ··· λ_{n-1})/(μ_1 ··· μ_n) = Σ 1/(n+1) = ∞

So, the process is not positive recurrent. Also,

Σ (μ_1 ··· μ_n)/(λ_1 ··· λ_n) = Σ (n+2)/2 = ∞

So, the process is not transient. Thus, it must be null recurrent.

3.12. For λ_n = nλ, μ_n = nμ, what values of λ, μ make the extinction probability 1? In this problem we first have to make the Markov chain irreducible by changing λ_0 to be 1. Then the extinction probability is one if and only if the new irreducible chain is recurrent, i.e., not transient. So, we take the sum:

Σ (μ_1 ··· μ_n)/(λ_1 ··· λ_n) = Σ (μ/λ)^n

This converges (making the chain transient) if and only if μ < λ. So, the extinction probability is one if and only if μ ≥ λ and μ > 0. (When μ > λ the chain is positive recurrent and the expected time to extinction is finite.)

6. Homework 6 (Chap 4). p. 98 #4.1, 4.2, 4.12, 4.13.

4.1. We have a simple random walk with absorbing walls on {0, 1, 2, ..., 10} with the payoff function f(x) given in the problem. Find the optimal stopping rule and the value function v(x) which gives the expected payoff at each state.

This one is easy. You use the convex function rule: v is the smallest concave function with v ≥ f, which you get by interpolating f linearly between the stopping states. The optimal stopping rule is to stop at x = 4, 9 and continue otherwise (if you can).

4.2. (a) Add a cost function of g(x) = .75 at each move. Now we have to subtract g at each single gap, 2g at each double gap, and 3g, 4g, 3g at each triple gap. If the gap were longer we would have to solve the linear recursion

v(k) = (1/2) v(k-1) + (1/2) v(k+1) - g

For constant g the correction, for a gap of length n, is to subtract k(n-k)g from the linear interpolation. (The particular solution is v(k) = k^2 g. The homogeneous solutions are v(k) = 1, v(k) = k.) The optimal stopping rule is to continue only at x = 3, 5.

(b) A discount rate of α = .95. By iteration we get a value function whose optimal stopping rule is: stop at x = 1, 4, 6, 9. You can get the exact value of the (present) value function v(x) by solving the equation

v(x) = max( f(x), α (v(x-1) + v(x+1))/2 )

Solving this at the continuation states gives the exact values of v(2), v(3), v(5), v(7), v(8).

(c) With both the cost g(x) = .75 and the discount rate α = .95. By iteration we get a value function whose optimal stopping rule is to continue at x = 3, 5 and stop elsewhere. You can get the exact value of the (present) value function v(x) by solving the equation

v(x) = max( f(x), α (v(x-1) + v(x+1))/2 - g(x) )

So,

v(3) = .95 (14/2) - .75 = 5.9
v(5) = .95 (16/2) - .75 = 6.85

are exact.
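The iteration behind all of these answers is short to code. A sketch (my addition); since the payoff table did not survive transcription, the vector f below is made up purely to illustrate the scheme v ← max(f, average of neighbors):

```python
import numpy as np

# Hypothetical payoff on {0,...,10}; replace with the values from the problem.
f = np.array([0, 2, 4, 3, 10, 0, 6, 4, 3, 9, 0], dtype=float)

v = f.copy()
for _ in range(2000):
    w = v.copy()
    w[1:-1] = np.maximum(f[1:-1], (v[:-2] + v[2:]) / 2)  # walls 0, 10 absorb
    v = w

print(v)
print("stop at:", [x for x in range(11) if np.isclose(v[x], f[x])])
```

For a cost, subtract g inside the maximum; for a discount, multiply the neighbor average by α, exactly as in the equations above.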

4.12. Now you roll two dice and f(x) is the sum of the two numbers, except that f(7) = 0.

a) What is your expected payoff if you always stop after the first roll? This is just

E_1 = Σ p(x) f(x) = 210/36 = 35/6 ≈ 5.83

b) What is your optimal payoff? The question itself is a hint. Instead of trying to compute v_n(x) we compute the expected payoff E_n for v_n. Given E_n we can compute E_{n+1} by

E_{n+1} = Σ_{x ≠ 7} p(x) max( f(x), E_n )

and the E_n converge. Once you realize that the optimal stopping time is to stop when you get more than 7, then you can calculate the expected value exactly:

E(f(X_T)) = 140/21 = 20/3 ≈ 6.67

Since this is more than 6, the strategy is correct (continue on anything less than 7, stop on anything more than 7).

4.13. a) Do it again with the cost function g = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] (a cost of 2 for each additional roll). Start with E_1 = 0 and repeat:

E_{n+1} = Σ_{x ≠ 7} p(x) max( f(x), E_n - g(x) )

The iterates converge, and the optimal strategy is to continue only if you get 2 or 3. You can solve for E exactly:

E = (E - 2)(3/36) + 202/36

to get E = 196/33 ≈ 5.94. (If you get a 4 you should keep it instead of paying 2 and getting 5.94, for a net of 3.94.)
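The E_n iteration is two lines of code. A sketch (my addition) for the cost-free game of 4.12 b):

```python
# Iterate E_{n+1} = sum_{x != 7} p(x) * max(f(x), E_n) for the dice game.
p36 = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

E = 0.0
for _ in range(100):
    E = sum(n * max(x, E) / 36 for x, n in p36.items() if x != 7)

print(E)          # -> 6.666..., so stop exactly when the roll beats 7
print(140 / 21)   # the closed-form answer found above
```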

b) With discount rate α = .8. Start with E_1 = 0 and repeat:

E_{n+1} = Σ_{x ≠ 7} p(x) max( f(x), .8 E_n )

The iterates converge, and the optimal strategy is to continue only if you get 2, 3 or 4. You can get the exact value for E by solving the equation

E = 6(.8E)/36 + 190/36

which gives E = 1900/312 ≈ 6.09. (If you get a 5 you should keep it instead of rolling again to get an expected discounted payoff of 80% of 6.09, which would be 4.87.)

c) Both. Start with E_1 = 0 and repeat:

E_{n+1} = Σ_{x ≠ 7} p(x) max( f(x), .8 E_n - g(x) )

The iterates converge, and the optimal strategy is to continue only if you get 2. You can get the exact answer by solving for E:

E = (.8E - 2)/36 + 208/36

This gives E = 2060/352 ≈ 5.85. (With a 2 you should pay the fee of 2 and get .8E, for a net payoff of .8(5.85) - 2 ≈ 2.68, which beats stopping with 2.)

What is your expected payoff if you always stop after the first roll? In all three cases this is the same as before (since you don't pay to continue and you don't get discounted):

E_1 = Σ p(x) f(x) = 210/36 = 35/6

7. Homework 7 (Chap 5). p. 115 #5.1, 5.5, 5.7, 5.15.

5.1. If X_t is a Poisson process with λ = 1 then find E(X_2 | X_1) and E(X_1 | X_2).

By the definition of a Poisson process, E(X_2 - X_1) = λ = 1. Also, X_2 - X_1 is independent of X_1. So,

E(X_2 | X_1) = E(X_2 - X_1 | X_1) + E(X_1 | X_1) = X_1 + 1

If the total X_2 = X_1 + (X_2 - X_1) is given then, by symmetry,

E(X_1 | X_2) = X_2/2

5.5. Suppose that X_n is the number of individuals in the n-th generation in a branching process. If the mean number of offspring is μ then show that

M_n = μ^{-n} X_n

is a martingale w.r.t. X_0, X_1, .... By definition of μ we have

E(X_{n+1} | X_n) = μ X_n

So,

E(M_{n+1} | F_n) = μ^{-n-1} E(X_{n+1} | X_n) = μ^{-n-1} μ X_n = M_n

and we see that M_n is a martingale.

5.7. Take the random walk on Z where the probability of going right at each step is p < 1/2 and the probability of going left is 1 - p. Take S_n = a + X_1 + ··· + X_n, where 0 < a < N.

(a) Show that

M_n = [ (1-p)/p ]^{S_n}

is a martingale. First note that

M_{n+1} = [ (1-p)/p ]^{S_n + X_{n+1}} = M_n [ (1-p)/p ]^{X_{n+1}}

And

E( [ (1-p)/p ]^{X_{n+1}} ) = p · (1-p)/p + (1-p) · p/(1-p) = (1-p) + p = 1

So,

E(M_{n+1} | F_n) = M_n E( [ (1-p)/p ]^{X_{n+1}} ) = M_n

So, M_n is a martingale.

(b) Suppose that T is the first time that S_n reaches 0 or N. Compute P(S_T = 0). By the OST (optional stopping theorem) we have

E(M_T) = M_0 = [ (1-p)/p ]^a

But this expected value is also given by

E(M_T) = P(S_T = 0) · 1 + (1 - P(S_T = 0)) [ (1-p)/p ]^N

So,

P(S_T = 0) = ( [(1-p)/p]^a - [(1-p)/p]^N ) / ( 1 - [(1-p)/p]^N ) = ( (1-p)^N - (1-p)^a p^{N-a} ) / ( (1-p)^N - p^N )

5.15. Suppose that M_n is a martingale. Suppose there exists Y ≥ 0 so that E(Y) < ∞ and |M_n| ≤ Y for all n. Then show that the M_n are uniformly integrable.

Since E(Y) < ∞, the size of the tail of Y goes to zero. I.e., for any ε > 0,

E( Y I_{Y>K} ) < ε

for K sufficiently large. But |M_n| > K implies Y ≥ |M_n| > K. So, the indicator function of |M_n| > K is less than or equal to the indicator function for Y > K. So,

|M_n| I_{|M_n|>K} ≤ Y I_{Y>K}

and

E( |M_n| I_{|M_n|>K} ) ≤ E( Y I_{Y>K} ) < ε

Since the same K works for all n, we have uniform integrability.
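The ruin probability formula in 5.7(b) can be sanity-checked by simulation. A sketch (my addition), with illustrative values p = 0.4, a = 3, N = 10:

```python
import random

def ruin_prob(p, a, N):
    """P(S_T = 0) = ((q/p)^a - (q/p)^N) / (1 - (q/p)^N), with q = 1 - p."""
    r = (1 - p) / p
    return (r ** a - r ** N) / (1 - r ** N)

def simulate(p, a, N, trials=100_000):
    hits0 = 0
    for _ in range(trials):
        s = a
        while 0 < s < N:
            s += 1 if random.random() < p else -1
        hits0 += (s == 0)
    return hits0 / trials

print(ruin_prob(0.4, 3, 10), simulate(0.4, 3, 10))  # the two agree closely
```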

8. Homework 8: Time is money (Chap 6). M/G/1 queueing is explained in the book.

8.1. The insurance company. We have an insurance company which starts with a certain amount of capital x. It has a constant income (premiums) which arrives at the rate of 1 (one unit of money per unit of time). For example, we can take the unit to be

1 day = $1,000,000

The insurance company settles (pays) claims from time to time. Assume the occurrence of claims is a Poisson process with rate λ. For example, it could be λ = 1/10, which means one claim every 10 days on average. Let U_n be the amount of the n-th claim (in millions of dollars). Assume that

λ E(U_n) < 1

This means the company is profitable in the long run. For example, suppose that every claim is exactly 5 (million $). Then the company expects to keep about half its premiums.

8.2. Conversion to queueing. Here is an outline of how you convert this to an M/G/1 queueing problem. Imagine that the company puts its money into a safe. Every time a claim is paid, money is taken out of the safe and there is a hole where the bundle of cash used to be. For each claim there is a new hole. As the money comes in, the holes are filled (at the rate of 1, in the order that they were made). So, if the first claim is 5 million $, it will take 5 days to fill that hole. Now imagine that the holes are people standing in line. When the n-th hole is being filled, the n-th person is being served. The time it takes to fill the hole is equal to the amount of the claim. This is U_n, the service time for the n-th person. Your homework this week is to complete this analogy to answer the questions, or at least convert the questions into queueing questions.

8.3. Homework questions. Suppose that λ = 1/10 and U_n = 5 is constant.

(1) Complete the analogy so that the relevant questions about the insurance company are converted into questions about the queue.
(2) The queue has an equilibrium distribution because it is positive recurrent. What does this mean in terms of the company, and can you find the particular equilibrium for the given rate of claims?
(3) What is the bankruptcy distribution function

B(x) := P(company will go bankrupt if its initial capital is x)

(What is the meaning of it in terms of the queue and can you calculate it?)

Some students asked if the premiums arrive at a particular time each day. No, we are doing a continuous process. The premiums come in at a continuous rate all the time. Claims come in at the (exponential) rate of λ = 1/10 and they are paid immediately with cash. The company is bankrupt when it cannot pay a claim immediately. (It cannot borrow money.) I will do the problem this weekend and let you know if you need more information or more formulas.

First we need to interpret the random variable Y_n. In the queue, this is the number of people still in line when the n-th person has been served. Serving people in line means fully paying off claims. So, Y_n is the number of claims left to be fully paid when the n-th claim has been fully paid. We know exactly how long this takes. So, Y_n times 5 million dollars is the amount of money that we are in the red 5n days after the first claim came in. During the 5 days that it takes to fully pay for the (n+1)-st claim, the number of new claims that will come in is given by the Poisson distribution. So,

P(Y_{n+1} = Y_n + k - 1) = e^{-5λ} (5λ)^k / k! = e^{-1/2} (1/2)^k / k! = p(k)

p(0) = 0.6065
p(1) = 0.3033
p(2) = 0.0758
p(3) = 0.0126
p(4) = 0.0016
p(5) = 0.00016
p(6) = 1.3 × 10^{-5}

This means the transition matrix P has entries

p(i,j) = p(j - i + 1)   if j ≥ i - 1 and i ≥ 1
p(i,j) = 1              if (i,j) = (0,1)
p(i,j) = 0              otherwise

The equilibrium distribution for Y is some infinite vector π = (π(0), π(1), π(2), ...) so that πP = π (and Σ π(n) = 1). This gives an infinite sequence of equations:

π(0) = π(1) p(0)
π(1) = π(0) + π(1) p(1) + π(2) p(0)
π(2) = π(1) p(2) + π(2) p(1) + π(3) p(0)
...
π(k) = π(1) p(k) + π(2) p(k-1) + ··· + π(k+1) p(0)    (k ≥ 2)

Since we know that

π(0) = 1/(E(τ) + 1) = (1 - λμ)/(2 - λμ) = 1/3

(where μ = E(U_n) = 5, so λμ = 1/2), we can solve these equations one by one:

π(1) = π(0)/p(0)
π(2) = ( π(1) - π(0) - π(1) p(1) )/p(0)
π(3) = ( π(2) - π(1) p(2) - π(2) p(1) )/p(0)
π(4) = ( π(3) - π(1) p(3) - π(2) p(2) - π(3) p(1) )/p(0)

and so on. The numbers π(n) represent the long term proportion of the time that the number of unpaid claims will be n.
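The one-by-one solution of the equilibrium equations is easy to automate. A sketch (my addition) that starts from π(0) = 1/3 and checks that the computed π sums to 1:

```python
import math

lam, U = 1/10, 5            # claim rate and (constant) claim size

def p(k):
    """Poisson(5*lam) = Poisson(1/2) probabilities."""
    return math.exp(-lam * U) * (lam * U) ** k / math.factorial(k)

pi = [(1 - lam * U) / (2 - lam * U)]   # pi(0) = 1/3
pi.append(pi[0] / p(0))                # from pi(0) = pi(1) p(0)
for k in range(1, 20):
    # pi(k) = pi(0)*[k==1] + pi(1)p(k) + pi(2)p(k-1) + ... + pi(k+1)p(0)
    s = pi[0] * (k == 1) + sum(pi[j] * p(k - j + 1) for j in range(1, k + 1))
    pi.append((pi[k] - s) / p(0))

print(sum(pi))   # -> very close to 1
print(pi[:6])
```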

The Markov chain method of analyzing the bankruptcy probability is to compute the probability that Y_n will reach k before reaching 0:

P_k := P( Y_n reaches k before reaching 0 | Y_0 = 1 )

This is the (1,1) entry of the matrix (I - Q)^{-1} S, where Q is the transition matrix restricted to the transient states 1, ..., k-1 and S is the column of probabilities of jumping from those states to a state ≥ k. For example,

P_2 = (1 - p(1))^{-1} (1 - p(0) - p(1))

and P_3 is the (1,1) entry of (I - Q)^{-1} S with

Q = [ p(1)  p(2) ],   S = [ 1 - p(0) - p(1) - p(2) ]
    [ p(0)  p(1) ]        [ 1 - p(0) - p(1)        ]

P_4, P_5, P_6 are computed the same way with larger matrices.

Suppose that the company starts with 10 million dollars. Then it will go bankrupt in the first round of claims if its first claim comes after 5k days (k ≥ 0) and before 5k + 5 days and Y_n goes up to k + 3. (And this is only approximate since it measures only whole numbers of claims and does not take into account how far the company has gone in partially paying off the n-th claim.) Since

P(first claim comes in between 5k and 5k+5 days) = e^{-k/2} - e^{-(k+1)/2}

this gives

P(bankruptcy) ≈ Σ_k ( e^{-k/2} - e^{-(k+1)/2} ) P_{k+3}

Most of you just took the number P_3 as the probability of bankruptcy during the first round of claims. This turns out to be more accurate, the exact bankruptcy probability (starting at 10 million) being about 5.3% as calculated on the next page. The sum above is slightly off, probably because it counts things twice. For example, the second term should be multiplied by the probability that the company survives the first round, to get the probability that it survives the first round and then dies on the second. With these corrections the sum becomes more accurate. I need to think about it more to figure out whether this is exactly the correct formula.

The exact bankruptcy probability has a formula very similar to the formula for the equilibrium distribution. Suppose that B(n) is the probability of bankruptcy for the company assuming that it starts with 5n million $. Then S(n) = 1 - B(n) is the survival probability (the probability that the company will never go bankrupt). For example, S(0) is the probability that the company can make it starting with zero capital. Obviously, it needs to survive for 5 days with zero claims. This has probability p(0) = e^{-1/2}. Then it has 5 million dollars and its survival probability is S(1). So,

S(0) = p(0) S(1)

If the company has 5 million then it can survive one claim but not two in the next 5 days. So, it has a p(0) + p(1) probability of surviving for 5 days. After that it has either S(2) or S(1) probability of survival (depending on whether there were 0 claims or 1 claim). So,

S(1) = p(0) S(2) + p(1) S(1)

And so on:

S(0) = S(1) p(0)
S(1) = S(1) p(1) + S(2) p(0)
S(2) = S(1) p(2) + S(2) p(1) + S(3) p(0)
...
S(k) = S(1) p(k) + S(2) p(k-1) + ··· + S(k+1) p(0)

If we knew what S(0) was, we could calculate the rest: each S(n) is a fixed multiple of S(0). If the company starts with a lot of money then its survival probability is close to 1. So S(n) → 1, which forces S(0) = 1/2, and then S(n) and B(n) = 1 - S(n) are given by:

S(0) = 0.5        B(0) = 0.5
S(1) = 0.8244     B(1) = 0.1756
S(2) = 0.9470     B(2) = 0.0530
S(3) = 0.9848     B(3) = 0.0152
S(4) = 0.9957     B(4) = 0.0043
S(5) = 0.9988     B(5) = 0.0012
S(6) = 0.9997     B(6) = 0.0003
...
S(8) ≈ 1          B(8) ≈ 1.85 × 10^{-5}

So, for example, if the company starts with 10 million dollars its probability of bankruptcy is B(2) ≈ 5.3%. Notice that if it starts with nothing it has a 50% chance of survival.
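The same recursion computes the survival probabilities directly. A sketch (my addition) reproducing the table, starting from S(0) = 1/2 and S(1) = S(0)/p(0):

```python
import math

def p(k):
    return math.exp(-0.5) * 0.5 ** k / math.factorial(k)

S = [0.5, 0.5 / p(0)]            # S(0) = 1/2, S(1) = S(0)/p(0)
for k in range(1, 9):
    # S(k) = S(1)p(k) + S(2)p(k-1) + ... + S(k+1)p(0)
    s = sum(S[j] * p(k - j + 1) for j in range(1, k + 1))
    S.append((S[k] - s) / p(0))

for n, Sn in enumerate(S):
    print(n, Sn, 1 - Sn)         # survival S(n) and bankruptcy B(n)
```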

9. Homework 9 (Chap 7). p. 170 #7.1, 7.10, and rewrite the ALOHA protocol clearly as a Markov process. In other words, make the question clear. You don't have to go through the answer (why it is null recurrent).

7.1. Show that every irreducible, discrete time, two-state Markov chain is reversible with respect to its invariant probability.

This just follows from the definitions. There are two states 1, 2. Irreducible means that in the transition matrix

P = [ 1-p    p  ]
    [  q   1-q  ]

we have p, q > 0. The invariant distribution is a distribution π so that πP = π:

(π(1), π(2)) P = ( π(1)(1-p) + π(2)q, π(1)p + π(2)(1-q) ) = π

This implies that π(1)p = π(2)q, or

π(1) p(1,2) = π(2) p(2,1)

This is the balance equation showing that the Markov chain is reversible.

7.10. Let α(x,y) be a symmetric rate function on the edges of Z^d. Suppose there are real numbers 0 < c_1 < c_2 < ∞ so that for all x, y with |x - y| = 1,

c_1 ≤ α(x,y) ≤ c_2

(a) Show that X_t is recurrent if d = 1 or 2. This uses the theorem stated on page 167 and proved in the next few pages. The theorem is that if α, β are symmetric transition rates on a graph and β(x,y) ≤ α(x,y) for all vertices x, y, and α is recurrent, then so is β. For the first part we use the fact, proved in class, that the integer lattices in Z and Z^2 are recurrent. This implies recurrence for the constant rate c_2, and the theorem implies that α ≤ c_2 is also recurrent.

(b) Show that X_t is transient for d ≥ 3. Same thing. If α were recurrent then the constant rate c_1 ≤ α would also be recurrent. But we know that for a constant rate, Z^d is transient for d ≥ 3.

10. Homework 10 (Chap 8a). Three problems:

a) Reflection principle. Take the example on page 179 and redo it for an arbitrary starting point and any variance. I.e., suppose that X_t is Brownian motion with variance σ^2 starting at X_0 = a. Calculate the probability

P( X_s = a for some 1 < s < t )

This is just a calculation. The point is to see why the values of a, σ don't matter. We want the probability that Brownian motion with variance σ^2, starting at a, will return to a sometime between time 1 and time t > 1:

P( X_s = a for some 1 < s < t | X_0 = a ) = ?

By symmetry and the reflection principle this is

4 P( X_1 > a and X_t < a | X_0 = a ) = 4 P( X_1 - X_0 = b > 0 and X_t - X_1 < -b | X_0 = a )

(At this point I have already made a irrelevant.) The probability for fixed b (in the interval (b, b+db]) is

φ_σ(b) db · Φ_{σ√(t-1)}(-b)

where

φ_σ(b) db = (1/√(2π) σ) e^{-b^2/2σ^2} db = (1/√(2π)) e^{-x^2/2} dx = φ_1(x) dx

(with x = b/σ) and Φ is the cumulative distribution function:

Φ_{σ√(t-1)}(-b) = ∫_{-∞}^{-b} φ_{σ√(t-1)}(y) dy = ∫_{b/(σ√(t-1))}^{∞} φ_1(y) dy

where we used the convert-to-standard-normal rule. Substituting b = xσ makes this

Φ_{σ√(t-1)}(-b) = ∫_{x/√(t-1)}^{∞} φ_1(y) dy

So, the answer is given by:

P( X_s = a for some 1 < s < t | X_0 = a ) = 4 ∫_0^∞ ∫_{x/√(t-1)}^∞ φ_1(x) φ_1(y) dy dx

Both a and σ are gone. So, the answer is the same as before.

b) Fractal dimension. Suppose that X_t is standard Brownian motion. Let Y_t be the continuous function

Y_t = max_{0 ≤ s ≤ t} X_s

Show that:
(1) Y_t is monotonically increasing.
(2) Y_t is differentiable almost everywhere (except on a set W of measure zero) with derivative zero.
(3) Calculate the box dimension of W.

You can use the theorem: If the dimension of a set A ⊆ R^d is less than d, then it has measure zero.

Proof. For ɛ small, we can cover A with C ɛ^{-D} cubes of size ɛ. The measure (d-dimensional volume) of each of these cubes is ɛ^d. So,

μ(A) ≤ C ɛ^{-D} ɛ^d = C ɛ^{d-D}

If D < d then lim_{ɛ→0} ɛ^{d-D} = 0. So, μ(A) = 0.

The first two steps are easy. For the box dimension, the answer is that W looks exactly like Z (the zero set of Brownian motion) and therefore has dimension 1/2. You could argue this intuitively or you could prove it rigorously. First you need the distribution function of Y_t:

P(Y_t > b) = 2 P(X_t > b)    (by the reflection principle)
           = 2 ( 1 - Φ(b/√t) )

So, the distribution function of Y_t is

F_{Y_t}(b) = P(Y_t ≤ b) = 2Φ(b/√t) - 1

and the density function is

f_{Y_t}(b) = (2/√t) φ(b/√t)

You also need to realize that Y_1 = max_{0≤s≤1} X_s has the same distribution as its time reversal,

Y_1^{op} = max_{0≤u≤1} (X_u - X_1)

Now, we want to calculate P(W ∩ [1,t] ≠ ∅). This event happens if

max_{1≤s≤t} (X_s - X_1) = b ≥ max_{0≤u≤1} (X_u - X_1)

for some b. The probability that this happens for b in (b, b+db] is f_{Y_{t-1}}(b) db · F_{Y_1}(b). So,

P(W ∩ [1,t] ≠ ∅) = ∫_0^∞ (2/√(t-1)) φ(b/√(t-1)) (2Φ(b) - 1) db
                 = 4 ∫_0^∞ ∫_0^{x√(t-1)} φ(x) φ(y) dy dx
                 = (2/π) ∫_0^{tan^{-1}√(t-1)} ∫_0^∞ e^{-r^2/2} r dr dθ
                 = (2/π) tan^{-1} √(t-1)

which is exactly the same as P(Z ∩ [1,t] ≠ ∅). Therefore, W and Z have the same box dimension.

c) Challenge question. Why does it make sense to say that the infinitesimal generator of Brownian motion is (1/2) ∂^2/∂x^2?

Hint: In the discrete case (discrete state space, continuous time), p_t(x,y) = (e^{tA})_{xy} and the xy coordinate of the infinitesimal generator is given by

A_{xy} = (∂/∂t) p_t(x,y) |_{t=0}

If f_t(x) is the distribution of states at time t, then

(∂/∂t) f_t(y) = Σ_x f_t(x) A_{xy}

The hint more or less gives the answer. You just need to formulate it. Here is one way. The infinitesimal generator of a Markov process can be defined to be the space operator A satisfying the equation

(∂/∂t) f_t(x) = (A f_t)(x)

where f_t(x) is the probability density function of the process at time t. (This is the same as the distribution of states if the number of particles is very large.) A space operator is any linear function A : C^∞(S) → C^∞(S), where S is the state space. The point is that A f_t depends only on the value of f_t(x) for the fixed time t and variable x (whereas ∂f_t/∂t depends only on the value of f_s(x) for s close to t and for x fixed, making it a time operator). If S is discrete (finite or countably infinite) then any function f : S → R is C^∞. So, the heat equation

(∂/∂t) f_t(x) = (1/2) (∂^2/∂x^2) f_t(x)

governing Brownian motion makes A = (1/2) ∂^2/∂x^2 the infinitesimal generator.

MATH 56A: QUIZZES

From the syllabus: Quizzes will be given every week or every other week. Students should form groups of 3 or 4 to work on these problems in class, solve them and help the other students in the group to understand them. Each group should hand in their answers signed by all members of the group.

More rules which apply to all quizzes and practice quizzes: Bring calculators and/or laptops. (However, the problems will be ones I can do by hand.) You can hand in attachments to your quiz including text and calculations. I will try to remember to bring in my USB memory stick formatted for PC.

Date: December 15, 2006.

MATH 56A: QUIZ PRACTICE

This first quiz is for practice and does not count. The purpose is for me to see what you can do and for you to see what kinds of questions I think of.

Practice Quiz

1. Give an example of a Markov chain that has two recurrent classes and two transient classes.

2. (Random walk with reflecting walls) Suppose there are four states 1, 2, 3, 4 in a line. If you are at one of the endpoints you always move inward in the next step. If you are at one of the inside points you move left with probability 1/3 and right with probability 2/3.
(1) What is the transition matrix? (Put a dot in place of each 0 in the matrix.)
(2) In the long run how much time is spent in each state? What formula did you use?
(3) What is the expected length of time between visits to state 3? What is the formula?
(4) What is the period of this Markov chain? How is this reflected in your answer to (2)?

3. A mouse is put through a maze over and over. At the end there are two trap doors. One gives a big reward, the other doesn't. The reward is placed 3/4 of the time on the left and 1/4 of the time on the right according to a random number generator. The mouse somehow knows that the reward is more often on one side than the other. He picks one of the two sides at random and keeps picking that side until he is wrong twice in a row. Then he switches to the other side and continues.
(1) Is this a Markov chain? Explain why or why not. If it isn't, then change the assumptions or setup so that it becomes a Markov chain.
(2) What are the states of your chain? (Fewer is better.)
(3) What is the transition matrix? (Put a dot in place of each 0 in the matrix.)

MATH 56A: QUIZZES

Answers to Practice Quiz

1. Give an example of a Markov chain that has two recurrent classes and two transient classes. Here is one answer (given by a diagram): a chain on four states in which states 1 and 2 are absorbing (two one-element recurrent classes) and states 3 and 4 are transient, leaving with probabilities p and q respectively, where p, q are both positive.

2. (Random walk with reflecting walls) Suppose there are four states 1, 2, 3, 4 in a line. If you are at one of the endpoints you always move inward in the next step. If you are at one of the inside points you move left with probability 1/3 and right with probability 2/3.

(1) What is the transition matrix?

P = [  ·    1    ·    ·  ]
    [ 1/3   ·   2/3   ·  ]
    [  ·   1/3   ·   2/3 ]
    [  ·    ·    1    ·  ]

(2) In the long run how much time is spent in each state? What formula did you use? The proportion of time spent in the states is given by the invariant distribution

π = ( 1/14, 3/14, 6/14, 4/14 )

This is the left eigenvector corresponding to eigenvalue 1, i.e., πP = π.

(3) What is the expected length of time between visits to state 3? What is the formula? The expected time between visits to 3 is 1/π(3) = 14/6 = 7/3.

(4) What is the period of this Markov chain? How is this reflected in your answer to (2)? The period is 2. So, half the time is spent in states 1, 3:

1/14 + 6/14 = 7/14 = 1/2

Date: September 7, 2006.
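For what it's worth, the invariant distribution can be checked by computer (a sketch, my addition, not part of the quiz answer):

```python
import numpy as np

P = np.array([[0,   1,   0,   0  ],
              [1/3, 0,   2/3, 0  ],
              [0,   1/3, 0,   2/3],
              [0,   0,   1,   0  ]])

# Invariant distribution = left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(pi)          # -> [1/14, 3/14, 6/14, 4/14]
print(1 / pi[2])   # expected return time to state 3 -> 7/3
```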

MATH 56A: QUIZZES

3. A mouse is put through a maze over and over. At the end there are two trap doors. One gives a big reward, the other doesn't. The reward is placed 3/4 of the time on the left and 1/4 of the time on the right according to a random number generator. The mouse somehow knows that the reward is more often on one side than the other. He picks one of the two sides at random and keeps picking that side until he is wrong twice in a row. Then he switches to the other side and continues.

(1) Is this a Markov chain? Explain why or why not. If it isn't, then change the assumptions or setup so that it becomes a Markov chain.

This is not a Markov chain because the future depends on the past instead of just on the present. To make it into a Markov chain, past events must be made into present states of mind of the mouse. So, we assume that the mouse has four possible present states of mind: (L,y), (L,n), (R,y), (R,n). Here L or R tells whether the mouse thinks the reward is on the left or right. The second coordinate, y = yes or n = no, tells whether his present hypothesis was correct last time. So, e.g., (L,n) means he thinks it is on the left even though it was on the right last time.

(2) What are the states of your chain? (Fewer is better.)

There are at least three different ways to write the four possible states. (L,n) is the same as (L,R), meaning he is guessing left and it was on the right last time. It is also (L,1), where the 1 is the number of consecutive mistakes he has made with the present hypothesis.

(3) What is the transition matrix? (Put a dot in place of each 0 in the matrix.)

        (L,y) (L,n) (R,y) (R,n)
(L,y) [ 3/4   1/4    ·     ·  ]
(L,n) [ 3/4    ·    1/4    ·  ]
(R,y) [  ·     ·    1/4   3/4 ]
(R,n) [ 3/4    ·    1/4    ·  ]

Experimental data (using people instead of mice, and skipping the maze) shows that people will eventually guess left 3/4 of the time and guess right 1/4 of the time. The optimal strategy is to guess left all the time.

MATH 56A: QUIZ 1

1. A cab driver works better when he is alone, but he doesn't like it. When he has no passengers he pages his buddy and five minutes later they talk for one unit of time. (The time unit is five minutes in this problem.) When the cab driver is talking to his friend, he has a 1/10 chance of picking up a passenger. If he is not talking to his friend he has a 1/3 chance of getting a fare (a passenger). The passenger has a half-life of 5 minutes. I.e., five minutes later there is a 1/2 chance that the passenger will still be in the cab.

a) Write down the four states of this problem.
b) What are the transition probabilities? Draw a graph (with four points and several arrows) and write down the transition matrix. [Hint: If he is alone and not talking to his friend, then in one step he will be talking to his friend, and in two steps (10 minutes) he won't be talking.]
c) Is this a Markov process? Is it irreducible? What are the communication classes? Are they transient, recurrent?
d) If the man is alone with no passengers, what is the probability that he will have a passenger and be talking to his friend 3 steps (15 minutes) later?

2. (Simple random walk) Draw a square with corners (labeled counterclockwise) A, B, C, D and draw a line connecting the opposite corners A, C. The Markov chain is simple random walk on this graph. For example, at point A you have an equal chance of moving to B, C or D in one step, but you cannot stay at A.

a) What is the equilibrium distribution?
b) If you start at B, what is the expected number of times you will visit A and C before returning to B?
c) If you start at A, what is the probability that you will reach B before you reach C?

3. You are given the following transition matrix.

P = [  0    0   1/2   0   1/2 ]
    [  0   1/3   0   2/3   0  ]
    [  0    0   1/2   0   1/2 ]
    [  0   1/2   0   1/4  1/4 ]
    [  0    0   1/2   0   1/2 ]

a) What are the communication classes? Are they transient or recurrent?
b) If you start in state 4, what is the probability that you will ever visit state 2?
c) In the long run, how much time is spent in each state?

MATH 56A: QUIZ 1 ANSWERS

1. A cab driver works better when he is alone, but he doesn't like [it]. When he has no passengers he pages his buddy and five minutes later they talk for one unit of time. (The time unit is five minutes in this problem.) When the cab driver is talking to his friend, he has a 1/10 chance of picking up a passenger. If he is not talking to his friend he has a 1/3 chance of getting a fare (a passenger). The passenger has a half-life of 5 minutes. I.e., five minutes later there is a 1/2 chance that the passenger will still be in the cab.

a) Write down the four states of this problem. The four states are:
(1) NN: no passenger, not talking to friend. Also, he is calling his friend and looking for a fare.
(2) NT: no passenger, talking to friend. He is looking for a fare, but not that hard.
(3) PN: has passenger, not talking to friend. And he is not calling his friend.
(4) PT: has passenger, talking to friend.

b) What are the transition probabilities?

P(NN → NT) = 2/3, P(NN → PT) = 1/3: since he is not talking, he has a 1/3 chance of picking up a passenger, and either way he will be talking next period.
P(NT → PN) = 1/10: since he is talking, he only has a 1/10 chance of picking up a passenger. He talks for only one time period. P(NT → NN) = 9/10.
P(PT → PN) = 1/2: it is 50-50 whether or not the passenger is still there, but he never talks for two consecutive time periods. P(PT → NN) = 1/2.
P(PN → PN) = 1/2: it is 50-50 whether or not the passenger is still there, but he never pages his friend if he has a fare. P(PN → NN) = 1/2: if this happens he pages his friend, but he has to wait another 5 minutes before they talk.
The other probabilities are zero.

Draw a graph (with four points and several arrows) and write down the transition matrix. [Hint: If he is alone and not talking to his friend, then in one step he will be talking to his friend, and in two steps (10 minutes) he won't be talking.]

Date: October 4, 2006.

MATH 56A: QUIZ 1 ANSWERS

        NN    NT    PN    PT
NN  [   0   2/3    0    1/3 ]
NT  [ 9/10   0    1/10   0  ]
PN  [ 1/2    0    1/2    0  ]
PT  [ 1/2    0    1/2    0  ]

c) Is this a Markov process? Yes. Is it irreducible? Yes. What are the communication classes? There is only one communication class: the whole set of 4 states. Are they transient, recurrent? Recurrent. (Finite and irreducible implies recurrent.)

d) If the man is alone with no passengers, what is the probability that he will have a passenger and be talking to his friend 3 steps (15 minutes) later?

By computer: p_3(NN, PT) = (P^3)_{1,4} = 23/90 ≈ 0.2556.

Without computer: There are only two ways to get from NN to PT in three steps:

NN → NT → NN → PT with probability (2/3)(9/10)(1/3) = 1/5

and

NN → PT → NN → PT with probability (1/3)(1/2)(1/3) = 1/18

for a total of 1/5 + 1/18 = 23/90.

2. (Simple random walk) Draw a square with corners (labeled counterclockwise) A, B, C, D and draw a line connecting the opposite corners A, C. The Markov chain is simple random walk on this graph. For example, at point A you have an equal chance of moving to B, C or D in one step, but you cannot stay at A.

P = [  0   1/3  1/3  1/3 ]
    [ 1/2   0   1/2   0  ]
    [ 1/3  1/3   0   1/3 ]
    [ 1/2   0   1/2   0  ]

a) What is the equilibrium distribution?

π = (.3, .2, .3, .2)

b) If you start at B, what is the expected number of times you will visit A and C before returning to B?

It takes 1/π(B) = 5 turns on average to return to B. So, you should multiply π by 5 (or divide by 0.2) and you get:

π/π(B) = (1.5, 1, 1.5, 1)

The expected number of visits to A and to C is 3/2 each. So the answer is 3. Another formula is to make B absorbing. Then the expected numbers of visits to A, C, D are the entries of

(1/2, 1/2, 0)(I - Q)^{-1} = (3/2, 3/2, 1)

So, the answer again is 3/2 + 3/2 = 3.

MATH 56A: QUIZ 1 ANSWERS

c) If you start at A, what is the probability that you will reach B before you reach C?

You make B and C absorbing. Then you have to take the (1,1) entry of the matrix (I - Q)^{-1} S, where on the transient states {A, D}

Q = [  0   1/3 ],   S = [ 1/3  1/3 ]
    [ 1/2   0  ]        [  0   1/2 ]

(the columns of S give the probabilities of absorption at B and at C). Then

I - Q = [  1    -1/3 ]
        [ -1/2    1  ]

with det(I - Q) = 5/6. So,

M = (I - Q)^{-1} = (6/5) [  1    1/3 ] = [ 6/5  2/5 ]
                         [ 1/2    1  ]   [ 3/5  6/5 ]

M S = [ 2/5  3/5 ]
      [ 1/5  4/5 ]

So, the answer is 2/5.

3. You are given the transition matrix P above.

a) What are the communication classes? Are they transient or recurrent?

2, 4 form a transient class. 3, 5 form a recurrent class. 1 is its own communication class. (The definition is p_n(i,j) > 0 and p_m(j,i) > 0 for some n, m ≥ 0. Here p_0(1,1) = 1 > 0. Also, we proved in class that this is an equivalence relation, in particular reflexive.) This class is transient.

b) If you start in state 4, what is the probability that you will ever visit state 2?

P = p(4,2) + p(4,4) P = 1/2 + (1/4) P

so

P = (1/2)/(3/4) = 2/3

c) In the long run, how much time is spent in each state?

You eventually end up in the recurrent class {3, 5} and you spend the rest of the time going back and forth between those two states with equal probability (p(3,5) = p(5,3) = 1/2). So, in the long run you spend half the time in state 3 and half the time in state 5.
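The (I - Q)^{-1} S computation in part c) is a one-liner by computer. A sketch (my addition):

```python
import numpy as np

# Absorb at B and C; the transient states are A and D.
Q = np.array([[0,   1/3],
              [1/2, 0  ]])        # A -> D = 1/3, D -> A = 1/2
S = np.array([[1/3, 1/3],
              [0,   1/2]])        # columns: absorb at B, absorb at C
M = np.linalg.inv(np.eye(2) - Q)
print(M @ S)   # row A -> [2/5, 3/5]: P_A(B before C) = 2/5
```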

MATH 56A: FINAL EXAM

This final exam includes a new topic called Kendall's Diffusion Principle. I picked out the elementary aspects of this idea, with hints for you to work through them. I hope you feel good about the fact that you now know enough to understand this.

The rules are the same as for homework. You can work on the problems in groups if you like. But the writing of the answers you are supposed to do yourself. What is your personal interpretation of the problem, even if you got the same answer as your friend?

Math 56a: Final Exam

This final exam includes a new topic called Kendall's Diffusion Principle. I picked out the elementary aspects of this idea, with hints for you to work through them. I hope you feel good about the fact that you now know enough to understand this. The rules are the same as for homework. You can work on the problems in groups if you like. But the writing of the answers you are supposed to do yourself. What is your personal interpretation of the problem, even if you got the same answer as your friend?

1. Brownian motion and stochastic differentials.

1.1. Multivariable Itô formula. Prove the following multivariable Itô rule: Suppose f is a C^2 function of 4 variables and a_t, b_t are C^1 functions. Suppose also that A_t, B_t are square summable processes similar to Z_t in the text. Then

df(a_t, b_t, A_t, B_t) = f_1 ȧ_t dt + f_2 ḃ_t dt + f_3 dA_t + f_4 dB_t + (1/2) f_33 d⟨A⟩_t + (1/2) f_44 d⟨B⟩_t + f_34 d⟨A, B⟩_t

1.2. Finding S_t. In this problem your job is to solve the stochastic differential equation (for the value of a stock with constant drift and constant volatility)

dS_t = μ S_t dt + σ S_t dW_t

in the form S_t = .... [The case μ = 0 is done in the book and I will explain it in class and later in this worksheet in the example.]

(1) Calculate d(ln S_t) using Itô's formula.
(2) Find ln S_t using the linearity of stochastic differentiation: d(at + bW_t) = a dt + b dW_t if a, b are constant. (Linearity is a useful concept!)
(3) Find S_t.

1.2.1. Kendall's principle of diffusion of arbitrary constants. This is an intuitive method for approximating the variance of a stochastic process Z_t, assuming it is a martingale. The idea is to first use the law of large numbers to transform a probabilistic equation into a (deterministic) differential equation. The solution of this differential equation will give the average behavior of the process. Now we want the variance of this result. Kendall's principle gives an approximation of this variance. It goes like this. First write down the differential of the quadratic variation d⟨Z⟩_t. Then integrate this along the deterministic path z_t found by solving the differential equation. The answer is approximately the variance of Z_t:

Var(Z_t) ≈ ∫_0^t d⟨Z⟩_s |_{Z_s = z_s}

The "arbitrary constants" are the constants of integration that you get when you solve a differential equation. The idea is that these constants are random and the errors in these constants tend to accumulate along the path.

1.2.2. Example. I will go over the example that we need to solve the Black-Scholes equation. Suppose that μ = 0. Then

dS_t = σ S_t dW_t

So,

d(ln S_t) = dS_t/S_t - (1/(2 S_t^2)) d⟨S⟩_t = σ dW_t - (σ^2/2) dt

So,

ln S_t = σ W_t - (σ^2/2) t + C

Plugging in t = 0 gives C = ln S_0. So,

S_t = S_0 exp( σ W_t - σ^2 t/2 )

This is called an exponential martingale.

(1) The deterministic solution is the expected value, which is given by E(S_t) = S_0. So, s_t = S_0.
(2) The quadratic variation is given by d⟨S⟩_t = σ^2 S_t^2 dt.
(3) So, Kendall's rule gives

Var(S_t) ≈ ∫_0^t σ^2 S_s^2 ds |_{S_s = s_s} = ∫_0^t σ^2 S_0^2 ds = σ^2 S_0^2 t

If I ask you to compare this with the actual variance of S_t, you could answer with varying degrees of precision. The exact variance of S_t is:

Var(S_t) := E(S_t^2) - S_0^2 = E(⟨S⟩_t)    (since S_t^2 - ⟨S⟩_t is a martingale)
         = E( ∫_0^t d⟨S⟩_s )

At this point you can see where Kendall's approximation comes from and why he needs his process to be a martingale. Continuing:

E(S_t^2) - S_0^2 = E( ∫_0^t σ^2 S_s^2 ds ) = σ^2 ∫_0^t E(S_s^2) ds

This is the point at which you are supposed to realize that the approximation is not 100% accurate, because S_t^2 is not a martingale. Differentiating both sides with respect to t gives

(d/dt) E(S_t^2) = σ^2 E(S_t^2)

So, E(S_t^2) = K e^{σ^2 t} with K = S_0^2. So, the final result is

Var(S_t) = E(S_t^2) - S_0^2 = S_0^2 σ^2 t + S_0^2 σ^4 t^2/2! + ···

The first term in this expansion is Kendall's approximation.
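Kendall's approximation versus the exact variance is easy to see in a simulation. A sketch (my addition) with illustrative values S_0 = 1, σ = 0.3, t = 1:

```python
import numpy as np

S0, sigma, t, n = 1.0, 0.3, 1.0, 200_000
W = np.sqrt(t) * np.random.randn(n)
S = S0 * np.exp(sigma * W - sigma**2 * t / 2)    # exponential martingale

print(S.var())                                   # empirical Var(S_t)
print(S0**2 * (np.exp(sigma**2 * t) - 1))        # exact: S0^2 (e^{sigma^2 t} - 1)
print(S0**2 * sigma**2 * t)                      # Kendall's first-order term
```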

1.2.3. Extension to μ ≠ 0. Your job is to extend the above analysis to the general case where μ ≠ 0. Thus dS_t = μ S_t dt + σ S_t dW_t. We first need a martingale.

(1) Show that M_t := S_t e^{-μt} is a martingale.
(2) Find the deterministic solution of the differential equation.
(3) Calculate the Kendall approximation to the variance of M_t.
(4) Find E(S_t) and approximate Var(S_t).
(5) Compare your answer to what you got in the example.

1.2.4. Ideal gases. Another application is the ideal classical gas. Suppose you have a large number of particles N which are all distinguishable (so there is no quantum effect) and there is a permeable membrane that separates the gas into two parts: A_t + B_t = N. At each time interval δt one particle (out of N) is picked at random with equal probability and moved to the other side. Your first task is to derive the following stochastic differential equation, where X_t = A_t/N:

dX_t = λ(1 - 2X_t) dt + σ √( X_t (1 - X_t) ) dW_t

This is a limit as N gets very large and δt gets very small. (Use the central limit theorem to approximate the binomial distribution.) To get an approximate solution to this equation, first change variables to Z_t = X_t - 1/2. Then change Z_t into a martingale M_t. Find the expected value of the martingale. Then find the expected values of Z_t and X_t. Now use Kendall's diffusion principle to approximate the variance of M_t, then convert this to an approximation for the variance of X_t. Next, find the equilibrium distribution of X_t. (This is a symmetric process, so it is easy to find the equilibrium.) Starting with X_0 = 1/2 (not the equilibrium!) estimate how long it takes to reach equilibrium.

1.3. Other problems.

1.3.1. Two problems from the book. Do 9.1 and 9.2.

1.3.2. Queueing.
(1) Do 6.8 on page 153.
(2) Explain why M/M/k queueing is the same as a birth-death process (see p. 175).

1.3.3. One more. Find the box dimension of the set Z = {1, 1/2, 1/3, 1/4, ...} and prove it.

1.4. Instructions for submitting. This exam is due either at school on Friday, Dec 15, or at my house on Saturday, Dec 16, 2006. I prefer attachments. But you can also send it FedEx (to arrive Saturday) to Kiyoshi Igusa, 3 Parker Ave, Newton, MA, or fax it to the math department.

Math 56a: Final Exam answers

1. Brownian motion and stochastic differentials.

1.1. Multivariable Itô formula. Prove the following multivariable Itô rule: Suppose f is a C^2 function of 4 variables and a_t, b_t are C^1 functions. Suppose also that A_t, B_t are square summable processes similar to Z_t in the text. Then

df(a_t, b_t, A_t, B_t) = f_1 ȧ_t dt + f_2 ḃ_t dt + f_3 dA_t + f_4 dB_t + (1/2) f_33 d⟨A⟩_t + (1/2) f_44 d⟨B⟩_t + f_34 d⟨A, B⟩_t

The Taylor expansion tells us that the change in f is

δf(x_1, x_2, x_3, x_4) = Σ_{i=1}^4 f_i δx_i + (1/2) Σ_{i,j=1}^4 f_ij δx_i δx_j + o((δx)^2)

When each variable is a function of time t, the quadratic terms are

δx_i δx_j = δ⟨x_i, x_j⟩_t

by the definition of the covariance ⟨x_i, x_j⟩_t. In class we proved that the covariance is zero if one of the variables is a function with finite variation (for example, a C^1 function). So, in the limit as δt → 0 there are only three nonzero quadratic terms:

(1/2) Σ_{i,j=3}^4 f_ij d⟨x_i, x_j⟩_t = (1/2) f_33 d⟨x_3⟩_t + (1/2) f_44 d⟨x_4⟩_t + f_34 d⟨x_3, x_4⟩_t

Substituting x_3 = A_t, x_4 = B_t we get the second line of our formula. For the linear terms we use the chain rule, which holds for the C^1 functions x_1 = a_t, x_2 = b_t. (The chain rule does not hold for non-differentiable functions such as A_t, B_t.)

Σ_{i=1}^4 f_i dx_i = f_1 ȧ_t dt + f_2 ḃ_t dt + f_3 dA_t + f_4 dB_t

Combining the linear and quadratic terms we get our formula.

1.2. Finding S_t. In this problem your job is to solve the stochastic differential equation (for the value of a stock with constant drift and constant volatility)

dS_t = μ S_t dt + σ S_t dW_t

in the form S_t = ....

(1) Calculate d(ln S_t) using Itô's formula. Itô's formula is

df(S_t) = f'(S_t) dS_t + (1/2) f''(S_t) d⟨S⟩_t


More information

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Errata Effective July 5, 3, only the latest edition of this manual will have its errata

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

T. Liggett Mathematics 171 Final Exam June 8, 2011

T. Liggett Mathematics 171 Final Exam June 8, 2011 T. Liggett Mathematics 171 Final Exam June 8, 2011 1. The continuous time renewal chain X t has state space S = {0, 1, 2,...} and transition rates (i.e., Q matrix) given by q(n, n 1) = δ n and q(0, n)

More information

Problem Points S C O R E Total: 120

Problem Points S C O R E Total: 120 PSTAT 160 A Final Exam Solution December 10, 2015 Name Student ID # Problem Points S C O R E 1 10 2 10 3 10 4 10 5 10 6 10 7 10 8 10 9 10 10 10 11 10 12 10 Total: 120 1. (10 points) Take a Markov chain

More information

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling Norwegian University of Science and Technology Department of Mathematical Sciences Page of 7 English Contact during examination: Øyvind Bakke Telephone: 73 9 8 26, 99 4 673 TMA426 Stochastic processes

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that â is the smallest

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Math221: HW# 7 solutions

Math221: HW# 7 solutions Math22: HW# 7 solutions Andy Royston November 7, 25.3.3 let x = e u. Then ln x = u, x2 = e 2u, and dx = e 2u du. Furthermore, when x =, u, and when x =, u =. Hence x 2 ln x) 3 dx = e 2u u 3 e u du) = e

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

Math 1b Sequences and series summary

Math 1b Sequences and series summary Math b Sequences and series summary December 22, 2005 Sequences (Stewart p. 557) Notations for a sequence: or a, a 2, a 3,..., a n,... {a n }. The numbers a n are called the terms of the sequence.. Limit

More information

Chapter 5 Simplifying Formulas and Solving Equations

Chapter 5 Simplifying Formulas and Solving Equations Chapter 5 Simplifying Formulas and Solving Equations Look at the geometry formula for Perimeter of a rectangle P = L W L W. Can this formula be written in a simpler way? If it is true, that we can simplify

More information

Modelling data networks stochastic processes and Markov chains

Modelling data networks stochastic processes and Markov chains Modelling data networks stochastic processes and Markov chains a 1, 3 1, 2 2, 2 b 0, 3 2, 3 u 1, 3 α 1, 6 c 0, 3 v 2, 2 β 1, 1 Richard G. Clegg (richard@richardclegg.org) November 2016 Available online

More information

Random Walk on a Graph

Random Walk on a Graph IOR 67: Stochastic Models I Second Midterm xam, hapters 3 & 4, November 2, 200 SOLUTIONS Justify your answers; show your work.. Random Walk on a raph (25 points) Random Walk on a raph 2 5 F B 3 3 2 Figure

More information

Calculator Exam 2009 University of Houston Math Contest. Name: School: There is no penalty for guessing.

Calculator Exam 2009 University of Houston Math Contest. Name: School: There is no penalty for guessing. Calculator Exam 2009 University of Houston Math Contest Name: School: Please read the questions carefully. Unless otherwise requested, round your answers to 8 decimal places. There is no penalty for guessing.

More information

Question Points Score Total: 70

Question Points Score Total: 70 The University of British Columbia Final Examination - April 204 Mathematics 303 Dr. D. Brydges Time: 2.5 hours Last Name First Signature Student Number Special Instructions: Closed book exam, no calculators.

More information

Homework set 2 - Solutions

Homework set 2 - Solutions Homework set 2 - Solutions Math 495 Renato Feres Simulating a Markov chain in R Generating sample sequences of a finite state Markov chain. The following is a simple program for generating sample sequences

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 17-27 Review Scott Sheffield MIT 1 Outline Continuous random variables Problems motivated by coin tossing Random variable properties 2 Outline Continuous random variables Problems

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Review for Final Exam, MATH , Fall 2010

Review for Final Exam, MATH , Fall 2010 Review for Final Exam, MATH 170-002, Fall 2010 The test will be on Wednesday December 15 in ILC 404 (usual class room), 8:00 a.m - 10:00 a.m. Please bring a non-graphing calculator for the test. No other

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

AP Calculus Chapter 9: Infinite Series

AP Calculus Chapter 9: Infinite Series AP Calculus Chapter 9: Infinite Series 9. Sequences a, a 2, a 3, a 4, a 5,... Sequence: A function whose domain is the set of positive integers n = 2 3 4 a n = a a 2 a 3 a 4 terms of the sequence Begin

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Statistics 100A Homework 5 Solutions

Statistics 100A Homework 5 Solutions Chapter 5 Statistics 1A Homework 5 Solutions Ryan Rosario 1. Let X be a random variable with probability density function a What is the value of c? fx { c1 x 1 < x < 1 otherwise We know that for fx to

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Stat 150 Practice Final Spring 2015

Stat 150 Practice Final Spring 2015 Stat 50 Practice Final Spring 205 Instructor: Allan Sly Name: SID: There are 8 questions. Attempt all questions and show your working - solutions without explanation will not receive full credit. Answer

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

Joint Probability Distributions and Random Samples (Devore Chapter Five)

Joint Probability Distributions and Random Samples (Devore Chapter Five) Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete

More information

1. Let X and Y be independent exponential random variables with rate α. Find the densities of the random variables X 3, X Y, min(x, Y 3 )

1. Let X and Y be independent exponential random variables with rate α. Find the densities of the random variables X 3, X Y, min(x, Y 3 ) 1 Introduction These problems are meant to be practice problems for you to see if you have understood the material reasonably well. They are neither exhaustive (e.g. Diffusions, continuous time branching

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Main topics for the First Midterm Exam

Main topics for the First Midterm Exam Main topics for the First Midterm Exam The final will cover Sections.-.0, 2.-2.5, and 4.. This is roughly the material from first three homeworks and three quizzes, in addition to the lecture on Monday,

More information

Stochastic Modelling Unit 1: Markov chain models

Stochastic Modelling Unit 1: Markov chain models Stochastic Modelling Unit 1: Markov chain models Russell Gerrard and Douglas Wright Cass Business School, City University, London June 2004 Contents of Unit 1 1 Stochastic Processes 2 Markov Chains 3 Poisson

More information

14 Branching processes

14 Branching processes 4 BRANCHING PROCESSES 6 4 Branching processes In this chapter we will consider a rom model for population growth in the absence of spatial or any other resource constraints. So, consider a population of

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015 Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits Lecture for Week 2 (Secs. 1.3 and 2.2 2.3) Functions and Limits 1 First let s review what a function is. (See Sec. 1 of Review and Preview.) The best way to think of a function is as an imaginary machine,

More information

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1).

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1). Name M362K Final Exam Instructions: Show all of your work. You do not have to simplify your answers. No calculators allowed. There is a table of formulae on the last page. 1. Suppose X 1,..., X 1 are independent

More information

Generating Function Notes , Fall 2005, Prof. Peter Shor

Generating Function Notes , Fall 2005, Prof. Peter Shor Counting Change Generating Function Notes 80, Fall 00, Prof Peter Shor In this lecture, I m going to talk about generating functions We ve already seen an example of generating functions Recall when we

More information

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME ELIZABETH G. OMBRELLARO Abstract. This paper is expository in nature. It intuitively explains, using a geometrical and measure theory perspective, why

More information

PRACTICE PROBLEMS FOR EXAM 2

PRACTICE PROBLEMS FOR EXAM 2 PRACTICE PROBLEMS FOR EXAM 2 Math 3160Q Fall 2015 Professor Hohn Below is a list of practice questions for Exam 2. Any quiz, homework, or example problem has a chance of being on the exam. For more practice,

More information

LECTURE #6 BIRTH-DEATH PROCESS

LECTURE #6 BIRTH-DEATH PROCESS LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death

More information

3 Continuous Random Variables

3 Continuous Random Variables Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA

Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Effective July 5, 3, only the latest edition of this manual will have its

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem 1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Some Notes on Linear Algebra

Some Notes on Linear Algebra Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Sequential Decision Problems

Sequential Decision Problems Sequential Decision Problems Michael A. Goodrich November 10, 2006 If I make changes to these notes after they are posted and if these changes are important (beyond cosmetic), the changes will highlighted

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

ASM Study Manual for Exam P, Second Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata

ASM Study Manual for Exam P, Second Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata ASM Study Manual for Exam P, Second Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Errata Effective July 5, 3, only the latest edition of this manual will have its errata

More information

Xt i Xs i N(0, σ 2 (t s)) and they are independent. This implies that the density function of X t X s is a product of normal density functions:

Xt i Xs i N(0, σ 2 (t s)) and they are independent. This implies that the density function of X t X s is a product of normal density functions: 174 BROWNIAN MOTION 8.4. Brownian motion in R d and the heat equation. The heat equation is a partial differential equation. We are going to convert it into a probabilistic equation by reversing time.

More information

Interlude: Practice Final

Interlude: Practice Final 8 POISSON PROCESS 08 Interlude: Practice Final This practice exam covers the material from the chapters 9 through 8. Give yourself 0 minutes to solve the six problems, which you may assume have equal point

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Chapter 1A -- Real Numbers. iff. Math Symbols: Sets of Numbers

Chapter 1A -- Real Numbers. iff. Math Symbols: Sets of Numbers Fry Texas A&M University! Fall 2016! Math 150 Notes! Section 1A! Page 1 Chapter 1A -- Real Numbers Math Symbols: iff or Example: Let A = {2, 4, 6, 8, 10, 12, 14, 16,...} and let B = {3, 6, 9, 12, 15, 18,

More information

What can you prove by induction?

What can you prove by induction? MEI CONFERENCE 013 What can you prove by induction? Martyn Parker M.J.Parker@keele.ac.uk Contents Contents iii 1 Splitting Coins.................................................. 1 Convex Polygons................................................

More information

MAS275 Probability Modelling Exercises

MAS275 Probability Modelling Exercises MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.

More information

Math Circle at FAU 10/27/2018 SOLUTIONS

Math Circle at FAU 10/27/2018 SOLUTIONS Math Circle at FAU 10/27/2018 SOLUTIONS 1. At the grocery store last week, small boxes of facial tissue were priced at 4 boxes for $5. This week they are on sale at 5 boxes for $4. Find the percent decrease

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Note that we are looking at the true mean, μ, not y. The problem for us is that we need to find the endpoints of our interval (a, b).

Note that we are looking at the true mean, μ, not y. The problem for us is that we need to find the endpoints of our interval (a, b). Confidence Intervals 1) What are confidence intervals? Simply, an interval for which we have a certain confidence. For example, we are 90% certain that an interval contains the true value of something

More information

THE SIMPLE URN PROCESS AND THE STOCHASTIC APPROXIMATION OF ITS BEHAVIOR

THE SIMPLE URN PROCESS AND THE STOCHASTIC APPROXIMATION OF ITS BEHAVIOR THE SIMPLE URN PROCESS AND THE STOCHASTIC APPROXIMATION OF ITS BEHAVIOR MICHAEL KANE As a final project for STAT 637 (Deterministic and Stochastic Optimization) the simple urn model is studied, with special

More information

Dynamical Systems. August 13, 2013

Dynamical Systems. August 13, 2013 Dynamical Systems Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 Dynamical Systems are systems, described by one or more equations, that evolve over time.

More information

x x 1 0 1/(N 1) (N 2)/(N 1)

x x 1 0 1/(N 1) (N 2)/(N 1) Please simplify your answers to the extent reasonable without a calculator, show your work, and explain your answers, concisely. If you set up an integral or a sum that you cannot evaluate, leave it as

More information

Math 101 Review of SOME Topics

Math 101 Review of SOME Topics Math 101 Review of SOME Topics Spring 007 Mehmet Haluk Şengün May 16, 007 1 BASICS 1.1 Fractions I know you all learned all this years ago, but I will still go over it... Take a fraction, say 7. You can

More information

The following are generally referred to as the laws or rules of exponents. x a x b = x a+b (5.1) 1 x b a (5.2) (x a ) b = x ab (5.

The following are generally referred to as the laws or rules of exponents. x a x b = x a+b (5.1) 1 x b a (5.2) (x a ) b = x ab (5. Chapter 5 Exponents 5. Exponent Concepts An exponent means repeated multiplication. For instance, 0 6 means 0 0 0 0 0 0, or,000,000. You ve probably noticed that there is a logical progression of operations.

More information