MAT 135B Midterm Solutions

Last Name (PRINT): First Name (PRINT): Student ID #: Section:

Instructions:
1. Do not open your test until you are told to begin.
2. Use a pen to print your name in the spaces above.
3. No notes, books, calculators, smartphones, iPods, etc. allowed.
4. Show all your work. Unsupported answers will receive NO CREDIT.
5. You are expected to do your own work.

#1 | #2 | #3 | #4 | #5 | TOTAL
1. The only information you have about a random variable $X$ is that its moment generating function $\varphi(t)$ satisfies the inequality $\varphi(t) \le e^{t^2}$ for all $t \ge 0$. Find the best upper bound for $P(X \ge 10)$.

Solution: For $t \ge 0$, we have
$$P(X \ge 10) = P(tX \ge 10t) = P\left(e^{tX} \ge e^{10t}\right) \le e^{-10t}\, E[e^{tX}] \quad \text{(using Markov's inequality)}$$
$$= e^{-10t}\,\varphi(t) \le e^{-10t + t^2} \quad \text{(since } \varphi(t) \le e^{t^2} \text{ for } t \ge 0\text{)}.$$
To find the best upper bound, we need to find $\inf_{t \ge 0} e^{-10t + t^2}$. Differentiating $e^{-10t + t^2}$ with respect to $t$ and setting the derivative to zero, we find that the infimum is attained at $t = 5$, and $\inf_{t \ge 0} e^{-10t + t^2} = e^{-50 + 25} = e^{-25}$. Therefore
$$P(X \ge 10) \le e^{-25}.$$
Grading Rubric: (+3 points) for writing $P(X \ge 10) = P(e^{tX} \ge e^{10t})$ when $t \ge 0$. (+2 points) for the correct version of Markov's inequality. (+2 points) for applying $\varphi(t) \le e^{t^2}$ when $t \ge 0$. (+2 points) for optimizing over $t \ge 0$. (+1 point) for the final answer. (Only 1 or 2 points) if $P(S_n \ge 10n) \le \exp(-nI(10))$ is used.
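As a quick numerical sanity check of the optimization step (not part of the required solution), a short script can minimize the bound $e^{-10t+t^2}$ over a grid of $t \ge 0$ and confirm that the minimum sits at $t = 5$ with value $e^{-25}$:

```python
import math

# Minimize the Chernoff-type bound e^(-10t + t^2) over t >= 0 on a grid,
# confirming the calculus in the solution: minimum at t = 5, value e^(-25).
def bound(t):
    return math.exp(-10 * t + t * t)

ts = [i / 1000 for i in range(0, 10001)]   # grid on [0, 10], step 0.001
t_best = min(ts, key=bound)
print(t_best, bound(t_best), math.exp(-25))
```

Since the exponent $-10t + t^2$ is strictly convex, the grid minimum is unique and lands exactly on $t = 5.0$.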
2. Prove that for any fixed $x \ge 0$,
$$\sum_{k:\, |k-n| \le x\sqrt{n}} \frac{n^k}{k!}\, e^{-n} \;\longrightarrow\; \int_{-x}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du,$$
where the notation $\longrightarrow$ means that the ratio of the left-hand side and the right-hand side goes to $1$ as $n \to \infty$.

Solution: First of all, we notice that if $S_n \sim \text{Poisson}(n)$ then $P(S_n = k) = e^{-n}\frac{n^k}{k!}$ for all $0 \le k < \infty$. Therefore
$$\sum_{k:\, |k-n| \le x\sqrt{n}} e^{-n}\frac{n^k}{k!} = \sum_{k:\, |k-n| \le x\sqrt{n}} P(S_n = k) = P\left(|S_n - n| \le x\sqrt{n}\right) = P\left(-x\sqrt{n} \le S_n - n \le x\sqrt{n}\right) = P\left(-x \le \frac{S_n - n}{\sqrt{n}} \le x\right).$$
We know that if $Y_1 \sim \text{Poisson}(\lambda_1)$ and $Y_2 \sim \text{Poisson}(\lambda_2)$ are independent, then $Y_1 + Y_2 \sim \text{Poisson}(\lambda_1 + \lambda_2)$. So we can write $S_n = \sum_{i=1}^{n} X_i$, where the $X_i$ are i.i.d. with $X_i \sim \text{Poisson}(1)$. We know that $E[X_i] = 1$ and $\text{Var}[X_i] = 1$. Therefore $E[S_n] = n$ and $\text{Var}(S_n) = n$. Using the central limit theorem we can conclude that
$$\frac{S_n - n}{\sqrt{n}} \xrightarrow{\,d\,} N(0, 1) \quad \text{as } n \to \infty.$$
In other words, as $n \to \infty$,
$$P\left(-x \le \frac{S_n - n}{\sqrt{n}} \le x\right) \longrightarrow \int_{-x}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du,$$
and hence
$$\sum_{k:\, |k-n| \le x\sqrt{n}} \frac{n^k}{k!}\, e^{-n} \;\longrightarrow\; \int_{-x}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du.$$
Grading Rubric: (+3 points) for identifying $\text{Poisson}(n)$. (+2 points) for writing the mean and variance of $\text{Poisson}(n)$. (+2 points) for writing a form similar to $P\left(-x \le \frac{S_n - n}{\sqrt{n}} \le x\right)$. (+2 points) for applying the CLT. (+1 point) for the final answer.
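The convergence can be illustrated numerically (this is an illustration, not a proof; the choices $n = 2000$ and $x = 1$ are arbitrary). The exact Poisson mass near $n$ is computed with log-factorials to avoid overflow, and the limiting integral equals $\mathrm{erf}(x/\sqrt{2})$:

```python
import math

# Compare the exact Poisson(n) mass within x*sqrt(n) of n against the
# limiting normal integral (1/sqrt(2*pi)) * int_{-x}^{x} e^{-u^2/2} du,
# which equals erf(x / sqrt(2)).
n, x = 2000, 1.0
lo = math.ceil(n - x * math.sqrt(n))
hi = math.floor(n + x * math.sqrt(n))

# Poisson pmf evaluated via logs: exp(k log n - n - log k!).
poisson_mass = sum(
    math.exp(k * math.log(n) - n - math.lgamma(k + 1)) for k in range(lo, hi + 1)
)
normal_mass = math.erf(x / math.sqrt(2))
print(poisson_mass, normal_mass)
```

For $n = 2000$ the two quantities already agree to about two decimal places; the residual gap is the usual $O(1/\sqrt{n})$ discretization error of the CLT.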
3. A fair coin is tossed $n$ times, showing heads $H_n$ times and tails $T_n$ times. Let $S_n = H_n - T_n$. Show that
$$[P(S_n > an)]^{1/n} \longrightarrow \frac{1}{\sqrt{(1+a)^{1+a}(1-a)^{1-a}}} \qquad \text{if } 0 < a < 1.$$

Solution: Let us define a sequence of random variables $\{X_i\}$ as
$$X_i = \begin{cases} 1 & \text{if the } i\text{th outcome is a head,} \\ -1 & \text{if the } i\text{th outcome is a tail.} \end{cases}$$
Since the coin is fair, $P(X_i = 1) = \frac{1}{2} = P(X_i = -1)$ for all $1 \le i \le n$. Since the tosses are independent of each other, the $X_i$'s are independent of each other. Also notice that $\sum_{i=1}^{n} X_i = H_n - T_n = S_n$. Since $a > 0 = E[X_i]$, using the large deviation technique we have
$$P(S_n \ge an) \le \exp[-nI(a)], \quad \text{where } I(a) = \sup_{t \ge 0}\{at - \log \varphi(t)\},$$
and $\varphi(t) = E[e^{tX_i}]$ is the moment generating function of $X_i$. It is easy to see that $\varphi(t) = \frac{1}{2}(e^t + e^{-t})$. Define the function $h(t) = at - \log\left[\frac{1}{2}(e^t + e^{-t})\right]$. Differentiating $h(t)$ with respect to $t$ and setting $h'(t) = 0$ we have
$$a - \frac{e^t - e^{-t}}{e^t + e^{-t}} = 0 \;\Longrightarrow\; a(e^t + e^{-t}) = e^t - e^{-t} \;\Longrightarrow\; (a-1)e^t = -(1+a)e^{-t} \;\Longrightarrow\; e^{2t} = \frac{1+a}{1-a} \;\Longrightarrow\; t = \frac{1}{2}\log\frac{1+a}{1-a}.$$
Also notice that $h''(t) = -\frac{4}{(e^t + e^{-t})^2} < 0$. Therefore $t = \frac{1}{2}\log\frac{1+a}{1-a}$ is a maximum of $h$. So
$$I(a) = h\left(\frac{1}{2}\log\frac{1+a}{1-a}\right) = \frac{a}{2}\log\frac{1+a}{1-a} - \log\left[\frac{1}{2}\left(\sqrt{\frac{1+a}{1-a}} + \sqrt{\frac{1-a}{1+a}}\right)\right] = \frac{a}{2}\log\frac{1+a}{1-a} + \frac{1}{2}\log\left[(1+a)(1-a)\right]$$
$$= \frac{1}{2}\left[(1+a)\log(1+a) + (1-a)\log(1-a)\right] = \log\sqrt{(1+a)^{1+a}(1-a)^{1-a}}.$$
Therefore
$$\limsup_{n\to\infty}\, [P(S_n \ge an)]^{1/n} \le \exp[-I(a)] = \frac{1}{\sqrt{(1+a)^{1+a}(1-a)^{1-a}}}. \tag{1}$$
To obtain the lower bound, we notice that
$$P(S_n \ge an) \ge P(S_n = an) = P(H_n - T_n = an) = P\big(H_n = n(1+a)/2\big) = \binom{n}{n(1+a)/2}\, 2^{-n} = \frac{n!}{\big(n(1+a)/2\big)!\,\big(n(1-a)/2\big)!}\; 2^{-n}.$$
By Stirling's approximation formula we know that $k! \approx \sqrt{2\pi k}\,(k/e)^k$ for large $k$. Applying Stirling's approximation to $n!$, $\big(n(1+a)/2\big)!$ and $\big(n(1-a)/2\big)!$ we obtain
$$\lim_{n\to\infty}\, [P(S_n \ge an)]^{1/n} \ge \frac{1}{2}\, \lim_{n\to\infty} \left[\frac{n!}{\big(n(1+a)/2\big)!\,\big(n(1-a)/2\big)!}\right]^{1/n}$$
$$= \frac{1}{2}\, \lim_{n\to\infty} \left[\frac{\sqrt{2\pi n}}{\pi n \sqrt{1-a^2}}\right]^{1/n} \cdot \frac{n/e}{\big(n(1+a)/2e\big)^{(1+a)/2}\,\big(n(1-a)/2e\big)^{(1-a)/2}}$$
$$= \frac{1}{2} \cdot \frac{2}{(1+a)^{(1+a)/2}(1-a)^{(1-a)/2}} \qquad \left(\text{since } \lim_{n\to\infty} n^{1/n} = 1\right)$$
$$= \frac{1}{\sqrt{(1+a)^{1+a}(1-a)^{1-a}}}. \tag{2}$$
Combining (1) and (2) we have the result
$$\lim_{n\to\infty}\, [P(S_n \ge an)]^{1/n} = \frac{1}{\sqrt{(1+a)^{1+a}(1-a)^{1-a}}}.$$
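The rate can be checked numerically for a moderate $n$ (a sanity check, not part of the proof; $n = 2000$ and $a = 0.5$ are arbitrary choices). The exact tail probability is a binomial sum computed in integer arithmetic, with logs taken at the end to avoid overflow:

```python
import math

# Numerical check of the large-deviation rate for a fair coin:
# [P(S_n >= a*n)]^(1/n) should approach ((1+a)^(1+a) * (1-a)^(1-a))^(-1/2).
n, a = 2000, 0.5
h_min = math.ceil(n * (1 + a) / 2)      # S_n >= a*n  <=>  H_n >= n(1+a)/2

# Exact tail probability via exact integer arithmetic, then logs.
tail = sum(math.comb(n, h) for h in range(h_min, n + 1))
log_p = math.log(tail) - n * math.log(2)
empirical = math.exp(log_p / n)

limit = ((1 + a) ** (1 + a) * (1 - a) ** (1 - a)) ** -0.5
print(empirical, limit)
```

The empirical value sits slightly below the limit, as it must: the Chernoff bound (1) holds for every finite $n$, and the $n^{-1/2}$ Stirling prefactor only washes out as $n \to \infty$.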
4. A coin has probability $p$ of Heads. Adam flips it first, then Becca, then Adam, etc., and the winner is the first to flip Heads. Compute the probability that Adam wins.

Solution: Let $f(p)$ be the probability that Adam wins. Conditioning on the first toss (the law of total probability) we have
$$f(p) = P(\text{Adam wins} \mid \text{first toss is a head})\, P(\text{first toss is a head}) + P(\text{Adam wins} \mid \text{first toss is a tail})\, P(\text{first toss is a tail})$$
$$= p + P(\text{Adam wins} \mid \text{first toss is a tail})(1 - p).$$
But if the first toss is a tail, then Adam may win the game only if Becca also tosses a tail (i.e., the second toss is a tail), and then the game restarts with Adam flipping first. So
$$P(\text{Adam wins} \mid \text{first toss is a tail}) = (1 - p) f(p).$$
Combining the above equations we get
$$f(p) = p + (1 - p)^2 f(p).$$
Solving the above equation we obtain
$$f(p) = \frac{p}{1 - (1 - p)^2} = \frac{1}{2 - p}.$$
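The closed form $f(p) = 1/(2-p)$ can also be checked against the direct series: Adam wins on his $k$th flip (overall toss number $2k-1$) exactly when the first $2k-2$ tosses are all tails, giving the geometric series $\sum_{k\ge 1} (1-p)^{2k-2}\,p$. A small script (illustrative only) compares a truncated series to the closed form:

```python
# Truncated series for P(Adam wins): he wins on toss 2k-1 iff the first
# 2k-2 tosses are all tails, so P(Adam wins) = sum_k (1-p)^(2k-2) * p.
def series(p, terms=200):
    return sum((1 - p) ** (2 * (k - 1)) * p for k in range(1, terms + 1))

for p in (0.1, 0.5, 0.9):
    print(p, series(p), 1 / (2 - p))
```

With 200 terms the truncation error is far below floating-point precision for any $p$ bounded away from 0.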
5. A die is rolled repeatedly. Which of the following are Markov chains? For those that are, supply the transition matrix.
(a) The largest number $X_n$ shown up to the $n$th roll.
(b) The number $N_n$ of sixes in $n$ rolls.
(c) At time $n$, the time $C_n$ since the most recent six.
(d) At time $n$, the time $B_n$ until the next six.

Solution: A stochastic process $\{Z_n\}_{n \ge 1}$ is a Markov chain if
$$P(Z_{n+1} = i_{n+1} \mid Z_n = i_n, Z_{n-1} = i_{n-1}, \ldots, Z_1 = i_1) = P(Z_{n+1} = i_{n+1} \mid Z_n = i_n).$$

(a) The largest number up to the $n$th roll is a number from the set $\{1, 2, 3, 4, 5, 6\}$. Therefore if $\{X_n\}_{n \ge 1}$ is a Markov chain then the state space is $\{1, 2, 3, 4, 5, 6\}$. Since $X_{n+1} = \max\{X_n,\ \text{outcome of the } (n+1)\text{th roll}\}$, $X_{n+1}$ depends only on $X_n$ and not on the past:
$$P(X_{n+1} = j \mid X_n = i) = \begin{cases} i/6 & \text{if } j = i, \\ 1/6 & \text{if } j > i, \\ 0 & \text{if } j < i. \end{cases}$$
So $\{X_n\}_{n \ge 1}$ is a Markov chain whose state space is $\{1, 2, 3, 4, 5, 6\}$ and whose transition matrix is given by $P = \{p_{ij}\}_{1 \le i, j \le 6}$, where $p_{ij} = P(X_{n+1} = j \mid X_n = i)$.

(b) The number of sixes in $n$ rolls can be $0, 1, 2, \ldots, n$. Since $n$ can go up to infinity, if $\{N_n\}_{n \ge 1}$ is a Markov chain then the state space is the set of nonnegative integers $\{0, 1, 2, \ldots\}$. Since $N_{n+1} = N_n + I\{(n+1)\text{th roll is a six}\}$, we have
$$P(N_{n+1} = j \mid N_n = i) = \begin{cases} 5/6 & \text{if } j = i, \\ 1/6 & \text{if } j = i + 1, \\ 0 & \text{otherwise.} \end{cases}$$
Therefore $\{N_n\}_{n \ge 1}$ is a Markov chain with this state space and transition matrix $P = \{p_{ij}\}$, where $p_{ij} = P(N_{n+1} = j \mid N_n = i)$.

(c) The time $C_n$ since the most recent six can be any nonnegative integer, so the state space of this (possible) Markov chain is $\{0, 1, 2, \ldots\}$. Observe that $C_{n+1} = (C_n + 1)\cdot I\{(n+1)\text{th roll is not a six}\}$, so $C_{n+1}$ depends only on $C_n$. The transition probabilities are
$$P(C_{n+1} = j \mid C_n = i) = \begin{cases} 5/6 & \text{if } j = i + 1, \\ 1/6 & \text{if } j = 0, \\ 0 & \text{otherwise.} \end{cases}$$
So $\{C_n\}_{n \ge 1}$ is a Markov chain with transition matrix $P = \{p_{ij}\}$, where $p_{ij} = P(C_{n+1} = j \mid C_n = i)$.

(d) In this case, notice that if $B_n = 2, 3, 4, 5, \ldots$ then $B_{n+1} = B_n - 1$. For example, if at time $n$ it is given that the next six is going to come at the 8th roll from now, then at time $n + 1$ we definitely know that the next six will come at the 7th roll. The only nontrivial case is $B_n = 1$: at time $n$ we know that the next six will come in the next roll (i.e., the $(n+1)$th roll). In that case the process restarts at time $n + 1$, and then $B_{n+1}$ follows the Geometric$(1/6)$ distribution. Combining all the above information we have
$$P(B_{n+1} = j \mid B_n = i) = \begin{cases} 1 & \text{if } j = i - 1 \text{ and } i = 2, 3, 4, \ldots, \\ \left(\frac{5}{6}\right)^{j-1}\frac{1}{6} & \text{if } i = 1 \text{ and } j = 1, 2, 3, \ldots, \\ 0 & \text{otherwise.} \end{cases}$$
So $\{B_n\}_{n \ge 1}$ is a Markov chain with state space $\mathbb{N}$ and transition matrix $P = \{p_{ij}\}_{i, j \in \mathbb{N}}$, where $p_{ij} = P(B_{n+1} = j \mid B_n = i)$.
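As an illustration of part (a) (a sketch, not part of the solution), the finite transition matrix for the running-maximum chain can be built explicitly, checked for row sums, and compared against a direct simulation of die rolls:

```python
import random

# Build the 6x6 transition matrix from part (a): the running maximum
# stays at i with probability i/6 and jumps to any j > i with probability 1/6.
P = [[0.0] * 6 for _ in range(6)]
for i in range(1, 7):
    for j in range(1, 7):
        if j == i:
            P[i - 1][j - 1] = i / 6
        elif j > i:
            P[i - 1][j - 1] = 1 / 6

# Every row of a stochastic matrix must sum to 1.
row_sums = [sum(row) for row in P]

# Empirical check: from state i = 3, the maximum stays at 3 w.p. 3/6 = 0.5.
random.seed(0)
trials = 100_000
stay = sum(1 for _ in range(trials) if max(3, random.randint(1, 6)) == 3)
print(row_sums, stay / trials)
```

State 6 is absorbing ($p_{66} = 6/6 = 1$), matching the fact that once a six appears the running maximum never changes again.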