UNIVERSITY OF LONDON
IMPERIAL COLLEGE LONDON
BSc and MSci EXAMINATIONS (MATHEMATICS) MAY-JUNE 2003
This paper is also taken for the relevant examination for the Associateship.
M3S4/M4S4 (SOLUTIONS): APPLIED PROBABILITY
DATE: Tuesday, 3rd June 2003. TIME: 2 pm to 4 pm.
Credit will be given for all questions attempted, but extra credit will be given for complete or nearly complete answers.
Calculators may not be used. Statistical tables will not be available.
© 2003 University of London. M3S4/M4S4 (SOLUTIONS). Page 1 of 6.
1. a) [seen]
\[
\int_0^\infty [1 - F(x)]\,dx = \int_0^\infty \int_x^\infty f(y)\,dy\,dx = \int_0^\infty \int_0^y f(y)\,dx\,dy = \int_0^\infty y\,f(y)\,dy = E(X),
\]
or, alternatively, integrating by parts,
\[
\int_0^\infty [1 - F(x)]\,dx = \big[x\{1 - F(x)\}\big]_0^\infty - \int_0^\infty x\{-f(x)\}\,dx = 0 + \int_0^\infty x f(x)\,dx = E(X),
\]
provided $x\{1 - F(x)\} \to 0$ as $x \to \infty$. [4 marks]

b) i) Let $T$ be the time between consecutive events. Then
\[
P(T > t) = P(\text{no events in } [0, t]) = P(X(t) = 0) = \frac{e^{-\lambda t}(\lambda t)^0}{0!} = e^{-\lambda t}.
\]
Hence $F(t) = 1 - e^{-\lambda t}$ and $f(t) = \lambda e^{-\lambda t}$. [3 marks] Also $E(T) = \lambda^{-1}$. [1 mark]

ii) From the question, and using the memoryless property of the exponential distribution, the probability that the $k$th event will occur in $[t, t + \delta t]$ is
\[
P((k-1) \text{ events in } [0, t]) \times P(1 \text{ event in } [t, t + \delta t]).
\]
Using the definition of a Poisson process given in the question, this is
\[
P(t < T_k \le t + \delta t) = \frac{e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!}\,e^{-\lambda \delta t}\,\lambda \delta t
= \frac{e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!}\,[1 - \lambda \delta t + o(\delta t)]\,\lambda \delta t
= \frac{e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!}\,\lambda \delta t\,[1 + o(1)],
\]
so that the pdf is
\[
f(t) = \frac{\lambda e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!},
\]
which is the pdf of a Gamma distribution. [6 marks]

c)
\[
\Pi_Z(s) = \exp(-\mu[1 - \Pi_X(s)]) = \exp\left(-\mu\left[1 - \frac{ps}{1 - qs}\right]\right),
\]
so that
\[
\Pi_Z'(s) = \exp\left(-\mu\left[1 - \frac{ps}{1 - qs}\right]\right)\,\mu\left(\frac{p}{1 - qs} + \frac{psq}{(1 - qs)^2}\right).
\]
Hence $E(Z) = \Pi_Z'(1) = \mu(1 + q/p) = \mu/p$. [6 marks]
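The mean $E(Z) = \mu/p$ in part (c) can be checked numerically by differentiating the pgf at $s = 1$. The Python sketch below uses illustrative values $\mu = 2$, $p = 0.4$ (chosen for the example, not taken from the question):

```python
import math

def pgf_Z(s, mu=2.0, p=0.4):
    # pgf of the compound distribution: Pi_Z(s) = exp(-mu [1 - Pi_X(s)]),
    # with geometric Pi_X(s) = p s / (1 - q s), q = 1 - p.
    q = 1.0 - p
    return math.exp(-mu * (1.0 - p * s / (1.0 - q * s)))

def mean_Z(mu=2.0, p=0.4, h=1e-6):
    # E(Z) = Pi_Z'(1), estimated by a central difference at s = 1.
    return (pgf_Z(1.0 + h, mu, p) - pgf_Z(1.0 - h, mu, p)) / (2.0 * h)

# The closed form predicts E(Z) = mu / p = 2 / 0.4 = 5.
print(mean_Z())
```

The numerical derivative agrees with $\mu/p = 5$ to several decimal places, and $\Pi_Z(1) = 1$ confirms that the pgf is proper.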
2. a) i) The defining characteristic of an irreducible Markov chain is that all the states intercommunicate. [1 mark, seen]

ii) The Chapman-Kolmogorov equations:
\[
p_{ij}^{(m+n)} = \sum_k p_{ik}^{(m)}\,p_{kj}^{(n)},
\]
or some equivalent statement, meaning that the probability of moving from state $i$ to state $j$ in $(m+n)$ steps is the sum, over all states $k$, of the product of the probabilities of moving first from $i$ to $k$ in $m$ steps and then from $k$ to $j$ in $n$ steps. That is, the sum of the probabilities of all paths from $i$ to $j$. [2 marks]

iii) Transient: the probability that the chain will return to a transient state $n$ times tends to 0 as $n \to \infty$. Recurrent: the state, once visited, is certain to recur an infinite number of times. Null recurrent: recurrent, with an infinite mean recurrence time. Positive recurrent: recurrent, with a finite mean recurrence time. [4 marks]

b) i) We want the top-left cell of $P^2$, where
\[
P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix},
\]
from which the answer is $0.5 \times 0.5 + 0.4 \times 0.3 + 0.1 \times 0.2 = 0.25 + 0.12 + 0.02 = 0.39$. [2 marks]

ii) The limiting distribution is given by solving the system of equations $\pi = \pi P$, $\sum_i \pi_i = 1$:
\[
0.5\pi_0 + 0.3\pi_1 + 0.2\pi_2 = \pi_0, \qquad
0.4\pi_0 + 0.4\pi_1 + 0.3\pi_2 = \pi_1, \qquad
\pi_0 + \pi_1 + \pi_2 = 1.
\]
This is easily solved to yield $\pi_0 = 21/62$, $\pi_1 = 23/62$, $\pi_2 = 18/62$. [5 marks]

iii) The mean recurrence time of state 0 is, by the basic limit theorem for Markov chains, $\big[\lim_{n \to \infty} p_{00}^{(n)}\big]^{-1} = 1/\pi_0$. From part (ii) this is $62/21$ days. [2 marks]

c) i) With states $0, 1, 2, 3$, the transition matrix is
\[
P = \begin{pmatrix} 0 & 3/3 & 0 & 0 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & 2/3 & 0 & 1/3 \\ 0 & 0 & 3/3 & 0 \end{pmatrix},
\]
that is, from state $i$ the chain moves down with probability $i/3$ and up with probability $(3-i)/3$. [2 marks]

ii) It does not have a limiting distribution because it is periodic, with period 2: it is impossible to return to a starting state in an odd number of steps. [1 mark]
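The answers to parts (b)(i) and (b)(ii) can be verified numerically: iterating $\pi \leftarrow \pi P$ converges to the stationary distribution, and the two-step probability $p_{00}^{(2)}$ is a direct sum. A minimal Python sketch:

```python
# Transition matrix from part (b).
P = [[0.5, 0.4, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

def step(pi, P):
    # One application of pi <- pi P.
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Two-step probability p_00^(2) = sum_k P[0][k] P[k][0]; should be 0.39.
p2_00 = sum(P[0][k] * P[k][0] for k in range(3))
print(p2_00)

# Iterate to the limiting distribution; should approach (21, 23, 18)/62.
pi = [1.0, 0.0, 0.0]
for _ in range(200):
    pi = step(pi, P)
print(pi)
```

The chain is aperiodic and irreducible, so the iteration converges geometrically and 200 steps reach machine precision.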
3. a) i) Let $Z_n$ be the number in the $n$th generation; then [seen]
\[
Z_{n+1} = X_1 + X_2 + \cdots + X_{Z_n} \quad (Z_n > 0), \qquad Z_{n+1} = 0 \quad (Z_n = 0),
\]
so $E(Z_{n+1} \mid Z_n) = Z_n \mu$, so $E(Z_{n+1}) = \mu E(Z_n)$, and hence $E(Z_n) = \mu^n$. [2 marks]

ii)
\[
E(1 + Z_1 + \cdots + Z_n) = 1 + \mu + \mu^2 + \cdots + \mu^n =
\begin{cases}
\dfrac{1 - \mu^{n+1}}{1 - \mu} & \mu \ne 1; \\[1ex]
n + 1 & \mu = 1.
\end{cases}
\]

iii) From $P(X = 0) \ge 1/\sqrt{e}$ we have $\mu \le 1/2 < 1$, so that the mean generation size tends to zero. [2 marks]

iv) The probability of ultimate extinction is given by the smallest positive solution of $\theta = \Pi(\theta)$, where the pgf is $\Pi(\theta) = e^{-\mu(1 - \theta)}$. Hence
\[
\theta = e^{-(1 - \theta)\,2\ln 2} = 2^{-2(1 - \theta)}.
\]
By substitution, $\theta = 1/2$ is a solution. [5 marks]

v) From the lectures (or by looking at the first and second derivatives of $\Pi(\theta)$) we know that there is at most one solution in $\theta < 1$, so this is the required probability. [2 marks]

b) We have
\[
P(\text{extinct}) = P(\text{process from 1st individual dies out}) \times P(\text{process from 2nd individual dies out}) \times \cdots \times P(\text{process from } M\text{th individual dies out}).
\]
Since the processes are independent, this is $(1/2)^M$. This will be less than $1/32$ if $M > 5$. [3 marks]

c) Differentiating $\Pi_n(s) = \Psi(s)\,\Pi_{n-1}(\Pi(s))$ with respect to $s$ gives
\[
\Pi_n'(s) = \Psi'(s)\,\Pi_{n-1}(\Pi(s)) + \Psi(s)\,\Pi_{n-1}'(\Pi(s))\,\Pi'(s),
\]
so that setting $s = 1$ gives $\mu_n = \nu + \mu_{n-1}\mu$, where we have used the fact that $\Pi(1) = \Psi(1) = \Pi_{n-1}(1) = 1$. [6 marks]
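The extinction probability in parts (a)(iv)-(v) can be confirmed numerically: iterating the map $\theta \leftarrow \Pi(\theta)$ from $\theta = 0$ converges to the smallest non-negative root of $\theta = \Pi(\theta)$. A short Python sketch:

```python
def Pi(theta):
    # Offspring pgf with mu = 2 ln 2, i.e. Pi(theta) = 2^(-2(1 - theta)).
    return 2.0 ** (-2.0 * (1.0 - theta))

# Fixed-point iteration from 0 converges monotonically upward to the
# smallest root, which the solution identifies as 1/2.
theta = 0.0
for _ in range(200):
    theta = Pi(theta)
print(theta)

# Part (b): (1/2)^M equals 1/32 at M = 5 and first drops below it at M = 6.
print(0.5 ** 5, 0.5 ** 6)
```

Since $\Pi'(1/2) = \ln 2 < 1$, the iteration converges linearly and 200 steps are far more than enough.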
4. a) We have
\[
t\,\frac{\partial \Pi}{\partial s} = s\,\frac{\partial \Pi}{\partial t} + st\,\Pi + s^2 t\,\frac{\partial \Pi}{\partial s},
\]
so that
\[
(1 - s^2)\,\frac{\partial \Pi}{\partial s} - \frac{s}{t}\,\frac{\partial \Pi}{\partial t} = s\,\Pi,
\]
from which
\[
f = \frac{1 - s^2}{s}, \qquad g = -\frac{1}{t}, \qquad h = \Pi.
\]
The auxiliary equations are thus
\[
\frac{s\,ds}{1 - s^2} = -t\,dt = \frac{d\Pi}{\Pi}.
\]
Taking the first and second expressions gives
\[
\int \frac{s}{1 - s^2}\,ds = -\int t\,dt \quad\Longrightarrow\quad -\tfrac{1}{2}\log\left(1 - s^2\right) = -\tfrac{1}{2}t^2 + \text{const}.
\]
Taking the second and third expressions gives
\[
-\int t\,dt = \int \frac{1}{\Pi}\,d\Pi \quad\Longrightarrow\quad -\tfrac{1}{2}t^2 = \log \Pi + \text{const}.
\]
From these,
\[
c_1 = \left(1 - s^2\right)e^{-t^2}, \qquad c_2 = e^{t^2/2}\,\Pi.
\]
Putting $c_2 = \Psi(c_1)$ gives $e^{t^2/2}\,\Pi = \Psi\big((1 - s^2)e^{-t^2}\big)$, so that
\[
\Pi(s, t) = e^{-t^2/2}\,\Psi\big((1 - s^2)e^{-t^2}\big).
\]
Now using the initial condition that $\Pi(s, 0) = s^2$ we get $s^2 = \Pi(s, 0) = \Psi(1 - s^2)$. Putting $x = 1 - s^2$, so $s^2 = 1 - x$ and $\Psi(x) = 1 - x$, gives
\[
\Pi(s, t) = e^{-t^2/2}\left\{1 - (1 - s^2)e^{-t^2}\right\}.
\]
[13 marks]

b) i) The general equation was derived and discussed at some length in the lectures: assuming $p_x(t) = 0$ for $x < 0$,
\[
\frac{d}{dt}p_x(t) = \beta_{x-1}\,p_{x-1}(t) + \nu_{x+1}\,p_{x+1}(t) - (\beta_x + \nu_x)\,p_x(t), \qquad x = 0, 1, \ldots
\]
[3 marks, seen]

ii) From the question, we have $\nu_x = \nu x$ and $\beta_x = \beta/(1 + x)$, so that (noting $\beta_{x-1} = \beta/x$)
\[
\frac{d}{dt}p_x(t) = \frac{\beta}{x}\,p_{x-1}(t) + \nu(x + 1)\,p_{x+1}(t) - \left(\frac{\beta}{1 + x} + \nu x\right)p_x(t), \qquad x = 0, 1, \ldots
\]
Multiply the equation for $x$ by $s^x$ and sum over $x$. Express the result in terms of $\Pi(s, t)$ and its derivatives via the definition $\Pi(s, t) = \sum_x p_x(t)\,s^x$. [4 marks, part seen]
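The closed form found in part (a) can be verified numerically: the sketch below evaluates, by central differences, the residual of the equation in the form $t(1 - s^2)\,\partial\Pi/\partial s - s\,\partial\Pi/\partial t - st\,\Pi = 0$, together with the initial condition $\Pi(s, 0) = s^2$ (the sample points $(s, t)$ are arbitrary illustrative choices):

```python
import math

def Pi(s, t):
    # Solution from part (a): Pi(s,t) = e^{-t^2/2} { 1 - (1 - s^2) e^{-t^2} }.
    return math.exp(-t * t / 2.0) * (1.0 - (1.0 - s * s) * math.exp(-t * t))

def residual(s, t, h=1e-5):
    # Residual of t(1 - s^2) Pi_s - s Pi_t - s t Pi at (s, t),
    # with the partial derivatives estimated by central differences.
    Pi_s = (Pi(s + h, t) - Pi(s - h, t)) / (2.0 * h)
    Pi_t = (Pi(s, t + h) - Pi(s, t - h)) / (2.0 * h)
    return t * (1.0 - s * s) * Pi_s - s * Pi_t - s * t * Pi(s, t)

print(residual(0.3, 0.7))      # near zero if Pi solves the PDE
print(Pi(0.3, 0.0) - 0.3 ** 2) # initial condition Pi(s, 0) = s^2
```

Both quantities vanish to within finite-difference error, confirming the derivation.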
5. a) Let $R_j$ be the event that A loses, given that there are initially $j$ bogus votes in A's favour, and write $q_j = P(R_j)$. Let $W_1$ be the event that the first person votes for A, $W_2$ the event that the first person abstains, and $W_3$ the event that the first person votes for B. Then
\[
P(R_j) = P(R_j \mid W_1)\,P(W_1) + P(R_j \mid W_2)\,P(W_2) + P(R_j \mid W_3)\,P(W_3).
\]
The first vote is like an extra bogus vote, so
\[
q_j = q_{j+1}\,p + q_j\,r + q_{j-1}\,q.
\]
[6 marks]

b) From part (a),
\[
(1 - r)\,q_j = q_{j+1}\,p + q_{j-1}\,q \quad\Longrightarrow\quad q_j = q_{j+1}\,\frac{p}{1 - r} + q_{j-1}\,\frac{q}{1 - r}.
\]
Putting $p/(1 - r) = a$ and $q/(1 - r) = b$, we can use the hint in the question to obtain the solutions
\[
q_j = \begin{cases} c_1 + c_2\,(q/p)^j & p \ne q; \\ c_3 + c_4\,j & p = q. \end{cases}
\]
For the case $p \ne q$ the boundary conditions $q_{-M} = 1$ and $q_M = 0$ lead to
\[
c_1 = \frac{-(q/p)^{2M}}{1 - (q/p)^{2M}} \quad\text{and}\quad c_2 = \frac{(q/p)^M}{1 - (q/p)^{2M}},
\]
from which
\[
q_j = \frac{(q/p)^{M+j} - (q/p)^{2M}}{1 - (q/p)^{2M}}.
\]
For the case $p = q$ the boundary conditions $q_{-M} = 1$ and $q_M = 0$ lead to $c_3 = 1/2$ and $c_4 = -1/(2M)$, from which
\[
q_j = \frac{1}{2} - \frac{j}{2M}.
\]
[6 marks]

c) Let $Y$ be a random variable taking the value $1$ if the first vote is for A, $0$ if the first vote is an abstention, and $-1$ if the first vote is for B, so that $P(Y = 1) = p$, $P(Y = 0) = r$, $P(Y = -1) = q$. Then if $X$ is the number of votes until an outcome is reached, we have
\[
D_j = E(X) = E(X \mid Y = 1)\,p + E(X \mid Y = 0)\,r + E(X \mid Y = -1)\,q = (1 + D_{j+1})\,p + (1 + D_j)\,r + (1 + D_{j-1})\,q.
\]
Using the fact that $p + q + r = 1$, this simplifies to
\[
(p + q)\,D_j = p\,D_{j+1} + q\,D_{j-1} + 1. \qquad (*)
\]
[4 marks]

d) Substitution of $D_j = c_5 + c_6\,j - j^2/(2p)$, with $p = q$, satisfies $(*)$. The boundary conditions are $D_{-M} = D_M = 0$, which yield $c_5 = M^2/(2p)$ and $c_6 = 0$, so that $D_j = (M^2 - j^2)/(2p)$. [4 marks]
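The closed forms in parts (b) and (d) can be checked against their recurrences and boundary conditions. The Python sketch below uses illustrative parameter values ($p = 0.3$, $q = 0.2$, $r = 0.5$, $M = 4$ for the $p \ne q$ case, and $p = q = 0.25$ for the duration; none are taken from the question):

```python
def q_loss(j, p, q, M):
    # Part (b), p != q: q_j = [(q/p)^{M+j} - (q/p)^{2M}] / [1 - (q/p)^{2M}].
    rho = q / p
    return (rho ** (M + j) - rho ** (2 * M)) / (1.0 - rho ** (2 * M))

def D(j, p, M):
    # Part (d), p = q: expected duration D_j = (M^2 - j^2) / (2p).
    return (M * M - j * j) / (2.0 * p)

p, q, r, M = 0.3, 0.2, 0.5, 4
# Check the recurrence (1 - r) q_j = p q_{j+1} + q q_{j-1} at interior states.
for j in range(-M + 1, M):
    lhs = (1.0 - r) * q_loss(j, p, q, M)
    rhs = p * q_loss(j + 1, p, q, M) + q * q_loss(j - 1, p, q, M)
    assert abs(lhs - rhs) < 1e-12

# Check (p + q) D_j = p D_{j+1} + q D_{j-1} + 1 with p = q.
p = 0.25
for j in range(-M + 1, M):
    assert abs(2 * p * D(j, p, M) - (p * D(j + 1, p, M) + p * D(j - 1, p, M) + 1.0)) < 1e-12

print(q_loss(-M, 0.3, 0.2, 4), q_loss(M, 0.3, 0.2, 4))  # boundaries
```

The boundary values come out as $q_{-M} = 1$ and $q_M = 0$, as required.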