
UNIVERSITY OF LONDON
IMPERIAL COLLEGE LONDON

BSc and MSci EXAMINATIONS (MATHEMATICS) MAY-JUNE 2003

This paper is also taken for the relevant examination for the Associateship.

M3S4/M4S4 (SOLUTIONS): APPLIED PROBABILITY

DATE: Tuesday, 3rd June 2003.  TIME: 2 pm to 4 pm

Credit will be given for all questions attempted, but extra credit will be given for complete or nearly complete answers.

Calculators may not be used. Statistical tables will not be available.

© 2003 University of London, M3S4/M4S4 (SOLUTIONS)

1. a) [seen]

∫₀^∞ [1 - F(x)] dx = ∫₀^∞ ∫_x^∞ f(y) dy dx = ∫₀^∞ ∫₀^y f(y) dx dy = ∫₀^∞ y f(y) dy = E(X),

or, alternatively, integrating by parts,

∫₀^∞ [1 - F(x)] dx = [x{1 - F(x)}]₀^∞ + ∫₀^∞ x f(x) dx = 0 + E(X) = E(X),

provided x{1 - F(x)} → 0 as x → ∞.  [4]

b) i) Let T be the time between consecutive events. Then

P(T > t) = P(no events in [0, t]) = P(X(t) = 0) = e^{-λt}(λt)⁰/0! = e^{-λt}.

Hence F(t) = 1 - e^{-λt} and f(t) = λ e^{-λt}.  [3]

ii) E(T) = λ^{-1}.  [1]

iii) From the question, and using the memoryless property of the exponential distribution, the probability that the kth event occurs in [t, t + δt] is

P((k-1) events in [0, t]) × P(1 event in [t, t + δt]).

Using the definition of a Poisson process given in the question, this is

P(t < T_k ≤ t + δt) = [e^{-λt}(λt)^{k-1}/(k-1)!] e^{-λδt} λδt
                    = [e^{-λt}(λt)^{k-1}/(k-1)!] [1 - λδt + o(δt)] λδt
                    = [e^{-λt}(λt)^{k-1}/(k-1)!] λδt [1 + o(δt)],

so that the pdf is f(t) = e^{-λt} λ(λt)^{k-1}/(k-1)!, which is the pdf of a Gamma distribution.  [6]

c) Π_Z(s) = exp(-μ[1 - Π_X(s)]) = exp(-μ[1 - ps/(1-qs)]), so

Π'_Z(s) = exp(-μ[1 - ps/(1-qs)]) · μ ( p/(1-qs) + psq/(1-qs)² ).

Hence E(Z) = Π'_Z(1) = μ(1 + q/p) = μ/p.  [6]
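The result E(Z) = μ/p in part (c) can be sanity-checked by simulation: Z is a compound Poisson sum of geometric variables with pgf ps/(1-qs), i.e. P(X = k) = p q^{k-1} on {1, 2, ...}. A minimal sketch; the values μ = 3 and p = 0.4 are illustrative, not from the question.

```python
import math
import random

def sample_poisson(mu, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-mu}.
    L, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def sample_Z(mu, p, rng):
    # Z = X_1 + ... + X_N, N ~ Poisson(mu), X_i ~ Geometric(p) on {1, 2, ...}
    # (number of Bernoulli(p) trials up to and including the first success).
    total = 0
    for _ in range(sample_poisson(mu, rng)):
        x = 1
        while rng.random() >= p:
            x += 1
        total += x
    return total

rng = random.Random(0)
mu, p = 3.0, 0.4                     # illustrative parameter values
n = 200_000
est = sum(sample_Z(mu, p, rng) for _ in range(n)) / n
print(est, mu / p)                   # sample mean should be close to mu/p = 7.5
```

The sample mean agrees with μ/p to within Monte Carlo error, as part (c) predicts.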

2. a) i) The defining characteristic of an irreducible Markov chain is that all the states intercommunicate.  [1]  [seen]

ii) The Chapman-Kolmogorov equations,

p_{ij}^{(m+n)} = Σ_k p_{ik}^{(m)} p_{kj}^{(n)},

or some equivalent statement, meaning that the probability of moving from state i to state j in (m+n) steps is the sum, over all states k, of the product of the probabilities of moving first from i to k in m steps and then from k to j in n steps. That is, the sum of the probabilities of all paths from i to j.  [2]

iii) Transient: the probability that the chain will return to a transient state n times tends to 0 as n → ∞. Recurrent: the state, once visited, is certain to recur an infinite number of times. Null recurrent: recurrent states with an infinite mean recurrence time. Positive recurrent: recurrent states with a finite mean recurrence time.  [4]

b) i) We want the top-left cell of P², where

P = ( 0.5  0.4  0.1 )
    ( 0.3  0.4  0.3 )
    ( 0.2  0.3  0.5 ),

from which the answer is 0.5 × 0.5 + 0.4 × 0.3 + 0.1 × 0.2 = 0.25 + 0.12 + 0.02 = 0.39.  [2]

ii) The limiting distribution is given by solving the system of equations π = πP, Σᵢ πᵢ = 1:

0.5π₀ + 0.3π₁ + 0.2π₂ = π₀
0.4π₀ + 0.4π₁ + 0.3π₂ = π₁
π₀ + π₁ + π₂ = 1.

This is easily solved to yield π₀ = 21/62, π₁ = 23/62, π₂ = 18/62.  [5]

iii) The mean recurrence time of state 0 is, by the basic limit theorem for Markov chains, [lim_{n→∞} p₀₀^{(n)}]^{-1}. From part (ii) this is 62/21 days.  [2]

c) i) With states 0, 1, 2, 3, the transition matrix is

P = (  0   3/3   0    0  )
    ( 1/3   0   2/3   0  )
    (  0   2/3   0   1/3 )
    (  0    0   3/3   0  ).   [2]

ii) It does not have a limiting distribution because it is periodic, with period 2: it is impossible to return to a starting state in an odd number of steps.  [1]
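The arithmetic in part (b) is easy to confirm numerically; a minimal sketch using only the transition matrix given in the question, iterating π ← πP until it converges (the chain is irreducible and aperiodic, so the iteration converges to the limiting distribution from any starting distribution):

```python
# Transition matrix from part (b); states 0, 1, 2.
P = [[0.5, 0.4, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

def step(pi, P):
    # One step of the distribution: pi -> pi P.
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# (i) top-left entry of P^2
p2_00 = sum(P[0][k] * P[k][0] for k in range(3))
print(p2_00)                 # ~ 0.39

# (ii) limiting distribution: iterate until it stops changing
pi = [1.0, 0.0, 0.0]
for _ in range(200):
    pi = step(pi, P)
print(pi)                    # ~ (21/62, 23/62, 18/62) = (0.3387..., 0.3709..., 0.2903...)

# (iii) mean recurrence time of state 0
print(1 / pi[0])             # ~ 62/21 = 2.952... days
```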

3. a) i) Let Z_i be the number in the ith generation; then  [seen]

Z_{n+1} = X₁ + X₂ + ... + X_{Z_n}  (Z_n > 0),    Z_{n+1} = 0  (Z_n = 0),

so E(Z_{n+1} | Z_n) = Z_n μ, so E(Z_{n+1}) = μ E(Z_n), hence E(Z_n) = μⁿ.  [2]

ii) E(1 + Z₁ + ... + Z_n) = 1 + μ + μ² + ... + μⁿ = (1 - μ^{n+1})/(1 - μ) if μ ≠ 1; = n + 1 if μ = 1.

iii) From P(X = 0) = 1/√e we have μ = 1/2 < 1, so that the mean generation size tends to zero.  [2]

iv) The probability of ultimate extinction is given by the smallest positive solution of θ = Π(θ), where the pgf is Π(θ) = e^{-μ(1-θ)}.

v) With μ = 2 ln 2 we have θ = e^{-2 ln 2 (1-θ)} = 2^{-2(1-θ)}. By substitution, θ = 1/2 is a solution. From the lectures (or by looking at the first and second derivatives of Π(θ)) we know that there is at most one solution in 0 ≤ θ < 1, so this is the required probability.  [5]

b) We have

P(extinct) = P(process from 1st individual dies out) × P(process from 2nd individual dies out) × ... × P(process from Mth individual dies out).

Since the processes are independent, this is (1/2)^M. This will be less than 1/32 if M > 5.  [3]

c) Differentiating Π_n(s) = Ψ(s) Π_{n-1}(Π(s)) with respect to s gives

Π'_n(s) = Ψ'(s) Π_{n-1}(Π(s)) + Ψ(s) Π'_{n-1}(Π(s)) Π'(s),

so that setting s = 1 gives μ_n = ν + μ_{n-1} μ, where we have used the fact that Π(1) = Ψ(1) = Π_{n-1}(1) = 1.  [6]
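The extinction probability in part (a)v) can be confirmed by fixed-point iteration: since Π is increasing and convex with Π(0) > 0, iterating θ ← Π(θ) from θ = 0 converges monotonically to the smallest root of θ = Π(θ). A minimal sketch with μ = 2 ln 2:

```python
import math

# Poisson offspring with mu = 2 ln 2, pgf Pi(theta) = exp(-mu (1 - theta)).
# The extinction probability is the smallest root of theta = Pi(theta) in [0, 1].
mu = 2 * math.log(2)
theta = 0.0
for _ in range(200):
    theta = math.exp(-mu * (1 - theta))
print(theta)   # converges to 1/2, as found by substitution above
```

The iteration converges linearly at rate Π'(1/2) = ln 2 ≈ 0.69, so 200 iterations are far more than enough.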

4. a) We have

t ∂Π/∂s = -s ∂Π/∂t - stΠ + s²t ∂Π/∂s,

so that

(1 - s²) ∂Π/∂s + (s/t) ∂Π/∂t = -sΠ,

from which, writing the equation as f ∂Π/∂s + g ∂Π/∂t = h,

f = (1 - s²)/s,    g = 1/t,    h = -Π.

The auxiliary equations are thus

s ds/(1 - s²) = t dt = -dΠ/Π.

Taking the first and second expressions gives

∫ s ds/(1 - s²) = ∫ t dt   ⟹   -½ log(1 - s²) = ½ t² + const.

Taking the second and third expressions gives

∫ t dt = -∫ dΠ/Π   ⟹   ½ t² = -log Π + const.

From this,

c₁ = (1 - s²) e^{t²},    c₂ = e^{t²/2} Π.

Putting c₂ = Ψ(c₁) gives e^{t²/2} Π = Ψ((1 - s²) e^{t²}), so that

Π(s, t) = e^{-t²/2} Ψ((1 - s²) e^{t²}).

Now using the initial condition that Π(s, 0) = s², we get s² = Π(s, 0) = Ψ(1 - s²). Putting x = 1 - s², so that s² = 1 - x, gives Ψ(x) = 1 - x, and hence

Π(s, t) = e^{-t²/2} { 1 - (1 - s²) e^{t²} }.   [13]

b) i) The general equation was derived and discussed at some length in the lectures: assuming p_x(t) = 0 for x < 0,  [seen]

(d/dt) p_x(t) = β_{x-1} p_{x-1}(t) + ν_{x+1} p_{x+1}(t) - (β_x + ν_x) p_x(t),    x = 0, 1, ...   [3]

ii) From the question, we have ν_x = νx and β_x = β/(1 + x), so that

(d/dt) p_x(t) = (β/x) p_{x-1}(t) + ν(x+1) p_{x+1}(t) - ( β/(1+x) + νx ) p_x(t),    x = 0, 1, ...

Multiply the equation for x by s^x and sum over x. Express the result in terms of Π(s, t) and its derivatives via the definition Π(s, t) = Σ_x p_x(t) s^x.  [part seen]  [4]
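The closed form in part (a) can be sanity-checked numerically: with Π(s, t) = e^{-t²/2}{1 - (1 - s²)e^{t²}}, the residual of (1 - s²)Π_s + (s/t)Π_t + sΠ should vanish. A minimal sketch using central differences at a few arbitrary sample points (t > 0, since the PDE has a 1/t coefficient):

```python
import math

# Candidate solution from part (a).
def Pi(s, t):
    return math.exp(-t * t / 2) * (1 - (1 - s * s) * math.exp(t * t))

h = 1e-5  # central-difference step

def residual(s, t):
    # (1 - s^2) dPi/ds + (s/t) dPi/dt + s Pi, which should be ~0.
    Pi_s = (Pi(s + h, t) - Pi(s - h, t)) / (2 * h)
    Pi_t = (Pi(s, t + h) - Pi(s, t - h)) / (2 * h)
    return (1 - s * s) * Pi_s + (s / t) * Pi_t + s * Pi(s, t)

for s, t in [(0.3, 0.7), (0.8, 1.2), (0.5, 0.4)]:
    print(abs(residual(s, t)))   # small: finite-difference error only

print(Pi(0.6, 0.0))              # initial condition: Pi(s, 0) = s^2 = 0.36
```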

5. a) Let R_j be the event that A loses, given that there are initially j bogus votes in A's favour. Let W₁ be the event that the first person votes for A, W₂ the event that the first person abstains, and W₃ the event that the first person votes for B. Then

P(R_j) = P(R_j | W₁) P(W₁) + P(R_j | W₂) P(W₂) + P(R_j | W₃) P(W₃).

The first vote is like an extra bogus vote, so, writing q_j = P(R_j),

q_j = q_{j+1} p + q_j r + q_{j-1} q.  [6]

b) From part (a),

(1 - r) q_j = q_{j+1} p + q_{j-1} q   ⟹   q_j = q_{j+1} p/(1 - r) + q_{j-1} q/(1 - r).

Putting p/(1 - r) = a and q/(1 - r) = b, we can use the hint in the question to obtain the solutions

q_j = c₁ + c₂ (q/p)^j   (p ≠ q);        q_j = c₃ + c₄ j   (p = q).

For the case p ≠ q the boundary conditions q_{-M} = 1 and q_M = 0 lead to

c₁ = -(q/p)^{2M} / [1 - (q/p)^{2M}]   and   c₂ = (q/p)^M / [1 - (q/p)^{2M}],

from which

q_j = [(q/p)^{M+j} - (q/p)^{2M}] / [1 - (q/p)^{2M}].

For the case p = q the boundary conditions q_{-M} = 1 and q_M = 0 lead to c₃ = 1/2 and c₄ = -1/(2M), from which q_j = 1/2 - j/(2M).  [6]

c) Let Y be a random variable taking the value 1 if the first vote is for A, 0 if the first vote is an abstention, and -1 if the first vote is for B. So P(Y = 1) = p, P(Y = 0) = r, P(Y = -1) = q. Then if X is the number of votes until an outcome is reached, we have

D_j = E(X) = E(X | Y = 1) p + E(X | Y = 0) r + E(X | Y = -1) q
           = (1 + D_{j+1}) p + (1 + D_j) r + (1 + D_{j-1}) q.

Using the fact that p + q + r = 1, this simplifies to

(p + q) D_j = p D_{j+1} + q D_{j-1} + 1.   (*)   [4]

d) Substitution of D_j = c₅ + c₆ j - j²/(2p), with p = q, satisfies (*). The boundary conditions are D_{-M} = D_M = 0, which yield c₅ = M²/(2p) and c₆ = 0.  [4]
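The closed form for q_j in part (b) can be checked by direct simulation of the voting walk: j moves up with probability p, stays put with probability r, and moves down with probability q, absorbing at -M (A loses) and +M (A wins). A minimal sketch; the values p = 0.4, q = 0.3, r = 0.3, M = 4, j = 1 are illustrative, not from the question.

```python
import random

def closed_form(j, p, q, M):
    # q_j = [(q/p)^{M+j} - (q/p)^{2M}] / [1 - (q/p)^{2M}], valid for p != q.
    rho = q / p
    return (rho ** (M + j) - rho ** (2 * M)) / (1 - rho ** (2 * M))

def simulate(j, p, q, M, n, rng):
    # Estimate P(A loses) = P(hit -M before +M) starting from j.
    losses = 0
    for _ in range(n):
        pos = j
        while -M < pos < M:
            u = rng.random()
            if u < p:
                pos += 1          # vote for A
            elif u < p + q:
                pos -= 1          # vote for B
            # else: abstention, position unchanged
        losses += (pos == -M)
    return losses / n

rng = random.Random(1)
p, q, r, M, j = 0.4, 0.3, 0.3, 4, 1   # illustrative values, p + q + r = 1
est = simulate(j, p, q, M, 100_000, rng)
exact = closed_form(j, p, q, M)
print(est, exact)                      # the two values should agree closely
```

Abstentions only delay the walk without changing where it is absorbed, which is why r does not appear in the closed form.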