Math 180B Homework 4 Solutions


Note: We will make repeated use of the following result.

Lemma 1. Let (X_n) be a time-homogeneous Markov chain with countable state space S, let A ⊆ S, and let T = inf{ n ≥ 0 : X_n ∈ A } be the first time that the Markov chain reaches a state in A. Then for each i ∈ S \ A, we have

  E[T | X_0 = i] = 1 + Σ_{j ∈ S\A} P(X_1 = j | X_0 = i) E[T | X_0 = j].

Proof. Suppose i ∈ S \ A and j ∈ S. By the Markov property and time homogeneity, we have

  E[T | X_0 = i, X_1 = j]
    = Σ_{n=0}^∞ P(T > n | X_0 = i, X_1 = j)
    = Σ_{n=0}^∞ P(X_0 ∉ A, ..., X_n ∉ A | X_0 = i, X_1 = j)
    = 1 + Σ_{n=1}^∞ P(X_1 ∉ A, ..., X_n ∉ A | X_0 = i, X_1 = j)
    = 1 + Σ_{n=1}^∞ P(X_1 ∉ A, ..., X_n ∉ A | X_1 = j)
    = 1 + Σ_{n=1}^∞ P(X_0 ∉ A, ..., X_{n-1} ∉ A | X_0 = j)
    = 1 + Σ_{n=1}^∞ P(T ≥ n | X_0 = j)
    = 1 + E[T | X_0 = j].

Also, note that if j ∈ A, then E[T | X_0 = j] = 0. Therefore, by first-step analysis, if i ∉ A, then

  E[T | X_0 = i]
    = Σ_n n P(T = n | X_0 = i)
    = Σ_n n Σ_{j ∈ S} P(T = n, X_1 = j | X_0 = i)
    = Σ_{j ∈ S} Σ_n n P(T = n | X_0 = i, X_1 = j) P(X_1 = j | X_0 = i)
    = Σ_{j ∈ S} P(X_1 = j | X_0 = i) E[T | X_0 = i, X_1 = j]
    = Σ_{j ∈ S} P(X_1 = j | X_0 = i) (1 + E[T | X_0 = j])
    = Σ_{j ∈ S} P(X_1 = j | X_0 = i) + Σ_{j ∈ S} P(X_1 = j | X_0 = i) E[T | X_0 = j]
    = 1 + Σ_{j ∈ S\A} P(X_1 = j | X_0 = i) E[T | X_0 = j].

Problem 1 (Pinsky & Karlin, Exercise 3.4.). Find the mean time to reach state 3 starting from state 0 for the Markov chain whose transition probability matrix is

          0     1     2     3
    0   0.4   0.3   0.2   0.1
    1   0     0.7   0.2   0.1
    2   0     0     0.9   0.1
    3   0     0     0     1

Solution. Let T be the time it takes for the Markov chain to reach state 3. That is, if (X_n) is the Markov chain in question, then T = inf{ n ≥ 0 : X_n = 3 }. We need to compute E[T | X_0 = 0]. Let g_i = E[T | X_0 = i] for i ∈ {0, 1, 2, 3}. By Lemma 1, we have the following linear system of equations:

  g_0 = 1 + 0.4 g_0 + 0.3 g_1 + 0.2 g_2,
  g_1 = 1 + 0.7 g_1 + 0.2 g_2,
  g_2 = 1 + 0.9 g_2.

Solving this system, we get E[T | X_0 = 0] = g_0 = 10.
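As a quick numerical aside (not part of the assigned solution), the triangular structure of the system for Problem 1 lets us check g_0 = 10 by back-substitution:

```python
# Back-substitute the first-step equations from Problem 1:
#   g2 = 1 + 0.9*g2,  g1 = 1 + 0.7*g1 + 0.2*g2,  g0 = 1 + 0.4*g0 + 0.3*g1 + 0.2*g2
g2 = 1 / (1 - 0.9)
g1 = (1 + 0.2 * g2) / (1 - 0.7)
g0 = (1 + 0.3 * g1 + 0.2 * g2) / (1 - 0.4)
print(g0, g1, g2)  # all equal 10 up to floating-point rounding
```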

Problem 2 (Pinsky & Karlin, Exercise 3.4.4). A coin is tossed repeatedly until two successive heads appear. Find the mean number of tosses required.

Hint: Let X_n be the cumulative number of successive heads. The state space is {0, 1, 2}, and the transition probability matrix is

          0     1     2
    0   1/2   1/2   0
    1   1/2   0     1/2
    2   0     0     1

Determine the mean time to reach state 2 starting from state 0 by invoking a first-step analysis.

Solution. Let T = inf{ n ≥ 0 : X_n = 2 }. We wish to compute E[T | X_0 = 0]. Let g_i = E[T | X_0 = i] for i ∈ {0, 1, 2}. By Lemma 1, we get the following linear system of equations:

  g_0 = 1 + (1/2) g_0 + (1/2) g_1,
  g_1 = 1 + (1/2) g_0.

Therefore, E[T | X_0 = 0] = g_0 = 6.

Problem 3 (Pinsky & Karlin, Problem 3.4.). Which will take fewer flips, on average: successively flipping a quarter until the pattern HHT appears, i.e., until you observe two successive heads followed by a tails; or successively flipping a quarter until the pattern HTH appears? Can you explain why these are different?

Solution. Let (X_n) be the Markov chain whose state is the pattern formed by three consecutive coin tosses, so that X_0 is the pattern of the first three tosses (uniformly distributed over the eight patterns) and each step appends one new toss while dropping the oldest. Then (X_n) has the following transition probability matrix (blank entries represent 0):

          TTT  TTH  THT  THH  HTT  HTH  HHT  HHH
    TTT   1/2  1/2
    TTH             1/2  1/2
    THT                       1/2  1/2
    THH                                 1/2  1/2
    HTT   1/2  1/2
    HTH             1/2  1/2
    HHT                       1/2  1/2
    HHH                                 1/2  1/2

Define

  T_HHT = inf{ n ≥ 0 : X_n = HHT },  T_HTH = inf{ n ≥ 0 : X_n = HTH }.

We wish to compare E[T_HHT] and E[T_HTH]. Let g_i = E[T_HHT | X_0 = i] for each pattern i. Then by Lemma 1 (using g_HHT = 0), we have the following linear system of equations:

  g_TTT = 1 + (1/2) g_TTT + (1/2) g_TTH,
  g_HTT = 1 + (1/2) g_TTT + (1/2) g_TTH,
  g_TTH = 1 + (1/2) g_THT + (1/2) g_THH,
  g_HTH = 1 + (1/2) g_THT + (1/2) g_THH,
  g_THT = 1 + (1/2) g_HTT + (1/2) g_HTH,
  g_THH = 1 + (1/2) g_HHH,
  g_HHH = 1 + (1/2) g_HHH.

Solving this system yields g_HHH = g_THH = 2, g_HTH = g_TTH = 6, g_TTT = g_HTT = g_THT = 8, and hence

  E[T_HHT] = Σ_j P(X_0 = j) g_j = (1/8)(2 + 2 + 6 + 6 + 8 + 8 + 8) = 5.

Next, let h_i = E[T_HTH | X_0 = i] for each pattern i. Then by Lemma 1 (using h_HTH = 0), we have the following linear system of equations:

  h_TTT = 1 + (1/2) h_TTT + (1/2) h_TTH,
  h_HTT = 1 + (1/2) h_TTT + (1/2) h_TTH,
  h_TTH = 1 + (1/2) h_THT + (1/2) h_THH,
  h_THT = 1 + (1/2) h_HTT,
  h_HHT = 1 + (1/2) h_HTT,
  h_THH = 1 + (1/2) h_HHT + (1/2) h_HHH,
  h_HHH = 1 + (1/2) h_HHT + (1/2) h_HHH.

Solving this system yields h_THT = h_HHT = 6, h_TTH = h_THH = h_HHH = 8, h_HTT = h_TTT = 10, and hence

  E[T_HTH] = Σ_j P(X_0 = j) h_j = (1/8)(6 + 6 + 8 + 8 + 8 + 10 + 10) = 7.

Thus, on average, it will take fewer flips to observe HHT than HTH (counting the three initial tosses, 5 + 3 = 8 flips versus 7 + 3 = 10). Intuitively, the difference comes from how each pattern recovers from failure: once HH has appeared, every extra head leaves us still waiting only for a tail to complete HHT, whereas if HT is followed by a tail, all progress toward HTH is lost and we must start over.

Problem 4 (Pinsky & Karlin, Problem 3.4.). A zero-seeking device operates as follows: If it is in state m at time n, then at time n + 1, its position is uniformly distributed over the states 0, 1, ..., m − 1. Find the expected time until the device first hits zero starting from state m.

Note: This is a highly simplified model for an algorithm that seeks a maximum over a finite set of points.

Solution. This device can be modeled by a Markov chain (X_n) taking values in the nonnegative integers with transition probabilities given by

  P(X_{n+1} = j | X_n = m) = 1/m,  if m > 0 and 0 ≤ j < m,
                             1,    if m = j = 0,
                             0,    otherwise.

Let

  T = inf{ n ≥ 0 : X_n = 0 }.

We wish to compute E[T | X_0 = m] for all m ≥ 0. If m = 0, then clearly E[T | X_0 = m] = 0. Suppose m > 0. By Lemma 1, we have

  E[T | X_0 = m] = 1 + Σ_j P(X_1 = j | X_0 = m) E[T | X_0 = j]
                 = 1 + (1/m) Σ_{j=1}^{m−1} E[T | X_0 = j].

We will prove by strong induction that E[T | X_0 = m] = Σ_{k=1}^{m} 1/k. The base case m = 1 is clear, since then the sum above is empty and E[T | X_0 = 1] = 1. Now suppose m > 1 and that E[T | X_0 = j] = Σ_{k=1}^{j} 1/k when 1 ≤ j < m. Then

  E[T | X_0 = m] = 1 + (1/m) Σ_{j=1}^{m−1} Σ_{k=1}^{j} 1/k
                 = 1 + (1/m) Σ_{k=1}^{m−1} Σ_{j=k}^{m−1} 1/k
                 = 1 + (1/m) Σ_{k=1}^{m−1} (m − k)/k
                 = 1 + Σ_{k=1}^{m−1} 1/k − (m − 1)/m
                 = 1/m + Σ_{k=1}^{m−1} 1/k
                 = Σ_{k=1}^{m} 1/k.

Problem 5 (Pinsky & Karlin, Problem 3.4.7). Let X_n be a Markov chain with transition probabilities P_{i,j}. We are given a discount factor β with 0 < β < 1 and a cost function c(i), and we wish to determine the total expected discounted cost starting from state i, defined by

  h_i = E[ Σ_{n=0}^∞ β^n c(X_n) | X_0 = i ].

Using a first-step analysis, show that h_i satisfies the system of linear equations

  h_i = c(i) + β Σ_j P_{i,j} h_j.

Solution. Assuming c(i) ≥ 0 for all i (so that we may interchange summation and expectation), we have

  h_i = E[ Σ_{n=0}^∞ β^n c(X_n) | X_0 = i ]
      = Σ_{n=0}^∞ β^n E[c(X_n) | X_0 = i]
      = E[c(X_0) | X_0 = i] + β Σ_{n=1}^∞ β^{n−1} E[c(X_n) | X_0 = i]
      = c(i) + β Σ_{n=0}^∞ β^n E[c(X_{n+1}) | X_0 = i]
      = c(i) + β E[ Σ_{n=0}^∞ β^n c(X_{n+1}) | X_0 = i ].

Thus, it suffices to show that

  E[ Σ_{n=0}^∞ β^n c(X_{n+1}) | X_0 = i ] = Σ_j P_{i,j} h_j.   (5.1)

To see this, we use time homogeneity and the Markov property:

  Σ_j P(X_1 = j | X_0 = i) E[c(X_n) | X_0 = j]
    = Σ_j P(X_1 = j | X_0 = i) Σ_k c(k) P(X_n = k | X_0 = j)
    = Σ_{j,k} c(k) P(X_1 = j | X_0 = i) P(X_{n+1} = k | X_1 = j)
    = Σ_{j,k} c(k) P(X_1 = j | X_0 = i) P(X_{n+1} = k | X_0 = i, X_1 = j)
    = Σ_{j,k} c(k) P(X_{n+1} = k, X_1 = j | X_0 = i)
    = Σ_k c(k) P(X_{n+1} = k | X_0 = i)
    = E[c(X_{n+1}) | X_0 = i],

and so

  Σ_j P_{i,j} h_j = Σ_j P(X_1 = j | X_0 = i) E[ Σ_{n=0}^∞ β^n c(X_n) | X_0 = j ]
    = Σ_{n=0}^∞ β^n Σ_j P(X_1 = j | X_0 = i) E[c(X_n) | X_0 = j]
    = Σ_{n=0}^∞ β^n E[c(X_{n+1}) | X_0 = i]
    = E[ Σ_{n=0}^∞ β^n c(X_{n+1}) | X_0 = i ].

Therefore, (5.1) holds.

Problem 6 (Pinsky & Karlin, Problem 3.4.8). An urn contains five red and three green balls. The balls are chosen at random, one by one, from the urn. If a red ball is chosen, it is removed. Any green ball that is chosen is returned to the urn. The selection process continues until all of the red balls have been removed from the urn. What is the mean duration of the game?

Solution. Let X_0 be the initial number of red balls in the urn, and for n ≥ 1, let X_n denote the number of red balls in the urn after the nth draw. Then (X_n) is a Markov chain with transition matrix

          0     1     2     3     4     5
    0    1     0     0     0     0     0
    1   1/4   3/4    0     0     0     0
    2    0    2/5   3/5    0     0     0
    3    0     0    3/6   3/6    0     0
    4    0     0     0    4/7   3/7    0
    5    0     0     0     0    5/8   3/8

Let T = inf{ n ≥ 0 : X_n = 0 } be the time until all red balls are removed from the urn and the game ends. We wish to compute E[T | X_0 = 5]. Let g_i = E[T | X_0 = i]. By Lemma 1, we have the following linear system of equations:

  g_1 = 1 + (3/4) g_1,
  g_2 = 1 + (2/5) g_1 + (3/5) g_2,
  g_3 = 1 + (3/6) g_2 + (3/6) g_3,
  g_4 = 1 + (4/7) g_3 + (3/7) g_4,
  g_5 = 1 + (5/8) g_4 + (3/8) g_5.

Solving this system, we get E[T | X_0 = 5] = g_5 = 237/20 = 11.85.
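As a cross-check (a numerical aside, not part of the solution), each equation in Problem 6 rearranges to g_i = g_{i−1} + (i + 3)/i, which can be summed from g_0 = 0 with exact rational arithmetic:

```python
from fractions import Fraction

# Each equation rearranges to g_i = g_{i-1} + (i+3)/i: with i red and 3 green
# balls present, the number of draws to remove one red is geometric with mean (i+3)/i.
g = Fraction(0)  # g_0 = 0
for i in range(1, 6):
    g += Fraction(i + 3, i)
print(g)  # 237/20
```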

Problem 7 (Pinsky & Karlin, Problem 3.4.5). A simplified model for the spread of a rumor goes this way: There are N = 5 people in a group of friends, of which some have heard the rumor and the others have not. During any single period of time, two people are selected at random from the group and assumed to interact. The selection is such that an encounter between any pair of friends is just as likely as between any other pair. If one of these persons has heard the rumor and the other has not, then with probability α = 0.1 the rumor is transmitted. Let X_n denote the number of friends who have heard the rumor at the end of the nth period. Assuming that the process begins at time 0 with a single person knowing the rumor, what is the mean time that it takes for everyone to hear it?

Solution. The stochastic process (X_n) is a Markov chain with state space {1, 2, 3, 4, 5} and transition probabilities given by P(X_{n+1} = 5 | X_n = 5) = 1, and, for i ∈ {1, 2, 3, 4},

  P(X_{n+1} = i + 1 | X_n = i) = α · C(i, 1) C(5 − i, 1) / C(5, 2) = (0.1) · i(5 − i)/10 = i(5 − i)/100

(of the C(5, 2) = 10 equally likely pairs, i(5 − i) consist of one person who has heard the rumor and one who has not), and

  P(X_{n+1} = i | X_n = i) = 1 − i(5 − i)/100.

All other transition probabilities are zero. Let

  T = inf{ n ≥ 0 : X_n = 5 }

be the time until all five people have heard the rumor. We wish to compute E[T | X_0 = 1]. Let g_i = E[T | X_0 = i] for i = 1, 2, 3, 4, 5. By Lemma 1, we have the following linear system of equations:

  g_1 = 1 + (1 − 4/100) g_1 + (4/100) g_2,
  g_2 = 1 + (1 − 6/100) g_2 + (6/100) g_3,
  g_3 = 1 + (1 − 6/100) g_3 + (6/100) g_4,
  g_4 = 1 + (1 − 4/100) g_4.

Solving this system, we get E[T | X_0 = 1] = g_1 = 250/3.
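Similarly (an aside, not part of the solution), each equation in Problem 7 rearranges to g_i = g_{i+1} + 100/(i(5 − i)), which telescopes from g_5 = 0:

```python
from fractions import Fraction

g = Fraction(0)  # g_5 = 0
for i in range(4, 0, -1):  # i = 4, 3, 2, 1
    g += Fraction(100, i * (5 - i))  # mean wait for the (i -> i+1) transmission
print(g)  # 250/3
```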

Ex 3.5.4. The transition probability matrix of the Markov chain (on states 0, 1, 2, 3) is

    1/2  1/2   0    0
    1/2   0   1/2   0
    1/2   0    0   1/2
     0    0    0    1

which is of the form of a success runs Markov chain with all probabilities p_i = q_i = 1/2.

Ex 3.5.8. The state space of the Markov chain is the nonnegative integers. The transition probabilities are as follows:

  P(X_{n+1} = k | X_n = j) = 1 − α,                if j = 0, k = 0,
                             α,                    if j = 0, k = 1,
                             (1 − α)β,             if j > 0, k = j − 1,
                             αβ + (1 − α)(1 − β),  if j = k > 0,
                             α(1 − β),             if j > 0, k = j + 1,
                             0,                    otherwise.

So, corresponding to (3.38), r_0 = 1 − α, p_0 = α, and q_i = (1 − α)β, r_i = αβ + (1 − α)(1 − β), p_i = α(1 − β) for i ≥ 1.

Pr 3.5.1. We are taking M = 3 in equation (3.35) in Section 3.5.1, with P(ξ = i) = a_i for i = 0, 1, 2, 3 and a_0 + a_1 + a_2 + a_3 = 1. All the a_i = 0 with i ≥ 4. Putting the corresponding a_3, a_4, ... into equation (3.35), we have µ = 1/a_3. Since in this case T = min{n : X_n ≥ 3} = min{n : X_n = 3}, µ = ν_0, the mean time until absorption starting from 0.

Pr 3.5.4. We let ξ_1, ξ_2, ... be i.i.d. discrete random variables having the uniform distribution on the integers 1, 2, ..., 6. Then X_0 = 0 and X_{n+1} = X_n + ξ_{n+1} define the Markov chain we are interested in. Set T = min{ n ≥ 0 : X_n > 10 }. We will solve for u_i = P(X_T = 13 | X_0 = i) for 0 ≤ i ≤ 10. Clearly,

  u_i = Σ_{k=1}^{6} P(X_T = 13 | X_1 = i + k) P(ξ_1 = k) = (1/6) Σ_{k=1}^{6} u_{i+k},

where we set u_j = 1 if j = 13 and u_j = 0 for all other j > 10. So we have

  u_10 = 1/6,
  u_9 = (1/6) u_10 + 1/6,
  u_8 = (1/6) u_9 + (1/6) u_10 + 1/6,
  u_7 = (1/6) u_8 + (1/6) u_9 + (1/6) u_10 + 1/6,
  u_6 = (1/6) u_7 + (1/6) u_8 + (1/6) u_9 + (1/6) u_10,
  u_5 = (1/6) u_6 + (1/6) u_7 + (1/6) u_8 + (1/6) u_9 + (1/6) u_10,
  u_4 = (1/6) u_5 + (1/6) u_6 + (1/6) u_7 + (1/6) u_8 + (1/6) u_9 + (1/6) u_10,
  u_3 = (1/6) u_4 + (1/6) u_5 + (1/6) u_6 + (1/6) u_7 + (1/6) u_8 + (1/6) u_9,
  u_2 = (1/6) u_3 + (1/6) u_4 + (1/6) u_5 + (1/6) u_6 + (1/6) u_7 + (1/6) u_8,
  u_1 = (1/6) u_2 + (1/6) u_3 + (1/6) u_4 + (1/6) u_5 + (1/6) u_6 + (1/6) u_7,
  u_0 = (1/6) u_1 + (1/6) u_2 + (1/6) u_3 + (1/6) u_4 + (1/6) u_5 + (1/6) u_6.

Solving the system of linear equations yields u_0 = 65990113/362797056 ≈ 0.182.

Pr 3.5.5. (a) X_n is clearly nonnegative. We observe that for k ≥ 1,

  E[X_{n+1} | X_n = k] = (1/2)(k − 1) + (1/2)(k + 1) = k,

and E[X_{n+1} | X_n = 0] = 0. Thus E[X_{n+1} | X_n] = X_n, and since

  E[X_{n+1}] = Σ_{k=0}^∞ E[X_{n+1} | X_n = k] P(X_n = k) = Σ_{k=1}^∞ k P(X_n = k) = E[X_n],

we have E[|X_n|] < ∞ for all n.

(b) Using the maximal inequality,

  P(max_n X_n ≥ N) ≤ E[X_0]/N  if X_0 < N,

and P(max_n X_n ≥ N) = 1 if X_0 ≥ N.
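The system in Pr 3.5.4 is tedious by hand but quick to verify exactly (a numerical aside; the dictionary setup is ours): start from the boundary values u_j = 1 for j = 13 and u_j = 0 for the other j > 10, then apply u_i = (1/6)(u_{i+1} + ... + u_{i+6}) downward.

```python
from fractions import Fraction

# Boundary values: the first partial sum above 10 lands in {11, ..., 16}.
u = {j: Fraction(1 if j == 13 else 0) for j in range(11, 17)}
for i in range(10, -1, -1):  # i = 10, 9, ..., 0
    u[i] = sum(u[i + k] for k in range(1, 7)) / 6
print(u[0])  # 65990113/362797056, roughly 0.182
```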