MAT 135B Midterm 1 Solutions

Last Name (PRINT):    First Name (PRINT):    Student ID #:    Section:

Instructions:
1. Do not open your test until you are told to begin.
2. Use a pen to print your name in the spaces above.
3. No notes, books, calculators, smartphones, iPods, etc. allowed.
4. Show all your work. Unsupported answers will receive NO CREDIT.
5. You are expected to do your own work.

#1    #2    #3    #4    #5    TOTAL

1. The only information you have about a random variable X is that its moment generating function φ(t) satisfies the inequality φ(t) ≤ e^{t^2} for all t ≥ 0. Find the best upper bound for P(X ≥ 10).

Solution: For t ≥ 0 we have

    P(X ≥ 10) = P(tX ≥ 10t) = P(e^{tX} ≥ e^{10t})
              ≤ e^{-10t} E[e^{tX}]        (using Markov's inequality)
              = e^{-10t} φ(t)
              ≤ e^{-10t + t^2}            (since φ(t) ≤ e^{t^2} for t ≥ 0).

To find the best upper bound, we need to find inf_{t ≥ 0} e^{-10t + t^2}. Differentiating e^{-10t + t^2} with respect to t and setting the derivative to zero, we find that the infimum is attained at t = 5, and inf_{t ≥ 0} e^{-10t + t^2} = e^{-50 + 25} = e^{-25}. Therefore

    P(X ≥ 10) ≤ e^{-25}.

Grading Rubric:
(+3 points) for writing P(X ≥ 10) = P(e^{tX} ≥ e^{10t}) for t ≥ 0.
(+2 points) for the correct version of Markov's inequality.
(+2 points) for applying φ(t) ≤ e^{t^2} for t ≥ 0.
(+2 points) for optimizing over t ≥ 0.
(+1 point) for the final answer.
(Only 1 or 2 points) if P(S_n ≥ 10n) ≤ exp(-nI(10)) is used.
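As a quick numerical sanity check (not part of the original solution), the Python sketch below minimizes the bound e^{-10t + t^2} over a grid of t ≥ 0; the grid size and the use of NumPy are illustrative choices only.

    import numpy as np

    # Evaluate the bound e^{-10t + t^2} on a fine grid of t >= 0 and locate its minimum.
    t = np.linspace(0, 10, 100001)
    bound = np.exp(-10 * t + t**2)
    i = np.argmin(bound)
    print(t[i])                   # optimizing t, ~5.0
    print(bound[i], np.exp(-25))  # both ~1.39e-11, matching the e^{-25} bound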

2. Prove that for any fixed x > 0,

    ∑_{k: |k - n| ≤ x√n} (n^k / k!) e^{-n}  ~  ∫_{-x}^{x} (1/√(2π)) e^{-u^2/2} du,

where the notation ~ means that the ratio of the left-hand side and the right-hand side goes to 1 as n → ∞.

Solution: First of all, we notice that if S_n ~ Poisson(n), then P(S_n = k) = e^{-n} n^k / k! for all 0 ≤ k < ∞. Therefore, for any x > 0,

    ∑_{k: |k - n| ≤ x√n} (n^k / k!) e^{-n} = ∑_{k: |k - n| ≤ x√n} P(S_n = k)
                                           = P(|S_n - n| ≤ x√n)
                                           = P(-x√n ≤ S_n - n ≤ x√n)
                                           = P(-x ≤ (S_n - n)/√n ≤ x).

We know that if Y_1 ~ Poisson(λ_1) and Y_2 ~ Poisson(λ_2) are independent, then Y_1 + Y_2 ~ Poisson(λ_1 + λ_2). So we can write S_n = ∑_{i=1}^{n} X_i, where the X_i are i.i.d. Poisson(1). We know that E[X_i] = 1 and Var[X_i] = 1, so E[S_n] = n and Var(S_n) = n. Using the central limit theorem we can conclude that

    (S_n - n)/√n → N(0, 1) in distribution as n → ∞.

In other words, as n → ∞,

    P(-x ≤ (S_n - n)/√n ≤ x) → ∫_{-x}^{x} (1/√(2π)) e^{-u^2/2} du,

and therefore

    ∑_{k: |k - n| ≤ x√n} (n^k / k!) e^{-n}  /  ∫_{-x}^{x} (1/√(2π)) e^{-u^2/2} du → 1.

Grading Rubric:
(+3 points) for identifying Poisson(n).
(+2 points) for writing the mean and variance of Poisson(n).
(+2 points) for writing a form similar to P(-x ≤ (S_n - n)/√n ≤ x).
(+2 points) for applying the CLT.
(+1 point) for the final answer.
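For intuition (not part of the original solution), the sketch below compares the Poisson sum with the normal integral for one moderate n; the values n = 10000 and x = 1.5, and the helper names, are illustrative only.

    import math

    def poisson_sum(n, x):
        # Sum e^{-n} n^k / k! over the integers k with |k - n| <= x*sqrt(n).
        lo = math.ceil(n - x * math.sqrt(n))
        hi = math.floor(n + x * math.sqrt(n))
        total = 0.0
        logp = -n  # log of P(S_n = 0) = e^{-n}
        for k in range(hi + 1):
            if k >= lo:
                total += math.exp(logp)
            logp += math.log(n) - math.log(k + 1)  # advance to log P(S_n = k + 1)
        return total

    def normal_integral(x):
        # Integral of the standard normal density over [-x, x].
        return math.erf(x / math.sqrt(2))

    n, x = 10000, 1.5
    print(poisson_sum(n, x), normal_integral(x))  # both close to 0.87, ratio near 1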

3. A fair coin is tossed n times, showing heads H_n times and tails T_n times. Let S_n = H_n - T_n. Show that

    P(S_n > an)^{1/n} → 1 / √((1 + a)^{1+a} (1 - a)^{1-a})   if 0 < a < 1.

Solution: Let us define a sequence of random variables {X_i} by

    X_i = +1 if the i-th outcome is a head,   X_i = -1 if the i-th outcome is a tail.

Since the coin is fair, P(X_i = 1) = 1/2 = P(X_i = -1) for all 1 ≤ i ≤ n, and since the tosses are independent of each other, the X_i are independent. Also notice that ∑_{i=1}^{n} X_i = H_n - T_n = S_n. Since a > 0 = E[X_i], the large deviation technique gives

    P(S_n ≥ an) ≤ exp[-nI(a)],   where I(a) = sup_{t ≥ 0} {at - log φ(t)}

and φ(t) = E[e^{tX_i}] is the moment generating function of X_i. It is easy to see that φ(t) = (e^t + e^{-t})/2. Define the function h(t) = at - log[(e^t + e^{-t})/2]. Differentiating h(t) with respect to t and setting h'(t) = 0, we have

    a - (e^t - e^{-t})/(e^t + e^{-t}) = 0
    a(e^t + e^{-t}) = e^t - e^{-t}
    (a - 1)e^t = -(1 + a)e^{-t}
    e^{2t} = (1 + a)/(1 - a)
    t = (1/2) log[(1 + a)/(1 - a)].

Also notice that h''(t) = -4/(e^t + e^{-t})^2 < 0. Therefore t = (1/2) log[(1 + a)/(1 - a)] is a maximum of h, and

    I(a) = h((1/2) log[(1 + a)/(1 - a)])
         = (a/2) log[(1 + a)/(1 - a)] - log[(1/2)(√((1 + a)/(1 - a)) + √((1 - a)/(1 + a)))]
         = (a/2) log[(1 + a)/(1 - a)] - log[1/√((1 + a)(1 - a))]
         = (a/2) log[(1 + a)/(1 - a)] + (1/2) log[(1 + a)(1 - a)]
         = ((1 + a)/2) log(1 + a) + ((1 - a)/2) log(1 - a)
         = log √((1 + a)^{1+a} (1 - a)^{1-a}).

Therefore

    lim_{n→∞} [P(S_n ≥ an)]^{1/n} ≤ exp[-I(a)] = 1 / √((1 + a)^{1+a} (1 - a)^{1-a}).   (1)
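A small numerical check (not part of the original solution): the supremum defining I(a) agrees with the closed form derived above. The value a = 0.3 and the grid are illustrative.

    import numpy as np

    a = 0.3
    t = np.linspace(0, 10, 200001)
    h = a * t - np.log((np.exp(t) + np.exp(-t)) / 2)   # h(t) = at - log phi(t)
    closed_form = 0.5 * np.log((1 + a)**(1 + a) * (1 - a)**(1 - a))
    print(h.max(), closed_form)  # both ~0.0457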

To obtain the lower bound, we notice that

    P(S_n ≥ an) ≥ P(S_n = an) = P(H_n - T_n = an) = P(H_n = n(1 + a)/2)
                = (n choose n(1 + a)/2) 2^{-n}
                = [n! / ((n(1 + a)/2)! (n(1 - a)/2)!)] 2^{-n}.

By Stirling's approximation formula we know that k! ≈ √(2πk) (k/e)^k for large k. Applying Stirling's approximation to n!, (n(1 + a)/2)! and (n(1 - a)/2)! we obtain

    lim_{n→∞} [P(S_n ≥ an)]^{1/n}
      ≥ (1/2) lim_{n→∞} [n! / ((n(1 + a)/2)! (n(1 - a)/2)!)]^{1/n}
      = (1/2) lim_{n→∞} [2πn / (π^2 n^2 (1 - a^2))]^{1/(2n)} · (n/e) / [(n(1 + a)/(2e))^{(1+a)/2} (n(1 - a)/(2e))^{(1-a)/2}]
      = (1/2) · 2 / [(1 + a)^{(1+a)/2} (1 - a)^{(1-a)/2}]     (since lim_{n→∞} n^{1/n} = 1)
      = 1 / √((1 + a)^{1+a} (1 - a)^{1-a}).   (2)

Combining (1) and (2) we have the result

    lim_{n→∞} [P(S_n ≥ an)]^{1/n} = 1 / √((1 + a)^{1+a} (1 - a)^{1-a}).
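The limit can also be checked numerically (not part of the original solution) with exact binomial probabilities; the value a = 0.3, the choices of n below, and the helper name tail are illustrative only.

    from math import comb

    def tail(n, a):
        # P(S_n > a n)^{1/n}, computed exactly: S_n > a n  iff  H_n > n(1 + a)/2.
        threshold = n * (1 + a) / 2
        p = sum(comb(n, k) for k in range(n + 1) if k > threshold) / 2**n
        return p ** (1 / n)

    a = 0.3
    limit = ((1 + a)**(1 + a) * (1 - a)**(1 - a)) ** (-0.5)
    for n in (100, 1000, 4000):
        print(n, tail(n, a), limit)  # tail(n, a) increases toward the limit ~0.9553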

4. A coin has probability p of Heads. Adam flips it first, then Becca, then Adam, etc., and the winner is the first to flip Heads. Compute the probability that Adam wins.

Solution: Let f(p) be the probability that Adam wins. Conditioning on the first toss (the law of total probability), we have

    f(p) = P(Adam wins | the first toss is a head) P(the first toss is a head)
           + P(Adam wins | the first toss is a tail) P(the first toss is a tail)
         = p + P(Adam wins | the first toss is a tail)(1 - p).

But if the first toss is a tail, then Adam may win the game only if Becca also tosses a tail (i.e., the second toss is a tail), and then the game restarts. So

    P(Adam wins | the first toss is a tail) = (1 - p) f(p).

Combining all the above equations we get

    f(p) = p + (1 - p)^2 f(p).

Solving this equation we obtain f(p) = p / (1 - (1 - p)^2) = 1/(2 - p).
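A Monte Carlo sanity check (not part of the original solution); the value p = 0.4, the number of trials, and the helper name adam_wins are illustrative.

    import random

    def adam_wins(p, trials=200_000):
        # Simulate the game repeatedly and return the fraction of games Adam wins.
        wins = 0
        for _ in range(trials):
            adams_turn = True
            while True:
                if random.random() < p:        # heads: the current player wins
                    wins += adams_turn
                    break
                adams_turn = not adams_turn    # tails: the other player flips next
        return wins / trials

    p = 0.4
    print(adam_wins(p), 1 / (2 - p))  # both ~0.625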

5. A die is rolled repeatedly. Which of the following are Markov chains? For those that are, supply the transition matrix.
(a) The largest number X_n shown up to the n-th roll.
(b) The number N_n of sixes in n rolls.
(c) At time n, the time C_n since the most recent six.
(d) At time n, the time B_n until the next six.

Solution: A stochastic process {Z_n}_{n≥1} is a Markov chain if

    P(Z_{n+1} = i_{n+1} | Z_n = i_n, Z_{n-1} = i_{n-1}, ..., Z_1 = i_1) = P(Z_{n+1} = i_{n+1} | Z_n = i_n).

(a) The largest number up to the n-th roll is a number from the set {1, 2, 3, 4, 5, 6}, so if {X_n}_{n≥1} is a Markov chain then its state space is {1, 2, 3, 4, 5, 6}. Since X_{n+1} = max{X_n, outcome of the (n+1)-th roll}, X_{n+1} depends only on X_n and not on the past, and

    P(X_{n+1} = j | X_n = i) = i/6 if j = i,
                               1/6 if i < j ≤ 6,
                               0   if j < i.

So {X_n}_{n≥1} is a Markov chain whose state space is {1, 2, 3, 4, 5, 6} and whose transition matrix is P = {p_ij}_{1 ≤ i, j ≤ 6}, where p_ij = P(X_{n+1} = j | X_n = i).

(b) The number of sixes in n rolls can be 0, 1, 2, ..., n. Since n can go up to infinity, if {N_n}_{n≥1} is a Markov chain then its state space is the set of nonnegative integers {0, 1, 2, ...}. Since N_{n+1} = N_n + I{(n+1)-th roll is a six}, we have

    P(N_{n+1} = j | N_n = i) = 5/6 if j = i,
                               1/6 if j = i + 1,
                               0   otherwise.

Therefore {N_n}_{n≥1} is a Markov chain with state space {0, 1, 2, ...} and transition matrix P = {p_ij}, where p_ij = P(N_{n+1} = j | N_n = i).

(c) The time C_n since the most recent six can be any nonnegative integer (it equals 0 when the n-th roll itself is a six), so the state space of this (possibly) Markov chain is {0, 1, 2, ...}. Observe that C_{n+1} = (C_n + 1) · I{(n+1)-th roll is not a six}, so C_{n+1} depends only on C_n. The transition probabilities are

    P(C_{n+1} = j | C_n = i) = 5/6 if j = i + 1,
                               1/6 if j = 0,
                               0   otherwise.

So {C_n}_{n≥1} is a Markov chain with transition matrix P = {p_ij}, where p_ij = P(C_{n+1} = j | C_n = i).

(d) In this case, notice that if B_n = 2, 3, 4, 5, ... then B_{n+1} = B_n - 1. For example, if at time n it is given that the next six is going to come at the 8th roll from now, then at time n + 1 we definitely know that the next six will come at the 7th roll. The only nontrivial case is B_n = 1: at time n we know that the next six will come in the next roll (i.e., the (n+1)-th roll). In that case the process restarts at time n + 1, and B_{n+1} follows the Geometric(1/6) distribution. Combining all the above information we have

    P(B_{n+1} = j | B_n = i) = 1                   if j = i - 1 and i = 2, 3, 4, ...,
                               (5/6)^{j-1} (1/6)   if i = 1 and j = 1, 2, 3, ...,
                               0                   otherwise.

So {B_n}_{n≥1} is a Markov chain with state space N = {1, 2, 3, ...} and transition matrix P = {p_ij}_{i, j ∈ N}, where p_ij = P(B_{n+1} = j | B_n = i).
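As an illustration (not part of the original solution), the snippet below builds the transition matrix of the running maximum from part (a) and compares one of its rows with simulated one-step frequencies; the starting state 3 and the trial count are arbitrary choices.

    import random

    # P[i][j] = P(X_{n+1} = j + 1 | X_n = i + 1): i/6 on the diagonal, 1/6 above it.
    P = [[(i + 1) / 6 if j == i else (1 / 6 if j > i else 0.0) for j in range(6)]
         for i in range(6)]
    print([sum(row) for row in P])  # every row sums to 1

    # Empirical next-state frequencies starting from X_n = 3.
    counts = [0] * 6
    trials = 100_000
    for _ in range(trials):
        nxt = max(3, random.randint(1, 6))  # next value of the running maximum
        counts[nxt - 1] += 1
    print([c / trials for c in counts], P[2])  # both ~[0, 0, 0.5, 1/6, 1/6, 1/6]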