
Massachusetts Institute of Technology
Final Solutions: December 15, 2009

Problem 2. (20 points)

(a) (5 points) We are given that the joint PDF is constant in the shaded region, and since the PDF must integrate to 1, the constant must equal 1 over the area of the region. Thus, c = 1 / (1/2) = 2.

(b) (5 points) The marginal PDFs of X and Y are found by integrating the joint PDF over all possible y's and x's, respectively. To find the marginal PDF of X, we take a particular value x and integrate over all possible y values in the vertical slice at X = x. Since the joint PDF is constant, this integral reduces to multiplying the joint PDF by the width of the slice. Because the width of the slice is always 1/2 for any x in [0, 1], the marginal PDF of X is uniform over that interval:

f_X(x) = 1 for 0 ≤ x ≤ 1, and 0 otherwise.

Since the joint PDF is symmetric, the marginal PDF of Y is also uniform:

f_Y(y) = 1 for 0 ≤ y ≤ 1, and 0 otherwise.

(c) (5 points) To find the conditional expectation and variance, we first determine the conditional distribution given Y = 1/4. At Y = 1/4, we take a horizontal slice of a uniform joint PDF, which gives a uniform distribution over the interval x in [1/4, 3/4]. Thus,

E[X | Y = 1/4] = 1/2,  var(X | Y = 1/4) = (1/2)^2 / 12 = 1/48.

(d) (5 points) At Y = 3/4, we have a horizontal slice of the joint PDF, which is nonzero when x is in [0, 1/4] ∪ [3/4, 1]. Since the joint PDF is uniform, the slice will also be uniform, but only over the range of x where the joint PDF is nonzero (i.e., where (x, y) lies in the shaded region). Thus, the conditional PDF of X is

f_{X|Y}(x | 3/4) = 2 for x in [0, 1/4] ∪ [3/4, 1], and 0 otherwise.
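As a quick numerical sanity check of part (c), the short Python sketch below (purely illustrative, not part of the original solution) confirms that a uniform distribution on [1/4, 3/4] has mean 1/2 and variance 1/48, both in closed form and by simulation.

import numpy as np

a, b = 0.25, 0.75
print((a + b) / 2, (b - a) ** 2 / 12)         # closed form: 0.5 and 1/48 ≈ 0.020833

samples = np.random.default_rng(0).uniform(a, b, 1_000_000)
print(samples.mean(), samples.var())          # should be close to the values above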

Problem 3. (25 points)

(a) (5 points) The recurrent states are {3, 4}.

(b) (5 points) The 2-step transition probability from state 2 to state 4 can be found by enumerating all the possible sequences. They are {2, 1, 4} and {2, 4, 4}. Thus,

P(X_2 = 4 | X_0 = 2) = (1/3)(1/6) + (1/3)(1) = 7/18.

(c) (5 points) Generally,

r_11(n + 1) = Σ_{j=1}^{4} p_{1j} r_{j1}(n).

Since states 3 and 4 are absorbing states, this expression simplifies to

r_11(n + 1) = (1/4) r_11(n) + (1/4) r_21(n).

Alternatively,

r_11(n + 1) = Σ_{k=1}^{4} r_{1k}(n) p_{k1} = (1/4) r_11(n) + (1/3) r_12(n).

(d) (5 points) The steady-state probabilities do not exist since there is more than one recurrent class. The long-term state probabilities would depend on the initial state.

(e) (5 points) To find the probability of being absorbed by state 4, we set up the absorption-probability equations. Note that a_4 = 1 and a_3 = 0:

a_1 = (1/4) a_1 + (1/4) a_2 + (1/3) a_3 + (1/6) a_4 = (1/4) a_1 + (1/4) a_2 + 1/6,
a_2 = (1/3) a_1 + (1/3) a_3 + (1/3) a_4 = (1/3) a_1 + 1/3.

Solving these equations yields a_1 = 3/8.
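The full transition diagram is not reproduced in this transcription, but the probabilities quoted above (p_11 = 1/4, p_12 = 1/4, p_13 = 1/3, p_14 = 1/6, p_21 = 1/3, p_23 = 1/3, p_24 = 1/3, with states 3 and 4 absorbing) can be checked numerically. The Python sketch below treats that matrix as an assumption inferred from the solution and reproduces the 2-step probability 7/18 and the absorption probability 3/8.

import numpy as np

# Transition matrix inferred from the probabilities appearing in the solution.
P = np.array([
    [1/4, 1/4, 1/3, 1/6],   # state 1
    [1/3, 0.0, 1/3, 1/3],   # state 2
    [0.0, 0.0, 1.0, 0.0],   # state 3 (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # state 4 (absorbing)
])

print(np.linalg.matrix_power(P, 2)[1, 3])     # P(X_2 = 4 | X_0 = 2) = 7/18 ≈ 0.3889
print(np.linalg.matrix_power(P, 200)[0, 3])   # absorption probability a_1 -> 3/8 = 0.375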

Problem 4. (30 points)

(a) (5 points) Given the problem statement, we can treat Al's, Bonnie's, and Clyde's running as 3 independent Poisson processes, where the arrivals correspond to lap completions and the arrival rates indicate the number of laps completed per hour. Since the three processes are independent, we can merge them to create a new process that captures the lap completions of all three runners. This merged process has arrival rate λ_M = λ_A + λ_B + λ_C = 68. The total number of completed laps, L, over the first hour is then described by a Poisson PMF with λ_M = 68 and τ = 1:

p_L(l) = 68^l e^{-68} / l!  for l = 0, 1, 2, ..., and 0 otherwise.

(b) (5 points) Let L be the total number of completed laps over the first hour, and let C_i be the number of cups of water consumed at the end of the i-th lap. Then the total number of cups of water consumed is

C = Σ_{i=1}^{L} C_i,

which is a sum of a random number of i.i.d. random variables. Thus, we can use the law of iterated expectations to find

E[C] = E[E[C | L]] = E[L E[C_i]] = E[L] E[C_i] = (λ_M τ)(1 · (1/3) + 2 · (2/3)) = 68 · (5/3) = 340/3.
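A brief Monte Carlo sketch of part (b) follows (Python, illustrative only): it simulates one hour of the merged process, counting one cup per completed lap plus an extra cup with probability 2/3, so the sample mean should land near 340/3 ≈ 113.3.

import numpy as np

rng = np.random.default_rng(0)
laps = rng.poisson(68, size=500_000)     # L: laps completed in one hour, per trial
extra = rng.binomial(laps, 2/3)          # laps followed by a second cup
cups = laps + extra                      # every lap gives 1 cup, plus 1 more w.p. 2/3
print(cups.mean(), 340 / 3)              # both ≈ 113.33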

(c) (5 points) Let X be the number of laps (out of 72) after which Al drank 2 cups of water. Then, in order for him to drink at least 130 cups, we must have

1 · (72 − X) + 2X ≥ 130,

which implies that we need X ≥ 58. Now, let X_i be i.i.d. Bernoulli random variables that equal 1 if Al drank 2 cups of water following his i-th lap and 0 if he drank 1 cup. Then X = X_1 + X_2 + ... + X_72. X is evidently a binomial random variable with n = 72 and p = 2/3, and the probability we are looking for is

P(X ≥ 58) = Σ_{k=58}^{72} (72 choose k) (2/3)^k (1/3)^{72−k}.

This expression is difficult to calculate, but since we are dealing with the sum of a relatively large number of i.i.d. random variables, we can invoke the Central Limit Theorem to approximate this probability using a normal distribution. In particular, we can approximate X as a normal random variable with mean np = 72 · (2/3) = 48 and variance np(1 − p) = 16, and approximate the desired probability as

P(X ≥ 58) = 1 − P(X < 58) ≈ 1 − Φ((58 − 48)/√16) = 1 − Φ(2.5) ≈ 0.006.
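For comparison, a short Python sketch (illustrative) computes the exact Binomial(72, 2/3) tail alongside the normal approximation used above; the two values are of the same small order of magnitude.

from scipy import stats

n, p = 72, 2/3
exact = stats.binom.sf(57, n, p)               # P(X >= 58), exact binomial tail
z = (58 - n * p) / (n * p * (1 - p)) ** 0.5    # = 2.5
print(exact, stats.norm.sf(z))                 # normal approximation ≈ 0.0062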

(d) (5 points) The event that Al is the first to finish a lap is the same as the event that the first arrival in the merged process came from Al's process. This probability is

λ_A / (λ_A + λ_B + λ_C) = 12/68.

(e) (5 points) This is an instance of the random incidence paradox: the duration of Al's current lap is the sum of the duration from the time of your arrival until Al's next lap completion and the duration from the time of your arrival back to the time of Al's previous lap completion. This is the sum of two independent exponential random variables with parameter λ_A = 12 (i.e., a second-order Erlang random variable):

f_T(t) = 12^2 t e^{-12t} = 144 t e^{-12t}  for t ≥ 0, and 0 otherwise.

(f) (5 points) As in the previous part, the duration of Al's second lap consists of the time remaining from t = 1/4 until he completes his second lap and the time elapsed since he began his second lap until t = 1/4. Let X be the time elapsed and Y be the time remaining. We can still model the time remaining Y as an exponential random variable. However, we can no longer do the same for the time elapsed X, because we know X can be no larger than 1/4, whereas an exponential random variable can be arbitrarily large. To find the PDF of X, first consider its CDF. For x in [0, 1/4],

P(X ≤ x) = P(the first arrival occurred less than x hours ago, measured from time 1/4)
= P(1 arrival in the interval [1/4 − x, 1/4] and no arrivals in the interval [0, 1/4 − x]) / P(1 arrival in the interval [0, 1/4])
= P(1 arrival in the interval [1/4 − x, 1/4]) P(no arrivals in the interval [0, 1/4 − x]) / P(1 arrival in the interval [0, 1/4])
= P(1, x) P(0, 1/4 − x) / P(1, 1/4)
= (12x e^{-12x}) e^{-12(1/4 − x)} / (12 (1/4) e^{-12(1/4)})
= 4x.

The CDF is therefore 0 for x < 0, 4x for x in [0, 1/4], and 1 for x > 1/4. Thus, we find that X is uniform over the interval [0, 1/4], with PDF

f_X(x) = 4 for x in [0, 1/4], and 0 otherwise.

The total time that Al spends on his second lap is T = X + Y. Since X and Y correspond to disjoint time intervals in the Poisson process, they are independent, and therefore we can use convolution to find the PDF of T. For t ≥ 0,

f_T(t) = ∫ f_X(x) f_Y(t − x) dx = ∫_0^{min(1/4, t)} 4 · 12 e^{-12(t − x)} dx = 4 e^{-12t} (e^{12 min(1/4, t)} − 1),

and f_T(t) = 0 for t < 0.
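As a sanity check on the convolution in part (f), the Python sketch below (illustrative) verifies that the derived density integrates to 1 and that its mean matches a direct simulation of T = X + Y with X uniform on [0, 1/4] and Y exponential with rate 12.

import numpy as np
from scipy import integrate

f_T = lambda t: 4 * np.exp(-12 * t) * (np.exp(12 * np.minimum(0.25, t)) - 1)
print(integrate.quad(f_T, 0, np.inf)[0])                   # total probability: 1
print(integrate.quad(lambda t: t * f_T(t), 0, np.inf)[0])  # E[T] = 1/8 + 1/12 ≈ 0.2083

rng = np.random.default_rng(0)
T = rng.uniform(0, 0.25, 500_000) + rng.exponential(1/12, 500_000)
print(T.mean())                                            # should also be ≈ 0.2083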

Problem 5. (25 points)

(a) (5 points) Using the law of iterated expectations and the law of total variance, with var(N | X) = E[N | X] = X,

E[N] = E[E[N | X]] = E[X] = 2/λ,

var(N) = E[var(N | X)] + var(E[N | X]) = E[X] + var(X) = 2/λ + 2/λ^2.

(b) (5 points)

p_N(n) = ∫_0^∞ f_X(x) p_{N|X}(n | x) dx
= ∫_0^∞ λ^2 x e^{-λx} (x^n e^{-x} / n!) dx
= (λ^2 / n!) ∫_0^∞ x^{n+1} e^{-(1+λ)x} dx
= (λ^2 / n!) · (n + 1)! / (1 + λ)^{n+2}
= λ^2 (n + 1) / (1 + λ)^{n+2}  for n = 0, 1, 2, ..., and 0 otherwise.

(c) (5 points) The equation for X̂_lin(N), the linear least-squares estimator of X based on an observation of N, is

X̂_lin(N) = E[X] + (cov(X, N) / var(N)) (N − E[N]).

The only unknown quantity is cov(X, N) = E[XN] − E[X]E[N] = E[XN] − (E[X])^2. Using the law of iterated expectations again,

E[XN] = E[E[XN | X]] = E[X E[N | X]] = E[X^2] = var(X) + (E[X])^2 = 6/λ^2.

Thus, cov(X, N) = 6/λ^2 − 4/λ^2 = 2/λ^2. Combining this result with those from (a),

X̂_lin(N) = 2/λ + ((2/λ^2) / (2/λ + 2/λ^2)) (N − 2/λ) = 2/λ + (1/(1 + λ)) (N − 2/λ) = (2 + N) / (1 + λ).
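The moments and the LLSE slope above can be checked by simulation. The Python sketch below uses λ = 2 purely as an illustrative value (the choice of λ is an assumption made for this check, not part of the problem): X is drawn as a second-order Erlang with parameter λ, and N given X = x is Poisson with mean x.

import numpy as np

lam = 2.0
rng = np.random.default_rng(0)
X = rng.gamma(shape=2, scale=1/lam, size=1_000_000)   # second-order Erlang with parameter lam
N = rng.poisson(X)                                    # N | X = x is Poisson(x)

print(N.mean(), 2/lam)                                # E[N] = 2/lambda
print(N.var(), 2/lam + 2/lam**2)                      # var(N) = 2/lambda + 2/lambda^2
cov = (X * N).mean() - X.mean() * N.mean()
print(cov, 2/lam**2)                                  # cov(X, N) = 2/lambda^2
print(cov / N.var(), 1/(1 + lam))                     # LLSE slope = 1/(1 + lambda)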

(d) (5 points) The expression for X̂_MAP(N), the MAP estimator of X based on an observation of N, is

X̂_MAP(N) = arg max_x f_{X|N}(x | n)
= arg max_x f_X(x) p_{N|X}(n | x) / p_N(n)
= arg max_x f_X(x) p_{N|X}(n | x)
= arg max_x (λ^2 / n!) x^{n+1} e^{-(1+λ)x}
= arg max_x x^{n+1} e^{-(1+λ)x},

where the third equality holds since p_N(n) has no dependency on x, and the last equality holds after removing all quantities that have no dependency on x. The maximum can be found by differentiation, and the result is

X̂_MAP(N) = (1 + N) / (1 + λ).

This is the only local extremum in the range x in [0, ∞). Moreover, f_{X|N}(x | n) equals 0 at x = 0, goes to 0 as x → ∞, and is positive otherwise. We can therefore conclude that X̂_MAP(N) is indeed a maximum.

(e) (5 points) To minimize the probability of error, we choose the hypothesis that has the larger posterior probability. We will choose the hypothesis that X = 2 if

P(X = 2 | N = 3) > P(X = 3 | N = 3)
P(X = 2) P(N = 3 | X = 2) / P(N = 3) > P(X = 3) P(N = 3 | X = 3) / P(N = 3)
P(X = 2) P(N = 3 | X = 2) > P(X = 3) P(N = 3 | X = 3)
(3^3 / 35) · (2^3 e^{-2} / 3!) > (2^3 / 35) · (3^3 e^{-3} / 3!)
e^{-2} > e^{-3}.

The inequality holds, so we choose the hypothesis that X = 2 to minimize the probability of error.
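A brief numerical check of part (e) follows (Python, illustrative), using the prior values P(X = 2) = 27/35 and P(X = 3) = 8/35 that appear in the comparison above; it simply evaluates the two unnormalized posteriors.

from math import exp, factorial

def poisson_pmf(n, x):
    # P(N = n | X = x) for a Poisson random variable with mean x
    return x**n * exp(-x) / factorial(n)

post2 = (27/35) * poisson_pmf(3, 2)   # unnormalized posterior of X = 2, proportional to e^{-2}
post3 = (8/35) * poisson_pmf(3, 3)    # unnormalized posterior of X = 3, proportional to e^{-3}
print(post2 > post3)                  # True: declare X = 2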

MIT OpenCourseWare
http://ocw.mit.edu

6.041 / 6.431 Probabilistic Systems Analysis and Applied Probability
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.