Massachusetts Institute of Technology


6.041/6.431: Probabilistic Systems Analysis (Fall 2010)
Recitation Solutions, October 9

1. For $a > 0$ we have $\operatorname{var}(aX + b) = a^2\operatorname{var}(X)$, so

\[
\rho(aX + b, Y) = \frac{\operatorname{cov}(aX + b, Y)}{\sqrt{\operatorname{var}(aX + b)\operatorname{var}(Y)}}
= \frac{E[(aX + b - E[aX + b])(Y - E[Y])]}{a\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}
\]
\[
= \frac{E[(aX + b - aE[X] - b)(Y - E[Y])]}{a\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}
= \frac{a\,E[(X - E[X])(Y - E[Y])]}{a\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}
= \frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}
= \rho(X, Y).
\]

As an example where this property of the correlation coefficient is relevant, consider the homework and exam scores of students in a class. We expect the homework and exam scores to be positively correlated and thus to have a positive correlation coefficient. In this example, the property above means that the correlation coefficient will not change whether the exam is out of 50 points, 100 points, or any other number of points.
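As a quick numerical illustration of this invariance (an addition to the original solutions; the score model below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated homework and exam scores with positive correlation.
hw = rng.normal(70, 10, size=100_000)
exam = 0.6 * hw + rng.normal(0, 8, size=100_000)

def rho(u, v):
    """Sample correlation coefficient: cov(u, v) / sqrt(var(u) var(v))."""
    return np.cov(u, v)[0, 1] / np.sqrt(np.var(u, ddof=1) * np.var(v, ddof=1))

# Linearly rescaling the exam (a = 0.5, b = 3, e.g. a 100-point exam
# regraded out of 50 with 3 bonus points) leaves rho unchanged.
print(rho(exam, hw))
print(rho(0.5 * exam + 3.0, hw))  # same value as above
```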

2. Here $Z = X - Y$, where $X$ and $Y$ are independent exponential random variables with common parameter $\lambda$, so $f_{X,Y}(x, y) = \lambda e^{-\lambda x}\,\lambda e^{-\lambda y}$ for $x, y \ge 0$.

(a) When $z \ge 0$:
\[
F_Z(z) = P(X - Y \le z) = P(X \le Y + z)
= \int_0^\infty \int_0^{y+z} f_{X,Y}(x, y)\,dx\,dy
= \int_0^\infty \lambda e^{-\lambda y} \int_0^{y+z} \lambda e^{-\lambda x}\,dx\,dy
\]
\[
= \int_0^\infty \lambda e^{-\lambda y}\bigl(1 - e^{-\lambda(y+z)}\bigr)\,dy
= \int_0^\infty \lambda e^{-\lambda y}\,dy - e^{-\lambda z}\int_0^\infty \lambda e^{-2\lambda y}\,dy
= 1 - \frac{1}{2}e^{-\lambda z}.
\]

When $z < 0$ (now $x \le y + z$ forces $y \ge x - z > 0$):
\[
F_Z(z) = P(X \le Y + z)
= \int_0^\infty \lambda e^{-\lambda x}\int_{x-z}^\infty \lambda e^{-\lambda y}\,dy\,dx
= \int_0^\infty \lambda e^{-\lambda x}\,e^{-\lambda(x-z)}\,dx
= e^{\lambda z}\int_0^\infty \lambda e^{-2\lambda x}\,dx
= \frac{1}{2}e^{\lambda z}.
\]

Hence
\[
F_Z(z) =
\begin{cases}
1 - \frac{1}{2}e^{-\lambda z}, & z \ge 0\\[2pt]
\frac{1}{2}e^{\lambda z}, & z < 0,
\end{cases}
\qquad
f_Z(z) = \frac{d}{dz}F_Z(z) =
\begin{cases}
\frac{\lambda}{2}e^{-\lambda z}, & z \ge 0\\[2pt]
\frac{\lambda}{2}e^{\lambda z}, & z < 0
\end{cases}
= \frac{\lambda}{2}e^{-\lambda|z|}.
\]

(b) Solving instead with the total probability theorem, we have
\[
f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\,f_{Z|X}(z \mid x)\,dx
= \int_{-\infty}^{\infty} f_X(x)\,f_{Y|X}(x - z \mid x)\,dx
= \int_{-\infty}^{\infty} f_X(x)\,f_Y(x - z)\,dx,
\]
where the second equality holds because, given $X = x$, we have $Z = x - Y$, and the third follows from the independence of $X$ and $Y$.

First, when $z < 0$ (the integrand is nonzero for all $x \ge 0$):
\[
\int_0^\infty \lambda e^{-\lambda x}\,\lambda e^{-\lambda(x-z)}\,dx
= \lambda e^{\lambda z}\int_0^\infty \lambda e^{-2\lambda x}\,dx
= \frac{\lambda}{2}e^{\lambda z}.
\]

Then, when $z \ge 0$ (we need $x - z \ge 0$, i.e., $x \ge z$):
\[
\int_z^\infty \lambda e^{-\lambda x}\,\lambda e^{-\lambda(x-z)}\,dx
= \lambda e^{\lambda z}\int_z^\infty \lambda e^{-2\lambda x}\,dx
= \frac{\lambda}{2}e^{\lambda z}e^{-2\lambda z}
= \frac{\lambda}{2}e^{-\lambda z}.
\]

Hence, as in part (a),
\[
f_Z(z) = \frac{\lambda}{2}e^{-\lambda|z|}.
\]
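This is the Laplace (two-sided exponential) density. As a Monte Carlo sanity check (an addition to the original solutions; $\lambda = 1.5$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 1.5, 1_000_000

# Z = X - Y for independent Exponential(lam) X and Y
# (numpy's exponential is parameterized by the mean, 1/lam).
z = rng.exponential(1 / lam, n) - rng.exponential(1 / lam, n)

# Compare the empirical CDF against the F_Z derived above.
for t in (-1.0, 0.0, 0.5, 2.0):
    F = 1 - 0.5 * np.exp(-lam * t) if t >= 0 else 0.5 * np.exp(lam * t)
    print(f"{t:5.1f}  empirical={np.mean(z <= t):.4f}  exact={F:.4f}")
```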

3. (a) We have $X = R\cos\Theta$ and $Y = R\sin\Theta$, with $X$ and $Y$ independent standard normal random variables. Recall that in polar coordinates the differential area is $dA = dx\,dy = r\,dr\,d\theta$. So

\[
F_R(r) = P(R \le r)
= \int_0^r \int_0^{2\pi} f_X(r'\cos\theta)\,f_Y(r'\sin\theta)\,r'\,d\theta\,dr'
= \int_0^r \int_0^{2\pi} \frac{1}{2\pi}e^{-(r')^2/2}\,r'\,d\theta\,dr'
\]
\[
= \int_0^r r'e^{-(r')^2/2}\,dr'
= \int_0^{r^2/2} e^{-u}\,du \qquad (u = (r')^2/2)
= 1 - e^{-r^2/2}, \qquad r \ge 0.
\]

Hence
\[
F_R(r) =
\begin{cases}
1 - e^{-r^2/2}, & r \ge 0\\
0, & r < 0,
\end{cases}
\qquad
f_R(r) = \frac{d}{dr}F_R(r) = \Bigl(\frac{1}{2}\Bigr)(2r)\bigl(e^{-r^2/2}\bigr) = re^{-r^2/2}, \qquad r \ge 0.
\]
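As a quick sanity check of part (a) before moving on (an addition to the original solutions), the empirical CDF of $R = \sqrt{X^2 + Y^2}$ for standard normal $X, Y$ matches $1 - e^{-r^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
r = np.hypot(x, y)  # R = sqrt(X^2 + Y^2)

# F_R(r) = 1 - exp(-r^2 / 2), the Rayleigh CDF derived in part (a).
for t in (0.5, 1.0, 2.0):
    print(f"{t}  empirical={np.mean(r <= t):.4f}  exact={1 - np.exp(-t**2 / 2):.4f}")
```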

(b)
\[
F_\Theta(\theta) = P(\Theta \le \theta)
= \int_0^\theta \int_0^\infty f_X(r\cos\theta')\,f_Y(r\sin\theta')\,r\,dr\,d\theta'
= \int_0^\theta \frac{1}{2\pi}\int_0^\infty re^{-r^2/2}\,dr\,d\theta'
\]
\[
= \int_0^\theta \frac{1}{2\pi}\int_0^\infty e^{-u}\,du\,d\theta' \qquad (u = r^2/2)
= \frac{\theta}{2\pi}, \qquad 0 \le \theta \le 2\pi.
\]

Hence
\[
F_\Theta(\theta) =
\begin{cases}
0, & \theta < 0\\[2pt]
\dfrac{\theta}{2\pi}, & 0 \le \theta \le 2\pi\\[4pt]
1, & \theta > 2\pi,
\end{cases}
\qquad
f_\Theta(\theta) = \frac{d}{d\theta}F_\Theta(\theta) = \frac{1}{2\pi}, \qquad 0 \le \theta \le 2\pi,
\]
so $\Theta$ is uniform on $[0, 2\pi]$.

(c)
\[
F_{R,\Theta}(r, \theta) = P(R \le r,\ \Theta \le \theta)
= \int_0^\theta \int_0^r \frac{1}{2\pi}e^{-(r')^2/2}\,r'\,dr'\,d\theta'
= \int_0^\theta \frac{1}{2\pi}\int_0^{r^2/2} e^{-u}\,du\,d\theta' \qquad (u = (r')^2/2)
\]
\[
= \int_0^\theta \frac{1}{2\pi}\bigl(1 - e^{-r^2/2}\bigr)\,d\theta'
= \frac{\theta}{2\pi}\bigl(1 - e^{-r^2/2}\bigr), \qquad r \ge 0,\ 0 \le \theta \le 2\pi.
\]

Hence
\[
F_{R,\Theta}(r, \theta) =
\begin{cases}
\dfrac{\theta}{2\pi}\bigl(1 - e^{-r^2/2}\bigr), & r \ge 0,\ 0 \le \theta \le 2\pi\\[4pt]
1 - e^{-r^2/2}, & r \ge 0,\ \theta > 2\pi\\[2pt]
0, & \text{otherwise},
\end{cases}
\qquad
f_{R,\Theta}(r, \theta) = \frac{\partial^2}{\partial r\,\partial\theta}F_{R,\Theta}(r, \theta)
= \frac{r}{2\pi}e^{-r^2/2}, \qquad r \ge 0,\ 0 \le \theta \le 2\pi.
\]

Note: $R^2$ is exponentially distributed with parameter $\lambda = 1/2$, since $P(R^2 \le s) = F_R(\sqrt{s}) = 1 - e^{-s/2}$. This gives a very convenient way to generate normal random variables from independent uniform and exponential random variables. In general, we can generate a random variable $X$ with CDF $F_X$ by first generating a uniform random variable and then passing the samples from the uniform distribution through the inverse function $F_X^{-1}$. But since we don't have a closed-form expression for the CDF of a normal random variable, this method doesn't work directly. However, we do have a closed-form expression for the exponential CDF. Therefore, we can generate an exponential random variable with parameter $1/2$ (for $R^2$) and a uniform random variable on $[0, 2\pi]$ (for $\Theta$), and with these two distributions we can generate standard normal random variables.
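A compact sketch of the generator described in this note (an addition to the original handout, which contains no code): draw $R^2$ as Exponential$(1/2)$ via inverse-CDF sampling, draw $\Theta$ uniform on $[0, 2\pi]$, and set $X = R\cos\Theta$, $Y = R\sin\Theta$. This is the classical Box-Muller construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# R^2 ~ Exponential(lambda = 1/2), generated from a uniform via the
# inverse CDF: F^{-1}(u) = -2 * ln(1 - u).
r = np.sqrt(-2.0 * np.log(1.0 - rng.uniform(size=n)))
# Theta ~ Uniform[0, 2*pi], independent of R.
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)

x = r * np.cos(theta)
y = r * np.sin(theta)

# X and Y should each be standard normal.
print(x.mean(), x.std())  # approximately 0 and 1
print(y.mean(), y.std())  # approximately 0 and 1
```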

4. Problem 4., page 5 in text. See the text for the proof; an alternative proof is given below.

Consider the problem of picking a parameter $\alpha$ to minimize the expected squared difference between two random variables $X$ and $Y$, that is,
\[
J(\alpha) = E\bigl[(X - \alpha Y)^2\bigr],
\]
with $Y \ne 0$. We start with a variational calculation to find the $\alpha$ that minimizes $J(\alpha)$. The minimizing value of $\alpha$ is found by setting the first derivative of $J(\alpha)$ to zero; this critical point is a minimum since, for $Y \ne 0$,
\[
\frac{d^2}{d\alpha^2}J(\alpha) = 2E[Y^2] > 0.
\]
Expanding the square,
\[
\frac{dJ}{d\alpha} = \frac{d}{d\alpha}\Bigl(E[X^2] - 2\alpha E[XY] + \alpha^2 E[Y^2]\Bigr) = -2E[XY] + 2\alpha E[Y^2] = 0,
\]
so
\[
\alpha = \frac{E[XY]}{E[Y^2]}
\]
minimizes $J(\alpha)$.

Then
\[
J\!\left(\frac{E[XY]}{E[Y^2]}\right)
= E\!\left[\left(X - \frac{E[XY]}{E[Y^2]}\,Y\right)^{\!2}\right]
= E[X^2] - 2\,\frac{(E[XY])^2}{E[Y^2]} + \frac{(E[XY])^2}{(E[Y^2])^2}\,E[Y^2]
= E[X^2] - \frac{(E[XY])^2}{E[Y^2]},
\]
and this quantity is nonnegative because $J(\alpha) = E[(X - \alpha Y)^2] \ge 0$ for every $\alpha$. Rearranging this expression gives the Schwarz inequality for expected values:
\[
E[X^2]\,E[Y^2] \ge (E[XY])^2.
\]
Note that in the above derivation we assumed $Y \ne 0$ so that $E[Y^2] > 0$. If instead $Y = 0$, the Schwarz inequality holds with equality, since then $E[XY] = 0$ and $E[Y^2] = 0$.
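A short numerical check of both the inequality and the minimizing $\alpha$ (an addition to the original solutions; the distributions below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, size=500_000)
y = rng.exponential(1.5, size=500_000)

# Schwarz inequality: E[X^2] E[Y^2] >= (E[XY])^2.
print(np.mean(x**2) * np.mean(y**2) >= np.mean(x * y) ** 2)  # True

# alpha = E[XY] / E[Y^2] should minimize J(alpha) = E[(X - alpha Y)^2].
alpha = np.mean(x * y) / np.mean(y**2)
J = lambda a: np.mean((x - a * y) ** 2)
print(J(alpha) <= min(J(alpha - 0.1), J(alpha + 0.1)))  # True
```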

MIT OpenCourseWare
http://ocw.mit.edu

6.041 / 6.431 Probabilistic Systems Analysis and Applied Probability
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.