Lecture 7. Sums of random variables


18.175: Lecture 7. Sums of random variables. Scott Sheffield, MIT.

Outline. Definitions. Sums of random variables.

Recall expectation definition. Given a probability space $(\Omega, \mathcal{F}, P)$ and a random variable $X$ (i.e., a measurable function $X$ from $\Omega$ to $\mathbb{R}$), we write $EX = \int X \, dP$. The expectation is always defined if $X \geq 0$ a.s., or if the integrals of $\max\{X, 0\}$ and $\min\{X, 0\}$ are separately finite.
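
A quick numerical illustration (my addition, not from the slides): since $X = \max\{X, 0\} + \min\{X, 0\}$, we can check $EX = E\max\{X, 0\} + E\min\{X, 0\}$ empirically. The standard-normal choice and sample size are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=10**6)            # a random variable taking both signs
x_plus = np.maximum(X, 0.0)           # max{X, 0}
x_minus = np.minimum(X, 0.0)          # min{X, 0}
print(X.mean(), x_plus.mean() + x_minus.mean())   # agree up to floating point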

Strong law of large numbers. Theorem (strong law): if $X_1, X_2, \ldots$ are i.i.d. real-valued random variables with expectation $m$, and $A_n := \frac{1}{n} \sum_{i=1}^n X_i$ are the empirical means, then $\lim_{n \to \infty} A_n = m$ almost surely. Last time we defined independence. We showed how to use Kolmogorov's extension theorem to construct an infinite sequence of i.i.d. random variables on a measure space with a natural σ-algebra (in which the existence of a limit of the empirical means is a measurable event). So we have come far enough to say that the statement makes sense.
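
As a sanity check on the statement, here is a minimal Python sketch (my illustration; the Exp(1) distribution is an arbitrary choice with $m = 1$) watching the empirical means settle:

import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=10**6)     # i.i.d. with expectation m = 1
A = np.cumsum(X) / np.arange(1, X.size + 1)    # A_n = (1/n) * sum_{i=1}^n X_i

for n in (10, 10**3, 10**6):
    print(n, A[n - 1])                         # drifts toward m = 1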

Recall some definitions. Two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$. Random variables $X$ and $Y$ are independent if for all $C, D \subseteq \mathbb{R}$ we have $P(X \in C, Y \in D) = P(X \in C)P(Y \in D)$, i.e., the events $\{X \in C\}$ and $\{Y \in D\}$ are independent. Two σ-fields $\mathcal{F}$ and $\mathcal{G}$ are independent if $A$ and $B$ are independent whenever $A \in \mathcal{F}$ and $B \in \mathcal{G}$. (This definition also makes sense if $\mathcal{F}$ and $\mathcal{G}$ are arbitrary algebras, semi-algebras, or other collections of measurable sets.)

Recall some definitions. Say events $A_1, A_2, \ldots, A_n$ are independent if for each $I \subseteq \{1, 2, \ldots, n\}$ we have $P(\bigcap_{i \in I} A_i) = \prod_{i \in I} P(A_i)$. Say random variables $X_1, X_2, \ldots, X_n$ are independent if for any measurable sets $B_1, B_2, \ldots, B_n$, the events $\{X_i \in B_i\}$ are independent. Say σ-algebras $\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_n$ are independent if any collection of events (one from each σ-algebra) is independent. (This definition also makes sense if the $\mathcal{F}_i$ are algebras, semi-algebras, or other collections of measurable sets.)

Recall Kolmogorov. Kolmogorov extension theorem: if we have consistent probability measures on $(\mathbb{R}^n, \mathcal{R}^n)$, then we can extend them uniquely to a probability measure on $\mathbb{R}^{\mathbb{N}}$. Proved using the semi-algebra variant of Carathéodory's extension theorem.

Extend Kolmogorov. The Kolmogorov extension theorem is not true in general if we replace $(\mathbb{R}, \mathcal{R})$ with an arbitrary measure space. But it is okay if we use standard Borel spaces. Durrett calls such spaces nice: a measurable space $(S, \mathcal{S})$ is nice if there is a 1-1 map $\phi$ from $S$ to $\mathbb{R}$ such that $\phi$ and $\phi^{-1}$ are both measurable. Are there any interesting nice measure spaces? Theorem: yes, lots. In fact, if $S$ is a complete separable metric space $M$ (or a Borel subset of such a space) and $\mathcal{S}$ is the set of Borel subsets of $S$, then $(S, \mathcal{S})$ is nice. (Separable means containing a countable dense set.)

Standard Borel spaces. Main idea of proof: reduce to the case that the diameter is less than one (e.g., by replacing $d(x, y)$ with $d(x, y)/(1 + d(x, y))$). Then map $M$ continuously into $[0, 1]^{\mathbb{N}}$ by taking a countable dense set $q_1, q_2, \ldots$ and mapping $x$ to $(d(q_1, x), d(q_2, x), \ldots)$. Then give a measurable one-to-one map from $[0, 1]^{\mathbb{N}}$ to $[0, 1]$ via binary expansion (sending an $\mathbb{N} \times \mathbb{N}$-indexed array of 0's and 1's to an $\mathbb{N}$-indexed sequence of 0's and 1's). In practice: say I want to let $\Omega$ be the set of closed subsets of a disc, or planar curves, or functions from one set to another, etc. If I want to construct a natural σ-algebra $\mathcal{F}$, I just need to produce a metric that makes $\Omega$ complete and separable (and if I have to enlarge $\Omega$ to make it complete, that might be okay). Then I check that the events I care about belong to this σ-algebra.

Fubini's theorem. Consider σ-finite measure spaces $(X, \mathcal{A}, \mu_1)$ and $(Y, \mathcal{B}, \mu_2)$. Let $\Omega = X \times Y$ and let $\mathcal{F}$ be the product σ-algebra. Check: there is a unique measure $\mu$ on $\mathcal{F}$ with $\mu(A \times B) = \mu_1(A) \mu_2(B)$. Fubini's theorem: if $f \geq 0$ or $\int |f| \, d\mu < \infty$, then $\int_X \int_Y f(x, y) \, \mu_2(dy) \, \mu_1(dx) = \int_{X \times Y} f \, d\mu = \int_Y \int_X f(x, y) \, \mu_1(dx) \, \mu_2(dy)$. Main idea of proof: check that the definition makes sense: if $f$ is measurable, show that the restriction of $f$ to the slice $\{(x, y) : x = x_0\}$ is measurable as a function of $y$, and that the integral over the slice is measurable as a function of $x_0$. Check Fubini for indicators of rectangular sets, then use the π-λ theorem to extend to indicators of measurable sets. Extend to simple, bounded, $L^1$ (or non-negative) functions.
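
A small numerical sketch of the conclusion (my illustration, using SciPy's dblquad and the arbitrary non-negative integrand $f(x, y) = e^{-xy}$ on $[0, 1]^2$): the two iterated integrals agree.

import math
from scipy.integrate import dblquad

def f(x, y):
    return math.exp(-x * y)   # f >= 0, so Fubini applies

# dblquad integrates over its integrand's first argument first, so swapping
# the roles of the two arguments swaps the order of integration.
dy_dx, _ = dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0.0, lambda x: 1.0)
dx_dy, _ = dblquad(lambda x, y: f(x, y), 0, 1, lambda y: 0.0, lambda y: 1.0)
print(dy_dx, dx_dy)           # equal up to quadrature error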

Non-measurable Fubini counterexample. What if we take a total ordering $\prec$ of the reals in $[0, 1]$ (such that for each $y$ the set $\{x : x \prec y\}$ is countable) and consider the indicator function of $\{(x, y) : x \prec y\}$? Integrating over $x$ first gives $0$ (each slice is countable), while integrating over $y$ first gives $1$ (each slice is co-countable), so this indicator cannot be product measurable.

More observations. If the $X_i$ are independent with distributions $\mu_i$, then $(X_1, \ldots, X_n)$ has distribution $\mu_1 \times \cdots \times \mu_n$. If the $X_i$ are independent and satisfy either $X_i \geq 0$ for all $i$ or $E|X_i| < \infty$ for all $i$, then $E \prod_{i=1}^n X_i = \prod_{i=1}^n E X_i$.
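
A Monte Carlo sanity check of the product formula (my illustration; the three non-negative distributions are arbitrary choices with means 2, 1/2, and 3):

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
X1 = rng.exponential(2.0, n)     # E X1 = 2
X2 = rng.uniform(0.0, 1.0, n)    # E X2 = 1/2
X3 = rng.gamma(3.0, 1.0, n)      # E X3 = 3

print((X1 * X2 * X3).mean())                 # estimate of E[X1 X2 X3]
print(X1.mean() * X2.mean() * X3.mean())     # product of means; both are near 3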

Outline. Definitions. Sums of random variables.

Summing two random variables. Say we have independent random variables $X$ and $Y$ with density functions $f_X$ and $f_Y$. Now let's try to find $F_{X+Y}(a) = P\{X + Y \leq a\}$. This is the integral over $\{(x, y) : x + y \leq a\}$ of $f(x, y) = f_X(x) f_Y(y)$. Thus, $P\{X + Y \leq a\} = \int_{-\infty}^{\infty} \int_{-\infty}^{a-y} f_X(x) f_Y(y) \, dx \, dy = \int_{-\infty}^{\infty} F_X(a - y) f_Y(y) \, dy$. Differentiating both sides gives $f_{X+Y}(a) = \frac{d}{da} \int_{-\infty}^{\infty} F_X(a - y) f_Y(y) \, dy = \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy$. The latter formula makes some intuitive sense: we are integrating over the set of $(x, y)$ pairs that add up to $a$. Can also write $P(X + Y \leq z) = \int F(z - y) \, dG(y)$, where $F$ and $G$ are the distribution functions of $X$ and $Y$.
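
The convolution formula is easy to discretize. Here is a sketch (my illustration; a standard normal $X$ and uniform $Y$ are arbitrary choices) approximating $f_{X+Y}(a) = \int f_X(a - y) f_Y(y) \, dy$ as a Riemann sum via np.convolve:

import numpy as np

dy = 1e-3
y = np.arange(-10.0, 10.0, dy)
f_X = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)   # density of a standard normal X
f_Y = ((y >= 0) & (y <= 1)).astype(float)      # density of Y uniform on [0, 1]

# np.convolve sums f_X[i] * f_Y[j] over pairs with i + j fixed; multiplying
# by dy turns that sum into a Riemann approximation of the convolution integral.
f_sum = np.convolve(f_X, f_Y) * dy
print(f_sum.sum() * dy)                        # total mass of f_{X+Y}, about 1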

Summing i.i.d. uniform random variables. Suppose that $X$ and $Y$ are i.i.d. and uniform on $[0, 1]$, so $f_X = f_Y = 1$ on $[0, 1]$. What is the probability density function of $X + Y$? We have $f_{X+Y}(a) = \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy = \int_0^1 f_X(a - y) \, dy$, which is the length of $[0, 1] \cap [a - 1, a]$. That's $a$ when $a \in [0, 1]$, $2 - a$ when $a \in [1, 2]$, and $0$ otherwise.
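
Simulation confirms the triangular shape (my illustration; the bin count and sample size are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
s = rng.uniform(size=10**6) + rng.uniform(size=10**6)   # i.i.d. uniform sum

hist, edges = np.histogram(s, bins=40, range=(0.0, 2.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
triangle = np.where(mids <= 1.0, mids, 2.0 - mids)   # a on [0,1], 2 - a on [1,2]
print(np.abs(hist - triangle).max())                 # small sampling error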

Summing two normal variables. Say $X$ is normal with mean zero and variance $\sigma_1^2$, and $Y$ is normal with mean zero and variance $\sigma_2^2$, so that $f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-x^2/(2\sigma_1^2)}$ and $f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2} e^{-y^2/(2\sigma_2^2)}$. We just need to compute $f_{X+Y}(a) = \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy$. We could compute this directly. Or we could argue with a multi-dimensional bell curve picture that if $X$ and $Y$ have variance one, then $f_{\sigma_1 X + \sigma_2 Y}$ is the density of a normal random variable (and note that variances and expectations are additive). Or use the fact that if the $A_i \in \{-1, 1\}$ are i.i.d. coin tosses, then $\frac{\sigma}{\sqrt{N}} \sum_{i=1}^N A_i$ is approximately normal with variance $\sigma^2$ when $N$ is large. Generally: if independent random variables $X_j$ are normal $(\mu_j, \sigma_j^2)$, then $\sum_{j=1}^n X_j$ is normal $(\sum_{j=1}^n \mu_j, \sum_{j=1}^n \sigma_j^2)$.
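
A quick simulation of the general fact (my illustration; the parameters are arbitrary): summing independent N(1, 2^2) and N(-1, 3^2) samples should give mean 0 and variance 13.

import numpy as np

rng = np.random.default_rng(3)
n = 10**6
s = rng.normal(1.0, 2.0, n) + rng.normal(-1.0, 3.0, n)   # independent normals
print(s.mean(), s.var())                                 # near 0 and 4 + 9 = 13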

MIT OpenCourseWare (http://ocw.mit.edu). 18.175 Theory of Probability, Spring 2014. For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.