Convergence of Random Variables


1 / 15 Convergence of Random Variables
Saravanan Vijayakumaran (sarva@ee.iitb.ac.in)
Department of Electrical Engineering, Indian Institute of Technology Bombay
March 19, 2014

2 / 15 Motivation

Theorem (Weak Law of Large Numbers)
Let X_1, X_2, ... be a sequence of independent identically distributed random variables with finite mean µ. Their partial sums S_n = X_1 + X_2 + ··· + X_n satisfy

  S_n / n →P µ  as n → ∞.

Theorem (Central Limit Theorem)
Let X_1, X_2, ... be a sequence of independent identically distributed random variables with finite mean µ and finite non-zero variance σ². Their partial sums S_n = X_1 + X_2 + ··· + X_n satisfy

  √n (S_n / n − µ) →D N(0, σ²)  as n → ∞.
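As an illustrative sketch (not part of the original slides), both theorems can be checked by simulation; the Uniform(0,1) increments, sample size n, trial count, and seed below are all choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 2_000
mu, sigma2 = 0.5, 1 / 12            # mean and variance of Uniform(0, 1)

# WLLN: the sample means S_n / n concentrate around mu
X = rng.random((trials, n))
sample_means = X.mean(axis=1)       # one realisation of S_n / n per trial
print(np.mean(np.abs(sample_means - mu) > 0.01))   # fraction of trials far from mu

# CLT: sqrt(n) * (S_n / n - mu) is approximately N(0, sigma^2)
Z = np.sqrt(n) * (sample_means - mu)
print(Z.mean(), Z.var())            # near 0 and near sigma2 = 1/12
```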

3 / 15 Modes of Convergence

A sequence of real numbers {x_n : n = 1, 2, ...} is said to converge to a limit x if for every ε > 0 there exists an m_ε ∈ N such that |x_n − x| < ε for all n ≥ m_ε.

We want to define convergence of random variables, but they are functions from Ω to R. The solution:
• Derive real-number sequences from sequences of random variables
• Define convergence of the latter in terms of the former

Four ways of defining convergence for random variables:
• Convergence almost surely
• Convergence in rth mean
• Convergence in probability
• Convergence in distribution

4 / 15 Convergence Almost Surely

Let X, X_1, X_2, ... be random variables on a probability space (Ω, F, P). For each ω ∈ Ω, X(ω) and X_n(ω) are real numbers.

X_n → X almost surely if {ω ∈ Ω : X_n(ω) → X(ω) as n → ∞} is an event whose probability is 1. "X_n → X almost surely" is abbreviated as X_n →a.s. X.

Example
Let Ω = [0, 1] and let P be the uniform distribution on Ω, so P(ω ∈ [a, b]) = b − a for 0 ≤ a ≤ b ≤ 1. Let X_n be defined as

  X_n(ω) = n  for ω ∈ [0, 1/n),
  X_n(ω) = 0  for ω ∈ [1/n, 1].

Let X(ω) = 0 for all ω ∈ [0, 1]. Then X_n →a.s. X.
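To see the almost sure convergence in this example concretely, fix any ω ∈ (0, 1]: X_n(ω) = n only while n < 1/ω, and X_n(ω) = 0 for all larger n, so the real sequence X_n(ω) converges to 0. A minimal sketch (the function name and the choice ω = 0.01 are ours, for illustration):

```python
def X_n(n, omega):
    """The example sequence: n on [0, 1/n), 0 on [1/n, 1]."""
    return n if 0 <= omega < 1 / n else 0

omega = 0.01                           # any fixed point of (0, 1]
seq = [X_n(n, omega) for n in range(1, 201)]
print(seq[95:105])   # n = 96..99 give 96..99, then 0 forever once n >= 1/omega = 100
```

Only ω = 0 fails to converge (X_n(0) = n → ∞), and P({0}) = 0, so the convergence set has probability 1.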

5 / 15 Convergence in rth Mean

Let X, X_1, X_2, ... be random variables on a probability space (Ω, F, P). Suppose E[|X|^r] < ∞ and E[|X_n|^r] < ∞ for all n.

X_n → X in rth mean, where r ≥ 1, if E[|X_n − X|^r] → 0 as n → ∞. "X_n → X in rth mean" is abbreviated as X_n →r X.
• For r = 1, X_n →1 X is written as "X_n → X in mean".
• For r = 2, X_n →2 X is written as "X_n → X in mean square" or X_n →m.s. X.

Example
Let Ω = [0, 1] and P the uniform distribution on Ω. Let X_n(ω) = n for ω ∈ [0, 1/n) and X_n(ω) = 0 for ω ∈ [1/n, 1], and let X(ω) = 0 for all ω ∈ [0, 1]. Then E[|X_n − X|] = E[|X_n|] = n · (1/n) = 1 for every n, so X_n does not converge in mean to X.

6 / 15 Convergence in Probability

Let X, X_1, X_2, ... be random variables on a probability space (Ω, F, P).

X_n → X in probability if P(|X_n − X| > ε) → 0 as n → ∞ for all ε > 0. "X_n → X in probability" is abbreviated as X_n →P X.

Example
Let Ω = [0, 1] and P the uniform distribution on Ω. Let X_n(ω) = n for ω ∈ [0, 1/n) and X_n(ω) = 0 for ω ∈ [1/n, 1], and let X(ω) = 0 for all ω ∈ [0, 1]. For ε > 0,

  P(|X_n − X| > ε) = P(|X_n| > ε) ≤ P(X_n = n) = 1/n → 0,

so X_n →P X.
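The probabilities in this example can also be estimated by simulation; a sketch (the values of ε, the trial count, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
eps, trials = 0.5, 200_000

for n in [10, 100, 1000]:
    omega = rng.random(trials)             # omega ~ uniform on [0, 1]
    X_n = np.where(omega < 1 / n, n, 0.0)  # the example sequence at index n
    est = np.mean(np.abs(X_n) > eps)       # estimate of P(|X_n - X| > eps)
    print(n, est, 1 / n)                   # estimate vs the exact value 1/n
```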

7 / 15 Convergence in Distribution

Let X, X_1, X_2, ... be random variables on a probability space (Ω, F, P).

X_n → X in distribution if P(X_n ≤ x) → P(X ≤ x) as n → ∞ at every point x where F_X(x) = P(X ≤ x) is continuous. "X_n → X in distribution" is abbreviated as X_n →D X. Convergence in distribution is also termed weak convergence.

Example
Let X be a Bernoulli random variable taking values 0 and 1 with equal probability 1/2. Let X_1, X_2, X_3, ... be identical random variables given by X_n = X for all n. The X_n's are not independent, but X_n →D X.

Let Y = 1 − X. Then Y also takes values 0 and 1 with equal probability 1/2, so X_n →D Y. But |X_n − Y| = 1 for every outcome, so the X_n's do not converge to Y in any other mode.
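A quick numerical sketch of this example (seed and sample size are arbitrary): X_n = X and Y = 1 − X always share the Bernoulli(1/2) distribution, yet they are never within 1 of each other.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=100_000).astype(float)  # Bernoulli(1/2) samples
Y = 1 - X                                            # same distribution as X

print(X.mean(), Y.mean())        # both near 1/2: identical distributions
print(np.unique(np.abs(X - Y)))  # always exactly 1: no convergence in probability
```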

8 / 15 Relations between Modes of Convergence

Theorem
• (X_n →a.s. X) ⟹ (X_n →P X)
• (X_n →r X) ⟹ (X_n →P X) for any r ≥ 1
• (X_n →P X) ⟹ (X_n →D X)

9 / 15 Convergence in Probability Implies Convergence in Distribution

Suppose X_n →P X. Let F_n(x) = P(X_n ≤ x) and F(x) = P(X ≤ x). For ε > 0,

  F_n(x) = P(X_n ≤ x)
         = P(X_n ≤ x, X ≤ x + ε) + P(X_n ≤ x, X > x + ε)
         ≤ F(x + ε) + P(|X_n − X| > ε),

and similarly

  F(x − ε) = P(X ≤ x − ε)
           = P(X ≤ x − ε, X_n ≤ x) + P(X ≤ x − ε, X_n > x)
           ≤ F_n(x) + P(|X_n − X| > ε).

Combining the two inequalities,

  F(x − ε) − P(|X_n − X| > ε) ≤ F_n(x) ≤ F(x + ε) + P(|X_n − X| > ε).

Since X_n →P X, P(|X_n − X| > ε) → 0 as n → ∞. If F is continuous at x, then F(x − ε) → F(x) and F(x + ε) → F(x) as ε → 0, so F_n(x) → F(x).

10 / 15 Convergence in rth Mean Implies Convergence in Probability

If r > s ≥ 1 and X_n →r X, then X_n →s X.
• Lyapunov's inequality: if r > s > 0, then (E[|Y|^s])^{1/s} ≤ (E[|Y|^r])^{1/r}.
• If X_n →r X, then E[|X_n − X|^r] → 0 and (E[|X_n − X|^s])^{1/s} ≤ (E[|X_n − X|^r])^{1/r} → 0.

If X_n →1 X, then X_n →P X.
• By Markov's inequality, we have for all ε > 0

  P(|X_n − X| > ε) ≤ E[|X_n − X|] / ε → 0.
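Both inequalities hold for any sample viewed as an empirical distribution, so they are easy to check numerically; a sketch using exponential draws (the distribution, sample size, and ε are arbitrary choices made here):

```python
import numpy as np

rng = np.random.default_rng(3)
Y = rng.exponential(size=200_000)    # any random sample works here

# Lyapunov: the L^s norm (E|Y|^s)^(1/s) is nondecreasing in s
for s, r in [(1, 2), (2, 3), (1, 4)]:
    norm_s = np.mean(np.abs(Y) ** s) ** (1 / s)
    norm_r = np.mean(np.abs(Y) ** r) ** (1 / r)
    print(s, r, norm_s <= norm_r)    # True in each case

# Markov: P(|Y| > eps) <= E|Y| / eps
eps = 2.0
print(np.mean(np.abs(Y) > eps) <= np.mean(np.abs(Y)) / eps)  # True
```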

11 / 15 Convergence Almost Surely Implies Convergence in Probability

Let A_n(ε) = {|X_n − X| > ε} and B_m(ε) = ∪_{n≥m} A_n(ε).

Claim: X_n →a.s. X if and only if P(B_m(ε)) → 0 as m → ∞, for all ε > 0.
• Let C = {ω ∈ Ω : X_n(ω) → X(ω) as n → ∞} and

  A(ε) = {ω ∈ Ω : ω ∈ A_n(ε) for infinitely many values of n} = ∩_m ∪_{n≥m} A_n(ε).

• X_n(ω) → X(ω) if and only if ω ∉ A(ε) for all ε > 0.
• P(C) = 1 if and only if P(A(ε)) = 0 for all ε > 0.
• B_m(ε) is a decreasing sequence of events with limit A(ε), so P(A(ε)) = 0 if and only if P(B_m(ε)) → 0 as m → ∞.

Since A_n(ε) ⊆ B_n(ε), we have P(|X_n − X| > ε) = P(A_n(ε)) → 0 whenever P(B_n(ε)) → 0. Thus X_n →a.s. X ⟹ X_n →P X.

12 / 15 Some Converses

If X_n →D c, where c is a constant, then X_n →P c:

  P(|X_n − c| > ε) = P(X_n < c − ε) + P(X_n > c + ε) → 0 if X_n →D c.

If P_n(ε) = P(|X_n − X| > ε) satisfies Σ_n P_n(ε) < ∞ for all ε > 0, then X_n →a.s. X:
• Let A_n(ε) = {|X_n − X| > ε} and B_m(ε) = ∪_{n≥m} A_n(ε). Then

  P(B_m(ε)) ≤ Σ_{n=m}^∞ P(A_n(ε)) = Σ_{n=m}^∞ P_n(ε) → 0 as m → ∞.

• X_n →a.s. X if and only if P(B_m(ε)) → 0 as m → ∞, for all ε > 0.

13 / 15 Borel-Cantelli Lemmas

Let A_1, A_2, ... be an infinite sequence of events from (Ω, F, P). Consider the event that infinitely many of the A_n occur:

  A = {A_n i.o.} = ∩_{n≥1} ∪_{m≥n} A_m.

Theorem
Let A be the event that infinitely many of the A_n occur. Then
• P(A) = 0 if Σ_n P(A_n) < ∞,
• P(A) = 1 if Σ_n P(A_n) = ∞ and A_1, A_2, A_3, ... are independent events.

Proof of first lemma. We have A ⊆ ∪_{m≥n} A_m for all n, so

  P(A) ≤ P(∪_{m≥n} A_m) ≤ Σ_{m=n}^∞ P(A_m) → 0 as n → ∞.
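The dichotomy can be illustrated by simulating independent events A_n = {U_n < p_n} over a finite horizon (the horizon, path count, cut-off index, and probabilities p_n below are all choices made here for illustration): with p_n = 1/n², Σ p_n < ∞ and late occurrences essentially stop; with p_n = 1/n, Σ p_n = ∞ and nearly every path keeps seeing occurrences.

```python
import numpy as np

rng = np.random.default_rng(4)
paths, N = 100, 50_000
U = rng.random((paths, N))        # independent uniforms U_1, ..., U_N per path
n = np.arange(1, N + 1)

# Sum 1/n^2 < inf: only finitely many A_n occur (a.s.) -- late hits are rare
late_conv = (U < 1 / n**2)[:, 500:].sum(axis=1)
print(late_conv.sum())            # total occurrences after index 500, near 0

# Sum 1/n = inf, events independent: infinitely many A_n occur (a.s.)
late_div = (U < 1 / n)[:, 500:].sum(axis=1)
print((late_div > 0).mean())      # fraction of paths with late occurrences, near 1
```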

14 / 15 Proof of Second Borel-Cantelli Lemma

Since A^c = ∪_n ∩_{m≥n} A_m^c, it suffices to show that P(∩_{m≥n} A_m^c) = 0 for every n. By independence,

  P(∩_{m≥n} A_m^c) = lim_{r→∞} P(∩_{m=n}^r A_m^c)
                   = lim_{r→∞} ∏_{m=n}^r [1 − P(A_m)]
                   ≤ lim_{r→∞} ∏_{m=n}^r exp[−P(A_m)]     (using 1 − x ≤ e^{−x})
                   = exp(−Σ_{m=n}^∞ P(A_m)) = 0,

because Σ_m P(A_m) = ∞. Thus

  P(A^c) = lim_{n→∞} P(∩_{m≥n} A_m^c) = 0.

15 / 15 Reference

Chapter 7, Probability and Random Processes, Grimmett and Stirzaker, Third Edition, Oxford University Press, 2001.