Convergence of Random Variables


Saravanan Vijayakumaran (sarva@ee.iitb.ac.in)
Department of Electrical Engineering
Indian Institute of Technology Bombay
April 8, 2015

Motivation

Theorem (Weak Law of Large Numbers). Let $X_1, X_2, \ldots$ be a sequence of independent identically distributed random variables with finite mean $\mu$. Their partial sums $S_n = X_1 + X_2 + \cdots + X_n$ satisfy
$$\frac{S_n}{n} \xrightarrow{P} \mu \quad \text{as } n \to \infty.$$

Theorem (Central Limit Theorem). Let $X_1, X_2, \ldots$ be a sequence of independent identically distributed random variables with finite mean $\mu$ and finite non-zero variance $\sigma^2$. Their partial sums $S_n = X_1 + X_2 + \cdots + X_n$ satisfy
$$\frac{S_n - n\mu}{\sqrt{n}} \xrightarrow{D} N(0, \sigma^2) \quad \text{as } n \to \infty.$$
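Both limit theorems are easy to see numerically. Below is a minimal Monte Carlo sketch (not part of the original slides; the exponential distribution, sample sizes, and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice (not from the slides): i.i.d. exponential samples
# with mean mu = 2, so sigma^2 = mu^2 = 4.
mu, n, trials = 2.0, 5_000, 1_000
samples = rng.exponential(scale=mu, size=(trials, n))

# WLLN: S_n / n concentrates around mu.
sample_means = samples.mean(axis=1)
print("average of S_n/n:", sample_means.mean())    # close to mu = 2

# CLT: (S_n - n*mu)/sqrt(n) is approximately N(0, sigma^2).
standardized = (samples.sum(axis=1) - n * mu) / np.sqrt(n)
print("variance of (S_n - n*mu)/sqrt(n):", standardized.var())  # close to 4
```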

Modes of Convergence

A sequence of real numbers $\{x_n : n = 1, 2, \ldots\}$ is said to converge to a limit $x$ if for all $\varepsilon > 0$ there exists an $m_\varepsilon \in \mathbb{N}$ such that $|x_n - x| < \varepsilon$ for all $n \geq m_\varepsilon$ (a small numeric sketch of this definition follows the list below).

We want to define convergence of random variables, but they are functions from $\Omega$ to $\mathbb{R}$. The solution: derive real number sequences from sequences of random variables, and define convergence of the latter in terms of the former.

There are four ways of defining convergence for random variables:
- Convergence almost surely
- Convergence in $r$th mean
- Convergence in probability
- Convergence in distribution
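As a quick illustration of the real-sequence definition (a sketch with illustrative values, not from the slides): for $x_n = 1/n$ and limit $x = 0$, the choice $m_\varepsilon = \lfloor 1/\varepsilon \rfloor + 1$ witnesses the definition.

```python
import math

# Sketch: for x_n = 1/n -> 0, m_eps = floor(1/eps) + 1 works,
# since 1/n < eps for all n >= m_eps.
def m_eps(eps: float) -> int:
    return math.floor(1.0 / eps) + 1

for eps in (0.1, 0.01, 0.001):
    m = m_eps(eps)
    assert all(abs(1.0 / n - 0.0) < eps for n in range(m, m + 1_000))
    print(eps, m)
```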

Convergence Almost Surely

Let $X, X_1, X_2, \ldots$ be random variables on a probability space $(\Omega, \mathcal{F}, P)$. For each $\omega \in \Omega$, $X(\omega)$ and $X_n(\omega)$ are real numbers. $X_n \to X$ almost surely if $\{\omega \in \Omega : X_n(\omega) \to X(\omega) \text{ as } n \to \infty\}$ is an event whose probability is 1. "$X_n \to X$ almost surely" is abbreviated as $X_n \xrightarrow{a.s.} X$.

Example. Let $\Omega = [0, 1]$ and let $P$ be the uniform distribution on $\Omega$, so that $P(\omega \in [a, b]) = b - a$ for $0 \leq a \leq b \leq 1$. Let $X_n$ be defined as
$$X_n(\omega) = \begin{cases} n, & \omega \in [0, \tfrac{1}{n}) \\ 0, & \omega \in [\tfrac{1}{n}, 1] \end{cases}$$
and let $X(\omega) = 0$ for all $\omega \in [0, 1]$. For any $\omega > 0$, $X_n(\omega) = 0$ once $n > 1/\omega$, so $X_n(\omega) \to X(\omega)$ for every $\omega \in (0, 1]$, a set of probability 1. Hence $X_n \xrightarrow{a.s.} X$.
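A short simulation of this example (a sketch; the seed and the range of $n$ are arbitrary choices): for any fixed sample point $\omega > 0$, the trajectory $X_n(\omega)$ is eventually 0.

```python
import numpy as np

rng = np.random.default_rng(1)

# The slides' example: X_n(omega) = n on [0, 1/n) and 0 on [1/n, 1].
def X(n: int, omega: float) -> float:
    return float(n) if omega < 1.0 / n else 0.0

omega = rng.uniform()   # one sample point, fixed once and for all
trajectory = [X(n, omega) for n in range(1, 51)]
print("omega =", omega)
print(trajectory)       # nonzero only while n < 1/omega, then all zeros
```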

Convergence in rth Mean

Let $X, X_1, X_2, \ldots$ be random variables on a probability space $(\Omega, \mathcal{F}, P)$. Suppose $E[|X|^r] < \infty$ and $E[|X_n|^r] < \infty$ for all $n$. For $r \geq 1$, $X_n \to X$ in $r$th mean if $E(|X_n - X|^r) \to 0$ as $n \to \infty$. "$X_n \to X$ in $r$th mean" is abbreviated as $X_n \xrightarrow{r} X$. For $r = 1$, $X_n \xrightarrow{1} X$ is written as $X_n \to X$ in mean. For $r = 2$, $X_n \xrightarrow{2} X$ is written as $X_n \to X$ in mean square or $X_n \xrightarrow{m.s.} X$.

Example. Let $\Omega = [0, 1]$ and let $P$ be the uniform distribution on $\Omega$. Let $X_n$ be defined as
$$X_n(\omega) = \begin{cases} n, & \omega \in [0, \tfrac{1}{n}) \\ 0, & \omega \in [\tfrac{1}{n}, 1] \end{cases}$$
and let $X(\omega) = 0$ for all $\omega \in [0, 1]$. Then $E[|X_n - X|] = n \cdot \tfrac{1}{n} = 1$ for every $n$, so $X_n$ does not converge in mean to $X$.
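A Monte Carlo check of this computation (a sketch; the sample size and seed are arbitrary) shows $E[|X_n - X|]$ staying at 1 for every $n$:

```python
import numpy as np

rng = np.random.default_rng(2)

# E|X_n - X| = n * P(X_n = n) = n * (1/n) = 1 for every n.
def X(n: int, omegas: np.ndarray) -> np.ndarray:
    return np.where(omegas < 1.0 / n, float(n), 0.0)

omegas = rng.uniform(size=1_000_000)
for n in (10, 100, 1_000):
    print(n, np.abs(X(n, omegas)).mean())   # approximately 1 for every n
```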

Convergence in Probability

Let $X, X_1, X_2, \ldots$ be random variables on a probability space $(\Omega, \mathcal{F}, P)$. $X_n \to X$ in probability if $P(|X_n - X| > \varepsilon) \to 0$ as $n \to \infty$ for all $\varepsilon > 0$. "$X_n \to X$ in probability" is abbreviated as $X_n \xrightarrow{P} X$.

Example. Let $\Omega = [0, 1]$, let $P$ be the uniform distribution on $\Omega$, and let $X_n$ and $X$ be as in the previous example. For $\varepsilon > 0$,
$$P(|X_n - X| > \varepsilon) = P(|X_n| > \varepsilon) \leq P(X_n = n) = \frac{1}{n} \to 0,$$
so $X_n \xrightarrow{P} X$.
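The same example can be checked by simulation (a sketch; the value of $\varepsilon$, the sample size, and the seed are arbitrary): the estimated probabilities shrink like $1/n$.

```python
import numpy as np

rng = np.random.default_rng(3)

# P(|X_n - X| > eps) <= P(X_n = n) = 1/n -> 0 in the slides' example.
eps = 0.5
omegas = rng.uniform(size=1_000_000)
for n in (10, 100, 1_000):
    X_n = np.where(omegas < 1.0 / n, float(n), 0.0)
    print(n, (np.abs(X_n) > eps).mean())    # approximately 1/n
```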

Convergence in Distribution

Let $X, X_1, X_2, \ldots$ be random variables on a probability space $(\Omega, \mathcal{F}, P)$. $X_n \to X$ in distribution if $P(X_n \leq x) \to P(X \leq x)$ as $n \to \infty$ at every point $x$ where $F_X(x) = P(X \leq x)$ is continuous. "$X_n \to X$ in distribution" is abbreviated as $X_n \xrightarrow{D} X$. Convergence in distribution is also termed weak convergence.

Example. Let $X$ be a Bernoulli random variable taking values 0 and 1 with equal probability $\tfrac{1}{2}$. Let $X_1, X_2, X_3, \ldots$ be identical random variables given by $X_n = X$ for all $n$. The $X_n$'s are not independent, but $X_n \xrightarrow{D} X$. Let $Y = 1 - X$. Then $Y$ also takes values 0 and 1 with equal probability, so $X_n \xrightarrow{D} Y$. But $|X_n - Y| = 1$ for all $n$, so the $X_n$'s do not converge to $Y$ in any other mode.
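A simulation of the Bernoulli example (a sketch; sample size and seed are arbitrary) confirms that $X$ and $Y = 1 - X$ share a distribution function even though $|X_n - Y| = 1$ always:

```python
import numpy as np

rng = np.random.default_rng(4)

# X is Bernoulli(1/2); X_n = X for all n; Y = 1 - X.
X = rng.integers(0, 2, size=1_000_000)
Y = 1 - X

for x in (-0.5, 0.0, 0.5, 1.0):
    print(x, (X <= x).mean(), (Y <= x).mean())      # the two CDFs agree
print("values of |X - Y|:", np.unique(np.abs(X - Y)))   # always 1
```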

Relations between Modes of Convergence

Theorem. For any $r \geq 1$,
$$(X_n \xrightarrow{a.s.} X) \implies (X_n \xrightarrow{P} X), \qquad (X_n \xrightarrow{r} X) \implies (X_n \xrightarrow{P} X), \qquad (X_n \xrightarrow{P} X) \implies (X_n \xrightarrow{D} X).$$

Convergence in Probability Implies Convergence in Distribution

Suppose $X_n \xrightarrow{P} X$. Let $F_n(x) = P(X_n \leq x)$ and $F(x) = P(X \leq x)$. If $\varepsilon > 0$,
$$F_n(x) = P(X_n \leq x) = P(X_n \leq x, X \leq x + \varepsilon) + P(X_n \leq x, X > x + \varepsilon) \leq F(x + \varepsilon) + P(|X_n - X| > \varepsilon),$$
$$F(x - \varepsilon) = P(X \leq x - \varepsilon) = P(X \leq x - \varepsilon, X_n \leq x) + P(X \leq x - \varepsilon, X_n > x) \leq F_n(x) + P(|X_n - X| > \varepsilon).$$
Combining the above inequalities, we have
$$F(x - \varepsilon) - P(|X_n - X| > \varepsilon) \leq F_n(x) \leq F(x + \varepsilon) + P(|X_n - X| > \varepsilon).$$
Since $X_n \xrightarrow{P} X$, $P(|X_n - X| > \varepsilon) \to 0$ as $n \to \infty$. If $F$ is continuous at $x$, then $F(x - \varepsilon) \to F(x)$ and $F(x + \varepsilon) \to F(x)$ as $\varepsilon \to 0$, so $F_n(x) \to F(x)$.

Convergence in rth Mean Implies Convergence in Probability

If $r > s \geq 1$ and $X_n \xrightarrow{r} X$, then $X_n \xrightarrow{s} X$. This follows from Lyapunov's inequality: if $r > s > 0$, then
$$(E[|Y|^s])^{1/s} \leq (E[|Y|^r])^{1/r}.$$
If $X_n \xrightarrow{r} X$, then $E[|X_n - X|^r] \to 0$ and
$$(E[|X_n - X|^s])^{1/s} \leq (E[|X_n - X|^r])^{1/r} \to 0.$$

If $X_n \xrightarrow{1} X$, then $X_n \xrightarrow{P} X$. By Markov's inequality, we have for all $\varepsilon > 0$
$$P(|X_n - X| > \varepsilon) \leq \frac{E(|X_n - X|)}{\varepsilon} \to 0.$$
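Markov's inequality itself is easy to sanity-check numerically (a sketch; the standard normal is an arbitrary illustrative choice, not the slides' example):

```python
import numpy as np

rng = np.random.default_rng(5)

# Check P(|Z| > eps) <= E|Z| / eps on standard normal samples.
Z = rng.standard_normal(1_000_000)
for eps in (0.5, 1.0, 2.0):
    lhs = (np.abs(Z) > eps).mean()
    rhs = np.abs(Z).mean() / eps
    print(f"eps={eps}: {lhs:.4f} <= {rhs:.4f}")
```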

Convergence Almost Surely Implies Convergence in Probability

Let $A_n(\varepsilon) = \{|X_n - X| > \varepsilon\}$ and $B_m(\varepsilon) = \bigcup_{n \geq m} A_n(\varepsilon)$. We first show that $X_n \xrightarrow{a.s.} X$ if and only if $P(B_m(\varepsilon)) \to 0$ as $m \to \infty$, for all $\varepsilon > 0$.

Let $C = \{\omega \in \Omega : X_n(\omega) \to X(\omega) \text{ as } n \to \infty\}$ and
$$A(\varepsilon) = \{\omega \in \Omega : \omega \in A_n(\varepsilon) \text{ for infinitely many values of } n\} = \bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty} A_n(\varepsilon).$$
Then $X_n(\omega) \to X(\omega)$ if and only if $\omega \notin A(\varepsilon)$ for all $\varepsilon > 0$, so $P(C) = 1$ if and only if $P(A(\varepsilon)) = 0$ for all $\varepsilon > 0$. Since $B_m(\varepsilon)$ is a decreasing sequence of events with limit $A(\varepsilon)$, $P(A(\varepsilon)) = 0$ if and only if $P(B_m(\varepsilon)) \to 0$ as $m \to \infty$.

Finally, since $A_n(\varepsilon) \subseteq B_n(\varepsilon)$, we have $P(|X_n - X| > \varepsilon) = P(A_n(\varepsilon)) \to 0$ whenever $P(B_n(\varepsilon)) \to 0$. Thus $X_n \xrightarrow{a.s.} X \implies X_n \xrightarrow{P} X$.

Some Converses

If $X_n \xrightarrow{D} c$, where $c$ is a constant, then $X_n \xrightarrow{P} c$:
$$P(|X_n - c| > \varepsilon) = P(X_n < c - \varepsilon) + P(X_n > c + \varepsilon) \to 0 \quad \text{if } X_n \xrightarrow{D} c.$$

If $P_n(\varepsilon) = P(|X_n - X| > \varepsilon)$ satisfies $\sum_n P_n(\varepsilon) < \infty$ for all $\varepsilon > 0$, then $X_n \xrightarrow{a.s.} X$. Let $A_n(\varepsilon) = \{|X_n - X| > \varepsilon\}$ and $B_m(\varepsilon) = \bigcup_{n \geq m} A_n(\varepsilon)$. Then
$$P(B_m(\varepsilon)) \leq \sum_{n=m}^{\infty} P(A_n(\varepsilon)) = \sum_{n=m}^{\infty} P_n(\varepsilon) \to 0 \quad \text{as } m \to \infty,$$
and $X_n \xrightarrow{a.s.} X$ if and only if $P(B_m(\varepsilon)) \to 0$ as $m \to \infty$, for all $\varepsilon > 0$ (as shown on the previous slide).
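The second converse can be illustrated by simulation (a sketch with an illustrative choice not from the slides: independent $X_n$ with $P(X_n = n) = 1/n^2$, so $\sum_n P_n(\varepsilon) < \infty$ for any $0 < \varepsilon < 1$): the estimate of $P(B_m(\varepsilon))$ falls toward 0 as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(6)

# Independent X_n with P(X_n = n) = 1/n**2; sum_n P(|X_n| > eps) < inf,
# so X_n -> 0 almost surely.  Estimate P(B_m(eps)) = P(|X_n| > eps for
# some n >= m) over many independent realizations of the whole sequence.
N, trials = 5_000, 1_000
n = np.arange(1, N + 1)
hits = rng.uniform(size=(trials, N)) < 1.0 / n**2   # event {X_n = n}
for m in (1, 10, 100, 1_000):
    p_Bm = hits[:, m - 1:].any(axis=1).mean()
    print(m, p_Bm)   # bounded by sum_{n >= m} 1/n**2, decreasing to 0
```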

Reference

Chapter 7, Probability and Random Processes, Grimmett and Stirzaker, Third Edition, 2001.