Lecture 8 Characteristic Functions

Course: Theory of Probability I
Term: Fall 2013
Instructor: Gordan Zitkovic

First properties

A characteristic function is simply the Fourier transform, in probabilistic language. Since we will be integrating complex-valued functions, we define (both integrals on the right need to exist)

    ∫ f dµ = ∫ Re f dµ + i ∫ Im f dµ,

where Re f and Im f denote the real and the imaginary part of a function f : R → C. The reader will easily figure out which properties of the integral transfer from the real case.

Definition 8.1. The characteristic function of a probability measure µ on B(R) is the function ϕ_µ : R → C given by

    ϕ_µ(t) = ∫ e^{itx} µ(dx).

When we speak of the characteristic function ϕ_X of a random variable X, we have the characteristic function ϕ_{µ_X} of its distribution µ_X in mind. Note, moreover, that ϕ_X(t) = E[e^{itX}].

While difficult to visualize, characteristic functions can be used to learn a lot about the random variables they correspond to. We start with some properties which follow directly from the definition:

Proposition 8.2. Let X, Y and {X_n}_{n∈N} be random variables.

1. ϕ_X(0) = 1 and |ϕ_X(t)| ≤ 1, for all t.
2. ϕ_X(−t) = \overline{ϕ_X(t)}, where the bar denotes complex conjugation.
3. ϕ_X is uniformly continuous.
4. If X and Y are independent, then ϕ_{X+Y} = ϕ_X ϕ_Y.
5. For all t_1 < t_2 < ... < t_n, the matrix A = (a_{jk})_{1≤j,k≤n} given by

    a_{jk} = ϕ_X(t_j − t_k)

   is Hermitian and positive semi-definite, i.e., A = A^* and ξ^T A \overline{ξ} ≥ 0, for any ξ ∈ C^n.
6. If X_n →^D X, then ϕ_{X_n}(t) → ϕ_X(t), for each t ∈ R.
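The following short Python sketch is an editorial aside, not part of the original notes: it approximates ϕ_X(t) = E[e^{itX}] by a Monte Carlo average and illustrates properties 1, 2 and 4 above for two independent exponential samples. The sample size, seed and test point are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    X = rng.exponential(scale=1.0, size=n)    # a sample from the Exponential(1) law
    Y = rng.exponential(scale=2.0, size=n)    # an independent sample, Exponential(1/2)

    def ecf(sample, t):
        # Monte Carlo approximation of the characteristic function E[e^{itX}]
        return np.exp(1j * t * sample).mean()

    t = 2.3                                   # arbitrary test point
    print("phi_X(0)            =", ecf(X, 0.0))            # equals 1 exactly
    print("|phi_X(t)| <= 1     :", abs(ecf(X, t)) <= 1.0)
    print("phi_X(-t)           =", ecf(X, -t))
    print("conj(phi_X(t))      =", np.conj(ecf(X, t)))     # property 2
    print("phi_{X+Y}(t)        =", ecf(X + Y, t))
    print("phi_X(t) * phi_Y(t) =", ecf(X, t) * ecf(Y, t))  # property 4, up to sampling error

Up to sampling error of order n^{−1/2}, the last two printed lines agree, as property 4 predicts.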

Proof.

1. Immediate.

2. e^{−itx} = \overline{e^{itx}}.

3. We have

    |ϕ_X(t) − ϕ_X(s)| = |∫ (e^{itx} − e^{isx}) µ(dx)| ≤ h(t − s), where h(u) = ∫ |e^{iux} − 1| µ(dx).

Since |e^{iux} − 1| ≤ 2, the dominated convergence theorem implies that lim_{u→0} h(u) = 0, and, so, ϕ_X is uniformly continuous.

4. Independence of X and Y implies the independence of exp(itX) and exp(itY). Therefore,

    ϕ_{X+Y}(t) = E[e^{it(X+Y)}] = E[e^{itX} e^{itY}] = E[e^{itX}] E[e^{itY}] = ϕ_X(t) ϕ_Y(t).

5. The matrix A is Hermitian by (2). To see that it is positive semi-definite, note that a_{jk} = E[e^{it_j X} e^{−it_k X}], and so

    Σ_{j=1}^{n} Σ_{k=1}^{n} ξ_j \overline{ξ_k} a_{jk} = E[ (Σ_{j=1}^{n} ξ_j e^{it_j X}) (Σ_{k=1}^{n} \overline{ξ_k} e^{−it_k X}) ] = E[ |Σ_{j=1}^{n} ξ_j e^{it_j X}|^2 ] ≥ 0.

6. Since X_n →^D X, we may assume (by passing to a Skorokhod representation, which changes none of the distributions involved) that X_n → X, a.s. Then, for f ∈ C_b(R), we have f(X_n) → f(X), a.s., and so, by the dominated convergence theorem applied to the cases f(x) = cos(tx) and f(x) = sin(tx), we have

    ϕ_X(t) = E[exp(itX)] = E[lim_n exp(itX_n)] = lim_n E[exp(itX_n)] = lim_n ϕ_{X_n}(t).

Note: We do not prove (or use) it in these notes, but it can be shown that a function ϕ : R → C, continuous at the origin with ϕ(0) = 1, is a characteristic function of some probability measure µ on B(R) if and only if it is positive semi-definite, i.e., if it satisfies part 5. of Proposition 8.2. This is known as Bochner's theorem.

Here is a simple problem you can use to test your understanding of the definitions:

Problem 8.1. Let µ and ν be two probability measures on B(R), and let ϕ_µ and ϕ_ν be their characteristic functions. Show that Parseval's identity holds:

    ∫ e^{−its} ϕ_µ(t) ν(dt) = ∫ ϕ_ν(t − s) µ(dt), for all s ∈ R.
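As a small numerical aside (not part of the notes), one can watch part 5. of Proposition 8.2 in action: for a characteristic function with a known closed form, here the standard normal ϕ(t) = exp(−t²/2), the matrix a_{jk} = ϕ(t_j − t_k) built on an arbitrary grid of points is Hermitian with nonnegative eigenvalues, up to round-off.

    import numpy as np

    def phi(t):
        # characteristic function of N(0,1): exp(-t^2/2)
        return np.exp(-0.5 * t ** 2)

    t = np.linspace(-3.0, 3.0, 7)            # t_1 < t_2 < ... < t_n, arbitrary grid
    A = phi(t[:, None] - t[None, :])         # a_{jk} = phi(t_j - t_k)

    print("Hermitian      :", np.allclose(A, A.conj().T))
    print("min eigenvalue :", np.linalg.eigvalsh(A).min())   # >= 0 up to round-off

By Bochner's theorem, this positive semi-definiteness is not an accident but a characterization of characteristic functions.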

Our next result shows that µ can be recovered from its characteristic function ϕ_µ:

Theorem 8.3 (Inversion theorem). Let µ be a probability measure on B(R), and let ϕ = ϕ_µ be its characteristic function. Then, for a < b, we have

    µ((a, b)) + ½ µ({a, b}) = lim_{T→∞} (1/2π) ∫_{−T}^{T} (e^{−ita} − e^{−itb})/(it) ϕ(t) dt.    (8.1)

Proof. We start by picking a < b and noting that

    (e^{−ita} − e^{−itb})/(it) = ∫_a^b e^{−ity} dy,

so that, by Fubini's theorem, the integral in (8.1) is well-defined:

    F(a, b, T) = (1/2π) ∫_{[−T,T]×[a,b]} exp(−ity) ϕ(t) dy dt, where F(a, b, T) = (1/2π) ∫_{−T}^{T} (e^{−ita} − e^{−itb})/(it) ϕ(t) dt.

Another use of Fubini's theorem yields

    F(a, b, T) = (1/2π) ∫ ( ∫_{[−T,T]×[a,b]} exp(−ity) exp(itx) dy dt ) µ(dx)
               = (1/2π) ∫ ( ∫_{[−T,T]×[a,b]} exp(−it(y − x)) dy dt ) µ(dx)
               = (1/2π) ∫ ( ∫_{−T}^{T} (e^{−it(a−x)} − e^{−it(b−x)})/(it) dt ) µ(dx).

Set

    f(a, b, T; x) = ∫_{−T}^{T} (e^{−it(a−x)} − e^{−it(b−x)})/(it) dt and K(T; c) = ∫_0^T sin(ct)/t dt,

and note that, since cos is an even and sin an odd function, we have

    f(a, b, T; x) = 2 ∫_0^T ( sin((x−a)t)/t − sin((x−b)t)/t ) dt = 2 K(T; x−a) − 2 K(T; x−b).

Note: The integral ∫_{−T}^{T} exp(−it(a−x))/(it) dt on its own is not defined; we really need to work with the full f(a, b, T; x) to get the right cancellation.

Since

    K(T; c) = ∫_0^T sin(ct)/(ct) d(ct) = ∫_0^{cT} sin(s)/s ds = K(cT; 1) for c > 0,  K(T; 0) = 0,  and  K(T; c) = −K(|c|T; 1) for c < 0,    (8.2)

Problem 4.1 implies that

    lim_{T→∞} K(T; c) = π/2 for c > 0,  0 for c = 0,  and  −π/2 for c < 0,

and so

    lim_{T→∞} f(a, b, T; x) = 0 for x ∈ [a, b]^c,  π for x = a or x = b,  and  2π for a < x < b.

Observe first that the function T ↦ K(T; 1) is continuous on [0, ∞) and has a finite limit as T → ∞, so that sup_{T≥0} |K(T; 1)| < ∞. Furthermore, (8.2) implies that |K(T; c)| ≤ sup_{T≥0} |K(T; 1)| for any c ∈ R and T ≥ 0, so that sup{ |f(a, b, T; x)| : x ∈ R, T ≥ 0 } < ∞. Therefore, we can use the dominated convergence theorem to get that

    lim_{T→∞} F(a, b, T) = lim_{T→∞} (1/2π) ∫ f(a, b, T; x) µ(dx) = (1/2π) ∫ lim_{T→∞} f(a, b, T; x) µ(dx) = ½ µ({a}) + µ((a, b)) + ½ µ({b}).
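A numerical aside (not in the original notes): the truncated integral in (8.1) can be evaluated directly for a distribution where everything is known in closed form. The sketch below does this for the Exponential(1) law, whose characteristic function λ/(λ − it) appears in Example 8.6; the truncation levels T and the integration grid are arbitrary, and the integrand at t = 0 is replaced by its limit b − a.

    import numpy as np

    lam, a, b = 1.0, 0.5, 2.0
    phi = lambda t: lam / (lam - 1j * t)          # ch. function of the Exponential(lam) law

    def kernel(t):
        # (e^{-ita} - e^{-itb}) / (it), with the limit b - a substituted at t = 0
        out = np.full(t.shape, b - a, dtype=complex)
        nz = t != 0
        out[nz] = (np.exp(-1j * t[nz] * a) - np.exp(-1j * t[nz] * b)) / (1j * t[nz])
        return out

    for T in (10.0, 100.0, 1000.0):
        t = np.linspace(-T, T, int(500 * T) + 1)  # fine grid on [-T, T]
        dt = t[1] - t[0]
        vals = kernel(t) * phi(t)
        # trapezoidal rule for (1/2 pi) * integral over [-T, T]
        approx = (vals.sum() - 0.5 * (vals[0] + vals[-1])).real * dt / (2 * np.pi)
        print("T =", T, "truncated integral =", approx)

    print("exact mu((a,b)) =", np.exp(-lam * a) - np.exp(-lam * b))

The printed values approach µ((a, b)) = e^{−a} − e^{−b} as T grows; since this µ has no atoms, the correction term ½ µ({a, b}) in (8.1) vanishes.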

Corollary 8.4. For probability measures µ_1 and µ_2 on B(R), the equality ϕ_{µ_1} = ϕ_{µ_2} implies that µ_1 = µ_2.

Proof. By Theorem 8.3, we have µ_1((a, b)) = µ_2((a, b)) for all a, b ∈ C, where C is the set of all x ∈ R such that µ_1({x}) = µ_2({x}) = 0. Since C^c is at most countable, it is straightforward to see that the family {(a, b) : a, b ∈ C} of intervals is a π-system which generates B(R).

Corollary 8.5. Suppose that ∫ |ϕ_µ(t)| dt < ∞. Then µ ≪ λ and dµ/dλ is a bounded and continuous function given by dµ/dλ = f, where

    f(x) = (1/2π) ∫ e^{−itx} ϕ_µ(t) dt, for x ∈ R.

Proof. Since ϕ_µ is integrable and |e^{−itx}| = 1, f is well defined. For a < b we have

    ∫_a^b f(x) dx = (1/2π) ∫_a^b ∫ e^{−itx} ϕ_µ(t) dt dx
                  = (1/2π) ∫ ϕ_µ(t) ( ∫_a^b e^{−itx} dx ) dt
                  = (1/2π) ∫ (e^{−ita} − e^{−itb})/(it) ϕ(t) dt
                  = lim_{T→∞} (1/2π) ∫_{−T}^{T} (e^{−ita} − e^{−itb})/(it) ϕ(t) dt
                  = µ((a, b)) + ½ µ({a, b}),    (8.3)

by Theorem 8.3, where the use of Fubini's theorem above is justified by the fact that the function (t, x) ↦ e^{−itx} ϕ_µ(t) is integrable on R × [a, b], for all a < b. For a, b such that µ({a}) = µ({b}) = 0, equation (8.3) implies that µ((a, b)) = ∫_a^b f(x) dx. The claim now follows by the π-λ theorem.
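To illustrate Corollary 8.5 numerically (an editorial sketch, not part of the notes; the integration grid is an arbitrary truncation of R), one can recover the standard normal density from its characteristic function exp(−t²/2) by discretizing the integral that defines f.

    import numpy as np

    phi = lambda t: np.exp(-0.5 * t ** 2)         # ch. function of N(0,1)

    t = np.linspace(-30.0, 30.0, 60_001)          # truncated integration grid, arbitrary
    dt = t[1] - t[0]

    def f(x):
        # density recovered as in Corollary 8.5, via the trapezoidal rule
        vals = np.exp(-1j * t * x) * phi(t)
        return (vals.sum() - 0.5 * (vals[0] + vals[-1])).real * dt / (2 * np.pi)

    for x in (0.0, 1.0, 2.5):
        exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)   # N(0,1) density
        print("x =", x, "recovered:", f(x), "exact:", exact)

The recovered values match the N(0,1) density (1/√(2π)) e^{−x²/2} to many digits because the Gaussian characteristic function decays so fast that the truncation error is negligible.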

Example 8.6. Here is a list of some common distributions and the corresponding characteristic functions:

1. Continuous distributions (name; parameters; density f_X(x); ch. function ϕ_X(t)):

   1. Uniform; a < b; f_X(x) = (1/(b−a)) 1_{[a,b]}(x); ϕ_X(t) = (e^{itb} − e^{ita}) / (it(b − a)).
   2. Normal; µ ∈ R, σ > 0; f_X(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)); ϕ_X(t) = exp(iµt − ½ σ² t²).
   3. Exponential; λ > 0; f_X(x) = λ exp(−λx) 1_{[0,∞)}(x); ϕ_X(t) = λ/(λ − it).
   4. Double Exponential; λ > 0; f_X(x) = ½ λ exp(−λ|x|); ϕ_X(t) = λ²/(λ² + t²).
   5. Cauchy; µ ∈ R, γ > 0; f_X(x) = γ / (π(γ² + (x − µ)²)); ϕ_X(t) = exp(iµt − γ|t|).

2. Discrete distributions (name; parameters; p_n = P[X = n], n ∈ Z; ch. function ϕ_X(t)):

   6. Dirac; m ∈ N_0; p_n = 1_{{m = n}}; ϕ_X(t) = exp(itm).
   7. Coin-toss; p ∈ (0, 1); p_1 = p, p_{−1} = 1 − p; ϕ_X(t) = p e^{it} + (1 − p) e^{−it} (which reduces to cos(t) for p = ½).
   8. Geometric; p ∈ (0, 1); p_n = p^n (1 − p), n ∈ N_0; ϕ_X(t) = (1 − p)/(1 − p e^{it}).
   9. Poisson; λ > 0; p_n = e^{−λ} λ^n / n!, n ∈ N_0; ϕ_X(t) = exp(λ(e^{it} − 1)).

3. A singular distribution:

   10. Cantor; ϕ_X(t) = e^{it/2} ∏_{k=1}^{∞} cos(t/3^k).

Tail behavior

We continue by describing several methods one can use to extract useful information about the tails of the underlying probability distribution from a characteristic function.

Proposition 8.7. Let X be a random variable. If E[|X|^n] < ∞, then (d^n/dt^n) ϕ_X(t) exists for all t and

    (d^n/dt^n) ϕ_X(t) = E[e^{itX} (iX)^n].

In particular, E[X^n] = (−i)^n (d^n/dt^n) ϕ_X(0).

Proof. We give the proof in the case n = 1 and leave the general case to the reader:

    lim_{h→0} (ϕ(h) − ϕ(0))/h = lim_{h→0} ∫ (e^{ihx} − 1)/h µ(dx) = ∫ lim_{h→0} (e^{ihx} − 1)/h µ(dx) = ∫ ix µ(dx),

where the passage of the limit under the integral sign is justified by the dominated convergence theorem which, in turn, can be used since |(e^{ihx} − 1)/h| ≤ |x| and ∫ |x| µ(dx) = E[|X|] < ∞.
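As a numerical check of Proposition 8.7 (an editorial aside; the step size h and the parameter λ are arbitrary), finite differences of the Exponential(λ) characteristic function λ/(λ − it) at t = 0 recover the first two moments 1/λ and 2/λ².

    import numpy as np

    lam = 2.0
    phi = lambda t: lam / (lam - 1j * t)              # ch. function of the Exponential(lam) law

    h = 1e-4                                          # finite-difference step, arbitrary
    d1 = (phi(h) - phi(-h)) / (2 * h)                 # approximates phi'(0)
    d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h ** 2   # approximates phi''(0)

    print("E[X]   from cf:", ((-1j) ** 1 * d1).real, " exact:", 1 / lam)
    print("E[X^2] from cf:", ((-1j) ** 2 * d2).real, " exact:", 2 / lam ** 2)

Central differences keep the discretization error of order h², far below the size of the moments involved here.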

Remark 8.8.

1. It can be shown that for n even, the existence of (d^n/dt^n) ϕ_X(0) (in the appropriate sense) implies the finiteness of the n-th moment E[|X|^n].
2. When n is odd, it can happen that (d^n/dt^n) ϕ_X(0) exists, but E[|X|^n] = ∞; see Problem 8.6.

Finer estimates of the tails of a probability distribution can be obtained by a finer analysis of the behavior of ϕ around 0:

Proposition 8.9. Let µ be a probability measure on B(R) and let ϕ = ϕ_µ be its characteristic function. Then, for ε > 0 we have

    µ([−2/ε, 2/ε]^c) ≤ (1/ε) ∫_{−ε}^{ε} (1 − ϕ(t)) dt.

Proof. Let X be a random variable with distribution µ. We start by using Fubini's theorem to get

    (1/2ε) ∫_{−ε}^{ε} (1 − ϕ(t)) dt = (1/2ε) E[ ∫_{−ε}^{ε} (1 − e^{itX}) dt ] = (1/ε) E[ ∫_0^{ε} (1 − cos(tX)) dt ] = E[ 1 − sin(εX)/(εX) ].

It remains to observe that 1 − sin(x)/x ≥ 0 and sin(x)/x ≤ 1/|x| for all x ≠ 0. Therefore, if we use the first inequality on [−2, 2] and the second one on [−2, 2]^c, we get

    1 − sin(x)/x ≥ ½ 1_{{|x| > 2}},

so that

    (1/2ε) ∫_{−ε}^{ε} (1 − ϕ(t)) dt ≥ ½ P[|εX| > 2] = ½ µ([−2/ε, 2/ε]^c).

Problem 8.2. Use the inequality of Proposition 8.9 to show that if ϕ(t) = 1 + O(|t|^α) for some α > 0, then ∫ |x|^β µ(dx) < ∞ for all β < α. Give an example where ∫ |x|^α µ(dx) = ∞.

Note: f(t) = g(t) + O(h(t)) means that, for some δ > 0, we have sup_{|t| ≤ δ} |f(t) − g(t)| / |h(t)| < ∞.

Problem 8.3 (Riemann-Lebesgue theorem). Suppose that µ ≪ λ. Show that

    lim_{t→∞} ϕ_µ(t) = lim_{t→−∞} ϕ_µ(t) = 0.

Hint: Use (and prove) the fact that f ∈ L^1_+(R) can be approximated in L^1(R) by a function of the form Σ_{k=1}^{n} α_k 1_{[a_k, b_k]}.
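To see the bound of Proposition 8.9 in action (an illustrative aside with an arbitrary ε), the standard Cauchy distribution is convenient because both sides have closed forms, with ϕ(t) = exp(−|t|) from Example 8.6.

    import numpy as np

    eps = 0.5                                     # arbitrary
    K = 2.0 / eps

    # left-hand side: tail mass of the standard Cauchy distribution outside [-2/eps, 2/eps]
    lhs = 1.0 - (2.0 / np.pi) * np.arctan(K)

    # right-hand side: (1/eps) * integral_{-eps}^{eps} (1 - e^{-|t|}) dt, in closed form
    rhs = 2.0 - 2.0 * (1.0 - np.exp(-eps)) / eps

    print("mu([-2/eps, 2/eps]^c)    =", lhs)
    print("(1/eps) int (1 - phi) dt =", rhs)
    print("bound holds              :", lhs <= rhs)

The heavy Cauchy tails are reflected in the fact that ϕ is not differentiable at 0, which keeps the right-hand side from shrinking quickly as ε decreases.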

The continuity theorem

Theorem 8.10 (Continuity theorem). Let {µ_n}_{n∈N} be a sequence of probability distributions on B(R), and let {ϕ_n}_{n∈N} be the sequence of their characteristic functions. Suppose that there exists a function ϕ : R → C such that

1. ϕ_n(t) → ϕ(t), for all t ∈ R, and
2. ϕ is continuous at t = 0.

Then, ϕ is the characteristic function of a probability measure µ on B(R) and µ_n →^w µ.

Proof. We start by showing that the continuity of the limit ϕ implies tightness of {µ_n}_{n∈N}. Given ε > 0 there exists δ > 0 such that |1 − ϕ(t)| ≤ ε/2 for |t| ≤ δ. By Proposition 8.9 and the dominated convergence theorem we have

    limsup_n µ_n([−2/δ, 2/δ]^c) ≤ limsup_n (1/δ) ∫_{−δ}^{δ} (1 − ϕ_n(t)) dt = (1/δ) ∫_{−δ}^{δ} (1 − ϕ(t)) dt ≤ ε.

By taking an even smaller δ > 0, we can guarantee that

    sup_{n∈N} µ_n([−2/δ, 2/δ]^c) ≤ ε,

which, together with the arbitrariness of ε > 0, implies that {µ_n}_{n∈N} is tight.

Let {µ_{n_k}}_{k∈N} be a convergent subsequence of {µ_n}_{n∈N}, and let µ be its limit. Since ϕ_{n_k} → ϕ, we conclude that ϕ is the characteristic function of µ. It remains to show that the whole sequence converges to µ weakly. This follows, however, directly from Problem 7.4, since any convergent subsequence {µ_{n_k}}_{k∈N} has the same limit µ.

Problem 8.4. Let ϕ be a characteristic function of some probability measure µ on B(R). Show that ϕ̂(t) = e^{ϕ(t) − 1} is also a characteristic function of some probability measure µ̂ on B(R).

Additional Problems

Problem 8.5 (Atoms from the characteristic function). Let µ be a probability measure on B(R), and let ϕ = ϕ_µ be its characteristic function.

1. Show that

    µ({a}) = lim_{T→∞} (1/2T) ∫_{−T}^{T} e^{−ita} ϕ(t) dt.

2. Show that if lim_{t→∞} ϕ(t) = lim_{t→−∞} ϕ(t) = 0, then µ has no atoms.
3. Show that the converse of (2) is false. Hint: Prove that ϕ(t_n) does not tend to 0 along a suitably chosen sequence t_n → ∞, where ϕ is the characteristic function of the Cantor distribution.
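The continuity theorem is the engine behind characteristic-function proofs of central limit theorems. As an editorial illustration (Bernoulli(p) with an arbitrary p and test point t), the exact characteristic function of the standardized sum of n coin tosses approaches the N(0,1) characteristic function exp(−t²/2) pointwise as n grows, which by Theorem 8.10 is exactly weak convergence of the standardized sums.

    import numpy as np

    p = 0.3
    sigma = np.sqrt(p * (1 - p))
    t = 1.7                                       # arbitrary test point

    def phi_standardized_sum(n):
        # ch. function of (X_1 + ... + X_n - n*p) / (sigma * sqrt(n)), X_i i.i.d. Bernoulli(p)
        s = t / (sigma * np.sqrt(n))
        return np.exp(-1j * s * n * p) * (1 - p + p * np.exp(1j * s)) ** n

    for n in (10, 100, 1000, 10_000):
        print("n =", n, " phi_n(t) =", phi_standardized_sum(n))
    print("limit exp(-t^2/2) =", np.exp(-t ** 2 / 2))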

Problem 8.6 (Existence of ϕ′_X(0) does not imply that X ∈ L^1). Let X be a random variable which takes values in Z \ {−2, −1, 0, 1, 2} with

    P[X = k] = P[X = −k] = C / (k² log(k)), for k = 3, 4, ...,

where C = ½ (Σ_{k≥3} 1/(k² log(k)))^{−1} ∈ (0, ∞). Show that ϕ′_X(0) = 0, but X ∉ L^1.

Hint: Argue that, in order to establish that ϕ′_X(0) = 0, it is enough to show that

    lim_{h→0} (1/h) Σ_{k≥3} (1 − cos(hk)) / (k² log(k)) = 0.

Then split the sum at k close to 2/h and use (and prove) the inequality 1 − cos(x) ≤ min(x²/2, |x|). Bounding sums by integrals may help, too.

Problem 8.7 (Multivariate characteristic functions). Let X = (X_1, ..., X_n) be a random vector. The characteristic function ϕ = ϕ_X : R^n → C is given by

    ϕ(t_1, t_2, ..., t_n) = E[exp(i Σ_{k=1}^{n} t_k X_k)].

We will also use the shortcut t for (t_1, ..., t_n) and t · X for the random variable Σ_{k=1}^{n} t_k X_k.

Note: Take for granted the following statement (the proof of which is similar to the proof of the 1-dimensional case): Suppose that X_1 and X_2 are random vectors with ϕ_{X_1}(t) = ϕ_{X_2}(t) for all t ∈ R^n. Then X_1 and X_2 have the same distribution, i.e., µ_{X_1} = µ_{X_2}.

Prove the following statements:

1. Random variables X and Y are independent if and only if ϕ_{(X,Y)}(t_1, t_2) = ϕ_X(t_1) ϕ_Y(t_2) for all t_1, t_2 ∈ R.
2. Random vectors X_1 and X_2 have the same distribution if and only if the random variables t · X_1 and t · X_2 have the same distribution for all t ∈ R^n. (This fact is known as Wald's device.)

An n-dimensional random vector X is said to be Gaussian (or, to have the multivariate normal distribution) if there exists a vector µ ∈ R^n and a symmetric positive semi-definite matrix Σ ∈ R^{n×n} such that

    ϕ_X(t) = exp(i t · µ − ½ t^τ Σ t),

where t is interpreted as a column vector, and (·)^τ is transposition. This is denoted as X ∼ N(µ, Σ). X is said to be non-degenerate if Σ is positive definite.

3. Show that a random vector X is Gaussian if and only if the random variable t · X is normally distributed (with some mean and variance) for each t ∈ R^n. Note: Be careful, nothing in the second statement tells you what the mean and variance of t · X are.
4. Let X = (X_1, X_2, ..., X_n) be a Gaussian random vector. Show that X_k and X_l, k ≠ l, are independent if and only if they are uncorrelated.
5. Construct a random vector (X, Y) such that both X and Y are normally distributed, but X = (X, Y) is not Gaussian.
6. Let X = (X_1, X_2, ..., X_n) be a random vector consisting of n independent random variables with X_i ∼ N(0, 1). Let Σ ∈ R^{n×n} be a given positive semi-definite symmetric matrix, and µ ∈ R^n a given vector. Show that there exists an affine transformation T : R^n → R^n such that the random vector T(X) is Gaussian with T(X) ∼ N(µ, Σ).
7. Find a necessary and sufficient condition on µ and Σ such that the converse of the previous problem holds true: for a Gaussian random vector X ∼ N(µ, Σ), there exists an affine transformation T : R^n → R^n such that T(X) has independent components with the N(0, 1)-distribution (i.e., T(X) ∼ N(0, I), where I is the identity matrix).
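As a numerical companion to the definition of a Gaussian random vector above (an editorial sketch; µ, Σ, t, the seed and the sample size are arbitrary), a Monte Carlo estimate of E[exp(i t · X)] for X ∼ N(µ, Σ) can be compared with the closed form exp(i t · µ − ½ t^τ Σ t).

    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.array([1.0, -2.0])
    Sigma = np.array([[2.0, 0.6],
                      [0.6, 1.0]])                # symmetric, positive definite
    t = np.array([0.4, -0.9])

    X = rng.multivariate_normal(mu, Sigma, size=400_000)   # rows are samples of X ~ N(mu, Sigma)
    mc = np.exp(1j * (X @ t)).mean()
    exact = np.exp(1j * t @ mu - 0.5 * t @ Sigma @ t)

    print("Monte Carlo estimate:", mc)
    print("closed form         :", exact)

The two printed numbers agree up to an error of order (sample size)^{−1/2}.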

Problem 8.8 (Slutsky's Theorem). Let X, Y, {X_n}_{n∈N} and {Y_n}_{n∈N} be random variables defined on the same probability space, such that

    X_n →^D X and Y_n →^D Y.    (8.4)

Show that

1. It is not necessarily true that X_n + Y_n →^D X + Y. For that matter, we do not necessarily have (X_n, Y_n) →^D (X, Y) (where the pairs are considered as random elements in the metric space R²).
2. If, in addition to (8.4), there exists a constant c ∈ R such that P[Y = c] = 1, show that g(X_n, Y_n) →^D g(X, c), for any continuous function g : R² → R.

Hint: It is enough to show that (X_n, Y_n) →^D (X, c). Use Problem 8.7.

Problem 8.9 (Convergence of a normal sequence).

1. Let {X_n}_{n∈N} be a sequence of normally-distributed random variables converging weakly towards a random variable X. Show that X must be a normal random variable itself.

Hint: Use the following fact: for a sequence {µ_n}_{n∈N} of real numbers, the following two statements are equivalent: (a) µ_n → µ, and (b) exp(itµ_n) → exp(itµ), for all t ∈ R. You don't need to prove it, but feel free to try.

2. Let X_n be a sequence of normal random variables such that X_n → X, a.s. Show that X_n → X in L^p for all p ≥ 1.