1 Section 27: The Central Limit Theorem. Po-Ning Chen, Professor, Institute of Communications Engineering, National Chiao Tung University, Hsin Chu, Taiwan 30010, R.O.C.

2 Identically distributed summands (27-1)

Central limit theorem: the sum of many independent random variables will be approximately normally distributed if each summand has high probability of being small.

Theorem 27.1 (Lindeberg-Lévy theorem) Suppose that $\{X_n\}_{n=1}^{\infty}$ is an independent sequence of random variables having the same distribution with mean $c$ and finite positive variance $\sigma^2$. Then
$$\frac{S_n - nc}{\sigma\sqrt{n}} \Rightarrow N,$$
where $N$ is standard normal distributed and $S_n = X_1 + \cdots + X_n$.
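As a quick numerical illustration of Theorem 27.1 (not part of the original slides), the following sketch standardizes sums of i.i.d. exponential random variables and compares the empirical distribution of the standardized sum with the standard normal cdf. It assumes numpy is available; the exponential choice and all sample sizes are ours.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n, trials = 1000, 20_000
X = rng.exponential(scale=1.0, size=(trials, n))    # i.i.d. summands with c = 1, sigma^2 = 1
Z = (X.sum(axis=1) - n) / np.sqrt(n)                # (S_n - nc)/(sigma*sqrt(n))

Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))        # standard normal cdf
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"x={x:+.1f}  empirical={np.mean(Z <= x):.4f}  Phi(x)={Phi(x):.4f}")
```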

3 Idea behind the proof (27-2)

Proof: Without loss of generality, we can assume that each $X_n$ has zero mean and unit variance. The characteristic function of $S_n/\sqrt{n}$ is then given by:
$$E\left[e^{itS_n/\sqrt{n}}\right] = E\left[e^{it(X_1+\cdots+X_n)/\sqrt{n}}\right] = \prod_{k=1}^{n} E\left[e^{itX_k/\sqrt{n}}\right] = \varphi_X^n\!\left(\frac{t}{\sqrt{n}}\right).$$

Theorem 26.3 (Continuity theorem) $X_n \Rightarrow X$ if, and only if, $\varphi_{X_n}(t) \to \varphi_X(t)$ for every $t \in \mathbb{R}$.

By the continuity theorem, we need to show that
$$\lim_{n\to\infty} \varphi_X^n\!\left(\frac{t}{\sqrt{n}}\right) = e^{-t^2/2},$$
or equivalently,
$$\lim_{n\to\infty} n \log \varphi_X\!\left(\frac{t}{\sqrt{n}}\right) = -\frac{t^2}{2}.$$

4 Idea behind the proof (27-3)

Lemma If $E[|X|^k] < \infty$, then $\varphi^{(k)}(0) = i^k E[X^k]$.

By noting that $\varphi_X(0) = 1$, $\varphi_X'(0) = 0$ and $\varphi_X''(0) = -1$, and applying L'Hôpital's rule twice,
$$\lim_{n\to\infty} n \log \varphi_X(t/\sqrt{n}) = \lim_{s \downarrow 0} \frac{\log \varphi_X(ts)}{s^2} \quad (s = 1/\sqrt{n})$$
$$= \lim_{s \downarrow 0} \frac{t\,\varphi_X'(ts)/\varphi_X(ts)}{2s} = \lim_{s \downarrow 0} \frac{t^2}{2} \cdot \frac{\varphi_X''(ts)\,\varphi_X(ts) - [\varphi_X'(ts)]^2}{\varphi_X^2(ts)} = -\frac{t^2}{2}.$$
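This limit can also be checked numerically for a concrete distribution (a sketch assuming numpy; the uniform choice is ours, not the slides'): for $X$ uniform on $[-\sqrt{3}, \sqrt{3}]$, which has zero mean and unit variance, $\varphi_X(t) = \sin(\sqrt{3}\,t)/(\sqrt{3}\,t)$.

```python
import numpy as np

def phi(t):
    # characteristic function of Uniform[-sqrt(3), sqrt(3)]
    a = np.sqrt(3.0) * t
    return np.sin(a) / a

t = 1.5
for n in (10, 100, 1000, 10_000):
    # n*log(phi(t/sqrt(n))) should approach -t^2/2 = -1.125
    print(n, n * np.log(phi(t / np.sqrt(n))), -t**2 / 2)
```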

5 Generalization of Theorem 27.1 (27-4)

Theorem 27.1 requires the distribution of each $X_n$ to be identical. Can we relax this requirement and still have the central limit theorem hold? Yes, but a different proof technique must be developed.

6 Illustration of idea behind alternative proof (27-5)

Lemma If $X$ has a moment of order $n$, then
$$\left|\varphi(t) - \sum_{k=0}^{n} \frac{(it)^k}{k!} E[X^k]\right| \le E\left[\min\left\{\frac{|tX|^{n+1}}{(n+1)!}, \frac{2|tX|^n}{n!}\right\}\right] \le \min\left\{\frac{|t|^{n+1} E[|X|^{n+1}]}{(n+1)!}, \frac{2|t|^n E[|X|^n]}{n!}\right\}.$$
(Notably, this inequality is valid even if $E[|X|^{n+1}] = \infty$.)

Suppose that $E[|X|^3] < \infty$. (The assumption is made simply to show the idea behind an alternative proof.) With $E[X] = 0$ and $E[X^2] = 1$,
$$\left|\varphi_X\!\left(\frac{t}{\sqrt{n}}\right) - \left(1 - \frac{t^2}{2n}\right)\right| \le E\left[\min\left\{\frac{|tX|^3}{6 n^{3/2}}, \frac{t^2 X^2}{n}\right\}\right] \le \frac{|t|^3 E[|X|^3]}{6 n^{3/2}}.$$

7 Illustration of idea behind alternative proof (27-6)

Lemma Let $z_1,\ldots,z_m$ and $w_1,\ldots,w_m$ be complex numbers of modulus at most $1$; then
$$|z_1 z_2 \cdots z_m - w_1 w_2 \cdots w_m| \le \sum_{k=1}^{m} |z_k - w_k|.$$

Proof: As
$$z_1 z_2 \cdots z_m - w_1 w_2 \cdots w_m = (z_1 - w_1)(z_2 \cdots z_m) + w_1(z_2 \cdots z_m - w_2 \cdots w_m),$$
we obtain:
$$|z_1 z_2 \cdots z_m - w_1 w_2 \cdots w_m| \le |(z_1 - w_1)(z_2 \cdots z_m)| + |w_1(z_2 \cdots z_m - w_2 \cdots w_m)| \le |z_1 - w_1| + |z_2 \cdots z_m - w_2 \cdots w_m|.$$
The lemma can therefore be proved by induction.
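A quick randomized check of the product lemma (a purely illustrative sketch, assuming numpy; the sampling scheme is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10
for _ in range(5):
    # random complex numbers in the closed unit disk: radius in [0,1], random angle
    z = rng.uniform(0, 1, m) * np.exp(2j * np.pi * rng.uniform(0, 1, m))
    w = rng.uniform(0, 1, m) * np.exp(2j * np.pi * rng.uniform(0, 1, m))
    lhs = abs(np.prod(z) - np.prod(w))
    rhs = np.abs(z - w).sum()
    print(f"lhs={lhs:.4f} <= rhs={rhs:.4f}: {lhs <= rhs}")
```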

8 Illustration of idea behind alternative proof (27-7)

With the lemma and considering those $n$ satisfying $n \ge t^2/4$ (so that $|1 - t^2/(2n)| \le 1$ and the lemma applies),
$$\left|\varphi_X^n\!\left(\frac{t}{\sqrt{n}}\right) - \left(1 - \frac{t^2}{2n}\right)^n\right| \le n\left|\varphi_X\!\left(\frac{t}{\sqrt{n}}\right) - \left(1 - \frac{t^2}{2n}\right)\right| \le n \cdot \frac{|t|^3 E[|X|^3]}{6 n^{3/2}} = \frac{|t|^3 E[|X|^3]}{6\sqrt{n}} \to 0.$$
The central limit statement can then be validated by:
$$\left(1 - \frac{t^2/2}{n}\right)^n \to e^{-t^2/2}.$$

9 Exemplified application of central limit theorem (27-8)

Example 27.2 Suppose one wants to estimate the parameter $\alpha$ of an exponential distribution on the basis of independent samples $X_1,\ldots,X_n$. By the law of large numbers, $\bar{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$ converges to the mean $1/\alpha$ with probability $1$. As the variance of the exponential distribution is $1/\alpha^2$, the Lindeberg-Lévy theorem gives that
$$\frac{X_1 + X_2 + \cdots + X_n - n/\alpha}{\sqrt{n}/\alpha} = \alpha\sqrt{n}\left(\bar{X}_n - 1/\alpha\right) \Rightarrow N.$$
Equivalently,
$$\lim_{n\to\infty} \Pr\left[\alpha\sqrt{n}\left(\bar{X}_n - 1/\alpha\right) \le x\right] = \Phi(x),$$
where $\Phi(\cdot)$ represents the standard normal cdf. Roughly speaking, $\bar{X}_n$ is approximately Gaussian distributed with mean $1/\alpha$ and variance $1/(\alpha^2 n)$. Notably, this statement exactly indicates that
$$\lim_{n\to\infty} \Pr\left[\frac{\bar{X}_n - (1/\alpha)}{\sqrt{1/(\alpha^2 n)}} \le x\right] = \Phi(x).$$
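A simulation sketch of Example 27.2 (assumes numpy; $\alpha = 2$ and the sample sizes are arbitrary choices of ours): $\alpha\sqrt{n}(\bar{X}_n - 1/\alpha)$ should look approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n, trials = 2.0, 2000, 10_000
X = rng.exponential(scale=1/alpha, size=(trials, n))   # Exp(alpha) samples, mean 1/alpha
Z = alpha * np.sqrt(n) * (X.mean(axis=1) - 1/alpha)
print(Z.mean(), Z.std())   # should be close to 0 and 1
```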

10 Exemplified application of central limit theorem (27-9)

Theorem 25.6 (Skorohod's theorem) Suppose $\mu_n$ and $\mu$ are probability measures on $(\mathbb{R}, \mathcal{B})$, and $\mu_n \Rightarrow \mu$. Then there exist random variables $Y_n$ and $Y$ such that:
1. they are all defined on a common probability space $(\Omega, \mathcal{F}, P)$;
2. $\Pr[Y_n \le y] = \mu_n(-\infty, y]$ for every $y$;
3. $\Pr[Y \le y] = \mu(-\infty, y]$ for every $y$;
4. $\lim_{n\to\infty} Y_n(\omega) = Y(\omega)$ for every $\omega \in \Omega$.

By Skorohod's theorem, there exist $\bar{Y}_n : \Omega \to \mathbb{R}$ and $Y : \Omega \to \mathbb{R}$ such that
$$\lim_{n\to\infty} \alpha\sqrt{n}\left(\bar{Y}_n(\omega) - 1/\alpha\right) = Y(\omega) \quad \text{for every } \omega \in \Omega,$$
and $\bar{Y}_n$ and $Y$ have the same distributions as $\bar{X}_n$ and $N$, respectively. As $P\left(\left\{\omega \in \Omega : \lim_{n\to\infty} \bar{Y}_n(\omega) = 1/\alpha\right\}\right) = 1$,
$$\frac{\sqrt{n}}{\alpha}\left(\alpha - \frac{1}{\bar{Y}_n(\omega)}\right) = \frac{\alpha\sqrt{n}\left(\bar{Y}_n(\omega) - 1/\alpha\right)}{\alpha\,\bar{Y}_n(\omega)} \to \frac{Y(\omega)}{\alpha\,(1/\alpha)} = Y(\omega),$$
where $Y$ is also standard normal distributed.

11 Exemplified application of central limit theorem (27-10)

It follows that
$$\frac{\sqrt{n}}{\alpha}\left(\alpha - \frac{1}{\bar{X}_n}\right) \Rightarrow N.$$
In words, $1/\bar{X}_n$ is approximately Gaussian distributed with mean $\alpha$ and variance $\alpha^2/n$.
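The same simulation as above confirms this conclusion for $1/\bar{X}_n$ (a sketch assuming numpy; the sign flip relative to the slide's expression is harmless since $N$ is symmetric):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n, trials = 2.0, 2000, 10_000
X = rng.exponential(scale=1/alpha, size=(trials, n))
Z = (np.sqrt(n) / alpha) * (1 / X.mean(axis=1) - alpha)  # (sqrt(n)/alpha)(1/Xbar - alpha)
print(Z.mean(), Z.std())   # should be close to 0 and 1
```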

12 Lindeberg Theorem (27-11)

Definition A triangular array of random variables is
$$\begin{array}{lllll}
X_{1,1} & \cdots & X_{1,r_1} & & \\
X_{2,1} & X_{2,2} & \cdots & X_{2,r_2-1} & X_{2,r_2} \\
X_{3,1} & X_{3,2} & X_{3,3} & \cdots & X_{3,r_3} \\
\vdots & & & &
\end{array}$$
where the probability space, on which each row $X_{n,1},\ldots,X_{n,r_n}$ is commonly defined, may change from row to row (since we do not care about the dependence across rows).

A sequence of random variables is just a special case of a triangular array of random variables with $r_n = n$ and $X_{n,k} = X_k$.

Lindeberg's condition For an array of row-wise independent zero-mean random variables $X_{n,1},\ldots,X_{n,r_n}$,
$$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,k}}(x) = \lim_{n\to\infty} \frac{1}{s_n^2} \sum_{k=1}^{r_n} E\left[X_{n,k}^2\, I_{[|X_{n,k}| \ge \varepsilon s_n]}\right] = 0,$$
where $s_n^2 = \sum_{k=1}^{r_n} E[X_{n,k}^2]$.

13 Lindeberg Theorem (27-12)

Theorem 27.2 (Lindeberg theorem) For an array of row-wise independent zero-mean random variables $X_{n,1},\ldots,X_{n,r_n}$, if Lindeberg's condition holds for all positive $\varepsilon$, then
$$\frac{S_n}{s_n} \Rightarrow N, \quad \text{where } S_n = X_{n,1} + \cdots + X_{n,r_n}.$$

Discussions

Theorem 27.1 (Lindeberg-Lévy theorem) is a special case of Theorem 27.2. Specifically, (i) $r_n = n$, (ii) $X_{n,k} = X_k$, (iii) finite variance, and (iv) the identically distributed assumption give:
$$\lim_{n\to\infty} \frac{1}{n\sigma^2} \sum_{k=1}^{n} E\left[X_k^2\, I_{[|X_k| \ge \varepsilon\sigma\sqrt{n}]}\right] = \lim_{n\to\infty} \frac{1}{\sigma^2} E\left[X_1^2\, I_{[|X_1| \ge \varepsilon\sigma\sqrt{n}]}\right] = 0,$$
where $\sigma^2 = E[X_1^2]$.
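Here is a simulation sketch of the theorem for a genuinely non-identically distributed array (assumes numpy; the uniform choice is ours). With $X_k$ uniform on $[-k, k]$, $E[X_k^2] = k^2/3$, so $s_n$ grows like $n^{3/2}/3$ while each summand is bounded by $n$; hence the Lindeberg sum is eventually zero for every $\varepsilon > 0$, and $S_n/s_n \Rightarrow N$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 500, 20_000
k = np.arange(1, n + 1)
X = rng.uniform(-k, k, size=(trials, n))   # row of independent, non-identical summands
s_n = np.sqrt((k**2 / 3).sum())            # s_n^2 = sum_k Var[X_k]
Z = X.sum(axis=1) / s_n
print(Z.mean(), Z.std())                   # should be close to 0 and 1
```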

14 Lindeberg Theorem (27-13)

Proof: Without loss of generality, assume $s_n = 1$ (since we can replace each $X_{n,k}$ by $X_{n,k}/s_n$). Hence, Lindeberg's condition reduces to:
$$\lim_{n\to\infty} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x) = \lim_{n\to\infty} \sum_{k=1}^{r_n} E\left[X_{n,k}^2\, I_{[|X_{n,k}| \ge \varepsilon]}\right] = 0.$$

Since $E[X_{n,k}] = 0$,
$$\left|\varphi_{X_{n,k}}(t) - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| \le E\left[\min\left\{|tX_{n,k}|^3, |tX_{n,k}|^2\right\}\right] < \infty.$$

Lemma If $X$ has a moment of order $n$, then
$$\left|\varphi(t) - \sum_{k=0}^{n} \frac{(it)^k}{k!} E[X^k]\right| \le E\left[\min\left\{\frac{|tX|^{n+1}}{(n+1)!}, \frac{2|tX|^n}{n!}\right\}\right] \le E\left[\min\left\{|tX|^{n+1}, |tX|^n\right\}\right] \quad \text{for } n \ge 2.$$

15 Lindeberg Theorem (27-14)

Observe that:
$$E\left[\min\left\{|tX_{n,k}|^3, |tX_{n,k}|^2\right\}\right] = \int_{[|x| < \varepsilon]} \min\{|tx|^3, |tx|^2\}\, dF_{X_{n,k}}(x) + \int_{[|x| \ge \varepsilon]} \min\{|tx|^3, |tx|^2\}\, dF_{X_{n,k}}(x)$$
$$\le \int_{[|x| < \varepsilon]} |tx|^3 \, dF_{X_{n,k}}(x) + \int_{[|x| \ge \varepsilon]} |tx|^2 \, dF_{X_{n,k}}(x)$$
$$\le \varepsilon|t| \int_{[|x| < \varepsilon]} |tx|^2 \, dF_{X_{n,k}}(x) + \int_{[|x| \ge \varepsilon]} |tx|^2 \, dF_{X_{n,k}}(x)$$
$$\le \varepsilon|t|^3 E[X_{n,k}^2] + t^2 \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x),$$
which implies that:
$$\sum_{k=1}^{r_n} \left|\varphi_{X_{n,k}}(t) - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| \le \varepsilon|t|^3 \sum_{k=1}^{r_n} E[X_{n,k}^2] + t^2 \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x)$$
$$= \varepsilon|t|^3 + t^2 \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x).$$

16 Lindeberg Theorem (27-15)

As $\varepsilon$ can be made arbitrarily small,
$$\lim_{n\to\infty} \sum_{k=1}^{r_n} \left|\varphi_{X_{n,k}}(t) - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| = 0.$$
By
$$\left|\prod_{k=1}^{r_n} \varphi_{X_{n,k}}(t) - \prod_{k=1}^{r_n} \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| \le \sum_{k=1}^{r_n} \left|\varphi_{X_{n,k}}(t) - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right|,$$
we get:
$$\lim_{n\to\infty} \prod_{k=1}^{r_n} \varphi_{X_{n,k}}(t) = \lim_{n\to\infty} \prod_{k=1}^{r_n} \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right).$$
(Hint: Is this correct? See the end of the proof!)

It remains to show that
$$\lim_{n\to\infty} \prod_{k=1}^{r_n} \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right) = e^{-t^2/2}.$$

17 Lindeberg Theorem (27-16)

$$\left|e^{-t^2/2} - \prod_{k=1}^{r_n} \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| = \left|\prod_{k=1}^{r_n} e^{-t^2 E[X_{n,k}^2]/2} - \prod_{k=1}^{r_n} \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| \le \sum_{k=1}^{r_n} \left|e^{-t^2 E[X_{n,k}^2]/2} - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right|.$$

For each complex $z$,
$$|e^z - 1 - z| \le |z|^2 \sum_{k=2}^{\infty} \frac{|z|^{k-2}}{k!} \le |z|^2 \sum_{k=2}^{\infty} \frac{|z|^{k-2}}{(k-2)!} = |z|^2 e^{|z|}.$$
Taking $z = -\frac{t^2}{2} E[X_{n,k}^2]$,
$$\sum_{k=1}^{r_n} \left|e^{-t^2 E[X_{n,k}^2]/2} - \left(1 - \frac{t^2}{2} E[X_{n,k}^2]\right)\right| \le \frac{t^4}{4} \sum_{k=1}^{r_n} E^2[X_{n,k}^2]\, e^{t^2 E[X_{n,k}^2]/2}$$
$$\le \frac{t^4}{4} e^{t^2/2} \sum_{k=1}^{r_n} E^2[X_{n,k}^2] \quad \text{(by } E[X_{n,k}^2] \le s_n^2 = 1\text{)}$$
$$\le \frac{t^4}{4} e^{t^2/2} \left(\max_{1 \le k \le r_n} E[X_{n,k}^2]\right) \sum_{k=1}^{r_n} E[X_{n,k}^2] = \frac{t^4}{4} e^{t^2/2} \max_{1 \le k \le r_n} E[X_{n,k}^2].$$

18 Lindeberg Theorem (27-17)

Finally,
$$E[X_{n,k}^2] = \int_{[|x| < \varepsilon]} x^2 \, dF_{X_{n,k}}(x) + \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x) \le \varepsilon^2 + \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x),$$
which implies that
$$\max_{1 \le k \le r_n} E[X_{n,k}^2] \le \varepsilon^2 + \max_{1 \le k \le r_n} \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x) \le \varepsilon^2 + \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon]} x^2 \, dF_{X_{n,k}}(x).$$
The proof is then completed by taking arbitrarily small $\varepsilon$ and invoking Lindeberg's condition.

(As for the earlier hint: $1 - \frac{t^2}{2} E[X_{n,k}^2]$ is a complex number of modulus at most $1$ for $t^2 \le 4/E[X_{n,k}^2]$; since $\max_k E[X_{n,k}^2] \to 0$, the product lemma indeed applies for each fixed $t$ once $n$ is large.)

Converse to Lindeberg theorem Given an array of independent zero-mean random variables $X_{n,1},\ldots,X_{n,r_n}$, if $S_n/s_n \Rightarrow N$, then Lindeberg's condition holds, provided that $\max_{1 \le k \le r_n} E[X_{n,k}^2]/s_n^2 \to 0$.

19 Lindeberg Theorem (27-18)

Without the extra condition
$$\max_{1 \le k \le r_n} E[X_{n,k}^2]/s_n^2 \to 0,$$
the converse of the Lindeberg theorem may not be valid.

Counterexample Let $X_{n,k}$ be Gaussian distributed with mean $0$ and variance $1$ for $k < r_n = n$, and let $X_{n,n}$ be Gaussian distributed with mean $0$ and variance $(n-1)$. Thus,
$$s_n^2 = \sum_{k=1}^{n} E[X_{n,k}^2] = (n-1) + (n-1) = 2(n-1).$$

20 Lindeberg Theorem (27-19)

Then $S_n/s_n = (X_{n,1} + X_{n,2} + \cdots + X_{n,n})/s_n$ is Gaussian distributed with mean $0$ and variance $1$, but
$$\frac{1}{s_n^2} \sum_{k=1}^{n} \int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,k}}(x) \ge \frac{1}{s_n^2} \int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,n}}(x)$$
$$= \frac{1}{s_n^2} \int_{[|x| \ge \varepsilon s_n]} x^2 \frac{e^{-x^2/(2(n-1))}}{\sqrt{2\pi(n-1)}} \, dx$$
$$= \frac{n-1}{2(n-1)} \int_{[|y| \ge \varepsilon\sqrt{2(n-1)}/\sqrt{n-1}]} y^2 \frac{e^{-y^2/2}}{\sqrt{2\pi}} \, dy \quad (x = \sqrt{n-1}\, y)$$
$$= \frac{1}{2} \int_{[|y| \ge \varepsilon\sqrt{2}]} y^2 \frac{e^{-y^2/2}}{\sqrt{2\pi}} \, dy > 0,$$
and hence Lindeberg's condition fails. Notably,
$$\max_{1 \le k \le n} \frac{E[X_{n,k}^2]}{s_n^2} = \frac{n-1}{2(n-1)} = \frac{1}{2}$$
does not converge to zero.
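A numeric check of the counterexample (a sketch assuming numpy and scipy; $\varepsilon = 0.5$ is an arbitrary choice): the exact Lindeberg sum converges to the strictly positive constant $\frac{1}{2}\int_{|y|\ge\varepsilon\sqrt{2}} y^2\varphi(y)\,dy$ rather than to $0$.

```python
import numpy as np
from scipy.stats import norm

def tail_second_moment(a):
    # int_{|y| >= a} y^2 phi(y) dy = 2*(a*phi(a) + 1 - Phi(a)), by integration by parts
    return 2 * (a * norm.pdf(a) + 1 - norm.cdf(a))

eps = 0.5
for n in (10, 100, 1000):
    s2 = 2 * (n - 1)
    # (n-1) unit-variance terms plus the single variance-(n-1) term
    small = (n - 1) * tail_second_moment(eps * np.sqrt(s2)) / s2
    big = (n - 1) * tail_second_moment(eps * np.sqrt(2)) / s2
    print(n, small + big, 0.5 * tail_second_moment(eps * np.sqrt(2)))
```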

21 Goncharov's theorem (27-20)

Observation If the $X_{n,k}$ are uniformly bounded for all $n$ and $k$, and $s_n \to \infty$ as $n \to \infty$, then Lindeberg's condition holds.

Proof: Let $M$ be the uniform bound, namely $\Pr[|X_{n,k}| \le M] = 1$ for each $k$ and $n$. Then for any $\varepsilon > 0$, $\varepsilon s_n$ will ultimately exceed $M$; hence $\int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,k}}(x) = 0$ for every $k$ once $n$ is large enough, which implies
$$\lim_{n\to\infty} \frac{1}{s_n^2} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,k}}(x) = 0.$$

22 Goncharov's theorem (27-21)

Example 27.3 Let $\Pr[Y_{n,k} = 1] = 1/k$ and $\Pr[Y_{n,k} = 0] = 1 - 1/k$, and let $X_{n,k} = Y_{n,k} - E[Y_{n,k}] = Y_{n,k} - 1/k$ for $1 \le k \le n$. Then $E[X_{n,k}] = 0$, $E[X_{n,k}^2] = 1/k - 1/k^2$, and $s_n^2 = \sum_{k=1}^{n} (1/k - 1/k^2)$.

Since $|X_{n,k}|$ is bounded by $1$ with probability $1$, and $s_n^2 \to \infty$, Lindeberg's condition holds. The Lindeberg theorem thus concludes:
$$\frac{X_{n,1} + \cdots + X_{n,n}}{s_n} = \frac{Y_{n,1} + \cdots + Y_{n,n} - \sum_{k=1}^{n}(1/k)}{\sqrt{\sum_{k=1}^{n}(1/k) - \sum_{k=1}^{n}(1/k^2)}} \Rightarrow N.$$

23 Goncharov's theorem (27-22)

Goncharov's theorem:
$$\frac{Y_{n,1} + \cdots + Y_{n,n} - \log n}{\sqrt{\log n}} \Rightarrow N.$$

Theorem 25.4 If $X_n \Rightarrow X$ and $X_n - Y_n \to 0$ in probability, then $Y_n \Rightarrow X$.

Proof: Goncharov's theorem can be easily proved from:
$$\frac{Y_{n,1} + \cdots + Y_{n,n} - \sum_{k=1}^{n}(1/k)}{\sqrt{\sum_{k=1}^{n}(1/k) - \sum_{k=1}^{n}(1/k^2)}} \Rightarrow N$$
and
$$\frac{Y_{n,1} + \cdots + Y_{n,n} - \sum_{k=1}^{n}(1/k)}{\sqrt{\sum_{k=1}^{n}(1/k) - \sum_{k=1}^{n}(1/k^2)}} - \frac{Y_{n,1} + \cdots + Y_{n,n} - \log n}{\sqrt{\log n}} \to 0 \text{ in probability}.$$
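A simulation sketch of Goncharov's theorem (assumes numpy; the sample sizes are ours). Note that $E[\sum_k Y_{n,k}] = H_n = \log n + \gamma + o(1)$, so the standardized count approaches $N(0,1)$ only slowly.

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 10_000, 2000
p = 1.0 / np.arange(1, n + 1)
Y = rng.random((trials, n)) < p                       # Y_{n,k} ~ Bernoulli(1/k), independent
Z = (Y.sum(axis=1) - np.log(n)) / np.sqrt(np.log(n))
print(Z.mean(), Z.std())   # slowly approach 0 and 1 (mean is off by about gamma/sqrt(log n))
```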

24 Lyapounov's condition (27-23)

Discussions

Lindeberg's condition is tight in the sense that it is both sufficient and necessary for the normalized row sum of an independent array of random variables to converge to standard normal, provided that the variances are uniformly asymptotically negligible.

It may not, however, be easy to verify Lindeberg's condition directly. A useful sufficient condition for Lindeberg's condition is Lyapounov's condition, which is often easier to check.

Lyapounov's condition Fix an array of independent zero-mean random variables $X_{n,1},\ldots,X_{n,r_n}$. For some $\delta > 0$,
$$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{k=1}^{r_n} E\left[|X_{n,k}|^{2+\delta}\right] = 0,$$
where $s_n^2 = \sum_{k=1}^{r_n} E[X_{n,k}^2]$.

25 Lyapounov's condition (27-24)

Observation Lyapounov's condition implies Lindeberg's condition.

Proof:
$$\frac{1}{s_n^2} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon s_n]} x^2 \, dF_{X_{n,k}}(x) \le \frac{1}{s_n^2} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon s_n]} x^2 \left(\frac{|x|}{\varepsilon s_n}\right)^{\delta} dF_{X_{n,k}}(x)$$
$$= \frac{1}{\varepsilon^{\delta} s_n^{2+\delta}} \sum_{k=1}^{r_n} \int_{[|x| \ge \varepsilon s_n]} |x|^{2+\delta} \, dF_{X_{n,k}}(x) \le \frac{1}{\varepsilon^{\delta} s_n^{2+\delta}} \sum_{k=1}^{r_n} \int_{\mathbb{R}} |x|^{2+\delta} \, dF_{X_{n,k}}(x).$$

Theorem 27.3 For an array of independent zero-mean random variables $X_{n,1},\ldots,X_{n,r_n}$, if Lyapounov's condition holds for some positive $\delta$, then
$$\frac{S_n}{s_n} \Rightarrow N, \quad \text{where } S_n = X_{n,1} + \cdots + X_{n,r_n}.$$

26 Lyapounov's condition (27-25)

Example 27.4 Lyapounov's condition is always valid for an i.i.d. sequence with bounded $(2+\delta)$th absolute moment.

Proof:
$$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{k=1}^{r_n} E\left[|X_{n,k}|^{2+\delta}\right] = \lim_{n\to\infty} \frac{n\, E\left[|X_1|^{2+\delta}\right]}{n^{1+\delta/2}\, E^{1+\delta/2}[X_1^2]} = \frac{E\left[|X_1|^{2+\delta}\right]}{E^{1+\delta/2}[X_1^2]} \lim_{n\to\infty} \frac{1}{n^{\delta/2}} = 0.$$

27 Problem of coupon collector (27-26)

Example 27.5 (Problem of coupon collector) A coupon collector has to collect $r_n$ distinct coupons to exchange for some free gift. Each purchase gives him one coupon, drawn randomly and with replacement. The statistical behavior of this collection can be described as follows.

Coupons are drawn from a coupon population of size $n$, randomly and with replacement, until the number of distinct coupons that have been collected is $r_n$, where $r_n \le n$. Let $S_n$ be the number of purchases required for this collection. Assume that $r_n/n \to \rho > 0$. What is the approximate distribution of $(S_n - E[S_n])/\sqrt{\operatorname{Var}[S_n]}$?

We can also apply this problem to, say, a setting where $r_n$ out of $n$ pieces need to be collected in order to recover the original information.

28 Problem of coupon collector (27-27)

Solution: If $(k-1)$ distinct coupons have thus far been collected, the number of purchases until the next distinct one enters is distributed as $X_k$, where
$$\Pr[X_k = j] = (1 - p_k)^{j-1} p_k \quad \text{for } j = 1, 2, 3, \ldots, \qquad p_k = \frac{n - k + 1}{n} = 1 - \frac{k-1}{n}.$$
(Notably, we wish to see any one of the remaining $n - (k-1)$ coupons appear.) Then
$$S_n = X_1 + X_2 + \cdots + X_{r_n}, \qquad E[X_k] = \frac{1}{p_k} \quad \text{and} \quad \operatorname{Var}[X_k] = \frac{1 - p_k}{p_k^2}.$$

29 Problem of coupon collector (27-28)

Hence,
$$\frac{1}{n} E[S_n] = \frac{1}{n} \sum_{k=1}^{r_n} \frac{1}{p_k} = \frac{1}{n} \sum_{k=1}^{r_n} \frac{1}{1 - (k-1)/n} \to \int_0^{\rho} \frac{1}{1-x} \, dx,$$
which implies that
$$\lim_{n\to\infty} \frac{E[S_n]}{n \log[1/(1-\rho)]} = 1.$$
Similarly,
$$s_n^2 = \sum_{k=1}^{r_n} \operatorname{Var}[X_k] = \sum_{k=1}^{r_n} \frac{1 - p_k}{p_k^2} = \sum_{k=1}^{r_n} \frac{(k-1)/n}{\left(1 - (k-1)/n\right)^2}, \qquad \frac{1}{n} s_n^2 \to \int_0^{\rho} \frac{x}{(1-x)^2} \, dx = \frac{\rho}{1-\rho} + \log(1-\rho),$$
which implies that
$$\lim_{n\to\infty} \frac{s_n^2}{n\left[\rho/(1-\rho) + \log(1-\rho)\right]} = 1.$$
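A numeric check of these asymptotics (a sketch assuming numpy; $\rho = 1/2$ is an arbitrary choice): both ratios should approach $1$ as $n$ grows.

```python
import numpy as np

rho = 0.5
for n in (100, 1000, 10_000, 100_000):
    r = int(rho * n)
    p = 1 - (np.arange(1, r + 1) - 1) / n       # p_k = 1 - (k-1)/n
    ES = (1 / p).sum()                          # exact E[S_n]
    s2 = ((1 - p) / p**2).sum()                 # exact s_n^2
    print(n, ES / (n * np.log(1 / (1 - rho))),
          s2 / (n * (rho / (1 - rho) + np.log(1 - rho))))
```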

30 Problem of coupon collector (27-29)

Lyapounov's condition holds for $\delta = 2$: since
$$E\left[(X_k - E[X_k])^4\right] \le E\left[X_k^4\right] = \frac{(2 - p_k)(12 - 12p_k + p_k^2)}{p_k^4} \le \frac{24}{p_k^4},$$
we have
$$\lim_{n\to\infty} \frac{1}{s_n^4} \sum_{k=1}^{r_n} E\left[(X_k - E[X_k])^4\right] \le \lim_{n\to\infty} \frac{24}{s_n^4} \sum_{k=1}^{r_n} \frac{1}{p_k^4} = \lim_{n\to\infty} \frac{24n}{s_n^4} \cdot \frac{1}{n} \sum_{k=1}^{r_n} \frac{1}{\left(1 - (k-1)/n\right)^4} = 0,$$
because $\frac{1}{n}\sum_{k=1}^{r_n} \left(1 - (k-1)/n\right)^{-4} \to \int_0^{\rho} (1-x)^{-4}\, dx < \infty$ while $s_n^4$ grows like $n^2$. We therefore obtain:
$$\frac{S_n - n\log[1/(1-\rho)]}{\sqrt{n\left[\rho/(1-\rho) + \log(1-\rho)\right]}} \Rightarrow N.$$
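A simulation sketch of the resulting CLT (assumes numpy; note that numpy's geometric distribution counts trials up to and including the first success, matching $X_k$; the parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
n, rho, trials = 2000, 0.5, 10_000
r = int(rho * n)
p = 1 - (np.arange(1, r + 1) - 1) / n
S = rng.geometric(p, size=(trials, r)).sum(axis=1)   # S_n = X_1 + ... + X_{r_n}
mean = n * np.log(1 / (1 - rho))                     # asymptotic mean
var = n * (rho / (1 - rho) + np.log(1 - rho))        # asymptotic variance
Z = (S - mean) / np.sqrt(var)
print(Z.mean(), Z.std())   # should be close to 0 and 1
```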

31 Lindeberg condition does not imply Lyapounov condition (27-30)

Counterexample Lindeberg's condition does not necessarily imply Lyapounov's condition. Consider a sequence of i.i.d. random variables, each with density function
$$f(x) = \frac{c}{x^3 (\log x)^2} \quad \text{for } x \ge e,$$
where $c$ is the normalizing constant, $1/c = \int_e^{\infty} \frac{dt}{t^3 (\log t)^2}$.

It can be verified that the sequence has finite marginal second moment:
$$\int_e^{\infty} \frac{c\, x^2}{x^3 (\log x)^2} \, dx = c \int_e^{\infty} \frac{dx}{x (\log x)^2} = c \int_1^{\infty} \frac{du}{u^2} = c \quad (u = \log x);$$
hence, Lindeberg's condition holds (as it does for any i.i.d. sequence with finite positive variance). However, for every $\delta > 0$,
$$E\left[|X|^{2+\delta}\right] = \int_e^{\infty} \frac{c\, x^{2+\delta}}{x^3 (\log x)^2} \, dx = c \int_e^{\infty} \frac{x^{\delta-1}}{(\log x)^2} \, dx = c \int_1^{\infty} \frac{e^{\delta u}}{u^2} \, du = \infty \quad (u = \log x).$$
Accordingly, Lyapounov's condition does not hold!
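Numeric evidence for the dichotomy (a sketch assuming scipy; $\delta = 0.5$ and the cutoffs are arbitrary choices of ours): after the substitution $u = \log x$, the truncated second moment stabilizes while the truncated $(2+\delta)$-th moment keeps growing without bound.

```python
import numpy as np
from scipy.integrate import quad

delta = 0.5
for b in (1e2, 1e4, 1e8, 1e16):
    U = np.log(b)
    m2, _ = quad(lambda u: 1 / u**2, 1, U)                    # truncated E[X^2]/c -> 1
    m2d, _ = quad(lambda u: np.exp(delta * u) / u**2, 1, U)   # truncated E[X^(2+delta)]/c -> infinity
    print(f"cutoff={b:.0e}  E[X^2]/c ~ {m2:.4f}  E[X^(2+delta)]/c ~ {m2d:.3e}")
```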

32 Dependent variables (27-31)

Can we extend the central limit theorem to sequences of dependent variables?

Definition ($\alpha$-mixing) A sequence of random variables is said to be $\alpha$-mixing if there exists a non-negative sequence $\alpha_1, \alpha_2, \alpha_3, \ldots$ such that $\lim_{n\to\infty} \alpha_n = 0$ and
$$\sup_{k} \sup_{H \in \mathcal{R}^k,\, G \in \mathcal{R}^{\infty}} \left|\Pr\left[(X_1^k \in H) \wedge (X_{n+k}^{\infty} \in G)\right] - \Pr\left[X_1^k \in H\right] \Pr\left[X_{n+k}^{\infty} \in G\right]\right| \le \alpha_n.$$

Operational meaning: by $\alpha$-mixing, we mean that $X_1^k = (X_1, X_2, \ldots, X_k)$ and $X_{n+k}^{\infty} = (X_{n+k}, X_{n+k+1}, \ldots)$ are approximately independent when $n$ is large.

An independent sequence is $\alpha$-mixing with $\alpha_k = 0$ for all $k = 1, 2, 3, \ldots$

33 m-dependent (27-32)

Definition A sequence of random variables is said to be m-dependent if $(X_i, \ldots, X_{i+k})$ and $(X_{i+k+n}, \ldots, X_{i+k+n+l})$ are independent whenever $n > m$.

An independent sequence is 0-dependent. An m-dependent sequence is $\alpha$-mixing with $\alpha_n = 0$ for $n > m$.

Example 27.7 Let $Y_1, Y_2, \ldots$ be an i.i.d. sequence. Define $X_n = f(Y_n, Y_{n+1}, \ldots, Y_{n+m})$ for a real Borel-measurable function $f$ on $\mathbb{R}^{m+1}$. Then $X_1, X_2, \ldots$ is stationary and m-dependent.

34 α-mixing and m-dependent (27-33)

Example 27.6 Let $Y_1, Y_2, \ldots$ be a first-order finite-state Markov chain with positive transition probabilities. Suppose the initial probability used is the one that makes $Y_1, Y_2, \ldots$ stationary. Then
$$\Pr[Y_1 = i_1, \ldots, Y_k = i_k, Y_{k+n} = j_0, \ldots, Y_{k+n+l} = j_l] = \left(p_{i_1} p_{i_1 i_2} \cdots p_{i_{k-1} i_k}\right) p^{(n)}_{i_k j_0} \left(p_{j_0 j_1} \cdots p_{j_{l-1} j_l}\right),$$
where $p_{ij} = P_{Y_n|Y_{n-1}}(j \mid i)$ and $p^{(k)}_{ij} = P_{Y_n|Y_{n-k}}(j \mid i)$. Also,
$$\Pr[Y_1 = i_1, \ldots, Y_k = i_k]\,\Pr[Y_{k+n} = j_0, \ldots, Y_{k+n+l} = j_l] = \left(p_{i_1} p_{i_1 i_2} \cdots p_{i_{k-1} i_k}\right) p_{j_0} \left(p_{j_0 j_1} \cdots p_{j_{l-1} j_l}\right).$$

35 α-mixing and m-dependent (27-34)

Theorem 8.9 There exists a stationary distribution $\{\pi_i\}$ for a finite-state, irreducible, aperiodic, first-order Markov chain such that
$$\left|p^{(n)}_{ij} - \pi_j\right| \le A \rho^n,$$
where $A \ge 0$ and $0 \le \rho < 1$.

A first-order Markov chain is aperiodic if the greatest common divisor of the integers in the set $\{n \in \mathbb{N} : p^{(n)}_{jj} > 0\}$ is $1$ for every $j$. A Markov chain is irreducible if for every $i$ and $j$, $p^{(n)}_{ij} > 0$ for some $n$.

36 α-mixing and m-dependent (27-35)

$$\left|\Pr\left[(Y_1^k \in H) \wedge (Y_{n+k}^{n+k+l} \in G)\right] - \Pr\left[Y_1^k \in H\right] \Pr\left[Y_{n+k}^{n+k+l} \in G\right]\right|$$
$$= \left|\sum_{i_1^k \in H} \sum_{j_0^l \in G} \left(\Pr[Y_1 = i_1, \ldots, Y_k = i_k, Y_{k+n} = j_0, \ldots, Y_{k+n+l} = j_l] - \Pr[Y_1 = i_1, \ldots, Y_k = i_k]\,\Pr[Y_{k+n} = j_0, \ldots, Y_{k+n+l} = j_l]\right)\right|$$
$$= \left|\sum_{i_1^k \in H} \sum_{j_0^l \in G} \left(p_{i_1} p_{i_1 i_2} \cdots p_{i_{k-1} i_k}\right) \left(p^{(n)}_{i_k j_0} - p_{j_0}\right) \left(p_{j_0 j_1} \cdots p_{j_{l-1} j_l}\right)\right|$$
$$\le \sum_{i_1^k \in H} \sum_{j_0^l \in G} \left(p_{i_1} p_{i_1 i_2} \cdots p_{i_{k-1} i_k}\right) \left|p^{(n)}_{i_k j_0} - p_{j_0}\right| \left(p_{j_0 j_1} \cdots p_{j_{l-1} j_l}\right)$$
$$\le A \rho^n \sum_{i_1^k \in \mathcal{Y}^k} \sum_{j_0^l \in \mathcal{Y}^{l+1}} \left(p_{i_1} p_{i_1 i_2} \cdots p_{i_{k-1} i_k}\right) \left(p_{j_0 j_1} \cdots p_{j_{l-1} j_l}\right) = A \rho^n \sum_{j_0 \in \mathcal{Y}} 1 = A |\mathcal{Y}| \rho^n \triangleq A_{\mathcal{Y}}\, \rho^n,$$
for some $A \ge 0$ and $0 \le \rho < 1$.

37 α-mixing and m-dependent (27-36)

Since the upper bound is independent of $k$, $l \ge 0$, $H \in \mathcal{R}^k$ and $G \in \mathcal{R}^{l+1}$, we obtain that $Y_1, Y_2, \ldots$ is $\alpha$-mixing with $\alpha_n = A_{\mathcal{Y}}\, \rho^n$. Hence, a stationary finite-state, irreducible, aperiodic, first-order Markov chain is $\alpha$-mixing with exponentially decaying $\alpha_n$.
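A numeric illustration of the geometric decay (a sketch assuming numpy; the 3-state transition matrix is an arbitrary positive example of ours):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])             # strictly positive row-stochastic matrix

# stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

Pn = np.eye(3)
for n in range(1, 11):
    Pn = Pn @ P
    print(n, np.abs(Pn - pi).max())         # max_ij |p^(n)_ij - pi_j| shrinks geometrically
```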

38 Central limit theorem for α-mixing sequence (27-37)

Theorem 27.4 Suppose that $X_1, X_2, \ldots$ is zero-mean, stationary and $\alpha$-mixing with $\alpha_n = O(n^{-5})$ as $n \to \infty$. Assume that $E[X_n^{12}] < \infty$. Then

1. $\frac{1}{n}\operatorname{Var}[S_n] \to \sigma^2 = E[X_1^2] + 2\sum_{k=1}^{\infty} E[X_1 X_{1+k}]$;

2. $\frac{S_n}{\sigma\sqrt{n}} \Rightarrow N$ (equivalently, $\frac{S_n}{\sqrt{\operatorname{Var}[S_n]}} \Rightarrow N$).

Proof: No proof is provided.

39 Central limit theorem for α-mixing sequence (27-38)

Example 27.8 Let $Y_1, Y_2, \ldots$ be a first-order finite-state Markov chain with positive transition probabilities. Suppose the initial probability used is the one that makes $Y_1, Y_2, \ldots$ stationary. Example 27.6 already proves that $Y_1, Y_2, \ldots$ is $\alpha$-mixing with $\alpha_n = A_{\mathcal{Y}}\, \rho^n$. Thus $X_1 = Y_1 - E[Y_1], X_2 = Y_2 - E[Y_2], \ldots$ is also $\alpha$-mixing with $\alpha_n = A_{\mathcal{Y}}\, \rho^n$. Furthermore, the finite-state assumption indicates that all moments of $X_n$ are bounded. Accordingly, Theorem 27.4 holds, namely,
$$\frac{X_1 + \cdots + X_n}{\sqrt{\operatorname{Var}[X_1 + \cdots + X_n]}} \Rightarrow N.$$

40 Central limit theorem for α-mixing sequence (27-39)

More specifically, let $Y_n \in \{-1, +1\}$ and
$$P_{Y_n|Y_{n-1}}(+1 \mid -1) = P_{Y_n|Y_{n-1}}(-1 \mid +1) = \varepsilon.$$
Since the probability transition matrix can be expressed as:
$$\begin{bmatrix} 1-\varepsilon & \varepsilon \\ \varepsilon & 1-\varepsilon \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1-2\varepsilon \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}^T,$$
we have:
$$\begin{bmatrix} 1-\varepsilon & \varepsilon \\ \varepsilon & 1-\varepsilon \end{bmatrix}^n = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & (1-2\varepsilon)^n \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}^T = \begin{bmatrix} \frac{1}{2} + \frac{(1-2\varepsilon)^n}{2} & \frac{1}{2} - \frac{(1-2\varepsilon)^n}{2} \\ \frac{1}{2} - \frac{(1-2\varepsilon)^n}{2} & \frac{1}{2} + \frac{(1-2\varepsilon)^n}{2} \end{bmatrix},$$
and $Y_1, Y_2, \ldots$ is $\alpha$-mixing with $\alpha_n = 2 \cdot \frac{1}{2}\,|1-2\varepsilon|^n = |1-2\varepsilon|^n$.

41 Central limit theorem for α-mixing sequence (27-40)

Theorem 27.4 then gives that:
$$\frac{\operatorname{Var}[Y_1 + \cdots + Y_n]}{n} \to E[Y_1^2] + 2\sum_{k=1}^{\infty} E[Y_1 Y_{1+k}]$$
$$= \sum_{y_1 \in \{-1,1\}} y_1^2\, P_{Y_1}(y_1) + 2\sum_{k=1}^{\infty} \sum_{y_1 \in \{-1,1\}} \sum_{y_{1+k} \in \{-1,1\}} y_1 y_{1+k}\, P_{Y_{1+k}|Y_1}(y_{1+k} \mid y_1)\, P_{Y_1}(y_1)$$
$$= 1 + 2\sum_{k=1}^{\infty} \frac{1}{2} \sum_{y_1 \in \{-1,1\}} \sum_{y_{1+k} \in \{-1,1\}} y_1 y_{1+k}\, P_{Y_{1+k}|Y_1}(y_{1+k} \mid y_1)$$
$$= 1 + 2\sum_{k=1}^{\infty} (1-2\varepsilon)^k = \frac{1-\varepsilon}{\varepsilon},$$
and
$$\frac{Y_1 + \cdots + Y_n}{\sqrt{n\left(\frac{1-\varepsilon}{\varepsilon}\right)}} \Rightarrow N.$$
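A simulation sketch for this two-state chain (assumes numpy; $\varepsilon = 1/4$ is an arbitrary choice of ours), checking both the limiting variance $(1-\varepsilon)/\varepsilon = 3$ and the approximate normality:

```python
import numpy as np

rng = np.random.default_rng(7)
eps, n, trials = 0.25, 1000, 10_000
Y0 = rng.choice([-1, 1], size=(trials, 1))           # stationary start: uniform on {-1,+1}
signs = np.where(rng.random((trials, n - 1)) < eps, -1, 1)   # -1 means "flip state"
Y = np.concatenate([Y0, Y0 * np.cumprod(signs, axis=1)], axis=1)  # Y_k = Y_0 * prod of signs
S = Y.sum(axis=1)
print(S.var() / n, (1 - eps) / eps)                  # should be close
Z = S / np.sqrt(n * (1 - eps) / eps)
print(Z.mean(), Z.std())                             # approximately 0 and 1
```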
