Appendix to: Hypothesis Testing for Multiple Mean and Correlation Curves with Functional Data

Ao Yuan 1, Hong-Bin Fang 1, Haiou Li 1, Colin O. Wu 2, Ming T. Tan 1

1 Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University Medical Center, Washington DC 20057, USA
2 Office of Biostatistics Research, National Heart, Lung and Blood Institute, National Institutes of Health, Bethesda, MD 20892, USA

Appendix I. Description of the simulation.

1 Simulation

To investigate the finite sample properties of the proposed methods, and to compare them with the commonly used local linear smoothing (Loess) and spline methods, we present here several simulation studies, which are designed to mimic practical situations with moderate sample sizes. We consider separately the tests for the equality of two mean curves and the tests for the correlation function between two stochastic processes. For each case, the simulation is based on 5000 replications, and the mean values of the estimates over the replications are reported.

1.1 Testing the Equality of Mean Curves

1.1.1 Simulation for Test with Two-Sided Alternatives

For testing the hypotheses (9), we generate the observations {X_{n₁}, Y_{n₂}, (X_{n_xy}, Y_{n_xy})} of {X(t), Y(t) : t ∈ T} using the data structure (2) with n₁ = n₂ = 50 and n_xy = 20 on k(n) = 50 equally spaced time points {t_j = j : j = 1, ..., 50}, so that n_x = n_y = 30.

For subjects with only X(t) or Y(t) observed, we generate

X_i(t_j) + ε_i(t_j) = μ(t_j) + r_i sin(8 + t_j/10)/30 + N(0, σ²(t_j)),   μ(t) = [(t + E[r]) sin(8 + t/10)]/30,

where the r_i's and r are iid random integers uniformly distributed on {1, ..., 50}, used to make the curves more wiggly looking; similarly,

Y_i(t_j) + ξ_i(t_j) = η(t_j) + C r_i cos(8 + t_j/10)/100 + N(0, σ²(t_j)),   η(t) = μ(t) + C E[r] cos(8 + t/10)/100,

where σ²(t) = t, the r_i's and r are again iid random integers uniformly distributed on {1, ..., 50} used to make the curves more wiggly looking, and C is a constant characterizing the difference between μ(t) and η(t). For subjects with (X(t), Y(t)) observed, we generate

{(X_i(t) + ε_i(t), Y_i(t) + ξ_i(t))ᵀ ~ N((μ(t), η(t))ᵀ, Σ(t)) : i = n_x + 1, ..., n₁},

where the covariance matrix Σ(t) is composed of the variance σ²(t) and the correlation coefficient ρ(t) = 0.01(t/50).

1.1.2 Simulation for Test with One-Sided Alternatives

For testing the hypotheses (10), we generate the observations {X_{n₁}, Y_{n₂}, (X_{n_xy}, Y_{n_xy})} of {X(t), Y(t) : t ∈ T} using the same method as Section 5.1.1, except that Y_i(t_j) + ξ_i(t_j) is replaced with Y_i(t_j) + ξ_i(t_j) ~ N(η₂(t_j), σ²(t_j)), where η₂(t) = μ(t) + C₂[(t + E[r]) cos(8 + t/3)/100]. Here, C₂, which plays a similar role as C, is a constant characterizing the difference between μ(t) and η₂(t).
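As an illustration only, the following minimal Python sketch generates one replicate of the mean-curve design above under the reconstruction just given. The value of C and the random seed are illustrative, and the n_xy paired subjects (whose noise vectors would additionally be drawn jointly with correlation ρ(t)) are not treated specially here.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(1, 51)                          # k(n) = 50 equally spaced time points t_j = j
n1 = n2 = 50                                  # subjects with X observed / with Y observed
nxy = 20                                      # subjects with both curves observed
C = 0.5                                       # illustrative value of the difference constant C
Er = np.mean(np.arange(1, 51))                # E[r] for r uniform on {1, ..., 50}

mu = (t + Er) * np.sin(8 + t / 10) / 30       # mean curve mu(t)
eta = mu + C * Er * np.cos(8 + t / 10) / 100  # mean curve eta(t)
sigma2 = t.astype(float)                      # noise variance sigma^2(t) = t

def gen_X(n):
    """n noisy X-curves: mu(t_j) + r_i sin(8 + t_j/10)/30 + N(0, sigma^2(t_j))."""
    r = rng.integers(1, 51, size=(n, 1))      # iid integers on {1, ..., 50}, the "wiggle" term
    return mu + r * np.sin(8 + t / 10) / 30 + rng.normal(0.0, np.sqrt(sigma2), size=(n, t.size))

def gen_Y(n):
    """n noisy Y-curves: eta(t_j) + C r_i cos(8 + t_j/10)/100 + N(0, sigma^2(t_j))."""
    r = rng.integers(1, 51, size=(n, 1))
    return eta + C * r * np.cos(8 + t / 10) / 100 + rng.normal(0.0, np.sqrt(sigma2), size=(n, t.size))

X = gen_X(n1)   # in the full design, the last nxy of these would be paired with Y-curves
Y = gen_Y(n2)
```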

1.2 Testing Correlation Functions

1.2.1 Simulation for Test with Two-Sided Alternatives

For simplicity, each of our simulated samples contains n₁ = n₂ = n_xy = 50 subjects observed on k(n) = 50 time points {t_j = j : j = 1, ..., 50}, so that, in view of (2), the sample contains only the paired observations {(X_i(t_j) + ε_i(t_j), Y_i(t_j) + ξ_i(t_j))ᵀ : i = 1, ..., 50; j = 1, ..., 50}. For testing the hypotheses in (15) using the test statistic S_n, we generate in each sample X_i(t_j) + ε_i(t_j) ~ N(μ(t_j), σ₁²(t_j)), where μ(t) = t sin(8 + t/10)/30 and σ₁²(t) = 0.01t, and, conditional on X_i(t_j) = x_i(t_j), Y_i(t_j) + ξ_i(t_j) has the conditional normal distribution

Y_i(t_j) + ξ_i(t_j) | (x_i(t_j) + ε_i(t_j)) ~ N( μ(t_j) + ρ(t_j)[σ₂(t_j)/σ₁(t_j)][x_i(t_j) + ε_i(t_j) − μ(t_j)], (1 − ρ²(t_j)) σ₂²(t_j) ),   (9)

where σ₂²(t) = 0.01t and ρ(t) = ρ₂ sin(8 − t/10) for some ρ₂ ≥ 0. Here, for any t ∈ {1, ..., 50}, ρ(t) is the true correlation coefficient between X_i(t) and Y_i(t), and ρ₂ determines the difference of the correlation curve R(t) from zero.
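A corresponding sketch for the correlation design draws X marginally and then Y from the conditional normal in (9); the value of ρ₂ is illustrative and the variable names are not from the original.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.arange(1, 51).astype(float)      # 50 time points; 50 paired subjects
n = 50
rho2 = 0.3                              # illustrative rho_2 >= 0; rho_2 = 0 gives the null R(t) = 0

mu = t * np.sin(8 + t / 10) / 30
sigma1 = np.sqrt(0.01 * t)              # sigma_1(t)
sigma2 = np.sqrt(0.01 * t)              # sigma_2(t)
rho = rho2 * np.sin(8 - t / 10)         # true correlation curve rho(t)

X = rng.normal(mu, sigma1, size=(n, t.size))          # X_i(t_j) + eps_i(t_j)
cond_mean = mu + rho * (sigma2 / sigma1) * (X - mu)   # conditional mean from (9)
cond_sd = np.sqrt(1.0 - rho**2) * sigma2              # conditional standard deviation from (9)
Y = rng.normal(cond_mean, cond_sd)                    # Y_i(t_j) + xi_i(t_j)
```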

Appendix II. Proofs.

Proof of Theorem 1 (i). Since, by (3) and (4), X_{i,k(n)}(·) and Y_{i,k(n)}(·) are two stochastic processes on T, we denote by

μ_{k(n)}(t) = E[X_{i,k(n)}(t)]   and   η_{k(n)}(t) = E[Y_{i,k(n)}(t)]   (A.1)

the expectations of the random variables X_{i,k(n)}(t) and Y_{i,k(n)}(t), respectively, for each fixed t ∈ T. Then, we have

√(n₁n₂/(n₁+n₂)) {[μ̂_{n₁}(t) − μ(t)] − [η̂_{n₂}(t) − η(t)]}
= √(n₁n₂/(n₁+n₂)) {[μ̂_{n₁}(t) − μ_{k(n)}(t)] − [η̂_{n₂}(t) − η_{k(n)}(t)]} + √(n₁n₂/(n₁+n₂)) {[μ_{k(n)}(t) − μ(t)] − [η_{k(n)}(t) − η(t)]}.

Note that by (2), (4) and the assumption E[ε_i(t)] = 0 for all t,

μ_{k(n)}(t) = [(t_{j+1} − t) E[X_i(t_j) + ε_i(t_j)] + (t − t_j) E[X_i(t_{j+1}) + ε_i(t_{j+1})]] / (t_{j+1} − t_j)
= [(t_{j+1} − t) μ(t_j) + (t − t_j) μ(t_{j+1})] / (t_{j+1} − t_j),   t ∈ [t_j, t_{j+1}].   (A.2)

Thus, μ_{k(n)}(·) is the linear interpolation of μ(·) on the intervals [t_j, t_{j+1}) for j = 0, 1, ..., k(n), with t_0 = inf{t ∈ T} and t_{k(n)+1} = sup{t ∈ T}. Similarly, η_{k(n)}(·) is the linear interpolation of η(·) on T.

By the assumptions that μ(·) and η(·) are Lipschitz continuous on T, which is bounded, it follows that μ_{k(n)}(t) and η_{k(n)}(t) are uniformly continuous on T. Thus, we have

inf_{s ∈ [t_j, t_{j+1})} μ(s) ≤ μ_{k(n)}(t) ≤ sup_{s ∈ [t_j, t_{j+1})} μ(s)   and   inf_{s ∈ [t_j, t_{j+1})} η(s) ≤ η_{k(n)}(t) ≤ sup_{s ∈ [t_j, t_{j+1})} η(s)

for t ∈ [t_j, t_{j+1}), j = 0, 1, ..., k(n). Let δ_{k(n)} = max{t_{j+1} − t_j : j = 0, 1, ..., k(n)}. The assumption of first order Lipschitz continuity implies that there are 0 < c₁, c₂ < ∞ such that

sup_{s,t ∈ T, |t−s| ≤ δ_{k(n)}} |μ(t) − μ(s)| ≤ c₁ δ_{k(n)}   and   sup_{s,t ∈ T, |t−s| ≤ δ_{k(n)}} |η(t) − η(s)| ≤ c₂ δ_{k(n)}.

Thus, by the condition √(n₁n₂/(n₁+n₂)) δ_{k(n)} → 0, we get

√(n₁n₂/(n₁+n₂)) sup_{t ∈ T} |[μ_{k(n)}(t) − μ(t)] − [η_{k(n)}(t) − η(t)]| ≤ √(n₁n₂/(n₁+n₂)) (c₁ + c₂) δ_{k(n)} → 0.   (A.3)

Now, it suffices to show that, in ℓ^∞(T),

√(n₁n₂/(n₁+n₂)) {[μ̂_{n₁}(·) − μ_{k(n)}(·)] − [η̂_{n₂}(·) − η_{k(n)}(·)]} →_D W(·).   (A.4)

To prove (A.4), it suffices to show that, in ℓ^∞(T),

√n₁ [μ̂_{n₁}(·) − μ_{k(n)}(·)] →_D W₁(·)   and   √n₂ [η̂_{n₂}(·) − η_{k(n)}(·)] →_D W₂(·)   (A.5)

for some Gaussian processes W₁(·) and W₂(·). We will use Theorem 2.11.3 in van der Vaart and Wellner (1996, p. 221) to prove (A.5). We only show the first claim in (A.5), as that for the second is the same. Denote X̃_i(·) = X_i(·) + ε_i(·); then

μ̂_{n₁}(t) = (1/n₁) ∑_{i=1}^{n₁} [(t_{j+1} − t) X̃_i(t_j) + (t − t_j) X̃_i(t_{j+1})] / (t_{j+1} − t_j) := (1/n₁) ∑_{i=1}^{n₁} g_{n,t}(X̃_i),   t ∈ [t_j, t_{j+1}],

where g_{n,t} is the linear interpolation functional, with knots {t₁, ..., t_{k(n)}}, evaluated at t ∈ T, and μ_{k(n)}(t) = E[g_{n,t}(X̃_i)] := P g_{n,t}(X̃_i).
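As a brief computational aside, μ̂_{n₁}(t) in the last display is simply the average over subjects of piecewise-linearly interpolated curves. A minimal sketch follows; the function and variable names are illustrative.

```python
import numpy as np

def mu_hat(t_eval, knots, X_tilde):
    """Average of the interpolated curves g_{n,t}(X~_i); X_tilde has one row per subject,
    observed at the knots. np.interp performs exactly the piecewise-linear interpolation."""
    curves = np.array([np.interp(t_eval, knots, row) for row in X_tilde])
    return curves.mean(axis=0)

# Example: 5 subject curves on 50 knots, estimator evaluated on a finer grid.
rng = np.random.default_rng(2)
knots = np.arange(1.0, 51.0)
X_tilde = rng.normal(size=(5, knots.size))
t_fine = np.linspace(1.0, 50.0, 200)
mu_grid = mu_hat(t_fine, knots, X_tilde)
```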

Denote by P_{n₁} the empirical measure of X̃₁, ..., X̃_{n₁}; then the first claim in (A.5) is written as

√n₁ (P_{n₁} − P) g_{n,·}(X̃) →_D W₁(·), in ℓ^∞(T).   (A.6)

To show the above, we only need to check the conditions of Theorem 2.11.3. For any s, t ∈ T, let ρ(s, t) = |t − s|; then (T, ρ) is a totally bounded semi-metric space. Let X̃ be an iid copy of the X̃_i's, and define Ỹ_i and Ỹ similarly. Let G_n = {g_{n,t} : t ∈ T}. Then G_n = G := sup_{t ∈ T} [X̃²(t) + Ỹ²(t)]^{1/2} is an envelope for G_n. By the given condition P G² < ∞, so P G_n² = P G² = O(1), and P[G_n² I(G_n > δ√n₁)] = P[G² I(G > δ√n₁)] → 0 for every δ > 0. Also, for every δ_n ↓ 0, by the given condition

sup_{|t−s| ≤ δ_n} E([X(t) + ε(t) − X(s) − ε(s)]² + [Y(t) + ξ(t) − Y(s) − ξ(s)]²) → 0,

we have

sup_{ρ(s,t) < δ_n} P(g_{n,s}(X̃) − g_{n,t}(X̃))² ≤ sup_{ρ(s,t) < δ_n} P(X̃(s) − X̃(t))² → 0.

Thus condition (2.11.21) in van der Vaart and Wellner (1996, p. 220) is satisfied. Let N_{[ ]}(ε, G_n, L₂(P)) be the number of ε-brackets needed to cover G_n under the L₂(P) metric. Since for each n and t there is one member g_{n,t} ∈ G_n, let l_{n,t} = u_{n,t} = g_{n,t}; then l_{n,t}(X̃) ≤ g_{n,t}(X̃) ≤ u_{n,t}(X̃) (t ∈ T), and for all ε > 0, P(u_{n,t}(X̃) − l_{n,t}(X̃))² = 0 < ε² ‖G_n‖²_{L₂(P)}, i.e., (l_{n,t}, u_{n,t}) is an ε-bracket of G_n under the L₂(P) norm. Here we have N_{[ ]}(ε‖G_n‖_{L₂(P)}, G_n, L₂(P)) = 1, thus

∫₀^{δ_n} √(log N_{[ ]}(ε‖G_n‖_{L₂(P)}, G_n, L₂(P))) dε → 0,   for every δ_n ↓ 0.

Now, by Theorem 2.11.3 in van der Vaart and Wellner (1996, p. 221), (A.6) is true.

Next we identify the weak limit W(·). For each fixed interpolation g_{n,t} ∈ G_n, g_{n,t}(X̃) is a random function in t, so W(·) is a process on T. For each positive integer k and fixed (t₁, ..., t_k), by the central limit theorem for double arrays, (W(t₁), ..., W(t_k))ᵀ is the weak limit of the vector

√(n₁n₂/(n₁+n₂)) {[μ̂_{n₁}(t_j) − μ_{k(n)}(t_j)] − [η̂_{n₂}(t_j) − η_{k(n)}(t_j)] : j = 1, ..., k}.

So (W(t₁), ..., W(t_k))ᵀ is a mean zero normal random vector, and by the uniform weak convergence shown above, W(·) is a Gaussian process on T. Clearly E[W(·)] = 0. The covariance function R(s, t) = E[W(s)W(t)] is given by

R(s, t) = lim_{n→∞} [n₁n₂/(n₁+n₂)] Cov{[μ̂_{n₁}(s) − μ_{k(n)}(s)] − [η̂_{n₂}(s) − η_{k(n)}(s)], [μ̂_{n₁}(t) − μ_{k(n)}(t)] − [η̂_{n₂}(t) − η_{k(n)}(t)]}.

Since μ̂_{n₁}(s) − μ_{k(n)}(s) = (1/n₁) ∑_{i=1}^{n₁} [X_{i,k(n)}(s) − μ_{k(n)}(s)] and η̂_{n₂}(s) − η_{k(n)}(s) = (1/n₂) ∑_{i=n_x+1}^{n} [Y_{i,k(n)}(s) − η_{k(n)}(s)], and only the n_xy subjects with i = n_x + 1, ..., n₁ contribute to the cross terms, we have

Cov{[μ̂_{n₁}(s) − μ_{k(n)}(s)] − [η̂_{n₂}(s) − η_{k(n)}(s)], [μ̂_{n₁}(t) − μ_{k(n)}(t)] − [η̂_{n₂}(t) − η_{k(n)}(t)]}
= (1/n₁) Cov[X_{1,k(n)}(s), X_{1,k(n)}(t)] − (n_xy/(n₁n₂)) Cov[X_{1,k(n)}(s), Y_{1,k(n)}(t)] − (n_xy/(n₁n₂)) Cov[Y_{1,k(n)}(s), X_{1,k(n)}(t)] + (1/n₂) Cov[Y_{1,k(n)}(s), Y_{1,k(n)}(t)],

so that

R(s, t) = lim_{n→∞} [n₁n₂/(n₁+n₂)] {(1/n₁) Cov[X_{1,k(n)}(s), X_{1,k(n)}(t)] − (n_xy/(n₁n₂)) Cov[X_{1,k(n)}(s), Y_{1,k(n)}(t)] − (n_xy/(n₁n₂)) Cov[Y_{1,k(n)}(s), X_{1,k(n)}(t)] + (1/n₂) Cov[Y_{1,k(n)}(s), Y_{1,k(n)}(t)]}
= γ₂ R₁₁(s, t) − γ₁₂ [R₁₂(s, t) + R₂₁(s, t)] + γ₁ R₂₂(s, t).
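In practice the limiting covariance can be approximated by replacing R₁₁, R₁₂, R₂₁ and R₂₂ with their sample analogues on the observation grid. The sketch below assumes fully paired curves (so all cross-covariances are computable) and leaves the ratio constants to the user, defaulting them to 1; these choices are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def cov_surface(A, B):
    """Sample cross-covariance Cov[A(s), B(t)] on the grid; A and B are (subjects x grid) arrays."""
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    return Ac.T @ Bc / (A.shape[0] - 1)

def R_hat(X, Y, g1=1.0, g2=1.0, g12=1.0):
    """Plug-in estimate of R(s, t) = g2*R11(s,t) - g12*[R12(s,t) + R21(s,t)] + g1*R22(s,t)."""
    R11, R22 = cov_surface(X, X), cov_surface(Y, Y)
    R12, R21 = cov_surface(X, Y), cov_surface(Y, X)
    return g2 * R11 - g12 * (R12 + R21) + g1 * R22
```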

Proof of Theorem 2. (i). By Theorem 1, we have that, under H₀ of (9),

L_n →_D (1/|T|) ∫_T W²(t) dt.   (A.9)

Since R(·, ·) is almost everywhere continuous and T is bounded, R²(·, ·) is integrable, that is, ∫_T ∫_T R²(s, t) ds dt < ∞. By Mercer's Theorem (cf. Theorem 5.2.1 of Shorack and Wellner (1986), page 208), we have that

R(s, t) = ∑_{j=1}^∞ λ_j h_j(s) h_j(t),   (A.10)

where λ_j ≥ 0, j = 1, 2, ..., are the eigenvalues of R(·, ·), and h_j(·), j = 1, 2, ..., are the corresponding orthonormal eigenfunctions. Let {Z₁, ..., Z_m, ...} be a set of independent identically distributed random variables with Z_m ~ N(0, 1). Then Z(t) = ∑_{j=1}^∞ √λ_j Z_j h_j(t) is a Gaussian process on T with mean zero and covariance function R(s, t). Thus, the two stochastic processes W(t) and Z(t) have the same distribution on T,

W(t) =_d Z(t) = ∑_{j=1}^∞ √λ_j Z_j h_j(t),   (A.11)

and, by (A.9) and (A.10),

(1/|T|) ∫_T W²(t) dt =_d (1/|T|) ∫_T [∑_{j=1}^∞ √λ_j Z_j h_j(t)]² dt = (1/|T|) ∑_{j=1}^∞ λ_j Z_j².   (A.12)

The result of Theorem 2(i) follows from (A.9), (A.11) and (A.12).
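The representation (A.12) suggests one way to approximate the null critical value of L_n numerically: eigendecompose an estimate of R(·, ·) on the observation grid, viewed as a discretized integral operator, and simulate the weighted chi-square sum. The sketch below is written under those discretization assumptions; the quadrature weights and the reuse of R_hat from above are illustrative choices.

```python
import numpy as np

def Ln_null_quantile(R_grid, grid, level=0.95, n_sim=100_000, seed=0):
    """Approximate the level-quantile of (1/|T|) * sum_j lambda_j Z_j^2 in (A.12),
    where the lambda_j are eigenvalues of the covariance operator with kernel R(s, t)."""
    rng = np.random.default_rng(seed)
    w = np.gradient(grid)                                     # simple quadrature weights on the grid
    T_len = grid[-1] - grid[0]                                # |T|
    K = np.sqrt(w)[:, None] * R_grid * np.sqrt(w)[None, :]    # symmetrized discretized operator
    lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)           # eigenvalue approximations lambda_j
    chi2 = rng.chisquare(1, size=(n_sim, lam.size))           # Z_j^2 draws
    draws = chi2 @ lam / T_len
    return np.quantile(draws, level)
```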

(ii). By Theorem 1, we have that, under H₀ of (10),

D_n →_D (1/|T|) ∫_{t ∈ T} W(t) dt = U,   (A.13)

where U has a normal distribution with mean zero. To compute the variance of U, we consider the partition {[s_j, s_{j+1}) : j = 1, ..., m} of T with δ = max{s_{j+1} − s_j : j = 1, ..., m}. Then, it follows from (A.13) that

U = (1/|T|) lim_{δ→0} ∑_{j=1}^m W(s_j)(s_{j+1} − s_j).   (A.14)

Since E[W(s_j)] = 0 for each fixed j, we have that, by (A.14) and the continuity condition of R(·, ·),

Var(U) = lim_{δ→0} (1/|T|²) ∑_{i=1}^m ∑_{j=1}^m E[W(s_i) W(s_j)](s_{i+1} − s_i)(s_{j+1} − s_j) = lim_{δ→0} (1/|T|²) ∑_{i=1}^m ∑_{j=1}^m R(s_i, s_j)(s_{i+1} − s_i)(s_{j+1} − s_j) = (1/|T|²) ∫_{s ∈ T} ∫_{t ∈ T} R(s, t) ds dt.

The result of Theorem 2(ii) follows from (A.13) and Var(U).

Proof of Theorem 3. The proof is similar to the derivation of (A.5), substituting μ(·) and μ̂_{n₁}(·) with the vector-valued μ(·) and μ̂_n(·), respectively. The difference here is that we have order two polynomial interpolations for the terms (X²_{i,k(n)}(·), Y²_{i,k(n)}(·), X_{i,k(n)}(·) Y_{i,k(n)}(·)) in addition to the linear interpolations for X_{i,k(n)}(·) and Y_{i,k(n)}(·), with G² playing the role of G in the derivation of (A.5). The rest of the derivation proceeds in the same way. Then, the delta method leads to the claimed result.

To identify the matrix covariance function Ω(s, t), we note that

μ̂_n(t) = (1/n) ∑_{i=1}^n (X_{i,k(n)}(t), Y_{i,k(n)}(t), X²_{i,k(n)}(t), Y²_{i,k(n)}(t), X_{i,k(n)}(t) Y_{i,k(n)}(t))ᵀ := (1/n) ∑_{i=1}^n Z_i(t).

Then, Ω(s, t) = Cov(Z(s), Z(t)) gives the expression for Ω(s, t).
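A sample analogue of Ω(s, t) follows directly from this definition of Z_i(t); a minimal sketch, with illustrative names:

```python
import numpy as np

def Z_process(X, Y):
    """Moment processes Z_i(t) = (X_i(t), Y_i(t), X_i^2(t), Y_i^2(t), X_i(t) Y_i(t));
    X and Y are (subjects x grid) arrays of paired curves. Returns (subjects x grid x 5)."""
    return np.stack([X, Y, X**2, Y**2, X * Y], axis=-1)

def Omega_hat(X, Y, s_idx, t_idx):
    """Sample analogue of the 5 x 5 matrix Omega(s, t) = Cov(Z(s), Z(t)) at two grid indices."""
    Z = Z_process(X, Y)
    Zs = Z[:, s_idx, :] - Z[:, s_idx, :].mean(axis=0)
    Zt = Z[:, t_idx, :] - Z[:, t_idx, :].mean(axis=0)
    return Zs.T @ Zt / (X.shape[0] - 1)
```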

Proof of Theorem 4. The proof here is focused on the derivation of (28), as the proof of (27) can proceed using the same approach together with the results of Theorem 3. We first note that W_n(t) = √n H[μ̂_n(t)] and, under H₀ of (15) and (16), H[μ(t)] = 0. It follows that

W_n(t) = √n {H[μ̂_n(t)] − H[μ(t)]} = [1 + o_p(1)] Ḣ[μ(t)] √n [μ̂_n(t) − μ(t)].

Using the result of Theorem 3, the delta method and similar derivations as in the proof of Theorem 1, we have W_n(·) →_D W(·) in ℓ^∞(T) uniformly on P, where W(·) is the mean zero Gaussian process on T with covariance function

Q(s, t) = Cov{Ḣ[μ(s)] Z(s), Ḣ[μ(t)] Z(t)} = Ḣ[μ(s)] Cov[Z(s), Z(t)] Ḣᵀ[μ(t)] = Ḣ[μ(s)] Ω(s, t) Ḣᵀ[μ(t)].

The rest of the proof is the same as in that of Theorem 2.
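For a concrete sense of the delta-method step, the sketch below takes H to be the correlation written as a function of the five moments collected in Z(t) (this specific form of H is an assumption here; the main text defines the functional H precisely) and evaluates Q(s, t) with a numerical gradient standing in for Ḣ.

```python
import numpy as np

def H(m):
    """Correlation as a function of m = (EX, EY, EX^2, EY^2, EXY); assumed form of H."""
    ex, ey, exx, eyy, exy = m
    return (exy - ex * ey) / np.sqrt((exx - ex**2) * (eyy - ey**2))

def H_dot(m, h=1e-6):
    """Numerical gradient of H at m, standing in for the analytic derivative H-dot."""
    m = np.asarray(m, dtype=float)
    grad = np.empty(5)
    for k in range(5):
        e = np.zeros(5)
        e[k] = h
        grad[k] = (H(m + e) - H(m - e)) / (2.0 * h)
    return grad

def Q_hat(mu_s, mu_t, Omega_st):
    """Delta-method covariance Q(s, t) = H_dot(mu(s)) Omega(s, t) H_dot(mu(t))^T."""
    return H_dot(mu_s) @ Omega_st @ H_dot(mu_t)
```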
