Edgeworth Expansion for Studentized Statistics


Bing-Yi JING, Hong Kong University of Science and Technology
Qiying WANG, Australian National University

ABSTRACT

Edgeworth expansions are developed for a general class of studentized statistics under rather weak conditions. Applications of these results are given to obtain Edgeworth expansions for the distributions of studentized U-statistics and studentized L-statistics under weaker moment conditions than those available in the literature.

AMS 1991 Subject Classifications: 62E20, 60F15.

Keywords and Phrases: Studentized statistic, U-statistic, L-statistic, Edgeworth expansion.

1 Introduction

Suppose that we are interested in the distribution of some statistic T_n = T_n(X_1, ..., X_n), where X_1, ..., X_n is a sequence of independent and identically distributed (i.i.d.) real random variables. Typically, one can use the delta method to show that T_n converges in distribution to a normal distribution. The rate of convergence to normality is usually of the order n^{-1/2} and can be described by Berry-Esseen bounds. To get a better approximation than asymptotic normality, one can develop higher-order Edgeworth expansions under appropriate conditions.

The theory of Edgeworth expansions dates back a long way. Of course, the simplest case is the Edgeworth expansion for the sample mean [cf. Chapter 16 of Feller (1971)]. In recent years, a great deal of effort has been devoted to deriving Edgeworth expansions for other classes of statistics, such as functions of multivariate sample means (Bhattacharya and Rao (1976), Bai and Rao (1991), or Hall (1992)), U-statistics, L-statistics and others. (See Section 3 for further review.) On the other hand, Edgeworth expansions for their studentized counterparts, such as Student's t-statistic and studentized U- and L-statistics, have also gained much momentum, partly due to their usefulness in statistical inference (e.g., in constructing confidence regions for population parameters, or testing hypotheses).

It is worth pointing out that each of the methods for deriving Edgeworth expansions for the above-mentioned statistics was tailored to the individual structure of these statistics. A general unifying approach is to consider symmetric statistics, which include all the above-mentioned statistics as special cases. See Lai and Wang (1993), Bentkus, Götze and van Zwet (1997) and Putter and van Zwet (1998), for instance. A quick glance at the literature reveals that the moment conditions in Edgeworth expansions for studentized statistics are typically stronger than those for the corresponding standardized statistics; see Section 3 regarding U-statistics and L-statistics, for example.
This may not be surprising, since it is usually more difficult to handle studentized statistics than standardized ones. One notable exception is the case of the sample mean, where the same third moment assumption suffices for both the standardized mean and Student's t-statistic; see Hall (1987) and Bentkus and Götze (1996), for instance. This begs the question of whether the same phenomenon also holds for U-statistics, L-statistics, and other classes of statistics.

In this paper we shall attempt to address this issue. However, instead of dealing with each individual class of statistics separately, we shall consider Edgeworth expansions for a very general class of so-called studentized statistics, and then apply them to some special cases of interest, e.g., U- and L-statistics. To be more precise, we shall consider

studentized statistics of the form T_n/S_n (and slight variations thereof), where

  T_n = n^{-1/2} Σ_{j=1}^n α(X_j) + n^{-3/2} Σ_{i<j} β(X_i, X_j) + V_{n1},
  S_n² = 1 + (1/(n(n−1))) Σ_{i<j} γ(X_i, X_j) + V_{n2}.  (1.1)

More detailed definitions will be given in the next section. Note that this class of studentized statistics T_n/S_n includes a wide range of statistics, much more so than it first appears. For instance, symmetric statistics belong to this class by applying the H-decomposition and taking S_n = 1. Similarly, the studentized symmetric statistics considered in Putter and van Zwet (1998) are also in this class.

One natural question is why we consider such a class of studentized statistics in the first place. Indeed, research on symmetric statistics has been successful in dealing with standardized statistics. However, we argue that the symmetric statistics approach is yet to achieve the same kind of success when it comes to studentized statistics. To illustrate why, let us consider Student's t-statistic, which is clearly a symmetric statistic, and also a special case of a studentized U-statistic; see Remark 3.1 as well.

First, we discuss Berry-Esseen bounds for Student's t-statistic. The general result of van Zwet (1984) on symmetric statistics yields a Berry-Esseen bound of order O(n^{-1/2}) provided E|X_1|^4 < ∞. Friedrich (1989) improved this moment condition to E|X_1|^{10/3} < ∞, which was later shown by Bentkus, Götze and Zitikis (1994) to be optimal for this general approach. On the other hand, it is well known that the optimal moment condition for Student's t-statistic is E|X_1|^3 < ∞; see Bentkus and Götze (1996), for instance. Clearly, the symmetric statistics approach of van Zwet (1984) and Friedrich (1989) fails to produce the best result in this case.

Secondly, we look at Edgeworth expansions for Student's t-statistic. Bentkus, Götze and van Zwet (1997) give second-order Edgeworth expansions of order O(n^{-1}). Applied to Student's t-statistic, Bentkus et al. (1997) obtained an Edgeworth expansion with error term of size O(n^{-1}) (they used N instead of n).
However, they needed the existence of the (4 + ε)th moment of the population, which is clearly not the optimal moment condition in this case. In fact, the optimal moment condition for Student's t-statistic is a finite 4th moment, as shown by Hall and Jing (1995).

The above Student's t-statistic example shows that the general symmetric statistics approach may not yield optimal results for some studentized statistics. To obtain optimal results, it seems very natural to look more closely at the numerator and the denominator of the studentized statistic individually. This is one of the main reasons why we consider such a class of studentized statistics in this paper. As we shall show

in this paper, this approach does lead to Edgeworth expansions under weaker and more natural conditions for studentized statistics than the symmetric statistics approach. In particular, application of our general results to studentized U- and L-statistics leads to weaker and more natural moment conditions than previous work. Furthermore, application to Student's t-statistic leads to the optimal moment condition as well (this was also shown by Hall (1987)).

The layout of the present paper is as follows. In Section 2, we first introduce a general class of studentized statistics and then derive their Edgeworth expansions under rather weak conditions. In Section 3, the results obtained in Section 2 are applied to obtain Edgeworth expansions for the distributions of studentized U-statistics and studentized L-statistics. Proofs of the main theorems are given in Section 4. Finally, some technical details are deferred to Section 5.

Throughout this paper, we use C, C_1, C_2, ... and also A_1, A_2, ... to denote positive constants, which are independent of n, and may or may not depend on the underlying distribution in question. The same constant may assume different values at each occurrence. For a set B, write I(B) for its indicator function. We denote the standard normal distribution function by Φ(x) and its kth derivative by Φ^{(k)}(x), and put φ(x) = Φ^{(1)}(x). Finally, we introduce the following shorthand for sums, for simplicity of presentation:

  Σ_{i≠j} = Σ_{1≤i,j≤n, i≠j},  Σ_{i<j} = Σ_{1≤i<j≤n},  Σ_{i<j<k} = Σ_{1≤i<j<k≤n},
  Σ_{i≠j≠k} = Σ_{i,j,k=1,...,n; i≠j, j≠k, k≠i},  Σ_{(i), j<k} = Σ_{j<k; j,k≠i}.

2 Main results

Let X, X_1, ..., X_n be a sequence of i.i.d. real random variables. Let α(x), β(x, y), γ(x, y), ζ(x, y) and η(x, y, z) be real-valued Borel measurable functions of x, y and z. Furthermore, let V_{ni} ≡ V_{ni}(X_1, ..., X_n), i = 1, 2, be real-valued functions of {X_1, ..., X_n}. Define the statistic and a normalizing statistic

  T_n = n^{-1/2} Σ_{j=1}^n α(X_j) + n^{-3/2} Σ_{i<j} β(X_i, X_j) + V_{n1},  (2.1)
  S_n² = 1 + (1/(n(n−1))) Σ_{i<j} γ(X_i, X_j) + V_{n2}.  (2.2)

Under appropriate conditions, the dominant term in the studentized statistic T_n/S_n is n^{-1/2} Σ_{j=1}^n α(X_j), which converges in distribution to a normal distribution as n tends

to infinity by the Central Limit Theorem. In the following theorem, we derive second-order Edgeworth expansions with error size o(n^{-1/2}) under rather weak and natural conditions.

THEOREM 2.1. Assume that

(a) β(x, y) and γ(x, y) are symmetric in their arguments;
(b) Eα(X_1) = 0, Eα²(X_1) = 1, E|α(X_1)|³ < ∞, lim sup_{|t|→∞} |E e^{itα(X_1)}| < 1; E[β(X_1, X_2) | X_1] = 0, Eβ²(X_1, X_2) < ∞; Eγ(X_1, X_2) = 0, E|γ(X_1, X_2)|^{3/2} < ∞;
(c) P(|V_{nj}| ≥ o(n^{-1/2})) = o(n^{-1/2}), j = 1, 2.

Then we have

  sup_x | P(T_n/S_n ≤ x) − F^{(1)}(x) | = o(n^{-1/2}),  (2.3)

where

  F^{(1)}(x) = Φ(x) − (Φ^{(3)}(x)/(6 n^{1/2})) ( Eα³(X_1) + 3Eα(X_1)α(X_2)β(X_1, X_2) ) − (xΦ^{(2)}(x)/(2 n^{1/2})) Eα(X_1)γ(X_1, X_2).

In Theorem 2.1, the functions β(x, y) and γ(x, y) are assumed to be symmetric in their arguments. However, in some applications it is often easier to use the next theorem, which removes the symmetry restriction on β(x, y) and γ(x, y). To describe the theorem, we first define

  T̃_n = n^{-1/2} Σ_{j=1}^n α(X_j) + n^{-3/2} Σ_{i≠j} ζ(X_i, X_j) + Ṽ_{n1},  (2.4)
  S̃_n² = 1 + (1/(n(n−1)(n−2))) Σ_{i≠j≠k} η(X_i, X_j, X_k) + Ṽ_{n2}.  (2.5)

The following theorem establishes a second-order Edgeworth expansion for the distribution of T̃_n/S̃_n.

THEOREM 2.2. Assume that

(a) Eα(X_1) = 0, Eα²(X_1) = 1, E|α(X_1)|³ < ∞, lim sup_{|t|→∞} |E e^{itα(X_1)}| < 1; E[ζ(X_1, X_2) | X_t] = 0 for t = 1, 2, Eζ²(X_1, X_2) < ∞; Eη(X_1, X_2, X_3) = 0, E|η(X_1, X_2, X_3)|^{3/2} < ∞;
(b) P(|Ṽ_{nj}| ≥ o(n^{-1/2})) = o(n^{-1/2}), j = 1, 2.

Then we have

  sup_x | P(T̃_n/S̃_n ≤ x) − F^{(2)}(x) | = o(n^{-1/2}),  (2.6)

where

  F^{(2)}(x) = Φ(x) − (Φ^{(3)}(x)/(6 n^{1/2})) ( Eα³(X_1) + 3Eα(X_1)α(X_2)ζ̄(X_1, X_2) ) − (xΦ^{(2)}(x)/(4 n^{1/2})) Eα(X_1)η̄(X_1, X_2, X_3),

with

  ζ̄(X_1, X_2) = ζ(X_1, X_2) + ζ(X_2, X_1),
  η̄(X_1, X_2, X_3) = η(X_1, X_2, X_3) + η(X_1, X_3, X_2) + η(X_2, X_1, X_3) + η(X_2, X_3, X_1) + η(X_3, X_1, X_2) + η(X_3, X_2, X_1).

Remark 2.1. If we take γ(x, y) = 0 in Theorem 2.1, we obtain the corresponding second-order Edgeworth expansion for the standardized statistic. In this sense, we have unified the treatment of both types of statistics and study them within the same framework. Clearly, as long as γ(X_1, X_2) in the studentized statistic T_n/S_n does not require stronger conditions than those imposed on α(X_1) and β(X_1, X_2), the second-order Edgeworth expansions for both standardized and studentized statistics will hold under the same set of conditions. This is one of the appealing features of the studentized statistics approach adopted in this paper. Similar comments apply to Theorem 2.2.

3 Applications

In this section, we apply the main results of Section 2 to three well-known examples, namely, studentized U-statistics, studentized L-statistics and studentized functions of the sample mean. As will be seen, applications of Theorems 2.1 and 2.2 to these statistics lead to second-order Edgeworth expansions under weaker conditions than previous results.

3.1 Studentized U-Statistics

Let h(x, y) be a real-valued Borel measurable function, symmetric in its arguments, with Eh(X_1, X_2) = θ. Define a U-statistic of degree 2 with kernel h(x, y) by

  U_n = (2/(n(n−1))) Σ_{i<j} h(X_i, X_j),

and suppose that

  g(X_j) = E( h(X_i, X_j) − θ | X_j ),  σ_g² = Var(g(X_1)),
  R_n² = (4(n−1)/(n−2)²) Σ_{i=1}^n ( q_i − U_n )²,  where q_i = (n−1)^{-1} Σ_{j≠i} h(X_i, X_j).

Note that R_n² is the jackknife estimator of 4σ_g². Define the distributions of the standardized and studentized U-statistic respectively by

  G_1(x) = P( n^{1/2}(U_n − θ)/(2σ_g) ≤ x ),  G_2(x) = P( n^{1/2}(U_n − θ)/R_n ≤ x ).

It is well known that G_1(x) and G_2(x) converge to the standard normal distribution function Φ(x) provided Eh²(X_1, X_2) < ∞ and σ_g² > 0 [see Hoeffding (1948) and Arvesen (1969), respectively]. Berry-Esseen bounds have also been studied by various authors. For standardized U-statistics, Berry-Esseen bounds have been investigated by Grams and Serfling (1973), Bickel (1974), Chan and Wierman (1977), Callaert and Janssen (1978), van Zwet (1984), Friedrich (1989), and Bentkus, Götze and Zitikis (1994). For studentized U-statistics, Berry-Esseen bounds were given by Callaert and Veraverbeke (1981), Zhao (1983), Helmers (1985), and Putter and van Zwet (1998).

Edgeworth expansions for U-statistics have also been intensively studied in recent years. Here, we shall focus only on second-order Edgeworth expansions with error size o(n^{-1/2}). For standardized U-statistics, Bickel, Götze and van Zwet (1986) showed that

  sup_x | G_1(x) − F_1(x) | = o(n^{-1/2}),

where

  F_1(x) = Φ(x) − (φ(x)/(6 n^{1/2} σ_g³)) (x² − 1) { Eg³(X_1) + 3Eg(X_1)g(X_2)h(X_1, X_2) },

under the conditions that σ_g² > 0, the d.f. of g(X_1) is nonlattice, E|g(X_1)|³ < ∞ and E|h(X_1, X_2)|^{2+ε} < ∞ for some ε > 0. For studentized U-statistics, Helmers (1991) showed that

  sup_x | G_2(x) − F_2(x) | = o(n^{-1/2})

(the definition of F_2(x) will be given in Theorem 3.1 below) under the conditions that σ_g² > 0, the d.f. of g(X_1) is nonlattice and E|h(X_1, X_2)|^{4+ε} < ∞ for some ε > 0. Maesono (1995, 1996, 1997) extended Helmers' result to arbitrary degree r and to functions

of studentized U-statistics. More recently, Putter and van Zwet (1998) weakened the (4 + ε)th moment condition of Helmers (1991) to E|h(X_1, X_2)|^{3+ε} < ∞. However, this moment condition still appears stronger than needed. In the next theorem, we show that it can be further reduced to E|h(X_1, X_2)|³ < ∞.

THEOREM 3.1. Suppose that σ_g² > 0, E|h(X_1, X_2)|³ < ∞ and lim sup_{|t|→∞} |E e^{itg(X_1)}| < 1. Then we have

  sup_x | G_2(x) − F_2(x) | = o(n^{-1/2}),

where

  F_2(x) = Φ(x) + (φ(x)/(6 n^{1/2} σ_g³)) { (2x² + 1) Eg³(X_1) + 3(x² + 1) Eg(X_1)g(X_2)h(X_1, X_2) }.

Before we prove the theorem, we make several remarks.

Remark 3.1. If h(x, y) = (x + y)/2, then n^{1/2}(U_n − θ)/R_n reduces to the well-known Student's t-statistic. Hall (1987) considered Edgeworth expansions for Student's t-statistic under minimal conditions, while Hall and Jing (1995) derived Berry-Esseen bounds for Edgeworth expansions of error size O(n^{-1}) under the optimal fourth moment condition.

Remark 3.2. From Bickel, Götze and van Zwet (1986), it is known that the moment condition E|h(X_1, X_2)|³ < ∞ is certainly not optimal for obtaining second-order Edgeworth expansions with error size o(n^{-1/2}) for standardized U-statistics. Judging from the proofs in this paper, it seems not to be optimal for studentized U-statistics either. In fact, it remains an open question what the optimal moment conditions are in both cases. As mentioned earlier, the optimal moment conditions in Berry-Esseen bounds for standardized U-statistics are E|g(X_1)|³ < ∞, E|h(X_1, X_2)|^{5/3} < ∞ and σ_g² > 0. It would be interesting to see whether these are sufficient for establishing second-order Edgeworth expansions with error size o(n^{-1/2}) for standardized and studentized U-statistics.

Proof of Theorem 3.1. Similarly to (A.3) in Callaert and Veraverbeke (1981) (see also Serfling (1980)), we may rewrite

  n^{1/2}(U_n − θ)/R_n = [ n^{1/2}(U_n − θ)/(2σ_g) ] / [ R_n/(2σ_g) ] ≡ T_n/S_n,

where

  T_n = n^{-1/2} Σ_{i=1}^n α(X_i) + n^{-3/2} Σ_{i≠j} ζ(X_i, X_j) + V_{n1},
  S_n² = 1 + (1/((n−1)(n−2)²)) Σ_{i≠j≠k} η(X_i, X_j, X_k) + V_{n2}
       = 1 + (1/(n(n−1)(n−2))) Σ_{i≠j≠k} η(X_i, X_j, X_k) + V_{n3},

with

  α(X_j) = σ_g^{-1} g(X_j),
  ζ(X_i, X_j) = (2σ_g)^{-1} [ h(X_i, X_j) − θ − g(X_i) − g(X_j) ],
  η(X_i, X_j, X_k) = σ_g^{-2} [h(X_i, X_j) − θ][h(X_i, X_k) − θ] − 1,
  V_{n1} = 2 n^{-3/2} (n−1)^{-1} Σ_{i<j} ζ(X_i, X_j),
  V_{n2} = Q_{n1} + Q_{n2} + 2/(n−2),  V_{n3} = V_{n2} + Q_{n3},
  Q_{n1} = (2σ_g^{-2}/((n−1)(n−2)²)) Σ_{i<j} (h(X_i, X_j) − θ)²,
  Q_{n2} = −(n(n−1)σ_g^{-2}/(n−2)²) (U_n − θ)²,
  Q_{n3} = (2/(n(n−1)(n−2)²)) Σ_{i≠j≠k} η(X_i, X_j, X_k).

By the properties of conditional expectation, it can easily be shown that

  Eα(X_1)α(X_2)ζ(X_i, X_j) = (1/2) σ_g^{-3} Eg(X_1)g(X_2)h(X_1, X_2), if 1 ≤ i ≠ j ≤ 2,
  Eα(X_1)η(X_i, X_j, X_k) = σ_g^{-3} Eg³(X_1), if i = 1 and j ≠ k ≠ i,
  Eα(X_1)η(X_i, X_j, X_k) = σ_g^{-3} Eg(X_1)g(X_2)h(X_1, X_2), if i = 2 or 3 and j ≠ k ≠ i.

These identities, together with the relations

  Φ^{(2)}(x) = −x φ(x),  Φ^{(3)}(x) = (x² − 1) φ(x),

imply that (see Theorem 2.2 for the definitions of ζ̄(X_1, X_2) and η̄(X_1, X_2, X_3))

  F^{(2)}(x) = Φ(x) − (Φ^{(3)}(x)/(6 n^{1/2})) ( Eα³(X_1) + 3Eα(X_1)α(X_2)ζ̄(X_1, X_2) ) − (xΦ^{(2)}(x)/(4 n^{1/2})) Eα(X_1)η̄(X_1, X_2, X_3)
           = Φ(x) + (φ(x)/(6 n^{1/2} σ_g³)) { (2x² + 1) Eg³(X_1) + 3(x² + 1) Eg(X_1)g(X_2)h(X_1, X_2) }.
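The pieces of this decomposition are easy to compute directly. The following sketch (ours, not from the paper; function names are hypothetical) evaluates U_n, the jackknife estimator R_n² and the studentized statistic n^{1/2}(U_n − θ)/R_n, and uses the kernel h(x, y) = (x + y)/2 of Remark 3.1, for which the studentized U-statistic collapses exactly to Student's t-statistic computed with the unbiased sample variance:

```python
import numpy as np

def u_statistic(x, h):
    """U_n = (2 / (n(n-1))) * sum_{i<j} h(X_i, X_j) for a symmetric kernel h."""
    n = len(x)
    total = sum(h(x[i], x[j]) for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

def jackknife_r2(x, h):
    """R_n^2 = 4(n-1)/(n-2)^2 * sum_i (q_i - U_n)^2,
    where q_i = (n-1)^{-1} sum_{j != i} h(X_i, X_j)."""
    n = len(x)
    u = u_statistic(x, h)
    q = np.array([sum(h(x[i], x[j]) for j in range(n) if j != i) / (n - 1.0)
                  for i in range(n)])
    return 4.0 * (n - 1) / (n - 2) ** 2 * np.sum((q - u) ** 2)

def studentized_u(x, h, theta):
    """sqrt(n) (U_n - theta) / R_n, the statistic whose distribution is G_2."""
    n = len(x)
    return np.sqrt(n) * (u_statistic(x, h) - theta) / np.sqrt(jackknife_r2(x, h))

# With h(x, y) = (x + y)/2 and theta = E X_1, the statistic reduces to
# Student's t = sqrt(n) (xbar - mu) / s, s the unbiased sample std.
rng = np.random.default_rng(0)
x = rng.exponential(size=20)          # mean 1, so theta = 1
h = lambda a, b: 0.5 * (a + b)
t_direct = np.sqrt(20) * (x.mean() - 1.0) / x.std(ddof=1)
print(studentized_u(x, h, theta=1.0), t_direct)
```

A short calculation behind the check: for this kernel, q_i − U_n = (n−2)(X_i − X̄)/(2(n−1)), so R_n² reduces exactly to the unbiased sample variance.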

On the other hand, condition (a) of Theorem 2.2 can easily be checked. Therefore, by Theorem 2.2, Theorem 3.1 follows if we can prove

  P( |V_{n1}| ≥ n^{-1/2}(log n)^{-1} ) = o(n^{-1/2}),  (3.1)
  P( |Q_{nj}| ≥ n^{-1/2}(log n)^{-1} ) = o(n^{-1/2}), for j = 1, 2, 3.  (3.2)

We only prove (3.2) for j = 1; the proofs of the others are similar and thus omitted. Let h*(X_i, X_j) = [h(X_i, X_j) − θ]² − E[h(X_1, X_2) − θ]². Then h*(X_i, X_j) = h*(X_j, X_i) and E|h*(X_1, X_2)|^{3/2} < ∞. Therefore, it follows from part (b) of Lemma 5.2 that, for n ≥ 4,

  P( |Q_{n1}| ≥ n^{-1/2}(log n)^{-1} )
  ≤ P( C(n,2)^{-1} Σ_{i<j} (h(X_i, X_j) − θ)² ≥ (1/4) σ_g² n^{1/2}(log n)^{-1} )
  ≤ P( | C(n,2)^{-1} Σ_{i<j} h*(X_i, X_j) | ≥ (1/4) σ_g² n^{1/2}(log n)^{-1} − E(h(X_1, X_2) − θ)² )
  = o(n^{-1/2}).

The proof of Theorem 3.1 is thus complete.

3.2 Studentized L-statistics

Let X_1, ..., X_n be i.i.d. real random variables with distribution function F. Define F_n to be the empirical distribution function, i.e., F_n(x) = n^{-1} Σ_{i=1}^n I(X_i ≤ x). Let J(t) be a real-valued function on [0, 1] and T(G) = ∫ x J(G(x)) dG(x). The statistic T(F_n) is called an L-statistic (see Chapter 8 of Serfling (1980)). Write

  σ² ≡ σ²(J, F) = ∫∫ J(F(s)) J(F(t)) F(s ∧ t) [1 − F(s ∨ t)] ds dt,

where s ∧ t = min{s, t} and s ∨ t = max{s, t}. Clearly, a natural estimate of σ² is given by σ̂_n² ≡ σ²(J, F_n). Now let us define the distributions of the standardized and studentized L-statistic T(F_n) respectively by

  H_1(x) = P( n^{1/2} σ^{-1} (T(F_n) − T(F)) ≤ x ),  H_2(x) = P( n^{1/2} σ̂_n^{-1} (T(F_n) − T(F)) ≤ x ).

It is well known that H_1(x) and H_2(x) converge to the standard normal distribution function Φ(x) provided E|X_1|² < ∞, σ² > 0 and some smoothness conditions on J(t) hold

(see Serfling (1980) and Helmers, Janssen and Serfling (1990) for references). Berry-Esseen bounds for standardized L-statistics were given by Helmers (1977), van Zwet (1984), Helmers, Janssen and Serfling (1990) and others, while Berry-Esseen bounds for studentized L-statistics were given by Helmers (1982). Earlier work on Edgeworth expansions for standardized L-statistics includes Helmers (1982) and Lai and Wang (1993), among others. In fact, some of these papers are concerned with third-order Edgeworth expansions with error size O(n^{-1}). On the other hand, a second-order Edgeworth expansion of error size o(n^{-1/2}) for studentized L-statistics is given in Putter and van Zwet (1998) under some smoothness conditions on J(t) and the moment condition E|X_1|^{3+ε} < ∞ for some ε > 0. However, this moment condition still appears stronger than needed. In the next theorem, we show that it can be weakened even further.

THEOREM 3.2. Assume that

(a) J''(t) is bounded on t ∈ [0, 1];
(b) σ² > 0 and ∫ [F(t)(1 − F(t))]^{1/3} dt < ∞;
(c) lim sup_{|t|→∞} |E e^{itY}| < 1, where Y = ∫ J(F(s)) ( I(X_1 ≤ s) − F(s) ) ds.

Then we have

  sup_x | H_2(x) − F_2*(x) | = o(n^{-1/2}),

where

  F_2*(x) = Φ(x) + (1/(6 n^{1/2})) ( (3a/σ) Φ^{(1)}(x) + (3(b + c)x/σ³) Φ^{(2)}(x) + ((κ + 3d)/σ³) Φ^{(3)}(x) ),

with J_0(t) = J(F(t)), and

  a = ∫ J_0'(x) F(x)[1 − F(x)] dx;
  b = ∫∫∫ J_0(x)J_0(y)J_0(z) K(x, y, z) dx dy dz,
    K(x, y, z) = [F(x∧y∧z) − F(x∧y)F(z)] [1 − F(x∨y)] + F(x∧y)F(x∨y) [F(z) − 1];
  c = ∫∫∫ J_0'(x)J_0(y)J_0(z) K*(x, y, z) dx dy dz,
    K*(x, y, z) = F(x∧y) [1 − F(x∨y)] [F(y∧z) − F(y)F(z)];
  κ = ∫∫∫ J_0(x)J_0(y)J_0(z) { F(x∧y∧z) + U } dx dy dz,
    U = −F(x)F(y∧z) − F(y)F(x∧z) − F(z)F(x∧y) + 2F(x)F(y)F(z);
  d = ∫∫∫ J_0(x)J_0(y)J_0'(z) V(x, z)V(y, z) dx dy dz,
    V(s, t) = F(s∧t) − F(s)F(t).
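For a concrete view of the objects in this subsection, the sketch below (our illustration; the discretization scheme is an assumption, not the paper's) computes T(F_n) as a weighted sum of order statistics and evaluates the plug-in estimate σ̂_n² = σ²(J, F_n) exactly, exploiting the fact that F_n is a step function. Taking J ≡ 1 recovers the sample mean and the sample variance (via Hoeffding's covariance identity), which gives a built-in check:

```python
import numpy as np

def l_statistic(x, J):
    """T(F_n) = int x J(F_n(x)) dF_n(x) = n^{-1} sum_i J(i/n) X_(i)."""
    xs = np.sort(x)
    n = len(xs)
    return np.mean(J(np.arange(1, n + 1) / n) * xs)

def sigma2_plugin(x, J):
    """sigma^2(J, F_n) = int int J(F_n(s)) J(F_n(t))
                         [F_n(s^t) - F_n(s) F_n(t)] ds dt,
    computed exactly: F_n equals i/n on the interval [X_(i), X_(i+1))."""
    xs = np.sort(x)
    n = len(xs)
    w = np.diff(xs)                   # lengths of the n-1 inner intervals
    F = np.arange(1, n) / n           # F_n value on each interval
    JF = J(F)
    cov = np.minimum.outer(F, F) - np.outer(F, F)   # F_n(min) - F_n F_n
    return np.einsum('i,j,ij,i,j->', JF, JF, cov, w, w)

x = np.random.default_rng(1).normal(size=200)
J = lambda t: np.ones_like(t)         # J = 1: T(F_n) is the sample mean
print(l_statistic(x, J), sigma2_plugin(x, J))
```

The integrand vanishes outside [X_(1), X_(n)], so summing over the inner intervals is exact, not an approximation.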

Remark 3.3. Under the same conditions as in Theorem 3.2, and similarly to its proof, we can show that

  sup_x | H_1(x) − F_1*(x) | = o(n^{-1/2}),

where

  F_1*(x) = Φ(x) + (1/(6 n^{1/2})) ( (3a/σ) Φ^{(1)}(x) + ((κ + 3d)/σ³) Φ^{(3)}(x) ).

Remark 3.4. In condition (b), we assumed that ∫ [F(t)(1 − F(t))]^{1/3} dt < ∞. This is weaker than the condition E|X_1|^{3+ε} < ∞. To see this, first note that, by Markov's inequality, F(t)(1 − F(t)) ≤ E|X_1|^{3+ε}/|t|^{3+ε}. Thus

  ∫ [F(t)(1 − F(t))]^{1/3} dt ≤ ∫_{|t|≤1} 1 dt + ∫_{|t|>1} [F(t)(1 − F(t))]^{1/3} dt ≤ 2 + ( E|X_1|^{3+ε} )^{1/3} ∫_{|t|>1} |t|^{-1−ε/3} dt < ∞.

In fact, the condition ∫ [F(t)(1 − F(t))]^{1/3} dt < ∞ is almost the same as the moment condition E|X_1|³ < ∞.

Proof of Theorem 3.2. For brevity, we introduce the following notation:

  J_n(t) = J(F_n(t)),  Z(s, t, F) = F(s∧t)(1 − F(s∨t)),
  ξ(X_i, X_j) = σ^{-2} ∫∫ J_0(s)J_0(t) ( I(X_i ≤ s∧t) I(X_j > s∨t) − Z(s, t, F) ) ds dt,
  ϕ(X_i, X_j, X_k) = σ^{-2} ∫∫ J_0'(s)J_0(t) [ I(X_i ≤ t) − F(t) ] I(X_j ≤ s∧t) I(X_k > s∨t) ds dt.

Recalling (see Lemma B of Serfling (1980, p. 265)) that

  T(F_n) − T(F) = −∫ [ K_1(F_n(x)) − K_1(F(x)) ] dx,  K_1(t) = ∫_0^t J(u) du,

we may write

  n^{1/2} (T(F_n) − T(F)) / σ̂_n = [ n^{1/2} (T(F_n) − T(F)) / σ ] / ( σ̂_n/σ ) ≡ T_n/S_n,

where

  T_n = n^{-1/2} Σ_{j=1}^n α(X_j) + n^{-3/2} Σ_{i≠j} ζ(X_i, X_j) + n^{-1/2} Eζ(X_1, X_1) + V_{n1},
  S_n² = 1 + (1/(n(n−1)(n−2))) Σ_{i≠j≠k} η(X_i, X_j, X_k) + V_{n2},

13 with α(x j ) = σ 1 J 0 (t) [ I (Xj t) F (t) ] dt, ζ(x i, X j ) = 1 2 σ 1 J 0(t) [ I (Xi t) F (t) ] [ I (Xj t) F (t) ] dt, η(x i, X j, X k ) = ξ(x i, X j ) + ϕ(x i, X j, X k ) V 1 = 1/2 (Q 1 + Q 2 ), V 2 = Q 3 + Q 4 + Q 5 ad Q i, i = 1,, 5, are defied by Q 1 = 2 Q 2 A(J)σ 1 Q 3 = 2σ 2 Q 4 = σ 2 [ζ(x j, X j ) Eζ(X 1, X 1 )], (3.3) F (t) F (t) 3 dt, (3.4) [J (s) J 0 (s) J 0(s)(F (s) F (s))] J 0 (t)z(s, t, F )dsdt, (3.5) [J (s) J 0 (s)] [J (t) J 0 (t)] Z(s, t, F )dsdt, (3.6) Q 5 = 3 [ξ(x j, X k ) + ϕ(x j, X j, X k ) + ϕ(x k, X j, X k )] j k 1 σ 2 F (s t) [1 F (s t)] dsdt. (3.7) We would like to apply Theorem 2.2. The coditio (a) of Theorem 2.2 ca be easily checked. Next let us check the coditio (b). By Lemma 5.6 i the Appedix, we have that P ( V 1 3 1/2 (log ) 1) P ( Q 1 + Q (log ) 1) = o( 1/2 ), P ( V 2 3 1/2 (log ) 1) P ( Q 3 + Q 4 + Q 5 3 1/2 (log ) 1) = o( 1/2 ). These estimates imply the coditio (b) of Theorem 2.2. Now, by usig Theorem 2.2, we obtai that where x ( P 1 S ( T 1/2 Eζ(X 1, X 1 ) ) ) x F (x) = o( 1/2 ), (3.8) F (x) = Φ(x) Φ(3) (x) 6 ( Eα 3 (X 1 ) + 3Eα(X 1 )α(x 2 ) ζ(x 1, X 2 ) ) xφ(2) (x) 4 Eα(X 1) η(x 1, X 2, X 3 ), ζ(x 1, X 2 ) = ζ(x 1, X 2 ) + ζ(x 2, X 1 ), η(x 1, X 2, X 3 ) = η(x 1, X 2, X 3 ) + η(x 1, X 3, X 2 ) + η(x 2, X 1, X 3 ) +η(x 2, X 3, X 1 ) + η(x 3, X 1, X 2 ) + η(x 3, X 2, X 1 ). 12

It follows from (3.8) and the method used in the proof of Theorem 2.1 that

  sup_x | P( T_n/S_n ≤ x ) + n^{-1/2} Φ^{(1)}(x) Eζ(X_1, X_1) − F(x) | = o(n^{-1/2}).  (3.9)

In view of (3.9), Theorem 3.2 follows from the following tedious but simple calculations:

  Eζ(X_1, X_1) = −(1/(2σ)) ∫ J_0'(x) F(x)(1 − F(x)) dx;
  Eα³(X_1) = −(1/σ³) ∫∫∫ J_0(x)J_0(y)J_0(z) { F(x∧y∧z) + U } dx dy dz;
  Eα(X_1)α(X_2)ζ(X_1, X_2) = Eα(X_1)α(X_2)ζ(X_2, X_1)
    = −(1/(2σ³)) ∫∫∫ J_0(x)J_0(y)J_0'(z) V(x, z)V(y, z) dx dy dz;
  Eα(X_1)ξ(X_1, X_2) = Eα(X_1)ξ(X_1, X_3)
    = −(1/σ³) ∫∫∫ J_0(x)J_0(y)J_0(z) W(x, y, z)(1 − F(x∨y)) dx dy dz,
    where W(x, y, z) = F(x∧y∧z) − F(x∧y)F(z);
  Eα(X_1)ξ(X_2, X_1) = Eα(X_1)ξ(X_3, X_1)
    = −(1/σ³) ∫∫∫ J_0(x)J_0(y)J_0(z) F(x∧y)F(x∨y)(F(z) − 1) dx dy dz;
  Eα(X_1)ξ(X_2, X_3) = Eα(X_1)ξ(X_3, X_2) = 0;
  Eα(X_1)ϕ(X_1, X_2, X_3) = Eα(X_1)ϕ(X_1, X_3, X_2)
    = −(1/σ³) ∫∫∫ J_0'(x)J_0(y)J_0(z) F(x∧y)(1 − F(x∨y)) V(y, z) dx dy dz,
    where V(y, z) = F(y∧z) − F(y)F(z);
  Eα(X_1)ϕ(X_2, X_1, X_3) = Eα(X_1)ϕ(X_2, X_3, X_1) = 0;
  Eα(X_1)ϕ(X_3, X_1, X_2) = Eα(X_1)ϕ(X_3, X_2, X_1) = 0.

The proof of Theorem 3.2 is complete.

3.3 Studentized functions of the sample mean

Let X_1, ..., X_n be i.i.d. real random variables with EX_1 = µ and Var(X_1) = σ² < ∞. Let f be a real-valued function differentiable in a neighborhood of µ with f'(µ) ≠ 0. Then the asymptotic variance of f(X̄) is given by σ_f² = (f'(µ))² σ². Denote the sample mean and sample variance by X̄ = n^{-1} Σ_{i=1}^n X_i and σ̂² = n^{-1} Σ_{i=1}^n (X_i − X̄)². An obvious estimate of σ_f² is simply (f'(X̄))² σ̂². In this paper, however, we shall use an alternative estimate, namely the jackknife variance estimate given by

  σ̂_f² = (n−1) Σ_{j=1}^n ( f(X̄^{(j)}) − f(X̄) )²,  where X̄^{(j)} = (n−1)^{-1} ( Σ_{i=1}^n X_i − X_j ).

Define the distributions of the standardized and studentized f(X̄) respectively by

  L_1(x) = P( n^{1/2} σ_f^{-1} (f(X̄) − f(µ)) ≤ x ),  L_2(x) = P( n^{1/2} σ̂_f^{-1} (f(X̄) − f(µ)) ≤ x ).
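The jackknife variance estimate σ̂_f² is straightforward to compute in vectorized form. The following minimal sketch (ours, with hypothetical function names) also forms the studentized statistic behind L_2(x); for the identity map f(x) = x, σ̂_f² reduces exactly to the unbiased sample variance, which serves as a check:

```python
import numpy as np

def jackknife_sigma2_f(x, f):
    """sigma_f_hat^2 = (n-1) * sum_j ( f(xbar_{-j}) - f(xbar) )^2,
    where xbar_{-j} is the sample mean with X_j left out."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    loo = (x.sum() - x) / (n - 1)     # all n leave-one-out means at once
    return (n - 1) * np.sum((f(loo) - f(xbar)) ** 2)

def studentized_f(x, f, mu):
    """sqrt(n) (f(xbar) - f(mu)) / sigma_f_hat, the statistic behind L_2."""
    n = len(x)
    return np.sqrt(n) * (f(np.mean(x)) - f(mu)) / np.sqrt(jackknife_sigma2_f(x, f))

x = np.random.default_rng(2).normal(loc=2.0, scale=1.0, size=50)
print(studentized_f(x, np.exp, mu=2.0))
```

The check follows from X̄^{(j)} − X̄ = −(X_j − X̄)/(n−1), so with f(x) = x the sum telescopes to Σ_j (X_j − X̄)²/(n−1).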

Asymptotic properties of L_1(x) (e.g., asymptotic normality, Berry-Esseen bounds and Edgeworth expansions) have been well studied (see Bhattacharya and Ghosh (1976), for instance). On the other hand, Miller (1964) showed that σ̂_f² is a consistent estimator of σ_f² and hence proved that L_2(x) converges to the standard normal distribution. By applying the results of Bai and Rao (1991), a second-order Edgeworth expansion for L_1(x) with error size o(n^{-1/2}) holds under the minimal moment condition E|X_1|³ < ∞. On the other hand, Putter and van Zwet (1998) gave Edgeworth expansions for L_2(x) under the condition that E|X_1|^{3+ε} < ∞ for some ε > 0. In this section, we weaken this condition even further.

THEOREM 3.3. Assume that f^{(3)}(x) is bounded in a neighborhood of µ, f'(µ) ≠ 0, E|X_1|³ < ∞ and lim sup_{|t|→∞} |E e^{itX_1}| < 1. Then we have

  sup_x | L_2(x) − F̂_2(x) | = o(n^{-1/2}),

where

  F̂_2(x) = Φ(x) + (φ(x)/(6 n^{1/2})) { (2x² + 1) E(X_1 − µ)³/σ³ + 3x² σ f''(µ)/f'(µ) }.

PROOF. Using Taylor's expansion and noting that X̄^{(j)} − X̄ = (X̄ − X_j)/(n−1), we can get

  σ̂_f² = (n−1) Σ_{j=1}^n { f'(µ)(X̄^{(j)} − X̄) + ( f'(X̄) − f'(µ) )(X̄^{(j)} − X̄) + (1/2) f''( η_j X̄^{(j)} + (1 − η_j)X̄ )(X̄^{(j)} − X̄)² }²  (for some 0 ≤ η_j ≤ 1)
      = f'²(µ) n^{-1} Σ_j (X_j − X̄)² + 2 f'(µ)f''(µ) n^{-1} Σ_j (X_j − X̄)² (X̄ − µ) + W_n'
      = f'²(µ) n^{-1} Σ_j (X_j − µ)² + 2 f'(µ)f''(µ) n^{-1} Σ_j (X_j − µ)² (X̄ − µ) + W_n''
      = σ² f'²(µ) + n^{-1} Σ_j f'²(µ) ( (X_j − µ)² − σ² ) + 2 f'(µ)f''(µ) n^{-2} Σ_{j≠k} (X_j − µ)²(X_k − µ) + W_{n4},

where |W_{n4}| ≤ A(f) ( K_{n1} + K_{n2} + K_{n3} + K_{n4} ), with the K_{ni} defined by

  K_{n1} = (X̄ − µ)² + |X̄ − µ|³ + (X̄ − µ)⁴,
  K_{n2} = n^{-2} Σ_j ( (X_j − µ)² + |X_j − µ|³ ),

  K_{n3} = n^{-3} Σ_j (X_j − µ)⁴,
  K_{n4} = n^{-1} (X̄ − µ)² Σ_j (X_j − µ)².

Similarly, by Taylor's expansion, we have

  f(X̄) − f(µ) = f'(µ)(X̄ − µ) + (1/2) f''(µ)(X̄ − µ)² + W_{n5}
             = n^{-1} f'(µ) Σ_j (X_j − µ) + (1/(2n²)) f''(µ) Σ_{j≠k} (X_j − µ)(X_k − µ) + σ² f''(µ)/(2n) + W_{n6},  (3.10)

where |W_{n6}| ≤ A(f) ( K_{n5} + K_{n6} ) with

  K_{n5} = (1/(2n²)) | Σ_j ( (X_j − µ)² − σ² ) |  and  K_{n6} = |X̄ − µ|³.

In order to apply Theorem 2.1, we rewrite n^{1/2} σ̂_f^{-1} ( f(X̄) − f(µ) ) = T_n/S_n, where

  T_n = σ f''(µ)/(2 n^{1/2} f'(µ)) + n^{-1/2} Σ_{i=1}^n α(X_i) + n^{-3/2} Σ_{i<j} β(X_i, X_j) + V_{n1},
  S_n² = 1 + (1/(n(n−1))) Σ_{i<j} ( η(X_i, X_j) + η(X_j, X_i) ) + V_{n2},

with

  α(X_j) = (X_j − µ)/σ,
  β(X_i, X_j) = ( f''(µ)/(σ f'(µ)) ) (X_i − µ)(X_j − µ),
  η(X_i, X_j) = ( (X_j − µ)² − σ² )/σ² + ( 2f''(µ)/(σ² f'(µ)) ) (X_i − µ)²(X_j − µ),
  V_{n1} = n^{1/2} W_{n6}/(σ f'(µ)),
  V_{n2} = W_{n4}/(σ² f'²(µ)) − (1/(n²(n−1))) Σ_{i<j} ( η(X_i, X_j) + η(X_j, X_i) ).

Conditions (a) and (b) in Theorem 2.1 can easily be checked. Next, let us first assume that condition (c) in Theorem 2.1 holds, i.e., that there exist δ_n → 0 such that

  P( |V_{nj}| ≥ δ_n n^{-1/2} ) = o(n^{-1/2}), for j = 1, 2.  (3.11)

Under this assumption, by Theorem 2.1, we have

  sup_x | P( ( T_n − σ f''(µ)/(2 n^{1/2} f'(µ)) ) / S_n ≤ x ) − F_2(x) | = o(n^{-1/2}),  (3.12)

where

  F_2(x) = Φ(x) − (Φ^{(3)}(x)/(6 n^{1/2})) ( Eα³(X_1) + 3Eα(X_1)α(X_2)β(X_1, X_2) ) − (xΦ^{(2)}(x)/(2 n^{1/2})) Eα(X_1)( η(X_1, X_2) + η(X_2, X_1) ).

It follows from (3.12) and the method used in the proof of Theorem 2.1 that

  sup_x | P( T_n/S_n ≤ x ) + ( σ f''(µ)/(2 n^{1/2} f'(µ)) ) Φ^{(1)}(x) − F_2(x) | = o(n^{-1/2}).  (3.13)

In view of (3.13), Theorem 3.3 follows easily from the following simple calculations:

  Eα³(X_1) = E(X_1 − µ)³/σ³,  Eα(X_1)α(X_2)β(X_1, X_2) = σ f''(µ)/f'(µ),
  Eα(X_1)η(X_1, X_2) = 0,  Eα(X_1)η(X_2, X_1) = E(X_1 − µ)³/σ³ + 2σ f''(µ)/f'(µ),
  Φ^{(2)}(x) = −x φ(x),  Φ^{(3)}(x) = (x² − 1) φ(x).

In the remainder of this section, let us prove (3.11). It suffices to show that there exist δ_n → 0 such that

  P( K_{nj} ≥ δ_n n^{-1/2} ) = o(n^{-1/2}), for j = 1, 2, 3, 4,  (3.14)
  P( K_{nj} ≥ δ_n n^{-1} ) = o(n^{-1/2}), for j = 5, 6,  (3.15)
  P( (1/(n²(n−1))) | Σ_{i<j} ( η(X_i, X_j) + η(X_j, X_i) ) | ≥ δ_n n^{-1/2} ) = o(n^{-1/2}).  (3.16)

We only prove (3.14) for j = 4; the others are similar and simpler, using the lemmas in the Appendix, and the details are omitted. By Lemma 5.3 in the Appendix, we have

  P( K_{n4} ≥ n^{-1/2}(log n)^{-1} )
  = P( n^{-1} (X̄ − µ)² Σ_j (X_j − µ)² ≥ n^{-1/2}(log n)^{-1} )
  ≤ P( (X̄ − µ)² ≥ 4σ² n^{-1} log n ) + P( n^{-1} Σ_j (X_j − µ)² ≥ (1/4) n^{1/2} σ^{-2} (log n)^{-2} )
  = P( n^{1/2} |X̄ − µ| ≥ 2σ (log n)^{1/2} ) + P( n^{-1} | Σ_j [ (X_j − µ)² − σ² ] | ≥ (1/4) n^{1/2} σ^{-2} (log n)^{-2} − σ² )
  ≤ o(n^{-1/2}) + P( n^{-1} | Σ_j [ (X_j − µ)² − σ² ] | ≥ C )  (for n large enough)
  = o(n^{-1/2}).

This proves (3.14) for j = 4. The proof of Theorem 3.3 is complete.
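All expansions in this section share the form Φ(x) plus an n^{-1/2} polynomial correction times φ(x). As a representative illustration (ours, not from the paper), the function below transcribes F_2(x) from Theorem 3.1; plugging in the Student's t special case of Remark 3.1 (where Eg(X_1)g(X_2)h(X_1, X_2) = 0 when µ = 0, σ_g = σ/2 and Eg³ = E X³/8) recovers the classical correction λ_3(2x² + 1)φ(x)/(6√n):

```python
from math import erf, exp, pi, sqrt

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def F2(x, n, sigma_g, Eg3, Eggh):
    """Second-order Edgeworth approximation of Theorem 3.1:
    Phi(x) + phi(x) / (6 sqrt(n) sigma_g^3)
           * ((2x^2 + 1) E g^3 + 3 (x^2 + 1) E g(X1) g(X2) h(X1, X2))."""
    corr = (2 * x * x + 1) * Eg3 + 3 * (x * x + 1) * Eggh
    return Phi(x) + phi(x) * corr / (6.0 * sqrt(n) * sigma_g ** 3)

# Student's t for a centered standard exponential population:
# sigma_g = 1/2, Eg3 = E X^3 / 8 = 2/8, Eggh = 0.
print(F2(1.0, n=100, sigma_g=0.5, Eg3=0.25, Eggh=0.0))
```

When the population summands are symmetric (Eg³ = Eggh = 0) the correction vanishes and F_2 reduces to Φ, as it should.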

4 Proofs of Theorems

In this section, we give the proofs of the main theorems. Some lemmas needed in these proofs are relegated to the Appendix.

Proof of Theorem 2.1. Noticing that, for any random variables X, Y, Z_1, Z_2, any constants C_1, C_2 > 0 and any x ≥ 0 (with the inequalities suitably reversed for x < 0),

  P( (X + Z_1)/(1 + Y + Z_2)^{1/2} ≤ x ) ≤ P( (X − C_1)/(1 + Y + C_2)^{1/2} ≤ x ) + Σ_{j=1}^2 P(|Z_j| ≥ C_j),  (4.1)
  P( (X + Z_1)/(1 + Y + Z_2)^{1/2} ≤ x ) ≥ P( (X + C_1)/(1 + Y − C_2)^{1/2} ≤ x ) − Σ_{j=1}^2 P(|Z_j| ≥ C_j),  (4.2)

we can replace V_{nj} by ±δ_n with δ_n = o(n^{-1/2}). To simplify the notation, we henceforth assume V_{nj} = 0; it will be clear that this assumption does not affect the proof of the main results.

We next turn back to the proof of Theorem 2.1. Let γ_1(x) = Eγ(x, X_1) and γ_2(x, y) = γ(x, y) − γ_1(x) − γ_1(y). It is easy to see that

  S_n² = 1 + n^{-1} Σ_{j=1}^n γ_1(X_j) + (1/(n(n−1))) Σ_{i<j} γ_2(X_i, X_j) = 1 + Z_n + R_n,  (4.3)

where Z_n = n^{-1} Σ_{j=1}^n γ_1(X_j) and R_n = (1/(n(n−1))) Σ_{i<j} γ_2(X_i, X_j). By the elementary inequality

  1 + u/2 − u²/6 ≤ (1 + u)^{1/2} ≤ 1 + u/2 + u²/6, for |u| ≤ 1/9,

we find that if |Z_n + R_n| ≤ 1/9, then

  | S_n − 1 − (1/2)(Z_n + R_n) | ≤ (1/3)(Z_n² + R_n²).  (4.4)

Put Δ_n(s) = (1/2)(Z_n + R_n) + s Z_n².

Then from (4.3) and (4.4), we have

  P(T_n/S_n ≤ x) ≤ P(T_n/S_n ≤ x, |Z_n + R_n| ≤ 1/9) + P(|Z_n + R_n| ≥ 1/9)
  ≤ P( T_n ≤ x { 1 + Δ_n(1/3) + (1/3) R_n² } ) + P(|Z_n + R_n| ≥ 1/9)
  ≤ P( T_n ≤ x { 1 + Δ_n(1/3) + n^{-3/5} } ) + P(|Z_n + R_n| ≥ 1/9) + P( (1/3) R_n² ≥ n^{-3/5} ).  (4.5)

Similarly, we get

  P(T_n/S_n ≤ x) ≥ P( T_n ≤ x { 1 + Δ_n(−1/3) − n^{-3/5} } ) − P(|Z_n + R_n| ≥ 1/9) − P( (1/3) R_n² ≥ n^{-3/5} ).  (4.6)

Using Jensen's inequality, we can easily see that

  E|γ_1(X_1)|^{3/2} < ∞ and E|γ_2(X_1, X_2)|^{3/2} < ∞.  (4.7)

Furthermore, we note that R_n is a multiple of a degenerate U-statistic of order 1 (whose definition is given in the Appendix). It then follows from Lemmas 5.2-5.3 that

  P( (1/3) R_n² ≥ n^{-3/5} ) ≤ P( |R_n| ≥ 1 ) + P( |R_n| ≥ n^{-3/10} ) = o(n^{-1/2}),
  P( |Z_n + R_n| ≥ 1/9 ) ≤ P( |Z_n| ≥ 1/18 ) + P( |R_n| ≥ 1/18 ) = o(n^{-1/2}).

These inequalities, together with (4.5) and (4.6), imply that (2.3) holds if

  sup_x | P( T_n ≤ x(1 + Δ_n(s) + A_n) ) − F^{(1)}(x) | = o(n^{-1/2}),  (4.8)

where |A_n| ≤ n^{-3/5} and |s| ≤ 1/3. To prove (4.8), let

  ς_j = −( (1/2) γ_1(X_j) + (s/n) γ_1²(X_j) ),
  ψ_ij(x) = β(X_i, X_j) − x ( 2s γ_1(X_i) γ_1(X_j) + (1/2) γ_2(X_i, X_j) ).

Elementary calculation shows that

  P( T_n ≤ x(1 + Δ_n(s) + A_n) ) = P( n^{-1/2} Σ_{j=1}^n α(X_j) + (x/n) Σ_{j=1}^n ς_j + n^{-3/2} Σ_{i<j} ψ_ij(x) ≤ x(1 + A_n) ).

Recalling (4.7), it is easy to check that all conditions of Lemma 5.4 are satisfied with |s| ≤ 1/3, V_n = A_n = O(n^{-3/5}), and

  ξ_1(X_j) = α(X_j),  ξ_2(X_j) = −(1/2) γ_1(X_j),  ξ_3(X_j) = −s γ_1²(X_j),
  ϕ_1(X_i, X_j) = β(X_i, X_j),  ϕ_2(X_i, X_j) = −( 2s γ_1(X_i) γ_1(X_j) + (1/2) γ_2(X_i, X_j) ).

Therefore, (4.8) follows immediately from Lemma 5.4 and the well-known identity

  Eα(X_1)γ_1(X_1) = E{ E[ α(X_1)γ(X_1, X_2) | X_1 ] } = Eα(X_1)γ(X_1, X_2).

The proof of Theorem 2.1 is thus complete.

Proof of Theorem 2.2. We first reduce the statistic S̃_n² to the form (2.2). To do this, introduce the symmetric function

  η̄(x, y, z) = η(x, y, z) + η(x, z, y) + η(y, x, z) + η(y, z, x) + η(z, x, y) + η(z, y, x),

and kernels η̄^{(1)}, η̄^{(2)} and η̄^{(3)} defined recursively by

  η̄^{(1)}(x_1) = E η̄(x_1, X_2, X_3),
  η̄^{(2)}(x_1, x_2) = E η̄(x_1, x_2, X_3) − Σ_{j=1}^2 η̄^{(1)}(x_j),
  η̄^{(3)}(x_1, x_2, x_3) = η̄(x_1, x_2, x_3) − Σ_{j=1}^3 η̄^{(1)}(x_j) − Σ_{1≤i<j≤3} η̄^{(2)}(x_i, x_j).

By applying Hoeffding's decomposition for U-statistics (see, e.g., Lee (1990, page 25)), it is easy to show that

  (6/(n(n−1)(n−2))) Σ_{i≠j≠k} η(X_i, X_j, X_k) = C(n,3)^{-1} Σ_{i<j<k} η̄(X_i, X_j, X_k)
  = (3/n) Σ_{i=1}^n η̄^{(1)}(X_i) + 3 C(n,2)^{-1} Σ_{i<j} η̄^{(2)}(X_i, X_j) + H_n^{(3)}
  = (6/(n(n−1))) Σ_{i<j} γ(X_i, X_j) + H_n^{(3)},  (4.9)

where γ(X_i, X_j) = η̄^{(2)}(X_i, X_j) + (1/2)( η̄^{(1)}(X_i) + η̄^{(1)}(X_j) ) and H_n^{(3)} = C(n,3)^{-1} Σ_{i<j<k} η̄^{(3)}(X_i, X_j, X_k). In terms of (4.9), we can rewrite S̃_n² as

  S̃_n² = 1 + (1/(n(n−1))) Σ_{i<j} γ(X_i, X_j) + V*_{n2},  where V*_{n2} = Ṽ_{n2} + (1/6) H_n^{(3)}.

Similarly, we get

  T̃_n = n^{-1/2} Σ_{j=1}^n α(X_j) + n^{-3/2} Σ_{i<j} β(X_i, X_j) + Ṽ_{n1},

where β(x, y) = ζ̄(x, y) = ζ(x, y) + ζ(y, x), which is a symmetric function of its arguments. By Jensen's inequality, it is easy to show that

  E|γ(X_1, X_2)|^{3/2} ≤ A E|η̄(X_1, X_2, X_3)|^{3/2} ≤ A_1 E|η(X_1, X_2, X_3)|^{3/2} < ∞,
  E|η̄^{(3)}(X_1, X_2, X_3)|^{3/2} ≤ A E|η̄(X_1, X_2, X_3)|^{3/2} ≤ A_1 E|η(X_1, X_2, X_3)|^{3/2} < ∞.

Hence, noting that C(n,3)^{-1} Σ_{i<j<k} η̄(X_i, X_j, X_k) is a degenerate U-statistic of order 0 and H_n^{(3)} is a degenerate U-statistic of order 2, it follows from Lemma 5.2 that

  P( |V*_{n2}| ≥ o(n^{-1/2}) ) = o(n^{-1/2}).

Therefore, from Theorem 2.1 and

  Eα(X_1)γ(X_1, X_2) = (1/2) Eα(X_1) η̄(X_1, X_2, X_3),

we have the desired result. The proof of Theorem 2.2 is complete.

5 Appendix

In this section, we give several lemmas that are complementary to the proofs of the main results. Let X_1, X_2, ... be i.i.d. random variables and let g_k(x_1, ..., x_k) be a symmetric Borel-measurable real function of its arguments. Define

  U_n(g_k) = C(n,k)^{-1} Σ_{1≤i_1<...<i_k≤n} g_k(X_{i_1}, ..., X_{i_k}).

U_n(g_k) is called a degenerate U-statistic of order m if

  E g_k(x_1, ..., x_m, X_{m+1}, ..., X_k) = 0 for all x_1, ..., x_m.

Lemma 5.1. Assume that U_n(g_k) is a degenerate U-statistic of order m and E|g_k(X_1, ..., X_k)|^p < ∞. Then

  E|U_n(g_k)|^p ≤ C n^{(m+1)(1−p)}, for 1 ≤ p ≤ 2,
  E|U_n(g_k)|^p ≤ C_1 n^{−(m+1)p/2}, for p ≥ 2.
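Lemma 5.1's rates can be checked numerically in the simplest degenerate case: the kernel g_2(x, y) = xy with EX_1 = 0 is degenerate of order m = 1, and a direct computation gives E U_n(g_2)² = (EX_1²)²/C(n,2), i.e. decay of order n^{-(m+1)} with p = 2, matching the lemma. The code below (our sketch, not from the paper) compares a Monte Carlo estimate with this exact value:

```python
import numpy as np

def u_n(x):
    """U_n(g_2) with the degenerate (order m = 1) kernel g_2(x, y) = x y:
    U_n = C(n,2)^{-1} sum_{i<j} X_i X_j, assuming E X_1 = 0."""
    n = len(x)
    s = x.sum()
    # sum_{i<j} x_i x_j = ((sum x)^2 - sum x^2) / 2
    return ((s * s - (x * x).sum()) / 2.0) / (n * (n - 1) / 2.0)

def exact_second_moment(n, ex2):
    """E U_n^2 = (E X^2)^2 / C(n,2): only index pairs {i,j} = {k,l} survive
    in E[X_i X_j X_k X_l], so E U_n^2 decays like n^{-(m+1)} with m = 1."""
    return ex2 ** 2 / (n * (n - 1) / 2.0)

rng = np.random.default_rng(3)
n, reps = 40, 2000
sims = np.array([u_n(rng.normal(size=n)) for _ in range(reps)])
print(np.mean(sims ** 2), exact_second_moment(n, ex2=1.0))
```

The same experiment with a non-degenerate kernel (e.g. g_2(x, y) = x + y) would show the slower n^{-1} rate, which is why degeneracy matters in Lemma 5.2's tail bounds.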

For a proof of Lemma 5.1, see the corresponding moment inequalities in Koroljuk and Borovskich (1994).

Lemma 5.2. Assume that U_n(g_k) is a degenerate U-statistic of order m.

(a) If E|g_k(X_1, ..., X_k)|² < ∞, then

  P( |U_n(g_k)| ≥ C_1 n^{-1/2} ) = o(n^{-1/2}), for m ≥ 1.

(b) If E|g_k(X_1, ..., X_k)|^{3/2} < ∞, then

  P( |U_n(g_k)| ≥ C_2 n^{1/2}(log n)^{-1} ) = o(n^{-1/2}), for m = 0,
  P( |U_n(g_k)| ≥ C_3 n^{-3/10} ) = o(n^{-1/2}), for m = 1,
  P( |U_n(g_k)| ≥ C_4 n^{-1/2}(log n)^{-1} ) = o(n^{-1/2}), for m = 2.

(c) If E|g_k(X_1, ..., X_k)|³ < ∞, then for all m ≥ 0 and all κ > 0,

  P( |U_n(g_k)| ≥ C_5 n^{-1/4}(log n)^{-κ} ) = o(n^{-1/2}).

By applying Markov's inequality and Lemma 5.1, the proof of Lemma 5.2 is simple and will be omitted.

Lemma 5.3. If E|X_1| < ∞, then there exist δ_n → 0 such that

  P( n^{-3/2} | Σ_{j=1}^n X_j | ≥ δ_n ) = o(n^{-1/2}).  (5.1)

If EX_1 = 0 and E|X_1|^{3/2} < ∞, then there exist δ_n → 0 such that

  P( n^{-1} | Σ_{j=1}^n X_j | ≥ δ_n ) = o(n^{-1/2}).  (5.2)

If E|X_1|^{3/4} < ∞, then

  P( n^{-2} Σ_{j=1}^n |X_j| ≥ C_1 ) = o(n^{-1/2}).  (5.3)

If EX_1 = 0, EX_1² = 1 and E|X_1|³ < ∞, then

  sup_{x: x² ≥ 10 log n} P( | n^{-1/2} Σ_{j=1}^n X_j | ≥ |x|/3 ) = o(n^{-1/2}).  (5.4)

Proof. To prove (5.1), we let

δ_n³ = max{ E|X_1| I( |X_1| ≥ n^{1/4} ), n^{−1/2} E|X_1| }.

Under the condition that E|X_1| < ∞, it follows that δ_n → 0 and

P( n^{−3/2} |Σ_{j=1}^n X_j| ≥ δ_n )
 ≤ n P( |X_1| ≥ n^{3/2} ) + P( n^{−3/2} |Σ_{j=1}^n { X_j I( |X_j| ≤ n^{3/2} ) − EX_j I( |X_j| ≤ n^{3/2} ) }| ≥ δ_n/2 )
 ≤ n^{−1/2} E|X_1| I( |X_1| ≥ n^{3/2} ) + (8/(δ_n² n²)) ( EX_1² I( |X_1| ≤ n^{1/4} ) + EX_1² I( n^{1/4} < |X_1| ≤ n^{3/2} ) )
 ≤ o(n^{−1/2}) + (8/(n^{5/3} (E|X_1|)^{2/3})) EX_1² I( |X_1| ≤ n^{1/4} ) + 8 δ_n n^{−1/2}
 = o(n^{−1/2}).

This implies (5.1). Similarly, we can prove (5.2) and (5.3).

To prove (5.4), by using the non-uniform Berry–Esseen theorem for sums of independent random variables (cf. Chapter V of Petrov, 1975),

sup_x (1 + |x|³) | P( n^{−1/2} Σ_{j=1}^n X_j ≤ x ) − Φ(x) | ≤ A n^{−1/2} E|X_1|³, (5.5)

and

1 − Φ(x) ≤ (1/√(2π)) e^{−x²/2} ≤ 2/(1 + x³), for x ≥ 1, (5.6)

we have that for n ≥ 3 and x² ≥ 10 log n,

P( n^{−1/2} Σ_{j=1}^n X_j ≥ |x|/3 ) ≤ 1 − Φ(|x|/3) + C n^{−1/2} (log n)^{−3/2} = o(n^{−1/2}).

Similarly, we have that for n ≥ 3 and x² ≥ 10 log n,

P( n^{−1/2} Σ_{j=1}^n X_j ≤ −|x|/3 ) ≤ Φ(−|x|/3) + C n^{−1/2} (log n)^{−3/2} = o(n^{−1/2}).

Therefore, (5.4) follows. The proof of Lemma 5.3 is complete.

Next we present Lemma 5.4, which is of independent interest.

Lemma 5.4. Let ξ_j(x) and ϕ_j(x, y) be real Borel-measurable functions, with ϕ_j(x, y) symmetric in its arguments. Let V_n ≡ V_n(X_1, ..., X_n) be a real Borel-measurable function of X_1, ..., X_n. Assume that

(D1) Eξ_1(X_1) = 0, Eξ_1(X_1)² = 1, E|ξ_1(X_1)|³ < ∞, lim sup_{|t|→∞} |E e^{itξ_1(X_1)}| < 1;
(D2) Eξ_2(X_1) = 0, E|ξ_2(X_1)|^{3/2} < ∞, E|ξ_3(X_1)|^{3/4} < ∞;
(D3) E( ϕ_j(X_1, X_2) | X_1 ) = 0, j = 1, 2;
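The proof of (5.4) leans on the elementary chain of inequalities (5.6). As a quick numeric confirmation (our own check, not part of the paper's argument):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Check 1 - Phi(x) <= exp(-x^2/2)/sqrt(2*pi) <= 2/(1 + x^3) on a grid of x >= 1
xs = np.linspace(1.0, 10.0, 901)
ok = all(
    1.0 - Phi(x) <= exp(-x * x / 2.0) / sqrt(2.0 * pi) <= 2.0 / (1.0 + x**3)
    for x in xs
)
print(ok)
```

The first inequality is the usual Mills-ratio bound 1 − Φ(x) ≤ φ(x)/x ≤ φ(x) for x ≥ 1; the second holds because (1 + x³)φ(x) attains its maximum (about 0.28 of the bound) near x ≈ 1.5.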

E|ϕ_1(X_1, X_2)|² < ∞, E|ϕ_2(X_1, X_2)|^{3/2} < ∞;
(D4) P( |V_n| ≥ o(n^{−1/2}) ) = o(n^{−1/2}).

Then, as n → ∞,

sup_x | P( K_n(x) ≤ x(1 + V_n) ) − F^{(3)}(x) | = o(n^{−1/2}), (5.7)

where

ς_j = ξ_2(X_j) + n^{−1/2} ξ_3(X_j), ψ_ij(x) = ϕ_1(X_i, X_j) + (x/√n) ϕ_2(X_i, X_j);
K_n(x) = n^{−1/2} Σ_{j=1}^n ξ_1(X_j) + (x/n) Σ_{j=1}^n ς_j + n^{−3/2} Σ_{1≤i<j≤n} ψ_ij(x);
F^{(3)}(x) = Φ(x) − (Φ^{(3)}(x)/(6√n)) ( Eξ_1³(X_1) + 3 Eξ_1(X_1)ξ_1(X_2)ϕ_1(X_1, X_2) ) + (x Φ^{(2)}(x)/√n) Eξ_1(X_1)ξ_2(X_1).

Proof. Without loss of generality, we assume that

|ϕ_2(X_i, X_j)| ≤ 4√(2n) for all i, j. (5.8)

For, if not, we can define

ϕ_3(X_i, X_j) = ϕ_2(X_i, X_j) I( |ϕ_2| ≤ √(2n) ) − Eϕ_2(X_i, X_j) I( |ϕ_2| ≤ √(2n) ),
ϕ_4(X_i, X_j) = ϕ_3(X_i, X_j) − E( ϕ_3(X_i, X_j) | X_i ) − E( ϕ_3(X_i, X_j) | X_j ),
ψ*_ij(x) = ϕ_1(X_i, X_j) + (x/√n) ϕ_4(X_i, X_j).

Then we have

n^{−3/2} Σ_{1≤i<j≤n} ψ_ij(x) = n^{−3/2} Σ_{1≤i<j≤n} ψ*_ij(x) + x R_n, say.

Write δ_n² = E|ϕ_2(X_1, X_2)|^{3/2} I( |ϕ_2| ≥ √(2n) ). By Markov's inequality and using the assumption E[ ϕ_2(X_1, X_2) | X_1 ] = 0, we get

P( |R_n| ≥ δ_n/√n )
 = P( n^{−2} | Σ_{1≤i<j≤n} [ ϕ_2(X_i, X_j) − ϕ_4(X_i, X_j) ] | ≥ δ_n/√n )
 ≤ 4 δ_n^{−1} n^{−1/2} E|ϕ_2(X_1, X_2)| I( |ϕ_2| ≥ √(2n) )
 ≤ 4 δ_n^{−1} n^{−1/2} (2n)^{−1/4} E|ϕ_2(X_1, X_2)|^{3/2} I( |ϕ_2| ≥ √(2n) )
 ≤ 4 δ_n n^{−1/2}.

It is easy to show that δ_n → 0 and that ϕ_4(x, y) is a real-valued symmetric function satisfying

E( ϕ_4(X_1, X_2) | X_1 ) = 0, E|ϕ_4(X_1, X_2)|^{3/2} < ∞, |ϕ_4(X_i, X_j)| ≤ 4√(2n).

Therefore, if (5.8) does not hold, we can replace ϕ_2(X_i, X_j) by ϕ_4(X_i, X_j), and V_n by V_n − R_n, in the proof. In view of (4.1) and (4.2), we may further assume V_n = 0 for convenience. It is obvious that this assumption does not affect the proof of the main results.

We turn back to the proof of the main result. Write

ξ*_2(X_j) = ξ_2(X_j) I( |ξ_2(X_j)| ≤ n/(1+x²) ), ξ*_3(X_j) = ξ_3(X_j) I( |ξ_3(X_j)| ≤ n²/(1+x²) ),
ς*_j = ξ*_2(X_j) + n^{−1/2} ξ*_3(X_j),
K*_n(x) = n^{−1/2} Σ_{j=1}^n ξ_1(X_j) + (x/n) Σ_{j=1}^n ς*_j + n^{−3/2} Σ_{1≤i<j≤n} ψ_ij(x).

Then we have

sup_x | P( K_n(x) ≤ x ) − F^{(3)}(x) |
 ≤ sup_{x² ≥ 10 log n} | P( K_n(x) ≤ x ) − F^{(3)}(x) | + sup_{x² ≤ 10 log n} | P( K*_n(x) ≤ x ) − F^{(3)}(x) |
  + sup_{x² ≤ 10 log n} | P( K_n(x) ≤ x ) − P( K*_n(x) ≤ x ) |
 ≡ P_1 + P_2 + P_3, say,

where

P_1 = sup_{x² ≥ 10 log n} | P( K_n(x) ≤ x ) − F^{(3)}(x) |, (5.9)
P_2 = sup_{x² ≤ 10 log n} | P( K*_n(x) ≤ x ) − F^{(3)}(x) |, (5.10)
P_3 = sup_{x² ≤ 10 log n} | P( K_n(x) ≤ x ) − P( K*_n(x) ≤ x ) |. (5.11)

Therefore, it suffices to show that

P_i = o(n^{−1/2}), i = 1, 2, 3. (5.12)

First, we shall prove P_1 = o(n^{−1/2}). Note that

sup_{x ≥ (10 log n)^{1/2}} | P( K_n(x) ≤ x ) − F^{(3)}(x) |

 ≤ sup_{x ≥ (10 log n)^{1/2}} P( K_n(x) > x ) + sup_{x ≥ (10 log n)^{1/2}} | 1 − F^{(3)}(x) |
 ≤ sup_{x ≥ (10 log n)^{1/2}} P( n^{−1/2} |Σ_{j=1}^n ξ_1(X_j)| ≥ x/3 ) + P( |n^{−1} Σ_{j=1}^n ς_j| ≥ 1/3 )
  + sup_{x ≥ (10 log n)^{1/2}} P( n^{−3/2} |Σ_{1≤i<j≤n} ψ_ij(x)| ≥ x/3 ) + sup_{x ≥ (10 log n)^{1/2}} | 1 − F^{(3)}(x) |
 ≡ P_11 + P_12 + P_13 + P_14.

Using Lemmas 5.2–5.3 and the inequality (5.6), we have

P_11 = o(n^{−1/2});
P_12 ≤ P( |n^{−1} Σ_{j=1}^n ξ_2(X_j)| ≥ 1/6 ) + P( |n^{−3/2} Σ_{j=1}^n ξ_3(X_j)| ≥ 1/6 ) = o(n^{−1/2});
P_13 ≤ P( n^{−3/2} |Σ_{1≤i<j≤n} ϕ_1(X_i, X_j)| ≥ 1/6 ) + P( n^{−2} |Σ_{1≤i<j≤n} ϕ_2(X_i, X_j)| ≥ 1/6 ) = o(n^{−1/2});
P_14 = o(n^{−1/2}).

This implies that sup_{x ≥ (10 log n)^{1/2}} | P( K_n(x) ≤ x ) − F^{(3)}(x) | = o(n^{−1/2}). Similarly, we can show that sup_{x ≤ −(10 log n)^{1/2}} | P( K_n(x) ≤ x ) − F^{(3)}(x) | = o(n^{−1/2}). Thus, P_1 = o(n^{−1/2}).

Secondly, let us show that P_3 = o(n^{−1/2}). We write

P_3 = sup_{x² ≤ 10 log n} | P( K_n(x) ≤ x ) − P( K*_n(x) ≤ x ) | ≤ Ω_0 + Ω_1 + Ω_2,

where

Ω_0 = sup_{x² ≤ 1} | P( K_n(x) ≤ x ) − P( K*_n(x) ≤ x ) |,
Ω_1 = sup_{1 ≤ x² ≤ 10 log n} P( K*_n(x) ≤ x, ς_j ≠ ς*_j for some j ),
Ω_2 = sup_{1 ≤ x² ≤ 10 log n} P( K_n(x) ≤ x, ς_j ≠ ς*_j for some j ).

We shall show now that Ω_i = o(n^{−1/2}), i = 0, 1, 2.

First consider the term Ω_0. It is easy to see that

P( ς_j ≠ ς*_j ) ≤ P( |ξ_2(X_1)| ≥ n/(1+x²) ) + P( |ξ_3(X_1)| ≥ n²/(1+x²) )
 ≤ ((1+x²)^{3/2}/n^{3/2}) ( E|ξ_2(X_1)|^{3/2} I( |ξ_2(X_1)| ≥ n/(1+x²) ) + E|ξ_3(X_1)|^{3/4} I( |ξ_3(X_1)| ≥ n²/(1+x²) ) ). (5.13)

It follows from (5.13) that

Ω_0 ≤ sup_{x² ≤ 1} Σ_{j=1}^n P( ς_j ≠ ς*_j ) = o(n^{−1/2}). (5.14)

Next we investigate the term Ω_1. Without loss of generality, we assume that x ≤ −1. Then, in view of (5.13) and the independence of the X_k, we obtain

sup_{1 ≤ x² ≤ 10 log n} P( |n^{−1/2} Σ_{k=1}^n ξ_1(X_k)| ≥ |x|/3, ς_j ≠ ς*_j for some j )
 ≤ sup_{1 ≤ x² ≤ 10 log n} Σ_{j=1}^n P( |n^{−1/2} ξ_1(X_j)| ≥ |x|/6, ς_j ≠ ς*_j )
  + sup_{1 ≤ x² ≤ 10 log n} Σ_{j=1}^n P( |n^{−1/2} Σ_{k≠j} ξ_1(X_k)| ≥ |x|/6 ) P( ς_j ≠ ς*_j )
 = o(n^{−1/2}),

where we have used the following estimate: for all 1 ≤ j ≤ n,

P( |n^{−1/2} Σ_{k≠j} ξ_1(X_k)| ≥ |x|/6 ) ≤ C |x|^{−3}.

From this and P_12 = o(n^{−1/2}), P_13 = o(n^{−1/2}), we get

Ω_1 ≤ sup_{x² ≥ 1} P( n^{−3/2} |Σ_{1≤i<j≤n} ψ_ij(x)| ≥ |x|/3 ) + P( |n^{−1} Σ_{j=1}^n ς*_j| ≥ 1/3 )
  + sup_{1 ≤ x² ≤ 10 log n} P( |n^{−1/2} Σ_{k=1}^n ξ_1(X_k)| ≥ |x|/3, ς_j ≠ ς*_j for some j ) (5.15)
 = o(n^{−1/2}).

Similarly, we have that

Ω_2 = o(n^{−1/2}). (5.16)

Combining (5.14)–(5.16), we have shown that P_3 = o(n^{−1/2}).

Finally, we shall prove P_2 = o(n^{−1/2}). Write

Y_j(x) = ξ_1(X_j) + (x/√n) ( ς*_j − Eς*_1 ),

and define

σ_n(x)² = EY_1(x)², θ_n(x) = (x/σ_n(x)) (1 − Eς*_1),
L_n(y) = n { EΦ( y − Y_1(x)/(√n σ_n(x)) ) − Φ(y) } − (1/2) Φ^{(2)}(y),
K̃_n(x) = (1/(√n σ_n(x))) Σ_{j=1}^n Y_j(x) + (1/(n^{3/2} σ_n(x))) Σ_{1≤i<j≤n} ψ_ij(x),
E*_n(y) = Φ(y) + L_n(y) − (Φ^{(3)}(y)/(2√n σ_n³(x))) EY_1(x)Y_2(x)ψ_12(x),
Ê_n(y) = Φ(y) − (Φ^{(3)}(y)/(6√n σ_n³(x))) ( EY_1³(x) + 3 EY_1(x)Y_2(x)ψ_12(x) ).

Then we have

sup_{x² ≤ 10 log n} | P( K*_n(x) ≤ x ) − F^{(3)}(x) |
 = sup_{x² ≤ 10 log n} | P( K̃_n(x) ≤ θ_n(x) ) − F^{(3)}(x) |
 ≤ sup_{x² ≤ 10 log n} sup_y | P( K̃_n(x) ≤ y ) − E*_n(y) | + sup_{x² ≤ 10 log n} sup_y | E*_n(y) − Ê_n(y) |
  + sup_{x² ≤ 10 log n} | Ê_n(θ_n(x)) − F^{(3)}(x) |
 ≡ I_1 + I_2 + I_3, say.

Therefore, (5.10) follows if

I_j = o(n^{−1/2}), j = 1, 2, 3. (5.17)

Next we turn to the proof of (5.17). Under the condition (D2), we obtain that for all x² ≤ 10 log n,

|Eξ*_2(X_1)| ≤ E|ξ_2(X_1)| I( |ξ_2(X_1)| ≥ n/(1+x²) ) = o( ((1+x²)/n)^{1/2} ), (5.18)

E|ξ*_2(X_1)|^α ≤ E|ξ_2(X_1)|^α I( |ξ_2(X_1)| ≤ (n/(1+x²))^{1/2} ) + ( n/(1+x²) )^{α−3/2} E|ξ_2(X_1)|^{3/2} I( |ξ_2(X_1)| ≥ (n/(1+x²))^{1/2} )
 = o( ( n/(1+x²) )^{α−3/2} ), for α > 3/2. (5.19)

Similarly, we have

E|ξ*_3(X_1)|^α = o( ( n²/(1+x²) )^{α−3/4} ), for α > 3/4. (5.20)

Recalling ς*_j = ξ*_2(X_j) + n^{−1/2} ξ*_3(X_j), it follows from (5.18)–(5.20) that for all x² ≤ 10 log n,

|Eς*_1| ≤ |Eξ*_2(X_1)| + n^{−1/2} |Eξ*_3(X_1)| = o( ((1+x²)/n)^{1/2} ), (5.21)
E|ς*_1| ≤ E|ξ*_2(X_1)| + n^{−1/2} E|ξ*_3(X_1)| = O(1), (5.22)
(x²/n) E(ς*_1)² ≤ 2 (x²/n) E(ξ*_2(X_1))² + 2 (x²/n²) E(ξ*_3(X_1))² = o( ((1+x²)/n)^{1/2} ), (5.23)
(|x|³/n^{3/2}) E|ς*_1|³ ≤ 4 (|x|³/n^{3/2}) E|ξ*_2(X_1)|³ + 4 (|x|³/n³) E|ξ*_3(X_1)|³ = o(1). (5.24)

By using (5.21)–(5.24), together with Hölder's inequality, we get, if x² ≤ 10 log n,

σ_n(x)² = 1 + (2x/√n) Eξ_1(X_1)ξ_2(X_1) + o( ((1+x²)/n)^{1/2} ), (5.25)
1/σ_n(x) = 1 − (x/√n) Eξ_1(X_1)ξ_2(X_1) + o( ((1+x²)/n)^{1/2} ), (5.26)
EY_1³(x) = Eξ_1(X_1)³ + o(1), (5.27)
E|Y_1(x)|³ = O(1), (5.28)
EY_1(x)Y_2(x)ψ_12(x) = Eξ_1(X_1)ξ_1(X_2)ϕ_1(X_1, X_2) + o(1). (5.29)

We only check (5.29) in detail. In fact, let μ_j(x) = (x/√n)(ς*_j − Eς*_1); then

EY_1(x)Y_2(x)ψ_12(x)
 = EY_1(x)Y_2(x)ϕ_1(X_1, X_2) + (x/√n) EY_1(x)Y_2(x)ϕ_2(X_1, X_2)
 = Eξ_1(X_1)ξ_1(X_2)ϕ_1(X_1, X_2) + E{ ξ_1(X_1)μ_2(x) + μ_1(x)Y_2(x) } ϕ_1(X_1, X_2) + (x/√n) EY_1(x)Y_2(x)ϕ_2(X_1, X_2).

It follows from (5.24) that E|μ_1(x)|³ = o(1). Therefore, by noting the independence of the X_k, (5.29) follows from (5.28) and the following estimates:

| E{ ξ_1(X_1)μ_2(x) + μ_1(x)Y_2(x) } ϕ_1(X_1, X_2) |
 ≤ 3 ( E|μ_1(x)|³ )^{1/3} ( E|ξ_1(X_1)|³ + E|Y_1(x)|³ )^{1/3} ( E|ϕ_1(X_1, X_2)|^{3/2} )^{2/3} = o(1),
| EY_1(x)Y_2(x)ϕ_2(X_1, X_2) | ≤ ( E|Y_1(x)|³ )^{2/3} ( E|ϕ_2(X_1, X_2)|^{3/2} )^{2/3} = O(1).

We turn back to the proof of (5.17). We first prove (5.17) for j = 3. Recalling the definitions of Ê_n(y) and F^{(3)}(x), it suffices to show

sup_{x² ≤ 10 log n} | Φ(θ_n(x)) − Φ(x) − (x Φ^{(2)}(x)/√n) Eξ_1(X_1)ξ_2(X_1) | = o(n^{−1/2}), (5.30)

sup_{x² ≤ 10 log n} | Φ^{(3)}(θ_n(x)) − Φ^{(3)}(x) | = O(n^{−1/2}), (5.31)
sup_{x² ≤ 10 log n} | EY_1³(x)/σ_n³(x) − Eξ_1(X_1)³ | = o(1), (5.32)
sup_{x² ≤ 10 log n} | EY_1(x)Y_2(x)ψ_12(x)/σ_n³(x) − Eξ_1(X_1)ξ_1(X_2)ϕ_1(X_1, X_2) | = o(1). (5.33)

We only check (5.30); the others are simple by using (5.25)–(5.29). Recalling θ_n(x) = (x/σ_n(x))(1 − Eς*_1), by using (5.21) and (5.26), we get that for all x² ≤ 10 log n,

θ_n(x) = x ( 1 − (x/√n) Eξ_1(X_1)ξ_2(X_1) + o( ((1+x²)/n)^{1/2} ) ),

and hence, for sufficiently large n, x/2 ≤ θ_n(x) ≤ 3x/2 for x > 0 (with the symmetric bounds for x < 0). From these estimates and Taylor's expansion, it follows that there exists 1/2 ≤ δ ≤ 3/2 such that for all x² ≤ 10 log n,

Φ(θ_n(x)) = Φ(x) + (θ_n(x) − x) φ(x) + ((θ_n(x) − x)²/2) Φ^{(2)}(δx)
 = Φ(x) − (x²/√n) φ(x) Eξ_1(X_1)ξ_2(X_1) + o(n^{−1/2}) f(x) φ(x/2),

where f(x) is a polynomial in x. By noting Φ^{(2)}(x) = −x φ(x), (5.30) follows easily. This finishes the proof of (5.17) for j = 3.

We use Lemma 5.5 to prove (5.17) for j = 1. In terms of (5.22) and (5.26), we have that for any fixed β > 0,

sup_{x² ≤ 10 log n} sup_{|t| ≤ β} | E e^{it Y_1(x)/σ_n(x)} − E e^{itξ_1(X_1)} |
 ≤ β sup_{x² ≤ 10 log n} E | Y_1(x)/σ_n(x) − ξ_1(X_1) |
 ≤ β sup_{x² ≤ 10 log n} ( |1/σ_n(x) − 1| E|ξ_1(X_1)| + (|x|/√n) E|ς*_1 − Eς*_1| ) = o(1). (5.34)

Noting that lim sup_{|t|→∞} |E e^{itξ_1(X_1)}| < 1, it follows from (5.34) that for all sufficiently large n and all x² ≤ 10 log n, there exists a positive absolute constant δ such that

lim sup_{|t|→∞} | E e^{it Y_1(x)/σ_n(x)} | < 1 − δ.

Therefore, by using Lemma 5.5 with

V(X_j) = Y_j(x)/σ_n(x), W(X_i, X_j) = ψ_ij(x)/σ_n(x),

we obtain that for all sufficiently large n and all x² ≤ 10 log n (recalling Remark 5.1),

sup_y | P( K̃_n(x) ≤ y ) − E*_n(y) |
 ≤ A δ^{−1} n^{−2/3} (log n) ( E ψ_12²(x)/σ_n²(x) + E|Y_1(x)|³/σ_n³(x) ) + A_1 n^{−1} ( E|Y_1(x)|³/σ_n³(x) )². (5.35)

Recalling (5.8), we have |ϕ_2(X_1, X_2)| ≤ 4√(2n) and hence

E ψ_12²(x) ≤ 2 Eϕ_1²(X_1, X_2) + 2 (x²/n) Eϕ_2²(X_1, X_2)
 ≤ 2 Eϕ_1²(X_1, X_2) + 8 x² n^{−1/2} E|ϕ_2(X_1, X_2)|^{3/2}. (5.36)

In terms of (5.25), (5.28), (5.35) and (5.36), simple calculation shows that

I_1 = sup_{x² ≤ 10 log n} sup_y | P( K̃_n(x) ≤ y ) − E*_n(y) | = o(n^{−1/2}).

This finishes the proof of (5.17) for j = 1.

To prove (5.17) for j = 2, we note that

sup_y | E*_n(y) − Ê_n(y) |
 = sup_y | n { EΦ( y − Y_1(x)/(√n σ_n(x)) ) − Φ(y) } − (1/2) Φ^{(2)}(y) + (Φ^{(3)}(y)/(6√n σ_n³(x))) EY_1³(x) |
 ≤ (A_1/(√n σ_n³(x))) E|Y_1(x)|³ I( |Y_1(x)| ≥ σ_n(x)√n ) + (A_2/(n σ_n⁴(x))) E Y_1⁴(x) I( |Y_1(x)| ≤ σ_n(x)√n ),

where the last inequality follows from Theorem 3.2 of Hall (1982). It follows from (5.25) that for sufficiently large n and all x² ≤ 10 log n,

1/2 < σ_n(x) < 3/2. (5.37)

It follows from (5.21) that for sufficiently large n and all x² ≤ 10 log n,

|Y_1(x)| = | ξ_1(X_1) + (x/√n)(ς*_1 − Eς*_1) | ≤ 1 + |ξ_1(X_1)| + |ξ_2(X_1)|^{1/2} + |ξ_3(X_1)|^{1/4} ≡ κ(X_1), say.

Noting (5.37) and Eκ³(X_1) < ∞, we get for sufficiently large n,

I_2 = sup_{x² ≤ 10 log n} sup_y | E*_n(y) − Ê_n(y) |
 ≤ C n^{−1/2} sup_{x² ≤ 10 log n} ( E|Y_1(x)|³ I( |Y_1(x)| ≥ n^{1/4} ) + n^{−1/2} E|Y_1(x)|⁴ I( |Y_1(x)| ≤ n^{1/4} ) )
 ≤ C n^{−1/2} ( Eκ³(X_1) I( κ(X_1) ≥ n^{1/4} ) + n^{−1/4} Eκ³(X_1) )
 = o(n^{−1/2}).

This implies (5.17) for j = 2. This finishes the proof of (5.10), and the proof of Lemma 5.4 is now complete.

Lemma 5.5. Let V(x) and W(x, y) be real Borel-measurable functions, with W(x, y) symmetric in its arguments. Assume that there exists an n_0 such that for all n ≥ n_0,

(D1') EV(X_1) = 0, EV²(X_1) = 1, τ_n > 0;
(D2') E[ W(X_1, X_2) | X_1 ] = 0.

Then, for all n ≥ n_0,

sup_x | P( n^{−1/2} Σ_{j=1}^n V(X_j) + n^{−3/2} Σ_{1≤i<j≤n} W(X_i, X_j) ≤ x ) − E_n(x) |
 ≤ A τ_n^{−1} (λ_n + ρ_n) n^{−2/3} log n + A_1 (ρ_n)² n^{−1}, (5.38)

where A and A_1 are absolute constants,

λ_n = EW²(X_1, X_2), ρ_n = E|V(X_1)|³,
τ_n = 1 − sup{ |E e^{itV(X_1)}| : 1/(4ρ_n) ≤ |t| ≤ n^{1/6} },
E_n(x) = Φ(x) + L̄_n(x) − (Φ^{(3)}(x)/(2√n)) EV(X_1)V(X_2)W(X_1, X_2),
L̄_n(x) = n { EΦ( x − n^{−1/2} V(X_1) ) − Φ(x) } − (1/2) Φ^{(2)}(x).

Remark 5.1. If the distribution of V(X_1) is non-lattice and the Cramér condition τ ≡ lim sup_{|t|→∞} |E e^{itV(X_1)}| < 1 is satisfied, then τ_n > 0. It is clear that we can replace τ_n by 1 − τ in (5.38) if τ < 1. On the other hand, we note that the result given in Lemma 5.5 is not optimal, but it is enough for our purpose. This result cannot be obtained from Bentkus, Götze and van Zwet (1997).

Proof. Without loss of generality, we assume λ_n < ∞ and ρ_n < ∞. Write

γ(t) = E e^{itV(X_1)/√n},
f_n(t) = ( 1 + n(γ(t) − 1) + t²/2 ) e^{−t²/2},
ϕ̂_n(t) = −t² B_n e^{−t²/2}, B_n = (1/(2√n)) EV(X_1)V(X_2)W(X_1, X_2),
L̂_n(x) = −B_n Φ^{(3)}(x),
S_m = n^{−1/2} Σ_{j=1}^m V(X_j), Δ_{n,m} = n^{−3/2} Σ_{i=1}^{m−1} Σ_{j=i+1}^m W(X_i, X_j).
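The one-term expansion E_n of Lemma 5.5 can be seen at work in a toy simulation. The choices below (exponential data, V(x) = x, W(x, y) = xy) are our own illustration, not an example from the paper; they give EV³ = 2 and EV(X_1)V(X_2)W(X_1, X_2) = 1, so at x = 0 the expansion reduces to Φ(0) − Φ'''(0)(EV³ + 3EVVW)/(6√n) with Φ'''(0) = −1/√(2π):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 40_000

# V(X) = X with X a centered standard exponential: EV = 0, EV^2 = 1, EV^3 = 2.
# W(x, y) = x*y is symmetric, E[W | X_1] = 0, and E V(X1)V(X2)W(X1,X2) = 1.
xs = rng.exponential(size=(reps, n)) - 1.0
s = xs.sum(axis=1) / np.sqrt(n)                                # n^{-1/2} sum_j V(X_j)
delta = (s**2 - (xs**2).sum(axis=1) / n) / (2.0 * np.sqrt(n))  # n^{-3/2} sum_{i<j} W
emp = float(((s + delta) <= 0.0).mean())   # Monte Carlo estimate of P(S_n + Delta_{n,n} <= 0)

# E_n(0) to first order: 0.5 + (EV^3 + 3*EVVW) / (6 sqrt(n) sqrt(2 pi))
edge = 0.5 + (2.0 + 3.0) / (6.0 * np.sqrt(n)) / np.sqrt(2.0 * np.pi)
print(emp, round(edge, 4))  # the expansion should beat the plain normal value 0.5
```

The skewness of the data shifts P(S_n + Δ_{n,n} ≤ 0) visibly above 1/2, and the Edgeworth term captures most of that shift.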

Simple calculation shows that

∫ e^{itx} d( Φ(x) + L̄_n(x) ) = f_n(t), ∫ e^{itx} dL̂_n(x) = it ϕ̂_n(t). (5.39)

From (5.39) and Esseen's smoothing lemma (Petrov, 1975), it follows that (noting ρ_n ≥ 1)

sup_x | P( n^{−1/2} Σ_{j=1}^n V(X_j) + n^{−3/2} Σ_{1≤i<j≤n} W(X_i, X_j) ≤ x ) − E_n(x) |
 ≤ ∫_{|t| ≤ n^{2/3}} |t|^{−1} | E e^{it(S_n + Δ_{n,n})} − f_n(t) − it ϕ̂_n(t) | dt + A n^{−2/3} sup_x | dE_n(x)/dx |
 ≤ Σ_{j=1}^4 I_j + A_1 n^{−2/3} (λ_n + ρ_n), (5.40)

where

I_1 = ∫_{|t| ≤ n^{1/10}} |t|^{−1} | E e^{it(S_n + Δ_{n,n})} − E e^{itS_n} − it E Δ_{n,n} e^{itS_n} | dt,
I_2 = ∫_{|t| ≤ n^{1/10}} |t|^{−1} | E e^{itS_n} − f_n(t) | dt,
I_3 = ∫_{|t| ≤ n^{1/10}} | E Δ_{n,n} e^{itS_n} − ϕ̂_n(t) | dt,
I_4 = ∫_{n^{1/10} ≤ |t| ≤ n^{2/3}} |t|^{−1} | E e^{it(S_n + Δ_{n,n})} | dt.

In the following, we estimate each term of (5.40). First we estimate I_1. In terms of the condition (D2'), it can easily be shown that

E Δ²_{n,m} ≤ A n^{−2} m λ_n. (5.41)

Thus, we have

I_1 ≤ ∫_{|t| ≤ n^{1/10}} (|t|/2) E Δ²_{n,n} dt ≤ A n^{−2/3} λ_n.

Secondly we estimate I_2. Using arguments similar to the proof of the corresponding lemma in Hall (1982), we have

I_2 ≤ A n^{−1} ( (EV²(X_1))² + (E|V(X_1)|³)² ) ≤ A_1 n^{−1} (ρ_n)².

Thirdly we estimate I_3. Similar to the proof of Bickel et al. (1986), we find that

E W(X_i, X_j) e^{it(V(X_i)+V(X_j))/√n} = −(t²/n) EV(X_1)V(X_2)W(X_1, X_2) + θ^{(1)}_{ij}(t), (5.42)

where ( noting |e^{ix} − 1 − ix| ≤ 2|x|^{3/2} and |e^{ix} − 1| ≤ |x| )

|θ^{(1)}_{ij}(t)|
 ≤ | E W(X_i, X_j) ( e^{itV(X_i)/√n} − 1 − itV(X_i)/√n ) ( e^{itV(X_j)/√n} − 1 ) |
  + (|t|/√n) | E W(X_i, X_j) V(X_i) ( e^{itV(X_j)/√n} − 1 − itV(X_j)/√n ) |
 ≤ 2 ( |t|/√n )^{5/2} { E|W(X_1, X_2)| |V(X_1)|^{3/2} |V(X_2)| + E|W(X_1, X_2)| |V(X_1)| |V(X_2)|^{3/2} }
 ≤ 4 ( |t|/√n )^{5/2} (λ_n ρ_n)^{1/2} ≤ 8 ( |t|/√n )^{5/2} (λ_n + ρ_n).

Together (5.42) with the following relations (see Petrov, 1975):

|γ(t)|ⁿ ≤ e^{−t²/3}, | γⁿ(t) − e^{−t²/2} | ≤ 16 ρ_n n^{−1/2} |t|³ e^{−t²/3} (5.43)

for |t| ≤ √n/(4ρ_n), we obtain that if |t| ≤ n^{1/10} and n is sufficiently large,

E W(X_i, X_j) e^{itS_n} = γ^{n−2}(t) E W(X_i, X_j) e^{it(V(X_i)+V(X_j))/√n}
 = −(t²/n) γ^{n−2}(t) EV(X_1)V(X_2)W(X_1, X_2) + θ^{(1)}_{ij}(t) γ^{n−2}(t)
 = −(t²/n) e^{−t²/2} EV(X_1)V(X_2)W(X_1, X_2) + θ^{(2)}_{ij}(t), (5.44)

where

|θ^{(2)}_{ij}(t)| ≤ |θ^{(1)}_{ij}(t)| e^{−t²/4} + (λ_n^{1/2} t²/n) | γ^{n−2}(t) − e^{−t²/2} |
 ≤ A ( n^{−5/4} (λ_n + ρ_n) + n^{−3/2} λ_n^{1/2} ρ_n ) ( 1 + |t|⁶ ) e^{−t²/4}.

From (5.44), it follows that

I_3 ≤ A ( n^{−3/4} (λ_n + ρ_n) + n^{−1} λ_n^{1/2} ρ_n ) ≤ 2A ( n^{−3/4} (λ_n + ρ_n) + n^{−1} (ρ_n)² ).

Finally we estimate I_4. Put

Δ̃_{n,m} = n^{−3/2} Σ_{i=m}^{n−1} Σ_{j=i+1}^{n} W(X_i, X_j).

In view of (5.41), we have that

| E e^{it(S_n + Δ_{n,n})} − E e^{it(S_n + Δ̃_{n,m})} − it E ( Δ_{n,n} − Δ̃_{n,m} ) e^{it(S_n + Δ̃_{n,m})} | ≤ A t² n^{−2} m λ_n.

This inequality, together with the independence of the X_k, implies that for any 1 ≤ m ≤ n,

| E e^{it(S_n + Δ_{n,n})} | ≤ |γ(t)|^{m−1} + A_1 n^{−1/2} λ_n^{1/2} |t| |γ(t)|^{m−2} + A_2 t² n^{−2} m λ_n, (5.45)

where we use the estimate E|Δ_{n,n} − Δ̃_{n,m}| ≤ ( E(Δ_{n,n} − Δ̃_{n,m})² )^{1/2}. Choosing m = [ 6 log n / t² ] + 2, it follows from (5.45) and (5.43) that

∫_{n^{1/10} ≤ |t| ≤ √n/(4ρ_n)} |t|^{−1} | E e^{it(S_n + Δ_{n,n})} | dt ≤ A n^{−2/3} ( 1 + λ_n ). (5.46)

On the other hand, noting that τ_n > 0, it follows that when √n/(4ρ_n) ≤ |t| ≤ n^{2/3},

|γ(t)| ≤ 1 − τ_n ≤ e^{−τ_n}. (5.47)

In this case, choosing m = [ 4 log n / τ_n ] + 2, it follows from (5.45) and (5.47) that

∫_{√n/(4ρ_n) ≤ |t| ≤ n^{2/3}} |t|^{−1} | E e^{it(S_n + Δ_{n,n})} | dt ≤ A τ_n^{−1} n^{−2/3} (log n) λ_n. (5.48)

Substituting the above estimates for the I_j into (5.40), we get the required result. This finishes the proof of Lemma 5.5.

Lemma 5.6. Assume that ∫ [ F(t)(1 − F(t)) ]^{1/3} dt < ∞. Then we have

P( |Q_j| ≥ n^{−1} (log n)^{−1} ) = o(n^{−1/2}), for j = 1, 2, (5.49)
P( |Q_j| ≥ n^{−1/2} (log n)^{−1} ) = o(n^{−1/2}), for j = 3, 4, 5, (5.50)

where the Q_j, j = 1, ..., 5, are defined as in (3.3)–(3.7).

Proof. (i) First consider Q_1. Since ζ(X_i, X_i) = (2σ)^{−1} ∫ J'_0(t) [ I(X_i ≤ t) − F(t) ]² dt, we have

E|ζ(X_i, X_i)|³ ≤ (1/(8σ³)) sup_t |J'_0(t)|³ E ( ∫ ( I(X_i ≤ t) − F(t) )² dt )³
 ≤ C(J) ∭ E [ ( I(X_i ≤ s) − F(s) )² ( I(X_i ≤ t) − F(t) )² ( I(X_i ≤ v) − F(v) )² ] ds dt dv
 ≤ C(J) ( ∫ [ E ( I(X_i ≤ t) − F(t) )⁶ ]^{1/3} dt )³
 ≤ C(J) ( ∫ [ E ( I(X_i ≤ t) − F(t) )² ]^{1/3} dt )³
 = C(J) ( ∫ F^{1/3}(t) (1 − F(t))^{1/3} dt )³ ≤ C.
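The integrability condition of Lemma 5.6 is mild; for instance it holds for the standard normal distribution, since Φ(t)(1 − Φ(t)) decays like e^{−t²/2} in the tails. A quick numeric check (our own illustration):

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

ts = np.linspace(-20.0, 20.0, 40001)
vals = np.array([(Phi(t) * (1.0 - Phi(t))) ** (1.0 / 3.0) for t in ts])
# trapezoid rule; the tails beyond |t| = 20 contribute essentially nothing,
# since the integrand there behaves like exp(-t^2/6)
integral = float(np.sum((vals[:-1] + vals[1:]) * np.diff(ts) / 2.0))
print(round(integral, 3))
```

The integral comes out near 2.5, confirming ∫ [F(t)(1 − F(t))]^{1/3} dt < ∞ for this F. By contrast, a distribution with tails heavier than |t|^{−3} would make the integrand decay no faster than |t|^{−1} and the condition would fail.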

Applying Lemma 5.1 in the Appendix with p = 3, k = 1 and m = 0, we get E|nQ_1|³ ≤ C n^{−3/2}. Therefore,

P( |Q_1| ≥ n^{−1} (log n)^{−1} ) ≤ (log n)³ E|nQ_1|³ ≤ C n^{−3/2} (log n)³ = o(n^{−1/2}).

(ii) Let us deal with the term Q_2. Note that for k ≥ 2, we have

E | F_n(t) − F(t) |ᵏ ≤ A n^{−k/2} E | I{X_1 ≤ t} − F(t) |ᵏ
 ≤ A n^{−k/2} E | I{X_1 ≤ t} − F(t) |² = A n^{−k/2} F(t)(1 − F(t)). (5.51)

It follows that

P( |Q_2| ≥ n^{−1} (log n)^{−1} ) ≤ ( n log n )² E Q_2²
 = C ( n log n )² σ^{−2} E ( ∫ | F_n(t) − F(t) |³ dt )²
 = C ( n log n )² σ^{−2} ∬ E { | F_n(s) − F(s) |³ | F_n(t) − F(t) |³ } ds dt
 ≤ C ( n log n )² σ^{−2} ( ∫ ( E | F_n(s) − F(s) |⁶ )^{1/2} ds )²
 ≤ C n² (log n)² n^{−3} σ^{−2} ( ∫ F^{1/2}(s)(1 − F(s))^{1/2} ds )²
 = o(n^{−1/2}).

(iii) Next we consider the term Q_3. In view of the inequality

| Z(s, t, F) | ≤ [ F(s)(1 − F(s)) ]^{1/2} [ F(t)(1 − F(t)) ]^{1/2} (5.52)

(which holds equally with F replaced by F_n), we have

|Q_3| ≤ σ^{−2} sup_{x,y} | J'_0(x) J'_0(y) | ∬ ( F_n(s) − F(s) )² | Z(s, t, F_n) | ds dt
 ≤ A(J) σ^{−2} ( ∫ ( F_n(s) − F(s) )² ds ) ( ∫ F_n^{1/2}(t) (1 − F_n(t))^{1/2} dt )
 = A(J) σ^{−2} Q_6 Q_7, (5.53)

where

Q_6 = ∫ ( F_n(s) − F(s) )² ds, Q_7 = ∫ F_n^{1/2}(t) (1 − F_n(t))^{1/2} dt.

It follows from this and (5.51) that

E Q_6³ = E ( ∫ ( F_n(s) − F(s) )² ds )³
 = ∭ E { ( F_n(s) − F(s) )² ( F_n(t) − F(t) )² ( F_n(v) − F(v) )² } ds dt dv
 ≤ ( ∫ ( E | F_n(s) − F(s) |⁶ )^{1/3} ds )³
 ≤ A n^{−3} ( ∫ F^{1/3}(s)(1 − F(s))^{1/3} ds )³. (5.54)
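The first display in (5.51), at k = 2, is in fact an identity: Var F_n(t) = F(t)(1 − F(t))/n. A small simulation (our own illustration, with uniform data so that F(t) = t) confirms it:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, t = 50, 100_000, 0.3
F_t = 0.3  # for U(0,1) samples, F(t) = t

samples = rng.random((reps, n))
Fn_t = (samples <= t).mean(axis=1)       # empirical distribution function at t
var_emp = float(((Fn_t - F_t) ** 2).mean())   # Monte Carlo E (F_n(t) - F(t))^2
var_theory = F_t * (1.0 - F_t) / n            # = 0.21 / 50 = 0.0042
print(var_emp, var_theory)
```

For k > 2 the inequality then follows because |F_n(t) − F(t)| ≤ 1, which is exactly how the bound A n^{−k/2} F(t)(1 − F(t)) is used above.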

Similarly, we have

E Q_7³ = E ( ∫ F_n^{1/2}(t)(1 − F_n(t))^{1/2} dt )³
 = ∭ E { F_n^{1/2}(s)(1 − F_n(s))^{1/2} F_n^{1/2}(t)(1 − F_n(t))^{1/2} F_n^{1/2}(v)(1 − F_n(v))^{1/2} } ds dt dv
 ≤ ∭ ( E { F_n^{3/2}(s)(1 − F_n(s))^{3/2} } )^{1/3} ( E { F_n^{3/2}(t)(1 − F_n(t))^{3/2} } )^{1/3} ( E { F_n^{3/2}(v)(1 − F_n(v))^{3/2} } )^{1/3} ds dt dv
 = ( ∫ ( E { F_n^{3/2}(t)(1 − F_n(t))^{3/2} } )^{1/3} dt )³
 ≤ ( ∫ ( E { F_n(t)(1 − F_n(t)) } )^{1/3} dt )³
 ≤ ( ∫ F^{1/3}(t)(1 − F(t))^{1/3} dt )³, (5.55)

where in the second last step we have used the inequality E[ F_n(t)(1 − F_n(t)) ] ≤ F(t)(1 − F(t)).

Combining (5.53)–(5.55) and applying Markov's inequality, we have

P( |Q_3| ≥ n^{−1/2} (log n)^{−1} ) ≤ ( n^{1/2} log n )^{3/2} E |Q_3|^{3/2}
 ≤ A^{3/2}(J) σ^{−3} ( n^{1/2} log n )^{3/2} ( E Q_6³ )^{1/2} ( E Q_7³ )^{1/2}
 ≤ C n^{−3/4} (log n)^{3/2} ( ∫ [ F(t)(1 − F(t)) ]^{1/3} dt )³
 = o(n^{−1/2}). (5.56)

(iv) Next we consider the term Q_4. Using (5.52) and the Cauchy–Schwarz inequality, we get

|Q_4| = σ^{−2} | ∬ [ J_n(s) − J_0(s) ] [ J_n(t) − J_0(t) ] Z(s, t, F) ds dt |
 ≤ σ^{−2} sup_x |J'_0(x)|² ∬ | F_n(s) − F(s) | | F_n(t) − F(t) | | Z(s, t, F) | ds dt
 ≤ B(J) σ^{−2} ∬ | F_n(s) − F(s) | | F_n(t) − F(t) | [ F(s)(1 − F(s)) ]^{1/2} [ F(t)(1 − F(t)) ]^{1/2} ds dt
 = B(J) σ^{−2} ( ∫ | F_n(t) − F(t) | [ F(t)(1 − F(t)) ]^{1/2} dt )²
 ≤ B(J) σ^{−2} ( ∫ ( F_n(s) − F(s) )² ds ) ( ∫ F(t)(1 − F(t)) dt )
 ≤ B(J) σ^{−2} ( ∫ ( F_n(s) − F(s) )² ds ) ( ∫ F^{1/2}(t)(1 − F(t))^{1/2} dt ).

A UNIFIED APPROACH TO EDGEWORTH EXPANSIONS FOR A GENERAL CLASS OF STATISTICS

Statistica Sinica 20 (2010), 613-636. Bing-Yi Jing and Qiying Wang, Hong Kong University of Science and Technology and University of Sydney.


More information

Lecture 3: August 31

Lecture 3: August 31 36-705: Itermediate Statistics Fall 018 Lecturer: Siva Balakrisha Lecture 3: August 31 This lecture will be mostly a summary of other useful expoetial tail bouds We will ot prove ay of these i lecture,

More information

4. Partial Sums and the Central Limit Theorem

4. Partial Sums and the Central Limit Theorem 1 of 10 7/16/2009 6:05 AM Virtual Laboratories > 6. Radom Samples > 1 2 3 4 5 6 7 4. Partial Sums ad the Cetral Limit Theorem The cetral limit theorem ad the law of large umbers are the two fudametal theorems

More information

Solutions to HW Assignment 1

Solutions to HW Assignment 1 Solutios to HW: 1 Course: Theory of Probability II Page: 1 of 6 Uiversity of Texas at Austi Solutios to HW Assigmet 1 Problem 1.1. Let Ω, F, {F } 0, P) be a filtered probability space ad T a stoppig time.

More information

STAT Homework 1 - Solutions

STAT Homework 1 - Solutions STAT-36700 Homework 1 - Solutios Fall 018 September 11, 018 This cotais solutios for Homework 1. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better

More information

Discrete Mathematics for CS Spring 2008 David Wagner Note 22

Discrete Mathematics for CS Spring 2008 David Wagner Note 22 CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig

More information

6.3 Testing Series With Positive Terms

6.3 Testing Series With Positive Terms 6.3. TESTING SERIES WITH POSITIVE TERMS 307 6.3 Testig Series With Positive Terms 6.3. Review of what is kow up to ow I theory, testig a series a i for covergece amouts to fidig the i= sequece of partial

More information

Random Variables, Sampling and Estimation

Random Variables, Sampling and Estimation Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig

More information

Lecture 7: Properties of Random Samples

Lecture 7: Properties of Random Samples Lecture 7: Properties of Radom Samples 1 Cotiued From Last Class Theorem 1.1. Let X 1, X,...X be a radom sample from a populatio with mea µ ad variace σ

More information

A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS

A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS J. Japa Statist. Soc. Vol. 41 No. 1 2011 67 73 A RANK STATISTIC FOR NON-PARAMETRIC K-SAMPLE AND CHANGE POINT PROBLEMS Yoichi Nishiyama* We cosider k-sample ad chage poit problems for idepedet data i a

More information

Central limit theorem and almost sure central limit theorem for the product of some partial sums

Central limit theorem and almost sure central limit theorem for the product of some partial sums Proc. Idia Acad. Sci. Math. Sci. Vol. 8, No. 2, May 2008, pp. 289 294. Prited i Idia Cetral it theorem ad almost sure cetral it theorem for the product of some partial sums YU MIAO College of Mathematics

More information

Frequentist Inference

Frequentist Inference Frequetist Iferece The topics of the ext three sectios are useful applicatios of the Cetral Limit Theorem. Without kowig aythig about the uderlyig distributio of a sequece of radom variables {X i }, for

More information

1 The Haar functions and the Brownian motion

1 The Haar functions and the Brownian motion 1 The Haar fuctios ad the Browia motio 1.1 The Haar fuctios ad their completeess The Haar fuctios The basic Haar fuctio is 1 if x < 1/2, ψx) = 1 if 1/2 x < 1, otherwise. 1.1) It has mea zero 1 ψx)dx =,

More information

Lecture 20: Multivariate convergence and the Central Limit Theorem

Lecture 20: Multivariate convergence and the Central Limit Theorem Lecture 20: Multivariate covergece ad the Cetral Limit Theorem Covergece i distributio for radom vectors Let Z,Z 1,Z 2,... be radom vectors o R k. If the cdf of Z is cotiuous, the we ca defie covergece

More information

Lecture 12: September 27

Lecture 12: September 27 36-705: Itermediate Statistics Fall 207 Lecturer: Siva Balakrisha Lecture 2: September 27 Today we will discuss sufficiecy i more detail ad the begi to discuss some geeral strategies for costructig estimators.

More information

Introduction to Extreme Value Theory Laurens de Haan, ISM Japan, Erasmus University Rotterdam, NL University of Lisbon, PT

Introduction to Extreme Value Theory Laurens de Haan, ISM Japan, Erasmus University Rotterdam, NL University of Lisbon, PT Itroductio to Extreme Value Theory Laures de Haa, ISM Japa, 202 Itroductio to Extreme Value Theory Laures de Haa Erasmus Uiversity Rotterdam, NL Uiversity of Lisbo, PT Itroductio to Extreme Value Theory

More information

Math 341 Lecture #31 6.5: Power Series

Math 341 Lecture #31 6.5: Power Series Math 341 Lecture #31 6.5: Power Series We ow tur our attetio to a particular kid of series of fuctios, amely, power series, f(x = a x = a 0 + a 1 x + a 2 x 2 + where a R for all N. I terms of a series

More information

Chapter 6 Principles of Data Reduction

Chapter 6 Principles of Data Reduction Chapter 6 for BST 695: Special Topics i Statistical Theory. Kui Zhag, 0 Chapter 6 Priciples of Data Reductio Sectio 6. Itroductio Goal: To summarize or reduce the data X, X,, X to get iformatio about a

More information

A Proof of Birkhoff s Ergodic Theorem

A Proof of Birkhoff s Ergodic Theorem A Proof of Birkhoff s Ergodic Theorem Joseph Hora September 2, 205 Itroductio I Fall 203, I was learig the basics of ergodic theory, ad I came across this theorem. Oe of my supervisors, Athoy Quas, showed

More information

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara Poit Estimator Eco 325 Notes o Poit Estimator ad Cofidece Iterval 1 By Hiro Kasahara Parameter, Estimator, ad Estimate The ormal probability desity fuctio is fully characterized by two costats: populatio

More information

Probability and Statistics

Probability and Statistics ICME Refresher Course: robability ad Statistics Staford Uiversity robability ad Statistics Luyag Che September 20, 2016 1 Basic robability Theory 11 robability Spaces A probability space is a triple (Ω,

More information

Application to Random Graphs

Application to Random Graphs A Applicatio to Radom Graphs Brachig processes have a umber of iterestig ad importat applicatios. We shall cosider oe of the most famous of them, the Erdős-Réyi radom graph theory. 1 Defiitio A.1. Let

More information

The Riemann Zeta Function

The Riemann Zeta Function Physics 6A Witer 6 The Riema Zeta Fuctio I this ote, I will sketch some of the mai properties of the Riema zeta fuctio, ζ(x). For x >, we defie ζ(x) =, x >. () x = For x, this sum diverges. However, we

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013. Large Deviations for i.i.d. Random Variables

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013. Large Deviations for i.i.d. Random Variables MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013 Large Deviatios for i.i.d. Radom Variables Cotet. Cheroff boud usig expoetial momet geeratig fuctios. Properties of a momet

More information

An almost sure invariance principle for trimmed sums of random vectors

An almost sure invariance principle for trimmed sums of random vectors Proc. Idia Acad. Sci. Math. Sci. Vol. 20, No. 5, November 200, pp. 6 68. Idia Academy of Scieces A almost sure ivariace priciple for trimmed sums of radom vectors KE-ANG FU School of Statistics ad Mathematics,

More information

32 estimating the cumulative distribution function

32 estimating the cumulative distribution function 32 estimatig the cumulative distributio fuctio 4.6 types of cofidece itervals/bads Let F be a class of distributio fuctios F ad let θ be some quatity of iterest, such as the mea of F or the whole fuctio

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 3 9/11/2013. Large deviations Theory. Cramér s Theorem

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 3 9/11/2013. Large deviations Theory. Cramér s Theorem MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/5.070J Fall 203 Lecture 3 9//203 Large deviatios Theory. Cramér s Theorem Cotet.. Cramér s Theorem. 2. Rate fuctio ad properties. 3. Chage of measure techique.

More information

Solution to Chapter 2 Analytical Exercises

Solution to Chapter 2 Analytical Exercises Nov. 25, 23, Revised Dec. 27, 23 Hayashi Ecoometrics Solutio to Chapter 2 Aalytical Exercises. For ay ε >, So, plim z =. O the other had, which meas that lim E(z =. 2. As show i the hit, Prob( z > ε =

More information

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4.

Definition 4.2. (a) A sequence {x n } in a Banach space X is a basis for X if. unique scalars a n (x) such that x = n. a n (x) x n. (4. 4. BASES I BAACH SPACES 39 4. BASES I BAACH SPACES Sice a Baach space X is a vector space, it must possess a Hamel, or vector space, basis, i.e., a subset {x γ } γ Γ whose fiite liear spa is all of X ad

More information

MAXIMAL INEQUALITIES AND STRONG LAW OF LARGE NUMBERS FOR AANA SEQUENCES

MAXIMAL INEQUALITIES AND STRONG LAW OF LARGE NUMBERS FOR AANA SEQUENCES Commu Korea Math Soc 26 20, No, pp 5 6 DOI 0434/CKMS20265 MAXIMAL INEQUALITIES AND STRONG LAW OF LARGE NUMBERS FOR AANA SEQUENCES Wag Xueju, Hu Shuhe, Li Xiaoqi, ad Yag Wezhi Abstract Let {X, } be a sequece

More information

TESTING FOR THE BUFFERED AUTOREGRESSIVE PROCESSES (SUPPLEMENTARY MATERIAL)

TESTING FOR THE BUFFERED AUTOREGRESSIVE PROCESSES (SUPPLEMENTARY MATERIAL) TESTING FOR THE BUFFERED AUTOREGRESSIVE PROCESSES SUPPLEMENTARY MATERIAL) By Ke Zhu, Philip L.H. Yu ad Wai Keug Li Chiese Academy of Scieces ad Uiversity of Hog Kog APPENDIX: PROOFS I this appedix, we

More information

Ada Boost, Risk Bounds, Concentration Inequalities. 1 AdaBoost and Estimates of Conditional Probabilities

Ada Boost, Risk Bounds, Concentration Inequalities. 1 AdaBoost and Estimates of Conditional Probabilities CS8B/Stat4B Sprig 008) Statistical Learig Theory Lecture: Ada Boost, Risk Bouds, Cocetratio Iequalities Lecturer: Peter Bartlett Scribe: Subhrasu Maji AdaBoost ad Estimates of Coditioal Probabilities We

More information

EE 4TM4: Digital Communications II Probability Theory

EE 4TM4: Digital Communications II Probability Theory 1 EE 4TM4: Digital Commuicatios II Probability Theory I. RANDOM VARIABLES A radom variable is a real-valued fuctio defied o the sample space. Example: Suppose that our experimet cosists of tossig two fair

More information

Probability for mathematicians INDEPENDENCE TAU

Probability for mathematicians INDEPENDENCE TAU Probability for mathematicias INDEPENDENCE TAU 2013 28 Cotets 3 Ifiite idepedet sequeces 28 3a Idepedet evets........................ 28 3b Idepedet radom variables.................. 33 3 Ifiite idepedet

More information

HAJEK-RENYI-TYPE INEQUALITY FOR SOME NONMONOTONIC FUNCTIONS OF ASSOCIATED RANDOM VARIABLES

HAJEK-RENYI-TYPE INEQUALITY FOR SOME NONMONOTONIC FUNCTIONS OF ASSOCIATED RANDOM VARIABLES HAJEK-RENYI-TYPE INEQUALITY FOR SOME NONMONOTONIC FUNCTIONS OF ASSOCIATED RANDOM VARIABLES ISHA DEWAN AND B. L. S. PRAKASA RAO Received 1 April 005; Revised 6 October 005; Accepted 11 December 005 Let

More information

It is often useful to approximate complicated functions using simpler ones. We consider the task of approximating a function by a polynomial.

It is often useful to approximate complicated functions using simpler ones. We consider the task of approximating a function by a polynomial. Taylor Polyomials ad Taylor Series It is ofte useful to approximate complicated fuctios usig simpler oes We cosider the task of approximatig a fuctio by a polyomial If f is at least -times differetiable

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 21 11/27/2013

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 21 11/27/2013 MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 21 11/27/2013 Fuctioal Law of Large Numbers. Costructio of the Wieer Measure Cotet. 1. Additioal techical results o weak covergece

More information

Chapter 6 Infinite Series

Chapter 6 Infinite Series Chapter 6 Ifiite Series I the previous chapter we cosidered itegrals which were improper i the sese that the iterval of itegratio was ubouded. I this chapter we are goig to discuss a topic which is somewhat

More information

Supplementary Material for Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate

Supplementary Material for Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate Supplemetary Material for Fast Stochastic AUC Maximizatio with O/-Covergece Rate Migrui Liu Xiaoxua Zhag Zaiyi Che Xiaoyu Wag 3 iabao Yag echical Lemmas ized versio of Hoeffdig s iequality, ote that We

More information

Entropy Rates and Asymptotic Equipartition

Entropy Rates and Asymptotic Equipartition Chapter 29 Etropy Rates ad Asymptotic Equipartitio Sectio 29. itroduces the etropy rate the asymptotic etropy per time-step of a stochastic process ad shows that it is well-defied; ad similarly for iformatio,

More information

Sequences and Limits

Sequences and Limits Chapter Sequeces ad Limits Let { a } be a sequece of real or complex umbers A ecessary ad sufficiet coditio for the sequece to coverge is that for ay ɛ > 0 there exists a iteger N > 0 such that a p a q

More information

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED MATH 47 / SPRING 013 ASSIGNMENT : DUE FEBRUARY 4 FINALIZED Please iclude a cover sheet that provides a complete setece aswer to each the followig three questios: (a) I your opiio, what were the mai ideas

More information

REGRESSION WITH QUADRATIC LOSS

REGRESSION WITH QUADRATIC LOSS REGRESSION WITH QUADRATIC LOSS MAXIM RAGINSKY Regressio with quadratic loss is aother basic problem studied i statistical learig theory. We have a radom couple Z = X, Y ), where, as before, X is a R d

More information

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker

SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 9. POINT ESTIMATION 9. Covergece i Probability. The bases of poit estimatio have already bee laid out i previous chapters. I chapter 5

More information

Appendix to Quicksort Asymptotics

Appendix to Quicksort Asymptotics Appedix to Quicksort Asymptotics James Alle Fill Departmet of Mathematical Scieces The Johs Hopkis Uiversity jimfill@jhu.edu ad http://www.mts.jhu.edu/~fill/ ad Svate Jaso Departmet of Mathematics Uppsala

More information

Advanced Analysis. Min Yan Department of Mathematics Hong Kong University of Science and Technology

Advanced Analysis. Min Yan Department of Mathematics Hong Kong University of Science and Technology Advaced Aalysis Mi Ya Departmet of Mathematics Hog Kog Uiversity of Sciece ad Techology September 3, 009 Cotets Limit ad Cotiuity 7 Limit of Sequece 8 Defiitio 8 Property 3 3 Ifiity ad Ifiitesimal 8 4

More information

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss

ECE 901 Lecture 12: Complexity Regularization and the Squared Loss ECE 90 Lecture : Complexity Regularizatio ad the Squared Loss R. Nowak 5/7/009 I the previous lectures we made use of the Cheroff/Hoeffdig bouds for our aalysis of classifier errors. Hoeffdig s iequality

More information

Law of the sum of Bernoulli random variables

Law of the sum of Bernoulli random variables Law of the sum of Beroulli radom variables Nicolas Chevallier Uiversité de Haute Alsace, 4, rue des frères Lumière 68093 Mulhouse icolas.chevallier@uha.fr December 006 Abstract Let be the set of all possible

More information

EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS

EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS Ryszard Zieliński Ist Math Polish Acad Sc POBox 21, 00-956 Warszawa 10, Polad e-mail: rziel@impagovpl ABSTRACT Weak laws of large umbers (W LLN), strog

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learig Theory: Lecture Notes Kamalika Chaudhuri October 4, 0 Cocetratio of Averages Cocetratio of measure is very useful i showig bouds o the errors of machie-learig algorithms. We will begi with a basic

More information

Statistical Analysis on Uncertainty for Autocorrelated Measurements and its Applications to Key Comparisons

Statistical Analysis on Uncertainty for Autocorrelated Measurements and its Applications to Key Comparisons Statistical Aalysis o Ucertaity for Autocorrelated Measuremets ad its Applicatios to Key Comparisos Nie Fa Zhag Natioal Istitute of Stadards ad Techology Gaithersburg, MD 0899, USA Outlies. Itroductio.

More information

ECONOMETRIC THEORY. MODULE XIII Lecture - 34 Asymptotic Theory and Stochastic Regressors

ECONOMETRIC THEORY. MODULE XIII Lecture - 34 Asymptotic Theory and Stochastic Regressors ECONOMETRIC THEORY MODULE XIII Lecture - 34 Asymptotic Theory ad Stochastic Regressors Dr. Shalabh Departmet of Mathematics ad Statistics Idia Istitute of Techology Kapur Asymptotic theory The asymptotic

More information

Lecture 2. The Lovász Local Lemma

Lecture 2. The Lovász Local Lemma Staford Uiversity Sprig 208 Math 233A: No-costructive methods i combiatorics Istructor: Ja Vodrák Lecture date: Jauary 0, 208 Origial scribe: Apoorva Khare Lecture 2. The Lovász Local Lemma 2. Itroductio

More information

Math Solutions to homework 6

Math Solutions to homework 6 Math 175 - Solutios to homework 6 Cédric De Groote November 16, 2017 Problem 1 (8.11 i the book): Let K be a compact Hermitia operator o a Hilbert space H ad let the kerel of K be {0}. Show that there

More information