AN EDGEWORTH EXPANSION FOR SYMMETRIC FINITE POPULATION STATISTICS. M. Bloznelis 1, F. Götze

M. Bloznelis¹ and F. Götze
Vilnius University and the Institute of Mathematics and Informatics; Bielefeld University
2000 (Revised 2001)

Abstract. Let $T$ be a symmetric statistic based on a sample of size $n$ drawn without replacement from a finite population of size $N$, where $N > n$. Assuming that the linear part of Hoeffding's decomposition of $T$ is nondegenerate, we construct a one-term Edgeworth expansion for the distribution function of $T$ and prove the validity of the expansion with remainder $O(1/n_*)$ as $n_* \to \infty$, where $n_* = \min\{n, N-n\}$.

1. Introduction and results

1.1. Introduction. Given a set $\mathcal X = \{x_1, \dots, x_N\}$, let $(X_1, \dots, X_N)$ be a random permutation of the ordered set $(x_1, \dots, x_N)$. We assume that the random permutation is uniformly distributed over the class of permutations. Let $T = t(X_1, \dots, X_n)$ denote a symmetric statistic of the first $n$ observations $X_1, \dots, X_n$, where $n < N$. That is, $t$ is a real function defined on the class of subsets $\{x_{i_1}, \dots, x_{i_n}\} \subset \mathcal X$ of size $n$, and we assume that $t(x_{i_1}, \dots, x_{i_n})$ is invariant under permutations of its arguments. Since $X_1, \dots, X_n$ represents a sample drawn without replacement from the population $\mathcal X$, we call $T$ a symmetric finite population statistic.

We shall consider symmetric finite population statistics which are asymptotically normal as $n_*$ and $N$ tend to infinity, where $n_* = \min\{n, N-n\}$. In the simplest case of linear statistics the asymptotic normality was established by Erdős and Rényi

¹ Research supported by the A. v. Humboldt Foundation.
1991 Mathematics Subject Classification. Primary 62E20; secondary 60F05.
Key words and phrases. Edgeworth expansion, finite population, Hoeffding decomposition, sampling without replacement.

(1959) under fairly general conditions. The rate of convergence in the Erdős-Rényi central limit theorem was studied by Bikelis (1972). Höglund (1978) proved the Berry-Esseen bound. An Edgeworth expansion was established by Robinson (1978); see also Bickel and van Zwet (1978), Schneller (1989), Babu and Bai (1996). Asymptotic normality of nonlinear finite population statistics was studied by Nandi and Sen (1963), who proved a central limit theorem for U-statistics. The accuracy of the normal approximation of U-statistics was studied by Zhao and Chen (1987, 1990) and Kokic and Weber (1990). A general Berry-Esseen bound for combinatorial multivariate sampling statistics (including finite population U-statistics) was established by Bolthausen and Götze (1993). Rao and Zhao (1994) and Bloznelis (1999) constructed Berry-Esseen bounds for Student's t statistic. One-term asymptotic expansions of nonlinear statistics which can be approximated by smooth functions of (multivariate) sample means were obtained by Babu and Singh (1985); see also Babu and Bai (1996). For U-statistics of degree two, one-term Edgeworth expansions were constructed by Kokic and Weber (1990). Bloznelis and Götze (1999, 2000) established the validity of the one-term Edgeworth expansion for U-statistics of degree two with remainders $o(1/\sqrt n)$ and $O(1/n)$. A second order asymptotic theory for general asymptotically normal symmetric statistics of independent and identically distributed observations was developed in a recent paper by Bentkus, Götze and van Zwet (1997), which concludes a number of previous investigations of particular statistics: Bickel (1974), Callaert and Janssen (1978), Götze (1979), Callaert, Janssen and Veraverbeke (1980), Serfling (1980), Helmers (1982), Helmers and van Zwet (1982), van Zwet (1984), Bickel, Götze and van Zwet (1986), Lai and Wang (1993), etc. This theory is based on the representation of symmetric statistics by sums of U-statistics of increasing order via Hoeffding's decomposition.
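The sampling model underlying all of these results can be illustrated by a short simulation. The sketch below (population, statistic and all names are illustrative choices, not from the paper) draws a sample without replacement as the first $n$ entries of a uniform random permutation and evaluates a symmetric statistic of it:

```python
import random
import statistics

# Toy illustration of the sampling model: X_1, ..., X_n are the first n
# entries of a uniformly random permutation of the population.  The statistic
# below (the sample variance) is one example of a symmetric statistic; for
# sampling without replacement it is unbiased for the population variance
# with divisor N - 1.
def draw_without_replacement(population, n, rng):
    perm = population[:]
    rng.shuffle(perm)          # uniform random permutation
    return perm[:n]            # sample drawn without replacement

def t_statistic(sample):
    # symmetric: invariant under permutations of its arguments
    return statistics.variance(sample)

rng = random.Random(0)
population = [float(k % 7) for k in range(40)]    # N = 40
values = [t_statistic(draw_without_replacement(population, 10, rng))
          for _ in range(3000)]
print(abs(statistics.mean(values) - statistics.variance(population)) < 0.15)
```

The printed comparison checks the classical unbiasedness fact for simple random sampling, a special case of the linear/quadratic theory discussed above.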
Another approach, see, e.g., Chibisov (1972), Pfanzagl (1973), Bhattacharya and Ghosh (1978), which is based on Taylor expansions of statistics in powers of the underlying i.i.d. observations, focuses on smooth functions of observations. In view of important classes of applications (the jackknife histogram, see Wu (1990), Shao (1989), Booth and Hall (1993), and subsampling, see Politis and Romano (1994), Bertail (1997), Bickel, Götze and van Zwet (1997)), we want to develop in this paper a second order asymptotic theory similar to that of Bentkus, Götze and van Zwet (1997) for simple random samples drawn without replacement from finite populations. The starting point of our asymptotic analysis is the Hoeffding decomposition

(1.1)    $T = ET + \sum_{1\le i\le n} g_1(X_i) + \sum_{1\le i<j\le n} g_2(X_i, X_j) + \dots,$

which expands $T$ into the sum of mutually uncorrelated U-statistics of increasing

order

(1.2)    $U_k = \sum_{1\le i_1<\dots<i_k\le n} g_k(X_{i_1},\dots,X_{i_k}), \qquad k = 1,\dots,n.$

Here the symmetric kernels $g_k$, $k = 1,\dots,n$, are centered, $Eg_k(X_1,\dots,X_k) = 0$, and satisfy the orthogonality condition

(1.3)    $E\big(g_k(X_1,\dots,X_k)\mid X_1,\dots,X_{k-1}\big) = 0$ almost surely.

It follows from (1.3) that $U_1,\dots,U_n$ are orthogonal in $L_2$ (i.e., $EU_kU_r = 0$ for $k \ne r$). Furthermore, the condition (1.3) ensures the uniqueness of the decomposition (1.1) in the following sense: given another decomposition like (1.1) with symmetric kernels, say $g_k'$, satisfying (1.3), we always have $g_k = g_k'$. Let us mention briefly that, given $k$, the function $g_k(x_{i_1},\dots,x_{i_k})$ can be expressed as a linear combination of conditional expectations $E(T - ET \mid X_1 = x_{r_1},\dots,X_j = x_{r_j})$, where $j = 1,\dots,k$ and $\{x_{r_1},\dots,x_{r_j}\} \subset \{x_{i_1},\dots,x_{i_k}\}$. The expressions for $g_1$ and $g_2$ are provided by (1.5) below. For larger $k = 3,\dots,n$ the expressions are more complex, and we refer to Bloznelis and Götze (2001), where a general formula for $g_k$ is derived.

We shall assume that the linear part $U_1 = \sum g_1(X_i)$ is nondegenerate. That is, $\sigma^2 > 0$, where $\sigma^2 = \mathrm{Var}\, g_1(X_1)$. In the case where, for large $n$, the linear part dominates the statistic, we can approximate the distribution of $T$ by a normal distribution, using the central limit theorem. Furthermore, the sum of the linear and quadratic terms,

$U = ET + \sum_{1\le i\le n} g_1(X_i) + \sum_{1\le i<j\le n} g_2(X_i, X_j),$

typically provides a sufficiently precise approximation to $T$, so that the one-term Edgeworth expansions for the distribution functions of $T$ and $U$ are, in fact, the same. Therefore, in order to construct a one-term Edgeworth expansion of $T$ it suffices to find such an expansion for $U$. In particular, we do not need to evaluate all the summands of the decomposition (1.1), but (moments of) the first two terms only, cf. (1.4) below. Similarly, the two-term Edgeworth expansion for the distribution function of $T$ could be constructed using the approximation $T \approx ET + U_1 + U_2 + U_3$, etc.
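On a toy population the centering and orthogonality properties of $g_1$ and $g_2$ can be checked exactly by brute-force enumeration, using the conditional-expectation formulas (1.5). The population and the statistic $t$ below are illustrative choices:

```python
from itertools import combinations
from math import isclose

# Exact check of the Hoeffding components g_1, g_2 on a toy population,
# computed from the conditional-expectation formulas (1.5).  The statistic
# t (here: the sample maximum) is an arbitrary nonlinear symmetric choice.
N, n = 7, 3
x = [0.3, 1.1, 2.0, 2.7, 3.5, 4.1, 5.8]

def t(idx):
    return max(x[i] for i in idx)

samples = list(combinations(range(N), n))
ET = sum(t(s) for s in samples) / len(samples)

def cond_exp(fixed):
    # E(T - ET | X_1 = x_i, ...): average over n-subsets containing `fixed`
    subs = [s for s in samples if fixed <= set(s)]
    return sum(t(s) for s in subs) / len(subs) - ET

def g1(i):
    return (N - 1) / (N - n) * cond_exp({i})

def g2(i, j):
    c = (N - 2) / (N - n) * (N - 3) / (N - n - 1)
    return c * (cond_exp({i, j})
                - (N - 1) / (N - 2) * (cond_exp({i}) + cond_exp({j})))

# Centering: E g_1(X_1) = 0;  orthogonality (1.3): E(g_2(X_1,X_2)|X_1) = 0.
print(isclose(sum(g1(i) for i in range(N)), 0.0, abs_tol=1e-9))
print(all(isclose(sum(g2(i, j) for j in range(N) if j != i), 0.0, abs_tol=1e-9)
          for i in range(N)))
```

Both checks print `True`: the kernels produced by (1.5) are exactly centered and satisfy the orthogonality condition, which is what makes $U_1$ and $U_2$ uncorrelated.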
An advantage of such an approach is that it provides an (at least formal) Edgeworth expansion for an arbitrary symmetric finite population statistic $T$, no matter whether it is a smooth function of observations or not. In the present paper we construct the one-term Edgeworth expansion for the distribution function of $T$ and prove the validity of the expansion with remainder $O(1/n_*)$.

A simple calculation shows that the variance of the linear part satisfies

$\mathrm{Var}\sum_{1\le i\le n} g_1(X_i) = \sigma^2\tau^2\frac{N}{N-1}, \qquad \tau^2 = Npq, \quad p = n/N, \quad q = 1-p.$

Note that $n_*/2 \le \tau^2 \le n_*$. We approximate the distribution function $F(x) = P\{T \le ET + \sigma\tau x\}$ by the one-term Edgeworth expansion

(1.4)    $G(x) = \Phi(x) - \frac{(q-p)\alpha + 3\kappa}{6\tau}\,\Phi^{(3)}(x),$

and provide an explicit bound for the remainder $\Delta = \sup_{x\in\mathbb R}|F(x) - G(x)|$, where

$\alpha = \sigma^{-3}Eg_1^3(X_1), \qquad \kappa = \sigma^{-3}\tau^2 Eg_2(X_1,X_2)g_1(X_1)g_1(X_2),$

and where, writing $\bar T = T - ET$,

(1.5)    $g_1(X_i) = \frac{N-1}{N-n}E(\bar T\mid X_i), \qquad g_2(X_i,X_j) = \frac{N-2}{N-n}\,\frac{N-3}{N-n-1}\Big(E(\bar T\mid X_i,X_j) - \frac{N-1}{N-2}\big(E(\bar T\mid X_i) + E(\bar T\mid X_j)\big)\Big).$

Furthermore, $\Phi$ denotes the standard normal distribution function, and $\Phi^{(3)}$ denotes the third derivative of $\Phi$. Before formulating our main results, Theorems 1.1 and 1.2, we introduce the smoothness conditions which, together with the moment conditions, ensure the validity of the expansion (1.4).

1.2. Smoothness conditions. Given a general symmetric statistic $T$, we approximate it by a U-statistic via Hoeffding's decomposition. In order to control the accuracy of such an approximation we use moments of finite differences of $T$. Introduce the difference operation

$D_jT = t(X_1,\dots,X_j,\dots,X_n) - t(X_1,\dots,X_j',\dots,X_n), \qquad X_j' = X_{n+j},$

where $X_j$ is replaced by $X_j'$ in the second summand, for $j \le n_*$. Higher order difference operations are defined recursively: $D_{j_1,j_2}T = D_{j_2}(D_{j_1}T)$, $D_{j_1,j_2,j_3}T = D_{j_3}(D_{j_2}(D_{j_1}T))$, etc. It is easy to see that the difference operations are symmetric, i.e., $D_{j_1,j_2}T = D_{j_2,j_1}T$, etc. Given $k < n_*$, write

$\delta_j = \delta_j(T) = \tau^{2(j-1)}E(\bar D_jT)^2, \qquad \bar D_jT = D_{1,2,\dots,j}T, \qquad 1 \le j \le k.$

Bounds for the accuracy of the approximation of $T$ by the sum of the first few terms of the decomposition (1.1) are provided by the following theorem.

Theorem A (Bloznelis and Götze (2001)). For $1 \le k < n_*$ we have

(1.6)    $T = ET + U_1 + \dots + U_k + R_k, \qquad ER_k^2 \le \tau^{-2(k-1)}\delta_{k+1}.$

In typical situations (U-statistics, smooth functions of sample means, Student's t and many others), for a properly standardized statistic $T$ we have $U_j = O_P(\tau^{1-j})$, for $j = 1,\dots,k$, and

(1.7)    $\delta_{k+1} = O(\tau^{-2})$ as $n_*, N \to \infty,$

for some $k$. Note that (1.7) can be viewed as a smoothness condition. For instance, given a statistic which is a function of the sample mean, this condition is satisfied in the case where the function (defining the statistic) is $k+1$ times differentiable, see Bloznelis and Götze (2001). Assuming that (1.7) holds for $k = 2$, we obtain from Theorem A that $T = U + O_P(\tau^{-2})$, thus showing that up to an error $O_P(\tau^{-2})$ the statistics $T$ and $U$ are asymptotically equivalent. Finally, we remark that (1.7) holds if $\delta_{k+1}/\sigma^2$ and the variance of the linear part $\mathrm{Var}\,U_1$ remain bounded as $n_*, N \to \infty$.

Another smoothness condition we are going to use is a Cramér type condition. Recall Cramér's condition (C) for the distribution $F_Z$ of a random variable $Z$:

(C)    $\sup_{|t|>\delta}|E\exp\{itZ\}| < 1$, for some $\delta > 0$.

In the classical theory of sums of independent random variables this condition, together with moment conditions, ensures the validity of Edgeworth expansions with remainders $O(n^{-k/2})$ and $o(n^{-k/2})$, $k = 2, 3, \dots$, for the distribution function of the sum of $n$ independent observations from the distribution $F_Z$, see Petrov (1975). In our situation a condition like (C) is too stringent. We shall use a modification of (C) which is applicable to random variables assuming a finite number of values only. For the summand of the linear part, $Z = \sigma^{-1}g_1(X_1)$, we assume that $\rho > 0$, where

$\rho := 1 - \sup\{|E\exp\{itZ\}| : b_1/\beta_3 \le |t| \le \tau\}.$

Here $b_1$ is a small absolute constant and $\beta_3 = \sigma^{-3}E|g_1(X_1)|^3$.
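The quantity $\rho$ can be explored numerically for a finite set of values of $g_1$. The sketch below (a grid search; the frequency band and all numbers are illustrative assumptions, with the paper's band being $[b_1/\beta_3, \tau]$) shows why lattice distributions fail this condition while irregular supports satisfy it:

```python
import cmath
import math

# Numerical sketch of the Cramer-type quantity rho = 1 - sup |E exp(itZ)|,
# the supremum taken over a band [t0, t1] of frequencies (the paper takes
# t0 = b_1/beta_3 and t1 = tau; the band and grid below are illustrative).
def rho(values, t0, t1, grid=4000):
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    z = [(v - mean) / sd for v in values]          # Z = g_1(X_1)/sigma
    sup = 0.0
    for k in range(grid + 1):
        s = t0 + (t1 - t0) * k / grid
        ch = sum(cmath.exp(1j * s * w) for w in z) / len(z)
        sup = max(sup, abs(ch))
    return 1.0 - sup

# A two-point (lattice) Z has |E exp(itZ)| = |cos t|, which returns to 1 at
# t = pi, so rho over [1, 5] is essentially 0; an irregular support keeps
# the characteristic function bounded away from 1 on the same band.
print(rho([-1.0, 1.0], 1.0, 5.0) < 1e-3)
print(rho([0.0, 1.0, 2.7, 3.1, 5.5], 1.0, 5.0) > 1e-3)
```

Both lines print `True`, illustrating that $\rho > 0$ is a genuine restriction on the lattice structure of $g_1(X_1)$, not on its moments.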
Other modifications of Cramér's condition (C) which are applicable to discrete random variables were considered by Albers, Bickel and van Zwet (1976), Robinson (1978) and Bloznelis and Götze (2000); in the latter paper relations between the various conditions are discussed.

1.3. Results. Write $\zeta = \sigma^{-2}\tau^{8}Eg_3^2(X_1,X_2,X_3)$ and denote

$\beta_k = \sigma^{-k}E|g_1(X_1)|^k, \qquad \gamma_k = \sigma^{-k}\tau^{2k}E|g_2(X_1,X_2)|^k, \qquad k = 2, 3, 4.$

Theorem 1.1. There exists an absolute constant $c > 0$ such that

(1.8)    $\Delta \le c\,\frac{\beta_4 + \gamma_4 + \zeta}{\tau^2\rho^2} + c\,\frac{\delta_4}{\tau^2\sigma^2\rho^2}.$

For U-statistics of arbitrary but fixed degree $k$,

(1.9)    $T = \sum_{1\le i_1<\dots<i_k\le n} h(X_{i_1},\dots,X_{i_k}),$

where $h$ is a real symmetric function defined on $k$-subsets of $\mathcal X$, we have the following bound.

Theorem 1.2. There exist an absolute constant $c > 0$ and a constant $c(k) > 0$ depending only on $k$ such that

(1.10)    $\Delta \le c\,\frac{\beta_4 + \gamma_4}{\tau^2\rho^2} + c(k)\,\frac{\delta_3}{\tau^2\sigma^2\rho^2}.$

Since the absolute constants are not specified, Theorems 1.1 and 1.2 should be viewed as asymptotic results. Assume that the population size $N$ and the sample size $n$ increase so that $n_* \to \infty$. In particular, $\tau \to \infty$. In those models where $\beta_4$, $\gamma_4$ and $\zeta + \delta_4/\sigma^2$ (respectively $\delta_3/\sigma^2$) remain bounded and

(1.11)    $\liminf \rho > 0,$

Theorem 1.1 (respectively Theorem 1.2) provides the bound $\Delta = O(\tau^{-2})$. Since $n_*/2 \le \tau^2 \le n_*$, we obtain $\Delta = O(1/n_*)$.

Remark 1. Note that the bounds of Theorems 1.1 and 1.2 are established without any additional assumption on $p$ and $q$. This fact is important for applications, like subsampling, where $p$ or $q$ may tend to zero as $N \to \infty$.

Remark 2. The bound of order $O(\tau^{-2})$ for the remainder is unimprovable, because the next term of the Edgeworth expansion, at least for linear statistics, is of order $O(\tau^{-2})$, see Robinson (1978).

Remark 3. An expansion of the probability $P\{T \le ET + \sigma\tau x\}$ in powers of $\tau^{-1}$ would be the most natural choice of asymptotics. We invoke two simple arguments supporting this choice. Firstly, $\tau^2$ is proportional to the variance of the linear part. Secondly, the number of observations $n$ no longer determines the scale of $T$ in the case where samples are drawn without replacement, since the statistic effectively depends on $n_*$ ($\ge \tau^2$) observations. Indeed, it was shown in Bloznelis and Götze (2001) that, for $n > N-n$, we have almost surely

(1.12)    $T = T', \qquad T' = ET + U_1' + \dots + U_{n'}',$

where we denote

$U_k' = \sum_{1\le i_1<\dots<i_k\le n'} (-1)^k g_k(X_{i_1}',\dots,X_{i_k}'), \qquad X_j' = X_{n+j}, \qquad n' = N-n.$

That is, $T$ effectively depends on the $n' = N-n$ observations $X_1',\dots,X_{n'}'$ only.

Remark 4. The bounds of Theorems 1.1 and 1.2 are optimal in the sense that it is impossible to approximate $F$ by a continuously differentiable function, like $G$, with remainder $o(\tau^{-2})$, if no additional smoothness condition apart from (1.11) is imposed. Already for U-statistics of degree two, Cramér's condition (1.11) together with moment conditions of arbitrary order does not suffice to establish an approximation of order $o(\tau^{-2})$. This fact is demonstrated by means of a counterexample in Bentkus, Götze and van Zwet (1997) in the i.i.d. situation, and it is inherited by finite population statistics. Indeed, in the case where $N \to \infty$ and $n$ remains fixed, the simple random sample model approaches the i.i.d. situation. We have $\tau \to \sqrt n$, $p \to 0$, $q \to 1$. Replacing $\tau$, $p$ and $q$ by $\sqrt n$, $0$ and $1$ respectively, we obtain from $G$ the one-term Edgeworth expansion for the distribution function of a symmetric statistic based on i.i.d. observations, which was constructed in Bentkus, Götze and van Zwet (1997).

Remark 5. The bound (1.8) involves moments (of nonlinear parts) which are higher than those which are necessary to define the expansion. Thus, for an optimal dependence on moments, one would like to replace $\gamma_4 + \zeta + \delta_4/\sigma^2$ by $\gamma_2 + \delta_3/\sigma^2$ in the remainder. Let us mention also that for U-statistics of degree $k$, where $k$ is fixed, the bound (1.10) is more precise than (1.8). Indeed, a straightforward calculation shows that for some absolute constant $c$ we have $\delta_3 \le c\sigma^2\zeta + c\delta_4\tau^{-2}$. Our technique allows us to prove (1.10) for U-statistics only, and with $c(k) \to \infty$ as $k \to \infty$. In order to apply our results to particular classes of statistics one has to estimate the moments $\delta_3$ or $\delta_4$ of the differences $\bar D_3T$ or $\bar D_4T$. For U-statistics and smooth functions of sample means this problem is easy and routine, see Bloznelis and Götze (2001).
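The expansion $G$ itself is elementary to evaluate. The sketch below implements (1.4) as reconstructed here, with $\Phi^{(3)}(x) = (x^2-1)\varphi(x)$; the inputs $\alpha$ and $\kappa$ are assumed given (in practice they would be estimated from the kernels $g_1$, $g_2$):

```python
import math

# Sketch of the one-term Edgeworth expansion (1.4),
#   G(x) = Phi(x) - ((q - p) * alpha + 3 * kappa) / (6 * tau) * Phi'''(x),
# with Phi'''(x) = (x^2 - 1) * phi(x).  The inputs alpha (skewness of the
# linear part) and kappa (linear-quadratic interaction) are assumed given.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def edgeworth_G(x, n, N, alpha, kappa):
    p = n / N
    q = 1.0 - p
    tau = math.sqrt(N * p * q)
    corr = ((q - p) * alpha + 3.0 * kappa) / (6.0 * tau)
    return Phi(x) - corr * (x * x - 1.0) * phi(x)

# With alpha = kappa = 0 the expansion reduces to the normal approximation;
# for alpha > 0 and q > p the correction lowers G in the upper tail (x > 1).
print(edgeworth_G(0.0, 20, 100, 0.0, 0.0) == 0.5)
print(edgeworth_G(2.0, 20, 100, 1.0, 0.0) < Phi(2.0))
```

Note how the finite population factor $(q-p)$ kills the skewness correction at $p = 1/2$ and reduces to the familiar i.i.d. factor $1$ as $p \to 0$, in line with Remark 4.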
Some applications of our results to resampling procedures are considered ibidem. The remaining part of the paper is organized as follows. In Section 2 we prove Theorem 1.2; Theorem 1.1 is a consequence of Theorem 1.2. In the proof we use the data dependent smoothing technique, first introduced in Bentkus, Götze and van Zwet (1997), and expansions of characteristic functions. Expansions of characteristic functions are presented separately in Section 3. Section 4 collects auxiliary combinatorial lemmas. Lemma 4.2 of this section may be of independent interest.

2. Proofs

The section consists of two parts. In the first part, for the reader's convenience, we collect several important facts about Hoeffding's decomposition of finite population

statistics which are used in the proofs below. These facts are shown in Bloznelis and Götze (2001). The second part contains the proofs of Theorems 1.1 and 1.2.

Throughout this section and the next we shall assume without loss of generality that $ET = 0$ and $\sigma^2 = \tau^{-2}$. For $k = 1, 2, \dots$ we write $\Omega_k = \{1,\dots,k\}$. By $C, c, c_0, c_1, \dots$ we denote positive absolute constants. Given two numbers $a$ and $b > 0$, we write $a \ll b$ if $|a| \le c\,b$.

2.1. Hoeffding's decomposition

(2.1)    $T = ET + U_1 + \dots + U_n$

of a finite population statistic was studied by Zhao and Chen (1990) in the case of a U-statistic and by Bloznelis and Götze (2001) in the case of a general symmetric statistic. It was shown in the latter paper that if $n > N/2$ then $U_j \equiv 0$ for $j > N-n$. Note that if $T$ is a U-statistic of degree $k$ (that is, a statistic of the form (1.9)), then always $U_j \equiv 0$ for $j > k$. Recall that $U_j$ is defined in (1.2).

Given $A = \{i_1,\dots,i_r\} \subset \Omega_n$ and $B = \{j_1,\dots,j_s\} \subset \Omega_N$ with $1 \le r \le n$, write $T_A = g_r(X_{i_1},\dots,X_{i_r})$ and $E(T_A\mid B) = E(T_A\mid X_{j_1},\dots,X_{j_s})$, and denote $T_\emptyset = ET$. By symmetry, the random variables $T_A$ and $T_{A'}$ are identically distributed for any $A, A' \subset \Omega_N$ such that $|A| = |A'| \le n$. A simple calculation shows that (1.3) extends to the following identity:

(2.2)    $E(T_A\mid B) = 0$, for any $A, B \subset \Omega_N$ such that $|B| < |A| \le n$.

For $A, B \subset \Omega_N$ with $1 \le j = |A| = |B| \le n$ and $k = |A\cap B|$, denote $\sigma_j^2 = ET_A^2$ and $s_{j,k} = ET_AT_B$. Using (2.2) it is easy to show, see, e.g., Bloznelis and Götze (2001), that

(2.3)    $s_{j,k} = (-1)^{j-k}\binom{N-j}{j-k}^{-1}\sigma_j^2, \qquad 0 \le k \le j \le n.$

Since $ET = 0$ we can write (2.1) in the form $T = \sum_{A\subset\Omega_n} T_A$. Similarly, for $U_k$ from (1.2) we have $U_k = \sum_{A\subset\Omega_n,\,|A|=k} T_A$.

2.2. Proofs of Theorems 1.1 and 1.2. The expression $\exp\{ix\}$ is abbreviated by $e\{x\}$. Given a complex function $H$ defined on $\mathbb R$, we write $\|H\| = \sup_{x\in\mathbb R}|H(x)|$. Write

$\delta = 1 - \sup\{|E\cos(tg_1(X_1) + s)| : s \in \mathbb R,\ b_1\tau/\beta_3 \le t \le \tau^2\}.$
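The covariance identity (2.3) can be verified by brute force for $j = 2$. In the sketch below (an illustrative kernel on a toy population), an arbitrary symmetric function is double-centred so that it satisfies the conditional-expectation constraint implied by (2.2), and the two overlap cases $k = 0, 1$ are checked against the formula:

```python
from itertools import permutations
from math import comb, isclose
import random

# Brute-force check of (2.3): s_{j,k} = (-1)^(j-k) C(N-j, j-k)^(-1) sigma_j^2,
# for j = 2 and k = 0, 1.  The kernel g_2 is an arbitrary symmetric function,
# double-centred so that sum_{j != i} g_2(x_i, x_j) = 0, as required by (2.2).
N = 8
rng = random.Random(1)
f = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        f[i][j] = f[j][i] = rng.uniform(-1.0, 1.0)

row = [sum(f[i][j] for j in range(N) if j != i) for i in range(N)]
s = sum(row) / (2 * N - 2)
a = [(row[i] - s) / (N - 2) for i in range(N)]
g2 = [[f[i][j] - a[i] - a[j] if i != j else 0.0 for j in range(N)]
      for i in range(N)]

for i in range(N):                 # centring: E(g_2(X_1, X_2) | X_1) = 0
    assert isclose(sum(g2[i][j] for j in range(N) if j != i), 0.0, abs_tol=1e-9)

sigma2 = sum(g2[i][j] ** 2 for i, j in permutations(range(N), 2)) / (N * (N - 1))
s21 = sum(g2[i][j] * g2[i][k]
          for i, j, k in permutations(range(N), 3)) / (N * (N - 1) * (N - 2))
s20 = sum(g2[i][j] * g2[k][l]
          for i, j, k, l in permutations(range(N), 4)) / (N * (N - 1) * (N - 2) * (N - 3))

print(isclose(s21, -sigma2 / comb(N - 2, 1), rel_tol=1e-8))   # j = 2, k = 1
print(isclose(s20, sigma2 / comb(N - 2, 2), rel_tol=1e-8))    # j = 2, k = 0
```

Both comparisons print `True`, matching the special cases $s_{2,1} = -\sigma_2^2\binom{N-2}{1}^{-1}$ and $s_{2,0} = \sigma_2^2\binom{N-2}{2}^{-1}$ of (2.3).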

It is easy to show that $\rho \ll \delta$, see Bloznelis and Götze (2000). This inequality will be used in the proof below. We also use the inequalities $1 = \beta_2 \le \beta_3 \le \beta_4^{1/2}$ and $\gamma_2 \le \gamma_3^{2/3} \le \gamma_4^{1/2}$, which are simple consequences of Hölder's inequality.

We can assume that, for a sufficiently small $c_0$,

(2.4)    $\beta_4 \le c_0\tau^2, \qquad \gamma_2 \le c_0\tau^2\delta^2, \qquad \delta^{-2/3}\ln\tau \le c_0\tau.$

Indeed, if (2.4) fails, the bounds (1.8) and (1.10) follow from the inequalities $F(x) \le 1$ and $G(x) \le 1 + \tau^{-1}(\beta_4^{1/2} + \gamma_2^{1/2})$. Note that $\beta_4 \ge 1$ and the first inequality of (2.4) imply that $\tau^2 \ge c_0^{-1}$ is sufficiently large.

Proof of Theorem 1.1. The theorem is a consequence of Theorem 1.2. Write $T = U_1 + U_2 + U_3 + R_3$, see (1.6). A Slutzky type argument gives

(2.5)    $\Delta \le \Delta' + \tau^{-2}\|G^{(1)}\| + P\{|R_3| \ge \tau^{-2}\},$

where $\Delta' := \sup_x|P\{U_1 + U_2 + U_3 \le \sigma\tau x\} - G(x)|$ satisfies, by (1.10), $\Delta' \ll \tau^{-2}\rho^{-2}(\beta_4 + \gamma_4 + \zeta)$. Furthermore, by (2.4), $\|G^{(1)}\| \ll 1$. We bound this quantity by $c\beta_4$, since $\beta_4 \ge 1$. Finally, by (1.6), we have $P\{|R_3| \ge \tau^{-2}\} \ll \delta_4$, and using the identity $\sigma^2 = \tau^{-2}$ we replace $\delta_4$ by $\tau^{-2}\delta_4/\sigma^2$. Collecting these bounds in (2.5) we obtain (1.8).

Proof of Theorem 1.2. In view of (1.12) it suffices to prove the theorem in the case where $n \le N/2$. Therefore, we assume without loss of generality that $n \le N/2$ and $n_* = n$. The proof of (1.10) is complex, and we first outline the main steps. In the first step we replace $T$ by $\tilde T = V_1 + \dots + V_m + W$, where $W$ is a function of the observations $(X_{m+1},\dots,X_n)$ ($=: \mathbb X_m$, for short), and where $V_j = V(X_j, \mathbb X_m)$ is a function of $X_j$ and $\mathbb X_m$. The random variables $W, V_1,\dots,V_m$ and the integer $m < n$ are specified below. In the second step we apply Prawitz's (1972) smoothing lemma to construct upper and lower bounds for the conditional distribution function $F_1(x) = P\{\tilde T \le \sigma\tau x \mid \mathbb X_m\}$:

(2.6)    $F_1(x+) \le 1/2 + \mathrm{V.P.}\int_{\mathbb R}\exp\{-itx\}H^{-1}K(t/H)f_1(t)\,dt,$

(2.7)    $F_1(x-) \ge -1/2 + \mathrm{V.P.}\int_{\mathbb R}\exp\{-itx\}H^{-1}K(-t/H)f_1(t)\,dt.$

Here $f_1(t) = E(\exp\{it\tilde T\}\mid \mathbb X_m)$ denotes the conditional characteristic function of $\tilde T$; $F(x+) = \lim_{z\downarrow x}F(z)$, $F(x-) = \lim_{z\uparrow x}F(z)$, and V.P. denotes Cauchy's principal value. The kernel is $K(s) = K_1(s)/2 + iK_2(s)/(2\pi s)$, where

$K_1(s) = \mathbb I\{|s|\le1\}(1-|s|), \qquad K_2(s) = \mathbb I\{|s|\le1\}\big((1-|s|)\pi|s|\cot(\pi|s|) + |s|\big).$

The positive random variable $H$ (a function of $\mathbb X_m$) is specified below. Taking the expectations of (2.6) and (2.7) we obtain upper and lower bounds for $F_1(x+)$ and $F_1(x-)$, where (with a slight abuse of notation) $F_1(x) = P\{\tilde T \le \sigma\tau x\}$ now denotes the unconditional distribution function. Combining these bounds with the inversion formula

$G(x) = 1/2 + \frac{i}{2\pi}\lim_{M\to\infty}\mathrm{V.P.}\int_{|t|\le M}\exp\{-itx\}\hat G(t)\,t^{-1}\,dt,$

see, e.g., Bentkus, Götze and van Zwet (1997), we obtain upper bounds for $F_1(x+) - G(x) =: d_1$ and $G(x) - F_1(x-) =: d_2$ respectively. Here $\hat G$ denotes the Fourier-Stieltjes transform of $G$. Thus, for $d_1$, we have

(2.8)    $F_1(x+) - G(x) \le EI_1 + EI_2 + EI_3,$

$I_1 = \frac12\int_{\mathbb R}H^{-1}K_1(t/H)\,|f_1(t)|\,dt,$

$I_2 = \frac{i}{2\pi}\,\mathrm{V.P.}\int_{\mathbb R}\exp\{-ixt\}K_2(t/H)\big(f_1(t) - \hat G(t)\big)\frac{dt}{t},$

$I_3 = \frac{i}{2\pi}\,\mathrm{V.P.}\int_{\mathbb R}\exp\{-ixt\}\big(K_2(t/H) - 1\big)\hat G(t)\,\frac{dt}{t}.$

A similar inequality holds for $d_2$. This step of the proof is called data dependent smoothing, see Bentkus, Götze and van Zwet (1997). The final step of the proof provides upper bounds for $d_1$ and $d_2$. For this purpose we construct an upper bound for $|f_1(t)|$, for $|t| \ge c\sqrt n/\beta_3$. Using Cramér's condition and the multiplicative structure of $f_1$ (note that $\tilde T$ is conditionally linear given $\mathbb X_m$), we show that $|f_1(t)|$ decays exponentially in $|t|$. Furthermore, for $|t| \le c\sqrt n/\beta_3$ we replace the conditional characteristic function $f_1$ by the unconditional one, $\hat F(t) = E\exp\{itT\}$, and construct bounds for $|\hat F(t) - \hat G(t)|$ by means of expansions of $\hat F(t)$ in powers of the observations $X_1,\dots,X_n$.

Note that, usually, the validity of an Edgeworth expansion is proved using the conventional Berry-Esseen smoothing lemma, see, e.g., Petrov (1975), Callaert, Janssen and Veraverbeke (1980), Bickel, Götze and van Zwet (1986).
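The kernel pair $K_1, K_2$, as reconstructed here, is easy to write down and sanity-check numerically; in particular, the inequalities $K_1 \le 1$ and $|K_2(s) - 1| \le 5s^2$ invoked later in the proof hold on a grid of sample points:

```python
import math

# The two components of the Prawitz smoothing kernel, as reconstructed in
# the text: K_1(s) = (1 - |s|)_+ and, for |s| < 1,
# K_2(s) = (1 - |s|) * pi * |s| * cot(pi * |s|) + |s|  (K_2 = 0 for |s| >= 1).
def K1(s):
    return max(0.0, 1.0 - abs(s))

def K2(s):
    a = abs(s)
    if a >= 1.0:
        return 0.0
    if a == 0.0:
        return 1.0          # limit of (1 - a) * pi * a * cot(pi * a) as a -> 0
    return (1.0 - a) * math.pi * a / math.tan(math.pi * a) + a

# Properties used in the proof: 0 <= K_1 <= 1, and |K_2(s) - 1| <= 5 s^2
# (the second inequality is invoked to control I_2 and I_3 for |t| <= H_1).
for s in [0.01, 0.1, 0.3, 0.7, 0.9]:
    assert 0.0 <= K1(s) <= 1.0
    assert abs(K2(s) - 1.0) <= 5.0 * s * s
print(abs(K2(0.999)) < 1e-3)   # K_2 vanishes continuously at |s| = 1
```

The imaginary part $iK_2(s)/(2\pi s)$ has a $1/s$ singularity at the origin, which is exactly why the Prawitz integrals in (2.6) and (2.7) are taken as principal values.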
In the present paper (in the second step of the proof) we use Prawitz's smoothing lemma instead. This lemma is more precise in the sense that the right-hand sides of (2.6) and (2.7) do not involve absolute values of characteristic functions. Therefore, after taking the

expected values of (2.6) and (2.7) we can interchange the order of integration on the right-hand sides and obtain the unconditional characteristic functions in the integrands. At the same time, the appropriate choice of the random cut-off $H$ allows us to control the nonlinear part of $\tilde T$, so that the exponential decay of $|f_1(t)|$ is established using a minimal smoothness condition (Cramér's condition) on the linear part of the statistic. More restrictive smoothness conditions, which involve nonlinear parts of the statistic, are considered in Callaert, Janssen and Veraverbeke (1980) and Bickel, Götze and van Zwet (1986).

Step 1. Let $m$ denote the integer closest to the number $8\delta^{-1}\ln\tau$. Since, by (2.4), $\tau^2 \ge c_0^{-1}$, we can choose $c_0$ small enough so that $10 \le m \le n/2$. Split $T = V + W$, where

$V = \sum_{B\subset\Omega_n,\ B\cap\Omega_m\ne\emptyset} T_B, \qquad W = \sum_{B\subset\Omega_n,\ B\cap\Omega_m=\emptyset} T_B,$

$V = \sum_{i=1}^m V_i + \Lambda_m + Y_m + Z_m, \qquad V_i = T_{\{i\}} + \xi_i + \eta_{m,i}, \qquad \xi_i = \sum_{j=m+1}^n T_{\{i,j\}}.$

Here we denote

(2.9)    $\Lambda_m = \sum_{B\subset\Omega_m,\ |B|=2} T_B, \qquad Y_m = \sum_{B\subset\Omega_n,\ |B\cap\Omega_m|=2,\ |B|\ge3} T_B, \qquad Z_m = \sum_{B\subset\Omega_n,\ |B\cap\Omega_m|\ge3} T_B, \qquad \eta_{m,i} = \sum_{B\subset\Omega_n,\ B\cap\Omega_m=\{i\},\ |B|\ge3} T_B.$

We are going to replace $T$ by $\tilde T = \sum_{i=1}^m V_i + W$. Write $T = \tilde T + R$, where $R = \Lambda_m + Y_m + Z_m$. Given $\varepsilon = \delta^2\tau^{-2}$, a Slutzky type argument gives

$\Delta \le \Delta_1 + \varepsilon\|G^{(1)}\| + P\{|R| \ge \varepsilon\}, \qquad \Delta_1 = \sup_x|F_1(x) - G(x)|,$

where $F_1(x) = P\{\tilde T \le \sigma\tau x\}$. Invoking the bounds of Lemma 4.1 and the simple bound $E|\Lambda_m|^3 \le m^6E|T_{\{1,2\}}|^3$, we obtain, by Chebyshev's inequality, that

$P\{|R| \ge \varepsilon\} \le P\{|\Lambda_m| \ge \varepsilon/3\} + P\{|Y_m| \ge \varepsilon/3\} + P\{|Z_m| \ge \varepsilon/3\} \ll (\delta_3/\sigma^2 + \gamma_3)\,\tau^{-2}\delta^{-2}.$

Finally, we bound $\|G^{(1)}\|$ as in the proof of Theorem 1.1 above. We obtain

$\Delta \ll \Delta_1 + \tau^{-2}\delta^{-2}(1 + \delta_3/\sigma^2 + \gamma_3).$

Therefore, in order to prove (1.10), it remains to bound $\Delta_1$. Clearly, $\Delta_1 \le \max\{d_1, d_2\}$, where $d_1 = \sup_x(F_1(x+) - G(x))$ and $d_2 = \sup_x(G(x) - F_1(x-))$. The remaining part of the proof provides bounds for $d_1$ and $d_2$.

Step 2. Let $A = (A_1,\dots,A_N)$ be a random permutation of $(x_1,\dots,x_N)$ which is uniformly distributed over the class of permutations. Let $r = [(n+m)/2]$ denote the integer part of $(n+m)/2$. Introduce the sets $I_0 = \{m+1,\dots,n\}$, $J_0 = \{1,\dots,N\}\setminus I_0$, $J_1 = J_0\cup\{m+1,\dots,r\}$ and $J_2 = J_0\cup\{r+1,\dots,n\}$. Define the (random) sub-populations $\mathcal A_i = \{A_k,\ k\in J_i\}$, $i = 0, 1, 2$, and, given $\mathcal A_i$, let $A^{(i)}$ be a random variable uniformly distributed in $\mathcal A_i$. We assume that $X_j = A_j$, for $j \in I_0$, and that, given $A_j$, $j \in I_0$, the observations $X_1,\dots,X_m$ are drawn without replacement from $\mathcal A_0$. Write

$H = n\delta\Big/\Big(32\sqrt{q^{-1}n}\,(\Theta_1 + \Theta_2 + 1)\Big), \qquad \Theta_i = E^*|v_i(A^{(i)})|, \quad i = 1, 2,$

$v_1(a) = \sum_{r+1\le j\le n} g_2(a, A_j), \qquad v_2(a) = \sum_{m+1\le j\le r} g_2(a, A_j).$

Here, given $\mathcal A_i$, we denote $E^*f(A^{(i)}) = |J_i|^{-1}\sum_{j\in J_i} f(A_j)$, for $f : \mathcal A_i \to \mathbb R$. In order to prove an upper bound for $F_1(x+) - G(x)$ we apply (2.8) and show that

(2.10)    $EI_1 + E(I_2 + I_3) \ll \tau^{-2}\delta^{-2}\big(\beta_4 + \gamma_4 + c(k)\delta_3/\sigma^2\big).$

An upper bound for $G(x) - F_1(x-)$ is obtained in a similar way. In the remaining part of the proof we verify (2.10). Write $\hat F_1(t) = E\,e\{t\tilde T\}$, $H_1 = b_1\tau/\beta_3$, $Z_1 = \{|t| \le H_1\}$, $Z_2 = \{H_1 \le |t| \le H\}$, and

$J_1 = \int_{Z_1} e\{-tx\}\big(\hat F_1(t) - \hat G(t)\big)\frac{dt}{t}, \qquad J_2 = \int_{Z_2}|f_1(t)|\frac{dt}{|t|}, \qquad J_3 = \int_{|t|>H_1}|\hat G(t)|\frac{dt}{|t|},$

$J_4 = \int_{Z_1}|f_1(t)|\,H^{-1}\,dt, \qquad J_5 = \int_{Z_2}|f_1(t)|\,H^{-1}\,dt.$

Here and below we write $e\{x\}$ for $\exp\{ix\}$. Using the inequality $K_1(s) \le 1$ we replace $K_1$ by $1$ in $EI_1$ and obtain $EI_1 \le EJ_4 + EJ_5 + R'$, $R' = EH_1^2H^{-2}$. Similarly, using the inequality $|K_2(s) - 1| \le 5s^2$ and the fact that $K_2(s) = 0$ for $|s| > 1$, we obtain

$E(I_2 + I_3) \ll E|\tilde J_1| + EJ_2 + J_3 + R', \qquad R' = EH_1^2H^{-2},$

where $\tilde J_1$ is defined as $J_1$ above, but with $\hat F_1$ replaced by $f_1$. Note that a change of the order of integration yields $E\tilde J_1 = J_1$.

The bounds $J_3 \ll \tau^{-2}(\beta_4 + \gamma_2)$ and $R' \ll \delta^{-2}\tau^{-2}(1 + \gamma_2)$ are proved in Bloznelis and Götze (2000). Furthermore, note that $J_5 \le J_2$. Therefore, in order to prove (2.10) it suffices to show that

(2.11)    $EJ_2 \ll \frac{\beta_4 + \delta_3/\sigma^2}{\tau^2\delta^2}, \qquad EJ_4 \ll \frac{1 + \gamma_2 + \delta_3/\sigma^2}{\tau^2\delta^2}, \qquad |J_1| \ll \frac{\beta_4 + \gamma_4 + c(k)\delta_3/\sigma^2}{\tau^2\delta^2}.$

Step 3. Here we bound $J_1$, $EJ_2$ and $EJ_4$.

The bound for $EJ_2$. Given $t$, denote $\mathbb I_t = \mathbb I\{|t|\,E^*|\eta_m(A^{(0)})| < \delta/16\}$, where $\eta_m(x) = E(\eta_{m,1}\mid X_1 = x, X_{m+1},\dots,X_n)$. The identity $f_1(t) = \mathbb I_t f_1(t) + (1 - \mathbb I_t)f_1(t)$, combined with the inequalities

$1 - \mathbb I_t \le 16^2\delta^{-2}t^2\big(E^*|\eta_m(A^{(0)})|\big)^2 \le 16^2\delta^{-2}t^2E^*\eta_m^2(A^{(0)}),$

yields $J_2 \le J_{2.1} + J_{2.2}$, where

$J_{2.1} = \int_{Z_2}\mathbb I_t\,|f_1(t)|\frac{dt}{|t|} \qquad\text{and}\qquad J_{2.2} = 16^2\delta^{-2}E^*\eta_m^2(A^{(0)})\,H^2.$

In order to prove the bound (2.11) for $EJ_2$ we show that $EJ_{2.2} \ll \tau^{-2}\delta_3/\sigma^2$ and $EJ_{2.1} \ll \beta_3/n^2$. The first bound is a consequence of the inequalities $H \le n\delta$ and $EE^*\eta_m^2(A^{(0)}) \ll n^{-3}\delta_3/\sigma^2$, where the latter inequality follows from (4.1) by symmetry. The proof of the bound $EJ_{2.1} \ll \beta_3/n^2$ is almost the same as that of the corresponding inequality (3.12) in Bloznelis and Götze (2000). The only, and minor, difference is that now we add one more nonlinear term, $\eta_m$. Namely, in the proof of (3.12) ibidem one should replace $v(a) = v_1(a) + v_2(a)$ by $\tilde v(a) = v(a) + \eta_m(a)$ and use the bound $E^*|t\,\eta_m(A^{(0)})| \le \delta/16$ when estimating $E^*(1 + 2u^2(A^{(0)}))$ in (3.17), ibidem. Indeed, the bound $E^*|t\,\eta_m(A^{(0)})| \le \delta/16$ holds on the event $\{\mathbb I_t \ne 0\}$.

The bound for $EJ_4$. Define $\tilde J_4$ in the same way as $J_4$, but with $f_1(t)$ replaced by $E(e\{t\tilde U\}\mid \mathbb X_m)$, where $\tilde U = U - \Lambda_m$. We shall apply the bound $E\tilde J_4 \ll n^{-1}(1 + \gamma_2)/\delta^2$, which is proved in (3.20) of Bloznelis and Götze (2000). This bound and the inequality

(2.12)    $EJ_4 - E\tilde J_4 \ll \tau^{-2}\delta^{-2}(1 + \delta_3/\sigma^2 + \gamma_2)$

yield the bound (2.11) for $EJ_4$. In order to prove (2.12) we write $\tilde T - \tilde U = R_2 - Y_m - Z_m$, see (1.6) and (2.9). This identity, in combination with the inequality $|e\{x\} - e\{y\}| \le |x - y|$, yields $EJ_4 - E\tilde J_4 \le R''$, where

$R'' = H_1^2\,E\big(H^{-1}E(|R_2 - Y_m - Z_m|\mid \mathbb X_m)\big) \le H_1^2\big(EH^{-2}\big)^{1/2}\big[(ER_2^2)^{1/2} + (EY_m^2)^{1/2} + (EZ_m^2)^{1/2}\big].$

In the last step we applied the Cauchy-Schwarz inequality. It follows from (1.6), (4.1) and the last inequality of (2.4) that the quantity in the brackets $[\dots]$ is $\ll \tau^{-2}\delta^{-1}\delta_3^{1/2}/\sigma$. Finally, invoking the bound $EH^{-2} \ll n^{-2}\delta^{-2}(1 + \gamma_2)$, which is proved in (5.1) of Bloznelis and Götze (2000), we complete the proof of (2.12).

The bound for $J_1$. The bound (2.11) is a consequence of the following two bounds:

(2.13)    $I_{Z_1}(\hat F_1 - \hat F) \ll \tau^{-2}\delta^{-2}(1 + \gamma_2 + \delta_3/\sigma^2),$

(2.14)    $I_{Z_1}(\hat F - \hat G) \ll \tau^{-2}\delta^{-2}\big(\beta_4 + \gamma_4 + c(k)\delta_3/\sigma^2\big).$

Recall that $\hat F(t) = E\,e\{tT\}$. Here and below, for a Borel set $B \subset \mathbb R$ and an integrable complex function $f$, we write (for short) $I_B(f) = \int_B|t|^{-1}|f(t)|\,dt$. The proof of (2.14) is rather complex; we place it in a separate Section 3 below. Note that the bound (2.14) is the only step of the proof where we essentially use the assumption that $T$ is a U-statistic.

Here we show (2.13). In the proof we replace $\hat F_1(t) - \hat F(t)$ by $f(t) = E\,e\{tT\}\,it\Lambda_m$ and then replace $f(t)$ by $g(t) = E\,e\{tU\}\,it\Lambda_m$. Finally, we invoke the bound

(2.15)    $I_{Z_1}(g) \ll \binom{m}{2}n^{-3/2}(1 + \gamma_2) \ll n^{-1}\delta^{-2}(1 + \gamma_2),$

which is proved in (3.38) of Bloznelis and Götze (2000). In order to show how (2.13) follows from (2.15), write $\tilde T = T - \Lambda_m - Y_m - Z_m$, see (2.9). We have $\hat F_1(t) = E\,e\{t(T - \Lambda_m - Y_m - Z_m)\}$. Expanding the exponent in powers of $it(Y_m + Z_m)$, and then in powers of $it\Lambda_m$, we get

$\big|\hat F_1(t) - \hat F(t) + f(t)\big| \le E|tY_m| + E|tZ_m| + t^2E\Lambda_m^2.$

Furthermore, the identity $T - U = R_2$, combined with the mean value theorem, yields $|f(t) - g(t)| \le E\,t^2|\Lambda_mR_2|$. Combining these two inequalities we obtain

$I_{Z_1}(\hat F_1 - \hat F) \le I_{Z_1}(g) + R''', \qquad R''' = 2H_1E|Y_m| + 2H_1E|Z_m| + H_1^2E\Lambda_m^2 + H_1^2E|\Lambda_mR_2|.$

Now the bound (2.13) follows from (2.15) and the bound $R''' \ll \tau^{-2}\delta^{-2}(1 + \gamma_2 + \delta_3/\sigma^2)$. The latter bound follows from the inequalities (4.1), (1.6) and the simple inequality $E\Lambda_m^2 \ll m^2n^{-3}\gamma_2$, via Hölder's inequality. The proof of Theorem 1.2 is complete.

3. Expansions

Here we show (2.14). Split $Z_1 = B_1\cup B_2$, where $B_1 = \{|t| \le c_1\}$ and $B_2 = \{c_1 \le |t| \le H_1\}$, and where $c_1$ is an absolute constant.
Clearly, (2.14) follows from the obvious inequalities

$I_{Z_1}(\hat F - \hat G) \le I_{B_1}(\hat F - \hat G) + I_{B_2}(\hat F - \hat G), \qquad I_{B_1}(\hat F - \hat G) \le I_{B_1}(\hat F - \hat F_U) + I_{B_1}(\hat F_U - \hat G), \qquad \hat F_U(t) = E\,e\{tU\},$

and the bounds

(3.1)    $I_{B_2}(\hat F - \hat G) \ll R, \qquad R = \frac{\beta_4 + \gamma_4 + c(k)\delta_3/\sigma^2}{\tau^2\delta^2},$

(3.2)    $I_{B_1}(\hat F_U - \hat G) \ll \frac{\beta_4 + \gamma_4}{\tau^2}, \qquad I_{B_1}(\hat F - \hat F_U) \ll \frac{1 + \delta_3/\sigma^2}{\tau^2}.$

The first bound of (3.2) is proved in (4.1) of Bloznelis and Götze (2000). To prove the second bound we decompose $T = U + R_2$, see (1.6), and apply the mean value theorem to get $|\hat F(t) - \hat F_U(t)| \le E|tR_2|$. Finally, an application of (1.6) gives $E|R_2| \le (ER_2^2)^{1/2} \ll \tau^{-2}\delta_3^{1/2}/\sigma$ and, obviously, $\delta_3^{1/2}/\sigma \le 1 + \delta_3/\sigma^2$.

In order to prove (3.1) we write the characteristic function $\hat F(t)$ in the Erdős-Rényi (1959) form, see (3.4) below. Let $\nu = (\nu_1,\dots,\nu_N)$ be i.i.d. Bernoulli random variables independent of $(X_1,\dots,X_N)$ and having probabilities $P\{\nu_1 = 1\} = p$ and $P\{\nu_1 = 0\} = q$. Observe that the conditional distribution of

$T_\nu = \sum_{A\subset\Omega_N,\ |A|\le n} T_A\nu_A, \qquad \nu_A = \prod_{i\in A}\nu_i,$

given the event $\mathcal E = \{S_\nu = n\}$, coincides with the distribution of $T$. Here $S_\nu = \nu_1 + \dots + \nu_N$. Therefore $\hat F$ can be written as follows:

(3.3)    $\hat F(t) = \lambda\int_{-\pi\tau}^{\pi\tau} E\,e\{tT_\nu + \tau^{-1}s(S_\nu - n)\}\,ds, \qquad \lambda^{-1} = 2\pi P\{\mathcal E\}\tau.$

Using (2.2) it is easy to show that, for $1 \le k \le n$, almost surely

$\sum_{A\subset\Omega_N,\ |A|=k} T_A\nu_A = \sum_{A\subset\Omega_N,\ |A|=k} Q_A, \qquad Q_A = T_A\tilde\nu_A, \qquad \tilde\nu_A = \prod_{i\in A}(\nu_i - p).$

Therefore, almost surely, $tT_\nu + \tau^{-1}s(S_\nu - n) = S + tQ$, where

$S = \sum_{i=1}^N S_i, \qquad S_i = (tT_{\{i\}} + \tau^{-1}s)(\nu_i - p), \qquad Q = \sum_{A\subset\Omega_N,\ 2\le|A|\le n} Q_A.$

Substituting this identity into (3.3) gives

(3.4)    $\hat F(t) = \lambda\int_{-\pi\tau}^{\pi\tau} E\,e\{S + tQ\}\,ds.$

In view of (3.4), the bound (3.1) follows from the inequalities

(3.5)    $\lambda\int_{B_2}\frac{dt}{|t|}\int_{-\pi\tau}^{\pi\tau}\big|E\,e\{S + tQ\} - (h_1 + h_2)\big|\,ds \ll R,$

(3.6)    $\int_{B_2}\frac{dt}{|t|}\,\Big|\lambda\int_{-\pi\tau}^{\pi\tau}(h_1 + h_2)\,ds - \hat G(t)\Big| \ll R.$

Here

$h_1 = E\,e\{S\}, \qquad h_2 = i^3\binom{n}{2}E\,e\{S_3 + \dots + S_N\}V, \qquad V = tQ_{\{1,2\}}S_1S_2.$

The inequality (3.6) is proved in Bloznelis and Götze (2000) (formula (4.2)). We are going to prove (3.5). Before the proof we introduce some notation. Given a complex valued function $f(s,t)$, we write $f \prec R$ if

$\int_{B_2}\frac{dt}{|t|}\int_{-\pi\tau}^{\pi\tau}|f(s,t)|\,ds \ll R.$

Furthermore, we write $f \approx g$ for $f - g \prec R$. In view of the inequality $\lambda \le 2\pi$, see Höglund (1978), the bound (3.5) can be written as follows: $E\,e\{S + tQ\} \approx h_1 + h_2$.

Let us prove (3.5). In what follows we assume that $t \in B_2$ and $|s| \le \pi\tau$. Given $s, t$, write $u = (s^2 + t^2)^{1/2}$, and let $m$ (here and throughout this section) denote the integer closest to the number $c_2Nu^{-1}\ln u$, where $c_2$ is an absolute constant. We choose $c_1$ and $c_2$ so that $10 < m < N/2$. Split

(3.7)    $Q = K + L + W + Y + Z, \qquad K = \zeta + \mu, \qquad \zeta = \sum_{j=1}^m \zeta_j, \qquad \mu = \sum_{j=1}^m \mu_j,$

$\zeta_j = \sum_{A\cap\Omega_m=\{j\},\ |A|=2} Q_A, \qquad \mu_j = \sum_{A\cap\Omega_m=\{j\},\ |A|\ge3} Q_A, \qquad 1 \le j \le m,$

$L = \sum_{A\subset\Omega_m,\ |A|=2} Q_A, \qquad Y = \sum_{|A\cap\Omega_m|=2,\ |A|\ge3} Q_A, \qquad Z = \sum_{|A\cap\Omega_m|\ge3} Q_A, \qquad W = \sum_{A\cap\Omega_m=\emptyset,\ |A|\ge2} Q_A,$

and denote $f_1 = E\,e\{S + t(K + W)\}$ and $f_2 = E\,e\{S + t(K + W)\}\,itL$. In order to prove $E\,e\{S + tQ\} \approx h_1 + h_2$ we shall show that

(3.8)    $E\,e\{S + tQ\} \approx f_1 + f_2,$

(3.9)    $f_2 \approx h_3, \qquad h_3 = i^3\binom{m}{2}E\,e\{S_3 + \dots + S_N\}V,$

(3.10)    $f_1 \approx h_1 + h_2 - h_3.$

Let us introduce some more notation. Given a sum $v = v_1 + \dots + v_k$, we write $v_B = \sum_{j\in B} v_j$, for $B \subset \Omega_k$. Given $B \subset \Omega_N$, by $E_{(B)}$ we denote the conditional expectation given all the random variables but $\nu_j$, $j \in B$. For $D \subset \Omega_m$, denote

(3.11)    $Y_D = E_{(D)}\,e\{S_D\}, \qquad Z_D = E_{(D)}\,e\{S_D + t\zeta_D\}, \qquad I_D = \mathbb I\{\kappa_D > c_3^{-1}\}, \qquad \kappa_D = \tau^2N^{-1}\sum_{j\in D}\zeta^2(X_j), \qquad \zeta(a) = \sum_{j=m+1}^N g_2(a, X_j)(\nu_j - p).$
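The Bernoulli device behind the representation (3.3) can be illustrated by simulation: conditionally on $S_\nu = n$, the i.i.d. selectors $\nu_1,\dots,\nu_N$ pick a uniformly random $n$-subset, so any symmetric statistic of the selected items has the same law as under sampling without replacement. The sketch below (toy sizes, rejection sampling; all names illustrative) compares the two sampling mechanisms:

```python
import random
import statistics

# Monte Carlo illustration of the representation behind (3.3): conditionally
# on S_nu = n, i.i.d. Bernoulli(p) selectors nu_1, ..., nu_N pick a uniformly
# random n-subset, so a symmetric statistic of the selected items has the
# same law as under sampling without replacement.
rng = random.Random(7)
x = [rng.uniform(0, 1) for _ in range(12)]       # population, N = 12
N, n = len(x), 5
p = n / N

def via_bernoulli():
    while True:                                  # rejection: condition on S_nu = n
        nu = [rng.random() < p for _ in range(N)]
        if sum(nu) == n:
            return statistics.mean(v for v, keep in zip(x, nu) if keep)

def via_srswor():
    return statistics.mean(rng.sample(x, n))

a = [via_bernoulli() for _ in range(4000)]
b = [via_srswor() for _ in range(4000)]
print(abs(statistics.mean(a) - statistics.mean(b)) < 0.02)
print(abs(statistics.stdev(a) - statistics.stdev(b)) < 0.02)
```

Both comparisons print `True`: the first two moments of the conditioned Bernoulli scheme agree with simple random sampling without replacement, which is exactly what (3.3) exploits to integrate out the conditioning.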

Using the multiplicative structure of Y_D one can prove a sufficiently fast decay of its expected value as u → ∞. More precisely, one can construct random variables F_D, D ⊂ Ω_N, of the form F_D = Π_{i ∈ D} ũ(t g_1(X_i) + s/τ), where ũ ≥ 0 is a real function, such that for |D| ≥ m/4 we have

(3.12)  |Y_D| ≤ F_D,  |Z_D| ≤ I_D + F_D,  E(F_D² | X_i, X_j) ≤ u^{-20},  i, j ∈ Ω_N \ D.

Clearly, the latter inequality holds for the unconditional expectation as well. The proof of (3.12) and the construction of the random variables F_D are provided in formulas (4.9) and (4.10) of Bloznelis and Götze (2000). Let us mention that in order to establish (3.12) one chooses the constants c_1, c_2 and c_3 in an appropriate way.

Split Ω_m = Ω¹_m ∪ Ω²_m ∪ Ω³_m, where Ω^i_m, i = 1, 2, 3, are disjoint consecutive intervals with cardinalities |Ω^i_m| ≈ m/3. For i ≠ j, let Ω_{i,j} denote the set of all pairs {l, r} such that l ∈ Ω^i_m, r ∈ Ω^j_m and l < r.

Proof of (3.8). Expanding the exponent in powers of itZ and invoking (3.17) we get E e{S + tQ} = f_3 + R', where f_3 = E e{S + t(K + L + W + Y)} and where

|R'| ≤ |t| E|Z| ≤ |t| (EZ²)^{1/2} ≤ |t| c_1^{1/2}(k) u^{-3/2} ln^{3/2} u · τ^{-2} δ_3^{1/2} σ^{-1} ≪ R.

Furthermore, expanding f_3 in powers of it(L + Y) we obtain

f_3 = f_1 + f_2 + f_4 + R',  f_4 = E e{S + t(K + W)} itY,

|R'| ≤ t² E(L + Y)² ≪ t² u^{-2} ln² u · τ^{-2} (γ² + c_1(k) δ_3/σ²) ≪ R.

In the last step we invoked (3.18) and used the identity EL² = C(m, 2) p² q² τ^{-6} γ². We obtain E e{S + tQ} ≈ f_1 + f_2 + f_4. It remains to prove that f_4 ≪ R. To this aim we show that f_4 ≈ f_5 and f_5 ≪ R, where f_5 = E e{S + t(ζ + W)} itY. By the mean value theorem, |f_4 − f_5| ≤ E t² |Y μ|. Furthermore, by Cauchy–Schwarz and (3.18), (3.19),

E t² |Y μ| ≤ t² (EY²)^{1/2} (Eμ²)^{1/2} ≤ t² c_1(k) u^{-3/2} ln^{3/2} u · τ^{-4} δ_3/σ² ≪ R.

Therefore, f_4 ≈ f_5. In order to prove f_5 ≪ R we split f_5 = Σ_{1 ≤ i < j ≤ 3} f_{i,j} and show that f_{i,j} ≪ R for every i < j. Here f_{i,j} is defined in the same way as f_5, but with Y replaced by Y_{i,j}, where Y_{i,j} denotes the sum of all Q_A such that A ∩ Ω_m ∈ Ω_{i,j}. Given i, j, choose r from {1, 2, 3} \ {i, j}. Note that the random variable Y_{i,j} and the sequence {ν_l, l ∈ Ω^r_m} are independent. Therefore, by (3.12),

|f_{i,j}| ≤ |t| E|Z_{Ω^r_m} Y_{i,j}| ≤ |t| (E Z²_{Ω^r_m})^{1/2} (E Y²_{i,j})^{1/2} ≪ |t| (E F²_{Ω^r_m} + c_3 E κ_{Ω^r_m})^{1/2} (E Y²_{i,j})^{1/2}.

In the last step we used the simple inequality I_D ≤ c_3 κ_D, for D = Ω^r_m. Note that the bound (3.18) applies to EY²_{i,j} as well. This bound in combination with (3.12) and (3.24) implies f_{i,j} ≪ R, thus completing the proof of (3.8).

Proof of (3.9). Split W = W_0 + W_1, where

(3.13)  W_0 = Σ_{A ⊂ Ω_N : A ∩ Ω_m = ∅, |A| = 2} Q_A,  W_1 = Σ_{A ⊂ Ω_N : A ∩ Ω_m = ∅, |A| ≥ 3} Q_A.

In order to prove (3.9) we replace f_2 by f_6 = E e{S + t(ζ + W)} itL and then replace f_6 by f_7 = E e{S + t(ζ + W_0)} itL. Finally, we invoke the relation f_7 ≈ h_3, which is proved in Bloznelis and Götze (2000), formula (4.15). Let us prove f_2 ≈ f_6 and f_6 ≈ f_7. By the mean value theorem, |f_2 − f_6| ≤ t² E|Lμ|. Invoking (3.19) and the bound EL² ≪ m² p² q² τ^{-6} γ² we obtain, by Cauchy–Schwarz, t² E|Lμ| ≪ R. Hence, f_2 ≈ f_6.

Let us prove f_6 ≈ f_7. Split L = Σ_{1 ≤ i < j ≤ 3} L_{i,j}, where L_{i,j} = Σ_{A ∈ Ω_{i,j}} Q_A. Write f_6^{i,j} = E e{S + t(ζ + W)} itL_{i,j} and f_7^{i,j} = E e{S + t(ζ + W_0)} itL_{i,j}. In order to prove f_6 ≈ f_7 we show that f_6^{i,j} ≈ f_7^{i,j} for every 1 ≤ i < j ≤ 3. Given i, j, choose r ∈ {1, 2, 3} \ {i, j} and denote D = Ω^r_m. Expanding the exponent in f_6^{i,j} in powers of itW_1 we obtain f_6^{i,j} = f_7^{i,j} + t² R', where

|R'| ≤ E|Z_D L_{i,j} W_1| ≤ R_1 + R_2,  R_1 = E F_D |L_{i,j} W_1|,  R_2 = E I_D |L_{i,j} W_1|,

by (3.12). By Cauchy–Schwarz, we have

(3.14)  R_1² ≤ EW_1² · E L²_{i,j} F_D²,  R_2² ≤ EW_1² · E L²_{i,j} I_D ≤ EW_1² · E L²_{i,j} c_3 κ_D.

Fix {i_1, i_2} ∈ Ω_{i,j}. By symmetry, (3.12) and (3.24),

(3.15)  E L²_{i,j} F_D² = |Ω_{i,j}| p² q² E[ g_2²(X_{i_1}, X_{i_2}) E(F_D² | X_{i_1}, X_{i_2}) ] ≪ u^{-20} τ^{-2} γ²,

(3.16)  E L²_{i,j} κ_D = |Ω_{i,j}| p² q² E[ g_2²(X_{i_1}, X_{i_2}) E(κ_D | X_{i_1}, X_{i_2}) ] ≪ u^{-2} ln² u · τ^{-4} γ⁴.

Here we estimated |Ω_{i,j}| < m² and m² p² q² ≪ u^{-2} ln² u · τ⁴. It follows from (3.14), (3.15), (3.16) and (3.20) that t² R' ≪ R. We obtain f_6^{i,j} ≈ f_7^{i,j}, thus completing the proof of (3.9).

Proof of (3.10). In the proof we replace f_1 by f_8 = E e{S + t(ζ + W)} and then replace f_8 by the sum f_{10} + f_{11}, where f_{10} = E e{S + tW} and f_{11} = E e{S + tW} itζ. Finally, we replace f_{10} by f_{12} = E e{S + tW_0} and f_{11} by f_{13} = E e{S + tW_0} itζ (recall that W_0 is defined in (3.13)), and invoke the relation f_{12} + f_{13} ≈ h_1 + h_2 − h_3, which is proved in Bloznelis and Götze (2000). Let us prove f_1 ≈ f_8. Expanding f_1 in powers of itμ we obtain

f_1 = f_8 + f_9 + R',  f_9 = E e{S + t(ζ + W)} itμ,

where |R'| ≤ t² Eμ² ≪ R, by (3.19). Therefore, f_1 ≈ f_8 + f_9. In order to show f_9 ≪ R, split μ = μ¹ + μ² + μ³, where μ^j = Σ_{k ∈ Ω^j_m} μ_k, and write f_9 = f¹ + f² + f³, where f^j = E e{S + t(ζ + W)} itμ^j. Given j, we show f^j ≪ R. Fix r ∈ {1, 2, 3} \ {j} and denote D = Ω^r_m. By (3.12),

|f^j| ≤ |t| E|Z_D μ^j| ≤ |t| (E(μ^j)²)^{1/2} (E 2(F_D² + I_D))^{1/2} ≤ 2|t| (E(μ^j)²)^{1/2} (E F_D² + c_3^{3/2} E κ_D^{3/2})^{1/2}.

Here we applied Cauchy–Schwarz and the inequality I_D ≤ c_3^{3/2} κ_D^{3/2}. Note that the bound (3.19) holds for μ^j as well. This bound in combination with (3.24) and (3.12) gives f^j ≪ R. We obtain f_9 ≪ R, thus completing the proof of f_1 ≈ f_8.

In order to replace f_8 by f_{10} + f_{11} we use the relation f_8 ≈ f_{10} + f_{11}. The proof of this relation is almost the same as that of the corresponding relation (4.35) in Bloznelis and Götze (2000). It remains to show that f_{10} ≈ f_{12} and f_{11} ≈ f_{13}. Expanding f_{10} in powers of itW_1 we get f_{10} = f_{12} + R', where |R'| ≤ E|Y_{Ω_m} t W_1|. It follows from (3.12) and (3.20), by Cauchy–Schwarz, that R' ≪ R. Therefore, f_{10} ≈ f_{12}.

In order to prove f_{11} ≈ f_{13} split Ω_m = V_1 ∪ V_2, where V_1 ∩ V_2 = ∅ and the cardinality |V_i| ≈ m/2, for i = 1, 2, and write ζ = ζ_{V_1} + ζ_{V_2}. Expanding f_{11} in powers of itW_1, we get f_{11} = f_{13} + R'_{(1)} + R'_{(2)}, where |R'_{(i)}| ≤ E t² |W_1 ζ_{V_i} Y_{Ω_m \ V_i}|. Fix r ∈ V_1. By Cauchy–Schwarz and symmetry,

|R'_{(1)}|² ≤ t⁴ EW_1² · E ζ²_{V_1} Y²_{V_2} = t⁴ EW_1² |V_1| (N − m) p² q² E[ g_2²(X_r, X_N) E(Y²_{V_2} | X_r, X_N) ] ≪ t⁴ τ^{-6} u^{-20} γ² c(k) δ_3/σ².

Here we applied (3.20) and (3.12). It follows that R'_{(1)} ≪ R. The same bound holds for R'_{(2)} as well. Therefore, f_{11} ≈ f_{13}. The proof of (3.10) is complete.

Lemma 3.1. Assume that n ≤ N/2 and that T is a U-statistic of degree k. Then

(3.17)  EZ² ≤ c_1(k) m³ p³ q³ τ^{-10} δ_3 σ^{-2},

(3.18)  EY² ≤ c_1(k) m² p² q² τ^{-8} δ_3 σ^{-2},

(3.19)  Eμ² ≤ c_1(k) m p q τ^{-6} δ_3 σ^{-2},

(3.20)  EW_1² ≤ c_1(k) τ^{-4} δ_3 σ^{-2},

where Y, Z and μ are defined in (3.7), and W_1 is defined in (3.13). Here c_1(k) denotes a constant which depends on k only.

Proof of Lemma 3.1. Note that our assumption that T is a U-statistic of degree k implies σ_j² = 0 for j > k. In the proof we use the bound, which follows from (4.3) and (4.4),

(3.21)  Σ_{j=3}^k a_j σ_j² ≪ τ^{-8} δ_3,  a_j = C(n−3, j−3) C(N−n, j−3) C(N−j, j−3)^{-1}.

A simple calculation shows that a_3 = 1 and a_j = b_1 ⋯ b_{j−3} b̄_1 ⋯ b̄_{j−3}, where b_i = (n−2−i)/(N−j−i+1) and b̄_i = (N−n−i+1)/i. For i = 1, …, j−3 and j = 4, …, k we apply the simple bounds p ≤ c(k) b_i and q ≤ c(k) b̄_i/N, for some constant c(k) depending on k only. These bounds imply, for j = 3, …, k, that (pqN)^{j−3} ≤ c_1(k) a_j, where c_1(k) is a constant depending on k only. Combining this bound and (3.21) we obtain

(3.22)  Σ_{j=3}^k p^{j−3} q^{j−3} N^{j−3} σ_j² ≤ c_1(k) τ^{-8} δ_3.

We introduce some more notation. Consider a random variable M = Σ_{A ∈ M} Q_A, where M denotes a class of subsets of Ω_N, i.e., M ⊂ {A ⊂ Ω_N : 3 ≤ |A| ≤ n}. Note that EQ_A = 0 for every A, and EQ_A² = σ_j² p^j q^j for |A| = j. Since Q_A and Q_B are uncorrelated for A ≠ B, we have

(3.23)  EM² = Σ_{A ∈ M} EQ_A² = Σ_{j=3}^n e_j[M] σ_j² p^j q^j,

where e_j[M] denotes the number of different subsets A ∈ M of size |A| = j.

Let us prove (3.17). A simple calculation shows that

e_j[Z] = Σ_{r=0}^{j−3} C(m, j−r) C(N−m, r),  3 ≤ j ≤ k.

Invoking the inequality C(m, j−r) ≤ m³ C(m, j−r−3) we obtain

e_j[Z] ≤ m³ Σ_{r=0}^{j−3} C(m, j−r−3) C(N−m, r) = m³ C(N, j−3),

by Vandermonde's convolution formula. Therefore, e_j[Z] ≤ m³ N^{j−3}. In view of (3.23) (applied to Z) we obtain (3.17) from this inequality and (3.22).

The proof of the remaining bounds (3.18)–(3.20) is similar. We find

e_j[Y] = C(m, 2) C(N−m, j−2),  e_j[μ] = m C(N−m, j−1),  e_j[W_1] = C(N−m, j).

It follows from these identities that

e_j[Y] ≤ m² N^{j−2},  e_j[μ] ≤ m N^{j−1},  e_j[W_1] ≤ N^j.

Combining these bounds and (3.22) we obtain (3.18)–(3.20), thus completing the proof of the lemma.
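The count e_j[Z] and the Vandermonde step used in the proof of (3.17) are easy to confirm by brute force; the values N = 10, m = 5, j = 4 below are arbitrary small choices for the sketch:

```python
from itertools import combinations
from math import comb

# Brute-force check of the count used in the proof of (3.17):
#   e_j[Z] = #{A in Omega_N : |A| = j, |A ∩ Omega_m| >= 3}
#          = sum_{r=0}^{j-3} C(m, j-r) C(N-m, r),
# together with the Vandermonde bound e_j[Z] <= m^3 C(N, j-3).
N, m, j = 10, 5, 4
omega_m = set(range(m))

count = sum(1 for A in combinations(range(N), j)
            if len(omega_m & set(A)) >= 3)
formula = sum(comb(m, j - r) * comb(N - m, r) for r in range(j - 2))

assert count == formula
assert formula <= m ** 3 * comb(N, j - 3)
print(count)
```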

Lemma 3.2. For every D ⊂ Ω_m with cardinality |D| ≥ m/4, and for any i, j ∈ Ω_m \ D, we have

(3.24)  E(κ_D | X_i, X_j) ≪ τ^{-2} γ²,  E(κ_D^{3/2} | X_i, X_j) ≪ τ^{-3} γ³,  Eκ_D ≪ τ^{-2} γ²,  Eκ_D^{3/2} ≪ τ^{-3} γ³,

where κ_D is defined in (3.11) and where m ≈ c_2 N u^{-1} ln u.

Proof of Lemma 3.2. Clearly, the first two inequalities imply the remaining ones. The first inequality is proved in Bloznelis and Götze (2000), formula (4.22). Let us prove the second inequality. An application of the simple inequality (|D|^{-1} Σ_{r ∈ D} x_r²)^{3/2} ≤ |D|^{-1} Σ_{r ∈ D} |x_r|³ to x_r = ζ(X_r) gives κ_D^{3/2} ≤ τ^{-3} |D|^{-1} Σ_{r ∈ D} |ζ(X_r)|³. Invoking the bound E(|ζ(X_r)|³ | X_i, X_j) ≪ (Npq)^{3/2} E|g_2(X_r, X_N)|³ we complete the proof. In order to prove this latter bound, we first apply Rosenthal's inequality to the conditional expectation of |ζ(X_r)|³ given all the random variables but ν_{m+1}, …, ν_N, and then take the expected value given X_i, X_j and apply the inequalities (4.5) of Bloznelis and Götze (2000); see also (5.4) ibidem. The lemma is proved.

4. Combinatorial lemmas

Here we prove two lemmas. Lemma 4.1 establishes bounds for the second moments of the random variables Y_m, Z_m and η_{m,i} defined in (2.9) above. Lemma 4.2 provides an auxiliary combinatorial identity.

We first introduce some more notation. For k ≤ n write Ω_k^c = Ω_n \ Ω_k, where Ω_k = {1, …, k}. Let H be a random variable of the form H = Σ_{A ∈ H} T_A, where H is a class of subsets of Ω_N, H ⊂ {A ⊂ Ω_N : 1 ≤ |A| ≤ n}. Denote U_j(H) = Σ_{A ∈ H, |A| = j} T_A and write e_j(H) = EU_j²(H)/σ_j², for σ_j² > 0. For σ_j² = 0 put e_j(H) = 0. We have

H = Σ_{1 ≤ j ≤ n} U_j(H)  and  EH² = Σ_{1 ≤ j ≤ n} EU_j²(H) = Σ_{1 ≤ j ≤ n} e_j(H) σ_j²,

where the second identity follows from the fact that T_A and T_B are uncorrelated for A ≠ B, see (2.2) above.

For non-negative integers k, s, t, u such that u ≥ min{s, t} + k denote

r_k(s, t, u) = Σ_{v=0}^{s∧t} (−1)^{v+k} C(s, v) C(t, v) C(u, v+k)^{-1}.

Here s∧t = min{s, t}. Recall that for x ∈ R, C(x, r) = [x]_r/r! if the integer r ≥ 0, and C(x, r) = 0 for r < 0. Here [x]_r = x(x−1)⋯(x−r+1) for r > 0, and [x]_0 = 1. In particular, for non-negative integers s < v, we have C(s, v) = 0.

Lemma 4.1. Assume that 100 ≤ n ≤ N/2. Given 3 ≤ m ≤ n and i ∈ Ω_m, we have

(4.1)  EY_m² ≪ n^{-3} m⁴ δ_3,  EZ_m² ≪ n^{-4} m⁶ δ_3,  Eη²_{m,i} ≪ n^{-2} δ_3.

Proof of Lemma 4.1. In order to prove (4.1) we show that

(4.2)  EZ_m² ≪ m⁶ EZ_3²,  EY_m² ≪ n m⁴ EZ_3²,  Eη²_{m,i} ≪ n² EZ_3²,

(4.3)  EZ_3² ≪ n^{-4} δ_3.

Let us show that, for 3 ≤ j ≤ n,

(4.4)  e_j(Z_3) = C(n−3, j−3) r_0(j−3, n−j, N−j) = C(n−3, j−3) C(N−n, j−3) C(N−j, j−3)^{-1}.

By symmetry,

EU_j²(Z_3) = C(n−3, j−3) E T_{Ω_j} U_j(Z_3),  E T_{Ω_j} U_j(Z_3) = Σ_{v=0}^{(j−3)∧(n−j)} M_v s_{j, j−v},

where M_v = C(j−3, v) C(n−j, v) counts the summands T_A of the sum U_j(Z_3) which satisfy |A ∩ Ω_j| = j−v. Invoking (2.3), we obtain E T_{Ω_j} U_j(Z_3) = r_0(j−3, n−j, N−j) σ_j², thus proving the first part of (4.4). The second part follows from (4.19).

Let us prove the first inequality of (4.2). Write Z_m = Z_3 + D_4 + ⋯ + D_m, where D_s = Z_s − Z_{s−1}. By the inequality (a_1 + ⋯ + a_s)² ≤ s(a_1² + ⋯ + a_s²), we have EZ_m² ≤ (m−2)(EZ_3² + ED_4² + ⋯ + ED_m²). Now (4.2) follows from the inequalities ED_s² ≪ s⁴ EZ_3², for 4 ≤ s ≤ m. To prove these inequalities we show that, for 3 ≤ j ≤ n,

(4.5)  e_j(D_s) ≪ s⁴ e_j(Z_3).

Observe that e_j(D_s) = 0 for n−s+3 < j ≤ n, by the definition of D_s. In the case where 3 ≤ j ≤ n−s+3, the inequalities (4.5) follow from the identities (4.6), (4.7) and the bound (4.9) below. We have

D_s = Σ_{A ⊂ Ω_{s−1}, |A| = 2} Σ_{B ⊂ Ω_s^c} T_{A ∪ {s} ∪ B}.

Given j, this sum has C(s−1, 2) C(n−s, j−3) different summands T_{A ∪ {s} ∪ B} such that |A ∪ {s} ∪ B| = j. Fix B_0 ⊂ Ω_s^c with |B_0| = j−3 and denote A_0 = Ω_2 ∪ {s} ∪ B_0. By symmetry,

(4.6)  EU_j²(D_s) = C(s−1, 2) C(n−s, j−3) E T_{A_0} U_j(D_s).

In the next step we show that

(4.7)  E T_{A_0} U_j(D_s) = ( L_0(j) + 2(s−3) L_1(j) + C(s−3, 2) L_2(j) ) σ_j²,

where we denote L_i(j) = r_i(j−3, κ, N−j), with κ = n−s−j+3. To this aim split U_j(D_s) = W_0 + W_1 + W_2, where

W_i = Σ_{A ∈ A_i} Σ_{B ⊂ Ω_s^c, |B| = j−3} T_{A ∪ {s} ∪ B},  A_i = {A ⊂ Ω_{s−1} : |A| = 2, |A ∩ Ω_2| = 2−i},

and write E T_{A_0} U_j(D_s) = E T_{A_0} W_0 + E T_{A_0} W_1 + E T_{A_0} W_2. Note that (4.7) follows from the identities

(4.8)  E T_{A_0} W_i = |A_i| L_i(j) σ_j²,  i = 0, 1, 2,

which we are going to prove now. Denote H_0 = {1, 2}, H_1 = {1, 3}, H_2 = {3, 4}. By symmetry, for i = 0, 1, 2,

E T_{A_0} W_i = |A_i| E T_{A_0} Σ_{B ⊂ Ω_s^c, |B| = j−3} T_{H_i ∪ {s} ∪ B} = |A_i| Σ_{v=0}^{(j−3)∧κ} M_v s_{j, j−v−i}.

Here M_v = C(j−3, v) C(κ, v) counts the subsets B ⊂ Ω_s^c such that |B ∩ B_0| = j−3−v. Invoking (2.3) we obtain (4.8).

We complete the proof of (4.5) by showing that, for 3 ≤ j ≤ n−s+3,

(4.9)  L'_i(j) ≪ 1,  where L'_i(j) := C(n−s, j−3) L_i(j) / e_j(Z_3),  i = 0, 1, 2.

To evaluate L'_i(j) we use the expression (4.4) for e_j(Z_3) and invoke the formulas (4.21) for L_i(j). For i = 0 a simple calculation shows that

L'_0(j) = ( [n−s]_{j−3} [N−n+s−3]_{j−3} ) / ( [N−n]_{j−3} [n−3]_{j−3} ) = Π_{r=0}^{j−4} (x_r / y_r) · (y_r + s−3)/(x_r + s−3),

where we denote x_r = n−s−r and y_r = N−n−r. Now the inequality L'_0(j) ≤ 1 follows from the inequalities x_r ≤ y_r, which are consequences of the inequality n ≤ N/2. For i = 1, 2 the proof of (4.9) is similar.

Let us prove the second inequality of (4.2). To this aim we shall show that

(4.10)  e_j(Y_m) ≪ n m⁴ e_j(Z_3),  3 ≤ j ≤ n.

Note that (by the definition of Y_m) e_j(Y_m) = 0 for j > n−m+2. Let us prove (4.10) for 3 ≤ j ≤ n−m+2. Given j, fix B_0 ⊂ Ω_m^c with |B_0| = j−2. By symmetry,

(4.11)  EU_j²(Y_m) = C(m, 2) C(n−m, j−2) E T_{Ω_2 ∪ B_0} U_j(Y_m).

Proceeding as in the proof of (4.7) above, we obtain

(4.12)  E T_{Ω_2 ∪ B_0} U_j(Y_m) = ( L_0(j) + 2(m−2) L_1(j) + C(m, 2) L_2(j) ) σ_j²,

where L_i(j) = r_i(j−2, n−m−j+2, N−j), for i = 0, 1, 2. Furthermore, arguing as in the proof of (4.9) we obtain

(4.13)  C(n−m, j−2) L_i(j) ≪ n e_j(Z_3),  for i = 0, 1, 2.

Finally, combining (4.11)–(4.13) we get EU_j²(Y_m) ≪ n m⁴ e_j(Z_3) σ_j². This bound implies (4.10), thus completing the proof of the second inequality of (4.2).

In order to prove the last inequality of (4.2) we shall show that, for 3 ≤ j ≤ n,

(4.14)  e_j(η_{m,i}) ≪ n² e_j(Z_3).

Note that (by the definition of η_{m,i}) e_j(η_{m,i}) = 0 for j > n−m+1. Let us prove (4.14) for 3 ≤ j ≤ n−m+1. Given j, fix B_0 ⊂ Ω_m^c with |B_0| = j−1. By symmetry,

(4.15)  EU_j²(η_{m,i}) = C(n−m, j−1) E T_{{i} ∪ B_0} U_j(η_{m,i}).

Denote κ = n−m−j+1. A direct calculation shows that

E T_{{i} ∪ B_0} U_j(η_{m,i}) = Σ_{v=0}^{(j−1)∧κ} C(j−1, v) C(κ, v) s_{j, j−v} = r_0(j−1, κ, N−j) σ_j²,

where in the last step we invoked (2.3). Furthermore, proceeding as in the proof of (4.9) we obtain

(4.16)  C(n−m, j−1) r_0(j−1, κ, N−j) ≪ n² e_j(Z_3).

Now (4.14) follows from (4.15) and (4.16).

Proof of (4.3). Note that the inequality n ≤ N/2 implies τ² ≥ n/2. Therefore, in order to prove (4.3) it suffices to show that EZ_3² ≪ E(D_3 T)². For this purpose we show that

(4.17)  e_j(Z_3) ≪ e_j(D_3 T),  3 ≤ j ≤ n.

We invoke the formula for EU_j²(D_i T) provided in Lemma 2 of Bloznelis and Götze (2001):

EU_j²(D_i T) = C(n−i, j−i) C(N−n−i, j−i) C(N−i−j, j−i)^{-1} ( (N−j+1)/(N−i−j+1) )^{2i} σ_j².

Combining this formula and (4.4) we find e_j(Z_3)/e_j(D_3 T) = A · B · C, where

A = [N−n]_3 / [N−j]_3,  B = [N−2j+3]_3 / [N−n−j+3]_3,  C = ( (N−j−2)/(N−j+1) )^{2·3}.

The inequality (4.17) follows from the inequalities A ≤ 1, C ≤ 2³ and B ≤ 2³. The first two inequalities are obvious. In order to show the last one we write j = n−ε and use the fact that ε ≥ 0. The lemma is proved.

In the remaining part of the section we evaluate the coefficients r_k(s, t, u). Using the identity, see Feller (1968), Chapter II,

(4.18)  Σ_{v=0}^a (−1)^v C(a, v) C(u−v, t) = C(u−a, t−a),  a, t, u ∈ {0, 1, 2, …},

Zhao and Chen (1990) showed that for u ≥ s + t

(4.19)  r_0(s, t, u) = C(u, t)^{-1} C(u−s, t).

Given integers 0 ≤ s ≤ t, let l(t, s) denote the coefficients of the expansion

(4.20)  [v+t]_t = l(t, t)[v]_t + l(t, t−1)[v]_{t−1} + ⋯ + l(t, 0)[v]_0.
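The evaluation (4.19) is easy to test numerically with exact rational arithmetic; the parameter grid below is an arbitrary small range chosen for this sketch:

```python
from fractions import Fraction
from math import comb

# Exact check of the Zhao-Chen evaluation (4.19):
#   r_0(s, t, u) = sum_{v=0}^{s∧t} (-1)^v C(s,v) C(t,v) / C(u,v)
#                = C(u-s, t) / C(u, t)    for u >= s + t.
def r0(s, t, u):
    return sum(Fraction((-1) ** v * comb(s, v) * comb(t, v), comb(u, v))
               for v in range(min(s, t) + 1))

for s in range(5):
    for t in range(5):
        for u in range(s + t, 12):
            assert r0(s, t, u) == Fraction(comb(u - s, t), comb(u, t))
print("(4.19) verified on the test grid")
```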

Lemma 4.2. Let k, s, t, u ∈ {0, 1, 2, …}. For u ≥ s∧t + k, we have

(4.21)  r_k(s, t, u) = Σ_{r=0}^k l(k, r) (−1)^{r+k} A_{k,r},

where A_{k,r} = 0 for r > s∧t; A_{k,r} = 0 for u < s + t + k − r; and otherwise

A_{k,r} = (u−s−k)! (u−t−k)! [s]_r [t]_r / ( (u−s−t−k+r)! u! ).

Clearly, the numbers l(i, j) can be expressed via Stirling numbers. A direct calculation shows that l(0, 0) = 1; l(1, 0) = 1, l(1, 1) = 1; l(2, 0) = 2, l(2, 1) = 4, l(2, 2) = 1; l(3, 0) = 6, l(3, 1) = 18, l(3, 2) = 9, l(3, 3) = 1.

Proof of Lemma 4.2. Write a = min{s, t} and b = max{s, t}. We have

(4.22)  r_k(s, t, u) = Σ_{v=0}^a (−1)^{v+k} M_v,  where M_v = C(a, v) C(b, v) C(u, v+k)^{-1}.

A simple calculation shows that

M_v = [v+k]_k w_k(a, u) C(b, v) C(u−k−v, u−k−a),  w_k(a, u) = [u]_k^{-1} C(u−k, a)^{-1}.

Invoking the expansion (4.20) for the function v ↦ [v+k]_k, we obtain an expression for M_v. Substituting this expression in (4.22) we get

r_k(s, t, u) = w_k(a, u) Σ_{r=0}^k l(k, r) S_r,  S_r = Σ_{v=0}^a (−1)^{v+k} [v]_r C(b, v) C(u−k−v, u−k−a).

We complete the proof of (4.21) by showing that, for 0 ≤ r ≤ k,

(4.23)  S_r = (−1)^{r+k} [b]_r C(u−b−k, a−r)  and  [b]_r w_k(a, u) C(u−b−k, a−r) = A_{k,r}.

Note that [v]_r = 0 for v < r. For r ≤ v ≤ b, we have [v]_r C(b, v) = [b]_r C(b−r, v−r). Therefore, denoting v' = v−r, we can write

S_r = (−1)^{r+k} [b]_r Σ_{v'=0}^{a−r} (−1)^{v'} C(b−r, v') C(u−k−r−v', u−k−a).

Finally, invoking (4.18) we obtain the first identity of (4.23). The second identity is trivial.
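The listed values of l(t, r) can be verified directly against the defining expansion (4.20); the sketch below checks t = 1, 2, 3 over a small range of v:

```python
from math import prod

# Check of the coefficient values l(t, r) listed above against the
# defining expansion (4.20): [v+t]_t = sum_{r=0}^t l(t, r) [v]_r,
# where [x]_r = x(x-1)...(x-r+1) is the falling factorial.
def falling(x, r):
    return prod(x - i for i in range(r))

l = {1: [1, 1], 2: [2, 4, 1], 3: [6, 18, 9, 1]}  # l[t][r] for r = 0, ..., t
for t, coeffs in l.items():
    for v in range(10):
        assert falling(v + t, t) == sum(c * falling(v, r)
                                        for r, c in enumerate(coeffs))
print("l(t, r) verified for t = 1, 2, 3")
```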

References

Albers, W., Bickel, P. J. and van Zwet, W. R., Asymptotic expansions for the power of distribution free tests in the one-sample problem, Ann. Statist. 4 (1976).
Babu, G. J. and Bai, Z. D., Mixtures of global and local Edgeworth expansions and their applications, J. Multivariate Anal. 59 (1996).
Babu, G. J. and Singh, K., Edgeworth expansions for sampling without replacement from finite populations, J. Multivariate Anal. 17 (1985).
Bentkus, V., Götze, F. and van Zwet, W. R., An Edgeworth expansion for symmetric statistics, Ann. Statist. 25 (1997).
Bertail, P., Second-order properties of an extrapolated bootstrap without replacement under weak assumptions, Bernoulli 3 (1997).
Bhattacharya, R. N. and Ghosh, J. K., On the validity of the formal Edgeworth expansion, Ann. Statist. 6 (1978).
Bickel, P. J., Edgeworth expansions in nonparametric statistics, Ann. Statist. 2 (1974).
Bickel, P. J., Götze, F. and van Zwet, W. R., The Edgeworth expansion for U-statistics of degree two, Ann. Statist. 14 (1986).
Bickel, P. J., Götze, F. and van Zwet, W. R., Resampling fewer than n observations: gains, losses, and remedies for losses, Statist. Sinica 7 (1997).
Bickel, P. J. and Robinson, J., Edgeworth expansions and smoothness, Ann. Probab. 10 (1982).
Bickel, P. J. and van Zwet, W. R., Asymptotic expansions for the power of distribution free tests in the two-sample problem, Ann. Statist. 6 (1978).
Bikelis, A., On the estimation of the remainder term in the central limit theorem for samples from finite populations, Studia Sci. Math. Hungar. 4 (1972) (in Russian).
Bloznelis, M., A Berry–Esseen bound for finite population Student's statistic, Ann. Probab. 27 (1999).
Bloznelis, M. and Götze, F., An Edgeworth expansion for finite population U-statistics, Bernoulli 6 (2000).
Bloznelis, M. and Götze, F., One term Edgeworth expansion for finite population U-statistics of degree two, Acta Applicandae Mathematicae 58 (1999).
Bloznelis, M. and Götze, F., Orthogonal decomposition of finite population statistics and its applications to distributional asymptotics, Ann. Statist. 29 (2001).
Bolthausen, E. and Götze, F., The rate of convergence for multivariate sampling statistics, Ann. Statist. 21 (1993).
Booth, J. G. and Hall, P., An improvement of the jackknife distribution function estimator, Ann. Statist. 21 (1993).
Callaert, H. and Janssen, P., A Berry–Esseen theorem for U-statistics, Ann. Statist. 6 (1978).
Callaert, H., Janssen, P. and Veraverbeke, N., An Edgeworth expansion for U-statistics, Ann. Statist. 8 (1980).
Chibisov, D. M., Asymptotic expansion for the distribution of a statistic admitting a stochastic expansion, Theory Probab. Appl. 17 (1972).
Erdős, P. and Rényi, A., On the central limit theorem for samples from a finite population, Publ. Math. Inst. Hungar. Acad. Sci. 4 (1959).
Feller, W., An Introduction to Probability Theory and Its Applications, Vol. I, Wiley, New York, 1968.
Götze, F., Asymptotic expansions for bivariate von Mises functionals, Z. Wahrsch. verw. Gebiete 50 (1979).


More information

Notes on Poisson Approximation

Notes on Poisson Approximation Notes on Poisson Approximation A. D. Barbour* Universität Zürich Progress in Stein s Method, Singapore, January 2009 These notes are a supplement to the article Topics in Poisson Approximation, which appeared

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012 Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 202 BOUNDS AND ASYMPTOTICS FOR FISHER INFORMATION IN THE CENTRAL LIMIT THEOREM

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Estimation of a quadratic regression functional using the sinc kernel

Estimation of a quadratic regression functional using the sinc kernel Estimation of a quadratic regression functional using the sinc kernel Nicolai Bissantz Hajo Holzmann Institute for Mathematical Stochastics, Georg-August-University Göttingen, Maschmühlenweg 8 10, D-37073

More information

Stein s Method: Distributional Approximation and Concentration of Measure

Stein s Method: Distributional Approximation and Concentration of Measure Stein s Method: Distributional Approximation and Concentration of Measure Larry Goldstein University of Southern California 36 th Midwest Probability Colloquium, 2014 Concentration of Measure Distributional

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics ON A HYBRID FAMILY OF SUMMATION INTEGRAL TYPE OPERATORS VIJAY GUPTA AND ESRA ERKUŞ School of Applied Sciences Netaji Subhas Institute of Technology

More information

ASYMPTOTIC NORMALITY UNDER TWO-PHASE SAMPLING DESIGNS

ASYMPTOTIC NORMALITY UNDER TWO-PHASE SAMPLING DESIGNS Statistica Sinica 17(2007), 1047-1064 ASYMPTOTIC NORMALITY UNDER TWO-PHASE SAMPLING DESIGNS Jiahua Chen and J. N. K. Rao University of British Columbia and Carleton University Abstract: Large sample properties

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information

Combinatorial Dimension in Fractional Cartesian Products

Combinatorial Dimension in Fractional Cartesian Products Combinatorial Dimension in Fractional Cartesian Products Ron Blei, 1 Fuchang Gao 1 Department of Mathematics, University of Connecticut, Storrs, Connecticut 0668; e-mail: blei@math.uconn.edu Department

More information

BURGESS INEQUALITY IN F p 2. Mei-Chu Chang

BURGESS INEQUALITY IN F p 2. Mei-Chu Chang BURGESS INEQUALITY IN F p 2 Mei-Chu Chang Abstract. Let be a nontrivial multiplicative character of F p 2. We obtain the following results.. Given ε > 0, there is δ > 0 such that if ω F p 2\F p and I,

More information

A central limit theorem for randomly indexed m-dependent random variables

A central limit theorem for randomly indexed m-dependent random variables Filomat 26:4 (2012), 71 717 DOI 10.2298/FIL120471S ublished by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat A central limit theorem for randomly

More information

PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS

PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS REZA HASHEMI and MOLUD ABDOLAHI Department of Statistics, Faculty of Science, Razi University, 67149, Kermanshah,

More information

Krzysztof Burdzy University of Washington. = X(Y (t)), t 0}

Krzysztof Burdzy University of Washington. = X(Y (t)), t 0} VARIATION OF ITERATED BROWNIAN MOTION Krzysztof Burdzy University of Washington 1. Introduction and main results. Suppose that X 1, X 2 and Y are independent standard Brownian motions starting from 0 and

More information

Probabilistic number theory and random permutations: Functional limit theory

Probabilistic number theory and random permutations: Functional limit theory The Riemann Zeta Function and Related Themes 2006, pp. 9 27 Probabilistic number theory and random permutations: Functional limit theory Gutti Jogesh Babu Dedicated to Professor K. Ramachandra on his 70th

More information

On the convergence of sequences of random variables: A primer

On the convergence of sequences of random variables: A primer BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :

More information

CHAPTER 3: LARGE SAMPLE THEORY

CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 1 CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 2 Introduction CHAPTER 3 LARGE SAMPLE THEORY 3 Why large sample theory studying small sample property is usually

More information

On the absolute constants in the Berry Esseen type inequalities for identically distributed summands

On the absolute constants in the Berry Esseen type inequalities for identically distributed summands On the absolute constants in the Berry Esseen type inequalities for identically distributed summands Irina Shevtsova arxiv:1111.6554v1 [math.pr] 28 Nov 2011 Abstract By a modification of the method that

More information

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1. A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion

More information

The Kadec-Pe lczynski theorem in L p, 1 p < 2

The Kadec-Pe lczynski theorem in L p, 1 p < 2 The Kadec-Pe lczynski theorem in L p, 1 p < 2 I. Berkes and R. Tichy Abstract By a classical result of Kadec and Pe lczynski (1962), every normalized weakly null sequence in L p, p > 2 contains a subsequence

More information

Stein Couplings for Concentration of Measure

Stein Couplings for Concentration of Measure Stein Couplings for Concentration of Measure Jay Bartroff, Subhankar Ghosh, Larry Goldstein and Ümit Işlak University of Southern California [arxiv:0906.3886] [arxiv:1304.5001] [arxiv:1402.6769] Borchard

More information

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives,

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives, The derivative Let O be an open subset of R n, and F : O R m a continuous function We say F is differentiable at a point x O, with derivative L, if L : R n R m is a linear transformation such that, for

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

Maximal inequalities and a law of the. iterated logarithm for negatively. associated random fields

Maximal inequalities and a law of the. iterated logarithm for negatively. associated random fields Maximal inequalities and a law of the arxiv:math/0610511v1 [math.pr] 17 Oct 2006 iterated logarithm for negatively associated random fields Li-Xin ZHANG Department of Mathematics, Zhejiang University,

More information

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM

ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM c 2007-2016 by Armand M. Makowski 1 ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM 1 The basic setting Throughout, p, q and k are positive integers. The setup With

More information

General Franklin systems as bases in H 1 [0, 1]

General Franklin systems as bases in H 1 [0, 1] STUDIA MATHEMATICA 67 (3) (2005) General Franklin systems as bases in H [0, ] by Gegham G. Gevorkyan (Yerevan) and Anna Kamont (Sopot) Abstract. By a general Franklin system corresponding to a dense sequence

More information

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains A regeneration proof of the central limit theorem for uniformly ergodic Markov chains By AJAY JASRA Department of Mathematics, Imperial College London, SW7 2AZ, London, UK and CHAO YANG Department of Mathematics,

More information

Almost Sure Central Limit Theorem for Self-Normalized Partial Sums of Negatively Associated Random Variables

Almost Sure Central Limit Theorem for Self-Normalized Partial Sums of Negatively Associated Random Variables Filomat 3:5 (207), 43 422 DOI 0.2298/FIL70543W Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Almost Sure Central Limit Theorem

More information

A NEW UPPER BOUND FOR FINITE ADDITIVE BASES

A NEW UPPER BOUND FOR FINITE ADDITIVE BASES A NEW UPPER BOUND FOR FINITE ADDITIVE BASES C SİNAN GÜNTÜRK AND MELVYN B NATHANSON Abstract Let n, k denote the largest integer n for which there exists a set A of k nonnegative integers such that the

More information

On variable bandwidth kernel density estimation

On variable bandwidth kernel density estimation JSM 04 - Section on Nonparametric Statistics On variable bandwidth kernel density estimation Janet Nakarmi Hailin Sang Abstract In this paper we study the ideal variable bandwidth kernel estimator introduced

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics Chapter Three. Point Estimation 3.4 Uniformly Minimum Variance Unbiased Estimator(UMVUE) Criteria for Best Estimators MSE Criterion Let F = {p(x; θ) : θ Θ} be a parametric distribution

More information

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY J. Korean Math. Soc. 45 (2008), No. 4, pp. 1101 1111 ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY Jong-Il Baek, Mi-Hwa Ko, and Tae-Sung

More information

THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION

THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION JAINUL VAGHASIA Contents. Introduction. Notations 3. Background in Probability Theory 3.. Expectation and Variance 3.. Convergence

More information

Distance between multinomial and multivariate normal models

Distance between multinomial and multivariate normal models Chapter 9 Distance between multinomial and multivariate normal models SECTION 1 introduces Andrew Carter s recursive procedure for bounding the Le Cam distance between a multinomialmodeland its approximating

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY. 1. Introduction

ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY. 1. Introduction ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY SERGIU KLAINERMAN AND MATEI MACHEDON Abstract. The purpose of this note is to give a new proof of uniqueness of the Gross- Pitaevskii hierarchy,

More information

RANDOM MATRICES: LAW OF THE DETERMINANT

RANDOM MATRICES: LAW OF THE DETERMINANT RANDOM MATRICES: LAW OF THE DETERMINANT HOI H. NGUYEN AND VAN H. VU Abstract. Let A n be an n by n random matrix whose entries are independent real random variables with mean zero and variance one. We

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

Solving a linear equation in a set of integers II

Solving a linear equation in a set of integers II ACTA ARITHMETICA LXXII.4 (1995) Solving a linear equation in a set of integers II by Imre Z. Ruzsa (Budapest) 1. Introduction. We continue the study of linear equations started in Part I of this paper.

More information

large number of i.i.d. observations from P. For concreteness, suppose

large number of i.i.d. observations from P. For concreteness, suppose 1 Subsampling Suppose X i, i = 1,..., n is an i.i.d. sequence of random variables with distribution P. Let θ(p ) be some real-valued parameter of interest, and let ˆθ n = ˆθ n (X 1,..., X n ) be some estimate

More information

Chapter 6. Convergence. Probability Theory. Four different convergence concepts. Four different convergence concepts. Convergence in probability

Chapter 6. Convergence. Probability Theory. Four different convergence concepts. Four different convergence concepts. Convergence in probability Probability Theory Chapter 6 Convergence Four different convergence concepts Let X 1, X 2, be a sequence of (usually dependent) random variables Definition 1.1. X n converges almost surely (a.s.), or with

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

SOME CONVERSE LIMIT THEOREMS FOR EXCHANGEABLE BOOTSTRAPS

SOME CONVERSE LIMIT THEOREMS FOR EXCHANGEABLE BOOTSTRAPS SOME CONVERSE LIMIT THEOREMS OR EXCHANGEABLE BOOTSTRAPS Jon A. Wellner University of Washington The bootstrap Glivenko-Cantelli and bootstrap Donsker theorems of Giné and Zinn (990) contain both necessary

More information

Scattering for the NLS equation

Scattering for the NLS equation Scattering for the NLS equation joint work with Thierry Cazenave (UPMC) Ivan Naumkin Université Nice Sophia Antipolis February 2, 2017 Introduction. Consider the nonlinear Schrödinger equation with the

More information

Stein s Method and Characteristic Functions

Stein s Method and Characteristic Functions Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

The expansion of random regular graphs

The expansion of random regular graphs The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

ALMOST SURE CONVERGENCE OF THE BARTLETT ESTIMATOR

ALMOST SURE CONVERGENCE OF THE BARTLETT ESTIMATOR Periodica Mathematica Hungarica Vol. 51 1, 2005, pp. 11 25 ALMOST SURE CONVERGENCE OF THE BARTLETT ESTIMATOR István Berkes Graz, Budapest, LajosHorváth Salt Lake City, Piotr Kokoszka Logan Qi-man Shao

More information

Central Limit Theorem for Non-stationary Markov Chains

Central Limit Theorem for Non-stationary Markov Chains Central Limit Theorem for Non-stationary Markov Chains Magda Peligrad University of Cincinnati April 2011 (Institute) April 2011 1 / 30 Plan of talk Markov processes with Nonhomogeneous transition probabilities

More information

Almost sure limit theorems for U-statistics

Almost sure limit theorems for U-statistics Almost sure limit theorems for U-statistics Hajo Holzmann, Susanne Koch and Alesey Min 3 Institut für Mathematische Stochasti Georg-August-Universität Göttingen Maschmühlenweg 8 0 37073 Göttingen Germany

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

Strong approximation for additive functionals of geometrically ergodic Markov chains

Strong approximation for additive functionals of geometrically ergodic Markov chains Strong approximation for additive functionals of geometrically ergodic Markov chains Florence Merlevède Joint work with E. Rio Université Paris-Est-Marne-La-Vallée (UPEM) Cincinnati Symposium on Probability

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

Gaussian Random Fields: Excursion Probabilities

Gaussian Random Fields: Excursion Probabilities Gaussian Random Fields: Excursion Probabilities Yimin Xiao Michigan State University Lecture 5 Excursion Probabilities 1 Some classical results on excursion probabilities A large deviation result Upper

More information

ONE DIMENSIONAL MARGINALS OF OPERATOR STABLE LAWS AND THEIR DOMAINS OF ATTRACTION

ONE DIMENSIONAL MARGINALS OF OPERATOR STABLE LAWS AND THEIR DOMAINS OF ATTRACTION ONE DIMENSIONAL MARGINALS OF OPERATOR STABLE LAWS AND THEIR DOMAINS OF ATTRACTION Mark M. Meerschaert Department of Mathematics University of Nevada Reno NV 89557 USA mcubed@unr.edu and Hans Peter Scheffler

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

the convolution of f and g) given by

the convolution of f and g) given by 09:53 /5/2000 TOPIC Characteristic functions, cont d This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that

More information