Rates of convergence for minimal distances in the central limit theorem under projective criteria
Jérôme Dedecker (a), Florence Merlevède (b) and Emmanuel Rio (c)

(a) Université Paris 6, LSTA, 175 rue du Chevaleret, 75013 Paris, FRANCE. jerome.dedecker@upmc.fr
(b) Université Paris 6, LPMA and C.N.R.S UMR 7599, 175 rue du Chevaleret, 75013 Paris, FRANCE. florence.merlevede@upmc.fr
(c) Université de Versailles, Laboratoire de mathématiques, UMR 8100 CNRS, Bâtiment Fermat, 45 Avenue des Etats-Unis, 78035 Versailles, FRANCE. rio@math.uvsq.fr

Key words: Minimal and ideal distances, rates of convergence, martingale difference sequences, stationary sequences, projective criteria, weak dependence, uniform mixing.

Mathematical Subject Classification (2000): 60 F 05.

Abstract

In this paper, we give estimates of ideal or minimal distances between the distribution of the normalized partial sum and the limiting Gaussian distribution for stationary martingale difference sequences or stationary sequences satisfying projective criteria. Applications to functions of linear processes and to functions of expanding maps of the interval are given.

1 Introduction and Notations

Let $X_1, X_2, \ldots$ be a strictly stationary sequence of real-valued random variables (r.v.) with mean zero and finite variance. Set $S_n = X_1 + X_2 + \cdots + X_n$. By $P_{n^{-1/2}S_n}$ we denote the law of $n^{-1/2}S_n$ and by $G_{\sigma^2}$ the normal distribution $N(0,\sigma^2)$. In this paper, we shall give quantitative estimates of the approximation of $P_{n^{-1/2}S_n}$ by $G_{\sigma^2}$ in terms of minimal or ideal metrics.

Let $L(\mu,\nu)$ be the set of the probability laws on $\mathbb{R}^2$ with marginals $\mu$ and $\nu$. Let us consider the following minimal distances (sometimes called Wasserstein distances) of order $r$:
$$W_r(\mu,\nu) = \begin{cases} \inf\Big\{\int |x-y|^r\,P(dx,dy) : P \in L(\mu,\nu)\Big\} & \text{if } 0 < r < 1, \\[4pt] \Big(\inf\Big\{\int |x-y|^r\,P(dx,dy) : P \in L(\mu,\nu)\Big\}\Big)^{1/r} & \text{if } r \geq 1. \end{cases}$$
It is well known that for two probability measures $\mu$ and $\nu$ on $\mathbb{R}$ with respective distribution functions (d.f.) $F$ and $G$,
$$W_r(\mu,\nu) = \Big(\int_0^1 \big|F^{-1}(u) - G^{-1}(u)\big|^r\,du\Big)^{1/r} \quad \text{for any } r \geq 1. \tag{1.1}$$
We consider also the following ideal distances of order $r$ (Zolotarev distances of order $r$). For two probability measures $\mu$ and $\nu$, and $r$ a positive real, let
$$\zeta_r(\mu,\nu) = \sup\Big\{\int f\,d\mu - \int f\,d\nu : f \in \Lambda_r\Big\},$$
where $\Lambda_r$ is defined as follows: denoting by $l$ the natural integer such that $l < r \leq l+1$, $\Lambda_r$ is the class of real functions $f$ which are $l$-times continuously differentiable and such that
$$\big|f^{(l)}(x) - f^{(l)}(y)\big| \leq |x-y|^{r-l} \quad \text{for any } (x,y) \in \mathbb{R}\times\mathbb{R}. \tag{1.2}$$
It follows from the Kantorovich-Rubinstein theorem (1958) that for any $0 < r \leq 1$,
$$W_r(\mu,\nu) = \zeta_r(\mu,\nu). \tag{1.3}$$
For probability laws on the real line, Rio (1998) proved that for any $r > 1$,
$$W_r(\mu,\nu) \leq c_r \big(\zeta_r(\mu,\nu)\big)^{1/r}, \tag{1.4}$$
where $c_r$ is a constant depending only on $r$.

For independent random variables, Ibragimov (1966) established that if $X_1 \in L^p$ for $p \in ]2,3]$, then $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$ (see his Theorem 4.3). Still in the case of independent r.v.'s, Zolotarev (1976) obtained the following upper bound for the ideal distance: if $X_1 \in L^p$ for $p \in ]2,3]$, then $\zeta_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$. From (1.4), the result of Zolotarev entails that, for $p \in ]2,3]$, $W_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1/p-1/2})$, which was obtained by Sakhanenko (1985) for any $p > 2$. From (1.1) and Hölder's inequality, we easily get that for independent random variables in $L^p$ with $p \in ]2,3]$,
$$W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O\big(n^{-(p-2)/(2r)}\big) \quad \text{for any } 1 \leq r \leq p. \tag{1.5}$$
In this paper, we are interested in extensions of (1.5) to sequences of dependent random variables. More precisely, for $X_1 \in L^p$ and $p$ in $]2,3]$ we shall give $L^p$-projective criteria under which: for $r \in [p-2, p]$ and $(r,p) \neq (1,3)$,
$$W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O\big(n^{-(p-2)/(2\max(1,r))}\big). \tag{1.6}$$
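On the real line, the quantile representation (1.1) makes the minimal distances directly computable for empirical measures: for $r \geq 1$ the optimal coupling pairs sorted values. The sketch below is our own numerical illustration, not from the paper; the helper name `wasserstein_r` and the use of two equal-size samples are assumptions made for simplicity.

```python
import numpy as np

def wasserstein_r(x, y, r):
    """Empirical minimal (Wasserstein) distance of order r >= 1 between two
    equal-size samples: by (1.1), the optimal coupling on the real line
    matches the sorted values (the empirical quantile functions)."""
    assert r >= 1 and len(x) == len(y)
    diffs = np.abs(np.sort(x) - np.sort(y))
    return np.mean(diffs ** r) ** (1.0 / r)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
c = 0.5
# A pure location shift moves every quantile by exactly c, so W_r = c for all r >= 1.
assert abs(wasserstein_r(x, x + c, 1) - c) < 1e-12
assert abs(wasserstein_r(x, x + c, 2) - c) < 1e-12
```

The location-shift check is a convenient sanity test because the sorted pairing then moves every point by the same amount, whatever the order $r$.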
As we shall see in Remark 2.4, (1.6) applied to $r = p-2$ provides the rate of convergence $O(n^{-(p-2)/(2p-2)})$ in the Berry-Esseen theorem. When $(r,p) = (1,3)$, Dedecker and Rio (2008) obtained that $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2})$ for stationary sequences of random variables in $L^3$ satisfying $L^1$ projective criteria or weak dependence assumptions (a similar result was obtained by Pène (2005) in the case where the variables are bounded). In this particular case our approach provides a new criterion under which $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.

Our paper is organized as follows. In Section 2, we give projective conditions for stationary martingale differences sequences to satisfy (1.6) in the case $(r,p) \neq (1,3)$. To be more precise, let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of martingale differences with respect to some $\sigma$-algebras $(\mathcal{F}_i)_{i\in\mathbb{Z}}$ (see Section 1.1 below for the definition of $(\mathcal{F}_i)_{i\in\mathbb{Z}}$). As a consequence of our Theorem 2.1, we obtain that if $(X_i)_{i\in\mathbb{Z}}$ is in $L^p$ with $p \in ]2,3]$ and satisfies
$$\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \frac{E(S_n^2|\mathcal{F}_0)}{n} - \sigma^2 \Big\|_{p/2} < \infty, \tag{1.7}$$
then the upper bound (1.6) holds provided that $(r,p) \neq (1,3)$. In the case $r = 1$ and $p = 3$, we obtain the upper bound $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.

In Section 3, starting from the coboundary decomposition going back to Gordin (1969), and using the results of Section 2, we obtain $L^p$-projective criteria ensuring (1.6) if $(r,p) \neq (1,3)$. For instance, if $(X_i)_{i\in\mathbb{Z}}$ is a stationary sequence of $L^p$ random variables adapted to $(\mathcal{F}_i)_{i\in\mathbb{Z}}$, we obtain (1.6) for any $p \in ]2,3[$ and any $r \in [p-2,p]$ provided that (1.7) holds and the series $E(S_n|\mathcal{F}_0)$ converges in $L^p$. In the case where $p = 3$, this last condition has to be strengthened. Our approach also makes it possible to treat the case of non-adapted sequences.

Section 4 is devoted to applications. In particular, we give sufficient conditions for some functions of Harris recurrent Markov chains and for functions of linear processes to satisfy the bound (1.6) in the case $(r,p) \neq (1,3)$, and the rate $O(n^{-1/2}\log n)$ when $r = 1$ and $p = 3$.
Since projective criteria are verified under weak dependence assumptions, we give an application to functions of $\phi$-dependent sequences in the sense of Dedecker and Prieur (2007). These conditions apply to unbounded functions of uniformly expanding maps.

1.1 Preliminary notations

Throughout the paper, $Y$ is a $N(0,1)$-distributed random variable. We shall also use the following notations. Let $(\Omega, \mathcal{A}, P)$ be a probability space, and $T : \Omega \to \Omega$ be a bijective bimeasurable transformation preserving the probability $P$. For a $\sigma$-algebra $\mathcal{F}_0$ satisfying $\mathcal{F}_0 \subseteq T^{-1}(\mathcal{F}_0)$, we define the nondecreasing filtration $(\mathcal{F}_i)_{i\in\mathbb{Z}}$ by $\mathcal{F}_i = T^{-i}(\mathcal{F}_0)$. Let $\mathcal{F}_{-\infty} = \bigcap_{k\in\mathbb{Z}} \mathcal{F}_k$ and
$\mathcal{F}_{\infty} = \bigvee_{k\in\mathbb{Z}} \mathcal{F}_k$. We shall sometimes denote by $E_i$ the conditional expectation with respect to $\mathcal{F}_i$. Let $X_0$ be a zero mean random variable with finite variance, and define the stationary sequence $(X_i)_{i\in\mathbb{Z}}$ by $X_i = X_0 \circ T^i$.

2 Stationary sequences of martingale differences

In this section we give bounds for the ideal distance of order $r$ in the central limit theorem for stationary martingale differences sequences $(X_i)_{i\in\mathbb{Z}}$ under projective conditions.

Notation 2.1. For any $p > 2$, define the envelope norm $\|\cdot\|_{\Phi,p}$ by
$$\|X\|_{\Phi,p} = \int_0^1 \big(\Phi^{-1}(1-u/2)\big)^{p-2}\, Q_X(u)\,du,$$
where $\Phi$ denotes the d.f. of the $N(0,1)$ law, and $Q_X$ denotes the quantile function of $|X|$, that is the cadlag inverse of the tail function $x \mapsto P(|X| > x)$.

Theorem 2.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary martingale differences sequence with respect to $(\mathcal{F}_i)_{i\in\mathbb{Z}}$. Let $\sigma$ denote the standard deviation of $X_0$. Let $p \in ]2,3]$. Assume that $E(|X_0|^p) < \infty$ and that
$$\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \frac{E(S_n^2|\mathcal{F}_0)}{n} - \sigma^2 \Big\|_{\Phi,p} < \infty, \tag{2.1}$$
and
$$\sum_{n=1}^{\infty} \frac{1}{n^{2/p}} \Big\| \frac{E(S_n^2|\mathcal{F}_0)}{n} - \sigma^2 \Big\|_{p/2} < \infty. \tag{2.2}$$
Then, for any $r \in [p-2,p]$ with $(r,p) \neq (1,3)$, $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, and for $p = 3$, $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.

Remark 2.1. Let $a > 1$ and $p > 2$. Applying Hölder's inequality, we see that there exists a positive constant $C(p,a)$ such that $\|X\|_{\Phi,p} \leq C(p,a)\|X\|_a$. Consequently, if $p \in ]2,3]$, the two conditions (2.1) and (2.2) are implied by the condition (1.7) given in the introduction.

Remark 2.2. Under the assumptions of Theorem 2.1, $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-r/2})$ if $r < p-2$. Indeed, let $p' = r+2$. Since $p' < p$, if the conditions (2.1) and (2.2) are satisfied for $p$, they also hold for $p'$. Hence Theorem 2.1 applies with $p'$.

From (1.3) and (1.4), the following result holds for the Wasserstein distances of order $r$.
Corollary 2.1. Under the conditions of Theorem 2.1, $W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O\big(n^{-(p-2)/(2\max(1,r))}\big)$ for any $r$ in $[p-2,p]$, provided that $(r,p) \neq (1,3)$.

Remark 2.3. For $p$ in $]2,3]$, $W_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-(p-2)/(2p)})$. This bound was obtained by Sakhanenko (1985) in the independent case. For $p < 3$, we have $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$. This bound was obtained by Ibragimov (1966) in the independent case.

Remark 2.4. Recall that for two real valued random variables $X, Y$, the Ky Fan metric $\alpha(X,Y)$ is defined by $\alpha(X,Y) = \inf\{\varepsilon > 0 : P(|X-Y| > \varepsilon) \leq \varepsilon\}$. Let $\Pi(\mu,\nu)$ be the Prokhorov distance between $\mu$ and $\nu$. By Theorem 11.3.5 in Dudley (1989) and Markov's inequality, one has, for any $r > 0$,
$$\Pi(P_X, P_Y) \leq \alpha(X,Y) \leq \big(E|X-Y|^r\big)^{1/(r+1)}.$$
Taking the minimum over the random couples $(X,Y)$ with law in $L(\mu,\nu)$, we obtain that, for any $0 < r \leq 1$, $\Pi(\mu,\nu) \leq \big(W_r(\mu,\nu)\big)^{1/(r+1)}$. Hence, if $\Pi_n$ is the Prokhorov distance between the law of $n^{-1/2}S_n$ and the normal distribution $N(0,\sigma^2)$,
$$\Pi_n \leq \big(W_r(P_{n^{-1/2}S_n}, G_{\sigma^2})\big)^{1/(r+1)} \quad \text{for any } 0 < r \leq 1.$$
Taking $r = p-2$, it follows that under the assumptions of Theorem 2.1,
$$\Pi_n = O\big(n^{-(p-2)/(2p-2)}\big) \text{ if } p < 3, \quad\text{and}\quad \Pi_n = O\big(n^{-1/4}\sqrt{\log n}\big) \text{ if } p = 3. \tag{2.3}$$
For $p$ in $]2,4]$, under (2.2), we have that $\big\|\sum_{i=1}^n E(X_i^2 - \sigma^2|\mathcal{F}_{i-1})\big\|_{p/2} = O(n^{2/p})$ (apply Theorem 2 in Wu and Zhao (2006)). Applying then the result in Heyde and Brown (1970), we get that if $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale difference sequence in $L^p$ such that (2.2) is satisfied, then
$$\|F_n - \Phi_\sigma\|_\infty = O\big(n^{-(p-2)/(2p+2)}\big),$$
where $F_n$ is the distribution function of $n^{-1/2}S_n$ and $\Phi_\sigma$ is the d.f. of $G_{\sigma^2}$. Now $\|F_n - \Phi_\sigma\|_\infty \leq \big(1 + \sigma^{-1}(2\pi)^{-1/2}\big)\Pi_n$. Consequently the bounds obtained in (2.3) improve the one given in Heyde and Brown (1970), provided that (2.1) holds.

Remark 2.5. If $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale difference sequence in $L^3$ such that $E(X_0^2) = \sigma^2$ and
$$\sum_{k>0} k^{-1/2} \big\| E(X_k^2|\mathcal{F}_0) - \sigma^2 \big\|_{3/2} < \infty, \tag{2.4}$$
then, according to Remark 2.1, the conditions (2.1) and (2.2) hold for $p = 3$. Consequently, if (2.4) holds, then Remark 2.4 gives $\|F_n - \Phi_\sigma\|_\infty = O(n^{-1/4}\sqrt{\log n})$. This result has to be compared with Theorem 6 in Jan (2000), which states that $\|F_n - \Phi_\sigma\|_\infty = O(n^{-1/4})$ if $\sum_{k>0} \|E(X_k^2|\mathcal{F}_0) - \sigma^2\|_{3/2} < \infty$.
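The envelope norm of Notation 2.1 is a one-dimensional integral, so it can be checked numerically. The sketch below is our own illustration (the helper name `envelope_norm` and the midpoint grid are assumptions, and `scipy` is assumed available). The weight $\Phi^{-1}(1-u/2)^{p-2}$ is the quantile function of $|Y|$, $Y \sim N(0,1)$, raised to the power $p-2$; hence for a constant variable $|X| = c$ the norm reduces to $c\,E|Y|^{p-2}$, which for $p = 3$ equals $c\sqrt{2/\pi}$.

```python
import numpy as np
from scipy.stats import norm

def envelope_norm(Q, p, grid=200_000):
    """Numerical version of the envelope norm of Notation 2.1:
    ||X||_{Phi,p} = int_0^1 Phi^{-1}(1 - u/2)^(p-2) Q_X(u) du,
    where Q_X is the quantile function of |X| (midpoint rule on (0,1))."""
    u = (np.arange(grid) + 0.5) / grid
    return np.mean(norm.ppf(1.0 - u / 2.0) ** (p - 2) * Q(u))

# Sanity check: for |X| = c a.s., Q_X(u) = c, and the weight integrates to
# E|Y|^{p-2}; with p = 3 this is E|Y| = sqrt(2/pi) for Y ~ N(0,1).
c = 2.0
val = envelope_norm(lambda u: c * np.ones_like(u), p=3)
assert abs(val - c * np.sqrt(2.0 / np.pi)) < 1e-3
```

The mild logarithmic singularity of the weight at $u = 0$ is integrable, so the midpoint rule converges without special treatment.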
Remark 2.6. Notice that if $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale differences sequence, then the conditions (2.1) and (2.2) are respectively equivalent to
$$\sum_{j\geq 0} 2^{j(p/2-1)} \big\| 2^{-j}E(S_{2^j}^2|\mathcal{F}_0) - \sigma^2 \big\|_{\Phi,p} < \infty, \quad\text{and}\quad \sum_{j\geq 0} 2^{j(1-2/p)} \big\| 2^{-j}E(S_{2^j}^2|\mathcal{F}_0) - \sigma^2 \big\|_{p/2} < \infty.$$
To see this, let $A_n = \|E(S_n^2|\mathcal{F}_0) - E(S_n^2)\|_{\Phi,p}$ and $B_n = \|E(S_n^2|\mathcal{F}_0) - E(S_n^2)\|_{p/2}$. We first show that $A_n$ and $B_n$ are subadditive sequences. Indeed, by the martingale property and the stationarity of the sequence, for all positive $i$ and $j$,
$$A_{i+j} = \big\| E\big(S_i^2 + (S_{i+j}-S_i)^2\,\big|\,\mathcal{F}_0\big) - E\big(S_i^2 + (S_{i+j}-S_i)^2\big) \big\|_{\Phi,p} \leq A_i + \big\| E\big((S_{i+j}-S_i)^2 - E(S_j^2)\,\big|\,\mathcal{F}_0\big) \big\|_{\Phi,p}.$$
Proceeding as in the proof of (4.6), p. 65 in Rio (2000), one can prove that, for any $\sigma$-field $\mathcal{A}$ and any integrable random variable $X$, $\|E(X|\mathcal{A})\|_{\Phi,p} \leq \|X\|_{\Phi,p}$. Hence
$$\big\| E\big((S_{i+j}-S_i)^2 - E(S_j^2)\,\big|\,\mathcal{F}_0\big) \big\|_{\Phi,p} \leq \big\| E\big((S_{i+j}-S_i)^2 - E(S_j^2)\,\big|\,\mathcal{F}_i\big) \big\|_{\Phi,p}.$$
By stationarity, it follows that $A_{i+j} \leq A_i + A_j$. Similarly $B_{i+j} \leq B_i + B_j$. The proof of the equivalences then follows by using the same arguments as in the proof of Lemma 2.7 in Peligrad and Utev (2005).

3 Rates of convergence for stationary sequences

In this section, we give estimates for the ideal distances of order $r$ for stationary sequences which are not necessarily adapted to $\mathcal{F}_i$.

Theorem 3.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of centered random variables in $L^p$ with $p \in ]2,3[$, and let $\sigma_n^2 = n^{-1}E(S_n^2)$. Assume that
$$\sum_{n>0} E(X_n|\mathcal{F}_0) \ \text{ and } \ \sum_{n>0} \big(X_{-n} - E(X_{-n}|\mathcal{F}_0)\big) \ \text{ converge in } L^p, \tag{3.1}$$
and
$$\sum_{n>0} \frac{1}{n^{2-p/2}} \big\| n^{-1}E(S_n^2|\mathcal{F}_0) - \sigma_n^2 \big\|_{p/2} < \infty. \tag{3.2}$$
Then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0, X_k)$ converges to some nonnegative $\sigma^2$, and
1. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$ for $r \in [p-2, 2]$,
2. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{1-p/2})$ for $r \in ]2, p]$.
Remark 3.1. According to the bound (5.4), we infer that, under the assumptions of Theorem 3.1, the condition (3.2) is equivalent to
$$\sum_{n>0} \frac{1}{n^{2-p/2}} \big\| n^{-1}E(S_n^2|\mathcal{F}_0) - \sigma^2 \big\|_{p/2} < \infty. \tag{3.3}$$
The same remark applies to the next theorem with $p = 3$.

Remark 3.2. The result of item 1 is valid with $\sigma_n$ instead of $\sigma$. On the contrary, the result of item 2 is no longer true if $\sigma_n$ is replaced by $\sigma$, because for $r \in ]2,3]$, a necessary condition for $\zeta_r(\mu,\nu)$ to be finite is that the two first moments of $\mu$ and $\nu$ are equal. Note that under the assumptions of Theorem 3.1, both $W_r(P_{n^{-1/2}S_n}, G_{\sigma^2})$ and $W_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2})$ are of the order of $n^{-(p-2)/(2\max(1,r))}$. Indeed, in the case where $r \in ]2,p]$, one has that
$$W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) \leq W_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) + W_r(G_{\sigma_n^2}, G_{\sigma^2}),$$
and the second term is of order $|\sigma - \sigma_n| = O(n^{-1/2})$.

In the case where $p = 3$, the condition (3.1) has to be strengthened.

Theorem 3.2. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of centered random variables in $L^3$, and let $\sigma_n^2 = n^{-1}E(S_n^2)$. Assume that (3.1) holds for $p = 3$ and that
$$\sum_{n>0} \frac{1}{n} \sum_{k\geq n} \big\| E(X_k|\mathcal{F}_0) \big\|_3 < \infty \quad\text{and}\quad \sum_{n>0} \frac{1}{n} \sum_{k\geq n} \big\| X_{-k} - E(X_{-k}|\mathcal{F}_0) \big\|_3 < \infty. \tag{3.4}$$
Assume in addition that
$$\sum_{n>0} \frac{1}{n^{1/2}} \big\| n^{-1}E(S_n^2|\mathcal{F}_0) - \sigma_n^2 \big\|_{3/2} < \infty. \tag{3.5}$$
Then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0, X_k)$ converges to some nonnegative $\sigma^2$ and
1. $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$,
2. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2})$ for $r \in ]1,2]$,
3. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{-1/2})$ for $r \in ]2,3]$.
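The $O(n^{-1/2})$-type rates in the theorems above can be observed empirically. The Monte Carlo sketch below is our own illustration, not from the paper: it uses iid centred-exponential increments (a special case of a stationary martingale difference sequence in $L^3$), estimates $W_1(P_{n^{-1/2}S_n}, G_1)$ via the quantile formula (1.1), and checks that the distance shrinks as $n$ grows; `scipy` is assumed available.

```python
import numpy as np
from scipy.stats import norm

def w1_to_normal(sample):
    """Empirical W_1 distance between a sample and N(0,1), via (1.1):
    average |F_m^{-1}(u) - Phi^{-1}(u)| over a midpoint grid on (0,1)."""
    m = len(sample)
    u = (np.arange(m) + 0.5) / m
    return np.mean(np.abs(np.sort(sample) - norm.ppf(u)))

rng = np.random.default_rng(1)
m = 20_000  # Monte Carlo replicates of n^{-1/2} S_n
dist = {}
for n in (4, 400):
    # centred exponential increments: mean 0, variance 1, finite third moment
    s = (rng.exponential(size=(m, n)) - 1.0).sum(axis=1) / np.sqrt(n)
    dist[n] = w1_to_normal(s)
assert dist[400] < dist[4]  # the distance to the Gaussian shrinks with n
```

The estimate for large $n$ is limited from below by the Monte Carlo resolution (of order $m^{-1/2}$), so this only exhibits the decay, not the exact exponent.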
4 Applications

4.1 Martingale differences sequences and functions of Markov chains

Recall that the strong mixing coefficient of Rosenblatt (1956) between two $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ is defined by $\alpha(\mathcal{A},\mathcal{B}) = \sup\{|P(A\cap B) - P(A)P(B)| : (A,B) \in \mathcal{A}\times\mathcal{B}\}$. For a strictly stationary sequence $(X_i)_{i\in\mathbb{Z}}$, let $\mathcal{F}_i = \sigma(X_k, k\leq i)$. Define the mixing coefficients $\alpha_n$ of the sequence $(X_i)_{i\in\mathbb{Z}}$ by $\alpha_n = \alpha(\mathcal{F}_0, \sigma(X_n))$. For the sake of brevity, let $Q = Q_{X_0}$ (see Notation 2.1 for the definition). According to the results of Section 2, the following proposition holds.

Proposition 4.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary martingale difference sequence in $L^p$ with $p \in ]2,3]$. Assume moreover that the series
$$\sum_{k>0} \frac{1}{k^{2-p/2}} \int_0^{\alpha_k} \big(\log(1/u)\big)^{(p-2)/2} Q^2(u)\,du \quad\text{and}\quad \sum_{k>0} \frac{1}{k^{2/p}} \Big(\int_0^{\alpha_k} Q^p(u)\,du\Big)^{2/p} \tag{4.1}$$
are convergent. Then the conclusions of Theorem 2.1 hold.

Remark 4.1. From Theorem 2.1(b) in Dedecker and Rio (2008), a sufficient condition to get $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$ is $\sum_{k>0} \int_0^{\alpha_k} Q^3(u)\,du < \infty$. This condition is always strictly stronger than the condition (4.1) when $p = 3$.

We now give an example. Consider the homogeneous Markov chain $(Y_i)_{i\in\mathbb{Z}}$ with state space $\mathbb{Z}$ described at page 320 in Davydov (1973). The transition probabilities are given by $p_{n,n+1} = p_{-n,-n-1} = a_n$ for $n \geq 0$, $p_{n,0} = p_{-n,0} = 1-a_n$ for $n > 0$, $p_{0,0} = 0$, $a_0 = 1/2$ and $1/2 \leq a_n < 1$ for $n \geq 1$. This chain is irreducible and aperiodic. It is Harris positively recurrent as soon as $\sum_{n\geq 2} \prod_{k=0}^{n-1} a_k < \infty$. In that case the stationary chain is strongly mixing in the sense of Rosenblatt (1956).

Denote by $K$ the Markov kernel of the chain $(Y_i)_{i\in\mathbb{Z}}$. The functions $f$ such that $K(f) = 0$ almost everywhere are obtained by linear combinations of the two functions $f_1$ and $f_2$ given by $f_1(1) = 1$, $f_1(-1) = -1$ and $f_1(n) = f_1(-n) = 0$ if $n \neq 1$, and $f_2(0) = 1$, $f_2(1) = f_2(-1) = 0$ and $f_2(n+1) = f_2(-n-1) = 1 - a_n^{-1}$ if $n > 0$. Hence the functions $f$ such that $K(f) = 0$ are bounded.
If $(X_i)_{i\in\mathbb{Z}}$ is defined by $X_i = f(Y_i)$, with $K(f) = 0$, then Proposition 4.1 applies if
$$\alpha_n = O\big(n^{1-p/2}(\log n)^{-p/2-\epsilon}\big) \quad\text{for some } \epsilon > 0, \tag{4.2}$$
which holds as soon as $P_0(\tau = n) = O\big(n^{-1-p/2}(\log n)^{-p/2-\epsilon}\big)$, where $P_0$ is the probability of the chain starting from $0$, and $\tau = \inf\{n > 0 : Y_n = 0\}$. Now $P_0(\tau = n) = (1-a_{n-1})\prod_{i=0}^{n-2} a_i$ for $n \geq 2$. Consequently, if
$$a_i = 1 - \frac{p}{2i} - \frac{p+2\epsilon}{2i\log i} \quad\text{for } i \text{ large enough},$$
the condition (4.2) is satisfied and the conclusion of Theorem 2.1 holds.

Remark 4.2. If $f$ is bounded and $K(f) \neq 0$, the central limit theorem may fail to hold for $S_n = \sum_{i=1}^n (f(Y_i) - E(f(Y_i)))$. We refer to Example 2, page 321, given by Davydov (1973), where $S_n$, properly normalized, converges to a stable law with exponent strictly less than 2.

Proof of Proposition 4.1. Let $B_{p/(p-2)}(\mathcal{F}_0)$ be the set of $\mathcal{F}_0$-measurable random variables $Z$ such that $\|Z\|_{p/(p-2)} \leq 1$. We first notice that
$$\big\|E(X_k^2|\mathcal{F}_0) - \sigma^2\big\|_{p/2} = \sup_{Z\in B_{p/(p-2)}(\mathcal{F}_0)} \mathrm{Cov}(Z, X_k^2).$$
Applying Rio's covariance inequality (1993), we get that
$$\big\|E(X_k^2|\mathcal{F}_0) - \sigma^2\big\|_{p/2} \leq 2\Big(\int_0^{\alpha_k} Q^p(u)\,du\Big)^{2/p},$$
which shows that the convergence of the second series in (4.1) implies (2.2). Now, from Fréchet (1957), we have that
$$\big\|E(X_k^2|\mathcal{F}_0) - \sigma^2\big\|_{\Phi,p} = \sup\big\{ E\big(|Z|^{p-2}\,\big|E(X_k^2|\mathcal{F}_0) - \sigma^2\big|\big) : Z\ \mathcal{F}_0\text{-measurable},\ Z \sim N(0,1) \big\}.$$
Hence, setting $\varepsilon_k = \mathrm{sign}\big(E(X_k^2|\mathcal{F}_0) - \sigma^2\big)$,
$$\big\|E(X_k^2|\mathcal{F}_0) - \sigma^2\big\|_{\Phi,p} = \sup\big\{ \mathrm{Cov}\big(\varepsilon_k|Z|^{p-2}, X_k^2\big) : Z\ \mathcal{F}_0\text{-measurable},\ Z \sim N(0,1) \big\}.$$
Applying again Rio's covariance inequality (1993), we get that
$$\big\|E(X_k^2|\mathcal{F}_0) - \sigma^2\big\|_{\Phi,p} \leq C \int_0^{\alpha_k} \big(\log(1/u)\big)^{(p-2)/2} Q^2(u)\,du,$$
which shows that the convergence of the first series in (4.1) implies (2.1).
4.2 Linear processes and functions of linear processes

In what follows we say that the series $\sum_{i\in\mathbb{Z}} a_i$ converges if the two series $\sum_{i\geq 0} a_i$ and $\sum_{i<0} a_i$ converge.

Theorem 4.1. Let $(a_i)_{i\in\mathbb{Z}}$ be a sequence of real numbers in $\ell^2$ such that $\sum_{i\in\mathbb{Z}} a_i$ converges to some real $A$. Let $(\varepsilon_i)_{i\in\mathbb{Z}}$ be a stationary sequence of martingale differences in $L^p$ for $p \in ]2,3]$. Let $X_k = \sum_{j\in\mathbb{Z}} a_j\varepsilon_{k-j}$, and $\sigma_n^2 = n^{-1}E(S_n^2)$. Let $b_0 = a_0 - A$ and $b_j = a_j$ for $j \neq 0$. Let
$$A_n = \sum_{j\in\mathbb{Z}}\Big(\sum_{k=1}^{n} b_{k-j}\Big)^2.$$
If $A_n = o(n)$, then $\sigma_n^2$ converges to $\sigma^2 = A^2 E(\varepsilon_0^2)$. If moreover
$$\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \frac{E\big((\varepsilon_1+\cdots+\varepsilon_n)^2\,\big|\,\mathcal{F}_0\big)}{n} - E(\varepsilon_0^2) \Big\|_{p/2} < \infty, \tag{4.3}$$
then we have:
1. If $A_n = O(1)$, then $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$, for $p = 3$,
2. If $A_n = O(n^{(r+2-p)/r})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, for $r \in [p-2,1]$ and $(r,p) \neq (1,3)$,
3. If $A_n = O(n^{3-p})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, for $r \in ]1,2]$,
4. If $A_n = O(n^{3-p})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{1-p/2})$, for $r \in ]2,p]$.

Remark 4.3. If the condition given by Heyde (1975) holds, that is
$$\sum_{n=1}^{\infty}\Big(\sum_{k\geq n} a_k\Big)^2 < \infty \quad\text{and}\quad \sum_{n=1}^{\infty}\Big(\sum_{k\leq -n} a_k\Big)^2 < \infty, \tag{4.4}$$
then $A_n = O(1)$, so that it satisfies all the conditions of items 1-4.

Remark 4.4. Under the additional assumption $\sum_{i\in\mathbb{Z}} |a_i| < \infty$, one has the bound $A_n \leq 4B_n$, where
$$B_n = \sum_{k=1}^{n}\Big(\Big(\sum_{j\geq k} |a_j|\Big)^2 + \Big(\sum_{j\leq -k} |a_j|\Big)^2\Big). \tag{4.5}$$

Proof of Theorem 4.1. We start with the following decomposition:
$$S_n = A\sum_{j=1}^{n}\varepsilon_j + \sum_{j\in\mathbb{Z}}\Big(\sum_{k=1}^{n} b_{k-j}\Big)\varepsilon_j. \tag{4.6}$$
Let $R_n = \sum_{j\in\mathbb{Z}}\big(\sum_{k=1}^{n} b_{k-j}\big)\varepsilon_j$. Since $\|R_n\|_2^2 = A_n\|\varepsilon_0\|_2^2$ and since $|\sigma_n - \sigma| \leq n^{-1/2}\|R_n\|_2$, the fact that $A_n = o(n)$ implies that $\sigma_n$ converges to $\sigma$. We now give an upper bound for $\|R_n\|_p$. From Burkholder's inequality, there exists a constant $C$ such that
$$\|R_n\|_p \leq C\Big\|\sum_{j\in\mathbb{Z}}\Big(\sum_{k=1}^{n} b_{k-j}\Big)^2\varepsilon_j^2\Big\|_{p/2}^{1/2} \leq C\|\varepsilon_0\|_p\sqrt{A_n}. \tag{4.7}$$
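As an aside, the coefficient $A_n$ of Theorem 4.1 is directly computable by truncation for concrete coefficient sequences. The sketch below is our own illustration (the helper name `A_n`, the truncation range, and the geometric example are assumptions): for causal geometric coefficients $a_i = 2^{-i}$, Heyde's condition (4.4) holds, so $A_n$ should stay bounded, as Remark 4.3 predicts.

```python
import numpy as np

def A_n(a, n):
    """A_n = sum_j (sum_{k=1}^n b_{k-j})^2 with b_0 = a_0 - A and b_j = a_j
    otherwise, for a causal sequence a = (a_0, a_1, ...), a_j = 0 for j < 0."""
    A = a.sum()
    total = 0.0
    for j in range(1 - len(a), n + 1):       # contributions vanish outside
        idx = np.arange(1, n + 1) - j        # indices k - j for k = 1..n
        valid = idx[(idx >= 0) & (idx < len(a))]
        s = a[valid].sum()
        if 1 <= j <= n:                      # b_0 = a_0 - A enters the window
            s -= A
        total += s * s
    return total

a = 0.5 ** np.arange(60)                     # a_i = 2^{-i}: (4.4) holds
vals = [A_n(a, n) for n in (10, 50, 100)]
assert all(v < 3.0 for v in vals)            # bounded in n (Remark 4.3)
assert abs(vals[2] - vals[1]) < 1e-6         # already stabilized
```

For this example the limit of $A_n$ can also be worked out by hand (it is a sum of two geometric series), which makes it a convenient unit test for the truncation.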
According to Remark 2.1, since (4.3) holds, the two conditions (2.1) and (2.2) of Theorem 2.1 are satisfied by the martingale $M_n = A\sum_{k=1}^{n}\varepsilon_k$. To conclude the proof, we use Lemma 5.2 given in Section 5.2, together with the upper bound (4.7).

Proof of Remarks 4.3 and 4.4. To prove Remark 4.3, note first that
$$A_n = \sum_{j=1}^{n}\Big(\sum_{l\leq -j} a_l + \sum_{l>n-j} a_l\Big)^2 + \sum_{i\geq 0}\Big(\sum_{l=i+1}^{n+i} a_l\Big)^2 + \sum_{i\geq 1}\Big(\sum_{l=1-n-i}^{-i} a_l\Big)^2.$$
It follows easily that $A_n = O(1)$ under (4.4). To prove the bound (4.5), note first that
$$A_n \leq 3B_n + \sum_{i>n}\Big(\sum_{l=i+1}^{n+i} |a_l|\Big)^2 + \sum_{i>n}\Big(\sum_{l=1-n-i}^{-i} |a_l|\Big)^2.$$
Let $T_i = \sum_{l\geq i} |a_l|$ and $Q_i = \sum_{l\leq -i} |a_l|$. We have that
$$\sum_{i>n}\Big(\sum_{l=i+1}^{n+i} |a_l|\Big)^2 = \sum_{i>n}\big(T_{i+1} - T_{n+i+1}\big)^2 \leq T_{n+1}\sum_{i>n}\big(T_{i+1} - T_{n+i+1}\big) \leq nT_{n+1}^2,$$
and, in the same way, $\sum_{i>n}\big(\sum_{l=1-n-i}^{-i}|a_l|\big)^2 \leq nQ_{n+1}^2$. Since $nT_{n+1}^2 + nQ_{n+1}^2 \leq B_n$, (4.5) follows.

In the next result, we shall focus on functions of real-valued linear processes
$$X_k = h\Big(\sum_{i\in\mathbb{Z}} a_i\varepsilon_{k-i}\Big) - E\,h\Big(\sum_{i\in\mathbb{Z}} a_i\varepsilon_{k-i}\Big), \tag{4.8}$$
where $(\varepsilon_i)_{i\in\mathbb{Z}}$ is a sequence of iid random variables. Denote by $w_h(\cdot, M)$ the modulus of continuity of the function $h$ on the interval $[-M,M]$, that is
$$w_h(t, M) = \sup\{|h(x)-h(y)| : |x-y| \leq t,\ |x| \leq M,\ |y| \leq M\}.$$

Theorem 4.2. Let $(a_i)_{i\in\mathbb{Z}}$ be a sequence of real numbers in $\ell^2$ and $(\varepsilon_i)_{i\in\mathbb{Z}}$ be a sequence of iid random variables in $L^2$. Let $X_k$ be defined as in (4.8) and $\sigma_n^2 = n^{-1}E(S_n^2)$. Assume that $h$ is $\gamma$-Hölder on any compact set, with $w_h(t,M) \leq Ct^\gamma M^\alpha$, for some $C > 0$, $\gamma \in ]0,1]$ and $\alpha \geq 0$. If for some $p \in ]2,3]$,
$$E\big(|\varepsilon_0|^{2\vee(\alpha+\gamma)p}\big) < \infty \quad\text{and}\quad \sum_{i\in\mathbb{Z}} |i|^{p/2-1}\Big(\sum_{|j|\geq|i|} a_j^2\Big)^{\gamma/2} < \infty, \tag{4.9}$$
then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0, X_k)$ converges to some nonnegative $\sigma^2$, and
12 . ζ P n /2 S n, G σ 2 = On /2 log n, for p = 3, 2. ζ r P n /2 S n, G σ 2 = On p/2 for r [p 2, 2] and r, p, 3, 3. ζ r P n /2 S n, G σ 2 n = On p/2 for r ]2, p]. Proof of Theorem 4.2. Theorem 4.2 is a consequence of the following proposition: Proposition 4.2. Let a i i Z, ε i i Z and X i i Z be as in Theorem 4.2. Let ε i i Z be an independent copy of ε i i Z. Let V = i Z a iε i and M,i = V a j ε j + a j ε j and M 2,i = V a j ε j +. a j ε j j<i j i j<i j i If for some p ]2, 3], i p/2 wh, p a j ε j M,i < i j i then the conclusions of Theorem 4.2 hold. and i p/2 wh, p a j ε j M2, i <, i j< i 4. To prove Theorem 4.2, it remains to check 4.. We only check the first condition. Since w h t, M Ct γ M α and the random variables ε i are iid, we have, p γ w h a j ε j M,i C a j ε j V α p γ + C a j ε j p V α p, so that j i j i, p w h a j ε j M,i j i C 2 α α+γ a j ε j p + γ a j ε j p V α p + 2 α α a j ε j p. j i j i j<i From Burkholder s inequality, for any β >, β β a j ε j p = a j ε j j i j i βp K j i a 2 j j i β/2 ε β 2 βp. Applying this inequality with β = γ or β = α + γ, we infer that the first part of 4. holds under 4.9. The second part can be handled in the same way. 2
13 Proof of Proposition 4.2. Let F i = σε k, k i. We shall first prove that the condition 3.2 of Theorem 3. holds. We write n n i ESn F 2 ESn 2 p/2 2 EX i X k+i F EX i X k+i p/2 n 4 k= n EX i X k+i F p/2 + 2 k=i k= n i EX i X k+i F EX i X k+i p/2. We first control the second term. Let ε be an independent copy of ε, and denote by E ε the conditional expectation with respect to ε. Define Y i = j<i a j ε i j, Y i = a j ε i j, Z i = a j ε i j, and j<i j i Z i = j i a j ε i j. Taking F l = σε i, i l, and setting h = h Eh i Z a iε i, we have EX i X k+i F EX i X k+i p/2 = E ε h Y i + Z i h Y k+i + Z k+i E ε h Y i + Z ih Y k+i + Z k+i Applying first the triangle inequality, and next Hölder s inequality, we get that EX i X k+i F EX i X k+i p/2 h Y k+i + Z k+i p h Y i + Z i h Y i + Z i p p/2 + h Y i + Z i p h Y k+i + Z k+i h Y k+i + Z k+i p. Let m,i = Y i + Z i Y i + Z i. Since w h t, M = w h t, M, it follows that EX i X k+i F EX i X k+i p/2 h Y k+i wh + Z k+i p a j ε i j ε, p i j m,i + h Y i + Z i p wh j k+i j i a j ε k+i j ε k+i j, m,k+i p. By subadditivity, we obtain that w h a j ε i j ε p, p i j, m,i w h a j ε i j m,i + w h a j ε p i j, m,i. j i j i Since the three couples j i a jε i j, m,i, j i a jε i j, m,i and j i a jε j, M,i are identically distributed, it follows that w h a j ε i j ε p, p i j, m,i 2 w h a j ε j M,i. j i 3 j i j i.
14 In the same way w h a j ε k+i j ε p, p k+i j, m,k+i 2 w h a j ε j M,k+i. Consequently j k+i n n 3 p/2 n j k+i i EX i X k+i F EX i X k+i p/2 < k= provided that the first condition in 4. holds. We turn now to the control of n n k=i EX ix k+i F p/2. We first write that EX i X k+i F p/2 E X i EX i F i+[k/2] X k+i F p/2 + E EX i F i+[k/2] X k+i F p/2 X p X i EX i F i+[k/2] p + X p EX k+i F i+[k/2] p. Let bk = k [k/2]. Since EX k+i F i+[k/2] p = EX bk F p, we have that EX k+i F i+[k/2] p = E ε h a j ε bk j + j<bk j bk a j ε bk j h j<bk j bk a j ε bk j + j bk a j ε bk j p. Using the same arguments as before, we get that EX k+i F i+[k/2] p = EX bk F p 2 w h, p a j ε j M,bk. 4. In the same way, X i EX i F i+[k/2] p = E ε h Let j< [k/2] a j ε i j + j [k/2] m 2,i,k = a i ε i j j Z a j ε i j h j< [k/2] a j ε i j + j< [k/2] j [k/2] a j ε i j + a j ε i j. j [k/2] a j ε i j p. Using again the subbadditivity of t w h t, M, and the fact that j< [k/2] a jε i j, m 2,i,k, j< [k/2] a jε i j, m 2,i,k and j< [k/2] a jε j, M 2, [k/2] are identically distributed, we obtain that X i EX i F i+[k/2] = X [k/2] EX [k/2] F 2 w h p p 4 j< [k/2] a j ε j, M2, [k/2] p. 4.2
Consequently,
$$\sum_{n>0} \frac{1}{n^{3-p/2}} \sum_{i=1}^{n}\sum_{k=i}^{n} \big\| E(X_iX_{k+i}|\mathcal{F}_0) \big\|_{p/2} < \infty$$
provided that (4.10) holds. This completes the proof of (3.2).

Using the bounds (4.11) and (4.12) (taking $b(k) = n$ in (4.11) and $[k/2] = n$ in (4.12)), we see that the condition (3.1) of Theorem 3.1 (and also the condition (3.4) of Theorem 3.2 in the case $p = 3$) holds under (4.10).

4.3 Functions of $\phi$-dependent sequences

In order to include examples of dynamical systems satisfying some correlation inequalities, we introduce a weak version of the uniform mixing coefficients (see Dedecker and Prieur (2007)).

Definition 4.1. For any random variable $Y = (Y_1, \ldots, Y_k)$ with values in $\mathbb{R}^k$, define the function $g_{x,j}(t) = \mathbf{1}_{t\leq x} - P(Y_j \leq x)$. For any $\sigma$-algebra $\mathcal{F}$, let
$$\phi(\mathcal{F}, Y) = \sup_{(x_1,\ldots,x_k)\in\mathbb{R}^k} \Big\| E\Big(\prod_{j=1}^{k} g_{x_j,j}(Y_j)\,\Big|\,\mathcal{F}\Big) - E\Big(\prod_{j=1}^{k} g_{x_j,j}(Y_j)\Big) \Big\|_\infty.$$
For a sequence $\mathbf{Y} = (Y_i)_{i\in\mathbb{Z}}$, where $Y_i = Y_0 \circ T^i$ and $Y_0$ is an $\mathcal{F}_0$-measurable and real-valued r.v., let
$$\phi_{k,\mathbf{Y}}(n) = \max_{1\leq l\leq k}\ \sup_{i_l > \cdots > i_1 \geq n} \phi\big(\mathcal{F}_0, (Y_{i_1}, \ldots, Y_{i_l})\big).$$

Definition 4.2. For any $p \geq 1$, let $C(p, M, P_X)$ be the closed convex envelop of the set of functions $f$ which are monotonic on some open interval of $\mathbb{R}$ and null elsewhere, and such that $E(|f(X)|^p) < M$.

Proposition 4.3. Let $p \in ]2,3]$ and $s \geq p$. Let $X_i = f(Y_i) - E(f(Y_i))$, where $Y_i = Y_0 \circ T^i$ and $f$ belongs to $C(s, M, P_{Y_0})$. Assume that
$$\sum_{i>0} i^{(p-4)/2 + (s-2)/(s-1)} \big(\phi_{2,\mathbf{Y}}(i)\big)^{(s-2)/s} < \infty. \tag{4.13}$$
Then the conclusions of Theorem 4.2 hold.

Remark 4.5. Notice that if $s = p = 3$, the condition (4.13) becomes $\sum_{i>0} \big(\phi_{2,\mathbf{Y}}(i)\big)^{1/3} < \infty$, and if $s = \infty$, the condition (4.13) becomes $\sum_{i>0} i^{(p-2)/2}\,\phi_{2,\mathbf{Y}}(i) < \infty$.
16 Proof of Proposition 4.3. Let B p F be the set of F -measurable random variables such that Z p. We first notice that EX k F p EX k F s = sup CovZ, fy k. Z B s/s F Applying Corollary 6.2 with k = 2 to the covariance on right hand take f = Id and f 2 = f, we obtain that EX k F s sup 8φσZ, Y k s /s Z s/s φσy k, Z /s M /s Z B s/s F 8φ,Y k s /s M /s, 4.4 the last inequality being true because φσz, Y k φ,y k and φσy k, Z. It follows that the conditions 3. for p ]2, 3] and 3.4 for p = 3 are satisfied under 4.3. The condition 3.2 follows from the following lemma by taking b = 4 p/2. Lemma 4.. Let X i be as in Proposition 4.3, and let b ], [. If i b+s 2/s φ 2,Y i s 2/s <, i then n> n +b ES2 n F ES 2 n p/2 <. Proof of Lemma 4.. Since, ES 2 n F ES 2 n p/2 2 we infer that there exists C > such that n> n n i EX i X k+i F EX i X k+i p/2, k= n +b ES2 n F ESn 2 p/2 C i + k EX ix b k+i F EX i X k+i p/ i> k We shall bound up EX i X k+i F EX i X k+i p/2 in two ways. First, using the stationarity and the upper bound 4.4, we have that EX i X k+i F EX i X k+i p/2 2 X EX k F p/2 6 X p M /s φ,y k s /s. 4.6 Next, note that EX i X k+i F EX i X k+i p/2 = sup CovZ, X i X k+i Z B s/s 2 F = sup Z B s/s 2 F EZ EZX i X k+i. 6
17 Applying Corollary 6.2 with k = 3 to the term EZ EZX i X k+i take f = Id, f 2 = f 3 = f, we obtain that EX i X k+i F EX i X k+i p/2 is smaller than sup 32φσZ, Y i, Y k+i s 2/s Z s/s 2 M 2/s φσy i, Z, Y k+i /s φσy k+i, Z, Y i /s. Z B s/s 2 F Since φσz, Y i, Y k+i φ 2,Y i and φσy i, Z, Y k+i, φσy k+i, Z, Y i, we infer that EX i X k+i F EX i X k+i p/2 32φ 2,Y i s 2/s M 2/s. 4.7 From 4.5, 4.6 and 4.7, we infer that the conclusion of Lemma 4. holds provided that i> [is 2/s ] k= Here, note that φ i + k b 2,Y i s 2/s + k [ks /s 2 ] φ i + k b,y k s /s <. [i s 2/s ] k= i + k b i b+ s 2 s and [k s /s 2 ] [2k s /s 2 ] i + k b m= m b Dk b s s 2, for some D >. Since φ,y k φ 2,Y k, the conclusion of Lemma 4. holds provided i s 2 b+ i s φ2,y i s 2 s < and k s b k s 2 φ2,y k s s <. To complete the proof, it remains to prove that the second series converges provided the first one does. If the first series converges, then lim 2n n i=n+ s 2 b+ i s φ2,y i s 2 s =. 4.8 Since φ 2,Y i is non increasing, we infer from 4.8 that φ 2,Y i /s = oi /s b/s 2. It follows that φ 2,Y k s /s Cφ 2,Y k s 2/s k /s b/s 2 for some positive constant C, and the second series converges Application to Expanding maps Let BV be the class of bounded variation functions from [, ] to R. For any h BV, denote by dh the variation norm of the measure dh. Let T be a map from [, ] to [, ] preserving a probability µ on [, ], and let n S n f = f T k µf. k= 7
Define the Perron-Frobenius operator $K$ from $L^2([0,1],\mu)$ to $L^2([0,1],\mu)$ via the equality
$$\int_0^1 (Kh)(x)\,f(x)\,\mu(dx) = \int_0^1 h(x)\,(f\circ T)(x)\,\mu(dx). \tag{4.19}$$
A Markov kernel $K$ is said to be $BV$-contracting if there exist $C > 0$ and $\rho \in [0,1[$ such that
$$\|dK^n h\| \leq C\rho^n \|dh\|. \tag{4.20}$$
The map $T$ is said to be $BV$-contracting if its Perron-Frobenius operator is $BV$-contracting. Let us present a large class of $BV$-contracting maps. We shall say that $T$ is uniformly expanding if it belongs to the class $\mathcal{C}$ defined in Broise (1996), Section 2.1. Recall that if $T$ is uniformly expanding, then there exists a probability measure $\mu$ on $[0,1]$, whose density $f_\mu$ with respect to the Lebesgue measure is a bounded variation function, and such that $\mu$ is invariant by $T$. Consider now the more restrictive conditions:

(a) $T$ is uniformly expanding.
(b) The invariant measure $\mu$ is unique and $(T,\mu)$ is mixing in the ergodic-theoretic sense.
(c) $\frac{1}{f_\mu}\mathbf{1}_{f_\mu>0}$ is a bounded variation function.

Starting from Proposition 4.1 in Broise (1996), one can prove that if $T$ satisfies the assumptions (a), (b) and (c) above, then it is $BV$-contracting (see for instance Dedecker and Prieur (2007), Section 6.3). Some well known examples of maps satisfying the conditions (a), (b) and (c) are:

1. $T(x) = \beta x - [\beta x]$ for $\beta > 1$. These maps are called $\beta$-transformations.
2. $I$ is the finite union of disjoint intervals $(I_k)_{1\leq k\leq n}$, and $T(x) = a_k x + b_k$ on $I_k$, with $|a_k| > 1$.
3. $T(x) = ax^{-1} - [ax^{-1}]$ for some $a > 0$. For $a = 1$, this transformation is known as the Gauss map.

Proposition 4.4. Let $\sigma_n^2 = n^{-1}E(S_n^2(f))$. If $T$ is $BV$-contracting, and if $f$ belongs to $C(p, M, \mu)$ with $p \in ]2,3]$, then the series
$$\mu\big((f-\mu(f))^2\big) + 2\sum_{n>0} \mu\big((f\circ T^n - \mu(f))(f - \mu(f))\big)$$
converges to some nonnegative $\sigma^2$, and
1. $\zeta_1(P_{n^{-1/2}S_n(f)}, G_{\sigma^2}) = O(n^{-1/2}\log n)$, for $p = 3$,
2. $\zeta_r(P_{n^{-1/2}S_n(f)}, G_{\sigma^2}) = O(n^{1-p/2})$ for $r \in [p-2,2]$ and $(r,p) \neq (1,3)$,
3. $\zeta_r(P_{n^{-1/2}S_n(f)}, G_{\sigma_n^2}) = O(n^{1-p/2})$ for $r \in ]2,p]$.
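A toy instance of example 2 above is the doubling map $T(x) = 2x - [2x]$, whose invariant measure is Lebesgue. The Monte Carlo sketch below is our own illustration, not from the paper: it estimates $\mathrm{Cov}(f, f\circ T^n)$ for the bounded variation observable $f(x) = x - 1/2$ and exhibits the exponential decay of correlations that underlies the $BV$-contraction property (for this map the covariance is exactly $2^{-n}/12$).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(200_000)           # Lebesgue is the invariant measure of T
f = lambda u: u - 0.5             # centred bounded variation observable

def cov_after(x, n):
    """Monte Carlo estimate of Cov(f, f o T^n) for the doubling map
    T(x) = 2x mod 1, starting from points distributed by the invariant law."""
    y = x.copy()
    for _ in range(n):
        y = (2.0 * y) % 1.0
    return np.mean(f(x) * f(y))   # f has mean 0 under Lebesgue

covs = [cov_after(x, n) for n in (1, 2, 4)]
# exponential decay of correlations: here Cov(f, f o T^n) = 2^{-n}/12
assert covs[0] > covs[1] > covs[2] > 0
assert covs[2] < 0.2 * covs[0]
```

Only a few iterations are used because repeated doubling consumes the floating-point mantissa; for small $n$ the estimates sit well above the Monte Carlo noise floor.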
19 Proof of Proposition 4.4. Let Y i i be the Markov chain with transition Kernel K and invariant measure µ. Using the equation 4.9 it is easy to see that Y,..., Y n is distributed as T n+,..., T. Consequently, to prove Proposition 4.4, it suffices to prove that the sequence X i = fy i µf satisfies the condition 4.3 of Proposition 4.3. According to Lemma in Dedecker and Prieur 27, the coefficients φ 2,Y i of the chain Y i i with respect to F i = σy j, j i satisfy φ 2,Y i Cρ i for some ρ ], [ and some positive constant C. It follows that 4.3 is satisfied for s = p. 5 Proofs of the main results From now on, we denote by C a numerical constant which may vary from line to line. Notation 5.. For l integer, q in ]l, l + ] and f l-times continuously differentiable, we set f Λq = sup{ x y l q f l x f l y : x, y R R}. 5. Proof of Theorem 2. We prove Theorem 2. in the case σ =. The general case follows by dividing the random variables by σ. Since ζ r P ax, P ay = a r ζ r P X, P Y, it is enough to bound up ζ r P Sn, G n. We first give an upper bound for ζ p,n := ζ p P, G S2 N 2 N. Proposition 5.. Let X i i Z be a stationary martingale differences sequence in L p for p in ]2, 3]. Let M p = E X p. Then for any natural integer N, 2 2N/p ζ 2/p p,n M p N K= where Z K = ES 2 2 K F ES 2 2 K and N = N K= 2 2K/p Z K p/2. 2 Kp/2 2 Z K,Φ,p 2/p + 2 p N, 5. Proof of Proposition 5.. The proof is done by induction on N. Let Y i i N be a sequence of N, -distributed independent random variables, independent of the sequence X i i Z. For m >, let T m = Y + Y Y m. Set S = T =. For any numerical function f and m n, set f n m x = Efx + T n T m. Then, from the independence of the above sequences, EfS n ft n = n D m with D m = E f n m S m + X m f n m S m + Y m. 5.2 m= 9
20 For any two-times differentiable function g, the Taylor integral formula at order two writes gx + h gx = g xh + 2 h2 g x + h 2 tg x + th g xdt. 5.3 Hence, for any q in ]2, 3], gx + h gx g xh 2 h2 g x h 2 t th q 2 g Λq dt Let D m = Ef n ms m X 2 m = Ef n ms m X 2 m Y 2 m qq h q g Λq. 5.4 From 5.4 applied twice with g = f n m, x = S m and h = X m or h = Y m together with the martingale property, D m 2 D m Now E Y m p p p M p. Hence pp f n m Λp E X m p + Y m p. D m D m/2 M p f n m Λp 5.5 Assume now that f belongs to Λ p. Then the smoothed function f n m belongs to Λ p also, so that f n m Λp. Hence, summing on m, we get that EfS n ft n nm p + D /2 where D = D + D D n. 5.6 Suppose now that n = 2 N. To bound up D, we introduce a dyadic scheme. Notation 5.2. Set m = m and write m in basis 2: m = N i= b i2 i with b i = or b i = note that b N =. Set m L = N i=l b i2 i, so that m N =. Let I L,k =]k2 L, k + 2 L ] N note that I N, =]2 N, 2 N+ ], U k L U L = U L and Ũ L = ŨL. = i I L,k X i and Ũ k L Since m N =, the following elementary identity is valid D m = N L= = i I L,k Y i. For the sake of brevity, let E f n m L S ml f n m L+ S ml+ Xm 2. Now m L m L+ only if b L =, then in this case m L = k2 L with k odd. It follows that D = N L= k I N L, k odd E f n k2 S L k2 L f n k 2 S L k 2 L 2 {m:m L =k2 L } Xm 2 σ
21 Note that {m : m L = k2 L } = I L,k. Now by the martingale property, E k2 L Xi 2 σ 2 = E k2 LU k L 2 EU k L 2 := Z k i I L,k Consequently L. D = = N L= N L= k I N L, k odd k I N L, k odd f E n k2 S L k2 L f n k 2 S L k 2 L Z k L f E n k2 S L k2 L f n k2 S L k 2 L + T k2 L T k 2 L Z k L, 5.8 since X i i N and Y i i N are independent. By using.2, we get that D N L= k I N L, k odd E U k L From the stationarity of X i i N and the above inequality, D 2 N K= Ũ k L p 2 Z k L. 2 N K E U K ŨK p 2 Z K. 5.9 Now let V K be the N, 2 K -distributed random variable defined from U K via the quantile transformation, that is V K = 2 K/2 Φ F K U K + δ K F K U K F K U K where F K denotes the d.f. of U K, and δ K is a sequence of independent r.v. s uniformly distributed on [, ], independent of the underlying random variables. Now, from the subadditivity of x x p 2, U K ŨK p 2 U K V K p 2 + V K ŨK p 2. Hence E U K ŨK p 2 Z K U K V K p 2 p Z K p/2 + E V K ŨK p 2 Z K. 5. By definition of V K, the real number U K V K p is the so-called Wasserstein distance of order p between the law of U K and the N, 2K normal law. Therefrom, by Theorem 3. of Rio 27 which improves the constants given in Theorem of Rio 998, we get that, for p ]2, 3], U K V K p 22p ζ p,k /p 24ζ p,k /p. 5. Now, since V K and ŨK are independent, their difference has the N, 2 K+ distribution. Note that if Y is a N, -distributed random variable, Q Y p 2u = Φ u/2 p 2. Hence, by 2
Fréchet's inequality (1957) (see also Inequality (1.11b), page 9, in Rio (2000)) and the definition of the norm ‖·‖_{1,Φ,p},

  E(|V_K − Ũ_K|^{p−2} |Z_K|) ≤ 2^{(K+1)(p−2)/2} ‖Z_K‖_{1,Φ,p}.  (5.12)

From (5.10), (5.11) and (5.12), we get that

  E(|U_K − Ũ_K|^{p−2} |Z_K|) ≤ 2^{4(p−2)/p} ζ_{p,K}^{(p−2)/p} ‖Z_K‖_{p/2} + 2^{(K+1)(p−2)/2} ‖Z_K‖_{1,Φ,p}.  (5.13)

Then, from (5.6), (5.9) and (5.13), we get

  2^{−N} ζ_{p,N} ≤ M_p + 2^{(p/2)−3} Δ_N + 2^{4(p−2)/p−2} Σ_{K=0}^{N−1} 2^{−K} ζ_{p,K}^{(p−2)/p} ‖Z_K‖_{p/2},

where Δ_N = Σ_{K=0}^{N−1} 2^{K((p/2)−2)} ‖Z_K‖_{1,Φ,p}. Consequently we get the induction inequality

  2^{−N} ζ_{p,N} ≤ M_p + Δ_N/2 + Σ_{K=0}^{N−1} 2^{−K} ζ_{p,K}^{(p−2)/p} ‖Z_K‖_{p/2}.  (5.14)

We now prove (5.1) by induction on N. First, by (5.6) applied with n = 1, one has ζ_{p,0} ≤ M_p, since D_1 = f″_1(0)(E(X_1²) − σ²) = 0. Assume now that ζ_{p,L} satisfies (5.1) for any L in [0, N−1]. Starting from (5.14), using the induction hypothesis and the fact that ζ_{p,K} ≤ C2^K for K ≤ N−1, we get that

  2^{−N} ζ_{p,N} ≤ M_p + Δ_N/2 + C^{(p−2)/p} Σ_{K=0}^{N−1} 2^{−2K/p} ‖Z_K‖_{p/2}.

Now 2^{−2K/p} ‖Z_K‖_{p/2} = ∫_K^{K+1} 2^{−2[x]/p} ‖Z_{[x]}‖_{p/2} dx. Consequently

  2^{−N} ζ_{p,N} ≤ M_p + Δ_N/2 + C^{(p−2)/p} ∫_0^N 2^{−2[x]/p} ‖Z_{[x]}‖_{p/2} dx,

which implies (5.1) for ζ_{p,N}.

In order to prove Theorem 2.1, we will also need a smoothing argument. This is the purpose of the lemma below.

Lemma 5.1. Let S and T be two centered and square integrable random variables with the same variance. For any r in ]0, p],

  ζ_r(P_S, P_T) ≤ 2ζ_r(P_S * G_1, P_T * G_1) + 4√2.

Proof of Lemma 5.1. Throughout the sequel, let Y be a N(0,1)-distributed random variable, independent of the σ-field generated by (S, T).
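The proof below leans on the fact that ζ_r is an ideal metric with respect to convolution: ζ_r(μ*ρ, ν*ρ) ≤ ζ_r(μ, ν). For r = 1, where ζ_1 = W_1 by (1.3), this regularity can be observed on finite samples. The sketch below is an illustration only, with hypothetical data: for empirical measures with equal numbers of equally weighted atoms, W_1 equals the mean absolute difference of the sorted samples (the monotone coupling is optimal for the cost |x − y|), and convolving both measures with the same auxiliary sample cannot increase W_1:

```python
import random

def w1(a, b):
    # W_1 between two empirical measures with the same number of atoms:
    # mean absolute difference of the sorted samples (monotone coupling
    # is optimal for the cost |x - y|).
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

random.seed(0)
n = 40
s = [random.gauss(0.0, 1.0) for _ in range(n)]   # sample from mu (hypothetical)
t = [random.gauss(0.2, 1.3) for _ in range(n)]   # sample from nu (hypothetical)
y = [random.gauss(0.0, 1.0) for _ in range(n)]   # smoothing sample rho

# Empirical convolutions mu * rho and nu * rho: all pairwise sums (n^2 atoms each).
s_conv = [si + yj for si in s for yj in y]
t_conv = [ti + yj for ti in t for yj in y]

# Regularity of the ideal metric: smoothing cannot increase W_1.
assert w1(s_conv, t_conv) <= w1(s, t) + 1e-12
```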
For r ≤ 2, since ζ_r is an ideal metric with respect to the convolution,

  ζ_r(P_S, P_T) ≤ ζ_r(P_S * G_1, P_T * G_1) + 2ζ_r(δ_0, G_1) ≤ ζ_r(P_S * G_1, P_T * G_1) + 2E|Y|^r,

which implies Lemma 5.1 for r ≤ 2, since E|Y|^r ≤ 1. For r > 2, from (5.4), for any f in Λ_r,

  |f(S+Y) − f(S) − f′(S)Y − (1/2)f″(S)Y²| ≤ |Y|^r / (r(r−1)).

Taking the expectation and noting that E|Y|^r ≤ r for r in ]2,3], we infer that

  |E(f(S+Y)) − E(f(S)) − (1/2)E(f″(S))| ≤ 1/(r−1) ≤ 1.

Obviously this inequality still holds for T instead of S and −f instead of f. Adding the two inequalities so obtained,

  |E(f(S)) − E(f(T))| ≤ |E(f(S+Y)) − E(f(T+Y))| + (1/2)|E(f″(S)) − E(f″(T))| + 2.

Since f″ belongs to Λ_{r−2}, it follows that

  ζ_r(P_S, P_T) ≤ ζ_r(P_S * G_1, P_T * G_1) + (1/2)ζ_{r−2}(P_S, P_T) + 2.

Now r − 2 ≤ 1. Hence ζ_{r−2}(P_S, P_T) = W_{r−2}(P_S, P_T) ≤ (W_r(P_S, P_T))^{r−2}. Next, by Theorem 3.1 in Rio (2007), W_r(P_S, P_T) ≤ (32 ζ_r(P_S, P_T))^{1/r}. Furthermore (32 ζ_r(P_S, P_T))^{(r−2)/r} ≤ ζ_r(P_S, P_T) as soon as ζ_r(P_S, P_T) ≥ 2^{(5r/2)−5}. This condition holds for any r in ]2,3] as soon as ζ_r(P_S, P_T) ≥ 4√2. Then, from the above inequalities,

  ζ_r(P_S, P_T) ≤ ζ_r(P_S * G_1, P_T * G_1) + (1/2)ζ_r(P_S, P_T) + 2,

which implies Lemma 5.1 (the case ζ_r(P_S, P_T) < 4√2 being trivial).

We go back to the proof of Theorem 2.1. Let n ∈ ]2^N, 2^{N+1}] and l = n − 2^N. The main step is then to prove the inequalities below: for r ≥ p−2 and (r,p) ≠ (1,3), and for some ε(N) tending to zero as N tends to infinity,

  ζ_r(P_{S_n}, G_n) ≤ c_{r,p} 2^{N(r−p)/2} ζ_p(P_{S_l}, G_l) + C 2^{N(r+2−p)/2} + 2^{N((r−p)/2 + 2/p)} ε(N) ζ_p(P_{S_l}, G_l)^{(p−2)/p},  (5.15)
and, for r = 1 and p = 3,

  ζ_1(P_{S_n}, G_n) ≤ C(N + 2^{−N} ζ_3(P_{S_l}, G_l) + 2^{−N/3} ζ_3(P_{S_l}, G_l)^{1/3}).  (5.16)

Assuming that (5.15) and (5.16) hold, we now complete the proof of Theorem 2.1. Let ζ_{p,N} = sup_{n ≤ 2^N} ζ_p(P_{S_n}, G_n). We infer from (5.15) applied with r = p that

  ζ_{p,N+1} ≤ ζ_{p,N} + C(2^N + 2^{2N/p} ε(N) ζ_{p,N}^{(p−2)/p}).

Let N₀ be such that Cε(N) ≤ 1/2 for N ≥ N₀, and let K be such that ζ_{p,N₀} ≤ K 2^{N₀}. Choosing K large enough such that K ≥ 2C, we can easily prove by induction that ζ_{p,N} ≤ K 2^N for any N ≥ N₀. Hence Theorem 2.1 is proved in the case r = p. For r in [p−2, p[, Theorem 2.1 follows by taking into account the bound ζ_{p,N} ≤ K 2^N, valid for any N ≥ N₀, in the inequalities (5.15) and (5.16).

We now prove (5.15) and (5.16). For n ∈ ]2^N, 2^{N+1}] and l = n − 2^N, we notice that

  ζ_r(P_{S_n}, G_n) ≤ ζ_r(P_{S_n}, P_{S_l} * G_{2^N}) + ζ_r(P_{S_l} * G_{2^N}, G_l * G_{2^N}).

Let φ_t be the density of the law N(0, t²). With the same notation as in the proof of Proposition 5.1, we have

  ζ_r(P_{S_l} * G_{2^N}, G_l * G_{2^N}) = sup_{f ∈ Λ_r} |E(f * φ_{2^{N/2}}(S_l)) − E(f * φ_{2^{N/2}}(T_l))| ≤ ‖f * φ_{2^{N/2}}‖_{Λ_p} ζ_p(P_{S_l}, G_l).

Applying Lemma 6.1, we infer that

  ζ_r(P_{S_n}, G_n) ≤ ζ_r(P_{S_n}, P_{S_l} * G_{2^N}) + c_{r,p} 2^{N(r−p)/2} ζ_p(P_{S_l}, G_l).  (5.17)

On the other hand, setting S̄_l = X_{−l+1} + ... + X_0, we have that S_n is distributed as S̄_l + S_{2^N}, and S_l + T_{2^N} as S̄_l + T_{2^N}. Let Y be a N(0,1)-distributed random variable independent of (X_i)_{i∈Z} and (Y_i)_{i∈Z}. Using Lemma 5.1, we then derive that

  ζ_r(P_{S_n}, P_{S_l} * G_{2^N}) ≤ 2 sup_{f ∈ Λ_r} |E(f(S̄_l + S_{2^N} + Y) − f(S̄_l + T_{2^N} + Y))| + C.  (5.18)

Let D_m = E(f″_{2^N−m+1}(S̄_l + S_{m−1})(X_m² − σ²)). We follow the proof of Proposition 5.1. From the Taylor expansion (5.3) applied twice with g = f_{2^N−m+1}, x = S̄_l + S_{m−1} and h = X_m or h = Y_m, together with the martingale property, we get that

  E(f(S̄_l + S_{2^N} + Y) − f(S̄_l + T_{2^N} + Y)) = Σ_{m=1}^{2^N} E(f_{2^N−m+1}(S̄_l + S_{m−1} + X_m) − f_{2^N−m+1}(S̄_l + S_{m−1} + Y_m))
    = (D_1 + ... + D_{2^N})/2 + R_1 + ... + R_{2^N},  (5.19)
where, as in (5.5),

  |R_m| ≤ M_p ‖f_{2^N−m+1}‖_{Λ_p}.  (5.20)

In the case r = p − 2, we will need the more precise upper bound

  |R_m| ≤ E(X_m² (|f″_{2^N−m+1}(S̄_l + S_{m−1} + X_m) − f″_{2^N−m+1}(S̄_l + S_{m−1})| ∧ (1/6)‖f‴_{2^N−m+1}‖_∞ |X_m|)) + (1/6)‖f‴_{2^N−m+1}‖_∞ E|Y_m|³,  (5.21)

which is derived from the Taylor formula at orders two and three. From (5.20) and Lemma 6.1, we have that

  R := |R_1| + ... + |R_{2^N}| = O(2^{N(r−p+2)/2}) if r > p−2, and R = O(N) if (r, p) = (1, 3).  (5.22)

It remains to consider the case r = p − 2 and r < 1. Applying Lemma 6.1, we get that, for i = 2, 3,

  ‖f^{(i)}_{2^N−m+1}‖_∞ ≤ c_{r,i} (2^N − m + 1)^{(r−i)/2}.  (5.23)

It follows that

  Σ_{m=1}^{2^N} E(X_m² (|f″_{2^N−m+1}(S̄_l+S_{m−1}+X_m) − f″_{2^N−m+1}(S̄_l+S_{m−1})| ∧ ‖f‴_{2^N−m+1}‖_∞ |X_m|))
    ≤ C Σ_{m=1}^{2^N} E(X² (m^{(r−2)/2} ∧ m^{(r−3)/2} |X|)) ≤ C E(X² Σ_{m=1}^{[X²]} m^{(r−2)/2} + |X|³ Σ_{m=[X²]+1}^{∞} m^{(r−3)/2}).

Consequently, for r = p−2 and r < 1,

  |R_1| + ... + |R_{2^N}| ≤ C(M_p + E|Y_1|³).  (5.24)

We now bound up D_1 + ... + D_{2^N}. Using the dyadic scheme as in the proof of Proposition 5.1, we get that

  D_m = Σ_{L=0}^{N−1} E((f″_{2^N−m_L}(S̄_l + S_{m_L}) − f″_{2^N−m_{L+1}}(S̄_l + S_{m_{L+1}}))(X_m² − σ²)) + E(f″_{2^N}(S̄_l)(X_m² − σ²))
      := D_m^{(1)} + E(f″_{2^N}(S̄_l)(X_m² − σ²)).

Notice first that

  Σ_{m=1}^{2^N} E(f″_{2^N}(S̄_l)(X_m² − σ²)) = E((f″_{2^N}(S̄_l) − f″_{2^N}(T̄_l)) Z_N).  (5.25)

Since f belongs to Λ_r (i.e. ‖f‖_{Λ_r} ≤ 1), we infer from Lemma 6.1 that ‖f_i‖_{Λ_p} ≤ C i^{(r−p)/2}, which means exactly that

  |f″_i(x) − f″_i(y)| ≤ C i^{(r−p)/2} |x − y|^{p−2}.  (5.26)
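The Wasserstein-type bounds in this proof rest on the quantile transformation used earlier to construct V_K. For a continuous d.f. F it reduces to V = Φ^{−1}(F(U)): V is exactly standard normal and is an increasing function of U, so (U, V) is the monotone (optimal) coupling. The sketch below is an illustration with hypothetical data (U exponential), using only the standard library:

```python
import math
import random
from statistics import NormalDist

# Quantile transformation (continuous case): if U has continuous d.f. F, then
# V = Phi^{-1}(F(U)) is standard normal and increasing in U, so (U, V) realises
# the monotone, Wasserstein-optimal coupling on the real line.
random.seed(1)
nd = NormalDist()

def F(u):                      # d.f. of the Exp(1) law (hypothetical choice for U)
    return 1.0 - math.exp(-u)

u_sample = [random.expovariate(1.0) for _ in range(20000)]
v_sample = [nd.inv_cdf(F(u)) for u in u_sample]

mean = sum(v_sample) / len(v_sample)
var = sum(v ** 2 for v in v_sample) / len(v_sample) - mean ** 2
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.1   # empirically close to N(0,1)

# Monotone in U: sorting U also sorts V.
pairs = sorted(zip(u_sample, v_sample))
assert all(pairs[i][1] <= pairs[i + 1][1] for i in range(len(pairs) - 1))
```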
Starting from (5.25) and using (5.26) with i = 2^N, it follows that

  |Σ_{m=1}^{2^N} E(f″_{2^N}(S̄_l)(X_m² − σ²))| ≤ C 2^{N(r−p)/2} E(|S̄_l − T̄_l|^{p−2} |Z_N|).

Proceeding as to get (5.13) (that is, using upper bounds similar to (5.10), (5.11) and (5.12)), we obtain that

  E(|S̄_l − T̄_l|^{p−2} |Z_N|) ≤ 2^{4(p−2)/p} ζ_p(P_{S_l}, G_l)^{(p−2)/p} ‖Z_N‖_{p/2} + (2l)^{(p−2)/2} ‖Z_N‖_{1,Φ,p}.

Using Remark 2.6, (2.1) and (2.2) entail that ‖Z_N‖_{p/2} = o(2^{2N/p}) and ‖Z_N‖_{1,Φ,p} = o(2^{N(2−p/2)}). Hence, for some ε(N) tending to 0 as N tends to infinity, one has

  Σ_{m=1}^{2^N} |D_m| ≤ Σ_{m=1}^{2^N} |D_m^{(1)}| + C ε(N) (2^{N((r−p)/2 + 2/p)} ζ_p(P_{S_l}, G_l)^{(p−2)/p} + 2^{N(r+2−p)/2}).  (5.27)

Next, proceeding as in the proof of (5.8), we get that

  Σ_{m=1}^{2^N} D_m^{(1)} = Σ_{L=0}^{N−1} Σ_{k odd} E((f″_{2^N−k2^L}(S̄_l + S_{k2^L}) − f″_{2^N−k2^L}(S̄_l + S_{(k−1)2^L} + T_{k2^L} − T_{(k−1)2^L})) Z_{k,L}).  (5.28)

Let r > p−2 or (r, p) = (1, 3). Using (5.26) with i = 2^N − k2^L, (5.28), and the stationarity of (X_i)_{i≥1}, we infer that

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C Σ_{L=0}^{N−1} Σ_{k odd} (2^N − k2^L)^{(r−p)/2} E(|U_L − Ũ_L|^{p−2} |Z_L|).

It follows that

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C 2^{N(r+2−p)/2} Σ_{L=0}^{N−1} 2^{−L} E(|U_L − Ũ_L|^{p−2} |Z_L|)  if r > p−2,  (5.29)

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C N Σ_{L=0}^{N−1} 2^{−L} E(|U_L − Ũ_L| |Z_L|)  if r = 1 and p = 3.  (5.30)

In the case r = p−2 and r < 1, we have

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C Σ_{L=0}^{N−1} E(|Z_L| Σ_{k=1}^{2^{N−L}} ((k2^L)^{(r−2)/2} ∧ (k2^L)^{(r−3)/2} |U_L − Ũ_L|)).
Applying (5.23) with i = 2 and i = 3, we obtain

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C Σ_{L=0}^{N−1} 2^{L(r−2)/2} E(|Z_L| Σ_{k=1}^{2^{N−L}} (k^{(r−2)/2} ∧ k^{(r−3)/2} 2^{−L/2} |U_L − Ũ_L|)).

Proceeding as to get (5.24), we have that

  Σ_{k=1}^{2^{N−L}} (k^{(r−2)/2} ∧ k^{(r−3)/2} 2^{−L/2} |U_L − Ũ_L|) ≤ C 2^{−Lr/2} |U_L − Ũ_L|^r.

It follows that

  Σ_{m=1}^{2^N} |D_m^{(1)}| ≤ C Σ_{L=0}^{N−1} 2^{−L} E(|U_L − Ũ_L|^r |Z_L|)  if r = p−2 and r < 1.  (5.31)

Now, by Remark 2.6, (2.1) and (2.2) are respectively equivalent to

  Σ_K 2^{−2K/p} ‖Z_K‖_{p/2} < ∞ and Σ_K 2^{K((p/2)−2)} ‖Z_K‖_{1,Φ,p} < ∞.

Next, by Proposition 5.1, ζ_{p,K} = O(2^K) under (2.1) and (2.2). Therefrom, taking into account the inequality (5.13), we derive that, under (2.1) and (2.2),

  E(|U_L − Ũ_L|^{p−2} |Z_L|) ≤ C 2^L (2^{−2L/p} ‖Z_L‖_{p/2} + 2^{L((p/2)−2)} ‖Z_L‖_{1,Φ,p}).  (5.32)

Consequently, combining (5.32) with the upper bounds (5.29), (5.30) and (5.31), we obtain that

  Σ_{m=1}^{2^N} |D_m^{(1)}| = O(2^{N(r+2−p)/2}) if r ≥ p−2 and (r, p) ≠ (1, 3), and Σ_{m=1}^{2^N} |D_m^{(1)}| = O(N) if r = 1 and p = 3.  (5.33)

From (5.17), (5.18), (5.19), (5.22), (5.24), (5.27) and (5.33), we obtain (5.15) and (5.16).

5.2 Proof of Theorem 3.1

By (3.1), we get that (see Volný (1993))

  X_0 = D_0 + Z_0 − Z_0 ∘ T,  (5.34)

where

  Z_0 = Σ_{k≥0} E(X_k | F_{−1}) − Σ_{k<0} (X_k − E(X_k | F_{−1})) and D_0 = Σ_{k∈Z} (E(X_k | F_0) − E(X_k | F_{−1})).

Note that Z_0 ∈ L^p, D_0 ∈ L^p, D_0 is F_0-measurable, and E(D_0 | F_{−1}) = 0. Let D_i = D_0 ∘ T^i and Z_i = Z_0 ∘ T^i. We obtain that

  S_n = M_n + Z_1 − Z_{n+1},  (5.35)

where M_n = Σ_{j=1}^n D_j. We first bound up E(f(S_n)) − E(f(M_n)) by using the following lemma.
Lemma 5.2. Let p ∈ ]2,3] and r ∈ [p−2, p]. Let (X_i)_{i∈Z} be a stationary sequence of centered random variables in L^{2∨r}. Assume that S_n = M_n + R_n, where (M_n − M_{n−1})_{n>0} is a strictly stationary sequence of martingale differences in L^{2∨r}, and R_n is such that E(R_n) = 0. Let nσ² = E(M_n²), nσ_n² = E(S_n²) and α_n = σ_n/σ.

1. If r ∈ [p−2, 1] and E|R_n|^r = O(n^{(r+2−p)/2}), then ζ_r(P_{S_n}, P_{M_n}) = O(n^{(r+2−p)/2}).
2. If r ∈ ]1, 2] and ‖R_n‖_r = O(n^{(3−p)/2}), then ζ_r(P_{S_n}, P_{M_n}) = O(n^{(r+2−p)/2}).
3. If r ∈ ]2, p], σ² > 0 and ‖R_n‖_r = O(n^{(3−p)/2}), then ζ_r(P_{S_n}, P_{α_n M_n}) = O(n^{(r+2−p)/2}).
4. If r ∈ ]2, p], σ² = 0 and ‖R_n‖_r = O(n^{(r+2−p)/(2r)}), then ζ_r(P_{S_n}, G_{nσ_n²}) = O(n^{(r+2−p)/2}).

Remark 5.1. All the assumptions on R_n in items 1, 2, 3 and 4 of Lemma 5.2 are satisfied as soon as sup_{n>0} ‖R_n‖_p < ∞.

Proof of Lemma 5.2. For r ∈ ]0, 1], ζ_r(P_{S_n}, P_{M_n}) ≤ E|R_n|^r, which implies item 1. If f ∈ Λ_r with r ∈ ]1, 2], from the Taylor integral formula and since E(R_n) = 0, we get

  E(f(S_n) − f(M_n)) = E(R_n (f′(M_n) − f′(0))) + E(R_n ∫₀¹ (f′(M_n + tR_n) − f′(M_n)) dt).

Using that |f′(x) − f′(y)| ≤ |x−y|^{r−1} and applying Hölder's inequality, it follows that

  |E(f(S_n) − f(M_n))| ≤ ‖R_n‖_r ‖f′(M_n) − f′(0)‖_{r/(r−1)} + E|R_n|^r ≤ ‖R_n‖_r ‖M_n‖_r^{r−1} + ‖R_n‖_r^r.

Since ‖M_n‖_r ≤ ‖M_n‖_2 = σ√n, we infer that ζ_r(P_{S_n}, P_{M_n}) = O(n^{(r+2−p)/2}). Now if f ∈ Λ_r with r ∈ ]2, p] and if σ > 0, we define g by g(t) = f(t) − tf′(0) − t²f″(0)/2. The function g is then also in Λ_r and is such that g′(0) = g″(0) = 0. Since α_n² E(M_n²) = E(S_n²), we have

  E(f(S_n)) − E(f(α_n M_n)) = E(g(S_n)) − E(g(α_n M_n)).  (5.36)

Now, from the Taylor integral formula at order two, setting R̄_n = R_n + (1 − α_n)M_n,

  E(g(S_n)) − E(g(α_n M_n)) = E(R̄_n g′(α_n M_n)) + (1/2)E(R̄_n² g″(α_n M_n)) + E(R̄_n² ∫₀¹ (1−t)(g″(α_n M_n + tR̄_n) − g″(α_n M_n)) dt).  (5.37)

Note that, since g′(0) = g″(0) = 0, one has

  E(R̄_n g′(α_n M_n)) = E(R̄_n α_n M_n ∫₀¹ g″(t α_n M_n) dt).
Using that |g″(x) − g″(y)| ≤ |x−y|^{r−2} and applying Hölder's inequality in (5.37), it follows that

  |E(g(S_n)) − E(g(α_n M_n))| ≤ E(|R̄_n| |α_n M_n|^{r−1}) + (1/2)‖R̄_n‖_r² ‖g″(α_n M_n)‖_{r/(r−2)} + ‖R̄_n‖_r^r
    ≤ α_n^{r−1} ‖R̄_n‖_r ‖M_n‖_r^{r−1} + (1/2) α_n^{r−2} ‖R̄_n‖_r² ‖M_n‖_r^{r−2} + ‖R̄_n‖_r^r.

Now α_n = O(1) and ‖R̄_n‖_r ≤ ‖R_n‖_r + |1 − α_n| ‖M_n‖_r. Since |‖S_n‖_2 − ‖M_n‖_2| ≤ ‖R_n‖_2, we infer that |1 − α_n| = O(n^{(2−p)/2}). Hence, applying Burkholder's inequality for martingales, we infer that ‖R̄_n‖_r = O(n^{(3−p)/2}), and consequently ζ_r(P_{S_n}, P_{α_n M_n}) = O(n^{(r+2−p)/2}).

If σ² = 0, then S_n = R_n. Let Y be a N(0,1)-distributed random variable, independent of S_n. Using that E(f(S_n)) − E(f(√n σ_n Y)) = E(g(S_n)) − E(g(√n σ_n Y)) and applying again Taylor's formula, we obtain that

  sup_{f ∈ Λ_r} |E(f(S_n)) − E(f(√n σ_n Y))| ≤ ‖R̃_n‖_r ‖√n σ_n Y‖_r^{r−1} + (1/2)‖R̃_n‖_r² ‖√n σ_n Y‖_r^{r−2} + ‖R̃_n‖_r^r,

where R̃_n = R_n − √n σ_n Y. Since √n σ_n = ‖R_n‖_2 ≤ ‖R_n‖_r and since ‖R_n‖_r = O(n^{(r+2−p)/(2r)}), we infer that √n σ_n = O(n^{(r+2−p)/(2r)}) and that ‖R̃_n‖_r = O(n^{(r+2−p)/(2r)}). The result follows.

By (5.35), we can apply Lemma 5.2 with R_n := Z_1 − Z_{n+1}. Then, for p−2 ≤ r ≤ 2, the result follows if we prove that, under (3.1) and (3.2), M_n satisfies the conclusion of Theorem 2.1. Now if 2 < r ≤ p and σ² > 0, we first notice that ζ_r(P_{α_n M_n}, G_{nσ_n²}) = α_n^r ζ_r(P_{M_n}, G_{nσ²}). Since α_n = O(1), the result will follow by item 3 of Lemma 5.2, if we prove that, under (3.1) and (3.2), M_n satisfies the conclusion of Theorem 2.1. We shall prove that

  Σ_n n^{(p/2)−3} ‖E(M_n² | F_0) − E(M_n²)‖_{p/2} < ∞.  (5.38)

In this way, according to Remark 2.1, both (2.1) and (2.2) will be satisfied. Suppose that we can show that

  Σ_n n^{(p/2)−3} ‖E(M_n² | F_0) − E(S_n² | F_0)‖_{p/2} < ∞;  (5.39)

then, by taking into account the condition (3.2), (5.38) will follow. Indeed, it suffices to notice that (5.39) also entails that

  Σ_n n^{(p/2)−3} |E(S_n²) − E(M_n²)| < ∞,  (5.40)
and to write that

  ‖E(M_n² | F_0) − E(M_n²)‖_{p/2} ≤ ‖E(M_n² | F_0) − E(S_n² | F_0)‖_{p/2} + ‖E(S_n² | F_0) − E(S_n²)‖_{p/2} + |E(S_n²) − E(M_n²)|.

Hence, it remains to prove (5.39). Since S_n = M_n + Z_1 − Z_{n+1}, and since Z_i = Z_0 ∘ T^i is in L^p, (5.39) will be satisfied provided that

  Σ_n n^{(p/2)−3} ‖S_n (Z_1 − Z_{n+1})‖_{p/2} < ∞.  (5.41)

Notice that

  ‖S_n (Z_1 − Z_{n+1})‖_{p/2} ≤ ‖M_n‖_p ‖Z_1 − Z_{n+1}‖_p + ‖Z_1 − Z_{n+1}‖_p².

From Burkholder's inequality, ‖M_n‖_p = O(√n), and from (3.1), sup_n ‖Z_1 − Z_{n+1}‖_p < ∞. Consequently (5.41) is satisfied for any p in ]2,3[.

5.3 Proof of Theorem 3.2

Starting from (5.35), we have that, in L³,

  M_n := S_n + R_n^{(1)} + R_n^{(2)},  (5.42)

where R_n^{(1)} = Σ_{k≥n+1} E(X_k | F_n) − Σ_{k≥1} E(X_k | F_0) and R_n^{(2)} = Σ_{k≤0} (X_k − E(X_k | F_0)) − Σ_{k≤n} (X_k − E(X_k | F_n)).

Arguing as in the proof of Theorem 3.1, the theorem will follow from (3.5) if we prove that

  Σ_n n^{−3/2} ‖E(M_n² | F_0) − E(S_n² | F_0)‖_{3/2} < ∞.  (5.43)

Under (3.3), sup_n ‖R_n^{(1)}‖_3 < ∞ and sup_n ‖R_n^{(2)}‖_3 < ∞. Hence (5.43) will be verified as soon as

  Σ_{n≥1} n^{−3/2} ‖E(S_n (R_n^{(1)} + R_n^{(2)}) | F_0)‖_{3/2} < ∞.  (5.44)

We first notice that the decomposition (5.42), together with Burkholder's inequality for martingales and the fact that sup_n ‖R_n^{(1)}‖_3 < ∞ and sup_n ‖R_n^{(2)}‖_3 < ∞, implies that

  ‖S_n‖_3 ≤ C√n.  (5.45)
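The √n growth delivered by Burkholder's inequality, ‖M_n‖_p ≤ C√n for a martingale with stationary differences in L^p, is easy to observe by simulation. The sketch below is an illustration only, with a hypothetical i.i.d. Rademacher martingale-difference sequence (a special case); the constants are empirical, not the paper's:

```python
import random

# Burkholder-type growth: for a martingale M_n with stationary differences in L^p,
# ||M_n||_p <= C sqrt(n).  Monte-Carlo illustration with i.i.d. Rademacher
# differences; p = 3 as in Theorem 3.2.
random.seed(2)

def lp_norm_of_Mn(n, p=3.0, reps=2000):
    total = 0.0
    for _ in range(reps):
        m = sum(random.choice((-1.0, 1.0)) for _ in range(n))  # Rademacher sum
        total += abs(m) ** p
    return (total / reps) ** (1.0 / p)

# ||M_n||_3 / sqrt(n) stays bounded as n grows (it approaches (E|N(0,1)|^3)^{1/3}).
ratios = [lp_norm_of_Mn(n) / n ** 0.5 for n in (16, 64, 256)]
assert max(ratios) < 2.0
```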
Now, to prove (5.44), we first notice that

  ‖E(S_n Σ_{k≥1} E(X_k | F_0) | F_0)‖_{3/2} ≤ ‖E(S_n | F_0)‖_3 ‖Σ_{k≥1} E(X_k | F_0)‖_3,  (5.46)

which is bounded by using (3.3). Now write

  E(S_n Σ_{k≥n+1} E(X_k | F_n) | F_0) = E(S_n Σ_{k≥2n+1} E(X_k | F_n) | F_0) + E(S_n E(S_{2n} − S_n | F_n) | F_0).

Clearly,

  ‖E(S_n Σ_{k≥2n+1} E(X_k | F_n) | F_0)‖_{3/2} ≤ ‖S_n‖_3 ‖Σ_{k≥2n+1} E(X_k | F_n)‖_3 ≤ C√n Σ_{k≥n+1} ‖E(X_k | F_0)‖_3,  (5.47)

by using (5.45). Considering the bounds (5.46) and (5.47) and the condition (3.4), in order to prove that

  Σ_n n^{−3/2} ‖E(S_n R_n^{(1)} | F_0)‖_{3/2} < ∞,  (5.48)

it is sufficient to prove that

  Σ_n n^{−3/2} ‖E(S_n E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} < ∞.  (5.49)

With this aim, take p_n = [√n] and write

  E(S_n E(S_{2n} − S_n | F_n) | F_0) = E((S_n − S_{n−p_n}) E(S_{2n} − S_n | F_n) | F_0) + E(S_{n−p_n} E(S_{2n} − S_n | F_n) | F_0).  (5.50)

By stationarity and (5.45), we get that

  ‖E((S_n − S_{n−p_n}) E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} ≤ ‖S_n − S_{n−p_n}‖_3 ‖E(S_{2n} − S_n | F_n)‖_3 ≤ C √p_n ‖E(S_n | F_0)‖_3,

so that Σ_n n^{−3/2} ‖E((S_n − S_{n−p_n}) E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} is finite, by using (3.3) and the fact that p_n = [√n]. Hence, from (5.50), (5.49) will follow if we prove that

  Σ_n n^{−3/2} ‖E(S_{n−p_n} E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} < ∞.  (5.51)

With this aim, we first notice that

  ‖E((S_{n−p_n} − E(S_{n−p_n} | F_{n−p_n})) E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} ≤ ‖S_{n−p_n} − E(S_{n−p_n} | F_{n−p_n})‖_3 ‖E(S_{2n} − S_n | F_n)‖_3,
which is bounded under (3.3). Consequently, (5.51) will hold if we prove that

  Σ_n n^{−3/2} ‖E(E(S_{n−p_n} | F_{n−p_n}) E(S_{2n} − S_n | F_n) | F_0)‖_{3/2} < ∞.  (5.52)

We first notice that

  E(E(S_{n−p_n} | F_{n−p_n}) E(S_{2n} − S_n | F_n) | F_0) = E(E(S_{n−p_n} | F_{n−p_n}) E(S_{2n} − S_n | F_{n−p_n}) | F_0),

and, by stationarity and (5.45),

  ‖E(E(S_{n−p_n} | F_{n−p_n}) E(S_{2n} − S_n | F_{n−p_n}) | F_0)‖_{3/2} ≤ ‖S_{n−p_n}‖_3 ‖E(S_{2n} − S_n | F_{n−p_n})‖_3 ≤ C√n ‖E(S_{n+p_n} − S_{p_n} | F_0)‖_3.

Hence (5.52) will hold provided that

  Σ_n n^{−1} Σ_{k≥[√n]} ‖E(X_k | F_0)‖_3 < ∞.  (5.53)

The fact that (5.53) holds under the first part of the condition (3.4) follows from the following elementary lemma, applied to h(x) = Σ_{k≥[x]} ‖E(X_k | F_0)‖_3.

Lemma 5.3. Assume that h is a positive function on R⁺ satisfying h(x) = h(n) for any x in [n, n+1[. Then Σ_n n^{−1} h(√n) < ∞ if and only if Σ_n n^{−1} h(n) < ∞.

It remains to show that

  Σ_{n≥1} n^{−3/2} ‖E(S_n R_n^{(2)} | F_0)‖_{3/2} < ∞.  (5.54)

Write

  S_n R_n^{(2)} = S_n Σ_{k≤0} (X_k − E(X_k | F_0)) − S_n Σ_{k≤n} (X_k − E(X_k | F_n)) = S_n (E(S_n | F_n) − S_n) + S_n Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_0)).

Notice first that

  ‖E(S_n (S_n − E(S_n | F_n)) | F_0)‖_{3/2} = ‖E((S_n − E(S_n | F_n))² | F_0)‖_{3/2} ≤ ‖S_n − E(S_n | F_n)‖_3²,

which is bounded under (3.3). Now, for p_n = [√n], we write

  Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_0)) = Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_{p_n})) + Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0)).
Note that, by stationarity,

  ‖Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0))‖_3 ≤ Σ_{k≤0} ‖X_k − E(X_k | F_0)‖_3 + Σ_{k≤−p_n} ‖X_k − E(X_k | F_0)‖_3,

which is bounded under (3.3). Next, since the random variable Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0)) is F_{p_n}-measurable, we get

  ‖E(S_n Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0)) | F_0)‖_{3/2}
    ≤ (‖S_{p_n}‖_3 + ‖E(S_n − S_{p_n} | F_{p_n})‖_3) ‖Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0))‖_3 ≤ C √p_n,

by using (3.3) and (5.45). Hence, since p_n = [√n], we get that

  Σ_{n≥1} n^{−3/2} ‖E(S_n Σ_{k≤0} (E(X_k | F_{p_n}) − E(X_k | F_0)) | F_0)‖_{3/2} < ∞.

It remains to show that

  Σ_{n≥1} n^{−3/2} ‖E(S_n Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_{p_n})) | F_0)‖_{3/2} < ∞.  (5.55)

Note first that, by stationarity,

  ‖Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_{p_n}))‖_3 ≤ Σ_{k≥p_n} ‖X_{−k} − E(X_{−k} | F_0)‖_3 + Σ_{k≥n} ‖X_{−k} − E(X_{−k} | F_0)‖_3.

It follows that

  ‖E(S_n Σ_{k≤0} (E(X_k | F_n) − E(X_k | F_{p_n})) | F_0)‖_{3/2} ≤ C√n (Σ_{k≥p_n} ‖X_{−k} − E(X_{−k} | F_0)‖_3 + Σ_{k≥n} ‖X_{−k} − E(X_{−k} | F_0)‖_3),
by taking into account (5.45). Consequently, (5.55) will follow as soon as

  Σ_n n^{−1} Σ_{k≥[√n]} ‖X_{−k} − E(X_{−k} | F_0)‖_3 < ∞,

which holds under the second part of the condition (3.4), by applying Lemma 5.3 with h(x) = Σ_{k≥[x]} ‖X_{−k} − E(X_{−k} | F_0)‖_3. This ends the proof of the theorem.

6 Appendix

6.1 A smoothing lemma.

Lemma 6.1. Let r > 0 and let f be a function such that ‖f‖_{Λ_r} < ∞ (see Notation 5.1 for the definition of the seminorm ‖·‖_{Λ_r}). Let φ_t be the density of the law N(0, t²). For any real p ≥ r and any positive t,

  ‖f * φ_t‖_{Λ_p} ≤ c_{r,p} t^{r−p} ‖f‖_{Λ_r},

for some positive constant c_{r,p} depending only on r and p. Furthermore c_{r,r} = 1.

Remark 6.1. In the case where p is a positive integer, the result of Lemma 6.1 can be written as ‖(f * φ_t)^{(p)}‖_∞ ≤ c_{r,p} t^{r−p} ‖f‖_{Λ_r}.

Proof of Lemma 6.1. Let j be the integer such that j < r ≤ j+1. In the case where p is a positive integer, we have

  (f * φ_t)^{(p)}(x) = ∫ (f^{(j)}(u) − f^{(j)}(x)) φ_t^{(p−j)}(x − u) du,

since p − j ≥ 1. Since |f^{(j)}(u) − f^{(j)}(x)| ≤ |x − u|^{r−j} ‖f‖_{Λ_r}, we obtain that

  |(f * φ_t)^{(p)}(x)| ≤ ‖f‖_{Λ_r} ∫ |x − u|^{r−j} |φ_t^{(p−j)}(x − u)| du = ‖f‖_{Λ_r} ∫ |u|^{r−j} |φ_t^{(p−j)}(u)| du.

Using that φ_t^{(p−j)}(x) = t^{−(p−j)−1} φ^{(p−j)}(x/t), we conclude that Lemma 6.1 holds with the constant

  c_{r,p} = ∫ |z|^{r−j} |φ^{(p−j)}(z)| dz.

The case p = r is straightforward. In the case where j < r < p < j+1, by definition,

  |(f * φ_t)^{(j)}(x) − (f * φ_t)^{(j)}(y)| ≤ ‖f‖_{Λ_r} |x − y|^{r−j}.

Also, by Lemma 6.1 applied with p = j+1,

  |(f * φ_t)^{(j)}(x) − (f * φ_t)^{(j)}(y)| ≤ |x − y| ‖(f * φ_t)^{(j+1)}‖_∞ ≤ ‖f‖_{Λ_r} c_{r,j+1} t^{r−j−1} |x − y|.
Hence, by interpolation,

  |(f * φ_t)^{(j)}(x) − (f * φ_t)^{(j)}(y)| ≤ ‖f‖_{Λ_r} t^{r−p} c_{r,j+1}^{(p−r)/(j+1−r)} |x − y|^{p−j}.

It remains to consider the case where r ≤ i < p ≤ i+1 for some integer i > j. By Lemma 6.1 applied successively with p = i and p = i+1, we obtain that

  ‖(f * φ_t)^{(i)}‖_∞ ≤ ‖f‖_{Λ_r} c_{r,i} t^{r−i} and ‖(f * φ_t)^{(i+1)}‖_∞ ≤ ‖f‖_{Λ_r} c_{r,i+1} t^{r−i−1}.

Consequently,

  |(f * φ_t)^{(i)}(x) − (f * φ_t)^{(i)}(y)| ≤ ‖f‖_{Λ_r} t^{r−i} min(2c_{r,i}, c_{r,i+1} t^{−1} |x − y|),

and, by interpolation,

  |(f * φ_t)^{(i)}(x) − (f * φ_t)^{(i)}(y)| ≤ ‖f‖_{Λ_r} t^{r−p} (2c_{r,i})^{1−p+i} c_{r,i+1}^{p−i} |x − y|^{p−i}.

6.2 Covariance inequalities.

In this section, we give an upper bound for the expectation of the product of k centered random variables, Π_{i=1}^k (X_i − E(X_i)).

Proposition 6.1. Let X = (X_1, ..., X_k) be a random variable with values in R^k. Define the number

  φ_i = φ(σ(X_i), X_1, ..., X_{i−1}, X_{i+1}, ..., X_k)
      = sup_{x∈R^k} ‖E(Π_{j=1, j≠i}^k (1_{X_j > x_j} − P(X_j > x_j)) | σ(X_i)) − E(Π_{j=1, j≠i}^k (1_{X_j > x_j} − P(X_j > x_j)))‖_∞.  (6.1)

Let F_i be the distribution function of X_i and let Q_i be the quantile function of |X_i| (see Section 4.1 for the definition). Let F_i^{−1} be the generalized inverse of F_i and let D_i(u) = F_i^{−1}(1−u) − F_i^{−1}(u). We have the inequalities

  |E(Π_{i=1}^k (X_i − E(X_i)))| ≤ ∫₀¹ Π_{i=1}^k D_i(u/φ_i) du  (6.2)

and

  |E(Π_{i=1}^k (X_i − E(X_i)))| ≤ 2^k ∫₀¹ Π_{i=1}^k Q_i(u/φ_i) du.  (6.3)

In addition, for any k-tuple (p_1, ..., p_k) such that 1/p_1 + ... + 1/p_k = 1, we have

  |E(Π_{i=1}^k (X_i − E(X_i)))| ≤ 2^k Π_{i=1}^k φ_i^{1/p_i} ‖X_i‖_{p_i}.  (6.4)
Proof of Proposition 6.1. We have that

  E(Π_{i=1}^k (X_i − E(X_i))) = ∫_{R^k} E(Π_{i=1}^k (1_{X_i > x_i} − P(X_i > x_i))) dx_1 ... dx_k.  (6.5)

Now, for all i,

  E(Π_{i=1}^k (1_{X_i > x_i} − P(X_i > x_i))) = E((1_{X_i > x_i} − P(X_i > x_i)) (E(Π_{j≠i} (1_{X_j > x_j} − P(X_j > x_j)) | σ(X_i)) − E(Π_{j≠i} (1_{X_j > x_j} − P(X_j > x_j))))).

Consequently, for all i,

  |E(Π_{i=1}^k (1_{X_i > x_i} − P(X_i > x_i)))| ≤ φ_i E|1_{X_i > x_i} − P(X_i > x_i)| ≤ 2φ_i P(X_i ≤ x_i) P(X_i > x_i).  (6.6)

Hence, we obtain from (6.5) and (6.6) that

  |E(Π_{i=1}^k (X_i − E(X_i)))| ≤ ∫₀¹ Π_{i=1}^k (∫_R 1_{u/φ_i < P(X_i > x_i)} 1_{u/φ_i ≤ P(X_i ≤ x_i)} dx_i) du ≤ ∫₀¹ Π_{i=1}^k (∫_R 1_{F_i^{−1}(u/φ_i) ≤ x_i < F_i^{−1}(1−u/φ_i)} dx_i) du,

and (6.2) follows. Now (6.3) comes from (6.2) and the fact that D_i(u) ≤ 2Q_i(u) (see Lemma 6.1 in Dedecker and Rio (2008)). Finally, (6.4) follows by applying Hölder's inequality to (6.3).

Definition 6.1. For a quantile function Q in L¹([0,1], λ), let F(Q, P_X) be the set of functions f which are nondecreasing on some open interval of R and null elsewhere, and such that Q_{|f(X)|} ≤ Q. Let C(Q, P_X) denote the set of convex combinations Σ_i λ_i f_i of functions f_i in F(Q, P_X), where Σ_i |λ_i| ≤ 1 (note that the series Σ_i λ_i f_i(X) converges almost surely and in L¹(P_X)).

Corollary 6.1. Let X = (X_1, ..., X_k) be a random variable with values in R^k and let the φ_i's be defined by (6.1). Let (f_i)_{1≤i≤k} be k functions from R to R such that f_i ∈ C(Q_i, P_{X_i}). We have the inequality

  |E(Π_{i=1}^k (f_i(X_i) − E(f_i(X_i))))| ≤ 2^{2k} ∫₀¹ Π_{i=1}^k Q_i(u/φ_i) du.
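The smoothing Lemma 6.1 of Section 6.1 can also be illustrated numerically. For the hypothetical choice f(x) = |x| one has ‖f‖_{Λ_1} = 1, the Gaussian smoothing has the closed form (f * φ_t)(x) = E|x − tZ| = t(a(2Φ(a) − 1) + 2φ(a)) with a = x/t, and (f * φ_t)″(x) = 2φ_t(x), so that ‖f * φ_t‖_{Λ_2} = 2/(t√(2π)) ≤ t^{−1}, in line with ‖f * φ_t‖_{Λ_p} ≤ c_{r,p} t^{r−p} for r = 1, p = 2. The sketch below (illustration only, stdlib only) checks this with finite differences:

```python
import math
from statistics import NormalDist

nd = NormalDist()

# f(x) = |x| has Lambda_1 seminorm 1.  Closed form of its Gaussian smoothing:
# (f * phi_t)(x) = E|x - t Z| = t * (a*(2*Phi(a) - 1) + 2*phi(a)),  a = x/t.
def smoothed(x, t):
    a = x / t
    return t * (a * (2.0 * nd.cdf(a) - 1.0) + 2.0 * nd.pdf(a))

def second_derivative(x, t, h=1e-4):
    # central finite difference; exact value is 2 * phi_t(x)
    return (smoothed(x + h, t) - 2.0 * smoothed(x, t) + smoothed(x - h, t)) / h ** 2

for t in (0.5, 1.0, 4.0):
    grid = [k / 10.0 for k in range(-50, 51)]
    sup = max(abs(second_derivative(x, t)) for x in grid)
    # Lambda_2 seminorm of f * phi_t equals 2 / (t * sqrt(2*pi)), hence <= t^{-1}.
    assert abs(sup - 2.0 / (t * math.sqrt(2.0 * math.pi))) < 1e-3
    assert sup <= 1.0 / t + 1e-3
```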
More informationLaplace s Equation. Chapter Mean Value Formulas
Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic
More informationDynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor)
Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Matija Vidmar February 7, 2018 1 Dynkin and π-systems Some basic
More informationA D VA N C E D P R O B A B I L - I T Y
A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2
More information4 Expectation & the Lebesgue Theorems
STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does
More informationInformation Theory and Statistics Lecture 3: Stationary ergodic processes
Information Theory and Statistics Lecture 3: Stationary ergodic processes Łukasz Dębowski ldebowsk@ipipan.waw.pl Ph. D. Programme 2013/2014 Measurable space Definition (measurable space) Measurable space
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationStatistical signal processing
Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable
More informationLecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M.
Lecture 10 1 Ergodic decomposition of invariant measures Let T : (Ω, F) (Ω, F) be measurable, and let M denote the space of T -invariant probability measures on (Ω, F). Then M is a convex set, although
More informationSMSTC (2007/08) Probability.
SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................
More informationSupplementary Notes for W. Rudin: Principles of Mathematical Analysis
Supplementary Notes for W. Rudin: Principles of Mathematical Analysis SIGURDUR HELGASON In 8.00B it is customary to cover Chapters 7 in Rudin s book. Experience shows that this requires careful planning
More informationMeasure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond
Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................
More informationSimultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms
Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that
More informationA Note on the Central Limit Theorem for a Class of Linear Systems 1
A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.
More informationCourse 212: Academic Year Section 1: Metric Spaces
Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........
More informationB. Appendix B. Topological vector spaces
B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function
More informationIntegration on Measure Spaces
Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of
More informationMAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9
MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended
More informationLecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321
Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process
More informationDistance-Divergence Inequalities
Distance-Divergence Inequalities Katalin Marton Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences Motivation To find a simple proof of the Blowing-up Lemma, proved by Ahlswede,
More informationChapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University
Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real
More informationTHE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON
GEORGIAN MATHEMATICAL JOURNAL: Vol. 3, No. 2, 1996, 153-176 THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON M. SHASHIASHVILI Abstract. The Skorokhod oblique reflection problem is studied
More informationChapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries
Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.
More informationIntroduction and Preliminaries
Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis
More informationNote that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +
Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n
More informationAsymptotic Statistics-III. Changliang Zou
Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (
More informationPartial Differential Equations, 2nd Edition, L.C.Evans Chapter 5 Sobolev Spaces
Partial Differential Equations, nd Edition, L.C.Evans Chapter 5 Sobolev Spaces Shih-Hsin Chen, Yung-Hsiang Huang 7.8.3 Abstract In these exercises always denote an open set of with smooth boundary. As
More informationStrong approximation for additive functionals of geometrically ergodic Markov chains
Strong approximation for additive functionals of geometrically ergodic Markov chains Florence Merlevède Joint work with E. Rio Université Paris-Est-Marne-La-Vallée (UPEM) Cincinnati Symposium on Probability
More informationLebesgue Integration on R n
Lebesgue Integration on R n The treatment here is based loosely on that of Jones, Lebesgue Integration on Euclidean Space We give an overview from the perspective of a user of the theory Riemann integration
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d
More informationMonte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan
Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis
More informationStein s Method and Characteristic Functions
Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method
More informationMarkov Chains and Stochastic Sampling
Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,
More informationBrownian Motion and Conditional Probability
Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical
More informationExercises with solutions (Set D)
Exercises with solutions Set D. A fair die is rolled at the same time as a fair coin is tossed. Let A be the number on the upper surface of the die and let B describe the outcome of the coin toss, where
More informationProblem set 2 The central limit theorem.
Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationTools from Lebesgue integration
Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given
More informationConcentration inequalities and the entropy method
Concentration inequalities and the entropy method Gábor Lugosi ICREA and Pompeu Fabra University Barcelona what is concentration? We are interested in bounding random fluctuations of functions of many
More informationNonparametric regression with martingale increment errors
S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration
More informationLEGENDRE POLYNOMIALS AND APPLICATIONS. We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.
LEGENDRE POLYNOMIALS AND APPLICATIONS We construct Legendre polynomials and apply them to solve Dirichlet problems in spherical coordinates.. Legendre equation: series solutions The Legendre equation is
More information8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains
8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States
More informationProbability Theory. Richard F. Bass
Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2
More informationAliprantis, Border: Infinite-dimensional Analysis A Hitchhiker s Guide
aliprantis.tex May 10, 2011 Aliprantis, Border: Infinite-dimensional Analysis A Hitchhiker s Guide Notes from [AB2]. 1 Odds and Ends 2 Topology 2.1 Topological spaces Example. (2.2) A semimetric = triangle
More informationDefining the Integral
Defining the Integral In these notes we provide a careful definition of the Lebesgue integral and we prove each of the three main convergence theorems. For the duration of these notes, let (, M, µ) be
More information