Convergence at first and second order of some approximations of stochastic integrals

Bérard Bergery Blandine, Vallois Pierre
IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes, B.P. 239, F-54506 Vandœuvre-lès-Nancy

Abstract

We consider the convergence of the approximation schemes related to Itô's integral and to the quadratic variation, which have been developed in [8]. First, we prove that the convergence holds in the almost sure sense when the integrand is Hölder continuous and the integrator is a continuous semimartingale. Second, we investigate the second order convergence in the Brownian motion case.

Key words: stochastic integration by regularization, quadratic variation, first and second order convergence, stochastic Fubini's theorem

2000 MSC: 60F05, 60F17, 60G44, 60H05, 60J65

1 Introduction

We consider a complete probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), P)$ which satisfies the usual hypotheses. The notation ucp will stand for the convergence in probability, uniformly on the compact sets in time.

1. Let $X$ be a real continuous $\mathcal{F}_t$-semimartingale. In the usual stochastic calculus, the quadratic variation and the stochastic integral with respect to $X$ play a central role. In [5], [6] and [7], Russo and Vallois extended these notions to continuous processes. Let us briefly recall their main definitions.

Definition 1.1 Let $X$ be a real-valued continuous process, $\mathcal{F}_t$-adapted, and let $H$ be a locally integrable process. The forward integral $\int_0^t H\,d^-X$ is defined as

$\int_0^t H\,d^-X = \lim_{\varepsilon\to 0}\ \mathrm{ucp}\ \frac{1}{\varepsilon}\int_0^t H_u\,(X_{u+\varepsilon}-X_u)\,du$,

if the limit exists. The quadratic variation is defined by

$[X]_t = \lim_{\varepsilon\to 0}\ \mathrm{ucp}\ \frac{1}{\varepsilon}\int_0^t (X_{u+\varepsilon}-X_u)^2\,du$

if the limit exists.

In this article, $X$ will stand for a real-valued continuous $\mathcal{F}_t$-semimartingale and $(H_t)_{t\ge 0}$ for an $\mathcal{F}_t$-adapted process. If $H$ is continuous, then, according to Proposition 1.1 of [5], the limits in Definition 1.1 exist and coincide with the usual objects. In order to work with adapted processes only, we change $u+\varepsilon$ into $(u+\varepsilon)\wedge t$ in the integrals. This change does not affect the limit, by (3.3) of [8]. Consequently,

$\int_0^t H_u\,dX_u = \lim_{\varepsilon\to 0}\ \mathrm{ucp}\ \frac{1}{\varepsilon}\int_0^t H_u\,(X_{(u+\varepsilon)\wedge t}-X_u)\,du$,   (1.1)

and

$<X>_t = \lim_{\varepsilon\to 0}\ \mathrm{ucp}\ \frac{1}{\varepsilon}\int_0^t (X_{(u+\varepsilon)\wedge t}-X_u)^2\,du$,   (1.2)

where $\int_0^t H_u\,dX_u$ is the usual stochastic integral and $<X>$ is the usual quadratic variation.

2. First, we determine sufficient conditions under which the convergences in (1.1) and (1.2) hold in the almost sure sense. Let us mention that some results in this direction have been obtained in [2]. We say that a process $Y$ is locally Hölder continuous if, for all $T>0$, there exist $\alpha\in\,]0,1[$ and $C_T\in L^2(\Omega)$ such that

$|Y_s - Y_u| \le C_T\,|u-s|^{\alpha}$ for all $u,s\in[0,T]$, a.s.

Our first result related to the stochastic integral is the following.

Theorem 1.2 If $(H_t)_{t\ge 0}$ is adapted and locally Hölder continuous, then

$\lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\int_0^t H_u\,(X_{(u+\varepsilon)\wedge t}-X_u)\,du = \int_0^t H_u\,dX_u$,   (1.3)

in the sense of almost sure convergence, uniformly on the compact sets in time.

Proposition 1.3 If $X$ is locally Hölder continuous, then

$\lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\int_0^t (X_{(u+\varepsilon)\wedge t}-X_u)^2\,du = <X>_t$,

in the sense of almost sure convergence, uniformly on the compact sets in time. Moreover, if $H$ is a continuous process,

$\lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\int_0^t H_u\,(X_{(u+\varepsilon)\wedge t}-X_u)^2\,du = \int_0^t H_u\,d<X>_u$,   (1.4)

in the sense of almost sure convergence.
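Before going further, note that the approximation (1.1) is easy to experiment with numerically. The short Python sketch below is a purely illustrative check (the discretization step, the values of $\varepsilon$ and the random seed are arbitrary choices of ours): it simulates one Brownian path, evaluates $\frac{1}{\varepsilon}\int_0^T B_u\,(B_{(u+\varepsilon)\wedge T}-B_u)\,du$ by a Riemann sum, and compares it with the closed form $\int_0^T B_u\,dB_u = (B_T^2-T)/2$.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 1.0, 200_000                  # horizon and number of grid points (illustrative)
    dt = T / n
    B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

    def forward_approx(eps):
        """Riemann sum for (1/eps) * int_0^T B_u (B_{min(u+eps,T)} - B_u) du."""
        shift = int(round(eps / dt))                  # eps expressed in grid steps
        idx = np.minimum(np.arange(n) + shift, n)     # grid index of min(u+eps, T)
        return np.sum(B[:n] * (B[idx] - B[:n])) * dt / eps

    exact = 0.5 * (B[-1] ** 2 - T)                    # int_0^T B_u dB_u for Brownian motion
    for eps in (0.1, 0.02, 0.004):
        print(f"eps={eps:6.3f}  approx={forward_approx(eps):+.5f}  exact={exact:+.5f}")

On a typical run the error decreases with $\varepsilon$, in line with Theorem 1.2, since $H = B$ is locally Hölder continuous of any order $\alpha < 1/2$.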

3. Let us now consider the case where $X$ is the standard Brownian motion $B$. Since $B$ is a locally Hölder continuous martingale, the conditions of Theorem 1.2 and Proposition 1.3 are fulfilled. It then seems natural to determine the rate of convergence in (1.3) and (1.4), i.e. a second order convergence. Note that in [2], some related results have been proven. Let us consider

$\Delta_\varepsilon(H,t) = \frac{1}{\sqrt{\varepsilon}}\Big[\int_0^t H_u\,dB_u - \frac{1}{\varepsilon}\int_0^t H_u\,(B_{(u+\varepsilon)\wedge t}-B_u)\,du\Big]$,   (1.5)

$\Delta^2_\varepsilon(H,t) = \frac{1}{\sqrt{\varepsilon}}\Big[\frac{1}{\varepsilon}\int_0^t H_u\,(B_{(u+\varepsilon)\wedge t}-B_u)^2\,du - \int_0^t H_u\,du\Big]$,   (1.6)

where $H$ is a progressively measurable process such that $\int_0^t H_s^2\,ds<\infty$ for every $t>0$.

We begin with $H_t = B_t$. In this case, we have

$\Delta_\varepsilon(B,t) = W^\varepsilon_t + R_\varepsilon(t)$,

where

$W^\varepsilon_t = \int_0^t G^\varepsilon_u\,dB_u$, $\qquad G^\varepsilon_u = \frac{1}{\varepsilon^{3/2}}\int_{(u-\varepsilon)^+}^{u} (B_u-B_s)\,ds$,   (1.7)

and

$R_\varepsilon(t) = \frac{1}{\sqrt{\varepsilon}}\int_0^{\varepsilon\wedge t}\Big(1-\frac{u}{\varepsilon}\Big)\,B_u\,dB_u$.

It is easy to verify that $R_\varepsilon(t)\to 0$, as $\varepsilon\to 0$, in $L^2(\Omega)$, and therefore $R_\varepsilon(t)$ does not contribute to the limit.

Theorem 1.4 $(W^\varepsilon_t, B_t)_{t\ge 0}$ converges in distribution to $(\sigma W_t, B_t)_{t\ge 0}$, as $\varepsilon\to 0$, where $W$ is a standard Brownian motion, independent of $B$, and $\sigma^2 = 1/3$.

We now investigate the convergence of $(\Delta_\varepsilon(H,t))_{t\ge 0}$. We restrict ourselves to processes $H$ of the type $H_t = H_0 + M_t + V_t$, where

1. $H_0$ is $\mathcal{F}_0$-measurable,
2. $(M_t)$ is a Brownian martingale, i.e. $M_t = \int_0^t \Lambda_s\,dB_s$, where $(\Lambda_t)$ is progressively measurable and satisfies $\int_0^t \Lambda_s^2\,ds < \infty$ for every $t>0$,
3. $V$ is a continuous process, Hölder continuous of order $\alpha > 1/2$, vanishing at time $0$.

Note that if $V_t = \int_0^t \Phi_s\,ds$, where $(\Phi_t)_{t\ge 0}$ is adapted and locally bounded, then condition 3 holds with $\alpha = 1$ and $(H_t)$ is a semimartingale. Using a functional convergence theorem (Proposition 3.2 and Theorem 5.1 in [3]) and Theorem 1.4, we obtain the following result.

Theorem 1.5
1. For any $0 < t_1 < \dots < t_n$, the random vector $(\Delta_\varepsilon(H_0,t_1),\dots,\Delta_\varepsilon(H_0,t_n))$ converges in law to $(\sigma H_0 N,\dots,\sigma H_0 N)$, where $N$ is a standard Gaussian r.v. independent of $\mathcal{F}_0$.
2. If $V$ is a process which is locally Hölder continuous of order $\alpha > 1/2$, then $\Delta_\varepsilon(V,t)$ converges to $0$ in $L^2(\Omega)$ as $\varepsilon\to 0$, uniformly on the compact sets in time.
3. If $M_t = \int_0^t \Lambda_s\,dB_s$, where $\Lambda_s = f(B_u, u\le s)$ is a continuous function of the trajectory $(B_u, u\le s)$ such that $t\mapsto \Lambda_t$ is a continuous map from $\mathbb{R}_+$ to $L^2(\Omega)$, then the process $(\Delta_\varepsilon(M,t))_{t\ge 0}$ converges in distribution to $(\sigma\int_0^t \Lambda_u\,dW_u)_{t\ge 0}$ as $\varepsilon\to 0$.
4. If $H_0 = 0$ and $M$, $V$ are as in points 2 and 3, then $(\Delta_\varepsilon(H,t))_{t\ge 0}$ converges in law to $(\sigma\int_0^t \Lambda_u\,dW_u)_{t\ge 0}$ as $\varepsilon\to 0$.

The conditions of Theorem 1.5 related to the process $H$ are likely too strong. In particular, the continuity of $t\mapsto H_t$ and the fact that $H$ is a semimartingale are not needed. Indeed, there exist adapted stepwise processes $H$ such that $\Delta_\varepsilon(H,t)$ converges in distribution as $\varepsilon\to 0$, for any $t>0$. More precisely, we have the following result.

Theorem 1.6 Let $(a_i)_{i\in\mathbb{N}}$ be an increasing sequence of real numbers which satisfies $a_0 = 0$ and $a_n\to\infty$. Let $h$, $(h_i)_{i\in\mathbb{N}}$ be r.v.'s such that $h_i$ is $\mathcal{F}_{a_i}$-measurable, $h$ is $\mathcal{F}_0$-measurable and $\sup_{i\in\mathbb{N}} |h_i| < \infty$. Let $H$ be the adapted and stepwise process

$H_t = h\,1_{\{t=0\}} + \sum_i h_i\,1_{\{t\in\,]a_i,a_{i+1}]\}}$.

Then,
1. $\frac{1}{\varepsilon}\int_0^t H_s\,(B_{(s+\varepsilon)\wedge t}-B_s)\,ds$ converges almost surely to $\int_0^t H_s\,dB_s$, uniformly on the compact sets in time, as $\varepsilon\to 0$.
2. There exists a sequence of i.i.d. r.v.'s $(N_i)_{i\in\mathbb{N}}$ with Gaussian law $\mathcal{N}(0,1)$, independent of $B$, such that the r.v. $\Delta_\varepsilon(H,t)$ converges in law to

$\frac{h_0}{\sqrt{3}}\,N_0 + \sum_{i\ge 1} \frac{h_i - h_{i-1}}{\sqrt{3}}\,N_i\,1_{\{t\ge a_{i+1}\}}$,

as $\varepsilon\to 0$, for fixed time $t$.

A proof of Theorem 1.6 can be found in Section 6.3 of [1].

4. Let us finally present our second order convergence result related to the quadratic variation.

Proposition 1.7 Let $H_s = f(B_u, u\le s)$ be a continuous function of the trajectory $(B_u, u\le s)$ such that $s\mapsto H_s$ is locally Hölder continuous. Then, $(\Delta^2_\varepsilon(H,t))_{t\ge 0}$ converges in distribution to $(2\sigma\int_0^t H_u\,dW_u)_{t\ge 0}$, as $\varepsilon\to 0$.

5. Let us briefly detail the organization of the paper. Section 2 contains the proofs of the almost sure convergence results, i.e. Theorem 1.2 and Proposition 1.3. Then, the proof of Theorem 1.4 (resp. the proofs of Proposition 1.7 and Theorem 1.5) is (resp. are) given in Section 3 (resp. Section 4). In the calculations, $C$ will stand for a generic constant, random or not; if $C$ is random, then $C\in L^2(\Omega)$. We will use several times a stochastic version of Fubini's theorem, which can be found in Section IV.5 of [4].

2 Proof of Theorem 1.2 and Proposition 1.3

We begin with the proof of Theorem 1.2 in Points 1-4 below. Then, we deduce Proposition 1.3 from Theorem 1.2 in Point 5.

1. Let $T>0$. We suppose that $(H_t)_{t\ge 0}$ is locally Hölder continuous of order $\alpha$ and we study the almost sure convergence of

$I_\varepsilon(t) := \frac{1}{\varepsilon}\int_0^t H_u\,(X_{(u+\varepsilon)\wedge t}-X_u)\,du$ to $I_0(t) := \int_0^t H_u\,dX_u$,

as $\varepsilon\to 0$, uniformly on $t\in[0,T]$. By stopping, we can suppose that $(X_t)_{t\le T}$, $(H_t)_{t\le T}$ and $<X>_T$ are bounded by a constant, and that there exists a constant $C>0$ such that

$|H_s - H_u| \le C\,|u-s|^{\alpha}$ for all $u,s\in[0,T]$, with $\alpha\in\,]0,1[$.   (2.1)

Let $X = X_0 + M + V$ be the canonical decomposition of $X$, where $M$ is a continuous local martingale and $V$ is an adapted process with finite variation. It is clear that $I_\varepsilon(t) - I_0(t)$ can be decomposed as

$I_\varepsilon(t) - I_0(t) = \Big[\frac{1}{\varepsilon}\int_0^t H_u\,(M_{(u+\varepsilon)\wedge t}-M_u)\,du - \int_0^t H_u\,dM_u\Big] + \Big[\frac{1}{\varepsilon}\int_0^t H_u\,(V_{(u+\varepsilon)\wedge t}-V_u)\,du - \int_0^t H_u\,dV_u\Big]$.

Theorem 1.2 will therefore be proved as soon as $I_\varepsilon(t) - I_0(t)$ converges to $0$ in the case where $X$ is either a continuous local martingale or a continuous finite variation process. We deal with the finite variation case in Point 2. As for the martingale case, the study is divided in two steps:

1. First, we prove that there is a sequence $(\varepsilon_n)_{n\in\mathbb{N}}$ with $\varepsilon_n\to 0$ such that $I_{\varepsilon_n}(t)$ converges almost surely to $I_0(t)$ (see Point 3 below).
2. Second, we show that $I_\varepsilon(t) - I_{\varepsilon_n}(t)$ converges almost surely to $0$, uniformly for $t\in[0,T]$ (see Point 4 below).

2. Suppose that $X$ has finite variation. Writing $X_{(u+\varepsilon)\wedge t}-X_u = \int_u^{(u+\varepsilon)\wedge t} dX_s$ and using Fubini's theorem yield

$I_\varepsilon(t) - I_0(t) = \frac{1}{\varepsilon}\int_0^t \Big(\int_{(s-\varepsilon)^+}^{s} H_u\,du\Big)\,dX_s - \int_0^t H_s\,dX_s = \frac{1}{\varepsilon}\int_0^t \Big(\int_{(s-\varepsilon)^+}^{s} (H_u-H_s)\,du\Big)\,dX_s - \frac{1}{\varepsilon}\int_0^{t\wedge\varepsilon} (\varepsilon-s)\,H_s\,dX_s$.

Using the Hölder property (2.1) in the first integral and the fact that $H$ is bounded by a constant in the second integral, we have for all $t\in[0,T]$:

$|I_\varepsilon(t) - I_0(t)| \le \frac{1}{\varepsilon}\int_0^T \Big(\int_{(s-\varepsilon)^+}^{s} C\,|u-s|^{\alpha}\,du\Big)\,d|X|_s + C\int_0^{\varepsilon\wedge T} d|X|_s \le C\,\varepsilon^{\alpha}\,|X|_T + C\,\big(|X|_{\varepsilon} - |X|_0\big)$,

where $|X|$ denotes the total variation of $X$. Consequently, $I_\varepsilon(t) - I_0(t)$ converges almost surely to $0$, as $\varepsilon\to 0$, uniformly on any compact set in time.

3. In the next two points, $X$ is a continuous martingale. We proceed as in Point 2 above: observing that $X_{(u+\varepsilon)\wedge t}-X_u = \int_u^{(u+\varepsilon)\wedge t} dX_s$ and using the stochastic Fubini theorem give

$I_\varepsilon(t) - I_0(t) = \int_0^t \Big(\frac{1}{\varepsilon}\int_{(s-\varepsilon)^+}^{s} H_u\,du - H_s\Big)\,dX_s$.

Thus, $(I_\varepsilon(t) - I_0(t))_{t\in[0,T]}$ is a continuous local martingale. Moreover, $E[<I_\varepsilon - I_0>_t]$ is bounded since $H$ and $<X>$ are bounded on $[0,T]$. Let us introduce

$p = \frac{2(1-\alpha)}{\alpha^2} + 1$.

This explicit expression of $p$ in terms of $\alpha$ will be used at the end of Point 4. The Burkholder-Davis-Gundy inequalities give

$E\Big[\sup_{t\in[0,T]} |I_\varepsilon(t) - I_0(t)|^p\Big] \le c_p\,E\Big[\Big(\int_0^T \Big(\frac{1}{\varepsilon}\int_{(s-\varepsilon)^+}^{s} H_u\,du - H_s\Big)^2 d<X>_s\Big)^{p/2}\Big]$.

The Hölder property (2.1) implies that

$\Big|\frac{1}{\varepsilon}\int_{(s-\varepsilon)^+}^{s} H_u\,du - H_s\Big| \le \frac{1}{\varepsilon}\int_{(s-\varepsilon)^+}^{s} |H_u - H_s|\,du \le C\,\varepsilon^{\alpha}$.

Consequently,

$E\Big[\sup_{t\in[0,T]} |I_\varepsilon(t) - I_0(t)|^p\Big] \le C\,\varepsilon^{\alpha p}\,E\big[<X>_T^{p/2}\big] \le C\,\varepsilon^{\alpha p}$.

Then, for any $\delta>0$, the Markov inequality leads to

$P\Big(\sup_{t\in[0,T]} |I_\varepsilon(t) - I_0(t)| > \delta\Big) \le \frac{C\,\varepsilon^{\alpha p}}{\delta^p}$.   (2.2)

Let us now define $(\varepsilon_n)_{n\in\mathbb{N}}$ by $\varepsilon_n = n^{-2/(p\alpha)}$ for all $n>0$. Replacing $\varepsilon$ by $\varepsilon_n$ in (2.2) gives

$P\Big(\sup_{t\in[0,T]} |I_{\varepsilon_n}(t) - I_0(t)| > \delta\Big) \le \frac{C}{\delta^p\,n^2}$.

Since $\sum_{n\ge 1} n^{-2} < \infty$, the Borel-Cantelli lemma implies that

$\lim_{n\to\infty}\ \sup_{t\in[0,T]} |I_{\varepsilon_n}(t) - I_0(t)| = 0$, a.s.   (2.3)

4. For all $\varepsilon\in\,]0,1[$, let $n = n(\varepsilon)$ denote the integer such that $\varepsilon\in[\varepsilon_{n+1},\varepsilon_n[$. Then, we decompose $I_\varepsilon(t) - I_0(t)$ as follows:

$I_\varepsilon(t) - I_0(t) = \big(I_\varepsilon(t) - I_{\varepsilon_n}(t)\big) + \big(I_{\varepsilon_n}(t) - I_0(t)\big)$.

(2.3) gives the almost sure convergence of $I_{\varepsilon_n}(t)$ to $I_0(t)$, uniformly on $[0,T]$. Therefore, the a.s. convergence of $I_\varepsilon(t) - I_0(t)$ to $0$, uniformly on $[0,T]$, will be obtained as soon as $I_\varepsilon(t) - I_{\varepsilon_n}(t)$ goes to $0$, uniformly on $[0,T]$. From the definition of $I_\varepsilon(t)$, it is easy to deduce that

$I_\varepsilon(t) - I_{\varepsilon_n}(t) = \frac{1}{\varepsilon}\Big[\int_0^t H_u\,X_{(u+\varepsilon)\wedge t}\,du - \int_0^t H_u\,X_{(u+\varepsilon_n)\wedge t}\,du\Big] + \Big(\frac{1}{\varepsilon} - \frac{1}{\varepsilon_n}\Big)\int_0^t H_u\,\big(X_{(u+\varepsilon_n)\wedge t}-X_u\big)\,du$.

The changes of variable $v = u+\varepsilon$ and $v = u+\varepsilon_n$ lead to

$I_\varepsilon(t) - I_{\varepsilon_n}(t) = \frac{1}{\varepsilon}\int_{\varepsilon_n}^{t} \big(H_{v-\varepsilon} - H_{v-\varepsilon_n}\big)\,X_v\,dv + \Big(\frac{1}{\varepsilon} - \frac{1}{\varepsilon_n}\Big)\int_{\varepsilon_n}^{t} \big(H_{v-\varepsilon_n} - H_v\big)\,X_v\,dv + R_\varepsilon(t)$,   (2.4)

where we gather under the notation $R_\varepsilon(t)$ all the remaining terms. Let us observe that $R_\varepsilon(t)$ is a sum of terms of the form $\frac{1}{\varepsilon}\int_a^b \dots\,dv$ with

$b-a\le \varepsilon_n-\varepsilon$, or of the form $\big(\frac{1}{\varepsilon}-\frac{1}{\varepsilon_n}\big)\int_a^b \dots\,dv$ with $b-a\le\varepsilon_n$. Since $H$ and $X$ are bounded on $[0,T]$, we have

$|R_\varepsilon(t)| \le C\,\frac{\varepsilon_n-\varepsilon}{\varepsilon}$, $\quad t\in[0,T]$.   (2.5)

By the Hölder property (2.1), we get

$|H_{v-\varepsilon} - H_{v-\varepsilon_n}| \le C\,(\varepsilon_n-\varepsilon)^{\alpha}$, $\qquad |H_{v-\varepsilon_n} - H_v| \le C\,\varepsilon_n^{\alpha}$.   (2.6)

Since $X$ and $H$ are bounded, we deduce from (2.4), (2.5) and (2.6) that

$|I_\varepsilon(t) - I_{\varepsilon_n}(t)| \le C\,\frac{(\varepsilon_n-\varepsilon)^{\alpha}}{\varepsilon} + C\,\frac{(\varepsilon_n-\varepsilon)\,\varepsilon_n^{\alpha}}{\varepsilon\,\varepsilon_n} + C\,\frac{\varepsilon_n-\varepsilon}{\varepsilon}$, $\quad t\in[0,T]$.

Using the definition of $\varepsilon_n$ and $\varepsilon\ge\varepsilon_{n+1}$, we infer

$\varepsilon_n-\varepsilon_{n+1} \le C\,n^{-1}\,\varepsilon_n$, $\qquad \frac{(\varepsilon_n-\varepsilon_{n+1})^{\alpha}}{\varepsilon_{n+1}} \le C\,n^{\frac{2(1-\alpha)}{p\alpha}-\alpha}$, $\qquad \frac{(\varepsilon_n-\varepsilon_{n+1})\,\varepsilon_n^{\alpha}}{\varepsilon_{n+1}\,\varepsilon_n} \le C\,n^{-1+\frac{2(1-\alpha)}{p\alpha}} \le C\,n^{\frac{2(1-\alpha)}{p\alpha}-\alpha}$.

Note that $p = \frac{2(1-\alpha)}{\alpha^2}+1$ implies that $\frac{2(1-\alpha)}{p\alpha}-\alpha < 0$. As a result, $I_\varepsilon(t) - I_{\varepsilon_n}(t)$ goes to $0$ a.s., uniformly on $[0,T]$, as $\varepsilon\to 0$.

5. In this item, it is supposed that $X$ is a semimartingale, locally Hölder continuous. It is clear that $\frac{1}{\varepsilon}\int_0^t (X_{(u+\varepsilon)\wedge t}-X_u)^2\,du$ equals

$\frac{1}{\varepsilon}\Big[\int_0^t X_{(u+\varepsilon)\wedge t}^2\,du - \int_0^t X_u^2\,du\Big] - \frac{2}{\varepsilon}\int_0^t X_u\,(X_{(u+\varepsilon)\wedge t}-X_u)\,du$.

Thanks to the change of variable $v = u+\varepsilon$ in the first integral, after easy calculations, we get

$\frac{1}{\varepsilon}\int_0^t (X_{(u+\varepsilon)\wedge t}-X_u)^2\,du = X_t^2 - \frac{1}{\varepsilon}\int_0^{\varepsilon} X_{v\wedge t}^2\,dv - \frac{2}{\varepsilon}\int_0^t X_u\,(X_{(u+\varepsilon)\wedge t}-X_u)\,du$.

Since $X$ is continuous, $\frac{1}{\varepsilon}\int_0^{\varepsilon} X_{v\wedge t}^2\,dv$ tends to $X_0^2$ a.s., uniformly on $[0,T]$. According to Theorem 1.2, we thus have

$\lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\int_0^t (X_{(u+\varepsilon)\wedge t}-X_u)^2\,du = X_t^2 - X_0^2 - 2\int_0^t X_u\,dX_u$.

Itô's formula implies that the right-hand side of the above identity equals $<X>_t$. Replacing $(u+\varepsilon)\wedge t$ by $u+\varepsilon$ does not change the limit. Then, the measure $\frac{1}{\varepsilon}(X_{u+\varepsilon}-X_u)^2\,du$ converges a.s. to the measure $d<X>_u$. This leads to the a.s. convergence of $\frac{1}{\varepsilon}\int_0^t H_u\,(X_{(u+\varepsilon)\wedge t}-X_u)^2\,du$ to $\int_0^t H_u\,d<X>_u$ for every continuous process $H$.
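To close this section, the almost sure convergence of the quadratic variation approximation (1.2) can also be observed numerically. The Python sketch below (illustrative parameters of ours) evaluates $\frac{1}{\varepsilon}\int_0^T (B_{(u+\varepsilon)\wedge T}-B_u)^2\,du$ on a single simulated Brownian path for decreasing $\varepsilon$ and compares it with $<B>_T = T$.

    import numpy as np

    rng = np.random.default_rng(2)
    T, n = 1.0, 400_000                   # illustrative horizon and grid size
    dt = T / n
    B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

    def qv_approx(eps):
        """Riemann sum for (1/eps) * int_0^T (B_{min(u+eps,T)} - B_u)^2 du."""
        shift = int(round(eps / dt))
        idx = np.minimum(np.arange(n) + shift, n)
        return np.sum((B[idx] - B[:n]) ** 2) * dt / eps

    for eps in (0.2, 0.05, 0.01, 0.002):
        print(f"eps={eps:6.3f}   approximation of <B>_T: {qv_approx(eps):.5f}   (exact: {T})")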

3 Proof of Theorem 1.4

Recall that $W^\varepsilon_t$ and $G^\varepsilon_u$ are defined by (1.7). We study the convergence in distribution of the two-dimensional process $(W^\varepsilon_t, B_t)$, as $\varepsilon\to 0$. First, we determine the limit in law of $W^\varepsilon_t$. In Point 1 we establish preliminary results. Then, we prove the convergence of the moments of $W^\varepsilon_t$ in Point 2. By the method of moments, the convergence in law of $W^\varepsilon_t$ for a fixed time is proven in Point 3. We deduce the finite-dimensional convergence in Point 4. Finally, the Kolmogorov criterion concludes the proof in Point 5. We then briefly sketch in Point 6 the proof of the joint convergence of $(W^\varepsilon_t)_t$ and $(B_t)_t$; the approach is close to the one used for $(W^\varepsilon_t)_t$.

1. We begin by calculating the moments of $W^\varepsilon_t$ and $G^\varepsilon_u$. We denote by $\stackrel{\mathcal{L}}{=}$ the equality in law.

Lemma 3.1 $E[(G^\varepsilon_u)^2] = \big(\frac{u\wedge\varepsilon}{\varepsilon}\big)^3\,\sigma^2$. Moreover, for all $k\in\mathbb{N}$, there exists a constant $m_k$ such that $E[|G^\varepsilon_u|^k] \le m_k$, for all $u\ge 0$ and $\varepsilon>0$.

Proof. First, we apply the change of variable $s = u - (u\wedge\varepsilon)\,r$ in (1.7). Then, using the identity $(B_u - B_{u-v};\ v\le u) \stackrel{\mathcal{L}}{=} (B_v;\ v\le u)$ and the scaling property of $B$, we get

$G^\varepsilon_u \stackrel{\mathcal{L}}{=} \Big(\frac{u\wedge\varepsilon}{\varepsilon}\Big)^{3/2} \int_0^1 B_r\,dr$.

Since $\int_0^1 B_r\,dr \stackrel{\mathcal{L}}{=} \sigma N$, where $\sigma^2 = 1/3$ and $N$ is a standard Gaussian r.v., we obtain

$E[|G^\varepsilon_u|^k] = \Big(\frac{u\wedge\varepsilon}{\varepsilon}\Big)^{3k/2} \sigma^k\,E[|N|^k]$.   (3.1)

Taking $k=2$ gives $E[(G^\varepsilon_u)^2] = \big(\frac{u\wedge\varepsilon}{\varepsilon}\big)^3\sigma^2$. Using $u\wedge\varepsilon\le\varepsilon$ and (3.1), we get $E[|G^\varepsilon_u|^k]\le m_k$ with $m_k = \sigma^k E[|N|^k]$.

Lemma 3.2 For all $k\ge 2$, there exists a constant $C(k)$ such that

$E[|W^\varepsilon_t|^k] \le C(k)\,t^{k/2}$, for all $t\ge 0$.

Moreover, for $k=2$, we have $E\big[(W^\varepsilon_u - W^\varepsilon_{(u-\varepsilon)^+})^2\big] \le \sigma^2\,\varepsilon$, for all $u\ge 0$.

Proof. The Burkholder-Davis-Gundy inequality and (1.7) give

$E[|W^\varepsilon_t|^k] \le c(k)\,E\Big[\Big(\int_0^t (G^\varepsilon_u)^2\,du\Big)^{k/2}\Big]$.

Then, the Jensen inequality implies

$E\Big[\Big(\int_0^t (G^\varepsilon_u)^2\,du\Big)^{k/2}\Big] \le t^{\frac{k}{2}-1}\,E\Big[\int_0^t |G^\varepsilon_u|^k\,du\Big]$.

Finally, applying Lemma 3.1 gives

$E[|W^\varepsilon_t|^k] \le c(k)\,m_k\,t^{k/2}$.

The case $k=2$ can be easily treated via (1.7) and Lemma 3.1:

$E\big[(W^\varepsilon_u - W^\varepsilon_{(u-\varepsilon)^+})^2\big] = \int_{(u-\varepsilon)^+}^{u} E[(G^\varepsilon_v)^2]\,dv = \int_{(u-\varepsilon)^+}^{u} \sigma^2\Big(\frac{v\wedge\varepsilon}{\varepsilon}\Big)^3 dv \le \sigma^2\,\varepsilon$.

2. Let us now study the convergence of the moments of $W^\varepsilon_t$.

Proposition 3.3 $\lim_{\varepsilon\to 0} E[(W^\varepsilon_t)^{2n}] = E[(\sigma W_t)^{2n}]$, for all $n\in\mathbb{N}$ and $t\ge 0$.   (3.2)

Proof. a) We prove Proposition 3.3 by induction on $n\ge 1$. For $n=1$, from Lemma 3.1, we have

$E[(W^\varepsilon_t)^2] = \int_0^t E[(G^\varepsilon_u)^2]\,du = \int_0^t \Big(\frac{u\wedge\varepsilon}{\varepsilon}\Big)^3\sigma^2\,du$.

Then, $E[(W^\varepsilon_t)^2]$ converges to $\sigma^2 t = E[(\sigma W_t)^2]$. Let us now suppose that (3.2) holds for $n$. First, we apply Itô's formula to $(W^\varepsilon_t)^{2n+2}$. Second, taking the expectation removes the martingale part. Finally, we get

$E[(W^\varepsilon_t)^{2n+2}] = \frac{(2n+2)(2n+1)}{2} \int_0^t E\big[(W^\varepsilon_u)^{2n}\,(G^\varepsilon_u)^2\big]\,du$.   (3.3)

b) We admit for a while that

$E\big[(W^\varepsilon_u)^{2n}\,(G^\varepsilon_u)^2\big] \to \sigma^2\,E[(\sigma W_u)^{2n}]$ as $\varepsilon\to 0$, for all $u\ge 0$.   (3.4)

Using the Cauchy-Schwarz inequality and Lemmas 3.1 and 3.2 gives

$E\big[(W^\varepsilon_u)^{2n}\,(G^\varepsilon_u)^2\big] \le \sqrt{E[(W^\varepsilon_u)^{4n}]}\ \sqrt{E[(G^\varepsilon_u)^4]} \le \sqrt{C(4n)\,u^{2n}\,m_4} \le \sqrt{C(4n)\,m_4}\ u^{n}$.

Consequently, we may apply Lebesgue's theorem to (3.3); we obtain

$\lim_{\varepsilon\to 0} E[(W^\varepsilon_t)^{2n+2}] = \frac{(2n+2)(2n+1)}{2}\int_0^t \sigma^2\,E[(\sigma W_u)^{2n}]\,du = \frac{(2n+2)(2n+1)}{2}\,\sigma^{2n+2}\,\frac{(2n)!}{2^n\,n!}\int_0^t u^n\,du = \frac{(2n+2)!}{2^{n+1}\,(n+1)!}\,\sigma^{2n+2}\,t^{n+1} = E[(\sigma W_t)^{2n+2}]$.

c) We now have to prove (3.4). If $u=0$, $E[(W^\varepsilon_0)^{2n}(G^\varepsilon_0)^2] = 0 = \sigma^2 E[(\sigma W_0)^{2n}]$. If $u>0$, it is clear that

$E\big[(W^\varepsilon_u)^{2n}(G^\varepsilon_u)^2\big] = E\big[(W^\varepsilon_{(u-\varepsilon)^+})^{2n}(G^\varepsilon_u)^2\big] + \xi_\varepsilon(u)$,   (3.5)

where $\xi_\varepsilon(u) = E\big[\{(W^\varepsilon_u)^{2n} - (W^\varepsilon_{(u-\varepsilon)^+})^{2n}\}(G^\varepsilon_u)^2\big]$. Since $G^\varepsilon_u$ is independent of $\mathcal{F}_{(u-\varepsilon)^+}$, we have

$E\big[(W^\varepsilon_{(u-\varepsilon)^+})^{2n}(G^\varepsilon_u)^2\big] = E\big[(W^\varepsilon_{(u-\varepsilon)^+})^{2n}\big]\,E\big[(G^\varepsilon_u)^2\big]$.

Finally, plugging the identity above into (3.5) gives

$E\big[(W^\varepsilon_u)^{2n}(G^\varepsilon_u)^2\big] = E\big[(W^\varepsilon_u)^{2n}\big]\,E\big[(G^\varepsilon_u)^2\big] + \xi_\varepsilon(u) + \xi'_\varepsilon(u)$,

where $\xi'_\varepsilon(u) = \big(E[(W^\varepsilon_{(u-\varepsilon)^+})^{2n}] - E[(W^\varepsilon_u)^{2n}]\big)\,E[(G^\varepsilon_u)^2]$.

Lemma 3.1 implies that $E[(G^\varepsilon_u)^2]$ tends to $\sigma^2$ as $\varepsilon\to 0$. The induction hypothesis implies that $E[(W^\varepsilon_u)^{2n}]$ converges to $E[(\sigma W_u)^{2n}]$ as $\varepsilon\to 0$. It remains to prove that $\xi_\varepsilon(u)$ and $\xi'_\varepsilon(u)$ tend to $0$ to conclude the proof. The identity $a^{2n}-b^{2n} = (a-b)\sum_{k=0}^{2n-1} a^k b^{2n-1-k}$ implies that $\xi_\varepsilon(u)$ equals the sum $\sum_{k=0}^{2n-1} S_k(\varepsilon,u)$, where

$S_k(\varepsilon,u) = E\big[(W^\varepsilon_u - W^\varepsilon_{(u-\varepsilon)^+})\,(G^\varepsilon_u)^2\,(W^\varepsilon_u)^k\,(W^\varepsilon_{(u-\varepsilon)^+})^{2n-1-k}\big]$.

Applying the Cauchy-Schwarz inequality four times leads to

$S_k(\varepsilon,u) \le \big[E(W^\varepsilon_u - W^\varepsilon_{(u-\varepsilon)^+})^2\big]^{1/2}\,\big[E(G^\varepsilon_u)^8\big]^{1/4}\,\big[E(W^\varepsilon_u)^{8k}\big]^{1/8}\,\big[E(W^\varepsilon_{(u-\varepsilon)^+})^{16n-8-8k}\big]^{1/8}$.

Lemmas 3.1 and 3.2 lead to

$S_k(\varepsilon,u) \le C(k,n,T)\,\sqrt{\varepsilon}$, $\quad u\in[0,T]$.

Consequently, $\xi_\varepsilon(u)$ tends to $0$ as $\varepsilon\to 0$. Using the same method, it is easy to prove that $\xi'_\varepsilon(u)$ tends to $0$ as $\varepsilon\to 0$.

3. From Proposition 3.3, it is easy to deduce the convergence in law of $W^\varepsilon_t$, $t$ being fixed.

Proposition 3.4 For any fixed $t$, $W^\varepsilon_t$ converges in law to $\sigma W_t$, as $\varepsilon\to 0$.

Let us recall the method of moments.

Proposition 3.5 Let $X$, $(X_n)_{n\in\mathbb{N}}$ be r.v.'s such that $E[|X|^k]<\infty$ and $E[|X_n|^k]<\infty$ for all $k,n\in\mathbb{N}$, and

$\limsup_{k\to\infty}\ \frac{\big[E(X^{2k})\big]^{\frac{1}{2k}}}{2k} < \infty$.   (3.6)

If, for all $k\in\mathbb{N}$, $\lim_{n\to\infty} E[X_n^k] = E[X^k]$, then $X_n$ converges in law to $X$ as $n\to\infty$.

Proof of Proposition 3.4. Let $t$ be a fixed time. The odd moments of $W^\varepsilon_t$ are null. By Proposition 3.3, the even moments of $W^\varepsilon_t$ tend to those of $\sigma W_t$. Since $\sigma W_t$ is a Gaussian r.v. with variance $\sigma^2 t$, it is easy to check that (3.6) holds. As a result, $W^\varepsilon_t$ converges in law to $\sigma W_t$.

4. Next, we prove the finite-dimensional convergence.

Proposition 3.6 Let $0<t_1<t_2<\dots<t_n$. Then, $(W^\varepsilon_{t_1},\dots,W^\varepsilon_{t_n})$ converges in law to $(\sigma W_{t_1},\dots,\sigma W_{t_n})$, as $\varepsilon\to 0$.

Proof. We take $n=2$ for simplicity. We consider $0<t_1<t_2$ and $\varepsilon\in\,]0, t_1\wedge(t_2-t_1)[$. Since $t_1>\varepsilon$, note that $(u-\varepsilon)^+ = u-\varepsilon$ for $u\in[t_1,t_2]$. We begin with the decomposition

$W^\varepsilon_{t_2} = W^\varepsilon_{t_1} + \frac{1}{\varepsilon^{3/2}}\int_{t_1+\varepsilon}^{t_2}\Big(\int_{u-\varepsilon}^{u}(B_u-B_s)\,ds\Big)\,dB_u + R^1_\varepsilon(t_1,t_2)$,

where

$R^1_\varepsilon(t_1,t_2) = \frac{1}{\varepsilon^{3/2}}\int_{t_1}^{t_1+\varepsilon}\Big(\int_{u-\varepsilon}^{u}(B_u-B_s)\,ds\Big)\,dB_u$.

Let us note that $W^\varepsilon_{t_1}$ is independent of $\frac{1}{\varepsilon^{3/2}}\int_{t_1+\varepsilon}^{t_2}\big(\int_{u-\varepsilon}^{u}(B_u-B_s)\,ds\big)\,dB_u$. Let us introduce $\widetilde B_t = B_{t+t_1} - B_{t_1}$, $t\ge 0$; $\widetilde B$ is a standard Brownian motion. The changes of variables $u = t_1+v$ and $r = s-t_1$ in $\frac{1}{\varepsilon^{3/2}}\int_{t_1+\varepsilon}^{t_2}\big(\int_{u-\varepsilon}^{u}(B_u-B_s)\,ds\big)\,dB_u$ lead to

$W^\varepsilon_{t_2} = W^\varepsilon_{t_1} + \Theta_\varepsilon(t_1,t_2) + R^2_\varepsilon(t_1,t_2) + R^1_\varepsilon(t_1,t_2)$,   (3.7)

where

$\Theta_\varepsilon(t_1,t_2) = \frac{1}{\varepsilon^{3/2}}\int_0^{t_2-t_1}\Big(\int_{(v-\varepsilon)^+}^{v}(\widetilde B_v-\widetilde B_r)\,dr\Big)\,d\widetilde B_v$, $\qquad R^2_\varepsilon(t_1,t_2) = -\frac{1}{\varepsilon^{3/2}}\int_0^{\varepsilon}\Big(\int_{(v-\varepsilon)^+}^{v}(\widetilde B_v-\widetilde B_r)\,dr\Big)\,d\widetilde B_v$.

Straightforward calculations show that $E[(R^1_\varepsilon(t_1,t_2))^2]$ and $E[(R^2_\varepsilon(t_1,t_2))^2]$ are bounded by $C\varepsilon$. Thus, $R^1_\varepsilon(t_1,t_2)$ and $R^2_\varepsilon(t_1,t_2)$ converge to $0$ in $L^2(\Omega)$.

Proposition 3.4 gives the convergence in law of $\Theta_\varepsilon(t_1,t_2)$ to $\sigma(W_{t_2}-W_{t_1})$ and the convergence in law of $W^\varepsilon_{t_1}$ to $\sigma W_{t_1}$, as $\varepsilon\to 0$. Since $W^\varepsilon_{t_1}$ and $\Theta_\varepsilon(t_1,t_2)$ are independent, the decomposition (3.7) implies that $(W^\varepsilon_{t_1},\,W^\varepsilon_{t_2}-W^\varepsilon_{t_1})$ converges in law to $(\sigma W_{t_1},\,\sigma(W_{t_2}-W_{t_1}))$, as $\varepsilon\to 0$. Proposition 3.6 follows immediately.

5. We end the proof of the convergence in law of the process $(W^\varepsilon_t)_{t\ge 0}$ by showing that the family of the laws of $(W^\varepsilon_t)_{t\ge 0}$, $\varepsilon\in\,]0,1[$, is tight.

Lemma 3.7 There exists a constant $K$ such that $E[(W^\varepsilon_t - W^\varepsilon_s)^4] \le K\,(t-s)^2$, for all $s\le t$ and $\varepsilon>0$.

Proof. Applying the Burkholder-Davis-Gundy inequality, we obtain

$E[(W^\varepsilon_t - W^\varepsilon_s)^4] \le c\,E\Big[\Big(\int_s^t (G^\varepsilon_u)^2\,du\Big)^2\Big] \le c\,(t-s)\,E\Big[\int_s^t (G^\varepsilon_u)^4\,du\Big]$.

Using Lemma 3.1, we get $E[(W^\varepsilon_t - W^\varepsilon_s)^4] \le c\,m_4\,(t-s)^2$, which ends the proof (see the Kolmogorov criterion in Section XIII.1 of [4]).

6. To prove the joint convergence of $(W^\varepsilon_t, B_t)_{t\ge 0}$ to $(\sigma W_t, B_t)_{t\ge 0}$, we mimic the approach developed in Points 1-5 above.

6.a. Convergence of $(W^\varepsilon_t, B_t)$ to $(\sigma W_t, B_t)$, $t$ being fixed. First, we prove that

$\lim_{\varepsilon\to 0} E\big[(W^\varepsilon_t)^p\,B_t^q\big] = E\big[(\sigma W_t)^p\,B_t^q\big]$, for all $p,q\in\mathbb{N}$.   (3.8)

Let us note that the limit is null when either $p$ or $q$ is odd. Using Itô's formula, we get

$E\big[(W^\varepsilon_t)^p\,B_t^q\big] = \frac{p(p-1)}{2}\,\alpha_1(t,\varepsilon) + \frac{q(q-1)}{2}\,\alpha_2(t,\varepsilon) + p\,q\,\alpha_3(t,\varepsilon)$,

where

$\alpha_1(t,\varepsilon) = \int_0^t E\big[(W^\varepsilon_u)^{p-2}\,B_u^q\,(G^\varepsilon_u)^2\big]\,du$,
$\alpha_2(t,\varepsilon) = \int_0^t E\big[(W^\varepsilon_u)^{p}\,B_u^{q-2}\big]\,du$,
$\alpha_3(t,\varepsilon) = \int_0^t E\big[(W^\varepsilon_u)^{p-1}\,B_u^{q-1}\,G^\varepsilon_u\big]\,du$.

To prove (3.8), we proceed by induction on $q$, then by induction on $p$, $q$ being fixed. First, applying (3.8) with $q-2$ instead of $q$, we have directly

$\lim_{\varepsilon\to 0}\ \alpha_2(t,\varepsilon) = \int_0^t E[(\sigma W_u)^p]\,E[B_u^{q-2}]\,du$.

As for $\alpha_1(t,\varepsilon)$, we write

$(W^\varepsilon_u)^{p-2} = \big[(W^\varepsilon_u)^{p-2} - (W^\varepsilon_{(u-\varepsilon)^+})^{p-2}\big] + (W^\varepsilon_{(u-\varepsilon)^+})^{p-2}$, $\qquad B_u^{q} = \big[B_u^{q} - B_{(u-\varepsilon)^+}^{q}\big] + B_{(u-\varepsilon)^+}^{q}$.

We proceed similarly with $\alpha_3(t,\varepsilon)$. Reasoning as in Point 2 and using the two previous identities, we can prove

$\lim_{\varepsilon\to 0}\ \alpha_1(t,\varepsilon) = \sigma^2\int_0^t E[(\sigma W_u)^{p-2}]\,E[B_u^q]\,du$ and $\lim_{\varepsilon\to 0}\ \alpha_3(t,\varepsilon) = 0$.

Consequently, when either $p$ or $q$ is odd, $\lim_{\varepsilon\to 0}\alpha_i(t,\varepsilon) = 0$, $i=1,2$, and therefore

$\lim_{\varepsilon\to 0} E\big[(W^\varepsilon_t)^p\,B_t^q\big] = 0 = E\big[(\sigma W_t)^p\,B_t^q\big]$.

It remains to determine the limit in the case where $p$ and $q$ are even. Let us write $p = 2p'$ and $q = 2q'$. Then we have

$\lim_{\varepsilon\to 0}\ \alpha_1(t,\varepsilon) = \sigma^2\,\frac{(2p'-2)!}{2^{p'-1}(p'-1)!}\,\sigma^{p-2}\,\frac{(2q')!}{2^{q'}q'!}\int_0^t u^{p'-1}u^{q'}\,du = \frac{(2p'-2)!\,(2q')!}{2^{p'+q'-1}\,(p'-1)!\,q'!\,(p'+q')}\,\sigma^{p}\,t^{p'+q'}$,

$\lim_{\varepsilon\to 0}\ \alpha_2(t,\varepsilon) = \frac{(2p')!}{2^{p'}p'!}\,\sigma^{p}\,\frac{(2q'-2)!}{2^{q'-1}(q'-1)!}\int_0^t u^{p'}u^{q'-1}\,du = \frac{(2p')!\,(2q'-2)!}{2^{p'+q'-1}\,p'!\,(q'-1)!\,(p'+q')}\,\sigma^{p}\,t^{p'+q'}$.

Then, it is easy to deduce

$\lim_{\varepsilon\to 0} E\big[(W^\varepsilon_t)^p\,B_t^q\big] = \frac{(2p')!}{2^{p'}p'!}\,\sigma^{p}\,t^{p'}\ \frac{(2q')!}{2^{q'}q'!}\,t^{q'} = E[(\sigma W_t)^p]\,E[B_t^q]$.

Next, we use a two-dimensional version of the method of moments.

Proposition 3.8 Let $X$, $Y$, $(X_n)_{n\in\mathbb{N}}$, $(Y_n)_{n\in\mathbb{N}}$ be r.v.'s whose moments are all finite. Let us suppose that $X$ and $Y$ satisfy (3.6) and that, for all $p,q\in\mathbb{N}$, $\lim_{n\to\infty} E[X_n^p\,Y_n^q] = E[X^p\,Y^q]$. Then, $(X_n,Y_n)$ converges in law to $(X,Y)$ as $n\to\infty$.

Since $\sigma W_t$ and $B_t$ are Gaussian r.v.'s, they both satisfy (3.6). Consequently, $(W^\varepsilon_t, B_t)$ converges in law to $(\sigma W_t, B_t)$ as $\varepsilon\to 0$.

6.b. Finite-dimensional convergence. Let $0<t_1<t_2$. We prove that $(W^\varepsilon_{t_1}, W^\varepsilon_{t_2}, B_{t_1}, B_{t_2})$ converges in law to $(\sigma W_{t_1}, \sigma W_{t_2}, B_{t_1}, B_{t_2})$. We apply the decomposition (3.7) to $W^\varepsilon_{t_2}$. By Point 6.a, $(W^\varepsilon_{t_1}, B_{t_1})$ converges in law to $(\sigma W_{t_1}, B_{t_1})$, and $(\Theta_\varepsilon(t_1,t_2), B_{t_2}-B_{t_1})$ converges to $(\sigma(W_{t_2}-W_{t_1}), B_{t_2}-B_{t_1})$. Since $(\Theta_\varepsilon(t_1,t_2), B_{t_2}-B_{t_1})$ is independent of $(W^\varepsilon_{t_1}, B_{t_1})$, we can conclude that $(W^\varepsilon_{t_1}, W^\varepsilon_{t_2}, B_{t_1}, B_{t_2})$ converges in law to $(\sigma W_{t_1}, \sigma W_{t_2}, B_{t_1}, B_{t_2})$.
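Theorem 1.4, whose proof is now complete, lends itself to a simple Monte Carlo illustration: for $H = B$, the rescaled error $\Delta_\varepsilon(B,T)$ equals $W^\varepsilon_T$ up to the negligible term $R_\varepsilon(T)$, so its variance should be close to $\sigma^2 T = T/3$ and its correlation with $B_T$ should be small. The following Python sketch estimates both quantities; the discretization step, sample size and seed are arbitrary choices of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    T, n, eps, n_paths = 1.0, 20_000, 0.01, 2_000    # illustrative parameters
    dt = T / n
    shift = int(round(eps / dt))
    idx = np.minimum(np.arange(n) + shift, n)        # grid index of min(u+eps, T)

    deltas, endpoints = [], []
    for _ in range(n_paths):
        B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
        approx = np.sum(B[:n] * (B[idx] - B[:n])) * dt / eps   # regularized integral (1.1) with H = B
        exact = 0.5 * (B[-1] ** 2 - T)                         # int_0^T B_u dB_u
        deltas.append((exact - approx) / np.sqrt(eps))         # Delta_eps(B, T)
        endpoints.append(B[-1])

    deltas, endpoints = np.array(deltas), np.array(endpoints)
    print("Var(Delta_eps(B,T)) ~", deltas.var(), "  (Theorem 1.4: sigma^2 * T =", T / 3, ")")
    print("Corr(Delta_eps(B,T), B_T) ~", np.corrcoef(deltas, endpoints)[0, 1])

For small $\varepsilon$ one expects the empirical variance to be close to $T/3$ and the correlation to be small, reflecting the independence of $W$ and $B$ in the limit.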

4 Proofs of Theorem 1.5 and Proposition 1.7

1. Convergence in distribution of a family of stochastic integrals with respect to $W^\varepsilon$. Recall that $W^\varepsilon$ is a continuous martingale which converges in distribution to $\sigma W$ as $\varepsilon\to 0$. Then, by Proposition 3.2 of [3], $(W^\varepsilon)$ satisfies the condition of uniform tightness. Consequently, from Theorem 5.1 of [3], we deduce that, for any càdlàg predictable process $\Lambda^\varepsilon$ such that $(\Lambda^\varepsilon, W^\varepsilon)$ converges in distribution to $(\Lambda, \sigma W)$, the process $\big(\int_0^t \Lambda^\varepsilon_u\,dW^\varepsilon_u\big)_{t\ge 0}$ converges in distribution to $\big(\sigma\int_0^t \Lambda_u\,dW_u\big)_{t\ge 0}$ as $\varepsilon\to 0$.

2. Proof of Proposition 1.7. Recall that $\Delta^2_\varepsilon(H,t)$ is defined by (1.6). First, let us consider the case $H_t = 1$ for all $t$. Using Itô's formula, we obtain

$(B_{(s+\varepsilon)\wedge t}-B_s)^2 = 2\int_s^{(s+\varepsilon)\wedge t}(B_u-B_s)\,dB_u + \big((s+\varepsilon)\wedge t - s\big)$.

Reporting this in $\Delta^2_\varepsilon(1,t)$ and applying the stochastic Fubini theorem give

$\Delta^2_\varepsilon(1,t) = 2\,W^\varepsilon_t + \frac{1}{\varepsilon\sqrt{\varepsilon}}\Big[\int_0^t\big((s+\varepsilon)\wedge t - s\big)\,ds - \varepsilon\,t\Big]$.

More generally, if we consider $H_s = f(B_u, u\le s)$, a continuous function of the trajectory such that $t\mapsto H_t$ is locally Hölder continuous, we can write a similar decomposition:

$\Delta^2_\varepsilon(H,t) = 2\int_0^t H_u\,dW^\varepsilon_u + R_\varepsilon(t)$,

where

$R_\varepsilon(t) = \frac{2}{\varepsilon^{3/2}}\int_0^t\Big(\int_{(u-\varepsilon)^+}^{u}(H_s-H_u)(B_u-B_s)\,ds\Big)\,dB_u + \frac{1}{\varepsilon\sqrt{\varepsilon}}\int_0^t H_s\,\big((s+\varepsilon)\wedge t - s - \varepsilon\big)\,ds$.

Since $s\mapsto H_s$ is continuous, we have a.s.

$\lim_{\varepsilon\to 0}\ \sup_{t\in[0,T]}\ \frac{1}{\varepsilon\sqrt{\varepsilon}}\Big|\int_0^t H_s\,\big((s+\varepsilon)\wedge t - s-\varepsilon\big)\,ds\Big| = 0$.

Using the Hölder property of $H$ and the Doob inequality, we easily obtain

$E\Big[\sup_{t\in[0,T]}\Big(\frac{1}{\varepsilon^{3/2}}\int_0^t\Big(\int_{(u-\varepsilon)^+}^{u}(H_s-H_u)(B_u-B_s)\,ds\Big)\,dB_u\Big)^2\Big] \le C\,\varepsilon^{2\alpha}$.

Since $H_s = f(B_u, u\le s)$ is a continuous function of the trajectory and $(W^\varepsilon, B)$ converges in distribution to $(\sigma W, B)$ (cf. Theorem 1.4), applying Point 1 with $\Lambda^\varepsilon = H$ leads to Proposition 1.7.

3. Proof of Point 2 of Theorem 1.5. Recall that $\Delta_\varepsilon(H,t)$ is defined by (1.5). Since $V$ vanishes at time $0$, we extend $V$ to the whole real line by setting

$V_s = 0$ for $s\le 0$. Writing $B_{(s+\varepsilon)\wedge t}-B_s = \int_s^{(s+\varepsilon)\wedge t} dB_u$, the stochastic Fubini theorem yields

$\Delta_\varepsilon(V,t) = \frac{1}{\varepsilon^{3/2}}\int_0^t\Big(\int_{u-\varepsilon}^{u}(V_u-V_s)\,ds\Big)\,dB_u$.

Consequently,

$E\Big[\sup_{t\in[0,T]}\Delta_\varepsilon(V,t)^2\Big] \le \frac{4}{\varepsilon^{3}}\int_0^T E\Big[\Big(\int_{u-\varepsilon}^{u}(V_u-V_s)\,ds\Big)^2\Big]\,du$.

Let us bound the term inside the expectation. Since $V$ is Hölder continuous of order $\alpha$, we have

$\frac{1}{\varepsilon^{3/2}}\Big|\int_{u-\varepsilon}^{u}(V_u-V_s)\,ds\Big| \le \frac{1}{\varepsilon^{3/2}}\int_{u-\varepsilon}^{u} C\,|s-u|^{\alpha}\,ds \le C\,\varepsilon^{\alpha-\frac{1}{2}}$.

Consequently,

$E\Big[\sup_{t\in[0,T]}\Delta_\varepsilon(V,t)^2\Big] \le C\,T\,\varepsilon^{2\alpha-1}$.

Since $\alpha>1/2$, item 2 of Theorem 1.5 is proved.

4. Proof of Point 3 of Theorem 1.5. Let $\Lambda_s = f(B_u, u\le s)$ be a continuous function of the trajectory such that $t\mapsto\Lambda_t$ is a continuous map from $\mathbb{R}_+$ to $L^2(\Omega)$, and suppose that $M_t = \int_0^t\Lambda_u\,dB_u$. Proceeding as in Point 3 of Section 2, we have

$\Delta_\varepsilon(M,t) = \frac{1}{\sqrt{\varepsilon}}\int_0^t\Big(M_u - \frac{1}{\varepsilon}\int_{(u-\varepsilon)^+}^{u} M_s\,ds\Big)\,dB_u$.

Using the identity $M_u - \frac{1}{\varepsilon}\int_{(u-\varepsilon)^+}^{u}M_s\,ds = \frac{1}{\varepsilon}\int_{(u-\varepsilon)^+}^{u}(M_u-M_s)\,ds + \frac{(\varepsilon-u)^+}{\varepsilon}\,M_u$ leads to

$\Delta_\varepsilon(M,t) = \widetilde\Delta_\varepsilon(M,t) + R_\varepsilon(t)$, where $\widetilde\Delta_\varepsilon(M,t) = \frac{1}{\varepsilon^{3/2}}\int_0^t\Big(\int_{(u-\varepsilon)^+}^{u}(M_u-M_s)\,ds\Big)\,dB_u$ and $R_\varepsilon(t) = \frac{1}{\varepsilon^{3/2}}\int_0^{t\wedge\varepsilon}(\varepsilon-u)\,M_u\,dB_u$.

We claim that $R_\varepsilon(t)$ is a remainder term. Indeed, the Doob inequality gives

$E\Big[\sup_{t\in[0,T]}R_\varepsilon(t)^2\Big] \le \frac{4}{\varepsilon^{3}}\int_0^{\varepsilon}(\varepsilon-u)^2\,E\Big[\Big(\int_0^u\Lambda_v\,dB_v\Big)^2\Big]\,du = \frac{4}{\varepsilon^{3}}\int_0^{\varepsilon}(\varepsilon-u)^2\int_0^u E[\Lambda_v^2]\,dv\,du$.

Consequently, $R_\varepsilon(t)$ does not contribute to the limit in law of $\Delta_\varepsilon(M,t)$. Using the identity $M_u - M_s = \Lambda_u(B_u-B_s) + \int_s^u(\Lambda_r-\Lambda_u)\,dB_r$, we decompose $\int_{(u-\varepsilon)^+}^{u}(M_u-M_s)\,ds$ as

$\int_{(u-\varepsilon)^+}^{u}\Lambda_u(B_u-B_s)\,ds + \int_{(u-\varepsilon)^+}^{u}\Big(\int_s^u(\Lambda_r-\Lambda_u)\,dB_r\Big)\,ds = \Lambda_u\int_{(u-\varepsilon)^+}^{u}(B_u-B_s)\,ds + \int_{(u-\varepsilon)^+}^{u}\big(r-(u-\varepsilon)^+\big)\,(\Lambda_r-\Lambda_u)\,dB_r$.

Consequently,

$\widetilde\Delta_\varepsilon(M,t) = \int_0^t\Lambda_u\,dW^\varepsilon_u + R'_\varepsilon(t)$, where $R'_\varepsilon(t) = \frac{1}{\varepsilon^{3/2}}\int_0^t\Big(\int_{(u-\varepsilon)^+}^{u}\big(r-(u-\varepsilon)^+\big)(\Lambda_r-\Lambda_u)\,dB_r\Big)\,dB_u$.

It can be proved, as previously, that $R'_\varepsilon(t)$ goes to $0$ in $L^2(\Omega)$ as $\varepsilon\to 0$. The convergence in law of $\big(\int_0^t\Lambda_u\,dW^\varepsilon_u\big)_t$ to $\big(\sigma\int_0^t\Lambda_u\,dW_u\big)_t$ is a direct consequence of item 1 above.

5. Proof of Point 1 of Theorem 1.5. We have, for a fixed $t$ and for all $\varepsilon<t$,

$\Delta_\varepsilon(H_0,t) = H_0\,\frac{1}{\sqrt{\varepsilon}}\Big[\int_0^t dB_s - \frac{1}{\varepsilon}\int_0^t\big(B_{(s+\varepsilon)\wedge t}-B_s\big)\,ds\Big]$.

Since $B_{(s+\varepsilon)\wedge t}-B_s = \int_s^{(s+\varepsilon)\wedge t} dB_u$, using Fubini's theorem allows us to gather the integrals and we get

$\Delta_\varepsilon(H_0,t) = H_0\,\frac{1}{\sqrt{\varepsilon}}\int_0^t\Big(1-\frac{u\wedge\varepsilon}{\varepsilon}\Big)\,dB_u = H_0\,\frac{1}{\varepsilon^{3/2}}\int_0^{\varepsilon}(\varepsilon-u)\,dB_u = H_0\,N_\varepsilon$, where $N_\varepsilon = \frac{1}{\varepsilon^{3/2}}\int_0^{\varepsilon}(\varepsilon-u)\,dB_u$.

The r.v. $N_\varepsilon$ does not depend on $t$ and is independent of $\mathcal{F}_0$. Moreover, $N_\varepsilon$ has a centered Gaussian distribution with variance

$E[N_\varepsilon^2] = \frac{1}{\varepsilon^{3}}\int_0^{\varepsilon}(\varepsilon-u)^2\,du = \frac{1}{3}$.

Finally, $N_\varepsilon$ follows the Gaussian law $\mathcal{N}(0,\sigma^2)$. Since, for every $\varepsilon<t_1$, the vector $(\Delta_\varepsilon(H_0,t_1),\dots,\Delta_\varepsilon(H_0,t_n))$ equals $(H_0 N_\varepsilon,\dots,H_0 N_\varepsilon)$ with $N_\varepsilon$ independent of $\mathcal{F}_0$, Point 1 of Theorem 1.5 follows. Note that $\Delta_\varepsilon(H_0,0) = 0$; consequently, the process $(\Delta_\varepsilon(H_0,t))_{t\ge 0}$ cannot converge in distribution as $\varepsilon\to 0$.

References

[1] B. Bérard Bergery. Approximation du temps local et intégration par régularisation. PhD thesis, Nancy-Université, 2004-2007.

[2] M. Gradinaru and I. Nourdin. Approximation at first and second order of m-order integrals of the fractional Brownian motion and of certain semimartingales. Electron. J. Probab., 8: no. 18, 26 pp. (electronic), 2003.

[3] A. Jakubowski, J. Mémin, and G. Pagès. Convergence en loi des suites d'intégrales stochastiques sur l'espace $D^1$ de Skorokhod. Probab. Theory Related Fields, 81(1):111-137, 1989.

[4] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, third edition, 1999.

[5] F. Russo and P. Vallois. The generalized covariation process and Itô formula. Stochastic Process. Appl., 59(1):81-104, 1995.

[6] F. Russo and P. Vallois. Itô formula for $C^1$-functions of semimartingales. Probab. Theory Related Fields, 104(1):27-41, 1996.

[7] F. Russo and P. Vallois. Stochastic calculus with respect to continuous finite quadratic variation processes. Stochastics Stochastics Rep., 70(1-2):1-40, 2000.

[8] F. Russo and P. Vallois. Elements of stochastic calculus via regularisation. In Séminaire de Probabilités, XXXX, Lecture Notes in Math. Springer, Berlin, 2006.