Variations and estimators for selfsimilarity parameters via Malliavin calculus


Ciprian A. Tudor
SAMOS-MATISSE, Centre d'Economie de La Sorbonne, Université de Paris 1 Panthéon-Sorbonne, 90, rue de Tolbiac, 75634 Paris, France. tudor@univ-paris1.fr

Frederi G. Viens
Dept. Statistics and Dept. Mathematics, Purdue University, 150 N. University St., West Lafayette, IN, USA. viens@purdue.edu

November 2008

Abstract

Using multiple stochastic integrals and the Malliavin calculus, we analyze the asymptotic behavior of quadratic variations for a specific non-Gaussian selfsimilar process, the Rosenblatt process. We apply our results to the design of strongly consistent statistical estimators for the selfsimilarity parameter H. Although in the case of the Rosenblatt process our estimator has non-Gaussian asymptotics for all H > 1/2, we show the remarkable fact that the process's data at time 1 can be used to construct a distinct, compensated estimator with Gaussian asymptotics for H ∈ (1/2, 2/3).

AMS Classification Numbers: 60F05, 60H05, 60G18.

Key words: multiple stochastic integral, Hermite process, fractional Brownian motion, Rosenblatt process, Malliavin calculus, non-central limit theorem, quadratic variation, Hurst parameter, selfsimilarity, statistical estimation.

1 Introduction

1.1 Context and motivation

A selfsimilar process is a stochastic process such that any part of its trajectory is invariant under time scaling. Selfsimilar processes are of considerable interest in practice in modeling various phenomena, including internet traffic (see e.g. [3]), hydrology (see e.g. []), or economics (see e.g. [], [3]). In various applications, empirical data also shows strong correlation of observations, indicating the presence, in addition to selfsimilarity, of long-range dependence. We refer to the monographs [6] or [4] for various properties and fields of applications of such processes. The motivation for this work is to examine non-Gaussian selfsimilar processes using tools from stochastic analysis.
We will focus our attention on a special such process, the so-called Rosenblatt process. It belongs to a class of selfsimilar processes which also exhibit long range dependence, and

which appear as limits in the so-called Non-Central Limit Theorem: the class of Hermite processes. We study the behavior of the quadratic variations for the Rosenblatt process Z, extending recent results by [5], [6], [3], and we apply the results to the study of estimators for the selfsimilarity parameter of Z. Recently, results on variations or weighted quadratic variation of fractional Brownian motion have been obtained in [5], [6], [3], among others. The Hermite processes were introduced by Taqqu (see [6], [7]) and by Dobrushin and Major (see [5]). The Hermite process of order q ≥ 1 can be written for every t ≥ 0 as

Z_q^H(t) = c(H,q) ∫_{R^q} [ ∫_0^t ∏_{i=1}^q (s − y_i)_+^{−(1/2 + (1−H)/q)} ds ] dW(y_1) ⋯ dW(y_q),   (1)

where c(H,q) is an explicit positive constant depending on q and H such that E[Z_q^H(1)²] = 1, x_+ = max(x, 0), the selfsimilarity (Hurst) parameter H belongs to the interval (1/2, 1), and the above integral is a multiple Wiener-Itô stochastic integral with respect to a Brownian motion (W(y))_{y∈R} (see []). We mention that the Hermite processes of order q > 1, which are non-Gaussian, have only been defined for H > 1/2; how to define these processes for H ≤ 1/2 is still an open problem. The case q = 1 is the well-known fractional Brownian motion (fBm): it is Gaussian. One recognizes that when q = 1, (1) is the moving-average representation of fractional Brownian motion. The Rosenblatt process is the case q = 2.
All Hermite processes share the following basic properties:

- they exhibit long-range dependence (the long-range covariance decays at the rate of the non-summable power function n^{2H−2});
- they are H-selfsimilar, in the sense that for any c > 0, (Z_q^H(ct))_t and (c^H Z_q^H(t))_t are equal in distribution;
- they have stationary increments, that is, the distribution of (Z_q^H(t+h) − Z_q^H(h))_t does not depend on h > 0;
- they share the same covariance function

E[Z_q^H(t) Z_q^H(s)] =: R_H(t,s) = (1/2) ( t^{2H} + s^{2H} − |t − s|^{2H} ),  s, t ∈ [0,1];

consequently, for every s, t ∈ [0,1] the expected squared increment of the Hermite process is

E[ ( Z_q^H(t) − Z_q^H(s) )² ] = |t − s|^{2H},   (2)

from which it follows, by Kolmogorov's continuity criterion and the fact that each L^p(Ω)-norm of the increment of Z_q^H over [s,t] is commensurate with its L²(Ω)-norm, that this process is almost surely Hölder continuous of any order δ < H;
- the q-th Hermite process lives in the so-called q-th Wiener chaos of the underlying Wiener process W, since it is a q-th order Wiener integral.

The stochastic analysis of fBm has been developed intensively in recent years and its applications are many. Other Hermite processes are less studied, but are still of interest because of their long-range dependence, selfsimilarity and stationarity of increments. The great popularity of fBm in modeling is due to these properties, and one prefers fBm over higher-order Hermite processes because it is a Gaussian process, whose calculus is much easier. But in concrete situations when empirical data

attests to the presence of selfsimilarity and long memory without the Gaussian property, one can use a Hermite process living in a higher chaos.

The Hurst parameter H characterizes all the important properties of a Hermite process, as seen above. Therefore, estimating H properly is of the utmost importance. Several statistics have been introduced to this end, such as wavelets, k-variations, variograms, maximum likelihood estimators, or spectral methods. Information on these various approaches can be found in the book of Beran [].

In this paper we will use variation statistics to estimate H. Let us recall the context. Suppose that a process (X_t)_{t∈[0,1]} is observed at the discrete times {0, 1/N, …, (N−1)/N, 1}, and let a be a filter of length l ≥ 0 and p ≥ 1 a fixed power; that is, a is an (l+1)-dimensional vector a = (a_0, a_1, …, a_l) such that Σ_{q=0}^{l} a_q q^r = 0 for 0 ≤ r ≤ p−1 and Σ_{q=0}^{l} a_q q^p ≠ 0. Then the k-variation statistic associated to the filter a is defined as

V_N(k, a) = (1/(N−l)) Σ_{i=l}^{N} [ |V_a(i/N)|^k / E[ |V_a(i/N)|^k ] − 1 ],

where, for i ∈ {l, …, N},

V_a(i/N) = Σ_{q=0}^{l} a_q X((i−q)/N).

When X is fBm, these statistics are used to derive strongly consistent estimators for the Hurst parameter, and their associated normal convergence results. A detailed study can be found in [7], [] or, more recently, in [4]. The behavior of V_N(k, a) is used to derive similar behaviors for the corresponding estimators. The basic result for fBm is that, if p > H + 1/4, then the renormalized k-variation V_N(k, a) converges to a standard normal distribution. The easiest and most natural case is that of the filter a = {−1, 1}, in which case p = 1; one then has the restriction H < 3/4. The techniques used to prove such convergence in the fBm case in the above references are strongly related to the Gaussian property of the observations; they appear not to extend to non-Gaussian situations.

Our purpose here is to develop new techniques that can be applied to both the fBm case and other non-Gaussian selfsimilar processes.
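The estimation procedure just outlined is easy to exercise numerically. The sketch below is ours, not the paper's: it simulates fractional Gaussian noise exactly via a Cholesky factorization of its covariance, then applies the log-quadratic-variation estimator of H used later in the paper (with the simplest filter a = {−1, 1} and k = 2). Function names are illustrative assumptions.

```python
import numpy as np

def fgn(n, H, seed=0):
    """n increments of fBm observed at times i/n, i = 1..n, simulated
    exactly by Cholesky factorization of the fGn covariance matrix."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    # autocovariance of unit-spacing fGn, then scaled to spacing 1/n
    rho = 0.5 * ((k + 1.0) ** (2 * H) + np.abs(k - 1.0) ** (2 * H) - 2.0 * k ** (2 * H))
    cov = rho[np.abs(k[:, None] - k[None, :])] * n ** (-2.0 * H)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    return L @ rng.standard_normal(n)

def hurst_estimator(increments):
    """\\hat H_N = 1/2 - log(sum of squared increments) / (2 log N)."""
    n = len(increments)
    return 0.5 - np.log(np.sum(increments ** 2)) / (2.0 * np.log(n))

H_hat = hurst_estimator(fgn(512, H=0.6, seed=42))
```

With 512 observations the estimator typically lands within a few hundredths of the true H; the size and nature of its fluctuations (Gaussian or Rosenblatt) are precisely the subject of the results recalled and proved below.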
Since this is the first attempt in such a direction, we keep things as simple as possible: we treat the case of the filter a = {−1, 1} with a k-variation order k = 2 (quadratic variation), but the method can be generalized. As announced above, we further specialize to the simplest non-Gaussian Hermite process, i.e. the one of order 2, the Rosenblatt process. We now give a short overview of our results (a more detailed summary of these facts is given in the next subsection).

We obtain that, after suitable normalization, the quadratic variation statistic of the Rosenblatt process converges to a Rosenblatt random variable with the same selfsimilarity order; in fact, this random variable is the observed value of the original Rosenblatt process at time 1, and the convergence occurs in the mean square. More precisely, the quadratic variation statistic can be decomposed into the sum of two terms: a term in the fourth Wiener chaos (that is, an iterated integral of order 4 with respect to the Wiener process) and a term in the second Wiener chaos. The fourth-Wiener-chaos term is well-behaved, in the sense that it has a Gaussian limit in distribution, but the second-Wiener-chaos term is ill-behaved, in the sense that its asymptotics are non-Gaussian, and are in fact Rosenblatt-distributed. This term being of a higher order than the well-behaved one, it is responsible for the asymptotics of the entire statistic. But since its convergence occurs in the mean square, and the limit is observed, we can construct an adjusted variation by subtracting the contribution of the ill-behaved term. We find an estimator for the selfsimilarity parameter of the Rosenblatt process, based on observed data, whose asymptotic distribution is normal.

Our main tools are the Malliavin calculus, the Wiener-Itô chaos expansions, and recent results on the convergence of multiple stochastic integrals proved in [9], [], [], or [3]. The key point is the

following: if the observed process lives in some finite Wiener chaos, then the statistic V_N can be decomposed, using product formulas and Wiener chaos calculus, into a finite sum of multiple integrals. Then one can attempt to apply the criteria in [] to study the convergence in law of such sequences and to derive asymptotic normality results, and/or lack thereof, for the estimators of the Hurst parameter of the observed process. The criteria in [] are necessary and sufficient conditions for convergence to the Gaussian law; in some instances these criteria fail (e.g. the fBm case with H > 3/4), in which case a proof of non-normal convergence "by hand", working directly with the chaoses, can be employed. It is the basic Wiener chaos calculus that makes this possible.

1.2 Summary of results

We now summarize the main results in this paper in some detail. As stated above, we use quadratic variation with a = {−1, 1}. We consider the following two processes, observed at the discrete times {i/N}_{i=0}^N: the fBm process X = B, and the Rosenblatt process X = Z. In either case, the standardized quadratic variation and the Hurst parameter estimator are given by

V_N = V_N(2, {−1,1}) := −1 + N^{2H−1} Σ_{i=1}^N | X(i/N) − X((i−1)/N) |²,   (3)

\hat H_N = \hat H_N(2, {−1,1}) := 1/2 − (1/(2 log N)) log Σ_{i=1}^N ( X(i/N) − X((i−1)/N) )².   (4)

We choose to use the normalization 1/N in the definition of V_N (as e.g. in [4]), although it sometimes does not appear in the literature. The H-dependent constants c_{j,H} (et al.) referred to below are defined explicitly in the displayed formulas below. Here and throughout, L²(Ω) denotes the set of square-integrable random variables measurable with respect to the sigma-field generated by W. This sigma-field is the same as that generated by B or by Z. The term "Rosenblatt random variable" denotes a random variable whose distribution is the same as that of Z(1).

We first recall the following facts, relative to fractional Brownian motion.
1. if X = B and H ∈ (1/2, 3/4), then

(a) √N c_{1,H}^{−1/2} V_N converges in distribution to the standard normal law;
(b) 2 √N log(N) c_{1,H}^{−1/2} ( \hat H_N − H ) converges in distribution to the standard normal law;

2. if X = B and H ∈ (3/4, 1), then

(a) N^{2−2H} c_{2,H}^{−1/2} V_N converges in L²(Ω) to a standard Rosenblatt random variable with parameter H_0 = 2H − 1;
(b) 2 N^{2−2H} log(N) c_{2,H}^{−1/2} ( H − \hat H_N ) converges in L²(Ω) to the same standard Rosenblatt random variable;

3. if X = B and H = 3/4, then

(a) √(N / log N) (c'_{1,H})^{−1/2} V_N converges in distribution to the standard normal law;

(b) 2 √(N / log N) (c'_{1,H})^{−1/2} ( \hat H_N(2,a) − H ) converges in distribution to the standard normal law.

The convergences for the standardized V_N's in points 1.(a) and 2.(a) have been known for some time, in works such as [7] or [8]. Lately, even stronger results, which also give error bounds, have been proven. We refer to [8] for the one-dimensional case and H ∈ (0, 3/4), to [] for the one-dimensional case and H ∈ [3/4, 1), and to [9] for the multidimensional case and H ∈ (0, 3/4).

In this paper we prove the following results for the Rosenblatt process X = Z, as N → ∞.

4. if X = Z and H ∈ (1/2, 1), then with c_{3,H} in (18),

(a) N^{1−H} V_N(2,a) / √c_{3,H} converges in L²(Ω) to the Rosenblatt random variable Z(1);
(b) (2/√c_{3,H}) N^{1−H} log(N) ( H − \hat H_N(2,a) ) converges in L²(Ω) to the same Rosenblatt random variable Z(1);

5. if X = Z and H ∈ (1/2, 2/3), then with e_{1,H} and f_{1,H} in (27) and (39),

(a) √N (e_{1,H} + f_{1,H})^{−1/2} [ V_N(2,a) − √c_{3,H} N^{H−1} Z(1) ] converges in distribution to the standard normal law;
(b) √N (e_{1,H} + f_{1,H})^{−1/2} [ 2 log(N) ( H − \hat H_N(2,a) ) − √c_{3,H} N^{H−1} Z(1) ] converges in distribution to the standard normal law.

Note that Z(1) is the actual observed value of the Rosenblatt process at time 1, which is why it is legitimate to include it in a formula for an estimator. Points 4 and 5 are new results. The subject of variations and statistics for the Rosenblatt process has received too narrow a treatment in the literature, presumably because standard techniques inherited from the Non-Central Limit Theorem (and based sometimes on the Fourier transform formula for the driving Gaussian process) are difficult to apply (see [3], [5], [7]). Our Wiener chaos calculus approach allows us to show that the standardized quadratic variation and corresponding estimator both converge to a Rosenblatt random variable in L²(Ω). Here our method has a crucial advantage: we are able to determine which Rosenblatt random variable it converges to; it is none other than the observed value Z(1). The fact that we are able to prove L²(Ω) convergence, not just convergence in distribution, is crucial.
Indeed, when H < 2/3, subtracting an appropriately normalized version of this observed value from the quadratic variation and its associated estimator, we prove that asymptotic normality does hold in this case. This unexpected result has important consequences for the statistics of the Rosenblatt process, since it permits the use of standard artillery in parameter estimation and testing.

Our asymptotic normality result for the Rosenblatt process was specifically made possible by showing that V_N can be decomposed into two terms: a term T_4 in the fourth Wiener chaos and a term T_2 in the second Wiener chaos. While the second-Wiener-chaos term T_2 always converges to the Rosenblatt r.v. Z(1), the fourth-chaos term T_4 converges to a Gaussian r.v. for H ≤ 3/4. We conjecture that this asymptotic normality should also occur for Hermite processes of higher order q ≥ 3, and that the threshold H = 3/4 is universal. The threshold H < 2/3 in the results above comes from the discrepancy that exists between a normalized T_2 and its observed limit Z(1). If we were to rephrase results 4 and 5 above with T_2 instead of Z(1) (which is not a legitimate operation when defining an estimator, since T_2 is not observed), the threshold would be H ≤ 3/4 and the constant f_{1,H} would vanish.

Beyond our basic interest concerning parameter estimation problems, let us situate our paper in the context of some recent and interesting works on the asymptotic behavior of p-variations (or weighted

variations) for Gaussian processes, namely the papers [3], [5], [6], and [5]. These recent papers study the behavior of sequences of the type

Σ_{i=1}^N h( X((i−1)/N) ) [ N^{2H} | X(i/N) − X((i−1)/N) |² − 1 ],

where X is a Gaussian process (fractional Brownian motion in [3], [5] and [6], and the solution of the heat equation in [5]) or the iterated Brownian motion in [7], and h is a regular deterministic function. In the fractional Brownian motion case, the behavior of such sums varies according to the values of the Hurst parameter, the limit being sometimes a Gaussian random variable and sometimes a deterministic integral. We believe our work is the first to tackle a non-Gaussian case, that is, when the process X above is a Rosenblatt process. Although we restrict ourselves to the case h ≡ 1, we still observe the appearance of interesting limits, depending on the Hurst parameter: while in general the limit of the suitably normalized sequence is a Rosenblatt random variable (with the same Hurst parameter H as the data, which poses a slight problem for statistical applications), the adjusted variations (that is to say, the sequences obtained by precisely subtracting the portion responsible for the non-Gaussian convergence) do converge to a Gaussian limit for H ∈ (1/2, 2/3).

This article is structured as follows. Section 2 presents preliminaries on fractional stochastic analysis. Section 3 contains proofs of our results for the non-Gaussian Rosenblatt process. Some calculations are recorded as lemmas that are proved in the Appendix (Section 5). Section 4 establishes our parameter estimation results, which follow nearly trivially from the theorems in Section 3.

We wish to thank an anonymous referee who pointed out a number of inaccuracies in the original submission.

2 Preliminaries

Here we describe the elements from stochastic analysis that we will need in the paper.
Consider H a real separable Hilbert space and (B(φ), φ ∈ H) an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that E( B(φ) B(ψ) ) = ⟨φ, ψ⟩_H. Denote by I_n the multiple stochastic integral with respect to B (see []). This I_n is actually an isometry between the Hilbert space H^{⊙n} (symmetric tensor product) equipped with the scaled norm √(n!) ‖·‖_{H^{⊗n}} and the Wiener chaos of order n, which is defined as the closed linear span of the random variables H_n(B(φ)), where φ ∈ H, ‖φ‖_H = 1, and H_n is the Hermite polynomial of degree n.

We recall that any square-integrable random variable which is measurable with respect to the σ-algebra generated by B can be expanded into an orthogonal sum of multiple stochastic integrals

F = Σ_{n≥0} I_n(f_n),

where f_n ∈ H^{⊙n} are (uniquely determined) symmetric functions and I_0(f_0) = E[F]. In this paper we actually use only multiple integrals with respect to the standard Wiener process with time horizon [0,1], in which case we will always have H = L²([0,1]). This notation will be used throughout the paper.

We will need the general formula for calculating products of Wiener chaos integrals of any orders p, q for any symmetric integrands f ∈ H^{⊙p} and g ∈ H^{⊙q}; it is

I_p(f) I_q(g) = Σ_{r=0}^{p∧q} r! \binom{p}{r} \binom{q}{r} I_{p+q−2r}( f ⊗_r g ),   (5)

as given for instance in D. Nualart's book [, Proposition 1.1.3]; the contraction f ⊗_r g is the element of H^{⊗(p+q−2r)} defined by

( f ⊗_ℓ g )(s_1, …, s_{p−ℓ}, t_1, …, t_{q−ℓ}) = ∫_{[0,1]^ℓ} f(s_1, …, s_{p−ℓ}, u_1, …, u_ℓ) g(t_1, …, t_{q−ℓ}, u_1, …, u_ℓ) du_1 … du_ℓ.   (6)

We now introduce the Malliavin derivative for random variables in a finite chaos. If f ∈ H^{⊙n}, we will use the following rule to differentiate in the Malliavin sense:

D_t I_n(f) = n I_{n−1}( f(·, t) ),  t ∈ [0,1].

It is possible to characterize the convergence in distribution of a sequence of multiple integrals to the standard normal law. We will use the following result (see Theorem 4 in [], see also []).

Theorem 1. Fix n ≥ 2 and let (F_k, k ≥ 1), F_k = I_n(f_k) (with f_k ∈ H^{⊙n} for every k ≥ 1), be a sequence of square-integrable random variables in the n-th Wiener chaos such that E[F_k²] → 1 as k → ∞. Then the following are equivalent:

(i) the sequence (F_k)_{k≥1} converges in distribution to the normal law N(0,1);
(ii) one has E[F_k⁴] → 3 as k → ∞;
(iii) for all 1 ≤ l ≤ n−1 it holds that lim_{k→∞} ‖ f_k ⊗_l f_k ‖_{H^{⊗2(n−l)}} = 0;
(iv) ‖D F_k‖²_H → n in L²(Ω) as k → ∞, where D is the Malliavin derivative with respect to B.

Criterion (iv) is due to []; we will refer to it as the Nualart-Ortiz-Latorre criterion. A multidimensional version of the above theorem has been proved in [3] (see also []).

3 Variations for the Rosenblatt process

Our observed process is a Rosenblatt process (Z(t))_{t∈[0,1]} with selfsimilarity parameter H ∈ (1/2, 1). This centered process is selfsimilar with stationary increments, and lives in the second Wiener chaos. Its covariance is identical to that of fractional Brownian motion. Our goal is to estimate its selfsimilarity parameter H from discrete observations of its sample paths. As far as we know, this direction has seen little or no attention in the literature, and the classical techniques (e.g. the ones from [5], [6], or [7]) do not work well for it. Therefore, the use of the Malliavin calculus and multiple stochastic integrals is of interest.
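The fourth-moment condition (ii) of Theorem 1 is easy to see at work numerically in the simplest second-chaos example. The sketch below is ours, not the paper's: F_N = (2N)^{−1/2} Σ_{i≤N} (ξ_i² − 1), with ξ_i i.i.d. N(0,1), is an I_2 integral with E[F_N²] = 1 and E[F_N⁴] = 3 + 12/N → 3, so F_N converges in law to N(0,1).

```python
import numpy as np

# Monte Carlo check of E[F_N^2] -> 1 and E[F_N^4] -> 3 for the normalized
# second-chaos sequence F_N = (2N)^{-1/2} sum (xi_i^2 - 1).
rng = np.random.default_rng(1)
N, M = 200, 40000                       # chaos parameter and Monte Carlo sample size
xi = rng.standard_normal((M, N))
F = (xi ** 2 - 1.0).sum(axis=1) / np.sqrt(2.0 * N)
m2 = float(np.mean(F ** 2))             # should be close to 1
m4 = float(np.mean(F ** 4))             # should be close to 3 (here 3 + 12/N + MC error)
```

A sequence staying in a fixed chaos but failing (ii), such as the second-chaos term studied in Section 3.3 below, cannot have a Gaussian limit; this is exactly the dichotomy the paper exploits.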
The Rosenblatt process can be represented as follows (see [8]): for every t ∈ [0,1],

Z^H(t) := Z(t) = d(H) ∫_0^t ∫_0^t [ ∫_{y_1 ∨ y_2}^t ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(u, y_2) du ] dW(y_1) dW(y_2),   (7)

where (W(t), t ∈ [0,1]) is some standard Brownian motion, K^{H'} is the standard kernel of fractional Brownian motion (see any reference on fBm, such as [, Chapter 5]), and

H' = (H+1)/2  and  d(H) = ( 2(2H−1) )^{1/2} / ( (H+1) H^{1/2} ).   (8)
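A quick consistency check (ours, not from the paper) of the normalizing constants, assuming the expression for d(H) reconstructed in (8). With H' = (H+1)/2 and a(H') := H'(2H'−1) = H(H+1)/2 (the constant from the fBm-kernel identity used repeatedly in Section 3), one should have d(H)² a(H')² = H(2H−1)/2, the relation that underlies the normalization E[Z(1)²] = 1 in the variance computations below.

```python
# Verify d(H)^2 * a(H')^2 == H(2H-1)/2 for several H in (1/2, 1),
# assuming d(H) = sqrt(2(2H-1)) / ((H+1) sqrt(H)) as in (8).

def d_squared(H):
    return 2.0 * (2.0 * H - 1.0) / ((H + 1.0) ** 2 * H)

def a(H_prime):
    return H_prime * (2.0 * H_prime - 1.0)

errors = []
for H in (0.55, 0.6, 0.75, 0.9):
    H_prime = (H + 1.0) / 2.0
    errors.append(abs(d_squared(H) * a(H_prime) ** 2 - H * (2.0 * H - 1.0) / 2.0))
```

The identity holds exactly as an algebraic simplification, so the numerical errors are at floating-point level.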

For every t ∈ [0,1] we will denote the kernel of the Rosenblatt process with respect to W by

L_t^H(y_1, y_2) := L_t(y_1, y_2) := d(H) [ ∫_{y_1 ∨ y_2}^t ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(u, y_2) du ] 1_{[0,t]²}(y_1, y_2).   (9)

In other words, for every t, Z(t) = I_2( L_t(·) ), where I_2 denotes the multiple integral of order 2 introduced in Section 2.

Consider now the filter a = {−1, 1} and the 2-variations given by

V_N(2, a) = (1/N) Σ_{i=1}^N [ ( Z(i/N) − Z((i−1)/N) )² − E( Z(i/N) − Z((i−1)/N) )² ] / E( Z(i/N) − Z((i−1)/N) )² = −1 + N^{2H−1} Σ_{i=1}^N ( Z(i/N) − Z((i−1)/N) )².

The product formula for multiple Wiener-Itô integrals (5) yields

I_2(f)² = I_4( f ⊗ f ) + 4 I_2( f ⊗_1 f ) + 2 I_0( f ⊗_2 f ).

Setting, for i = 1, …, N,

A_i := L_{i/N} − L_{(i−1)/N},   (10)

so that Z(i/N) − Z((i−1)/N) = I_2(A_i), we can thus write

( Z(i/N) − Z((i−1)/N) )² = ( I_2(A_i) )² = I_4( A_i ⊗ A_i ) + 4 I_2( A_i ⊗_1 A_i ) + 2 ‖A_i‖²_{L²([0,1]²)},

and, since 2‖A_i‖² = E( Z(i/N) − Z((i−1)/N) )² = N^{−2H}, this implies that the 2-variation is decomposed into a 4th-chaos term and a 2nd-chaos term:

V_N(2, a) = N^{2H−1} Σ_{i=1}^N ( I_4( A_i ⊗ A_i ) + 4 I_2( A_i ⊗_1 A_i ) ) =: T_4 + T_2.   (11)

A detailed study of the two terms above will shed light on some interesting facts: if H ≤ 3/4 the term T_4 continues to exhibit normal behavior (when renormalized, it converges in law to a Gaussian distribution), while the term T_2, which turns out to be dominant, never converges to a Gaussian law. One can say that the second-Wiener-chaos portion is "ill-behaved"; however, once it is subtracted, one obtains a sequence converging to N(0,1) for H ∈ (1/2, 3/4), which has an impact on statistical applications.

3.1 Expectation evaluations

3.1.1 The term T_2

Let us evaluate the mean square of the second term

T_2 := 4 N^{2H−1} Σ_{i=1}^N I_2( A_i ⊗_1 A_i ).

We use the notation I_i := ((i−1)/N, i/N] for i = 1, …, N. The contraction A_i ⊗_1 A_i is given by

( A_i ⊗_1 A_i )(y_1, y_2) = ∫_0^1 A_i(x, y_1) A_i(x, y_2) dx
= d(H)² ∫_0^1 dx [ 1_{[0,i/N]}(y_1 ∨ x) ∫_{x ∨ y_1}^{i/N} ∂_1 K^{H'}(u, x) ∂_1 K^{H'}(u, y_1) du − 1_{[0,(i−1)/N]}(y_1 ∨ x) ∫_{x ∨ y_1}^{(i−1)/N} ∂_1 K^{H'}(u, x) ∂_1 K^{H'}(u, y_1) du ]
× [ 1_{[0,i/N]}(y_2 ∨ x) ∫_{x ∨ y_2}^{i/N} ∂_1 K^{H'}(v, x) ∂_1 K^{H'}(v, y_2) dv − 1_{[0,(i−1)/N]}(y_2 ∨ x) ∫_{x ∨ y_2}^{(i−1)/N} ∂_1 K^{H'}(v, x) ∂_1 K^{H'}(v, y_2) dv ].   (12)

We note the following fact (see [], Chapter 5):

∫_0^{u ∧ v} ∂_1 K^{H'}(u, y) ∂_1 K^{H'}(v, y) dy = a(H') |u − v|^{2H'−2},  where  a(H') := H'(2H'−1) = H(H+1)/2;   (13)

note that 2H'−2 = H−1. In fact, this relation can be easily derived from ∫_0^{t∧s} K^{H'}(t,u) K^{H'}(s,u) du = R_{H'}(t,s), and it will be used repeatedly in the sequel. To use this relation, we first expand the product in the expression for the contraction in (12), taking care to keep track of the indicator functions. The resulting initial expression for ( A_i ⊗_1 A_i )(y_1, y_2) contains 4 terms, which are all of the following form:

C_{a,b} := d(H)² ∫_0^1 dx 1_{[0,a]}(y_1 ∨ x) 1_{[0,b]}(y_2 ∨ x) ∫_{u=y_1 ∨ x}^{a} ∂_1 K^{H'}(u, x) ∂_1 K^{H'}(u, y_1) du ∫_{v=y_2 ∨ x}^{b} ∂_1 K^{H'}(v, x) ∂_1 K^{H'}(v, y_2) dv.

Here, to perform a Fubini argument by bringing the integral over x inside, we first note that x < u ∧ v while u ∈ [y_1, a] and v ∈ [y_2, b]. Also note that the conditions x ≤ u and u ≤ a imply x ≤ a, and thus 1_{[0,a]}(y_1 ∨ x) can be replaced, after Fubini, by 1_{[0,a]}(y_1). Therefore, using (13), the above expression equals

C_{a,b} = d(H)² 1_{[0,a]}(y_1) 1_{[0,b]}(y_2) ∫_{u=y_1}^{a} ∫_{v=y_2}^{b} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) [ ∫_0^{u∧v} ∂_1 K^{H'}(u, x) ∂_1 K^{H'}(v, x) dx ] du dv
= a(H') d(H)² ∫_{u=y_1}^{a} ∫_{v=y_2}^{b} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) |u − v|^{H−1} du dv.

The last equality above also uses the fact that the indicator functions in y_1, y_2 are redundant: they can be pulled back into the integral over du dv, and therein the functions ∂_1 K^{H'}(u, y_1) and ∂_1 K^{H'}(v, y_2) are, by definition, as functions of y_1 and y_2, supported by smaller intervals than [0,a] and [0,b], namely [0,u] and [0,v] respectively.

Now, the contraction ( A_i ⊗_1 A_i )(y_1, y_2) equals C_{i/N, i/N} + C_{(i−1)/N, (i−1)/N} − C_{(i−1)/N, i/N} − C_{i/N, (i−1)/N}. Therefore, from the last expression above,

( A_i ⊗_1 A_i )(y_1, y_2) = a(H') d(H)² ( ∫_{y_1}^{i/N} du ∫_{y_2}^{i/N} dv + ∫_{y_1}^{(i−1)/N} du ∫_{y_2}^{(i−1)/N} dv − ∫_{y_1}^{(i−1)/N} du ∫_{y_2}^{i/N} dv − ∫_{y_1}^{i/N} du ∫_{y_2}^{(i−1)/N} dv ) ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) |u − v|^{H−1}.   (14)

Since the integrands in the above 4 integrals are identical, we can simplify the formula: grouping the first and third terms, for instance, yields an integral of v over I_i, with integration over u in [y_1, i/N]; the same operation on the second and fourth terms gives the negative of the same integral over v, with integration over u in [y_1, (i−1)/N]. Grouping these two resulting terms yields a single term, which is an integral for (u, v) over I_i × I_i. We obtain the following final expression for our contraction:

( A_i ⊗_1 A_i )(y_1, y_2) = a(H') d(H)² ∫_{I_i} ∫_{I_i} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) |u − v|^{H−1} du dv.   (15)

Now, since the integrands in the double Wiener integrals defining T_2 are symmetric, we get

E[T_2²] = 16 N^{4H−2} · 2 Σ_{i,j=1}^N ⟨ A_i ⊗_1 A_i, A_j ⊗_1 A_j ⟩_{L²([0,1]²)}.

To evaluate the inner product of the two contractions, we first use Fubini with expression (15); by doing so, one must realize that the support of ∂_1 K^{H'}(u, y_1) is {u > y_1}, which makes the upper endpoint 1 for the integration in y_1 redundant; similar remarks hold with respect to u', v, v', and y_2. In other words, we have

⟨ A_i ⊗_1 A_i, A_j ⊗_1 A_j ⟩_{L²([0,1]²)}
= a(H')² d(H)⁴ ∫_0^1 ∫_0^1 dy_1 dy_2 ∫_{I_i}∫_{I_i} du dv ∫_{I_j}∫_{I_j} du' dv' |u − v|^{H−1} |u' − v'|^{H−1} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(u', y_1) ∂_1 K^{H'}(v, y_2) ∂_1 K^{H'}(v', y_2)
= a(H')⁴ d(H)⁴ ∫_{I_i}∫_{I_i}∫_{I_j}∫_{I_j} |u − v|^{H−1} |u' − v'|^{H−1} |u − u'|^{H−1} |v − v'|^{H−1} du dv du' dv',   (16)

where we used the expression (13) in the last step. Therefore we have immediately

E[T_2²] = 32 N^{4H−2} a(H')⁴ d(H)⁴ Σ_{i,j=1}^N ∫_{I_i}∫_{I_i}∫_{I_j}∫_{I_j} |u − v|^{H−1} |u' − v'|^{H−1} |u − u'|^{H−1} |v − v'|^{H−1} du dv du' dv'.   (17)

By Lemma 1 in the Appendix, we conclude that

lim_{N→∞} E[ ( N^{1−H} T_2 )² ] = 16 d(H)² =: c_{3,H}.   (18)

3.1.2 The term T_4

Now for the L²-norm of the term denoted by

T_4 := N^{2H−1} Σ_{i=1}^N I_4( A_i ⊗ A_i );

by the isometry formula for multiple stochastic integrals, and using a correction term to account for the fact that the integrand in T_4 is non-symmetric, we have

E[T_4²] = 16 N^{4H−2} Σ_{i,j=1}^N ⟨ A_i, A_j ⟩²_{L²([0,1]²)} + 8 N^{4H−2} Σ_{i,j=1}^N ⟨ A_i ⊗_1 A_j, A_j ⊗_1 A_i ⟩_{L²([0,1]²)} =: T_{4,1} + T_{4,2}.

We separate the calculation of the two terms T_{4,1} and T_{4,2} above. We will see that these two terms are exactly of the same magnitude, so both calculations have to be performed precisely.

For the first term T_{4,1} we calculate each individual scalar product ⟨ A_i, A_j ⟩_{L²([0,1]²)} as

⟨ A_i, A_j ⟩_{L²([0,1]²)} = ∫_0^1 ∫_0^1 A_i(y_1, y_2) A_j(y_1, y_2) dy_1 dy_2 = a(H')² d(H)² ∫_{I_i} ∫_{I_j} |u − v|^{2H−2} du dv,

by the same expansion-and-Fubini argument as for (16), now contracting both variables y_1 and y_2 via (13). We finally obtain

⟨ A_i, A_j ⟩_{L²([0,1]²)} = ( d(H)² a(H')² / ( 2H(2H−1) ) ) N^{−2H} ( |i−j+1|^{2H} + |i−j−1|^{2H} − 2|i−j|^{2H} ),   (19)

where, more precisely, d(H)² a(H')² ( H(2H−1) )^{−1} = 1/2, so that (19) reads

⟨ A_i, A_j ⟩_{L²([0,1]²)} = (N^{−2H}/4) ( |i−j+1|^{2H} + |i−j−1|^{2H} − 2|i−j|^{2H} ).

Specifically, with the constants c_{1,H}, c_{2,H}, and c'_{1,H} given by

c_{1,H} := 2 + Σ_{k=1}^∞ ( 2k^{2H} − (k−1)^{2H} − (k+1)^{2H} )²,
c_{2,H} := 2H²(2H−1) / (4H−3),
c'_{1,H} := ( 2H(2H−1) )² = 9/16,   (20)

and using Lemmas 8, 9, and an analogous result for H = 3/4, we get, asymptotically for large N,

lim_{N→∞} N T_{4,1} = 2 c_{1,H},  1/2 < H < 3/4;   (21)
lim_{N→∞} N^{4−4H} T_{4,1} = 2 c_{2,H},  H > 3/4;   (22)
lim_{N→∞} (N / log N) T_{4,1} = 2 c'_{1,H} = 9/8,  H = 3/4.   (23)

The second term T_{4,2} can be dealt with by obtaining an expression for ⟨ A_i ⊗_1 A_j, A_j ⊗_1 A_i ⟩_{L²([0,1]²)} in the same way as the expression obtained in (16). We get

T_{4,2} = 8 d(H)⁴ a(H')⁴ N^{4H−2} Σ_{i,j=1}^N ∫_{I_i}∫_{I_j}∫_{I_i}∫_{I_j} |y − z|^{H−1} |y' − z'|^{H−1} |y − y'|^{H−1} |z − z'|^{H−1} dy dz dy' dz'.

Now, similarly to the proof of Lemma 1, we find the following three asymptotic behaviors:

- if H ∈ (1/2, 3/4), then N τ_{1,H}^{−1} T_{4,2} converges to 1, where

τ_{1,H} := 16 d(H)⁴ a(H')⁴ \tilde c_{1,H},   (24)

for an explicit constant \tilde c_{1,H} given by a convergent series;

- if H > 3/4, then N^{4−4H} τ_{2,H}^{−1} T_{4,2} converges to 1, where

τ_{2,H} := 32 d(H)⁴ a(H')⁴ ∫_0^1 (1−x) x^{4H−4} dx;   (25)

- if H = 3/4, then (N / log N) τ_{3,H}^{−1} T_{4,2} converges to 1, where

τ_{3,H} := 32 d(H)⁴ a(H')⁴.   (26)
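The series constant c_{1,H} above is easy to evaluate numerically; the sketch below (ours, not from the paper) truncates the series and also illustrates why the condition H < 3/4 is needed: the summand decays like (2H(2H−1))² k^{4H−4}, which is summable exactly when H < 3/4.

```python
# Truncated evaluation of c_{1,H} = 2 + sum_{k>=1} (2k^{2H} - (k-1)^{2H} - (k+1)^{2H})^2.

def c1(H, K=100000):
    total = 2.0
    for k in range(1, K + 1):
        rho = 2.0 * k ** (2 * H) - (k - 1) ** (2 * H) - (k + 1) ** (2 * H)
        total += rho * rho
    return total

c_bm = c1(0.5)   # Brownian case: rho(k) = 0 for every k >= 1, so c_{1,1/2} = 2 exactly
c_06 = c1(0.6)   # slightly above 2; the tail beyond K is negligible for H = 0.6
```

For H close to 3/4 the truncation error decays very slowly, mirroring the logarithmic blow-up that produces the separate normalization in the H = 3/4 case.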

Combining these results for T_{4,2} with those for T_{4,1} in lines (21), (22), and (23), we obtain the asymptotics of E[T_4²] as N → ∞:

lim_{N→∞} N E[T_4²] = e_{1,H},  if H ∈ (1/2, 3/4);
lim_{N→∞} N^{4−4H} E[T_4²] = e_{2,H},  if H ∈ (3/4, 1);
lim_{N→∞} (N / log N) E[T_4²] = e_{3,H},  if H = 3/4;

where, with τ_{i,H}, i = 1, 2, 3, given in (24), (25), (26), we defined

e_{1,H} := 2 c_{1,H} + τ_{1,H};  e_{2,H} := 2 c_{2,H} + τ_{2,H};  e_{3,H} := 2 c'_{1,H} + τ_{3,H}.   (27)

Taking into account the estimations (21), (22), (23), with c_{3,H} in (18), we see that E[T_4²] is always of smaller order than E[T_2²]; therefore the mean-square behavior of V_N is given by that of the term T_2 only, which means we obtain, for every H > 1/2,

lim_{N→∞} E[ ( N^{1−H} V_N(2,a) / √c_{3,H} )² ] = 1.   (28)

3.2 Normality of the 4th chaos term T_4 when H ≤ 3/4

The calculations for T_4 above prove that lim_{N→∞} E[G_N²] = 1 for H < 3/4, where e_{1,H} is given in (27) and

G_N := √N N^{2H−1} e_{1,H}^{−1/2} I_4( Σ_{i=1}^N A_i ⊗ A_i ).   (29)

Similarly, for H = 3/4, we showed that lim_{N→∞} E[\tilde G_N²] = 1, where e_{3,H} is given in (27) and

\tilde G_N := √(N / log N) N^{2H−1} e_{3,H}^{−1/2} I_4( Σ_{i=1}^N A_i ⊗ A_i ).   (30)

Using the criterion of Nualart and Ortiz-Latorre (Part (iv) in Theorem 1), we prove the following asymptotic normality for G_N and \tilde G_N.

Theorem 2. If H ∈ (1/2, 3/4), then G_N given by (29) converges in distribution as N → ∞:

lim_{N→∞} G_N = N(0,1).   (31)

If H = 3/4, then \tilde G_N given by (30) converges in distribution as N → ∞:

lim_{N→∞} \tilde G_N = N(0,1).   (32)

Proof. We will denote by c a generic positive constant not depending on N.

Step 0: setup and expectation evaluation. Using the derivation rule for multiple stochastic integrals, the Malliavin derivative of G_N is

D_r G_N = √N N^{2H−1} e_{1,H}^{−1/2} · 4 Σ_{i=1}^N I_3( ( A_i ⊗ A_i )(·, r) ),

and its norm is

‖D G_N‖²_{L²([0,1])} = 16 N^{4H−1} e_{1,H}^{−1} Σ_{i,j=1}^N ∫_0^1 dr I_3( ( A_i ⊗ A_i )(·, r) ) I_3( ( A_j ⊗ A_j )(·, r) ).

The product formula (5) gives

‖D G_N‖²_{L²([0,1])} = 16 N^{4H−1} e_{1,H}^{−1} Σ_{i,j=1}^N ∫_0^1 dr [ I_6( (A_i⊗A_i)(·,r) ⊗ (A_j⊗A_j)(·,r) ) + 9 I_4( (A_i⊗A_i)(·,r) ⊗_1 (A_j⊗A_j)(·,r) ) + 18 I_2( (A_i⊗A_i)(·,r) ⊗_2 (A_j⊗A_j)(·,r) ) + 3! I_0( (A_i⊗A_i)(·,r) ⊗_3 (A_j⊗A_j)(·,r) ) ] =: J_6 + J_4 + J_2 + J_0.

First note that, for the non-random term J_0, which gives the expected value of the above, we have

J_0 = 96 N^{4H−1} e_{1,H}^{−1} Σ_{i,j=1}^N ⟨ (A_i ⊗ A_i)^∼, (A_j ⊗ A_j)^∼ ⟩_{L²([0,1]⁴)} = 4 N e_{1,H}^{−1} ( T_{4,1} + T_{4,2} ).

These sums have already been treated: we know from (21) and (24) that J_0/4 converges to 1, i.e. that lim_{N→∞} E[ ‖D G_N‖²_{L²([0,1])} ] = 4. This means, by the Nualart-Ortiz-Latorre criterion, that we only need to show that all the other terms J_6, J_4, J_2 converge to zero in L²(Ω) as N → ∞.

Step 1: order-6 chaos term. We consider first the term J_6:

J_6 = c N^{4H−1} Σ_{i,j=1}^N ∫_0^1 dr I_6( (A_i⊗A_i)(·,r) ⊗ (A_j⊗A_j)(·,r) ) = c N^{4H−1} Σ_{i,j=1}^N I_6( (A_i ⊗ A_j) ⊗ (A_i ⊗_1 A_j) ).

We study the mean square of this term. Since the L² norm of the symmetrization is less than the L² norm of the corresponding unsymmetrized function, we have

E[ ( Σ_{i,j=1}^N I_6( (A_i ⊗ A_j) ⊗ (A_i ⊗_1 A_j) ) )² ] ≤ 6! Σ_{i,j,k,l=1}^N ⟨ (A_i ⊗ A_j) ⊗ (A_i ⊗_1 A_j), (A_k ⊗ A_l) ⊗ (A_k ⊗_1 A_l) ⟩_{L²([0,1]⁶)}
= 6! Σ_{i,j,k,l=1}^N ⟨ A_i, A_k ⟩_{L²([0,1]²)} ⟨ A_j, A_l ⟩_{L²([0,1]²)} ⟨ A_i ⊗_1 A_j, A_k ⊗_1 A_l ⟩_{L²([0,1]²)}.

We get

E[J_6²] ≤ c N^{8H−2} N^{−4H} Σ_{i,j,k,l=1}^N ( ∫_{I_i}∫_{I_j}∫_{I_k}∫_{I_l} du dv du' dv' |u−v|^{H−1} |u−u'|^{H−1} |v−v'|^{H−1} |u'−v'|^{H−1} )
× ( |i−k+1|^{2H} + |i−k−1|^{2H} − 2|i−k|^{2H} ) ( |j−l+1|^{2H} + |j−l−1|^{2H} − 2|j−l|^{2H} ).

First we show that, for H ∈ (1/2, 3/4), we have for large N

E[J_6²] ≤ c N^{8H−6}.   (33)

With the notation as in Step 1 of this proof, we make the change of variables u → (u − (i−1)/N) N on each of the four boxes, and similarly for the other integrands; each factor |·|^{H−1} then contributes a power N^{1−H} together with a shifted integrand of the form |u − v + i − j|^{H−1} over [0,1]⁴. We then use the fact that the dominant part of the resulting expression is the one where all indices are mutually at distance at least two units; in this case, up to a constant, we have the upper bound c |i−k|^{2H−2} for the quantity |i−k+1|^{2H} + |i−k−1|^{2H} − 2|i−k|^{2H}, and similarly for the factor in j−l. By using Riemann sums, we can then write

E[J_6²] ≤ c N^{8H−6} · (1/N⁴) Σ_{i,j,k,l=1}^N f( i/N, j/N, k/N, l/N ),

where f is a Riemann-integrable function on [0,1]⁴ and the Riemann sum converges to the finite integral of f therein. Estimate (33) follows.

Step 2: chaos terms of order 4 and 2. To treat the term

J_4 = c N^{4H−1} Σ_{i,j=1}^N ∫_0^1 dr I_4( (A_i⊗A_i)(·,r) ⊗_1 (A_j⊗A_j)(·,r) ),

since I_4(g) = I_4(\tilde g), where \tilde g denotes the symmetrization of the function g, we can write

J_4 = c N^{4H−1} Σ_{i,j=1}^N ⟨ A_i, A_j ⟩_{L²([0,1]²)} I_4( A_i ⊗ A_j ) + c N^{4H−1} Σ_{i,j=1}^N I_4( (A_i ⊗_1 A_j) ⊗ (A_i ⊗_1 A_j) ) =: J_{4,1} + J_{4,2}.

16

Both terms above have been treated in previous computations. To illustrate this, the first summand J_{4,1} can be bounded above as follows:

  E[|J_{4,1}|^2] ≤ c N^{8H-2} Σ_{i,j,k,l=0}^{N-1} ⟨A_i, A_j⟩ ⟨A_i, A_k⟩ ⟨A_k, A_l⟩ ⟨A_j, A_l⟩
   = c N^{8H-2} N^{-8H} Σ_{i,j,k,l=0}^{N-1} [ |i-j+1|^{2H} + |i-j-1|^{2H} - 2|i-j|^{2H} ] [ |i-k+1|^{2H} + |i-k-1|^{2H} - 2|i-k|^{2H} ]
     × [ |k-l+1|^{2H} + |k-l-1|^{2H} - 2|k-l|^{2H} ] [ |j-l+1|^{2H} + |j-l-1|^{2H} - 2|j-l|^{2H} ]

(all inner products are in L^2([0,1]^2)), and using the same bound c|i-j|^{2H-2} for the quantity |i-j+1|^{2H} + |i-j-1|^{2H} - 2|i-j|^{2H} when |i-j| ≥ 2, we obtain

  E[|J_{4,1}|^2] ≤ c N^{-2} Σ_{i,j,k,l=0}^{N-1} |i-j|^{2H-2} |i-k|^{2H-2} |j-l|^{2H-2} |k-l|^{2H-2} ≤ c N^{8H-6}.

This tends to zero at the speed N^{8H-6} as N → ∞, by a Riemann-sum argument, since H < 3/4. One can also show that E[|J_{4,2}|^2] converges to zero at the same speed, because (with H' = (H+1)/2)

  E[|J_{4,2}|^2] ≤ c N^{8H-2} Σ_{i,j,k,l=0}^{N-1} ⟨A_i ⊗_1 A_j, A_k ⊗_1 A_l⟩^2_{L^2([0,1]^2)}
   = c N^{8H-2} N^{-4H-4} Σ_{i,j,k,l=0}^{N-1} ( ∫_{[0,1]^4} |u-v+i-j|^{2H'-2} |u'-v'+k-l|^{2H'-2} |u-u'+i-k|^{2H'-2} |v-v'+j-l|^{2H'-2} du dv du' dv' )^2
   ≤ c N^{8H-6}.

Thus we obtain

  E[J_4^2] ≤ c N^{8H-6}.  (34)

A similar behavior can be obtained for the last term J_2, by repeating the above arguments:

  E[J_2^2] ≤ c N^{8H-6}.  (35)

Step 3: conclusion. Putting (33), (34), (35) together, and recalling the convergence result for E[T_4^2] proved in the previous subsection, we can apply the Nualart-Ortiz-Latorre criterion (using, for H = 3/4, the same method as in the case H < 3/4) to conclude the theorem's proof.

17

3.3 Non-normality of the second chaos term T_2, and limit of the 2-variation

This paragraph studies the asymptotic behavior of the term denoted by T_2 which appears in the decomposition of V_N(2,a). Recall that this is the dominant term, given by

  T_2 = 4 N^{2H-1} Σ_{i=0}^{N-1} I_2( A_i ⊗_1 A_i ),

and that, with √c_{3,H} = 4 d(H) given in (8), we showed that

  lim_{N→∞} E[ ( N^{1-H} c_{3,H}^{-1/2} T_2 )^2 ] = 1.

With T̃_2 := N^{1-H} c_{3,H}^{-1/2} T_2, one can show that, in L^2(Ω),

  lim_{N→∞} ‖D T̃_2‖^2_{L^2([0,1])} = 2 + c,

where c is a strictly positive constant. As a consequence, the Nualart-Ortiz-Latorre criterion cannot be used. However, it is straightforward to find the limit of T̃_2, and thus of V_N, in L^2(Ω) in this case. We have the following result.

Theorem 3 For all H ∈ (1/2, 1), the normalized 2-variation N^{1-H} V_N(2,a) / (4 d(H)) converges in L^2(Ω) to the Rosenblatt random variable Z(1). Note that this is the actual observed value of the Rosenblatt process at time 1.

Proof. Since we already proved that N^{1-H} T_4 converges to 0 in L^2(Ω), it is sufficient to prove that N^{1-H} T_2 / (4 d(H)) - Z(1) converges to 0 in L^2(Ω). Since T_2 is a second-chaos random variable, i.e. is of the form I_2(f_N) where f_N is a symmetric function in L^2([0,1]^2), it is sufficient to prove that

  N^{1-H} (4 d(H))^{-1} f_N converges to L_1 in L^2([0,1]^2),

where L_1, the kernel of Z(1), is given by (9). From (5) we get, with H' = (H+1)/2 and a(x) = x(2x-1),

  f_N(y_1, y_2) = 4 N^{2H-1} a(H') d(H)^2 Σ_{i=0}^{N-1} ∫_{I_i×I_i} |u-v|^{2H'-2} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) du dv.

We now show that N^{1-H} (4 d(H))^{-1} f_N converges pointwise, for y_1, y_2 ∈ [0,1], to the kernel of the Rosenblatt random variable. On the interval I_i, we may replace the evaluations of ∂_1 K^{H'} at u and v by setting u = v = i/N. We then get that f_N(y_1, y_2) is asymptotically equivalent to

  4 N^{2H-1} a(H') d(H)^2 Σ_{i=0}^{N-1} 1_{[0, i/N]}(y_1 ∨ y_2) ∂_1 K^{H'}(i/N, y_1) ∂_1 K^{H'}(i/N, y_2) ∫_{I_i×I_i} |u-v|^{2H'-2} du dv
   = 4 d(H)^2 N^{H-2} Σ_{i=0}^{N-1} 1_{[0, i/N]}(y_1 ∨ y_2) ∂_1 K^{H'}(i/N, y_1) ∂_1 K^{H'}(i/N, y_2),

where we used the identity ∫_{I_i×I_i} |u-v|^{2H'-2} du dv = a(H')^{-1} N^{-2H'} = a(H')^{-1} N^{-H-1}. Therefore we can write, for every y_1, y_2 ∈ (0,1), by invoking a Riemann-sum approximation,

  lim_{N→∞} N^{1-H} (4 d(H))^{-1} f_N(y_1, y_2) = d(H) lim_{N→∞} N^{-1} Σ_{i=0}^{N-1} 1_{[0, i/N]}(y_1 ∨ y_2) ∂_1 K^{H'}(i/N, y_1) ∂_1 K^{H'}(i/N, y_2)
   = d(H) ∫_{y_1 ∨ y_2}^{1} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(u, y_2) du = L_1(y_1, y_2).  (36)
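The block identity used above, ∫∫_{I_i×I_i} |u-v|^{2H'-2} du dv = a(H')^{-1} N^{-2H'} with a(x) = x(2x-1), reduces to a one-dimensional integral and is easy to check numerically. The following sketch is an illustration only (H and N below are arbitrary choices, not values from the paper):

```python
# Check: integral of |u-v|^(2H'-2) over a square of side L = 1/N equals
# L^(2H') / a(H'), where a(x) = x(2x-1) and H' = (H+1)/2.
H = 0.7
Hp = (H + 1) / 2                 # Rosenblatt kernel exponent H'
a = lambda x: x * (2 * x - 1)

N = 8
L = 1.0 / N                      # side of the block I_i
beta = 2 * Hp - 2                # integrand exponent, lies in (-1, 0)

# Substituting t = u - v reduces the double integral to
#   2 * int_0^L (L - t) t^beta dt;  approximate it by the midpoint rule.
n = 200_000
h = L / n
approx = 2 * sum((L - (k + 0.5) * h) * ((k + 0.5) * h) ** beta * h
                 for k in range(n))

exact = L ** (2 * Hp) / a(Hp)
assert abs(approx - exact) / exact < 1e-3
```

The same one-dimensional reduction (t = u - v) is the computation behind the closed-form value 2/((beta+1)(beta+2)) = 1/a(H') on the unit square.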

18

To finish the proof, it suffices to check that the sequence N^{1-H} f_N is Cauchy in L^2([0,1]^2). This can be checked by a straightforward calculation. Indeed, one has, with C(H) a positive constant not depending on M and N, and with I_i = [i/N, (i+1)/N], J_j = [j/M, (j+1)/M], H' = (H+1)/2:

  ‖ N^{1-H} f_N - M^{1-H} f_M ‖^2_{L^2([0,1]^2)}
   = C(H) N^{2H} Σ_{i,j=0}^{N-1} ∫_{I_i×I_i} ∫_{I_j×I_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' du dv
   + C(H) M^{2H} Σ_{i,j=0}^{M-1} ∫_{J_i×J_i} ∫_{J_j×J_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' du dv
   - 2 C(H) N^{H} M^{H} Σ_{i=0}^{N-1} Σ_{j=0}^{M-1} ∫_{I_i×I_i} ∫_{J_j×J_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' du dv.  (37)

The first two terms have already been studied: by (49) in the Appendix,

  N^{2H} Σ_{i,j=0}^{N-1} ∫_{I_i×I_i} ∫_{I_j×I_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' du dv

converges to a(H')^{-2} a(H)^{-1}, where a(x) = x(2x-1). Thus each of the first two terms in (37) converges to C(H) times that same constant as M, N go to infinity. By the change of variables already used several times (ū = (u - i/N)N on I_i, and ū' = (u' - j/M)M on J_j), one checks in the same way that the last term in (37) is asymptotically equivalent to a Riemann sum which tends, as M, N → ∞, to 2 C(H) a(H')^{-2} a(H)^{-1}, i.e. to the limit of the sum of the first two terms in (37); indeed, for all but the smallest indices i, j, the terms of the form ū/N and ū'/M are negligible in front of i/N and j/M and can be ignored. Since the last term enters (37) with a negative sign, the announced Cauchy convergence is established, finishing the proof of the theorem.

Remark 4 One can show that the 2-variations V_N(2,a) converge to zero almost surely as N goes to infinity. Indeed, the results of this section already show that V_N(2,a) converges to 0 in L^2(Ω), and thus in probability, as N → ∞; to obtain almost-sure convergence, we only need to use an argument from [4] (the proof of a proposition therein) for empirical means of discrete stationary processes.

19

3.4 Normality of the adjusted variations

According to Theorem 3, which we just proved, in the Rosenblatt case the standardization of the random variable V_N(2,a) does not converge to the normal law. But this statistic, which can be written as V_N = T_4 + T_2, has a small normal part, which is given by the asymptotics of the term T_4, as we can see from Theorem 2. Therefore V_N - T_2 will converge (under suitable scaling) to the Gaussian distribution. Of course, the term T_2, which is an iterated stochastic integral, is not practical because it cannot be observed. But, replacing it with its limit Z(1) (this IS observed), one can define an adjusted version of the statistic V_N that converges, after standardization, to the standard normal law.

The proof of this fact is somewhat delicate. If we are to subtract a multiple of Z(1) from V_N in order to recuperate T_4, and hope for a normal convergence, the first calculation would have to be as follows:

  V_N(2,a) - √c_{3,H} N^{H-1} Z(1) = ( V_N(2,a) - T_2 ) + ( T_2 - √c_{3,H} N^{H-1} Z(1) )
   = T_4 + √c_{3,H} N^{H-1} ( N^{1-H} c_{3,H}^{-1/2} T_2 - Z(1) ) =: T_4 + U_N.  (38)

The term T_4, when normalized as √N e_{1,H}^{-1/2} T_4, converges to the standard normal law, as we proved in Theorem 2. To get a normal convergence for the entire expression in (38), one may hope that the additional term

  U_N := √c_{3,H} N^{H-1} ( N^{1-H} c_{3,H}^{-1/2} T_2 - Z(1) )

goes to 0 fast enough. It is certainly true that U_N does go to 0, as we have just seen in Theorem 3. However, the proof of that theorem did not investigate the speed of this convergence of U_N. For this convergence to be fast enough, one must multiply the expression by the rate √N which is needed to ensure the normal convergence of T_4: we would need U_N = o(N^{-1/2}). Unfortunately, this is not true. A more detailed calculation will show that U_N is precisely of order N^{-1/2}. This means that we should investigate whether √N U_N itself converges in distribution to a normal law. Unexpectedly, this turns out to be true if (and only if) H < 2/3.
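For intuition about the √N Gaussian rate that T_4 contributes, one can look at the simplest case H = 1/2, where the observed process is Brownian motion, the second-chaos term is absent, and the centered quadratic variation obeys a classical CLT at rate √N. The following is a minimal illustration (not from the paper; the replication count and sample size are arbitrary):

```python
# At H = 1/2 the rescaled, centered quadratic variation of Brownian motion,
# sqrt(N) * V_N = (chi^2_N - N) / sqrt(2N) up to normalization, is
# approximately standard normal for moderate N.
import math, random

random.seed(1)
M, N = 1500, 400          # M independent replications, N increments each

def rescaled_quadratic_variation(n):
    # Sum of n squared standard normals, centered and scaled to variance 1.
    s = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))
    return (s - n) / math.sqrt(2 * n)

sample = [rescaled_quadratic_variation(N) for _ in range(M)]
mean = sum(sample) / M
var = sum((x - mean) ** 2 for x in sample) / M

# The empirical mean and variance should be close to 0 and 1.
assert abs(mean) < 0.15 and abs(var - 1.0) < 0.25
```

In the Rosenblatt case the point of this subsection is precisely that this Gaussian part survives only after the dominant second-chaos term is compensated by the observed value Z(1).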
Proposition 5 With U_N as defined in (38), and H < 2/3, we have that √N U_N converges in distribution to a centered normal with variance equal to

  f_{1,H} := 32 d(H)^4 a(H')^2 Σ_{k=1}^{∞} k^{2H-2} F(1/k),

where H' = (H+1)/2, a(x) = x(2x-1), and the function F is defined by

  F(x) = ∫_{[0,1]^4} du dv du' dv' |(u-u')x + 1|^{2H'-2} [ a(H')^2 |u-v|^{2H'-2} |u'-v'|^{2H'-2} |(v-v')x + 1|^{2H'-2}
    - 2 a(H') |u-v|^{2H'-2} |(v-u')x + 1|^{2H'-2} + |(u-u')x + 1|^{2H'-2} ].  (40)

Before proving this proposition, let us record its consequence.

Theorem 6 Let (Z(t); t ∈ [0,1]) be a Rosenblatt process with selfsimilarity parameter H ∈ (1/2, 2/3), and let the previous notation for constants prevail. Then the following convergence occurs in distribution:

  lim_{N→∞} √N / √(e_{1,H} + f_{1,H}) ( V_N(2,a) - √c_{3,H} N^{H-1} Z(1) ) = N(0, 1).  (39)

20

Proof. By the considerations preceding the statement of Proposition 5, and (38) in particular, we have

  √N ( V_N(2,a) - √c_{3,H} N^{H-1} Z(1) ) = √N T_4 + √N U_N.

Theorem 2 proves that √N T_4 converges in distribution to a centered normal with variance e_{1,H}. Proposition 5 proves that √N U_N converges in distribution to a centered normal with variance f_{1,H}. Since these two sequences of random variables live in two distinct chaoses (fourth and second, respectively), the result of [3] on joint convergence of multiple Wiener integrals implies that the sum of these two sequences converges in distribution to a centered normal with variance e_{1,H} + f_{1,H}. The theorem is proved.

To prove Proposition 5, we must first perform the calculation which yields the constant f_{1,H} therein. This result is relegated to the Appendix: the corresponding lemma there shows that E[(√N U_N)^2] converges to f_{1,H}. Another (very) technical result needed for the proof of Proposition 5, which is used to guarantee that √N U_N has a normal limiting distribution, is also recorded in the Appendix, as the last lemma therein. An explanation of why the conclusions of Proposition 5 and Theorem 6 cannot hold when H ≥ 2/3 is given at the end of this article, in the Appendix, after the proof of that last lemma. Now we prove the proposition.

Proof of Proposition 5. Since U_N is a member of the second chaos, we introduce a notation for its kernel. We write

  √N f_{1,H}^{-1/2} U_N = I_2(g_N),

where g_N is therefore the following symmetric function in L^2([0,1]^2):

  g_N(y_1, y_2) := N^{1/2} f_{1,H}^{-1/2} · 4 d(H) N^{H-1} [ N^{1-H} (4 d(H))^{-1} f_N(y_1, y_2) - L_1(y_1, y_2) ].

The first of the two Appendix lemmas just mentioned proves that E[(I_2(g_N))^2] = 2 ‖g_N‖^2_{L^2([0,1]^2)} converges to 1 as N → ∞. By a convenient modification of the Nualart-Ortiz-Latorre criterion for second-chaos sequences (recorded as part of the criterion quoted earlier in this paper), we have that I_2(g_N) will converge in distribution to a standard normal if (and only if)

  lim_{N→∞} ‖ g_N ⊗_1 g_N ‖_{L^2([0,1]^2)} = 0,

which would conclude the proof of the proposition. This fact does hold if H < 2/3. We have recorded this technical and delicate calculation as the last lemma in the Appendix.
Following the proof of this lemma, we include a discussion of why the above limit cannot be 0 when H ≥ 2/3.

4 The estimators for the selfsimilarity parameter

In this part we construct estimators for the selfsimilarity exponent of a Hermite process, based on discrete observations of the process at the times i/N, i = 0, 1, …, N. It is known that the asymptotic behavior of the statistic V_N(2,a) is related to the asymptotic properties of a class of estimators for the Hurst parameter H. This is mentioned for instance in [4]. We recall the setup of how this works. Suppose that the observed process is a Hermite process; it may be Gaussian (fractional Brownian motion) or non-Gaussian (Rosenblatt process, or even a higher

21

order Hermite process). With the filter a = {-1, +1}, the 2-variation is denoted by

  S_N(2,a) = (1/N) Σ_{i=1}^{N} ( X(i/N) - X((i-1)/N) )^2.  (41)

Recall that E[S_N(2,a)] = N^{-2H}. By estimating E[S_N(2,a)] by S_N(2,a), we can construct the estimator

  Ĥ_N(2,a) = - log S_N(2,a) / (2 log N),  (42)

which coincides with the definition in (4) given at the beginning of this paper. To prove that this is a strongly consistent estimator for H, we begin by writing

  1 + V_N(2,a) = S_N(2,a) N^{2H},

where V_N is the original quantity defined in (3), and thus

  log(1 + V_N(2,a)) = log S_N(2,a) + 2H log N = -2 ( Ĥ_N(2,a) - H ) log N.

Moreover, by Remark 4, V_N(2,a) converges almost surely to 0, and thus log(1 + V_N(2,a)) = V_N(2,a)(1 + o(1)), where o(1) converges to 0 almost surely as N → ∞. Hence we obtain

  V_N(2,a) = 2 ( H - Ĥ_N(2,a) ) (log N) (1 + o(1)).  (43)

Relation (43) means that the behavior of V_N immediately gives the behavior of Ĥ_N - H. Specifically, we can now state our convergence results. In the Rosenblatt data case, the renormalized error Ĥ_N - H does not converge to the normal law. But one can obtain from Theorem 6 an adjusted version of this error that converges to the normal distribution.

Theorem 7 Suppose that H > 1/2 and the observed process is a Rosenblatt process with selfsimilarity parameter H. Then strong consistency holds for Ĥ_N, i.e., almost surely,

  lim_{N→∞} Ĥ_N(2,a) = H.  (44)

In addition, we have the following convergence in L^2(Ω):

  lim_{N→∞} (2 d(H))^{-1} N^{1-H} log(N) ( H - Ĥ_N(2,a) ) = Z(1),  (45)

where Z(1) is the observed process at time 1. Moreover, if H < 2/3, then, in distribution as N → ∞, with c_{3,H}, e_{1,H} and f_{1,H} given in (8), (7), and (39),

  √N / √(e_{1,H} + f_{1,H}) ( 2 log(N) ( H - Ĥ_N(2,a) ) - √c_{3,H} N^{H-1} Z(1) ) → N(0, 1).

Proof. This follows from Theorem 6, Theorem 3, and relation (43).
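As an illustration of how the 2-variation statistic S_N and the log-regression estimator Ĥ_N work in practice, here is a minimal sketch (not the paper's code; the sample size is an arbitrary choice) applying them to simulated Brownian motion, i.e. the Gaussian case H = 1/2, where exact simulation of the increments is trivial:

```python
# Estimate H from the quadratic variation of a simulated path.  For Brownian
# motion the increments X(i/N) - X((i-1)/N) are i.i.d. N(0, 1/N), so
# S_N ~ N^(-2H) with H = 1/2 and the estimator should return ~0.5.
import math, random

random.seed(0)
N = 100_000

incr = [random.gauss(0.0, 1.0 / math.sqrt(N)) for _ in range(N)]

S = sum(d * d for d in incr) / N                 # the statistic S_N(2, a)
H_hat = -math.log(S) / (2.0 * math.log(N))       # the estimator H_N(2, a)

assert abs(H_hat - 0.5) < 0.01
```

For Rosenblatt data the same two formulas apply verbatim; only the simulation of the path, and the limit law of the error, differ.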

22

5 Appendix

Lemma 8 The series Σ_{k≥1} ( 2k^{2H} - (k-1)^{2H} - (k+1)^{2H} )^2 is finite if and only if H ∈ (1/2, 3/4).

Proof. Since 2k^{2H} - (k-1)^{2H} - (k+1)^{2H} = -k^{2H} f(1/k), with f(x) := (1-x)^{2H} + (1+x)^{2H} - 2 being asymptotically equivalent to 2H(2H-1)x^2 for small x, the general term of the series is equivalent to (2H)^2 (2H-1)^2 k^{4H-4}; the series Σ_k k^{4H-4} converges if and only if 4H-4 < -1, i.e. H < 3/4.

Lemma 9 When H ∈ (3/4, 1),

  N^{2-4H} Σ_{i,j=0,…,N-1; |i-j|≥2} ( |i-j-1|^{2H} + |i-j+1|^{2H} - 2|i-j|^{2H} )^2

converges to H^2 (2H-1) / (H - 3/4) as N → ∞.

Proof. Let us write x = |i-j|/N, α = 2H, and h = 1/N. Then, using a Taylor expansion of order 3, we have

  (x-h)^α + (x+h)^α - 2x^α = h^2 α(α-1) x^{α-2} + c h^3 ξ^{α-3}

for some ξ ∈ (x-h, x+h) and some constant c. Under the restriction x ≥ 2h, we have x/2 ≤ x-h, which implies that the above correction term satisfies c h^3 |ξ|^{α-3} ≤ c' h^3 x^{α-3} for some other constant c'. Now we can write the series of interest,

  Σ_{i,j=0,…,N-1; |i-j|≥2} ( |i-j-1|^{2H} + |i-j+1|^{2H} - 2|i-j|^{2H} )^2,  (46)

as bounded above by

  Σ_{i,j; |i-j|≥2} 4 |H(2H-1)|^2 |i-j|^{4H-4}  (47)
  + Σ_{i,j; |i-j|≥2} ( c_5 |i-j|^{4H-5} + c_6 |i-j|^{4H-6} ),  (48)

where c_5, c_6 are constants. Replacing the + signs in line (48) by - signs, we obtain the corresponding lower bound. We will show that the terms in line (48) are of a lower order in N than the term in line (47); this will imply that the series (46), and hence Σ_{i,j; |i-j|≥2} ⟨A_i, A_j⟩^2_{L^2([0,1]^2)} up to its N^{-4H} normalization, is asymptotically equivalent to the right-hand side of line (47). Using a limit of a Riemann sum, we have

  lim_{N→∞} N^{-2} Σ_{i,j=0,…,N-1; |i-j|≥2} |(i-j)/N|^{4H-4} = ∫_{[0,1]^2} |x-y|^{4H-4} dx dy = 1 / ((2H-1)(4H-3)).

Therefore the term in line (47) is asymptotically equivalent to N^{4H-2} · 4 H^2 (2H-1)^2 / ((2H-1)(4H-3)). On the other hand, the sums in line (48) cannot be compared to Riemann sums; rather, the corresponding series converge (indeed, 4H-5 < -1). We have

  Σ_{i,j; |i-j|≥2} |i-j|^{4H-5} = 2 Σ_{k=2}^{N-1} (N-k) k^{4H-5} ≤ c N,   Σ_{i,j; |i-j|≥2} |i-j|^{4H-6} = 2 Σ_{k=2}^{N-1} (N-k) k^{4H-6} ≤ c N.
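The Taylor-expansion step above says that the second difference (k-1)^{2H} + (k+1)^{2H} - 2k^{2H} behaves like 2H(2H-1) k^{2H-2} for large k; this is what produces the k^{4H-4} terms and the H < 3/4 summability threshold. A quick numerical check (illustrative only; the values of H and k are arbitrary):

```python
# The second difference of k^(2H) is asymptotically 2H(2H-1) k^(2H-2):
# the ratio of the two sides should be close to 1 for large k.
H = 0.6
rho = lambda k: (k + 1) ** (2 * H) + (k - 1) ** (2 * H) - 2 * k ** (2 * H)

k = 10_000
ratio = rho(k) / (2 * H * (2 * H - 1) * k ** (2 * H - 2))
assert abs(ratio - 1.0) < 1e-3
```

Squaring the equivalent gives the k^{4H-4} general term, summable exactly when 4H-4 < -1, i.e. H < 3/4, as in Lemma 8.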

23

Therefore both terms in line (48) are smaller than a constant times N, which in our case is negligible compared to N^{4H-2}. In conclusion, we have proved that N^{2-4H} times the series (46) converges to

  4 |H(2H-1)|^2 / ((2H-1)(4H-3)) = H^2 (2H-1) / (H - 3/4),

which concludes the proof.

Lemma 10 For all H > 1/2, with I_i = [i/N, (i+1)/N] (i = 0, …, N-1), H' = (H+1)/2 and a(x) = x(2x-1),

  lim_{N→∞} N^{2H} Σ_{i,j=0}^{N-1} ∫_{I_i×I_i} ∫_{I_j×I_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' dv du = a(H')^{-2} a(H)^{-1}.  (49)

Proof. We make the change of variables ū = (u - i/N)N, with du = N^{-1} dū, and we proceed similarly for the other variables u', v, v'. We obtain, for the integral we need to calculate:

  ∫_{I_i×I_i} ∫_{I_j×I_j} |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2} du' dv' dv du
   = N^{-4H} ∫_{[0,1]^4} dū dv̄ dū' dv̄' |ū-v̄|^{2H'-2} |ū'-v̄'|^{2H'-2} |ū-ū'+i-j|^{2H'-2} |v̄-v̄'+i-j|^{2H'-2},

where we used the fact that 8H'-8 = 4H-4, so that the total power of N is -(4 + 4H - 4) = -4H. This needs to be summed over i, j = 0, …, N-1; the sum can be divided into two parts: a diagonal part containing the terms i = j, and a non-diagonal part containing the terms i ≠ j. As in the calculations contained in the previous sections, one can see that the non-diagonal part is dominant. Indeed, the diagonal part of (49) is equal to

  N^{2H} N^{-4H} Σ_{i=0}^{N-1} ∫_{[0,1]^4} dū dv̄ dū' dv̄' |ū-v̄|^{2H'-2} |ū'-v̄'|^{2H'-2} |ū-ū'|^{2H'-2} |v̄-v̄'|^{2H'-2}
   = N^{1-2H} ∫_{[0,1]^4} dū dv̄ dū' dv̄' |ū-v̄|^{2H'-2} |ū'-v̄'|^{2H'-2} |ū-ū'|^{2H'-2} |v̄-v̄'|^{2H'-2},

and this tends to zero because H > 1/2. Therefore the behavior of the quantity in the statement of the lemma is given by that of

  2 N^{2H} N^{-4H} Σ_{i>j} ∫_{[0,1]^4} dū dv̄ dū' dv̄' |ū-v̄|^{2H'-2} |ū'-v̄'|^{2H'-2} |ū-ū'+i-j|^{2H'-2} |v̄-v̄'+i-j|^{2H'-2}
   = 2 N^{-2H} Σ_{k=1}^{N-1} (N-k) ∫_{[0,1]^4} dū dv̄ dū' dv̄' |ū-v̄|^{2H'-2} |ū'-v̄'|^{2H'-2} |ū-ū'+k|^{2H'-2} |v̄-v̄'+k|^{2H'-2}.
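In the sum just obtained, replacing each shifted kernel by its large-k equivalent k^{2H'-2} leaves the scalar skeleton N^{-2H} Σ_{k<N} (N-k) k^{2H-2} (using 4H'-4 = 2H-2), whose limit is the Riemann integral ∫_0^1 (1-x) x^{2H-2} dx = 1/(2H-1) - 1/(2H). This Riemann-sum convergence can be checked numerically; the sketch below is an illustration only (H and N are arbitrary):

```python
# Riemann-sum check: N^(-2H) * sum_{k=1}^{N-1} (N-k) k^(2H-2)  -->
# int_0^1 (1-x) x^(2H-2) dx = 1/(2H-1) - 1/(2H).
H = 0.7
N = 200_000

approx = sum((N - k) * k ** (2 * H - 2) for k in range(1, N)) / N ** (2 * H)
exact = 1.0 / (2 * H - 1) - 1.0 / (2 * H)    # equals 1 / (2H(2H-1))

assert abs(approx - exact) / exact < 0.05
```

The convergence is slow (the error decays like N^{1-2H}), which is why a large N is used in the check.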

24

Note that

  N^{-2H} Σ_{k=1}^{N-1} (N-k) |ū-ū'+k|^{2H'-2} |v̄-v̄'+k|^{2H'-2} = N^{-2H} Σ_{k=1}^{N-1} (N-k) k^{4H'-4} |(ū-ū')/k + 1|^{2H'-2} |(v̄-v̄')/k + 1|^{2H'-2}.

Because the terms of the form (ū-ū')/k are negligible in front of 1 for all but the smallest k's, and since each of the factors |ū-v̄|^{2H'-2} and |ū'-v̄'|^{2H'-2} integrates over [0,1]^2 to a(H')^{-1}, the expression in the previous display is asymptotically equivalent to the Riemann-sum approximation of the Riemann integral

  2 a(H')^{-2} ∫_0^1 (1-x) x^{4H'-4} dx = 2 a(H')^{-2} ( 1/(4H'-3) - 1/(4H'-2) ) = a(H')^{-2} a(H)^{-1},

where we used 4H'-3 = 2H-1 and 4H'-2 = 2H. The lemma follows.

Lemma 11 With f_{1,H} given in (39), and U_N in (38), we have lim_{N→∞} E[(√N U_N)^2] = f_{1,H}.

Proof. We have seen that √c_{3,H} = 4 d(H). We have also defined

  √N U_N = √N · 4 d(H) N^{H-1} ( N^{1-H} (4 d(H))^{-1} T_2 - Z(1) ).

Let us first compute the L^2-norm of the term in parentheses. Since this expression is a member of the second chaos, and more specifically since T_2 = I_2(f_N) and Z(1) = I_2(L_1), where f_N (given in (36)) and L_1 (given in (9)) are symmetric functions in L^2([0,1]^2), it holds that

  E[ ( N^{1-H} (4 d(H))^{-1} T_2 - Z(1) )^2 ] = 2 ‖ N^{1-H} (4 d(H))^{-1} f_N - L_1 ‖^2_{L^2([0,1]^2)}
   = 2 [ N^{2-2H} (4 d(H))^{-2} ‖f_N‖^2_{L^2([0,1]^2)} - 2 N^{1-H} (4 d(H))^{-1} ⟨f_N, L_1⟩_{L^2([0,1]^2)} + ‖L_1‖^2_{L^2([0,1]^2)} ].

The first term has already been computed. It gives (with H' = (H+1)/2 and a(x) = x(2x-1))

  N^{2-2H} (4 d(H))^{-2} ‖f_N‖^2_{L^2([0,1]^2)}
   = N^{2H} d(H)^2 a(H')^4 Σ_{i,j=0}^{N-1} ∫_{I_i×I_i} ∫_{I_j×I_j} du dv du' dv' |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'|^{2H'-2} |v-v'|^{2H'-2}
   = d(H)^2 a(H')^4 N^{-2H} Σ_{i,j=0}^{N-1} ∫_{[0,1]^4} du dv du' dv' |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'+i-j|^{2H'-2} |v-v'+i-j|^{2H'-2}.

25

By using the expression of the kernel L_1 and Fubini's theorem, the scalar product of f_N and L_1 gives (again with H' = (H+1)/2, a(x) = x(2x-1))

  N^{1-H} (4 d(H))^{-1} ⟨f_N, L_1⟩_{L^2([0,1]^2)} = N^{1-H} (4 d(H))^{-1} ∫_{[0,1]^2} dy_1 dy_2 f_N(y_1, y_2) L_1(y_1, y_2)
   = N^{H} a(H')^3 d(H)^2 Σ_{i=0}^{N-1} ∫_{I_i×I_i} du dv ∫_0^1 du' |u-v|^{2H'-2} |u-u'|^{2H'-2} |v-u'|^{2H'-2}
   = N^{H} a(H')^3 d(H)^2 Σ_{i,j=0}^{N-1} ∫_{I_i×I_i} du dv ∫_{I_j} du' |u-v|^{2H'-2} |u-u'|^{2H'-2} |v-u'|^{2H'-2}
   = a(H')^3 d(H)^2 N^{-2H} Σ_{i,j=0}^{N-1} ∫_{[0,1]^3} |u-v|^{2H'-2} |u-u'+i-j|^{2H'-2} |v-u'+i-j|^{2H'-2} du dv du'.

Finally, the last term ‖L_1‖^2_{L^2([0,1]^2)} can be written in the following way:

  ‖L_1‖^2_{L^2([0,1]^2)} = d(H)^2 a(H')^2 ∫_{[0,1]^2} |u-u'|^{2H-2} du du' = d(H)^2 a(H')^2 Σ_{i,j=0}^{N-1} ∫_{I_i} ∫_{I_j} |u-u'|^{2H-2} du du'
   = d(H)^2 a(H')^2 N^{-2H} Σ_{i,j=0}^{N-1} ∫_{[0,1]^2} |u-u'+i-j|^{2H-2} du du'.

One can check that, when drawing these three contributions together, the diagonal terms corresponding to i = j cancel. Thus we get

  E[(√N U_N)^2] = 2 (4 d(H))^2 N^{2H-1} ‖ N^{1-H} (4 d(H))^{-1} f_N - L_1 ‖^2_{L^2([0,1]^2)}
   = 32 d(H)^4 a(H')^2 Σ_{k=1}^{N-1} (1 - k/N) ∫_{[0,1]^4} du dv du' dv' [ a(H')^2 |u-v|^{2H'-2} |u'-v'|^{2H'-2} |u-u'+k|^{2H'-2} |v-v'+k|^{2H'-2}
      - 2 a(H') |u-v|^{2H'-2} |u-u'+k|^{2H'-2} |v-u'+k|^{2H'-2} + |u-u'+k|^{2H-2} ] (1 + o(1))
   = 32 d(H)^4 a(H')^2 Σ_{k=1}^{N-1} (1 - k/N) k^{2H-2} ∫_{[0,1]^4} du dv du' dv' |(u-u')/k + 1|^{2H'-2} [ a(H')^2 |u-v|^{2H'-2} |u'-v'|^{2H'-2} |(v-v')/k + 1|^{2H'-2}
      - 2 a(H') |u-v|^{2H'-2} |(v-u')/k + 1|^{2H'-2} + |(u-u')/k + 1|^{2H'-2} ] (1 + o(1))
   = 32 d(H)^4 a(H')^2 Σ_{k=1}^{N-1} (1 - k/N) k^{2H-2} F(1/k) (1 + o(1)),

26

where we introduced the function F given earlier in (40). This function F is of class C^1 on the interval [0,1]. It can be seen that

  F(0) = ∫_{[0,1]^4} du dv du' dv' ( a(H')^2 |u-v|^{2H'-2} |u'-v'|^{2H'-2} - 2 a(H') |u-v|^{2H'-2} + 1 )
   = ( a(H') ∫_{[0,1]^2} |u-v|^{2H'-2} du dv )^2 - 2 a(H') ∫_{[0,1]^2} |u-v|^{2H'-2} du dv + 1 = 1 - 2 + 1 = 0,

since ∫_{[0,1]^2} |u-v|^{2H'-2} du dv = a(H')^{-1}. Similarly, one can also calculate the derivative F' and check that F'(0) = 0. Therefore F(x) = o(x) as x → 0. To investigate the sequence

  a_N := Σ_{k=1}^{N-1} (1 - k/N) k^{2H-2} F(1/k),

we split it up into two pieces:

  a_N = Σ_{k=1}^{N-1} k^{2H-2} F(1/k) - N^{-1} Σ_{k=1}^{N-1} k^{2H-1} F(1/k) =: b_N - c_N.

Since b_N is the partial sum of a series of positive terms, one only needs to check that the series is finite. The relation F(1/k) ≤ c/k yields a general term of order k^{2H-3}, and Σ_k k^{2H-3} is finite since 2H-3 < -1, which is true. For the term c_N, one notes that k F(1/k) converges to 0, so that asymptotically

  c_N ≃ N^{-1} Σ_{k=1}^{N-1} k^{2H-2} · k F(1/k) ≤ c N^{-1} Σ_{k=1}^{N-1} k^{2H-2} ≤ c N^{2H-2},

which converges to 0. We have proved that lim_N a_N = lim_N b_N = Σ_{k≥1} k^{2H-2} F(1/k), which finishes the proof of the lemma.

Lemma 12 With

  g_N(y_1, y_2) := N^{1/2} f_{1,H}^{-1/2} · 4 d(H) N^{H-1} [ N^{1-H} (4 d(H))^{-1} f_N(y_1, y_2) - L_1(y_1, y_2) ],

we have lim_{N→∞} ‖ g_N ⊗_1 g_N ‖_{L^2([0,1]^2)} = 0 as soon as H < 2/3.

Proof. We omit the leading constant f_{1,H}^{-1/2}, which is irrelevant. Using the expression (36) for f_N, we have

  g_N(y_1, y_2) = 4 d(H)^2 a(H') N^{2H-1/2} Σ_{i=0}^{N-1} ∫_{I_i×I_i} ∂_1 K^{H'}(u, y_1) ∂_1 K^{H'}(v, y_2) |u-v|^{2H'-2} dv du - 4 d(H) N^{H-1/2} L_1(y_1, y_2).

Here and below we omit indicator functions of the type 1_{[0,(i+1)/N]}(y_1), because, as we said before, these are implicitly contained in the support of ∂_1 K^{H'}. By decomposing the expression for L_1 from (9) over the same blocks I_i as for f_N, we can now express the contraction g_N ⊗_1 g_N:

  (g_N ⊗_1 g_N)(y_1, y_2) = N^{2H-1} ( A_N - 2 B_N + C_N )(y_1, y_2).


GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

BERGMAN KERNEL ON COMPACT KÄHLER MANIFOLDS

BERGMAN KERNEL ON COMPACT KÄHLER MANIFOLDS BERGMAN KERNEL ON COMPACT KÄHLER MANIFOLDS SHOO SETO Abstract. These are the notes to an expository talk I plan to give at MGSC on Kähler Geometry aimed for beginning graduate students in hopes to motivate

More information

Testing for Regime Switching: A Comment

Testing for Regime Switching: A Comment Testing for Regime Switching: A Comment Andrew V. Carter Department of Statistics University of California, Santa Barbara Douglas G. Steigerwald Department of Economics University of California Santa Barbara

More information

Near convexity, metric convexity, and convexity

Near convexity, metric convexity, and convexity Near convexity, metric convexity, and convexity Fred Richman Florida Atlantic University Boca Raton, FL 33431 28 February 2005 Abstract It is shown that a subset of a uniformly convex normed space is nearly

More information

Quantitative Techniques (Finance) 203. Polynomial Functions

Quantitative Techniques (Finance) 203. Polynomial Functions Quantitative Techniques (Finance) 03 Polynomial Functions Felix Chan October 006 Introduction This topic discusses the properties and the applications of polynomial functions, specifically, linear and

More information

An introduction to quantum stochastic calculus

An introduction to quantum stochastic calculus An introduction to quantum stochastic calculus Robin L Hudson Loughborough University July 21, 214 (Institute) July 21, 214 1 / 31 What is Quantum Probability? Quantum probability is the generalisation

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Real Analysis: Homework # 12 Fall Professor: Sinan Gunturk Fall Term 2008

Real Analysis: Homework # 12 Fall Professor: Sinan Gunturk Fall Term 2008 Eduardo Corona eal Analysis: Homework # 2 Fall 2008 Professor: Sinan Gunturk Fall Term 2008 #3 (p.298) Let X be the set of rational numbers and A the algebra of nite unions of intervals of the form (a;

More information

Simple Estimators for Semiparametric Multinomial Choice Models

Simple Estimators for Semiparametric Multinomial Choice Models Simple Estimators for Semiparametric Multinomial Choice Models James L. Powell and Paul A. Ruud University of California, Berkeley March 2008 Preliminary and Incomplete Comments Welcome Abstract This paper

More information

As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing).

As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing). An Interlude on Curvature and Hermitian Yang Mills As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing). Suppose we wanted

More information

The properties of L p -GMM estimators

The properties of L p -GMM estimators The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion

More information

Malliavin calculus and central limit theorems

Malliavin calculus and central limit theorems Malliavin calculus and central limit theorems David Nualart Department of Mathematics Kansas University Seminar on Stochastic Processes 2017 University of Virginia March 8-11 2017 David Nualart (Kansas

More information

Independence of some multiple Poisson stochastic integrals with variable-sign kernels

Independence of some multiple Poisson stochastic integrals with variable-sign kernels Independence of some multiple Poisson stochastic integrals with variable-sign kernels Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological

More information

Economics 241B Review of Limit Theorems for Sequences of Random Variables

Economics 241B Review of Limit Theorems for Sequences of Random Variables Economics 241B Review of Limit Theorems for Sequences of Random Variables Convergence in Distribution The previous de nitions of convergence focus on the outcome sequences of a random variable. Convergence

More information

FOCK SPACE TECHNIQUES IN TENSOR ALGEBRAS OF DIRECTED GRAPHS

FOCK SPACE TECHNIQUES IN TENSOR ALGEBRAS OF DIRECTED GRAPHS FOCK SPACE TECHNIQUES IN TENSOR ALGEBRAS OF DIRECTED GRAPHS ALVARO ARIAS Abstract. In [MS], Muhly and Solel developed a theory of tensor algebras over C - correspondences that extends the model theory

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

arxiv: v1 [math.pr] 23 Jan 2018

arxiv: v1 [math.pr] 23 Jan 2018 TRANSFER PRINCIPLE FOR nt ORDER FRACTIONAL BROWNIAN MOTION WIT APPLICATIONS TO PREDICTION AND EQUIVALENCE IN LAW TOMMI SOTTINEN arxiv:181.7574v1 [math.pr 3 Jan 18 Department of Mathematics and Statistics,

More information

Random Homogenization and Convergence to Integrals with respect to the Rosenblatt Process

Random Homogenization and Convergence to Integrals with respect to the Rosenblatt Process andom Homogenization and Convergence to Integrals with respect to the osenblatt Process Yu Gu Guillaume Bal September 7, Abstract This paper concerns the random fluctuation theory of a one dimensional

More information

Stochastic solutions of nonlinear pde s: McKean versus superprocesses

Stochastic solutions of nonlinear pde s: McKean versus superprocesses Stochastic solutions of nonlinear pde s: McKean versus superprocesses R. Vilela Mendes CMAF - Complexo Interdisciplinar, Universidade de Lisboa (Av. Gama Pinto 2, 1649-3, Lisbon) Instituto de Plasmas e

More information

Analysis of the Rosenblatt process

Analysis of the Rosenblatt process Analysis of the Rosenblatt process Ciprian A. Tudor SAMOS/MATISSE, Centre d Economie de La Sorbonne, Université de Panthéon-Sorbonne Paris 1, 9, rue de Tolbiac, 75634 Paris Cedex 13, France. June 3, 6

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

Exercises to Applied Functional Analysis

Exercises to Applied Functional Analysis Exercises to Applied Functional Analysis Exercises to Lecture 1 Here are some exercises about metric spaces. Some of the solutions can be found in my own additional lecture notes on Blackboard, as the

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Vectors in Function Spaces

Vectors in Function Spaces Jim Lambers MAT 66 Spring Semester 15-16 Lecture 18 Notes These notes correspond to Section 6.3 in the text. Vectors in Function Spaces We begin with some necessary terminology. A vector space V, also

More information

Homogenization in probabilistic terms: the variational principle and some approximate solutions

Homogenization in probabilistic terms: the variational principle and some approximate solutions Homogenization in probabilistic terms: the variational principle and some approximate solutions Victor L. Berdichevsky Mechanical Engineering, Wayne State University, Detroit MI 480 USA (Dated: October

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

Malliavin Calculus: Analysis on Gaussian spaces

Malliavin Calculus: Analysis on Gaussian spaces Malliavin Calculus: Analysis on Gaussian spaces Josef Teichmann ETH Zürich Oxford 2011 Isonormal Gaussian process A Gaussian space is a (complete) probability space together with a Hilbert space of centered

More information

Stochastic Processes

Stochastic Processes Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space

More information

Normal approximation of Poisson functionals in Kolmogorov distance

Normal approximation of Poisson functionals in Kolmogorov distance Normal approximation of Poisson functionals in Kolmogorov distance Matthias Schulte Abstract Peccati, Solè, Taqqu, and Utzet recently combined Stein s method and Malliavin calculus to obtain a bound for

More information

Chapter 4. The First Fundamental Form (Induced Metric)

Chapter 4. The First Fundamental Form (Induced Metric) Chapter 4. The First Fundamental Form (Induced Metric) We begin with some definitions from linear algebra. Def. Let V be a vector space (over IR). A bilinear form on V is a map of the form B : V V IR which

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

Long-Range Dependence and Self-Similarity. c Vladas Pipiras and Murad S. Taqqu

Long-Range Dependence and Self-Similarity. c Vladas Pipiras and Murad S. Taqqu Long-Range Dependence and Self-Similarity c Vladas Pipiras and Murad S. Taqqu January 24, 2016 Contents Contents 2 Preface 8 List of abbreviations 10 Notation 11 1 A brief overview of times series and

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Appendix for "O shoring in a Ricardian World"

Appendix for O shoring in a Ricardian World Appendix for "O shoring in a Ricardian World" This Appendix presents the proofs of Propositions - 6 and the derivations of the results in Section IV. Proof of Proposition We want to show that Tm L m T

More information

Wiener integrals, Malliavin calculus and covariance measure structure

Wiener integrals, Malliavin calculus and covariance measure structure Wiener integrals, Malliavin calculus and covariance measure structure Ida Kruk 1 Francesco Russo 1 Ciprian A. Tudor 1 Université de Paris 13, Institut Galilée, Mathématiques, 99, avenue J.B. Clément, F-9343,

More information

ON A LOCALIZATION PROPERTY OF WAVELET COEFFICIENTS FOR PROCESSES WITH STATIONARY INCREMENTS, AND APPLICATIONS. II. LOCALIZATION WITH RESPECT TO SCALE

ON A LOCALIZATION PROPERTY OF WAVELET COEFFICIENTS FOR PROCESSES WITH STATIONARY INCREMENTS, AND APPLICATIONS. II. LOCALIZATION WITH RESPECT TO SCALE Albeverio, S. and Kawasaki, S. Osaka J. Math. 5 (04), 37 ON A LOCALIZATION PROPERTY OF WAVELET COEFFICIENTS FOR PROCESSES WITH STATIONARY INCREMENTS, AND APPLICATIONS. II. LOCALIZATION WITH RESPECT TO

More information

Lecture 6: Contraction mapping, inverse and implicit function theorems

Lecture 6: Contraction mapping, inverse and implicit function theorems Lecture 6: Contraction mapping, inverse and implicit function theorems 1 The contraction mapping theorem De nition 11 Let X be a metric space, with metric d If f : X! X and if there is a number 2 (0; 1)

More information

Modeling and testing long memory in random fields

Modeling and testing long memory in random fields Modeling and testing long memory in random fields Frédéric Lavancier lavancier@math.univ-lille1.fr Université Lille 1 LS-CREST Paris 24 janvier 6 1 Introduction Long memory random fields Motivations Previous

More information

On A Special Case Of A Conjecture Of Ryser About Hadamard Circulant Matrices

On A Special Case Of A Conjecture Of Ryser About Hadamard Circulant Matrices Applied Mathematics E-Notes, 1(01), 18-188 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/amen/ On A Special Case Of A Conjecture Of Ryser About Hadamard Circulant Matrices

More information

APPENDIX C: Measure Theoretic Issues

APPENDIX C: Measure Theoretic Issues APPENDIX C: Measure Theoretic Issues A general theory of stochastic dynamic programming must deal with the formidable mathematical questions that arise from the presence of uncountable probability spaces.

More information

Notes on Measure Theory and Markov Processes

Notes on Measure Theory and Markov Processes Notes on Measure Theory and Markov Processes Diego Daruich March 28, 2014 1 Preliminaries 1.1 Motivation The objective of these notes will be to develop tools from measure theory and probability to allow

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

KOLMOGOROV DISTANCE FOR MULTIVARIATE NORMAL APPROXIMATION. Yoon Tae Kim and Hyun Suk Park

KOLMOGOROV DISTANCE FOR MULTIVARIATE NORMAL APPROXIMATION. Yoon Tae Kim and Hyun Suk Park Korean J. Math. 3 (015, No. 1, pp. 1 10 http://dx.doi.org/10.11568/kjm.015.3.1.1 KOLMOGOROV DISTANCE FOR MULTIVARIATE NORMAL APPROXIMATION Yoon Tae Kim and Hyun Suk Park Abstract. This paper concerns the

More information

1 Introduction It will be convenient to use the inx operators a b and a b to stand for maximum (least upper bound) and minimum (greatest lower bound)

1 Introduction It will be convenient to use the inx operators a b and a b to stand for maximum (least upper bound) and minimum (greatest lower bound) Cycle times and xed points of min-max functions Jeremy Gunawardena, Department of Computer Science, Stanford University, Stanford, CA 94305, USA. jeremy@cs.stanford.edu October 11, 1993 to appear in the

More information

Stochastic Processes (Master degree in Engineering) Franco Flandoli

Stochastic Processes (Master degree in Engineering) Franco Flandoli Stochastic Processes (Master degree in Engineering) Franco Flandoli Contents Preface v Chapter. Preliminaries of Probability. Transformation of densities. About covariance matrices 3 3. Gaussian vectors

More information

Scattering for the NLS equation

Scattering for the NLS equation Scattering for the NLS equation joint work with Thierry Cazenave (UPMC) Ivan Naumkin Université Nice Sophia Antipolis February 2, 2017 Introduction. Consider the nonlinear Schrödinger equation with the

More information

Functional Analysis: Assignment Set # 3 Spring 2009 Professor: Fengbo Hang February 25, 2009

Functional Analysis: Assignment Set # 3 Spring 2009 Professor: Fengbo Hang February 25, 2009 duardo Corona Functional Analysis: Assignment Set # 3 Spring 9 Professor: Fengbo Hang February 5, 9 C6. Show that a norm that satis es the parallelogram identity: comes from a scalar product. kx + yk +

More information

Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments

Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments Austrian Journal of Statistics April 27, Volume 46, 67 78. AJS http://www.ajs.or.at/ doi:.773/ajs.v46i3-4.672 Maximum Likelihood Drift Estimation for Gaussian Process with Stationary Increments Yuliya

More information

AMS 212A Applied Mathematical Methods I Appendices of Lecture 06 Copyright by Hongyun Wang, UCSC. ( ) cos2

AMS 212A Applied Mathematical Methods I Appendices of Lecture 06 Copyright by Hongyun Wang, UCSC. ( ) cos2 AMS 22A Applied Mathematical Methods I Appendices of Lecture 06 Copyright by Hongyun Wang UCSC Appendix A: Proof of Lemma Lemma : Let (x ) be the solution of x ( r( x)+ q( x) )sin 2 + ( a) 0 < cos2 where

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information