Statistical Aspects of the Fractional Stochastic Calculus


Ciprian A. Tudor, Frederi G. Viens

SAMOS-MATISSE, Centre d'Économie de La Sorbonne, Université de Paris Panthéon-Sorbonne, 90, rue de Tolbiac, 75634 Paris, France.
Dept. Statistics and Dept. Mathematics, Purdue University, 150 N. University St., West Lafayette, IN, USA. viens@purdue.edu

September 2006

Abstract

We apply the techniques of stochastic integration with respect to fractional Brownian motion and the theory of regularity and supremum estimation for stochastic processes to study the maximum likelihood estimator (MLE) for the drift parameter of stochastic processes satisfying stochastic equations driven by fractional Brownian motion with any level of Hölder-regularity (any Hurst parameter). We prove existence and strong consistency of the MLE for linear and nonlinear equations. We also prove that a version of the MLE using only discrete observations is still a strongly consistent estimator.

AMS subject classifications. Primary: 62M09; secondary: 60G18, 60H07, 60H10.
Key words and phrases: maximum likelihood estimator, fractional Brownian motion, strong consistency, stochastic differential equation, Malliavin calculus, Hurst parameter.
Acknowledgment of support: second author partially supported by NSF grant no.

1 Introduction

Stochastic calculus with respect to fractional Brownian motion (fbm) has recently known an intensive development, motivated by the wide array of applications of this family of stochastic processes. For example, fbm is used as a model in network traffic analysis; recent work and empirical studies have shown that traffic in modern packet-based high-speed networks frequently exhibits fractal behavior over a wide range of time scales; this has major implications for the statistical study of such

traffic. Another example of application is in quantitative finance and econometrics: the fractional Black-Scholes model has been recently introduced (see e.g. [9], [4]), and this motivates the statistical study of stochastic differential equations governed by fbm.

The topic of parameter estimation for stochastic differential equations driven by standard Brownian motion is of course not new. Diffusion processes are widely used for modeling continuous-time phenomena; therefore statistical inference for diffusion processes has been an active research area over the last decades. When the whole trajectory of the diffusion can be observed, the parameter estimation problem is somewhat simpler. But in practice data is typically collected at discrete times, and thus of particular contemporary interest are works in which an approximate estimator, using only information gleaned from the underlying process in discrete time, is able to do as well as an estimator that uses continuously gathered information. This is in fact a rather challenging question, and several methods have been employed to construct good estimators for discretely observed diffusions; among these methods, we refer to numerical approximation of the likelihood function (see Aït-Sahalia [], Poulsen [34], Beskos et al. [4]), martingale estimating functions (see Bibby and Sørensen [5]), indirect statistical inference (see Gouriéroux et al. [8]), or the Bayesian approach (see Elerian et al. [5]); see also some sharp probabilistic bounds on the convergence of estimators in [6], or [], [33], [] for particular situations. We mention the survey [38] for parameter estimation in discrete cases, and further details in the works [7], [], or the book [].

Parameter estimation questions for stochastic differential equations driven by fbm are, in contrast, in their infancy. Some of the main contributions include [5], [4], [36] or [6]. We take up these estimation questions in this article. Our purpose is to contribute further to the study of the statistical aspects of the fractional stochastic calculus, by introducing the systematic use of efficient tools from stochastic analysis, to yield results which hold in some non-linear generality. We consider the following stochastic equation

X_t = θ ∫_0^t b(X_s) ds + B^H_t,   X_0 = 0,   (1)

where B^H is a fbm with Hurst parameter H ∈ (0,1) and the nonlinear function b satisfies some regularity and non-degeneracy conditions. We estimate the parameter θ on the basis of the observation of the whole trajectory of the process X. The parameter H, which is assumed to be known, characterizes the local behavior of the process, with Hölder-regularity increasing with H: if H = 1/2, fbm is standard Brownian motion (BM), and thus has independent increments; if H > 1/2, the increments of fbm are positively correlated, and the process is more regular than BM;

if H < 1/2, the increments are negatively correlated, and the process is less regular than BM. H also characterizes the speed of decay of the correlation between distant increments. Estimating long-range dependence parameters is a difficult problem in itself, which has received various levels of attention depending on the context; the text [3] can be consulted for an overview of the question; we have found the yet unpublished work [], available online, which appears to propose a good solution applicable directly to fbm. Herein we do not address the Hurst parameter estimation issue.

The results we prove in this paper are as follows:

- for every H ∈ (0,1), we give concrete assumptions on the nonlinear coefficient b to ensure the existence of the maximum likelihood estimator (MLE) for the parameter θ (Proposition 1);
- for every H ∈ (0,1) and under certain hypotheses on b which include nonlinear classes, we prove the strong consistency of the MLE (Theorems 2 and 3, depending on whether H < 1/2 or H > 1/2; and Proposition 2 and Lemma 3 for the scope of non-linear applicability of these theorems); note that for H > 1/2 and b linear, this has also been proved in [4];
- for every H ∈ (0,1), the bias and mean-square error for the MLE are estimated in the linear case (Proposition 3); this result was established for H > 1/2 in [4].

In this paper we also present a first practical implementation of the MLE studied herein, using only discrete observations of the solution X of equation (1), by replacing integrals with their Riemann sum approximations. We show that the discretization time-step for the Riemann sum approximations of the MLE can be fixed while still allowing for a strongly consistent estimator in large time, a result valid in the linear case and in some non-linear classes (Proposition 5 and Theorem 4).

To establish all these results, we use techniques of stochastic analysis including the Malliavin calculus and supremum estimations for stochastic processes. The Malliavin calculus, or stochastic calculus of variations, was introduced by P. Malliavin in [9] and developed by D. Nualart in [3]. Its original purpose was to study the existence and the regularity of the density of solutions to stochastic differential equations. Since our hypotheses in the present paper to ensure existence and strong consistency of the MLE are given in terms of certain densities (see Condition (C)), the techniques of the Malliavin calculus appear as a natural tool.

We believe our paper is the first instance where the Malliavin calculus and supremum estimations are used to treat parameter estimation questions for fractional stochastic equations. These techniques should have applications and implications in statistics and probability reaching beyond the question of MLE for fbm. For example, apart from providing the first proof of strong consistency of the MLE for an fbm-driven differential equation with non-linear drift or with H < 1/2, more generally, in (Itô-) diffusion models, the strong consistency of an estimator follows if one can prove that an expression of the type I_t := ∫_0^t f²(X_s) ds tends to infinity as t → ∞ almost surely. To our knowledge, a limited number of methods has been employed to deal with this kind of problem: for example, if X is Gaussian, the Laplace transform can be computed explicitly to show that lim_{t→∞} I_t = ∞ a.s.; also, if X is an ergodic diffusion, a local time argument can be used to show the above convergence. Particular situations have also been considered in [], []. We believe that our stochastic analytic tools constitute a new possibility, judging by the fact that the case H < 1/2 is well within the reach of our tools, in contrast with the other above-mentioned methods, as employed in particular in [4] (see however a general Bayesian-type problem discussed in [5]).

The organization of our paper is as follows. Section 2 contains preliminaries on fbm. In Section 3 we show the existence of the MLE for the parameter θ in (8), and in Section 4 we study its asymptotic behavior. Section 5 contains some additional results in the case when the drift function is linear. In Section 6, a discretized version of the MLE is studied. The Appendix (Section 7) contains most of the technical proofs.

We gratefully acknowledge our debt to the insightful comments of the editor, associate editor, and two referees, which resulted in several important improvements on an earlier version of this paper.

2 Preliminaries on fractional Brownian motion and fractional calculus

We consider (B^H_t)_{t∈[0,T]}, B^H_0 = 0, a fractional Brownian motion with Hurst parameter H ∈ (0,1), on a probability space (Ω, F, P). This is a centered Gaussian process with covariance function R given by

R(t,s) = E[B^H_t B^H_s] = (1/2) ( t^{2H} + s^{2H} − |t − s|^{2H} ),   s, t ∈ [0,T].   (2)

Let us denote by K the kernel of the fbm, such that (see e.g. [3])

B^H_t = ∫_0^t K(t,s) dW_s   (3)

where W is a Wiener process (standard Brownian motion) under P. Denote by E the set of step functions on [0,T] and let H be the canonical Hilbert space of the fbm; that is, H is the closure of E with respect to the scalar product ⟨1_{[0,t]}, 1_{[0,s]}⟩_H = R(t,s). The mapping 1_{[0,t]} ↦ B^H_t can be extended to an isometry between H and the Gaussian space generated by B^H, and we denote by B^H(φ) the image of φ ∈ H by this isometry. We also introduce the operator K* from E to L²([0,T]) defined by

(K* φ)(s) = K(T,s) φ(s) + ∫_s^T ( φ(r) − φ(s) ) (∂K/∂r)(r,s) dr.   (4)

With this notation we have (K* 1_{[0,t]})(s) = K(t,s), and hence the process

W_t = ∫_0^t ((K*)^{-1} 1_{[0,t]})(s) dB^H_s   (5)

is a Wiener process (see []); in fact, it is the Wiener process referred to in formula (3), and for any non-random φ ∈ H we have B^H(φ) = ∫_0^T (K* φ)(s) dW(s), where the latter is a standard Wiener integral with respect to W.

Lastly we recall some elements of fractional calculus. Let f be an L¹ function over the interval [0,T] and α > 0. Then

I^α_{0+} f(t) = (1/Γ(α)) ∫_0^t f(s) (t − s)^{α−1} ds   and   D^α_{0+} f(t) = (1/Γ(1−α)) (d/dt) ∫_0^t f(s) (t − s)^{−α} ds

are the Riemann-Liouville fractional integral and derivative of order α ∈ (0,1). The latter admits the following Weyl representation

D^α_{0+} f(t) = (1/Γ(1−α)) ( f(t)/t^α + α ∫_0^t ( f(t) − f(y) ) (t − y)^{−α−1} dy ),

where the convergence of the integral at t = y holds in the L^p-sense (p ≥ 1). We can formally define, for negative orders (α < 0), the fractional integral operators as I^α = D^{−α}. If K_H denotes the linear operator (isomorphism) from L²([0,T]) onto I^{H+1/2}_{0+}(L²([0,T])) whose kernel is K(t,s), then [3] can be consulted for formulas for K_H; we provide here the formulas for the inverse operator of K_H in terms of fractional integrals:

K_H^{-1} h (s) = s^{H−1/2} I^{1/2−H}_{0+} ( s^{1/2−H} h′(s) )(s),   H ≤ 1/2,   (6)

and

K_H^{-1} h (s) = s^{H−1/2} D^{H−1/2}_{0+} ( s^{1/2−H} h′(s) )(s),   H ≥ 1/2.   (7)
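The covariance (2) is all that is needed to generate fbm numerically. The following minimal sketch (ours, not part of the paper) builds the covariance matrix R(t,s) on a uniform grid and samples one path by a Cholesky factorization; the grid size, horizon and small diagonal jitter are arbitrary implementation choices.

```python
# Minimal sketch (ours, not from the paper): build the covariance (2) on a grid and
# sample one fbm path, exact in law, via a Cholesky factorization.
import numpy as np

def fbm_covariance(times, H):
    """R(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2, as in (2)."""
    t, s = np.asarray(times)[:, None], np.asarray(times)[None, :]
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - np.abs(t - s) ** (2 * H))

def sample_fbm(times, H, rng):
    """One fbm path on `times` (all times > 0); O(n^3), exact apart from the jitter."""
    R = fbm_covariance(times, H)
    L = np.linalg.cholesky(R + 1e-12 * np.eye(len(times)))  # small jitter for stability
    return L @ rng.standard_normal(len(times))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n, H = 1.0, 500, 0.3
    times = np.linspace(T / n, T, n)
    print(sample_fbm(times, H, rng)[:5])
```

For large grids one would replace the O(n^3) Cholesky step by a circulant-embedding method, but the direct factorization is the simplest faithful use of (2).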

3 The maximum likelihood estimator for fbm-driven stochastic differential equations

We will analyze the estimation of the parameter θ ∈ R based on the observation of the solution X of the stochastic differential equation

X_t = θ ∫_0^t b(X_s) ds + B^H_t,   X_0 = 0,   (8)

where B^H is a fbm with H ∈ (0,1) and b : R → R is a measurable function. Let us recall some known results concerning equation (8):

- In [3] the authors proved the existence and uniqueness of a strong solution to equation (8) under the following assumptions on the coefficient b: if H < 1/2, b satisfies the linear growth condition |b(x)| ≤ C(1 + |x|); if H > 1/2, b is Hölder-continuous of order α ∈ (1 − 1/(2H), 1).
- In [7] an existence and uniqueness result for (8) is given when H > 1/2 under the hypothesis b(x) = b₁(x) + b₂(x), with b₁ satisfying the above conditions and b₂ being a bounded nondecreasing left (or right) continuous function.

Remark 1 The case of the Hölder-continuous drift is elementary: it is not difficult to show that the usual Picard iteration method can be used to prove the existence and uniqueness of a strong solution.

Throughout the paper, from now on, we will typically avoid the use of explicit H-dependent constants appearing in the definitions of the operator kernels related to this calculus, since our main interest consists of asymptotic properties for estimators. In consequence, we will use the notation C(H), c(H), c_H, ... for generic constants depending on H, which may change from line to line.

Our construction is based on the following observation (see [3]). Consider the process B̃^H_t = B^H_t + ∫_0^t u_s ds, where the process u is adapted and with integrable paths. Then we can write

B̃^H_t = ∫_0^t K(t,s) dZ_s   (9)

where

Z_t = W_t + ∫_0^t K_H^{-1} ( ∫_0^· u_r dr )(s) ds.   (10)

We have the following Girsanov theorem.

Theorem 1 (i) Assume that u is an adapted process with integrable paths such that

t ↦ ∫_0^t u_s ds ∈ I^{H+1/2}_{0+}(L²([0,T]))   a.s.

(ii) Assume that E(V_T) = 1, where

V_T = exp( − ∫_0^T K_H^{-1}( ∫_0^· u_r dr )(s) dW_s − (1/2) ∫_0^T K_H^{-1}( ∫_0^· u_r dr )(s)² ds ).   (11)

Then, under the probability measure P̃ defined by dP̃/dP = V_T, the process Z defined in (10) is a Brownian motion and the process B̃^H of (9) is a fractional Brownian motion on [0,T].

Hypothesis. We need to make, at this stage and throughout the remainder of the paper, the following assumption on the drift: b is differentiable with bounded derivative b′; thus the affine growth condition holds.

This Girsanov theorem is the basis for the following expression of the MLE.

Proposition 1 Denote, for every t ∈ [0,T],

Q_t = Q_t(X) = K_H^{-1}( ∫_0^· b(X_r) dr )(t).   (12)

Then Q ∈ L²([0,T]) almost surely and the MLE is given by

θ̂_t = ∫_0^t Q_s dW_s / ∫_0^t Q_s² ds.   (13)

Before proving Proposition 1, we need the following estimates.

Lemma 1 For every s, t ∈ [0,T],

sup_{s≤t} |X_s| ≤ ( Ct + sup_{s≤t} |B^H_s| ) e^{Ct}   (14)

and

|X_t − X_s| ≤ C ( 1 + sup_{u≤t} |X_u| ) |t − s| + |B^H_t − B^H_s|.   (15)

8 Statistics of Fractional Brownian Motion 8 Proof: With C the linear growth constant of b, we have, for any s jx s j Z s jj jb(x u )j du + B H s and by Gronwall's lemma jx s j Cs + sup us C B H u Z s ( + jx u j) du + sup B H u us e Cs ; s [; T ] and the estimate (4) follows. The second estimate follows by b's ane growth. Proof of Proposition. Let h (t) = b (X s ) ds: We prove that the process h satises i) and ii) of Theorem. Note rst that the application of the operator K H preserves the adaptability. We treat separately the cases when H is bigger or less than one half. The case H < =: To prove i), we only need to show that Q L ([; T ]) P-a.s. Now using relation (6) we thus have, for some constant C H which may change from line to line, using the hypothesis jb (x)j C ( + jxj), for all s T, Z s jq s j C H s H = (s u) = H u = H b (X u ) du C H + sup jx u j ; (6) us which we can rewrite, thanks to Lemma, as sup jq s j C (H; T ) st + sup jx (s)j st which, thanks to inequality (4), is of course much stronger than Q L ([; T ]) a.s.. To prove ii) it suces to show that there exists a constant > such that sup E exp(q s) < : st Indeed, one can invoke an argument used by Friedman in [7], Theorem., page 5, showing that this condition implies the so-called Novikov condition (see [35]), itself implying ii). Since Q satises (6), the above exponential moment is a trivial consequence of inequality (4) and the Fernique's theorem on the exponential integrability of the square of a seminorm of a Gaussian process.! ;

9 Statistics of Fractional Brownian Motion 9 The case H > =: Using formula (7) we have in this case that Q s = c H "s H b(x s ) + H Z s s H b(x s )s H b(x u )u H (s u) H+ = c H "s H b(x s ) + H Z s s H s H u H b(xs ) du + H Z s s H and using the fact that Z s s we get jq s j c H s H jb(x s )j + s H H b(x s ) b(x u ) (s u) H+ u H du # (s u) H+ u H (s u) H du = c(h)s H Z s b(x s ) b(x u ) (s u) H+ u H du! du # := A(s) + B(s): The rst term A(s) above can be treated as in [3], proof of Theorem 3, due to our Lipschitz assumption on b. We obtain that for every >, E exp( A sds) < : (8) To obtain the same conclusion for the second summand B(s) we note that by Lemma, up to a multiplicative constant, the random variable is bounded by + sup jx u j ut G =! sup u<st jx s X u j ju sj H " jt sj H+" + sup u<st and it suces to use the calculations contained in [3]. jbs H Bu H j ju sj H " Conclusion. Properties i) and ii) are established for both cases of H, and we may apply Theorem. Expression (3) for the MLE follows a standard calculation, since (using the notation P for the probability measure induced by (X s ) st, and the fact that P = P), (7) F () := log dp dp = Q s dw s Q sds: (9)

We finish this section with some remarks that will relate our construction to previous works ([5], [4], [36]). Details about these links are given in Section 5.

Alternative form of the MLE. By (8) we can write, integrating the quantity ((K*)^{-1} 1_{[0,t]})(s) for s between 0 and t,

∫_0^t ((K*)^{-1} 1_{[0,t]})(s) dX_s = θ ∫_0^t ((K*)^{-1} 1_{[0,t]})(s) b(X_s) ds + W_t.   (20)

On the other hand, by (8) again,

X_t = ∫_0^t K(t,s) dZ_s   (21)

where Z is given by (10). Therefore, we have the equality

∫_0^t ((K*)^{-1} 1_{[0,t]})(s) dX_s = Z_t.   (22)

By combining (20) and (22) we obtain

∫_0^t K_H^{-1}( ∫_0^· b(X_r) dr )(s) ds = ∫_0^t ((K*)^{-1} 1_{[0,t]})(s) b(X_s) ds,

and thus the function t ↦ ∫_0^t ((K*)^{-1} 1_{[0,t]})(s) b(X_s) ds is absolutely continuous with respect to the Lebesgue measure and

Q_t = (d/dt) ∫_0^t ((K*)^{-1} 1_{[0,t]})(s) b(X_s) ds.   (23)

By (22) we get that the function (19) can be written as

F(θ) = θ ∫_0^t Q_s dZ_s − (θ²/2) ∫_0^t Q_s² ds.

As a consequence, the maximum likelihood estimator θ̂_t has the equivalent form

θ̂_t = ∫_0^t Q_s dZ_s / ∫_0^t Q_s² ds.   (24)

The above formula (24) shows explicitly that the estimator θ̂_t is observable if we observe the whole trajectory of the solution X.
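To make the observability statement concrete, here is a small numerical sketch (ours, not the authors' procedure): it samples an fbm path, produces a trajectory of (8) by a crude Euler scheme, and then evaluates Q_t for H < 1/2 through a Riemann sum of the kernel appearing in (6); this anticipates the discretization (49) of Section 6. The Euler scheme, the grid, the example drift b(x) = tanh(x) and the omission of the H-dependent constant c_H are all simplifying assumptions.

```python
# Minimal sketch (ours): simulate (8) by an Euler scheme driven by a sampled fbm path,
# then evaluate Q_t for H < 1/2 by a Riemann sum of the kernel in (6), dropping the
# H-dependent constant c_H as the paper itself does for asymptotic purposes.
import numpy as np

def Q_riemann(b, X, times, H):
    """Riemann-sum version of Q_t = K_H^{-1}(int_0^. b(X_r) dr)(t), H < 1/2, up to c_H."""
    t, s, ds = times[-1], times[:-1], times[1] - times[0]
    w = (t - s) ** (-0.5 - H) * s ** (0.5 - H)     # kernel (t-s)^{-1/2-H} s^{1/2-H}
    return t ** (H - 0.5) * np.sum(w * b(X[:-1])) * ds

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    H, theta, T, n = 0.3, -1.0, 10.0, 1500
    dt = T / n
    times = np.linspace(dt, T, n)
    # sample an fbm path on `times` by Cholesky, as in the sketch of Section 2
    R = 0.5 * (times[:, None] ** (2 * H) + times[None, :] ** (2 * H)
               - np.abs(times[:, None] - times[None, :]) ** (2 * H))
    fbm = np.linalg.cholesky(R + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    # Euler scheme for X_t = theta * int_0^t b(X_s) ds + B^H_t, X_0 = 0
    b, X, x = np.tanh, np.empty(n), 0.0
    dB = np.diff(np.concatenate(([0.0], fbm)))
    for i in range(n):
        x = x + theta * b(x) * dt + dB[i]
        X[i] = x
    print("Q_T computed from the observed trajectory:", Q_riemann(b, X, times, H))
```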

4 Asymptotic behavior of the maximum likelihood estimator

This section is devoted to studying the strong consistency of the MLE (13). A similar result has been proven in the case b(x) ≡ x and H > 1/2 in [4]. We propose here a proof of strong consistency for a class of functions b which contains significant non-linear examples. By replacing (10) in (24), we obtain that

θ̂_t − θ = ∫_0^t Q_s dW_s / ∫_0^t Q_s² ds,

with Q given by (12) or (23). To prove that θ̂_t → θ almost surely as t → ∞ (which means by definition that the estimator θ̂_t is strongly consistent), by the strong law of large numbers we need only show that

lim_{t→∞} ∫_0^t Q_s² ds = ∞   a.s.   (25)

To prove that lim_{t→∞} ∫_0^t Q_s² ds = ∞ in a non-linear case, it is necessary to make some assumption of non-degeneracy on the behavior of b. In order to illustrate our method using the least amount of technicalities, we will restrict our study to the case where the function |b| satisfies a simple probabilistic estimate with respect to fractional Brownian motion.

(C) There exist positive constants t₀ and K_b, both depending only on H and the function b, and a constant γ < 1/(2(1+H)), such that for all t ≥ t₀ and all ε > 0, we have P̃[ |Q_t(ω̃)| / √t < ε ] ≤ ε t^{γH} K_b, where under P̃, ω̃ has the law of fractional Brownian motion with parameter H.

4.1 The case H < 1/2

In this part we prove the following result.

Theorem 2 Assume that H < 1/2 and that Condition (C) holds. Then the estimator θ̂_t is strongly consistent, that is, lim_{t→∞} θ̂_t = θ almost surely.

Before proving this theorem, we discuss Condition (C). To understand this condition, we first note that, with μ_t^H the positive measure on [0,t] defined by μ_t^H(dr) = (r/t)^{1/2−H} (t − r)^{−1/2−H} dr, according to the representation (16) we have

Q_t = ∫_0^t μ_t^H(ds) b(ω̃_s)

and therefore, by the change of variables r = s/t,

Q_t / √t = t^{−H} ∫_0^1 μ^H(dr) b(ω̃_{tr})   (26)

=_d ∫_0^1 μ^H(dr) b( t^H ω̃_r ) t^{−H},   (27)

where μ^H := μ_1^H and the last equality is in distribution under P̃. If b has somewhat of a linear behavior, we can easily imagine that b(t^H ω̃_r)/t^H will be of the same order as b(ω̃_r). Therefore Q_t/√t should behave, in distribution for fixed t, similarly to the universal random variable ∫_0^1 μ^H(dr) b(ω̃_r) (whose distribution depends only on b and H). Generally speaking, if this random variable has a bounded density, the strongest version of Condition (C), i.e. with γ = 0, will follow. In the linear case, of course, the factors t^H disappear from expression (27), leaving a random variable which is indeed known to have a bounded density, uniformly in t, by the arcsine law. The presence of the factor t^{γH} in Condition (C) gives even more flexibility, however, since in particular it allows a bound on the density of Q_t/√t to be proportional to t^{γH}.

Leaving aside these vague considerations, we now give, in Proposition 2, a simple sufficient condition on b which implies Condition (C). The proof of this condition uses the tools of the Malliavin calculus; as such, it requires some extra regularity on b. We also give a class of non-linear examples of b's satisfying (C) (Condition (29) in Lemma 3) which are more restricted in their global behavior than in Proposition 2, but do not require any sort of local regularity for b.

Proposition 2 Assume H < 1/2. Assume that b′ is bounded and that b″ satisfies |b″(x)| ≤ b₂ (1 + |x|)^{−α} for some α ∈ (H/(1−H), 1). Assume moreover that |b′| is bounded below by a positive constant b₁. Then Condition (C) holds (with a constant γ determined by α and H).

Remark 2 The condition γ < 1/(2(1+H)) from Condition (C) translates as α > H/(1−H), which is consistent with α < 1 because H < 1/2. The non-degeneracy condition on |b′| above can be relaxed. It is possible, for example, to prove that if, for x ≥ x₀, only the condition |b′(x)| ≥ x^{−α} holds, then Condition (C) holds as long as α does not exceed a positive constant α₀(H) depending only on H. However, such a proof is more technical than the one given below, and we omit it. The hypothesis of fractional power decay on b″, while crucial, does allow b to have a truly non-linear behavior. Compare with Lemma 3 below, which would correspond to the case α = 1 here. The hypotheses of the above proposition imply that b is monotone.
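Condition (C) is a small-ball estimate, and one can probe it numerically for a given drift b. The sketch below (ours; a crude Monte Carlo illustration, not a substitute for Proposition 2 or Lemma 3) simulates independent fbm paths ω̃, evaluates Q_t(ω̃)/√t with the helper Q_riemann from the previous sketch (case H < 1/2), and reports the empirical frequency of the event { |Q_t|/√t < ε } for a few values of ε. Sample sizes, grid and the drift are arbitrary choices.

```python
# Crude Monte Carlo probe of Condition (C) (ours): estimate P[ |Q_t| / sqrt(t) < eps ]
# when the observed path is an fbm.  Assumes the helper Q_riemann from the previous
# sketch; path count, grid size and the drift b are illustration choices only.
import numpy as np

def small_ball_frequency(b, H, t, eps_values, n_paths=500, n_grid=400, seed=2):
    rng = np.random.default_rng(seed)
    times = np.linspace(t / n_grid, t, n_grid)
    R = 0.5 * (times[:, None] ** (2 * H) + times[None, :] ** (2 * H)
               - np.abs(times[:, None] - times[None, :]) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n_grid))
    q_over_sqrt_t = np.array([
        Q_riemann(b, L @ rng.standard_normal(n_grid), times, H) / np.sqrt(t)
        for _ in range(n_paths)])
    return {eps: float(np.mean(np.abs(q_over_sqrt_t) < eps)) for eps in eps_values}

if __name__ == "__main__":
    print(small_ball_frequency(np.tanh, H=0.3, t=20.0, eps_values=(0.05, 0.1, 0.2)))
```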

13 Statistics of Fractional Brownian Motion 3 The proof of Proposition requires a criterion from the Malliavin calculus, which we present here. The book [3] by D. Nualart is an excellent source for proofs of the results we quote. Here we will only need to use the following properties of the Malliavin derivative D with respect to W (recall that W is the standard Brownian motion used in the representation (3), i.e. dened in (5)). For simplicity of notation we assume that all times are bounded by T =. The operator D, from a subset of L () into L ( [; ]), is essentially the only one which is consistent with the following two rules:. Consider a centered random variable in the Gaussian space generated by W (rst chaos); it can be therefore represented as Z = W (f) = R f (s) dw (s) for some non-random function f L ([; ]). The operator D picks out the function f, in the sense that for any r [; ], D r Z = f (r) :. D is compatible with the chain rule, in the sense that for any C (R) such that both F := (Z) and (Z) belongs to L (), for any r [; ], D r F = D r (Z) = (Z) D r Z = (Z) f (r) : For instance, using these two rules, denition (3) and formula (5) relative to the fbm ~! under ~P, we have that under ~P, for any r s, D r b t H ~! s = t H b t H ~! s K (s; r) : (8) It is convenient to dene the domain of D as the subset D ; of r.v.'s F L () such that D F L ( [; ]). Denote the norm in L ([; ]) by kk. The set D ; forms a Hilbert space under the norm dened by kf k ; = E jf j + E kd F k = E jf j + E Z jd r F j dr: Similarly, we can dene the second Malliavin derivative D F as a member of L [; ], using an iteration of two Malliavin derivatives, and its associated Hilbert space D ;. Non-Hilbert spaces, using other powers than, can also be dened. For instance, the space D ;4 is that of random variables F having two Malliavin derivatives, and satisfying kf k 4 ;4 = E jf j4 + E kd F k + E D ;F 4 L ([;] ) Z Z = E jf j 4 + E jd r F j dr + E jd r D s F j drds <

14 Statistics of Fractional Brownian Motion 4 We also note that the so-called Ornstein-Uhlenbeck operator L acts as follows (see [3, Proposition.4.4]): LF = L (Z) = Z (Z) + (Z) kfk : We have the following Lemma, whose proof we omit because it follows from ([3, Proposition... and Exercise..]). h Lemma Let F be a random variable in D ;4, such that E kdf k 8i <. Then F has a continuous and bounded density f given by " LF DF DF ; D F!# L f (x) = E (F >x) kdf k ([;] ) kdf k 4 Proof of Proposition. Step : strategy. Using the identity in law (7), and the shorthand notation = H, let F = Q Z t p = (dr) b th ~! r t t H : It is sucient to prove that F has a density which is bounded by K b t H where the constant K b depends only on b and H. Indeed ~P jq t (~!)j = p t < " R " K bt H dx = "t H K b. In this proof, C b;h denotes a constant depending only on b and H, whose value may change from line to line. Step : calculating the terms in Lemma. We begin with the calculation of DF. Since the Malliavin derivative is linear, we get D r F = t H R (ds) D r b t H ~! s. Then from (8) we get Z Z r D r F = Z r (ds) b t H ~! s K (s; r) : Thus we can calculate Z Z kdf k = dr (ds) b t H ~! s K (s; r) = = Z Z (ds) ds b t H ~! s b (ds) ds b t H ~! s b t H Z min(s;s ) ~! s K (s; r) K s; r dr t H ~! s R s; s ; where R is the covariance of fbm in (). A similar calculation yields D q;rf = t H Z max(q;r) (ds) b t H ~! s K (s; r) K (s; q)

15 Statistics of Fractional Brownian Motion 5 and D F Z L ([;] ) = th Z (ds) ds b t H ~! s b For the Ornstein-Uhlenbeck operator, which is also linear, we get LF = Z (ds) b t H ~! s ~!s + b t H ~! s t H s H : t H ~! s R s; s : Step : estimating the terms in Lemma. With the expressions in the previous step, using the hypotheses of the proposition, we now obtain, for some constant C H depending only on H, ~E [jlf j] C H b Z " #! + t H b (ds) s H ~E + t H j~! s j = C H b + t H b E " Z (ds) s H + (ts) H jzj where Z is a generic standard normal random variable. We deal rst with the integral in s. For s [; =], (ds) has a bounded density, and thus for any a > and any <, we have R = ds ( + as ) ( ) a ; on the other hand, for s [=; ], we can bound + (ts) H jzj above by (t=) H jzj. We immediately obtain, using a = t H jzj and = H, ~E [jlf j] C H;b + t H( ) H E h jzj C H;b + t H( ) : i + t H( h ) E jzj i Z = #! ; ds ( s) =+H The estimation of D F is similar. Using its expression in the previous step, the measure d=dsd=ds R (s; s ), and the fact that R ds ( + as ) ( ) a, with = H < =, we get ~E D F L ([;] ) C b;ht H( ) : Also almost surely, for any p, for some constant C H;p depending only on H and p, since b has a constant sign, we obtain! kdf k p = ZZ [;] (ds) ds R s; s b t H ~! s b t H ~! s! p= C H;p b p :

16 Statistics of Fractional Brownian Motion 6 Lastly, it is convenient to invoke the Cauchy-Schwartz inequality to get DF DF ; D F D F L ([;] ) L kdf k 4 ([;] ) kdf k ; Step 3: applying Lemma ; conclusion. The third estimate in the previous step (for p = 8) proves trivially that ~E kdf k 8 is nite. That F D ;4 follows again trivially from the boundedness of b and b using the expressions in Step. Thus Lemma applies. We conclude from the estimates in the previous step that F has a density f which is bounded as f (x) C H;b + t H( ) b With t, the conclusion of the proposition follows. A smaller class of functions b satisfying condition (C), but which is not restricted to H < =, is given in the following result, proved in the Appendix. Lemma 3 Let H (; ). Assume xb(x) has a constant sign for all x R + and a constant sign for all x R. Assume jb (x) =xj = c + h (x) (9) for all x, where c is a xed positive constant, and lim x! h (x) =. Then Condition (C) is satised with =. Condition (C) also holds for any b of the above form to which a constant C is added: j(b (x) C) =xj = c + h (x) and lim x! h (x) =. Note that this condition is less restrictive than saying b is asymptotically ane, since it covers the family b (x) = C + cx + (jxj ^ ) for any (; ). In some sense, Condition (C) with = appears to be morally equivalent to this class of functions. Proof of Theorem. Step : setup. Since we only want to show that (5) holds, and since R t jq sj ds is increasing, it is sucient to satisfy condition (5) for t tending to innity along a sequence (t n ) nn. We write, according to the representation (6), for each xed t, Z I t = I t (X) := jq s (X)j s ds = s H (dr) b(x r ) ds where X is the solution of the Langevin equation (8) and the positive measure s H is dened by s H (dr) = (r=s)= H (s r) = H dr. Since the theorem's conclusion is an almost sure statement about X, and from the Girsanov Theorem applied to

17 Statistics of Fractional Brownian Motion 7 X, the measures P and ~P are equivalent, it is enough work under ~P, i.e. to assume that X is a standard fbm; we omit the dependence of Q s and I t on X, for simplicity. We will show that with t n = n k for some positive integer k chosen below, I tn converges to, by restricting the integration dening I tn to a small interval of length b n = n j near t n, where j will be another integer chosen below. Indeed I tn = n n t n jq s j ds b n jq tn j b n jq s j ds n t n b n jq tn Q s j jq tn + Q s j ds b n jq tn j sup jq tn s[t n b n;t n] b n jq tn j b n sup jq s j s[;t n] Q s j jq tn + Q s j sup jq tn s[t n b n;t n]! Q s j := A n B n : (3) Step : the diverging term A n. The term A n is easily shown to converge to innity almost surely thanks to condition (C) modulo a condition on j and k. Indeed, we have for any sequence a n ; h i P b n jq tn j < a n K b t H n a n (b n t n ) = = K b n k(h =)+j= a n : In order for the right-hand side of the last expression to be summable in n while being able to choose lim n! a n =, it is sucient to impose j + < ( H) k: (3) Thus specically, under this condition, with a n = n` for < ` < ( H) k j, we have by the Borel-Cantelli lemma the existence of an almost surely nite n such that n > n implies A n = b n jq tn j > n`. Step 3: the error term B n. To control the term B n we need two estimates, whose proofs are presumably well understood, but which we have included in the appendix, as a precise reference does not seem to be available. They are established using boundedness and Holder-continuity properties of fbm, which are based on that process's Gaussian property. For any sequences b n ; t n such that b n t n (meaning lim n b n =t n = ), for any xed " >, there exists an almost-surely nite random :

variable c_ε such that for all n > 0, we have

sup_{s∈[0,t_n]} |Q_s| ≤ c_ε (t_n)^{1/2+ε}   (32)

and

sup_{s,t∈[t_n−b_n, t_n]} |Q_t − Q_s| ≤ c_ε (t_n)^{1/2−H+ε} (b_n)^{H−ε}.   (33)

These results' proofs are discussed in Section 7.2. One immediately obtains the following: almost surely,

B_n ≤ c_ε² b_n (t_n)^{1/2+ε} (t_n)^{1/2−H+ε} (b_n)^{H−ε} = c_ε² (t_n)^{1−H+2ε} (b_n)^{1+H−ε} = c_ε² n^{k(1−H+2ε) − j(1+H−ε)}.

The statement that B_n converges to 0 (i.e. that ε can be chosen positive while having the power in the last expression above be negative) now follows from assuming that j and k are related by

k(1−H) − j(1+H) < 0.   (34)

Step 4: conclusion. We may now use the results of the last two parts, namely that A_n > n^ℓ while lim_{n→∞} B_n = 0, to conclude from (30) that the statement of the theorem holds, provided that conditions (31) and (34) hold, i.e.

(j+2)/(1−2γH) < k < ((1+H)/(1−H)) j.

The theorem now follows because the relation γ < 1/(2(1+H)) in Condition (C) implies that, if j is large enough, we do indeed have (j+2)/(1−2γH) < ((1+H)/(1−H)) j.

4.2 The case H > 1/2

Due to the fact that the function Q is less regular in this case, we should not expect the proof of the following theorem to be a consequence of the proof in the case H < 1/2. Nevertheless, it deviates from the former proof very little. On the other hand, we cannot rely on Proposition 2 to find a convenient sufficient condition for Condition (C); instead one can look to the non-linear class of examples in Lemma 3, which satisfy the strong version (γ = 0) of Condition (C) for all H ∈ (0,1). The next result's proof is in the Appendix.

Theorem 3 Assume that H > 1/2 and that b satisfies Condition (C) with γ = 0 (e.g. b satisfies Condition (29) in Lemma 3). Then the maximum likelihood estimator θ̂_t is strongly consistent.

5 The linear case

In this section we present some comments in the case when the drift b is linear. We will assume that b(x) ≡ x to simplify the presentation. In this case, the solution X to equation (8) is the fractional Ornstein-Uhlenbeck process, and it is possible to prove more precise results concerning the asymptotic behavior of the maximum likelihood estimator.

Remark 3 In [9], it is shown that there exists a unique almost surely continuous process X that satisfies the Langevin equation (8) for any H ∈ (0,1). Moreover, the process X can be represented as

X_t = ∫_0^t e^{θ(t−u)} dB^H_u,   t ∈ [0,T],   (35)

where the above integral is a Wiener integral with respect to B^H (which also exists as a pathwise Riemann-Stieltjes integral). It follows from the stationarity of the increments of B^H that X is stationary and that the decay of its auto-covariance function is like a power function. The process X is ergodic, and for H > 1/2 it exhibits long-range dependence.

Let us briefly recall the method employed in [4] to estimate the drift parameter of the fractional OU process. Consider the function, for 0 < s < t,

k(t,s) = c_H^{-1} s^{1/2−H} (t−s)^{1/2−H}   with   c_H = 2H Γ(3/2 − H) Γ(H + 1/2),   (36)

and denote its Wiener integral with respect to B^H by

M^H_t = ∫_0^t k(t,s) dB^H_s.   (37)

It has been proved in [3] that M^H is a Gaussian martingale with bracket

⟨M^H⟩_t =: ω^H_t = λ_H^{-1} t^{2−2H}   with   λ_H = 2H Γ(3 − 2H) Γ(H + 1/2) / Γ(3/2 − H).   (38)

The authors called M^H the fundamental martingale associated to fbm.
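Since the kernel k(t,s) and the bracket ω^H_t are explicit, the fundamental martingale can be written down numerically. The sketch below (ours) approximates M^H_t by a Riemann-Stieltjes sum against sampled fbm increments and compares its Monte Carlo variance with ω^H_t = λ_H^{-1} t^{2−2H}; the constants are taken from (36) and (38), while the midpoint rule, grid and sample sizes are arbitrary choices, so the agreement is only approximate.

```python
# Minimal sketch (ours): the fundamental martingale M^H_t = int_0^t k(t,s) dB^H_s,
# approximated by a Riemann-Stieltjes sum over simulated fbm increments, with a
# Monte Carlo check that its variance is close to the bracket w^H_t of (38).
import numpy as np
from scipy.special import gamma

def k_kernel(t, s, H):
    """Kernel (36): k(t, s) = s^{1/2-H} (t-s)^{1/2-H} / c_H."""
    c_H = 2.0 * H * gamma(1.5 - H) * gamma(H + 0.5)
    return s ** (0.5 - H) * (t - s) ** (0.5 - H) / c_H

def bracket(t, H):
    """omega^H_t = lambda_H^{-1} t^{2-2H}, with lambda_H from (38)."""
    lam = 2.0 * H * gamma(3.0 - 2.0 * H) * gamma(H + 0.5) / gamma(1.5 - H)
    return t ** (2.0 - 2.0 * H) / lam

def variance_check(t=1.0, H=0.7, n=800, n_paths=400, seed=3):
    rng = np.random.default_rng(seed)
    times = np.linspace(t / n, t, n)
    R = 0.5 * (times[:, None] ** (2 * H) + times[None, :] ** (2 * H)
               - np.abs(times[:, None] - times[None, :]) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))
    mids = 0.5 * (np.concatenate(([0.0], times[:-1])) + times)  # away from s = 0 and s = t
    w = k_kernel(t, mids, H)
    M = [np.sum(w * np.diff(np.concatenate(([0.0], L @ rng.standard_normal(n)))))
         for _ in range(n_paths)]
    return np.var(M), bracket(t, H)

if __name__ == "__main__":
    print(variance_check())   # the two numbers should be of the same order
```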

Therefore, observing the process X given by (8) is the same thing as observing the process

Z^{KB}_t = ∫_0^t k(t,s) dX_s,

which is actually a semimartingale with the decomposition

Z^{KB}_t = θ ∫_0^t Q^{KB}_s dω^H_s + M^H_t,   (39)

where

Q^{KB}_t = (d/dω^H_t) ∫_0^t k(t,s) X_s ds,   t ∈ [0,T].   (40)

By using Girsanov's theorem (see [3] and [4]) we obtain that the MLE is given by

θ̂^{KB}_t := ∫_0^t Q^{KB}_s dZ^{KB}_s / ∫_0^t (Q^{KB}_s)² dω^H_s.   (41)

Remark 4 We can observe that our operator (12) or (23) coincides (possibly up to a multiplicative constant) with the one used in [4] and given by (40). Assume that H < 1/2; the case H > 1/2 is just a little more technical.

Proof. Using relations (12) and (36) we can write

Q_t = C(H) t^{H−1/2} (d/dt) ∫_0^t s^{1/2−H} (t−s)^{1/2−H} b(X_s) ds = C(H) t^{H−1/2} (d/dt) ∫_0^t k(t,s) b(X_s) ds.

It is not difficult to see that

(d/dt) ∫_0^t k(t,s) b(X_s) ds = C(H) t^{1−2H} Q^{KB}_t,

and we get

Q_t = C(H) t^{1/2−H} Q^{KB}_t.   (42)

On the other hand, it can be similarly seen that

Z^{KB}_t = C(H) ∫_0^t s^{1/2−H} dZ_s,   (43)

and therefore the estimators given by (41) and (13) coincide up to constants.

To compute the expression of the bias and of the mean square error and to prove the strong consistency of the estimator, one has the option, in this explicit linear situation, to compute the Laplace transform of the quantity ∫_0^t (Q^{KB}_s)² dω^H_s. This is done for H > 1/2 in [4], Section 3, and the following properties are obtained: the estimator θ̂_t is strongly consistent, that is, θ̂_t → θ almost surely when t → ∞; and the bias and the mean square error are given by:

- If θ < 0, then as t → ∞, E(θ̂_t − θ) ∼ −2/t and E((θ̂_t − θ)²) ∼ 2|θ|/t.   (44)

- If θ > 0, then as t → ∞,

E(θ̂_t − θ) ∼ −2 √(π sin(πH)) θ^{3/2} √t e^{−θt}   (45)

and

E((θ̂_t − θ)²) ∼ 2 √(π sin(πH)) θ^{5/2} √t e^{−θt}.   (46)

It is interesting to realize that the rate of convergence of the bias and of the mean square error does not depend on H. In fact, the only difference between the classical case (see [8]) and the fractional case is the presence of the constant √(π sin(πH)) in (44), (45), and (46). It is natural to expect the same results if H < 1/2. This is true, as stated below, and proved in the Appendix, Section 7.4.

Proposition 3 If H < 1/2, then (44), (45), and (46) hold.

6 Discretization

In this last section we present a discretization result which allows the implementation of an MLE for an fbm-driven stochastic differential equation. We first provide background information on discretely observed diffusion processes in the classical situation when the driving noise is standard Brownian motion. Assume that

dX_t = b(X_t, θ) dt + σ(X_t, θ) dW_t,

where σ, b are known functions, W is a standard Wiener process and θ is the unknown parameter. If continuous information is available, parameter estimation by the maximum likelihood method is somewhat simpler; indeed, the maximum likelihood function can be obtained by means of the standard Girsanov theorem, and there are results on the asymptotic behavior of the estimator (consistency, efficiency, etc.). We refer to the monographs [8], [37] or [3] for complete expositions of this topic. "Real-world" data is, however, typically discretely sampled (for example stock prices collected once a day, or, at best, at every tick). Therefore, statistical inference for discretely observed diffusions is of great interest for practical purposes and, at the same time, it poses a challenging problem. Here the main obstacle is the fact that discrete-time transition functions are not known analytically, and consequently the likelihood function is in general not tractable. In this situation there are alternative methods to treat the problem. Among these methods, we refer to numerical approximation of the likelihood function (see Aït-Sahalia [], Poulsen [34], Beskos et al. [4]), martingale estimating functions (see Bibby and Sørensen [5]), indirect statistical inference (see Gouriéroux et al. [8]) or Bayesian approaches (see Elerian et al. [5]). We refer to [38] for a survey of methods of estimation in the discrete case. When the transition functions of the diffusion X are known, and σ(x,θ) = σ x with

σ unknown and not depending on θ, then Dacunha-Castelle and Florens-Zmirou [] propose a maximum likelihood estimator which is strongly consistent for the pair (θ, σ). They also give a measure of the loss of information due to the discretization, as a function depending on the interval between two observations. A more particular situation is the case when σ is known (assume that σ = 1). Then the maximum likelihood function, given by

exp( θ ∫_0^T b(X_s) dX_s − (θ²/2) ∫_0^T b(X_s)² ds ),

can be approximated using Riemann sums as

exp( θ Σ_{i=0}^{N−1} b(X_{t_i}) (X_{t_{i+1}} − X_{t_i}) − (θ²/2) Σ_{i=0}^{N−1} b(X_{t_i})² (t_{i+1} − t_i) ).

As a consequence, the following maximum likelihood estimator can be obtained from the discrete observations of the process X at times t_0, ..., t_N in a fixed interval [0,T], with discrete mesh size decreasing to 0 as N → ∞ (see [7], including a proof of convergence to the continuous MLE):

θ̂_{N,T} = Σ_{i=0}^{N−1} b(X_{t_i}) (X_{t_{i+1}} − X_{t_i}) / Σ_{i=0}^{N−1} |b(X_{t_i})|² (t_{i+1} − t_i).   (47)

In the fractional case, we are aware of no such results. We propose a first concrete solution to the problem. We choose to work with the formula (24), replacing the stochastic integral in the numerator and the Riemann integral in the denominator by their corresponding approximate Riemann sums, using discrete integer times. Specifically we define, for any integer n,

θ̄_n := Σ_{m=0}^{n−1} Q_m (Z_{m+1} − Z_m) / Σ_{m=0}^{n−1} |Q_m|².   (48)

Our goal in this section is to prove that θ̄_n is in fact a consistent estimator for θ. By our Theorems 2 and 3, it is of course sufficient to prove that lim_{n→∞} (θ̄_n − θ̂_n) = 0 almost surely. One could also consider the question of the discretization of θ̂_T using a fine time mesh for fixed T, and showing that this discretization converges almost surely to θ̂_T; by time-scaling such a goal is actually equivalent to our own.

It is crucial to note that in the fractional case the process Q given by (12) depends continuously on X, and therefore the discrete observation of X does not directly yield the discrete observation of Q. We explain how to remedy this issue: Q_m appearing in (48) can be easily approximated if we know the values of X_1, ..., X_n, since only a deterministic integral appears in the expression (12); indeed, for H < 1/2, the quantity

Q̃_n = c(H) n^{H−1/2} Σ_{j=1}^{n−1} (n−j)^{−1/2−H} j^{1/2−H} b(X_j)   (49)

can be deduced from observations, and it holds that lim_{n→∞} (Q_n − Q̃_n) = 0 almost surely. This last fact requires a proof, which is simpler than the proof of convergence of θ̄_n − θ̂_n to 0, but still warrants care; we present the crucial estimates of this proof in the Appendix, Section 7.5.1.

Note moreover that the calculation of θ̄_n also relies on Z_m, which is not observable; yet from formula (22), where Z_m is expressed as a stochastic integral of a deterministic function against the increments of X, again, we may replace all the Z_m's by their Riemann sums; proving that these sums converge to the Z_m's follows from calculations which are easier than those presented in Section 7.5.1, because they only require discretizing the deterministic integrand. We summarize this discussion in the following precise statement, referring to Section 7.5.1 for indications of its proof.

Proposition 4 With Q̃_n as in (49) and Z̃_n = Σ_{j=0}^{n−1} ((K*)^{-1} 1_{[0,n]})(j) (X_{j+1} − X_j), then almost surely θ̃_n − θ̄_n converges to 0, where θ̃_n is given by (48) with Z and Q replaced by Z̃ and Q̃.

Let ⟨M⟩_n denote the quadratic variation at time n of a square-integrable martingale M. We introduce the following two semimartingales:

A_t := ∫_0^t Q_s dZ_s,   (50)

B_t := ∫_0^t Q_{[s]} dZ_s,   (51)

where [s] denotes the integer part of s. We clearly have B_n = Σ_{m=0}^{n−1} Q_m (Z_{m+1} − Z_m). Thus, using the fact that Z is a Brownian motion under P̃, we see that

⟨B⟩_n = Σ_{m=0}^{n−1} |Q_m|²,   (52)

while

⟨A − B⟩_n = ∫_0^n |Q_s − Q_{[s]}|² ds = Σ_{m=0}^{n−1} ∫_m^{m+1} |Q_s − Q_m|² ds.   (53)

Therefore, from definitions (13) and (48) we immediately get the expressions

θ̂_n = A_n / ⟨A⟩_n   and   θ̄_n = B_n / ⟨B⟩_n.
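Before turning to the consistency argument, here is a minimal sketch (ours) of the discretized estimator itself: it evaluates Q̃_m by the Riemann sum (49) (case H < 1/2, constant c(H) dropped) and forms the ratio (48), taking the sequence of values Z̃_0, Z̃_1, ... as an input, to be built from the discrete observations as in Proposition 4.

```python
# Minimal sketch of the discretized MLE (48)-(49); ours, not the authors' code.
# `obs` holds the discrete observations X_0, X_1, ..., X_n at integer times,
# `z` the (approximate) values Z_0, Z_1, ..., Z_n built as in Proposition 4,
# `b` the drift and H < 1/2 the Hurst parameter.  The constant c(H) is dropped.
import numpy as np

def Q_tilde(m, obs, b, H):
    """Riemann-sum approximation (49) of Q_m for H < 1/2 (up to c(H))."""
    if m == 0:
        return 0.0
    j = np.arange(1, m)                       # j = 1, ..., m-1 keeps both kernel factors finite
    w = (m - j) ** (-0.5 - H) * j ** (0.5 - H)
    return m ** (H - 0.5) * np.sum(w * b(obs[j]))

def theta_tilde(obs, z, b, H):
    """The ratio (48): sum_m Q_m (Z_{m+1} - Z_m) / sum_m Q_m^2, m = 1, ..., n-1."""
    n = len(obs) - 1
    q = np.array([Q_tilde(m, obs, b, H) for m in range(1, n)])
    dz = np.diff(np.asarray(z))[1:n]          # increments Z_{m+1} - Z_m, m = 1, ..., n-1
    return np.sum(q * dz) / np.sum(q ** 2)
```

The same ratio computed with the exact Q_m and Z_m is θ̄_n; Proposition 4 asserts that the plug-in version θ̃_n computed here tracks it asymptotically.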

The following proposition defines a strategy for proving that θ̄_n (and, by the previous proposition, θ̃_n) is a consistent estimator for θ. See the Appendix, Section 7.5.2, for its proof.

Proposition 5 Let H ∈ (0,1). Assume that

(i) there exists a constant δ > 0 such that n^δ ⟨A−B⟩_n / ⟨B⟩_n is bounded almost surely for n large enough;
(ii) for all k ≥ 1, for some constant K > 0, almost surely, for large n, ⟨B⟩_n^k ≤ K E[⟨B⟩_n^k];
(iii) for all k > 1, E[ |⟨A−B⟩_n|^k ] ≤ n^{−δk} E[ |⟨B⟩_n|^k ].

Then almost surely lim_{n→∞} θ̄_n = θ.

This proposition allows us to prove the following, under Condition (C') below, which is stronger than (C) but still allows for non-linear examples.

Theorem 4 Assume that b is bounded and that the following condition holds:

(C') There exist positive constants t₀ and K_b, both depending only on H and the function b, such that for all t ≥ t₀ and all ε > 0, we have P̃[ |Q_t(ω̃)| / √t < ε ] ≤ ε K_b, where under P̃, ω̃ has the law of fractional Brownian motion with parameter H.

Then for all H ∈ (0,1/2), almost surely lim_{n→∞} θ̄_n = θ, where the discretization θ̄_n of the maximum likelihood estimator θ̂_n is defined in (48). If H ∈ (1/2,1), the same conclusion holds if we assume in addition that b″ is bounded. By Proposition 4, the above statements hold with θ̄ replaced by θ̃.

Remark 5 Condition (C') holds as soon as the random variable Q_t(ω̃)/√t has a density that is bounded uniformly in t. When H < 1/2, this is a statement about the random variables ∫_0^1 μ^H(ds) b(t^H ω̃_s) t^{−H}. In all cases, Condition (C') holds for the class of non-linear functions defined in Lemma 3.

Remark 6 We conjecture that Theorem 4 holds if we replace (C') by (C), in view for example of the fact that the conditions of Proposition 5 hold for any γ < H. Step 1 in the theorem's proof is the obstacle to us establishing this.

Proof of Theorem 4: strategy and outline. First note that since the probability measures P and P̃ are equivalent (see Theorem 1), almost sure statements under one measure are equivalent to statements about the same stochastic processes under the other measure, and therefore we may prove the statements in the theorem by assuming that the process Z in the definitions (50) and (51) is a standard Brownian motion, since such is its law under P̃.

25 Statistics of Fractional Brownian Motion 5 Furthermore, for the same reason, we can assume that, in these same denitions, Q is given by formula () where X is replaced by ~! whose law is that of standard fbm. We will use specically, instead of (), the explicit formula (6) when H < =: For H > = the formula (7) must be used instead, which shows the need for a control of b's second derivative. For the sake of conciseness, we restrict our proofs to the case H < =. The result of the theorem is establised as soon as one can verify the hypotheses of Proposition 5. Here we present only the proof of the rst of the three hypotheses. The other two are proved using similar or simpler techniques. To achieve our goal in this proof, it is thus sucient to prove that almost surely, for large n, hbi n n while ha Bi n n where the values and are non random and >. We establish these estimates in the appendix. Summarizing, we rst prove that for appropriately chosen constants A and, for all integers m in the interval [n A n 4A ; n A ], almost surely for large n, jq m j exceeds n A(= ) = (this is done by combining Condition (C') and inequality (33), in Step in Section 7.5.3). Because of the positivity of all the terms in the series (5), this easily implies the lower bound hbi n cn + for some constant c (Step in Section 7.5.3). Step 3 in Section then details how to use inequality (33) to show the generic term of the series (53) dening ha Bi n is bounded above by n. The theorem then easily follows (Step 4 in Section 7.5.3). 7 Appendix 7. Proof of Lemma 3 Dene V t := t H Z H (dr) b t H ~! r D= Q t p t ; Our assumption implies four dierent scenarios in terms of the constant sign of b on R + or R. We will limit this proof to the situation where b (x) has the same sign as x. The other three cases are either similar or easier. Thus we have b (x) = cx+xh (x). Also dene V = R H (dr) c~! r and E t = V t V so that V t = E t + V. Now E t = Z H (dr) b t H ~! r t H Z = t H H (dr) b t H ~! r = Z H (dr) ~! r h t H ~! r : c~! r ct H ~! r

26 Statistics of Fractional Brownian Motion 6 For ~P-almost every ~!, the function ~! is continuous, and thus bounded on [; ]. Therefore, ~P-almost surely, uniformly for every r [; ], lim t! h t H ~! r =. Thus the limit is preserved after integration against H, which means that ~P-almost surely, lim t! E t =. Now x " >. There exists t (~!) nite ~P-almost surely such that for any t > t (~!), je t j ". Thus if jv t j < ", we must have jv j = jv + E t E t j = jv t E t j jv t j + je t j ". This proves that ~P-almost surely, and therefore, lim sup t! lim sup fjv t j < "g fjv j < "g t! ~P [jv t j < "] ~P lim sup fjv t j < "g t! ~P [jv j < "] : Now we invoke the fact that V is precisely the random variable studied in the rst, linear, example, so that ~P [jv j < "] K" for some constant K depending only on c and H, nishing the proof of the lemma under Condition (9). The proof of the last statement of the lemma is identical to the above development, the constant C only adding a term to E t which converges to deterministically. 7. Proof of relations (3) and (33) To prove relation (3), note that by (6), with ~! a standard fbm, we get sup s[;t n] jq s j C b sup s[;t n] Z p s H (dr) ( + j~! sr j) s H dr C (b; H) sup s[;t n] C (b; H) t = n + sup j~! u j u[;s]! tn H + sup j~! s j s[;t n] In the last line above, for any " >, for any t >,! s = H : (54) sup j~! s j c " t H+" (55) s[;t] where c " is an almost surely nite random variable not depending on t. Indeed, rst dene the sequence of random variables S n = sup s[n;n+] j~! s j. Using scaling, for xed n, S n is equal in distribution to n H sup s[;+=n] j~! s j, which is bounded above by n H sup s[;] j~! s j. By the Gaussian property of ~! and its almost-sure continuity, sup s[;] j~! s j is a random variable with moments of all orders, so that by

27 Statistics of Fractional Brownian Motion 7 Chebyshev's inequality, the probability P S n > n H+" is summable, which implies by Borel-Cantelli that there exists an almost-surely nite n (~!) such that for t > n (~!), sup s[n ;t] j~! s j t H+" : (56) On the other hand, by almost-sure (H + ")-Holder continuity of ~! at the origin, and ~!'s a.s. boundedness on [; n ], we get that sup s[;n ] j~! s j =s H+" is a bounded random variable. This combined with (55) yields (55), which in turn proved (3) via (54). Relation (33) is established using similar techniques and calculations. 7.3 Proof of the Theorem 3 Recall from the proof of Proposition that we can write Q t = c(h)t H b(x t ) + c (H) t H(dr) (b(x t ) b(x r )) : (57) We note that in this case the expression t H (dr) does not determine a measure, but we still use this notation to simplify the presentation; the Lipschitz assumption on b and the Holder property of X do ensure the existence of the integral. One can actually follow the proof in the case H < line by line. All we have to do here is to prove an equivalent of relations (3) and (33) on the supremum and variations of Q, this being the only point where the form of Q, which diers depending on whether H is bigger or less than =, is used. We will illustrate how the second summand of Q in (57) (which is the most dicult to handle) can be treated. Under ~P, we denote X by ~!, since its law is that of standard fbm. Denoting by Q t = R t t H (dr) (b(x t) b(x r )), it holds Q tt = t D = Z t H(dr) (b(x t ) b(x r )) H(dr) b(th ~! ) b(t H ~! r ) t H : where again = D denotes equality in distribution. Now, let Vt := t Q t. Omitting the details, we state that instead of relations (3) and (33) on Q, it is equivalent to show the following bound for some M > : " # ~E sup V t V HM M bn s C M;H;b : (58) s;t[t n b n;t n] t n

28 Statistics of Fractional Brownian Motion 8 We have V t V s = Z H(dr)t H b(t H ~! ) b(t H ~! r ) b(s H ~! ) + b(s H ~! r ) + (t H s H ) thus for all M, it holds that Z H(dr) b(s H ~! ) b(s H ~! r ) := J + J ; Z ~E jj j M jt H s H j M b M s HM E ~ H(dr) j~! r C M;H;b jt s +H sj M s HM M ~! j and it since s n t n, this is clearly bounded above by C M;H;b (jt sj=t n ) MH. For the term denoted by J one can obtain the same bound simply by using the Lipschitz property of b: for every a; b, using the mean-value theorem, for points a [s H a; t H a] and b [s H b; t H b], we obtain and thus b(t H a) b(t H b) b(s H a) + b(s H b) = b ( a ) t H s H a + b ( b ) t H s H b b jt H s H j (jaj + jbj) ; ~E jj j M jt H s H M Z j C M;b;H ~E C M;H;b jt t n t H sj MH : M H(dr) (j~! r j + j~! j) Obtaining the supremum in (58) now follows using the standard Kolmogorov lemma on continuity (see [35, Theorem I..]). A direct proof, via (3) and (33), not via (58), is also possible. 7.4 Proof of Proposition 3 To avoid tedious calculations with fractional integrals and derivatives, we will take advantage of the calculations performed in [4] when H > ; nevertheless we believe that a direct proof is also possible. Actually the only moment when the authors of [4] use the fact that H is bigger than is the computation of the process Q. By


More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

Representing Gaussian Processes with Martingales

Representing Gaussian Processes with Martingales Representing Gaussian Processes with Martingales with Application to MLE of Langevin Equation Tommi Sottinen University of Vaasa Based on ongoing joint work with Lauri Viitasaari, University of Saarland

More information

Stochastic Processes

Stochastic Processes Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

On It^o's formula for multidimensional Brownian motion

On It^o's formula for multidimensional Brownian motion On It^o's formula for multidimensional Brownian motion by Hans Follmer and Philip Protter Institut fur Mathemati Humboldt-Universitat D-99 Berlin Mathematics and Statistics Department Purdue University

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

Linear stochastic approximation driven by slowly varying Markov chains

Linear stochastic approximation driven by slowly varying Markov chains Available online at www.sciencedirect.com Systems & Control Letters 50 2003 95 102 www.elsevier.com/locate/sysconle Linear stochastic approximation driven by slowly varying Marov chains Viay R. Konda,

More information

Abstract. The probabilistic approach isused for constructing special layer methods to solve thecauchy problem for semilinear parabolic equations with

Abstract. The probabilistic approach isused for constructing special layer methods to solve thecauchy problem for semilinear parabolic equations with NUMERICAL METHODS FOR NONLINEAR PARABOLIC EQUATIONS WITH SMALL PARAMETER BASED ON PROBABILITY APPROACH GRIGORI N. MILSTEIN, MICHAEL V. TRETYAKOV Weierstra -Institut f r Angewandte Analysis und Stochastik

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Problem List MATH 5143 Fall, 2013

Problem List MATH 5143 Fall, 2013 Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

Existence and uniqueness of solutions for nonlinear ODEs

Existence and uniqueness of solutions for nonlinear ODEs Chapter 4 Existence and uniqueness of solutions for nonlinear ODEs In this chapter we consider the existence and uniqueness of solutions for the initial value problem for general nonlinear ODEs. Recall

More information

arxiv: v2 [math.pr] 22 Aug 2009

arxiv: v2 [math.pr] 22 Aug 2009 On the structure of Gaussian random variables arxiv:97.25v2 [math.pr] 22 Aug 29 Ciprian A. Tudor SAMOS/MATISSE, Centre d Economie de La Sorbonne, Université de Panthéon-Sorbonne Paris, 9, rue de Tolbiac,

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show

More information

f(z)dz = 0. P dx + Qdy = D u dx v dy + i u dy + v dx. dxdy + i x = v

f(z)dz = 0. P dx + Qdy = D u dx v dy + i u dy + v dx. dxdy + i x = v MA525 ON CAUCHY'S THEOREM AND GREEN'S THEOREM DAVID DRASIN (EDITED BY JOSIAH YODER) 1. Introduction No doubt the most important result in this course is Cauchy's theorem. Every critical theorem in the

More information

THE LUBRICATION APPROXIMATION FOR THIN VISCOUS FILMS: REGULARITY AND LONG TIME BEHAVIOR OF WEAK SOLUTIONS A.L. BERTOZZI AND M. PUGH.

THE LUBRICATION APPROXIMATION FOR THIN VISCOUS FILMS: REGULARITY AND LONG TIME BEHAVIOR OF WEAK SOLUTIONS A.L. BERTOZZI AND M. PUGH. THE LUBRICATION APPROXIMATION FOR THIN VISCOUS FILMS: REGULARITY AND LONG TIME BEHAVIOR OF WEAK SOLUTIONS A.L. BERTOI AND M. PUGH April 1994 Abstract. We consider the fourth order degenerate diusion equation

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Itô formula for the infinite-dimensional fractional Brownian motion

Itô formula for the infinite-dimensional fractional Brownian motion Itô formula for the infinite-dimensional fractional Brownian motion Ciprian A. Tudor SAMOS-MATISSE Université de Panthéon-Sorbonne Paris 1 9, rue de Tolbiac, 75634 Paris France. January 13, 25 Abstract

More information

1 Introduction This work follows a paper by P. Shields [1] concerned with a problem of a relation between the entropy rate of a nite-valued stationary

1 Introduction This work follows a paper by P. Shields [1] concerned with a problem of a relation between the entropy rate of a nite-valued stationary Prexes and the Entropy Rate for Long-Range Sources Ioannis Kontoyiannis Information Systems Laboratory, Electrical Engineering, Stanford University. Yurii M. Suhov Statistical Laboratory, Pure Math. &

More information

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2 1 A Good Spectral Theorem c1996, Paul Garrett, garrett@math.umn.edu version February 12, 1996 1 Measurable Hilbert bundles Measurable Banach bundles Direct integrals of Hilbert spaces Trivializing Hilbert

More information

Variations and Hurst index estimation for a Rosenblatt process using longer filters

Variations and Hurst index estimation for a Rosenblatt process using longer filters Variations and Hurst index estimation for a Rosenblatt process using longer filters Alexandra Chronopoulou 1, Ciprian A. Tudor Frederi G. Viens 1,, 1 Department of Statistics, Purdue University, 15. University

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES D. NUALART Department of Mathematics, University of Kansas Lawrence, KS 6645, USA E-mail: nualart@math.ku.edu S. ORTIZ-LATORRE Departament de Probabilitat,

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

A Class of Fractional Stochastic Differential Equations

A Class of Fractional Stochastic Differential Equations Vietnam Journal of Mathematics 36:38) 71 79 Vietnam Journal of MATHEMATICS VAST 8 A Class of Fractional Stochastic Differential Equations Nguyen Tien Dung Department of Mathematics, Vietnam National University,

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

STOCHASTIC DIFFERENTIAL EQUATIONS WITH EXTRA PROPERTIES H. JEROME KEISLER. Department of Mathematics. University of Wisconsin.

STOCHASTIC DIFFERENTIAL EQUATIONS WITH EXTRA PROPERTIES H. JEROME KEISLER. Department of Mathematics. University of Wisconsin. STOCHASTIC DIFFERENTIAL EQUATIONS WITH EXTRA PROPERTIES H. JEROME KEISLER Department of Mathematics University of Wisconsin Madison WI 5376 keisler@math.wisc.edu 1. Introduction The Loeb measure construction

More information

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Ecole Polytechnique, France April 4, 218 Outline The Principal-Agent problem Formulation 1 The Principal-Agent problem

More information

BOOK REVIEW. Review by Denis Bell. University of North Florida

BOOK REVIEW. Review by Denis Bell. University of North Florida BOOK REVIEW By Paul Malliavin, Stochastic Analysis. Springer, New York, 1997, 370 pages, $125.00. Review by Denis Bell University of North Florida This book is an exposition of some important topics in

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of

More information

Stochastic Integration and Stochastic Differential Equations: a gentle introduction

Stochastic Integration and Stochastic Differential Equations: a gentle introduction Stochastic Integration and Stochastic Differential Equations: a gentle introduction Oleg Makhnin New Mexico Tech Dept. of Mathematics October 26, 27 Intro: why Stochastic? Brownian Motion/ Wiener process

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Midterm 1. Every element of the set of functions is continuous

Midterm 1. Every element of the set of functions is continuous Econ 200 Mathematics for Economists Midterm Question.- Consider the set of functions F C(0, ) dened by { } F = f C(0, ) f(x) = ax b, a A R and b B R That is, F is a subset of the set of continuous functions

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

MA8109 Stochastic Processes in Systems Theory Autumn 2013

MA8109 Stochastic Processes in Systems Theory Autumn 2013 Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Densities for the Navier Stokes equations with noise

Densities for the Navier Stokes equations with noise Densities for the Navier Stokes equations with noise Marco Romito Università di Pisa Universitat de Barcelona March 25, 2015 Summary 1 Introduction & motivations 2 Malliavin calculus 3 Besov bounds 4 Other

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Integral representations in models with long memory

Integral representations in models with long memory Integral representations in models with long memory Georgiy Shevchenko, Yuliya Mishura, Esko Valkeila, Lauri Viitasaari, Taras Shalaiko Taras Shevchenko National University of Kyiv 29 September 215, Ulm

More information

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr

only nite eigenvalues. This is an extension of earlier results from [2]. Then we concentrate on the Riccati equation appearing in H 2 and linear quadr The discrete algebraic Riccati equation and linear matrix inequality nton. Stoorvogel y Department of Mathematics and Computing Science Eindhoven Univ. of Technology P.O. ox 53, 56 M Eindhoven The Netherlands

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

An adaptive numerical scheme for fractional differential equations with explosions

An adaptive numerical scheme for fractional differential equations with explosions An adaptive numerical scheme for fractional differential equations with explosions Johanna Garzón Departamento de Matemáticas, Universidad Nacional de Colombia Seminario de procesos estocásticos Jointly

More information

Malliavin calculus and central limit theorems

Malliavin calculus and central limit theorems Malliavin calculus and central limit theorems David Nualart Department of Mathematics Kansas University Seminar on Stochastic Processes 2017 University of Virginia March 8-11 2017 David Nualart (Kansas

More information

Malliavin Calculus in Finance

Malliavin Calculus in Finance Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x

More information

Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals

Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals J. Wong (Fall 217) October 7, 217 What did we cover this week? Introduction to the Laplace transform Basic theory Domain and range of L Key

More information

with Perfect Discrete Time Observations Campus de Beaulieu for all z 2 R d M z = h?1 (z) = fx 2 R m : h(x) = zg :

with Perfect Discrete Time Observations Campus de Beaulieu for all z 2 R d M z = h?1 (z) = fx 2 R m : h(x) = zg : Nonlinear Filtering with Perfect Discrete Time Observations Marc Joannides Department of Mathematics Imperial College London SW7 2B, United Kingdom mjo@ma.ic.ac.uk Francois LeGland IRISA / INRIA Campus

More information

Wiener integrals, Malliavin calculus and covariance measure structure

Wiener integrals, Malliavin calculus and covariance measure structure Wiener integrals, Malliavin calculus and covariance measure structure Ida Kruk 1 Francesco Russo 1 Ciprian A. Tudor 1 Université de Paris 13, Institut Galilée, Mathématiques, 99, avenue J.B. Clément, F-9343,

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

Constrained Leja points and the numerical solution of the constrained energy problem

Constrained Leja points and the numerical solution of the constrained energy problem Journal of Computational and Applied Mathematics 131 (2001) 427 444 www.elsevier.nl/locate/cam Constrained Leja points and the numerical solution of the constrained energy problem Dan I. Coroian, Peter

More information

Density estimates and concentration inequalities with Malliavin calculus

Density estimates and concentration inequalities with Malliavin calculus Density estimates and concentration inequalities with Malliavin calculus Ivan Nourdin and Frederi G Viens Université Paris 6 and Purdue University Abstract We show how to use the Malliavin calculus to

More information