An Introduction to Malliavin Calculus and its applications to Finance


An Introduction to Malliavin Calculus and its applications to Finance

Vlad Bally (1), Lucia Caramellino (2), Luana Lombardi (3)

May 4

(1) Laboratoire d'Analyse et de Mathématiques Appliquées, UMR 8050, Université Paris-Est Marne-la-Vallée; bally@univ-mlv.fr
(2) Dipartimento di Matematica, Università di Roma-Tor Vergata
(3) Dipartimento di Matematica, Università di L'Aquila

Contents

1 Abstract Integration by Parts Formula
  1.1 The one dimensional case
    1.1.1 The sensitivity problem
    1.1.2 The density of the law
    1.1.3 Conditional expectations
  1.2 The multidimensional case
2 Brownian Malliavin calculus
  2.1 The finite dimensional case
    2.1.1 Main definitions and properties
    2.1.2 Differential operators. First properties
  2.2 The infinite dimensional case
    2.2.1 The set Dom_p(D) = D^{1,p}
    2.2.2 The set Dom_p(δ)
    2.2.3 Properties
    2.2.4 Examples
    2.2.5 The Clark-Ocone formula
    2.2.6 The set Dom_p(L)
    2.2.7 The integration by parts formula
  2.3 Multidimensional Brownian motion
  2.4 Higher order derivatives and integration by parts formulas
  2.5 Diffusion processes
  2.6 Appendix. Wiener chaos decomposition
3 Applications to Finance
  3.1 The Clark-Ocone formula and the replicating portfolio
  3.2 Sensitivity computation
    3.2.1 The delta
    3.2.2 Some other examples
  3.3 Conditional expectation
    3.3.1 Diagonalization procedure and first formulas
    3.3.2 Localized formulas

References

Preface

From the theoretical point of view, these notes follow the ones written by Vlad Bally [1]. In addition, examples of applications of Malliavin calculus coming from Finance are developed. This has been the main contribution of Luana Lombardi, who worked on these topics during an internship required by her PhD program.

Lucia Caramellino

Chapter 1

Abstract Integration by Parts Formula

In this chapter we introduce in an abstract way the main tool of Malliavin calculus we are going to study, that is, integration by parts formulas, and we stress some important consequences: the use for computing sensitivities, as well as for representing the density and the conditional expectation. For the sake of simplicity, we split this introduction in two sections, treating the one dimensional case and the multidimensional one.

1.1 The one dimensional case

Let (Ω, F, P) denote a probability space and let E stand for the expectation under P. The sets C_c^k(R^d) and C_b^k(R^d) denote the spaces of functions f : R^d → R which are continuously differentiable up to order k, with compact support and with bounded derivatives respectively. When the functions are infinitely differentiable, we similarly write C_c^∞(R^d) and C_b^∞(R^d).

Definition 1.1.1. Let F, G : Ω → R be integrable random variables. We say that the integration by parts formula IP(F;G) holds if there exists an integrable random variable H(F;G) such that

    IP(F;G):   E[ϕ'(F) G] = E[ϕ(F) H(F;G)],   for every ϕ ∈ C_c^∞(R).   (1.1)

Moreover, we say that the integration by parts formula IP_k(F;G) holds if there exists an integrable random variable H_k(F;G) such that

    IP_k(F;G):   E[ϕ^(k)(F) G] = E[ϕ(F) H_k(F;G)],   for every ϕ ∈ C_c^∞(R).   (1.2)

Remark 1.1.2. By using standard regularization results (e.g. by mollifiers), the test functions C_c^∞(R) in IP_k(F;G) can be replaced by C_c^k(R) or also by C_b^∞(R) and C_b^k(R). Obviously, IP_1(F;G) means IP(F;G) and H(F;G) = H_1(F;G). Moreover, if IP(F;G) and IP(F; H(F;G)) hold, then IP_2(F;G) holds with H_2(F;G) = H(F; H(F;G)).

A similar statement holds for higher order derivatives. As an example, applying this in IP_k(F;1) leads us to define H_k(F;1) ≡ H_k(F) by recurrence:

    H_0(F) = 1,   H_k(F) = H(F; H_{k-1}(F)),  k ≥ 1.

If IP(F;G) holds then E[H(F;G)] = 0: take ϕ ≡ 1 in (1.1). Moreover, the weight H(F;G) in IP(F;G) is not unique: for any random variable R such that E[ϕ(F)R] = 0 (that is, E[R | F] = 0 a.s.) one may use H(F;G) + R as well; in fact, what is unique is E[H(F;G) | F]. In numerical methods this plays an important role, because if one wants to compute E[ϕ(F)H(F;G)] using a Monte Carlo method then one would like to work with a weight which gives minimal variance (see e.g. Fournié et al. [9]). Note also that in order to perform a Monte Carlo algorithm one has to simulate F and H(F;G). In some particular cases H(F;G) may be computed directly, using ad hoc methods. But Malliavin calculus gives a systematic access to the computation of this weight. Typically, in the applications F is the solution of some stochastic equation and H(F;G) appears as an aggregate of differential operators (in Malliavin's sense) acting on F. These quantities are also related to some stochastic equations, and so one may use approximations of these equations in order to produce concrete algorithms.

Let us give a simple example. Take F = f(Δ) and G = g(Δ), where f, g are differentiable functions and Δ is a centered Gaussian random variable of variance σ². Then

    E[f'(Δ) g(Δ)] = E[ f(Δ) ( g(Δ) Δ/σ² − g'(Δ) ) ],   (1.3)

so IP(F;G) holds true with H(F;G) = g(Δ)Δ/σ² − g'(Δ). This follows from a direct application of the standard integration by parts, in the presence of the Gaussian density p(x) = (2πσ²)^{−1/2} exp(−x²/(2σ²)):

    E[f'(Δ)g(Δ)] = ∫ f'(x) g(x) p(x) dx = − ∫ f(x) ( g'(x) p(x) + g(x) p'(x) ) dx
                 = ∫ f(x) ( − g'(x) − g(x) p'(x)/p(x) ) p(x) dx
                 = E[ f(Δ) ( g(Δ) Δ/σ² − g'(Δ) ) ],

since p'(x)/p(x) = −x/σ². Malliavin calculus produces the weights H(F;G) for a large class of random variables (Δ represents the simplest example of this kind), but this is not the subject of this section.
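The Gaussian identity (1.3) is easy to verify numerically. The sketch below, with the purely illustrative choices f(x) = sin x and g(x) = x² (not taken from the notes), evaluates both sides of (1.3) by quadrature against the Gaussian density:

```python
import numpy as np

# Numerical sanity check of the Gaussian integration by parts formula (1.3):
#   E[f'(D) g(D)] = E[ f(D) ( g(D) D / sigma^2 - g'(D) ) ]
# for D centered Gaussian with variance sigma^2. The test functions
# f(x) = sin(x), g(x) = x^2 are illustrative choices.
sigma = 1.5
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def expect(values):
    """Trapezoidal approximation of E[values(D)] = integral of values * p dx."""
    y = values * p
    return float(np.sum((y[1:] + y[:-1]) / 2) * dx)

f, f_prime = np.sin(x), np.cos(x)
g, g_prime = x**2, 2 * x

lhs = expect(f_prime * g)                   # E[f'(D) g(D)]
weight = g * x / sigma**2 - g_prime         # H(F;G) = g(D) D / sigma^2 - g'(D)
rhs = expect(f * weight)                    # E[f(D) H(F;G)]
print(lhs, rhs)
```

The two printed values agree up to quadrature error, which is exactly the content of (1.3): the derivative on f has been traded for a multiplicative weight on f.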
Here we give some consequences of the above property.

1.1.1 The sensitivity problem

In many applications one considers quantities of the form E[ϕ(F^x)], where (F^x) is a family of random variables indexed on a finite dimensional parameter x. A typical example is F^x = X_t^x, where X^x is a diffusion process starting from x. In order to study the sensitivity of this quantity with respect to the parameter x, one has to prove that x ↦ E[ϕ(F^x)] is differentiable and to evaluate the derivative.

There are two ways to tackle this problem: using a pathwise approach or an approach in law. The pathwise approach supposes that x ↦ F^x(ω) is differentiable for almost every ω (this is the case for x ↦ X_t^x(ω), for example) and that ϕ is differentiable as well. Then

    ∂_x E[ϕ(F^x)] = E[ϕ'(F^x) ∂_x F^x].

But this approach breaks down if ϕ is not differentiable. The second approach overcomes this difficulty using the smoothness of the density of the law of F^x. So, in this approach one assumes that F^x ~ p_x(y)dy and that x ↦ p_x(y) is differentiable for each y. Then

    ∂_x E[ϕ(F^x)] = ∫ ϕ(y) ∂_x p_x(y) dy = ∫ ϕ(y) ∂_x ln p_x(y) p_x(y) dy = E[ϕ(F^x) ∂_x ln p_x(F^x)].

Engineers sometimes call ∂_x ln p_x(F^x) the score function. But of course this approach works only when one knows the density of the law of F^x. The integration by parts formula IP(F^x; ∂_x F^x) permits to write down the equality

    ∂_x E[ϕ(F^x)] = E[ϕ'(F^x) ∂_x F^x] = E[ϕ(F^x) H(F^x; ∂_x F^x)]

without having to know the density of the law of F^x. It is worth remarking that the above equality holds true even if ϕ is not differentiable, because no derivative of ϕ appears in the first and last terms; in fact one may use some regularization arguments and then pass to the limit. Therefore the quantity of interest is the weight H(F^x; ∂_x F^x). Malliavin calculus is a machinery allowing to compute such quantities for a large class of random variables for which the density of the law is not known explicitly (for example, diffusion processes). This is the approach in Fournié et al. [8] and [9] to the computation of Greeks (sensitivities of the price of European and American options with respect to certain parameters) in Mathematical Finance problems.

1.1.2 The density of the law

Hereafter, the notation 1_A(x), or 1_{x∈A}, stands for the indicator function, that is, 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 if x ∉ A.

Lemma 1.1.3. Suppose that F satisfies IP(F;1). Then the law of F is absolutely continuous with respect to the Lebesgue measure and the density of the law is given by

    p(x) = E[1_{[x,∞)}(F) H(F;1)].   (1.4)

Moreover, p is continuous and p(x) → 0 as x → ±∞.

Proof.
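The sensitivity formula above is exactly what makes Greeks computable for non-smooth payoffs. The sketch below uses the classical example of Fournié et al. (our illustrative choice, not a formula stated at this point in the notes): F^x = x·exp(σW_T − σ²T/2) and the digital payoff ϕ = 1_{(K,∞)}, for which the Gaussian integration by parts gives the weight H(F^x; ∂_x F^x) = W_T/(xσT):

```python
import math
import numpy as np

# Monte Carlo delta of a digital payoff via a Malliavin weight. The model
# F_x = x * exp(sigma * W_T - sigma^2 T / 2) and the weight W_T / (x sigma T)
# are illustrative choices (the classical Fournie et al. example).
rng = np.random.default_rng(0)
x, K, sigma, T, n = 100.0, 95.0, 0.2, 1.0, 1_000_000

W_T = rng.standard_normal(n) * math.sqrt(T)
F = x * np.exp(sigma * W_T - 0.5 * sigma**2 * T)
payoff = (F > K).astype(float)        # phi is an indicator: not differentiable

# d/dx E[phi(F_x)] = E[ phi(F_x) * W_T / (x * sigma * T) ]
delta_mc = np.mean(payoff * W_T) / (x * sigma * T)

# Closed form for comparison: d/dx P(F_x > K) = pdf(d) / (x * sigma * sqrt(T))
d = (math.log(x / K) - 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
delta_exact = math.exp(-d**2 / 2) / math.sqrt(2 * math.pi) / (x * sigma * math.sqrt(T))
print(delta_mc, delta_exact)
```

Note that the pathwise approach is unusable here (ϕ' = 0 almost everywhere), while the weighted estimator converges to the true delta.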
The formal argument is the following: since δ_0 = ∂_y 1_{[0,∞)}(y) in the sense of distributions, one uses IP(F;1) and writes

    E[δ_0(F−x)] = E[∂1_{[0,∞)}(F−x)] = E[1_{[0,∞)}(F−x) H(F;1)] = E[1_{[x,∞)}(F) H(F;1)].

In order to make this reasoning rigorous, one has to regularize the Dirac function. So we take a non-negative function ϕ ∈ C_c^∞(R) with support equal to [−1,1] and such that ∫ϕ(y)dy = 1 and, for each δ > 0, we define ϕ_δ(y) = δ^{−1} ϕ(y δ^{−1}). Moreover we define Φ_δ to be the primitive of ϕ_δ, i.e. Φ_δ(y) = ∫_{−∞}^y ϕ_δ(z)dz, and we construct random variables θ_δ of law ϕ_δ(y)dy which are independent of F. Since θ_δ weakly converges to 0 as δ → 0, for each f ∈ C_c^∞(R) we have

    E[f(F)] = lim_{δ→0} E[f(F − θ_δ)].

Setting Λ as the law of F, we can write

    E[f(F − θ_δ)] = ∫∫ f(u−v) ϕ_δ(v) dv dΛ(u) = ∫∫ f(z) ϕ_δ(u−z) dz dΛ(u)
                 = ∫ f(z) E[ϕ_δ(F−z)] dz = ∫ f(z) E[Φ'_δ(F−z)] dz = ∫ f(z) E[Φ_δ(F−z) H(F;1)] dz.

Now, Φ_δ is uniformly bounded in δ and Φ_δ(y−z) → 1_{[z,∞)}(y) as δ → 0, for a.e. y. Then, using Lebesgue's dominated convergence theorem, we pass to the limit in the above relationship and we obtain

    E[f(F)] = ∫ f(z) E[1_{[z,∞)}(F) H(F;1)] dz

for any f ∈ C_c^∞(R), so that z ↦ E[1_{[z,∞)}(F) H(F;1)] is the probability density function of F, which is also continuous. In fact, if z_n → z one has 1_{[z_n,∞)}(F) → 1_{[z,∞)}(F) a.s. So, by applying the Lebesgue dominated convergence theorem, one has p(z_n) = E[1_{[z_n,∞)}(F) H(F;1)] → E[1_{[z,∞)}(F) H(F;1)] = p(z), i.e. p is a continuous function. Finally, if z → +∞ then 1_{[z,∞)}(F) → 0 a.s. and then p(z) → 0. If instead z → −∞, one uses the same argument but applied to the representation

    p(x) = −E[1_{(−∞,x)}(F) H(F;1)],   (1.6)

which follows from the fact that 1_{[x,∞)} = 1 − 1_{(−∞,x)} and by recalling that E[H(F;1)] = 0 (see Remark 1.1.2).

Remark 1.1.4. [Bounds] Suppose that H(F;1) is square integrable. Then, using the Cauchy-Schwarz inequality,

    p(x) ≤ P(F ≥ x)^{1/2} ‖H(F;1)‖_2.

In particular lim_{x→∞} p(x) = 0 and the convergence rate is controlled by the tails of the law of F. For example, if F has finite moments of order p, Chebyshev's inequality gives p(x) ≤ C |x|^{−p/2}. In significant examples, such as diffusion processes, the tails have even exponential rate. So the problem of the upper bounds for the density is rather simple; on the contrary, the problem of lower bounds is much more challenging. The above formula gives a control for x → +∞. In order to obtain similar bounds for x → −∞ one has to employ formula (1.6).

We go now further and treat the problem of the derivatives of the density function.

Lemma 1.1.5. Suppose that IP_i(F;1), i = 1, ..., k+1, holds true. Then the density p is k times differentiable and

    p^(i)(x) = (−1)^i E[1_{[x,∞)}(F) H_{i+1}(F;1)],  i = 0, 1, ..., k.   (1.7)

Proof. Let i = 1. We define Ψ_δ(x) = ∫_{−∞}^x Φ_δ(y) dy, so that Ψ''_δ = ϕ_δ, and we come back to the proof of Lemma 1.1.3. By using IP_2(F;1) we have

    E[ϕ_δ(F−z)] = E[Ψ''_δ(F−z)] = E[Ψ_δ(F−z) H_2(F;1)],

so that

    E[f(F − θ_δ)] = ∫ f(z) E[Ψ_δ(F−z) H_2(F;1)] dz.

Since lim_{δ→0} Ψ_δ(F−z) = (F−z)_+, we obtain

    E[f(F)] = ∫ f(z) E[(F−z)_+ H_2(F;1)] dz

and so p(z) = E[(F−z)_+ H_2(F;1)]. The pleasant point in this new integral representation of the density is that z ↦ (F−z)_+ is differentiable. Taking derivatives in the above formula gives

    p'(z) = −E[1_{[z,∞)}(F) H_2(F;1)]

and the proof is completed for i = 1. In order to deal with higher order derivatives, one uses more integration by parts in order to obtain p(z) = E[η_i(F−z) H_{i+1}(F;1)], where η_i is an i times differentiable function such that η_i^(i)(x) = (−1)^i 1_{[0,∞)}(x).

Remark 1.1.6. [Bounds] The integral representation formula (1.7) permits to obtain upper bounds for the derivatives of the density p. In particular, suppose that F has finite moments of any order, that IP_i(F;1) holds true for every i ∈ N and that H_i(F;1) is square integrable for every i. Then p is infinitely differentiable and

    |p^(i)(x)| ≤ P(F > x)^{1/2} ‖H_{i+1}(F;1)‖_2 ≤ C_q |x|^{−q/2}  for every q ∈ N.

So p ∈ S, the Schwartz space of rapidly decreasing functions.

[Integration by parts & densities] Lemma 1.1.3 shows that there is an intimate relationship (quasi equivalence) between the integration by parts formula and the existence of a good density of the law of F. In fact, suppose that F ~ p(x)dx, where p is differentiable and p'(F)/p(F) is integrable. Then, for every f ∈ C_c^∞(R),

    E[f'(F)] = ∫ f'(x) p(x) dx = −∫ f(x) p'(x) dx = −∫ f(x) (p'(x)/p(x)) 1_{p>0}(x) p(x) dx
             = −E[ f(F) (p'(F)/p(F)) 1_{p(F)>0} ].

So IP(F;1) holds with H(F;1) = −(p'(F)/p(F)) 1_{p(F)>0} ∈ L^1(Ω), because p'(F)/p(F) ∈ L^1(Ω). By iteration, we obtain the following chain of implications:

    IP_{k+1}(F;1) holds true
    ⟹ p is k times differentiable and p^(k)(F)/p(F) ∈ L^1(Ω)
    ⟹ IP_k(F;1) holds true with H_k(F;1) = (−1)^k (p^(k)(F)/p(F)) 1_{p(F)>0} ∈ L^1(Ω).
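The representation (1.4) can be tested in the simplest case where the weight is explicit. For F standard Gaussian, the Gaussian example (1.3) with g ≡ 1 gives H(F;1) = F, and the lemma predicts that E[1_{F≥x} F] recovers the standard normal density (the choice of F is ours, for illustration):

```python
import numpy as np

# Check of p(x) = E[ 1_{[x,oo)}(F) H(F;1) ] (Lemma 1.1.3) for F standard
# Gaussian, where the Gaussian example gives H(F;1) = F. Then
# E[1_{F>=x} F] = integral_x^oo y phi(y) dy = phi(x), the normal density.
y = np.linspace(-10.0, 10.0, 400001)
dy = y[1] - y[0]
phi = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

errs = []
for x in (-1.0, 0.0, 0.7, 2.0):
    mask = y >= x
    p_x = float(np.sum((phi * y)[mask]) * dy)   # E[1_{F>=x} F] by quadrature
    exact = float(np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
    errs.append(abs(p_x - exact))
print(errs)
```

The same code with F replaced by a Monte Carlo sample turns (1.4) into a density estimator that needs no kernel bandwidth, which is one practical appeal of these formulas.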

1.1.3 Conditional expectations

The computation of conditional expectations is crucial for numerically solving certain nonlinear problems coming from dynamic programming algorithms. Several authors (see Fournié et al. [9], Lions and Regnier [15], Bally et al. [3], Kohatsu-Higa and Petterson [11], Bouchard et al. [6]) have employed formulas based on Malliavin calculus techniques in order to compute conditional expectations. In this section we give the abstract form of this formula.

Lemma 1.1.7. Let F and G be real random variables such that IP(F;1) and IP(F;G) hold true. Then

    E[G | F = x] = E[1_{[x,∞)}(F) H(F;G)] / E[1_{[x,∞)}(F) H(F;1)]   (1.8)

with the convention that the right hand side is null when the denominator is null.

Proof. Let θ(x) stand for the right hand side of the above equality. We have to check that for every f ∈ C_c^∞(R) one has E[f(F)G] = E[f(F)θ(F)]. Using the regularizing functions from the proof of Lemma 1.1.3 we write

    E[θ(F)f(F)] = ∫ f(z) θ(z) p(z) dz = ∫ f(z) E[1_{[z,∞)}(F) H(F;G)] dz
                = lim_{δ→0} ∫ f(z) E[Φ_δ(F−z) H(F;G)] dz = lim_{δ→0} ∫ f(z) E[G ϕ_δ(F−z)] dz
                = E[ G lim_{δ→0} ∫ f(z) ϕ_δ(F−z) dz ] = E[G f(F)]

and the proof is completed.

1.2 The multidimensional case

In this section we deal with a d-dimensional random variable F = (F^1, ..., F^d). The results concerning the density of the law and the conditional expectation are quite similar. Let us introduce some notations. For i = 1, ..., d, we set ∂_i = ∂/∂x_i. For a multi-index α = (α_1, ..., α_k) ∈ {1, ..., d}^k, we denote |α| = k and ∂_α = ∂_{α_1} ··· ∂_{α_k}, with the convention that ∂_∅ is just the identity. The integration by parts formula is now the following.

Definition 1.2.1. Let F : Ω → R^d and G : Ω → R be integrable random variables. Let α ∈ {1,...,d}^k, k ∈ N, be a multi-index. We say that the integration by parts formula IP_α(F;G) holds if there exists an integrable random variable H_α(F;G) such that

    IP_α(F;G):   E[∂_α ϕ(F) G] = E[ϕ(F) H_α(F;G)],   for every ϕ ∈ C_c^∞(R^d).   (1.9)

Again, for |α| = k, the set C_c^∞(R^d) can be replaced by C_c^k(R^d), C_b^∞(R^d) or also C_b^k(R^d). Let us give a simple example which turns out to be central in Malliavin calculus. Take F = f(Δ_1, ..., Δ_m) and G = g(Δ_1, ..., Δ_m), where f, g are differentiable functions and Δ_1, ..., Δ_m are independent, centered Gaussian random variables with variance σ_1², ..., σ_m² respectively. We denote Δ = (Δ_1, ..., Δ_m). Then, for each i = 1, ..., m,

    E[ ∂_{x_i} f(Δ) g(Δ) ] = E[ f(Δ) ( g(Δ) Δ_i/σ_i² − ∂_{x_i} g(Δ) ) ],   (1.10)

as an immediate consequence of (1.3) and of the independence of Δ_1, ..., Δ_m. It then follows that IP_{(i)}(f(Δ); g(Δ)) holds for every i = 1, ..., m.

We give now the result concerning the density of the law of F.

Proposition 1.2.2. (i) Suppose that IP_{(1,...,d)}(F;1) holds true. Then the density p of F exists and is given by

    p(x) = E[1_{I(x)}(F) H_{(1,...,d)}(F;1)],   (1.11)

where I(x) = ∏_{i=1}^d [x_i, ∞). In particular p is continuous.

(ii) Suppose that for every multi-index α, IP_α(F;1) holds true. Then ∂_α p exists and is given by

    ∂_α p(x) = (−1)^{|α|} E[1_{I(x)}(F) H_{α+1}(F;1)],   (1.12)

where α + 1 := (α_1 + 1, ..., α_d + 1). Moreover, if H_α(F;1) ∈ L²(Ω) for every α and F has finite moments of any order, then p ∈ S, S being the Schwartz space of the infinitely differentiable functions which decrease rapidly at infinity, together with all their derivatives.

Proof. The formal argument for (i) is based on δ_x = ∂_{(1,...,d)} 1_{I(x)} (in the sense of distributions) and the integration by parts formula. In order to make it rigorous, one has to regularize the Dirac function as in the proof of Lemma 1.1.3. In order to prove (ii) one employs the same "pushing back the Schwartz distribution" argument as in the proof of Lemma 1.1.5. Finally, in order to obtain bounds we write

    |∂_α p(x)| ≤ P(F^1 > x_1, ..., F^d > x_d)^{1/2} ‖H_{α+1}(F;1)‖_2.

If x_1 > 0, ..., x_d > 0, Chebyshev's inequality yields |∂_α p(x)| ≤ C_q |x|^{−q} for every q ∈ N. If the coordinates of x are not all positive, we have to use a variant of (1.11) which involves (−∞, x_i] instead of [x_i, ∞).

The result concerning the conditional expectation reads as follows.

Proposition 1.2.3. Let F = (F^1, ..., F^d) and G be two random variables such that IP_{(1,...,d)}(F;1) and IP_{(1,...,d)}(F;G) hold true.
Then

    E[G | F = x] = E[1_{I(x)}(F) H_{(1,...,d)}(F;G)] / E[1_{I(x)}(F) H_{(1,...,d)}(F;1)]   (1.13)

with the convention that the right hand side is null when the denominator is null.

Proof. The proof is the same as for Lemma 1.1.7, by using the regularization functions ϕ_δ(x) = ∏_{i=1}^d ϕ_δ(x_i) and Φ_δ(x) = ∏_{i=1}^d Φ_δ(x_i) and the fact that ∂_{(1,...,d)} Φ_δ(x) = ϕ_δ(x).
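The one dimensional formula (1.8) of Lemma 1.1.7 can be checked in a case where everything is explicit. Take F standard Gaussian and G = F², so that E[G | F = x] = x²; the Gaussian example (1.3) gives H(F;1) = F and, with g(y) = y², H(F;G) = F³ − 2F (these particular choices are ours, for illustration):

```python
import numpy as np

# Check of Lemma 1.1.7: E[G | F = x] as a ratio of tail expectations, for
# F standard Gaussian and G = F^2 (so the true answer is x^2). The weights
# from the Gaussian example are H(F;1) = F and H(F;G) = F^3 - 2F.
y = np.linspace(-10.0, 10.0, 400001)
dy = y[1] - y[0]
phi = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

def tail_expect(values, x):
    """E[ 1_{F >= x} values(F) ] for F standard Gaussian, by quadrature."""
    mask = y >= x
    return float(np.sum((values * phi)[mask]) * dy)

errs = []
for x in (-1.5, 0.5, 1.0, 2.0):
    num = tail_expect(y**3 - 2 * y, x)   # E[ 1_{[x,oo)}(F) H(F;G) ]
    den = tail_expect(y, x)              # E[ 1_{[x,oo)}(F) H(F;1) ]
    errs.append(abs(num / den - x**2))
print(errs)
```

In a Monte Carlo setting both tail expectations are estimated from the same sample, which is how the regression-free conditional expectation estimators of the references above are built.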

Chapter 2

Brownian Malliavin calculus

2.1 The finite dimensional case

In this section we introduce the finite dimensional simple functionals and the finite dimensional simple processes; we define the Malliavin derivative and the Skorohod integral for these finite dimensional objects and we derive their main properties, such as the duality formula, the chain rule, the Clark-Ocone formula and the integration by parts formula. We will use here the space C_p^k(R^d) of the functions f : R^d → R whose derivatives up to order k exist, are continuous and have polynomial growth. Similarly we define C_p^∞(R^d).

2.1.1 Main definitions and properties

Let W = (W^1, ..., W^d) be a d-dimensional Brownian motion defined on a probability space (Ω, F, P), and assume that the underlying filtration {F_t}_{t∈[0,1]} is the one generated by W, augmented by the P-null sets. To simplify the notation, we suppose for the moment that d = 1; the multidimensional case is deferred to Section 2.3. For each n, k ∈ N we denote

    t_k^n = k/2^n   and   Δ_k^n = W(t_{k+1}^n) − W(t_k^n),  k = 0, ..., 2^n − 1.

We denote Δ^n = (Δ_0^n, ..., Δ_{2^n−1}^n). Notice that Δ^n is a multidimensional Gaussian r.v., taking values in R^{2^n}, with independent components:

    Δ^n ~ N(0, 2^{−n} I_{2^n}),

where N(m, Γ) denotes the Gaussian law with mean m and covariance matrix Γ, and I_{d×d} denotes the d×d identity matrix.

Definition 2.1.1. A simple functional of order n is a random variable of the form F = f(Δ^n), where f ∈ C_p^∞(R^{2^n}). We denote the space of the simple functionals of order n by

    S_n = {F = f(Δ^n) : f ∈ C_p^∞(R^{2^n})}

and define the space of all simple functionals as S = ∪_{n∈N} S_n.

Remark 2.1.2.
1. S_n ⊂ S_{n+1}; in fact we have [t_k^n, t_{k+1}^n) = [t_{2k}^{n+1}, t_{2k+1}^{n+1}) ∪ [t_{2k+1}^{n+1}, t_{2k+2}^{n+1}), so that

    F = f(..., Δ_k^n, ...) = f(..., Δ_{2k}^{n+1} + Δ_{2k+1}^{n+1}, ...).

2. S ⊂ L^p(Ω, F_1, P) for all p ≥ 1, as a consequence of the fact that f has polynomial growth and that any Gaussian r.v. has finite moments of any order.
3. S is a linear dense subset of L²(Ω, F_1, P). There are several ways to show the validity of this assertion; we give a possible proof in Appendix 2.6 (see Proposition 2.6.4).

Definition 2.1.3. A process U : [0,1] × Ω → R is called a simple process of order n if for any k = 0, ..., 2^n − 1 there exists U_k ∈ S_n such that

    U_t(ω) = Σ_{k=0}^{2^n−1} U_k(ω) 1_{[t_k^n, t_{k+1}^n)}(t).

We denote by P_n the space of the simple processes of order n, i.e.

    P_n = { U : [0,1] × Ω → R : U_t(ω) = Σ_{k=0}^{2^n−1} U_k(ω) 1_{[t_k^n, t_{k+1}^n)}(t), U_k ∈ S_n },

and the space of all simple processes is given by P = ∪_{n∈N} P_n. Since U_k ∈ S_n, one has U_k = u_k(Δ_0^n, ..., Δ_{2^n−1}^n), where u_k ∈ C_p^∞(R^{2^n}). Therefore u_k may depend on all the increments of the Brownian motion, so that a simple process is generally not adapted. In fact, U is adapted if and only if U_k = u_k(Δ_0^n, ..., Δ_{k−1}^n) for any k = 0, ..., 2^n − 1.

Remark 2.1.4.
1. S_n ⊂ S_{n+1} implies that P_n ⊂ P_{n+1}.
2. For each fixed ω ∈ Ω, t ↦ U_t(ω) is an element of L²([0,1], B([0,1]), dt), and in fact belongs to L^p([0,1], B([0,1]), dt) for any p ≥ 1. Then, if U, V ∈ P we can define the scalar product on this space by using the standard one on L²([0,1]), that is,

    ⟨U, V⟩ = ∫_0^1 U_s V_s ds.

Notice that ⟨U, V⟩ depends on ω and, moreover, is an a.s. finite r.v.
3. For the sake of simplicity, set H_1 = L²([0,1], B([0,1]), dt) = {φ : [0,1] → R : ∫_0^1 φ_s² ds < ∞} and

    L^p(H_1) = { U : Ω → H_1 : E[‖U‖_{H_1}^p] = E[ ( ∫_0^1 U_s² ds )^{p/2} ] < ∞ }.

Then P ⊂ L^p(H_1) for all p ∈ N.
4. P is a dense subset of L²(H_1) ≡ L²(Ω × [0,1], F_1 ⊗ B([0,1]), P ⊗ dt).

2.1.2 Differential operators. First properties

We can now introduce the Malliavin derivative and its adjoint operator, the Skorohod integral.

Definition 2.1.5. The Malliavin derivative of a r.v. F = f(Δ^n) ∈ S_n is the simple process {D_t F}_{t∈[0,1]} ∈ P_n given by

    D_t F = Σ_{k=0}^{2^n−1} ∂f/∂x_k (Δ^n) 1_{[t_k^n, t_{k+1}^n)}(t).

We recall that x_k represents the increment Δ_k^n = W(t_{k+1}^n) − W(t_k^n). From the definition, we have that D_t F = ∂F/∂Δ_k^n for t ∈ [t_k^n, t_{k+1}^n). If we denote Δ_t^n = Δ_k^n when t ∈ [t_k^n, t_{k+1}^n), then Δ_t^n represents the increment of W corresponding to t. Therefore, we can use the following notation:

    D_t F = ∂F/∂Δ_t^n ≡ ∂f/∂x_k (Δ_0^n, Δ_1^n, ..., Δ_{2^n−1}^n),   for t ∈ [t_k^n, t_{k+1}^n).

Notice that the definition is well posed, in the sense that the operator D does not depend on n. In fact, for F ∈ S_n ⊂ S_{n+1} we have

    ∂F/∂Δ_k^n = ∂F/∂Δ_{2k}^{n+1} = ∂F/∂Δ_{2k+1}^{n+1},   (2.1)

because [t_k^n, t_{k+1}^n) = [t_{2k}^{n+1}, t_{2k+1}^{n+1}) ∪ [t_{2k+1}^{n+1}, t_{2k+2}^{n+1}) and F = f(..., Δ_k^n, ...) = f(..., Δ_{2k}^{n+1} + Δ_{2k+1}^{n+1}, ...). Therefore, (2.1) allows to define D : S = ∪_n S_n → P = ∪_n P_n as follows:

    D_t F = ∂F/∂Δ_t^n,   t ∈ [0,1].

Definition 2.1.6. The Skorohod integral is defined as the operator δ : P → S,

    δ(U) = Σ_{k=0}^{2^n−1} ( u_k(Δ^n) Δ_k^n − ∂u_k/∂x_k (Δ^n) 2^{−n} ),

where U = Σ_{k=0}^{2^n−1} u_k(Δ^n) 1_{[t_k^n, t_{k+1}^n)} ∈ P_n ⊂ P. Note that the definition again does not depend on n, and so it is well posed.

Remark 2.1.7. [Skorohod integral vs Ito integral] We have already noticed that a process U ∈ P_n is F_t-adapted if and only if u_k(Δ^n) depends only on the variables Δ_0^n, ..., Δ_{k−1}^n. Consequently, ∂u_k/∂x_k = 0 and in such a case

    δ(U) = Σ_{k=0}^{2^n−1} u_k(Δ^n) Δ_k^n = ∫_0^1 U_s dW_s,

that is, δ(U) coincides with the Ito integral w.r.t. W. This shows that the Skorohod integral is an extension of the Ito integral to the set of non adapted processes.

We can now prove the link between Malliavin derivatives and Skorohod integrals and investigate some immediate properties of these operators.

Proposition 2.1.8.
(i) [Duality] For any F ∈ S and U ∈ P one has E⟨DF, U⟩ = E[F δ(U)].
(ii) [Chain rule] Let F = (F^1, ..., F^m), where F^i ∈ S, i = 1, ..., m, and Φ ∈ C_p^1(R^m). Then Φ(F) ∈ S and

    D Φ(F) = Σ_{i=1}^m ∂_{x_i} Φ(F) DF^i.

(iii) [Skorohod integral of a special product] Let U ∈ P and F ∈ S. Then

    δ(F U) = F δ(U) − ⟨DF, U⟩.

Proof. (i) Let n denote an integer such that F ∈ S_n and U ∈ P_n. Then

    E⟨DF, U⟩ = E[ Σ_{k=0}^{2^n−1} ∂f/∂x_k(Δ^n) u_k(Δ^n) · 2^{−n} ].

Δ^n is a vector of i.i.d. Gaussian r.v.'s with variance h_n = 1/2^n. Then we can use (1.10) and we obtain

    E[ ∂f/∂x_k(Δ^n) u_k(Δ^n) ] = E[ f(Δ^n) ( u_k(Δ^n) Δ_k^n / h_n − ∂u_k/∂x_k(Δ^n) ) ].

By replacing everything we obtain

    E⟨DF, U⟩ = E[ f(Δ^n) Σ_{k=0}^{2^n−1} ( u_k(Δ^n) Δ_k^n − ∂u_k/∂x_k(Δ^n) h_n ) ] = E[F δ(U)].

The proof of (ii) is straightforward.
(iii) Take G ∈ S. By using the duality formula and the chain rule, we have

    E[G δ(FU)] = E⟨DG, FU⟩ = E⟨F DG, U⟩ = E⟨D(GF) − G DF, U⟩
               = E⟨D(GF), U⟩ − E[G ⟨DF, U⟩] = E[GF δ(U)] − E[G ⟨DF, U⟩].

Then E[G δ(FU)] = E[G (F δ(U) − ⟨DF, U⟩)] for any G ∈ S, and (iii) immediately follows.

We are now ready to prove a first integration by parts formula in the Malliavin sense. For F = (F^1, ..., F^m), with F^i ∈ S for any i = 1, ..., m, set σ_F as the following m×m symmetric matrix:

    σ_F^{ij} = ⟨DF^i, DF^j⟩ = ∫_0^1 D_t F^i D_t F^j dt,   i, j = 1, ..., m.

σ_F is called the Malliavin covariance matrix associated to F. It is a positive semidefinite matrix, because for any ξ ∈ R^m one has

    ⟨σ_F ξ, ξ⟩ = Σ_{i,j} σ_F^{ij} ξ_i ξ_j = ∫_0^1 ( Σ_i D_t F^i ξ_i )( Σ_j D_t F^j ξ_j ) dt = ∫_0^1 ( Σ_{i=1}^m D_t F^i ξ_i )² dt ≥ 0.
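Before moving on to the integration by parts formula, the duality of Proposition 2.1.8 can be checked numerically on the coarsest grid. The choices below (F = Δ_0 Δ_1 and the non-adapted process U with u_0 = Δ_1, u_1 = Δ_0) are our illustrative ones:

```python
import numpy as np

# Monte Carlo check of the duality E<DF, U> = E[F delta(U)] (Prop. 2.1.8)
# with two increments D0, D1 ~ N(0, h), h = 1/2. We take F = D0 * D1 and
# the NON-adapted simple process U = D1 on [0,1/2), D0 on [1/2,1).
rng = np.random.default_rng(1)
h, n = 0.5, 1_000_000
D0 = rng.standard_normal(n) * np.sqrt(h)
D1 = rng.standard_normal(n) * np.sqrt(h)

F = D0 * D1
# D_t F = D1 on [0,1/2), D0 on [1/2,1), hence <DF, U> = h*(D1*D1 + D0*D0).
lhs = np.mean(h * (D1 * D1 + D0 * D0))          # E<DF, U>

# delta(U) = u_0*D0 - h*du_0/dx_0 + u_1*D1 - h*du_1/dx_1
#          = D1*D0 - 0 + D0*D1 - 0 = 2*D0*D1
rhs = np.mean(F * 2 * D0 * D1)                  # E[F delta(U)]
print(lhs, rhs)  # both Monte Carlo estimates approximate 2*h^2 = 1/2
```

Note that U is not adapted (u_0 depends on the future increment Δ_1), yet its Skorohod integral, and the duality with D, are perfectly well defined.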

Proposition 2.1.9. [Malliavin IbP formula] Let F = (F^1, ..., F^m) and G be such that F^1, ..., F^m, G ∈ S. Suppose that σ_F is invertible and let γ_F denote the inverse of σ_F. Suppose moreover that the entries of γ_F belong to S. Then for every ϕ ∈ C_b^1(R^m)

    E[∂_{x_i} ϕ(F) G] = E[ϕ(F) H_i(F;G)]

with

    H_i(F;G) = Σ_{j=1}^m δ( γ_F^{ji} G DF^j ).

Proof. By using the chain rule, we can write

    ⟨Dϕ(F), DF^j⟩ = Σ_{q=1}^m ∂_{x_q} ϕ(F) ⟨DF^q, DF^j⟩ = Σ_{q=1}^m ∂_{x_q} ϕ(F) σ_F^{qj},   j = 1, ..., m.

Since σ_F is invertible with inverse matrix γ_F, we can write

    ∂_{x_i} ϕ(F) = Σ_{j=1}^m ⟨Dϕ(F), DF^j⟩ γ_F^{ji},   i = 1, ..., m.

Therefore,

    E[∂_{x_i} ϕ(F) G] = Σ_{j=1}^m E[ ⟨Dϕ(F), DF^j⟩ γ_F^{ji} G ] = Σ_{j=1}^m E[ ⟨Dϕ(F), γ_F^{ji} G DF^j⟩ ]
                      = E[ ϕ(F) Σ_{j=1}^m δ( γ_F^{ji} G DF^j ) ],

and the above steps make sense because all the r.v.'s and processes involved are, by hypothesis, in the right spaces.

2.2 The infinite dimensional case

The duality formula is the one to be used in order to show that the operators D and δ are closable, and this last property allows one to extend them to the infinite dimensional case, that is, to r.v.'s and processes not necessarily depending on finitely many increments of the Brownian motion but on the whole path. Let us start from the following facts. We have seen that D : S ⊂ L²(Ω) → P ⊂ L²(H_1) and δ : P ⊂ L²(H_1) → S ⊂ L²(Ω). The operators δ and D are linear but unbounded, i.e. there exists no constant C such that for any F ∈ S one has

    ‖DF‖²_{L²(H_1)} = E[ ∫_0^1 (D_s F)² ds ] ≤ C ‖F‖²_{L²(Ω)}.

Anyway, we can state the following property:

Lemma 2.2.1. D and δ are both closable, that is:
(i) if {F_n}_n ⊂ S is such that lim_n F_n = 0 in L²(Ω) and lim_n DF_n = U in L²(H_1), then U = 0;
(ii) if {U_n}_n ⊂ P is such that lim_n U_n = 0 in L²(H_1) and lim_n δ(U_n) = F in L²(Ω), then F = 0.

Proof. (i) Take {F_n}_n ⊂ S such that lim_n F_n = 0 in L²(Ω) and lim_n DF_n = U in L²(H_1). Since P is dense in L²(H_1), it is sufficient to prove that E⟨U, V⟩ = 0 for any V ∈ P. In fact, if V ∈ P, by using the duality formula one has

    E⟨U, V⟩ = lim_n E⟨DF_n, V⟩ = lim_n E[F_n δ(V)] = 0.

The proof of (ii) is similar.

2.2.1 The set Dom_p(D) = D^{1,p}

We first introduce a suitable set on which the Malliavin derivative D is well defined, extending the set S of the simple functionals.

Definition 2.2.2. Let p ∈ N. We say that F ∈ Dom_p(D) = D^{1,p} if there exists a sequence {F_n}_n ⊂ S such that lim_n F_n = F in L^p(Ω) and lim_n DF_n = U in L^p(H_1) for some U ∈ L^p(H_1). In this case we define DF = U = lim_n DF_n in L^p(H_1).

Since L^{p'}(H_1) ⊂ L^p(H_1) for p ≤ p', we have D^{1,p'} ⊂ D^{1,p}. We put D^{1,∞} = Dom_∞(D) = ∩_{p∈N} D^{1,p}. We observe that DF does not depend on the sequence {F_n}_n (because D is closable) nor on p. We note also that D^{1,2} is not an algebra, while D^{1,∞} is. We define a norm ‖·‖_{1,p} on D^{1,p} by

    ‖F‖_{1,p}^p = ‖F‖_p^p + ‖DF‖_{L^p(H_1)}^p = E[|F|^p] + E[ ( ∫_0^1 (D_t F)² dt )^{p/2} ].

Notice that for p = 2, the norm ‖·‖_{1,2} is the one resulting from the scalar product

    ⟨F, G⟩_{1,2} = E[FG] + E[ ∫_0^1 D_s F D_s G ds ].

Moreover, D^{1,2} is a Hilbert space.

Remark 2.2.3.
• F belongs to the closure of S with respect to ‖·‖_{1,p} if there exists {F_n}_n ⊂ S such that F_n → F in L^p(Ω) and {F_n}_n is a Cauchy sequence with respect to ‖·‖_{1,p}; it then follows that D^{1,p} = Dom_p(D) = the closure of S with respect to ‖·‖_{1,p};

• Dom_p(D) is complete, i.e. every Cauchy sequence in Dom_p(D) converges to an element of Dom_p(D). Indeed, consider a Cauchy sequence {F_n}_n with respect to ‖·‖_{1,p}. This sequence is also Cauchy with respect to ‖·‖_p, and we know that L^p is complete, so there exists F ∈ L^p(Ω) such that F_n → F in ‖·‖_p. Since F_n ∈ Dom_p(D), we may find simple functionals F̃_n such that ‖F_n − F̃_n‖_{1,p} ≤ 1/n, so that {F̃_n}_n is a Cauchy sequence with respect to ‖·‖_{1,p} and F̃_n → F in ‖·‖_p. So F ∈ Dom_p(D).

2.2.2 The set Dom_p(δ)

Again, we introduce a suitable set on which the Skorohod integral δ is well defined, extending the set P of the simple processes. We start similarly to Definition 2.2.2.

Definition 2.2.4. Let p ∈ N. We say that U ∈ Dom_p(δ) if there exists a sequence {U_n}_n ⊂ P such that lim_n U_n = U in L^p(H_1) and lim_n δ(U_n) = F in L^p(Ω) for some F ∈ L^p(Ω). In this case we define δ(U) = F = lim_n δ(U_n) in L^p(Ω). On P, we consider the norm

    ‖U‖_{δ,p} = ‖U‖_{L^p(H_1)} + ‖δ(U)‖_p

and we have Dom_p(δ) = the closure of P with respect to ‖·‖_{δ,p}.

2.2.3 Properties

It is often unpleasant to compute Malliavin derivatives or Skorohod integrals through limits, so we need a workable criterion, for example the following.

Proposition 2.2.5. [Criterion]
(i) Let F ∈ L²(Ω). Suppose that there exists a sequence {F_n}_n ⊂ D^{1,2} such that (a) lim_n F_n = F in L²(Ω) and (b) sup_n ‖F_n‖_{1,2} ≤ C < ∞. Then F ∈ Dom_2(D) and ‖F‖_{1,2} ≤ C. Moreover, if sup_n ‖F_n‖_{1,p} ≤ C_p then ‖F‖_{1,p} ≤ C_p.
(ii) Let U ∈ L²(H_1). Suppose that there exists a sequence {U_n}_n ⊂ Dom_2(δ) such that (a) lim_n U_n = U in L²(H_1) and (b) sup_n ‖U_n‖_{δ,2} ≤ C < ∞. Then U ∈ Dom_2(δ) and ‖U‖_{δ,2} ≤ C. Moreover, if sup_n ‖U_n‖_{δ,p} ≤ C_p then ‖U‖_{δ,p} ≤ C_p.

Proof. (i) Any bounded set in a Hilbert space is relatively weakly compact, so we may find F̄ ∈ D^{1,2} such that F_n → F̄ weakly. We use Mazur's lemma¹: for each n ∈ N there exist k_n ≥ n and λ_k^n ≥ 0, k = n, ..., k_n, with Σ_{k=n}^{k_n} λ_k^n = 1, such that F̄_n := Σ_{k=n}^{k_n} λ_k^n F_k → F̄ strongly with respect to ‖·‖_{1,2} and, in particular, in L²(Ω). Notice that

    ‖F − F̄_n‖ = ‖ Σ_{k=n}^{k_n} λ_k^n (F − F_k) ‖ ≤ Σ_{k=n}^{k_n} λ_k^n ‖F − F_k‖ ≤ sup_{k≥n} ‖F − F_k‖ → 0.

It follows that F = F̄ and so F ∈ D^{1,2}. We also have

    ‖F‖_{1,2} = lim_n ‖F̄_n‖_{1,2} ≤ lim_n Σ_{k=n}^{k_n} λ_k^n ‖F_k‖_{1,2} ≤ C.

Let us now prove the assertion concerning the p-norm. Passing to a subsequence, we may assume that F_n → F a.s. Since sup_n ‖F_n‖_{1,p} ≤ C_p, we may use uniform integrability in order to derive F_n → F with respect to ‖·‖_{1,p'} for p' < p. Then ‖F‖_{1,p'} ≤ sup_n ‖F_n‖_{1,p'} ≤ sup_n ‖F_n‖_{1,p} ≤ C_p. And finally, ‖F‖_{1,p} ≤ sup_{p'<p} ‖F‖_{1,p'} ≤ C_p. Similar arguments give (ii).

We have seen in the finite dimensional framework that the Malliavin integration by parts formula can be achieved once some properties are verified: the duality relationship, the chain rule and, for practical purposes, the formula for the Skorohod integral of a special product. The question is whether Proposition 2.1.8 continues to hold. The answer is positive, and in fact one has:

Proposition 2.2.6.
(i) [Duality] For F ∈ Dom_2(D) and U ∈ Dom_2(δ),

    E⟨DF, U⟩ = E[F δ(U)].

(ii) [Chain rule] Let F = (F^1, ..., F^m), where F^i ∈ D^{1,2}, i = 1, ..., m, and Φ ∈ C_b^1(R^m). Then Φ(F) ∈ D^{1,2} and

    D Φ(F) = Σ_{i=1}^m ∂_{x_i} Φ(F) DF^i.

If F^i ∈ D^{1,∞}, then the conclusion is true also for Φ ∈ C_p^1(R^m).
(iii) [Skorohod integral of a special product] Let U ∈ Dom_2(δ) and F ∈ D^{1,2} be such that FU ∈ Dom_2(δ). Then

    δ(FU) = F δ(U) − ⟨DF, U⟩.

¹ Mazur's lemma. Let (X, ‖·‖) denote a Banach space and {u_n}_n ⊂ X such that u_n → u weakly (that is, f(u_n) → f(u) for each continuous linear functional f). Then there exists a function N : N → N and, for any n ∈ N, numbers {α_k^n : k = 1, ..., N(n)} such that α_k^n ≥ 0 for any k, Σ_{k=1}^{N(n)} α_k^n = 1, and such that the convex combination v_n = Σ_{k=1}^{N(n)} α_k^n u_k strongly converges to u, i.e. ‖v_n − u‖ → 0 as n → ∞.

Proof. (i) For F ∈ Dom_2(D) and U ∈ Dom_2(δ), take {F_n}_n ⊂ S and {U_n}_n ⊂ P such that, as n → ∞, F_n → F, δ(U_n) → δ(U) in L²(Ω) and DF_n → DF, U_n → U in L²(H_1). By applying the duality relationship between S and P (Proposition 2.1.8),

    E⟨DF, U⟩ = lim_n E⟨DF_n, U_n⟩ = lim_n E[F_n δ(U_n)] = E[F δ(U)].

(ii) Let us first prove that if F^k ∈ S for any k = 1, ..., m and Φ ∈ C_b^1(R^m), then Φ(F) ∈ D^{1,2} and the chain rule holds. In fact, let {Φ_n}_n ⊂ C_b^∞(R^m) ⊂ C_p^∞(R^m) denote a sequence such that Φ_n → Φ and ∇Φ_n → ∇Φ uniformly as n → ∞. Since Φ_n(F) ∈ S, the chain rule holds by Proposition 2.1.8. Now, ‖Φ_n(F) − Φ(F)‖ ≤ ‖Φ_n − Φ‖_∞ → 0, and for each k one has

    ‖∂_{x_k}Φ_n(F) DF^k − ∂_{x_k}Φ(F) DF^k‖_{L²(H_1)} ≤ ‖∂_{x_k}Φ_n − ∂_{x_k}Φ‖_∞ ‖DF^k‖_{L²(H_1)} → 0,

and this gives the statement. Suppose now that F^k ∈ D^{1,2} for any k = 1, ..., m and Φ ∈ C_b^1(R^m). We then take {F_n^k}_n ⊂ S such that ‖F_n^k − F^k‖_{1,2} → 0. Since Φ has bounded derivatives, we immediately obtain Φ(F_n) → Φ(F) in L²(Ω). Moreover, from the first part of the proof we know that DΦ(F_n) = Σ_{k=1}^m ∂_{x_k}Φ(F_n) DF_n^k. Then we have to prove that, for each k,

    ∂_{x_k}Φ(F_n) DF_n^k → ∂_{x_k}Φ(F) DF^k   in L²(H_1).

We can write

    ‖∂_{x_k}Φ(F_n) DF_n^k − ∂_{x_k}Φ(F) DF^k‖_{L²(H_1)} ≤ a_n + b_n,

where

    a_n = ‖∂_{x_k}Φ(F_n) (DF_n^k − DF^k)‖_{L²(H_1)},
    b_n = ‖(∂_{x_k}Φ(F_n) − ∂_{x_k}Φ(F)) DF^k‖_{L²(H_1)}.

Concerning a_n, since ∂_{x_k}Φ is bounded, one has

    a_n ≤ const · ‖DF_n^k − DF^k‖_{L²(H_1)} → 0.

As for b_n, first notice that

    b_n² = E[ ∫_0^1 |∂_{x_k}Φ(F_n) − ∂_{x_k}Φ(F)|² |D_s F^k|² ds ].

Now, if we pass to any subsequence such that F_n → F a.s. and use Lebesgue's dominated convergence theorem, we immediately obtain b_n → 0.
(iii) Let G ∈ S. Using the duality formula we can write

    E[G δ(FU)] = E[ ∫_0^1 D_s G · F U_s ds ] = E[ ∫_0^1 ( D_s(FG) − G D_s F ) U_s ds ]
               = E[GF δ(U)] − E[ G ∫_0^1 D_s F U_s ds ].

This relation is true for all G ∈ S, so we have the thesis.

Remark 2.2.7. Notice that if F^i ∈ D^{1,∞}, i = 1, ..., m, then we can use Hölder's inequality (in particular, to show that b_n → 0 as n → ∞ in the above proof of (ii) in Proposition 2.2.6) and then we get that the chain rule holds also for Φ ∈ C_p^1(R^m). Actually, the chain rule holds also in other situations, for example under the requirement that Φ is only Lipschitz continuous (see e.g. Nualart [18], Proposition 1.2.3).

Example 2.2.8. Let F ∈ D^{1,2} be such that e^F ∈ L^p for any p. Then e^F ∈ D^{1,2} and D e^F = e^F DF. In fact, let {ψ_n}_{n≥1} ⊂ C_c^∞(R) be a sequence such that ψ_n(x) = 1 if |x| ≤ n, ψ_n(x) = 0 if |x| > n+1, |ψ_n| ≤ 1 for any x and sup_n sup_x |ψ'_n(x)| < ∞. Set now G_n = ψ_n(F) e^F. Notice that G_n = Ψ_n(F) with Ψ_n(x) = ψ_n(x) e^x ∈ C_c^∞(R), so that G_n ∈ D^{1,2} and the chain rule holds:

    D G_n = Ψ'_n(F) DF = e^F DF ( ψ'_n(F) + ψ_n(F) ).

Then it is sufficient to prove that G_n → e^F in L²(Ω) and DG_n → e^F DF in L²(H_1). In fact, we have

    ‖G_n − e^F‖² = E[ e^{2F} ( ψ_n(F) − 1 )² ].

But e^{2F}(ψ_n(F) − 1)² → 0 a.s. and e^{2F}(ψ_n(F) − 1)² ≤ e^{2F} ∈ L^1, so that by Lebesgue's dominated convergence theorem one has G_n → e^F. As for the second statement, by Hölder's inequality we have

    ‖DG_n − e^F DF‖²_{L²(H_1)} = E[ ∫_0^1 e^{2F} |D_s F|² ( ψ'_n(F) + ψ_n(F) − 1 )² ds ]
        ≤ ( E[ e^{2pF} |ψ'_n(F) + ψ_n(F) − 1|^{2p} ] )^{1/p} ‖DF‖²_{L^{2q}(H_1)},

where p, q > 1, 1/p + 1/q = 1. By using arguments similar to the ones developed above, one has E[ e^{2pF} |ψ'_n(F) + ψ_n(F) − 1|^{2p} ] → 0, and the statement holds.

2.2.4 Examples

We give here some leading examples.

Example 2.2.9. [Brownian motion] Take F = W_t, with t ∈ [0,1]. Then F ∈ Dom_2(D) and D_s W_t = 1_{s≤t}. In fact, we can write ([x] denoting the integer part)

    W_t = Σ_{k=0}^{[2^n t]−1} ( W(t_{k+1}^n) − W(t_k^n) ) + ( W_t − W([2^n t]/2^n) ).

Now, since

(i) F_n := Σ_{k=0}^{[2^n t]−1} ( W(t_{k+1}^n) − W(t_k^n) ) → W_t in L²(Ω) as n → ∞,

21 (ii) F_n ∈ D^{1,2} and D_sF_n = 1_{s ≤ [2^n t]/2^n} → 1_{s≤t} =: U in L²(Ω; H₁) as n → ∞,

it immediately follows that D_sW_t exists and is equal to 1_{s≤t}.

Example 2.2.10. [Ito integral of square integrable functions] Let ϕ ∈ L²[0,1] and set W(ϕ) := ∫₀¹ ϕ(r) dW_r. Then, W(ϕ) ∈ D^{1,2} and D_sW(ϕ) = ϕ(s). The proof is a consequence of the following steps.

step 1. Let ϕ be a step function on the dyadic intervals, i.e. ϕ(s) = Σ_{k=0}^{2^n−1} ϕ_k 1_{[t_k^n, t_{k+1}^n)}(s). Then W(ϕ) = Σ_{k=0}^{2^n−1} ϕ_k Δ_k^n is a simple functional and we compute directly the derivative: D_sW(ϕ) = Σ_{k=0}^{2^n−1} ϕ_k 1_{[t_k^n, t_{k+1}^n)}(s) = ϕ(s).

step 2. Let ϕ ∈ L²(0,1) be a continuous function. Then, there exists a sequence {ϕ_n}_n of step functions such that ϕ_n → ϕ in L²(0,1) as n → ∞. Now, step 1 ensures us that D_sW(ϕ_n) = ϕ_n(s). Since ϕ_n → ϕ in L²(0,1), the statement immediately follows.

step 3. The generalization to general functions ϕ belonging to L²(0,1) follows from the fact that the set of the continuous functions on (0,1) is a dense subset of L²(0,1).

Example 2.2.11. For ϕ_l ∈ L²(0,1), l = 1, ..., m, and for Φ ∈ C_p¹(ℝ^m), set

F = Φ(∫₀¹ ϕ_1(s) dW_s, ..., ∫₀¹ ϕ_m(s) dW_s).

Then F ∈ D^{1,2} and

D_sF = Σ_{k=1}^m ∂_{x_k}Φ(∫₀¹ ϕ_1(s) dW_s, ..., ∫₀¹ ϕ_m(s) dW_s) ϕ_k(s).

The proof is an immediate consequence of Example 2.2.10 and the chain rule.

Remark 2.2.12. Example 2.2.11 is particularly important if one is interested in studying the link with the definition of Malliavin derivatives as done in many texts, as for example the widely known one by Nualart [18]. There, the set of simple functionals S is given by the random variables F of the form

F = f(∫₀¹ ϕ_1(s) dW_s, ..., ∫₀¹ ϕ_n(s) dW_s)

where n ∈ ℕ, f ∈ C_c^∞(ℝ^n) and ϕ_i ∈ H₁ = L²([0,1], B[0,1], dt). Then, for F as above, the Malliavin derivative is defined as

D_tF = Σ_{k=1}^n ∂_{x_k}f(∫₀¹ ϕ_1(s) dW_s, ..., ∫₀¹ ϕ_n(s) dW_s) ϕ_k(t).

22 Furthermore, on S one sets

‖F‖²_{1,2} = ‖F‖²_{L²(Ω)} + ‖DF‖²_{L²(Ω×[0,1])}

and defines D^{1,2} as the closure of S with respect to ‖·‖_{1,2}. Now, Exercise 2.3.7 allows one to prove that this definition of Malliavin derivative agrees with the one already presented in these notes.

Remark 2.2.13. Consider a smooth functional of the form F = f(W_{t_1}, ..., W_{t_n}) with f ∈ C_p^∞ and 0 < t_1 < ⋯ < t_n ≤ 1, so that

D_tF = Σ_{i=1}^n ∂_{x_i}f(W_{t_1}, ..., W_{t_n}) 1_{t ≤ t_i}.

Then, for h ∈ H₁ = L²([0,1], B[0,1], dt) one has

⟨DF, h⟩ = ∫₀¹ Σ_{i=1}^n ∂_{x_i}f(W_{t_1}, ..., W_{t_n}) 1_{t ≤ t_i} h(t) dt
= Σ_{i=1}^n ∂_{x_i}f(W_{t_1}, ..., W_{t_n}) ∫₀^{t_i} h(t) dt
= lim_{ε→0} ( f(W_{t_1} + ε ∫₀^{t_1} h(t) dt, ..., W_{t_n} + ε ∫₀^{t_n} h(t) dt) − f(W_{t_1}, ..., W_{t_n}) ) / ε.

Therefore, for any h ∈ H₁ one gets

⟨DF, h⟩ = d/dε F(ω + ε ∫₀^· h(t) dt) |_{ε=0},

that is, for such F's the Malliavin derivative DF is linked to the directional derivative of F in the directions of the Cameron–Martin space

H¹ = {φ ∈ C([0,1], ℝ) : φ(t) = ∫₀^t h(s) ds, for h ∈ L²[0,1]}.

Example 2.2.14. [Lebesgue and Ito integrals] Let U denote an adapted process such that E[∫₀¹ U_r² dr] < ∞. Set

I₀(U) = ∫₀¹ U_r dr and I₁(U) = ∫₀¹ U_r dW_r.

We assume that for each fixed r ∈ [0,1], U_r ∈ D^{1,2} and

(i) sup_{r≤1} ‖U_r‖_{1,2} < ∞;

(ii) setting τ_n(r) = [2^n r]/2^n and U_r^n = U_{τ_n(r)}, then

∫₀¹ ‖U_r − U_r^n‖²_{1,2} dr = E[ ∫₀¹ ( |U_r − U_{τ_n(r)}|² + ∫₀¹ |D_sU_r − D_sU_{τ_n(r)}|² ds ) dr ] → 0 as n → ∞.

23 Then, I_i(U) ∈ D^{1,2} for i = 0, 1 and one has:

D_sI₀(U) = D_s ∫₀¹ U_r dr = ∫_s¹ D_sU_r dr    (2.3)

and

D_sI₁(U) = D_s ∫₀¹ U_r dW_r = U_s + ∫_s¹ D_sU_r dW_r.    (2.4)

In fact, suppose first i = 1. Then,

I₁(U^n) = Σ_{k=0}^{2^n−1} U_{k/2^n} Δ_k^n.

Therefore,

D_sI₁(U^n) = Σ_{k=0}^{2^n−1} D_s(U_{k/2^n} Δ_k^n) = U_{τ_n(s)} + Σ_{k ≥ [2^n s]} D_sU_{k/2^n} Δ_k^n,

and notice that

U_{τ_n(s)} + Σ_{k ≥ [2^n s]} D_sU_{k/2^n} Δ_k^n → U_s + ∫_s¹ D_sU_r dW_r in L²(Ω) as n → ∞

because of (ii). Now, by (ii), we have I₁(U^n) → I₁(U) in L²(Ω). Using (i), we obtain sup_n ‖I₁(U^n)‖_{1,2} < ∞. Then we can use the criterion in Proposition 2.2.5 in order to get I₁(U) ∈ D^{1,2}. Now, since we know that I₁(U) ∈ D^{1,2}, we have DI₁(U) = lim_n DI₁(U^n) in L²(Ω; H₁), and (2.4) is proved. Concerning (2.3), one can proceed in a similar way.

Example 2.2.15. We show here the Malliavin differentiability of the maximum of a Brownian motion. Let us put M = sup_{s≤1} W_s (we treat the time interval [0,1], but nothing changes for more general intervals) and we show that D_tM = 1_{[0,τ]}(t), where τ is the a.s. unique point at which W attains its maximum.

For any n ∈ ℕ, we put M_n = max_{k=0,...,2^n} W_{k/2^n}. Notice that M_n → M a.s. and |M_n − M|² ≤ 4M² ∈ L¹(Ω), so that by the Lebesgue dominated convergence theorem one has M_n → M in L²(Ω). Thus, it remains to show that M_n ∈ D^{1,2} and D_tM_n → 1_{[0,τ]}(t) in L²([0,1] × Ω). By setting φ_n : ℝ^{2^n+1} → ℝ, φ_n(x) = max(x_0, ..., x_{2^n}), then obviously M_n = φ_n(W_0, W_{1/2^n}, ..., W_1). The function φ_n is not a C_p¹ function, so the chain rule in Proposition 2.2.6 cannot be immediately applied. However, φ_n is a Lipschitz continuous function and its partial derivatives exist a.e., so smoothing arguments allow to state the validity of the chain rule (see e.g. Nualart [18], Proposition 1.2.3, p. 32): M_n = φ_n(W_0, W_{1/2^n}, ..., W_1) ∈ D^{1,2} and

D_tM_n = Σ_{k=0}^{2^n} ∂_{x_k}φ_n(W_0, W_{1/2^n}, ..., W_1) D_tW_{k/2^n}
= Σ_{k=0}^{2^n} ∂_{x_k}φ_n(W_0, W_{1/2^n}, ..., W_1) 1_{t ≤ k/2^n}.

Recall the reflection principle for a Brownian motion: for any x > 0, one has P(sup_{t≤T} W_t > x) = 2 P(W_T > x).
For T = 1 one gets P(M > x) = 2 P(W_1 > x) and then M has a probability density function given by f_M(x) = √(2/π) e^{−x²/2} 1_{x>0}, which tells us that M ∈ L^p for any p.
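As an aside (a simulation sketch of our own, not part of the original notes), the reflection principle just recalled is easy to test numerically: simulate a discretized Brownian path, track its running maximum, and compare P(M > x) with 2 P(W_1 > x). The discretized maximum slightly underestimates the true one, so the agreement holds only up to a small discretization bias.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 1_024
dt = 1.0 / n_steps

# Running maximum of a discretized Brownian path on [0, 1] (no path storage)
W = np.zeros(n_paths)
M = np.zeros(n_paths)
for _ in range(n_steps):
    W += rng.normal(0.0, sqrt(dt), n_paths)
    np.maximum(M, W, out=M)

x = 1.0
p_mc = (M > x).mean()                            # Monte Carlo estimate of P(M > x)
p_refl = 2 * 0.5 * (1.0 - erf(x / sqrt(2.0)))    # 2 P(W_1 > x)
print(abs(p_mc - p_refl) < 0.03)                 # True, up to discretization bias
```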

24 We set A_0 = {φ_n(x) = x_0} and, as k = 1, ..., 2^n, A_k = {φ_n(x) ≠ x_0, ..., φ_n(x) ≠ x_{k−1}, φ_n(x) = x_k}. Then, ∂_{x_k}φ_n(x) = 1_{A_k}(x) a.e., so that we can write

D_tM_n = Σ_{k=0}^{2^n} 1_{A_k}(W_0, W_{1/2^n}, ..., W_1) 1_{t ≤ k/2^n} = 1_{[0,τ_n]}(t),

where τ_n denotes the a.s. unique point among the k/2^n's such that M_n = W_{τ_n}. Straightforward computations allow to see that

E[ ∫₀¹ |D_tM_n − 1_{[0,τ]}(t)|² dt ] = E|τ_n − τ|.

Now, τ_n → τ a.s. because W has continuous paths (notice that this proves the a.s. uniqueness of τ) and |τ_n − τ| ≤ 1, so E|τ_n − τ| → 0, which in turn implies that D_tM_n → 1_{[0,τ]}(t) in L²([0,1] × Ω). Then, D_tM = 1_{[0,τ]}(t).

Example 2.2.16. We compute here the Skorohod integral of the Brownian bridge process on [0,1], which corresponds in some sense to a Brownian motion forced to be in two fixed points x and y at time 0 and 1 respectively. There are several ways to introduce such a process; for example, the Brownian bridge can be seen as

u(t) = x + t(y − x) + W_t − t W_1,

where W is a one dimensional Brownian motion. Then, by recalling that Skorohod and Ito integrals coincide on adapted processes, one has

δ(u) = x W_1 + (y − x) ∫₀¹ t dW_t + ∫₀¹ W_t dW_t − δ(v W_1),

where v(t) = t. By using (iii) of Proposition 2.2.6,

δ(v W_1) = W_1 ∫₀¹ t dW_t − ∫₀¹ D_tW_1 · t dt = W_1 ∫₀¹ t dW_t − 1/2.

Moreover, by Ito's formula applied to f(W_t) = W_t² and to g(t, W_t) = t W_t one gets

∫₀¹ W_t dW_t = (W_1² − 1)/2 and ∫₀¹ t dW_t = W_1 − ∫₀¹ W_t dt

respectively. Then

δ(u) = y W_1 + (W_1 + x − y) ∫₀¹ W_t dt − W_1²/2.

2.2.5 The Clark-Ocone formula

We recall the martingale representation formula: if F ∈ L²(Ω, F_1, P) then there exists a real valued and F_t-adapted process ϕ ∈ L²(Ω × [0,1], F_1 ⊗ B[0,1], P ⊗ dt) such that

F = E[F] + ∫₀¹ ϕ_s dW_s.

When the random variable F is Malliavin differentiable, one can write down explicitly the process ϕ. In fact, one has

Proposition 2.2.17. [Clark-Ocone formula] If F ∈ D^{1,2} then

F = E[F] + ∫₀¹ E[D_tF | F_t] dW_t.
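Before turning to the proof, a quick numerical illustration (our own sketch, not part of the notes). For F = W_1² one has E[F] = 1, D_tF = 2W_1 and E[D_tF | F_t] = 2W_t, so the Clark-Ocone formula predicts F = 1 + ∫₀¹ 2W_t dW_t, which can be checked pathwise with a discretized Ito sum:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 2_000, 2_048
dt = 1.0 / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)                                # W at the grid points
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints W_{t_k}

F = W[:, -1] ** 2                  # F = W_1^2, so E[F] = 1
# Clark-Ocone: D_t F = 2 W_1 and E[D_t F | F_t] = 2 W_t, hence
# F = E[F] + int_0^1 2 W_t dW_t, discretized as an Ito (left-point) sum.
F_rep = 1.0 + np.sum(2.0 * W_left * dW, axis=1)

rmse = np.sqrt(np.mean((F - F_rep) ** 2))
print(rmse < 0.05)  # True: pathwise agreement up to discretization error
```

Note that the left-point evaluation of the integrand is essential: it is exactly the adaptedness built into the Ito integral appearing in the formula.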

25 Proof. Without loss of generality we can assume that E[F] = 0 (otherwise, we work with F − E[F]), so that by the Brownian martingale representation theorem one has F = ∫₀¹ ϕ_s dW_s for some F_t-adapted process ϕ ∈ L²(Ω × [0,1]). Let us set P_ad the subset of the simple processes P which are F_t-adapted. For U ∈ P_ad one has δ(U) = ∫₀¹ U_s dW_s, so that

E[F δ(U)] = E[ ∫₀¹ ϕ_s dW_s ∫₀¹ U_s dW_s ] = E[ ∫₀¹ ϕ_s U_s ds ].

On the other hand, by the duality one has

E[F δ(U)] = E⟨DF, U⟩ = E[ ∫₀¹ D_sF U_s ds ] = E[ ∫₀¹ E[D_sF | F_s] U_s ds ].

It then follows that

E[ ∫₀¹ U_s (ϕ_s − E[D_sF | F_s]) ds ] = 0

for any U ∈ P_ad. The statement now follows by noticing that the closure of P_ad w.r.t. the norm in L²(Ω × [0,1]) is given by all the F_t-adapted processes belonging to L²(Ω × [0,1]).

Corollary 2.2.18.
1. If F ∈ D^{1,2} then F is a.s. constant if and only if DF = 0.
2. If A ∈ F_1 then 1_A ∈ D^{1,2} if and only if either P(A) = 1 or P(A) = 0. As a consequence, D^{1,2} is strictly included in L²(Ω, F_1, P).

Proof. The proof of 1. is immediate from the Clark-Ocone formula. As for 2., if 1_A ∈ D^{1,2} then by the chain rule (applied to 1_A = 1_A²) we get D1_A = D(1_A)² = 2 · 1_A D1_A. Now, if D1_A ≠ 0 then 1 = 2 · 1_A, which is impossible. Then, D1_A = 0, that is 1_A = const, which is true if either P(A) = 1 or P(A) = 0. The converse is immediate.

As an example, take A = {W_t > 0} and F = 1_A. Then F ∈ L²(Ω, F_1, P), because E[F²] = P(W_t > 0) = 1/2, while 1_A ∉ D^{1,2}, so that D^{1,2} is actually strictly included in L²(Ω, F_1, P).

2.2.6 The set Dom_p L

We introduce here the Ornstein-Uhlenbeck operator L. On the class of simple functionals S one has

L : S → S, LF = −δ(DF).

The following duality relationship holds:

E[F LG] = −E⟨DF, DG⟩ = E[LF G].

Similar arguments give that L is closable, so that one can give the following

26 Definition 2.2.19. F ∈ Dom L (= Dom₂ L) if there exists a sequence of simple functionals {F_n}_n such that F_n → F in L²(Ω) and LF_n → G in L²(Ω), for some G ∈ L²(Ω). We then define LF := G = lim_n LF_n. If the above convergences hold in L^p(Ω), p ≥ 2, we say that F ∈ Dom_p L. We put Dom_∞ L = ∩_p Dom_p L.

Obviously, for F ∈ Dom L one again has LF = −δ(DF). Moreover, on S we may define the norm

‖F‖_{L,p} = ‖F‖_p + ‖LF‖_p

so that Dom_p L is the closure of S under ‖·‖_{L,p}. The following chain rule holds:

Proposition 2.2.20. Let F = (F¹, ..., F^m) where F^i ∈ Dom_∞ L, i = 1, ..., m, and Φ ∈ C_p²(ℝ^m). Then Φ(F) ∈ Dom L and

LΦ(F) = Σ_{i=1}^m ∂_{x_i}Φ(F) LF^i + Σ_{i,j=1}^m ∂_{x_i}∂_{x_j}Φ(F) ⟨DF^i, DF^j⟩.

The proof is left as an exercise.

Remark 2.2.21. Consider m ≥ 1 paths ϕ¹, ..., ϕ^m in H₁ and set F^i = W(ϕ^i) = ∫₀¹ ϕ^i(s) dW_s. Such r.v.'s play a crucial role in Malliavin calculus (see also next Appendix 2.6) and in this special context, they allow to give a rough interpretation of the denomination "Ornstein-Uhlenbeck operator" given to L = −δ D. But for a deeper motivation, we refer to the interesting initial part of the book of Sanz-Solé [19]. Set

a_{ij} = ⟨ϕ^i, ϕ^j⟩ = ∫₀¹ ϕ^i(s) ϕ^j(s) ds

and notice that this is a symmetric, non negative definite m × m matrix, so that it has a square root σ (that is, σ is a m × m matrix such that σσ* = a). Now, for F^i = W(ϕ^i) one has DF^i = ϕ^i. Therefore, LF^i = −δ(ϕ^i) = −W(ϕ^i) = −F^i and ⟨DF^i, DF^j⟩ = ⟨ϕ^i, ϕ^j⟩ = a_{ij}. Then for any f ∈ C_p²(ℝ^m), Proposition 2.2.20 gives

Lf(F) = −Σ_{i=1}^m F^i ∂_{x_i}f(F) + Σ_{i,j=1}^m a_{ij} ∂_{x_i}∂_{x_j}f(F).

Now, the analogous operator on ℝ^m, that is

L₀f(x) = −Σ_{i=1}^m x_i ∂_{x_i}f(x) + Σ_{i,j=1}^m a_{ij} ∂_{x_i}∂_{x_j}f(x),

is the infinitesimal generator of the diffusion process X on ℝ^m evolving as dX_t = −X_t dt + √2 σ dW_t, which is an Ornstein-Uhlenbeck process.

2.2.7 The integration by parts formula

An important consequence of the duality formula is the integration by parts formula.

27 Definition 2.2.22. Let F = (F¹, ..., F^m) with F^i ∈ D^{1,2}. The Malliavin covariance matrix of F is defined as the symmetric, non negative definite matrix given by

σ_F^{ij} = ⟨DF^i, DF^j⟩ = ∫₀¹ D_sF^i D_sF^j ds.

We introduce the non-degeneracy assumption:

(N-D)  E[(det σ_F)^{−p}] < ∞, for all p ∈ ℕ.    (2.5)

If (N-D) holds then σ_F is almost surely invertible and we denote γ_F = σ_F^{−1}. The integration by parts formula reads as follows:

Theorem 2.2.23. [MIbP formula] Let F = (F¹, ..., F^m) with F^i ∈ D^{1,2} and G ∈ D^{1,2}. Suppose also that σ_F^{i,j} ∈ D^{1,2}, that (N-D) holds for F and that DF^i ∈ ∩_{p∈ℕ} Dom_p δ, i = 1, ..., m. Then for every ϕ ∈ C_p¹(ℝ^m) we have

E[∂_iϕ(F) G] = E[ϕ(F) H_i(F, G)], i = 1, ..., m,    (2.6)

where

H_i(F, G) = Σ_{j=1}^m δ(G γ_F^{ij} DF^j) = −Σ_{j=1}^m ( G γ_F^{i,j} LF^j + ⟨D(G γ_F^{i,j}), DF^j⟩ ).    (2.7)

Proof. First, let us notice that the second equality in (2.7) follows from the Skorohod integral of a special product property (see (iii) of Proposition 2.2.6). Using the chain rule we can write D_sϕ(F) = Σ_j ∂_jϕ(F) D_sF^j. Then,

⟨Dϕ(F), DF^i⟩_{H₁} = (σ_F ∇ϕ(F))_i,

which yields ∂_iϕ(F) = ⟨Dϕ(F), Σ_j γ_F^{ij} DF^j⟩. By using the duality formula, one gets

E[∂_iϕ(F) G] = E⟨Dϕ(F), Σ_j G γ_F^{ij} DF^j⟩ = E[ϕ(F) Σ_j δ(G γ_F^{ij} DF^j)]

and the statement holds.

2.3 Multidimensional Brownian motion

In this section we deal with a d-dimensional Brownian motion W = (W¹, ..., W^d) defined on a complete probability space (Ω, F, P), where F = {F_t}_{t∈[0,1]} is the filtration generated by W and augmented by the P-null sets. The definitions of Malliavin derivative and Skorohod integral, as well as the resulting properties, can be extended as in the standard calculus, and it is easy to describe the main ideas. For example, we have seen that the Malliavin derivative is given by

D_tF = ∂F/∂W_t,

28 where the above derivative has to be intended in some sense. Now, since we have a d-dimensional Brownian motion, and then d independent Brownian motions, such a derivative becomes a gradient, since in principle it can be taken w.r.t. all the d directions:

D_tF = (D_t¹F, ..., D_t^dF), D_t^iF = ∂F/∂W_t^i, i = 1, ..., d.

Now, concerning the Skorohod integral, it will again be the adjoint operator. Since the principal tool is the duality relationship, that is E⟨DF, U⟩ = E[F δ(U)], it is clear that the domain of the operator δ is necessarily based on processes taking values in ℝ^d. Moreover, for adapted processes the Skorohod and the Ito integral will agree: for an adapted process U_t = (U_t¹, ..., U_t^d) with the usual properties giving the Ito integrability,

δ(U) = Σ_{i=1}^d ∫₀¹ U_t^i dW_t^i.

But let us start by introducing the notations. For n, k ∈ ℕ, we denote t_k^n = k/2^n and

Δ_k^{n,i} = W^i_{t_{k+1}^n} − W^i_{t_k^n}, k = 0, ..., 2^n − 1 and i = 1, ..., d.

We set now

Δ_k^n = (Δ_k^{n,1}, ..., Δ_k^{n,d})*, k = 0, ..., 2^n − 1,

the symbol * denoting the transpose. Let us recall that, as i, k vary, the r.v.'s Δ_k^{n,i} are i.i.d. and Δ_k^{n,i} ~ N(0, 1/2^n). Therefore,

Δ^n = (Δ_0^n, ..., Δ_{2^n−1}^n) ∈ ℝ^{d×2^n}

is a d × 2^n matrix. Now, a simple functional of order n is a random variable of the form F = f(Δ^n) where f ∈ C_p^∞(ℝ^{d×2^n}). The space of the simple functionals of order n is

S_n = {F = f(Δ^n) : f ∈ C_p^∞(ℝ^{d×2^n})}.

We set S = ∪_n S_n as the set of all the simple functionals. A process U : [0,1] × Ω → ℝ^d is called a simple process of order n if U_t = (U_t¹, ..., U_t^d) with

U_t^i(ω) = Σ_{k=0}^{2^n−1} U_k^i 1_{[t_k^n, t_{k+1}^n)}(t), U_k^i ∈ S_n, k = 0, ..., 2^n − 1, i = 1, ..., d.

It is worth noticing that U_t is a r.v. taking values in ℝ^d. Recall that the requirement U_k^i ∈ S_n allows one to write the i-th component U^i of a simple process of order n as

U_t^i(ω) = Σ_{k=0}^{2^n−1} u_k^i(Δ^n) 1_{[t_k^n, t_{k+1}^n)}(t), u_k^i ∈ C_p^∞(ℝ^{d×2^n}), k = 0, ..., 2^n − 1,

as i = 1, ..., d. Again, a simple process of order n is adapted if and only if

u_k^i(Δ^n) = u_k^i(Δ_0^n, ..., Δ_{2^n−1}^n) = u_k^i(Δ_0^n, ..., Δ_{k−1}^n)

29 for any k and i. We set P_n^d as the set of the simple processes of order n and P^d = ∪_n P_n^d as the set of all the simple processes. For each fixed ω ∈ Ω, t ↦ U_t is an element of

L²([0,1], B[0,1], dt; ℝ^d) = {φ : [0,1] → ℝ^d : φ is Borel measurable and ∫₀¹ |φ(s)|² ds < ∞} := H_d.

Then, on P^d we can define the scalar product by using the usual one on L²: for U, V ∈ P^d,

⟨U, V⟩ = Σ_{i=1}^d ∫₀¹ U_s^i V_s^i ds.

Notice that the resulting value is a r.v. Now, let us denote

L^p(H_d) = { U : Ω → H_d : E[‖U‖^p_{H_d}] = E[ ( Σ_{i=1}^d ∫₀¹ |U_s^i|² ds )^{p/2} ] < ∞ }.

Then, P^d ⊆ L^p(H_d) for all p ∈ ℕ.

Definition 2.3.1. The Malliavin derivative of a variable F = f(Δ^n) ∈ S_n is the simple process {D_tF}_{t∈[0,1]} ∈ P_n^d given by

D_tF = (D_t¹F, ..., D_t^dF),

where

D_t^iF = Σ_{k=0}^{2^n−1} (∂f/∂x_{k,i})(Δ^n) 1_{[t_k^n, t_{k+1}^n)}(t), i = 1, ..., d.

Notice that D_t^i is the Malliavin derivative described in the previous section if one considers the Brownian motion W^i. In some sense, in order to define D_t^i one has to freeze all the random sources except for the i-th one. That is why D_t^i is often called the Malliavin derivative in the i-th direction of the Brownian motion.

Definition 2.3.2. The Skorohod integral is defined as the operator δ : P^d → S,

δ(U) = Σ_{i=1}^d δ^i(U^i),

where, as i = 1, ..., d, for U_t^i = Σ_{k=0}^{2^n−1} u_k^i(Δ^n) 1_{[t_k^n, t_{k+1}^n)}(t),

δ^i(U^i) = Σ_{k=0}^{2^n−1} ( u_k^i(Δ^n) Δ_k^{n,i} − (∂u_k^i/∂x_{k,i})(Δ^n) · 1/2^n ).

Again, δ^i(U^i) agrees with the one-dimensional definition of the Skorohod integral: simply, work on the i-th Brownian motion W^i, or equivalently, on the i-th direction of the Brownian motion W. Notice also that whenever U is adapted, (∂u_k^i/∂x_{k,i})(Δ^n) = 0 for any i and k, so that

δ(U) = Σ_{i=1}^d Σ_{k=0}^{2^n−1} u_k^i(Δ^n) Δ_k^{n,i} = Σ_{i=1}^d ∫₀¹ U_s^i dW_s^i,

30 that is, the Skorohod integral coincides with the Ito one. Similarly to what was developed in Section 2.1.2, one has the same result as in Proposition 2.1.8, i.e.

Proposition 2.3.3.
(i) [Duality] For any F ∈ S and U ∈ P^d, E⟨DF, U⟩ = E[F δ(U)].
(ii) [Chain rule] Let F = (F¹, ..., F^m) where F^i ∈ S, i = 1, ..., m, and Φ ∈ C_b¹(ℝ^m). Then Φ(F) ∈ S and

D^iΦ(F) = Σ_{l=1}^m ∂_{x_l}Φ(F) D^iF^l, i = 1, ..., d.

(iii) [Skorohod integral of a special product] For U ∈ P^d and F ∈ S,

δ(FU) = F δ(U) − ⟨DF, U⟩.

The proofs are identical to the ones of Proposition 2.1.8. In particular, the duality relationship allows one to extend the operators to the infinite dimensional case. In fact, by developing the same arguments as in Section 2.2, one can immediately prove that the operators D and δ are closable. Then,

D : D^{1,2} ⊆ L²(Ω) → L²(Ω; H_d) and δ : Dom δ ⊆ L²(Ω; H_d) → L²(Ω).

All properties in Proposition 2.3.3 can be extended and read as follows.

Proposition 2.3.4.
(i) [Duality] For any F ∈ D^{1,2} and U ∈ Dom δ, E⟨DF, U⟩ = E[F δ(U)].
(ii) [Chain rule] Let F = (F¹, ..., F^m) where F^i ∈ D^{1,2}, i = 1, ..., m, and Φ ∈ C_b¹(ℝ^m). Then Φ(F) ∈ D^{1,2} and

D^iΦ(F) = Σ_{l=1}^m ∂_{x_l}Φ(F) D^iF^l, i = 1, ..., d.

(iii) [Skorohod integral of a special product] For U ∈ Dom δ and F ∈ D^{1,2} such that FU ∈ Dom δ,

δ(FU) = F δ(U) − ⟨DF, U⟩.

Again, the proof follows by density arguments similar to the ones developed in Proposition 2.2.6. Concerning the examples discussed in Section 2.2.4, let us see what happens in the multidimensional case (the proofs are similar, so we omit them).

Example 2.3.5. [Brownian motion - see Example 2.2.9] Take F = W_t^i, t ∈ [0,1]. Then F ∈ Dom D and D_s^jW_t^i = 1_{i=j} 1_{s≤t}.

31 Example 2.3.6. [Ito integral of square integrable functions - see Example 2.2.10] Let ϕ ∈ L²[0,1] and set W^j(ϕ) := ∫₀¹ ϕ(r) dW_r^j. Then, W^j(ϕ) ∈ D^{1,2} and

D_s^iW^j(ϕ) = ϕ(s) if i = j, and D_s^iW^j(ϕ) = 0 otherwise.

Example 2.3.7. [See Example 2.2.11] For ϕ_l^j ∈ L²(0,1), l = 1, ..., m and j = 1, ..., d, and for Φ ∈ C_p¹(ℝ^m), set

F = Φ( Σ_{j=1}^d ∫₀¹ ϕ_1^j(s) dW_s^j, ..., Σ_{j=1}^d ∫₀¹ ϕ_m^j(s) dW_s^j ).

Then F ∈ D^{1,2} and

D_s^iF = Σ_{k=1}^m ∂_{x_k}Φ( Σ_{j=1}^d ∫₀¹ ϕ_1^j(s) dW_s^j, ..., Σ_{j=1}^d ∫₀¹ ϕ_m^j(s) dW_s^j ) ϕ_k^i(s).

Example 2.3.8. [Ito integrals - see Example 2.2.14] Let U denote an adapted process such that E[∫₀¹ U_r² dr] < ∞. Set

I₀(U) = ∫₀¹ U_r dr and, for i = 1, ..., d, I_i(U) = ∫₀¹ U_r dW_r^i.

We assume that for each fixed r ∈ [0,1], U_r ∈ D^{1,2} and

(i) sup_{r≤1} ‖U_r‖_{1,2} < ∞;

(ii) setting τ_n(r) = [2^n r]/2^n and U_r^n = U_{τ_n(r)}, then

E[ ∫₀¹ ( |U_r − U_{τ_n(r)}|² + Σ_{j=1}^d ∫₀¹ |D_s^jU_r − D_s^jU_{τ_n(r)}|² ds ) dr ] → 0 as n → ∞.

Then, I_i(U) ∈ D^{1,2} for any i = 0, 1, ..., d and one has:

D_s^jI₀(U) = D_s^j ∫₀¹ U_r dr = ∫_s¹ D_s^jU_r dr    (2.8)

and, as i = 1, ..., d,

D_s^jI_i(U) = D_s^j ∫₀¹ U_r dW_r^i = U_s + ∫_s¹ D_s^jU_r dW_r^i if i = j,
D_s^jI_i(U) = D_s^j ∫₀¹ U_r dW_r^i = ∫_s¹ D_s^jU_r dW_r^i if i ≠ j.    (2.9)

As for the Ornstein-Uhlenbeck operator L, on the class of simple functionals S one has

L : S → S, LF = −δ(DF) = −Σ_{i=1}^d δ^i(D^iF),

so that

E[F LG] = −E⟨DF, DG⟩ = E[LF G].
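To close, the integration by parts formula of this chapter admits a one-line sanity check in the simplest scalar case (a sketch of our own, not part of the notes). Take F = W_1 and G = 1: then σ_F = ⟨DF, DF⟩ = 1, γ_F = 1 and the weight is H(F, 1) = δ(γ_F DF) = δ(1) = W_1, so the formula E[ϕ′(F) G] = E[ϕ(F) H(F, G)] reduces to the classical Gaussian identity E[ϕ′(W_1)] = E[ϕ(W_1) W_1].

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal(1_000_000)  # F = W_1 ~ N(0, 1)

# For F = W_1 and G = 1: sigma_F = <DF, DF> = 1, gamma_F = 1 and
# H(F, 1) = delta(gamma_F * DF) = delta(1) = W_1, so integration by parts
# reduces to the Gaussian identity E[phi'(W_1)] = E[phi(W_1) W_1].
# Check it with phi(x) = x^3, where both sides equal E[3 W_1^2] = E[W_1^4] = 3.
lhs = np.mean(3.0 * W1**2)    # E[phi'(W_1)]
rhs = np.mean(W1**3 * W1)     # E[phi(W_1) W_1]
print(abs(lhs - rhs) < 0.05)  # True
```

This identity, with nontrivial F, G and weights H_i(F, G), is exactly what the finance applications of the next chapter exploit to compute sensitivities without differentiating the payoff.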


More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

An Introduction to Malliavin Calculus. Denis Bell University of North Florida

An Introduction to Malliavin Calculus. Denis Bell University of North Florida An Introduction to Malliavin Calculus Denis Bell University of North Florida Motivation - the hypoellipticity problem Definition. A differential operator G is hypoelliptic if, whenever the equation Gu

More information

Fourier Series. 1. Review of Linear Algebra

Fourier Series. 1. Review of Linear Algebra Fourier Series In this section we give a short introduction to Fourier Analysis. If you are interested in Fourier analysis and would like to know more detail, I highly recommend the following book: Fourier

More information

7 Convergence in R d and in Metric Spaces

7 Convergence in R d and in Metric Spaces STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

NEW FUNCTIONAL INEQUALITIES

NEW FUNCTIONAL INEQUALITIES 1 / 29 NEW FUNCTIONAL INEQUALITIES VIA STEIN S METHOD Giovanni Peccati (Luxembourg University) IMA, Minneapolis: April 28, 2015 2 / 29 INTRODUCTION Based on two joint works: (1) Nourdin, Peccati and Swan

More information

Approximation of BSDEs using least-squares regression and Malliavin weights

Approximation of BSDEs using least-squares regression and Malliavin weights Approximation of BSDEs using least-squares regression and Malliavin weights Plamen Turkedjiev (turkedji@math.hu-berlin.de) 3rd July, 2012 Joint work with Prof. Emmanuel Gobet (E cole Polytechnique) Plamen

More information

ON THE STRUCTURE OF GAUSSIAN RANDOM VARIABLES

ON THE STRUCTURE OF GAUSSIAN RANDOM VARIABLES ON THE STRUCTURE OF GAUSSIAN RANDOM VARIABLES CIPRIAN A. TUDOR We study when a given Gaussian random variable on a given probability space Ω, F,P) is equal almost surely to β 1 where β is a Brownian motion

More information

Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension

Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension Available online at www.sciencedirect.com Stochastic Processes and their Applications 22 (202) 48 447 www.elsevier.com/locate/spa Gaussian estimates for the density of the non-linear stochastic heat equation

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Probability approximation by Clark-Ocone covariance representation

Probability approximation by Clark-Ocone covariance representation Probability approximation by Clark-Ocone covariance representation Nicolas Privault Giovanni Luca Torrisi October 19, 13 Abstract Based on the Stein method and a general integration by parts framework

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces

An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces An Infinitesimal Approach to Stochastic Analysis on Abstract Wiener Spaces Dissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften an der Fakultät für Mathematik, Informatik

More information

Densities for the Navier Stokes equations with noise

Densities for the Navier Stokes equations with noise Densities for the Navier Stokes equations with noise Marco Romito Università di Pisa Universitat de Barcelona March 25, 2015 Summary 1 Introduction & motivations 2 Malliavin calculus 3 Besov bounds 4 Other

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Gaussian Random Fields

Gaussian Random Fields Gaussian Random Fields Mini-Course by Prof. Voijkan Jaksic Vincent Larochelle, Alexandre Tomberg May 9, 009 Review Defnition.. Let, F, P ) be a probability space. Random variables {X,..., X n } are called

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

The Stein and Chen-Stein Methods for Functionals of Non-Symmetric Bernoulli Processes

The Stein and Chen-Stein Methods for Functionals of Non-Symmetric Bernoulli Processes ALEA, Lat. Am. J. Probab. Math. Stat. 12 (1), 309 356 (2015) The Stein Chen-Stein Methods for Functionals of Non-Symmetric Bernoulli Processes Nicolas Privault Giovanni Luca Torrisi Division of Mathematical

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Sobolev Spaces. Chapter Hölder spaces

Sobolev Spaces. Chapter Hölder spaces Chapter 2 Sobolev Spaces Sobolev spaces turn out often to be the proper setting in which to apply ideas of functional analysis to get information concerning partial differential equations. Here, we collect

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week 2 November 6 November Deadline to hand in the homeworks: your exercise class on week 9 November 13 November Exercises (1) Let X be the following space of piecewise

More information

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Group 4: Bertan Yilmaz, Richard Oti-Aboagye and Di Liu May, 15 Chapter 1 Proving Dynkin s formula

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Asymptotic statistics using the Functional Delta Method

Asymptotic statistics using the Functional Delta Method Quantiles, Order Statistics and L-Statsitics TU Kaiserslautern 15. Februar 2015 Motivation Functional The delta method introduced in chapter 3 is an useful technique to turn the weak convergence of random

More information

-variation of the divergence integral w.r.t. fbm with Hurst parameter H < 1 2

-variation of the divergence integral w.r.t. fbm with Hurst parameter H < 1 2 /4 On the -variation of the divergence integral w.r.t. fbm with urst parameter < 2 EL ASSAN ESSAKY joint work with : David Nualart Cadi Ayyad University Poly-disciplinary Faculty, Safi Colloque Franco-Maghrébin

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

Mean-field SDE driven by a fractional BM. A related stochastic control problem

Mean-field SDE driven by a fractional BM. A related stochastic control problem Mean-field SDE driven by a fractional BM. A related stochastic control problem Rainer Buckdahn, Université de Bretagne Occidentale, Brest Durham Symposium on Stochastic Analysis, July 1th to July 2th,

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

Quantitative stable limit theorems on the Wiener space

Quantitative stable limit theorems on the Wiener space Quantitative stable limit theorems on the Wiener space by Ivan Nourdin, David Nualart and Giovanni Peccati Université de Lorraine, Kansas University and Université du Luxembourg Abstract: We use Malliavin

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

i=1 β i,i.e. = β 1 x β x β 1 1 xβ d

i=1 β i,i.e. = β 1 x β x β 1 1 xβ d 66 2. Every family of seminorms on a vector space containing a norm induces ahausdorff locally convex topology. 3. Given an open subset Ω of R d with the euclidean topology, the space C(Ω) of real valued

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES D. NUALART Department of Mathematics, University of Kansas Lawrence, KS 6645, USA E-mail: nualart@math.ku.edu S. ORTIZ-LATORRE Departament de Probabilitat,

More information

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0)

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0) Mollifiers and Smooth Functions We say a function f from C is C (or simply smooth) if all its derivatives to every order exist at every point of. For f : C, we say f is C if all partial derivatives to

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,

More information