
Malliavin Calculus: Analysis on Gaussian spaces. Josef Teichmann, ETH Zürich. Oxford, 2011.

Isonormal Gaussian process A Gaussian space is a (complete) probability space together with a Hilbert space of centered real-valued Gaussian random variables defined on it. If the random variables of the given Hilbert space generate the underlying σ-algebra, we call the Gaussian space irreducible. We speak about Gaussian spaces by means of a coordinate space. Let $(\Omega, \mathcal F, P)$ be a complete probability space, $H$ a Hilbert space, and $W : H \to L^2[(\Omega, \mathcal F, P); \mathbb R]$ a linear isometry. Then $W$ is called an isonormal Gaussian process if $W(h)$ is a centered Gaussian random variable for all $h \in H$. In particular $(\Omega, \mathcal F, P)$ together with $W(H)$ is Gaussian.

Example Given a d-dimensional Brownian motion $(W_t)_{t \ge 0}$ with its natural filtration $(\mathcal F_t)_{t \ge 0}$, then $W(h) := \sum_{k=1}^d \int_0^\infty h^k(s)\,dW^k_s$ is an isonormal Gaussian process for $h \in H := L^2(\mathbb R_{\ge 0}; \mathbb R^d)$.
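
As a quick numerical illustration (added here, not part of the lecture), the following minimal Python sketch discretizes a one-dimensional Brownian motion and approximates $W(h) = \int_0^\infty h(s)\,dW_s$ by a Riemann sum; for the assumed test function $h(s) = \cos(2\pi s)$ on $[0,1]$ it checks by Monte Carlo that $W(h)$ is centered with variance $\|h\|^2_{L^2}$.

# Monte Carlo sketch (assumptions: d = 1, horizon T = 1, h(s) = cos(2*pi*s)).
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 10_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps, endpoint=False)
h = np.cos(2 * np.pi * t)                                    # integrand h in L^2([0, T])

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))   # Brownian increments
W_h = (h * dW).sum(axis=1)                                   # Riemann-sum approximation of int h dW

print("mean (should be ~ 0)     :", W_h.mean())
print("variance vs ||h||^2 = 0.5:", W_h.var(), (h ** 2).sum() * dt)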

Notation In the sequel we shall apply the following classes of functions on $\mathbb R^n$: $C_0^\infty(\mathbb R^n) \subset C_b^\infty(\mathbb R^n) \subset C_p^\infty(\mathbb R^n)$, which denote the smooth functions with compact support, with bounded derivatives of all orders, and with derivatives of all orders of polynomial growth, respectively.

Smooth random variables Let $W$ be an isonormal Gaussian process. We introduce random variables of the form $F := f(W(h_1), \ldots, W(h_n))$ for $h_i \in H$ (mind the probabilistic notation, which would be bad style in analysis). If $f$ belongs to one of the above classes of functions, the associated random variables are denoted by $\mathcal S_0 \subset \mathcal S_b \subset \mathcal S_p$, and we speak of smooth random variables. The polynomials of elements $W(h)$ are denoted by $\mathcal P$.

Generation property The algebra $\mathcal P$ is dense in $L^2(\Omega, \mathcal F_H, P)$, where $\mathcal F_H$ denotes the completed σ-algebra generated by the random variables $W(h)$ for $h \in H$.

Proof Notice that it is sufficient to prove that every random variable $F$ which is orthogonal to all $\exp(W(h))$ for $h \in H$ vanishes. Choose now an ONB $(e_i)_{i \ge 1}$; then for each $n$ the entire function $(\lambda_1, \ldots, \lambda_n) \mapsto E(F \exp(\sum_{i=1}^n \lambda_i W(e_i)))$ vanishes, which in turn means that $E(F \mid \sigma(W(e_1), \ldots, W(e_n))) = 0$ by uniqueness of the Fourier transform; letting $n \to \infty$ we conclude $F = 0$. Therefore polynomials of Gaussians qualify as smooth test functions, since they lie in all $L^p$ for $1 \le p < \infty$ and are dense.

The representation of a smooth random variable is unique in the following sense: let $F = f(W(h_1), \ldots, W(h_n)) = g(W(g_1), \ldots, W(g_m))$, and equip the linear span of $h_1, \ldots, h_n, g_1, \ldots, g_m$ with an orthonormal basis $(e_i)_{1 \le i \le k}$ and representations $h_i = \sum_{l=1}^k a_{il} e_l$ and $g_j = \sum_{l=1}^k b_{jl} e_l$. Then the functions $f \circ A$ and $g \circ B$ coincide everywhere, where $A = (a_{il})$ and $B = (b_{jl})$ denote the associated linear maps.

Notation Notice the following natural isomorphisms: $L^2[(\Omega, \mathcal F, P); H] \cong L^2(\Omega, \mathcal F, P) \otimes H$, $(\omega \mapsto F(\omega)h) \leftrightarrow F \otimes h$. If we are additionally given a concrete representation $H = L^2[(T, \mathcal B, \mu); G]$, then $L^2[(\Omega \times T, \mathcal F \otimes \mathcal B, P \otimes \mu); G] \cong L^2(\Omega, \mathcal F, P) \otimes H$, $((\omega, t) \mapsto F(\omega)h(t)) \leftrightarrow F \otimes h$.

The Malliavin Derivative For $F \in \mathcal S_p$ we denote the Malliavin derivative by $DF \in L^2[(\Omega, \mathcal F, P); H]$, defined via
\[
DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(W(h_1), \ldots, W(h_n))\, h_i
\]
for $F = f(W(h_1), \ldots, W(h_n))$. The definition does not depend on the particular representation of the smooth random variable $F = f(W(h_1), \ldots, W(h_n))$. If we are given a concrete representation $H = L^2(T, \mathcal B, \mu)$, then we can identify $L^2(\Omega, \mathcal F, P) \otimes H = L^2(\Omega \times T, \mathcal F \otimes \mathcal B, P \otimes \mu)$ and we obtain a measurable, not necessarily adapted process $(D_t F)_{t \in T}$ as Malliavin derivative.
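
A worked example (added for orientation, not on the original slide): taking $f(x) = x^2$ and a single element $h \in H$,
\[
F = W(h)^2, \qquad DF = 2\,W(h)\,h, \qquad D_t F = 2\,W(h)\,h(t) \quad \text{for } H = L^2(T, \mathcal B, \mu).
\]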

Integration by parts 1 Let $F$ be a smooth random variable and $h \in H$; then $E(\langle DF, h \rangle_H) = E(F\,W(h))$.

Integration by parts 2 Let $F, G$ be smooth random variables; then for $h \in H$, $E(G\,\langle DF, h \rangle_H) + E(F\,\langle DG, h \rangle_H) = E(F G\, W(h))$.
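
A hedged numerical check (added, not part of the lecture): in the Brownian-motion example with the assumed choices $F = \sin(W(h))$ and $\|h\|_H = 1$, one has $DF = \cos(W(h))\,h$, so the first integration-by-parts formula reduces to $E(\cos(W(h))) = E(\sin(W(h))\,W(h))$; the Python snippet below verifies this by simulation.

# Monte Carlo check of E(<DF, h>) = E(F W(h)) for F = sin(W(h)), assuming ||h||_H = 1.
import numpy as np

rng = np.random.default_rng(1)
norm_h_sq = 1.0                                            # ||h||_H^2
Wh = rng.normal(0.0, np.sqrt(norm_h_sq), size=1_000_000)   # W(h) ~ N(0, ||h||_H^2)

lhs = (np.cos(Wh) * norm_h_sq).mean()                      # E(<DF, h>) = E(cos(W(h))) ||h||^2
rhs = (np.sin(Wh) * Wh).mean()                             # E(F W(h))
print(lhs, rhs)                                            # both close to exp(-1/2) ~ 0.6065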

Proof The first equation can be normalized such that $\|h\|_H = 1$. Additionally, by a variable transformation there are orthonormal elements $e_i$ such that $F = f(W(e_1), \ldots, W(e_n))$ with $f \in C_p^\infty(\mathbb R^n)$ and $h = e_1$. Then
\[
E(\langle DF, h \rangle) = \int_{\mathbb R^n} \frac{\partial f}{\partial x_1}(x)\, \frac{1}{(2\pi)^{n/2}} \exp\Big(-\frac{\|x\|^2}{2}\Big)\,dx \overset{\text{i.b.p.}}{=} \int_{\mathbb R^n} f(x)\, x_1\, \frac{1}{(2\pi)^{n/2}} \exp\Big(-\frac{\|x\|^2}{2}\Big)\,dx = E(F\,W(e_1)) = E(F\,W(h)).
\]

Proof The second integration by parts formula follows directly from the first together with the Leibniz rule $D(FG) = F\,DG + G\,DF$ for $F, G \in \mathcal S_p$.

The Malliavin derivative is closable We have already defined the derivative operator $D : \mathcal S_p \subset L^q[(\Omega, \mathcal F, P)] \to L^q[(\Omega, \mathcal F, P); H]$ for $q \ge 1$. This linear operator is closable by integration by parts: given a sequence of smooth functionals $F_n \to 0$ in $L^q$ and $DF_n \to G$ in $L^q[(\Omega, \mathcal F, P); H]$ as $n \to \infty$, then
\[
E(\langle G, h \rangle_H\, F) = \lim_n E(\langle DF_n, h \rangle\, F) = -\lim_n E(F_n\, \langle DF, h \rangle) + \lim_n E(F_n\, F\, W(h)) = 0
\]
for $F \in \mathcal S_p$ and $h \in H$. Notice that $\mathcal S_p \subset \bigcap_{q \ge 1} L^q$. So $G = 0$ and therefore $D$ is closable. We denote the closure on each space by $D^{1,q}$, respectively.

Operator norms Given $q \ge 1$, we denote by
\[
\|F\|_{1,q} := \big(E(|F|^q) + E(\|DF\|_H^q)\big)^{1/q}
\]
the norm of any $F \in \mathcal S_p$. By closability we know that the closure of this space is a Banach space, denoted by $D^{1,q}$, and a Hilbert space for $q = 2$. We have the continuous inclusion $D^{1,q} \subset L^q[(\Omega, \mathcal F, P)]$, which has as image the maximal domain of definition of $D^{1,q}$ in $L^q$, where we shall write by slight abuse of notation again $D$ for the Malliavin derivative.

Higher Derivatives By tensoring the whole procedure we can define the Malliavin derivative for smooth functionals with values in an additionally given Hilbert space $V$, $\mathcal S_p \otimes V \subset L^p[(\Omega, \mathcal F, P)] \otimes V$, where we take the algebraic tensor products. We define the Malliavin derivative on this space by $D \otimes \mathrm{id}$, and proceed as before showing that the operator is closable. Consequently we can define higher derivatives via iteration $D^k F = D(D^{k-1} F)$ for smooth functionals $F \in L^q[(\Omega, \mathcal F, P)] \otimes V$. Closing the spaces we get Malliavin derivatives $D^k$ from elements of $L^q[(\Omega, \mathcal F, P); V]$ to $L^q[(\Omega, \mathcal F, P); V \otimes H^{\otimes k}]$ by induction.
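
For orientation (worked example added, not on the slide): for a smooth random variable $F = f(W(h_1), \ldots, W(h_n))$ the iteration gives
\[
D^2 F = \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j}(W(h_1), \ldots, W(h_n))\, h_i \otimes h_j \in L^2[(\Omega, \mathcal F, P); H^{\otimes 2}],
\]
so for instance $F = W(h)^2$ yields $D^2 F = 2\, h \otimes h$.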

Operator norms We define the norms
\[
\|F\|_{k,q} := \Big(E(\|F\|^q) + \sum_{j=1}^k E\big(\|D^j F\|^q_{V \otimes H^{\otimes j}}\big)\Big)^{1/q}
\]
for $k \ge 1$ and $q \ge 1$. The respective closed spaces $D^{k,q}(V)$ are Banach spaces (Hilbert spaces for $q = 2$), the maximal domains of $D^k$ in $L^q(\Omega, \mathcal F, P; V)$. The Fréchet space $\bigcap_{p \ge 1} \bigcap_{k \ge 1} D^{k,p}(V)$ is denoted by $D^\infty(V)$.

Monotonicity We see immediately the monotonicity $\|F\|_{k,p} \le \|F\|_{j,q}$ for $p \le q$ and $k \le j$, by norm inequalities of the type $\|f\|_p \le \|f\|_q$ for $1 \le p \le q$ and $f \in \bigcap_{p \ge 1} L^p[\Omega, \mathcal F, P]$.

Chain rule Let $\varphi \in C^1_b(\mathbb R^n)$ be given, i.e. with bounded partial derivatives, and fix $p \ge 1$. If $F \in D^{1,p}(\mathbb R^n)$, then $\varphi(F) \in D^{1,p}$ and
\[
D(\varphi(F)) = \sum_{i=1}^n \frac{\partial \varphi}{\partial x_i}(F)\, DF^i.
\]
Hence $D^\infty$ is an algebra. A similar result holds true if $\varphi$ is only globally Lipschitz; however, we cannot identify the derivative explicitly anymore.
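
A small worked instance (added for illustration): with $n = 2$ and $\varphi(x_1, x_2) = \sin(x_1 + x_2)$, which has bounded partial derivatives, the chain rule gives for $F = (F^1, F^2) \in D^{1,p}(\mathbb R^2)$
\[
D\big(\sin(F^1 + F^2)\big) = \cos(F^1 + F^2)\,\big(DF^1 + DF^2\big).
\]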

Proof The proof is done by approximating $F^i$ by smooth random variables $F^i_n$ and $\varphi$ by $\varphi * \psi_\epsilon$, where $\psi_\epsilon$ is a Dirac sequence of smooth functions. For the approximating terms the formula is satisfied; then we obtain
\[
\Big\| \sum_{i=1}^n \frac{\partial \varphi}{\partial x_i}(F)\, DF^i - D\big((\varphi * \psi_\epsilon) \circ F_n\big) \Big\|_p \to 0
\]
as $\epsilon \to 0$ and $n \to \infty$, so by closedness we obtain the result, since $(\varphi * \psi_\epsilon) \circ F_n \to \varphi \circ F$ in $L^p$ as $\epsilon \to 0$ and $n \to \infty$.

Malliavin derivative as directional derivative Consider the standard example $h \mapsto \sum_{k=1}^d \int_0^\infty h^k(s)\,dW^k_s$ with Hilbert space $H = L^2(\mathbb R_{\ge 0}; \mathbb R^d)$. Assume $\Omega = C(\mathbb R_{\ge 0}; \mathbb R^d)$; then we can define the Cameron-Martin directions $h \mapsto (t \mapsto \int_0^t h_s\,ds)$, which embeds $H \hookrightarrow C(\mathbb R_{\ge 0}; \mathbb R^d)$. If we consider a smooth random variable $F = f(W_t)$, then
\[
\langle DF, h \rangle = f'(W_t) \int_0^\infty 1_{[0,t]}(s)\,h(s)\,ds = \frac{d}{d\epsilon}\Big|_{\epsilon=0} f\Big(W_t + \epsilon \int_0^t h(s)\,ds\Big),
\]
so the Malliavin derivative evaluated in direction $h$ appears as a directional derivative in a Cameron-Martin direction; these are the only directions in which directional derivatives make sense for $P$-almost surely defined random variables.
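
A hedged pathwise check (added; assumptions: $d = 1$, $F = \sin(W_1)$, $h = 1_{[0,1]}$, so $\langle DF, h \rangle = \cos(W_1)$): the Python snippet below compares the finite-difference directional derivative along the Cameron-Martin shift $t \mapsto \epsilon \int_0^t h(s)\,ds$ with $\langle DF, h \rangle$ on a single simulated path.

# Pathwise check: (f(W_1 + eps * int_0^1 h ds) - f(W_1)) / eps  vs  <DF, h> = f'(W_1).
# Assumptions: d = 1, f = sin, h = 1_[0,1] (the Cameron-Martin shift at time 1 equals 1).
import numpy as np

rng = np.random.default_rng(2)
n_steps = 1000
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=n_steps)
W_1 = dW.sum()                                # Brownian motion at time t = 1

eps = 1e-4
directional = (np.sin(W_1 + eps * 1.0) - np.sin(W_1)) / eps   # finite difference in direction h
print(directional, "vs", np.cos(W_1))         # agreement up to O(eps)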

Malliavin derivative as directional derivative Taking the previous consideration seriously, we can replace $h$ by a predictable strategy $a$ such that the stochastic exponential of $\sum_{k=1}^d \int_0^t a^k_s\,dW^k_s$ is a closed martingale; then we obtain
\[
E(\langle DF, a \rangle) = E\Big(\frac{d}{d\epsilon}\Big|_{\epsilon=0} F\Big(\cdot + \epsilon \int_0^{\cdot} a_s\,ds\Big)\Big) = \frac{d}{d\epsilon}\Big|_{\epsilon=0} E\Big(F\Big(\cdot + \epsilon \int_0^{\cdot} a_s\,ds\Big)\Big) = \frac{d}{d\epsilon}\Big|_{\epsilon=0} E\Big(F(\cdot)\, \exp\Big(\epsilon \sum_{k=1}^d \int_0^\infty a^k_s\,dW^k_s - \frac{\epsilon^2}{2} \int_0^\infty \|a_s\|^2\,ds\Big)\Big) = E\Big(F(\cdot)\, \sum_{k=1}^d \int_0^\infty a^k_s\,dW^k_s\Big)
\]
for smooth bounded random variables $F$, where the third equality is the Girsanov change of measure.

The adjoint The adjoint operator $\delta : \mathrm{dom}_{1,2}(\delta) \subset L^p(\Omega) \otimes H \to L^2(\Omega)$ is a closed, densely defined operator. We concentrate here on the case $p = 2$. By definition $u \in \mathrm{dom}_{1,2}(\delta)$ if and only if $F \mapsto E(\langle DF, u \rangle)$ for $F \in D^{1,2}$ is a bounded linear functional on $L^2(\Omega)$. If $u \in \mathrm{dom}_{1,2}(\delta)$, we have the following fundamental integration by parts formula: $E(\langle DF, u \rangle) = E(F\,\delta(u))$ for $F \in D^{1,2}$. $\delta$ is called the Skorohod integral or divergence operator or simply adjoint operator.

We obtain immediately $H \subset \mathrm{dom}_{1,2}(\delta)$, the deterministic strategies, with $\delta(1 \otimes h) = \delta(h) = W(h)$. A smooth elementary process is given by $u = \sum_{j=1}^n F_j\,h_j$ with $F_j \in \mathcal S_p$ and $h_j \in H$. We shall denote the set of such processes by the (algebraic) tensor product $\mathcal S_p \otimes H$. By integration by parts we can conclude that $\mathcal S_p \otimes H \subset \mathrm{dom}_{1,2}(\delta)$ and
\[
\delta(u) = \sum_{j=1}^n F_j\,W(h_j) - \sum_{j=1}^n \langle DF_j, h_j \rangle_H,
\]

since for all $G \in \mathcal S_p$
\[
E(\langle u, DG \rangle_H) = \sum_{j=1}^n E(F_j\,\langle h_j, DG \rangle_H) = \sum_{j=1}^n \Big( E(\langle h_j, D(F_j G) \rangle_H) - E(G\,\langle h_j, DF_j \rangle_H) \Big) = \sum_{j=1}^n \Big( E(F_j\,W(h_j)\,G) - E(G\,\langle h_j, DF_j \rangle_H) \Big).
\]
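
A worked example (added for illustration): for $u = W(h)\,h$ with $h \in H$, which is in general not adapted, the formula gives
\[
\delta\big(W(h)\,h\big) = W(h)\,W(h) - \langle D(W(h)), h \rangle_H = W(h)^2 - \|h\|_H^2.
\]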

Skorohod meets Ito Given an isonormal Gaussian process $W$, define a sub-σ-algebra $\mathcal F_G \subset \mathcal F_H$ by means of a closed subspace $G \subset H$. If $F \in D^{1,2}$ is $\mathcal F_G$-measurable, then $\langle h, DF \rangle_H = 0$ $P$-almost surely for all $h \perp G$.

Skorohod meets Ito The almost sure identity holds for smooth random variables $F = f(W(h_1), \ldots, W(h_n))$ with $h_i \in G$, since then $DF = \sum_i \frac{\partial f}{\partial x_i}(W(h_1), \ldots, W(h_n))\, h_i$ lies in $G$; but every such $F \in D^{1,2}$ can be approximated by smooth random variables in $L^2$ such that also the derivatives are approximated (closedness!), hence the result follows.

Skorohod meets Ito Given a d-dimensional Brownian motion $(W_t)_{t \ge 0}$ with its natural filtration $(\mathcal F_t)_{t \ge 0}$, then $W(h) := \sum_{k=1}^d \int_0^\infty h^k(s)\,dW^k_s$ is an isonormal Gaussian process for $h \in H := L^2(\mathbb R_{\ge 0}; \mathbb R^d)$. Define $H_t \subset H$ as those functions with support in $[0, t]$ for $t \ge 0$ and denote $\mathcal F_t := \mathcal F_{H_t}$. Hence for $\mathcal F_t$-measurable $F \in D^{1,2}$ we obtain that $1_{[0,t]}\,DF = DF$ almost surely.
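
For instance (example added), for $d = 1$ and the $\mathcal F_t$-measurable random variable $F = f(W_t) = f(W(1_{[0,t]}))$ one gets
\[
D_s F = f'(W_t)\,1_{[0,t]}(s),
\]
which indeed vanishes for $s > t$, so $1_{[0,t]}\,DF = DF$.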

Skorohod meets Ito Consequently, for a simple, predictable strategy $u(s) = \sum_{j=1}^n F_j\,h_j(s)$ with $h_j = 1_{(t_j, t_{j+1}]}\,e_k$, for $0 = t_0 < t_1 < \cdots < t_{n+1}$ and $F_j \in L^2[(\Omega, \mathcal F_{t_j}, P)]$ for $j = 1, \ldots, n$, and $e_k \in \mathbb R^d$ a canonical basis vector, we obtain
\[
\delta(u) = \sum_{j=1}^n F_j\,W(h_j) - \sum_{j=1}^n \langle DF_j, h_j \rangle_H = \sum_{j=1}^n F_j\,(W^k_{t_{j+1}} - W^k_{t_j}),
\]
since each $\langle DF_j, h_j \rangle_H$ vanishes by the previous slide.

Skorohod meets Ito Given a predictable strategy $u \in L^2_{\mathrm{pred}}(\Omega \times \mathbb R_{\ge 0}; \mathbb R^d)$, then $\delta(u) = \sum_{k=1}^d \int_0^\infty u^k(s)\,dW^k_s$.

Skorohod meets Ito The Skorohod integral is a closed operator; the Ito integral is continuous on the space of predictable strategies. Both operators coincide on the dense subspace of simple predictable strategies, hence by the fact that $\delta$ is closed we obtain that they coincide on $L^2_{\mathrm{pred}}(\Omega \times \mathbb R_{\ge 0}; \mathbb R^d)$.

The Clark-Ocone formula Let $\mathcal F_H = \mathcal F$ and let $F \in D^{1,2}$; then
\[
F = E(F) + \sum_{i=1}^d \int_0^\infty E(D^i_t F \mid \mathcal F_t)\,dW^i_t.
\]

Proof By martingale representation we know that any $G \in L^2$ has a representation $G = E(G) + \sum_{i=1}^d \int_0^\infty \varphi^i_t\,dW^i_t$; hence
\[
E(FG) = E(F)E(G) + E\Big(\sum_{i=1}^d \int_0^\infty D^i_t F\,\varphi^i_t\,dt\Big) = E(F)E(G) + E\Big(\sum_{i=1}^d \int_0^\infty E(D^i_t F \mid \mathcal F_t)\,\varphi^i_t\,dt\Big),
\]
where the last equality uses that $\varphi$ is predictable. By the Ito isometry the right-hand side equals $E\big(\big(E(F) + \sum_{i=1}^d \int_0^\infty E(D^i_t F \mid \mathcal F_t)\,dW^i_t\big)\,G\big)$, so both sides of the claimed formula have the same inner product with every $G \in L^2$, which yields the result.
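
A standard worked example (added for illustration; $d = 1$, fixed horizon $T$): for $F = W_T^2 = W(1_{[0,T]})^2$ one has $E(F) = T$ and $D_t F = 2\,W_T\,1_{[0,T]}(t)$, hence $E(D_t F \mid \mathcal F_t) = 2\,W_t$ for $t \le T$, and the Clark-Ocone formula recovers the Ito-formula identity
\[
W_T^2 = T + \int_0^T 2\,W_t\,dW_t.
\]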