Introduction to Malliavin Calculus and Applications to Finance, Part II. Giulia Di Nunno. Finance and Insurance, Stochastic Analysis and Practical Methods. Spring School, Marie Curie ITN, Jena 2009.
PART II
1. Short on Lévy processes
2. Malliavin calculus for Poisson random measures: iterated Itô integrals, Wiener-Itô chaos expansion, Skorohod integral, Malliavin derivative, Clark-Ocone formulae
3. Incomplete markets: minimal variance hedging
4. Minimal variance hedging under partial information
References
1. Short on Lévy processes

Definition. A Lévy process is a stochastically continuous stochastic process $\eta(t)$, $t \in [0,T]$, with $\eta(0) = 0$, having stationary and independent increments.

Lévy-Khintchine formula. For every $t$, the random variable $\eta(t)$ is distributed according to the law given by
$$E\big[e^{iu\eta(t)}\big] = e^{t\Psi(u)}, \qquad \Psi(u) := i\alpha u - \tfrac{1}{2}\sigma^2 u^2 + \int_{|z|<1}\big(e^{iuz}-1-iuz\big)\,\nu(dz) + \int_{|z|\ge 1}\big(e^{iuz}-1\big)\,\nu(dz).$$
Here $\alpha \in \mathbb{R}$, $\sigma^2 \ge 0$, and $\nu = \nu(dz)$, $z \in \mathbb{R}_0 := \mathbb{R}\setminus\{0\}$, is a $\sigma$-finite measure on $\mathcal{B}(\mathbb{R}_0)$ satisfying $\int_{\mathbb{R}_0} \min(1, z^2)\,\nu(dz) < \infty$. The law of a Lévy process is characterized by the characteristic triplet $(\alpha, \sigma^2, \nu)$.
This process admits a modification with right-continuous, left-limited (càdlàg) trajectories. We assume to work with such a modification.

The Lévy-Itô decomposition theorem. A Lévy process $\eta(t)$, $t \in [0,T]$, admits the integral representation
$$\eta(t) = at + \sigma W(t) + \int_0^t\int_{|z|<1} z\,\tilde N(ds,dz) + \int_0^t\int_{|z|\ge 1} z\,N(ds,dz)$$
for some constants $a \in \mathbb{R}$, $\sigma \in \mathbb{R}$. Here $W(t)$, $t \in [0,T]$ ($W(0)=0$), is a standard Wiener process and $N(dt,dz)$, $(t,z) \in [0,T]\times\mathbb{R}_0$, is the Poisson random measure describing the jumps of the process, with $\tilde N(dt,dz) := N(dt,dz) - \nu(dz)dt$, where $E\big[N(dt,dz)\big] = \nu(dz)dt$.
For the applications we have in mind we will assume that $\int_{\mathbb{R}_0} z^2\,\nu(dz) < \infty$. This implies that the Lévy process takes values in $L^2(P)$. We will consider Lévy processes with zero expectation. Then the Lévy-Itô decomposition can be written as
$$(1)\qquad \eta(t) = \sigma W(t) + \int_0^t\int_{\mathbb{R}_0} z\,\tilde N(ds,dz).$$
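As a numerical illustration (not from the slides), one can simulate the representation (1) with a finite Lévy measure $\nu = \lambda\,\mu_z$ (compound Poisson jumps) and check the empirical characteristic function against the Lévy-Khintchine exponent with compensated jumps. All parameter values below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma = 1.0, 0.3
lam = 4.0                      # jump intensity: nu = lam * law of z (finite activity)
mu_z, sd_z = 0.5, 0.2          # jump sizes z ~ N(mu_z, sd_z^2), illustrative choice
n_paths = 200_000

# eta(T) = sigma*W(T) + sum_{i<=N} z_i - lam*E[z]*T, the zero-mean version (1)
N = rng.poisson(lam * T, n_paths)
jump_sums = np.array([rng.normal(mu_z, sd_z, k).sum() for k in N])
eta_T = sigma * np.sqrt(T) * rng.standard_normal(n_paths) + jump_sums - lam * mu_z * T

u = 1.3
# Levy-Khintchine exponent with compensated jumps:
# Psi(u) = -sigma^2 u^2 / 2 + lam * (phi_z(u) - 1 - i*u*E[z]),
# where phi_z is the characteristic function of N(mu_z, sd_z^2)
phi_z = np.exp(1j * u * mu_z - 0.5 * (sd_z * u) ** 2)
Psi = -0.5 * sigma**2 * u**2 + lam * (phi_z - 1 - 1j * u * mu_z)

empirical = np.mean(np.exp(1j * u * eta_T))
exact = np.exp(T * Psi)
assert abs(empirical - exact) < 0.02   # Monte Carlo error ~ n_paths^{-1/2}
```

The assertion compares the Monte Carlo estimate of $E[e^{iu\eta(T)}]$ with $e^{T\Psi(u)}$ at a single frequency $u$; the tolerance is a generous multiple of the Monte Carlo standard error.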
Itô-Lévy processes. Inspired by the representation (1), we will consider processes admitting an integral representation of the form
$$\eta(t) = \int_0^t \sigma(s)\,dW(s) + \int_0^t\int_{\mathbb{R}_0} \gamma(s,z)\,\tilde N(ds,dz), \qquad t \in [0,T],$$
where $\sigma$ and $\gamma$ are $\mathbb{F}$-adapted processes (predictable for the càdlàg modification) such that
$$E\Big[\int_0^T \sigma^2(s)\,ds + \int_0^T\int_{\mathbb{R}_0} \gamma^2(s,z)\,\nu(dz)ds\Big] < \infty.$$
A combination of Gaussian and pure jump Lévy noises. Denote by $(\Omega_W, \mathcal{F}_W, P_W)$ the probability space on which $W(t)$, $t \in [0,T]$, is a Wiener process, and by $(\Omega_{\tilde N}, \mathcal{F}_{\tilde N}, P_{\tilde N})$ the one on which $\tilde N(dt,dz) = N(dt,dz) - \nu(dz)dt$ is a compensated Poisson random measure. We set
$$\Omega = \Omega_W \times \Omega_{\tilde N}, \qquad \mathcal{F} = \mathcal{F}_W \otimes \mathcal{F}_{\tilde N}, \qquad P = P_W \otimes P_{\tilde N}.$$
Correspondingly, let $\mathbb{F}^W = \{\mathcal{F}_t^W,\ t \in [0,T]\}$ and $\mathbb{F}^{\tilde N} = \{\mathcal{F}_t^{\tilde N},\ t \in [0,T]\}$ be the $P$-augmented filtrations generated by the values of $W$ and $\tilde N$. Set $\mathcal{F}^W = \mathcal{F}_T^W$ and $\mathcal{F}^{\tilde N} = \mathcal{F}_T^{\tilde N}$.
Following this approach we study separately the Malliavin calculus for the Brownian motion and for the compensated Poisson random measure, and then merge the results on the space $(\Omega, \mathcal{F}, P)$ just defined. One could follow another approach and consider the calculus with respect to the whole process $\eta$ directly; see Di Nunno (2007). In the coming sections we concentrate on the calculus with respect to the Poisson type of noise only.
2. Malliavin calculus for Poisson random measures

Iterated Itô integrals. Define
$$G_n := \big\{(t_1,z_1,\dots,t_n,z_n):\ 0 \le t_1 \le \dots \le t_n \le T,\ z_i \in \mathbb{R}_0,\ i=1,\dots,n\big\}.$$
For any $g \in L^2(G_n)$, i.e.
$$\|g\|^2_{L^2(G_n)} := \int_{G_n} g^2(t_1,z_1,\dots,t_n,z_n)\,dt_1\nu(dz_1)\cdots dt_n\nu(dz_n) < \infty,$$
the $n$-fold iterated integral $J_n(g)$ is the random variable in $L^2(P)$ defined as
$$J_n(g) := \int_0^T\int_{\mathbb{R}_0}\cdots\int_0^{t_2}\int_{\mathbb{R}_0} g(t_1,z_1,\dots,t_n,z_n)\,\tilde N(dt_1,dz_1)\cdots\tilde N(dt_n,dz_n).$$
We set $J_0(g) := g$ for $g \in \mathbb{R}$. Directly from the properties of Itô integrals we have:
- $J_n(g) \in L^2(P)$ and, by the Itô isometry, $\|J_n(g)\|^2_{L^2(P)} = \|g\|^2_{L^2(G_n)}$;
- if $g \in L^2(G_m)$ and $f \in L^2(G_n)$ with $m < n$, then $E\big[J_m(g)J_n(f)\big] = 0$.
The symmetrization $\hat f$ of $f$ is defined by
$$\hat f(t_1,z_1,\dots,t_n,z_n) = \frac{1}{n!}\sum_{\sigma} f(t_{\sigma_1},z_{\sigma_1},\dots,t_{\sigma_n},z_{\sigma_n}),$$
the sum being taken over all permutations $\sigma = (\sigma_1,\dots,\sigma_n)$ of $\{1,\dots,n\}$. A function $f$ is symmetric if $\hat f = f$. We denote the space of square integrable symmetric functions by $\hat L^2((\lambda\times\nu)^n)$. Note that the symmetrization is over the $n$ pairs $(t_i,z_i)$ and not over the $2n$ variables. For any symmetric $f \in \hat L^2((\lambda\times\nu)^n)$ we define its $n$-fold iterated integral by
$$(2)\qquad I_n(f) := \int_{([0,T]\times\mathbb{R}_0)^n} f(t_1,z_1,\dots,t_n,z_n)\,\tilde N^{\otimes n}(dt,dz) = n!\,J_n(f).$$
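A small computational sketch (not from the slides) of the symmetrization over pairs: the toy function `f` below is a hypothetical asymmetric function of two pairs $(t_1,z_1), (t_2,z_2)$, and `symmetrize` averages over permutations of the pairs, never splitting a $(t_i, z_i)$ couple.

```python
from itertools import permutations

def symmetrize(f, n):
    """Symmetrize f over its n pair-arguments ((t1,z1),...,(tn,zn))."""
    perms = list(permutations(range(n)))
    def f_sym(*pairs):
        return sum(f(*[pairs[i] for i in sigma]) for sigma in perms) / len(perms)
    return f_sym

# asymmetric toy function of n = 2 pairs: f((t1,z1),(t2,z2)) = t1 * z2
f = lambda p1, p2: p1[0] * p2[1]
fs = symmetrize(f, 2)

a, b = (0.2, 1.0), (0.7, -0.5)
# hat f(a, b) = (f(a,b) + f(b,a)) / 2 = (0.2*(-0.5) + 0.7*1.0) / 2 = 0.3
assert abs(fs(a, b) - 0.3) < 1e-12
assert abs(fs(a, b) - fs(b, a)) < 1e-12   # symmetric in the pairs
```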
Wiener-Itô chaos expansion for Poisson random measures. Any $\mathcal{F}_T$-measurable random variable $F \in L^2(P)$ admits the representation
$$F = \sum_{n=0}^\infty I_n(f_n)$$
via a unique sequence of symmetric functions $f_n \in \hat L^2((\lambda\times\nu)^n)$, $n = 1, 2, \dots$ Here we set $I_0(f_0) := f_0$ for $f_0 \in \mathbb{R}$. Moreover, we have
$$\|F\|^2_{L^2(P)} = \sum_{n=0}^\infty n!\,\|f_n\|^2_{L^2((\lambda\times\nu)^n)}.$$
Skorohod integral. Let $X(t,z)$, $0 \le t \le T$, $z \in \mathbb{R}_0$, be a stochastic process with $\mathcal{F}_T$-measurable values such that
$$E\Big[\int_0^T\int_{\mathbb{R}_0} X^2(t,z)\,\nu(dz)dt\Big] < \infty.$$
Then for each $(t,z)$ the random variable $X(t,z)$ has an expansion of the form
$$X(t,z) = \sum_{n=0}^\infty I_n\big(f_n(\cdot,t,z)\big), \qquad f_n(\cdot,t,z) \in \hat L^2((\lambda\times\nu)^n).$$
Let $\hat f_n(t_1,z_1,\dots,t_n,z_n,t_{n+1},z_{n+1})$ be the symmetrization of $f_n(t_1,z_1,\dots,t_n,z_n,t,z)$ as a function of the $n+1$ pairs.
Suppose that
$$\sum_{n=0}^\infty (n+1)!\,\|\hat f_n\|^2_{L^2((\lambda\times\nu)^{n+1})} < \infty.$$
Then the Skorohod integral $\delta(X)$ of $X$ with respect to $\tilde N$ is defined as
$$(3)\qquad \delta(X) := \int_0^T\int_{\mathbb{R}_0} X(t,z)\,\tilde N(\delta t,dz) := \sum_{n=0}^\infty I_{n+1}(\hat f_n).$$
Moreover,
$$\|\delta(X)\|^2_{L^2(P)} = \sum_{n=0}^\infty (n+1)!\,\|\hat f_n\|^2_{L^2((\lambda\times\nu)^{n+1})} < \infty.$$
Next we consider Skorohod integrals with respect to
$$\eta(t) = \int_0^t\int_{\mathbb{R}_0} z\,\tilde N(ds,dz), \qquad t \in [0,T].$$
Let $Y(t)$, $t \in [0,T]$, be a measurable stochastic process such that $X(t,z) := Y(t)z$, $(t,z) \in [0,T]\times\mathbb{R}_0$, is Skorohod integrable with respect to $\tilde N$. Then we define the Skorohod integral of $Y$ with respect to $\eta$ by
$$\int_0^T Y(t)\,\delta\eta(t) := \int_0^T\int_{\mathbb{R}_0} Y(t)z\,\tilde N(\delta t,dz).$$
Some basic properties of the Skorohod integral:
- the Skorohod integral is a linear operator;
- $E\big[\delta(X)\big] = 0$.

Theorem: Skorohod integral as extension of the Itô integral.
(a) Let $X(t,z)$, $t \in [0,T]$, $z \in \mathbb{R}_0$, be a predictable process such that $E\big[\int_0^T\int_{\mathbb{R}_0} X^2(t,z)\,\nu(dz)dt\big] < \infty$. Then $X$ is both Itô and Skorohod integrable with respect to $\tilde N$ and
$$\int_0^T\int_{\mathbb{R}_0} X(t,z)\,\tilde N(\delta t,dz) = \int_0^T\int_{\mathbb{R}_0} X(t,z)\,\tilde N(dt,dz).$$
(b) Let $Y(t)$, $t \in [0,T]$, be a predictable process such that $E\big[\int_0^T Y^2(t)\,dt\big] < \infty$. Then $Y$ is both Itô and Skorohod integrable with respect to $\eta$ and
$$\int_0^T Y(t)\,\delta\eta(t) = \int_0^T Y(t)\,d\eta(t).$$
Example What is η(t )δη(t)? Since η(t ) = zñ(dt, dz) = I 1(f 1 (t 1, z 1 )), R where f 1 (t 1, z 1 ) = z 1, we have η(t )δη(t) = I 1 (z 1 z)ñ(δt, dz) = I 2(z 1 z 2 ) R t 2 = 2 z 1 z 2 Ñ(dt 1, dz 1 )Ñ(dt 2, dz 2 ) R R = 2 z 2 η(t 2 )Ñ(dt 2, dz 2 ) = 2 η(t 2 )dη(t 2) R = η 2 (T ) z 2 N(dt, dz). R
Malliavin derivative. In the case of the compensated Poisson random measure, the set $\mathbb{D}_{1,2}$ consists of all $\mathcal{F}_T$-measurable random variables $F \in L^2(P)$ with chaos expansion
$$F = \sum_{n=0}^\infty I_n(f_n), \qquad f_n \in \hat L^2((\lambda\times\nu)^n),$$
such that
$$\|F\|^2_{\mathbb{D}_{1,2}} := \sum_{n=1}^\infty n\,n!\,\|f_n\|^2_{L^2((\lambda\times\nu)^n)} < \infty.$$
The Malliavin derivative $D_{t,z}F$ of $F \in \mathbb{D}_{1,2}$ at $(t,z)$ is defined as
$$(4)\qquad D_{t,z}F = \sum_{n=1}^\infty n\,I_{n-1}\big(f_n(\cdot,t,z)\big).$$
Here $I_{n-1}(f_n(\cdot,t,z))$ means that the $(n-1)$-fold iterated integral of $f_n$ is taken with respect to its first $n-1$ pairs of variables $(t_1,z_1),\dots,(t_{n-1},z_{n-1})$, while the final pair $(t,z)$ is kept as a parameter.
Theorem: Closability of the Malliavin derivative. Suppose $F \in L^2(P)$ and $F_k \in \mathbb{D}_{1,2}$, $k = 1, 2, \dots$, are such that
(i) $F_k \to F$, $k \to \infty$, in $L^2(P)$;
(ii) $D_{t,z}F_k$, $k = 1, 2, \dots$, converges in $L^2(P\times\lambda\times\nu)$.
Then $F \in \mathbb{D}_{1,2}$ and $D_{t,z}F_k \to D_{t,z}F$, $k \to \infty$, in $L^2(P\times\lambda\times\nu)$.

Theorem: Duality formula. Let $X(t,z)$, $t \in [0,T]$, $z \in \mathbb{R}_0$, be Skorohod integrable and $F \in \mathbb{D}_{1,2}$. Then
$$E\Big[F\int_0^T\int_{\mathbb{R}_0} X(t,z)\,\tilde N(\delta t,dz)\Big] = E\Big[\int_0^T\int_{\mathbb{R}_0} X(t,z)\,D_{t,z}F\,\nu(dz)dt\Big].$$
Thus the Skorohod integral is the adjoint of the Malliavin derivative. This also yields the closability of the Skorohod integral as an operator.
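A Monte Carlo sanity check of the duality formula (not from the slides) in the simplest case: for a pure-jump $\eta$ with finite Lévy measure $\nu = \lambda\,\mu_z$, take $F = \eta(T)$ and the deterministic (hence adapted) integrand $X(t,z) = z$, so $\delta(X) = \eta(T)$ and $D_{t,z}F = z$. The left side is $E[\eta^2(T)]$ and the right side is $T\int z^2\,\nu(dz) = \lambda T\,E[z^2]$, computed in closed form. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, lam = 1.0, 3.0              # horizon and jump intensity (illustrative)
mu_z, sd_z = 0.4, 0.3          # jump sizes z ~ N(mu_z, sd_z^2)
m2 = mu_z**2 + sd_z**2         # E[z^2]
n_paths = 200_000

# F = eta(T): compensated compound Poisson at time T
N = rng.poisson(lam * T, n_paths)
eta_T = np.array([rng.normal(mu_z, sd_z, k).sum() for k in N]) - lam * mu_z * T

# duality with X(t,z) = z (adapted, so delta(X) = Ito integral = eta(T)):
lhs = np.mean(eta_T**2)        # E[F * delta(X)] = E[eta(T)^2], Monte Carlo
rhs = lam * T * m2             # E[int int z * D_{t,z}F nu(dz) dt] = T * int z^2 nu(dz)
assert abs(lhs - rhs) < 0.02   # tolerance well above the Monte Carlo standard error
```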
About this definition of the Skorohod integral, see e.g. Kabanov (1975, Poisson type noise), inspired by Hitsuda (1972, 1979, Brownian noise). About this presentation of the Malliavin derivative, see e.g. Løkka (2004). More details in the reference list.
About the calculus. The Skorohod integral and the Malliavin derivative for the Poisson random measure share many of the properties already mentioned in the case of the Brownian motion. However, there are important differences, as will appear clearly with the chain rule below. In the following we mention only some of the most important results.

Theorem: Chain rule. Let $F \in \mathbb{D}_{1,2}$ and let $\varphi$ be a real continuous function on $\mathbb{R}$. If $\varphi(F),\ \varphi(F + D_{t,z}F) \in L^2(P)$, then $\varphi(F) \in \mathbb{D}_{1,2}$ and
$$D_{t,z}\varphi(F) = \varphi(F + D_{t,z}F) - \varphi(F).$$

Example.
$$D_{t,z}\eta^2(T) = \big(\eta(T) + D_{t,z}\eta(T)\big)^2 - \eta^2(T) = \big(\eta(T) + z\big)^2 - \eta^2(T) = 2\eta(T)z + z^2.$$
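A further illustration of the chain rule, not in the original slides: take $\varphi(x) = e^x$. Since $D_{t,z}\eta(T) = z$,

```latex
D_{t,z}\, e^{\eta(T)} \;=\; e^{\eta(T) + z} - e^{\eta(T)} \;=\; e^{\eta(T)}\bigl(e^{z} - 1\bigr).
```

This makes the difference from the Brownian case explicit: there the chain rule is the linearization $D_t\varphi(F) = \varphi'(F)\,D_tF$, while here the derivative is a genuine finite increment of $\varphi$, reflecting the jump of size $z$.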
The following result relies on the duality formula.

Theorem: Integration by parts. Let $X(t,z)$, $t \in [0,T]$, $z \in \mathbb{R}_0$, be a Skorohod integrable stochastic process and $F \in \mathbb{D}_{1,2}$ such that the product $X(t,z)\big(F + D_{t,z}F\big)$, $t \in [0,T]$, $z \in \mathbb{R}_0$, is Skorohod integrable. Then
$$F\int_0^T\int_{\mathbb{R}_0} X(t,z)\,\tilde N(\delta t,dz) = \int_0^T\int_{\mathbb{R}_0} X(t,z)\big(F + D_{t,z}F\big)\,\tilde N(\delta t,dz) + \int_0^T\int_{\mathbb{R}_0} X(t,z)\,D_{t,z}F\,\nu(dz)dt.$$
Fundamental theorem of calculus. Let $X(s,\zeta)$, $(s,\zeta) \in [0,T]\times\mathbb{R}_0$, be a process such that $E\big[\int_0^T\int_{\mathbb{R}_0} X^2(s,\zeta)\,\nu(d\zeta)ds\big] < \infty$. Assume that $X(s,\zeta) \in \mathbb{D}_{1,2}$ for all $(s,\zeta) \in [0,T]\times\mathbb{R}_0$, and that $D_{t,z}X(\cdot,\cdot)$ is Skorohod integrable with
$$E\Big[\int_0^T\int_{\mathbb{R}_0}\Big(\int_0^T\int_{\mathbb{R}_0} D_{t,z}X(s,\zeta)\,\tilde N(\delta s,d\zeta)\Big)^2\nu(dz)dt\Big] < \infty.$$
Then
$$\int_0^T\int_{\mathbb{R}_0} X(s,\zeta)\,\tilde N(\delta s,d\zeta) \in \mathbb{D}_{1,2}$$
and
$$D_{t,z}\int_0^T\int_{\mathbb{R}_0} X(s,\zeta)\,\tilde N(\delta s,d\zeta) = \int_0^T\int_{\mathbb{R}_0} D_{t,z}X(s,\zeta)\,\tilde N(\delta s,d\zeta) + X(t,z).$$
In particular, if $X(s,\zeta) = Y(s)\zeta$, then
$$D_{t,z}\int_0^T Y(s)\,\delta\eta(s) = \int_0^T D_{t,z}Y(s)\,\delta\eta(s) + zY(t).$$
Clark-Ocone formula. Let $F \in \mathbb{D}_{1,2}$. Then
$$(5)\qquad F = E[F] + \int_0^T\int_{\mathbb{R}_0} E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\tilde N(dt,dz).$$

Observation. Suppose $F \in \mathbb{D}_{1,2}$ has the form $F = \varphi(\eta(T))$ for some continuous real function $\varphi(x)$, $x \in \mathbb{R}$. Then by the Clark-Ocone formula combined with the Markov property of the process $\eta$, we get
$$\varphi(\eta(T)) = E\big[\varphi(\eta(T))\big] + \int_0^T\int_{\mathbb{R}_0} E\big[\varphi(y + \eta(T-t) + z) - \varphi(y + \eta(T-t))\big]\Big|_{y=\eta(t)}\,\tilde N(dt,dz).$$
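A pathwise sanity check of (5), not from the slides: for a pure-jump $\eta(t) = \int_0^t\int z\,\tilde N(ds,dz)$ with finite Lévy measure $\nu = \lambda\,\mu_z$ and $F = \eta^2(T)$, the chain rule gives $D_{t,z}F = 2\eta(T)z + z^2$, so $E[D_{t,z}F \mid \mathcal{F}_t] = 2\eta(t^-)z + z^2$ by the martingale property, and (5) becomes an identity that holds exactly on every simulated path. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T, lam = 1.0, 6.0              # horizon and jump intensity (illustrative)
mu_z, sd_z = 1.0, 0.5          # jump sizes z ~ N(mu_z, sd_z^2)
m, m2 = mu_z, mu_z**2 + sd_z**2

# one path of the compensated compound Poisson eta(t) = sum_{tau_i<=t} z_i - lam*m*t
n = rng.poisson(lam * T)
taus = np.sort(rng.uniform(0.0, T, n))
zs = rng.normal(mu_z, sd_z, n)

cumj = np.concatenate(([0.0], np.cumsum(zs)))   # cumulative jump sums
eta_T = cumj[-1] - lam * m * T
eta_left = cumj[:-1] - lam * m * taus           # eta(tau_i^-) at each jump time

# exact integral of the piecewise-linear path eta(t) over [0, T]
knots = np.concatenate(([0.0], taus, [T]))
a = cumj - lam * m * knots[:-1]                 # value at the start of each interval
b = cumj - lam * m * knots[1:]                  # value just before the next knot
int_eta = np.sum((a + b) / 2.0 * np.diff(knots))

# (5) for F = eta(T)^2:
# F = E[F] + sum over jumps of (2*eta(tau^-)*z + z^2)
#          - int_0^T int (2*eta(t)*z + z^2) nu(dz) dt   (compensator part)
EF = lam * T * m2                               # E[eta(T)^2]
rhs = EF + np.sum(2 * eta_left * zs + zs**2) - (2 * lam * m * int_eta + lam * m2 * T)
assert abs(eta_T**2 - rhs) < 1e-8               # exact up to floating point
```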
Clark-Ocone theorem for Gaussian and Poisson type noises. Recall that we can combine the Brownian motion and the Poisson setting on
$$\Omega = \Omega_W \times \Omega_{\tilde N}, \qquad \mathcal{F}_T = \mathcal{F}_T^W \otimes \mathcal{F}_T^{\tilde N}, \qquad P = P_W \otimes P_{\tilde N}.$$
Then we can also consider
$$H_\alpha(\omega) := I_{\alpha^{(W)}}\big(f_{W,\alpha^{(W)}}\big)(\omega_W)\cdot I_{\alpha^{(\tilde N)}}\big(f_{\tilde N,\alpha^{(\tilde N)}}\big)(\omega_{\tilde N})$$
for any $\alpha = (\alpha^{(W)}, \alpha^{(\tilde N)})$ with $\alpha^{(k)} = 0, 1, \dots$ for $k = W, \tilde N$. Here $I_{\alpha^{(k)}}(f_{k,\alpha^{(k)}})$ is the $\alpha^{(k)}$-fold Itô integral with respect to the Brownian motion, if $k = W$, or to the compensated Poisson random measure, if $k = \tilde N$. Any real $\mathcal{F}_T$-measurable random variable $F \in L^2(P)$ can be written as
$$F = \sum_\alpha H_\alpha$$
for an appropriate choice of deterministic symmetric integrands.
We say that $F \in \mathbb{D}_{1,2} := \mathbb{D}_{1,2}^W \cap \mathbb{D}_{1,2}^{\tilde N}$ if
$$\|F\|^2_{\mathbb{D}_{1,2}} := \sum_{\alpha^{(W)}=0}^\infty \alpha^{(W)}\,\alpha^{(W)}!\,\big\|f_{W,\alpha^{(W)}}\big\|^2_{L^2([0,T]^{\alpha^{(W)}})} + \sum_{\alpha^{(\tilde N)}=0}^\infty \alpha^{(\tilde N)}\,\alpha^{(\tilde N)}!\,\big\|f_{\tilde N,\alpha^{(\tilde N)}}\big\|^2_{L^2(([0,T]\times\mathbb{R}_0)^{\alpha^{(\tilde N)}})} < \infty.$$
If $F \in \mathbb{D}_{1,2}$ we define the Malliavin derivative $DF$ of $F$ as the gradient
$$DF = \big(D_tF,\ D_{t,z}F\big),$$
where $D_tF$ is the derivative with respect to $W$ and $D_{t,z}F$ the one with respect to $\tilde N$.

Clark-Ocone theorem for combined Gaussian-pure jump Lévy noise. Let $F \in \mathbb{D}_{1,2}$. Then
$$(6)\qquad F = E[F] + \int_0^T E\big[D_tF \mid \mathcal{F}_t\big]\,dW(t) + \int_0^T\int_{\mathbb{R}_0} E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\tilde N(dt,dz).$$
Comments: Léon et al. (2002) suggest a different approach to prove a Clark-Ocone formula and a Malliavin derivative for pure jump Lévy processes. The Clark-Ocone formula in this case is basically equivalent to the one presented here. The domain of the Malliavin derivatives, $\mathbb{D}_{1,2}$, may be a constraint in the applications. An extension of the Malliavin derivative operators to the whole of $L^2(P)$ is achieved using techniques of white noise analysis; ref. e.g. Aase et al. (2000), Di Nunno et al. (2004), Øksendal et al. (2004). A different approach to obtain an integral representation theorem for all random variables in $L^2(P)$ and for general martingales as integrators was introduced by Di Nunno (2002, 2007). This approach introduces a non-anticipating stochastic derivative and does not use Malliavin calculus techniques.
3. Incomplete markets: minimal variance hedging

Let us consider a market model of the form:
- risk free asset: $S_0(t) = 1$, $t \in [0,T]$;
- risky asset: $dS_1(t) = \sigma(t)\,dW(t) + \int_{\mathbb{R}_0} \gamma(t,z)\,\tilde N(dt,dz)$, $t \in (0,T]$;

where $\sigma(t)$ and $\gamma(t,z)$, $t \in [0,T]$, $z \in \mathbb{R}_0$, are predictable processes which may depend on $S_1$. We denote by $\mathbb{F} = \{\mathcal{F}_t,\ t \in [0,T]\}$ the filtration generated by the Brownian motion and the Poisson random measure. The market is assumed frictionless and without trading constraints. In the framework of Itô stochastic calculus, the set $\mathcal{A}$ of admissible portfolios (the set of integrands) is given by the $\mathbb{F}$-adapted stochastic processes $\varphi$ such that
$$E\Big[\int_0^T \varphi^2(t)\Big(\sigma^2(t) + \int_{\mathbb{R}_0} \gamma^2(t,z)\,\nu(dz)\Big)dt\Big] < \infty.$$
We can show that this is an incomplete market (e.g. $F = \eta^2(T)$ is not replicable). Then there is an infinite choice of possible risk-neutral probability measures, all equivalent from the no-arbitrage point of view. Let us suppose we have chosen one and work in a risk-neutral environment: namely, $P$ is a risk-neutral probability measure and, with $S_0(t) \equiv 1$ for all $t$, our prices are already discounted.
Minimal variance hedging

Question: given an $\mathcal{F}_T$-measurable claim $F \in L^2(P)$, how close can we get to $F$ at time $T$ by hedging?

Of course, if the claim $F$ is replicable, then we can replicate it perfectly. But in an incomplete market the claim $F$ might not be replicable, and then? If we consider closeness in terms of variance, the question becomes: given $F \in L^2(\mathcal{F}_T)$, find $\varphi^* \in \mathcal{A}$ such that
$$(7)\qquad \inf_{\varphi\in\mathcal{A}} E\Big[\Big(F - E[F] - \int_0^T \varphi(t)\,dS_1(t)\Big)^2\Big] = E\Big[\Big(F - E[F] - \int_0^T \varphi^*(t)\,dS_1(t)\Big)^2\Big].$$
Here $\mathcal{A}$ is the set of admissible portfolios. The portfolio $\varphi^*$ is called a minimal variance hedging portfolio. We characterize the minimal variance hedging portfolio by means of Malliavin calculus.
Case $\gamma \equiv 0$: the Brownian motion case. First of all we consider the complete market case. This is the original example/application of the Clark-Ocone theorem in finance; ref. e.g. Ocone and Karatzas (1991).

Theorem. Any claim $F \in \mathbb{D}_{1,2} = \mathbb{D}_{1,2}^W$ admits the representation
$$(8)\qquad F = E[F] + \int_0^T \frac{1}{E\big[\sigma^2(t)\big]}\,E\big[D_tF \mid \mathcal{F}_t\big]\,\sigma(t)\,dW(t).$$
[This is just a variation of the original Clark-Ocone theorem for a Brownian motion with volatility (equiv. variance) $\sigma$.] Thus the minimal variance hedging strategy is given by
$$(9)\qquad \varphi^*(t) = \frac{\sigma(t)}{E\big[\sigma^2(t)\big]}\,E\big[D_tF \mid \mathcal{F}_t\big], \qquad t \in [0,T].$$
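A quick numerical illustration of the Brownian-case hedge (not from the slides), with constant $\sigma = 1$ so that $S_1 = W$: for the claim $F = W^2(T)$ one has $D_tF = 2W(T)$ and $E[D_tF \mid \mathcal{F}_t] = 2W(t)$, so the Clark-Ocone hedge $\varphi^*(t) = 2W(t)$ replicates $F$ exactly, up to the time-discretization error of the simulated Itô integral.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps = 1.0, 200_000
dt = T / n_steps

# one Brownian path; sigma = 1, so S_1 = W and the claim is F = W(T)^2
dW = rng.standard_normal(n_steps) * np.sqrt(dt)
W = np.concatenate(([0.0], np.cumsum(dW)))

F = W[-1]**2
EF = T                                   # E[W(T)^2] = T
# Clark-Ocone hedge: phi*(t) = E[D_t F | F_t] = 2 W(t), left-point (Ito) sums
gains = np.sum(2 * W[:-1] * dW)

# replication up to discretization error (which shrinks with dt)
assert abs(F - (EF + gains)) < 0.05
```

The residual equals $\sum (\Delta W)^2 - T$, whose standard deviation is of order $\sqrt{2T\,\Delta t}$, far below the tolerance used here.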
General case

Theorem. The minimal variance hedging strategy $\varphi^*$ of a contingent claim $F \in \mathbb{D}_{1,2}$ is given by
$$(10)\qquad \varphi^*(t) = \frac{1}{\lambda(t)}\Big\{\sigma(t)\,E\big[D_tF \mid \mathcal{F}_t\big] + \int_{\mathbb{R}_0} \gamma(t,z)\,E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\nu(dz)\Big\},$$
where $\lambda(t) := E\big[\sigma^2(t) + \int_{\mathbb{R}_0} \gamma^2(t,z)\,\nu(dz)\big]$.
Examples. Suppose the dynamics of the risky security are given by
$$dS_1(t) = dW(t) + \int_{\mathbb{R}_0} z\,\tilde N(dt,dz).$$
Let us consider $F := \int_0^T\int_{\mathbb{R}_0} z\,\tilde N(dt,dz)$. Then, since $D_tF \equiv 0$ and $D_{t,z}F = z = E\big[D_{t,z}F \mid \mathcal{F}_t\big]$, the minimal variance hedging strategy is
$$\varphi^*(t) = \frac{1}{\lambda}\int_{\mathbb{R}_0} z^2\,\nu(dz), \qquad t \in [0,T],$$
with $\lambda = 1 + \int_{\mathbb{R}_0} z^2\,\nu(dz)$. (Note that this process is actually constant.)
Suppose the dynamics of the risky security are given by
$$dS_1(t) = \int_{\mathbb{R}_0} z\,\tilde N(dt,dz).$$
Let us consider
$$F = S_1^2(T) = E[F] + \int_0^T 2S_1(t^-)\,dS_1(t) + \int_0^T\int_{\mathbb{R}_0} z^2\,\tilde N(dt,dz).$$
Then
$$D_{t,z}F = \big(S_1(T) + D_{t,z}S_1(T)\big)^2 - S_1^2(T) = 2S_1(T)z + z^2,$$
and since $S_1$ is an $\mathbb{F}$-martingale, the minimal variance hedging strategy is
$$\varphi^*(t) = \frac{1}{\int_{\mathbb{R}_0} z^2\,\nu(dz)}\int_{\mathbb{R}_0}\big(2z^2\,E\big[S_1(T) \mid \mathcal{F}_t\big] + z^3\big)\,\nu(dz) = 2S_1(t) + \frac{\int_{\mathbb{R}_0} z^3\,\nu(dz)}{\int_{\mathbb{R}_0} z^2\,\nu(dz)}.$$
We can also compute the claim $\hat F$ which gives the minimal variance hedge of $F$. This is
$$\hat F = E[F] + \int_0^T 2S_1(t^-)\,dS_1(t) + \frac{\int_{\mathbb{R}_0} \zeta^3\,\nu(d\zeta)}{\int_{\mathbb{R}_0} \zeta^2\,\nu(d\zeta)}\int_0^T\int_{\mathbb{R}_0} z\,\tilde N(dt,dz).$$
(Note that $\hat F \ne F$; in fact $F$ is not perfectly replicable.)
Sketch of the proof. Let $\varphi^*$ be the minimal variance hedging strategy and let $\hat F$ be the minimal variance hedge:
$$\hat F := E[F] + \int_0^T \varphi^*(t)\,dS_1(t) = E[F] + \int_0^T \varphi^*(t)\sigma(t)\,dW(t) + \int_0^T\int_{\mathbb{R}_0} \varphi^*(t)\gamma(t,z)\,\tilde N(dt,dz).$$
To prove the statement it is enough to show that
$$(11)\qquad E\big[(F - \hat F)\,G\big] = 0$$
for all $\mathcal{F}_T$-measurable $G \in L^2(P)$ of the form
$$G := E[G] + \int_0^T \psi(t)\,dS_1(t) = E[G] + \int_0^T \psi(t)\sigma(t)\,dW(t) + \int_0^T\int_{\mathbb{R}_0} \psi(t)\gamma(t,z)\,\tilde N(dt,dz) \qquad (\psi \in \mathcal{A}).$$
By the Clark-Ocone formula we have
$$F = E[F] + \int_0^T E\big[D_tF \mid \mathcal{F}_t\big]\,dW(t) + \int_0^T\int_{\mathbb{R}_0} E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\tilde N(dt,dz).$$
Substituting this into (11) and applying some calculus, we obtain
$$E\big[(F - \hat F)\,G\big] = E\Big[\int_0^T \psi(t)L(t)\,dt\Big] = 0,$$
with
$$L(t) = \sigma(t)\,E\big[D_tF \mid \mathcal{F}_t\big] - \varphi^*(t)\sigma^2(t) + \int_{\mathbb{R}_0}\Big[\gamma(t,z)\,E\big[D_{t,z}F \mid \mathcal{F}_t\big] - \varphi^*(t)\gamma^2(t,z)\Big]\nu(dz).$$
This holds for all $\psi \in \mathcal{A}$ if and only if $L(t) \equiv 0$. Note that we can write
$$L(t) = \Big\{\sigma(t)\,E\big[D_tF \mid \mathcal{F}_t\big] + \int_{\mathbb{R}_0} \gamma(t,z)\,E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\nu(dz)\Big\} - \lambda(t)\,\varphi^*(t), \qquad t \in [0,T].$$
Hence we conclude that
$$\varphi^*(t) = \frac{1}{\lambda(t)}\Big\{\sigma(t)\,E\big[D_tF \mid \mathcal{F}_t\big] + \int_{\mathbb{R}_0} \gamma(t,z)\,E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\nu(dz)\Big\}.$$
Clark-Ocone formula under change of measure

Girsanov theorem for Lévy processes. Let $\theta(s,x) \le 1$, $s \in [0,T]$, $x \in \mathbb{R}_0$, and $u(s)$, $s \in [0,T]$, be $\mathbb{F}$-predictable processes such that
$$\int_0^T\int_{\mathbb{R}_0}\big\{|\log(1 + \theta(s,x))| + \theta^2(s,x)\big\}\,\nu(dx)dt < \infty \quad P\text{-a.e.}, \qquad \int_0^T u^2(s)\,ds < \infty \quad P\text{-a.e.}$$
Let
$$Z(t) = \exp\Big\{-\int_0^t u(s)\,dW(s) - \frac{1}{2}\int_0^t u^2(s)\,ds + \int_0^t\int_{\mathbb{R}_0}\big\{\log(1 - \theta(s,x)) + \theta(s,x)\big\}\,\nu(dx)ds + \int_0^t\int_{\mathbb{R}_0}\log(1 - \theta(s,x))\,\tilde N(ds,dx)\Big\}, \quad t \in [0,T].$$
Assume that $Z(t)$, $t \in [0,T]$, is a martingale and define $dQ = Z(T)\,dP$ on $\mathcal{F}_T$. Set
$$\tilde N_Q(dt,dx) := \theta(t,x)\,\nu(dx)dt + \tilde N(dt,dx) \qquad\text{and}\qquad dW_Q(t) := u(t)\,dt + dW(t).$$
Then $\tilde N_Q(\cdot,\cdot)$ is the compensated Poisson random measure of $N(\cdot,\cdot)$ and $W_Q(\cdot)$ is a Brownian motion under $Q$.
Clark-Ocone formula under change of measure. Let $F \in \mathbb{D}_{1,2}$ with $F \in L^2(Q)$. Under appropriate integrability assumptions, the result can be formulated and the integral representation of $F$ with respect to $W_Q$ and $\tilde N_Q$ is as follows:
$$F = E_Q[F] + \int_0^T E_Q\Big[D_tF - \int_t^T D_tu(s)\,dW_Q(s)\ \Big|\ \mathcal{F}_t\Big]\,dW_Q(t) + \int_0^T\int_{\mathbb{R}_0} E_Q\Big[F\,(\tilde H - 1) + \tilde H\,D_{t,x}F\ \Big|\ \mathcal{F}_t\Big]\,\tilde N_Q(dt,dx),$$
where
$$\tilde H = \exp\Big\{\int_t^T\int_{\mathbb{R}_0}\Big[D_{t,x}\theta(s,x) + \log\Big(1 - \frac{D_{t,x}\theta(s,x)}{1 - \theta(s,x)}\Big)\big(1 - \theta(s,x)\big)\Big]\,\nu(dx)ds + \int_t^T\int_{\mathbb{R}_0}\log\Big(1 - \frac{D_{t,x}\theta(s,x)}{1 - \theta(s,x)}\Big)\,\tilde N_Q(ds,dx)\Big\}.$$
4. Minimal variance hedging under partial information

In many situations one cannot rely on complete knowledge of the information contained in $\mathbb{F} = \{\mathcal{F}_t:\ t \in [0,T]\}$. It is often more realistic to consider the case of a dealer who has only partial information at his disposal for his financial decisions. We denote by $\mathbb{E} = \{\mathcal{E}_t:\ t \in [0,T]\}$, with $\mathcal{E}_t \subseteq \mathcal{F}_t$ for all $t$, the flow of information available to the trader. Examples of such a situation include:
- delayed information: $\mathcal{E}_t = \mathcal{F}_{t-\delta}$ for $t \ge \delta$, and $\mathcal{E}_t$ trivial for $t < \delta$;
- trivial information (no information): $\mathcal{E}_t$ trivial, $t \in [0,T]$;
- price information: $\mathcal{E}_t = \sigma\{S_1(s):\ s \le t\}$, $t \in [0,T]$.
In this situation it is natural to think of a modified minimal variance hedging problem and study the minimal variance hedging under partial information: given $F \in L^2(\mathcal{F}_T)$, find $\varphi^* \in \mathcal{A}_{\mathbb{E}}$ such that
$$\inf_{\varphi\in\mathcal{A}_{\mathbb{E}}} E\Big[\Big(F - E[F] - \int_0^T \varphi(t)\,dS_1(t)\Big)^2\Big] = E\Big[\Big(F - E[F] - \int_0^T \varphi^*(t)\,dS_1(t)\Big)^2\Big].$$
Here $\mathcal{A}_{\mathbb{E}}$, the set of admissible portfolios, is defined as the set of the $\mathbb{E}$-adapted processes $\varphi$ such that
$$E\Big[\int_0^T \varphi^2(t)\Big(\sigma^2(t) + \int_{\mathbb{R}_0} \gamma^2(t,z)\,\nu(dz)\Big)dt\Big] < \infty.$$
Theorem. The minimal variance hedging strategy $\varphi^*_{\mathbb{E}}$ under partial information of a contingent claim $F \in \mathbb{D}_{1,2}$ is given by
$$(12)\qquad \varphi^*_{\mathbb{E}}(t) = \frac{1}{\lambda_{\mathbb{E}}(t)}\,E\Big[\sigma(t)\,E\big[D_tF \mid \mathcal{F}_t\big] + \int_{\mathbb{R}_0} \gamma(t,z)\,E\big[D_{t,z}F \mid \mathcal{F}_t\big]\,\nu(dz)\ \Big|\ \mathcal{E}_t\Big],$$
where
$$\lambda_{\mathbb{E}}(t) := E\Big[\sigma^2(t) + \int_{\mathbb{R}_0} \gamma^2(t,z)\,\nu(dz)\ \Big|\ \mathcal{E}_t\Big].$$
Example. Same as before: suppose the dynamics of the risky security are given by
$$dS_1(t) = \int_{\mathbb{R}_0} z\,\tilde N(dt,dz),$$
and consider
$$F = S_1^2(T) = E[F] + \int_0^T 2S_1(t^-)\,dS_1(t) + \int_0^T\int_{\mathbb{R}_0} z^2\,\tilde N(dt,dz).$$
The minimal variance hedging strategy under partial information is
$$\varphi^*_{\mathbb{E}}(t) = \frac{1}{\int_{\mathbb{R}_0} z^2\,\nu(dz)}\int_{\mathbb{R}_0}\big(2z^2\,E\big[S_1(T) \mid \mathcal{E}_t\big] + z^3\big)\,\nu(dz) = 2E\big[S_1(t) \mid \mathcal{E}_t\big] + \frac{\int_{\mathbb{R}_0} z^3\,\nu(dz)}{\int_{\mathbb{R}_0} z^2\,\nu(dz)}.$$
In this case the minimal variance hedge under partial information is
$$\hat F_{\mathbb{E}} = E[F] + \int_0^T 2E\big[S_1(t) \mid \mathcal{E}_t\big]\,dS_1(t) + \frac{\int_{\mathbb{R}_0} \zeta^3\,\nu(d\zeta)}{\int_{\mathbb{R}_0} \zeta^2\,\nu(d\zeta)}\int_0^T\int_{\mathbb{R}_0} z\,\tilde N(dt,dz).$$
References

This presentation follows:
G. Di Nunno, B. Øksendal and F. Proske, Malliavin Calculus for Lévy Processes with Applications to Finance, in progress, Springer 2008.

References on the topic at the base of this presentation include:
K. Aase, B. Øksendal, N. Privault and J. Ubøe, White noise generalizations of the Clark-Haussmann-Ocone theorem with application to mathematical finance, Finance and Stochastics, 4 (2000), 465-496.
F. E. Benth, G. Di Nunno, A. Løkka, B. Øksendal and F. Proske, Explicit representation of the minimal variance portfolio in markets driven by Lévy processes, Math. Finance, 13 (2003), 54-72.
G. Di Nunno, Stochastic integral representations, stochastic derivatives and minimal variance hedging, Stochastics Stochastics Rep., 73 (2002), 181-198.
G. Di Nunno, On orthogonal polynomials and the Malliavin derivative for Lévy stochastic measures, SMF, Séminaires et Congrès, 16 (2007), 55-69.
G. Di Nunno, B. Øksendal and F. Proske, White noise analysis for Lévy processes, Journal of Functional Analysis, 206 (2004), 109-148.
H. Föllmer and M. Schweizer, Hedging of contingent claims under incomplete information, in Applied Stochastic Analysis, Stochastics Monogr., 5 (1991), Gordon and Breach, 389-414.
Ju. M. Kabanov, A generalized Itô formula for an extended stochastic integral with respect to Poisson random measure, Uspehi Mat. Nauk, 29 (1974), 167-168.
Ju. M. Kabanov, Integral representations of functionals of processes with independent increments, Teor. Verojatnost. i Primenen., 19 (1974), 889-893.
Ju. M. Kabanov, Extended stochastic integrals, Teor. Verojatnost. i Primenen., 20 (1975), 725-737.
J. A. Léon, J. L. Solé, F. Utzet and J. Vives, On Lévy processes, Malliavin calculus and market models with jumps, Finance Stochast., 6 (2002), 197-225.
A. Løkka, Martingale representation and functionals of Lévy processes, Stochastic Analysis and Applications, 22 (2004), 867-892.
D. Nualart and W. Schoutens, Chaotic and predictable representations for Lévy processes, Stochastic Process. Appl., 90 (2000), 109-122.
D. Nualart and J. Vives, Anticipative calculus for the Poisson process based on the Fock space, Lecture Notes in Math., 1426 (1990), 154-165.
D. L. Ocone and I. Karatzas, A generalized Clark representation formula, with application to optimal portfolios, Stochastics Stochastics Rep., 34 (1991), 187-220.
B. Øksendal and F. Proske, White noise of Poisson random measures, Potential Analysis, 21 (2004), 375-403.
J. Picard, Formules de dualité sur l'espace de Poisson, Ann. Inst. H. Poincaré Probab. Statist., 32 (1996), 509-548.