
Ito Formula for Stochastic Integrals w.r.t. Compensated Poisson Random Measures on Separable Banach Spaces

B. Rüdiger, G. Ziglio

no. 187

This work was created with the support of the Sonderforschungsbereich 611 at the Universität Bonn, funded by the Deutsche Forschungsgemeinschaft, and has been reproduced as a manuscript. Bonn, October 2004

Ito formula for stochastic integrals w.r.t. compensated Poisson random measures on separable Banach spaces

B. Rüdiger*, G. Ziglio**

* Mathematisches Institut, Universität Koblenz-Landau, Campus Koblenz, Universitätsstrasse 1, 56070 Koblenz, Germany; * SFB 611, Institut für Angewandte Mathematik, Abteilung Stochastik, Universität Bonn, Wegelerstr. 6, D-53115 Bonn, Germany.
** Facoltà di Scienze, Università di Trento, Via Sommarive 14, I-38050 Povo (Trento), Italia

Abstract

We prove the Ito formula (3) for Banach valued stochastic processes with jumps, with the martingale part given by a stochastic integral w.r.t. a compensated Poisson random measure. Such stochastic integrals have been discussed in [35]. The Ito formula is computed here for Banach valued functions.

AMS classification (2000): 60H05, 60G51, 60G57, 46B09, 47G99

Keywords: Stochastic integrals on separable Banach spaces, random martingale measures, compensated Poisson random measures, additive processes, random Banach valued functions, type 2 Banach spaces, M-type 2 Banach spaces, Ito formula, Lévy-Ito decomposition theorem.

1 Introduction

In [35] the definition of the stochastic integrals

Z_t(ω) := ∫_0^t ∫_A f(s, x, ω) (N(dsdx)(ω) − ν(dsdx))    (1)

of Hilbert and Banach valued functions f(s, x, ω) w.r.t. compensated Poisson random measures (cprms) N(dsdx)(ω) − ν(dsdx) has been discussed. (See also [7], [8], [9], [1], [13], [19], [25], [24], [26], [31], [32], [33], [34], [37], [38] for the discussion of stochastic integrals w.r.t. cprms on infinite dimensional spaces.) The state space of f is denoted by F and is a separable Banach space; s, t ∈ IR_+, x is defined on some separable Banach space E, A is a Borel set of E \ {0}, and ω is defined on a probability space (Ω, F, P). The cprm q(dsdx)(ω) := N(dsdx)(ω) − ν(dsdx) is associated to some E-valued additive process (X_t)_{t∈IR_+} on (Ω, F, P) (see Section 2 for the concept of cprm, and Section 3 for the discussion of the integrals (1)). The problem of giving an

appropriate meaning to (1) comes from the property of the cprms of being only σ-finite (in general not finite) random measures. The cprms are in general not finite on the sets (0, T] × Λ if it does not hold that 0 ∉ Λ̄. For real valued functions this problem was already discussed e.g. in [14], where the integrals are defined by approximating the integral (1) in L_p(Ω, F, P), p = 1, 2, with integrals on the sets B^c(0, ε), where with B^c(0, ε) we denote the complement of the ball of radius ε > 0 centered in 0 (we refer to Definition 3.20 for a precise statement).

The definition of the integrals (1) for the real valued case was discussed in a different way than [14] in [36] and [6]. In [36] and [6] the natural definition of Ito integrals (consisting of an L_p(Ω, F, P)-approximation of integrals of appropriate "simple functions") is extended to the case where integration is performed w.r.t. cprms instead of the Brownian motion. (In [36] cprms associated to α-stable processes are considered, while [6] discusses general martingale measures on IR^d.)

In [35] the definition of the stochastic integrals (1) for real valued random functions, given in [14], and resp. in [36], [6], has been generalized to the case of Hilbert and Banach valued random functions. We call them simple-p-, resp. strong-p-integrals. It has been proven that the definition of the strong-p-integral (which for the real valued case coincides with the one used in [36], [6]) is stronger than the definition of the simple-p-integral (which in the real valued case coincides with the one of [14]), in a similar way as, e.g., the definition of the Lebesgue integral is stronger than the definition of the Riemann integral. In fact, one more sufficient condition on the integrand f is needed if we require the existence of the simple-p-integral: we need that f has a left-continuous version. (In [14] it is required, for the existence of the simple-p-integral, that the integrand be predictable, in the sense that it is measurable w.r.t. the smallest σ-algebra generated by the left-continuous functions.) In [35] it has been proven that if this condition holds, then the simple-2- and strong-2-integrals coincide (see also Section 3 for precise statements). The results in [35], besides being a generalization to Hilbert and Banach spaces, therefore unify the definition of the integral (1) proposed in [14] with the one of [36], [6], for the case of integrands which have a left continuous version.

The definition of the integral (1) on infinite dimensional spaces, like Hilbert or Banach spaces, is necessary to discuss stochastic differential equations (SDEs) with non Gaussian noise on such spaces (see e.g. [4], [3], where this problem is discussed). In fact, from the Lévy-Ito decomposition theorem [15], [7], [1] it follows that additive non Gaussian noise is naturally defined through integrals of the form (1). In [22], [23] existence and uniqueness of solutions of SDEs with non Gaussian additive noise has been proven on separable Banach spaces, under local Lipschitz conditions for the drift- and noise-coefficients. (We refer to [22] for precise statements.) In [23] the continuous dependence of the solutions on the coefficients of such SDEs has been shown and the Markov properties have been

discussed. Applications of such SDEs related to financial models are given in [23], too.

In this article we prove the Ito formula for the Banach valued processes

Y_t(ω) := Z_t(ω) + ∫_0^t g(s, ω) dh(s, ω) + ∫_0^t ∫_Λ k(s, x, ω) N(dsdx)(ω),    (2)

Z_t(ω) being the stochastic integral in (1). We assume that h(s, ω) : IR_+ × Ω → IR is a real valued càdlàg process whose almost all paths are of bounded variation, and g is a Banach valued random (càdlàg or càglàd) function depending on time (see Section 4 for precise statements). We consider H : IR_+ × F → G, with G a separable Banach space and H(s, y), s ∈ IR_+, y ∈ F, twice Fréchet differentiable. We shall prove (see Section 4, Section 5, Section 6 for precise statements) that the following Ito formula holds:

H(t, Y_t(ω)) − H(0, Y_0(ω)) = ∫_0^t ∂_s H(s, Y_s(ω)) ds
+ ∫_0^t ∫_A {H(s, Y_{s^-}(ω) + f(s, x, ω)) − H(s, Y_{s^-}(ω))} q(dsdx)(ω)
+ ∫_0^t ∫_A {H(s, Y_{s^-}(ω) + f(s, x, ω)) − H(s, Y_{s^-}(ω)) − ∂_y H(s, Y_{s^-}(ω)) f(s, x, ω)} ν(dsdx)
+ ∫_0^t ∫_Λ {H(s, Y_{s^-}(ω) + k(s, x, ω)) − H(s, Y_{s^-}(ω))} N(dsdx)(ω)
+ ∫_0^t ∂_y H(s, Y_{s^-}(ω)) g(s^-, ω) dh(s, ω)
+ Σ_{0<s≤t} {H(s, Y_{s^-}(ω) + g(s^-, ω) Δh(s, ω)) − H(s, Y_{s^-}(ω)) − ∂_y H(s, Y_{s^-}(ω)) g(s^-, ω) Δh(s, ω)}    P-a.s.,    (3)

where ∂_s H(s, y) denotes the Fréchet derivative of H w.r.t. s ∈ IR_+ for y ∈ F fixed, ∂_y H(s, y) denotes the Fréchet derivative of H w.r.t. y ∈ F for s ∈ IR_+ fixed, and q(dsdx)(ω) := N(dsdx)(ω) − ν(dsdx).

Remark 1.1 The simplest example of a process (2) for which the Ito formula (3) can be applied is when (Y_t)_{t∈IR_+} coincides with a pure jump additive process (A_t)_{t∈IR_+} with values in a Banach space E. From the Lévy-Ito decomposition theorem [15], [7], [1] it follows that

A_t(ω) = ∫_0^t ∫_{0<‖x‖≤1} x (N(dsdx)(ω) − ν(dsdx)) + α_t + ∫_0^t ∫_{‖x‖>1} x N(dsdx)(ω)    P-a.s.,    (4)

where α_t ∈ E. The stochastic integral w.r.t. q(dsdx)(ω) is a strong-p- and simple-p-integral with p = 1, resp. p = 2, if

∫_0^t ∫_{0<‖x‖≤1} ‖x‖^p ν(dsdx) < ∞,    (5)

and if, in case of p = 2, the Banach space is of type 2 [1]. (See Definition 3.19 or [5], [2] for the definition of type 2 Banach spaces.) In the general case it is only a simple-p-integral, as follows from [7] (see also the final Remark in [1]). A series of other examples for which the Ito formula (3) can be applied on infinite dimensional Banach spaces are in [22], [23], [2] (see e.g. also [12], [25]).

Suppose F = G = IR and that H does not depend on the time s ∈ IR_+. Suppose that f is left continuous in time, so that the integral in (1) is a simple-2-integral. Suppose also that the term ∫_0^t g(s, ω) dh(s, ω) in (2) is skipped. The Ito formula (41) coincides for this case with the Ito formula found in [14].

In [25], [13] an Ito formula is also given for Hilbert valued stochastic integrals of the form (1). The stochastic integrals (1) are there defined in a more abstract way, by assuming some conditions which control the second moment of the integrals of approximating functions. These conditions are satisfied by our strong-2-integrals (used also in [36], [6] for the real valued case), so that the Ito formula in [25], [13] is satisfied by such integrals. The Ito formula in [25], [13] is however not given in terms of the cprm q(dsdx)(ω) and the compensator ν(dsdx), as done in this paper in (3), where we also generalize to the case of Banach valued functions. An Ito formula in terms of the cprm q(dsdx)(ω) and the compensator ν(dsdx) is however necessary to establish the generator of a jump Markov process Y_t(ω) of the form (2), as e.g. in the problems discussed in [23].

2 Compensated Poisson random measures on separable Banach spaces

We assume that a filtered probability space (Ω, F, (F_t)_{t≥0}, P) satisfying the usual hypotheses is given:

i) F_t contains all null sets of F, for all t such that 0 ≤ t < +∞;

ii) F_t = F_t^+, where F_t^+ := ∩_{u>t} F_u, for all t such that 0 ≤ t < +∞, i.e. the filtration is right continuous.

In this Section we recall the definition of compensated Poisson random measures associated to additive processes on (Ω, F, (F_t)_{t≥0}, P) with values in (E, B(E)), where in the whole paper we assume that E is a separable Banach space with norm ‖·‖ and B(E) is the corresponding Borel σ-algebra (see also [12], [1], [35]).

Definition 2.1 A process (X_t)_{t≥0} with state space (E, B(E)) is an F_t-additive process on (Ω, F, P) if

i) (X_t)_{t≥0} is adapted (to (F_t)_{t≥0});

ii) X_0 = 0 a.s.;

iii) (X_t)_{t≥0} has increments independent of the past, i.e. X_t − X_s is independent of F_s if 0 ≤ s < t;

iv) (X_t)_{t≥0} is stochastically continuous;

v) (X_t)_{t≥0} is càdlàg.

An additive process is a Lévy process if the following condition is satisfied:

vi) (X_t)_{t≥0} has stationary increments, that is, X_t − X_s has the same distribution as X_{t−s}, 0 ≤ s < t.

Remark 2.2 That any process satisfying i)-iv) has a càdlàg version follows from its being, after compensation, a martingale (see e.g. [11], [16], [29]).

Let (X_t)_{t≥0} be an additive process on (E, B(E)) (in the sense of Definition 2.1). Set X_{t^-} := lim_{s↑t} X_s and ΔX_s := X_s − X_{s^-}. We denote with N(dtdx)(ω) the Poisson random measure associated to the additive process (X_t)_{t≥0}, and with ν(dtdx) its compensator (see e.g. [12] or [35]). Then

q(dtdx)(ω) := N(dtdx)(ω) − ν(dtdx)    (6)

is the compensated Poisson random measure associated to (X_t)_{t≥0}. (We sometimes omit to write the dependence on ω.) We refer to e.g. [12] or [35] for the properties of q(dtdx)(ω). We recall however that N(dtdx)(ω), for each ω fixed, resp. ν(dtdx), are σ-finite measures on the σ-algebra B(IR_+ × E \ {0}) generated by the product semi-ring S(IR_+) × B(E \ {0}) of the product sets (t_1, t_2] × A, with 0 ≤ t_1 < t_2 and A ∈ B(E \ {0}), where B(E \ {0}) is the trace σ-algebra on E \ {0} of the Borel σ-algebra B(E) on E. If however Λ ∈ F(E \ {0}), with

F(E \ {0}) := {Λ ∈ B(E \ {0}) : 0 ∈ (Λ̄)^c},    (7)

then for all ω ∈ Ω

N((t_1, t_2] × Λ)(ω) = Σ_{t_1<s≤t_2} 1_Λ(ΔX_s) < ∞    (8)

and

P(N((t_1, t_2] × Λ) = k) = exp(−ν((t_1, t_2] × Λ)) (ν((t_1, t_2] × Λ))^k / k!    (9)

with

ν((t_1, t_2] × Λ) := E[N((t_1, t_2] × Λ)] < ∞.    (10)
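To make the objects in (6)-(10) concrete in the simplest real-valued setting (E = IR), the following self-contained Python sketch simulates a compound Poisson path, builds N((t_1, t_2] × Λ) by counting jumps as in (8) with Λ bounded away from 0 as in (7), and compares its empirical mean over many samples with the compensator value ν((t_1, t_2] × Λ) from (10). The intensity and the jump law are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

lam, T = 3.0, 1.0        # jump intensity per unit time and horizon (illustrative)
t1, t2 = 0.2, 0.8        # the time window (t1, t2]
a = 0.5                  # Λ = {x : x > a}, bounded away from 0 as in (7)

def N_window():
    """One sample of N((t1, t2] x Λ): count jumps ΔX_s with s in (t1, t2] and ΔX_s in Λ, as in (8)."""
    n = rng.poisson(lam * T)                      # total number of jumps on (0, T]
    times = rng.uniform(0.0, T, size=n)
    sizes = rng.exponential(1.0, size=n)          # jump sizes ΔX_s (illustrative law)
    return int(np.sum((times > t1) & (times <= t2) & (sizes > a)))

samples = np.array([N_window() for _ in range(20000)])
nu_window = lam * (t2 - t1) * np.exp(-a)          # ν((t1,t2] x Λ) = λ (t2-t1) P(ΔX_s > a), cf. (10)
print("empirical mean of N((t1,t2] x Λ):", samples.mean())
print("compensator ν((t1,t2] x Λ)      :", nu_window)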

3 Stochastic integrals w.r.t. compensated Poisson random measures on separable Banach spaces Let F be a separable Banach space with norm F. (When no misunderstanding is possible we write instead of F.) Let F t := B(IR + (E \ {})) F t be the product σ -algebra generated by the semi -ring B(IR + (E \ {})) F t of the product sets F, B(IR + E \ {}), F F t. Let T >, and M T (E/F ) := {f : IR + E \ {} Ω F, such that f is F T /B(F ) measurable f(t, x, ω) is F t adapted x E \ {}, t (, T ]} (11) Here we recall the definition of stochastic integrals of random functions f(t, x, ω) M T (E/F ) with respect to the compensated Poisson random measures q(dtdx)(ω) := N(dtdx)(ω) ν(dtdx) associated to an additive process (X t ) t introduced in [35] (and [1] for the case of deterministic functions f(x) not depending on time t IR + and on ω Ω). There is a natural definition of stochastic integral w.r.t. N(dtdx)(ω), resp. q(dtdx)(ω), on those sets (, T ] Λ where the measures N(dtdx)(ω) (with ω fixed) and resp. ν(dtdx) are finite, i.e. / Λ. ccording to [35] we give the following definition Definition 3.1 Let t (, T ], Λ F(E \ {}) (defined in (7)), f M T (E/F ). The natural integral of f on (, t] Λ w.r.t. Poisson random measure N(dtdx)(ω) is t f(s, x, ω) N(dsdx)(ω) := Λ <s t f(s, ( X s)(ω), ω)1 Λ ( X s (ω)) ω Ω (12) Definition 3.2 Let t (, T ], Λ F(E \ {}) (defined in (7)), f M T (E/F ). ssume that f(,, ω) is Bochner integrable on (, T ] Λ w.r.t. ν, for all ω Ω fixed. The natural integral of f on (, t] Λ w.r.t. the compensated Poisson random measure q(dtdx) := N(dtdx)(ω) ν(dtdx) is t f(s, x, ω) (N(dsdx)(ω) ν(dsdx)) := Λ <s t f(s, ( X s)(ω), ω)1 Λ ( X s (ω)) t f(s, x, ω)ν(dsdx) ω Ω(13) Λ where the last term is understood as a Bochner integral, (for ω Ω fixed) of f(s, x, ω) w.r.t. the measure ν. 6
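As a toy illustration of Definition 3.2 in the real-valued case, the sketch below evaluates the natural integral (13) of f(s, x) = x on (0, t] × Λ for a simulated compound Poisson path: the jump sum of (12) minus the (here ordinary Lebesgue) integral of f against ν. All model parameters are illustrative assumptions, not part of the paper; averaged over many paths the result should be close to 0, in line with the martingale property of integrals against q.

import numpy as np

rng = np.random.default_rng(1)

lam, T, a, t = 3.0, 1.0, 0.5, 0.7      # intensity, horizon, cut-off for Λ = {x > a}, upper time t

def natural_integral():
    """Natural integral (13) of f(s, x) = x over (0, t] x Λ w.r.t. q = N - ν, for one simulated path."""
    n = rng.poisson(lam * T)
    times = rng.uniform(0.0, T, size=n)
    sizes = rng.exponential(1.0, size=n)
    keep = (times <= t) & (sizes > a)
    jump_sum = sizes[keep].sum()                      # Σ f(s, ΔX_s) 1_Λ(ΔX_s), i.e. the integral (12)
    compensator = lam * t * (1.0 + a) * np.exp(-a)    # ∫_0^t ∫_Λ x ν(ds dx) for Exp(1) jump sizes
    return jump_sum - compensator

vals = np.array([natural_integral() for _ in range(20000)])
print("mean over paths (martingale property suggests ≈ 0):", vals.mean())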

It is more difficult to define the stochastic integral on those sets (, T ], B(E \ {}), s.th. ν((, T ] ) =. For real valued functions this problem was already discussed e.g. in [36] and [6], [14], [39] (for general martingale measures), where two different definitions of stochastic integrals were introduced. In [35] the definitions of stochastic integrals of Hilbert and Banach valued random functions, which generalize (in a natural way) the definitions of such stochastic integrals (of real valued random functions) have been given, for the case where integration is performed w.r.t. compensated Poisson random measures on separable Banach spaces. Lie in [35], we shall call them strong -p -, resp. simple -p -integrals. The strong -p -integral is the limit in L F p (Ω, F, P ) (the space of F -valued random variables Y, with E[ Y p ] <, defined in Definition 3.8) of the the natural integrals (13) of the simple functions defined in (14). (We refer to Definition 3.9 for a precise statement.) The simple -p -integral is the limit in L F p (Ω, F, P ) of the natural integrals over the sets (, T ] Λ ɛ, with T > and Λ ɛ := B c (, ɛ), when ɛ, where with B c (, ɛ) we denote the complementary of the ball of radius ɛ > centered in (we refer to Definition 3.2 for a precise statement). In [35] sufficient conditions for the strong -1 - and simple -1- integral are found on any separable Banach space, and sufficient conditions of the strong -2- and simple -2 -integral are found on separable Banach spaces of M -type 2 and type 2 (see Theorems 3.12-3.16 and Theorems 3.22-3.26). (See Definition 3.19 or [5], [2] for the definition of type 2 Banach spaces, Definition 3.17 or [27], [28] for the definition of M -type 2 Banach spaces). Here we first introduce the notion of strong -p, as it turns out from [35] that the strong -p integral is a stronger definition, than the simple -p -integral. In fact, if the sufficient conditions in Theorems 3.12-3.16 for the existence of the strong -p -integrals for f(t, x, ω) are satisfied, and, moreover, f(, x, ω) is left continuous on t IR + for each x E \ {} and ω Ω fixed, then f is also simple -p integrable, and the simple -p and strong -p, -integrals coincide (see Theorems 3.22-3.26). We first introduce the simple functions and recall the definition of strong -p -integral, p 1. Definition 3.3 function f belongs to the set Σ(E/F ) of simple functions, if f M T (E/F ), T > and there exist n IN, m IN, such that n 1 m f(t, x, ω) = 1,l (x)1 F,l (ω)1 (t,t +1 ](t)a,l (14) =1 l=1 where,l F(E \ {}) (i.e. /,l ), t (, T ], t < t +1, F,l F t, a,l F. For all 1,..., n 1 fixed,,l1 F,l1,l2 F,l2 = if l 1 l 2. Remar 3.4 Let f Σ(E/F ) be of the form (14), then the natural integral defined in Definition 3.2 is given also in the following form: 7

T f(t, x, ω)q(dtdx)(ω) = n 1 =1 l=1 m a,l 1 F,l (ω)q((t, t +1 ] (, T ],l )(ω). (15) for all B(E \ {}), T >. This is an easy consequence of the Definition of the random measure q(dtdx)(ω). We recall here the definition of strong -p -integral (Definition 3.9 below) given in [35] through approximation of the natural integrals of simple functions. Before we establish some properties of the functions f Mν T,p (E/F ), p 1, with T Mν T,p (E/F ) := {f M T (E/F ) : E[ f(t, x, ω) p ] ν(dtdx) < }, (16) where with E[f] we denote the expectation with respect to the probability measure P. Let ν P be the product measure on the semi -ring B(IR + (E \ {})) F of the product sets F, B(IR + (E \ {})), F F. Let us also denote by ν P the unique extension of ν P on the product σ -algebra F := B(IR + (E \ {})) F generated by B(IR + (E \ {})) F. Starting from here we assume in the whole article that the following hypothesis is satisfied: Hypothesis : ν is a product measure ν = α β on the σ -algebra generated by the semi -ring S(IR + ) B(E \ {}), of a finite measure α on S(IR + ) and a σ -finite measure β on B(E \ {}). α(dt) is absolutely continuous w.r.t. the Lebesgues measure dt on B(IR + ), i.e. α(dt) dt Definition 3.5 Let f : IR + (E \{}) Ω F be given. sequence {f n } n IN of F T /B(F ) -measurable functions is L p -approximating f on (, T ] Ω w.r.t. ν P, if f n is ν P -a.s. converging to f, when n, and lim n T E[ f n (t, x, ω) f(t, x, ω) p ] dν =, (17) i.e. f n f converges to zero in L p ((, T ] Ω, ν P ), when n. Theorem 3.6 [35] Let p 1, T >, then for all f Mν T,p (E/F ) and all B(E \ {}), there is a sequence of simple functions {f n } n IN which satisfies the following Property P: f n Σ(E/F ) n IN, and f n is L p -approximating f on (, T ] Ω w.r.t. ν P. Remar 3.7 From the proofs of Theorem 3.6 in [35] it follows that if f Mν T,p (E/F ) Mν T,q (E/F ) with p, q 1, T >, then, for all B(E \ {}) there is a sequence of simple functions {f n } n IN, f n Σ(E/F ) n IN, s.th. f n is both L p - and L q - approximating f on (, T ] Ω w.r.t. ν P. 8
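To make Property P concrete in a scalar toy case, the following sketch (illustrative choices throughout, not taken from the paper) uses ν(dt dx) = λ dt ⊗ e^{-x} dx on (0, T] × Λ with Λ = {x > a}, the deterministic integrand f(t, x) = x sin(t), and the simple functions f_n of the form (14) obtained by freezing f at the lower-left corners of a dyadic grid in (t, x); the L^1(ν)-error appearing in (17) is estimated by crude numerical quadrature and visibly tends to 0.

import numpy as np

lam, T, a, xmax = 3.0, 1.0, 0.5, 12.0   # ν(dt dx) = λ dt e^{-x} dx on (0,T] x (a, ∞); xmax truncates the tail

f = lambda t, x: x * np.sin(t)

def l1_error(n):
    """Approximate ∫∫ |f_n - f| dν for the simple function f_n frozen at the corners of a 2^n x 2^n grid."""
    t_edges = np.linspace(0.0, T, 2 ** n + 1)
    x_edges = np.linspace(a, xmax, 2 ** n + 1)
    err = 0.0
    for i in range(2 ** n):
        for j in range(2 ** n):
            # crude cell-wise quadrature of ∫ |f(t,x) - f(t_i, x_j)| λ e^{-x} dt dx
            ts = np.linspace(t_edges[i], t_edges[i + 1], 5)
            xs = np.linspace(x_edges[j], x_edges[j + 1], 5)
            tt, xx = np.meshgrid(ts, xs, indexing="ij")
            integrand = np.abs(f(tt, xx) - f(t_edges[i], x_edges[j])) * lam * np.exp(-xx)
            err += integrand.mean() * (t_edges[i + 1] - t_edges[i]) * (x_edges[j + 1] - x_edges[j])
    return err

for n in (1, 2, 3, 4):
    print(f"n = {n}:  ∫∫ |f_n - f| dν ≈ {l1_error(n):.4f}")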

Definition 3.8 Let p 1, L F p (Ω, F, P ) is the space of F -valued random variables, such that E Y p = Y p dp <. We denote by p the quasi -norm ([2]) given by Y p = (E Y p ) 1/p. Given (Y n ) n IN, Y L F p (Ω, F, P ), we write lim p n Y n = Y if lim n Y n Y p = Definition 3.9 Let p 1, T >. We say that f is strong -p -integrable on (, T ] w.r.t. q(dtdx)(ω), B(E \ {}), if there is a sequence {f n } n IN Σ(E/F ), s.th. f n is L p -approximating f on (, T ] Ω w.r.t. ν P, and for any such sequence the limit of the natural integrals of f n w.r.t. q(dtdx) exists in L F p (Ω, F, P ) for n, i.e. T T f(t, x, ω)q(dtdx)(ω) := lim p n f n (t, x, ω)q(dtdx)(ω) (18) exists. Moreover, the limit (18) does not depend on the sequence {f n } n IN Σ(E/F ), which is L p -approximating f on (, T ] Ω w.r.t. ν P. We call the limit in (18) the strong -p -integral of f w.r.t. q(dtdx) on (, T ]. The strong -p -integral of f w.r.t. q(dtdx) on (, T ], with < < T, is defined as T T f(t, x, ω)q(dtdx)(ω) := f(t, x, ω)q(dtdx)(ω) f(t, x, ω)q(dtdx)(ω) (19) Remar 3.1 Let f M T,r ν (E/F ) M T,q ν (E/F ), r, q 1. If f is both r- and q- strong -integrable on (, T ], r, q 1, then from Remar 3.7 it follows that the strong -r -integral coincides with the strong -q -integral. In fact from any sequence f n, for which the limit in (18) holds with p = r and p = q, it is possible to extract a subsequence for which the convergence (18) holds also P - a.s.. Proposition 3.11 [35] Let p 1. Let f be strong -p -integrable on (, T ], t B(E \ {}). Then the strong -p -integral f(s, x)q(dsdx), t [, T ], is an F t -martingale with mean zero. Theorem 3.12 [35] Let f Mν T,1 (E/F ), then f is strong -1 -integrable w.r.t. q(dt, dx) on (, t], for any < t T, B(E \ {}). Moreover t t E[ f(s, x, ω)q(dsdx)(ω) ] 2 E[ f(s, x, ω) ]ν(dsdx) (2) 9

Theorem 3.13 [35] Suppose (F, B(F )):= (H, B(H)) is a separable Hilbert space. Let f Mν T,2 (E/H), then f is strong -2 -integrable w.r.t. q(dtdx) on (, t], for any < t T, B(E \ {}). Moreover t t E[ f(s, x, ω)q(dsdx)(ω) 2 ] = E[ f(s, x, ω) 2 ]ν(dsdx) (21) Theorem 3.14 [23] Suppose that F is a separable Banach space of M -type 2. Let f Mν T,2 (E/F ), then f is strong 2 -integrable w.r.t. q(dtdx) on (, t], for any < t T, B(E \ {}). Moreover t t E[ f(s, ω)q(dsdx)(ω) 2 ] K2 2 E[ f(s, ω) 2 ]ν(dsdx). (22) where K 2 is the constant in the Definition 3.17 of M -type 2 Banach spaces. Remar 3.15 Theorem 3.14 was first proven in [35], however only for functions f(t, x, ω) = f(t, ω), i.e. which do not depend on the variable x E. Theorem 3.16 [35] Suppose that F is a separable Banach space of type 2. Let f Mν T,2 (E/F ), and f be a deterministic function, i.e. f(t, x, ω) = f(t, x), then f is strong 2 -integrable w.r.t. q(dtdx) on (, t], for any < t T, B(E \ {}). Moreover inequality (22) holds, with K 2 being the constant in the Definition 3.19 of type 2 Banach spaces. We recall here the definitions of M -type 2 and type 2 separable Banach spaces (see e.g. [27], [5]). Definition 3.17 separable Banach space F, with norm, is of M -type 2, if there is a constant K 2, such that for any F - valued martingale (M ) 1,..,n the following inequality holds: E[ M n 2 ] K 2 with the convention that M 1 =. n =1 E[ M M 1 2 ] (23) We remar that a separable Hilbert space is in particular a separable Banach space of M -type 2. In fact, any 2 -uniformly -smooth separable Banach space is of M -type 2 [28], [4]. We recall here the definition of 2 - uniformly -smooth separable Banach space. Definition 3.18 separable Banach space F, with norm, is 2 -uniformly -smooth if there is a constant K 2 >, s.th. for all x, y F x + y 2 + x y 2 2 x 2 + K 2 y 2 1
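Returning to the scalar case of Theorem 3.13, the following Monte Carlo sketch gives a quick sanity check of the isometry (21) for F = H = IR: with f(s, x) = x and A = {x > a} bounded away from 0 (so that the strong-2-integral is just the natural integral of Definition 3.2), E[|∫∫ f dq|^2] should match ∫∫ E[|f|^2] dν. The compound Poisson model and its parameters are illustrative assumptions, not from the paper.

import numpy as np

rng = np.random.default_rng(2)

lam, T, a, t = 3.0, 1.0, 0.5, 1.0

def integral_against_q():
    """∫_0^t ∫_A x q(ds dx) for one compound Poisson path, A = {x > a} (a natural integral)."""
    n = rng.poisson(lam * T)
    times = rng.uniform(0.0, T, size=n)
    sizes = rng.exponential(1.0, size=n)
    keep = (times <= t) & (sizes > a)
    return sizes[keep].sum() - lam * t * (1.0 + a) * np.exp(-a)

lhs = np.mean([integral_against_q() ** 2 for _ in range(50000)])
rhs = lam * t * (a * a + 2.0 * a + 2.0) * np.exp(-a)   # ∫_0^t ∫_A x^2 ν(ds dx) for Exp(1) jump sizes
print("E[(∫∫ x dq)^2] ≈", lhs)
print("∫∫ x^2 dν      =", rhs)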

Definition 3.19 separable Banach space F is of type 2, if there is a constant K 2, such that if {X i } n i=1 is any finite set of centered independent F - valued random variables, such that E[ X i 2 ] <, then n n E[ X i 2 ] K 2 E[ X i 2 ] (24) i=1 We remar that any separable Banach space of M -type 2 is a separable Banach space of type 2. Moreover, a separable Banach space is of type 2 as well as of cotype 2 if and only if it is isomorphic to a separable Hilbert space [18], where a Banach space of cotype 2 is defined by putting instead of in (24) (see [5], or [2]). We recall the definition of simple -p -integrals w.r.t. the compensated Poisson random measures q(dtdx), introduced in [35]. Definition 3.2 Let p 1, T >, B(E \ {}). Let f M T (E/F ) and f bounded ν P -a.s. on (, T ] Ω. f is simple -p integrable on (, T ] Ω, if for all sequences {δ n } n IN, δ n IR +, such that lim n δ n =, the limit with p lim n <s T i=1 1 Xs(ω) Λ δn f(s, X s (ω), ω) T Λ δn f(t, x, ω)ν(dtdx) (25) Λ δn := {x E \ {} : δ n < x } (26) exists and does not depend on the choice of the sequences {δ n } n IN, satisfying the above properties. We call the limit (25) the simple -p integral of f on (, T ] w.r.t. the compensated Poisson random measure N(dtdx) ν(dtdx), and denote it with T f(t, x, ω)(n(dtdx)(ω) ν(dtdx)) (27) Remar 3.21 If f is both r- and q- simple integrable on (, T ], p,q 1, then the limit in (25) with p = r coincides P -a.s. with the limit in (25) with p = q, as there exist in both cases a subsequence of a sequence {δ n } n IN, such that lim n δ n =, for which the convergence (25) holds also P - a.s.. The following Theorems 3.22-3.27 and Corollary 3.28 have been proven in [35]. Theorem 3.22 Let B(E \ {}), f Mν T,1 (E/F ), and assume f is left continuous in the time interval (, T ] for every x and P - a.e. ω Ω fixed. Then f is simple 1 -integrable on (, T ] and the simple 1 -integral coincides with the strong 1 -integral. 11
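The next small computation illustrates, for the purely illustrative σ-finite intensity β(dx) = x^{-5/2} dx on (0, 1] ⊂ IR \ {0} (infinitely many small jumps), why the limit over the sets Λ_{δ_n} in the definition of the simple-p-integral (formula (25) above) is delicate: the compensator part ∫_{Λ_δ} x β(dx) appearing there diverges as δ → 0, while ∫_{Λ_δ} x^2 β(dx), which controls the L^2-convergence behind the simple-2- and strong-2-integrals, stays bounded.

def first_moment(delta):
    # ∫_δ^1 x · x^(-5/2) dx = ∫_δ^1 x^(-3/2) dx = 2 (δ^(-1/2) - 1): diverges as δ -> 0
    return 2.0 * (delta ** -0.5 - 1.0)

def second_moment(delta):
    # ∫_δ^1 x^2 · x^(-5/2) dx = ∫_δ^1 x^(-1/2) dx = 2 (1 - δ^(1/2)): stays bounded
    return 2.0 * (1.0 - delta ** 0.5)

for delta in (1e-1, 1e-2, 1e-4, 1e-8):
    print(f"δ = {delta:8.0e}   ∫_Λδ x β(dx) = {first_moment(delta):12.2f}"
          f"   ∫_Λδ x² β(dx) = {second_moment(delta):6.4f}")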

Theorem 3.23 Let F be a separable Hilbert space. Let B(E \ {}), f Mν T,2 E/F ), and let f be left continuous in the time interval (, T ] for every x and P - a.e. ω Ω fixed. Then f is simple 2 -integrable on (, T ]. The simple 2 -integral coincides with the strong 2 -integral. Theorem 3.24 Let F be a separable Banach space of M -type 2. Let B(E \ {}), f Mν T,2 E/F ). Let f be left continuous in the time interval (, T ] for P - a.e. ω Ω fixed. Then f is simple 2 -integrable on (, T ]. The simple 2 -integral coincides with the strong 2 -integral. Remar 3.25 Theorem 3.24 has been stated in [35] for functions f, such that f(s, x, ω) = f(s, ω), i.e. such that f does not depend on the variable x E. The proof of the Theorem goes through however for general functions, lie stated in 3.24, and is in this case a consequence of Theorem 3.14 (see Remar 3.15). Theorem 3.26 Let F be a separable Banach space of type 2. Let B(E \ {}), f Mν T,2 E/F ), and let f be left continuous in the time interval (, T ] for every x fixed. Let f be a deterministic function, i.e. f(t, x, ω) = f(t, x). Then f is simple 2 -integrable on (, T ]. The simple 2 -integral coincides with the strong 2 -integral. The statements in Theorems 3.22-3.26 are proven in [35] by first proving the following Theorem 3.27 and its Corollary 3.28, that we shall also use in the next Section of this article. Theorem 3.27 Let Λ F(E\{}) (defined in (7)). Let f be uniformly bounded on (, T ] Λ Ω, and left continuous in the time interval (, T ] for every x Λ and for P - a. e. ω Ω. Let p 1. t (, T ] the strong -p -integral of f, coincides with the natural integral of f on (, t] Λ. Corollary 3.28 Let Λ F(E \ {}), and let f be left continuous in the time interval (, T ], for every x Λ and for P - a. e. ω Ω fixed. Suppose that one of the following three hypothesis is satisfied: i) f Mν T,1 (E/F ), ii) f Mν T,2 (E/F ) and F is a separable Banach space of M -type 2. iii) f Mν T,2 (E/F ), f(s, x, ω) = f(s, x), i.e. f is a deterministic function, and F is a Banach space of type 2. Then the natural integral of f on (, t] Λ is ν P - a.s. bounded t (, T ] and coincides with the strong 1 -integral of f, if condition i) is satisfied, with the strong 2 -integral of f, if condition ii), or iii) is satisfied. 12

Remar 3.29 The condition ii) in Corollary 3.28 has been proven in [35] for the case where f does not depend on the variable x E. In fact, ii) follows from Theorem 3.14, which in [35] was still proven only for this case (see Remar 3.15. The same proof holds however for the general case, lie stated in ii). Similar to what is discussed for the Ito -integral w.r.t. the Brownian motion, the concept of strong -p -integral (1) can be generalized by approximating (1) with the natural integral of simple functions which converge to the integrand f in probability, instead of converging in L p ((, T ] Ω, ν P ). This is discussed in [36] for the case of stochastic integrals (1) of real valued functions w.r.t. cprm associated to α -stable processes, and in [35] for Banach valued functions and general cprms. Such stochastic integrals are called in [35] strong integrals of type p, to distinguish these from the strong -p- integrals. We report here the definition of strong integrals of type p, and some related results [35] needed in Section 6, where we shall prove that the Ito formula (3) holds also for such integrals. Definition 3.3 Let p 1, T >, B(E\{}). We say that f M T (E/F ) is strong integrable on (, T ], with strong integral of type p, if for all N IN f p N (t, x, ω) := f(t, x, ω)1 t f p dν N is strong p -integrable and the following limit (28) exists q(dtdx) is the strong -p integral of the func- where T tion f p N lim N T f(t, x, ω)1 t (t, x, ω). T f(t, x, ω)q(dtdx)(ω) := f(t, x, ω)1 t f(s,y,ω) p ν(dsdy) N q(dtdx)(ω) in probability, (28) f p dν N Remar 3.31 Let p 1. Let f, g be strong integrable on (, T ], B(E \ {}), T >, with strong integral of type p. For any a, b IR, a f + b g is strong integrable on (, T ] with strong integral of type p, and T T T a f(s, x, ω)q(dsdx)+ b g(s, x, ω)q(dsdx) = (a f(s, x, ω)+b g(s, x, ω))q(dsdx) Let N T,p ν (E/F ) := {f M T (E/F ) : T (29) f(t, x, ω) p ν(dtdx) < P a.s.}. (3) 13

Proposition 3.32 [35] Suppose i ) f N T,1 ν (E/F ), then f is strong integrable on (, t] with strong integral of type 1, for all t (, T ], B(E \ {}). Proposition 3.33 [35] Suppose that one of the following two hypothesis is satisfied: ii ) f Nν T,2 (E/F ) and F is a separable Banach space of M -type 2, iii ) f Nν T,2 (E/F ), f(s, x, ω) = f(s, x), i.e. f is a deterministic function, and F is a Banach space of type 2. then f is strong integrable on (, t] with strong integral of type 2, for all t (, T ], B(E \ {}). Theorem 3.34 [35] Let T >, p 1, then for all f Nν T,p (E/F ), for all B(E \ {}), there is a sequence of simple functions {f n } n IN satisfying the following property : Property Q: f n Σ(E/F ) n IN, f n converges ν P -a.s. to f on (, T ] Ω, when n, and lim n T f n (t, x) f(t, x) p dν = P a.s. (31) Theorem 3.35 [35] Let T >, B(E \ {}). Define p = 1 if the hypothesis i ) in Proposition 3.32 are satisfied, or p = 2, if the hypothesis ii ) or iii ) in Proposition 3.33 are satisfied. Suppose that {f n } n IN Nν T,p (E/F ) satisfies property Q in Theorem 3.34, and there exists g Nν T,p (E/F ), s.th. then T T f n p dν T T f(t, x)q(dtdx) = lim n g p dν P a.s. (32) f n (t, x)q(dtdx) in probability (33) Remar 3.36 Taing {f n } n IN Σ(E/F ) instead of {f n } n IN N T,p ν (E/F ) in Theorem 3.35, we can see that the strong integrals of type p have similar properties compared with the strong -p- integrals. Remar 3.37 Under the hypothesis ii ) Proposition 3.33 and Theorem 3.35 have been proven in [35] for the case where f does not depend on the variable x E. In fact, under the condition ii ) the statements follow from Theorem 3.14, which in [35] was still proven only for this case (see Remar 3.15). The same proofs hold however for the general case. 14

4 The Ito formula for Banach valued functions Lie in the previous Sections, we assume that E and F are separable Banach spaces, q(dtdx) = N(dtdx)(ω) ν(dtdx) is the compensated Poisson random measure associated to the additive process (X t ) t with values on E. Let < t T, B(E \ {}) and t Z t (ω) := f(s, x, ω)q(dsdx)(ω) (34) We assume that one of the following hypothesis is satisfied: i) f Mν T,1 (E/F ), ii) f Mν T,2 (E/F ) and F is a separable Banach space of M -type 2, iii) f Mν T,2 (E/F ), f(s, x, ω) = f(s, x), i.e. f is deterministic, and F is a Banach space of type 2. Z t (ω) is then the strong -p -integral of f w.r.t. q(dsdx)(ω), with p = 1 if i) holds, and p = 2 if ii) or iii) holds. In this and the coming Sections we analyze the Ito -formula for the F -valued random process (Y t ) t, with t t Y t (ω) := Z t (ω) + g(s, ω)dh(s, ω) + (s, x, ω)n(dsdx)(ω). (35) We assume that h(s, ω) : IR + Ω IR is a real valued, cádlàg process, whose almost all paths are of bounded variation, moreover h(s, ω) s [,T ] is F s -adapted for all s [, T ], and jointly measurable in all its variables. The random process g(s, ω) : IR + Ω F is F s -adapted for all s [, T ], and jointly measurable in all its variables, and is either cádlàg or cáglàd (i.e. left continuous with right limit, for the definition see e.g. [3]). We assume that, for P -a.e. ω Ω, g(s, ω) is Bochner integrable w.r.t. the signed Lesbegues measure induced by the bounded variation process (h(s, ω)) s, i.e. t Λ g(s, ω) d h(s, ω) < P a.e. (36) where ( h(s, ω) ) s is the variation process associated to (h(s, ω)) s (see e.g. [3] for the definition of variation process). Moreover, we assume that Λ F(E \ {}) so that / Λ, (37) (s, x, ω) M T (E/F ) and (s, x, ω) is finite P -a.s. for every s [, T ], x Λ. Moreover (s, x, ω) is cádlàg or cáglàd. We also assume that s [, T ] 1 Λ ( X s (ω)) (s, X s (ω), ω) = 1 ( X s (ω)) f(s, X s (ω), ω) = P a.s., (38) 15

s [, T ] X s (ω) h(s, ω) = P a.s., (39) where h(s, ω) := lim t s h(t, ω) lim t s h(t, ω) P a.s. (s, x, ω) := lim t s (t, x, ω) lim t s (t, x, ω) P a.s. f(s, x, ω) := lim t s f(t, x, ω) lim t s f(t, x, ω) P a.s. Moreover s [, T ] 1 ( X s (ω))1 Λ ( X s (ω)) = P a.s., (4) Let H : F G, G be a separable Banach space, < t T. (In the next Section we shall also consider H depending on time.) In this Section we shall prove that the following Ito formula holds (see Theorem 4.4 and Theorem 4.6 for a precise statement): H(Y t (ω)) H(Y (ω)) = t {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} q(dsdx)(ω) + t {H(Y (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω)} ν(dsdx) + t Λ {H(Y (ω) + (s, x, ω)) H(Y s (ω))} N(dsdx)(ω) + t H (Y s (ω))g(, ω)dh(s, ω) + <s t {H(Y (ω) + g(, ω) h(s, ω)) H(Y s (ω)) H (Y s (ω))g(, ω) h(s, ω)} P a.s., (41) where H(y) is twice Fréchet differentiable y F. H (y) (resp. H (y)) denotes the first (resp. second) Fréchét derivative in y. Obviously H : F L(F/G), and H : F L(F, L(F/G)), where, as usual, we denote with L(K/L) the Banach space of linear bounded operators from a Banach space K to a Banach space L, with the usual supremum norm. With K we denote the norm of a Banach space K, so that in particular F L(K/L) = F(y) sup L y K y K, if F L(K/L). When no misunderstanding is possible, we shall write simply, leaving the subscription which denotes the Banach space. In the above example we would e.g. write F instead of F L(K/L), and y, resp. F(y), instead of y K, resp. F(y) L. We shall use in this Section the following inequalities, which can be found e.g. in [17], Chapt. X. H(y) H(y ) H (y )(y y ) y y sup H (y + θ(y y )) H (y ) <θ 1 (42) H(y) H(y ) y y sup H (y + θ(y y )) (43) <θ 1 16

H (y) H (y ) y y sup H (y + θ(y y )) (44) <θ 1 We now prove the properties 1)-6) in the following Theorem 4.1 and Lemma 4.3, which imply that all the integrals in equation (41) are well defined. Theorem 4.1 Let the following hypothesis hold hypothesis B: (t least) one of the hypothesis i) -iii) is satisfied. G is a separable Banach space. If ii) or iii) is satisfied, we mae the additional hypothesis that G is a separable Banach space of M -type 2 Let H : F G, with H(y) twice Fréchet differentiable for all y F. Let the second Fréchet derivative H be uniformly bounded on each centered ball of radius R, i.e. B(, R) with R, and the Fréchet derivative H be uniformly bounded. Then 1) H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω) is P - a.s. Bochner integrable w.r.t ν on (, T ], B(E \ {}). 2) H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) Mν T,1 (E/G), resp. Mν T,2 (E/G), if i), resp. ii) or iii), is satisfied. Remar 4.2 Let C := sup y F H (y), then from (43) it follows that H(y) H(y ) C y y (45) From (42) and (44) it follows that for each R >, there exists a constant C R, depending on R, such that if y, y B(, R), Proof of Theorem 4.1: H(y) H(y ) H (y )(y y ) C R y y 2 (46) Let us first prove property 1). Suppose that condition i) is satisfied, then property 1) follows from (45). In fact T H(Y (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω) ν(dsdx) 2 T C f(s, x, ω) ν(dsdx) (47) where the r.h.s. in (47) is P -a.s. finite, as f M T,1 ν (E/F ). Suppose that condition ii) or iii) is satisfied, then property 1) follows from (46). In fact, from Remar 4.2, it follows that there is a finite constant C R (ω) > depending on ω Ω, s.th. T H(Y (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω) ν(dsdx) T C R(ω) f(s, x, ω) 2 ν(dsdx), (48) 17

where the r.h.s. in (41) is P -a.s. finite, as f M T,2 ν (E/F ). Property 2) follows from theorems 3.12-3.16, and T T E[ H(Y (ω) + f(s, x, ω)) H(Y s (ω)) p ]ν(dsdx) E[sup <θ 1 H (Y s (ω) + θf(s, x, ω)) p f(s, x, ω) p ]ν(dsdx) C p T E[ f(s, x, ω) p ]ν(dsdx). (49) Lemma 4.3 Suppose that all the hypothesis in Theorem 4.1 are verified. The following properties hold 6) 4) T T 3) H (Y s )g(, ω) d h(s, ω) < P a.s. <s t H(Y (ω) + g(, ω) h(s, ω)) H(Y s (ω)) < P a.s. 5) <s t H (Y s (ω)g(, ω) h(s, ω) < P a.s. Λ {H(Y (ω) + (s, x, ω)) H(Y s (ω))} N(dsdx)(ω) < P a.s. Proof of Lemma: Property 3) follows from the assumption that H is uniformly bounded on F, which implies T H (Y s )g(, ω) d h(s, ω) C T Property 4) follows from inequality (45), as this implies g(, ω) d h(s, ω) < (5) <s t H(Y (ω) + g(, ω) h(s, ω)) H(Y s (ω)) C <s t g(, ω) h(s, ω)) C T g(, ω) d h(s, ω) (51) Property 5) is proven by similar arguments, and property 6) follows from the assumption that H is uniformly bounded on F, the assumption that (s, x, ω) is finite P -a.s. for every s [, T ], x Λ and the definition of natural integral w.r.t. N(dtdx)(ω). Theorem 4.4 Suppose that the hypothesis in Theorem 4.1 are all satisfied. Let f(, x, ω) be left continuous on (, T ], for all x and P -a.e. ω Ω fixed. Then on (, T ] H(Y s (ω)+f(s, x, ω)) H(Y s (ω)) is strong -p and simple -p- integrable with p = 1, resp. p = 2, if i), resp. ii) or iii), is satisfied. Moreover (41) holds P -a.s.. 18

Remar 4.5 The integral defined in (34) is, according to Theorems 3.12-3.14 and Theorems 3.22-3.26, a strong -p and simple -p -integral, with p = 1, if condition i) is satisfied, resp. p = 2, if condition ii), or iii) is satisfied. From the inequalities (2), (21), (22) it follows that Z (ω) is P -a.e. uniformly bounded on [, T ]. Proof of Theorem 4.4: That on (, T ], H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) is strong -p- and simple -p- integrable with p = 1 (resp. p = 2) if i) (resp. ii), or iii)) is satisfied, follows from Theorem 3.22-3.26 and Theorem 4.1. We now prove that (41) holds P -a.s.. First we assume the additional hypothesis that /, i.e. that F(E \ {}). From hypothesis i), or ii), or iii), the above additional hypothesis F(E\{}), and Corollary 3.28, it follows that Z t (ω) = t f(s, ( X s )(ω), ω)1 ( X s (ω)) f(s, x, ω)ν(dsdx). (52) <s t From 2) in Theorem 4.1 and Corollary 3.28 it follows that the natural integral (Definition 3.2) of H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) exists, coincides with the strong -p -integral, with p = 1, resp. p = 2, and has the following form t {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} q(dsdx)(ω) = <s t H(Y s (ω) + f(s, X s (ω), ω)) H(Y s (ω))1 ( X s (ω)) t {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} ν(dsdx) P a.e. (53) Moreover, it is easily checed, that H (Y s (ω))f(s, x, ω) and H(Y s (ω)+f(s, x, ω)) H(Y s (ω)) are Bochner integrable for P -q.e. ω Ω. Let then, for all ω Ω, Γ n ω() := {,..., 2 n 1 : s ( n, n +1 ] : X s(ω) } (54) with n := + (t ) 2 n, (55) Γ n ω(λ) := {,..., 2 n 1 : s ( n, n +1] : X s (ω) Λ} (56) γ n ω := {,..., 2 n 1 : s ( n, n +1] : h(s, ω) } (57) H(Y t (ω)) H(Y (ω)) = 2 n 1 = H(Y n +1 (ω)) H(Y n (ω)) = Γ n ω () Γn ω (Λ) γn ω H(Y n +1 (ω)) H(Y n (ω)) + / Γ H(Y n ω () Γn ω (Λ) γn n (ω)) H(Y ω +1 n (ω)) (58) 19

s H is continuous on F, it follows lim n Γ H(Y n ω () Γn ω (Λ) γn ω n +1 (ω)) H(Y n (ω)) = <s t H(Y (ω) + f(s, X s (ω), ω)) H(Y s (ω))1 ( X s (ω)) + <s t H(Y (ω) + (s, X s (ω), ω)) H(Y s (ω))1 Λ ( X s (ω)) + <s t H(Y (ω) + g(, ω) h(, ω)) H(Y s (ω)) = t {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} q(dsdx)(ω) + t Λ {H(Y (ω) + (s, x, ω)) H(Y s (ω))} N(dsdx)(ω) + <s t H(Y (ω) + g(, ω) h(, ω)) H(Y s (ω)) + t {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} ν(dsdx) (59) P a.s.. (41) follows once shown that for some subsequence of {n} n IN, that for simplicity we still denote with {n}, the following convergence holds for n lim n {H(Y n +1 (ω)) H(Y n (ω))} / Γ n ω () Γn ω (Λ) γn ω = t H (Y s (ω))f(s, x, ω)ν(dsdx) + t H (Y s )g(, ω)dh(s, ω) <s t {H (Y s (ω))g(, ω) h(s, ω)} P a.s. (6) Proof of (6): We mae a Taylor expansion of the function H(y): We first prove that / Γ n ω () Γn ω (Λ) γn ω H(Y n +1 (ω)) H(Y n (ω)) = / Γ n ω () Γn ω (Λ) γn ω H (Y n (ω))(y n +1 (ω) Y n (ω)) + / Γ er n n ω () Γn ω (Λ) γn ω (ω) = / Γ H (Y n ω () Γn ω (Λ) γn n ω (ω))(z n +1 (ω) Z n (ω)) + 2 n 1 =1 H (Y n (ω))( n +1 g(s, ω)dh(s, ω)) n Γ n ω () Γn ω (Λ) γn H (Y ω n (ω))( n +1 g(s, ω)dh(s, ω)) n + / Γ n ern ω () Γn ω (Λ) γn ω (ω) (61) lim n / Γ n ω () Γn ω (Λ) γn ω er n (ω) = P a.e. (62) From Remar 4.5 it follows that for P -a.e. ω Ω there is a finite constant R(ω) such that Y t (ω) R(ω), t (, T ]. From (46) it follows that for P -a.e. ω Ω there is a finite constant C(ω) >, s.th. 2

er n (ω) C(ω) Y n +1 (ω) Y n (ω) 2 (63) and / Γ C(ω) Y nω () Γnω (Λ) γnω n+1 (ω) Y n (ω) 2 2C(ω) / Γ n ω () Γn ω (Λ) γn ω Z n +1 (ω) Z n (ω) 2 +2C(ω) / Γ n n +1 g(s, ω)dh(s, ω) 2 ω () Γn ω (Λ) γn ω n 2C(ω) sup / Γ n ω () Γ n n +1 f(s, x, ω)ν(dsdx) ω (Λ) γn ω n 2 n 1 = n +1 f(s, x, ω)ν(dsdx) n +2C(ω) sup / Γ n ω () Γ n ω (Λ) γn ω n +1 n / Γ n ω () Γn ω (Λ) γn ω n +1 n g(s, ω)dh(s, ω) g(s, ω)dh(s, ω) (64) From i), or ii), or iii), and the assumption that F(E \ {}), it follows that f(s, x, ω) is Bochner integrable P -a.e. w.r.t. ν, it follows lim sup n sup / Γ n ω () Γ n n +1 f(s, x, ω)ν(dsdx) ω (Λ) γn ω n n lim sup n sup / Γ n ω () Γ n +1 f(s, x, ω) ν(dsdx) = ω (Λ) γn ω n P a.s. (65) 2 n 1 = n +1 f(s, x, ω)ν(dsdx) n Moreover we have that t sup / Γ n ω () Γ n ω (Λ) γn ω n +1 n / Γ n ω () Γn ω (Λ) γn ω n +1 n sup / Γ n ω () Γ n ω (Λ) γn ω n +1 n f(s, x, ω) ν(dsdx) (66) g(s, ω)dh(s, ω) g(s, ω)dh(s, ω) g(s, ω)dh(s, ω) t g(s, ω) dh(s, ω) P a.s. (67) and lim sup n sup / Γ n ω () Γ n n +1 g(s, ω)dh(s, ω) ω (Λ) γn ω n n lim sup n sup / Γ n ω () Γ n +1 g(s, ω) d h(s, ω) = ω (Λ) γn ω n P a.s. (68) 21

It follows that lim n / Γ n ω () Γn ω (Λ) γn ω Y n +1 (ω) Y n (ω) 2 = P a.s. (69) and hence from (63) it follows that (62) holds. We have lim n / Γ H (Y n ω () Γn ω (Λ) γn ω n (ω))(z n +1 (ω)) Z n (ω)) = lim n / Γ n H (Y ω () Γn ω (Λ) γn ω n (ω)) n +1 f(s, x, ω)ν(dsdx) n = t H (Y s (ω))f(s, x, ω)ν(dsdx) P a.e., (7) where the last equality in (7) follows from the following estimates and the hypothesis that ν(dsdx) = α(ds)β(dx), with α(ds) ds: lim sup n / Γ n n +1 ω () Γn ω (Λ) γn ω n H (Y n (ω))f(s, x, ω)ν(dsdx) n +1 n H (Y s (ω))f(s, x, ω)ν(dsdx) C(ω) n / Γ n +1 ω () Γn ω (Λ) γn ω n Y n(ω) Y (ω) f(s, x, ω) ν(dsdx) n C(ω){sup / Γ n ω () Γ n +1 f(s, x, ω) ν(dsdx) ω (Λ) γn ω n n + sup / Γ n ω () Γ n +1 g(s, ω) d h(s, ω) } ω (Λ) γn ω n 2 n 1 = f(s, x, ω) ν(dsdx) = P a.s. (71) n +1 n where the first inequality and convergence to zero in (71) holds by the same arguments as in the proof of (62). (6) is proven, once we have shown that 2 n 1 =1 H (Y n (ω))( n +1 n g(s, ω)dh(s, ω)) Γ n ω () Γn ω (Λ) γn H (Y ω n (ω))( n +1 g(s, ω)dh(s, ω)) n = t H (Y s )g(, ω)dh(s, ω) <s t {H (Y s (ω))g(, ω) h(s, ω)} P a.s. (72) From the continuity of H it follows that lim n Γ n H (Y ω () Γn ω (Λ) γn n ω (ω))( n +1 g(s, ω)dh(s, ω)) n = <s t {H (Y s (ω)g(, ω) h(s, ω)} P a.s., (73) 22

while from Theorem 21 Chapt. II, Paragraph 5 of [3] it follows 2 n 1 lim n =1 H (Y n (ω))( n +1 g(s, ω)dh(s, ω)) n = t H (Y s )g(, ω)dh(s, ω), (74) where convergence holds in probability (see [3] for a stronger statement). It follows that for some subsequence of {n} n IN, that for simplicity we still denote with {n}, convergence in (74) and hence in (6) holds P -a.s.. It follows that (41) holds for every F(E \ {}), i.e. such that /. In order to prove that (41) holds for any B(E \ {}), we first consider (41) with a set B c (, ɛ) instead of, and then tae the limit ɛ. In fact, from Theorem 3.22-3.26, it follows that for some sequence {ɛ n } n IN, such that ɛ n when n, the following limit holds. t lim n B c (,ɛ (H(Y n) (ω) + f(s, x, ω)) H(Y s (ω))q(dsdx)(ω) t (H(Y (ω) + f(s, x, ω)) H(Y s (ω))q(dsdx)(ω) when n P a.e. and from the definition of Bochner integral it follows that t lim ɛ B c (,ɛ) (H(Y (ω) + f(s, x, ω)) H(Y s (ω) H (Y s (ω))f(s, x, ω))ν(dsdx) = t (H(Y (ω) + f(s, x, ω)) H(Y s (ω) H (Y s (ω))f(s, x, ω))ν(dsdx) P a.e. so that (41) holds P -a.e. In the next theorem we leave out the hypothesis that f(, x, ω) is left continuous. Theorem 4.6 Suppose that the same hypothesis as in Theorem 4.1 hold, then H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) is strong -1, resp. strong -2, - integrable on (, T ] if condition i), resp. ii) or iii), is satisfied. (41) holds P -a.s.. Remar 4.7 Z t (ω), defined in (34), is in Theorem 4.6, according to Theorems 3.12-3.16, a strong -p -integral, with p = 1, if condition i) is satisfied, resp. p = 2, if condition ii) or iii), is satisfied. Moreover E[ Z s ] is uniformly bounded on [, T ]. Proof of Theorem 4.6: 23

That H(Y s (ω)+f(s, x, ω)) H(Y s (ω)) is strong -1, resp. strong -2, - integrable on (, T ] if condition i), resp. ii) or iii), is satisfied, follows from Theorem 3.12-3.16 and Theorem 4.1. Let us prove that (41) holds P -a.s.. Let {f n } n IN Σ(E/F ). Suppose that {f n } n IN is L p -approximating f on (, T ] Ω w.r.t. ν P with p = 1 (resp. p = 2) if condition i) (resp. ii) or iii)) is satisfied. Let t Zt n (ω) := f n (s, x, ω)q(dsdx)(ω) (75) Y n t (ω) := t t f n (s, x, ω)q(dsdx)(ω)+ g(s, ω)dh(s, ω)+ t Λ (s, x, ω)n(dsdx)(ω). (76) then from Theorem 4.4 it follows that Y n t (ω) satisfies (41) P- a.s. for all t T. We now from Theorem 3.12 - Theorem 3.16 that lim p n Z n t (ω) = Z t (ω). It follows that there is a subsequence, that for simplicity we denote again with {Z n t (ω)} n IN, such that so that and hence lim n Zn t (ω) = Z t (ω) P a.s. for all t T, (77) lim Y t n (ω) = Y t (ω) P a.s. for all t T, (78) n lim H(Y t n (ω)) H(Y n (ω)) = H(Y t (ω)) H(Y (ω)) P a.s. for all t T, n t lim n = t Λ {H(Y n (ω) + (s, x, ω)) H(Y n (ω))} N(dsdx)(ω) (79) Λ {H(Y (ω) + (s, x, ω)) H(Y s (ω))} N(dsdx)(ω) P a.s. for all t T, (8) lim n <s t {H(Y n (ω) + g(, ω) h(s, ω)) H(Y n (ω)) H (Y n (ω)g(, ω) h(s, ω)} = {H(Y s (ω) + g(, ω) h(s, ω)) H(Y s (ω)) H (Y s (ω)g(, ω) h(s, ω)} P a.s. for all t T. (81) We shall prove that there is a subsequence such that the following properties (82), (83) hold. lim n t H (Y n )g(, ω)dh(s, ω) = t H (Y s )g(, ω)dh(s, ω) P a.s., (82) 24

t lim n = t {H(Y n (ω) + f n (s, x, ω)) H(Y n (ω)) H (Y n (ω))f n (s, x, ω)} ν(dsdx) {H(Y (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω)} ν(dsdx) P a.s.. (83) Moreover, we shall prove that for some subsequence lim p t n = t {H(Y s n (ω) + f n (s, x, ω)) H(Ys n (ω))} q(dsdx)(ω) {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} q(dsdx)(ω). (84) It then follows that there is a subsequence, so that the limit in (84) holds also P -a.s.. From (79)- (84) it then follows that (41) holds P -a.s.. Proof of (82): Let us first prove the following statement: Ys n (ω) converges for some subsequence, that for simplicity we denote still with Ys n (ω), ds P - a.s. to Y s (ω) on [, T ] Ω (ds denoting the Lesbegues measure on IR + ). This is a consequence of Doob s inequality applied to the sub -martingales Zt n (ω) Z t (ω). (That Zt n (ω) Z t (ω) is a sub -martingale follows from Proposition 3.11 and Lemma 8.11, Chapt. II, Part I, [21].) In fact, and if i) holds, then sup Yt n (ω) Y t (ω) = sup Zt n (ω) Z t (ω) t [,T ] t [,T ] P (sup t [,T ] Zt n (ω) Z t (ω) ɛ) 1 ɛ sup t [,T ] E[ Zt n (ω) Z t (ω) ] 1 T ɛ E[ f n(s, x, ω) f(s, x, ω) ], (85) while if ii) or iii) holds, then E[sup s [,T ] Zs n (ω) Z s (ω) 2 ] sup s [,T ] E[ Zs n (ω) Z s (ω) 2 ] T E[ f n(s, x) f(s, x) 2 ν(dsdx) when n, (86) The statement is then a consequence of Lesbegue s dominated convergence theorem. (82) follows then from inequality (44), which implies the existence of a constant R(ω) depending on ω, such that t H (Y n )g(, ω)dh(s, ω) t H (Y s )g(, ω)dh(s, ω) R(ω) t Y n (ω) Y s (ω) sup s [,T ] g(s, ω) d h(s, ω) (87) 25

Proof of (83): Suppose condition i) is satisfied. Let C := sup z F H (z). Then using (42), we obtain: t {H(Y s n (ω) + f n (s, x, ω)) H(Ys n (ω)) H (Ys n (ω))f n (s, x, ω)} {H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω)} ν(dsdx) Since 4C t f n(s, x, ω) f(s, x, ω) (88) t lim E[ f n (s, x, ω) f(s, x, ω) ] = n by hypothesis, it follows that there is some subsequence, that we denote for simplicity again with {f n }, such that t f n (s, x, ω) f(s, x, ω) ν(dsdx) P a.s. when n (89) Suppose condition ii), or iii) is satisfied (i.e. p = 2). By hypothesis {f n } n IN converges ν P - a.s. to f on [, T ] Ω. Moreover Ys n (ω) converges for some subsequence ν P - a.s. to Y s (ω). This is a consequence of the hypothesis that ν(dsdx) = α(ds)β(dx) with α(ds) ds and Doob s inequality (86). It follows that there is a subsequence s.th. the integrand in (83) converges ν P - a.s. to zero when n. Moreover using (46) and Remar 4.5 it follows that there is a constant R(ω) depending on ω Ω, s.th. for some subsequence t {H(Y s n (ω) + f n (s, x, ω)) H(Ys n (ω)) H (Ys n (ω))f n (s, x, ω)} {H(Y s (ω) + f(s, x, ω)) H(Y s (ω)) H (Y s (ω))f(s, x, ω)} ν(dsdx) 4R(ω) t ( f n(s, x, ω) 2 + f(s, x, ω) 2 )ν(dsdx) 16R(ω) t f n(s, x, ω) f(s, x, ω) 2 ν(dsdx) + 16R(ω) t f(s, x, ω) 2 ν(dsdx) 16R(ω) t f(s, x, ω) 2 ν(dsdx) P a.s. when n (9) where convergence in (9) holds for some subsequence, by a similar argument as in (89). (83) is then a consequence of the Lebegues dominated convergence theorem. Proof of (84): Let p = 1 if condition i) is satisfied, or p = 2 if condition ii) or iii) is satisfied. From the Theorems 3.12-3.16 and inequality (45) it follows 26

E[ t t {H(Y n (ω) + f n (s, x, ω)) H(Y n (ω))} q(dsdx)(ω) {H(Y (ω) + f(s, x, ω)) H(Y s (ω))} q(dsdx)(ω) p ] 2 p t K p {H(Y s n (ω) + f n (s, x, ω)) H(Ys n (ω))} {H(Y s (ω) + f(s, x, ω)) H(Y s (ω))} p ν(dsdx) 4 p C p t K p E[ f n(s, x, ω) p + f(s, x, ω) p ν(dsdx) 8 p C p t K p E[ f n(s, x) f(s, x) p ]ν(dsdx) + 8 p C p Kp 2 t E[ f(s, x) p ]ν(dsdx) 8 p C p t K p E[ f(s, x) p ]ν(dsdx) when n (91) where for p = 1, K p = 1, while in case p = 2, K p is the constant in the Definition 3.17 of M -type 2, if condition ii) holds, resp. K p is the twice the constant in the Definition 3.19 of type 2 Banach space, if condition iii) holds. That (84) holds for some subsequence follows from the Lebegues dominated convergence theorem and the fact that the integrand in the l.h.s. of (91) converges for some subsequence ν P - a.s. to zero, on [, T ] Ω. 5 The Ito formula for time dependent Banach valued functions We shall consider in this article also the case where H depends (continuously) on the time variable, too, i.e. We shall prove the Ito formula (3). H : IR + F G (92) (s, y) G Theorem 5.1 Suppose that hypothesis B (given in Theorem 4.1) is satisfied. Suppose that the Fréchet derivatives s H(s, y) and y H(s, y) exists and are uniformly bounded on [, t] F, and all the second Fréchet derivatives s s H(s, y), s, y H(s, y), y s H(s, y) and y y H(s, y) exists and are uniformly bounded on [, t] B(, R), for all R. Then 1) H(s, Y s (ω) + f(s, x, ω)) H(s, Y s (ω)) y H(s, Y s (ω))f(s, x, ω) is P - a.s. Bochner integrable w.r.t ν on (, T ], B(E \ {}). 2) H(s, Y s (ω)+f(s, x, ω)) H(s, Y s (ω)) Mν T,1 (E/G), resp. Mν T,2 (E/G), if i), resp. ii) or iii), is satisfied. T 3) yh(s, Y s )g(, ω) d h(s, ω) < P -a.s. 27