Generalized Gaussian Bridges


TOMMI SOTTINEN, ADIL YAZIGI: Generalized Gaussian Bridges. Proceedings of the University of Vaasa, Working Papers 4, Mathematics 2. Vaasa, 2012.

Vaasan yliopisto / University of Vaasa, PL 700 / P.O. Box 700 (Wolffintie 34), FI-65101 Vaasa, Finland. ISBN, ISSN: Proceedings of the University of Vaasa. Working Papers. (c) University of Vaasa.

III
Publisher: Vaasan yliopisto (University of Vaasa). Date of publication: June 2012.
Author(s): Tommi Sottinen, Adil Yazigi. Type of publication: Working Papers.
Name and number of series: Proceedings of the University of Vaasa. Working Papers.
Contact information: University of Vaasa, Department of Mathematics and Statistics, P.O. Box 700, FI-65101 Vaasa, Finland. ISBN, ISSN.
Number of pages: 27. Language: English.
Title of publication: Generalized Gaussian Bridges.
Abstract: A generalized bridge is the law of a stochastic process that is conditioned on linear functionals of its path. We consider two types of representations of such bridges: orthogonal and canonical. In the canonical representation the filtrations and the linear spaces generated by the bridge process and the original process coincide. In the orthogonal representation the bridge is constructed from the entire path of the underlying process. The orthogonal representation is given for any continuous Gaussian process, but the canonical representation is given only for so-called prediction-invertible Gaussian processes. Finally, we apply the canonical bridge representation to insider trading by interpreting the bridge from an initial enlargement of filtration point of view.
Keywords: Canonical representation, enlargement of filtration, fractional Brownian motion, Gaussian process, Gaussian bridge, Hitsuda representation, insider trading, orthogonal representation, prediction-invertible process, Volterra process.


V
Contents
1. INTRODUCTION
2. ABSTRACT WIENER INTEGRALS AND RELATED HILBERT SPACES
3. ORTHOGONAL GENERALIZED BRIDGE REPRESENTATION
4. CANONICAL GENERALIZED BRIDGE REPRESENTATION
5. APPLICATION TO INSIDER TRADING
REFERENCES


GENERALIZED GAUSSIAN BRIDGES

TOMMI SOTTINEN AND ADIL YAZIGI

Abstract. A generalized bridge is the law of a stochastic process that is conditioned on linear functionals of its path. We consider two types of representations of such bridges: orthogonal and canonical. In the canonical representation the filtrations and the linear spaces generated by the bridge process and the original process coincide. In the orthogonal representation the bridge is constructed from the entire path of the underlying process. The orthogonal representation is given for any continuous Gaussian process, but the canonical representation is given only for so-called prediction-invertible Gaussian processes. Finally, we apply the canonical bridge representation to insider trading by interpreting the bridge from an initial enlargement of filtration point of view.

Mathematics Subject Classification (2010): 60G15, 60G22, 91G80.

Keywords: Canonical representation, enlargement of filtration, fractional Brownian motion, Gaussian process, Gaussian bridge, Hitsuda representation, insider trading, orthogonal representation, prediction-invertible process, Volterra process.

1. Introduction

Let $X = (X_t)_{t\in[0,T]}$ be a continuous Gaussian process with positive definite covariance function $R$, mean function $m$ of bounded variation, and $X_0 = m(0)$. We consider the conditioning, or bridging, of $X$ on $N$ linear functionals $\mathbf{G}_T = [G^i_T]_{i=1}^N$ of its paths:

$$(1.1)\qquad \mathbf{G}_T(X) = \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \left[\int_0^T g_i(t)\,\mathrm{d}X_t\right]_{i=1}^N.$$

We assume, without any loss of generality, that the functions $g_i$ are linearly independent. Indeed, if this were not the case, the linearly dependent, or redundant, components of $\mathbf{g}$ could simply be removed from the conditioning (1.2) below without changing it. The integrals in the conditioning (1.1) are the so-called abstract Wiener integrals, defined properly in Definition 2.5 later.
Informally, the generalized Gaussian bridge $X^{\mathbf{g};\mathbf{y}}$ is (the law of) the Gaussian process $X$ conditioned on the set

$$(1.2)\qquad \left\{\int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{y}\right\} = \bigcap_{i=1}^N \left\{\int_0^T g_i(t)\,\mathrm{d}X_t = y_i\right\}.$$

The rigorous definition is given in Definition 1.3 later.

Date: August 2, 2012.
A. Yazigi was funded by the Finnish Doctoral Programme in Stochastics and Statistics.
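Numerically, the conditioning functionals (1.1) are just limits of Riemann-Stieltjes sums. The following sketch (illustrative only, not from the paper) assumes $X$ is a standard Brownian motion on $[0,1]$ and uses the two functions $g_1 = 1$ and $g_2(t) = (T-t)/T$ that reappear in the bridge-and-average example of Section 3; for a Gaussian martingale the covariance of the functionals is $\int_0^T g_i g_j\,\mathrm{d}t$, which the simulation reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 1_000, 20_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

# Brownian increments for many independent paths (X_0 = 0, zero mean).
dX = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))

# Conditioning functions: g1 pins the endpoint X_T; g2 pins the average,
# since (1/T) int_0^T X_t dt = int_0^T (T - t)/T dX_t by integration by parts.
g1 = np.ones(n)
g2 = (T - t[:-1]) / T

G1 = dX @ g1          # Riemann-Stieltjes approximation of int_0^T g1 dX
G2 = dX @ g2          # Riemann-Stieltjes approximation of int_0^T g2 dX

# For Brownian motion, Cov[G_i, G_j] = int_0^T g_i(t) g_j(t) dt.
cov = np.cov(G1, G2)
theory = np.array([[np.sum(g1 * g1), np.sum(g1 * g2)],
                   [np.sum(g1 * g2), np.sum(g2 * g2)]]) * dt
print(cov)     # empirical covariance, close to [[1, 1/2], [1/2, 1/3]]
print(theory)
```

The near-equality of the two printed matrices is exactly the statement that the abstract Wiener integral is an isometry (Definition 2.5 below) in this simplest Brownian case.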

For the sake of convenience, we will work on the canonical filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$, where $\Omega = C([0,T])$, $\mathcal{F}$ is the Borel $\sigma$-algebra on $C([0,T])$ with respect to the supremum norm, and $\mathbb{P}$ is the Gaussian measure corresponding to the Gaussian coordinate process $X_t(\omega) = \omega(t)$: $\mathbb{P} = \mathbb{P}[X \in \cdot\,]$. The filtration $\mathbb{F} = (\mathcal{F}_t)_{t\in[0,T]}$ is the intrinsic filtration of the coordinate process $X$, augmented with the null-sets and made right-continuous.

1.3. Definition. The generalized bridge measure $\mathbb{P}^{\mathbf{g};\mathbf{y}}$ is the regular conditional law
$$\mathbb{P}^{\mathbf{g};\mathbf{y}} = \mathbb{P}^{\mathbf{g};\mathbf{y}}[X \in \cdot\,] = \mathbb{P}\left[X \in \cdot\ \Big|\ \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{y}\right].$$
A representation of the generalized Gaussian bridge is any process $X^{\mathbf{g};\mathbf{y}}$ satisfying
$$\mathbb{P}\big[X^{\mathbf{g};\mathbf{y}} \in \cdot\,\big] = \mathbb{P}^{\mathbf{g};\mathbf{y}}[X \in \cdot\,] = \mathbb{P}\left[X \in \cdot\ \Big|\ \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{y}\right].$$

Note that the conditioning on the $\mathbb{P}$-null-set (1.2) in Definition 1.3 above is not a problem, since the canonical space of continuous processes is small enough to admit regular conditional laws. Also note that, as a measure, the generalized Gaussian bridge $\mathbb{P}^{\mathbf{g};\mathbf{y}}$ is unique, but it has several different representations $X^{\mathbf{g};\mathbf{y}}$. Indeed, any representation of the bridge can be combined with any $\mathbb{P}$-measure-preserving transformation to produce a new representation.

In this paper we provide two different representations of $X^{\mathbf{g};\mathbf{y}}$. The first representation, given by Theorem 3.1, is called the orthogonal representation. This representation is a simple consequence of orthogonal decompositions of Hilbert spaces associated with Gaussian processes, and it can be constructed for any continuous Gaussian process and any conditioning functionals. The second representation, given by Theorem 4.24, is called the canonical representation. This representation is more interesting but also requires more assumptions. The canonical representation is dynamically invertible in the sense that the linear spaces $L_t(X)$ and $L_t(X^{\mathbf{g};\mathbf{y}})$ (see Definition 2.1 later) generated by the process $X$ and its bridge representation $X^{\mathbf{g};\mathbf{y}}$ coincide for all times $t\in[0,T]$.
Moreover, the bridge $X^{\mathbf{g};\mathbf{y}}$ can be interpreted as the original process $X$ with an added information drift that bridges the process at the final time $T$. This dynamic-drift interpretation should turn out to be useful in applications. We give one such application in connection with insider trading in Section 5. This application is, we must admit, a bit classical.

Of earlier work related to stochastic bridges, we would like to mention first Alili [1], Baudoin [2] and Gasbarra et al. [7]. In [1] generalized Brownian bridges were considered. It is our opinion that our article extends [1] considerably, although we do not consider the non-canonical representations of [1]. The article [2] is, in a sense, more general than this article, since we condition on fixed values $\mathbf{y}$, whereas in [2] the conditioning is on a probability law. However, in [2] only the Brownian bridge was considered. In that sense our approach is more general. In [7] bridges were studied in

the same general Gaussian setting as in this article. In this article, however, we generalize the results of [7] to generalized bridges. Secondly, we would like to mention some related works on Markovian and Lévy bridges: [5, 6, 8, 10].

This paper is organized as follows. In Section 2 we recall some Hilbert spaces related to Gaussian processes. In Section 3 we give the orthogonal representation for the generalized bridge in the general Gaussian setting. Section 4 deals with the canonical bridge representation. First we give the representation for Gaussian martingales. Then we introduce the so-called prediction-invertible processes and develop the canonical bridge representation for them. Then we consider Gaussian Volterra processes, such as the fractional Brownian motion, as examples of prediction-invertible processes. Finally, in Section 5 we apply the bridges to insider trading. Indeed, the bridge process can be understood from the initial enlargement of filtration point of view.

2. Abstract Wiener Integrals and Related Hilbert Spaces

In this section $X = (X_t)_{t\in[0,T]}$ is a continuous (and hence separable) Gaussian process with positive definite covariance $R$, mean zero, and $X_0 = 0$. Definitions 2.1 and 2.2 below give us two central separable Hilbert spaces connected to separable Gaussian processes.

2.1. Definition. Let $t\in[0,T]$. The linear space $L_t(X)$ is the Gaussian closed linear subspace of $L^2(\Omega,\mathcal{F},\mathbb{P})$ generated by the random variables $X_s$, $s\le t$. The linear space is a Gaussian Hilbert space with the inner product $\mathrm{Cov}[\cdot,\cdot]$.

Note that since $X$ is continuous, $R$ is also continuous, and hence $L_t(X)$ is separable, and any orthogonal basis $(\xi_n)_{n=1}^\infty$ of $L_t(X)$ is a collection of independent standard normal random variables. (Of course, since we chose to work on the canonical space, $L^2(\Omega,\mathcal{F},\mathbb{P})$ is itself a separable Hilbert space.)

2.2. Definition. Let $t\in[0,T]$.
The abstract Wiener integrand space $\Lambda_t(X)$ is the completion of the linear span of the indicator functions $\mathbf{1}_s := \mathbf{1}_{[0,s)}$, $s\le t$, under the inner product $\langle\cdot,\cdot\rangle$ extended bilinearly from the relation
$$\langle \mathbf{1}_s, \mathbf{1}_u \rangle = R(s,u).$$
The elements of the abstract Wiener integrand space are equivalence classes of Cauchy sequences $(f_n)_{n=1}^\infty$ of piecewise constant functions. The equivalence of $(f_n)_{n=1}^\infty$ and $(g_n)_{n=1}^\infty$ means that
$$\|f_n - g_n\| \to 0, \quad \text{as } n\to\infty,$$
where $\|\cdot\| = \sqrt{\langle\cdot,\cdot\rangle}$.

2.3. Remark. (i) The elements of $\Lambda_t(X)$ cannot in general be identified with functions, as pointed out e.g. by Pipiras and Taqqu [14] for the case of fractional Brownian motion with Hurst index $H > 1/2$.

However, if $R$ is of bounded variation, one can identify the function space $|\Lambda_t|(X) \subset \Lambda_t(X)$:
$$|\Lambda_t|(X) = \left\{ f \in \mathbb{R}^{[0,t]}\ ;\ \int_0^t\!\!\int_0^t |f(s)f(u)|\,|R|(\mathrm{d}s,\mathrm{d}u) < \infty \right\}.$$
(ii) While one may want to interpret that $\Lambda_s(X) \subset \Lambda_t(X)$ for $s\le t$, it may happen that $f \in \Lambda_t(X)$ but $f\mathbf{1}_s \notin \Lambda_s(X)$. Indeed, it may be that $\|f\mathbf{1}_s\| > \|f\|$. See Bender and Elliott [3] for an example in the case of fractional Brownian motion.

The space $\Lambda_t(X)$ is isometric to $L_t(X)$. Indeed, the relation
$$(2.4)\qquad I^X_t[\mathbf{1}_s] := X_s, \quad s\le t,$$
can be extended linearly into an isometry from $\Lambda_t(X)$ onto $L_t(X)$.

2.5. Definition. The isometry $I^X_t : \Lambda_t(X) \to L_t(X)$ extended from the relation (2.4) is the abstract Wiener integral. We denote
$$\int_0^t f(s)\,\mathrm{d}X_s := I^X_t[f].$$

3. Orthogonal Generalized Bridge Representation

Denote by $\langle\langle\mathbf{g}\rangle\rangle$ the matrix
$$\langle\langle\mathbf{g}\rangle\rangle_{ij} := \langle g_i, g_j \rangle := \mathrm{Cov}\left[\int_0^T g_i(t)\,\mathrm{d}X_t,\ \int_0^T g_j(t)\,\mathrm{d}X_t\right].$$
Note that $\langle\langle\mathbf{g}\rangle\rangle$ does not depend on the mean of $X$ nor on the conditioned values $\mathbf{y}$: $\langle\langle\mathbf{g}\rangle\rangle$ depends only on the conditioning functions $\mathbf{g} = [g_i]_{i=1}^N$ and the covariance $R$. Also, since the $g_i$'s are linearly independent and $R$ is positive definite, the matrix $\langle\langle\mathbf{g}\rangle\rangle$ is invertible.

3.1. Theorem. The generalized Gaussian bridge $X^{\mathbf{g};\mathbf{y}}$ can be represented as
$$(3.2)\qquad X^{\mathbf{g};\mathbf{y}}_t = X_t - \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \left( \int_0^T \mathbf{g}(u)\,\mathrm{d}X_u - \mathbf{y} \right).$$
Moreover, any generalized Gaussian bridge $X^{\mathbf{g};\mathbf{y}}$ is a Gaussian process with
$$\mathbb{E}\big[X^{\mathbf{g};\mathbf{y}}_t\big] = m(t) - \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \left( \int_0^T \mathbf{g}(u)\,\mathrm{d}m(u) - \mathbf{y} \right),$$
$$\mathrm{Cov}\big[X^{\mathbf{g};\mathbf{y}}_t, X^{\mathbf{g};\mathbf{y}}_s\big] = \langle \mathbf{1}_t, \mathbf{1}_s \rangle - \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \langle \mathbf{1}_s, \mathbf{g} \rangle.$$

Proof. It is well-known (see, e.g., [15, p. 34]) from the theory of multivariate Gaussian distributions that conditional distributions are Gaussian

with
$$\mathbb{E}\left[X_t\ \Big|\ \int_0^T \mathbf{g}(u)\,\mathrm{d}X_u = \mathbf{y}\right] = m(t) + \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \left( \mathbf{y} - \int_0^T \mathbf{g}(u)\,\mathrm{d}m(u) \right),$$
$$\mathrm{Cov}\left[X_t, X_s\ \Big|\ \int_0^T \mathbf{g}(u)\,\mathrm{d}X_u = \mathbf{y}\right] = \langle \mathbf{1}_t, \mathbf{1}_s \rangle - \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \langle \mathbf{1}_s, \mathbf{g} \rangle.$$
The claim follows from this.

3.3. Corollary. Let $X$ be a centered Gaussian process with $X_0 = 0$ and let $m$ be a function of bounded variation. Let $X^{\mathbf{g}} := X^{\mathbf{g};\mathbf{0}}$ be a bridge where the conditioning functionals are conditioned to zero. Then
$$(X + m)^{\mathbf{g};\mathbf{y}}_t = X^{\mathbf{g}}_t + m(t) - \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \int_0^T \mathbf{g}(u)\,\mathrm{d}m(u) + \langle \mathbf{1}_t, \mathbf{g} \rangle^\top \langle\langle\mathbf{g}\rangle\rangle^{-1} \mathbf{y}.$$

3.4. Remark. Corollary 3.3 tells us how to construct, by adding a deterministic drift, a general bridge from a bridge that is constructed from a centered process with conditioning $\mathbf{y} = \mathbf{0}$. So, in what follows, we shall almost always assume that the process $X$ is centered, i.e. $m(t) = 0$, and that all conditionings are to $\mathbf{y} = \mathbf{0}$.

3.5. Example. Let $X$ be a zero mean Gaussian process with covariance $R$. Consider the conditioning on the final value and the average value:
$$X_T = 0, \qquad \frac{1}{T}\int_0^T X_t\,\mathrm{d}t = 0.$$
This is a generalized Gaussian bridge. Indeed,
$$X_T = \int_0^T 1\,\mathrm{d}X_t =: \int_0^T g_1(t)\,\mathrm{d}X_t,$$
$$\frac{1}{T}\int_0^T X_t\,\mathrm{d}t = \int_0^T \frac{T-t}{T}\,\mathrm{d}X_t =: \int_0^T g_2(t)\,\mathrm{d}X_t.$$

Now,
$$\langle \mathbf{1}_t, g_1 \rangle = \mathbb{E}[X_t X_T] = R(t,T),$$
$$\langle \mathbf{1}_t, g_2 \rangle = \mathbb{E}\left[X_t\,\frac{1}{T}\int_0^T X_s\,\mathrm{d}s\right] = \frac{1}{T}\int_0^T R(t,s)\,\mathrm{d}s,$$
and
$$\langle g_1, g_1 \rangle = \mathbb{E}[X_T X_T] = R(T,T),$$
$$\langle g_1, g_2 \rangle = \mathbb{E}\left[X_T\,\frac{1}{T}\int_0^T X_s\,\mathrm{d}s\right] = \frac{1}{T}\int_0^T R(T,s)\,\mathrm{d}s,$$
$$\langle g_2, g_2 \rangle = \mathbb{E}\left[\frac{1}{T}\int_0^T X_s\,\mathrm{d}s\ \frac{1}{T}\int_0^T X_u\,\mathrm{d}u\right] = \frac{1}{T^2}\int_0^T\!\!\int_0^T R(s,u)\,\mathrm{d}s\,\mathrm{d}u.$$
Thus
$$\langle\langle\mathbf{g}\rangle\rangle^{-1} = \frac{1}{|\langle\langle\mathbf{g}\rangle\rangle|} \begin{bmatrix} \langle g_2,g_2\rangle & -\langle g_1,g_2\rangle \\ -\langle g_1,g_2\rangle & \langle g_1,g_1\rangle \end{bmatrix},
\qquad
|\langle\langle\mathbf{g}\rangle\rangle| = \frac{1}{T^2}\int_0^T\!\!\int_0^T \big( R(T,T)R(s,u) - R(T,s)R(T,u) \big)\,\mathrm{d}s\,\mathrm{d}u.$$
Thus, by Theorem 3.1,
$$X^{\mathbf{g}}_t = X_t - \frac{\int_0^T\!\!\int_0^T \big( R(t,T)R(s,u) - R(t,s)R(T,u) \big)\,\mathrm{d}s\,\mathrm{d}u}{\int_0^T\!\!\int_0^T \big( R(T,T)R(s,u) - R(T,s)R(T,u) \big)\,\mathrm{d}s\,\mathrm{d}u}\, X_T
- \frac{T\int_0^T \big( R(T,T)R(t,s) - R(t,T)R(T,s) \big)\,\mathrm{d}s}{\int_0^T\!\!\int_0^T \big( R(T,T)R(s,u) - R(T,s)R(T,u) \big)\,\mathrm{d}s\,\mathrm{d}u} \int_0^T \frac{T-t}{T}\,\mathrm{d}X_t.$$

3.6. Remark. (i) The conditioning on $N$ conditions can also be done iteratively as follows. Let $X^n := X^{g_1,\ldots,g_n;\,y_1,\ldots,y_n}$ and let $X^0 := X$ be the original process. Then the orthogonal generalized bridge representation $X^N$ can be constructed from the rule
$$X^n_t = X^{n-1}_t - \frac{\langle \mathbf{1}_t, g_n \rangle_{n-1}}{\langle g_n, g_n \rangle_{n-1}} \left[ \int_0^T g_n(u)\,\mathrm{d}X^{n-1}_u - y_n \right],$$
where $\langle\cdot,\cdot\rangle_{n-1}$ is the inner product in $L_T(X^{n-1})$.
(ii) If the conditioning variables $g_j$ are indicator functions $\mathbf{1}_{t_j}$, then the corresponding generalized bridge is a multibridge. That is, it is pinned down to values $y_j$ at points $t_j$. For the multibridge $X^N = X^{\mathbf{1}_{t_1},\ldots,\mathbf{1}_{t_N};\,y_1,\ldots,y_N}$ the orthogonal bridge decomposition can be constructed from the iteration
$$X^0_t = X_t, \qquad X^n_t = X^{n-1}_t - \frac{R_{n-1}(t,t_n)}{R_{n-1}(t_n,t_n)} \big[ X^{n-1}_{t_n} - y_n \big],$$

where
$$R_0(t,s) = R(t,s), \qquad R_n(t,s) = R_{n-1}(t,s) - \frac{R_{n-1}(t,t_n)\,R_{n-1}(t_n,s)}{R_{n-1}(t_n,t_n)}.$$

4. Canonical Generalized Bridge Representation

The problem with the orthogonal bridge representation (3.2) of $X^{\mathbf{g};\mathbf{y}}$ is that, in order to construct it at any point $t\in[0,T)$, one needs the whole path of the underlying process $X$ up to time $T$. In this section we construct a bridge representation that is canonical in the following sense:

4.1. Definition. The bridge $X^{\mathbf{g};\mathbf{y}}$ is of canonical representation if, for all $t\in[0,T]$, $X^{\mathbf{g};\mathbf{y}}_t \in L_t(X)$ and $X_t \in L_t(X^{\mathbf{g};\mathbf{y}})$.

4.2. Remark. Since the conditional laws of Gaussian processes are Gaussian and Gaussian spaces are linear, the assumptions $X^{\mathbf{g};\mathbf{y}}_t \in L_t(X)$ and $X_t \in L_t(X^{\mathbf{g};\mathbf{y}})$ of Definition 4.1 are the same as assuming that $X^{\mathbf{g};\mathbf{y}}_t$ is $\mathcal{F}^X_t$-measurable and $X_t$ is $\mathcal{F}^{X^{\mathbf{g};\mathbf{y}}}_t$-measurable (and, consequently, $\mathcal{F}^X_t = \mathcal{F}^{X^{\mathbf{g};\mathbf{y}}}_t$). This fact is very special to Gaussian processes. Indeed, in general, conditioned processes such as generalized bridges are not linear transformations of the underlying process.

We shall require that the restricted measures $\mathbb{P}^{\mathbf{g};\mathbf{y}}_t := \mathbb{P}^{\mathbf{g};\mathbf{y}}|_{\mathcal{F}_t}$ and $\mathbb{P}_t := \mathbb{P}|_{\mathcal{F}_t}$ are equivalent for all $t<T$ (they are obviously singular for $t=T$). To this end we assume that the matrix
$$(4.3)\qquad \langle\langle\mathbf{g}\rangle\rangle_{ij}(t) := \mathbb{E}\Big[ \big(G^i_T(X) - G^i_t(X)\big)\big(G^j_T(X) - G^j_t(X)\big) \Big] = \mathbb{E}\left[ \int_t^T g_i(s)\,\mathrm{d}X_s \int_t^T g_j(s)\,\mathrm{d}X_s \right]$$
is invertible for all $t<T$.

4.4. Remark. On notation: in the previous section we considered the matrix $\langle\langle\mathbf{g}\rangle\rangle$, but from now on we consider the function $\langle\langle\mathbf{g}\rangle\rangle(\cdot)$. Their connection is of course $\langle\langle\mathbf{g}\rangle\rangle = \langle\langle\mathbf{g}\rangle\rangle(0)$. We hope that this overloading of notation does not cause confusion to the reader.

Gaussian Martingales. We first construct the canonical representation when the underlying process is a continuous Gaussian martingale $M$ with strictly increasing bracket $\langle M\rangle$ and $M_0 = 0$. Note that the bracket is strictly increasing if and only if the covariance $R$ is positive definite. Indeed, for Gaussian martingales we have $R(t,s) = \mathrm{Var}(M_{t\wedge s}) = \langle M\rangle_{t\wedge s}$.

Define a Volterra kernel
$$(4.5)\qquad \ell_{\mathbf{g}}(t,s) := \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{g}(s).$$
Note that the kernel $\ell_{\mathbf{g}}$ depends on the process $M$ through its covariance, via $\langle\langle\cdot\rangle\rangle$, and in the Gaussian martingale case we have
$$\langle\langle\mathbf{g}\rangle\rangle_{ij}(t) = \int_t^T g_i(s)\,g_j(s)\,\mathrm{d}\langle M\rangle_s.$$
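The remaining covariance matrix $\langle\langle\mathbf{g}\rangle\rangle(t)$ is easy to make concrete. The following sketch (illustrative only, not from the paper) assumes $M$ is a standard Brownian motion ($\mathrm{d}\langle M\rangle_s = \mathrm{d}s$) and uses the two functionals of Example 3.5, $g_1 = 1$ and $g_2(s) = (T-s)/T$; direct integration gives the closed forms $T-t$, $(T-t)^2/2T$ and $(T-t)^3/3T^2$ for its entries, and the matrix stays invertible for $t<T$ as required by (4.3).

```python
import numpy as np

T = 1.0

def remaining_cov(t, n=200_000):
    """Left Riemann sum for <<g>>(t) = int_t^T g(s) g(s)^T ds (Brownian case)."""
    s = np.linspace(t, T, n + 1)[:-1]
    ds = (T - t) / n
    g = np.vstack([np.ones_like(s), (T - s) / T])   # rows: g1, g2
    return (g * ds) @ g.T

t = 0.4
num = remaining_cov(t)
exact = np.array([[T - t,                  (T - t)**2 / (2 * T)],
                  [(T - t)**2 / (2 * T),   (T - t)**3 / (3 * T**2)]])
print(num)
print(exact)
print(np.linalg.det(num) > 0.0)   # invertibility for t < T, as assumed in (4.3)
```

The determinant shrinks like $(T-t)^4$ as $t \uparrow T$, which is the quantitative reason the bridge measure becomes singular at the terminal time.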

The following Lemma 4.6 is the key observation in finding the canonical generalized bridge representation. Actually, it is a multivariate version of Proposition 6 of [7].

4.6. Lemma. Let $\ell_{\mathbf{g}}$ be given by (4.5) and let $M$ be a continuous Gaussian martingale with strictly increasing bracket $\langle M\rangle$ and $M_0 = 0$. Then the Radon-Nikodym derivative $\mathrm{d}\mathbb{P}^{\mathbf{g}}_t/\mathrm{d}\mathbb{P}_t$ can be expressed in the form
$$\frac{\mathrm{d}\mathbb{P}^{\mathbf{g}}_t}{\mathrm{d}\mathbb{P}_t} = \exp\left\{ -\int_0^t\!\left(\int_0^s \ell_{\mathbf{g}}(s,u)\,\mathrm{d}M_u\right)\mathrm{d}M_s - \frac{1}{2}\int_0^t\!\left(\int_0^s \ell_{\mathbf{g}}(s,u)\,\mathrm{d}M_u\right)^{\!2}\mathrm{d}\langle M\rangle_s \right\}$$
for all $t\in[0,T)$.

Proof. Let
$$p(\mathbf{y};\boldsymbol{\mu},\boldsymbol{\Sigma}) := \frac{1}{(2\pi)^{N/2}|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{ -\frac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^\top\boldsymbol{\Sigma}^{-1}(\mathbf{y}-\boldsymbol{\mu}) \right\}$$
be the Gaussian density on $\mathbb{R}^N$ and let
$$\alpha^{\mathbf{g}}_t(\mathrm{d}\mathbf{y}) := \mathbb{P}\big[\mathbf{G}_T(M)\in\mathrm{d}\mathbf{y}\ \big|\ \mathcal{F}^M_t\big]$$
be the conditional law of the conditioning functionals $\mathbf{G}_T(M) = \int_0^T \mathbf{g}(s)\,\mathrm{d}M_s$ given the information $\mathcal{F}^M_t$.

First, by the Bayes formula, we have
$$\frac{\mathrm{d}\mathbb{P}^{\mathbf{g}}_t}{\mathrm{d}\mathbb{P}_t} = \frac{\mathrm{d}\alpha^{\mathbf{g}}_t}{\mathrm{d}\alpha^{\mathbf{g}}_0}(\mathbf{0}).$$
Second, by the martingale property, we have
$$\frac{\mathrm{d}\alpha^{\mathbf{g}}_t}{\mathrm{d}\alpha^{\mathbf{g}}_0}(\mathbf{0}) = \frac{p\big(\mathbf{0};\,\mathbf{G}_t(M),\,\langle\langle\mathbf{g}\rangle\rangle(t)\big)}{p\big(\mathbf{0};\,\mathbf{G}_0(M),\,\langle\langle\mathbf{g}\rangle\rangle(0)\big)},$$
where we have denoted $\mathbf{G}_t(M) = \int_0^t \mathbf{g}(s)\,\mathrm{d}M_s$.
Third, denote
$$\frac{p\big(\mathbf{0};\,\mathbf{G}_t(M),\,\langle\langle\mathbf{g}\rangle\rangle(t)\big)}{p\big(\mathbf{0};\,\mathbf{G}_0(M),\,\langle\langle\mathbf{g}\rangle\rangle(0)\big)} =: \left(\frac{|\langle\langle\mathbf{g}\rangle\rangle(0)|}{|\langle\langle\mathbf{g}\rangle\rangle(t)|}\right)^{1/2} \exp\big\{ F(t,M_t) - F(0,M_0) \big\},$$
with
$$F(t,M_t) = -\frac{1}{2}\left(\int_0^t \mathbf{g}(s)\,\mathrm{d}M_s\right)^{\!\top} \langle\langle\mathbf{g}\rangle\rangle^{-1}(t) \left(\int_0^t \mathbf{g}(s)\,\mathrm{d}M_s\right).$$
Then, straightforward differentiation yields
$$\int_0^t \frac{\partial F}{\partial s}(s,M_s)\,\mathrm{d}s = -\frac{1}{2}\int_0^t\!\left(\int_0^s \ell_{\mathbf{g}}(s,u)\,\mathrm{d}M_u\right)^{\!2}\mathrm{d}\langle M\rangle_s,$$
$$\int_0^t \frac{\partial F}{\partial x}(s,M_s)\,\mathrm{d}M_s = -\int_0^t\!\left(\int_0^s \ell_{\mathbf{g}}(s,u)\,\mathrm{d}M_u\right)\mathrm{d}M_s,$$
$$\frac{1}{2}\int_0^t \frac{\partial^2 F}{\partial x^2}(s,M_s)\,\mathrm{d}\langle M\rangle_s = \log\left(\frac{|\langle\langle\mathbf{g}\rangle\rangle(t)|}{|\langle\langle\mathbf{g}\rangle\rangle(0)|}\right)^{1/2},$$

and the form of the Radon-Nikodym derivative follows by applying the Itô formula.

4.7. Corollary. The canonical bridge representation $M^{\mathbf{g}}$ satisfies the stochastic differential equation
$$(4.8)\qquad \mathrm{d}M_t = \mathrm{d}M^{\mathbf{g}}_t + \left(\int_0^t \ell_{\mathbf{g}}(t,s)\,\mathrm{d}M^{\mathbf{g}}_s\right)\mathrm{d}\langle M\rangle_t,$$
where $\ell_{\mathbf{g}}$ is given by (4.5). Moreover, $\langle M\rangle = \langle M^{\mathbf{g}}\rangle$.

Proof. The claim follows by using Girsanov's theorem.

4.9. Remark. (i) Note that
$$\int_0^t\!\!\int_0^s \ell_{\mathbf{g}}(s,u)^2\,\mathrm{d}\langle M\rangle_u\,\mathrm{d}\langle M\rangle_s < \infty \quad \text{for all } t<T.$$
In view of (4.8) this means that the processes $M$ and $M^{\mathbf{g}}$ are equivalent in law on $[0,t]$ for all $t<T$. Indeed, equation (4.8) can be viewed as the Hitsuda representation between two equivalent Gaussian processes, cf. Hida and Hitsuda [9]. Also note that
$$\int_0^T\!\!\int_0^s \ell_{\mathbf{g}}(s,u)^2\,\mathrm{d}\langle M\rangle_u\,\mathrm{d}\langle M\rangle_s = \infty,$$
meaning that the measures $\mathbb{P}$ and $\mathbb{P}^{\mathbf{g}}$ are singular on $[0,T]$.
(ii) In the case of $\mathbf{y}\ne\mathbf{0}$, formula (4.8) takes the form
$$(4.10)\qquad \mathrm{d}M_t = \mathrm{d}M^{\mathbf{g};\mathbf{y}}_t + \left(\int_0^t \ell_{\mathbf{g}}(t,s)\,\mathrm{d}M^{\mathbf{g};\mathbf{y}}_s - \mathbf{g}^\top(t)\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{y}\right)\mathrm{d}\langle M\rangle_t.$$

Next we solve the stochastic differential equation (4.8) of Corollary 4.7. In general, solving a Volterra-Stieltjes equation like (4.8) in closed form is difficult. Of course, the general theory of Volterra equations suggests that the solution will be of the form (4.13) of Theorem 4.11 below, where $\ell^*_{\mathbf{g}}$ is the resolvent kernel of $\ell_{\mathbf{g}}$ determined by the resolvent equation (4.14) given below. Also, the general theory suggests that the resolvent kernel can be calculated implicitly by using the Neumann series. In our case the kernel $\ell_{\mathbf{g}}$ is a quadratic form that factorizes in its arguments. This allows us to calculate the resolvent $\ell^*_{\mathbf{g}}$ explicitly as (4.12) below.

4.11. Theorem. Let $s\le t\in[0,T)$. Define the Volterra kernel
$$(4.12)\qquad \ell^*_{\mathbf{g}}(t,s) := \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(s)\,\mathbf{g}(s).$$
Then the bridge $M^{\mathbf{g}}$ has the canonical representation
$$(4.13)\qquad \mathrm{d}M^{\mathbf{g}}_t = \mathrm{d}M_t - \left(\int_0^t \ell^*_{\mathbf{g}}(t,s)\,\mathrm{d}M_s\right)\mathrm{d}\langle M\rangle_t,$$
i.e., (4.13) is the solution to (4.8).
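The simplest instance of the martingale bridge SDE can be checked by simulation (an illustrative sketch, not from the paper): for a standard Brownian motion $M$ with a single functional $g\equiv 1$ and $\mathbf{y}=0$, one has $\langle\langle g\rangle\rangle(t) = T-t$ and $\ell_g(t,s) = 1/(T-t)$, so the bridge equation reduces to the classical Brownian bridge SDE $\mathrm{d}M^g_t = \mathrm{d}M_t - M^g_t/(T-t)\,\mathrm{d}t$. An Euler scheme confirms that the paths are pinned to $0$ at $T$ and have the bridge variance $t(T-t)/T$.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, n_paths = 1.0, 2_000, 5_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Euler scheme for dM^g_t = dM_t - M^g_t / (T - t) dt  (Brownian bridge case).
Mg = np.zeros((n_paths, n + 1))
for i in range(n):
    dM = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    Mg[:, i + 1] = Mg[:, i] + dM - Mg[:, i] / (T - t[i]) * dt

print(np.abs(Mg[:, -1]).max())       # all paths end (numerically) at 0
print(Mg[:, n // 2].var())           # close to t(T - t)/T = 1/4 at t = 1/2
```

Note that the information drift $-M^g_t/(T-t)$ blows up as $t\uparrow T$; this is the pathwise counterpart of the singularity of $\mathbb{P}$ and $\mathbb{P}^{\mathbf{g}}$ on the full interval noted in Remark 4.9.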

Proof. Equation (4.13) is the solution to (4.8) if the kernel $\ell^*_{\mathbf{g}}$ satisfies the resolvent equation
$$(4.14)\qquad \ell_{\mathbf{g}}(t,s) - \ell^*_{\mathbf{g}}(t,s) = \int_s^t \ell_{\mathbf{g}}(t,u)\,\ell^*_{\mathbf{g}}(u,s)\,\mathrm{d}\langle M\rangle_u.$$
Indeed, suppose (4.13) is the solution to (4.8). This means that
$$\mathrm{d}M_t = \mathrm{d}M_t - \left(\int_0^t \ell^*_{\mathbf{g}}(t,s)\,\mathrm{d}M_s\right)\mathrm{d}\langle M\rangle_t + \left(\int_0^t \ell_{\mathbf{g}}(t,s)\left[\mathrm{d}M_s - \left(\int_0^s \ell^*_{\mathbf{g}}(s,u)\,\mathrm{d}M_u\right)\mathrm{d}\langle M\rangle_s\right]\right)\mathrm{d}\langle M\rangle_t,$$
and, by using the Fubini theorem to change the order of integration in the last term, the resolvent criterion (4.14) follows by identifying the integrands in the $\mathrm{d}M_s\,\mathrm{d}\langle M\rangle_t$-integrals above.

Finally, let us check that the resolvent equation (4.14) is satisfied with $\ell_{\mathbf{g}}$ and $\ell^*_{\mathbf{g}}$ defined by (4.5) and (4.12), respectively. Since $\mathrm{d}\langle\langle\mathbf{g}\rangle\rangle(u) = -\mathbf{g}(u)\mathbf{g}^\top(u)\,\mathrm{d}\langle M\rangle_u$, we have
$$\int_s^t \ell_{\mathbf{g}}(t,u)\,\ell^*_{\mathbf{g}}(u,s)\,\mathrm{d}\langle M\rangle_u
= \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\left[\int_s^t \mathbf{g}(u)\mathbf{g}^\top(u)\,\mathrm{d}\langle M\rangle_u\right]\langle\langle\mathbf{g}\rangle\rangle^{-1}(s)\,\mathbf{g}(s)$$
$$= \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\Big[\langle\langle\mathbf{g}\rangle\rangle(s) - \langle\langle\mathbf{g}\rangle\rangle(t)\Big]\langle\langle\mathbf{g}\rangle\rangle^{-1}(s)\,\mathbf{g}(s)
= \mathbf{g}^\top(t)\Big[\langle\langle\mathbf{g}\rangle\rangle^{-1}(t) - \langle\langle\mathbf{g}\rangle\rangle^{-1}(s)\Big]\mathbf{g}(s)
= \ell_{\mathbf{g}}(t,s) - \ell^*_{\mathbf{g}}(t,s).$$
So, the resolvent equation (4.14) holds.

Gaussian Prediction-Invertible Processes. Let us now consider a Gaussian process $X$ that is not a martingale. For a Gaussian process $X$, its prediction martingale is the process $\hat{X}$ defined as
$$\hat{X}_t = \mathbb{E}\big[ X_T\ \big|\ \mathcal{F}^X_t \big].$$

Since for Gaussian processes $\hat{X}_t \in L_t(X)$, we may write, at least formally, that
$$\hat{X}_t = \int_0^t p(t,s)\,\mathrm{d}X_s,$$
where the abstract kernel $p$ depends also on $T$ (since $\hat{X}$ depends on $T$). In Definition 4.15 below we assume that the kernel $p$ exists as a real, and not only formal, function. We also assume that the kernel $p$ is invertible.

4.15. Definition. A Gaussian process $X$ is prediction-invertible if there exists a kernel $p$ such that its prediction martingale $\hat{X}$ is continuous, can be represented as
$$\hat{X}_t = \int_0^t p(t,s)\,\mathrm{d}X_s,$$
and there exists an inverse kernel $p^{-1}$ such that, for all $t\in[0,T]$, $p^{-1}(t,\cdot)\in L^2([0,T],\mathrm{d}\langle\hat{X}\rangle)$ and $X$ can be recovered from $\hat{X}$ by
$$X_t = \int_0^t p^{-1}(t,s)\,\mathrm{d}\hat{X}_s.$$

4.16. Remark. In general it seems to be a difficult problem to determine whether a Gaussian process is prediction-invertible or not. In the discrete-time non-degenerate case all Gaussian processes are prediction-invertible. In continuous time the situation is more difficult, as Example 4.17 below illustrates. Nevertheless, we can immediately see that if the centered Gaussian process $X$ with covariance $R$ is prediction-invertible, then the covariance must satisfy the relation
$$R(t,s) = \int_0^{t\wedge s} p^{-1}(t,u)\,p^{-1}(s,u)\,\mathrm{d}\langle\hat{X}\rangle_u,$$
where $\langle\hat{X}\rangle_u = \mathrm{Var}\big(\mathbb{E}[X_T\,|\,\mathcal{F}_u]\big)$. However, this criterion does not seem to be very helpful in practice.

4.17. Example. Consider the Gaussian slope $X_t = t\xi$, $t\in[0,T]$, where $\xi$ is a standard normal random variable. Now, if we consider the raw filtration $\mathcal{G}^X_t = \sigma(X_s;\,s\le t)$, then $X$ is not prediction-invertible. Indeed, then $\hat{X}_0 = 0$ but $\hat{X}_t = X_T$ if $t\in(0,T]$. So $\hat{X}$ is not continuous. On the other hand, the augmented filtration is simply $\mathcal{F}^X_t = \sigma(\xi)$ for all $t\in[0,T]$. So $\hat{X}_0 = X_T$. Note, however, that in both cases the slope $X$ can be recovered from the prediction martingale: $X_t = \frac{t}{T}\hat{X}_t$.

In order to represent abstract Wiener integrals of $X$ in terms of Wiener-Itô integrals of $\hat{X}$, we need to extend the kernels $p$ and $p^{-1}$ to linear operators:

4.18. Definition. Let $X$ be prediction-invertible.
Define the operators $\mathrm{P}$ and $\mathrm{P}^{-1}$ by extending linearly the relations
$$\mathrm{P}[\mathbf{1}_t] = p(t,\cdot), \qquad \mathrm{P}^{-1}[\mathbf{1}_t] = p^{-1}(t,\cdot).$$
Now the following lemma is obvious.
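The linear extension in Definition 4.18 can be made concrete with a toy computation (illustrative code, not from the paper; the kernel used is an assumption chosen only for checkability): a step function $f = \sum_k c_k(\mathbf{1}_{t_k} - \mathbf{1}_{t_{k-1}})$ is mapped to $\sum_k c_k\big(p(t_k,\cdot) - p(t_{k-1},\cdot)\big)$. With the toy kernel $p(t,s) = \mathbf{1}_{[0,t]}(s)$, the extended operator must act as the identity on step functions.

```python
import numpy as np

def extend(kernel, values, knots, s_grid):
    """Linear extension F with F[1_t] = kernel(t, .), applied to the step
    function taking value values[k] on the interval (knots[k-1], knots[k]],
    i.e. F[f](s) = sum_k c_k * (kernel(t_k, s) - kernel(t_{k-1}, s))."""
    out = np.zeros_like(s_grid, dtype=float)
    prev = 0.0
    for c, tk in zip(values, knots):
        out += c * (kernel(tk, s_grid) - kernel(prev, s_grid))
        prev = tk
    return out

# Toy kernel (assumption): p(t, s) = 1_{[0, t]}(s); its extension is the identity.
p_id = lambda t, s: (s <= t).astype(float)

s = np.arange(0.05, 1.0, 0.1)                  # evaluation grid off the knots
values, knots = [2.0, -1.0, 0.5], [0.3, 0.7, 1.0]
out = extend(p_id, values, knots, s)
print(out)   # reproduces the step: 2, 2, 2, -1, -1, -1, -1, 0.5, 0.5, 0.5
```

The same `extend` skeleton would apply to a genuine prediction kernel $p(t,s)$, for which the operator is of course no longer the identity.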

4.19. Lemma. Let $f$ be such a function that $\mathrm{P}^{-1}[f]\in L^2([0,T],\mathrm{d}\langle\hat{X}\rangle)$ and let $\hat{g}\in L^2([0,T],\mathrm{d}\langle\hat{X}\rangle)$. Then
$$(4.20)\qquad \int_0^T f(t)\,\mathrm{d}X_t = \int_0^T \mathrm{P}^{-1}[f](t)\,\mathrm{d}\hat{X}_t,$$
$$(4.21)\qquad \int_0^T \hat{g}(t)\,\mathrm{d}\hat{X}_t = \int_0^T \mathrm{P}[\hat{g}](t)\,\mathrm{d}X_t.$$

4.22. Remark. (i) Equation (4.20) or (4.21) can actually be taken as the definition of the Wiener integral with respect to $X$.
(ii) The operators $\mathrm{P}$ and $\mathrm{P}^{-1}$ depend on $T$.
(iii) If $p^{-1}(\cdot,s)$ has bounded variation, we can represent $\mathrm{P}^{-1}$ as
$$\mathrm{P}^{-1}[f](s) = f(s)\,p^{-1}(T,s) + \int_s^T \big(f(t)-f(s)\big)\,p^{-1}(\mathrm{d}t,s).$$
A similar formula holds for $\mathrm{P}$ also, if $p(\cdot,s)$ has bounded variation.
(iv) Let $\langle\langle\mathbf{g}\rangle\rangle^X(t)$ denote the remaining covariance matrix with respect to $X$, i.e.,
$$\langle\langle\mathbf{g}\rangle\rangle^X_{ij}(t) = \mathbb{E}\left[\int_t^T g_i(s)\,\mathrm{d}X_s \int_t^T g_j(s)\,\mathrm{d}X_s\right].$$
Let $\langle\langle\hat{\mathbf{g}}\rangle\rangle^{\hat{X}}(t)$ denote the remaining covariance matrix with respect to $\hat{X}$, i.e.,
$$\langle\langle\hat{\mathbf{g}}\rangle\rangle^{\hat{X}}_{ij}(t) = \int_t^T \hat{g}_i(s)\,\hat{g}_j(s)\,\mathrm{d}\langle\hat{X}\rangle_s.$$
Then
$$\langle\langle\mathbf{g}\rangle\rangle^X_{ij}(t) = \langle\langle\mathrm{P}^{-1}[\mathbf{g}]\rangle\rangle^{\hat{X}}_{ij}(t) = \int_t^T \mathrm{P}^{-1}[g_i](s)\,\mathrm{P}^{-1}[g_j](s)\,\mathrm{d}\langle\hat{X}\rangle_s.$$

Now, let $X^{\mathbf{g}}$ be the bridge conditioned on $\int_0^T \mathbf{g}(s)\,\mathrm{d}X_s = \mathbf{0}$. By Lemma 4.19 we can rewrite the conditioning as
$$(4.23)\qquad \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \int_0^T \mathrm{P}^{-1}[\mathbf{g}](t)\,\mathrm{d}\hat{X}_t = \mathbf{0}.$$
With this observation the following theorem, which is the main result of this article, follows.

4.24. Theorem. Let $X$ be a prediction-invertible Gaussian process. Assume that, for all $t\in[0,T]$ and $i=1,\ldots,N$, $g_i\mathbf{1}_t\in\Lambda_t(X)$. Then the generalized bridge $X^{\mathbf{g}}$ admits the canonical representation
$$(4.25)\qquad X^{\mathbf{g}}_t = X_t - \int_0^t\!\!\int_s^t p^{-1}(t,u)\,\mathrm{P}\big[\hat{\ell}_{\hat{\mathbf{g}}}(u,\cdot)\big](s)\,\mathrm{d}\langle\hat{X}\rangle_u\,\mathrm{d}X_s,$$
where
$$\hat{g}_i = \mathrm{P}^{-1}[g_i], \qquad \hat{\ell}_{\hat{\mathbf{g}}}(u,v) = \hat{\mathbf{g}}^\top(u)\,\big(\langle\langle\hat{\mathbf{g}}\rangle\rangle^{\hat{X}}\big)^{-1}(v)\,\hat{\mathbf{g}}(v), \qquad \langle\langle\hat{\mathbf{g}}\rangle\rangle^{\hat{X}}_{ij}(t) = \int_t^T \hat{g}_i(s)\,\hat{g}_j(s)\,\mathrm{d}\langle\hat{X}\rangle_s = \langle\langle\mathbf{g}\rangle\rangle^X_{ij}(t).$$

Proof. Since $\hat{X}$ is a Gaussian martingale and because of the equality (4.23), we can use Theorem 4.11. We obtain
$$\mathrm{d}\hat{X}^{\hat{\mathbf{g}}}_s = \mathrm{d}\hat{X}_s - \left(\int_0^s \hat{\ell}_{\hat{\mathbf{g}}}(s,u)\,\mathrm{d}\hat{X}_u\right)\mathrm{d}\langle\hat{X}\rangle_s.$$
Now, by using the fact that $X$ is prediction-invertible, we can recover $X$ from $\hat{X}$, and consequently also $X^{\mathbf{g}}$ from $\hat{X}^{\hat{\mathbf{g}}}$, by operating with the kernel $p^{-1}$ in the following way:
$$(4.26)\qquad X^{\mathbf{g}}_t = \int_0^t p^{-1}(t,s)\,\mathrm{d}\hat{X}^{\hat{\mathbf{g}}}_s = X_t - \int_0^t p^{-1}(t,s)\left(\int_0^s \hat{\ell}_{\hat{\mathbf{g}}}(s,u)\,\mathrm{d}\hat{X}_u\right)\mathrm{d}\langle\hat{X}\rangle_s.$$
In a sense, the representation (4.26) is a canonical representation already. However, let us write it in terms of $X$ instead of $\hat{X}$. We obtain
$$X^{\mathbf{g}}_t = X_t - \int_0^t p^{-1}(t,s)\int_0^s \mathrm{P}\big[\hat{\ell}_{\hat{\mathbf{g}}}(s,\cdot)\big](u)\,\mathrm{d}X_u\,\mathrm{d}\langle\hat{X}\rangle_s = X_t - \int_0^t\!\!\int_s^t p^{-1}(t,u)\,\mathrm{P}\big[\hat{\ell}_{\hat{\mathbf{g}}}(u,\cdot)\big](s)\,\mathrm{d}\langle\hat{X}\rangle_u\,\mathrm{d}X_s.$$

4.27. Remark. Recall that, by assumption, the processes $X^{\mathbf{g}}$ and $X$ are equivalent on $\mathcal{F}_t$, $t<T$. So the representation (4.25) is an analogue of the Hitsuda representation for prediction-invertible processes. Indeed, one can show, just like in [16, 17], that a zero mean Gaussian process $\tilde{X}$ is equivalent in law to the zero mean prediction-invertible Gaussian process $X$ if it admits the representation
$$\tilde{X}_t = X_t - \int_0^t f(t,s)\,\mathrm{d}X_s, \qquad \text{where}\qquad f(t,s) = \int_s^t p^{-1}(t,u)\,\mathrm{P}\big[v(u,\cdot)\big](s)\,\mathrm{d}\langle\hat{X}\rangle_u$$
for some Volterra kernel $v\in L^2([0,T]^2,\mathrm{d}\langle\hat{X}\rangle\otimes\mathrm{d}\langle\hat{X}\rangle)$.

An important class of prediction-invertible processes is the class of the so-called invertible Gaussian Volterra processes.

4.28. Definition. $V$ is an invertible Gaussian Volterra process if it is continuous and there exist Volterra kernels $k$ and $k^{-1}$ such that
$$(4.29)\qquad V_t = \int_0^t k(t,s)\,\mathrm{d}W_s,$$
$$(4.30)\qquad W_t = \int_0^t k^{-1}(t,s)\,\mathrm{d}V_s.$$
Here $W$ is the standard Brownian motion, $k(t,\cdot)\in L^2([0,t]) = \Lambda_t(W)$ and $k^{-1}(t,\cdot)\in\Lambda_t(V)$ for all $t\in[0,T]$.

4.31. Remark. (i) The representation (4.29), defining a Gaussian Volterra process, states that the covariance $R$ of $V$ can be written as
$$R(t,s) = \int_0^{t\wedge s} k(t,u)\,k(s,u)\,\mathrm{d}u.$$
So, in some sense, the kernel $k$ is the square root, or the Cholesky decomposition, of the covariance $R$.
(ii) The inverse relation (4.30) means that the indicators $\mathbf{1}_t$, $t\in[0,T]$, can be approximated in $L^2([0,t])$ with linear combinations of the functions $k(t_j,\cdot)$, $t_j\in[0,t]$. I.e., the indicators $\mathbf{1}_t$ belong to the image of the operator $\mathrm{K}$ extending the kernel $k$ linearly, as discussed below.

Precisely as with the kernels $p$ and $p^{-1}$, we can define the operators $\mathrm{K}$ and $\mathrm{K}^{-1}$ by linearly extending the relations
$$\mathrm{K}[\mathbf{1}_t] := k(t,\cdot) \qquad\text{and}\qquad \mathrm{K}^{-1}[\mathbf{1}_t] := k^{-1}(t,\cdot).$$
Then, just like with the operators $\mathrm{P}$ and $\mathrm{P}^{-1}$, we have
$$\int_0^T f(t)\,\mathrm{d}V_t = \int_0^T \mathrm{K}[f](t)\,\mathrm{d}W_t, \qquad \int_0^T g(t)\,\mathrm{d}W_t = \int_0^T \mathrm{K}^{-1}[g](t)\,\mathrm{d}V_t.$$
The connection between the operators $\mathrm{K}$ and $\mathrm{K}^{-1}$ and the operators $\mathrm{P}$ and $\mathrm{P}^{-1}$ is
$$\mathrm{K}[g] = k(T,\cdot)\,\mathrm{P}^{-1}[g], \qquad \mathrm{K}^{-1}[g] = k^{-1}(T,\cdot)\,\mathrm{P}[g].$$
So, invertible Gaussian Volterra processes are prediction-invertible, and the following corollary to Theorem 4.24 is obvious.

4.32. Corollary. Let $V$ be an invertible Gaussian Volterra process and let $\mathrm{K}[g_i]\in L^2([0,T])$ for all $i=1,\ldots,N$. Denote $\tilde{\mathbf{g}}(t) := \mathrm{K}[\mathbf{g}](t)$. Then the bridge $V^{\mathbf{g}}$ admits the canonical representation
$$(4.33)\qquad V^{\mathbf{g}}_t = V_t - \int_0^t\!\!\int_s^t k(t,u)\,\mathrm{K}^{-1}\big[\tilde{\ell}_{\tilde{\mathbf{g}}}(u,\cdot)\big](s)\,\mathrm{d}u\,\mathrm{d}V_s,$$
where
$$\tilde{\ell}_{\tilde{\mathbf{g}}}(u,v) = \tilde{\mathbf{g}}^\top(u)\,\big(\langle\langle\tilde{\mathbf{g}}\rangle\rangle^W\big)^{-1}(v)\,\tilde{\mathbf{g}}(v), \qquad \langle\langle\tilde{\mathbf{g}}\rangle\rangle^W_{ij}(t) = \int_t^T \tilde{g}_i(s)\,\tilde{g}_j(s)\,\mathrm{d}s = \langle\langle\mathbf{g}\rangle\rangle^V_{ij}(t).$$

4.34. Example. An important example of an invertible Gaussian Volterra process is the fractional Brownian motion $B$ with Hurst index $H\in(0,1)$. It is a centered Gaussian process with $B_0 = 0$ and covariance function
$$R(t,s) = \frac{1}{2}\big( t^{2H} + s^{2H} - |t-s|^{2H} \big).$$
It is well-known that the fractional Brownian motion is an invertible Gaussian Volterra process with
$$\mathrm{K}[f](s) = c_H\, s^{\frac12-H}\, \mathrm{I}^{H-\frac12}_{T-}\Big[ (\cdot)^{H-\frac12} f \Big](s),$$
$$\mathrm{K}^{-1}[f](s) = \frac{1}{c_H}\, s^{\frac12-H}\, \mathrm{I}^{\frac12-H}_{T-}\Big[ (\cdot)^{H-\frac12} f \Big](s).$$
Here $\mathrm{I}^{H-\frac12}_{T-}$ and $\mathrm{I}^{\frac12-H}_{T-}$ are the Riemann-Liouville fractional integrals over $[0,T]$ of order $H-\frac12$ and $\frac12-H$, respectively: for $\alpha \in \{H-\frac12, \frac12-H\}$,
$$\mathrm{I}^{\alpha}_{T-}[f](t) = \frac{1}{\Gamma(\alpha)}\int_t^T \frac{f(s)}{(s-t)^{1-\alpha}}\,\mathrm{d}s, \quad\text{for } \alpha>0,$$
$$\mathrm{I}^{\alpha}_{T-}[f](t) = -\frac{1}{\Gamma(1+\alpha)}\,\frac{\mathrm{d}}{\mathrm{d}t}\int_t^T \frac{f(s)}{(s-t)^{-\alpha}}\,\mathrm{d}s, \quad\text{for } \alpha<0,$$
and $c_H$ is the normalizing constant
$$c_H = \left( \frac{2H\,\Gamma(H+\frac12)\,\Gamma(\frac32-H)}{\Gamma(2-2H)} \right)^{1/2}.$$
Here $\Gamma(x) = \int_0^\infty e^{-t}t^{x-1}\,\mathrm{d}t$ is the Gamma function. (For the proofs of these facts, and for more information on the fractional Brownian motion, we refer to the monographs by Biagini et al. [4] and Mishura [13].)

5. Application to Insider Trading

We consider insider trading in the context of initial enlargement of filtrations. Our approach here is motivated by Imkeller [11]. Consider an insider who has at time $t=0$ some insider information about the evolution of the price process of a financial asset $S$ over a period $[0,T]$. We want to calculate the additional expected utility for the insider trader. To make the maximization of the utility of terminal wealth reasonable, we have to assume that our model is arbitrage-free. In our Gaussian realm this boils down to assuming that the (discounted) asset prices are governed by the equation
$$(5.1)\qquad \frac{\mathrm{d}S_t}{S_t} = a_t\,\mathrm{d}\langle M\rangle_t + \mathrm{d}M_t,$$
where $S_0 = 1$, $M$ is a continuous Gaussian martingale with strictly increasing $\langle M\rangle$ and $M_0 = 0$, and the process $a$ is $\mathbb{F}$-adapted and satisfies $\int_0^T a_t^2\,\mathrm{d}\langle M\rangle_t < \infty$ $\mathbb{P}$-a.s.

Assuming that the trading ends at time $T-\varepsilon$, the insider knows some functionals of the return over the interval $[0,T]$. If $\varepsilon = 0$ there is obviously

arbitrage for the insider. The insider information will define a collection of functionals $G^i_T(M) = \int_0^T g_i(t)\,\mathrm{d}M_t$, where $g_i\in L^2([0,T],\mathrm{d}\langle M\rangle)$, $i=1,\ldots,N$, such that
$$(5.2)\qquad \int_0^T \mathbf{g}(t)\,\frac{\mathrm{d}S_t}{S_t} = \tilde{\mathbf{y}} = [\tilde{y}_i]_{i=1}^N$$
for some $\tilde{\mathbf{y}}\in\mathbb{R}^N$. This is equivalent to the conditioning of the Gaussian martingale $M$ on the linear functionals $\mathbf{G}_T = [G^i_T]_{i=1}^N$ of the log-returns:
$$\mathbf{G}_T(M) = \int_0^T \mathbf{g}(t)\,\mathrm{d}M_t = \left[\int_0^T g_i(t)\,\mathrm{d}M_t\right]_{i=1}^N.$$
Indeed, the connection is
$$\int_0^T \mathbf{g}(t)\,\mathrm{d}M_t = \tilde{\mathbf{y}} - \langle\mathbf{a},\mathbf{g}\rangle =: \mathbf{y},$$
where
$$\langle\mathbf{a},\mathbf{g}\rangle = \big[\langle a, g_i\rangle\big]_{i=1}^N = \left[\int_0^T a_t\,g_i(t)\,\mathrm{d}\langle M\rangle_t\right]_{i=1}^N.$$
As the natural filtration $\mathbb{F}$ represents the information available to the ordinary trader, the insider trader's information flow is described by a larger filtration $\mathbb{G} = (\mathcal{G}_t)_{t\in[0,T]}$ given by $\mathcal{G}_t = \mathcal{F}_t \vee \sigma(G^1_T,\ldots,G^N_T)$. Under the augmented filtration $\mathbb{G}$, $M$ is no longer a martingale. It is a Gaussian semimartingale with the semimartingale decomposition
$$(5.3)\qquad \mathrm{d}M_t = \mathrm{d}\tilde{M}_t - \left(\int_0^t \ell_{\mathbf{g}}(t,s)\,\mathrm{d}M_s - \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{y}\right)\mathrm{d}\langle M\rangle_t,$$
where $\tilde{M}$ is a continuous $\mathbb{G}$-martingale with bracket $\langle M\rangle$, which can be constructed through formula (4.10).

In this market, we consider the portfolio process $\pi$, defined on $[0,T-\varepsilon]\times\Omega$, as the fraction of the total wealth invested in the asset $S$. So the dynamics of the discounted value process associated with a self-financing strategy $\pi$ is defined by $V_0 = v_0$ and
$$\frac{\mathrm{d}V_t}{V_t} = \pi_t\,\frac{\mathrm{d}S_t}{S_t},$$
or, equivalently, by
$$(5.4)\qquad V_t = v_0 \exp\left\{ \int_0^t \pi_s\,\mathrm{d}M_s + \int_0^t \left( \pi_s a_s - \frac{1}{2}\pi_s^2 \right)\mathrm{d}\langle M\rangle_s \right\}$$
for $t\in[0,T-\varepsilon]$.

Let us denote by $\langle\cdot,\cdot\rangle_\varepsilon$ and $\|\cdot\|_\varepsilon$ the inner product and the norm on $L^2([0,T-\varepsilon],\mathrm{d}\langle M\rangle)$. For the ordinary trader, the process $\pi$ is assumed to be a non-negative $\mathbb{F}$-progressively measurable process such that
(i) $\mathbb{P}\big[\|\pi\|^2_\varepsilon < \infty\big] = 1$;
(ii) $\mathbb{P}\big[\langle\pi,f\rangle_\varepsilon < \infty\big] = 1$ for all $f\in L^2([0,T-\varepsilon],\mathrm{d}\langle M\rangle)$.

We denote this class of portfolios by $\Pi(\mathbb{F})$. By analogy, the class of portfolios available to the insider trader shall be denoted by $\Pi(\mathbb{G})$: the class of non-negative $\mathbb{G}$-progressively measurable processes that satisfy the conditions (i) and (ii) above.

The aim of both investors is to maximize the expected utility of the terminal wealth $V_{T-\varepsilon}$, by finding an optimal portfolio $\pi$ on $[0,T-\varepsilon]$ that solves the optimization problem
$$\max_{\pi}\ \mathbb{E}\big[U(V_{T-\varepsilon})\big].$$
Here, the utility function $U$ will be the logarithmic utility function, and the utility of the process (5.4) valued at time $T-\varepsilon$ is
$$(5.5)\qquad \log V_{T-\varepsilon} = \log v_0 + \int_0^{T-\varepsilon}\pi_s\,\mathrm{d}M_s + \int_0^{T-\varepsilon}\pi_s\Big(a_s - \frac{1}{2}\pi_s\Big)\,\mathrm{d}\langle M\rangle_s = \log v_0 + \int_0^{T-\varepsilon}\pi_s\,\mathrm{d}M_s + \frac{1}{2}\,\langle \pi, 2a-\pi\rangle_\varepsilon.$$
From the ordinary trader's point of view, $M$ is a martingale. So,
$$\mathbb{E}\left[\int_0^{T-\varepsilon}\pi_s\,\mathrm{d}M_s\right] = 0$$
for every $\pi\in\Pi(\mathbb{F})$ and, consequently,
$$\mathbb{E}\big[U(V_{T-\varepsilon})\big] = \log v_0 + \frac{1}{2}\,\mathbb{E}\big[\langle\pi, 2a-\pi\rangle_\varepsilon\big].$$
Therefore, the ordinary trader, given $\Pi(\mathbb{F})$, can solve the optimization problem
$$\max_{\pi\in\Pi(\mathbb{F})}\mathbb{E}\big[U(V_{T-\varepsilon})\big] = \log v_0 + \frac{1}{2}\max_{\pi\in\Pi(\mathbb{F})}\mathbb{E}\big[\langle\pi,2a-\pi\rangle_\varepsilon\big]$$
over the term $\langle\pi,2a-\pi\rangle_\varepsilon = 2\langle\pi,a\rangle_\varepsilon - \|\pi\|^2_\varepsilon$. Using the polarization identity gives
$$\langle\pi,2a-\pi\rangle_\varepsilon = \|a\|^2_\varepsilon - \|\pi-a\|^2_\varepsilon \le \|a\|^2_\varepsilon,$$
where the maximum is attained at $\pi = a$ on $[0,T-\varepsilon]$. This means that the optimal portfolio is $\hat\pi_t = a_t$, defined for all $t$ in $[0,T-\varepsilon]$. The corresponding maximal expected utility value is
$$\max_{\pi\in\Pi(\mathbb{F})}\mathbb{E}\big[U(V_{T-\varepsilon})\big] = \log v_0 + \frac{1}{2}\,\mathbb{E}\big[\|a\|^2_\varepsilon\big].$$
From the insider trader's point of view, the process $M$ is not a martingale under his information flow $\mathbb{G}$. The insider can update his utility of terminal wealth (5.5) by considering (5.3), where $\tilde{M}$ is a continuous $\mathbb{G}$-martingale. This gives
$$\log V_{T-\varepsilon} = \log v_0 + \int_0^{T-\varepsilon}\pi_s\,\mathrm{d}\tilde{M}_s + \frac{1}{2}\,\langle\pi,2a-\pi\rangle_\varepsilon - \left\langle \pi,\ \int_0^{\cdot}\ell_{\mathbf{g}}(\cdot,t)\,\mathrm{d}M_t - \mathbf{g}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}\mathbf{y} \right\rangle_\varepsilon.$$

Now, the insider maximizes the expected utility over all $\pi\in\Pi(\mathbb{G})$:
$$\max_{\pi\in\Pi(\mathbb{G})}\mathbb{E}\big[\log V_{T-\varepsilon}\big] = \log v_0 + \frac{1}{2}\max_{\pi\in\Pi(\mathbb{G})}\mathbb{E}\left[\left\langle \pi,\ 2\Big(a - \int_0^{\cdot}\ell_{\mathbf{g}}(\cdot,t)\,\mathrm{d}M_t + \mathbf{g}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}\mathbf{y}\Big) - \pi\right\rangle_\varepsilon\right].$$
The optimal portfolio $\hat\pi$ for the insider trader can be computed in the same way as for the ordinary trader. We obtain
$$\hat\pi_t = a_t - \int_0^t \ell_{\mathbf{g}}(t,s)\,\mathrm{d}M_s + \mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{y}, \qquad t\in[0,T-\varepsilon].$$
Since
$$\mathbb{E}\left[\left\langle a,\ \int_0^{\cdot}\ell_{\mathbf{g}}(\cdot,s)\,\mathrm{d}M_s - \mathbf{g}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}\mathbf{y}\right\rangle_\varepsilon\right] = 0,$$
we obtain that
$$\Delta_{T-\varepsilon} := \max_{\pi\in\Pi(\mathbb{G})}\mathbb{E}\big[U(V_{T-\varepsilon})\big] - \max_{\pi\in\Pi(\mathbb{F})}\mathbb{E}\big[U(V_{T-\varepsilon})\big] = \frac{1}{2}\,\mathbb{E}\left[\left\|\int_0^{\cdot}\ell_{\mathbf{g}}(\cdot,s)\,\mathrm{d}M_s - \mathbf{g}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}\mathbf{y}\right\|^2_\varepsilon\right].$$
Now, let us use the short-hand notation
$$\mathbf{G}_t := \int_0^t \mathbf{g}(s)\,\mathrm{d}M_s, \qquad \langle\langle\mathbf{g}\rangle\rangle(t,s) := \langle\langle\mathbf{g}\rangle\rangle(t) - \langle\langle\mathbf{g}\rangle\rangle(s), \qquad \langle\langle\mathbf{g}\rangle\rangle^{-1}(t,s) := \langle\langle\mathbf{g}\rangle\rangle^{-1}(t) - \langle\langle\mathbf{g}\rangle\rangle^{-1}(s).$$
Then, by expanding the square in $\|\cdot\|^2_\varepsilon$ and noting that $\mathbb{E}[\mathbf{G}_t] = \mathbf{0}$, we obtain
$$2\Delta_{T-\varepsilon} = \mathbb{E}\left[\big\|\mathbf{g}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}\big(\mathbf{G}_{\cdot} - \mathbf{y}\big)\big\|^2_\varepsilon\right]
= \int_0^{T-\varepsilon}\mathbf{y}^\top\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{g}(t)\mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{y}\,\mathrm{d}\langle M\rangle_t
+ \mathbb{E}\left[\int_0^{T-\varepsilon}\mathbf{G}_t^\top\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{g}(t)\mathbf{g}^\top(t)\,\langle\langle\mathbf{g}\rangle\rangle^{-1}(t)\,\mathbf{G}_t\,\mathrm{d}\langle M\rangle_t\right].$$

Now the formula $\mathbf E[x^\top A x] = \mathrm{Tr}[A\,\mathrm{Cov}\,x] + \mathbf E[x]^\top A\,\mathbf E[x]$ yields

$$
2\Delta_{T-\varepsilon}
= \int_0^{T-\varepsilon} y^\top\langle\!\langle g\rangle\!\rangle^{-1}(t)\,g(t)g^\top(t)\,\langle\!\langle g\rangle\!\rangle^{-1}(t)\,y\,\mathrm d\langle M\rangle_t
+ \int_0^{T-\varepsilon} \mathrm{Tr}\big[\langle\!\langle g\rangle\!\rangle^{-1}(t)\,g(t)g^\top(t)\,\langle\!\langle g\rangle\!\rangle^{-1}(t)\,\langle\!\langle g\rangle\!\rangle(0,t)\big]\,\mathrm d\langle M\rangle_t
$$
$$
= y^\top\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon,0)\,y
+ \int_0^{T-\varepsilon} \mathrm{Tr}\big[\langle\!\langle g\rangle\!\rangle^{-1}(t)\,g(t)g^\top(t)\,\langle\!\langle g\rangle\!\rangle^{-1}(t)\,\langle\!\langle g\rangle\!\rangle(0)\big]\,\mathrm d\langle M\rangle_t
- \int_0^{T-\varepsilon} \mathrm{Tr}\big[\langle\!\langle g\rangle\!\rangle^{-1}(t)\,g(t)g^\top(t)\big]\,\mathrm d\langle M\rangle_t
$$
$$
= (y-\langle\!\langle g,a\rangle\!\rangle)^\top\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon,0)\,(y-\langle\!\langle g,a\rangle\!\rangle)
+ \mathrm{Tr}\big[\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon,0)\,\langle\!\langle g\rangle\!\rangle(0)\big]
+ \log\frac{\det\langle\!\langle g\rangle\!\rangle(T-\varepsilon)}{\det\langle\!\langle g\rangle\!\rangle(0)}.
$$

We have proved the following proposition:

5.6. Proposition. The additional logarithmic utility in the model (5.1) for the insider with information (5.2) is

$$
\Delta_{T-\varepsilon} = \max_{\pi\in\Pi(\mathbb G)}\mathbf E[U(V_{T-\varepsilon})] - \max_{\pi\in\Pi(\mathbb F)}\mathbf E[U(V_{T-\varepsilon})]
$$
$$
= \tfrac12\,(y-\langle\!\langle g,a\rangle\!\rangle)^\top\big(\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon)-\langle\!\langle g\rangle\!\rangle^{-1}(0)\big)(y-\langle\!\langle g,a\rangle\!\rangle)
+ \tfrac12\,\mathrm{Tr}\big[\big(\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon)-\langle\!\langle g\rangle\!\rangle^{-1}(0)\big)\langle\!\langle g\rangle\!\rangle(0)\big]
+ \tfrac12\log\frac{\det\langle\!\langle g\rangle\!\rangle(T-\varepsilon)}{\det\langle\!\langle g\rangle\!\rangle(0)}.
$$

5.7. Example. Consider the classical Black and Scholes pricing model

$$
\frac{\mathrm dS_t}{S_t} = \mu\,\mathrm dt + \sigma\,\mathrm dW_t, \qquad S_0 = 1,
$$

where $W=(W_t)_{t\in[0,T]}$ is the standard Brownian motion. Assume that the insider trader knows at time $t=0$ that the total and the average return of the stock price over the period $[0,T]$ are both zero, and that the trading ends at time $T-\varepsilon$. So, since $y_1=y_2=0$, the insider knows that

$$
G^1_T = \int_0^T g_1(t)\,\mathrm dW_t = \frac{y_1}{\sigma}-\frac{\mu}{\sigma}\langle\!\langle g_1,1\rangle\!\rangle_T = -\frac{\mu}{\sigma}\langle\!\langle g_1,1\rangle\!\rangle_T,
$$
$$
G^2_T = \int_0^T g_2(t)\,\mathrm dW_t = \frac{y_2}{\sigma}-\frac{\mu}{\sigma}\langle\!\langle g_2,1\rangle\!\rangle_T = -\frac{\mu}{\sigma}\langle\!\langle g_2,1\rangle\!\rangle_T,
$$

where

$$
g_1(t) = \mathbf 1_{[0,T]}(t), \qquad g_2(t) = \frac{T-t}{T},
$$

so that $\langle\!\langle g_1,1\rangle\!\rangle_T = T$ and $\langle\!\langle g_2,1\rangle\!\rangle_T = T/2$.
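The quadratic-form expectation formula invoked above, $\mathbf E[x^\top Ax]=\mathrm{Tr}[A\,\mathrm{Cov}\,x]+\mathbf E[x]^\top A\,\mathbf E[x]$, is easy to confirm by Monte Carlo; the matrix $A$, mean and covariance factor below are arbitrary hypothetical values chosen only for the check:

```python
import numpy as np

# Monte Carlo check of E[x^T A x] = Tr[A Cov(x)] + E[x]^T A E[x]
# for a Gaussian vector x with mean `mean` and Cov(x) = L L^T.
rng = np.random.default_rng(2)
A = np.array([[2.0, -1.0], [0.5, 3.0]])       # arbitrary matrix
mean = np.array([1.0, -2.0])
L = np.array([[1.0, 0.0], [0.7, 0.5]])        # Cholesky-type factor
x = mean + rng.normal(size=(200_000, 2)) @ L.T
mc = np.einsum('ni,ij,nj->n', x, A, x).mean() # sample mean of x^T A x
exact = np.trace(A @ (L @ L.T)) + mean @ A @ mean
assert abs(mc - exact) < 0.2
```

In the derivation the formula is applied with $x=\tilde G_t$ (so $\mathbf E[x]=0$) and $A=\langle\!\langle g\rangle\!\rangle^{-1}(t)g(t)g^\top(t)\langle\!\langle g\rangle\!\rangle^{-1}(t)$, $\mathrm{Cov}\,\tilde G_t=\langle\!\langle g\rangle\!\rangle(0,t)$.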

Then, by Proposition 5.6,

$$
\Delta_{T-\varepsilon} = \tfrac12\Big(\frac{2\mu}{\sigma}\Big)^2\,\langle\!\langle g,1\rangle\!\rangle_T^\top\big(\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon)-\langle\!\langle g\rangle\!\rangle^{-1}(0)\big)\langle\!\langle g,1\rangle\!\rangle_T
+ \tfrac12\,\mathrm{Tr}\big[\big(\langle\!\langle g\rangle\!\rangle^{-1}(T-\varepsilon)-\langle\!\langle g\rangle\!\rangle^{-1}(0)\big)\langle\!\langle g\rangle\!\rangle(0)\big]
+ \tfrac12\log\frac{\det\langle\!\langle g\rangle\!\rangle(T-\varepsilon)}{\det\langle\!\langle g\rangle\!\rangle(0)}
$$

with

$$
\langle\!\langle g\rangle\!\rangle^{-1}(t) = \frac1T
\begin{pmatrix}
4\,\dfrac{T}{T-t} & -6\Big(\dfrac{T}{T-t}\Big)^2 \\[6pt]
-6\Big(\dfrac{T}{T-t}\Big)^2 & 12\Big(\dfrac{T}{T-t}\Big)^3
\end{pmatrix}
$$

for all $t\in[0,T-\varepsilon]$. We obtain

$$
\Delta_{T-\varepsilon} = \frac12\Big(\frac{2\mu}{\sigma}\Big)^2
\bigg\{3T\Big(\frac{T}{\varepsilon}\Big)^3 - 6T\Big(\frac{T}{\varepsilon}\Big)^2 + 4T\Big(\frac{T}{\varepsilon}\Big) - T\bigg\}
+ 2\Big(\frac{T}{\varepsilon}\Big)^3 - 3\Big(\frac{T}{\varepsilon}\Big)^2 + 2\Big(\frac{T}{\varepsilon}\Big) - 2\log\frac{T}{\varepsilon} - 1.
$$

Here it can be nicely seen that $\Delta_0 = 0$ (with $\varepsilon = T$ there is no trading at all) and $\Delta_{T-\varepsilon}\to\infty$ as $\varepsilon\to 0$ (the knowledge of the final values implies arbitrage).
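The closed-form inverse matrix above can be verified by numerical quadrature, and the final formula can be checked to vanish at $\varepsilon = T$ (no trading at all). The values $T=2$, $t=0.5$ below are illustrative only:

```python
import numpy as np

# Midpoint-rule computation of <<g>>(t) = int_t^T g(s) g(s)^T ds
# for g = (g1, g2) = (1, (T - s)/T), compared with the closed form.
T, t, N = 2.0, 0.5, 100_000
s = t + (np.arange(N) + 0.5) * (T - t) / N
ds = (T - t) / N
g = np.vstack([np.ones(N), (T - s) / T])
G = (g[:, None, :] * g[None, :, :]).sum(axis=-1) * ds   # 2x2 matrix <<g>>(t)
r = T / (T - t)
G_inv = (1 / T) * np.array([[4 * r, -6 * r**2],
                            [-6 * r**2, 12 * r**3]])    # closed form
assert np.allclose(np.linalg.inv(G), G_inv, rtol=1e-3)

# At eps = T both brackets of the final formula vanish, so Delta_0 = 0.
eps = T
q = 3 * T * (T / eps)**3 - 6 * T * (T / eps)**2 + 4 * T * (T / eps) - T
tr = (2 * (T / eps)**3 - 3 * (T / eps)**2 + 2 * (T / eps)
      - 2 * np.log(T / eps) - 1)
assert abs(q) < 1e-12 and abs(tr) < 1e-12
```

The quadrature check also confirms $\langle\!\langle g\rangle\!\rangle(0) = \begin{pmatrix} T & T/2 \\ T/2 & T/3\end{pmatrix}$ used implicitly in evaluating the trace and determinant terms.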

Tommi Sottinen, Department of Mathematics and Statistics, University of Vaasa, P.O. Box 700, FI-65101 Vaasa, Finland
E-mail address:

Adil Yazigi, Department of Mathematics and Statistics, University of Vaasa, P.O. Box 700, FI-65101 Vaasa, Finland
E-mail address:


Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

Mean-Variance Hedging for Continuous Processes: New Proofs and Examples

Mean-Variance Hedging for Continuous Processes: New Proofs and Examples Mean-Variance Hedging for Continuous Processes: New Proofs and Examples Huyên Pham Équipe d Analyse et de Mathématiques Appliquées Université de Marne-la-Vallée 2, rue de la Butte Verte F 9366 Noisy-le-Grand

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Xiaowei Chen International Business School, Nankai University, Tianjin 371, China School of Finance, Nankai

More information

Optimal Stopping Problems and American Options

Optimal Stopping Problems and American Options Optimal Stopping Problems and American Options Nadia Uys A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master

More information

Stochastic Integration and Continuous Time Models

Stochastic Integration and Continuous Time Models Chapter 3 Stochastic Integration and Continuous Time Models 3.1 Brownian Motion The single most important continuous time process in the construction of financial models is the Brownian motion process.

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

arxiv: v1 [math.pr] 24 Sep 2018

arxiv: v1 [math.pr] 24 Sep 2018 A short note on Anticipative portfolio optimization B. D Auria a,b,1,, J.-A. Salmerón a,1 a Dpto. Estadística, Universidad Carlos III de Madrid. Avda. de la Universidad 3, 8911, Leganés (Madrid Spain b

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

Stochastic Calculus for Finance II - some Solutions to Chapter VII

Stochastic Calculus for Finance II - some Solutions to Chapter VII Stochastic Calculus for Finance II - some Solutions to Chapter VII Matthias hul Last Update: June 9, 25 Exercise 7 Black-Scholes-Merton Equation for the up-and-out Call) i) We have ii) We first compute

More information

Random Fields: Skorohod integral and Malliavin derivative

Random Fields: Skorohod integral and Malliavin derivative Dept. of Math. University of Oslo Pure Mathematics No. 36 ISSN 0806 2439 November 2004 Random Fields: Skorohod integral and Malliavin derivative Giulia Di Nunno 1 Oslo, 15th November 2004. Abstract We

More information