Martingale Representation Theorem for the G-expectation
H. Mete Soner, Nizar Touzi, Jianfeng Zhang

February 22, 2010

Abstract

This paper considers the nonlinear theory of G-martingales as introduced by Peng in [14, 15]. A martingale representation theorem for this theory is proved by using the techniques and the results established in [18] for the second order stochastic target problems and the second order backward stochastic differential equations. In particular, this representation provides a hedging strategy in a market with an uncertain volatility.

Key words: G-expectation, G-martingale, nonlinear expectation, stochastic target problem, singular measure, BSDE, 2BSDE, duality.

AMS 2000 subject classifications: 60H10, 60H30.

1 Introduction

The notion of a G-expectation as recently introduced by Peng [14, 15] has several motivations and applications. One of them is the study of financial problems with uncertainty about the volatility. This important problem was also considered earlier by Denis & Martini [3]. Motivated by this application, Denis & Martini developed an almost pathwise theory of stochastic calculus. In this second approach, probabilistic statements are required to hold quasi-surely: namely, $P$-almost surely for all probability measures $P$ from a large class of mutually singular measures. Denis & Martini employ functional analytic techniques, while Peng's approach utilizes the theory of viscosity solutions of parabolic partial differential equations.

ETH (Swiss Federal Institute of Technology), Zurich, hmsoner@ethz.ch. Research partly supported by the European Research Council under the grant FiRM.

CMAP, Ecole Polytechnique Paris, nizar.touzi@polytechnique.edu. Research supported by the Chair Financial Risks of the Risk Foundation sponsored by Société Générale, the Chair Derivatives of the Future sponsored by the Fédération Bancaire Française, and the Chair Finance and Sustainable Development sponsored by EDF and Calyon.
University of Southern California, Department of Mathematics, jianfenz@usc.edu. Research supported in part by NSF grant DMS
Indeed, the G-expectation is defined by Peng using the nonlinear heat equation $\partial_t u + G(D^2 u) = 0$ on $[0,1) \times \mathbb{R}^d$, where the time maturity is taken to be $T = 1$ and, for given symmetric matrices $\underline{a} > 0$ and $\overline{a} \ge \underline{a}$, the nonlinearity $G$ is defined by

$G(\gamma) := \frac{1}{2} \sup\big\{ \operatorname{tr}[\gamma a] : \underline{a} \le a \le \overline{a} \big\}.$   (1.1)

Then, for Markov-like random variables, the G-expectation and conditional expectations are defined through the solution of the above equation with this random variable as its terminal condition at time $T = 1$. A G-martingale is then defined easily as a process which satisfies the martingale property with respect to this conditional expectation. A brief introduction to this theory is provided in Section 2 below.

Denis & Martini [3] also construct a similar structure of quasi-sure stochastic analysis. However, they use a quite different approach, which utilizes the set $\mathcal{P}$ of all probability measures $P$ such that the canonical map in the Wiener space is a martingale under $P$ and the quadratic variation of this martingale lies between $\underline{a}$ and $\overline{a}$. Although the constructions of the quasi-sure analysis and the G-expectations are substantially different, these theories are very closely related, as proved recently by Denis, Hu & Peng [4]. The paper [4] also provides a dual representation of the G-expectation as the supremum of expectations over $\mathcal{P}$.

A probabilistic construction similar to quasi-sure stochastic analysis and G-expectations is the theory of second order backward stochastic differential equations (2BSDEs). This theory is developed in [1, 2, 16] as a generalization of BSDEs as initially introduced in [6, 12]. In particular, 2BSDEs provide a stochastic representation for fully nonlinear partial differential equations. Since the G-expectation is defined through such a nonlinear equation, one expects that the G-expectations are naturally connected to the 2BSDEs. Equivalently, 2BSDEs can be viewed as the extension of G-expectations to more general nonlinearities.
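As a quick illustration of (1.1): in dimension one, with scalar bounds $0 < \underline{a} \le \overline{a}$, the supremum is attained at $\overline{a}$ when $\gamma \ge 0$ and at $\underline{a}$ when $\gamma < 0$; when the bounds are multiples of the identity, it decouples along the eigenvalues of $\gamma$. The following sketch (our own illustration, not part of the paper) evaluates $G$ under these simplifying assumptions.

```python
import numpy as np

def G_scalar(gamma, a_low, a_high):
    # d = 1 case of (1.1): G(gamma) = (1/2) * sup_{a_low <= a <= a_high} a * gamma,
    # attained at a_high for gamma >= 0 and at a_low for gamma < 0.
    return 0.5 * (a_high * max(gamma, 0.0) + a_low * min(gamma, 0.0))

def G_isotropic(gamma, a_low, a_high):
    # Multidimensional case under the simplifying assumption that the bounds
    # are scalar multiples of the identity: the sup in (1.1) then decouples
    # along the eigenvalues of the symmetric matrix gamma.
    lam = np.linalg.eigvalsh(gamma)
    return 0.5 * (a_high * lam.clip(min=0.0).sum() + a_low * lam.clip(max=0.0).sum())
```

For instance, with $\underline{a} = 1/4$ and $\overline{a} = 1$, one gets $G(2) = 1$ while $G(-2) = -1/4$; this asymmetry is exactly the uncertain-volatility effect.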
Indeed, recently the authors developed such a generalization and a duality theory for 2BSDEs using probabilistic constructions similar to quasi-sure analysis [17, 18, 19].

In this paper, we investigate the question of representing an arbitrary G-martingale in terms of stochastic integrals and other processes. Specifically, we fix a finite horizon, say $T = 1$. Since all martingales can be seen as conditional expectations, we also fix the final value $\xi$. We then would like to construct stochastic processes $H$ and $K$ so that

$Y_t := E^G_t[\xi] = \xi - \int_t^1 H_s\,dB_s + K_1 - K_t = E^G[\xi] + \int_0^t H_s\,dB_s - K_t,$

where $E^G_t$ is the G-conditional expectation and the process $M := -K$ is a non-increasing G-martingale. The stochastic integral that appears in the above is the regular Itô one. But it is also defined quasi-surely. More precisely, the above statement holds $P$-almost surely for all
probability measures $P$ in $\mathcal{P}$. Equivalently, the above equation holds quasi-surely in the sense of Denis & Martini. In particular, all the above processes, as well as the stochastic integral, are defined on the support of all measures in the set $\mathcal{P}$. This is an important property of this martingale representation, as $\mathcal{P}$ contains measures which are mutually singular. Moreover, there is no measure that dominates all measures in $\mathcal{P}$. Hence the above processes are defined on a large subset of our probability space.

A partial answer to this question was already provided by Xu and Zhang [20] for the class of symmetric G-martingales, i.e. processes $N$ such that both $N$ and $-N$ are G-martingales. Since the G-expectation is not linear, the class of symmetric martingales is a strict subset of all G-martingales. In particular, the representation of symmetric martingales is obtained using only stochastic integrals.

We obtain the martingale representation in Theorem 5.1 for almost all square-integrable martingales. This result essentially provides a complete answer to the question of representation for the integrability classes defined in [15]. We also show that the non-increasing martingale $M := -K$ does not admit any further structure, as illustrated in Example 6.4.

Our analysis utilizes the already mentioned duality result of Denis, Hu and Peng [4]. Similarly to [4], we also provide a dual characterization of G-martingales as an immediate consequence of the results in [4, 15]. This observation is one of the key ingredients of our representation proof. Moreover, it can be used to extend the definition of G-martingales to a class larger than the integrability class $L^1_G$ of Peng. Indeed, the above martingale representation result could also be proved for a larger class of random variables. But this development also requires the extension of G-expectations and conditional expectations to this larger class. These types of results are not pursued here.
But in an example, Example 6.3 below, we show that the integrability class $L^1_G$ does not include all bounded random variables. Thus it is desirable to extend the theory to a larger class of random variables using the equivalent definitions that do not refer to partial differential equations. Indeed, such a theory is developed by the authors in [17, 18, 19].

The paper is organized as follows. In Section 2, we review the theory of G-expectations and G-martingales. Section 3 defines the quasi-sure analysis of Denis & Martini and also provides the dual formulation. The main ingredients for our approach, such as the norms and spaces, are collected in Section 4. The main result is then stated and proved in Section 5. In the Appendix, we provide an approximation argument for the solutions of the partial differential equation. Then the connection between the integrability class of Peng and the spaces utilized in this paper is given in a subsection of the Appendix.
1.1 Notation and spaces

We collect all the spaces and the notation used in the paper with a reference to their definitions. We always assume that $\underline{a} > 0$ and $\overline{a} \ge \underline{a}$.

- $\mathbb{F} = \{\mathcal{F}^B_t,\, t \in [0,1]\}$ is the filtration generated by the canonical process $B$.
- $E^G$ is the G-expectation, defined in [15] and in subsection 2.1; $E^G_t$ is the conditional G-expectation.
- $L_{ip}$ is the space of random variables of the form $\varphi(B_{t_1}, \dots, B_{t_n})$ with a bounded, Lipschitz deterministic function $\varphi$ and time points $0 \le t_1 \le \dots \le t_n \le 1$.
- $L^p_G$ is the integrability class defined in subsection 2.1 as the closure of $L_{ip}$.
- $H^{p,0}_G$ is the space of piecewise constant G-stochastic integrands; see subsection 2.2.
- $H^p_G$ is the integrability class defined in subsection 2.2 as the closure of $H^{p,0}_G$.
- $\mathcal{P} = \mathcal{P}_W^{[\underline{a},\overline{a}]}$ is the set of measures under which the canonical process is a martingale and satisfies (3.1).
- $\mathcal{P}(t,P)$ is defined in (3.2).
- $\mathcal{L}^p_{\mathcal{P}}$ is the set of all $p$-integrable random variables; see (4.1).
- $L^p_{\mathcal{P}}$ is the closure of $L_{ip}$ under the norm $\|\cdot\|_{\mathcal{L}^p_{\mathcal{P}}}$; see (4.1).
- $\mathcal{H}^p_{\mathcal{P}}$ is the set of all $p$-integrable, $\mathbb{R}^d$-valued stochastic integrands; see (4.2).
- $H^p_{\mathcal{P}}$ is the closure of $H^{p,0}_G$ under the norm $\|\cdot\|_{\mathcal{H}^p_{\mathcal{P}}}$; see Definition 4.2.
- $S^p_{\mathcal{P}}$ is the set of all $p$-integrable, continuous processes; see Definition 4.2.
- $I^p_{\mathcal{P}}$ is the subset of $S^p_{\mathcal{P}}$ of processes that are non-decreasing with initial value $0$; see Definition 4.2.
- $S_d$ is the set of all $d \times d$ symmetric matrices with the usual ordering and identity $I_d$.
- For $\nu, \eta \in \mathbb{R}^d$, $A := \nu \otimes \eta \in S_d$ is defined by $Ax = (\eta \cdot x)\nu$ for any $x \in \mathbb{R}^d$.
- For $A \in S_d$, $\nu_k \in \mathbb{R}^d$ are its orthonormal eigenvectors and $\lambda_k$ the corresponding eigenvalues, so that $A = \sum_k \lambda_k [\nu_k \otimes \nu_k]$.
- For $A \in S_d$ and a real number $c$, $A \vee cI_d \in S_d$ is defined by $A \vee cI_d := \sum_k (\lambda_k \vee c)[\nu_k \otimes \nu_k]$.

2 G-stochastic analysis of Peng [14, 15]

We fix the time horizon $T = 1$. Let $\Omega := \{\omega \in C([0,1], \mathbb{R}^d) : \omega(0) = 0\}$ be the canonical space, $B$ the canonical process, and $P_0$ the Wiener measure. $\mathbb{F} = \{\mathcal{F}^B_t,\, t \in [0,1]\}$ is the filtration generated by $B$. We note that $\mathcal{F}^B_t$ may differ from $\mathcal{F}^B_{t+}$. In what follows, we always use the space $\Omega$ together with the filtration $\mathbb{F}$.
2.1 G-expectation and G-martingale

Following Peng [14], let $\underline{a} > 0$, $\overline{a} \ge \underline{a}$ and $G$ be as in (1.1). For a bounded, uniformly Lipschitz continuous function $\varphi$ on $\mathbb{R}^d$, let $u$ be the unique bounded viscosity solution of the following parabolic equation,

$\partial_t u + G(D^2 u) = 0$ on $[0,1) \times \mathbb{R}^d$, and $u(1,x) = \varphi(x)$.   (2.1)

Here, $\partial_t$ and $D^2$ denote, respectively, the partial derivative with respect to $t$ and the partial Hessian with respect to the space variable $x$. Then, the conditional G-expectation of the random variable $\varphi(B_1)$ at time $t$ is defined by $E^G_t[\varphi(B_1)] := u(t, B_t)$. In particular, the G-expectation of $\varphi(B_1)$ is given by $E^G[\varphi(B_1)] := E^G_0[\varphi(B_1)] = u(0,0)$.

Next, consider random variables of the form $\xi := \varphi(B_{t_1}, \dots, B_{t_{n-1}}, B_{t_n})$ for some bounded, uniformly Lipschitz continuous function $\varphi$ on $(\mathbb{R}^d)^n$ and $0 \le t_1 < \dots < t_n = 1$. For $t_{i-1} \le t < t_i$, let

$E^G_t[\xi] = E^G_t[\varphi(B_{t_1}, \dots, B_{t_n})] := v_i(t, B_{t_1}, \dots, B_{t_{i-1}}, B_t),$

where $\{v_i\}_{i=1,\dots,n-1}$ is the unique bounded, Lipschitz viscosity solution of the following equation,

$\partial_t v_i + G(D^2 v_i) = 0$, $t_{i-1} \le t < t_i$, and   (2.2)

$v_i(t_i, x_1, \dots, x_{i-1}, x) = v_{i+1}(t_i, x_1, \dots, x_{i-1}, x, x),$

and $v_n$ solves the above equation with final data $v_n(1, x_1, \dots, x_{n-1}, x) = \varphi(x_1, \dots, x_{n-1}, x)$. Moreover, if we set $u_i(x_1, \dots, x_i) := v_{i+1}(t_i, x_1, \dots, x_i, x_i)$, then for $t_{i-1} \le t < t_i$ we have the following additional identity,

$E^G_t[\varphi(B_{t_1}, \dots, B_{t_n})] = v_i(t, B_{t_1}, \dots, B_{t_{i-1}}, B_t) = E^G_t[u_i(B_{t_1}, \dots, B_{t_i})].$

Let $L_{ip}$ denote the space of such random variables $\varphi(B_{t_1}, \dots, B_{t_n})$ with a bounded and Lipschitz $\varphi$. For $p \ge 1$, $L^p_G$ is the closure of $L_{ip}$ under the norm $\|\xi\|^p_{L^p_G} := E^G[|\xi|^p]$. We may then extend the definitions of the G-expectation and the conditional G-expectation to all $\xi \in L^1_G$. A characterization of this space, in particular a Lusin type theorem, is obtained in [4]. However, since these integrability classes are defined through the closure of a rather smooth space $L_{ip}$, they require substantial smoothness.
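For intuition, the definition $E^G_t[\varphi(B_1)] = u(t,B_t)$ can be made concrete in dimension one with an explicit finite-difference scheme for (2.1). This is only an illustrative sketch; the bounds $\underline{a} = 1/4$, $\overline{a} = 1$, the truncation to $[-5,5]$ and the grid sizes are our own choices, not from the paper:

```python
import numpy as np

def g_expectation(phi, a_low=0.25, a_high=1.0, L=5.0, nx=101, nt=200):
    """Explicit backward finite-difference scheme for the 1-d equation (2.1):
    d_t u + G(u_xx) = 0 on [0,1), u(1,x) = phi(x), with
    G(g) = 0.5*(a_high*max(g,0) + a_low*min(g,0)).
    Returns an approximation of u(0,0), i.e. the G-expectation of phi(B_1)."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = 1.0 / nt
    assert a_high * dt / dx**2 <= 1.0      # stability (effective diffusion a_high/2)
    u = phi(x)                             # terminal condition at t = 1
    for _ in range(nt):                    # march from t = 1 back to t = 0
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        G = 0.5 * (a_high * np.maximum(uxx, 0.0) + a_low * np.minimum(uxx, 0.0))
        u = u + dt * G                     # u(t) = u(t+dt) + dt * G(u_xx)
    return u[nx // 2]
```

For the convex payoff $\varphi(x) = x^2$ the scheme returns approximately $\overline{a}$ (the worst-case variance), while for the concave payoff $\varphi(x) = -x^2$ it returns approximately $-\underline{a}$, illustrating that $E^G[-\xi] \ne -E^G[\xi]$ in general.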
Indeed, in the Appendix, we construct a bounded random variable which is not in $L^1_G$ (see Example 6.3). We can now define G-martingales.
Definition 2.1 An adapted, $L^1_G$-valued process $M$ is called a G-martingale if for any $0 \le s \le t \le 1$, $M_s = E^G_s[M_t]$. $M$ is called a symmetric G-martingale if both $M$ and $-M$ are G-martingales.

A G-stochastic integral (as will be defined in the next subsection) is an example of a symmetric G-martingale. But not all G-martingales are stochastic integrals, and not all are symmetric.

2.2 Stochastic integral and quadratic variation

For $p \in [1, \infty)$, we let $H^{p,0}_G$ be the space of $\mathbb{F}$-adapted, $\mathbb{R}^d$-valued piecewise constant processes $H = \sum_i H_{t_i} 1_{[t_i, t_{i+1})}$ such that $H_{t_i} \in L^p_G$. For $H \in H^{p,0}_G$, the G-stochastic integral is easily defined by

$\int_0^1 H_s\,d^G B_s := \sum_i H_{t_i} \cdot [B_{t_{i+1}} - B_{t_i}].$

Notice that this definition is completely universal in the sense that it is pointwise and independent of $G$. Let $H^p_G$ be the closure of $H^{p,0}_G$ under the norm

$\|H\|^p_{H^p_G} := E^G\Big[\int_0^1 |H_t|^p\,dt\Big].$

By a closure argument, the stochastic integral is defined for all $H \in H^p_G$.

It is clear that the set of G-martingales does not form a linear space. However, for any $H \in H^{p,0}_G$, one may directly verify that the stochastic integral process $M := \int_0^\cdot H_s\,d^G B_s$ is a G-martingale and so is $-M$. Hence, any G-stochastic integral is a symmetric G-martingale.

This notion of the stochastic integral can be used to define the quadratic variation process $\langle B \rangle^G_t$ as well. Indeed, the $S_d$-valued process is defined by the identity

$\langle B \rangle^G_t := B_t \otimes B_t - 2 \int_0^t B_s \otimes d^G B_s, \quad 0 \le t \le 1,$   (2.3)

where the tensor product is as in subsection 1.1. We can directly check that the integrand $B_t$ is in the integration class $H^p_G$. Therefore, $\langle B \rangle^G_t$ is well defined.

3 Quasi-sure stochastic analysis of Denis & Martini [3]

Let $P$ be a measure on $(\Omega, \mathcal{F})$ such that the canonical process $B$ is a martingale. Then, the quadratic variation process $\langle B \rangle_t$ of $B$ under $P$ exists. We consider the subset $\mathcal{P} := \mathcal{P}_W^{[\underline{a},\overline{a}]}$ of such measures $P$ for which $\langle B \rangle_t$ satisfies the following for some deterministic constant $c = c(P) > 0$,

$0 < \underline{a} \vee cI_d \le \frac{d\langle B \rangle_t}{dt} \le \overline{a}, \quad t \in [0,1],\ P\text{-a.s.},$   (3.1)
where $I_d$ is the identity matrix in $S_d$. Notice that when $\underline{a}$ is positive definite, as required in Denis and Martini [3], we do not need $cI_d$ in the lower bound. Also, the constant $c = c(P)$ may be different for each measure. Denis and Martini [3] define the following.

Definition 3.1 We say that a property holds quasi-surely, abbreviated as q.s., if it holds $P$-almost surely for all $P \in \mathcal{P}$.

Remark 3.2 All the results in this paper also hold true if we set $\mathcal{P} := \mathcal{P}_S^{[\underline{a},\overline{a}]}$, the collection of all probability measures

$P^\alpha := P_0 \circ (X^\alpha)^{-1}$, where $X^\alpha_t := \int_0^t \alpha_s^{1/2}\,dB_s$, $t \in [0,1]$, $P_0$-a.s.,

for some $\mathbb{F}$-adapted process $\alpha$ taking values in $S_d$ and satisfying

$0 < \underline{a} \vee c(\alpha) I_d \le \alpha_t \le \overline{a}$, $t \in [0,1]$, $P_0$-a.s.,

where the constant $c(\alpha) > 0$ may depend on $\alpha$. We note that $\mathcal{P}_S^{[\underline{a},\overline{a}]}$ is a strict subset of $\mathcal{P}_W^{[\underline{a},\overline{a}]}$ and that each $P \in \mathcal{P}_S^{[\underline{a},\overline{a}]}$ satisfies the Blumenthal zero-one law and the martingale representation property. We remark that Denis and Martini [3] use the space $\mathcal{P}_W^{[\underline{a},\overline{a}]}$, while Denis, Hu and Peng [4] and our subsequent work [19] essentially use $\mathcal{P}_S^{[\underline{a},\overline{a}]}$.

The following are immediate consequences of the definition of G-expectations.

Proposition 3.3 Let $H \in H^2_G$. Then, $H$ is Itô-integrable for every $P \in \mathcal{P}$. Moreover,

$\int_0^t H_s\,d^G B_s = \int_0^t H_s\,dB_s$, q.s.,

where the right hand side is the usual Itô integral. Consequently, the quadratic variation process $\langle B \rangle^G$ defined in (2.3) agrees with the usual quadratic variation process quasi-surely.

Proof. The above statement clearly holds for the integrands $H \in H^{2,0}_G$ (i.e. the piecewise constant processes). For any fixed $P \in \mathcal{P}$, the $L^2(P)$ norm is clearly weaker than the $L^2_G$ norm. Therefore, the quasi-sure equality also holds in the $L^2_G$ closure of $H^{2,0}_G$. The statement about the quadratic variation follows from the general statement about the stochastic integrals and the formula (2.3).

Next, we recall a dual characterization of the G-expectation as proved in [4]. We will then generalize that characterization to the G-conditional expectations.
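Both the piecewise-constant integral of subsection 2.2 and the identity (2.3) are pointwise statements, so they can be checked path by path, with no measure involved. A small one-dimensional sketch (illustrative only; the sample path below is arbitrary):

```python
import numpy as np

def pc_integral(H, B):
    """Pointwise integral sum_i H_{t_i} (B_{t_{i+1}} - B_{t_i}) of a
    piecewise-constant integrand along one path, as in subsection 2.2.
    H[i] is the value held on [t_i, t_{i+1})."""
    return float(np.sum(np.asarray(H)[:-1] * np.diff(np.asarray(B))))

def quadratic_variation(B):
    # Discrete analogue of (2.3) in d = 1, for a path with B_0 = 0:
    # <B>_T = B_T^2 - 2 * int_0^T B dB.  This is an exact algebraic identity,
    # equal to the sum of squared increments of the path.
    B = np.asarray(B)
    return B[-1] ** 2 - 2.0 * pc_integral(B, B)
```

On any sampled path starting from zero, `quadratic_variation` coincides exactly with the sum of squared increments, which is the discrete quadratic variation.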
Like the previous result, this generalization is also an immediate consequence of the previous results. We need the following notation: for $t \in [0,1]$ and $P \in \mathcal{P}$,

$\mathcal{P}(t,P) := \{P' \in \mathcal{P} : P' = P \text{ on } \mathcal{F}_t\}.$   (3.2)
Notice that for any $P' \in \mathcal{P}(t,P)$ and $\xi \in L^1_G$, the random variable $E^{P'}[\xi \mid \mathcal{F}_t]$ is defined both $P$- and $P'$-almost surely. Also recall that $\operatorname{ess\,sup}^P$ is the essential supremum of a class of $P$-almost surely defined random variables. Clearly, it is also defined $P$-almost surely (see Definition A.1 on page 323 in [9]). In particular, for $t \in [0,1]$, we may define

$\operatorname{ess\,sup}^P_{P' \in \mathcal{P}(t,P)} E^{P'}[\xi \mid \mathcal{F}_t]$   (3.3)

as a $P$-almost sure random variable. We now have the following characterization of the G-conditional expectation.

Proposition 3.4 For any $\xi \in L^1_G$, $t \in [0,1]$, and $P \in \mathcal{P}$,

$E^G_t[\xi] = \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(t,P)} E^{P'}[\xi \mid \mathcal{F}_t]$, $P$-a.s.

Moreover, an adapted, $L^1_G$-valued process $M$ is a G-martingale if and only if it satisfies the following dynamic programming principle for all $0 \le s \le t \le 1$ and $P \in \mathcal{P}$,

$M_s = \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}[M_t \mid \mathcal{F}_s]$, $P$-a.s.   (3.4)

Proof. The characterization of the conditional expectation follows directly from [4] for $\xi \in L_{ip}$. Then, a closure argument proves the equality for all $\xi \in L^1_G$. The martingale property is a direct consequence of the tower property of the G-conditional expectation, as proved in [14], and the above formula for the conditional expectation.

Remark 3.5 In their classical paper [7], El Karoui & Jeanblanc consider a very general stochastic optimal control problem. Their results in our context imply that $M_s := \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}[M_t \mid \mathcal{F}_s]$ is a $P$-supermartingale for all $P \in \mathcal{P}$. Moreover, $\hat{P} \in \mathcal{P}$ is a maximizer if and only if $M$ is a $\hat{P}$-martingale. While this result provides a characterization of the optimal measure, it does not provide a universal hedge. More precisely, their approach provides an optimal control which is defined only for the optimal measure and on its support. Indeed, the supermartingale property of $M$ implies that for each $P \in \mathcal{P}$ there are a non-decreasing process $K^P$ and an integrand $H^P$ so that

$M_t = M_0 + \int_0^t H^P_s\,dB_s - K^P_t$, $P$-a.s.

However, aggregating these processes into one universally defined $K$ and $H$ is not immediate. In the standard Markovian context, this problem can be solved directly.
However, it is exactly the non-Markovian generalization that motivates this paper and [3, 15, 14]. This interesting question of aggregation is further discussed in Remark 4.3 below.
4 Spaces and Norms

The particular case $t = 0$ in (3.4) gives the following dual characterization proved in [4],

$E^G[\xi] = \sup_{P \in \mathcal{P}} E^P[\xi].$

The above results enable us to extend the definition of G-expectations and G-martingales to a possibly larger class of random variables. In particular, this extension has the advantage of not referring to the partial differential equation (2.1). We will not develop this theory here. However, in view of the results and the norms used in the theory of BSDEs, we introduce the following function spaces. For $p \ge 1$ and an $\mathcal{F}_1$-measurable, non-negative map $\xi$, we set

$\|\xi\|^p_{\mathcal{L}^p_{\mathcal{P}}} := \sup_{P \in \mathcal{P}} E^P\Big[\operatorname{ess\,sup}_{t \in [0,1]} \big(M^P_t(\xi)\big)^p\Big]$, where $M^P_t(\xi) := \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(t,P)} E^{P'}[\xi \mid \mathcal{F}_t].$

In fact, the process $M^P_t(\xi)$ is a $P$-supermartingale. Therefore it admits a càdlàg version, and thus the term $\sup_{t \in [0,1]} (M^P_t(\xi))^p$ is measurable. However, in this paper we do not wish to prove the supermartingale property, and that is the reason for defining the above norm through the random variable $\operatorname{ess\,sup}_{t \in [0,1]} (M^P_t(\xi))$, which is by definition measurable. We next define

$\mathcal{L}^p_{\mathcal{P}} := \big\{\xi : \|\xi\|_{\mathcal{L}^p_{\mathcal{P}}} < \infty\big\},$   (4.1)

$L^p_{\mathcal{P}} :=$ closure of $L_{ip}$ under the norm $\|\cdot\|_{\mathcal{L}^p_{\mathcal{P}}}$.

Notice that if $\xi \in L^1_G$, then $M^P_t(\xi) = E^G_t[\xi]$ for every $P \in \mathcal{P}$. Moreover, $\|\xi\|_{\mathcal{L}^p_{\mathcal{P}}} = \|\xi^*\|_{L^p_G}$ whenever $\xi^* := \sup_{t \in [0,1]} E^G_t[|\xi|]$ is in the class $L^p_G$.

In the Appendix, we compare the integrability classes defined by Peng [15] and the above spaces. The connection is related to the Doob maximal inequalities in the setting of G-expectations. In particular, we prove the following.

Lemma 4.1 $\bigcup_{p > 2} L^p_G \subset L^2_{\mathcal{P}} \subset \mathcal{L}^2_{\mathcal{P}}$ and $L^2_{\mathcal{P}} \subset L^2_G$. Moreover, the final inclusion is strict.

We also define the following norms for processes. As usual, $1 \le p < \infty$. For an $\mathbb{F}$-adapted integrand $H$ and a stochastic process $Y$, we set

$\|H\|^p_{\mathcal{H}^p_{\mathcal{P}}} := \sup_{P \in \mathcal{P}} E^P\Big[\Big(\int_0^1 H_t^{\mathrm{T}}\, d\langle B \rangle_t\, H_t\Big)^{p/2}\Big],$   (4.2)

$\|Y\|^p_{S^p_{\mathcal{P}}} := \sup_{P \in \mathcal{P}} E^P\Big[\operatorname{ess\,sup}_{0 \le t \le 1} |Y_t|^p\Big].$   (4.3)
If $Y_t = E^G_t[\xi]$ for some $\xi \in L^1_G$, then $\|Y\|^p_{S^p_{\mathcal{P}}} = \|\xi\|^p_{\mathcal{L}^p_{\mathcal{P}}}$. This identity also motivates the definition of the norm $\|\cdot\|_{\mathcal{L}^p_{\mathcal{P}}}$. Moreover, when the lower bound $\underline{a}$ in (3.1) is non-degenerate, the $\mathcal{H}^p_{\mathcal{P}}$ norm is equivalent to the norm used in [4, 14]:

$\sup_{P \in \mathcal{P}} E^P\Big[\Big(\int_0^1 |H_t|^2\,dt\Big)^{p/2}\Big].$

In analogy with the standard notation in stochastic calculus, we define the following spaces.

Definition 4.2 Let $p \in [1, \infty)$ and $\mathcal{P}$ be as in Section 3.
- $\mathcal{H}^p_{\mathcal{P}}$ is the set of all integrands with a finite $\mathcal{H}^p_{\mathcal{P}}$-norm;
- $H^p_{\mathcal{P}}$ is the closure of $H^{p,0}_G$ under the norm $\|\cdot\|_{\mathcal{H}^p_{\mathcal{P}}}$;
- $S^p_{\mathcal{P}}$ is the set of all continuous processes with finite $S^p_{\mathcal{P}}$-norm;
- $I^p_{\mathcal{P}}$ is the subset of $S^p_{\mathcal{P}}$ of non-decreasing processes with $X_0 = 0$.

Clearly all of the above spaces are complete and therefore Banach spaces. Notice that $\|H\|_{\mathcal{H}^p_{\mathcal{P}}} \le C \|H\|_{H^p_G}$ for $H \in H^{p,0}_G$; then it is clear that $H^p_G \subset H^p_{\mathcal{P}}$, and thus $H^p_{\mathcal{P}}$ is also the closure of $H^p_G$ under the norm $\|\cdot\|_{\mathcal{H}^p_{\mathcal{P}}}$.

Remark 4.3 Given $\xi \in \mathcal{L}^1_{\mathcal{P}}$ (but not necessarily in $L^1_G$) and a stopping time $\tau$, it is not straightforward to define the conditional G-expectation $E^G_\tau[\xi]$ as in (3.3). Indeed, for $P \in \mathcal{P}$, set

$M^P_\tau := \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(\tau,P)} E^{P'}_\tau[\xi]$, $P$-a.s.

Then, to define the conditional expectation, we need to aggregate this family of random variables $\{M^P_\tau,\, P \in \mathcal{P}\}$ into one universally defined random variable. A similar problem arises in the definition of a stochastic integral for a given integrand $H \in \mathcal{H}^2_{\mathcal{P}}$. Again, for $P \in \mathcal{P}$, we set $M^P_t := \int_0^t H_s\,dB_s$, $P$-a.s. Then, to define the G-stochastic integral of $H$, we need to aggregate this family of stochastic processes. The issue of aggregation is an interesting technical question. Generally, a solution is given by imposing regularity on the random variables. Indeed, for all random variables which are in $L^p_G$, one can define the universal version through a closure argument. However, there are other alternatives, and a comprehensive study of this question is given in our accompanying paper [17].
Finally, we recall that when the integrand $H$ has the additional regularity of being a càdlàg process, Karandikar [8] defines the stochastic integral $M_t := \int_0^t H_s\,dB_s$ pathwise. This definition can then be used as the aggregating process.
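Before turning to the representation theorem, the dual formula $E^G[\xi] = \sup_{P \in \mathcal{P}} E^P[\xi]$ recalled at the beginning of this section can be probed numerically. The sketch below (our own illustration) restricts the supremum to the constant-volatility measures, under which $B_1$ is centered Gaussian with variance $s \in [\underline{a}, \overline{a}]$; this gives a lower bound for the full supremum, and for payoffs $\varphi(B_1)$ with $\varphi$ convex or concave it already matches the PDE value of Section 2.

```python
import numpy as np

def dual_value(phi, a_low=0.25, a_high=1.0, n_s=50):
    """Supremum of E^P[phi(B_1)] over the constant-volatility subfamily of P:
    under such a measure B_1 ~ N(0, s) with a_low <= s <= a_high, so each
    expectation is a 1-d Gaussian integral, evaluated here on a fixed grid."""
    z = np.linspace(-8.0, 8.0, 4001)
    dz = z[1] - z[0]
    weights = np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi) * dz
    values = [np.sum(phi(np.sqrt(s) * z) * weights)
              for s in np.linspace(a_low, a_high, n_s)]
    return max(values)
```

For $\varphi(x) = x^2$ this returns approximately $\overline{a} = 1$, and for $\varphi(x) = -x^2$ approximately $-\underline{a} = -1/4$, in agreement with the nonlinear heat equation computation.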
5 The martingale representation theorem

To motivate the main result of this paper, we first consider the case $\xi = \varphi(B_1)$ for some smooth, bounded function $\varphi$. In this case, as in Peng [13, 14], a formal construction can be derived by simply using Itô's formula. Now suppose that the solution $u(t,x)$ of (2.1) is smooth. (Indeed, we can approximate the equation (2.1) so that the approximating equation admits smooth solutions, as proved by Krylov [10]; this is done in the Appendix.) Then, we set

$Y_t := u(t, B_t) = E^G_t[\xi]$, $H_t := Du(t, B_t)$, and

$K_t := \int_0^t \Big( G\big(D^2 u(s, B_s)\big) - \frac{1}{2} \operatorname{tr}\big[\hat{a}_s D^2 u(s, B_s)\big] \Big)\,ds$, where $\hat{a}_t := \frac{d\langle B \rangle_t}{dt}$, q.s.

Using (2.1), (3.1) and the definition of the nonlinearity $G$, one may directly check that

$dY_t = -dK_t + H_t\,dB_t$, and $dK_t \ge 0$, q.s.

Also, the characterization of G-martingales in Proposition 3.4 and the definition of the nonlinearity $G$ imply that $-K$ is a G-martingale. Hence, for the random variable $\xi = \varphi(B_1)$, we have the martingale representation. More importantly, this example also shows that, in general, a non-decreasing process $K$ is always present in this representation.

The above construction is also the basic step in our proof. Indeed, essentially for almost all random variables in $L_{ip}$, the above construction proves the result. We then prove that stochastic integrals and non-increasing G-martingales form closed sets under the appropriate norms as defined in the preceding section. Finally, these results allow us to prove the result by a closure argument.

5.1 Main results

We first state the main result. Recall that the function spaces are defined in Definition 4.2.

Theorem 5.1 For every $\xi \in L^2_{\mathcal{P}}$, the conditional G-expectation process $Y_t := E^G_t[\xi]$ is in $S^2_{\mathcal{P}}$, and there exist unique $H \in H^2_{\mathcal{P}}$, $K \in I^2_{\mathcal{P}}$ so that $N := -K$ is a G-martingale and, for every $t \in [0,1]$,

$Y_t = \xi - \int_t^1 H_s\,dB_s + K_1 - K_t = E^G[\xi] + \int_0^t H_s\,dB_s - K_t$, q.s.   (5.1)

In particular, the stochastic integrals are defined both as G-stochastic integrals and also quasi-surely.
Moreover, the following estimate is satisfied with a universal constant $C$,

$\|Y\|_{S^2_{\mathcal{P}}} + \|H\|_{\mathcal{H}^2_{\mathcal{P}}} + \|K_1\|_{\mathcal{L}^2_{\mathcal{P}}} \le C\, \|\xi\|_{\mathcal{L}^2_{\mathcal{P}}}.$   (5.2)
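The non-decrease of $K$ in the motivating construction above reduces to the pointwise inequality $G(\gamma) \ge \frac{1}{2}\operatorname{tr}[\hat{a}\gamma]$ for every admissible $\hat{a}$, which holds because $\hat{a}$ lies in the set over which the supremum in (1.1) is taken. A one-dimensional numerical spot-check of this inequality (illustrative only; the bounds $1/4$ and $1$ are our own choices):

```python
import random

def G1(gamma, a_low, a_high):
    # 1-d nonlinearity (1.1): G(gamma) = 0.5 * sup_{a_low <= a <= a_high} a*gamma
    return 0.5 * (a_high * max(gamma, 0.0) + a_low * min(gamma, 0.0))

def dK_integrand(gamma, a_hat, a_low, a_high):
    # Integrand of K_t in the motivating construction:
    # G(D^2 u) - (1/2) a_hat * D^2 u, with a_hat = d<B>_t/dt in [a_low, a_high]
    # by (3.1).
    return G1(gamma, a_low, a_high) - 0.5 * a_hat * gamma

random.seed(0)
worst = min(dK_integrand(random.uniform(-10.0, 10.0),
                         random.uniform(0.25, 1.0), 0.25, 1.0)
            for _ in range(10000))
```

Every sampled value of the integrand is non-negative, consistent with $dK_t \ge 0$ quasi-surely.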
The proof of the above theorem will be completed in several lemmas below. In the above theorem, the integrand $H$ is not only in the class $\mathcal{H}^2_{\mathcal{P}}$ but also in the closure $H^2_{\mathcal{P}}$ of $H^{2,0}_G$ under the norm $\|\cdot\|_{\mathcal{H}^2_{\mathcal{P}}}$. Indeed, this fact implies that the stochastic integral is well defined quasi-surely, as is shown in the next subsection.

The following is an immediate corollary of the above martingale representation.

Corollary 5.2 A G-martingale is symmetric if and only if the process $K$ in the representation (5.1) is identically equal to zero.

In addition to the estimate (5.2), an estimate of the differences of the solutions is known to be an important tool. Let $\xi^1, \xi^2 \in L^2_{\mathcal{P}}$ and let $(Y^i, H^i, K^i)$ be the processes in the corresponding martingale representations. We set $\delta\xi := \xi^1 - \xi^2$, $\delta Y := Y^1 - Y^2$, $\delta H := H^1 - H^2$ and $\delta K := K^1 - K^2$.

Theorem 5.3 There exists a universal constant $C$ so that

$\|\delta Y\|_{S^2_{\mathcal{P}}} \le \|\delta\xi\|_{\mathcal{L}^2_{\mathcal{P}}},$

$\|\delta H\|_{\mathcal{H}^2_{\mathcal{P}}} + \|\delta K\|_{S^2_{\mathcal{P}}} \le C\Big[\|\delta\xi\|_{\mathcal{L}^2_{\mathcal{P}}} + \big(\|\xi^1\|_{\mathcal{L}^2_{\mathcal{P}}} + \|\xi^2\|_{\mathcal{L}^2_{\mathcal{P}}}\big)^{1/2}\, \|\delta\xi\|^{1/2}_{\mathcal{L}^2_{\mathcal{P}}}\Big].$

5.2 Stochastic Integral and Symmetric G-martingales

As discussed in Remark 4.3, for an integrand $H \in \mathcal{H}^2_{\mathcal{P}}$ it is not immediate to define the stochastic integral $\int_0^t H_s\,dB_s$ quasi-surely. However, the stochastic integral is defined in [15] for integrands $H \in H^{2,0}_G$. Then, for integrands in $H^2_{\mathcal{P}}$, a closure argument can be used to construct the stochastic integral quasi-surely. (Recall that $H^2_{\mathcal{P}}$ is the closure of $H^2_G$ under the norm $\|\cdot\|_{\mathcal{H}^2_{\mathcal{P}}}$.)

Theorem 5.4 For any $H \in H^2_{\mathcal{P}}$, the stochastic integral $\int_0^t H_s\,dB_s$ exists quasi-surely. Moreover, the stochastic integral satisfies the Burkholder-Davis-Gundy type inequality

$\|H\|_{\mathcal{H}^2_{\mathcal{P}}} \le \Big\| \int_0^\cdot H_s\,dB_s \Big\|_{S^2_{\mathcal{P}}} \le 2\, \|H\|_{\mathcal{H}^2_{\mathcal{P}}}.$   (5.3)

Proof. Let $H \in H^2_{\mathcal{P}}$. Then, there is a sequence $\{H^n\}_n \subset H^2_G$ so that $\|H^n - H\|_{\mathcal{H}^2_{\mathcal{P}}}$ converges to zero as $n$ tends to infinity. By relabeling the sequence, we may assume that $\|H^n - H\|_{\mathcal{H}^2_{\mathcal{P}}} \le 2^{-n}$ for every $n$. Moreover, since $H \in \mathcal{H}^2_{\mathcal{P}}$, for every $P \in \mathcal{P}$,

$M^P_t := \int_0^t H_s\,dB_s$, $t \in [0,1],$

is $P$-almost surely well-defined. Also, since $H^n \in H^2_G$, the G-stochastic integral

$M^n_t := \int_0^t H^n_s\,dB_s$, $t \in [0,1],$
is also quasi-surely defined. We now have to prove that the family $\{M^P,\, P \in \mathcal{P}\}$ can be aggregated into a universal process. For this, we define

$M_t := \limsup_n M^n_t$, $t \in [0,1].$

Notice that $M$ is quasi-surely defined. We continue by showing that $M = M^P$, $P$-almost surely, for every $P \in \mathcal{P}$. Indeed, for any $P \in \mathcal{P}$, we use the Burkholder-Davis-Gundy inequality to obtain

$E^P\Big[\sup_{t \le 1} |M^n_t - M^P_t|^2\Big] = E^P\Big[\sup_{t \le 1} \Big|\int_0^t (H^n_s - H_s)\,dB_s\Big|^2\Big] \le 4\, E^P\Big[\int_0^1 \big|\hat{a}_s^{1/2}(H^n_s - H_s)\big|^2\,ds\Big] \le 4\, \|H^n - H\|^2_{\mathcal{H}^2_{\mathcal{P}}} \le 2^{2-2n}.$

We then directly estimate that

$\sum_{n=1}^\infty P\Big[\sup_{t \le 1} |M^n_t - M^P_t| > n^{-2}\Big] \le \sum_{n=1}^\infty n^4\, E^P\Big[\sup_{t \le 1} |M^n_t - M^P_t|^2\Big] < \infty.$

By the Borel-Cantelli lemma,

$\lim_n \sup_{t \le 1} |M^n_t - M^P_t| = 0$, $P$-a.s.

This implies that $M = M^P$, $P$-almost surely. Since this holds for every $P \in \mathcal{P}$, we conclude that the process $M$ is an aggregating process. Hence the stochastic integral is defined. The Burkholder-Davis-Gundy inequalities (5.3) follow directly from the definitions.

We close this subsection by stating the following result for symmetric G-martingales, which is an immediate consequence of the main results.

Theorem 5.5 The following are equivalent:
(i) $M$ is a $P$-martingale for every $P \in \mathcal{P}$;
(ii) $M$ is a symmetric G-martingale;
(iii) for any G-martingale $N$, both $N + M$ and $N - M$ are also G-martingales;
(iv) $M$ is a G-martingale satisfying $E^G[-M_t] = -E^G[M_t]$ for any $t$;
(v) there exists $H \in H^2_{\mathcal{P}}$ so that $M_t = M_0 + \int_0^t H_s\,dB_s$.
Recall that $I^2_{\mathcal{P}}$ is defined in Definition 4.2 as the set of all non-decreasing, continuous processes with finite $S^2_{\mathcal{P}}$-norm and initial value $0$. For $(H, K) \in H^2_{\mathcal{P}} \times I^2_{\mathcal{P}}$, define a process by

$M_t := M_0 + \int_0^t H_s\,dB_s - K_t.$   (5.4)

An immediate corollary of the above theorem is the following.

Corollary 5.6 The process $M$ defined in (5.4) is a G-martingale if and only if the non-increasing process $-K$ is a G-martingale.

5.3 Non-increasing G-martingales

In this section we show that the set of non-increasing G-martingales is closed. Indeed, let $MI^2_{\mathcal{P}}$ be the set of all processes $K \in I^2_{\mathcal{P}}$ such that $-K$ is a G-martingale. Then we have the following closure result, which is similar to Theorem 5.4.

Theorem 5.7 The space $MI^2_{\mathcal{P}}$ is closed under the norm $\|\cdot\|_{S^2_{\mathcal{P}}}$.

Proof. Consider a sequence $K^n \in MI^2_{\mathcal{P}}$ converging to a process $K \in I^2_{\mathcal{P}}$ in the norm $\|\cdot\|_{S^2_{\mathcal{P}}}$. We claim that $-K$ is also a G-martingale and therefore $K \in MI^2_{\mathcal{P}}$. Indeed, for every $0 \le s \le t \le 1$, set $A_t := K_t - K_s$ and $A^n_t := K^n_t - K^n_s$. Then, by the martingale property of the sequence, for every $n$ and $P \in \mathcal{P}$, we have

$\operatorname{ess\,inf}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s[A^n_t] = 0$, $P$-a.s.

Moreover, $P$-a.s.,

$\operatorname{ess\,inf}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s[A_t] \le \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s|A_t - A^n_t| + \operatorname{ess\,inf}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s[A^n_t] = \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s|A_t - A^n_t|.$

The following can be shown directly from the definitions:

$\sup_{P \in \mathcal{P}} E^P\Big[\operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s|A_t - A^n_t|\Big] \le \|A - A^n\|_{S^2_{\mathcal{P}}}.$

Hence, by the convergence of $\|A - A^n\|_{S^2_{\mathcal{P}}}$ to zero as $n$ tends to infinity, we conclude that

$\lim_n \operatorname{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} E^{P'}_s|A_t - A^n_t| = 0$, $P$-a.s.

Since $0 \le s \le t \le 1$ and $P \in \mathcal{P}$ are arbitrary, $-K$ is also a G-martingale.
5.4 Estimates

For $(H, K) \in H^2_{\mathcal{P}} \times I^2_{\mathcal{P}}$, let $M$ be defined as in (5.4). In this subsection, we prove certain estimates for $H$ and $K$ in terms of the process $M$. These estimates are similar to those obtained for reflected backward stochastic differential equations in [5].

Proposition 5.8 Let $H$, $K$, $M$ be as in (5.4). There exists a constant $C$ depending only on the dimension so that

$\|H\|_{\mathcal{H}^2_{\mathcal{P}}} + \|K\|_{S^2_{\mathcal{P}}} \le C\, \|M\|_{S^2_{\mathcal{P}}}.$

Proof. We directly calculate that

$d|M_t|^2 = 2 M_t H_t\,dB_t - 2 M_t\,dK_t + H_t^{\mathrm{T}}\, d\langle B \rangle_t\, H_t.$

We integrate over $[t,1]$ to obtain

$|M_t|^2 + \int_t^1 H_s^{\mathrm{T}}\, d\langle B \rangle_s\, H_s = |M_1|^2 + 2\int_t^1 M_s\,dK_s - 2\int_t^1 M_s H_s\,dB_s.$

We then take the expected value under an arbitrary $P \in \mathcal{P}$ to arrive at

$E^P\Big[|M_t|^2 + \int_t^1 H_s^{\mathrm{T}}\, d\langle B \rangle_s\, H_s\Big] = E^P\Big[|M_1|^2 + 2\int_t^1 M_s\,dK_s\Big].$

Since $dK_t \ge 0$, for any $\varepsilon > 0$ we have the following estimate,

$E^P\Big[|M_t|^2 + \int_t^1 H_s^{\mathrm{T}}\, d\langle B \rangle_s\, H_s\Big] \le E^P\Big[|M_1|^2 + 2 \sup_{t \in [0,1]} |M_t|\, K_1\Big] \le (1 + \varepsilon^{-1})\, E^P\Big[\sup_{t \in [0,1]} |M_t|^2\Big] + \varepsilon\, E^P[K_1^2].$   (5.5)

Next, we estimate $K_1$. Recall that $K_1 = K_1 - K_0$. By the definition of $M_t$,

$K_1^2 = \Big(M_1 - M_0 - \int_0^1 H_s\,dB_s\Big)^2 \le 3\Big(|M_1|^2 + |M_0|^2 + \Big|\int_0^1 H_s\,dB_s\Big|^2\Big).$

Hence,

$E^P[K_1^2] \le 3\, E^P\big[|M_1|^2 + |M_0|^2\big] + 3\, E^P\Big[\int_0^1 H_t^{\mathrm{T}}\, d\langle B \rangle_t\, H_t\Big].$

We now use (5.5) with $t = 0$ and $\varepsilon = \frac{1}{6}$. The result is

$E^P[K_1^2] \le 27\, E^P\Big[\sup_{t \in [0,1]} |M_t|^2\Big] + \frac{1}{2}\, E^P[K_1^2].$
Hence,

$E^P[K_1^2] \le 54\, E^P\Big[\sup_{t \in [0,1]} |M_t|^2\Big].$

This, together with (5.5) and the definitions of the norms, implies the result.

Next, we prove an estimate for differences. So, for $(H^i, K^i) \in H^2_{\mathcal{P}} \times I^2_{\mathcal{P}}$, $i = 1, 2$, let $M^i$ be defined as in (5.4). As before, let $\delta M := M^1 - M^2$, $\delta H := H^1 - H^2$, $\delta K := K^1 - K^2$.

Proposition 5.9 There exists a constant $C$ depending only on the dimension so that

$\|\delta H\|^2_{\mathcal{H}^2_{\mathcal{P}}} + \|\delta K\|^2_{S^2_{\mathcal{P}}} \le C\Big[\|\delta M\|^2_{S^2_{\mathcal{P}}} + \|\delta M\|_{S^2_{\mathcal{P}}}\big(\|K^1\|_{S^2_{\mathcal{P}}} + \|K^2\|_{S^2_{\mathcal{P}}}\big)\Big].$   (5.6)

Proof. The terms $\|K^i\|_{S^2_{\mathcal{P}}}$ in the above inequality can be estimated using Proposition 5.8. The arguments are very similar to the proof of Proposition 5.8. The only difference is the fact that $\delta K$ is no longer monotone. We directly compute that

$\delta M_t = \delta M_0 + \int_0^t \delta H_s\,dB_s - \delta K_t.$

Then we proceed as in the proof of the previous proposition to arrive at

$E^P\Big[|\delta M_t|^2 + \int_t^1 \delta H_s^{\mathrm{T}}\, d\langle B \rangle_s\, \delta H_s\Big] \le E^P\big[|\delta M_1|^2\big] + 2\, E^P\Big[\int_t^1 \delta M_s\, d(\delta K)_s\Big].$

The last integral term is directly estimated as follows:

$E^P\Big[\int_t^1 \delta M_s\, d(\delta K)_s\Big] \le E^P\Big[\Big(\sup_{t \in [0,1]} |\delta M_t|\Big)\Big(\sup_{t \in [0,1]} \big[K^1_t + K^2_t\big]\Big)\Big] \le E^P\Big[\sup_{t \in [0,1]} |\delta M_t|^2\Big]^{1/2} \sum_{i=1}^2 E^P\Big[\sup_{t \in [0,1]} |K^i_t|^2\Big]^{1/2} \le \|\delta M\|_{S^2_{\mathcal{P}}}\big(\|K^1\|_{S^2_{\mathcal{P}}} + \|K^2\|_{S^2_{\mathcal{P}}}\big).$

The estimate of $\|\delta K\|_{S^2_{\mathcal{P}}}$ is obtained exactly as in the proof of Proposition 5.8.

5.5 Proof of Theorem 5.1

We prove uniqueness first. Suppose that there are two pairs $(H^i, K^i)$ satisfying (5.1). Then, we can use Proposition 5.9 with $M^i_t = Y_t = E^G_t[\xi]$. In particular, $\delta M \equiv 0$. By (5.6), we conclude that $\|\delta H\|_{\mathcal{H}^2_{\mathcal{P}}} = \|\delta K\|_{S^2_{\mathcal{P}}} = 0$.

For the existence, let $M$ be the subset of $L^2_{\mathcal{P}}$ of random variables $\xi$ for which the martingale representation (5.1) holds. We will prove the result by showing that $M$ is closed in $L^2_{\mathcal{P}}$ and that $L_{ip} \subset M$. The second statement is proved in the Appendix by an approximation argument.
17 This is roposition 6.1. Then for ξ L 2 these two statements imply the existence of (H, K) as L 2 is in the closure of L ip under the norm L 2. To show that M is closed, consider a sequence ξ n M converging to ξ L 2. Since ξ n M, there are H n H 2 G and Kn I 2 so that (5.1) holds for each n and N n := K n is a continuous, non-increasing G-martingale. We now use the estimate (5.6) with M 1 = Y n and M 2 = Y m for arbitrary n and m. The identity Yt n of the conditional expectation E G t imply that for every t [, 1, Y n t Yt m 2 E G t = E G t [ξ n together with the definition [ ξ n ξ m 2. Hence the definition of the norm L 2 yield, Y n Y m S 2 ξ n ξ m L 2. We now use the results of ropositions 5.8 and 5.9 with M 1 = Y n and M 2 = Y m. The roposition 5.8 yields for each n, We use this in (5.6). The result is K n S 2 ξ n L 2 c := sup ξ m L 2 <. m [ H n H m 2 H + K n K m 2 2 S C ξ n ξ m 2 2 L 2 + 2c ξ n ξ m L 2. Hence {H n } n is a Cauchy sequence in H G 2. Therefore by the definition of H G 2, we know that there is a limit H H G 2. Moreover, by (5.3) the corresponding stochastic integrals converge in S 2. Also {Kn } n is a Cauchy sequence in S 2. By Theorem 5.7, we conclude that there is a limit K I 2 so that N := K is a G-martingale. Since (Y n, H n, K n ) satisfies (5.1) with final data Y1 n = ξn, we conclude that the limit processes (Y, H, K) also satisfies (5.1) with final data Y 1 = ξ. Hence M is closed under the norm L roof of Theorem 5.3 Since Yt i = Et G [ξ i, the dual representation of the G-conditional expectation yield that for each t [, 1, δy t = Et G [ξ 1 Et G [ξ 2 Et G [ ξ 1 ξ 2. Hence, We now use roposition 5.9. The result is δy S 2 δξ L 2. δh H 2 + δk S 2 C [ δy S 2 + δy 1 2 S 2 ( ) K K 2 1 S 2 2. S 2 We now use the estimate (5.2) in the above inequality, together with the fact that ξ 2 S 2 ξ 2 S 2 δξ S 2, to complete the proof of the Theorem. 17
6 Appendix

In this Appendix, we construct smooth approximations of the partial differential equations (2.1), (2.2), and study the properties of the integrability class $\mathbb L^2$.

6.1 Approximation

The main goal of this subsection is to construct a smooth approximation of solutions of (2.2). We require smoothness of these solutions in order to be able to apply the Itô rule. The first obstacle to regularity is the possible degeneracy of the nonlinearity $G$, or equivalently, the possible degeneracy of the lower bound $\underline a$. Therefore, we do not expect the equation to regularize the final data. However, even in this case the solution remains twice differentiable provided that the final data has this regularity. The second difficulty in proving smoothness emanates from the fact that the equation (2.2) is solved over several time intervals, and in each interval $(t_i, t_{i+1})$ the value $B_{t_i}$ enters the equation as a parameter. Differentiability with respect to this type of parameter is harder to prove.

Given these difficulties, we approximate the equation as follows. For $\epsilon \in (0,1]$, set $\underline a^\epsilon := \underline a - \epsilon I$, so that
\[
\bar G^\epsilon(\gamma) := \sup\Big\{ \tfrac12\, a : \gamma \;:\; \underline a^\epsilon \le a \le \overline a \Big\},
\]
where we use the short notation $a : \gamma := \mathrm{trace}(a\gamma)$ for two symmetric matrices $a, \gamma$. We then mollify $\bar G^\epsilon$. Indeed, let $\eta : \mathcal S_d \to [0,1]$ be a regular bump function, i.e., the support of $\eta$ is the unit ball $B_1$ and $\int \eta\, d\gamma' = 1$. We then define
\[
G^\epsilon(\gamma) := \int_{B_1} \bar G^\epsilon(\gamma + \epsilon\gamma')\, \eta(\gamma')\, d\gamma'.
\]
It can be shown that
\[
\tfrac12\, \underline a^\epsilon : \gamma' \le G^\epsilon(\gamma + \gamma') - G^\epsilon(\gamma) \le \tfrac12\, \overline a : \gamma' \qquad \text{for all } \gamma' \ge 0,
\]
and that there is a constant $C$ satisfying $|G^\epsilon(\gamma) - \bar G^\epsilon(\gamma)| \le C\epsilon$. Moreover, $G^\epsilon$ is smooth and convex. Thus we can define the Legendre transform of $G^\epsilon$ by
\[
L^\epsilon(a) := \sup_{\gamma \in \mathcal S_d}\Big\{ \tfrac12\, a : \gamma - G^\epsilon(\gamma) \Big\}.
\]
Then $L^\epsilon(a)$ is finite only if $\underline a^\epsilon \le a \le \overline a$. Also, $-C\epsilon \le L^\epsilon(a)$ for all $\underline a^\epsilon \le a \le \overline a$, and
\[
G^\epsilon(\gamma) = \sup_{\underline a^\epsilon \le a \le \overline a}\Big\{ \tfrac12\, a : \gamma - L^\epsilon(a) \Big\}.
\]
We are now ready to prove the approximation result. Recall that $\mathcal M$ is the subset of $\mathbb L^2$ for which the representation (5.1) holds.
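Before turning to the proof, the mollification above can be sanity-checked numerically. The sketch below works in dimension $d = 1$, where $\bar G^\epsilon(\gamma) = \tfrac12 \sup\{a\gamma : \underline a - \epsilon \le a \le \overline a\}$ is piecewise linear with a kink at $\gamma = 0$. The bound values, the bump function, and the grids are illustrative choices, not taken from the paper; the code only verifies that the mollified $G^\epsilon$ stays within order $\epsilon$ of $\bar G^\epsilon$ while being non-decreasing and convex.

```python
import numpy as np

# Illustrative values only (not from the paper): bounds a_lo <= a <= a_hi and epsilon.
a_lo, a_hi, eps = 1.0, 2.0, 0.1
a_eps = a_lo - eps  # lowered bound: underline-a^eps = underline-a - eps (here d = 1)

def G_bar_eps(gamma):
    # In d = 1, G_bar^eps(gamma) = (1/2) sup{a*gamma : a_eps <= a <= a_hi},
    # attained at an endpoint depending on the sign of gamma.
    return 0.5 * np.where(gamma >= 0.0, a_hi * gamma, a_eps * gamma)

def trapezoid(f_vals, x):
    # plain trapezoidal rule, to keep the sketch free of version-specific helpers
    return float(np.sum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(x)))

# Bump function eta supported on (-1, 1), normalized to integrate to 1.
xs = np.linspace(-0.999, 0.999, 2001)
eta = np.exp(-1.0 / (1.0 - xs ** 2))
eta /= trapezoid(eta, xs)

def G_eps(gamma):
    # Mollification: G^eps(gamma) = integral of G_bar^eps(gamma + eps*g') eta(g') dg'.
    return trapezoid(G_bar_eps(gamma + eps * xs) * eta, xs)

gammas = np.linspace(-5.0, 5.0, 101)
vals = np.array([G_eps(g) for g in gammas])

# |G^eps - G_bar^eps| <= C*eps: here one can take C = a_hi / 2.
err = np.max(np.abs(vals - G_bar_eps(gammas)))
```

Since `G_eps` is a positively weighted sum of convex, increasing functions of `gamma`, the computed values inherit monotonicity and discrete convexity exactly, up to floating-point rounding.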
Proposition 6.1 $L_{ip} \subset \mathcal M$.

Proof. Let $\xi \in L_{ip}$. Then $\xi = \varphi(B_{t_1}, \ldots, B_{t_n})$ for some bounded Lipschitz function $\varphi$ and $0 \le t_1 \le \ldots \le t_n = 1$. Let $\{v_i\}_{i=1}^n$ be the solutions of (2.2). Then the $v_i$'s are bounded and Lipschitz continuous. Moreover, by the definition of the $G$-expectation,
\[
E^G_t[\xi] = v_i(t, B_{t_1}, \ldots, B_{t_{i-1}}, B_t), \qquad t \in [t_{i-1}, t_i).
\]
We approximate $v_i$ as follows. Let $\varphi^\epsilon$ be a smooth, bounded approximation of $\varphi$ so that $\|\varphi^\epsilon - \varphi\|_\infty$ tends to zero and $\|\varphi^\epsilon\|_\infty \le \|\varphi\|_\infty$. Define $v^\epsilon_i(t, x_1, \ldots, x_i, x)$ recursively, as in the definition of the $G$-expectation in Section 2, with data $\varphi^\epsilon(B_{t_1}, \ldots, B_{t_n})$ and the nonlinearity $G^\epsilon$. Indeed, $v^\epsilon_i$ is the solution of
\[
-\partial_t v^\epsilon_i(t, x_1, \ldots, x_{i-1}, x) - G^\epsilon\big(D^2_x v^\epsilon_i(t, x_1, \ldots, x_{i-1}, x)\big) = 0 \tag{6.1}
\]
on the interval $[t_{i-1}, t_i)$, with final data $v^\epsilon_i(t_i, x_1, \ldots, x_{i-1}, x) = v^\epsilon_{i+1}(t_i, x_1, \ldots, x_{i-1}, x, x)$. In the interval $[t_{n-1}, 1)$, $v^\epsilon_n(t, x_1, \ldots, x_{n-1}, x)$ solves (6.1) with data $v^\epsilon_n(1, x_1, \ldots, x_{n-1}, x) = \varphi^\epsilon(x_1, \ldots, x_{n-1}, x)$.

By the celebrated regularity result of Krylov [10] (Theorem 1, Section 6.3, page 292), $v^\epsilon_i(t, x_1, \ldots, x_{i-1}, x)$ is a smooth function of $(t, x) \in (t_i, t_{i+1}) \times \mathbb R^d$, and it is bounded and Lipschitz in all variables. Moreover, the uniform Lipschitz constant of $\varphi$ is preserved, and for each $i$ we have
\[
\lim_{\epsilon \downarrow 0} \|v^\epsilon_i - v_i\|_\infty = 0, \qquad \sup_{0 < \epsilon \le 1} \|v^\epsilon_i\|_\infty \le \|\varphi\|_\infty.
\]
For $t \in (t_i, t_{i+1})$, we set
\[
M^\epsilon_t := v^\epsilon_i(t, B_{t_1}, \ldots, B_{t_{i-1}}, B_t), \qquad
H^\epsilon_t := D_x v^\epsilon_i(t, B_{t_1}, \ldots, B_{t_{i-1}}, B_t),
\]
\[
dK^\epsilon_t := \Big( G^\epsilon\big(D^2_x v^\epsilon_i(t, B_{t_1}, \ldots, B_{t_{i-1}}, B_t)\big) - \tfrac12\, \mathrm{tr}\big[\hat a_t\, D^2_x v^\epsilon_i(t, B_{t_1}, \ldots, B_{t_{i-1}}, B_t)\big] \Big)\, dt, \qquad K^\epsilon_0 = 0,
\]
so that $dM^\epsilon_t = H^\epsilon_t\, dB_t - dK^\epsilon_t$.

Let $\mathcal P^\epsilon$ be defined exactly as $\mathcal P$, but with the lower bound $\underline a^\epsilon$ in (3.1). Then, by the definition of $G^\epsilon$ and $\mathcal P^\epsilon$, we have that $K^\epsilon$ is non-decreasing, $P$-almost surely for every $P \in \mathcal P^\epsilon$. But also, since $L^\epsilon \ge -C\epsilon$, we have
\[
-C\epsilon \le \sup_{P \in \mathcal P^\epsilon} E^P\big[-K^\epsilon_1\big]. \tag{6.2}
\]
Thus $-K^\epsilon$ is almost a $G$-martingale.

It is clear that $M^\epsilon_t$ converges to $M_t := E^G_t[\xi]$. Also, $H^\epsilon_t$ is uniformly bounded in $\epsilon$, due to the Lipschitz estimate on $v^\epsilon_i$. Hence $H^\epsilon \in \mathcal H^2_G$. Also, Proposition 5.8 yields
\[
\|K^\epsilon\|_{\mathcal S^2_\epsilon} \le C\, \|M^\epsilon\|_{\mathcal S^2_\epsilon} \le C\, \|\xi\|_\infty.
\]
Moreover, by Proposition 5.9 (applied with $\mathcal P^{\epsilon_0}$ instead of $\mathcal P$) we obtain the estimate
\[
\|H^\epsilon - H^{\epsilon'}\|_{\mathcal H^2_{\epsilon_0}} + \|K^\epsilon - K^{\epsilon'}\|_{\mathcal S^2_{\epsilon_0}} \le C(\epsilon_0), \qquad 0 < \epsilon, \epsilon' \le \epsilon_0,
\]
where
\[
C(\epsilon_0) := C \sup_{0 < \epsilon, \epsilon' \le \epsilon_0} \Big( \|M^\epsilon - M^{\epsilon'}\|_{\mathcal S^2_{\epsilon_0}} + \|M^\epsilon - M^{\epsilon'}\|^{1/2}_{\mathcal S^2_{\epsilon_0}} \big( \|K^\epsilon\|_{\mathcal S^2_{\epsilon_0}} + \|K^{\epsilon'}\|_{\mathcal S^2_{\epsilon_0}} \big)^{1/2} \Big)
\le C \sup_{0 < \epsilon, \epsilon' \le \epsilon_0} \Big( \|M^\epsilon - M^{\epsilon'}\|_{\mathcal S^2_{\epsilon_0}} + \|M^\epsilon - M^{\epsilon'}\|^{1/2}_{\mathcal S^2_{\epsilon_0}} \big( 2C\|\xi\|_\infty \big)^{1/2} \Big).
\]
Since $M^\epsilon$ converges uniformly to $M$, $C(\epsilon_0)$ tends to zero with $\epsilon_0$. Therefore $\{(H^\epsilon, K^\epsilon)\}_\epsilon$ is a Cauchy sequence in $\mathcal H^2_{\epsilon_0} \times \mathcal S^2_{\epsilon_0}$ for every $\epsilon_0$. By the closure results, Theorem 5.4 and Theorem 5.7, we conclude that there are $H \in \mathcal H^2_{\epsilon_0}$ and $K \in \mathcal I^2_{\epsilon_0}$ for every $\epsilon_0 > 0$, so that $(M, H, K)$ satisfies (5.4) and
\[
\|H\|_{\mathcal H^2_{\epsilon_0}} + \|K\|_{\mathcal S^2_{\epsilon_0}} \le C\, \|\xi\|_\infty. \tag{6.3}
\]
Clearly $H$ and $K$ are independent of $\epsilon_0$. Since, by definition and by (3.1), $\mathcal P = \bigcap_{\epsilon_0 > 0} \mathcal P^{\epsilon_0}$, we conclude from the uniform estimates (6.3) that $H \in \mathcal H^2$ and $K \in \mathcal I^2$. Moreover, this yields that $H \in \mathcal H^2_G$, and also $-K$ is a $G$-martingale by (6.2). Since $M_t = E^G_t[\xi]$, we have shown that there is a martingale representation for the arbitrary random variable $\xi \in L_{ip}$. Hence $\xi \in \mathcal M$. \(\Box\)

6.2 $L^p$-spaces

In this section we study the properties of the space $\mathbb L^2$. The following result, together with the example that follows it, implies Lemma 4.1.

Lemma 6.2 For every $p > 2$, there exists $C_p$ so that, for $\xi \in L_{ip}$,
\[
\|\xi\|_{\mathbb L^2} \le C_p\, \|\xi\|_{L^p_G}.
\]

Proof. Since $\xi \in L_{ip}$, for each $P \in \mathcal P$,
\[
E^G_t[\xi] = M_t := \operatorname*{ess\,sup}_{P' \in \mathcal P(t, P)} E^{P'}_t[\xi], \qquad P\text{-a.s.}
\]
Set $M^*_t := \sup_{s \le t} M_s$. It suffices to show that $E^P\big[|M^*_1|^2\big] \le C_p\, \|\xi\|^2_{L^p_G}$ for all $P \in \mathcal P$. Now fix $P \in \mathcal P$. Without loss of generality, we may assume $\xi \ge 0$.
Since $M$ is a $P$-supermartingale, it has a càdlàg version under $P$. For any $\lambda > 0$, set
\[
\hat\tau := \hat\tau_\lambda := \inf\{ t : M_t \ge \lambda \}.
\]
Then $\hat\tau$ is an $\mathbb F$-stopping time, and
\[
P(M^*_1 \ge \lambda) = P(\hat\tau \le 1) \le \frac{1}{\lambda}\, E^P\big[ M_{\hat\tau}\, 1_{\{\hat\tau \le 1\}} \big].
\]
By Neveu [11], there exists a sequence $\{P_j,\ j \ge 1\} \subset \mathcal P(\hat\tau, P)$, defined in (3.2), such that
\[
M_{\hat\tau} = \sup_{j \ge 1} E^{P_j}_{\hat\tau}[\xi], \qquad P\text{-a.s.}
\]
For each $n \ge 1$, denote
\[
M^n_{\hat\tau} := \sup_{1 \le j \le n} E^{P_j}_{\hat\tau}[\xi].
\]
Then $M^n_{\hat\tau} \uparrow M_{\hat\tau}$, $P$-a.s. Fix $n$. Set $A_j := \{ M^n_{\hat\tau} = E^{P_j}_{\hat\tau}[\xi] \}$, $1 \le j \le n$, and $\tilde A_1 := A_1$, $\tilde A_j := A_j \setminus \bigcup_{1 \le i < j} A_i$, $j = 2, \ldots, n$. Then $\{\tilde A_j,\ 1 \le j \le n\} \subset \mathcal F^B_{\hat\tau}$ form a partition of $\Omega$. Define $\hat P_n$ by
\[
\hat P_n(E) := \sum_{j=1}^n P_j(E \cap \tilde A_j).
\]
Clearly $\hat P_n$ is a probability measure. Since $P_j \in \mathcal P(\hat\tau, P)$, $j = 1, \ldots, n$, it is also clear that $\hat P_n \in \mathcal P(\hat\tau, P)$. Then we have
\[
M^n_{\hat\tau} = E^{\hat P_n}_{\hat\tau}[\xi].
\]
Since $\hat P_n \in \mathcal P(\hat\tau, P)$, by definition $\hat P_n = P$ on $\mathcal F^B_{\hat\tau}$. Let $q := p/(p-1)$ be the conjugate of $p$. We directly estimate:
\[
E^P\big[ M^n_{\hat\tau}\, 1_{\{\hat\tau \le 1\}} \big]
= E^{\hat P_n}\big[ M^n_{\hat\tau}\, 1_{\{\hat\tau \le 1\}} \big]
= E^{\hat P_n}\big[ E^{\hat P_n}_{\hat\tau}[\xi]\, 1_{\{\hat\tau \le 1\}} \big]
= E^{\hat P_n}\big[ \xi\, 1_{\{\hat\tau \le 1\}} \big]
\le E^{\hat P_n}\big[ \xi^p \big]^{1/p}\, \hat P_n(\hat\tau \le 1)^{1/q}
\le \|\xi\|_{L^p_G}\, P(M^*_1 \ge \lambda)^{1/q}.
\]
We let $n \to \infty$ to arrive at
\[
P(M^*_1 \ge \lambda) \le \frac{1}{\lambda}\, E^P\big[ M_{\hat\tau}\, 1_{\{\hat\tau \le 1\}} \big]
= \lim_{n \to \infty} \frac{1}{\lambda}\, E^P\big[ M^n_{\hat\tau}\, 1_{\{\hat\tau \le 1\}} \big]
\le \frac{1}{\lambda}\, \|\xi\|_{L^p_G}\, P(M^*_1 \ge \lambda)^{1/q}.
\]
Therefore,
\[
P(M^*_1 \ge \lambda) \le \frac{1}{\lambda^p}\, \|\xi\|^p_{L^p_G},
\]
so that, for any fixed $\lambda_0 > 0$,
\[
E^P\big[|M^*_1|^2\big] = 2\int_0^\infty \lambda\, P(M^*_1 \ge \lambda)\, d\lambda
\le 2\int_0^{\lambda_0} \lambda\, d\lambda + 2\, \|\xi\|^p_{L^p_G} \int_{\lambda_0}^\infty \frac{d\lambda}{\lambda^{p-1}}
= \lambda_0^2 + \frac{2}{p-2}\, \|\xi\|^p_{L^p_G}\, \lambda_0^{2-p}.
\]
We choose $\lambda_0 := \|\xi\|_{L^p_G}$ to arrive at
\[
E^P\big[|M^*_1|^2\big] \le C_p\, \|\xi\|^2_{L^p_G}. \qquad \Box
\]

We next construct a bounded random variable which is not in $L^1_G$.

Example 6.3 Let $d = 1$, $\underline a = 1$, $\overline a = 2$, and
\[
E := \Big\{ \limsup_{t \downarrow 0} \frac{B_t}{\sqrt{2t \ln\ln(1/t)}} = 1 \Big\}.
\]
We claim that $1_E \notin L^1_G$. Indeed, assume that $1_E \in L^1_G$. Then there exists $\xi_n = \varphi(B_{t_1}, \ldots, B_{t_n}) \in L_{ip}$ such that $E^G\big[|\xi_n - 1_E|\big] < \tfrac13$. For $\theta \in [0,1]$, denote $a^\theta_t := 1 + \theta\, 1_{[0, t_1)}(t)$ and $P^\theta := P^{a^\theta}$. Define
\[
\psi(x) := E^{P^0}\big[ \varphi(x,\, x + B_{t_2 - t_1},\, \ldots,\, x + B_{t_n - t_1}) \big].
\]
Since $E \in \mathcal F_{0+} \subset \mathcal F_{t_1}$, for any $\theta \in [0,1]$ we have the following inequality:
\[
E^{P^\theta}\big[ |\psi(B_{t_1}) - 1_E| \big]
= E^{P^\theta}\Big[ \big| E^{P^\theta}_{t_1}[\varphi(B_{t_1}, \ldots, B_{t_n})] - 1_E \big| \Big]
\le E^{P^\theta}\big[ |\varphi(B_{t_1}, \ldots, B_{t_n}) - 1_E| \big] < \tfrac13.
\]
Note that $P^0(E) = 1$ and $P^\theta(E) = 0$ for all $\theta > 0$. Then
\[
E^{P^0}\big[ |\psi(B_{t_1}) - 1| \big] < \tfrac13 \quad\text{and}\quad E^{P^\theta}\big[ |\psi(B_{t_1})| \big] < \tfrac13 \ \text{ for all } \theta > 0.
\]
The latter implies that
\[
E^{P^0}\big[ |\psi(B_{t_1})| \big]
= \lim_{\theta \downarrow 0} E^{P^0}\big[ |\psi((1+\theta)^{1/2} B_{t_1})| \big]
= \lim_{\theta \downarrow 0} E^{P^\theta}\big[ |\psi(B_{t_1})| \big] \le \tfrac13.
\]
Thus
\[
1 \le E^{P^0}\big[ |\psi(B_{t_1}) - 1| \big] + E^{P^0}\big[ |\psi(B_{t_1})| \big] \le \tfrac13 + \tfrac13 = \tfrac23,
\]
yielding a contradiction. Hence $1_E \notin L^1_G$.

We close the paper with an example showing that the process $K$ need not have any special structure.

Example 6.4 Let $\nu$ be a deterministic Radon measure on $[0,1]$ and let $F : \mathcal S_d \to \mathbb R$ be a smooth function so that $F(0) = 0$ and $F(a) \le 0$ for all $a \ge 0$. We define a random variable by
\[
\xi := \int_0^1 F(\hat a_s - \underline a)\, \nu(ds).
\]
For every integer $n$, we set
\[
\xi_n := \sum_{k=1}^n F\Big( n\big[B_{k/n} - B_{(k-1)/n}\big]\big[B_{k/n} - B_{(k-1)/n}\big]^{T} - \underline a \Big)\, \nu\Big(\Big[\tfrac{k-1}{n}, \tfrac{k}{n}\Big)\Big).
\]
Clearly $\xi_n \in L_{ip}$. If $\nu$ does not have any atoms, then we can show that $\xi_n$ converges to $\xi$ in $\mathbb L^p$ for every $p < \infty$. Hence $\xi \in \mathbb L^p$. Moreover,
\[
E^G_t[\xi] = \int_0^t F(\hat a_s - \underline a)\, \nu(ds).
\]
Hence the martingale representation holds with $H \equiv 0$ and $K_t = -E^G_t[\xi]$.

References

[1] Cheridito, P., Soner, H.M. and Touzi, N., The multi-dimensional super-replication problem under Gamma constraints, Annales de l'Institut Henri Poincaré, Série C: Analyse Non Linéaire 22 (2005).

[2] Cheridito, P., Soner, H.M., Touzi, N. and Victoir, N. (2007) Second order BSDE's and fully nonlinear PDE's, Communications on Pure and Applied Mathematics 60 (7).

[3] Denis, L. and Martini, C. (2006) A Theoretical Framework for the Pricing of Contingent Claims in the Presence of Model Uncertainty, Annals of Applied Probability 16, 2.

[4] Denis, L., Hu, M. and Peng, S., Function Spaces and Capacity Related to a Sublinear Expectation: Application to G-Brownian Motion Paths, arXiv preprint (February 2008).

[5] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M.C., Reflected solutions of backward SDE's, and related obstacle problems for PDE's, Ann. Probab. 25 (1997), no. 2.

[6] El Karoui, N., Peng, S. and Quenez, M.-C., Backward stochastic differential equations in finance, Mathematical Finance 7.

[7] El Karoui, N. and Jeanblanc, M. (1988) Contrôle de Processus de Markov, Sém. Prob. XXII, Lecture Notes in Math., Vol. 1321, Springer-Verlag.

[8] Karandikar, R., On pathwise stochastic integration, Stochastic Processes and Their Applications 57 (1995).

[9] Karatzas, I. and Shreve, S. (1998) Methods of Mathematical Finance, Berlin, Springer.
[10] Krylov, N.V. (1987) Nonlinear Elliptic and Parabolic Equations of the Second Order, Kluwer (original Russian version published in 1985).

[11] Neveu, J. (1975) Discrete Parameter Martingales, North-Holland Publishing Company.

[12] Pardoux, E. and Peng, S. (1990) Adapted solution of a backward stochastic differential equation, Systems Control Lett. 14.

[13] Peng, S. (1999) Monotonic limit theory of BSDE and nonlinear decomposition theorem of Doob-Meyer type, Probab. Theory Related Fields 113, 473-499.

[14] Peng, S. (2007) G-Expectation, G-Brownian motion, and related stochastic calculus of Itô type, arXiv:math/0601035v2.

[15] Peng, S. (2007) G-Brownian motion and dynamic risk measure under volatility uncertainty, arXiv preprint, v1.

[16] Soner, H.M. and Touzi, N. (2000) Super-replication under Gamma constraint, SIAM Journal on Control and Optimization 39.

[17] Soner, H.M., Touzi, N. and Zhang, J. (2009) Aggregation approach to quasi-sure stochastic analysis, preprint.

[18] Soner, H.M., Touzi, N. and Zhang, J. (2009) Dual Formulation of Second Order Target Problems, preprint.

[19] Soner, H.M., Touzi, N. and Zhang, J. (2009) An Existence and Uniqueness Theory for Second Order Backward SDEs, in preparation.

[20] Xu, J. and Zhang, B. (2009) Martingale characterization of G-Brownian motion, Stochastic Processes and their Applications 119, 1.
More informationSTATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.
STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution
More informationStochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes
BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents
More informationMaturity randomization for stochastic control problems
Maturity randomization for stochastic control problems Bruno Bouchard LPMA, Université Paris VI and CREST Paris, France Nicole El Karoui CMAP, Ecole Polytechnique Paris, France elkaroui@cmapx.polytechnique.fr
More informationDynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor)
Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Matija Vidmar February 7, 2018 1 Dynkin and π-systems Some basic
More informationA Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm
A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Algorithm F. Cérou 1,2 B. Delyon 2 A. Guyader 3 M. Rousset 1,2 1 Inria Rennes Bretagne Atlantique 2 IRMAR, Université de Rennes
More informationL -uniqueness of Schrödinger operators on a Riemannian manifold
L -uniqueness of Schrödinger operators on a Riemannian manifold Ludovic Dan Lemle Abstract. The main purpose of this paper is to study L -uniqueness of Schrödinger operators and generalized Schrödinger
More informationON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME
ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems
More informationItô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539
Itô s formula Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Itô s formula Probability Theory
More informationMA8109 Stochastic Processes in Systems Theory Autumn 2013
Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form
More informationERRATA: Probabilistic Techniques in Analysis
ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,
More informationRisk-Minimality and Orthogonality of Martingales
Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2
More informationSome aspects of the mean-field stochastic target problem
Some aspects of the mean-field stochastic target problem Quenched Mass ransport of Particles owards a arget Boualem Djehiche KH Royal Institute of echnology, Stockholm (Joint work with B. Bouchard and
More informationCONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES
CONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES IOANNIS KARATZAS Departments of Mathematics and Statistics, 619 Mathematics Building Columbia University, New York, NY 127 ik@math.columbia.edu
More informationMARTINGALE OPTIMAL TRANSPORT AND ROBUST HEDGING IN CONTINUOUS TIME
MARTIGALE OPTIMAL TRASPORT AD ROBUST HEDGIG I COTIUOUS TIME YA DOLISKY AD H.METE SOER HEBREW UIVERSITY OF JERUSALEM AD ETH ZURICH Abstract. The duality between the robust or equivalently, model independent
More informationTHE INVERSE FUNCTION THEOREM
THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)
More informationStability of Stochastic Differential Equations
Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010
More information