Quasi-sure Stochastic Analysis through Aggregation
H. Mete Soner, Nizar Touzi, Jianfeng Zhang

March 7, 2010

Abstract. This paper is on developing stochastic analysis simultaneously under a general family of probability measures that are not dominated by a single probability measure. The interest in this question originates from the probabilistic representations of fully nonlinear partial differential equations and applications to mathematical finance. The existing literature relies either on the capacity theory (Denis and Martini [5]) or on the underlying nonlinear partial differential equation (Peng [11]). In both approaches, the resulting theory requires the smoothness of the corresponding processes and random variables in terms of the underlying canonical process. In this paper, we investigate this question for a larger class of non-smooth processes, but with a restricted family of non-dominated probability measures. For smooth processes, our approach leads to similar results as in the previous literature, provided the restricted family satisfies an additional density property.

Key words: non-dominated probability measures, weak solutions of SDEs, uncertain volatility model, quasi-sure stochastic analysis.

AMS 2000 subject classifications: 60H10, 60H30.

ETH (Swiss Federal Institute of Technology), Zürich, and Swiss Finance Institute, hmsoner@ethz.ch. Research partly supported by the European Research Council under the grant FiRM. Financial support from Credit Suisse through the ETH Foundation is also gratefully acknowledged.
CMAP, Ecole Polytechnique Paris, nizar.touzi@polytechnique.edu. Research supported by the Chair Financial Risks of the Risk Foundation sponsored by Société Générale, the Chair Derivatives of the Future sponsored by the Fédération Bancaire Française, and the Chair Finance and Sustainable Development sponsored by EDF and Calyon.
University of Southern California, Department of Mathematics, jianfenz@usc.edu. Research supported in part by NSF grant DMS
1 Introduction

It is well known that all probabilistic constructions crucially depend on the underlying probability measure. In particular, all random variables and stochastic processes are defined up to null sets of this measure. If, however, one needs to develop stochastic analysis simultaneously under a family of probability measures, then careful constructions are needed, as the null sets of different measures do not necessarily coincide. Of course, when this family of measures is dominated by a single measure this question trivializes, as we can simply work with the null sets of the dominating measure. However, we are interested exactly in the cases where there is no such dominating measure. An interesting example of this situation is provided by the study of financial markets with uncertain volatility. There, essentially all measures are orthogonal to each other. Since for each probability measure we have a well-developed theory, for simultaneous stochastic analysis we are naturally led to the following problem of aggregation. Given a family of random variables or stochastic processes, X^P, indexed by probability measures P, can one find an aggregator X that satisfies X = X^P, P-almost surely for every probability measure P? This paper studies exactly this abstract problem. Once aggregation is achieved, essentially all classical results of stochastic analysis generalize, as shown in Section 6 below. This probabilistic question is also closely related to the theory of second order backward stochastic differential equations (2BSDEs) introduced in [3]. These types of stochastic equations have several applications in stochastic optimal control and risk measures and, in the Markovian case, they provide probabilistic representations for fully nonlinear partial differential equations. A uniqueness result is also available in the Markovian context, as proved in [3] using the theory of viscosity solutions.
Although the definition given in [3] does not require a special structure, the non-Markovian case has been better understood only recently. Indeed, [14] further develops the theory and proves a general existence and uniqueness result by probabilistic techniques. The aggregation result is a central tool for this result and in our accompanying papers [12, 13, 14]. Our new approach to 2BSDEs is related to the quasi-sure analysis introduced by Denis and Martini [5] and the G-stochastic analysis of Peng [11]. These papers are motivated by volatility uncertainty in mathematical finance. In such financial models the volatility of the underlying stock process is only known to stay between two given bounds a and ā. Hence, in this context one needs to define probabilistic objects simultaneously for all probability measures under which the canonical process B is a square integrable martingale with absolutely continuous quadratic variation process satisfying a dt ≤ d⟨B⟩_t ≤ ā dt. Here ⟨B⟩_t is the quadratic variation process of the canonical map B. We denote the set
of all such measures by P̄_W, but without requiring the bounds a and ā; see subsection 2.1. As argued above, stochastic analysis under a family of measures naturally leads us to the problem of aggregation. This question, which is also outlined above, is stated precisely in Section 3, Definition 3.1. The main difficulty in aggregation originates from the fact that the above family of probability measures is not dominated by one single probability measure. Hence the classical stochastic analysis tools cannot be applied simultaneously under all probability measures in this family. As a specific example, let us consider the case of stochastic integrals. Given an appropriate integrand H, the stochastic integral I^P_t = ∫_0^t H_s dB_s can be defined classically under each probability measure P. However, these processes may depend on the underlying probability measure. On the other hand, we are free to redefine this integral outside the support of P. So if, for example, we have two probability measures P_1, P_2 that are orthogonal to each other, then the integrals are immediately aggregated, since the supports are disjoint. However, for uncountably many probability measures, conditions on H or on the probability measures are needed. Indeed, in order to aggregate these integrals, we need to construct a stochastic process I_t defined on all of the probability space so that I_t = I^P_t for all t, P-almost surely. Under smoothness assumptions on the integrand H this aggregation is possible, and a pointwise definition is provided by Karandikar [9] for càdlàg integrands H. Denis and Martini [5] use the theory of capacities and construct the integral for quasi-continuous integrands, as defined in that paper. A different approach, based on the underlying partial differential equation, was introduced by Peng [11], yielding essentially the same results as in [5]. In Section 6 below, we also provide a construction without any restrictions on H, but in a slightly smaller class than P̄_W.
For general stochastic processes or random variables, an obvious consistency condition (see Definition 3.2 below) is clearly needed for aggregation. But Example 3.3 also shows that this condition is in general not sufficient. So to obtain aggregation under this minimal condition, we have two alternatives. The first is to restrict the family of processes by requiring smoothness. Indeed, the previous results of Karandikar [9], Denis-Martini [5], and Peng [11] all belong to this case. A precise statement is given in Section 3 below. The second approach is to slightly restrict the class of non-dominated measures. The main goal of this paper is to specify these restrictions on the probability measures that allow us to prove aggregation under only the consistency condition (3.4). Our main result, Theorem 5.1, is proved in Section 5. For this main aggregation result, we assume that the class of probability measures is constructed from a separable class of diffusion processes as defined in subsection 4.4, Definition 4.8. This class of diffusion processes is quite natural, and the conditions are motivated by stochastic optimal control. Several simple examples of such sets are also provided. Indeed, the processes obtained by a straightforward concatenation of deterministic piecewise constant processes
form a separable class. For most applications, this set would be sufficient. However, we believe that working with a general separable class helps our understanding of quasi-sure stochastic analysis. The construction of a probability measure corresponding to a given diffusion process, however, contains interesting technical details. Indeed, given an F-progressively measurable process α, we would like to construct a unique measure P^α. For such a construction, we start with the Wiener measure P_0 and assume that α takes values in S_d^> (symmetric, positive definite matrices) and also satisfies ∫_0^t α_s ds < ∞ for all t ≥ 0, P_0-almost surely. We then consider the P_0-stochastic integral

X^α_t := ∫_0^t α_s^{1/2} dB_s. (1.1)

Classically, the quadratic variation density of X^α under P_0 is equal to α. We then set P^α_S := P_0 ∘ (X^α)^{-1} (here the subscript S is for the strong formulation). It is clear that B under P^α_S has the same distribution as X^α under P_0. One can show that the quadratic variation density of B under P^α_S is equal to a satisfying a(X^α_·(ω)) = α(ω) (see Lemma 8.1 below for the existence of such an a). Hence, P^α_S ∈ P̄_W. Let P̄_S ⊂ P̄_W be the collection of all such local martingale measures P^α_S. Barlow [1] has observed that this inclusion is strict. Moreover, this procedure changes the density of the quadratic variation process to the above defined process a. Therefore, to be able to specify the quadratic variation a priori, in subsection 4.2 we consider weak solutions of a stochastic differential equation ((4.5) below) which is closely related to (1.1). This class of measures obtained as weak solutions almost provides the necessary structure for aggregation. The only additional structure we need is the uniqueness of the map from the diffusion process to the corresponding probability measure. Clearly, in general there is no uniqueness. So we further restrict ourselves to the class with uniqueness, which we denote by A_W.
This set and the probability measures generated by it, P_W, are defined in subsection 4.2. The implications of our aggregation result for quasi-sure stochastic analysis are given in Section 6. In particular, for a separable class of probability measures, we first construct a quasi-sure stochastic integral and then prove all classical results such as the Kolmogorov continuity criterion, martingale representation, Itô's formula, the Doob-Meyer decomposition and the Girsanov theorem. All of them are proved as a straightforward application of our main aggregation result. If, in addition, the family of probability measures is dense in an appropriate sense, then our aggregation approach provides the same results as the quasi-sure analysis. These types of results, of course, require continuity of all the maps in an appropriate sense. The details of this approach are investigated in our paper [13]; see also Remark 7.5 in the context of the application to the hedging problem under uncertain volatility. Notice that, in contrast
with [5], our approach provides existence of an optimal hedging strategy, but at the price of slightly restricting the family of probability measures.

The paper is organized as follows. The local martingale measures P̄_W and a universal filtration are studied in Section 2. The question of aggregation is defined in Section 3. In the next section, we define A_W, P_W and then the separable classes of diffusion processes. The main aggregation result, Theorem 5.1, is proved in Section 5. The next section generalizes several classical results of stochastic analysis to the quasi-sure setting. Section 7 studies the application to the hedging problem under uncertain volatility. In Section 8 we investigate the class P̄_S of mutually singular measures induced from the strong formulation. Finally, several examples concerning weak solutions and the proofs of several technical results are provided in the Appendix.

Notations. We close this introduction with a list of notations introduced in the paper.
- Ω := C(R_+, R^d), B is the canonical process, P_0 is the Wiener measure.
- For a given stochastic process X, F^X is the filtration generated by X.
- F := F^B = {F_t}_{t≥0} is the filtration generated by B.
- F^+ := {F^+_t, t ≥ 0}, where F^+_t := F_{t+} := ∩_{s>t} F_s; F^P_t := F^+_t ∨ N^P(F^+_t) and F̄^P_t := F^+_t ∨ N^P(F_∞), where N^P(G) := {E ⊂ Ω : there exists Ẽ ∈ G such that E ⊂ Ẽ and P[Ẽ] = 0}.
- N_P is the class of P-polar sets defined in Definition 2.2.
- F̂^P_t := ∩_{P∈P} (F^P_t ∨ N_P) defines the universal filtration introduced in (2.3).
- T is the set of all F-stopping times τ taking values in R_+ ∪ {∞}.
- T̂^P is the set of all F̂^P-stopping times.
- ⟨B⟩ is the universally defined quadratic variation of B, defined in subsection 2.1.
- â is the density of the quadratic variation ⟨B⟩, also defined in subsection 2.1.
- S_d is the set of d × d symmetric matrices; S_d^> is the set of positive definite symmetric matrices.
- P̄_W is the set of measures defined in subsection 2.1.
- P̄_S ⊂ P̄_W is defined in the Introduction; see also Section 8.
- P̄_MRP ⊂ P̄_W are the measures with the martingale representation property; see (2.2).
- The sets P_W, P_S, P_MRP are defined in subsection 4.2 and Section 8 as the subsets of P̄_W, P̄_S, P̄_MRP with the additional requirement of weak uniqueness.
- A is the set of integrable, progressively measurable processes with values in S_d^>.
- Ā_W := ∪_{P ∈ P̄_W} Ā_W(P), where Ā_W(P) is the set of diffusion matrices satisfying (4.1).
- A_W, A_S, A_MRP are defined as above using P_W, P_S, P_MRP; see Section 8.
- The sets Ω^a_τ̂, Ω^{a,b}_τ̂ and the stopping time θ^{a,b} are defined in subsection 4.3.
- The function spaces L^0, L^p(P), L̂^p and the integrand spaces H^0, H^p(P^a), H^2_loc(P^a), Ĥ^p, Ĥ^2_loc are defined in Section 6.

2 Non-dominated mutually singular probability measures

Let Ω := C(R_+, R^d) be as above and let F = F^B be the filtration generated by the canonical process B. Then it is well known that this natural filtration F is left-continuous, but is not right-continuous. This paper makes use of the right-limiting filtration F^+, the P-completed filtration F^P := {F^P_t, t ≥ 0}, and the P-augmented filtration F̄^P := {F̄^P_t, t ≥ 0}, which are all right continuous.

2.1 Local martingale measures

We say a probability measure P is a local martingale measure if the canonical process B is a local martingale under P. It follows from Karandikar [9] that there exists an F-progressively measurable process, denoted as ∫_0^t B_s dB_s, which coincides with the Itô integral, P-almost surely for all local martingale measures P. In particular, this provides a pathwise definition of

⟨B⟩_t := B_t B_t^T − 2 ∫_0^t B_s dB_s and â_t := lim sup_{ε↘0} (1/ε) [⟨B⟩_t − ⟨B⟩_{t−ε}].

Clearly, ⟨B⟩ coincides with the P-quadratic variation of B, P-almost surely for all local martingale measures P. Let P̄_W denote the set of all local martingale measures P such that

P-almost surely, ⟨B⟩_t is absolutely continuous in t and â takes values in S_d^>, (2.1)

where S_d^> denotes the space of all d × d real-valued positive definite matrices.
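The pathwise definitions above can be given a discrete-time feel. The following Monte Carlo sketch is purely illustrative (it is not the paper's construction; the grid size, the window length, and the value a = 3 are arbitrary choices): for a path with d⟨B⟩_t = a dt, the realized quadratic variation recovers a, and a difference quotient over a small window approximates the density â.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, a = 200_000, 1e-5, 3.0
# increments of a path B with quadratic variation <B>_t = a * t
dB = np.sqrt(a) * rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

# realized quadratic variation <B>_t: cumulative sum of squared increments
qv = np.concatenate([[0.0], np.cumsum(np.diff(B) ** 2)])

# a_hat_t ~ (<B>_t - <B>_{t-eps}) / eps over a small window eps
eps_steps = 2_000
a_hat = (qv[eps_steps:] - qv[:-eps_steps]) / (eps_steps * dt)

print(qv[-1] / (n * dt))   # close to a = 3
print(a_hat.mean())        # close to a = 3
```

The window plays the role of ε in the definition of â_t; shrinking it (while refining the grid) sharpens the estimate, at the price of higher variance per window.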
We note that, for different P_1, P_2 ∈ P̄_W, in general P_1 and P_2 are mutually singular, as we see in the next simple example. Moreover, there is no dominating measure for P̄_W.
Example 2.1 Let d = 1, P_1 := P_0 ∘ (√2 B)^{-1}, and Ω_i := {⟨B⟩_t = (1 + i)t, t ≥ 0}, i = 0, 1. Then P_0, P_1 ∈ P̄_W, P_0(Ω_0) = P_1(Ω_1) = 1, and P_0(Ω_1) = P_1(Ω_0) = 0. That is, P_0 and P_1 are mutually singular.

In many applications, it is important that P ∈ P̄_W have the martingale representation property (MRP, for short), i.e. for any (F̄^P, P)-local martingale M, there exists a unique (P-almost surely) F̄^P-progressively measurable R^d-valued process H such that

∫_0^t |â_s^{1/2} H_s|^2 ds < ∞ and M_t = M_0 + ∫_0^t H_s dB_s, t ≥ 0, P-almost surely.

We thus define

P̄_MRP := {P ∈ P̄_W : B has MRP under P}. (2.2)

The inclusion P̄_MRP ⊂ P̄_W is strict, as shown in Example 9.3 below. Another interesting subclass is the set P̄_S defined in the Introduction. Since in this paper it is not directly used, we postpone its discussion to Section 8.

2.2 A universal filtration

We now fix an arbitrary subset P ⊂ P̄_W. By a slight abuse of terminology, we define the following notions introduced by Denis and Martini [5].

Definition 2.2 (i) We say that a property holds P-quasi-surely, abbreviated as P-q.s., if it holds P-almost surely for all P ∈ P.
(ii) Denote N_P := ∩_{P∈P} N^P(F_∞) and call the elements of N_P P-polar sets.
(iii) A probability measure P is called absolutely continuous with respect to P if P(E) = 0 for all E ∈ N_P.

In stochastic analysis theory, it is usually assumed that the filtered probability space satisfies the usual hypotheses. However, the key issue in the present paper is to develop stochastic analysis tools simultaneously for non-dominated mutually singular measures. In this case, we do not have a good filtration satisfying the usual hypotheses under all the measures. In this paper, we shall use the following universal filtration F̂^P for the mutually singular probability measures {P, P ∈ P}:

F̂^P := {F̂^P_t}_{t≥0} where F̂^P_t := ∩_{P∈P} (F^P_t ∨ N_P) for t ≥ 0. (2.3)

Moreover, we denote by T (resp. T̂^P) the set of all F-stopping times τ (resp., F̂^P-stopping times τ̂) taking values in R_+ ∪ {∞}.
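Example 2.1 above can also be checked numerically. The sketch below is illustrative only (the grid, horizon, and tolerance are discretization choices): under P_0 the realized quadratic variation at time T concentrates near T, while under P_1 = P_0 ∘ (√2 B)^{-1} it concentrates near 2T, so each measure puts full mass on a set that is (approximately) null under the other.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, T = 100_000, 1e-5, 1.0   # n * dt = T

def sample_qv(scale):
    # realized <B>_T for the canonical process under the law of scale * Brownian motion
    dB = scale * rng.normal(0.0, np.sqrt(dt), n)
    return np.sum(dB ** 2)

def in_omega(qv, i, tol=0.1):
    # discrete proxy for the event Omega_i = {<B>_T = (1 + i) T}
    return abs(qv - (1 + i) * T) < tol

qv_P0 = [sample_qv(1.0) for _ in range(20)]           # samples "under P_0"
qv_P1 = [sample_qv(np.sqrt(2.0)) for _ in range(20)]  # samples "under P_1"
print(all(in_omega(q, 0) for q in qv_P0))   # P_0(Omega_0) = 1
print(all(in_omega(q, 1) for q in qv_P1))   # P_1(Omega_1) = 1
print(any(in_omega(q, 1) for q in qv_P0))   # P_0(Omega_1) = 0
```

The point of the simulation is the mechanism behind mutual singularity here: the quadratic variation is a path functional, so a single observed path already decides (up to discretization error) which measure it came from.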
Remark 2.3 Notice that F^+ ⊂ F^P ⊂ F̄^P. The reason for the choice of the completed filtration F^P is as follows. If we use the small filtration F^+, then the crucial aggregation result of Theorem 5.1 below will not hold true. On the other hand, if we use the augmented filtrations F̄^P, then Lemma 5.2 (ii) below does not hold. Consequently, in applications one will not be able to check the consistency condition (5.1) in Theorem 5.1, and thus will not be able to apply the aggregation result. See also Remarks 5.3 and 5.6 below. However, this choice of the completed filtration does not cause any problems in the applications.

We note that F̂^P is right continuous and all P-polar sets are contained in F̂^P_0. But F̂^P is not complete under each P ∈ P. However, thanks to Lemma 2.4 below, all the properties we need still hold under this filtration.

For any sub-σ-algebra G of F_∞ and any probability measure P, it is well known that an F̄^P-measurable random variable X is [G ∨ N^P(F_∞)]-measurable if and only if there exists a G-measurable random variable X̃ such that X = X̃, P-almost surely. The following result extends this property to processes and states that one can always consider any process in its F^+-progressively measurable version. Since F^+ ⊂ F̂^P, the F^+-progressively measurable version is also F̂^P-progressively measurable. This important result will be used throughout our analysis so as to consider any process in its F̂^P-progressively measurable version. However, we emphasize that the F̂^P-progressively measurable version depends on the underlying probability measure P.

Lemma 2.4 Let P be an arbitrary probability measure on the canonical space (Ω, F_∞), and let X be an F̄^P-progressively measurable process. Then, there exists a unique (P-almost surely) F^+-progressively measurable process X̃ such that X̃ = X, P-almost surely. If, in addition, X is càdlàg P-almost surely, then we can choose X̃ to be càdlàg P-almost surely.
The proof is rather standard but is provided in the Appendix for completeness. We note that the identity X̃ = X, P-almost surely, is equivalent to their being equal dt × dP-almost surely. However, if both of them are càdlàg, then clearly X̃_t = X_t for all t ≥ 0, P-almost surely.

3 Aggregation

We are now in a position to define the problem.

Definition 3.1 Let P ⊂ P̄_W, and let {X^P, P ∈ P} be a family of F̂^P-progressively measurable processes. An F̂^P-progressively measurable process X is called a P-aggregator of the family {X^P, P ∈ P} if X = X^P, P-almost surely for every P ∈ P.

Clearly, for any family {X^P, P ∈ P} which can be aggregated, the following consistency condition must hold.
Definition 3.2 We say that a family {X^P, P ∈ P} satisfies the consistency condition if, for any P_1, P_2 ∈ P and τ̂ ∈ T̂^P satisfying P_1 = P_2 on F̂^P_τ̂, we have

X^{P_1} = X^{P_2} on [0, τ̂], P_1-almost surely. (3.4)

Example 3.3 below shows that the above condition is in general not sufficient. Therefore, we are left with the following two alternatives.
- Restrict the range of aggregating processes by requiring that there exists a sequence of F̂^P-progressively measurable processes {X^n}_{n≥1} such that X^n → X^P, P-almost surely as n → ∞, for all P ∈ P. In this case, the P-aggregator is X := lim sup_{n→∞} X^n. Moreover, the class P can be taken to be the largest possible class P̄_W. We observe that the aggregation results of Karandikar [9], Denis-Martini [5], and Peng [11] all belong to this case. Under some regularity on the processes, this condition holds.
- Restrict the class P of mutually singular measures so that the consistency condition (3.4) is sufficient for the largest possible family of processes {X^P, P ∈ P}. This is the main goal of the present paper.

We close this section by constructing an example in which the consistency condition is not sufficient for aggregation.

Example 3.3 Let d = 2. First, for each x, y ∈ [1, 2], let P^{x,y} := P_0 ∘ (√x B^1, √y B^2)^{-1} and Ω^{x,y} := {⟨B^1⟩_t = xt, ⟨B^2⟩_t = yt, t ≥ 0}. Clearly, for each (x, y), P^{x,y} ∈ P̄_W and P^{x,y}[Ω^{x,y}] = 1. Next, for each a ∈ [1, 2], we define

P^a[E] := (1/2) ∫_1^2 (P^{a,z}[E] + P^{z,a}[E]) dz for all E ∈ F_∞.

We claim that P^a ∈ P̄_W. Indeed, for any t_1 < t_2 and any bounded F_{t_1}-measurable random variable η, we have

2 E^{P^a}[(B_{t_2} − B_{t_1})η] = ∫_1^2 {E^{P^{a,z}}[(B_{t_2} − B_{t_1})η] + E^{P^{z,a}}[(B_{t_2} − B_{t_1})η]} dz = 0.

Hence P^a is a martingale measure. Similarly, one can easily show that I_2 dt ≤ d⟨B⟩_t ≤ 2 I_2 dt, P^a-almost surely, where I_2 is the 2 × 2 identity matrix. For a ∈ [1, 2] set

Ω^a := ({⟨B^1⟩_t = at, t ≥ 0} ∪ {⟨B^2⟩_t = at, t ≥ 0}) ∩ ∪_{z∈[1,2]} [Ω^{a,z} ∪ Ω^{z,a}],

so that P^a[Ω^a] = 1. Also, for a ≠ b, we have Ω^a ∩ Ω^b = Ω^{a,b} ∪ Ω^{b,a} and thus P^a[Ω^a ∩ Ω^b] = P^b[Ω^a ∩ Ω^b] = 0.
Now let P := {P^a, a ∈ [1, 2]} and set X^a_t(ω) := a for all t, ω. Notice that, for a ≠ b, P^a and P^b disagree on F^+_0 ⊂ F̂^P_0. Then the consistency condition (3.4) holds trivially. However, we claim that there is no P-aggregator X of the family {X^a, a ∈ [1, 2]}. Indeed, if there is an X such that X = X^a, P^a-almost surely for all a ∈ [1, 2], then for any a ∈ [1, 2],

1 = P^a[X^a_· = a] = P^a[X_· = a] = (1/2) ∫_1^2 (P^{a,z}[X_· = a] + P^{z,a}[X_· = a]) dz.

Let λ_n denote the Lebesgue measure on [1, 2]^n for integer n ≥ 1. Then we have

λ_1({z : P^{a,z}[X_· = a] = 1}) = λ_1({z : P^{z,a}[X_· = a] = 1}) = 1 for all a ∈ [1, 2].

Set A_1 := {(a, z) : P^{a,z}[X_· = a] = 1} and A_2 := {(z, a) : P^{z,a}[X_· = a] = 1}, so that λ_2(A_1) = λ_2(A_2) = 1. Moreover, A_1 ∩ A_2 ⊂ {(a, a) : a ∈ [1, 2]} and λ_2(A_1 ∩ A_2) = 0. Now we directly calculate that

1 ≥ λ_2(A_1 ∪ A_2) = λ_2(A_1) + λ_2(A_2) − λ_2(A_1 ∩ A_2) = 2.

This contradiction implies that there is no aggregator.

4 Separable classes of mutually singular measures

The main goal of this section is to identify a condition on the probability measures that yields aggregation as defined in the previous section. It is more convenient to specify this restriction through the diffusion processes. However, as discussed in the Introduction, there are technical difficulties in the connection between the diffusion processes and the probability measures. Therefore, in the first two subsections we discuss the issue of uniqueness of the mapping from the diffusion process to a martingale measure. The separable classes of mutually singular measures are defined in subsection 4.4, after a short discussion of the supports of these measures in subsection 4.3.

4.1 Classes of diffusion matrices

Let

A := {a : a is an S_d^>-valued, F-progressively measurable process with ∫_0^t a_s ds < ∞ for all t ≥ 0}.

For a given P ∈ P̄_W, let

Ā_W(P) := {a ∈ A : a = â, P-almost surely}. (4.1)

Recall that â is the density of the quadratic variation of B and is defined pointwise. We also define Ā_W := ∪_{P ∈ P̄_W} Ā_W(P).
A subtle technical point is that Ā_W is strictly included in A. In fact, the process

a_t := 1_{{â_t ≥ 2}} + 3 · 1_{{â_t < 2}} is clearly in A \ Ā_W. (4.2)

For any P ∈ P̄_W and a ∈ Ā_W(P), by the Lévy characterization, the following Itô stochastic integral under P is a P-Brownian motion:

W^P_t := ∫_0^t â_s^{−1/2} dB_s = ∫_0^t a_s^{−1/2} dB_s, t ≥ 0. (4.3)

Also, since B is the canonical process, a = a(B_·) and thus

dB_t = a_t^{1/2}(B_·) dW^P_t, P-almost surely, and W^P is a P-Brownian motion. (4.4)

4.2 Characterization by diffusion matrices

In view of (4.4), to construct a measure with a given quadratic variation a ∈ Ā_W, we consider the stochastic differential equation

dX_t = a_t^{1/2}(X_·) dB_t, P_0-almost surely. (4.5)

In this generality, we consider only weak solutions P, which we define next. Although the following definition is standard (see for example Stroock & Varadhan [15]), we provide it for specificity.

Definition 4.1 Let a be an element of Ā_W.
(i) For F-stopping times τ_1 ≤ τ_2 ∈ T and a probability measure P_1 on F_{τ_1}, we say that P is a weak solution of (4.5) on [τ_1, τ_2] with initial condition P_1 if the following hold:
1. P = P_1 on F_{τ_1};
2. The canonical process B_t is a P-local martingale on [τ_1, τ_2];
3. The process W_t := ∫_{τ_1}^t a_s^{−1/2}(B_·) dB_s, defined P-almost surely for all t ∈ [τ_1, τ_2], is a P-Brownian motion.
(ii) We say that the equation (4.5) has weak uniqueness on [τ_1, τ_2] with initial condition P_1 if any two weak solutions P and P′ satisfy P = P′ on F_{τ_2}.
(iii) We say that (4.5) has weak uniqueness if (ii) holds for any τ_1, τ_2 ∈ T and any initial condition P_1 on F_{τ_1}.

We emphasize that the stopping times in this definition are F-stopping times. Note that, for each P ∈ P̄_W and a ∈ Ā_W(P), P is a weak solution of (4.5) on R_+ with initial value P(B_0 = 0) = 1. We also need uniqueness of this map to characterize the measure P in terms of the diffusion matrix a. Indeed, if (4.5) with a has weak uniqueness, we let
P^a ∈ P̄_W be the unique weak solution of (4.5) on R_+ with initial condition P^a(B_0 = 0) = 1, and define

A_W := {a ∈ Ā_W : (4.5) has weak uniqueness}, P_W := {P^a : a ∈ A_W}. (4.6)

We also define

P_MRP := P̄_MRP ∩ P_W, A_MRP := {a ∈ A_W : P^a ∈ P_MRP}. (4.7)

For notational simplicity, we denote

F^a := F^{P^a}, F̄^a := F̄^{P^a} for all a ∈ A_W. (4.8)

It is clear that, for each P ∈ P̄_W, the weak uniqueness of the equation (4.5) may depend on the version of a ∈ Ā_W(P). This is indeed the case, and the following example illustrates this observation.

Example 4.2 Let a_0(t) := 1, a_2(t) := 2 and a_1(t) := 1_E + 2 · 1_{E^c}, where

E := {lim sup_{h↘0} |B_h| / √(2h ln ln h^{−1}) = 1} ∈ F^+_0.

Then clearly both a_0 and a_2 belong to A_W. Also, a_1 = a_0, P_0-almost surely, and a_1 = a_2, P^{a_2}-almost surely. Hence a_1 ∈ Ā_W(P_0) ∩ Ā_W(P^{a_2}). Therefore the equation (4.5) with coefficient a_1 has two weak solutions P_0 and P^{a_2}. Thus a_1 ∉ A_W.

Remark 4.3 In this paper, we shall consider only those P ∈ P_W ⊂ P̄_W. However, we do not know whether this inclusion is strict or not. In other words, given an arbitrary P ∈ P̄_W, can we always find one version a ∈ Ā_W(P) such that a ∈ A_W?

It is easy to construct examples of separable classes in the Markovian context. Below, we provide two classes of path-dependent separable diffusion processes. These sets are in fact subsets of A_S ⊂ A_W, which is defined in (8.11) below. We also construct some counter-examples in the Appendix. Denote

Q := {(t, x) : t ≥ 0, x ∈ C([0, t], R^d)}. (4.9)

Example 4.4 (Lipschitz coefficients) Let a_t := σ^2(t, B_·), where σ : Q → S_d^> is Lebesgue measurable, uniformly Lipschitz continuous in x under the uniform norm, and σ^2(·, 0) ∈ A. Then (4.5) has a unique strong solution and consequently a ∈ A_W.

Example 4.5 (Piecewise constant coefficients) Let a = Σ_{n≥0} a_n 1_{[τ_n, τ_{n+1})}, where {τ_n}_{n≥0} ⊂ T is a nondecreasing sequence of F-stopping times with τ_0 = 0, τ_n ↑ ∞ as n → ∞, and a_n is F_{τ_n}-measurable with values in S_d^> for all n. Again (4.5) has a unique strong solution and a ∈ A_W.
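Examples 4.4 and 4.5 assert strong solvability; the sketch below gives a discrete-time feel for Example 4.5 by running an Euler scheme for dX_t = a_t^{1/2} dB_t with a = a_0 1_{[0,τ_1)} + a_1 1_{[τ_1,∞)}. Everything here is an illustrative assumption, not the paper's construction: the regime values a_0, a_1 and the stopping time τ_1 (first exit of |X| from a band, which is determined by the path, mimicking F_{τ_1}-measurability) are arbitrary choices. We then check that the realized quadratic variation density tracks the coefficient on each regime.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 100_000, 1e-5
a0, a1, level = 1.0, 4.0, 0.1    # illustrative regime values and exit level

dB = rng.normal(0.0, np.sqrt(dt), n)
X = np.zeros(n + 1)
a_path = np.empty(n)
switched = False   # tau_1 := first time |X| exceeds `level`
for k in range(n):
    if not switched and abs(X[k]) > level:
        switched = True              # {tau_1 <= t} is decided by the path up to t
    a_path[k] = a1 if switched else a0
    X[k + 1] = X[k] + np.sqrt(a_path[k]) * dB[k]   # Euler step for dX = a^{1/2} dB

qv_density = np.diff(X) ** 2 / dt   # realized d<X>_t / dt, step by step
print(np.mean(qv_density[a_path == a0]))   # roughly a0
print(np.mean(qv_density[a_path == a1]))   # roughly a1
```

The switch of the quadratic variation density at τ_1 is exactly the path property that later lets the measures built from different piecewise constant coefficients be told apart.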
This example is in fact more involved than it looks, mainly due to the presence of the stopping times. We relegate its proof to the Appendix.
4.3 Support of P^a

In this subsection, we collect some properties of measures that are constructed as in the previous subsection. We fix a subset A ⊂ A_W and denote by P := {P^a : a ∈ A} the corresponding subset of P_W. In the sequel, we may also say that a property holds A-quasi-surely if it holds P-quasi-surely. For any F̂^P-stopping time τ̂ ∈ T̂^P, let

Ω^a_τ̂ := ∩_{n≥1} Ω^{a,n}_τ̂ where Ω^{a,n}_τ̂ := {∫_0^t â_s ds = ∫_0^t a_s ds for all t ∈ [0, τ̂ + 1/n]}.

It is clear that

Ω^a_τ̂ ∈ F̂^P_τ̂, Ω^a_t is non-increasing in t, Ω^a_τ̂+ = Ω^a_τ̂, and P^a(Ω^a_∞) = 1. (4.10)

We next introduce the first disagreement time of any a, b ∈ A, which plays a central role in Section 5:

θ^{a,b} := inf{t ≥ 0 : ∫_0^t a_s ds ≠ ∫_0^t b_s ds},

and, for any F̂^P-stopping time τ̂ ∈ T̂^P, the agreement set of a and b up to τ̂:

Ω^{a,b}_τ̂ := {τ̂ < θ^{a,b}} ∪ {τ̂ = θ^{a,b} = ∞}.

Here we use the convention that inf ∅ = ∞. It is obvious that

θ^{a,b} ∈ T̂^P, Ω^{a,b}_τ̂ ∈ F̂^P_τ̂ and Ω^a_τ̂ ∩ Ω^b_τ̂ ⊂ Ω^{a,b}_τ̂. (4.11)

Remark 4.6 The above notations can be extended to all diffusion processes a, b ∈ A. This will be important in Lemma 4.12 below.

4.4 Separability

We are now in a position to state the restrictions needed for the main aggregation result, Theorem 5.1.

Definition 4.7 A subset A_0 ⊂ A_W is called a generating class of diffusion coefficients if
(i) A_0 satisfies the concatenation property: a 1_{[0,t)} + b 1_{[t,∞)} ∈ A_0 for a, b ∈ A_0, t ≥ 0;
(ii) A_0 has constant disagreement times: for all a, b ∈ A_0, θ^{a,b} is a constant or, equivalently, Ω^{a,b}_t = ∅ or Ω for all t ≥ 0.
We note that the concatenation property is standard in stochastic control theory in order to establish the dynamic programming principle; see, e.g., [8]. The constant disagreement times property is important for both Lemma 5.2 below and the aggregation result of Theorem 5.1 below. We will provide two examples of sets with these properties after stating the main restriction for the aggregation result.

Definition 4.8 We say A is a separable class of diffusion coefficients generated by A_0 if A_0 ⊂ A_W is a generating class of diffusion coefficients and A consists of all processes a of the form

a = Σ_{n=0}^∞ Σ_{i=1}^∞ a^n_i 1_{E^n_i} 1_{[τ_n, τ_{n+1})}, (4.12)

where (a^n_i)_{i,n} ⊂ A_0, (τ_n)_n ⊂ T is strictly nondecreasing with τ_0 = 0 and inf{n : τ_n = ∞} < ∞, each τ_n takes at most countably many values, and, for each n, {E^n_i, i ≥ 1} ⊂ F_{τ_n} form a partition of Ω.

We emphasize that the τ_n's in the previous definition are F-stopping times. The following are two examples of generating classes of diffusion coefficients.

Example 4.9 Let A_0 ⊂ A be the class of all deterministic mappings. Then clearly A_0 ⊂ A_W and satisfies both properties (the concatenation and the constant disagreement times properties) of a generating class.

Example 4.10 Recall the set Q defined in (4.9). Let D_0 be a set of deterministic Lebesgue measurable functions σ : Q → S_d^> satisfying:
- σ is uniformly Lipschitz continuous in x under the L^∞-norm, and σ^2(·, 0) ∈ A;
- for each x ∈ C(R_+, R^d) and distinct σ_1, σ_2 ∈ D_0, the Lebesgue measure of the set A(σ_1, σ_2, x) is equal to 0, where

A(σ_1, σ_2, x) := {t : σ_1(t, x_{[0,t]}) = σ_2(t, x_{[0,t]})}.

Let D be the class of all possible concatenations of elements of D_0, i.e. σ ∈ D takes the following form:

σ(t, x) := Σ_{i=0}^∞ σ_i(t, x) 1_{[t_i, t_{i+1})}(t), (t, x) ∈ Q,

for some sequence t_i ↑ ∞ and σ_i ∈ D_0, i ≥ 0. Let A_0 := {σ^2(t, B_·) : σ ∈ D}. It is immediate to check that A_0 ⊂ A_W and satisfies the concatenation and the constant disagreement times properties. Thus it is also a generating class.
We next prove several important properties of separable classes.
Proposition 4.11 Let A be a separable class of diffusion coefficients generated by A_0. Then A ⊂ A_W, and A-quasi-surely is equivalent to A_0-quasi-surely. Moreover, if A_0 ⊂ A_MRP, then A ⊂ A_MRP.

We need the following two lemmas to prove this result. The first one provides a convenient structure for the elements of A.

Lemma 4.12 Let A be a separable class of diffusion coefficients generated by A_0. For any a ∈ A and F-stopping time τ ∈ T, there exist τ̃ ∈ T with τ̃ ≥ τ, a sequence {a_n, n ≥ 1} ⊂ A_0, and a partition {E_n, n ≥ 1} ⊂ F_τ of Ω, such that τ̃ > τ on {τ < ∞} and

a_t = Σ_{n≥1} a_n(t) 1_{E_n} for all t < τ̃.

In particular, E_n ⊂ Ω^{a,a_n}_τ and consequently ∪_n Ω^{a,a_n}_τ = Ω. Moreover, if a takes the form (4.12) and τ ≥ τ_n, then one can choose τ̃ ≤ τ_{n+1}.

The proof of this lemma is straightforward but notationally involved, so we postpone it to the Appendix. We remark that at this point we do not know whether a ∈ A_W. But the notations θ^{a,a_n} and Ω^{a,a_n}_τ are well defined, as discussed in Remark 4.6.

Lemma 4.13 Let τ_1, τ_2 ∈ T with τ_1 ≤ τ_2, let {a_i, i ≥ 1} ⊂ Ā_W (not necessarily in A_W), and let {E_i, i ≥ 1} ⊂ F_{τ_1} be a partition of Ω. Let P_0 be a probability measure on F_{τ_1} and, for i ≥ 1, let P_i be a weak solution of (4.5) on [τ_1, τ_2] with coefficient a_i and initial condition P_0. Then the measure P defined by

P(E) := Σ_{i≥1} P_i(E ∩ E_i) for all E ∈ F_{τ_2}

is a weak solution of (4.5) on [τ_1, τ_2] with initial condition P_0 and the diffusion coefficient given by

a_t := Σ_{i≥1} a^i_t 1_{E_i}, t ∈ [τ_1, τ_2].

Proof. Clearly, P = P_0 on F_{τ_1}. It suffices to show that both B_t and B_t B_t^T − ∫_{τ_1}^t a_s ds are P-local martingales on [τ_1, τ_2]. By a standard localization argument, we may assume without loss of generality that all the random variables below are integrable. Now for any τ_1 ≤ τ_3 ≤ τ_4 ≤ τ_2 and any bounded F_{τ_3}-measurable random variable η, we have

E^P[(B_{τ_4} − B_{τ_3})η] = Σ_{i≥1} E^{P_i}[(B_{τ_4} − B_{τ_3}) η 1_{E_i}] = Σ_{i≥1} E^{P_i}[E^{P_i}(B_{τ_4} − B_{τ_3} | F_{τ_3}) η 1_{E_i}] = 0.
Therefore $B$ is a $P$-local martingale on $[\tau_1,\tau_2]$. Similarly one can show that $B_t B_t^{\rm T} - \int_{\tau_1}^t a_s\,ds$ is also a $P$-local martingale on $[\tau_1,\tau_2]$.

Proof of Proposition 4.11. Let $a \in \mathcal A$ be given as in (4.12).

(i) We first show that $a \in \mathcal A_W$. Fix $\theta_1, \theta_2 \in \mathcal T$ with $\theta_1 \le \theta_2$ and a probability measure $P_0$ on $\mathcal F_{\theta_1}$. Set $\tilde\tau_0 := \theta_1$ and $\tilde\tau_n := (\tau_n \vee \theta_1) \wedge \theta_2$, $n \ge 1$. We shall show that (4.5), on $[\theta_1,\theta_2]$ with coefficient $a$ and initial condition $P_0$, has a unique weak solution. We do this by proving by induction on $n$ that (4.5) on $[\tilde\tau_0, \tilde\tau_n]$, with coefficient $a$ and initial condition $P_0$, has a unique weak solution.

First, let $n = 1$. We apply Lemma 4.12 with $\tau = \tilde\tau_0$ and choose $\tilde\tau = \tilde\tau_1$. Then $a_t = \sum_{i\ge1} a_i(t) 1_{E_i}$ for all $t < \tilde\tau_1$, where $a_i \in \mathcal A_0$ and $\{E_i, i \ge 1\} \subset \mathcal F_{\tilde\tau_0}$ form a partition of $\Omega$. For $i \ge 1$, let $P_{0,i}$ be the (unique) weak solution of (4.5) on $[\tilde\tau_0, \tilde\tau_1]$ with coefficient $a_i$ and initial condition $P_0$. Let $P_{0,a}(E) := \sum_{i\ge1} P_{0,i}(E \cap E_i)$ for all $E \in \mathcal F_{\tilde\tau_1}$. We use Lemma 4.13 to conclude that $P_{0,a}$ is a weak solution of (4.5) on $[\tilde\tau_0, \tilde\tau_1]$ with coefficient $a$ and initial condition $P_0$.

On the other hand, suppose $P$ is an arbitrary weak solution of (4.5) on $[\tilde\tau_0, \tilde\tau_1]$ with coefficient $a$ and initial condition $P_0$. For each $i \ge 1$, we define $P_i$ by $P_i(E) := P(E \cap E_i) + P_{0,i}(E \cap (E_i)^c)$ for all $E \in \mathcal F_{\tilde\tau_1}$. We again use Lemma 4.13; the result is that $P_i$ is a weak solution of (4.5) whose coefficient is
\[
a 1_{E_i} + a_i 1_{(E_i)^c} = a_i.
\]
Now by the uniqueness of weak solutions of (4.5) with coefficient $a_i$, we conclude that $P_i = P_{0,i}$ on $\mathcal F_{\tilde\tau_1}$. This, in turn, implies that $P(E \cap E_i) = P_{0,i}(E \cap E_i)$ for all $E \in \mathcal F_{\tilde\tau_1}$ and $i \ge 1$. Therefore, $P(E) = \sum_{i\ge1} P_{0,i}(E \cap E_i) = P_{0,a}(E)$ for all $E \in \mathcal F_{\tilde\tau_1}$. Hence the weak solution of (4.5) on $[\tilde\tau_0, \tilde\tau_1]$ with coefficient $a$ and initial condition $P_0$ is unique.

We continue with the induction step. Assume that (4.5) on $[\tilde\tau_0, \tilde\tau_n]$ with coefficient $a$ and initial condition $P_0$ has a unique weak solution, denoted by $P_n$. Without loss of generality, we assume $\tilde\tau_n < \tilde\tau_{n+1}$.
Following the same arguments as above, we know that (4.5) on $[\tilde\tau_n, \tilde\tau_{n+1}]$ with coefficient $a$ and initial condition $P_n$ has a unique weak solution, denoted by $P_{n+1}$. Then both $B_t$ and $B_t B_t^{\rm T} - \int_{\tilde\tau_0}^t a_s\,ds$ are $P_{n+1}$-local martingales on $[\tilde\tau_0, \tilde\tau_n]$ and on $[\tilde\tau_n, \tilde\tau_{n+1}]$. This implies that $P_{n+1}$ is a weak solution of SDE (4.5) on $[\tilde\tau_0, \tilde\tau_{n+1}]$ with coefficient $a$ and initial condition $P_0$. On the other hand, let $P$ be an arbitrary weak solution of (4.5) on $[\tilde\tau_0, \tilde\tau_{n+1}]$ with coefficient $a$ and initial condition $P_0$. Since $P$ is also a weak solution
to SDE (4.5) on $[\tilde\tau_0, \tilde\tau_n]$ with coefficient $a$ and initial condition $P_0$, by the uniqueness in the induction hypothesis we must have $P = P_n$ on $\mathcal F_{\tilde\tau_n}$. Therefore, $P$ is in fact a weak solution of (4.5) on $[\tilde\tau_n, \tilde\tau_{n+1}]$ with coefficient $a$ and initial condition $P_n$. Thus by uniqueness $P = P_{n+1}$ on $\mathcal F_{\tilde\tau_{n+1}}$. This proves the induction claim for $n + 1$.

Finally, note that $P_m(E) = P_n(E)$ for all $E \in \mathcal F_{\tilde\tau_n}$ and $m \ge n$. Hence, we may define $P_\infty(E) := P_n(E)$ for $E \in \mathcal F_{\tilde\tau_n}$. Since $\inf\{n : \tau_n = \infty\} < \infty$, we have $\inf\{n : \tilde\tau_n = \theta_2\} < \infty$ and thus $\mathcal F_{\theta_2} = \bigvee_{n\ge1} \mathcal F_{\tilde\tau_n}$. So we can uniquely extend $P_\infty$ to $\mathcal F_{\theta_2}$. Now one directly checks that $P_\infty$ is the unique weak solution of SDE (4.5) on $[\theta_1,\theta_2]$ with coefficient $a$ and initial condition $P_0$.

(ii) We next show that $P^a(E) = 0$ for every $\mathcal A_0$-polar set $E$. Once again we apply Lemma 4.12, now with $\tau = \infty$. Therefore $a_t = \sum_{i\ge1} a_i(t) 1_{E_i}$ for all $t \ge 0$, where $\{a_i, i \ge 1\} \subset \mathcal A_0$ and $\{E_i, i \ge 1\}$ form a partition of $\Omega$. Now for any $\mathcal A_0$-polar set $E$,
\[
P^a(E) = \sum_{i\ge1} P^a(E \cap E_i) = \sum_{i\ge1} P^{a_i}(E \cap E_i) = 0.
\]
This clearly implies the equivalence between $\mathcal A$-quasi surely and $\mathcal A_0$-quasi surely.

(iii) We now assume $\mathcal A_0 \subset \mathcal A_{MRP}$ and show that $a \in \mathcal A_{MRP}$. Let $M$ be a $P^a$-local martingale. We prove, again by induction on $n$, that $M$ has a martingale representation on $[0,\tau_n]$ under $P^a$ for each $n \ge 1$. This, together with the assumption $\inf\{n : \tau_n = \infty\} < \infty$, implies that $M$ has a martingale representation on $\mathbb R_+$ under $P^a$, and thus proves that $a \in \mathcal A_{MRP}$.

Since $\tau_0 = 0$, there is nothing to prove in the case $n = 0$. Assume the result holds on $[0,\tau_n]$. Apply Lemma 4.12 with $\tau = \tau_n$, and recall that in this case we can choose $\tilde\tau$ to be $\tau_{n+1}$. Hence $a_t = \sum_{i\ge1} a_i(t) 1_{E_i}$, $t < \tau_{n+1}$, where $\{a_i, i \ge 1\} \subset \mathcal A_0$ and $\{E_i, i \ge 1\} \subset \mathcal F_{\tau_n}$ form a partition of $\Omega$. For each $i \ge 1$, define
\[
M^i_t := \big[M_{t \wedge \tau_{n+1}} - M_{\tau_n}\big] 1_{E_i} 1_{[\tau_n,\infty)}(t) \quad\text{for all } t \ge 0.
\]
Then one can directly check that $M^i$ is a $P^{a_i}$-local martingale. Since $a_i \in \mathcal A_0 \subset \mathcal A_{MRP}$, there exists $H^i$ such that $dM^i_t = H^i_t\,dB_t$, $P^{a_i}$-almost surely. Now define
\[
H_t := \sum_{i\ge1} H^i_t 1_{E_i}, \quad \tau_n \le t < \tau_{n+1}.
\]
Then we have dm t = H t db t, τ n t < τ n+1, P a -almost surely. We close this subsection by the following important example. Example 4.14 Assume A consists of all deterministic functions a : R + S > d taking the form a t = n 1 i= a t i 1 [ti,t i+1 ) + a tn 1 [tn, ) where t i Q and a ti has rational entries. This is a special case of Example 4.9 and thus A A W. In this case A is countable. Let A = {a i } i 1 and define ˆP := i=1 2 i P a i. Then ˆP is a dominating probability measure of all P a, a A, where A is the separable class of diffusion coefficients generated by A. Therefore, A-quasi surely is equivalent to ˆP-almost surely. Notice however that A is not countable. 17
5 Quasi-sure aggregation

In this section, we fix a separable class $\mathcal A$ of diffusion coefficients generated by $\mathcal A_0$, and denote $\mathcal P := \{P^a, a \in \mathcal A\}$. We then prove the main aggregation result of this paper. For this, we recall that the notion of aggregation is defined in Definition 3.1, and that the notations $\theta^{a,b}$ and $\Omega^{a,b}_{\hat\tau}$ are introduced in Subsection 4.3.

Theorem 5.1 (Quasi-sure aggregation) Let $\{X^a, a \in \mathcal A\}$ be a family of $\hat{\mathbb F}^{\mathcal P}$-progressively measurable processes. Then there exists a unique ($\mathcal P$-q.s.) $\mathcal P$-aggregator $X$ if and only if $\{X^a, a \in \mathcal A\}$ satisfies the consistency condition
\[
X^a = X^b, \quad P^a\text{-almost surely on } [0, \theta^{a,b}), \ \text{for any } a \in \mathcal A \text{ and } b \in \mathcal A. \tag{5.1}
\]
Moreover, if $X^a$ is càdlàg $P^a$-almost surely for all $a \in \mathcal A$, then we can choose a $\mathcal P$-q.s. càdlàg version of the $\mathcal P$-aggregator $X$.

We note that the consistency condition (5.1) is slightly different from the condition (3.4) before. The condition (5.1) is more natural in this framework and is more convenient to check in applications. Before the proof of the theorem, we first show that, for any $a, b \in \mathcal A$, the corresponding probability measures $P^a$ and $P^b$ agree as long as $a$ and $b$ agree.

Lemma 5.2 For any $a, b \in \mathcal A$, $\theta^{a,b}$ is an $\mathbb F$-stopping time taking at most countably many values, and
\[
P^a\big(E \cap \Omega^{a,b}_{\hat\tau}\big) = P^b\big(E \cap \Omega^{a,b}_{\hat\tau}\big) \quad\text{for all } \hat\tau \in \hat{\mathcal T}^{\mathcal P} \text{ and } E \in \hat{\mathcal F}^{\mathcal P}_{\hat\tau}. \tag{5.2}
\]

Proof. (i) We first show that $\theta^{a,b}$ is an $\mathbb F$-stopping time. In view of Lemma 4.12 with $\tau = t$, we may assume without loss of generality that
\[
a_t = \sum_{n\ge1} a_n(t) 1_{E_n} \quad\text{and}\quad b_t = \sum_{n\ge1} b_n(t) 1_{E_n} \quad\text{for all } t < \tilde\tau,
\]
where $\tilde\tau > t$, $a_n, b_n \in \mathcal A_0$, and $\{E_n, n \ge 1\} \subset \mathcal F_t$ form a partition of $\Omega$. Then
\[
\{\theta^{a,b} \le t\} = \bigcup_n \big[\{\theta^{a_n,b_n} \le t\} \cap E_n\big].
\]
By the constant disagreement times property of $\mathcal A_0$, $\theta^{a_n,b_n}$ is a constant. This implies that $\{\theta^{a_n,b_n} \le t\}$ is equal to either $\emptyset$ or $\Omega$. Since $E_n \in \mathcal F_t$, we conclude that $\{\theta^{a,b} \le t\} \in \mathcal F_t$ for all $t \ge 0$. That is, $\theta^{a,b}$ is an $\mathbb F$-stopping time.

(ii) We next show that $\theta^{a,b}$ takes only countably many values. Indeed, by (i) we may now apply Lemma 4.12 with $\tau = \theta^{a,b}$.
So we may write
\[
a_t = \sum_{n\ge1} \tilde a_n(t) 1_{\tilde E_n} \quad\text{and}\quad b_t = \sum_{n\ge1} \tilde b_n(t) 1_{\tilde E_n} \quad\text{for all } t < \tilde\theta,
\]
where $\tilde\theta > \theta^{a,b}$ or $\tilde\theta = \theta^{a,b} = \infty$, $\tilde a_n, \tilde b_n \in \mathcal A_0$, and $\{\tilde E_n, n \ge 1\} \subset \mathcal F_{\theta^{a,b}}$ form a partition of $\Omega$. Then it is clear that $\theta^{a,b} = \theta^{\tilde a_n, \tilde b_n}$ on $\tilde E_n$, for all $n \ge 1$. For each $n$, by the constant disagreement times property of $\mathcal A_0$, $\theta^{\tilde a_n, \tilde b_n}$ is constant. Hence $\theta^{a,b}$ takes only countably many values.

(iii) We now prove (5.2). We first claim that
\[
E \cap \Omega^{a,b}_{\hat\tau} \in \mathcal F_{\theta^{a,b}} \vee \mathcal N^{P^a}(\mathcal F_\infty) \quad\text{for any } E \in \hat{\mathcal F}^{\mathcal P}_{\hat\tau}, \tag{5.3}
\]
and similarly with $b$ in place of $a$. Indeed, for any $t \ge 0$,
\[
E \cap \Omega^{a,b}_{\hat\tau} \cap \{\theta^{a,b} \le t\} = E \cap \{\hat\tau < \theta^{a,b}\} \cap \{\theta^{a,b} \le t\} = \bigcup_{m\ge1}\Big[E \cap \{\hat\tau < \theta^{a,b}\} \cap \{\hat\tau \le t - \tfrac1m\}\Big] \cap \{\theta^{a,b} \le t\}.
\]
By (i) above, $\{\theta^{a,b} \le t\} \in \mathcal F_t$. For each $m \ge 1$,
\[
E \cap \{\hat\tau < \theta^{a,b}\} \cap \{\hat\tau \le t - \tfrac1m\} \in \hat{\mathcal F}^{\mathcal P}_{t-\frac1m} \subset \mathcal F^+_{t-\frac1m} \vee \mathcal N^{P^a}(\mathcal F_\infty) \subset \mathcal F_t \vee \mathcal N^{P^a}(\mathcal F_\infty),
\]
and (5.3) follows. By (5.3), there exist $E^{a,1}, E^{a,2}, E^{b,1}, E^{b,2} \in \mathcal F_{\theta^{a,b}}$ such that
\[
E^{a,1} \subset E \cap \Omega^{a,b}_{\hat\tau} \subset E^{a,2}, \quad E^{b,1} \subset E \cap \Omega^{a,b}_{\hat\tau} \subset E^{b,2},
\]
and $P^a(E^{a,2}\setminus E^{a,1}) = P^b(E^{b,2}\setminus E^{b,1}) = 0$. Define $E^1 := E^{a,1} \cup E^{b,1}$ and $E^2 := E^{a,2} \cap E^{b,2}$. Then $E^1, E^2 \in \mathcal F_{\theta^{a,b}}$, $E^1 \subset E \cap \Omega^{a,b}_{\hat\tau} \subset E^2$, and $P^a(E^2 \setminus E^1) = P^b(E^2 \setminus E^1) = 0$. Thus $P^a(E \cap \Omega^{a,b}_{\hat\tau}) = P^a(E^2)$ and $P^b(E \cap \Omega^{a,b}_{\hat\tau}) = P^b(E^2)$. Finally, since $E^2 \in \mathcal F_{\theta^{a,b}}$, following the definition of $P^a$ and $P^b$, in particular the uniqueness of the weak solution of (4.5) on the interval $[0, \theta^{a,b}]$, we conclude that $P^a(E^2) = P^b(E^2)$. This implies (5.2) immediately.

Remark 5.3 The property (5.2) is crucial for checking the consistency conditions in our aggregation result, Theorem 5.1. We note that (5.2) does not hold if we replace the completed $\sigma$-algebra $\mathcal F^a_{\hat\tau} \cap \mathcal F^b_{\hat\tau}$ with the augmented $\sigma$-algebra $\overline{\mathcal F}^a_{\hat\tau} \cap \overline{\mathcal F}^b_{\hat\tau}$. To see this, let $d = 1$, $a_t := 1$, and $b_t := 1_{[0,1)}(t) + \tfrac14 1_{[1,\infty)}(t)$. In this case, $\theta^{a,b} = 1$. Let $\hat\tau := 0$ and $E := \Omega^a_1$. One can easily check that $\Omega^{a,b}_0 = \Omega$, $P^a(E) = 1$, and $P^b(E) = 0$. This implies that $E \in \overline{\mathcal F}^a_0 \cap \overline{\mathcal F}^b_0$ and $E \subset \Omega^{a,b}_0$. However, $P^a(E) = 1 \ne 0 = P^b(E)$. See also Remark 2.3.

Proof of Theorem 5.1. The uniqueness of the $\mathcal P$-aggregator is immediate. By Lemma 5.2 and the uniqueness of weak solutions of (4.5) on $[0, \theta^{a,b}]$, we know that $P^a = P^b$ on $\mathcal F_{\theta^{a,b}}$. Then the existence of the $\mathcal P$-aggregator obviously implies (5.1).
We now assume that the condition (5.1) holds and prove the existence of the $\mathcal P$-aggregator.
We first claim that, without loss of generality, we may assume that each $X^a$ is càdlàg. Indeed, suppose that the theorem holds for càdlàg processes. Then we construct a $\mathcal P$-aggregator for a family $\{X^a, a \in \mathcal A\}$, not necessarily càdlàg, as follows:

- If $|X^a| \le R$ for some constant $R > 0$ and for all $a \in \mathcal A$, set $Y^a_t := \int_0^t X^a_s\,ds$. Then the family $\{Y^a, a \in \mathcal A\}$ inherits the consistency condition (5.1). Since $Y^a$ is continuous for every $a \in \mathcal A$, this family admits a $\mathcal P$-aggregator $Y$. Define $X_t := \limsup_{\varepsilon \downarrow 0} \frac1\varepsilon\,[Y_{t+\varepsilon} - Y_t]$. Then one can verify directly that $X$ satisfies all the requirements.

- In the general case, set $X^{R,a} := (-R) \vee X^a \wedge R$. By the previous argument there exists a $\mathcal P$-aggregator $X^R$ of the family $\{X^{R,a}, a \in \mathcal A\}$, and it is immediate that $X := \lim_{R\to\infty} X^R$ satisfies all the requirements.

We now assume that $X^a$ is càdlàg, $P^a$-almost surely, for all $a \in \mathcal A$. In this case, the consistency condition (5.1) is equivalent to
\[
X^a_t = X^b_t, \ 0 \le t < \theta^{a,b}, \ P^a\text{-almost surely, for any } a \in \mathcal A \text{ and } b \in \mathcal A. \tag{5.4}
\]

Step 1. We first introduce the following quotient sets of $\mathcal A$. For each $t \ge 0$ and $a, b \in \mathcal A$, we say $a \approx_t b$ if $\Omega^{a,b}_t = \Omega$ (or, equivalently, the constant disagreement time satisfies $\theta^{a,b} > t$). Then $\approx_t$ is an equivalence relation on $\mathcal A$; thus one can form a partition of $\mathcal A$ based on $\approx_t$. Pick an element from each partition set to construct a quotient set $\mathcal A(t) \subset \mathcal A$. That is, for any $a \in \mathcal A$, there exists a unique $b \in \mathcal A(t)$ such that $\Omega^{a,b}_t = \Omega$. By (4.11) and the constant disagreement times property of $\mathcal A_0$, we know that the sets $\{\Omega^a_t, a \in \mathcal A(t)\}$ are disjoint.

Step 2. For fixed $t \in \mathbb R_+$, define
\[
\xi_t(\omega) := \sum_{a \in \mathcal A(t)} X^a_t(\omega) 1_{\Omega^a_t}(\omega) \quad\text{for all } \omega \in \Omega. \tag{5.5}
\]
The above uncountable sum is well defined because the sets $\{\Omega^a_t, a \in \mathcal A(t)\}$ are disjoint. In this step, we show that
\[
\xi_t \text{ is } \hat{\mathcal F}^{\mathcal P}_t\text{-measurable and } \xi_t = X^a_t, \ P^a\text{-almost surely, for all } a \in \mathcal A. \tag{5.6}
\]
We prove this claim in the following three sub-cases.

2.1. For each $a \in \mathcal A(t)$, by definition $\xi_t = X^a_t$ on $\Omega^a_t$; equivalently, $\{\xi_t \ne X^a_t\} \subset (\Omega^a_t)^c$. Moreover, by (4.10), $P^a((\Omega^a_t)^c) = 0$.
Since $\Omega^a_t \in \mathcal F^+_t$ and $\mathcal F^a_t$ is complete under $P^a$, $\xi_t$ is $\mathcal F^a_t$-measurable and $P^a(\xi_t = X^a_t) = 1$.

2.2. Next, for each $a \in \mathcal A$, there exists a unique $b \in \mathcal A(t)$ such that $a \approx_t b$. Then $\xi_t = X^b_t$ on $\Omega^b_t$. Since $\Omega^{a,b}_t = \Omega$, it follows from Lemma 5.2 that $P^a = P^b$ on $\mathcal F^+_t$ and $P^a(\Omega^b_t) = P^b(\Omega^b_t) = 1$. Hence $P^a(\xi_t = X^b_t) = 1$. Now by the same argument as in the first sub-case, we can prove that $\xi_t$ is $\mathcal F^a_t$-measurable. Moreover, by the consistency condition (5.4), $P^a(X^a_t = X^b_t) = 1$. This implies that $P^a(\xi_t = X^a_t) = 1$.
2.3. Now consider a general $a \in \mathcal A$. We apply Lemma 4.12 with $\tau = t$. This provides a sequence $\{a_j, j \ge 1\} \subset \mathcal A_0$ such that $\Omega = \bigcup_{j\ge1} \Omega^{a,a_j}_t$. Then
\[
\{\xi_t \ne X^a_t\} = \bigcup_{j\ge1} \big[\{\xi_t \ne X^a_t\} \cap \Omega^{a,a_j}_t\big].
\]
Now for each $j \ge 1$,
\[
\{\xi_t \ne X^a_t\} \cap \Omega^{a,a_j}_t \subset \big[\{\xi_t \ne X^{a_j}_t\} \cap \Omega^{a,a_j}_t\big] \cup \big[\{X^{a_j}_t \ne X^a_t\} \cap \Omega^{a,a_j}_t\big].
\]
Applying Lemma 5.2 and using the consistency condition (5.4), we obtain
\[
P^a\big(\{X^{a_j}_t \ne X^a_t\} \cap \Omega^{a,a_j}_t\big) = P^{a_j}\big(\{X^{a_j}_t \ne X^a_t\} \cap \Omega^{a,a_j}_t\big) = P^{a_j}\big(\{X^{a_j}_t \ne X^a_t\} \cap \{t < \theta^{a,a_j}\}\big) = 0.
\]
Moreover, for $a_j \in \mathcal A_0 \subset \mathcal A$, by the previous sub-case, $\{\xi_t \ne X^{a_j}_t\} \in \mathcal N^{P^{a_j}}(\mathcal F^+_t)$. Hence there exists $D \in \mathcal F^+_t$ such that $P^{a_j}(D) = 0$ and $\{\xi_t \ne X^{a_j}_t\} \subset D$. Therefore
\[
\{\xi_t \ne X^{a_j}_t\} \cap \Omega^{a,a_j}_t \subset D \cap \Omega^{a,a_j}_t \quad\text{and}\quad P^a\big(D \cap \Omega^{a,a_j}_t\big) = P^{a_j}\big(D \cap \Omega^{a,a_j}_t\big) = 0.
\]
This means that $\{\xi_t \ne X^{a_j}_t\} \cap \Omega^{a,a_j}_t \in \mathcal N^{P^a}(\mathcal F^+_t)$. All of these together imply that $\{\xi_t \ne X^a_t\} \in \mathcal N^{P^a}(\mathcal F^+_t)$. Therefore $\xi_t \in \mathcal F^a_t$ and $P^a(\xi_t = X^a_t) = 1$. Finally, since $\xi_t \in \mathcal F^a_t$ for all $a \in \mathcal A$, we conclude that $\xi_t \in \hat{\mathcal F}^{\mathcal P}_t$. This completes the proof of (5.6).

Step 3. For each $n \ge 1$, set $t^n_i := \frac{i}{n}$, $i \ge 0$, and define
\[
X^{a,n} := X^a_0 1_{\{0\}} + \sum_{i=1}^{\infty} X^a_{t^n_i} 1_{(t^n_{i-1}, t^n_i]} \ \text{ for all } a \in \mathcal A, \quad\text{and}\quad X^n := \xi_0 1_{\{0\}} + \sum_{i=1}^{\infty} \xi_{t^n_i} 1_{(t^n_{i-1}, t^n_i]},
\]
where $\xi_{t^n_i}$ is defined by (5.5). Let $\hat{\mathbb F}^n := \{\hat{\mathcal F}^{\mathcal P}_{t+\frac1n}, t \ge 0\}$. By Step 2, $X^{a,n}$ and $X^n$ are $\hat{\mathbb F}^n$-progressively measurable, and $P^a(X^n_t = X^{a,n}_t, t \ge 0) = 1$ for all $a \in \mathcal A$. We now define $X := \limsup_{n\to\infty} X^n$. Since $\hat{\mathbb F}^n$ decreases to $\hat{\mathbb F}^{\mathcal P}$ and $\hat{\mathbb F}^{\mathcal P}$ is right continuous, $X$ is $\hat{\mathbb F}^{\mathcal P}$-progressively measurable. Moreover, for each $a \in \mathcal A$,
\[
\{X_t = X^a_t, t \ge 0\} \cap \{X \text{ is càdlàg}\} \supset \Big[\bigcap_{n\ge1} \{X^n_t = X^{a,n}_t, t \ge 0\}\Big] \cap \{X^a \text{ is càdlàg}\}.
\]
Therefore $X = X^a$ and $X$ is càdlàg, $P^a$-almost surely, for all $a \in \mathcal A$. In particular, $X$ is càdlàg, $\mathcal P$-quasi surely.
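The patching step (5.5) at the heart of the proof, defining the aggregator piecewise over the disjoint sets $\Omega^a_t$, can be mimicked on a finite toy example. Everything below is hypothetical (a five-point sample space, made-up index set and values); the point is only that the patched function agrees with each member of the family on that member's own set, the analogue of (5.6).

```python
# Toy version of (5.5): a family {X[a]} of functions on Omega = {0,...,4}
# and disjoint sets Omega_t[a] covering Omega; the aggregator xi is defined
# piecewise, xi = X[a] on Omega_t[a]. Purely illustrative values.
Omega_t = {"a1": {0, 1}, "a2": {2}, "a3": {3, 4}}
X = {"a1": {0: 10, 1: 11, 2: 99, 3: 99, 4: 99},
     "a2": {0: 99, 1: 99, 2: 20, 3: 99, 4: 99},
     "a3": {0: 99, 1: 99, 2: 99, 3: 30, 4: 31}}  # 99 = values "off" the set

def xi(w):
    for a, Oa in Omega_t.items():
        if w in Oa:                 # the sets are disjoint, so at most one hit
            return X[a][w]
    raise ValueError("sets do not cover the sample space")

# xi agrees with each X[a] on Omega_t[a], the analogue of (5.6)
assert all(xi(w) == X[a][w] for a, Oa in Omega_t.items() for w in Oa)
assert [xi(w) for w in range(5)] == [10, 11, 20, 30, 31]
```

Disjointness is what makes the "uncountable sum" in (5.5) well defined: for each $\omega$ at most one indicator is nonzero.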
Let $\hat\tau \in \hat{\mathcal T}^{\mathcal P}$ and let $\{\xi^a, a \in \mathcal A\}$ be a family of $\hat{\mathcal F}^{\mathcal P}_{\hat\tau}$-measurable random variables. We say an $\hat{\mathcal F}^{\mathcal P}_{\hat\tau}$-measurable random variable $\xi$ is a $\mathcal P$-aggregator of the family $\{\xi^a, a \in \mathcal A\}$ if $\xi = \xi^a$, $P^a$-almost surely, for all $a \in \mathcal A$. Note that we may identify any $\hat{\mathcal F}^{\mathcal P}_{\hat\tau}$-measurable random variable $\xi$ with the $\hat{\mathbb F}^{\mathcal P}$-progressively measurable process $X_t := \xi 1_{[\hat\tau,\infty)}(t)$. Then a direct consequence of Theorem 5.1 is the following.

Corollary 5.4 Let $\hat\tau \in \hat{\mathcal T}^{\mathcal P}$. Then the family of $\hat{\mathcal F}^{\mathcal P}_{\hat\tau}$-measurable random variables $\{\xi^a, a \in \mathcal A\}$ has a unique (up to $\mathcal P$-quasi sure equality) $\mathcal P$-aggregator $\xi$ if and only if the following consistency condition holds:
\[
\xi^a = \xi^b \text{ on } \Omega^{a,b}_{\hat\tau}, \ P^a\text{-almost surely, for any } a \in \mathcal A \text{ and } b \in \mathcal A. \tag{5.7}
\]

For the next result, we recall that the $P$-Brownian motion $W^P$ is defined in (4.3). As a direct consequence of Theorem 5.1, the following result defines the $\mathcal P$-Brownian motion.

Corollary 5.5 The family $\{W^{P^a}, a \in \mathcal A\}$ admits a unique $\mathcal P$-aggregator $W$. Since $W^{P^a}$ is a $P^a$-Brownian motion for every $a \in \mathcal A$, we call $W$ a $\mathcal P$-universal Brownian motion.

Proof. By Lemma 5.2, it is clear that the family $\{W^{P^a}, a \in \mathcal A\}$ satisfies the consistency condition (5.1). We then apply Theorem 5.1 directly to obtain the $\mathcal P$-aggregator $W$.

The $\mathcal P$-Brownian motion $W$ is our first example of a stochastic integral defined simultaneously under all $P^a$, $a \in \mathcal A$:
\[
W_t = \int_0^t \hat a_s^{-1/2}\,dB_s, \quad t \ge 0, \ \mathcal P\text{-q.s.} \tag{5.8}
\]
We will investigate this universal integration in detail in Section 6.

Remark 5.6 Although $a$ and $W^{P^a}$ are $\mathbb F$-progressively measurable, from Theorem 5.1 we can only deduce that $\hat a$ and $W$ are $\hat{\mathbb F}^{\mathcal P}$-progressively measurable. On the other hand, if we take a version of $W^{P^a}$ that is progressively measurable with respect to the augmented filtration $\overline{\mathbb F}^a$, then in general the consistency condition (5.1) does not hold. For example, let $d = 1$, $a_t := 1$, and $b_t := 1_{[0,1)}(t) + \tfrac14 1_{[1,\infty)}(t)$, $t \ge 0$, as in Remark 5.3. Set
\[
W^{P^a}_t(\omega) := B_t(\omega) + 1_{(\Omega^a_1)^c}(\omega) \quad\text{and}\quad W^{P^b}_t(\omega) := B_t(\omega) + [B_t(\omega) - B_1(\omega)] 1_{[1,\infty)}(t).
\]
Then both $W^{P^a}$ and $W^{P^b}$ are $\overline{\mathbb F}^a \cap \overline{\mathbb F}^b$-progressively measurable.
However, although $\theta^{a,b} = 1$, we have $\{W^{P^a}_t = W^{P^b}_t, 0 \le t < 1\} \subset \Omega^a_1$ and $P^b(\Omega^a_1) = 0$, so we do not have $W^{P^a} = W^{P^b}$, $P^b$-almost surely on $[0,1)$.
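Formula (5.8) can be sanity-checked numerically: under a fixed piecewise-constant coefficient, increments of the canonical process have conditional variance $a_k\,\Delta t$, so the discrete analogue of $\int_0^t \hat a_s^{-1/2}\,dB_s$ should have the quadratic variation of a standard Brownian motion. The following is a hypothetical discretized sketch; the switching coefficient and all numerical values are made up, and the tolerances are loose statistical checks, not exact identities.

```python
import random

# Discrete check of (5.8): under a piecewise-constant coefficient a,
# increments of B are sqrt(a_k * dt) * xi_k with xi_k standard normal,
# so the W-increments a_k^{-1/2} * dB_k equal sqrt(dt) * xi_k and the
# quadratic variation of W over [0, T] is close to T. Sketch only.
random.seed(0)
T, n = 1.0, 20000
dt = T / n
qv_B, qv_W = 0.0, 0.0
for k in range(n):
    a_k = 4.0 if k * dt >= 0.5 else 1.0   # coefficient switches at t = 1/2
    dB = (a_k * dt) ** 0.5 * random.gauss(0.0, 1.0)
    dW = dB / a_k ** 0.5                  # increment of int a^{-1/2} dB
    qv_B += dB * dB
    qv_W += dW * dW

assert abs(qv_W - T) < 0.05    # W behaves like a standard Brownian motion
assert abs(qv_B - 2.5) < 0.15  # <B>_1 = int_0^1 a_s ds = 0.5 + 2.0 = 2.5
```

Running the same loop with a different coefficient leaves the statistics of $W$ unchanged, which is the discrete shadow of $W$ being a Brownian motion under every $P^a$ simultaneously.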
6 Quasi-sure stochastic analysis

In this section, we fix a separable class $\mathcal A$ of diffusion coefficients generated by $\mathcal A_0$. We set $\mathcal P := \{P^a : a \in \mathcal A\}$ and develop the $\mathcal P$-quasi sure stochastic analysis.

We first introduce several spaces. We denote by $L^0$ the collection of all $\hat{\mathcal F}^{\mathcal P}$-measurable random variables with appropriate dimension. For each $p \in [1,\infty]$ and $P \in \mathcal P$, we denote by $L^p(P)$ the corresponding $L^p$ space under the measure $P$, and set
\[
\hat L^p := \bigcap_{P \in \mathcal P} L^p(P).
\]
Similarly, $H^0 := H^0(\mathbb R^d)$ denotes the collection of all $\mathbb R^d$-valued $\hat{\mathbb F}^{\mathcal P}$-progressively measurable processes; $H^p(P^a)$ is the subset of all $H \in H^0$ satisfying
\[
\|H\|^p_{T,H^p(P^a)} := E^{P^a}\Big[\Big(\int_0^T |a_s^{1/2} H_s|^2\,ds\Big)^{p/2}\Big] < \infty \quad\text{for all } T > 0,
\]
and $H^2_{loc}(P^a)$ is the subset of $H^0$ whose elements satisfy $\int_0^T |a_s^{1/2} H_s|^2\,ds < \infty$, $P^a$-almost surely, for all $T \ge 0$. Finally, we define
\[
\hat H^p := \bigcap_{P \in \mathcal P} H^p(P) \quad\text{and}\quad \hat H^2_{loc} := \bigcap_{P \in \mathcal P} H^2_{loc}(P).
\]

The following two results are direct applications of Theorem 5.1. Similar results were also proved in [5, 6]; see in particular Theorem 2.1 in [5], Theorem 36 in [6], and the Kolmogorov criterion of Theorem 31 in [6].

Proposition 6.1 (Completeness) Fix $p \ge 1$.
(i) Let $(X_n)_n \subset \hat L^p$ be a Cauchy sequence under each $P^a$, $a \in \mathcal A$. Then there exists a unique random variable $X \in \hat L^p$ such that $X_n \to X$ in $L^p(P^a, \hat{\mathcal F}^{\mathcal P})$ for every $a \in \mathcal A$.
(ii) Let $(X_n)_n \subset \hat H^p$ be a Cauchy sequence under the norm $\|\cdot\|_{T,H^p(P^a)}$ for all $T \ge 0$ and $a \in \mathcal A$. Then there exists a unique process $X \in \hat H^p$ such that $X_n \to X$ under the norm $\|\cdot\|_{T,H^p(P^a)}$ for all $T \ge 0$ and $a \in \mathcal A$.

Proof. (i) By the completeness of $L^p(P^a, \hat{\mathcal F}^{\mathcal P})$, we may find $X^a \in L^p(P^a, \hat{\mathcal F}^{\mathcal P})$ such that $X_n \to X^a$ in $L^p(P^a, \hat{\mathcal F}^{\mathcal P})$. The consistency condition of Theorem 5.1 is obviously satisfied by the family $\{X^a, a \in \mathcal A\}$, and the result follows. (ii) can be proved by a similar argument.

Proposition 6.2 (Kolmogorov continuity criterion) Let $X$ be an $\hat{\mathbb F}^{\mathcal P}$-progressively measurable process with values in $\mathbb R^n$. We further assume that, for some $p > 1$, $X_t \in \hat L^p$ for all $t \ge 0$ and
\[
E^{P^a}\big[|X_t - X_s|^p\big] \le c_a |t - s|^{n+\varepsilon_a} \quad\text{for some constants } c_a, \varepsilon_a > 0.
\]
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More informationDISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction
DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be
More informationThe Asymptotic Theory of Transaction Costs
The Asymptotic Theory of Transaction Costs Lecture Notes by Walter Schachermayer Nachdiplom-Vorlesung, ETH Zürich, WS 15/16 1 Models on Finite Probability Spaces In this section we consider a stock price
More informationSEPARABLE TERM STRUCTURES AND THE MAXIMAL DEGREE PROBLEM. 1. Introduction This paper discusses arbitrage-free separable term structure (STS) models
SEPARABLE TERM STRUCTURES AND THE MAXIMAL DEGREE PROBLEM DAMIR FILIPOVIĆ Abstract. This paper discusses separable term structure diffusion models in an arbitrage-free environment. Using general consistency
More informationFinancial Asset Price Bubbles under Model Uncertainty
Financial Asset Price Bubbles under Model Uncertainty Francesca Biagini, Jacopo Mancin October 24, 27 Abstract We study the concept of financial bubbles in a market model endowed with a set P of probability
More information3 Integration and Expectation
3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ
More informationStochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity
Stochastic calculus without probability: Pathwise integration and functional calculus for functionals of paths with arbitrary Hölder regularity Rama Cont Joint work with: Anna ANANOVA (Imperial) Nicolas
More informationLECTURE 2: LOCAL TIME FOR BROWNIAN MOTION
LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in
More informationSUPERMARTINGALES AS RADON-NIKODYM DENSITIES AND RELATED MEASURE EXTENSIONS
Submitted to the Annals of Probability SUPERMARTINGALES AS RADON-NIKODYM DENSITIES AND RELATED MEASURE EXTENSIONS By Nicolas Perkowski and Johannes Ruf Université Paris Dauphine and University College
More informationChapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries
Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.
More informationSquared Bessel Process with Delay
Southern Illinois University Carbondale OpenSIUC Articles and Preprints Department of Mathematics 216 Squared Bessel Process with Delay Harry Randolph Hughes Southern Illinois University Carbondale, hrhughes@siu.edu
More informationA D VA N C E D P R O B A B I L - I T Y
A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2
More informationThe Skorokhod reflection problem for functions with discontinuities (contractive case)
The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection
More informationSome Background Material
Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important
More informationRegularity of the density for the stochastic heat equation
Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department
More informationJump Processes. Richard F. Bass
Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov
More informationONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS
ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS JOHANNES RUF AND JAMES LEWIS WOLTER Appendix B. The Proofs of Theorem. and Proposition.3 The proof
More informationStochastic Analysis I S.Kotani April 2006
Stochastic Analysis I S.Kotani April 6 To describe time evolution of randomly developing phenomena such as motion of particles in random media, variation of stock prices and so on, we have to treat stochastic
More information4th Preparation Sheet - Solutions
Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space
More informationGENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS
Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationThe Azéma-Yor Embedding in Non-Singular Diffusions
Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let
More informationBoundedly complete weak-cauchy basic sequences in Banach spaces with the PCP
Journal of Functional Analysis 253 (2007) 772 781 www.elsevier.com/locate/jfa Note Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP Haskell Rosenthal Department of Mathematics,
More information2 Sequences, Continuity, and Limits
2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs
More informationA MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT
A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields
More informationBernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012
1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.
More informationExponential martingales: uniform integrability results and applications to point processes
Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda
More informationSolution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have
362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications
More informationVISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5.
VISCOSITY SOLUTIONS PETER HINTZ We follow Han and Lin, Elliptic Partial Differential Equations, 5. 1. Motivation Throughout, we will assume that Ω R n is a bounded and connected domain and that a ij C(Ω)
More informationOptimal Skorokhod embedding under finitely-many marginal constraints
Optimal Skorokhod embedding under finitely-many marginal constraints Gaoyue Guo Xiaolu Tan Nizar Touzi June 10, 2015 Abstract The Skorokhod embedding problem aims to represent a given probability measure
More informationTools from Lebesgue integration
Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given
More informationWiener Measure and Brownian Motion
Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u
More informationGeneralized Hypothesis Testing and Maximizing the Success Probability in Financial Markets
Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City
More informationON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS
PORTUGALIAE MATHEMATICA Vol. 55 Fasc. 4 1998 ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS C. Sonoc Abstract: A sufficient condition for uniqueness of solutions of ordinary
More informationSolutions to the Exercises in Stochastic Analysis
Solutions to the Exercises in Stochastic Analysis Lecturer: Xue-Mei Li 1 Problem Sheet 1 In these solution I avoid using conditional expectations. But do try to give alternative proofs once we learnt conditional
More informationExercises in stochastic analysis
Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with
More informationDoléans measures. Appendix C. C.1 Introduction
Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration
More information1 Brownian Local Time
1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =
More informationBackward Stochastic Differential Equations with Infinite Time Horizon
Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March
More informationBasic Definitions: Indexed Collections and Random Functions
Chapter 1 Basic Definitions: Indexed Collections and Random Functions Section 1.1 introduces stochastic processes as indexed collections of random variables. Section 1.2 builds the necessary machinery
More informationAn Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition Theorem
An Almost Sure Approximation for the Predictable Process in the Doob Meyer Decomposition heorem Adam Jakubowski Nicolaus Copernicus University, Faculty of Mathematics and Computer Science, ul. Chopina
More information