Insider games with asymmetric information. Bernt Øksendal Department of Mathematics, University of Oslo
1 Insider games with asymmetric information. Bernt Øksendal, Department of Mathematics, University of Oslo. 7th General AMaMeF and Swissquote Conference, Lausanne, Switzerland, 7-11 September 2015.
2 Acknowledgments/thanks: This talk is based on joint papers with Olfa Draouil, University of Tunis El Manar, Tunisia. This research was carried out with support of Centre for Advanced Study (CAS), at the Norwegian Academy of Science and Letters, within the research program Stochastics in Environmental and Financial Economics (SEFE).
3 As illustrations of this machinery we apply it to (i) the problem of optimal consumption under model uncertainty for an insider in a jump-diffusion financial market, (ii) the problem of optimal portfolio for an insider in an insider-influenced market. 1. Introduction In this talk we present a general method for finding Nash equilibria of insider control games with asymmetric information, i.e. stochastic differential games where one or several of the controllers have access to information about the future of the system. This inside information in the control process puts the problem outside the context of semimartingale theory, and we therefore apply general anticipating white noise calculus, including forward integrals and Hida-Malliavin calculus, to study it. Combining this with the Donsker delta functional for the random variable Y which represents the inside information, we are able to prove both sufficient and necessary maximum principles for the Nash equilibria of such games.
4 We now explain this in more detail: The system we consider is described by a stochastic differential equation driven by a Brownian motion $B(t)$ and an independent compensated Poisson random measure $\tilde{N}(dt,d\zeta)$, jointly defined on a filtered probability space $(\Omega, \mathbb{F} = \{\mathcal{F}_t\}_{t \geq 0}, P)$ satisfying the usual conditions. We assume that the inside information is of initial enlargement type. Specifically, we assume that the inside filtration $\mathbb{H}^i$ for player $i$ has the form
(1.1) $\mathbb{H}^i = \{\mathcal{H}^i_t\}_{t \geq 0}$, where $\mathcal{H}^i_t = \mathcal{F}_t \vee \sigma(Y_i)$ for all $t$,
where $Y_i$ is a given $\mathcal{F}_{T_0}$-measurable random variable, for some $T_0 > T$ (both constants), representing the inside information of player $i$. For simplicity we consider only the case with 2 players. We put $Y = (Y_1, Y_2)$.
5 We assume that the values at time $t$ of our two insider control processes $u_1(t), u_2(t)$ are allowed to depend on $\mathcal{F}_t$ and on $Y_1, Y_2$, respectively. In other words, $u_i$ is assumed to be $\mathbb{H}^i$-adapted. Therefore it has the form
(1.2) $u_i(t,\omega) = \tilde{u}_i(t, Y_i, \omega)$
for some function $\tilde{u}_i : [0,T] \times \mathbb{R} \times \Omega \to \mathbb{R}$ such that $\tilde{u}_i(t,y)$ is $\mathbb{F}$-adapted for each $y$. For simplicity (albeit with some abuse of notation) we will in the following write $u_i$ instead of $\tilde{u}_i$, $i = 1,2$. To explain our approach we first discuss the case with just one insider and her stochastic control problem:
6 Consider a controlled stochastic process $X(t) = X^u(t)$ of the form
(1.3) $dX(t) = b(t, X(t), u(t), Y)dt + \sigma(t, X(t), u(t), Y)d^-B(t) + \int_{\mathbb{R}} \gamma(t, X(t), u(t), Y, \zeta)\tilde{N}(d^-t, d\zeta)$; $t \geq 0$, $X(0) = x \in \mathbb{R}$,
where $u(t) = u(t,y)_{y=Y}$ is our insider control and the (anticipating) stochastic integrals are interpreted as forward integrals, as introduced in Russo & Vallois (1993) [RV] (Brownian motion case) and in Di Nunno et al (2006) [DMØP1] (Poisson random measure case). A motivation for using forward integrals in the modelling of insider control is given in F. Biagini & Ø. (2005) [BØ].
7 Let $\mathcal{A}$ be a given family of admissible $\mathbb{H}$-adapted controls $u$. The performance functional $J(u)$ of a control process $u \in \mathcal{A}$ is defined by
(1.4) $J(u) = E\big[\int_0^T f(t, X(t), u(t))dt + g(X(T))\big]$.
We consider the problem to find $u^* \in \mathcal{A}$ such that
(1.5) $\sup_{u \in \mathcal{A}} J(u) = J(u^*)$.
We use the Donsker delta functional of $Y$ to transform this anticipating system into a classical (albeit parametrised) adapted system with a non-classical performance functional. Then we solve this transformed system by using modified maximum principles.
8 2. The Donsker delta functional Definition Let $Z : \Omega \to \mathbb{R}$ be a random variable which also belongs to $(\mathcal{S})^*$. Then a continuous functional
(2.1) $\delta_Z(\cdot) : \mathbb{R} \to (\mathcal{S})^*$
is called a Donsker delta functional of $Z$ if it has the property that
(2.2) $\int_{\mathbb{R}} g(z)\delta_Z(z)dz = g(Z)$ a.s.
for all (measurable) $g : \mathbb{R} \to \mathbb{R}$ such that the integral converges.
9 The Donsker delta functional is related to the regular conditional distribution. The connection is the following: Define the regular conditional distribution with respect to $\mathcal{F}_t$ of a given real random variable $Y$, denoted by $Q_t(dy) = Q_t(\omega, dy)$, by the following properties: For any Borel set $\Lambda \subseteq \mathbb{R}$, $Q_t(\cdot, \Lambda)$ is a version of $E[\mathbf{1}_{Y \in \Lambda} \mid \mathcal{F}_t]$; for each fixed $\omega$, $Q_t(\omega, dy)$ is a probability measure on the Borel subsets of $\mathbb{R}$.
10 It is well-known that such a regular conditional distribution always exists. See e.g. Breiman (1968) [B], page 79. From the required properties of $Q_t(\omega, dy)$ we get the following formula:
(2.3) $\int_{\mathbb{R}} f(y)Q_t(\omega, dy) = E[f(Y) \mid \mathcal{F}_t]$.
Comparing with the definition of the Donsker delta functional, we obtain the following representation of the regular conditional distribution: Theorem Suppose $Q_t(\omega, dy)$ is absolutely continuous with respect to Lebesgue measure on $\mathbb{R}$. Then the Donsker delta functional of $Y$, $\delta_Y(y)$, exists and we have
(2.4) $\dfrac{Q_t(\omega, dy)}{dy} = E[\delta_Y(y) \mid \mathcal{F}_t]$.
11 A general expression, in terms of Wick calculus, for the Donsker delta functional of an Itô diffusion with non-degenerate diffusion coefficient can be found in the amazing paper Lanconelli & Proske (2004) [LP]. See also Meyer-Brandis & Proske (2006) [MP]. In the following we present more explicit formulas for the Donsker delta functional and its conditional expectation and Hida-Malliavin derivatives, for Itô-Lévy processes.
12 List of notation:
- $\lambda$ denotes Lebesgue measure on $\mathbb{R}$, or on $[0,T]$, depending on the situation.
- $F \diamond G$ = the Wick product of random variables $F$ and $G$.
- $F^{\diamond n} = F \diamond F \diamond \dots \diamond F$ ($n$ times). (The $n$th Wick power of $F$.)
- $\exp^{\diamond}(F) = \sum_{n=0}^{\infty} \frac{1}{n!} F^{\diamond n}$ (the Wick exponential of $F$).
- $D_t F$ = the Hida-Malliavin derivative of $F$ at $t$ with respect to $B(\cdot)$.
- $D_{t,z} F$ = the Hida-Malliavin derivative of $F$ at $(t,z)$ with respect to $N(\cdot,\cdot)$.
- $D(\varphi^{\diamond}(F)) = (\varphi')^{\diamond}(F) \diamond DF$, where $D = D_t$ or $D = D_{t,z}$. (The Wick chain rule.)
- $(\mathcal{S}), (\mathcal{S})^*$ = the Hida stochastic test function space and stochastic distribution space, respectively; $(\mathcal{S}) \subset L^2(P) \subset (\mathcal{S})^*$.
13 2.1. The Donsker delta functional for a class of Itô-Lévy processes Consider the special case when $Y$ is a first order chaos random variable of the form
(2.5) $Y = Y(T_0)$, where $Y(t) = \int_0^t \beta(s)dB(s) + \int_0^t \int_{\mathbb{R}} \psi(s,\zeta)\tilde{N}(ds,d\zeta)$; $t \in [0, T_0]$,
for some deterministic functions $\beta, \psi$ satisfying
(2.6) $\int_0^{T_0} \big\{\beta^2(t) + \int_{\mathbb{R}} \psi^2(t,\zeta)\nu(d\zeta)\big\}dt < \infty$ a.s.
We also assume that the following holds throughout this paper: For every $\epsilon > 0$ there exists $\rho > 0$ such that
(2.7) $\int_{\mathbb{R} \setminus (-\epsilon,\epsilon)} e^{\rho \zeta}\nu(d\zeta) < \infty$.
14 In this case it is well known (see e.g. Mataramvura et al (2004) [MØP], Di Nunno & Ø. (2007) ([DiØ1], [DiØ2]), and Di Nunno et al. (2009) [DØP]) that the Donsker delta functional exists in $(\mathcal{S})^*$ and is given by
(2.8) $\delta_Y(y) = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp^{\diamond}\Big[\int_0^{T_0}\!\!\int_{\mathbb{R}} (e^{ix\psi(s,\zeta)} - 1)\tilde{N}(ds,d\zeta) + \int_0^{T_0} ix\beta(s)dB(s) + \int_0^{T_0} \big\{\int_{\mathbb{R}} (e^{ix\psi(s,\zeta)} - 1 - ix\psi(s,\zeta))\nu(d\zeta) - \tfrac{1}{2}x^2\beta^2(s)\big\}ds - ixy\Big]dx$.
15 Taking conditional expectation brings us from $(\mathcal{S})^*$ down to $L^2(\lambda \times P)$. In fact, we have the following result: Lemma
(2.9) $E[\delta_Y(y) \mid \mathcal{F}_t] = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp\Big[\int_0^t\!\!\int_{\mathbb{R}} ix\psi(s,\zeta)\tilde{N}(ds,d\zeta) + \int_0^t ix\beta(s)dB(s) + \int_t^{T_0}\!\!\int_{\mathbb{R}} (e^{ix\psi(s,\zeta)} - 1 - ix\psi(s,\zeta))\nu(d\zeta)ds - \int_t^{T_0} \tfrac{1}{2}x^2\beta^2(s)ds - ixy\Big]dx$.
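As a sanity check on (2.9): in the pure Brownian case ($\psi = 0$, constant $\beta$) the $x$-integral reduces to the Gaussian density $N(Y(t), v_t)$ evaluated at $y$, with $v_t = \int_t^{T_0}\beta^2(s)ds$. A minimal numerical sketch (all values illustrative):

```python
import numpy as np

# Pure Brownian case of (2.9): psi = 0, constant beta.  The x-integral
# should equal the N(Y(t), v_t) density at y, v_t = beta^2 * (T0 - t).
beta, t, T0 = 0.7, 0.3, 1.0
Yt = 0.25                      # a hypothetical realized value of Y(t)
v = beta**2 * (T0 - t)
y = 0.4

x = np.linspace(-60.0, 60.0, 400001)
dx = x[1] - x[0]
integrand = np.exp(1j * x * (Yt - y) - 0.5 * v * x**2)
fourier_val = integrand.real.sum() * dx / (2 * np.pi)   # Riemann sum over x

gauss_val = np.exp(-(Yt - y)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
assert abs(fourier_val - gauss_val) < 1e-6
```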
16 Next, we need the following: Lemma
(2.10) $E[D_{t,z}\delta_Y(y) \mid \mathcal{F}_t] = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp\Big[\int_0^t\!\!\int_{\mathbb{R}} ix\psi(s,\zeta)\tilde{N}(ds,d\zeta) + \int_0^t ix\beta(s)dB(s) + \int_t^{T_0}\!\!\int_{\mathbb{R}} (e^{ix\psi(s,\zeta)} - 1 - ix\psi(s,\zeta))\nu(d\zeta)ds - \int_t^{T_0} \tfrac{1}{2}x^2\beta^2(s)ds - ixy\Big]\,(e^{ix\psi(t,z)} - 1)\,dx$.
17 Finally, we need the following result: Lemma
(2.11) $E[D_t\delta_Y(y) \mid \mathcal{F}_t] = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp\Big[\int_0^t\!\!\int_{\mathbb{R}} ix\psi(s,\zeta)\tilde{N}(ds,d\zeta) + \int_0^t ix\beta(s)dB(s) + \int_t^{T_0}\!\!\int_{\mathbb{R}} (e^{ix\psi(s,\zeta)} - 1 - ix\psi(s,\zeta))\nu(d\zeta)ds - \int_t^{T_0} \tfrac{1}{2}x^2\beta^2(s)ds - ixy\Big]\,ix\beta(t)\,dx$.
18 2.2. The Donsker delta functional for a Gaussian process Consider the special case when $Y$ is a Gaussian random variable of the form
(2.12) $Y = Y(T_0)$, where $Y(t) = \int_0^t \beta(s)dB(s)$ for $t \in [0, T_0]$,
for some deterministic function $\beta \in L^2[0, T_0]$ with
(2.13) $\|\beta\|^2_{[t,T]} := \int_t^T \beta(s)^2 ds > 0$ for all $t \in [0, T]$.
In this case it is well known that the Donsker delta functional is given by
(2.14) $\delta_Y(y) = (2\pi v)^{-\frac{1}{2}} \exp^{\diamond}\Big[-\dfrac{(Y - y)^{\diamond 2}}{2v}\Big]$,
where we have put $v := \|\beta\|^2_{[0,T_0]}$. See e.g. Aase et al (2001) [AaØU], Proposition 3.2.
19 Using the Wick rule when taking conditional expectation, using the martingale property of the process $Y(t)$ and applying Lemma 3.7 in [AaØU] we get
(2.15) $E[\delta_Y(y) \mid \mathcal{F}_t] = (2\pi v)^{-\frac{1}{2}} \exp^{\diamond}\Big[-\dfrac{(Y(t) - y)^{\diamond 2}}{2v}\Big] = (2\pi \|\beta\|^2_{[t,T_0]})^{-\frac{1}{2}} \exp\Big[-\dfrac{(Y(t) - y)^2}{2\|\beta\|^2_{[t,T_0]}}\Big]$.
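So in the Gaussian case, $E[\delta_Y(y) \mid \mathcal{F}_t]$ is, as a function of $y$, simply the Gaussian density with mean $Y(t)$ and variance $\|\beta\|^2_{[t,T_0]}$. A quick numerical check (constant $\beta$, illustrative values) that it integrates to 1 and has conditional mean $Y(t)$:

```python
import numpy as np

# E[delta_Y(y)|F_t] from (2.15): a N(Y(t), ||beta||^2_{[t,T0]}) density in y.
beta, t, T0 = 0.5, 0.4, 1.0
Yt = -0.1                              # hypothetical observed value of Y(t)
v_t = beta**2 * (T0 - t)               # ||beta||^2_{[t,T0]} for constant beta

y = np.linspace(-5.0, 5.0, 20001)
dy = y[1] - y[0]
dens = np.exp(-(Yt - y)**2 / (2 * v_t)) / np.sqrt(2 * np.pi * v_t)

assert abs(dens.sum() * dy - 1.0) < 1e-6       # a probability density in y
assert abs((y * dens).sum() * dy - Yt) < 1e-6  # conditional mean is Y(t)
```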
20 Similarly, by the Wick chain rule and Lemma 3.8 in [AaØU] we get, for $t \in [0, T]$,
(2.16) $E[D_t\delta_Y(y) \mid \mathcal{F}_t] = E\Big[(2\pi v)^{-\frac{1}{2}} \exp^{\diamond}\Big[-\dfrac{(Y(T_0) - y)^{\diamond 2}}{2v}\Big] \diamond \dfrac{y - Y(T_0)}{v}\,\beta(t) \,\Big|\, \mathcal{F}_t\Big] = (2\pi \|\beta\|^2_{[t,T_0]})^{-\frac{1}{2}} \exp\Big[-\dfrac{(Y(t) - y)^2}{2\|\beta\|^2_{[t,T_0]}}\Big] \cdot \dfrac{y - Y(t)}{\|\beta\|^2_{[t,T_0]}}\,\beta(t)$.
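With the sign convention used here, (2.16) says the conditional Hida-Malliavin derivative equals $\beta(t)$ times the sensitivity of the conditional density (2.15) to the observed value $Y(t)$ (consistent with the Clark-Ocone representation of $E[\delta_Y(y)\mid\mathcal{F}_t]$). A finite-difference check (illustrative values):

```python
import numpy as np

# Check (2.16) against (2.15): conditional Hida-Malliavin derivative
# = beta(t) * d/dY(t) of the conditional density.
beta, t, T0 = 0.5, 0.4, 1.0
v_t = beta**2 * (T0 - t)
Yt, y = -0.1, 0.3

def cond_dens(Yt):        # (2.15) as a function of the observed Y(t)
    return np.exp(-(Yt - y)**2 / (2 * v_t)) / np.sqrt(2 * np.pi * v_t)

lhs = cond_dens(Yt) * (y - Yt) / v_t * beta     # (2.16)
eps = 1e-6
rhs = beta * (cond_dens(Yt + eps) - cond_dens(Yt - eps)) / (2 * eps)
assert abs(lhs - rhs) < 1e-6
```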
21 2.4 The Donsker delta functional for a Poisson process Next, assume that $Y = Y(T_0)$, where $Y(t) = \tilde{N}(t) = N(t) - \lambda t$ and $N(t)$ is a Poisson process with intensity $\lambda > 0$. In this case the Lévy measure is $\nu(d\zeta) = \lambda \delta_1(d\zeta)$, since the jumps are of size 1. Comparing with (2.8) and taking $\beta = 0$ and $\psi = 1$, we obtain
(2.17) $\delta_Y(y) = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp^{\diamond}\Big[(e^{ix} - 1)\tilde{N}(T_0) + \lambda T_0(e^{ix} - 1 - ix) - ixy\Big]dx$.
22 By using the general expressions in Section 2.1, we get:
(2.18) $E[\delta_Y(y) \mid \mathcal{F}_t] = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp\Big[ix\tilde{N}(t) + \lambda(T_0 - t)(e^{ix} - 1 - ix) - ixy\Big]dx$
and
(2.19) $E[D_{t,1}\delta_Y(y) \mid \mathcal{F}_t] = \dfrac{1}{2\pi} \int_{\mathbb{R}} \exp\Big[ix\tilde{N}(t) + \lambda(T_0 - t)(e^{ix} - 1 - ix) - ixy\Big](e^{ix} - 1)dx$.
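The $x$-integral in (2.18) encodes the conditional law of $N(T_0)$ given $\mathcal{F}_t$: given $N(t)$, the remaining increment is an independent Poisson$(\lambda(T_0-t))$ variable, with characteristic function $\exp(\lambda(T_0-t)(e^{ix}-1))$. A sketch recovering that increment's pmf by discrete Fourier inversion (illustrative parameters):

```python
import numpy as np
from math import exp, factorial

# Recover the Poisson(mu) pmf of the increment N(T0)-N(t) from its
# characteristic function, as it appears inside (2.18).
lam, t, T0 = 2.0, 0.5, 1.5
mu = lam * (T0 - t)
M = 256
x = 2 * np.pi * np.arange(M) / M
phi = np.exp(mu * (np.exp(1j * x) - 1.0))   # characteristic function on [0, 2pi)
pmf = np.fft.fft(phi).real / M              # inverse transform -> pmf

for n in range(10):
    assert abs(pmf[n] - exp(-mu) * mu**n / factorial(n)) < 1e-10
```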
23 3. The forward integral with respect to Brownian motion The forward integral with respect to Brownian motion was first defined in the seminal paper Russo & Vallois (1993) [RV] and further studied in subsequent papers by the same authors. This integral was introduced in the modeling of insider trading in F. Biagini & Ø. (2005) [BØ] and then applied by several authors in questions related to insider trading and stochastic control with advanced information.
24 Definition We say that a stochastic process $\varphi = \varphi(t)$, $t \in [0, T]$, is forward integrable (in the weak sense) over the interval $[0, T]$ with respect to $B$ if there exists a process $I = I(t)$, $t \in [0, T]$, such that
(3.1) $\sup_{t \in [0,T]} \Big| \int_0^t \varphi(s)\dfrac{B(s + \epsilon) - B(s)}{\epsilon}ds - I(t) \Big| \to 0$ as $\epsilon \to 0^+$,
in probability. In this case we write
(3.2) $I(t) := \int_0^t \varphi(s)d^-B(s)$, $t \in [0, T]$,
and call $I(t)$ the forward integral of $\varphi$ with respect to $B$ on $[0, t]$.
25 The following results give a more intuitive interpretation of the forward integral as a limit of Riemann sums. Lemma Suppose $\varphi$ is càglàd and forward integrable. Then
(3.3) $\int_0^T \varphi(s)d^-B(s) = \lim_{\Delta t \to 0} \sum_{j=1}^{J_n} \varphi(t_{j-1})(B(t_j) - B(t_{j-1}))$
with convergence in probability. Here the limit is taken over the partitions $0 = t_0 < t_1 < \dots < t_{J_n} = T$ of $[0, T]$ with $\Delta t := \max_{j=1,\dots,J_n}(t_j - t_{j-1}) \to 0$ as $n \to \infty$.
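For an adapted integrand these Riemann sums also approximate the Itô integral, which can be seen numerically. A sketch with $\varphi = B$, where pathwise $\int_0^T B\,d^-B = (B(T)^2 - T)/2$ up to discretisation error (simulation parameters illustrative):

```python
import numpy as np

# Riemann-sum approximation (3.3) for the adapted integrand phi = B.
rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

riemann = np.sum(B[:-1] * dB)   # sum of phi(t_{j-1}) * (B(t_j) - B(t_{j-1}))
assert abs(riemann - (B[-1]**2 - T) / 2) < 0.05
```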
26 Remark From the previous lemma we can see that, if the integrand $\varphi$ is $\mathbb{F}$-adapted, then the Riemann sums are also an approximation to the Itô integral of $\varphi$ with respect to the Brownian motion. Hence in this case the forward integral and the Itô integral coincide. In this sense we can regard the forward integral as an extension of the Itô integral to an anticipating setting.
27 We now give some useful properties of the forward integral. The following result is an immediate consequence of the definition. Lemma Suppose $\varphi$ is a forward integrable stochastic process and $G$ a random variable. Then the product $G\varphi$ is a forward integrable stochastic process and
(3.4) $\int_0^T G\varphi(t)d^-B(t) = G\int_0^T \varphi(t)d^-B(t)$.
28 As a consequence of the above we get the following useful result: Lemma Let $\varphi(t, y)$ be an $\mathbb{F}$-adapted process for each $y$ such that the classical Itô integral $\int_0^T \varphi(t, y)dB(t)$ exists for each $y$. Let $Y$ be a random variable. Then $\varphi(t, Y)$ is forward integrable and
(3.5) $\int_0^T \varphi(t, Y)d^-B(t) = \Big[\int_0^T \varphi(t, y)dB(t)\Big]_{y=Y}$.
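Property (3.4) can be illustrated with the simplest genuinely anticipating factor $G = B(T)$ and $\varphi = 1$: the Riemann sums give exactly $\int_0^T B(T)\,d^-B = B(T)(B(T) - B(0)) = B(T)^2$, even though $B(T)$ "sees the future" and the Itô integral is not defined for this integrand. A minimal sketch:

```python
import numpy as np

# Anticipating Riemann sums: G = B(T) factors out of the forward integral.
rng = np.random.default_rng(1)
T, n = 1.0, 1000
dB = rng.normal(0.0, np.sqrt(T / n), n)
B_T = dB.sum()

forward = np.sum(B_T * dB)      # sum of G * (B(t_j) - B(t_{j-1}))
assert abs(forward - B_T**2) < 1e-12
```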
29 The next result shows that the forward integral is an extension of the integral with respect to a semimartingale. Lemma Let $\mathbb{G} := \{\mathcal{G}_t, t \in [0, T]\}$ ($T > 0$) be a given filtration. Suppose that 1. $B$ is a semimartingale with respect to the filtration $\mathbb{G}$; 2. $\varphi$ is $\mathbb{G}$-predictable and the integral
(3.6) $\int_0^T \varphi(t)dB(t)$
with respect to $B$ exists. Then $\varphi$ is forward integrable and
(3.7) $\int_0^T \varphi(t)d^-B(t) = \int_0^T \varphi(t)dB(t)$.
30 We now turn to the Itô formula for forward integrals. In this connection it is convenient to introduce a notation that is analogous to the classical notation for Itô processes. Definition A forward process (with respect to $B$) is a stochastic process of the form
(3.8) $X(t) = x + \int_0^t u(s)ds + \int_0^t v(s)d^-B(s)$, $t \in [0, T]$,
($x$ constant), where $\int_0^T |u(s)|ds < \infty$, $P$-a.s., and $v$ is a forward integrable stochastic process. A shorthand notation for (3.8) is
(3.9) $d^-X(t) = u(t)dt + v(t)d^-B(t)$.
31 Theorem (The one-dimensional Itô formula for forward integrals) Let
(3.10) $d^-X(t) = u(t)dt + v(t)d^-B(t)$
be a forward process. Let $f \in C^{1,2}([0, T] \times \mathbb{R})$ and define
(3.11) $Y(t) = f(t, X(t))$, $t \in [0, T]$.
Then $Y(t)$, $t \in [0, T]$, is also a forward process and
(3.12) $d^-Y(t) = \dfrac{\partial f}{\partial t}(t, X(t))dt + \dfrac{\partial f}{\partial x}(t, X(t))d^-X(t) + \dfrac{1}{2}\dfrac{\partial^2 f}{\partial x^2}(t, X(t))v^2(t)dt$.
Similar definitions and results can be obtained in the Poisson random measure case. See Di Nunno et al (2006) [DMØP1].
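For adapted integrands (3.12) reduces to the classical Itô formula, which can be checked on a simulated path. A discrete sketch with $X = B$ and $f(x) = x^3$, so that $B(T)^3 = \int_0^T 3B^2\,dB + \int_0^T 3B\,dt$ up to discretisation error (parameters illustrative):

```python
import numpy as np

# Discrete check of the Ito formula (3.12) with X = B and f(x) = x^3.
rng = np.random.default_rng(2)
T, n = 1.0, 400_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

rhs = np.sum(3 * B[:-1]**2 * dB) + np.sum(3 * B[:-1]) * dt
assert abs(B[-1]**3 - rhs) < 0.05
```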
32 4. The general insider stochastic differential game In this section, we formulate and prove a sufficient and a necessary maximum principle for general stochastic differential games (not necessarily zero-sum games) for insiders. The system we consider is described by a stochastic differential equation driven by a Brownian motion $B(t)$ and an independent compensated Poisson random measure $\tilde{N}(dt, d\zeta)$, jointly defined on a filtered probability space $(\Omega, \mathbb{F} = \{\mathcal{F}_t\}_{t \geq 0}, P)$ satisfying the usual conditions. We assume that the inside information is of initial enlargement type. Specifically, we assume that the two inside filtrations $\mathbb{H}^1, \mathbb{H}^2$, representing the information flows available to player 1 and player 2, respectively, have the form
(4.1) $\mathbb{H}^i = \{\mathcal{H}^i_t\}_{t \geq 0}$, where $\mathcal{H}^i_t = \mathcal{F}_t \vee \sigma(Y_i)$, $i = 1, 2$, for all $t$,
where $Y_i$ is a given $\mathcal{F}_{T_0}$-measurable random variable, for some constant $T_0 > T$.
33 Here the insider control process is $u(t) = (u_1(t), u_2(t))$, where $u_i(t)$ is the control of player $i$; $i = 1, 2$. Thus we assume that the value at time $t$ of our insider control process $u_i(t)$ is allowed to depend on both $Y_i$ and $\mathcal{F}_t$; $i = 1, 2$. In other words, $u_i$ is assumed to be $\mathbb{H}^i$-adapted for $i = 1, 2$. Therefore they have the form
(4.2) $u_i(t, \omega) = \tilde{u}_i(t, Y_i, \omega)$
for some function $\tilde{u}_i : [0, T] \times \mathbb{R} \times \Omega \to \mathbb{R}$ such that $\tilde{u}_i(t, y_i)$ is $\mathbb{F}$-adapted for each $y_i$. For simplicity (albeit with some abuse of notation) we will in the following write $u_i$ instead of $\tilde{u}_i$; $i = 1, 2$. Consider a controlled stochastic process $X(t) = X^u(t)$ of the form
(4.3) $dX(t) = dX^u(t) = b(t, X(t), u_1(t), u_2(t), Y_1, Y_2)dt + \sigma(t, X(t), u_1(t), u_2(t), Y_1, Y_2)d^-B(t) + \int_{\mathbb{R}} \gamma(t, X(t), u_1(t), u_2(t), Y_1, Y_2, \zeta)\tilde{N}(d^-t, d\zeta)$; $t \geq 0$, $X(0) = x \in \mathbb{R}$,
34 where $u_i(t) = u_i(t, y_i)_{y_i=Y_i}$ is the control process of insider $i$; $i = 1, 2$, and the (anticipating) stochastic integrals are interpreted as forward integrals. Let $\mathcal{A}_i$ denote a given set of admissible $\mathbb{H}^i$-adapted controls $u_i$ of player $i$, with values in $A_i \subseteq \mathbb{R}^d$, $d \geq 1$; $i = 1, 2$. Denote $\mathcal{U} = \mathcal{A}_1 \times \mathcal{A}_2$. Then $X(t)$ is $\mathcal{F}_t \vee \sigma(Y_1) \vee \sigma(Y_2)$-adapted, and hence, using the definition of the Donsker delta functional $\delta_{(Y_1,Y_2)}(y_1, y_2)$ of $(Y_1, Y_2)$, we get
(4.4) $X(t) = x(t, Y_1, Y_2) = x(t, y_1, y_2)_{y_1=Y_1, y_2=Y_2} = \int_{\mathbb{R}^2} x(t, y_1, y_2)\delta_{Y_1,Y_2}(y_1, y_2)dy_1 dy_2$
for some $(y_1, y_2)$-parametrized process $x(t, y_1, y_2)$ which is $\mathbb{F}$-adapted for each $y_1, y_2$. Then, again by the definition of the Donsker delta functional and the properties of forward integration, we can write
35 (4.5)
$X(t) = x + \int_0^t b(s, X(s), u(s), Y)ds + \int_0^t \sigma(s, X(s), u(s), Y)d^-B(s) + \int_0^t\!\!\int_{\mathbb{R}} \gamma(s, X(s), u(s), Y, \zeta)\tilde{N}(d^-s, d\zeta)$
$= x + \int_0^t b(s, x(s, Y), u_1(s, Y_1), u_2(s, Y_2), Y_1, Y_2)ds + \int_0^t \sigma(s, x(s, Y_1, Y_2), u_1(s, Y_1), u_2(s, Y_2), Y_1, Y_2)d^-B(s) + \int_0^t\!\!\int_{\mathbb{R}} \gamma(s, x(s, Y_1, Y_2), u_1(s, Y_1), u_2(s, Y_2), Y_1, Y_2, \zeta)\tilde{N}(d^-s, d\zeta)$
$= x + \int_0^t b(s, x(s, y_1, y_2), u_1(s, y_1), u_2(s, y_2), y_1, y_2)_{y_1=Y_1, y_2=Y_2}\,ds + \int_0^t \sigma(s, x(s, y_1, y_2), u_1(s, y_1), u_2(s, y_2), y_1, y_2)_{y_1=Y_1, y_2=Y_2}\,dB(s) + \int_0^t\!\!\int_{\mathbb{R}} \gamma(s, x(s, y), u_1(s, y_1), u_2(s, y_2), y_1, y_2, \zeta)_{y_1=Y_1, y_2=Y_2}\,\tilde{N}(ds, d\zeta)$
36 $= x + \int_0^t\!\!\int_{\mathbb{R}^2} b(s, x(s, y), u_1(s, y_1), u_2(s, y_2), y)\delta_Y(y)dy_1 dy_2\,ds + \int_0^t\!\!\int_{\mathbb{R}^2} \sigma(s, x(s, y), u(s, y), y)\delta_Y(y)dy_1 dy_2\,dB(s) + \int_0^t\!\!\int_{\mathbb{R}}\int_{\mathbb{R}^2} \gamma(s, x(s, y), u(s, y), y, \zeta)\delta_Y(y)dy_1 dy_2\,\tilde{N}(ds, d\zeta)$
$= x + \int_{\mathbb{R}^2} \Big[\int_0^t b(s, x(s, y_1, y_2), u_1(s, y_1), u_2(s, y_2), y_1, y_2)ds + \int_0^t \sigma(s, x(s, y_1, y_2), u_1(s, y_1), u_2(s, y_2), y_1, y_2)dB(s)$
(4.6) $+ \int_0^t\!\!\int_{\mathbb{R}} \gamma(s, x(s, y), u(s, y), y, \zeta)\tilde{N}(ds, d\zeta)\Big]\delta_Y(y_1, y_2)dy_1 dy_2$.
37 Comparing (4.4) and (4.5) we see that (4.4) holds if we choose $x(t, y)$ for each $y = (y_1, y_2)$ as the solution of the classical SDE
(4.7) $dx(t, y_1, y_2) = b(t, x(t, y_1, y_2), u_1(t, y_1), u_2(t, y_2), y_1, y_2)dt + \sigma(t, x(t, y_1, y_2), u_1(t, y_1), u_2(t, y_2), y_1, y_2)dB(t) + \int_{\mathbb{R}} \gamma(t, x(t, y_1, y_2), u_1(t, y_1), u_2(t, y_2), y_1, y_2, \zeta)\tilde{N}(dt, d\zeta)$; $t \geq 0$,
(4.8) $x(0, y) = x \in \mathbb{R}$.
The performance functional $J_i(u)$, $u = (u_1, u_2)$, of player $i$ is defined by
(4.9) $J_i(u) = E\Big[\int_0^T f_i(t, X(t), u_1(t), u_2(t), Y)dt + g_i(X(T), Y)\Big] = E\Big[\int_{\mathbb{R}^2} \Big\{\int_0^T f_i(t, x(t, y), u(t, y), y)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_t]dt + g_i(x(T, y_1, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_T]\Big\}dy_1 dy_2\Big]$; $i = 1, 2$.
38 A Nash equilibrium for the game (4.3)-(4.9) is a pair $\hat{u} = (\hat{u}_1, \hat{u}_2) \in \mathcal{A}_1 \times \mathcal{A}_2$ such that
(4.10) $\sup_{u_1 \in \mathcal{A}_1} J_1(u_1, \hat{u}_2) \leq J_1(\hat{u}_1, \hat{u}_2)$
and
(4.11) $\sup_{u_2 \in \mathcal{A}_2} J_2(\hat{u}_1, u_2) \leq J_2(\hat{u}_1, \hat{u}_2)$.
We can rewrite this problem in terms of a classical (i.e. $\mathbb{F}$-adapted) stochastic differential game with parameter $y = (y_1, y_2)$ as follows:
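A Nash equilibrium is a fixed point of the two players' best responses: neither player can improve by deviating unilaterally. This can be illustrated on a toy static game (not the insider game itself, just the fixed-point idea); all constants are illustrative:

```python
# Toy two-player game: player i maximises J_i(u1, u2) = -(u_i - a_i - k_i*u_j)^2,
# so the best response is u_i = a_i + k_i * u_j, and best-response iteration
# converges to the Nash equilibrium when |k_1 * k_2| < 1.
a1, a2, k1, k2 = 1.0, -0.5, 0.3, 0.4

u1, u2 = 0.0, 0.0
for _ in range(100):
    u1 = a1 + k1 * u2          # best response of player 1 to u2
    u2 = a2 + k2 * u1          # best response of player 2 to u1

# Closed-form fixed point of the linear best-response system:
u1_star = (a1 + k1 * a2) / (1 - k1 * k2)
u2_star = (a2 + k2 * a1) / (1 - k1 * k2)
assert abs(u1 - u1_star) < 1e-9 and abs(u2 - u2_star) < 1e-9
```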
39 For each given $y = (y_1, y_2) \in \mathbb{R}^2$ define
(4.12) $\mathcal{J}_i(u(\cdot, y)) = E\Big[\int_0^T f_i(t, x(t, y), u(t, y), y)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_t]dt + g_i(x(T, y_1, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_T]\Big]$; $i = 1, 2$.
Then we see that
(4.13) $J_i(u) = \int_{\mathbb{R}^2} \mathcal{J}_i(u(\cdot, y_1, y_2))dy_1 dy_2$.
Therefore the problem is, in many situations, reduced to solving a classical stochastic differential game for each $y$.
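The reduction (4.12)-(4.13) can be sketched numerically: for each parameter $y$ solve the classical adapted state equation, weight the running payoff by the conditional density $E[\delta_Y(y)\mid\mathcal{F}_t]$, then integrate over $y$. A one-insider, one-dimensional toy version, where the dynamics, payoff and control are all illustrative, not taken from the talk:

```python
import numpy as np

# Toy instance of (4.12)-(4.13): Y = B(T0) (Gaussian case), one player,
# illustrative state dynamics and payoff.
rng = np.random.default_rng(3)
T0, T, n = 1.2, 1.0, 200
dt = T / n
t_grid = np.arange(n) * dt
ys = np.linspace(-3.0, 3.0, 61)
dy = ys[1] - ys[0]

dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])   # F_t "sees" Y(t) = B(t)

def cond_dens(t, Bt, y):                     # E[delta_Y(y)|F_t], Gaussian case
    v = T0 - t
    return np.exp(-(Bt - y)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

J = 0.0
for y in ys:
    x, Jy = 1.0, 0.0
    for j in range(n):
        u = y - x                            # illustrative y-dependent control
        Jy += -(x - y)**2 * cond_dens(t_grid[j], B[j], y) * dt
        x += u * x * dt + 0.2 * x * dB[j]    # Euler step of the state equation
    J += Jy * dy                             # quadrature over the parameter y
print(J)   # the aggregated performance of (4.13) on this path
```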
40 5. A sufficient maximum principle The problem (4.10)-(4.11) is a stochastic differential game with a standard (albeit parametrized) stochastic differential equation (4.7) for the state process $x(t, y_1, y_2)$, but with a non-standard performance functional given by (4.9). We can solve this problem by a modified maximum principle approach, as follows: Define the Hamiltonians $H_i : [0, T] \times \mathbb{R} \times \mathbb{R}^2 \times \mathcal{U} \times \mathbb{R} \times \mathbb{R} \times \mathcal{R} \times \Omega \to \mathbb{R}$ by
(5.1) $H_i(t, x, y_1, y_2, u_1, u_2, p, q, r) = H_i(t, x, y_1, y_2, u_1, u_2, p, q, r, \omega) = E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_t]f_i(t, x, u_1, u_2, y_1, y_2) + b(t, x, u_1, u_2, y_1, y_2)p + \sigma(t, x, u_1, u_2, y_1, y_2)q + \int_{\mathbb{R}} \gamma(t, x, u_1, u_2, y_1, y_2, \zeta)r(\zeta)\nu(d\zeta)$; $i = 1, 2$.
Here $\mathcal{R}$ denotes the set of all functions $r(\cdot) : \mathbb{R} \to \mathbb{R}$ such that the last integral above converges.
41 For $i = 1, 2$ we define the adjoint processes $p_i(t, y_1, y_2), q_i(t, y_1, y_2), r_i(t, y_1, y_2, \zeta)$ as the solution of the $(y_1, y_2)$-parametrised BSDE
(5.2) $dp_i(t, y_1, y_2) = -\dfrac{\partial H_i}{\partial x}(t, y_1, y_2)dt + q_i(t, y_1, y_2)dB(t) + \int_{\mathbb{R}} r_i(t, y_1, y_2, \zeta)\tilde{N}(dt, d\zeta)$; $0 \leq t \leq T$,
$p_i(T, y_1, y_2) = \dfrac{\partial g_i}{\partial x}(x(T, y_1, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_T]$.
Let $\mathcal{J}_i(u(\cdot, y_1, y_2))$ be defined by
(5.3) $\mathcal{J}_i(u(\cdot, y_1, y_2)) = E\Big[\int_0^T f_i(t, x(t, y_1, y_2), u_1(t, y_1), u_2(t, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_t]dt + g_i(x(T, y_1, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_T]\Big]$.
Then we see that
(5.4) $J_i(u_1, u_2) = \int_{\mathbb{R}^2} \mathcal{J}_i(u(\cdot, y_1, y_2))dy_1 dy_2$.
42 The insider game problem can therefore be written as follows: Problem For each $y_1, y_2$, find $(u_1^*(\cdot, y_1), u_2^*(\cdot, y_2)) \in \mathcal{A}_1 \times \mathcal{A}_2$ such that
(5.5) $\sup_{u_1(\cdot,y_1) \in \mathcal{A}_1} \int_{\mathbb{R}^2} \mathcal{J}_1(u_1(\cdot, y_1), u_2^*(\cdot, y_2))dy_1 dy_2 = \int_{\mathbb{R}^2} \mathcal{J}_1(u_1^*(\cdot, y_1), u_2^*(\cdot, y_2))dy_1 dy_2$
and
(5.6) $\sup_{u_2(\cdot,y_2) \in \mathcal{A}_2} \int_{\mathbb{R}^2} \mathcal{J}_2(u_1^*(\cdot, y_1), u_2(\cdot, y_2))dy_1 dy_2 = \int_{\mathbb{R}^2} \mathcal{J}_2(u_1^*(\cdot, y_1), u_2^*(\cdot, y_2))dy_1 dy_2$.
To study this problem we present two maximum principles for the corresponding games. The first is the following:
43 Theorem [Sufficient maximum principle] Let $(\hat{u}_1, \hat{u}_2) \in \mathcal{A}_1 \times \mathcal{A}_2$ with associated solution $\hat{x}(t, y_1, y_2), \hat{p}_i(t, y_1, y_2), \hat{q}_i(t, y_1, y_2), \hat{r}_i(t, y_1, y_2, \zeta)$ of (4.7) and (5.2); $i = 1, 2$. Assume that the following hold:
$x \mapsto g_i(x)$ is concave; $i = 1, 2$.
The functions
(5.7) $\hat{H}_1(x) = \sup_{u_1 \in A_1} \int_{\mathbb{R}} H_1(t, x, y, u_1, \hat{u}_2(t, y_2), \hat{p}_1(t, y), \hat{q}_1(t, y), \hat{r}_1(t, y))dy_2$
and
(5.8) $\hat{H}_2(x) = \sup_{u_2 \in A_2} \int_{\mathbb{R}} H_2(t, x, y, \hat{u}_1(t, y_1), u_2, \hat{p}_2(t, y), \hat{q}_2(t, y), \hat{r}_2(t, y))dy_1$
are concave for all $t, y_1, y_2$.
44 Moreover, assume that
(5.9) $\sup_{u_1 \in A_1} \int_{\mathbb{R}} H_1\big(t, \hat{x}(t, y), u_1, \hat{u}_2(t, y_2), \hat{p}_1(t, y), \hat{q}_1(t, y), \hat{r}_1(t, y)\big)dy_2 = \int_{\mathbb{R}} H_1\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), \hat{u}_2(t, y_2), \hat{p}_1(t, y), \hat{q}_1(t, y), \hat{r}_1(t, y)\big)dy_2$ for all $t, y_1$,
and
(5.10) $\sup_{u_2 \in A_2} \int_{\mathbb{R}} H_2\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), u_2, \hat{p}_2(t, y), \hat{q}_2(t, y), \hat{r}_2(t, y)\big)dy_1 = \int_{\mathbb{R}} H_2\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), \hat{u}_2(t, y_2), \hat{p}_2(t, y), \hat{q}_2(t, y), \hat{r}_2(t, y)\big)dy_1$ for all $t, y_2$.
Then $(u_1^*(\cdot, y_1), u_2^*(\cdot, y_2)) := (\hat{u}_1(\cdot, y_1), \hat{u}_2(\cdot, y_2))$ is a Nash equilibrium for the problem (5.5)-(5.6).
45 Theorem [Sufficient maximum principle with only one insider] Suppose $Y_2 = 0$, i.e. player number 2 has no inside information. Let $(\hat{u}_1, \hat{u}_2) \in \mathcal{A}_1 \times \mathcal{A}_2$ with associated solution $\hat{x}(t, y_1), \hat{p}_i(t, y_1), \hat{q}_i(t, y_1), \hat{r}_i(t, y_1, \zeta)$ of (4.7) and (5.2); $i = 1, 2$. Assume that the following hold:
$x \mapsto g_i(x)$ is concave; $i = 1, 2$.
(The Arrow conditions.) The functions
(5.11) $\hat{H}_1(x) = \sup_{u_1 \in A_1} H_1(t, x, y_1, u_1, \hat{u}_2(t), \hat{p}_1(t, y_1), \hat{q}_1(t, y_1), \hat{r}_1(t, y_1, \cdot))$
and
(5.12) $\hat{H}_2(x) = \sup_{u_2 \in A_2} \int_{\mathbb{R}} H_2(t, x, y_1, \hat{u}_1(t, y_1), u_2, \hat{p}_2(t, y_1), \hat{q}_2(t, y_1), \hat{r}_2(t, y_1))dy_1$
are concave for all $t$.
46 Moreover, assume that
1. $\sup_{u_1 \in A_1} H_1\big(t, \hat{x}(t, y_1), u_1, \hat{u}_2(t), \hat{p}_1(t, y_1), \hat{q}_1(t, y_1), \hat{r}_1(t, y_1, \cdot)\big) = H_1\big(t, \hat{x}(t, y_1), \hat{u}_1(t, y_1), \hat{u}_2(t), \hat{p}_1(t, y_1), \hat{q}_1(t, y_1), \hat{r}_1(t, y_1, \cdot)\big)$ (5.13) for all $t$.
2. $\sup_{u_2 \in A_2} \int_{\mathbb{R}} H_2\big(t, \hat{x}(t, y_1), \hat{u}_1(t, y_1), u_2, \hat{p}_2(t, y_1), \hat{q}_2(t, y_1), \hat{r}_2(t, y_1)\big)dy_1 = \int_{\mathbb{R}} H_2\big(t, \hat{x}(t, y_1), \hat{u}_1(t, y_1), \hat{u}_2(t), \hat{p}_2(t, y_1), \hat{q}_2(t, y_1), \hat{r}_2(t, y_1)\big)dy_1$ (5.14) for all $t$.
Then $(u_1^*(\cdot, y_1), u_2^*(\cdot)) := (\hat{u}_1(\cdot, y_1), \hat{u}_2(\cdot))$ is a Nash equilibrium for the problem (5.5)-(5.6).
47 6. A necessary maximum principle We proceed to establish a corresponding necessary maximum principle. For this, we do not need concavity conditions, but instead we need the following assumptions about the set of admissible control values:
A1. For all $t_0 \in [0, T]$, $y_i$ and all bounded $\mathcal{F}_{t_0}$-measurable random variables $\alpha_i(y_i, \omega)$, the control $\theta_i(t, y_i, \omega) := \mathbf{1}_{[t_0,T]}(t)\alpha_i(y_i, \omega)$ belongs to $\mathcal{A}_i$ for $i = 1, 2$.
A2. For all $u_i, \beta_i^0 \in \mathcal{A}_i$ with $\beta_i^0(t, y_i) \leq K < \infty$ for all $t, y_i$, define
(6.1) $\delta_i(t, y_i) = \dfrac{1}{2K}\operatorname{dist}(u_i(t, y_i), \partial A_i) \wedge 1 > 0$
and put
(6.2) $\beta_i(t, y_i) = \delta_i(t, y_i)\beta_i^0(t, y_i)$.
Then the control $\tilde{u}_i(t, y_i) = u_i(t, y_i) + a\beta_i(t, y_i)$; $t \in [0, T]$, belongs to $\mathcal{A}_i$ for all $a \in (-1, 1)$, $i = 1, 2$.
48 A3. For all $\beta_i$ as in (6.2) the derivative processes
$\chi_1(t, y_1, y_2) := \frac{d}{da} x^{(u_1+a\beta_1, u_2)}(t, y_1, y_2)\big|_{a=0}$ and $\chi_2(t, y_1, y_2) := \frac{d}{da} x^{(u_1, u_2+a\beta_2)}(t, y_1, y_2)\big|_{a=0}$
exist and belong to $L^2(\lambda \times P)$, and
(6.3) $d\chi_1(t, y_1, y_2) = \big[\frac{\partial b}{\partial x}(t, y_1, y_2)\chi_1(t, y_1, y_2) + \frac{\partial b}{\partial u_1}(t, y)\beta_1(t, y_1)\big]dt + \big[\frac{\partial \sigma}{\partial x}(t, y_1, y_2)\chi_1(t, y) + \frac{\partial \sigma}{\partial u_1}(t, y_1, y_2)\beta_1(t, y_1)\big]dB(t) + \int_{\mathbb{R}} \big[\frac{\partial \gamma}{\partial x}(t, y_1, y_2, \zeta)\chi_1(t, y_1, y_2) + \frac{\partial \gamma}{\partial u_1}(t, y_1, y_2, \zeta)\beta_1(t, y_1)\big]\tilde{N}(dt, d\zeta)$; $\chi_1(0, y_1, y_2) = 0$,
and
(6.4) $d\chi_2(t, y) = \big[\frac{\partial b}{\partial x}(t, y_1, y_2)\chi_2(t, y_1, y_2) + \frac{\partial b}{\partial u_2}(t, y_1, y_2)\beta_2(t, y_2)\big]dt + \big[\frac{\partial \sigma}{\partial x}(t, y_1, y_2)\chi_2(t, y_1, y_2) + \frac{\partial \sigma}{\partial u_2}(t, y_1, y_2)\beta_2(t, y_2)\big]dB(t) + \int_{\mathbb{R}} \big[\frac{\partial \gamma}{\partial x}(t, y_1, y_2, \zeta)\chi_2(t, y_1, y_2) + \frac{\partial \gamma}{\partial u_2}(t, y_1, y_2, \zeta)\beta_2(t, y_2)\big]\tilde{N}(dt, d\zeta)$; $\chi_2(0, y_1, y_2) = 0$.
49 Theorem [Necessary maximum principle] Let $(u_1, u_2) \in \mathcal{A}_1 \times \mathcal{A}_2$. Then the following are equivalent:
1. $\frac{d}{da} \int_{\mathbb{R}^2} \mathcal{J}_1(u_1 + a\beta_1, u_2)\big|_{a=0}\,dy_1 dy_2 = \frac{d}{da} \int_{\mathbb{R}^2} \mathcal{J}_2(u_1, u_2 + a\beta_2)\big|_{a=0}\,dy_1 dy_2 = 0$ for all bounded $\beta_i \in \mathcal{A}_i$ of the form (6.2).
2. (6.5) $\int_{\mathbb{R}} \dfrac{\partial H_1}{\partial v_1}(t, x(t, y), v_1, u_2(t, y_2), p_1(t, y), q_1(t, y), r_1(t, y))\Big|_{v_1=u_1(t,y_1)}\,dy_2 = \int_{\mathbb{R}} \dfrac{\partial H_2}{\partial v_2}(t, x(t, y), u_1(t, y_1), v_2, p_2(t, y), q_2(t, y), r_2(t, y))\Big|_{v_2=u_2(t,y_2)}\,dy_1 = 0$; $t \in [0, T]$.
50 7. The zero-sum case In the zero-sum case we have
(7.1) $\int_{\mathbb{R}^2} \{\mathcal{J}_1(u_1(\cdot, y_1), u_2(\cdot, y_2)) + \mathcal{J}_2(u_1(\cdot, y_1), u_2(\cdot, y_2))\}dy_1 dy_2 = 0$.
Then the Nash equilibrium $(\hat{u}_1(\cdot, y_1), \hat{u}_2(\cdot, y_2)) \in \mathcal{A}_1 \times \mathcal{A}_2$ satisfying (5.5)-(5.6) becomes a saddle point for
(7.2) $\int_{\mathbb{R}^2} \mathcal{J}(u_1(\cdot, y_1), u_2(\cdot, y_2))dy_1 dy_2 := \int_{\mathbb{R}^2} \mathcal{J}_1(u_1(\cdot, y_1), u_2(\cdot, y_2))dy_1 dy_2$.
51 Hence we want to find $(\hat{u}_1(\cdot, y_1), \hat{u}_2(\cdot, y_2)) \in \mathcal{A}_1 \times \mathcal{A}_2$ such that
(7.3) $\sup_{u_1 \in \mathcal{A}_1} \inf_{u_2 \in \mathcal{A}_2} \int_{\mathbb{R}^2} \mathcal{J}(u_1(\cdot, y_1), u_2(\cdot, y_2))dy_1 dy_2 = \inf_{u_2 \in \mathcal{A}_2} \sup_{u_1 \in \mathcal{A}_1} \int_{\mathbb{R}^2} \mathcal{J}(u_1(\cdot, y_1), u_2(\cdot, y_2))dy_1 dy_2 = \int_{\mathbb{R}^2} \mathcal{J}(\hat{u}_1(\cdot, y_1), \hat{u}_2(\cdot, y_2))dy_1 dy_2$,
52 where
(7.4) $\int_{\mathbb{R}^2} \mathcal{J}(u(\cdot, y_1, y_2))dy_1 dy_2 = \int_{\mathbb{R}^2} E\Big[\int_0^T f(t, x(t, y), u_1(t, y_1), u_2(t, y_2), y)E[\delta_Y(y) \mid \mathcal{F}_t]dt + g(x(T, y), y)E[\delta_Y(y) \mid \mathcal{F}_T]\Big]dy_1 dy_2$.
53 One of the players is maximising and the other is minimising the performance functional. Let us now look at the problem with one performance functional common to both players, but where one of the players is maximising and the other is minimising it. Then we get just one Hamiltonian and just one BSDE, which is simpler to deal with. In this case the Hamiltonian $H$ is given by:
(7.5) $H(t, x, y_1, y_2, u_1, u_2, p, q, r) = H(t, x, y_1, y_2, u_1, u_2, p, q, r, \omega) = E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_t]f(t, x, u_1, u_2, y_1, y_2) + b(t, x, u_1, u_2, y_1, y_2)p + \sigma(t, x, u_1, u_2, y_1, y_2)q + \int_{\mathbb{R}} \gamma(t, x, u_1, u_2, y_1, y_2, \zeta)r(\zeta)\nu(d\zeta)$.
54 Moreover, there is only one triple $(p, q, r)$ of adjoint processes, given by the BSDE
(7.6) $dp(t, y_1, y_2) = -\dfrac{\partial H}{\partial x}(t, y_1, y_2)dt + q(t, y_1, y_2)dB(t) + \int_{\mathbb{R}} r(t, y_1, y_2, \zeta)\tilde{N}(dt, d\zeta)$; $0 \leq t \leq T$,
$p(T, y) = g'(x(T, y_1, y_2), y_1, y_2)E[\delta_{Y_1,Y_2}(y_1, y_2) \mid \mathcal{F}_T]$.
We can now state the corresponding sufficient maximum principle for the zero-sum game:
55 Theorem (Sufficient maximum principle for the zero-sum game) Let $(\hat{u}_1, \hat{u}_2) \in \mathcal{A}_1 \times \mathcal{A}_2$ with associated solution $\hat{x}(t, y_1, y_2), \hat{p}(t, y_1, y_2), \hat{q}(t, y_1, y_2), \hat{r}(t, y_1, y_2, \zeta)$ of (4.7) and (7.6). Assume that the following hold:
1. The function $x \mapsto g(x)$ is affine.
2. (7.7) $\sup_{u_1 \in A_1} \int_{\mathbb{R}} H\big(t, \hat{x}(t, y), u_1, \hat{u}_2(t, y_2), \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y)\big)dy_2 = \int_{\mathbb{R}} H\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), \hat{u}_2(t, y_2), \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y)\big)dy_2$ for all $t, y_1$.
56 3. (7.8) $\inf_{u_2 \in A_2} \int_{\mathbb{R}} H\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), u_2, \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y)\big)dy_1 = \int_{\mathbb{R}} H\big(t, \hat{x}(t, y), \hat{u}_1(t, y_1), \hat{u}_2(t, y_2), \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y)\big)dy_1$ for all $t, y_2$.
4. The function
(7.9) $\hat{H}(x) = \sup_{u_1 \in A_1} \int_{\mathbb{R}} H(t, x, y, u_1, \hat{u}_2(t, y_2), \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y, \cdot))dy_2$
is concave for all $t, y_1$,
57 and the function
(7.10) $\check{H}(x) = \inf_{u_2 \in A_2} \int_{\mathbb{R}} H(t, x, y, \hat{u}_1(t, y_1), u_2, \hat{p}(t, y), \hat{q}(t, y), \hat{r}(t, y, \cdot))dy_1$
is convex for all $t, y_2$. Then $\hat{u}(t, y_1, y_2) = (\hat{u}_1(t, y_1), \hat{u}_2(t, y_2))$ is a saddle point for $J(u_1, u_2)$.
58 Let us now state the corresponding necessary maximum principle for the zero-sum game problem: Theorem [Necessary maximum principle for zero-sum games] Assume the conditions of the previous Theorem hold. Then the following are equivalent:
1. $\frac{d}{da} \int_{\mathbb{R}^2} \mathcal{J}(u_1 + a\beta_1, u_2)\big|_{a=0}\,dy_1 dy_2 = \frac{d}{da} \int_{\mathbb{R}^2} \mathcal{J}(u_1, u_2 + a\beta_2)\big|_{a=0}\,dy_1 dy_2 = 0$ for all bounded $\beta_i \in \mathcal{A}_i$ of the form (6.2).
2. (7.11) $\int_{\mathbb{R}} \dfrac{\partial H}{\partial v_1}(t, x(t, y), v_1, u_2(t, y_2), p(t, y), q(t, y), r(t, y))\Big|_{v_1=u_1(t,y_1)}\,dy_2 = \int_{\mathbb{R}} \dfrac{\partial H}{\partial v_2}(t, x(t, y), u_1(t, y_1), v_2, p(t, y), q(t, y), r(t, y))\Big|_{v_2=u_2(t,y_2)}\,dy_1 = 0$; $t \in [0, T]$.
59 8. Application 1: Optimal consumption for an insider under model uncertainty Suppose we have a cash flow with consumption, modelled by the process $X(t, Y) = X^{c,\mu}(t, Y)$ defined by:
$dX(t, Y) = (\alpha(t, Y) + \mu(t, Y_2) - c(t, Y_1))X(t, Y)dt + \beta(t, Y)X(t, Y)d^-B(t) + \int_{\mathbb{R}} \gamma(t, Y, \zeta)X(t, Y)\tilde{N}(d^-t, d\zeta)$; $X(0) = x > 0$.
Here $\alpha(t, Y), \beta(t, Y), \gamma(t, Y, \zeta)$ are given coefficients, while $c(t, Y_1) > 0$ is the relative consumption rate chosen by the consumer (player number 1) and $\mu(t, Y_2)$ is a perturbation of the drift term, representing the model uncertainty chosen by the environment (player number 2).
60 Define the performance functional by
(8.1) $J(c, \mu) = E\Big[\int_0^T \big\{\log(c(t)X(t)) + \tfrac{1}{2}\mu^2(t)\big\}dt + \theta \log X(T)\Big]$,
where $\theta > 0$ is a given constant and $\tfrac{1}{2}\mu^2(t)$ represents a penalty rate, penalizing $\mu$ for being away from 0. We assume that $c$ is $\mathbb{H}^1$-adapted, while $\mu$ is $\mathbb{H}^2$-adapted. We want to find $c^* \in \mathcal{A}_1$ and $\mu^* \in \mathcal{A}_2$ such that
(8.2) $\sup_{c \in \mathcal{A}_1} \inf_{\mu \in \mathcal{A}_2} J(c, \mu) = J(c^*, \mu^*)$.
61 As before we rewrite this problem as a classical stochastic differential game with two parameters $y_1, y_2$. Thus we define, for $y = (y_1, y_2)$,
(8.3) $dx(t, y) = (\alpha(t, y) + \mu(t, y_2) - c(t, y_1))x(t, y)dt + \beta(t, y)x(t, y)dB(t) + \int_{\mathbb{R}} \gamma(t, y, \zeta)x(t, y)\tilde{N}(dt, d\zeta)$; $x(0, y) = x > 0$,
and
(8.4) $\mathcal{J}(c(\cdot, y_1), \mu(\cdot, y_2)) = E\Big[\int_0^T \big\{\log(c(t, y_1)x(t, y)) + \tfrac{1}{2}\mu^2(t, y_2)\big\}E[\delta_Y(y) \mid \mathcal{F}_t]dt + \theta \log x(T, y)E[\delta_Y(y) \mid \mathcal{F}_T]\Big]$.
62 The Hamiltonian for this problem is
(8.5) $H(t, x, y, c, \mu, p, q, r) = \big\{\log(cx) + \tfrac{1}{2}\mu^2\big\}E[\delta_Y(y) \mid \mathcal{F}_t] + (\alpha(t, y) + \mu - c)xp + \beta(t, y)xq + x\int_{\mathbb{R}} \gamma(t, y, \zeta)r(\zeta)\nu(d\zeta)$,
and the BSDE for the adjoint processes $p, q, r$ is
$dp(t, y) = -\Big[\dfrac{1}{x(t,y)}E[\delta_Y(y) \mid \mathcal{F}_t] + (\alpha(t, y) + \mu(t, y_2) - c(t, y_1))p(t, y) + \beta(t, y)q(t, y) + \int_{\mathbb{R}} \gamma(t, y, \zeta)r(\zeta)\nu(d\zeta)\Big]dt + q(t, y)dB(t) + \int_{\mathbb{R}} r(t, y, \zeta)\tilde{N}(dt, d\zeta)$; $0 \leq t \leq T$,
$p(T, y) = \dfrac{\theta}{x(T,y)}E[\delta_Y(y) \mid \mathcal{F}_T]$.
Define
(8.6) $h(t, y) = p(t, y)x(t, y)$.
63 Then by the Itô formula we get
(8.7) $dh(t, y) = x(t, y)\Big[-\dfrac{1}{x(t, y)}E[\delta_Y(y) \mid \mathcal{F}_t] - (\alpha(t, y) + \mu(t, y_2) - c(t, y_1))p(t, y) - \beta(t, y)q(t, y) - \int_{\mathbb{R}} \gamma(t, y, \zeta)r(t, \zeta)\nu(d\zeta)\Big]dt + p(t, y)(\alpha(t, y) + \mu(t, y_2) - c(t, y_1))x(t, y)dt + p(t, y)\beta(t, y)x(t, y)dB(t) + q(t, y)\beta(t, y)x(t, y)dt + \int_{\mathbb{R}} \big[(x(t, y) + \gamma(t, y, \zeta)x(t, y))(p(t, y) + r(t, y, \zeta)) - p(t, y)x(t, y) - p(t, y)\gamma(t, y, \zeta)x(t, y) - r(t, y, \zeta)x(t, y)\big]\nu(d\zeta)dt + \int_{\mathbb{R}} \big[(x(t, y) + \gamma(t, y, \zeta)x(t, y))(p(t, y) + r(t, y, \zeta)) - p(t, y)x(t, y)\big]\tilde{N}(dt, d\zeta)$
(8.8) $= dF(t, y) + h(t, y)\beta(t, y)dB(t) + h(t, y)\int_{\mathbb{R}} \gamma(t, y, \zeta)\tilde{N}(dt, d\zeta)$,
64 where
(8.9) $dF(t, y) = -E[\delta_Y(y) \mid \mathcal{F}_t]dt + x(t, y)q(t, y)dB(t) + x(t, y)\int_{\mathbb{R}} r(t, y, \zeta)(1 + \gamma(t, y, \zeta))\tilde{N}(dt, d\zeta)$.
To simplify this, we define the process $k(t, y)$ by the equation
(8.10) $dk(t, y) = k(t, y)\Big[b(t, y)dB(t) + \int_{\mathbb{R}} c(t, y, \zeta)\tilde{N}(dt, d\zeta)\Big]$
for suitable processes $b, c$ (to be determined).
65 Then again by the Itô formula we get
(8.11) $d(h(t, y)k(t, y)) = h(t, y)k(t, y)\Big[b(t, y)dB(t) + \int_{\mathbb{R}} c(t, y, \zeta)\tilde{N}(dt, d\zeta)\Big] + k(t, y)\Big[dF(t, y) + h(t, y)\beta(t, y)dB(t) + h(t, y)\int_{\mathbb{R}} \gamma(t, y, \zeta)\tilde{N}(dt, d\zeta)\Big] + (h(t, y)\beta(t, y) + x(t, y)q(t, y))k(t, y)b(t, y)dt + \int_{\mathbb{R}} \big(h(t, y)\gamma(t, y, \zeta) + x(t, y)r(t, y, \zeta)(1 + \gamma(t, y, \zeta))\big)k(t, y)c(t, y, \zeta)\tilde{N}(dt, d\zeta) + \int_{\mathbb{R}} \big(h(t, y)\gamma(t, y, \zeta) + x(t, y)r(t, y, \zeta)(1 + \gamma(t, y, \zeta))\big)k(t, y)c(t, y, \zeta)\nu(d\zeta)dt$.
66 Define
(8.12) $u(t, y) := h(t, y)k(t, y)$.
Then the equation above can be written
(8.13) $du(t, y) = u(t, y)\Big[\int_{\mathbb{R}} \gamma(t, y, \zeta)c(t, y, \zeta)\nu(d\zeta)dt + \{\beta(t, y) + b(t, y)\}dB(t) + \beta(t, y)b(t, y)dt + \int_{\mathbb{R}} \{c(t, y, \zeta) + \gamma(t, y, \zeta) + c(t, y, \zeta)\gamma(t, y, \zeta)\}\tilde{N}(dt, d\zeta)\Big] + k(t, y)\Big[dF(t, y) + x(t, y)q(t, y)b(t, y)dt + \int_{\mathbb{R}} x(t, y)r(t, y, \zeta)c(t, y, \zeta)(1 + \gamma(t, y, \zeta))\nu(d\zeta)dt + \int_{\mathbb{R}} x(t, y)r(t, y, \zeta)c(t, y, \zeta)(1 + \gamma(t, y, \zeta))\tilde{N}(dt, d\zeta)\Big]$.
67 Choose
(8.14) $b(t, y) := -\beta(t, y)$, $c(t, y, \zeta) := -\dfrac{\gamma(t, y, \zeta)}{1 + \gamma(t, y, \zeta)}$.
Then (8.13) reduces to
(8.15) $du(t, y) = f(t, y)dt + k(t, y)x(t, y)q(t, y)dB(t) + \int_{\mathbb{R}} x(t, y)r(t, y, \zeta)(1 + \gamma(t, y, \zeta))\big[k(t, y) + k(t, y)c(t, y, \zeta)\big]\tilde{N}(dt, d\zeta)$,
68 where
(8.16) $f(t, y) = -k(t, y)E[\delta_Y(y) \mid \mathcal{F}_t] + u(t, y)\Big[\int_{\mathbb{R}} \gamma(t, y, \zeta)c(t, y, \zeta)\nu(d\zeta) + \beta(t, y)b(t, y)\Big] + k(t, y)x(t, y)q(t, y)b(t, y) + k(t, y)\int_{\mathbb{R}} x(t, y)r(t, y, \zeta)c(t, y, \zeta)(1 + \gamma(t, y, \zeta))\nu(d\zeta)$.
69 Now define
(8.17) $v(t, y) := k(t, y)x(t, y)q(t, y)$, $w(t, y, \zeta) := k(t, y)x(t, y)r(t, y, \zeta)$.
Then from (8.14)-(8.16) we get the following BSDE in the unknowns $u, v, w$:
(8.18) $du(t, y) = -\Big(k(t, y)E[\delta_Y(y) \mid \mathcal{F}_t] + u(t, y)\Big[\int_{\mathbb{R}} \dfrac{\gamma^2(t, y, \zeta)}{1 + \gamma(t, y, \zeta)}\nu(d\zeta) + \beta^2(t, y)\Big] + \beta(t, y)v(t, y) + \int_{\mathbb{R}} \gamma(t, y, \zeta)w(t, y, \zeta)\nu(d\zeta)\Big)dt + v(t, y)dB(t) + \int_{\mathbb{R}} w(t, y, \zeta)\tilde{N}(dt, d\zeta)$; $0 \leq t \leq T$,
$u(T, y) = \theta k(T, y)E[\delta_Y(y) \mid \mathcal{F}_T]$.
70 This is a linear BSDE which has a unique solution (u(t, y), v(t, y), w(t, y, ζ)), with u(t, y) = p(t, y)x(t, y)k(t, y). In particular, we may regard
(8.19) p(t, y)x(t, y) = u(t, y)/k(t, y)
as known.
71 Maximizing ∫_R H dy_2 with respect to c gives the first order equation
(8.20) ∫_R { (1/c(t, y_1)) E[δ_Y(y)|F_t] − x(t, y)p(t, y) } dy_2 = 0,
i.e.,
(8.21) c(t, y_1) = ĉ(t, y_1) = ∫_R E[δ_Y(y)|F_t]dy_2 / ∫_R x(t, y)p(t, y)dy_2.
Minimizing ∫_R H dy_1 with respect to µ gives the first order equation
(8.22) ∫_R { µ(t, y_2)E[δ_Y(y)|F_t] + x(t, y)p(t, y) } dy_1 = 0,
i.e.,
(8.23) µ(t, y_2) = µ̂(t, y_2) = −∫_R x(t, y)p(t, y)dy_1 / ∫_R E[δ_Y(y)|F_t]dy_1.
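The first order equation (8.20) simply maximizes the strictly concave map c ↦ log(c)·E[δ_Y(y)|F_t] − c·x·p pointwise, which yields the ratio in (8.21). A numerical spot check of this pointwise optimization, with arbitrary positive constants standing in for E[δ_Y(y)|F_t] and x·p (illustrative values, not model quantities):

```python
import math

def objective(c, E=0.7, xp=2.0):
    """Pointwise integrand whose maximizer gives (8.21): log(c)*E - c*(x*p).
    E and xp are arbitrary positive placeholders for this check."""
    return math.log(c) * E - c * xp

c_hat = 0.7 / 2.0  # pointwise analogue of (8.21): c_hat = E / (x*p)

# c_hat beats nearby candidate consumption rates
for c in (0.1, 0.2, 0.3, 0.4, 0.5, 1.0):
    assert objective(c_hat) >= objective(c)

print("c_hat =", c_hat)  # → c_hat = 0.35
```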
72 We can now verify that (ĉ, µ̂) satisfies all the conditions of the sufficient maximum principle, and hence we conclude the following:
Theorem (Optimal consumption for an insider under model uncertainty)
The solution (c*, µ*) of the stochastic differential game (8.2) is given by
(8.24) c*(t, Y_1) = [ ∫_R E[δ_Y(y)|F_t]dy_2 / ∫_R x(t, y)p(t, y)dy_2 ]_{y_1 = Y_1}
and
(8.25) µ*(t, Y_2) = −[ ∫_R x(t, y)p(t, y)dy_1 / ∫_R E[δ_Y(y)|F_t]dy_1 ]_{y_2 = Y_2},
where h(t, y) = x(t, y)p(t, y) is given by (8.18)–(8.19).
73 8.2 Optimal portfolio for an insider under model uncertainty
Consider a financial market with two investment possibilities:
(i) A risk free investment possibility, with unit price S_0(t) = 1 for all t ∈ [0, T].
(ii) A risky investment, where the unit price S(t) = S(t, Y) is modelled by the (forward) SDE
(8.26) dS(t, Y) = S(t, Y)[(α(t, Y) + µ(t))dt + β(t, Y)dB(t)]; S(0) > 0.
Here α(t, Y) and β(t, Y) ≠ 0 are given H-adapted coefficients, while µ(t) is a perturbation of the drift term, representing the model uncertainty chosen by the environment (player number 2).
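The dynamics (8.26) can be simulated with a plain Euler–Maruyama scheme. The sketch below uses constant α, β and a constant perturbation µ (illustrative assumptions — in the model they are H-adapted processes) and omits jumps, since (8.26) is driven by Brownian motion only:

```python
import math, random

def simulate_price(alpha=0.05, mu=-0.02, beta=0.2, s0=100.0, T=1.0, n=250, seed=1):
    """Euler-Maruyama for dS = S[(alpha + mu) dt + beta dB], cf. (8.26).
    Constant coefficients are an illustrative simplification."""
    random.seed(seed)
    dt = T / n
    s = s0
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))
        s += s * ((alpha + mu) * dt + beta * dB)
    return s

print(round(simulate_price(), 2))
```

With β = 0 the scheme is deterministic, which gives a cheap correctness check: one step of length T = 1 from S(0) = 100 with α + µ = 0.03 returns exactly 103.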
74 Suppose the wealth process X(t, Y) = X^{π,µ}(t, Y) associated to an insider portfolio π(t, Y) (representing the fraction of the wealth invested in the risky asset) is given by:
(8.27) dX(t, Y) = π(t, Y)X(t, Y)[(α(t, Y) + µ(t))dt + β(t, Y)dB(t)]; X(0) = x > 0.
Define the performance functional by
(8.28) J(π, µ) = E[ ∫_0^T ½µ²(t)dt + θ log X(T) ],
where θ > 0 is a given constant and ½µ²(t) represents a penalty rate, penalizing µ for being away from 0. We assume that π is H-adapted, while µ is F-adapted, i.e. has no inside information. We want to find π* ∈ A_1 and µ* ∈ A_2 such that
(8.29) sup_{π ∈ A_1} inf_{µ ∈ A_2} J(π, µ) = J(π*, µ*).
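For a fixed (non-insider, constant) portfolio fraction π and constant perturbation µ, the performance (8.28) can be estimated by Monte Carlo. This is only a sanity check of the functional, not the insider strategy; all parameter values below are illustrative assumptions:

```python
import math, random

def estimate_J(pi=0.5, mu=-0.02, alpha=0.05, beta=0.2, theta=1.0,
               x0=1.0, T=1.0, n=100, n_paths=2000, seed=7):
    """Monte Carlo estimate of J(pi, mu) = E[ int_0^T mu^2/2 dt + theta*log X(T) ],
    cf. (8.28), with dX = pi*X[(alpha+mu)dt + beta dB] discretized by Euler-Maruyama.
    Constant pi and mu are illustrative simplifications."""
    random.seed(seed)
    dt = T / n
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            dB = random.gauss(0.0, math.sqrt(dt))
            x += pi * x * ((alpha + mu) * dt + beta * dB)
        # constant mu: the penalty integral is mu^2/2 * T
        total += 0.5 * mu * mu * T + theta * math.log(x)
    return total / n_paths

print(round(estimate_J(), 4))
```

Setting β = 0 makes the estimate deterministic, so it can be checked against the closed-form value 0.5µ²T + θ log(x₀(1 + π(α + µ)Δt)ⁿ).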
75 We rewrite this problem as a classical stochastic differential game with one parameter y_1. Thus we define, setting y = y_1,
(8.30) dx(t, y) = π(t, y)x(t, y)[{α(t, y) + µ(t)}dt + β(t, y)dB(t)]; x(0, y) = x(y) > 0,
and
(8.31) J(π(·, y), µ(·)) = E[ ∫_0^T ½µ²(t)E[δ_Y(y)|F_t]dt + θ log x(T, y)E[δ_Y(y)|F_T] ].
76 The Hamiltonian for this problem is
(8.32) H(t, x, y, π, µ, p, q) = ½µ²E[δ_Y(y)|F_t] + πx(α(t, y) + µ)p + πxβ(t, y)q,
and the BSDE for the adjoint processes p, q is
dp_t(y) = −[π_t(y){(α_t(y) + µ_t)p_t(y) + β_t(y)q_t(y)}]dt + q_t(y)dB(t); p_T(y) = θE[δ_Y(y)|F_T]/x_T(y).
77 Maximizing H with respect to π gives the first order equation
(8.33) x(t, y)[(α(t, y) + µ(t))p(t, y) + β(t, y)q(t, y)] = 0.
Since x(t, y) > 0 and β(t, y) ≠ 0, we deduce that
(8.34) (α(t, y) + µ(t))p(t, y) + β(t, y)q(t, y) = 0
and
(8.35) q(t, y) = −((α(t, y) + µ(t))/β(t, y)) p(t, y).
Hence the adjoint BSDE reduces to
(8.36) dp(t, y) = −((α(t, y) + µ(t))/β(t, y)) p(t, y)dB(t); p(T, y) = θE[δ_Y(y)|F_T]/x(T, y).
78 Define
(8.37) h(t, y) = p(t, y)x(t, y).
Then by the Itô formula we get
(8.38) dh(t, y) = (π(t, y)β(t, y) − (α(t, y) + µ(t))/β(t, y)) h(t, y)dB(t); h(T, y) = p(T, y)x(T, y) = θE[δ_Y(y)|F_T].
This BSDE has the solution
(8.39) h(t, y) = θE[δ_Y(y)|F_t].
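For the classical example Y = B(T_0) — an illustrative choice, not spelled out on this slide — the conditional Donsker delta is the Gaussian kernel E[δ_Y(y)|F_t] = (2π(T_0 − t))^{−1/2} exp(−(y − B(t))²/(2(T_0 − t))) for t < T_0, so (8.39) gives h(t, y) in closed form. A sketch, checking that the kernel integrates to 1 in y:

```python
import math

def donsker_density(y, b_t, t, T0):
    """E[delta_Y(y) | F_t] for the illustrative choice Y = B(T0):
    the Gaussian density with mean B(t) and variance T0 - t (requires t < T0)."""
    var = T0 - t
    return math.exp(-(y - b_t) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def h(y, b_t, t, T0, theta=1.0):
    """h(t, y) = theta * E[delta_Y(y) | F_t], cf. (8.39)."""
    return theta * donsker_density(y, b_t, t, T0)

# the conditional density integrates to 1 in y (Riemann sum over a wide grid)
grid = [i * 0.01 - 10.0 for i in range(2001)]
mass = sum(donsker_density(y, 0.3, 0.5, 1.0) for y in grid) * 0.01
print(round(mass, 3))  # → 1.0
```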
79 Moreover, by the generalized Clark–Ocone formula we have
(8.40) (π(t, y)β(t, y) − (α(t, y) + µ(t))/β(t, y)) h(t, y) = D_t h(t, y) = θE[D_t δ_Y(y)|F_t],
from which we get the following expression for our candidate π̂(t, y) for the optimal portfolio
(8.41) π̂(t, y) = (α(t, y) + µ̂(t))/β²(t, y) + E[D_t δ_Y(y)|F_t]/(β(t, y)E[δ_Y(y)|F_t]),
where µ̂(t) is the corresponding candidate for the optimal perturbation.
80 Minimizing ∫_R H dy with respect to µ gives the following first order equation for the optimal µ̂(t):
(8.42) ∫_R { µ̂(t)E[δ_Y(y)|F_t] + π̂(t, y)x̂(t, y)p̂(t, y) } dy = 0,
i.e., since ∫_R E[δ_Y(y)|F_t]dy = 1 and x̂(t, y)p̂(t, y) = ĥ(t, y) = θE[δ_Y(y)|F_t] by (8.39),
(8.43) µ̂(t) = −∫_R π̂(t, y)x̂(t, y)p̂(t, y)dy = −θ∫_R π̂(t, y)E[δ_Y(y)|F_t]dy.
81 We can now verify that (π̂, µ̂) satisfies all the conditions of the sufficient maximum principle, and hence we conclude the following:
Theorem (Optimal portfolio for an insider under model uncertainty)
The saddle point (π*(t, Y), µ*(t)), where π*(t, Y) = π*(t, y)|_{y=Y}, of the stochastic differential game (8.27) is given by the solution of the following coupled system of equations:
(8.44) π*(t, y) = (α(t, y) + µ*(t))/β²(t, y) + E[D_t δ_Y(y)|F_t]/(β(t, y)E[δ_Y(y)|F_t])
and
(8.45) µ*(t) = −θ∫_R π*(t, y)E[δ_Y(y)|F_t]dy.
Remark. This result is an extension to insider trading of a result in Sulem & Ø. (2014) [ØS4].
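Under the illustrative assumptions Y = B(T_0) and constant α, β, the coupled system above decouples: the insider term (y − B(t))/(β(T_0 − t)) integrates to zero against the Gaussian density E[δ_Y(y)|F_t] (whose mean is B(t)), and since x̂p̂ = θE[δ_Y(y)|F_t] by (8.39), µ* solves the scalar equation µ = −θ(α + µ)/β², i.e. µ* = −θα/(β² + θ). A sketch (parameter values are assumptions, not from the talk):

```python
def saddle_point(y, b_t, t, T0, alpha=0.05, beta=0.2, theta=1.0):
    """Closed-form saddle point for Y = B(T0) with constant alpha, beta:
    the insider term integrates to zero against the Gaussian density,
    so mu* = -theta*alpha/(beta**2 + theta); pi* then follows from (8.44)."""
    mu_star = -theta * alpha / (beta ** 2 + theta)
    pi_star = (alpha + mu_star) / beta ** 2 + (y - b_t) / (beta * (T0 - t))
    return pi_star, mu_star

pi_s, mu_s = saddle_point(y=0.8, b_t=0.3, t=0.5, T0=1.0)
# check the scalar fixed-point equation mu* = -theta*(alpha + mu*)/beta**2
assert abs(mu_s + 1.0 * (0.05 + mu_s) / 0.2 ** 2) < 1e-12
print(round(mu_s, 4), round(pi_s, 4))  # → -0.0481 5.0481
```

Note that the environment's perturbation shrinks the effective drift α + µ* toward 0, while the insider term in π* is unaffected by µ*.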
82 K. Aase, B. Øksendal, N. Privault and J. Ubøe: White noise generalizations of the Clark–Haussmann–Ocone theorem with application to mathematical finance. Finance Stoch. 4 (2000).
K. Aase, B. Øksendal and J. Ubøe: Using the Donsker delta function to compute hedging strategies. Potential Analysis 14 (2001).
L. Breiman: Probability. Addison-Wesley.
F. Biagini and B. Øksendal: A general stochastic calculus approach to insider trading. Appl. Math. & Optim. 52 (2005).
G. Di Nunno, T. Meyer-Brandis, B. Øksendal and F. Proske: Malliavin calculus and anticipative Itô formulae for Lévy processes. Inf. Dim. Anal. Quantum Prob. Rel. Topics 8 (2005).
83 G. Di Nunno, T. Meyer-Brandis, B. Øksendal and F. Proske: Optimal portfolio for an insider in a market driven by Lévy processes. Quant. Finance 6 (2006).
G. Di Nunno and B. Øksendal: The Donsker delta function, a representation formula for functionals of a Lévy process and application to hedging in incomplete markets. Séminaires et Congrès, Société Mathématique de France, Vol. 16 (2007).
O. Draouil and B. Øksendal: A Donsker delta functional approach to optimal insider control and applications to finance. (April 2015) To appear in Communications in Mathematics and Statistics.
G. Di Nunno and B. Øksendal: A representation theorem and a sensitivity result for functionals of jump diffusions. In A.B. Cruzeiro, H. Ouerdiane and N. Obata (editors): Mathematical Analysis and Random Phenomena. World Scientific 2007.
84 G. Di Nunno, B. Øksendal and F. Proske: Malliavin Calculus for Lévy Processes with Applications to Finance. Universitext, Springer 2009.
H. Holden, B. Øksendal, J. Ubøe and T. Zhang: Stochastic Partial Differential Equations. Universitext, Springer, Second Edition 2010.
A. Lanconelli and F. Proske: On explicit strong solution of Itô-SDEs and the Donsker delta function of a diffusion. Inf. Dim. Anal. Quantum Prob. Rel. Topics 7 (2004).
S. Mataramvura, B. Øksendal and F. Proske: The Donsker delta function of a Lévy process with application to chaos expansion of local time. Ann. Inst. H. Poincaré Prob. Statist. 40 (2004).
T. Meyer-Brandis and F. Proske: On the existence and explicit representability of strong solutions of Lévy noise driven SDEs with irregular coefficients. Commun. Math. Sci. 4 (2006).
85 B. Øksendal and E. Røse: A white noise approach to insider trading. arXiv (2015). To appear in T. Hida and L. Streit (editors): Applications of White Noise Theory. World Scientific.
B. Øksendal and E. Røse: Applications of white noise to mathematical finance. Manuscript, University of Oslo, 5 February 2015.
B. Øksendal and A. Sulem: Applied Stochastic Control of Jump Diffusions. Second Edition. Springer 2007.
B. Øksendal and A. Sulem: Risk minimization in financial markets modeled by Itô-Lévy processes. Afrika Matematika (2014).
B. Øksendal and A. Sulem: A game theoretic approach to martingale measures in incomplete markets. Surveys of Applied and Industrial Mathematics, TVP Publishers, Moscow, 15 (2008).
86 B. Øksendal and A. Sulem: Dynamic robust duality in utility maximization. arXiv (2014).
I. Pikovsky and I. Karatzas: Anticipative portfolio optimization. Adv. Appl. Probab. 28 (1996).
F. Russo and P. Vallois: Forward, backward and symmetric stochastic integration. Probab. Theor. Rel. Fields 93 (1993).
F. Russo and P. Vallois: The generalized covariation process and Itô formula. Stoch. Proc. Appl. 59(4) (1995), 81-104.
F. Russo and P. Vallois: Stochastic calculus with respect to continuous finite quadratic variation processes. Stoch. Stoch. Rep. 70(4) (2000), 1-40.
Stochastic Maximum Principle and Dynamic Convex Duality in Continuous-time Constrained Portfolio Optimization Yusong Li Department of Mathematics Imperial College London Thesis presented for examination
More informationShort-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility
Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,
More informationJump-type Levy Processes
Jump-type Levy Processes Ernst Eberlein Handbook of Financial Time Series Outline Table of contents Probabilistic Structure of Levy Processes Levy process Levy-Ito decomposition Jump part Probabilistic
More informationMDP Algorithms for Portfolio Optimization Problems in pure Jump Markets
MDP Algorithms for Portfolio Optimization Problems in pure Jump Markets Nicole Bäuerle, Ulrich Rieder Abstract We consider the problem of maximizing the expected utility of the terminal wealth of a portfolio
More informationControl of McKean-Vlasov Dynamics versus Mean Field Games
Control of McKean-Vlasov Dynamics versus Mean Field Games René Carmona (a),, François Delarue (b), and Aimé Lachapelle (a), (a) ORFE, Bendheim Center for Finance, Princeton University, Princeton, NJ 8544,
More informationThe Distributions of Stopping Times For Ordinary And Compound Poisson Processes With Non-Linear Boundaries: Applications to Sequential Estimation.
The Distributions of Stopping Times For Ordinary And Compound Poisson Processes With Non-Linear Boundaries: Applications to Sequential Estimation. Binghamton University Department of Mathematical Sciences
More informationRobust Markowitz portfolio selection. ambiguous covariance matrix
under ambiguous covariance matrix University Paris Diderot, LPMA Sorbonne Paris Cité Based on joint work with A. Ismail, Natixis MFO March 2, 2017 Outline Introduction 1 Introduction 2 3 and Sharpe ratio
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationA BAYES FORMULA FOR NON-LINEAR FILTERING WITH GAUSSIAN AND COX NOISE
A BAYES FOMULA FO NON-LINEA FILTEING WITH GAUSSIAN AND COX NOISE V. MANDEKA, T. MEYE-BANDIS, AND F. POSKE Abstract. A Bayes type formula is derived for the non-linear filter where the observation contains
More informationReflected Brownian Motion
Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide
More informationMalliavin calculus and central limit theorems
Malliavin calculus and central limit theorems David Nualart Department of Mathematics Kansas University Seminar on Stochastic Processes 2017 University of Virginia March 8-11 2017 David Nualart (Kansas
More information