Non-Zero-Sum Stochastic Differential Games of Controls and Stoppings


Non-Zero-Sum Stochastic Differential Games of Controls and Stoppings
October 1, 2009

Based on two preprints:
... to a Non-Zero-Sum Stochastic Differential Game of Controls and Stoppings, I. Karatzas, Q. Li, 2009
A ... to Non-Zero-Sum Stochastic Differential Games of Controls and Stoppings, I. Karatzas, Q. Li, S. Peng, 2009

Bibliography

Non-Zero-Sum Game and Nash Equilibrium

John F. Nash (1949): One may define a concept of AN n-person GAME in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n-tuple of pure strategies, one strategy being taken by each player. One such n-tuple counters another if the strategy of each player in the countering n-tuple yields the highest obtainable expectation for its player against the n-1 strategies of the other players in the countered n-tuple. A self-countering n-tuple is called AN EQUILIBRIUM POINT.

Non-Zero-Sum Game and Nash Equilibrium

Aside: In a non-zero-sum game, each player chooses a strategy as his best response to the other players' strategies. In a Nash equilibrium, no player profits from unilaterally changing his strategy.

Non-Zero-Sum Game and Nash Equilibrium

Generalization of zero-sum games:

0-sum: Player I solves max_{s1} R(s1, s2), Player II solves min_{s2} R(s1, s2); the optimal (s1*, s2*) is a saddle point:
R(s1, s2*) ≤ R(s1*, s2*),  R(s1*, s2*) ≤ R(s1*, s2).

0-sum, restated: Player I solves max_{s1} R(s1, s2), Player II solves max_{s2} (-R)(s1, s2); the saddle point satisfies
R(s1, s2*) ≤ R(s1*, s2*),  (-R)(s1*, s2) ≤ (-R)(s1*, s2*).

non-0-sum: Player I solves max_{s1} R1(s1, s2), Player II solves max_{s2} R2(s1, s2); the optimal (s1*, s2*) is an equilibrium:
R1(s1, s2*) ≤ R1(s1*, s2*),  R2(s1*, s2) ≤ R2(s1*, s2*).
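The equilibrium inequalities in the last row can be checked mechanically on any finite game. A minimal sketch, using hypothetical prisoner's-dilemma payoffs (not from the slides): a profile is an equilibrium iff neither player gains by a unilateral deviation.

```python
import itertools

# Payoff tables for a 2x2 non-zero-sum game (a hypothetical
# prisoner's dilemma): R1[s1][s2] for player I, R2[s1][s2] for player II.
R1 = [[-1, -3],
      [ 0, -2]]
R2 = [[-1,  0],
      [-3, -2]]

def is_nash(s1, s2):
    """(s1, s2) is an equilibrium iff each strategy is a best response
    to the other, i.e. no unilateral deviation improves either payoff."""
    best1 = all(R1[s1][s2] >= R1[a][s2] for a in range(2))
    best2 = all(R2[s1][s2] >= R2[s1][b] for b in range(2))
    return best1 and best2

equilibria = [s for s in itertools.product(range(2), range(2)) if is_nash(*s)]
```

Here the unique equilibrium is mutual defection, even though both players would prefer mutual cooperation: exactly the non-zero-sum phenomenon the slide contrasts with the saddle-point picture.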

Non-Zero-Sum Game and Nash Equilibrium

A simple and understandable example, if there has to be one: go watch A Beautiful Mind, Universal Pictures, 2001. (11th Mar. 2009, Columbia University)
Kuhn: Don't learn game theory from the movie. The blonde thing is not a Nash equilibrium!
Odifreddi: How you invented the theory, I mean, the story about the blonde, was it real?
Nash: No!!!
Odifreddi: Did you apply game theory to win Alicia?
Nash: ...Yes... (followed by ten minutes' discussion of personal life and game theory)

Stochastic Differential Games

Martingale Method: rewards can be functionals of the state process.
Beneš, 1970, 1971
M. H. A. Davis, 1979
Karatzas and Zamfirescu, 2006, 2008

Stochastic Differential Games

BSDE Method: identify the value of a game with the solution to a BSDE, then establish uniqueness and especially existence of the solution.
Bismut, 1970s
Pardoux and Peng, 1990
El Karoui, Kapoudjian, Pardoux, Peng, and Quenez, 1997
Cvitanić and Karatzas, 1996
Hamadène, Lepeltier, and Peng, 1997

Stochastic Differential Games

PDE Method: rewards are functions of the state process. Regularity theory by Bensoussan, Frehse, and Friedman. Facilitates numerical computation.
Bensoussan and Friedman, 1977
Bensoussan and Frehse, 2000
H. J. Kushner and P. Dupuis

Our Results

Main results:
(non-)existence of equilibrium stopping rules
necessity and sufficiency of the Isaacs condition

Martingale part:
equilibrium stopping rules, for L ≤ U and for L > U
equivalent martingale characterization of Nash equilibrium

BSDE part:
multi-dimensional reflected BSDE
equilibrium stopping rules, L ≤ U


B is a d-dimensional Brownian motion w.r.t. its generated filtration {F_t}_{t≥0} on the probability space (Ω, F, P).

Change of measure:
dP^{u,v}/dP |_{F_t} = exp{ ∫_0^t σ^{-1}(s,X) f(s,X,u_s,v_s) dB_s - (1/2) ∫_0^t |σ^{-1}(s,X) f(s,X,u_s,v_s)|^2 ds },  (1)

under which
B^{u,v}_t := B_t - ∫_0^t σ^{-1}(s,X) f(s,X,u_s,v_s) ds,  0 ≤ t ≤ T,  (2)
is a standard P^{u,v}-Brownian motion.
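The density in (1) is a stochastic exponential, so its P-expectation is 1; that is what makes P^{u,v} a probability measure. A Monte Carlo sketch of this sanity check, with the feedback drift θ_s = σ^{-1}(s,X) f(s,X,u_s,v_s) frozen to a hypothetical constant for illustration:

```python
import math
import random

# One-dimensional sketch: simulate B on a grid, accumulate the
# stochastic integral of a constant drift theta, and average the
# Girsanov density exp(int theta dB - 1/2 int theta^2 ds) over paths.
random.seed(0)
theta, T, n, n_paths = 0.5, 1.0, 200, 4000
dt = T / n

def girsanov_density():
    """Return the terminal density for one simulated Brownian path."""
    stoch_int = 0.0
    for _ in range(n):
        dB = random.gauss(0.0, math.sqrt(dt))
        stoch_int += theta * dB
    return math.exp(stoch_int - 0.5 * theta**2 * T)

mean_density = sum(girsanov_density() for _ in range(n_paths)) / n_paths
```

The sample mean of the density comes out close to 1, consistent with the martingale property of the stochastic exponential; the constant θ is an assumption of this sketch, not of the slides.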

State process:
X_t = X_0 + ∫_0^t σ(s,X) dB_s
    = X_0 + ∫_0^t f(s,X,u_s,v_s) ds + ∫_0^t σ(s,X) dB^{u,v}_s,  0 ≤ t ≤ T.  (3)

Hamiltonians:
H_1(t,x,z_1,u,v) := z_1 σ^{-1}(t,x) f(t,x,u,v) + h_1(t,x,u,v);
H_2(t,x,z_2,u,v) := z_2 σ^{-1}(t,x) f(t,x,u,v) + h_2(t,x,u,v).  (4)

Admissible controls u ∈ U and v ∈ V:
u, v : [0, T] × ℝ × ℝ × ℝ → ℝ random fields.
τ, ρ ∈ S_t = set of stopping rules defined on the paths ω, which generate {F_t}_{t≥0}-stopping times on Ω.
Strategies: Player I: (u, τ(u, v)); Player II: (v, ρ(u, v)).
Reward processes R^1(τ,ρ,u,v) and R^2(τ,ρ,u,v).
Players' expected reward processes:
J^i_t(τ,ρ,u,v) = E^{u,v}[ R^i_t(τ,ρ,u,v) | F_t ],  i = 1, 2.  (5)

Nash equilibrium strategies (u*, v*, τ*, ρ*)

Find admissible control strategies u* ∈ U and v* ∈ V, and stopping rules τ* and ρ* in S_{t,T}, that maximize the expected rewards:
V_1(t) := J^1_t(τ*, ρ*, u*, v*) ≥ J^1_t(τ, ρ*, u, v*),  ∀ τ ∈ S_{t,T}, u ∈ U;
V_2(t) := J^2_t(τ*, ρ*, u*, v*) ≥ J^2_t(τ*, ρ, u*, v),  ∀ ρ ∈ S_{t,T}, v ∈ V.  (6)

No profit from unilaterally changing strategy.

Analysis, in the spirit of Nash (1949)

For u^0 ∈ U, v^0 ∈ V, and τ^0, ρ^0 ∈ S_{t,T}, find (u^1, v^1, τ^1, ρ^1) that counters (u^0, v^0, τ^0, ρ^0), i.e.
(u^1, τ^1) = arg max_{τ ∈ S_{t,T}} max_{u ∈ U} J^1_t(τ, ρ^0, u, v^0);
(v^1, ρ^1) = arg max_{ρ ∈ S_{t,T}} max_{v ∈ V} J^2_t(τ^0, ρ, u^0, v).  (7)

The equilibrium (τ*, ρ*, u*, v*) is a fixed point of the mapping
Γ : (τ^0, ρ^0, u^0, v^0) ↦ (τ^1, ρ^1, u^1, v^1).  (8)
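In a finite game the countering map (7)-(8) can be iterated directly: Γ sends a profile to the pair of best responses, and a fixed point is a self-countering (equilibrium) profile. A sketch on a hypothetical 2x2 game with dominant strategies, chosen so that the iteration converges (in general, best-response iteration can cycle):

```python
# Hypothetical payoff tables with dominant strategies for both players.
R1 = [[3, 0],
      [5, 1]]
R2 = [[3, 5],
      [0, 1]]

def br1(s2):
    # player I counters s2 with his highest-payoff pure strategy
    return max(range(2), key=lambda a: R1[a][s2])

def br2(s1):
    # player II counters s1 likewise
    return max(range(2), key=lambda b: R2[s1][b])

profile = (0, 0)                       # arbitrary starting profile
for _ in range(10):
    countering = (br1(profile[1]), br2(profile[0]))   # apply Gamma
    if countering == profile:          # self-countering: fixed point
        break
    profile = countering
```

The loop stops at a profile that counters itself, mirroring Nash's fixed-point characterization of equilibrium.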

More about Control Sets

Z^{u,v}_i(t) = instantaneous volatility process of player i's reward process J^i_t(τ,ρ,u,v), i.e.
dJ^i_t(τ,ρ,u,v) = d(finite variation part) + Z^{u,v}_i(t) dB^{u,v}(t).  (9)

Partial observation: u_t = u(t), v_t = v(t).
Full observation: u_t = u(t, x), v_t = v(t, x).
Observing volatility: u_t = u(t, x, Z^{u,v}_1(t), Z^{u,v}_2(t)), v_t = v(t, x, Z^{u,v}_1(t), Z^{u,v}_2(t)).  (10)

More about Control Sets

Why care about Z^{u,v}? Risk-sensitive control: sensitive not only to the expectation but also to the variance of the reward.
Bensoussan, Frehse, and Nagai, 1998
El Karoui and Hamadène, 2003


Rewards and Assumptions

R^1_t(τ,ρ,u,v) := ∫_t^{τ∧ρ} h_1(s,X,u_s,v_s) ds + L_1(τ) 1_{τ<ρ} + U_1(ρ) 1_{ρ≤τ<T} + ξ_1 1_{τ∧ρ=T};
R^2_t(τ,ρ,u,v) := ∫_t^{τ∧ρ} h_2(s,X,u_s,v_s) ds + L_2(ρ) 1_{ρ<τ} + U_2(τ) 1_{τ≤ρ<T} + ξ_2 1_{τ∧ρ=T}.  (11)

Assumptions: boundedness of h, L, U, ξ; measurability of h, L, U, ξ; continuity of L, U.
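On a discrete time grid the indicator structure of (11) evaluates case by case: run the integral to τ∧ρ, then pay L if you stopped first, U if the opponent stopped first, ξ at the horizon. A sketch for player I's reward, with all inputs (grid, h_1, L_1, U_1, ξ_1, the chosen stopping indices) hypothetical:

```python
def reward1(tau, rho, h1, L1, U1, xi1, dt, T_idx, t=0):
    """Player I's reward (11) on a grid: tau, rho are grid indices,
    T_idx is the terminal index."""
    stop = min(tau, rho)                              # tau ^ rho
    running = sum(h1[s] * dt for s in range(t, stop)) # int_t^{tau^rho} h1 ds
    if stop == T_idx:                                 # {tau ^ rho = T}
        return running + xi1
    if tau < rho:                                     # {tau < rho}
        return running + L1[tau]
    return running + U1[rho]                          # {rho <= tau < T}

# Toy evaluation: player I stops at index 2, player II at index 3.
r = reward1(tau=2, rho=3, h1=[0.1] * 4, L1=[1.0] * 4, U1=[2.0] * 4,
            xi1=0.5, dt=0.25, T_idx=4)
```

Since τ < ρ here, player I collects his own lower payoff L_1(τ) on top of the running reward, exactly the first indicator case of (11).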

Equivalent Martingale Characterization

Notations:
Y_1(t; ρ, v) := sup_{τ ∈ S_{t,ρ}} sup_{u ∈ U} J^1_t(τ, ρ, u, v);
Y_2(t; τ, u) := sup_{ρ ∈ S_{t,τ}} sup_{v ∈ V} J^2_t(τ, ρ, u, v).  (12)

V_1(t; ρ, u, v) := Y_1(t; ρ, v) + ∫_0^t h_1(s,X,u_s,v_s) ds;
V_2(t; τ, u, v) := Y_2(t; τ, u) + ∫_0^t h_2(s,X,u_s,v_s) ds.  (13)

Equivalent Martingale Characterization

Thm. (τ*, ρ*, u*, v*) is an equilibrium point if and only if the following three conditions hold:
(1) Y_1(τ*; ρ*, v*) = L_1(τ*) 1_{τ*<ρ*} + U_1(ρ*) 1_{ρ*≤τ*<T} + ξ_1 1_{τ*∧ρ*=T}, and
    Y_2(ρ*; τ*, u*) = L_2(ρ*) 1_{ρ*<τ*} + U_2(τ*) 1_{τ*≤ρ*<T} + ξ_2 1_{τ*∧ρ*=T};
(2) V_1(· ∧ τ*; ρ*, u*, v*) and V_2(· ∧ ρ*; τ*, u*, v*) are P^{u*,v*}-martingales;
(3) for every u ∈ U, V_1(· ∧ τ*; ρ*, u, v*) is a P^{u,v*}-supermartingale; for every v ∈ V, V_2(· ∧ ρ*; τ*, u*, v) is a P^{u*,v}-supermartingale.

Equilibrium Stopping Rules

Def. Fix generic controls (u, v) ∈ U × V. Equilibrium stopping rules are a pair (τ*, ρ*) ∈ S²_{t,T} such that
J^1_t(τ*, ρ*, u, v) ≥ J^1_t(τ, ρ*, u, v),  ∀ τ ∈ S_{t,T};
J^2_t(τ*, ρ*, u, v) ≥ J^2_t(τ*, ρ, u, v),  ∀ ρ ∈ S_{t,T}.  (14)

Equilibrium Stopping Rules

Notations:
Y_1(t, u; ρ, v) := sup_{τ ∈ S_{t,T}} J^1_t(τ, ρ, u, v);
Y_2(t, v; τ, u) := sup_{ρ ∈ S_{t,T}} J^2_t(τ, ρ, u, v).  (15)

Q_1(t, u; ρ, v) := Y_1(t, u; ρ, v) + ∫_0^t h_1(s,X,u_s,v_s) ds;
Q_2(t, v; τ, u) := Y_2(t, v; τ, u) + ∫_0^t h_2(s,X,u_s,v_s) ds.  (16)

Equilibrium Stopping Rules

Lem. (τ*, ρ*) is a pair of equilibrium stopping rules iff
(1) Y_1(τ*, u; ρ*, v) = L_1(τ*) 1_{τ*<ρ*} + U_1(ρ*) 1_{ρ*≤τ*<T} + ξ_1 1_{τ*∧ρ*=T}, and
    Y_2(ρ*, v; τ*, u) = L_2(ρ*) 1_{ρ*<τ*} + U_2(τ*) 1_{τ*≤ρ*<T} + ξ_2 1_{τ*∧ρ*=T};  (17)
(2) the stopped supermartingales Q_1(· ∧ τ*, u; ρ*, v) and Q_2(· ∧ ρ*, v; τ*, u) are P^{u,v}-martingales.

Equilibrium Stopping Rules

Suppose L(t) ≤ U(t) 1_{t<T} + ξ 1_{t=T}, for all 0 ≤ t ≤ T. If (τ*, ρ*) solve the first-hitting-time equations
τ* = inf{ t ≤ s < ρ* : Y_1(s, u; ρ*, v) = L_1(s) } ∧ ρ*;
ρ* = inf{ t ≤ s < τ* : Y_2(s, v; τ*, u) = L_2(s) } ∧ τ*,  (18)
then (τ*, ρ*) are equilibrium stopping rules.
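In discrete time, and for a single player with the opponent frozen, the hitting-time recipe (18) reduces to the classical Snell envelope: Y_t = max(L_t, E_t[Y_{t+1}]) by backward induction, and it is optimal to stop the first time Y touches the obstacle L. A deterministic toy payoff (hypothetical numbers, so the conditional expectation is exact) keeps the sketch transparent:

```python
# Early-exercise payoff L(t) on a grid t = 0, ..., T.
L = [1.0, 3.0, 2.0, 5.0, 0.0]
T = len(L) - 1

# Snell envelope by backward induction; with no randomness,
# E_t[Y_{t+1}] is just Y_{t+1}.
Y = [0.0] * (T + 1)
Y[T] = L[T]
for t in range(T - 1, -1, -1):
    Y[t] = max(L[t], Y[t + 1])

# First hitting time of the obstacle: optimal stopping time.
tau = next(t for t in range(T + 1) if Y[t] == L[t])
```

The envelope flattens at the running maximum of the future payoffs, and the first time it coincides with L is exactly the argmax of the payoff sequence.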

Equilibrium Stopping Rules

Suppose L(t) ≥ U(t) 1_{t<T} + ξ 1_{t=T} + ε, for all 0 ≤ t ≤ T and some ε > 0. If L is uniformly continuous in ω ∈ Ω, then equilibrium stopping rules do not exist.

Martingale Structures

Suppose (τ*, ρ*, u*, v*) is an equilibrium point.
Doob-Meyer decomposition:
V_1(t; ρ*, u, v) = Y_1(0; ρ*, v) - A_1(t; ρ*, u, v) + M_1(t; ρ*, u, v),  0 ≤ t ≤ τ*;
V_2(t; τ*, u, v) = Y_2(0; τ*, u) - A_2(t; τ*, u, v) + M_2(t; τ*, u, v),  0 ≤ t ≤ ρ*.  (19)

Martingale representation:
M_1(t; ρ*, u, v) = ∫_0^t Z^v_1(s) dB^{u,v}_s;
M_2(t; τ*, u, v) = ∫_0^t Z^u_2(s) dB^{u,v}_s.  (20)

Martingale Structures

Finite variation parts:
A_1(t; ρ*, u^1, v) - A_1(t; ρ*, u^2, v) = ∫_0^t ( H_1(s, X, Z_1(s), u^1_s, v_s) - H_1(s, X, Z_1(s), u^2_s, v_s) ) ds,  0 ≤ t ≤ τ*;
A_2(t; τ*, u, v^1) - A_2(t; τ*, u, v^2) = ∫_0^t ( H_2(s, X, Z_2(s), u_s, v^1_s) - H_2(s, X, Z_2(s), u_s, v^2_s) ) ds,  0 ≤ t ≤ ρ*.  (21)

Isaacs Condition

Necessity (stochastic maximum principle).
Prop. If (τ*, ρ*, u*, v*) is an equilibrium point, then
H_1(t, X, Z_1(t), u*_t, v*_t) ≥ H_1(t, X, Z_1(t), u_t, v*_t),  for all 0 ≤ t ≤ τ*, u ∈ U;
H_2(t, X, Z_2(t), u*_t, v*_t) ≥ H_2(t, X, Z_2(t), u*_t, v_t),  for all 0 ≤ t ≤ ρ*, v ∈ V.  (22)

Isaacs Condition

Sufficiency.
Thm. Let τ*, ρ* ∈ S_{t,T} be equilibrium stopping rules. If a pair of controls (u*, v*) ∈ U × V satisfies the Isaacs condition
H_1(t, x, z_1, u*_t, v*_t) ≥ H_1(t, x, z_1, u_t, v*_t);
H_2(t, x, z_2, u*_t, v*_t) ≥ H_2(t, x, z_2, u*_t, v_t),  (23)
for all 0 ≤ t ≤ T, u ∈ U, and v ∈ V, then (u*, v*) are equilibrium controls.


Each Player's Reward Terminated by Himself

Game 2.1:
R^1_t(τ,ρ,u,v) := ∫_t^τ h_1(s,X,u_s,v_s) ds + L_1(τ) 1_{τ<T} + η_1 1_{τ=T};
R^2_t(τ,ρ,u,v) := ∫_t^ρ h_2(s,X,u_s,v_s) ds + L_2(ρ) 1_{ρ<T} + η_2 1_{ρ=T}.  (24)
L_1 ≤ η_1, L_2 ≤ η_2, a.s.

Assumption A 2.1 (Isaacs condition). There exist admissible controls (u*, v*) ∈ U × V such that, for all t ∈ [0, T], u ∈ U, v ∈ V,
H_1(t, x, z_1, (u*, v*)(t, x, z_1, z_2)) ≥ H_1(t, x, z_1, u(t, x, z_1, z_2), v*(t, x, z_1, z_2));
H_2(t, x, z_2, (u*, v*)(t, x, z_1, z_2)) ≥ H_2(t, x, z_2, u*(t, x, z_1, z_2), v(t, x, z_1, z_2)).  (25)

Each Player's Reward Terminated by Himself

Thm 2.1. Let (Y, Z, K) be the solution to the reflected BSDE
Y(t) = η + ∫_t^T H(s, X, Z(s), u*, v*) ds - ∫_t^T Z(s) dB_s + K(T) - K(t);
Y(t) ≥ L(t),  0 ≤ t ≤ T;
∫_0^T (Y_i(t) - L_i(t)) dK_i(t) = 0.  (26)

Optimal stopping rules:
τ* := inf{ s ∈ [t, T] : Y_1(s) ≤ L_1(s) } ∧ T;
ρ* := inf{ s ∈ [t, T] : Y_2(s) ≤ L_2(s) } ∧ T.  (27)

Then (τ*, ρ*, u*, v*) is optimal for Game 2.1. Furthermore, V_i(t) = Y_i(t), i = 1, 2.
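One coordinate of the reflected BSDE (26) can be sketched on a grid: step the continuation value back with the driver, push it up to the obstacle where needed, and record the push as the increment of K. The Skorokhod condition in (26) then says dK > 0 only where Y sits on L. All numbers below (constant driver h, obstacle L, grid) are hypothetical, and the noise is dropped so the conditional expectation is exact:

```python
# Backward scheme:  cont = Y_{t+1} + h * dt,  Y_t = max(cont, L_t),
# with dK_t = Y_t - cont >= 0 the reflection ("local time") increment.
h, dt = 0.1, 0.25
L = [0.9, 0.2, 0.8, 0.1, 0.0]   # obstacle; L[-1] plays the terminal value
T = len(L) - 1

Y = [0.0] * (T + 1)
dK = [0.0] * (T + 1)
Y[T] = L[T]
for t in range(T - 1, -1, -1):
    cont = Y[t + 1] + h * dt          # continuation value
    Y[t] = max(cont, L[t])            # reflection at the obstacle
    dK[t] = Y[t] - cont               # push-up, zero off the obstacle

# Discrete Skorokhod condition: K only grows where Y touches L.
flat = all(dK[t] == 0.0 or Y[t] == L[t] for t in range(T + 1))
```

The first time Y touches L from above is the discrete analogue of the stopping rule (27).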

Game with Interactive Stoppings

Game 2.2:
R^1_t(τ,ρ,u,v) := ∫_t^{τ∧ρ} h_1(s,X,u_s,v_s) ds + L_1(τ) 1_{τ<ρ} + U_1(ρ) 1_{ρ≤τ<T} + ξ_1 1_{τ∧ρ=T};
R^2_t(τ,ρ,u,v) := ∫_t^{τ∧ρ} h_2(s,X,u_s,v_s) ds + L_2(ρ) 1_{ρ<τ} + U_2(τ) 1_{τ≤ρ<T} + ξ_2 1_{τ∧ρ=T}.  (28)

Assumption A 2.2 (Isaacs condition). There exist admissible controls (u*, v*) ∈ U × V such that
H_1(t, x, z_1, (u*, v*)(t, x)) ≥ H_1(t, x, z_1, u(t, x), v*(t, x)),  ∀ t ∈ [0, T], u ∈ U;
H_2(t, x, z_2, (u*, v*)(t, x)) ≥ H_2(t, x, z_2, u*(t, x), v(t, x)),  ∀ t ∈ [0, T], v ∈ V.  (29)

Game with Interactive Stoppings

Assumption A 2.3.
(1) For i = 1, 2, the reward processes U_i(·), redefined as
U_i(t) := U_i(t) for 0 ≤ t < T, and U_i(T) := ξ_i,  (30)
are increasing processes, with L_i(t) ≤ U_i(t) ≤ ξ_i, a.s. ("patience pays").
(2) Both reward processes {U(t)}_{0≤t≤T} and {L(t)}_{0≤t≤T} are right-continuous in time t on [0, T] × Ω.
(3) |h_i| ≤ c, i = 1, 2.

Game with Interactive Stoppings

Thm 2.2. Under Assumptions A 2.2 and A 2.3, there exists an equilibrium point (τ*, ρ*, u*, v*) of Game 2.2.

Game with Interactive Stoppings

Thm 2.3 (Associated BSDE).
Y_i(t) = ξ_i + ∫_t^T H_i(s, X, Z_i(s), (u*, v*)(s, X, Z_1(s), Z_2(s))) ds - ∫_t^T Z_i(s) dB_s + K_i(T) - K_i(t) + N_i(t, T);
Y_i(t) ≥ L_i(t),  0 ≤ t ≤ T;  ∫_0^T (Y_i(t) - L_i(t)) dK_i(t) = 0;  i = 1, 2,  (31)
where
N_i(t, T) := Σ_{t≤s≤T} (U_i(s) - Y_i(s)) 1_{ {Y_j(s) = L_j(s)} },  i ≠ j, i, j = 1, 2.  (32)
(Being kicked up to U when the other player drops down to L.)

Game with Interactive Stoppings

Optimal stopping times:
τ* := τ^{u,v}_t(u, v) := inf{ s ∈ [t, T] : Y_1(s) ≤ L_1(s) } ∧ T;
ρ* := ρ^{u,v}_t(u, v) := inf{ s ∈ [t, T] : Y_2(s) ≤ L_2(s) } ∧ T.  (33)


Lipschitz Growth

m-dimensional reflected BSDE:
Y_1(t) = ξ_1 + ∫_t^T g_1(s, Y(s), Z(s)) ds - ∫_t^T Z_1(s) dB_s + K_1(T) - K_1(t);
Y_1(t) ≥ L_1(t), 0 ≤ t ≤ T;  ∫_0^T (Y_1(t) - L_1(t)) dK_1(t) = 0;
...
Y_m(t) = ξ_m + ∫_t^T g_m(s, Y(s), Z(s)) ds - ∫_t^T Z_m(s) dB_s + K_m(T) - K_m(t);
Y_m(t) ≥ L_m(t), 0 ≤ t ≤ T;  ∫_0^T (Y_m(t) - L_m(t)) dK_m(t) = 0.  (34)

Lipschitz Growth

Seek a solution (Y, Z, K) in the spaces
Y = (Y_1, ..., Y_m) ∈ M²(m; 0, T) := { m-dimensional predictable processes φ s.t. E[ sup_{[0,T]} |φ_t|² ] < ∞ };
Z = (Z_1, ..., Z_m) ∈ L²(m×d; 0, T) := { m×d-dimensional predictable processes φ s.t. E[ ∫_0^T |φ_t|² dt ] < ∞ };
K = (K_1, ..., K_m) = continuous, increasing processes in M²(m; 0, T).  (35)

Lipschitz Growth

Assumption A 3.1.
(1) The random field
g = (g_1, ..., g_m) : [0, T] × ℝ^m × ℝ^{m×d} → ℝ^m  (36)
is uniformly Lipschitz in y and z, i.e. there exists a constant b > 0 such that
|g(t, y, z) - g(t, ȳ, z̄)| ≤ b( |y - ȳ| + |z - z̄| ),  ∀ t ∈ [0, T].  (37)
Furthermore,
E[ ∫_0^T |g(t, 0, 0)|² dt ] < ∞.  (38)
(2) The random variable ξ is F_T-measurable and square-integrable. The lower reflecting boundary L is progressively measurable and satisfies E[ sup_{[0,T]} |L⁺(t)|² ] < ∞, with L ≤ ξ, P-a.s.
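The Lipschitz bound (37) is what drives the contraction argument on the next slide: the Picard map (ΦY)(t) = ξ + ∫_t^T g(Y_s) ds is a contraction, so its iterates converge to the unique solution. A numerical sketch with a hypothetical scalar driver g(y) = -b·y and no noise, whose exact solution is Y_t = ξ e^{-b(T-t)}:

```python
import math

# Picard iteration for the backward equation Y_t = xi + int_t^T g(Y_s) ds
# on a time grid, with the hypothetical Lipschitz driver g(y) = -b * y.
b, T, xi, n = 0.5, 1.0, 2.0, 1000
dt = T / n

Y = [xi] * (n + 1)                       # initial guess Y^(0)
for _ in range(60):
    new = [0.0] * (n + 1)
    new[n] = xi                          # terminal condition
    for t in range(n - 1, -1, -1):       # left Riemann sum of g(Y^(k))
        new[t] = new[t + 1] - b * Y[t] * dt
    gap = max(abs(a - c) for a, c in zip(new, Y))
    Y = new
    if gap < 1e-12:                      # contraction has converged
        break

exact = xi * math.exp(-b * T)            # closed-form Y_0 for comparison
```

Successive iterates contract at a factorial rate (the usual Gronwall estimate), so a handful of sweeps suffices; the remaining gap to the closed form is only the time-discretization error.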

Lipschitz Growth

Results:
existence and uniqueness of the solution, via the contraction method
one-dimensional comparison theorem (EKPPQ, 1997)
continuous dependence property

Linear Growth, Markovian System

ℓ-dimensional forward equation:
X^{t,x}(s) = x, 0 ≤ s ≤ t;  dX^{t,x}(s) = σ(s, X^{t,x}(s)) dB_s,  t < s ≤ T.  (39)

m-dimensional backward equation:
Y^{t,x}(s) = ξ(X^{t,x}(T)) + ∫_s^T g(r, X^{t,x}(r), Y^{t,x}(r), Z^{t,x}(r)) dr - ∫_s^T Z^{t,x}(r) dB_r + K^{t,x}(T) - K^{t,x}(s);
Y^{t,x}(s) ≥ L(s, X^{t,x}(s)),  t ≤ s ≤ T;  ∫_t^T (Y^{t,x}(s) - L(s, X^{t,x}(s))) dK^{t,x}(s) = 0.  (40)

Linear Growth, Markovian System

Assumption A 4.1.
(1) g : [0, T] × ℝ^ℓ × ℝ^m × ℝ^{m×d} → ℝ^m is measurable, and for all (t, x, y, z) ∈ [0, T] × ℝ^ℓ × ℝ^m × ℝ^{m×d},
|g(t, x, y, z)| ≤ b( 1 + |x|^p + |y| + |z| ),  for some positive constant b;
(2) for every fixed (t, x) ∈ [0, T] × ℝ^ℓ, g(t, x, ·, ·) is continuous;
(3) E[ |ξ(X(T))|² ] < ∞;  E[ sup_{[0,T]} |L⁺(t, X(t))|² ] < ∞;  L ≤ ξ, P-a.s.

Linear Growth, Markovian System

Results:
existence of a solution, via Lipschitz approximation
one-dimensional comparison theorem
continuous dependence property

THAT'S ALL. THANK YOU.


More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

ROOT S BARRIER, VISCOSITY SOLUTIONS OF OBSTACLE PROBLEMS AND REFLECTED FBSDES

ROOT S BARRIER, VISCOSITY SOLUTIONS OF OBSTACLE PROBLEMS AND REFLECTED FBSDES OOT S BAIE, VISCOSITY SOLUTIONS OF OBSTACLE POBLEMS AND EFLECTED FBSDES HAALD OBEHAUSE AND GONÇALO DOS EIS Abstract. Following work of Dupire 005, Carr Lee 010 and Cox Wang 011 on connections between oot

More information

ITO OLOGY. Thomas Wieting Reed College, Random Processes 2 The Ito Integral 3 Ito Processes 4 Stocastic Differential Equations

ITO OLOGY. Thomas Wieting Reed College, Random Processes 2 The Ito Integral 3 Ito Processes 4 Stocastic Differential Equations ITO OLOGY Thomas Wieting Reed College, 2000 1 Random Processes 2 The Ito Integral 3 Ito Processes 4 Stocastic Differential Equations 1 Random Processes 1 Let (, F,P) be a probability space. By definition,

More information

Strong Solutions and a Bismut-Elworthy-Li Formula for Mean-F. Formula for Mean-Field SDE s with Irregular Drift

Strong Solutions and a Bismut-Elworthy-Li Formula for Mean-F. Formula for Mean-Field SDE s with Irregular Drift Strong Solutions and a Bismut-Elworthy-Li Formula for Mean-Field SDE s with Irregular Drift Thilo Meyer-Brandis University of Munich joint with M. Bauer, University of Munich Conference for the 1th Anniversary

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

A Stochastic Paradox for Reflected Brownian Motion?

A Stochastic Paradox for Reflected Brownian Motion? Proceedings of the 9th International Symposium on Mathematical Theory of Networks and Systems MTNS 2 9 July, 2 udapest, Hungary A Stochastic Parado for Reflected rownian Motion? Erik I. Verriest Abstract

More information

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures Numerical scheme for quadratic BSDEs, Numerics and Finance, Oxford Man Institute Julien Azzaz I.S.F.A., Lyon, FRANCE 4 July 2012 Joint Work In Progress with Anis Matoussi, Le Mans, FRANCE Outline Framework

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

Research Article Nonzero-Sum Stochastic Differential Game between Controller and Stopper for Jump Diffusions

Research Article Nonzero-Sum Stochastic Differential Game between Controller and Stopper for Jump Diffusions Abstract and Applied Analysis Volume 213, Article ID 76136, 7 pages http://dx.doi.org/1.1155/213/76136 esearch Article Nonzero-Sum Stochastic Differential Game between Controller and Stopper for Jump Diffusions

More information

Solvability of G-SDE with Integral-Lipschitz Coefficients

Solvability of G-SDE with Integral-Lipschitz Coefficients Solvability of G-SDE with Integral-Lipschitz Coefficients Yiqing LIN joint work with Xuepeng Bai IRMAR, Université de Rennes 1, FRANCE http://perso.univ-rennes1.fr/yiqing.lin/ ITN Marie Curie Workshop

More information

Deterministic Minimax Impulse Control

Deterministic Minimax Impulse Control Deterministic Minimax Impulse Control N. El Farouq, Guy Barles, and P. Bernhard March 02, 2009, revised september, 15, and october, 21, 2009 Abstract We prove the uniqueness of the viscosity solution of

More information

Martingale Representation Theorem for the G-expectation

Martingale Representation Theorem for the G-expectation Martingale Representation Theorem for the G-expectation H. Mete Soner Nizar Touzi Jianfeng Zhang February 22, 21 Abstract This paper considers the nonlinear theory of G-martingales as introduced by eng

More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Optimal Stopping under Adverse Nonlinear Expectation and Related Games

Optimal Stopping under Adverse Nonlinear Expectation and Related Games Optimal Stopping under Adverse Nonlinear Expectation and Related Games Marcel Nutz Jianfeng Zhang First version: December 7, 2012. This version: July 21, 2014 Abstract We study the existence of optimal

More information

A probabilistic approach to second order variational inequalities with bilateral constraints

A probabilistic approach to second order variational inequalities with bilateral constraints Proc. Indian Acad. Sci. (Math. Sci.) Vol. 113, No. 4, November 23, pp. 431 442. Printed in India A probabilistic approach to second order variational inequalities with bilateral constraints MRINAL K GHOSH,

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

Dynamic Bertrand and Cournot Competition

Dynamic Bertrand and Cournot Competition Dynamic Bertrand and Cournot Competition Effects of Product Differentiation Andrew Ledvina Department of Operations Research and Financial Engineering Princeton University Joint with Ronnie Sircar Princeton-Lausanne

More information

Applications of Optimal Stopping and Stochastic Control

Applications of Optimal Stopping and Stochastic Control Applications of and Stochastic Control YRM Warwick 15 April, 2011 Applications of and Some problems Some technology Some problems The secretary problem Bayesian sequential hypothesis testing the multi-armed

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

The Real Option Approach to Plant Valuation: from Spread Options to Optimal Switching

The Real Option Approach to Plant Valuation: from Spread Options to Optimal Switching The Real Option Approach to Plant Valuation: from Spread Options to René 1 and Michael Ludkovski 2 1 Bendheim Center for Finance Department of Operations Research & Financial Engineering Princeton University

More information

Dual Representation as Stochastic Differential Games of Backward Stochastic Differential Equations and Dynamic Evaluations

Dual Representation as Stochastic Differential Games of Backward Stochastic Differential Equations and Dynamic Evaluations arxiv:mah/0602323v1 [mah.pr] 15 Feb 2006 Dual Represenaion as Sochasic Differenial Games of Backward Sochasic Differenial Equaions and Dynamic Evaluaions Shanjian Tang Absrac In his Noe, assuming ha he

More information

Robust Mean-Variance Hedging via G-Expectation

Robust Mean-Variance Hedging via G-Expectation Robust Mean-Variance Hedging via G-Expectation Francesca Biagini, Jacopo Mancin Thilo Meyer Brandis January 3, 18 Abstract In this paper we study mean-variance hedging under the G-expectation framework.

More information

arxiv: v1 [math.pr] 24 Sep 2018

arxiv: v1 [math.pr] 24 Sep 2018 A short note on Anticipative portfolio optimization B. D Auria a,b,1,, J.-A. Salmerón a,1 a Dpto. Estadística, Universidad Carlos III de Madrid. Avda. de la Universidad 3, 8911, Leganés (Madrid Spain b

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

On dynamic stochastic control with control-dependent information

On dynamic stochastic control with control-dependent information On dynamic stochastic control with control-dependent information 12 (Warwick) Liverpool 2 July, 2015 1 {see arxiv:1503.02375} 2 Joint work with Matija Vidmar (Ljubljana) Outline of talk Introduction Some

More information

MEAN FIELD GAMES WITH MAJOR AND MINOR PLAYERS

MEAN FIELD GAMES WITH MAJOR AND MINOR PLAYERS MEAN FIELD GAMES WITH MAJOR AND MINOR PLAYERS René Carmona Department of Operations Research & Financial Engineering PACM Princeton University CEMRACS - Luminy, July 17, 217 MFG WITH MAJOR AND MINOR PLAYERS

More information

Sensitivity analysis of the expected utility maximization problem with respect to model perturbations

Sensitivity analysis of the expected utility maximization problem with respect to model perturbations Sensitivity analysis of the expected utility maximization problem with respect to model perturbations Mihai Sîrbu, The University of Texas at Austin based on joint work with Oleksii Mostovyi University

More information

Approximation of BSDEs using least-squares regression and Malliavin weights

Approximation of BSDEs using least-squares regression and Malliavin weights Approximation of BSDEs using least-squares regression and Malliavin weights Plamen Turkedjiev (turkedji@math.hu-berlin.de) 3rd July, 2012 Joint work with Prof. Emmanuel Gobet (E cole Polytechnique) Plamen

More information

Bridging the Gap between Center and Tail for Multiscale Processes

Bridging the Gap between Center and Tail for Multiscale Processes Bridging the Gap between Center and Tail for Multiscale Processes Matthew R. Morse Department of Mathematics and Statistics Boston University BU-Keio 2016, August 16 Matthew R. Morse (BU) Moderate Deviations

More information

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.

More information

On the submartingale / supermartingale property of diffusions in natural scale

On the submartingale / supermartingale property of diffusions in natural scale On the submartingale / supermartingale property of diffusions in natural scale Alexander Gushchin Mikhail Urusov Mihail Zervos November 13, 214 Abstract Kotani 5 has characterised the martingale property

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Utility maximization in incomplete markets

Utility maximization in incomplete markets Uiliy maximizaion in incomplee markes Marcel Ladkau 27.1.29 Conens 1 Inroducion and general seings 2 1.1 Marke model....................................... 2 1.2 Trading sraegy.....................................

More information

On the dual problem of utility maximization

On the dual problem of utility maximization On the dual problem of utility maximization Yiqing LIN Joint work with L. GU and J. YANG University of Vienna Sept. 2nd 2015 Workshop Advanced methods in financial mathematics Angers 1 Introduction Basic

More information

Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables

Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables Singular Perturbations of Stochastic Control Problems with Unbounded Fast Variables Joao Meireles joint work with Martino Bardi and Guy Barles University of Padua, Italy Workshop "New Perspectives in Optimal

More information

Homogenization with stochastic differential equations

Homogenization with stochastic differential equations Homogenization with stochastic differential equations Scott Hottovy shottovy@math.arizona.edu University of Arizona Program in Applied Mathematics October 12, 2011 Modeling with SDE Use SDE to model system

More information

On Stopping Times and Impulse Control with Constraint

On Stopping Times and Impulse Control with Constraint On Stopping Times and Impulse Control with Constraint Jose Luis Menaldi Based on joint papers with M. Robin (216, 217) Department of Mathematics Wayne State University Detroit, Michigan 4822, USA (e-mail:

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information