Nonlinear representation, backward SDEs, and application to the Principal-Agent problem
École Polytechnique, France. April 4, 2018
Outline
1 The Principal-Agent problem: formulation
2 Revisiting random horizon backward SDEs
3 Random horizon 2nd order backward SDEs
(Static) Principal-Agent problem: formulation

- Principal delegates the management of the output $X$, only observes $X$, and pays a salary defined by a contract $\xi(X)$.
- Agent devotes effort $a \Rightarrow X^a$, and chooses the optimal effort $\hat a(\xi)$ by
  $V^A(\xi) := \max_a \mathbb{E}\big[\, U_A\big(\xi(X^a)\big) - c(a) \big]$
- Principal chooses the optimal contract by solving
  $\max_\xi \mathbb{E}\big[\, U_P\big( X^{\hat a(\xi)} - \xi(X^{\hat a(\xi)}) \big) \big]$ under the constraint $V^A(\xi) \ge \rho$

$\Longrightarrow$ Non-zero-sum Stackelberg game
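The static game can be made concrete in a toy numeric example (all specifics here are illustrative assumptions, not from the talk): risk-neutral agent, Gaussian output $X^a = a + \varepsilon$, linear contracts $\xi(x) = \alpha + \beta x$, quadratic effort cost $c(a) = a^2/2$, reservation utility $\rho$. The agent's best response is $\hat a = \beta$; with the participation constraint binding, the principal's payoff $\beta - \beta^2/2 - \rho$ is maximized at $\beta = 1$ ("selling the firm"):

```python
# Toy static principal-agent model (illustrative assumptions, not from the talk):
# output X^a = a + N(0,1), linear contract xi(x) = alpha + beta*x,
# risk-neutral agent with effort cost c(a) = a^2/2, reservation utility rho.
rho = 0.1

def agent_best_response(beta):
    # max_a  alpha + beta*a - a^2/2  =>  a_hat = beta (alpha drops out)
    return beta

def principal_value(beta):
    a_hat = agent_best_response(beta)
    # binding participation constraint: alpha + beta*a_hat - a_hat^2/2 = rho
    alpha = rho - beta * a_hat + a_hat ** 2 / 2
    # principal's expected payoff: E[X - xi(X)] = a_hat - alpha - beta*a_hat
    return a_hat - alpha - beta * a_hat

# Stackelberg solution by grid search over the contract slope beta
betas = [i / 1000 for i in range(2001)]      # beta in [0, 2]
best_beta = max(betas, key=principal_value)
print(best_beta, principal_value(best_beta))  # optimum at beta = 1, value 1/2 - rho
```

The grid search recovers the closed-form first-best slope $\beta = 1$ with principal value $\tfrac12 - \rho$.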
Principal-Agent problem: formulation

Agent problem:
$V^A(\xi) := \sup_{P \in \mathcal{P}} \mathbb{E}^P\Big[ K^\nu_T\, \xi(X_\cdot) - \int_0^T K^\nu_t\, c_t(\nu_t)\, dt \Big]$, where $K^\nu_t := e^{-\int_0^t k^\nu_s\, ds}$,

and $P$ is a weak solution of the output process, for some $\nu$ valued in $U$:
$dX_t = b_t(X, \nu_t)\, dt + \sigma_t(X, \nu_t)\, dW^P_t$, $P$-a.s.

Given a solution $P^*(\xi)$, the Principal solves the optimization problem
$V^P := \sup_{\xi \in \Xi_\rho} \mathbb{E}^{P^*(\xi)}\Big[ R_T\, U\big( \ell(X_\cdot) - \xi(X_\cdot) \big) \Big]$, $R_t := e^{-\int_0^t r_s\, ds}$,
where $\Xi_\rho := \big\{ \xi(X_\cdot) : V^A(\xi) \ge \rho \big\}$.
Principal-Agent problem formulation: non-degeneracy

Agent problem:
$V^A(\xi) := \sup_{P \in \mathcal{P}} \mathbb{E}^P\Big[ K^\nu_T\, \xi(X_\cdot) - \int_0^T K^\nu_t\, c_t(\nu_t)\, dt \Big]$, where $K^\nu_t := e^{-\int_0^t k^\nu_s\, ds}$,

and $P$ is a weak solution of the output process, for some $\nu$ valued in $U$:
$dX_t = \sigma_t(X, \beta_t)\big[ \lambda_t(X, \alpha_t)\, dt + dW^P_t \big]$, $P$-a.s.

Given a solution $P^*(\xi)$, the Principal solves the optimization problem
$V^P := \sup_{\xi \in \Xi_\rho} \mathbb{E}^{P^*(\xi)}\Big[ R_T\, U\big( \ell(X_\cdot) - \xi(X_\cdot) \big) \Big]$, $R_t := e^{-\int_0^t r_s\, ds}$,
where $\Xi_\rho := \big\{ \xi(X_\cdot) : V^A(\xi) \ge \rho \big\}$.
A subset of revealing contracts

Path-dependent Hamiltonian for the Agent problem:
$H_t(\omega, y, z, \gamma) := \sup_{u \in U} \Big\{ (\sigma_t \lambda_t)(\omega, u) \cdot z + \tfrac12 \sigma_t\sigma_t^\top(\omega, u) : \gamma - k_t(\omega, u)\, y - c_t(\omega, u) \Big\}$

For $Y_0 \in \mathbb{R}$ and $Z, \Gamma$ $\mathbb{F}^X$-progressively measurable, define
$Y^{Z,\Gamma}_t = Y_0 + \int_0^t Z_s\, dX_s + \tfrac12 \Gamma_s : d\langle X\rangle_s - H_s\big(X, Y^{Z,\Gamma}_s, Z_s, \Gamma_s\big)\, ds$, $\mathcal{P}$-q.s.

Proposition. $V^A\big(Y^{Z,\Gamma}_T\big) = Y_0$. Moreover, $P^*$ is optimal iff
$\nu^*_t \in \operatorname*{Argmax}_{u \in U} H_t(Y_t, Z_t, \Gamma_t) =: \hat\nu(Y_t, Z_t, \Gamma_t)$
Proof: classical verification argument!

For all $P \in \mathcal{P}$, denote $J^A(\xi, P) := \mathbb{E}^P\big[ K^\nu_T\, \xi - \int_0^T K^\nu_t\, c^\nu_t\, dt \big]$. Then

$J^A\big(Y^{Z,\Gamma}_T, P\big) = \mathbb{E}^P\Big[ K^\nu_T \Big\{ Y_0 + \int_0^T Z_t\, dX_t + \tfrac12 \Gamma_t : d\langle X\rangle_t - H_t(Y_t, Z_t, \Gamma_t)\, dt \Big\} - \int_0^T K^\nu_t\, c^\nu_t\, dt \Big]$

$= Y_0 + \mathbb{E}^P\Big[ \int_0^T K^\nu_t \Big\{ -H_t(Y_t, Z_t, \Gamma_t) - k^\nu_t Y_t + b^\nu_t \cdot Z_t + \tfrac12 \sigma\sigma^\top : \Gamma_t - c^\nu_t \Big\}\, dt + \int_0^T K^\nu_t\, Z_t\, \sigma^\nu_t\, dW^P_t \Big]$

$\le Y_0$ by definition of $H$, with equality iff $\nu$ achieves the max of the Hamiltonian.
Principal problem restricted to revealing contracts

Dynamics of the pair $(X, Y)$ under the Agent's optimal response:
$dX_t = \underbrace{\nabla_z H_t\big(X, Y^{Z,\Gamma}_t, Z_t, \Gamma_t\big)}_{b_t(X,\,\hat\nu(Y_t, Z_t, \Gamma_t))}\, dt + \underbrace{\big\{ 2\nabla_\gamma H_t\big(X, Y^{Z,\Gamma}_t, Z_t, \Gamma_t\big) \big\}^{\frac12}}_{\sigma_t(X,\,\hat\nu(Y_t, Z_t, \Gamma_t))}\, dW_t$
$dY^{Z,\Gamma}_t = Z_t\, dX_t + \tfrac12 \Gamma_t : d\langle X\rangle_t - H_t\big(X, Y^{Z,\Gamma}_t, Z_t, \Gamma_t\big)\, dt$

$\Longrightarrow$ Principal's value function under revealing contracts:
$V^P \ge V(X_0, Y_0) := \sup_{(Z,\Gamma) \in \mathcal{V}} \mathbb{E}\big[ U\big( \ell(X_\cdot) - Y^{Z,\Gamma}_T \big) \big]$, for all $Y_0 \ge \rho$,
where $\mathcal{V} := \big\{ (Z, \Gamma) : Z \in \mathbb{H}^2(\mathcal{P}) \text{ and } \mathcal{P}^*\big(Y^{Z,\Gamma}_T\big) \neq \emptyset \big\}$.
Theorem (Cvitanić, Possamaï & NT '15)
Assume $\mathcal{V} \neq \emptyset$. Then $V^P = \sup_{Y_0 \ge \rho} V(X_0, Y_0)$.
Given a maximizer $Y^*_0$, the corresponding optimal controls $(Z^*, \Gamma^*)$ induce an optimal contract
$\xi^* = Y^*_0 + \int_0^T Z^*_t\, dX_t + \tfrac12 \Gamma^*_t : d\langle X\rangle_t - H_t\big(X, Y^{Z^*,\Gamma^*}_t, Z^*_t, \Gamma^*_t\big)\, dt$
To prove the main result, it suffices that for all $\xi$: $\exists\, (Y_0, Z, \Gamma)$ s.t. $\xi = Y^{Z,\Gamma}_T$, $\mathcal{P}$-q.s.

OR, weaker sufficient condition: for all $\xi$: $\exists\, (Y^n_0, Z^n, \Gamma^n)$ s.t. $Y^{Z^n,\Gamma^n}_T \longrightarrow \xi$
From fully nonlinear HJB equation to semilinear

Assume $H_t(\omega, y, z, \gamma)$ is non-decreasing and convex in $\gamma$. Then
$H_t(\omega, y, z, \gamma) = \sup_{\sigma} \Big\{ \tfrac12 \sigma^2 : \gamma - H^*_t(\omega, y, z, \sigma) \Big\}$

Denote $k_t := H_t(Y_t, Z_t, \Gamma_t) - \tfrac12 \hat\sigma^2_t : \Gamma_t + H^*_t(Y_t, Z_t, \hat\sigma_t) \;(\ge 0)$.
Then the required representation $\xi = Y^{Z,\Gamma}_T$, $\mathcal{P}$-q.s., is equivalent to
$\xi = Y_0 + \int_0^T Z_t\, dX_t + \int_0^T H^*_t(Y_t, Z_t, \hat\sigma_t)\, dt - \int_0^T k_t\, dt$, $\mathcal{P}$-q.s.

$\Longrightarrow$ 2BSDE, up to approximation of the nondecreasing process $K$
Linear representation of random variables: the Predictable Representation Property of BM

Theorem. For all $\xi \in \mathbb{L}^2(\mathcal{F}^W_T)$, there is a unique $\mathbb{F}^W$-progressively measurable pair $(Y, Z)$ with $\mathbb{E}\big[ \int_0^T (|Y_t|^2 + |Z_t|^2)\, dt \big] < \infty$, s.t.
$\xi = Y_t + \int_t^T Z_s\, dW_s$, $\mathbb{P}$-a.s.

- For $\xi = g(W_T)$: heat equation and Itô's formula
- True for $\xi = g(W_{t_1}, \dots, W_{t_n})$... conclude by a density argument: heat equation with path-dependent boundary condition
- Connection with the $W^{1,2}$ solution of the heat equation
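A quick Monte Carlo sanity check of the representation (a sketch, not from the talk): for $\xi = W_T^2$, the heat equation gives $v(t, x) = x^2 + (T - t)$, hence $Y_0 = T$ and $Z_t = Dv(t, W_t) = 2W_t$, and the discretized stochastic integral should reproduce $\xi$ up to an $O(\sqrt{\Delta t})$ error:

```python
import random

random.seed(0)
T, N, M = 1.0, 1000, 500       # horizon, time steps, Monte Carlo paths
dt = T / N
sq_err = 0.0
for _ in range(M):
    W, stoch_int = 0.0, 0.0
    for _ in range(N):
        dW = random.gauss(0.0, dt ** 0.5)
        stoch_int += 2.0 * W * dW   # Z_t dW_t with Z_t = 2 W_t
        W += dW
    xi = W * W                      # terminal payoff xi = W_T^2
    Y0 = T                          # Y_0 = E[W_T^2] = v(0, 0) = T
    sq_err += (Y0 + stoch_int - xi) ** 2
rmse = (sq_err / M) ** 0.5
print(rmse)                         # ~ sqrt(2*T*dt), small
```

The residual per path is exactly $T - \sum_i (\Delta W_i)^2$, whose root mean square is $\sqrt{2T\Delta t} \approx 0.045$ here.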
Semilinear representation of random variables

Here again, $\xi = \xi(W_s, s \le T)$. Let $f : [0, T] \times \Omega \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be Lipschitz in $(y, z)$, uniformly in $(t, \omega)$, with $\mathbb{E}\big[ \int_0^T |f_s(0, 0)|^2\, ds \big] < \infty$.

Theorem (Pardoux & Peng '92). For all $\xi \in \mathbb{L}^2(\mathcal{F}^W_T)$, there is a unique $\mathbb{F}^W$-progressively measurable pair $(Y, Z)$, $\mathbb{E}\big[ \int_0^T (|Y_t|^2 + |Z_t|^2)\, dt \big] < \infty$, s.t.
$Y_t = \xi + \int_t^T f_s(Y_s, Z_s)\, ds - \int_t^T Z_s\, dW_s$, $\mathbb{P}$-a.s.

- Unique fixed point of the Picard iteration $(Y, Z) \longmapsto (\tilde Y, \tilde Z)$, where
  $\tilde Y_t = \xi + \int_t^T f_s(Y_s, Z_s)\, ds - \int_t^T \tilde Z_s\, dW_s$, $0 \le t \le T$
- $W^{1,2}$-type solution of a semilinear heat equation with path-dependent nonlinearity and boundary data
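The Picard iteration can be run by hand in the degenerate case where $\xi$ is deterministic and $f = f(y)$ only, so that $Z \equiv 0$ and the BSDE reduces to the ODE $Y_t = \xi + \int_t^T f(Y_s)\, ds$ (a sketch under these simplifying assumptions, with the assumed driver $f(y) = -ry$, whose exact solution is $Y_t = \xi\, e^{-r(T-t)}$):

```python
import math

# Degenerate illustration of the Picard fixed point: xi deterministic and
# f = f(y) only, so Z = 0 and the BSDE reduces to Y_t = xi + int_t^T f(Y_s) ds.
r, T, xi = 0.5, 1.0, 2.0
f = lambda y: -r * y           # Lipschitz driver; exact solution Y_t = xi*exp(-r*(T-t))
N = 2000
dt = T / N
Y = [xi] * (N + 1)             # initial guess Y^0 = xi on the time grid
for _ in range(30):            # Picard iterations Y^{n+1}_t = xi + int_t^T f(Y^n_s) ds
    g = [f(y) for y in Y]
    new = [0.0] * (N + 1)
    new[N] = xi
    for i in range(N - 1, -1, -1):
        # trapezoidal quadrature of the driver, accumulated backward in time
        new[i] = new[i + 1] + 0.5 * dt * (g[i] + g[i + 1])
    Y = new
print(Y[0], xi * math.exp(-r * T))  # Picard limit vs closed form
```

The map is a contraction (constant $rT = 0.5$ in sup norm), so 30 iterations are far more than enough; the remaining gap is the $O(\Delta t^2)$ quadrature error.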
Backward SDE and semilinear PDE

Rewrite the backward SDE in differential form:
$dY_t = -f_t(Y_t, Z_t)\, dt + Z_t \cdot dW_t$, $0 \le t \le T$, and $Y_T = \xi$, $\mathbb{P}$-a.s.

In the Markovian case $\xi = g(W_T)$ and $f_t(y, z) = f(t, W_t, y, z)$, we have $Y_t = v(t, W_t)$, so that
$dY_t = \partial_t v(t, W_t)\, dt + Dv(t, W_t) \cdot dW_t + \tfrac12 \Delta v(t, W_t)\, dt$
by Itô's formula. Direct identification yields $Z_t = Dv(t, W_t)$ and
$\partial_t v(t, W_t) + \tfrac12 \Delta v(t, W_t) = -f_t(v, Dv)$

Backward SDE $\longleftrightarrow$ path-dependent semilinear PDE with Sobolev-type regularity [Barles & Lesigne '97]
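This identification can be tested numerically in a simple case (a sketch, with the assumed linear driver $f_t(y, z) = -ry$ and terminal data $g(x) = x^2$, for which the PDE $\partial_t v + \tfrac12 v_{xx} - rv = 0$ has the closed-form solution $v(t, x) = e^{-r(T-t)}(x^2 + T - t)$):

```python
import math

# Explicit finite differences for the semilinear PDE behind the BSDE,
# with the assumed driver f(y, z) = -r*y and terminal data g(x) = x^2:
#   dt_v + (1/2) v_xx + f(v, Dv) = 0,  v(T, x) = x^2,
# closed form: v(t, x) = exp(-r*(T - t)) * (x^2 + T - t).
r, T = 0.1, 1.0
L, J = 4.0, 160                 # space grid on [-L, L] with J intervals
dx = 2 * L / J
N = 1000                        # explicit scheme: dt/dx^2 = 0.4 <= 1/2, stable
dt = T / N
x = [-L + j * dx for j in range(J + 1)]
v = [xi * xi for xi in x]       # terminal condition v(T, x) = x^2

def exact(t, y):
    return math.exp(-r * (T - t)) * (y * y + T - t)

for n in range(N, 0, -1):
    t = (n - 1) * dt
    new = [0.0] * (J + 1)
    new[0], new[J] = exact(t, x[0]), exact(t, x[J])   # exact values at the boundary
    for j in range(1, J):
        vxx = (v[j - 1] - 2 * v[j] + v[j + 1]) / dx ** 2
        new[j] = v[j] + dt * (0.5 * vxx - r * v[j])   # step backward in time
    v = new
print(v[J // 2], exact(0.0, 0.0))  # v(0, 0) vs exp(-r) * T
```

Since the exact solution is quadratic in $x$, the spatial truncation error vanishes and the scheme matches the closed form to the time-discretization error.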
Towards fully nonlinear PDEs: probabilistic framework

In order to cover fully nonlinear PDEs, we need quasi-sure stochastic analysis...
- $\Omega := \{ \omega \in C(\mathbb{R}_+, \mathbb{R}^d) : \omega(0) = 0 \}$
- $X$: canonical process, i.e. $X_t(\omega) := \omega(t)$
- $\mathcal{F}_t := \sigma(X_s, s \le t)$, $\mathbb{F} := \{ \mathcal{F}_t, t \ge 0 \}$
- $\langle X\rangle$: quadratic variation process (defined on $\mathbb{R}_+ \times \Omega$)
  $\langle X\rangle_t := X_t^2 - 2\int_0^t X_s\, dX_s = \mathbb{P}\text{-}\lim_{\pi_n} \sum_i \big| X_{t \wedge t^n_{i+1}} - X_{t \wedge t^n_i} \big|^2$
  for every semimartingale probability measure $P$ on $\Omega$, and $\hat\sigma^2_t := \lim_{h \searrow 0} \frac{\langle X\rangle_{t+h} - \langle X\rangle_t}{h}$
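The pathwise objects $\langle X\rangle$ and $\hat\sigma^2$ can be illustrated by a realized quadratic variation computed along a fine partition (a sketch with an assumed deterministic volatility $\sigma(t) = 1 + t$, so that $\langle X\rangle_T = \int_0^T (1+t)^2\, dt$):

```python
import random

random.seed(1)
# Realized quadratic variation along a fine partition, for dX_t = sigma(t) dW_t
# with the assumed volatility sigma(t) = 1 + t, so <X>_T = int_0^T (1+t)^2 dt.
T, N = 1.0, 200000
dt = T / N
X, qv = 0.0, 0.0
for i in range(N):
    t = i * dt
    dX = (1.0 + t) * random.gauss(0.0, dt ** 0.5)
    qv += dX * dX              # sum of squared increments -> <X>_T in probability
    X += dX                    # canonical-process sample path
true_qv = ((1 + T) ** 3 - 1) / 3   # int_0^T (1+t)^2 dt = 7/3
print(qv, true_qv)
```

The standard deviation of the realized sum is of order $\sqrt{\Delta t}$, so with 200000 steps the match is close to three digits.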
Semimartingale measures on the canonical space

$\mathcal{P}_W$: collection of all semimartingale measures $P$ such that
$dX_t = b_t\, dt + \sigma_t\, dW_t$, $P$-a.s.
for some $\mathbb{F}$-processes $b$ and $\sigma$, and some $P$-Brownian motion $W$.

Class of probability measures on $\Omega$: $\mathcal{P} \subset \mathcal{P}_W$, sufficiently rich (to satisfy dynamic programming properties...); if not, enrich it...

$\mathcal{P}$-quasi-surely MEANS $P$-a.s. for all $P \in \mathcal{P}$
Nonlinear expectation operators

- $\mathcal{P}$: subset of local martingale measures, i.e. $dX_t = \sigma_t\, dW_t$, $P$-a.s. for all $P \in \mathcal{P}$
  $\Longrightarrow$ nonlinear expectation $\mathcal{E}_t := \sup_{P \in \mathcal{P}} \mathbb{E}^P_t$
- $\mathcal{P}^L(P) := \big\{ Q = \mathcal{E}\big( \int_0^\cdot \lambda_t \cdot dW_t \big) \cdot P : |\lambda| \le L \big\}$
  $\Longrightarrow$ nonlinear expectation $\mathcal{E}^P_t := \sup_{Q \in \mathcal{P}^L(P)} \mathbb{E}^Q_t$
- $\mathcal{P}^L := \bigcup_{P \in \mathcal{P}} \mathcal{P}^L(P)$
  $\Longrightarrow$ nonlinear expectation $\mathcal{E}^L_t := \sup_{P \in \mathcal{P}^L} \mathbb{E}^P_t$
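A toy instance of the first operator (all specifics are illustrative assumptions: the family consists of constant-volatility martingale laws $dX_t = \sigma\, dW_t$ with $\sigma \in [0.2, 0.5]$, and $\xi = X_T^2$, for which $\mathbb{E}^{P^\sigma}[X_T^2] = \sigma^2 T$, so the sup is attained at the largest volatility):

```python
import random

random.seed(2)
# Toy nonlinear expectation (illustrative): over the assumed family of constant-
# volatility martingale measures dX_t = sigma dW_t, sigma in [0.2, 0.5],
#   E[xi] := sup_P E^P[xi].   For xi = X_T^2 we have E^P[X_T^2] = sigma^2 * T.
T, M = 1.0, 20000

def mc_price(sigma):
    total = 0.0
    for _ in range(M):
        XT = sigma * random.gauss(0.0, T ** 0.5)  # X_T ~ N(0, sigma^2 T)
        total += XT * XT
    return total / M

sigmas = [0.2 + 0.05 * k for k in range(7)]       # grid over [0.2, 0.5]
values = {s: mc_price(s) for s in sigmas}
sup_sigma = max(values, key=values.get)
print(sup_sigma, values[sup_sigma])               # sup attained at sigma = 0.5
```

The Monte Carlo noise (standard error about $0.0025$) is much smaller than the gap between grid values, so the maximizer is identified reliably.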
Nonlinearity

Assumptions. $F : \mathbb{R}_+ \times \Omega \times \mathbb{R} \times \mathbb{R}^d \times \mathbb{S}^+_d \to \mathbb{R}$ satisfies
(C1$_L$) Lipschitz in $(y, \sigma z)$: $\big| F(\cdot, y, z, \sigma) - F(\cdot, y', z', \sigma) \big| \le L\big( |y - y'| + |\sigma(z - z')| \big)$
(C2$_\mu$) Monotone in $y$: $(y - y')\big[ F(\cdot, y, \cdot) - F(\cdot, y', \cdot) \big] \le -\mu\, |y - y'|^2$

Denote $f_t(y, z) := F_t(y, z, \hat\sigma_t)$ and $f^0_t := f_t(0, 0)$.
Wellposedness of random horizon backward SDEs

$\tau$: stopping time, $\xi$ is $\mathcal{F}_\tau$-measurable; consider the backward SDE:
$Y_{t \wedge \tau} = \xi + \int_{t \wedge \tau}^{\tau} f_s(Y_s, Z_s)\, ds - \int_{t \wedge \tau}^{\tau} \big( Z_s \cdot dX_s + dN_s \big)$, $\mathbb{P}$-a.s.,
where $N$ is a martingale with $\langle N, X\rangle = 0$, $\mathbb{P}$-a.s.

Theorem (Y. Lin, Z. Ren, NT & J. Yang '17)
Let $\|\xi\|_{\mathbb{L}^q_{\rho,\tau}(P)} < \infty$ and $\|f^0\|_{\rho,\tau} := \mathbb{E}^P\big[ \big( \int_0^\tau \big( e^{\rho t} |f^0_t| \big)^2\, dt \big)^{\frac q2} \big]^{\frac 1q} < \infty$, for some $\rho > -\mu$, $q > 1$. Then the BSDE has a unique solution with
$\|Y\|_{\mathbb{D}^p_{\eta,\tau}(P)} + \|Z\|_{\mathbb{H}^p_{\eta,\tau}(P)} + \|N\|_{\mathbb{N}^p_{\eta,\tau}(P)} \le C\big( \|\xi\|_{\mathbb{L}^q_{\rho,\tau}(P)} + \|f^0\|_{\rho,\tau} \big)$
for all $p \in (1, q)$ and $\eta \in [-\mu, \rho)$.

Darling & Pardoux '97: $\rho + \frac{L^2}{2}$ and $\mathbb{E}^P$, instead of $\rho$ and $\mathcal{E}^P$.
Norms

We have used the notations
$\|\xi\|^q_{\mathbb{L}^q_{\rho,\tau}(P)} := \mathbb{E}^P\big[ \big( e^{\rho\tau} |\xi| \big)^q \big]$
$\|Y\|^p_{\mathbb{D}^p_{\eta,\tau}(P)} := \mathbb{E}^P\big[ \sup_{t \le \tau} \big( e^{\eta t} |Y_t| \big)^p \big]$
$\|Z\|^p_{\mathbb{H}^p_{\eta,\tau}(P)} := \mathbb{E}^P\big[ \big( \int_0^\tau e^{2\eta t}\, |\sigma_t^\top Z_t|^2\, dt \big)^{\frac p2} \big]$
$\|N\|^p_{\mathbb{N}^p_{\eta,\tau}(P)} := \mathbb{E}^P\big[ \big( \int_0^\tau e^{2\eta t}\, d[N]_t \big)^{\frac p2} \big]$
Random horizon reflected backward SDEs

Find $(Y, Z)$ such that, $\mathbb{P}$-a.s.:
$Y_{t \wedge \tau} = \xi + \int_{t \wedge \tau}^{\tau} f_s(Y_s, Z_s)\, ds - \int_{t \wedge \tau}^{\tau} \big( Z_s \cdot dX_s + dU_s \big)$, $Y \ge S$,
and $\mathbb{E}^P\big[ \int_0^{t \wedge \tau} \mathbf{1}_{\{ Y_{r^-} > S_{r^-} \}}\, dU_r \big] = 0$ for all $t \ge 0$,
where $U$ is a càdlàg $\mathbb{P}$-supermartingale starting from $U_0 = 0$, orthogonal to $X$, i.e. $[X, U] = 0$.

Theorem (Y. Lin, Z. Ren, NT & J. Yang '17)
Assume further that $S$ is càdlàg, $\mathbb{F}^{+,P}$-adapted, with $\|S^+\|_{\mathbb{D}^q_{\rho,\tau}(P)} < \infty$. Then the reflected BSDE has a unique solution $(Y, Z) \in \mathbb{D}^p_{\eta,\tau}(P) \times \mathbb{H}^p_{\eta,\tau}(P)$, for all $p \in (1, q)$ and $\eta \in [-\mu, \rho)$.
Random horizon 2nd order backward SDE

For a stopping time $\tau$ and an $\mathcal{F}_\tau$-measurable $\xi$:
$Y_{t \wedge \tau} = \xi + \int_{t \wedge \tau}^{\tau} F_s(Y_s, Z_s, \hat\sigma_s)\, ds - \int_{t \wedge \tau}^{\tau} Z_s\, dX_s + \int_{t \wedge \tau}^{\tau} dK_s$, $\mathcal{P}$-q.s.,
with $K$ non-decreasing, $K_0 = 0$, and minimal in the sense that
$\inf_{P \in \mathcal{P}} \mathbb{E}^P\big[ \int_{t_1 \wedge \tau}^{t_2 \wedge \tau} dK_t \big] = 0$, for all $t_1 \le t_2$

Remark. For a deterministic finite horizon $\tau = T$, (C2$_\mu$) is not needed: Soner, NT & Zhang '14; Possamaï, Tan & Zhou '16.
Connection with fully nonlinear PDEs

Rewrite the 2BSDE in differential form:
$dY_t = -F_t(Y_t, Z_t, \hat\sigma_t)\, dt + Z_t\, dX_t - dK_t$, $0 \le t \le \tau$, and $Y_T = \xi$, $\mathcal{P}$-q.s.

In the Markovian case $\xi = g(X_T)$ and $F_t(y, z, \sigma) = F(t, X_t, y, z, \sigma)$, we have $Y_t = v(t, X_t)$, and
$dY_t = \partial_t v(t, X_t)\, dt + Dv(t, X_t) \cdot dX_t + \tfrac12 \mathrm{Tr}\big[ \hat\sigma^2_t D^2 v(t, X_t) \big]\, dt$, $\mathcal{P}$-q.s.
by Itô's formula. Direct identification yields $Z_t = Dv(t, X_t)$ and, since $K$ is non-decreasing,
$\partial_t v(t, X_t) + \tfrac12 \mathrm{Tr}\big[ \hat\sigma^2_t D^2 v(t, X_t) \big] + F_t(v, Dv, \hat\sigma_t) \le 0$

Finally, the minimality condition on $K$ implies the fully nonlinear PDE
$\partial_t v(t, X_t) + \sup_{\sigma} \Big\{ \tfrac12 \mathrm{Tr}\big[ \sigma^2 D^2 v(t, X_t) \big] + F_t(v, Dv, \sigma) \Big\} = 0$
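In the simplest Markovian instance $F \equiv 0$ (an illustrative reduction, not the general setting of the talk), the PDE becomes the $G$-heat equation $\partial_t v + \sup_{\sigma \in [\underline\sigma, \overline\sigma]} \tfrac12 \sigma^2 v_{xx} = 0$; for convex data $g(x) = x^2$ the sup is attained at $\overline\sigma$ and $v(0, 0) = \overline\sigma^2 T$, which an explicit scheme recovers:

```python
# Explicit finite differences for the G-heat equation (F = 0 reduction):
#   dt_v + sup_{sigma in [s_lo, s_hi]} (1/2) sigma^2 v_xx = 0,  v(T, x) = x^2.
# For convex g, v_xx >= 0 and the sup picks s_hi, so v(t, x) = x^2 + s_hi^2 (T - t).
s_lo, s_hi, T = 0.2, 0.5, 1.0
L, J = 3.0, 120
dx = 2 * L / J
N = 2000                        # dt small enough for explicit stability
dt = T / N
x = [-L + j * dx for j in range(J + 1)]
v = [xi * xi for xi in x]       # terminal condition v(T, x) = x^2

for n in range(N):
    t = T - (n + 1) * dt
    new = [0.0] * (J + 1)
    # boundary: exact value under the worst-case volatility s_hi
    new[0] = x[0] ** 2 + s_hi ** 2 * (T - t)
    new[J] = x[J] ** 2 + s_hi ** 2 * (T - t)
    for j in range(1, J):
        vxx = (v[j - 1] - 2 * v[j] + v[j + 1]) / dx ** 2
        best = max(0.5 * s ** 2 * vxx for s in (s_lo, s_hi))  # pointwise sup
        new[j] = v[j] + dt * best
    v = new
print(v[J // 2], s_hi ** 2 * T)  # v(0, 0) vs 0.25
```

Since the Hamiltonian is linear in $\sigma^2$, the pointwise sup over the interval reduces to a max over its endpoints.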
Wellposedness of the random horizon 2nd order backward SDE

Theorem (Y. Lin, Z. Ren, NT & J. Yang '17)
Let $\|\xi\|_{\mathbb{L}^q_{\rho,\tau}(\mathcal{P}^L)} < \infty$ and $\|f^0\|_{\rho,\tau} := \mathcal{E}^L\big[ \big( \int_0^\tau \big( e^{\rho t} |f^0_t| \big)^2\, dt \big)^{\frac q2} \big]^{\frac 1q} < \infty$, for some $\rho > -\mu$, $q > 1$. Then the random horizon 2BSDE has a unique solution $(Y, Z)$ with $Y \in \mathbb{D}^p_{\eta,\tau}(\mathcal{P}^L)$, $Z \in \mathbb{H}^p_{\eta,\tau}(\mathcal{P}^L)$, for all $\eta \in [-\mu, \rho)$, $p \in [1, q)$, where
$\|\xi\|^q_{\mathbb{L}^q_{\rho,\tau}(\mathcal{P}^L)} := \mathcal{E}^L\big[ \big( e^{\rho\tau} |\xi| \big)^q \big]$
$\|Y\|^p_{\mathbb{D}^p_{\eta,\tau}(\mathcal{P}^L)} := \mathcal{E}^L\big[ \sup_{t \le \tau} \big( e^{\eta t} |Y_t| \big)^p \big]$
$\|Z\|^p_{\mathbb{H}^p_{\eta,\tau}(\mathcal{P}^L)} := \mathcal{E}^L\big[ \big( \int_0^\tau e^{2\eta t}\, |\sigma_t^\top Z_t|^2\, dt \big)^{\frac p2} \big]$