Stochastic Optimal Control with Finance Applications

1 Stochastic Optimal Control with Finance Applications

Tomas Björk, Department of Finance, Stockholm School of Economics, KTH, February 2015

2 Contents Dynamic programming. Investment theory. The martingale approach. Filtering theory. Optimal investment with partial information. Tomas Björk,

3 1. Dynamic Programming The basic idea. Deriving the HJB equation. The verification theorem. The linear quadratic regulator. Tomas Björk,

4 Problem Formulation

max_u E[ ∫_0^T F(t, X_t, u_t) dt + Φ(X_T) ]

subject to

dX_t = µ(t, X_t, u_t) dt + σ(t, X_t, u_t) dW_t,
X_0 = x_0,
u_t ∈ U(t, X_t) for all t.

We will only consider feedback control laws, i.e. controls of the form u_t = u(t, X_t).

Terminology:
X = state variable
u = control variable
U = control constraint

Note: No state space constraints.

5 Main idea

Embed the problem above in a family of problems, indexed by the starting point in time and space. Tie all these problems together by a PDE: the Hamilton-Jacobi-Bellman equation. The control problem is then reduced to the problem of solving the deterministic HJB equation.

NOTE: For simplicity of notation we assume that X, W, and u are scalar.

6 Some notation

For any fixed number u ∈ R, the functions µ^u and σ^u are defined by

µ^u(t, x) = µ(t, x, u),   σ^u(t, x) = σ(t, x, u).

For any control law u, the functions µ^u, σ^u and F^u are defined by

µ^u(t, x) = µ(t, x, u(t, x)),
σ^u(t, x) = σ(t, x, u(t, x)),
F^u(t, x) = F(t, x, u(t, x)).

7 More notation

For any fixed number u ∈ R, the partial differential operator A^u is defined by

A^u = µ^u(t, x) ∂/∂x + (1/2)[σ^u(t, x)]² ∂²/∂x².

For any control law u, the partial differential operator A^u is defined by

A^u = µ^u(t, x) ∂/∂x + (1/2)[σ^u(t, x)]² ∂²/∂x².

For any control law u, the process X^u is the solution of the SDE

dX^u_t = µ(t, X^u_t, u_t) dt + σ(t, X^u_t, u_t) dW_t,   where u_t = u(t, X^u_t).
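Once a feedback law u(t, x) is fixed, the controlled SDE above can be simulated with a standard Euler-Maruyama scheme. A minimal sketch; the drift, diffusion, and control law in the example are illustrative placeholders, not from the text:

```python
import numpy as np

def simulate_controlled_sde(mu, sigma, u, x0, T, n_steps, rng):
    """Euler-Maruyama for dX = mu(t,X,u(t,X)) dt + sigma(t,X,u(t,X)) dW."""
    dt = T / n_steps
    x = x0
    path = [x0]
    for k in range(n_steps):
        t = k * dt
        uk = u(t, x)                       # feedback control law u(t, x)
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        x = x + mu(t, x, uk) * dt + sigma(t, x, uk) * dw
        path.append(x)
    return np.array(path)

# Illustrative example: mean-reverting drift, constant diffusion, zero control
rng = np.random.default_rng(0)
path = simulate_controlled_sde(
    mu=lambda t, x, u: -x + u,
    sigma=lambda t, x, u: 0.2,
    u=lambda t, x: 0.0,
    x0=1.0, T=1.0, n_steps=100, rng=rng)
```

Plugging in a candidate optimal law û(t, x) from the HJB analysis below lets one check the value function numerically by Monte Carlo.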

8 Embedding the problem

For every fixed (t, x) the control problem P(t, x) is defined as the problem to maximize

E_{t,x}[ ∫_t^T F(s, X^u_s, u_s) ds + Φ(X^u_T) ],

given the dynamics

dX^u_s = µ(s, X^u_s, u_s) ds + σ(s, X^u_s, u_s) dW_s,
X_t = x,

and the constraints

u(s, y) ∈ U,   (s, y) ∈ [t, T] × R.

The original problem was P(0, x_0).

9 The optimal value function

The value function J : R_+ × R × U → R is defined by

J(t, x, u) = E[ ∫_t^T F(s, X^u_s, u_s) ds + Φ(X^u_T) ]

given the dynamics above. The optimal value function V : R_+ × R → R is defined by

V(t, x) = sup_{u ∈ U} J(t, x, u).

We want to derive a PDE for V.

10 Assumptions We assume: There exists an optimal control law û. The optimal value function V is regular in the sense that V C 1,2. A number of limiting procedures in the following arguments can be justified. Tomas Björk,

11 The Bellman Optimality Principle Dynamic programming relies heavily on the following basic result. Proposition: If û is optimal on the time interval [t, T ] then it is also optimal on every subinterval [s, T ] with t s T. Proof: Iterated expectations. Tomas Björk,

12 Basic strategy

To derive the PDE do as follows:

Fix (t, x) ∈ (0, T) × R^n. Choose a real number h (interpreted as a small time increment). Choose an arbitrary control law u.

Now define the control law u* by

u*(s, y) = u(s, y) for (s, y) ∈ [t, t + h] × R,
u*(s, y) = û(s, y) for (s, y) ∈ (t + h, T] × R.

In other words, if we use u* then we use the arbitrary control u during the time interval [t, t + h], and then we switch to the optimal control law during the rest of the time period.

13 Basic idea

The whole idea of DynP boils down to the following procedure. Given the point (t, x) above, we consider the following two strategies over the time interval [t, T]:

I: Use the optimal law û.
II: Use the control law u* defined above.

Compute the expected utilities obtained by the respective strategies. Using the obvious fact that Strategy I is at least as good as Strategy II, and letting h tend to zero, we obtain our fundamental PDE.

14 Strategy values

Expected utility for strategy I:

J(t, x, û) = V(t, x).

Expected utility for strategy II: The expected utility for [t, t + h) is given by

E_{t,x}[ ∫_t^{t+h} F(s, X^u_s, u_s) ds ].

Conditional expected utility over [t + h, T], given (t, x):

E_{t,x}[ V(t + h, X^u_{t+h}) ].

Total expected utility for Strategy II is

E_{t,x}[ ∫_t^{t+h} F(s, X^u_s, u_s) ds + V(t + h, X^u_{t+h}) ].

15 Comparing strategies

We have trivially

V(t, x) ≥ E_{t,x}[ ∫_t^{t+h} F(s, X^u_s, u_s) ds + V(t + h, X^u_{t+h}) ].

Remark: We have equality above if and only if the control law u is an optimal law û.

Now use Itô to obtain

V(t + h, X^u_{t+h}) = V(t, x) + ∫_t^{t+h} { ∂V/∂t(s, X^u_s) + A^u V(s, X^u_s) } ds + ∫_t^{t+h} ∂V/∂x(s, X^u_s) σ^u dW_s,

and plug into the formula above.

16 We obtain

E_{t,x}[ ∫_t^{t+h} { F(s, X^u_s, u_s) + ∂V/∂t(s, X^u_s) + A^u V(s, X^u_s) } ds ] ≤ 0.

Going to the limit: Divide by h, move h within the expectation and let h tend to zero. We get

F(t, x, u) + ∂V/∂t(t, x) + A^u V(t, x) ≤ 0.

17 Recall

F(t, x, u) + ∂V/∂t(t, x) + A^u V(t, x) ≤ 0.

This holds for all u = u(t, x), with equality if and only if u = û. We thus obtain the HJB equation

∂V/∂t(t, x) + sup_{u ∈ U} { F(t, x, u) + A^u V(t, x) } = 0.

18 The HJB equation

Theorem: Under suitable regularity assumptions the following hold:

I: V satisfies the Hamilton-Jacobi-Bellman equation

∂V/∂t(t, x) + sup_{u ∈ U} { F(t, x, u) + A^u V(t, x) } = 0,
V(T, x) = Φ(x).

II: For each (t, x) ∈ [0, T] × R the supremum in the HJB equation above is attained by u = û(t, x).

19 Notation for the multi-dimensional case

For any fixed vector u ∈ R^k, the functions µ^u, σ^u and C^u are defined by

µ^u(t, x) = µ(t, x, u),
σ^u(t, x) = σ(t, x, u),
C^u(t, x) = σ(t, x, u)σ(t, x, u)'.

For any control law u, the functions µ^u, σ^u, C^u and F^u are defined by

µ^u(t, x) = µ(t, x, u(t, x)),
σ^u(t, x) = σ(t, x, u(t, x)),
C^u(t, x) = σ(t, x, u(t, x))σ(t, x, u(t, x))',
F^u(t, x) = F(t, x, u(t, x)).

20 Notation continued

For any fixed vector u ∈ R^k, the partial differential operator A^u is defined by

A^u = Σ_{i=1}^n µ^u_i(t, x) ∂/∂x_i + (1/2) Σ_{i,j=1}^n C^u_{ij}(t, x) ∂²/∂x_i∂x_j.

For any control law u, the partial differential operator A^u is defined by

A^u = Σ_{i=1}^n µ^u_i(t, x) ∂/∂x_i + (1/2) Σ_{i,j=1}^n C^u_{ij}(t, x) ∂²/∂x_i∂x_j.

For any control law u, the process X^u is the solution of the SDE

dX^u_t = µ(t, X^u_t, u_t) dt + σ(t, X^u_t, u_t) dW_t,   where u_t = u(t, X^u_t).

21 Logic and problem

Note: We have shown that if V is the optimal value function, and if V is regular enough, then V satisfies the HJB equation. The HJB equation is thus derived as a necessary condition, and requires strong ad hoc regularity assumptions.

Problem: Suppose we have solved the HJB equation. Have we then found the optimal value function and the optimal control law?

Answer: Yes! This follows from the Verification theorem.

22 The Verification Theorem

Suppose that we have two functions H(t, x) and g(t, x), such that H is sufficiently integrable and solves the HJB equation

∂H/∂t(t, x) + sup_{u ∈ U} { F(t, x, u) + A^u H(t, x) } = 0,
H(T, x) = Φ(x),

and that, for each fixed (t, x), the supremum in the expression

sup_{u ∈ U} { F(t, x, u) + A^u H(t, x) }

is attained by the choice u = g(t, x). Then the following hold.

1. The optimal value function V to the control problem is given by V(t, x) = H(t, x).

2. There exists an optimal control law û, and in fact û(t, x) = g(t, x).

23 Handling the HJB equation

1. Consider the HJB equation for V.

2. Fix (t, x) ∈ [0, T] × R^n and solve the static optimization problem

max_{u ∈ U} [ F(t, x, u) + A^u V(t, x) ].

Here u is the only variable, whereas t and x are fixed parameters. The functions F, µ, σ and V are considered as given.

3. The optimal û will depend on t and x, and on the function V and its partial derivatives. We thus write û as

û = û(t, x; V). (1)

4. The function û(t, x; V) is our candidate for the optimal control law, but since we do not know V this description is incomplete. Therefore we substitute the expression for û into the PDE, giving us the PDE

∂V/∂t(t, x) + F^û(t, x) + A^û V(t, x) = 0,
V(T, x) = Φ(x).

5. Now we solve the PDE above! Then we put the solution V into expression (1). Using the verification theorem we can identify V as the optimal value function, and û as the optimal control law.

24 Making an Ansatz

The hard work of dynamic programming consists in solving the highly nonlinear HJB equation. There are no general analytic methods available for this, so the number of known optimal control problems with an analytic solution is very small indeed.

In an actual case one usually tries to guess a solution, i.e. we typically make a parameterized Ansatz for V and then use the PDE in order to identify the parameters.

Hint: V often inherits some structural properties from the boundary function Φ as well as from the instantaneous utility function F.

Most of the known solved control problems have, to some extent, been rigged in order to be analytically solvable.

25 The Linear Quadratic Regulator

min_{u ∈ R^k} E[ ∫_0^T { X_t'QX_t + u_t'Ru_t } dt + X_T'HX_T ],

with dynamics

dX_t = { AX_t + Bu_t } dt + C dW_t.

We want to control a vehicle in such a way that it stays close to the origin (the terms x'Qx and x'Hx) while at the same time keeping the energy u'Ru small.

Here X_t ∈ R^n and u_t ∈ R^k, and we impose no control constraints on u. The matrices Q, R, H, A, B and C are assumed to be known. We may WLOG assume that Q, R and H are symmetric, and we assume that R is positive definite (and thus invertible).

26 Handling the Problem

The HJB equation becomes

∂V/∂t(t, x) + inf_{u ∈ R^k} { x'Qx + u'Ru + [∇_x V](t, x)'[Ax + Bu] } + (1/2) Σ_{i,j} ∂²V/∂x_i∂x_j(t, x) [CC']_{ij} = 0,
V(T, x) = x'Hx.

For each fixed choice of (t, x) we now have to solve the static unconstrained optimization problem to minimize

u'Ru + [∇_x V](t, x)'[Ax + Bu].

27 The problem was:

min_u u'Ru + [∇_x V](t, x)'[Ax + Bu].

Since R > 0 we set the gradient to zero and obtain

2u'R = -(∇_x V)'B,

which gives us the optimal u as

û = -(1/2) R^{-1} B'(∇_x V).

Note: This is our candidate for the optimal control law, but it depends on the unknown function V. We now make an educated guess about the shape of V.

28 From the boundary function x'Hx and the term x'Qx in the cost function we make the Ansatz

V(t, x) = x'P(t)x + q(t),

where P(t) is a symmetric matrix function, and q(t) is a scalar function. With this trial solution we have

∂V/∂t(t, x) = x'Ṗx + q̇,
∇_x V(t, x) = 2x'P,
∇_xx V(t, x) = 2P,
û = -R^{-1}B'Px.

Inserting these expressions into the HJB equation we get

x'{ Ṗ + Q - PBR^{-1}B'P + A'P + PA }x + q̇ + tr[C'PC] = 0.

29 We thus get the following matrix ODE for P:

Ṗ = PBR^{-1}B'P - A'P - PA - Q,
P(T) = H,

and we can integrate directly for q:

q̇ = -tr[C'PC],
q(T) = 0.

The matrix equation is a Riccati equation. The equation for q can then be integrated directly.

Final Result for LQ:

V(t, x) = x'P(t)x + ∫_t^T tr[C'P(s)C] ds,
û(t, x) = -R^{-1}B'P(t)x.
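The Riccati equation rarely has a closed form, but it integrates easily backward from the terminal condition P(T) = H. A minimal numerical sketch; the scalar example data at the bottom are illustrative, not from the text:

```python
import numpy as np

def solve_lq_riccati(A, B, C, Q, R, H, T, n_steps):
    """Backward Euler integration of the LQ Riccati ODE
       Pdot = P B R^{-1} B' P - A' P - P A - Q,   P(T) = H,
    together with q(0) = int_0^T tr(C' P(s) C) ds."""
    dt = T / n_steps
    Rinv = np.linalg.inv(R)
    P = H.astype(float)
    q = 0.0
    Ps = [P.copy()]
    for _ in range(n_steps):
        Pdot = P @ B @ Rinv @ B.T @ P - A.T @ P - P @ A - Q
        P = P - dt * Pdot               # step backward in time
        q = q + dt * np.trace(C.T @ P @ C)
        Ps.append(P.copy())
    Ps.reverse()                        # Ps[0] approximates P(0)
    return np.array(Ps), q

def lq_feedback(P, B, R):
    """Optimal feedback gain: u = -R^{-1} B' P(t) x."""
    return -np.linalg.inv(R) @ B.T @ P

# Illustrative scalar example: dX = u dt + dW, cost int (X^2 + u^2) dt.
# Here the exact solution is P(t) = tanh(T - t), so P(0) is close to 1.
A = np.array([[0.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]]); H = np.array([[0.0]])
Ps, q = solve_lq_riccati(A, B, C, Q, R, H, T=5.0, n_steps=5000)
```

For stiff problems a dedicated ODE integrator would replace the Euler step, but the backward-in-time structure is the same.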

30 2. Portfolio Theory

Problem formulation. An extension of HJB. The simplest consumption-investment problem. The Merton fund separation results.

31 Recap of Basic Facts

We consider a market with n assets.

S^i_t = price of asset No i,
h^i_t = units of asset No i in portfolio,
w^i_t = portfolio weight on asset No i,
X_t = portfolio value,
c_t = consumption rate.

We have the relations

X_t = Σ_{i=1}^n h^i_t S^i_t,   w^i_t = h^i_t S^i_t / X_t,   Σ_{i=1}^n w^i_t = 1.

Basic equation: Dynamics of a self financing portfolio in terms of relative weights:

dX_t = X_t Σ_{i=1}^n w^i_t (dS^i_t / S^i_t) - c_t dt.

32 Simplest model

Assume a scalar risky asset and a constant short rate:

dS_t = αS_t dt + σS_t dW_t,
dB_t = rB_t dt.

We want to maximize expected utility over time:

max_{w^0, w^1, c} E[ ∫_0^T F(t, c_t) dt + Φ(X_T) ].

Dynamics:

dX_t = X_t [ w^0_t r + w^1_t α ] dt - c_t dt + w^1_t σ X_t dW_t.

Constraints:

c_t ≥ 0 for all t ≥ 0,
w^0_t + w^1_t = 1 for all t ≥ 0.

Nonsense!

33 What are the problems?

We can obtain unlimited utility by simply consuming arbitrarily large amounts. The wealth will go negative, but there is nothing in the problem formulation which prohibits this.

We would like to impose a constraint of the type X_t ≥ 0, but this is a state constraint and DynP does not allow this.

Good News: DynP can be generalized to handle (some) problems of this kind.

34 Generalized problem

Let D be a nice open subset of [0, T] × R^n and consider the following problem:

max_{u ∈ U} E[ ∫_0^τ F(s, X^u_s, u_s) ds + Φ(τ, X^u_τ) ].

Dynamics:

dX_t = µ(t, X_t, u_t) dt + σ(t, X_t, u_t) dW_t,
X_0 = x_0.

The stopping time τ is defined by

τ = inf { t ≥ 0 : (t, X_t) ∉ D } ∧ T.

35 Generalized HJB

Theorem: Given enough regularity the following hold.

1. The optimal value function satisfies

∂V/∂t(t, x) + sup_{u ∈ U} { F(t, x, u) + A^u V(t, x) } = 0,   (t, x) ∈ D,
V(t, x) = Φ(t, x),   (t, x) ∈ ∂D.

2. We have an obvious verification theorem.

36 Reformulated problem

max_{c ≥ 0, w ∈ R} E[ ∫_0^τ F(t, c_t) dt + Φ(X_τ) ]

where

τ = inf { t ≥ 0 : X_t = 0 } ∧ T,

with notation w^1 = w, w^0 = 1 - w. Thus there is no constraint on w.

Dynamics:

dX_t = w_t [α - r] X_t dt + (rX_t - c_t) dt + w σ X_t dW_t.

37 HJB Equation

∂V/∂t + sup_{c ≥ 0, w ∈ R} ( F(t, c) + wx(α - r) V_x + (rx - c) V_x + (1/2) x²w²σ² V_xx ) = 0,
V(T, x) = 0,
V(t, 0) = 0.

We now specialize (why?) to

F(t, c) = e^{-δt} c^γ,

so we have to maximize

e^{-δt} c^γ + wx(α - r) V_x + (rx - c) V_x + (1/2) x²w²σ² V_xx.

38 Analysis of the HJB Equation

In the embedded static problem we maximize, over c and w,

e^{-δt} c^γ + wx(α - r) V_x + (rx - c) V_x + (1/2) x²w²σ² V_xx.

First order conditions:

γc^{γ-1} = e^{δt} V_x,
w = -(V_x / (x V_xx)) · (α - r)/σ².

Ansatz:

V(t, x) = e^{-δt} h(t) x^γ.

Because of the boundary conditions, we must demand that h(T) = 0.

39 Given a V of this form we have (using dots to denote time derivatives)

V_t = e^{-δt} ḣ x^γ - δe^{-δt} h x^γ,
V_x = γe^{-δt} h x^{γ-1},
V_xx = γ(γ-1) e^{-δt} h x^{γ-2},

giving us

ŵ(t, x) = (α - r) / (σ²(1 - γ)),
ĉ(t, x) = x h(t)^{-1/(1-γ)}.

Plug all this into HJB!

40 After rearrangements we obtain

x^γ { ḣ(t) + Ah(t) + Bh(t)^{-γ/(1-γ)} } = 0,

where the constants A and B are given by

A = γ(α - r)² / (2σ²(1 - γ)) + rγ - δ,
B = 1 - γ.

If this equation is to hold for all x and all t, then we see that h must solve the ODE

ḣ(t) + Ah(t) + Bh(t)^{-γ/(1-γ)} = 0,
h(T) = 0.

An equation of this kind is known as a Bernoulli equation, and it can be solved explicitly. We are done.
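The Bernoulli equation linearizes under the substitution f = h^{1/(1-γ)}: then ḟ = -(A/(1-γ)) f - 1 with f(T) = 0, a linear ODE with closed-form solution. A sketch of the resulting Merton solution, using the constants A and B above; any parameter values in usage are illustrative:

```python
import numpy as np

def merton_solution(alpha, r, sigma, delta, gamma, T):
    """Closed-form solution of the consumption-investment problem with
    utility F(t,c) = exp(-delta*t) * c**gamma, 0 < gamma < 1.
    Returns the (constant) optimal weight w_hat and the functions
    h(t) and c_hat(t, x)."""
    A = gamma * (alpha - r) ** 2 / (2 * sigma ** 2 * (1 - gamma)) \
        + r * gamma - delta
    a = A / (1 - gamma)

    def f(t):                      # f = h^{1/(1-gamma)} solves a linear ODE
        if abs(a) < 1e-12:
            return T - t
        return (np.exp(a * (T - t)) - 1.0) / a

    def h(t):
        return f(t) ** (1 - gamma)

    def c_hat(t, x):               # optimal consumption rate
        return x * h(t) ** (-1.0 / (1 - gamma))

    w_hat = (alpha - r) / (sigma ** 2 * (1 - gamma))
    return w_hat, h, c_hat
```

Note that h(T) = 0 is built in, and the optimal risky weight is constant in both time and wealth.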

41 Merton's Mutual Fund Theorems

1. The case with no risk free asset

We consider n risky assets with dynamics

dS_i = S_i α_i dt + S_i σ_i dW,   i = 1, ..., n,

where W is Wiener in R^k. In vector form:

dS = D(S)α dt + D(S)σ dW,

where α is the column vector (α_1, ..., α_n)', σ is the matrix with rows σ_1, ..., σ_n, and D(S) is the diagonal matrix D(S) = diag[S_1, ..., S_n].

Wealth dynamics:

dX = Xw'α dt - c dt + Xw'σ dW.

42 Formal problem

max_{c,w} E[ ∫_0^τ F(t, c_t) dt ]

given the dynamics

dX = Xw'α dt - c dt + Xw'σ dW,

and constraints

e'w = 1,   c ≥ 0.

Assumptions: The vector α and the matrix σ are constant and deterministic. The volatility matrix σ has full rank, so σσ' is positive definite and invertible.

Note: S does not turn up in the X-dynamics, so V is of the form V(t, x, s) = V(t, x).

43 The HJB equation is

∂V/∂t(t, x) + sup_{e'w = 1, c ≥ 0} { F(t, c) + A^{c,w} V(t, x) } = 0,
V(T, x) = 0,
V(t, 0) = 0,

where

A^{c,w} V = xw'α V_x - c V_x + (1/2) x² w'Σw V_xx,

and where the matrix Σ is given by Σ = σσ'.

44 The HJB equation is

∂V/∂t(t, x) + sup_{w'e = 1, c ≥ 0} { F(t, c) + (xw'α - c)V_x(t, x) + (1/2) x² w'Σw V_xx(t, x) } = 0,
V(T, x) = 0,
V(t, 0) = 0,

where Σ = σσ'. If we relax the constraint w'e = 1, the Lagrange function for the static optimization problem is given by

L = F(t, c) + (xw'α - c)V_x(t, x) + (1/2) x² w'Σw V_xx(t, x) + λ(1 - w'e).

45 L = F(t, c) + (xw'α - c)V_x(t, x) + (1/2) x² w'Σw V_xx(t, x) + λ(1 - w'e).

The first order condition for c is

F_c(t, c) = V_x(t, x).

The first order condition for w is

xα V_x + x² V_xx Σw = λe,

so we can solve for w in order to obtain

ŵ = Σ^{-1} [ (λ / (x²V_xx)) e - (xV_x / (x²V_xx)) α ].

Using the relation e'ŵ = 1 this gives λ as

λ = (x²V_xx + xV_x e'Σ^{-1}α) / (e'Σ^{-1}e).

46 Inserting λ gives us, after some manipulation,

ŵ = (1 / (e'Σ^{-1}e)) Σ^{-1}e + (V_x / (xV_xx)) [ (e'Σ^{-1}α / (e'Σ^{-1}e)) Σ^{-1}e - Σ^{-1}α ].

We can write this as

ŵ(t) = g + Y(t)h,

where the fixed vectors g and h are given by

g = (1 / (e'Σ^{-1}e)) Σ^{-1}e,
h = Σ^{-1} [ (e'Σ^{-1}α / (e'Σ^{-1}e)) e - α ],

whereas Y is given by

Y(t) = V_x(t, X(t)) / (X(t) V_xx(t, X(t))).

47 We had

ŵ(t) = g + Y(t)h.

Thus we see that the optimal portfolio is moving stochastically along the one-dimensional optimal portfolio line g + sh in the (n-1)-dimensional portfolio hyperplane ∆, where

∆ = { w ∈ R^n : e'w = 1 }.

If we fix two points on the optimal portfolio line, say w^a = g + ah and w^b = g + bh, then any point w on the line can be written as an affine combination of the basis points w^a and w^b. An easy calculation shows that if w^s = g + sh then we can write

w^s = µw^a + (1 - µ)w^b,   where µ = (s - b) / (a - b).

48 Mutual Fund Theorem

There exists a family of mutual funds, given by w^s = g + sh, such that

1. For each fixed s the portfolio w^s stays fixed over time.

2. For fixed a, b with a ≠ b the optimal portfolio ŵ(t) is obtained by allocating all resources between the fixed funds w^a and w^b, i.e.

ŵ(t) = µ_a(t)w^a + µ_b(t)w^b.

3. The relative proportions (µ_a, µ_b) of wealth allocated to w^a and w^b are given by

µ_a(t) = (Y(t) - b) / (a - b),
µ_b(t) = (a - Y(t)) / (a - b).
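The two fixed vectors are computable directly from α and Σ, and by construction e'g = 1 and e'h = 0, so every fund g + sh lies in the portfolio hyperplane ∆. A minimal sketch; the example α and Σ are illustrative numbers, not from the text:

```python
import numpy as np

def mutual_fund_line(alpha, Sigma):
    """Fixed vectors g, h of the optimal portfolio line w = g + s*h
    for the Merton problem without a risk free asset."""
    n = len(alpha)
    e = np.ones(n)
    Sinv = np.linalg.inv(Sigma)
    eSe = e @ Sinv @ e
    eSa = e @ Sinv @ alpha
    g = Sinv @ e / eSe                     # fund on the hyperplane: e'g = 1
    h = Sinv @ ((eSa / eSe) * e - alpha)   # zero-sum direction: e'h = 0
    return g, h

# Illustrative three-asset example
alpha = np.array([0.06, 0.09, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
g, h = mutual_fund_line(alpha, Sigma)
```

Any two distinct points w^a = g + ah, w^b = g + bh on this line can then serve as the two funds of the theorem.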

49 The case with a risk free asset

Again we consider the standard model

dS = D(S)α dt + D(S)σ dW(t).

We also assume the risk free asset B with dynamics

dB = rB dt.

We denote B = S^0 and consider portfolio weights (w^0, w^1, ..., w^n) where Σ_{i=0}^n w^i = 1. We then eliminate w^0 by the relation

w^0 = 1 - Σ_{i=1}^n w^i,

and use the letter w to denote the portfolio weight vector for the risky assets only. Thus we use the notation w = (w^1, ..., w^n).

Note: w ∈ R^n without constraints.

50 HJB

We obtain

dX = Xw'(α - re) dt + (rX - c) dt + Xw'σ dW,

where e = (1, 1, ..., 1)'. The HJB equation now becomes

∂V/∂t(t, x) + sup_{c ≥ 0, w ∈ R^n} { F(t, c) + A^{c,w} V(t, x) } = 0,
V(T, x) = 0,
V(t, 0) = 0,

where

A^{c,w} V = xw'(α - re)V_x(t, x) + (rx - c)V_x(t, x) + (1/2) x² w'Σw V_xx(t, x).

51 First order conditions

We maximize

F(t, c) + xw'(α - re)V_x + (rx - c)V_x + (1/2) x² w'Σw V_xx

with c ≥ 0 and w ∈ R^n. The first order conditions are

F_c(t, c) = V_x(t, x),
ŵ = -(V_x / (xV_xx)) Σ^{-1}(α - re),

with an obvious economic interpretation.

52 Mutual Fund Separation Theorem

1. The optimal portfolio consists of an allocation between two fixed mutual funds w^0 and w^f.

2. The fund w^0 consists only of the risk free asset.

3. The fund w^f consists only of the risky assets, and is given by

w^f = Σ^{-1}(α - re).

4. At each t the optimal relative allocation of wealth between the funds is given by

µ_f(t) = -V_x(t, X(t)) / (X(t) V_xx(t, X(t))),
µ_0(t) = 1 - µ_f(t).
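For a CRRA investor the factor -V_x/(xV_xx) is a constant (the reciprocal of the relative risk aversion), so the risky allocation is a fixed multiple of Σ^{-1}(α - re) and the rest sits in the risk free asset. A hedged sketch; the example numbers are illustrative:

```python
import numpy as np

def optimal_allocation(alpha, Sigma, r, rra):
    """Risky weights w = (1/rra) * Sigma^{-1}(alpha - r*e) and the
    residual risk free weight w0 = 1 - e'w (two-fund separation)."""
    e = np.ones(len(alpha))
    w = np.linalg.solve(Sigma, alpha - r * e) / rra
    return w, 1.0 - e @ w

# Illustrative two-asset example with relative risk aversion 2
alpha = np.array([0.06, 0.09])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w, w0 = optimal_allocation(alpha, Sigma, r=0.02, rra=2.0)
```

Changing the risk aversion rescales w uniformly: only the split between the two funds moves, never the composition of the risky fund.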

53 3. The Martingale Approach

Decoupling the wealth profile from the portfolio choice. Lagrange relaxation. Solving the general wealth problem. Example: Log utility. Example: The numeraire portfolio. Computing the optimal portfolio. The Merton fund separation theorems from a martingale perspective.

54 Problem Formulation

Standard model with internal filtration:

dS_t = D(S_t)α_t dt + D(S_t)σ_t dW_t,
dB_t = rB_t dt.

Assumptions: Drift and diffusion terms are allowed to be arbitrary adapted processes. The market is complete. We have a given initial wealth x_0.

Problem:

max_{h ∈ H} E^P[Φ(X_T)],   where H = { self financing portfolios },

given the initial wealth X_0 = x_0.

55 Some observations

In a complete market, there is a unique martingale measure Q. Every claim Z satisfying the budget constraint

e^{-rT} E^Q[Z] = x_0

is attainable by an h ∈ H, and vice versa. We can thus write our problem as

max_Z E^P[Φ(Z)]

subject to the constraint

e^{-rT} E^Q[Z] = x_0.

We can forget the wealth dynamics!

56 Basic Ideas Our problem was max Z E P [Φ(Z)] subject to e rt E Q [Z] = x 0. Idea I: We can decouple the optimal portfolio problem: Finding the optimal wealth profile Ẑ. Given Ẑ, find the replicating portfolio. Idea II: Rewrite the constraint under the measure P. Use Lagrangian techniques to relax the constraint. Tomas Björk,

57 Lagrange formulation

Problem:

max_Z E^P[Φ(Z)]

subject to

e^{-rT} E^P[L_T Z] = x_0.

Here L is the likelihood process, i.e.

L_T = dQ/dP on F_T.

The Lagrangian of this is

L = E^P[Φ(Z)] + λ{ x_0 - e^{-rT} E^P[L_T Z] },

i.e.

L = E^P[ Φ(Z) - λe^{-rT} L_T Z ] + λx_0.

58 The optimal wealth profile

Given enough convexity and regularity we now expect, given the dual variable λ, to find the optimal Z by maximizing

L = E^P[ Φ(Z) - λe^{-rT} L_T Z ] + λx_0

over unconstrained Z, i.e. to maximize

∫_Ω { Φ(Z(ω)) - λe^{-rT} L_T(ω)Z(ω) } dP(ω).

This is a trivial problem! We can simply maximize for each ω separately:

max_z { Φ(z) - λe^{-rT} L_T z }.

59 The optimal wealth profile

Our problem:

max_z { Φ(z) - λe^{-rT} L_T z }.

First order condition:

Φ'(z) = λe^{-rT} L_T.

The optimal Z is thus given by

Ẑ = G(λe^{-rT} L_T),   where G(y) = [Φ']^{-1}(y).

The dual variable λ is determined by the constraint

e^{-rT} E^P[L_T Ẑ] = x_0.

60 Example: log utility

Assume that Φ(x) = ln(x). Then Φ'(x) = 1/x, so G(y) = 1/y. Thus

Ẑ = G(λe^{-rT} L_T) = (1/λ) e^{rT} L_T^{-1}.

Finally λ is determined by

e^{-rT} E^P[L_T Ẑ] = x_0,

i.e.

e^{-rT} E^P[ L_T (1/λ) e^{rT} L_T^{-1} ] = x_0,

so λ = x_0^{-1} and

Ẑ = x_0 e^{rT} L_T^{-1}.
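The same recipe works for other utilities. For power utility Φ(x) = x^γ/γ one has G(y) = y^{1/(γ-1)}, and λ can be backed out of the budget constraint by Monte Carlo. A sketch under the simplifying assumption of a constant Girsanov kernel ϕ; all numerical values are illustrative:

```python
import numpy as np

def optimal_terminal_wealth_crra(x0, r, phi, T, gamma, n_paths, seed=0):
    """Martingale-approach sketch for Phi(x) = x**gamma / gamma:
    Z_hat = G(lam * exp(-rT) * L_T) with G(y) = y**(1/(gamma-1)),
    and lam fixed by the budget constraint exp(-rT) E[L_T Z_hat] = x0.
    L_T is sampled by Monte Carlo assuming a constant kernel phi."""
    rng = np.random.default_rng(seed)
    W_T = rng.normal(0.0, np.sqrt(T), n_paths)
    L_T = np.exp(phi * W_T - 0.5 * phi ** 2 * T)   # dQ/dP on F_T
    k = 1.0 / (gamma - 1.0)
    # budget: exp(-rT) * lam**k * exp(-rT*k) * E[L_T**(1+k)] = x0
    m = np.exp(-r * T) * np.exp(-r * T * k) * np.mean(L_T ** (1.0 + k))
    lam = (x0 / m) ** (gamma - 1.0)
    Z_hat = (lam * np.exp(-r * T) * L_T) ** k
    return Z_hat, L_T

Z, L = optimal_terminal_wealth_crra(x0=1.0, r=0.02, phi=-0.3, T=1.0,
                                    gamma=0.5, n_paths=100000)
```

Note that the construction makes the sampled budget constraint hold exactly: the only Monte Carlo error is in how well the empirical L_T distribution matches the true one.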

61 The Numeraire Portfolio

Standard approach: Choose a fixed numeraire (portfolio) N. Find the corresponding martingale measure, i.e. find Q^N s.t.

B/N and S/N

are Q^N-martingales.

Alternative approach: Choose a fixed measure Q. Find a numeraire N such that Q = Q^N.

Special case: Set Q = P. Find a numeraire N such that Q^N = P, i.e. such that B/N and S/N are martingales under the objective measure P. This N is the numeraire portfolio.

62 Log utility and the numeraire portfolio

Definition: The growth optimal portfolio (GOP) is the portfolio which is optimal for log utility (for an arbitrary terminal date T).

Theorem: Assume that X is the GOP. Then X is the numeraire portfolio.

Proof: We have to show that the process Y_t = S_t/X_t is a P-martingale. From above we know that

X_T = x_0 e^{rT} L_T^{-1}.

We also have (why?)

X_t = e^{-r(T-t)} E^Q[X_T | F_t] = e^{-r(T-t)} E^P[ X_T L_T / L_t | F_t ].

63 Thus

X_t = e^{-r(T-t)} E^P[ X_T L_T / L_t | F_t ]
    = e^{-r(T-t)} E^P[ x_0 e^{rT} L_T^{-1} L_T / L_t | F_t ]
    = x_0 e^{rt} L_t^{-1},

as expected. Thus

S_t / X_t = x_0^{-1} e^{-rt} S_t L_t,

which is a P-martingale, since x_0^{-1} e^{-rt} S_t is a Q-martingale.

64 Back to the general case: Computing L_T

We recall

Ẑ = G(λe^{-rT} L_T).

The likelihood process L is computed by using Girsanov. We recall

dS_t = D(S_t)α_t dt + D(S_t)σ_t dW_t.

We know from Girsanov that

dL_t = L_t ϕ_t' dW_t,

so

dW_t = ϕ_t dt + dW^Q_t,

where W^Q is Q-Wiener. Thus

dS_t = D(S_t){ α_t + σ_t ϕ_t } dt + D(S_t)σ_t dW^Q_t.

65 Computing L_T, continued

Recall

dS_t = D(S_t){ α_t + σ_t ϕ_t } dt + D(S_t)σ_t dW^Q_t.

The kernel ϕ is determined by the martingale measure condition

α_t + σ_t ϕ_t = r,

where r denotes the vector (r, ..., r)'. Market completeness implies that σ_t is invertible, so

ϕ_t = σ_t^{-1} { r - α_t },

and

L_T = exp( ∫_0^T ϕ_t' dW_t - (1/2) ∫_0^T ||ϕ_t||² dt ).

66 Finding the optimal portfolio

We can easily compute the optimal wealth profile. How do we compute the optimal portfolio?

Recall:

dS_t = D(S_t)α_t dt + D(S_t)σ_t dW_t,

wealth dynamics

dX_t = h^B_t dB_t + h^S_t dS_t,

or

dX_t = X_t u^B_t r dt + X_t u^S_t D(S_t)^{-1} dS_t,

where

h^S = (h^1, ..., h^n),   u^S = (u^1, ..., u^n).

Assume for simplicity that r = 0, or consider normalized prices.

67 Recall wealth dynamics alternatively dx t = h S t ds t dx t = h S t D(S t )σ t dw Q t alternatively dx t = X t u S t σ t dw Q t Obvious facts: X is a Q martingale. X T = Ẑ Thus the optimal wealth process is determined by [ ] X t = E Q Ẑ F t Tomas Björk,

68 Recall

X_t = E^Q[ Ẑ | F_t ].

The martingale representation theorem gives us

dX_t = ξ_t dW^Q_t,

but also

dX_t = X_t u^S_t σ_t dW^Q_t,
dX_t = h^S_t D(S_t)σ_t dW^Q_t.

Thus u^S and h^S are determined by

u^S_t = (1/X_t) ξ_t σ_t^{-1},
h^S_t = ξ_t σ_t^{-1} D(S_t)^{-1},

and

u^B_t = 1 - u^S_t e,   h^B_t = X_t - h^S_t S_t.

69 How do we find ξ?

Recall

X_t = E^Q[ Ẑ | F_t ],
dX_t = ξ_t dW^Q_t.

We need to compute ξ. In a Markovian framework this follows directly from the Itô formula. Recall

Ẑ = H(L_T) = G(λL_T),   where G = [Φ']^{-1},

and

dL_t = L_t ϕ_t' dW_t,   dW_t = ϕ_t dt + dW^Q_t,

so

dL_t = L_t ||ϕ_t||² dt + L_t ϕ_t' dW^Q_t.

70 Finding ξ

If the model is Markovian we have

α_t = α(S_t),   σ_t = σ(S_t),   ϕ_t = σ(S_t)^{-1} { r - α(S_t) },

so

X_t = E^Q[ H(L_T) | F_t ],
dS_t = D(S_t)σ(S_t) dW^Q_t,
dL_t = L_t ||ϕ(S_t)||² dt + L_t ϕ(S_t)' dW^Q_t.

Thus we have X_t = F(t, S_t, L_t) where, by the Kolmogorov backward equation,

F_t + L||ϕ||² F_L + (1/2) L²||ϕ||² F_LL + (1/2) tr{ σ' F_SS σ } = 0,
F(T, s, L) = H(L).

71 Finding ξ, contd.

We had X_t = F(t, S_t, L_t), and Itô gives us

dX_t = { F_S D(S_t)σ(S_t) + F_L L_t ϕ'(S_t) } dW^Q_t.

Thus

ξ_t = F_S D(S_t)σ(S_t) + F_L L_t ϕ'(S_t),

and

u^S_t = (1/X_t) ξ_t σ(S_t)^{-1},
h^S_t = ξ_t σ(S_t)^{-1} D(S_t)^{-1}.

72 Mutual Funds: Martingale Version

We now assume constant parameters:

α(s) = α,   σ(s) = σ,   ϕ(s) = ϕ.

We recall

X_t = E^Q[ H(L_T) | F_t ],
dL_t = L_t ||ϕ||² dt + L_t ϕ' dW^Q_t.

Now L is Markov, so we have (without any S)

X_t = F(t, L_t).

Thus

ξ_t = F_L L_t ϕ',   u^S_t = (F_L L_t / X_t) ϕ'σ^{-1},

and we have fund separation with the fixed risky fund given by

w = ϕ'σ^{-1} = (r - α)'(σσ')^{-1}.

73 4. Filtering theory

Motivational problem. The Innovations process. The non-linear FKK filtering equations. The Wonham filter. The Kalman filter.

74 An investment problem with stochastic rate of return

Model:

dS_t = S_t α(Y_t) dt + S_t σ dW_t.

W is scalar and Y is some factor process. We assume that (S, Y) is Markov and adapted to the filtration F.

Wealth dynamics:

dX_t = X_t [r + u_t (α(Y_t) - r)] dt + u_t X_t σ dW_t.

Objective:

max_u E^P[Φ(X_T)].

Information structure:

Complete information: We observe S and Y, so u ∈ F.
Incomplete information: We only observe S, so u ∈ F^S. We need filtering theory.

75 Filtering Theory: Setup

Given some filtration F:

dY_t = a_t dt + dM_t,
dZ_t = b_t dt + dW_t.

Here all processes are F-adapted and

Y = signal process,
Z = observation process,
M = martingale w.r.t. F,
W = Wiener w.r.t. F.

We assume (for the moment) that M and W are independent.

Problem: Compute (recursively) the filter estimate

Ŷ_t = Π_t[Y] = E[ Y_t | F^Z_t ].

76 The innovations process

Recall:

dZ_t = b_t dt + dW_t.

Our best guess of b_t is b̂_t, so the genuinely new information in dZ_t should be

dZ_t - b̂_t dt.

The innovations process ν is defined by

dν_t = dZ_t - b̂_t dt.

Theorem: The process ν is F^Z-Wiener.

Proof: By Levy it is enough to show that

ν is an F^Z-martingale,
ν²_t - t is an F^Z-martingale.

77 I. ν is an F^Z-martingale: From the definition we have

dν_t = (b_t - b̂_t) dt + dW_t, (3)

so

E^Z_s[ν_t - ν_s] = ∫_s^t E^Z_s[ b_u - b̂_u ] du + E^Z_s[W_t - W_s]
                 = ∫_s^t E^Z_s[ E^Z_u[ b_u - b̂_u ] ] du + E^Z_s[ E_s[W_t - W_s] ] = 0.

II. ν²_t - t is an F^Z-martingale: From Itô we have

dν²_t = 2ν_t dν_t + (dν_t)².

Here dν is a martingale increment, and from (3) it follows that (dν_t)² = dt.

78 Remark 1: The innovations process gives us a Gram-Schmidt orthogonalization of the increasing family of Hilbert spaces L 2 (F Z t ); t 0. Remark 2: The use of Itô above requires general semimartingale integration theory, since we do not know a priori that ν is Wiener. Tomas Björk,

79 Filter dynamics From the Y dynamics we guess that dŷt = â t dt + martingale Definition: dm t = dŷt â t dt. Proposition: m is an F Z t martingale. Proof: E Z s [m t m s ] = E Z s = E Z s [Y t Y s ] E Z s = E Z s [M t M s ] E Z s [Ŷt Ŷs] [ t s [ t = E Z s [E s [M t M s ]] E Z s ] â u du s E Z s [ t ] (a u â u ) du [ t s s ] â u du ] Eu Z [a u â u ] du = 0 Tomas Björk,

80 Filter dynamics We now have the filter dynamics dŷt = â t dt + dm t where m is an F Z t martingale. If the innovations hypothesis F Z t = F ν t is true, then the martingale representation theorem would give us an Ft Z adapted process h such that dm t = h t dν t (4) The innovations hypothesis is not generally correct but FKK have proved that in fact (4) is always true. Tomas Björk,

81 Filter dynamics

We thus have the filter dynamics

dŶ_t = â_t dt + h_t dν_t,

and it remains to determine the gain process h.

Proposition: The process h is given by

h_t = Π_t[Yb] - Π_t[Y]Π_t[b].

We give a slightly heuristic proof.

82 Proof sketch

From Itô we have

d(Y_t Z_t) = Y_t b_t dt + Y_t dW_t + Z_t a_t dt + Z_t dM_t.

Using

dŶ_t = â_t dt + h_t dν_t,
dZ_t = b̂_t dt + dν_t,

we have

d(Ŷ_t Z_t) = Ŷ_t b̂_t dt + Ŷ_t dν_t + Z_t â_t dt + Z_t h_t dν_t + h_t dt.

Formally we also should have

E[ d(Y_t Z_t) - d(Ŷ_t Z_t) | F^Z_t ] = 0,

which gives us

( Π_t[Yb] - Ŷ_t b̂_t - h_t ) dt = 0.

83 The filter equations

For the model

dY_t = a_t dt + dM_t,
dZ_t = b_t dt + dW_t,

where M and W are independent, we have the FKK non-linear filter equations

dŶ_t = â_t dt + { Π_t[Yb] - Π_t[Y]Π_t[b] } dν_t,
dν_t = dZ_t - b̂_t dt.

Remark: It is easy to see that

h_t = E[ (Y_t - Ŷ_t)(b_t - b̂_t) | F^Z_t ].

84 The general filter equations

For the model

dY_t = a_t dt + dM_t,
dZ_t = b_t dt + σ_t dW_t,

where the process σ is F^Z_t-adapted and positive, and there is no assumption of independence between M and W, we have the filter

dŶ_t = â_t dt + [ D̂_t + (1/σ_t) { Π_t[Yb] - Π_t[Y]Π_t[b] } ] dν_t,
dν_t = (1/σ_t) { dZ_t - b̂_t dt },
D_t = d⟨M, W⟩_t / dt.

85 Comment on ⟨M, W⟩

This requires semimartingale theory, but there are two simple cases:

If M is Wiener, then d⟨M, W⟩_t = dM_t dW_t with the usual multiplication rules.

If M is a pure jump process, then d⟨M, W⟩_t = 0.

86 Filtering a Markov process

Assume that Y is Markov with generator G. We want to compute Π_t[f(Y_t)] for some nice function f. Dynkin's formula gives us

df(Y_t) = (Gf)(Y_t) dt + dM_t.

Assume that the observations are

dZ_t = b(Y_t) dt + dW_t,

where W is independent of Y. The filter equations are now

dΠ_t[f] = Π_t[Gf] dt + { Π_t[fb] - Π_t[f]Π_t[b] } dν_t,
dν_t = dZ_t - Π_t[b] dt.

Remark: To obtain dΠ_t[f] we need Π_t[fb] and Π_t[b]. This leads generically to an infinite dimensional system of filter equations.

87 On the filter dimension dπ t [f] = Π t [Gf] dt + {Π t [fb] Π t [f] Π t [b]} dν t To obtain dπ t [f] we need Π t [fb] and Π t [b]. Thus we apply the FKK equations to Gf and b. This leads to new filter estimates to determine and generically to an infinite dimensional system of filter equations. The filter equations are really equations for the entire conditional distribution of Y. You can only expect the filter to be finite when the conditional distribution of Y is finitely parameterized. There are only very few examples of finite dimensional filters. The most well known finite filters are the Wonham and the Kalman filters. Tomas Björk,

88 The Wonham filter

Assume that Y is a continuous time Markov chain on the state space {1, ..., n} with (constant) generator matrix H. Define the indicator processes by

δ_i(t) = I{Y_t = i},   i = 1, ..., n.

Dynkin's formula gives us

dδ^i_t = Σ_j H(j, i)δ^j_t dt + dM^i_t,   i = 1, ..., n.

Observations are

dZ_t = b(Y_t) dt + dW_t.

Filter equations:

dΠ_t[δ_i] = Σ_j H(j, i)Π_t[δ_j] dt + { Π_t[δ_i b] - Π_t[δ_i]Π_t[b] } dν_t,
dν_t = dZ_t - Π_t[b] dt.

89 We note that

b(Y_t) = Σ_i b(i)δ_i(t),

so

Π_t[δ_i b] = b(i)Π_t[δ_i],   Π_t[b] = Σ_j b(j)Π_t[δ_j].

We finally have the Wonham filter

dδ̂_i = Σ_j H(j, i)δ̂_j dt + δ̂_i ( b(i) - Σ_j b(j)δ̂_j ) dν_t,
dν_t = dZ_t - Σ_j b(j)δ̂_j dt.
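An Euler discretization of these equations gives a workable numerical filter; renormalizing the estimate after each step keeps it a probability vector. A minimal simulation sketch; the two-state generator, drift levels, and step sizes below are illustrative choices, not from the text:

```python
import numpy as np

def wonham_filter(z_increments, b, H, pi0, dt):
    """Euler scheme for the Wonham filter of a finite-state chain
    observed in additive white noise:
      d pi_i = sum_j H[j,i] pi_j dt + pi_i (b_i - b_bar) d nu,
      d nu   = dZ - b_bar dt,   b_bar = sum_j b_j pi_j."""
    pi = np.array(pi0, dtype=float)
    out = [pi.copy()]
    for dz in z_increments:
        b_bar = b @ pi
        dnu = dz - b_bar * dt
        pi = pi + (H.T @ pi) * dt + pi * (b - b_bar) * dnu
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()                  # renormalize after the Euler step
        out.append(pi.copy())
    return np.array(out)

# Two-state chain with generator H, observed drift b(i)
rng = np.random.default_rng(1)
H = np.array([[-1.0, 1.0], [1.0, -1.0]])
b = np.array([-2.0, 2.0])
dt, n = 0.001, 2000
y = 1                                   # hidden state index
dz = []
for _ in range(n):
    if rng.random() < -H[y, y] * dt:    # jump with intensity -H[y,y]
        y = 1 - y
    dz.append(b[y] * dt + rng.normal(0.0, np.sqrt(dt)))
pis = wonham_filter(np.array(dz), b, H, [0.5, 0.5], dt)
```

The filter state pis[k] is the conditional distribution of Y at step k given the observations so far.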

90 The Kalman filter

dY_t = aY_t dt + c dV_t,
dZ_t = Y_t dt + dW_t,

where W and V are independent Wiener processes. FKK gives us

dΠ_t[Y] = aΠ_t[Y] dt + { Π_t[Y²] - (Π_t[Y])² } dν_t,
dν_t = dZ_t - Π_t[Y] dt.

We need Π_t[Y²], so use Itô to write

dY²_t = { 2aY²_t + c² } dt + 2cY_t dV_t.

From FKK:

dΠ_t[Y²] = { 2aΠ_t[Y²] + c² } dt + { Π_t[Y³] - Π_t[Y²]Π_t[Y] } dν_t.

Now we need Π_t[Y³]! Etc!

91 Define the conditional error variance by

H_t = Π_t[(Y_t − Π_t[Y])²] = Π_t[Y²] − (Π_t[Y])²

Itô gives us

d(Π_t[Y])² = {2a(Π_t[Y])² + H_t²} dt + 2Π_t[Y] H_t dν_t

and Itô again

dH_t = {2aH_t + c² − H_t²} dt + {Π_t[Y³] − 3Π_t[Y²]Π_t[Y] + 2(Π_t[Y])³} dν_t

In this particular case we know (why?) that the distribution of Y conditional on Z is Gaussian! Thus we have

Π_t[Y³] = 3Π_t[Y²]Π_t[Y] − 2(Π_t[Y])³

so H is deterministic (as expected).
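The Gaussian third-moment identity used here, E[Y³] = 3E[Y²]E[Y] − 2(E[Y])³, is equivalent to the third central moment vanishing, and is easy to check numerically; the mean and standard deviation below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, s = 0.7, 1.3                         # arbitrary Gaussian parameters
y = rng.normal(mu, s, size=2_000_000)

m1, m2, m3 = y.mean(), (y**2).mean(), (y**3).mean()
# identity: third raw moment expressed via the first two (zero skewness)
gap = m3 - (3 * m2 * m1 - 2 * m1**3)     # equals the sample third central moment
```

The gap is the sample third central moment, which for Gaussian data is zero up to Monte Carlo error; this is exactly why the dν-term in dH_t drops out.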

92 The Kalman filter

Model:

dY_t = aY_t dt + c dV_t
dZ_t = Y_t dt + dW_t

Filter:

dΠ_t[Y] = aΠ_t[Y] dt + H_t dν_t
Ḣ_t = 2aH_t + c² − H_t²
dν_t = dZ_t − Π_t[Y] dt
H_t = Π_t[(Y_t − Π_t[Y])²]

Remark: Because of the Gaussian structure, the conditional distribution evolves on a two dimensional submanifold. Hence a two dimensional filter.
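A minimal Euler simulation of this scalar Kalman–Bucy filter, with illustrative parameters a and c (not from the lectures). At stationarity the Riccati equation 2aH + c² − H² = 0 gives the positive root H = a + √(a² + c²), which the simulated variance should approach.

```python
import numpy as np

rng = np.random.default_rng(2)
a, c = -0.5, 1.0                  # illustrative model parameters
dt, T = 1e-3, 10.0

y, yhat, Hvar = 1.0, 0.0, 1.0     # true state, filter mean Pi_t[Y], variance H_t
for _ in range(int(T / dt)):
    dV, dW = np.sqrt(dt) * rng.normal(size=2)
    dZ = y * dt + dW                              # observation increment
    y += a * y * dt + c * dV                      # hidden state step
    dnu = dZ - yhat * dt                          # innovation
    yhat += a * yhat * dt + Hvar * dnu            # dPi[Y] = a Pi[Y] dt + H dnu
    Hvar += (2 * a * Hvar + c**2 - Hvar**2) * dt  # Riccati ODE for H_t
```

Note that Hvar is updated by a deterministic ODE: the two-dimensional filter state (yhat, Hvar) is precisely the two dimensional submanifold mentioned in the remark.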

93 Optimal investment with stochastic rate of return

A market model with a stochastic rate of return.
Optimal portfolios under complete information.
Optimal portfolios under partial information.

94 An investment problem with stochastic rate of return

Model:

dS_t = S_t α(Y_t) dt + S_t σ dW_t

W is scalar and Y is some factor process. We assume that (S, Y) is Markov and adapted to the filtration F.

Wealth dynamics:

dX_t = X_t {r + u_t [α(Y_t) − r]} dt + u_t X_t σ dW_t

Objective:

max_u E^P [X_T^γ]

We assume that Y is a Markov process with generator A. We will treat several cases.

95 A. Full information, Y and W independent.

The HJB equation for F(t, x, y) becomes

F_t + sup_u { u[α − r] xF_x + rxF_x + ½ u² x² σ² F_xx } + AF = 0

where A operates on the y-variable. Obvious boundary condition:

F(T, x, y) = x^γ

First order condition gives us:

û = (r − α) F_x / (x σ² F_xx)

Plug into HJB:

F_t − (α − r)²/(2σ²) · F_x²/F_xx + rxF_x + AF = 0
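The first order condition is just pointwise maximization of a concave quadratic in u (concave because F_xx < 0 for a concave value function). A quick numerical sanity check, with arbitrary illustrative values for all quantities:

```python
import numpy as np

# arbitrary illustrative values, with F_xx < 0 so the quadratic is concave
alpha, r, x, sigma = 0.08, 0.02, 1.0, 0.3
Fx, Fxx = 1.0, -2.0

# the u-dependent part of the HJB supremum (rxF_x does not affect the argmax)
u = np.linspace(-5.0, 5.0, 200_001)
objective = u * (alpha - r) * x * Fx + 0.5 * u**2 * x**2 * sigma**2 * Fxx

u_grid = u[np.argmax(objective)]                       # brute-force maximizer
u_hat = (r - alpha) * Fx / (x * sigma**2 * Fxx)        # first order condition
```

The grid maximizer agrees with the closed-form û up to the grid spacing.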

96 Ansatz:

F(t, x, y) = x^γ G(t, y)
F_t = x^γ G_t,  F_x = γ x^{γ−1} G,  F_xx = γ(γ−1) x^{γ−2} G

Plug into HJB:

x^γ G_t − x^γ (α − r)²/(2σ²) β G + rγ x^γ G + x^γ AG = 0

where β = γ/(γ−1). Thus

G_t(t, y) + H(y) G(t, y) + AG(t, y) = 0,  G(T, y) = 1,

where

H(y) = rγ − [α(y) − r]²/(2σ²) β

Kolmogorov gives us

G(t, y) = E_{t,y} [ e^{∫_t^T H(Y_s) ds} ]
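The representation of G as an exponential functional suggests a direct Monte Carlo evaluation. The sketch below assumes, purely for illustration, that α(y) = y and that the factor Y is an Ornstein–Uhlenbeck process dY = a(m − Y) dt + b dW; neither choice comes from the slides.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, r, sigma = 0.5, 0.02, 0.3         # illustrative parameters
beta = gamma / (gamma - 1.0)             # = -1 for gamma = 0.5

def Hfun(y):
    # H(y) = r*gamma - (alpha(y) - r)^2 * beta / (2 sigma^2), with alpha(y) = y
    return r * gamma - (y - r) ** 2 * beta / (2 * sigma**2)

def G(t, y0, T, a=1.0, m=0.05, b=0.2, n_paths=20_000, dt=1e-2):
    """Monte Carlo estimate of G(t, y) = E[exp(int_t^T H(Y_s) ds)],
    with an illustrative OU factor dY = a(m - Y) dt + b dW."""
    y = np.full(n_paths, float(y0))
    integral = np.zeros(n_paths)
    for _ in range(int((T - t) / dt)):
        integral += Hfun(y) * dt
        y += a * (m - y) * dt + b * np.sqrt(dt) * rng.normal(size=n_paths)
    return np.exp(integral).mean()
```

A useful sanity check: with a = b = 0 the factor is frozen at y0 and the estimator reduces exactly to exp(H(y0)(T − t)).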

97 B. Full information, Y and W dependent.

Now we allow for dependence, but restrict Y to dynamics of the form

dS_t = S_t α(Y_t) dt + S_t σ dW_t
dX_t = X_t {r + u_t [α(Y_t) − r]} dt + u_t X_t σ dW_t
dY_t = a(Y_t) dt + b(Y_t) dW_t

with the same Wiener process W driving both S and Y. The imperfectly correlated case is a bit more messy but can also be handled.

HJB equation for F(t, x, y):

F_t + aF_y + ½ b² F_yy + rxF_x + sup_u { u[α − r] xF_x + ½ u² x² σ² F_xx + u x b σ F_xy } = 0.

98 Ansatz.

HJB:

F_t + aF_y + ½ b² F_yy + rxF_x + sup_u { u[α − r] xF_x + ½ u² x² σ² F_xx + u x b σ F_xy } = 0.

After a lot of thinking we make the Ansatz

F(t, x, y) = x^γ h^{1−γ}(t, y)

and then it is not hard to see that h satisfies a standard parabolic PDE. (Plug the Ansatz into the HJB equation.) See Zariphopoulou for details and more complicated cases.

99 C. Partial information.

Model:

dS_t = S_t α(Y_t) dt + S_t σ dW_t

Assumption: Y cannot be observed directly.
Requirement: The control u must be F_t^S-adapted.

We thus have a partially observed system.

Idea: Project the S dynamics onto the smaller filtration F_t^S and add filter equations in order to reduce the problem to the completely observable case.

Set Z = ln S and note (why?) that F_t^Z = F_t^S. We have

dZ_t = {α(Y_t) − ½σ²} dt + σ dW_t

100 Projecting onto the S-filtration

dZ_t = {α(Y_t) − ½σ²} dt + σ dW_t

From filtering theory we know that

dZ_t = {Π_t[α] − ½σ²} dt + σ dν_t

where ν is F_t^S-Wiener and

Π_t[α] = E[ α(Y_t) | F_t^S ]

We thus have the following S dynamics on the S-filtration, and wealth dynamics:

dS_t = S_t Π_t[α] dt + S_t σ dν_t
dX_t = X_t {r + u_t (Π_t[α] − r)} dt + u_t X_t σ dν_t

101 Reformulated problem

We now have the problem

max_u E[X_T^γ]

for Z-adapted controls, given wealth dynamics

dX_t = X_t {r + u_t (Π_t[α] − r)} dt + u_t X_t σ dν_t

If we can now model Y such that the (linear!) observation dynamics for Z produce a finite filter vector π, then we are back in the completely observable case with Y replaced by π. We need a finite dimensional filter!

Two choices for Y:

Linear Y dynamics. This will give us the Kalman filter. See Brendle.
Y as a Markov chain. This will give us the Wonham filter. See Bäuerle and Rieder.

102 Kalman case (Brendle)

Assume that

dS_t = Y_t S_t dt + S_t σ dW_t

with Y dynamics

dY_t = aY_t dt + c dV_t

where W and V are independent. Observations:

dZ_t = {Y_t − ½σ²} dt + σ dW_t

We have a standard Kalman filter. Wealth dynamics:

dX_t = X_t {r + u_t (Ŷ_t − r)} dt + u_t X_t σ dν_t

103 Kalman case, solution.

max_u E[X_T^γ]

dX_t = X_t {r + u_t (Ŷ_t − r)} dt + u_t X_t σ dν_t
dŶ_t = aŶ_t dt + (H_t/σ) dν_t,  dν_t = (1/σ)[ dZ_t − {Ŷ_t − ½σ²} dt ]

where H is deterministic and given by a Riccati equation. We are back in the standard completely observable case with state variables X and Ŷ. Thus the optimal value function is of the form

F(t, x, ŷ) = x^γ h^{1−γ}(t, ŷ)

where h solves a parabolic PDE and can be computed explicitly.
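For this observation structure (observation noise σ), standard Kalman–Bucy theory gives the error-variance Riccati equation Ḣ_t = 2aH_t + c² − H_t²/σ² — a reconstruction consistent with the slide's gain H_t/σ, not taken verbatim from it. H_t is deterministic and converges to a stationary value; the parameters below are illustrative.

```python
import numpy as np

# illustrative parameters (not from the lecture)
a, c, sigma = -0.3, 0.4, 0.25
dt, T = 1e-4, 20.0

H = 1.0                                    # arbitrary initial error variance
for _ in range(int(T / dt)):
    # Riccati ODE for the conditional error variance, Euler step
    H += (2 * a * H + c**2 - (H / sigma) ** 2) * dt

# stationary solution of 2aH + c^2 - H^2/sigma^2 = 0 (positive root)
H_stat = a * sigma**2 + sigma * np.sqrt(a**2 * sigma**2 + c**2)
```

Since H_t is deterministic, it can be precomputed offline, which is what makes the reduced problem in (X, Ŷ) a standard completely observable control problem.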

104 References

We have used the following texts, but see also other references therein (and/or go to the authors' home pages):

Björk, T. (2009) Arbitrage Theory in Continuous Time, 3rd Ed., Oxford University Press.
Brendle, S. (2004) Portfolio selection under partial observation and constant relative risk aversion. Working Paper, Princeton University.
Bäuerle, N. & Rieder, U. (2004) Portfolio optimization with unobservable Markov-modulated drift process. Working Paper, Hannover University.
Liptser & Shiryayev (2003?) Statistics of Random Processes, 2nd Ed., Springer Verlag.
Zariphopoulou, T. (2001) A solution approach to valuation with unhedgeable risks. Finance and Stochastics.

105 References Continued

For very interesting and deep works on duality, which are outside the scope of these lectures, see

Schachermayer, W. Utility Maximisation in Incomplete Markets. In: Stochastic Methods in Finance (M. Frittelli, W. Runggaldier, eds.), Springer Lecture Notes in Mathematics (2004).
Rogers, L.C.G. Duality in constrained optimal investment and consumption problems: a synthesis. In: Paris-Princeton Lectures on Mathematical Finance 2002, Springer Lecture Notes in Mathematics 1814, 2003.


More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields

More information

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix: Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,

More information

2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>0)

2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>0) 12 VOLKER BETZ 2.3. The heat equation. A function u(x, t) solves the heat equation if (with σ>) (2.1) t u 1 2 σ2 u =. In the simplest case, (2.1) is supposed to hold for all x R n, and all t>, and there

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Market environments, stability and equlibria

Market environments, stability and equlibria Market environments, stability and equlibria Gordan Žitković Department of Mathematics University of exas at Austin Austin, Aug 03, 2009 - Summer School in Mathematical Finance he Information Flow two

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Uncertainty in Kalman Bucy Filtering

Uncertainty in Kalman Bucy Filtering Uncertainty in Kalman Bucy Filtering Samuel N. Cohen Mathematical Institute University of Oxford Research Supported by: The Oxford Man Institute for Quantitative Finance The Oxford Nie Financial Big Data

More information

Ambiguity and Information Processing in a Model of Intermediary Asset Pricing

Ambiguity and Information Processing in a Model of Intermediary Asset Pricing Ambiguity and Information Processing in a Model of Intermediary Asset Pricing Leyla Jianyu Han 1 Kenneth Kasa 2 Yulei Luo 1 1 The University of Hong Kong 2 Simon Fraser University December 15, 218 1 /

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Backward martingale representation and endogenous completeness in finance

Backward martingale representation and endogenous completeness in finance Backward martingale representation and endogenous completeness in finance Dmitry Kramkov (with Silviu Predoiu) Carnegie Mellon University 1 / 19 Bibliography Robert M. Anderson and Roberto C. Raimondo.

More information

Lecture 12: Diffusion Processes and Stochastic Differential Equations

Lecture 12: Diffusion Processes and Stochastic Differential Equations Lecture 12: Diffusion Processes and Stochastic Differential Equations 1. Diffusion Processes 1.1 Definition of a diffusion process 1.2 Examples 2. Stochastic Differential Equations SDE) 2.1 Stochastic

More information

Mean-field SDE driven by a fractional BM. A related stochastic control problem

Mean-field SDE driven by a fractional BM. A related stochastic control problem Mean-field SDE driven by a fractional BM. A related stochastic control problem Rainer Buckdahn, Université de Bretagne Occidentale, Brest Durham Symposium on Stochastic Analysis, July 1th to July 2th,

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

On Stochastic Adaptive Control & its Applications. Bozenna Pasik-Duncan University of Kansas, USA

On Stochastic Adaptive Control & its Applications. Bozenna Pasik-Duncan University of Kansas, USA On Stochastic Adaptive Control & its Applications Bozenna Pasik-Duncan University of Kansas, USA ASEAS Workshop, AFOSR, 23-24 March, 2009 1. Motivation: Work in the 1970's 2. Adaptive Control of Continuous

More information

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:

More information

Convex optimization problems. Optimization problem in standard form

Convex optimization problems. Optimization problem in standard form Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality

More information