Continuous Time Finance


Continuous Time Finance

Lisbon 2013

Tomas Björk
Stockholm School of Economics

Contents

Stochastic Calculus (Ch 4-5)
Black-Scholes (Ch 6-7)
Completeness and hedging (Ch 8-9)
The martingale approach (Ch 10-12)
Incomplete markets (Ch 15)
Dividends (Ch 16)
Currency derivatives (Ch 17)
Stochastic Control Theory (Ch 19)
Martingale Methods for Optimal Investment (Ch 20)

Textbook: Björk, T., Arbitrage Theory in Continuous Time, Oxford University Press (3rd ed.)

Notation

$X_t$ = any random process, $dt$ = small time step,

$$dX_t = X_{t+dt} - X_t$$

We often write $X(t)$ instead of $X_t$. $dX_t$ is called the increment of $X$ over the interval $[t, t+dt]$. For any fixed interval $[t, t+dt]$, the increment $dX_t$ is a stochastic variable. If the increments $dX_s$ and $dX_t$ over the disjoint intervals $[s, s+ds]$ and $[t, t+dt]$ are independent, then we say that $X$ has independent increments. If every increment has a normal distribution we say that $X$ is a normal, or Gaussian, process.

The Wiener Process

A stochastic process $W$ is called a Wiener process if it has the following properties:

The increments are normally distributed: for $s < t$,
$$W_t - W_s \sim N[0, t-s],$$
i.e. $E[W_t - W_s] = 0$ and $\mathrm{Var}[W_t - W_s] = t - s$.

$W$ has independent increments.

$W_0 = 0$.

$W$ has continuous trajectories.

A Wiener process is a continuous random walk. Note: in Hull, a Wiener process is typically denoted by $Z$ instead of $W$.

A Wiener Trajectory

[Figure: a simulated Wiener trajectory plotted against time $t$.]
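
A minimal sketch (my addition, not from the slides), assuming numpy is available: a trajectory like the one in the figure can be simulated by cumulatively summing independent $N(0, dt)$ increments, exactly as in the definition above.

```python
import numpy as np

# Simulate one Wiener trajectory on [0, T] from independent N(0, dt) increments.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)    # independent increments
W = np.concatenate([[0.0], np.cumsum(dW)])   # W_0 = 0
print(W[-1])                                 # one draw of W_T ~ N[0, T]
```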

Important Fact

Theorem: A Wiener trajectory is, with probability one, a continuous curve which is nowhere differentiable.

Proof: Hard.

Wiener Process with Drift

A stochastic process $X$ is called a Wiener process with drift $\mu$ and diffusion coefficient $\sigma$ if it has the dynamics
$$dX_t = \mu\, dt + \sigma\, dW_t,$$
where $\mu$ and $\sigma$ are constants. Summing all increments over the interval $[0,t]$ gives us
$$X_t - X_0 = \mu t + \sigma (W_t - W_0),$$
so
$$X_t = X_0 + \mu t + \sigma W_t.$$
Thus $X_t \sim N[X_0 + \mu t, \sigma^2 t]$.
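
As a quick numerical sanity check (a sketch of mine, not part of the slides), the distributional claim $X_T \sim N[X_0 + \mu T, \sigma^2 T]$ can be verified by Monte Carlo:

```python
import numpy as np

# Monte Carlo check that X_T = X_0 + mu*T + sigma*W_T has the stated moments.
rng = np.random.default_rng(1)
x0, mu, sigma, T, n_paths = 1.0, 0.5, 0.3, 2.0, 100_000
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
X_T = x0 + mu * T + sigma * W_T
print(X_T.mean(), x0 + mu * T)     # sample mean vs X_0 + mu*T
print(X_T.var(), sigma**2 * T)     # sample variance vs sigma^2 * T
```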

Itô Processes

We say, loosely speaking, that the process $X$ is an Itô process if it has dynamics of the form
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t,$$
where $\mu_t$ and $\sigma_t$ are random processes. Informally, you can think of $dW_t$ as a random variable of the form $dW_t \sim N[0, dt]$. To handle expressions like the one above, we need some mathematical theory. First, however, we present an important example, which we will discuss informally.

Example: The Black-Scholes Model

Price dynamics (geometric Brownian motion):
$$dS_t = \mu S_t\, dt + \sigma S_t\, dW_t.$$

Simple analysis: assume that $\sigma = 0$. Then
$$dS_t = \mu S_t\, dt.$$
Divide by $dt$:
$$\frac{dS_t}{dt} = \mu S_t.$$
This is a simple ordinary differential equation with solution $S_t = s_0 e^{\mu t}$.

Conjecture: the solution of the SDE above is a randomly disturbed exponential function.

Intuitive Economic Interpretation

$$\frac{dS_t}{S_t} = \mu\, dt + \sigma\, dW_t$$

Over a small time interval $[t, t+dt]$ this means:

Return = (mean return) + $\sigma \cdot$ (Gaussian random disturbance)

The asset return is a random walk (with drift).

$\mu$ = mean rate of return per unit time
$\sigma$ = volatility

Large $\sigma$ = large random fluctuations
Small $\sigma$ = small random fluctuations

The returns are normal. The stock price is lognormal.

A GBM Trajectory

[Figure: a simulated geometric Brownian motion trajectory plotted against time $t$.]

Stochastic Differentials and Integrals

Consider an expression of the form
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t, \quad X_0 = x_0.$$

Question: What exactly do we mean by this?

Answer: Write the equation in integrated form as
$$X_t = x_0 + \int_0^t \mu_s\, ds + \int_0^t \sigma_s\, dW_s.$$
How is this interpreted?

Recall:
$$X_t = x_0 + \int_0^t \mu_s\, ds + \int_0^t \sigma_s\, dW_s$$

Two terms:

$\int_0^t \mu_s\, ds$: This is a standard Riemann integral for each $\mu$-trajectory.

$\int_0^t \sigma_s\, dW_s$: A stochastic integral. This cannot be interpreted as a Stieltjes integral for each trajectory. We need a new theory for this Itô integral.
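
The need for a new theory can be seen numerically (an illustrative sketch of mine, using the standard fact that Riemann-type sums for $\int W\, dW$ depend on where the integrand is evaluated): left- and right-endpoint sums converge to different limits, differing by the quadratic variation $\sum (\Delta W)^2 \approx T$.

```python
import numpy as np

# Left- vs right-endpoint sums for "integral of W dW" on [0, 1]: they differ.
rng = np.random.default_rng(2)
n = 100_000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])
left = np.sum(W[:-1] * dW)     # forward increments (the Ito choice)
right = np.sum(W[1:] * dW)     # right endpoint
print(left, right, right - left)   # difference ~ sum of (dW)^2 ~ 1 = T
```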

Information

Consider a Wiener process $W$.

Def: $\mathcal{F}_t^W$ = the information generated by $W$ over the interval $[0,t]$.

Def: Let $Z$ be a stochastic variable. If the value of $Z$ is completely determined by $\mathcal{F}_t^W$, we write $Z \in \mathcal{F}_t^W$.

Ex: For the stochastic variable $Z$ defined by
$$Z = \int_0^5 W_s\, ds,$$
we have $Z \in \mathcal{F}_5^W$. We do not have $Z \in \mathcal{F}_4^W$.

Adapted Processes

Let $W$ be a Wiener process.

Definition: A process $X$ is adapted to the filtration $\{\mathcal{F}_t^W : t \ge 0\}$ if
$$X_t \in \mathcal{F}_t^W, \quad \forall t \ge 0.$$

An adapted process does not look into the future. Adapted processes are nice integrands for stochastic integrals.

The process
$$X_t = \int_0^t W_s\, ds$$
is adapted.

The process
$$X_t = \sup_{s \le t} W_s$$
is adapted.

The process
$$X_t = \sup_{s \le t+1} W_s$$
is not adapted.

The Itô Integral

We will define the Itô integral
$$\int_a^b g_s\, dW_s$$
for processes $g$ satisfying:

The process $g$ is adapted.

The process $g$ satisfies
$$\int_a^b E\left[g_s^2\right] ds < \infty.$$

This will be done in two steps.

Simple Integrands

Definition: The process $g$ is simple if $g$ is adapted and there exist deterministic points $t_0, \ldots, t_n$ with $a = t_0 < t_1 < \ldots < t_n = b$ such that $g$ is piecewise constant, i.e.
$$g(s) = g(t_k), \quad s \in [t_k, t_{k+1}).$$

For simple $g$ we define
$$\int_a^b g_s\, dW_s = \sum_{k=0}^{n-1} g(t_k)\, [W(t_{k+1}) - W(t_k)].$$

FORWARD INCREMENTS!

Properties of the Integral

Theorem: For simple $g$ the following relations hold.

The expected value is given by
$$E\left[\int_a^b g_s\, dW_s\right] = 0.$$

The second moment is given by
$$E\left[\left(\int_a^b g_s\, dW_s\right)^2\right] = \int_a^b E\left[g_s^2\right] ds.$$

We have
$$\int_a^b g_s\, dW_s \in \mathcal{F}_b^W.$$
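
Both properties are easy to check by Monte Carlo (a sketch of mine, with the adapted integrand $g_s = W_s$, for which $\int_0^b E[W_s^2]\, ds = b^2/2$):

```python
import numpy as np

# Check E[int g dW] = 0 and the isometry E[(int g dW)^2] = int E[g^2] ds
# for g_s = W_s on [0, b], using forward-increment (Ito) sums.
rng = np.random.default_rng(3)
b, n_steps, n_paths = 1.0, 500, 20_000
dW = rng.normal(0.0, np.sqrt(b / n_steps), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
I = np.sum(W[:, :-1] * dW, axis=1)   # one Ito sum per path
print(I.mean())                      # ~ 0
print((I**2).mean(), b**2 / 2)       # isometry: ~ b^2/2
```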

General Case

For a general $g$ we do as follows.

1. Approximate $g$ with a sequence of simple $g_n$ such that
$$\int_a^b E\left[\{g_n(s) - g(s)\}^2\right] ds \to 0.$$

2. For each $n$ the integral
$$\int_a^b g_n(s)\, dW(s)$$
is a well defined stochastic variable $Z_n$.

3. One can show that the $Z_n$ sequence converges to a limiting stochastic variable.

4. We define $\int_a^b g\, dW$ by
$$\int_a^b g(s)\, dW(s) = \lim_{n \to \infty} \int_a^b g_n(s)\, dW(s).$$

Properties of the Integral

Theorem: For general $g$ the following relations hold.

The expected value is given by
$$E\left[\int_a^b g_s\, dW_s\right] = 0.$$

We do in fact have
$$E\left[\int_a^b g_s\, dW_s \,\Big|\, \mathcal{F}_a\right] = 0.$$

The second moment is given by
$$E\left[\left(\int_a^b g_s\, dW_s\right)^2\right] = \int_a^b E\left[g_s^2\right] ds.$$

We have
$$\int_a^b g_s\, dW_s \in \mathcal{F}_b^W.$$

Martingales

Definition: An adapted process $X$ is a martingale if
$$E[X_t \mid \mathcal{F}_s] = X_s, \quad \forall s \le t.$$

A martingale is a process without drift.

Proposition: For any $g$ (sufficiently integrable) the process
$$X_t = \int_0^t g_s\, dW_s$$
is a martingale.

Proposition: If $X$ has dynamics
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t,$$
then $X$ is a martingale iff $\mu = 0$.

Continuous Time Finance

Stochastic Calculus (Ch 4-5)

Tomas Björk

Stochastic Calculus

General model:
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t$$

Let the function $f(t,x)$ be given, and define the stochastic process $Z_t$ by $Z_t = f(t, X_t)$.

Problem: What does $df(t, X_t)$ look like?

The answer is given by the Itô formula. We provide an intuitive argument; the formal proof is very hard.

A Close-up of the Wiener Process

Consider an infinitesimal Wiener increment
$$dW_t = W_{t+dt} - W_t.$$

We know:
$$dW_t \sim N[0, dt], \quad E[dW_t] = 0, \quad \mathrm{Var}[dW_t] = dt.$$

From this one can show
$$E[(dW_t)^2] = dt, \quad \mathrm{Var}[(dW_t)^2] = 2(dt)^2.$$

Recall
$$E[(dW_t)^2] = dt, \quad \mathrm{Var}[(dW_t)^2] = 2(dt)^2.$$

Important observations:

1. Both $E[(dW_t)^2]$ and $\mathrm{Var}[(dW_t)^2]$ are very small when $dt$ is small.
2. $\mathrm{Var}[(dW_t)^2]$ is negligible compared to $E[(dW_t)^2]$.
3. Thus $(dW_t)^2$ is deterministic.

We thus conclude, at least intuitively, that
$$(dW_t)^2 = dt.$$
This was only an intuitive argument, but it can be proved rigorously.
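
The numbers behind this intuition are immediate to reproduce (a small sketch of mine):

```python
import numpy as np

# Sample moments of (dW)^2 for a small dt: mean ~ dt, variance ~ 2 dt^2,
# so the fluctuation of (dW)^2 is negligible relative to its mean.
rng = np.random.default_rng(4)
dt, n = 1e-4, 1_000_000
dW = rng.normal(0.0, np.sqrt(dt), size=n)
print((dW**2).mean(), dt)           # E[(dW)^2] = dt
print((dW**2).var(), 2 * dt**2)     # Var[(dW)^2] = 2 (dt)^2
```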

Multiplication Table

Theorem: We have the following multiplication table:
$$(dt)^2 = 0, \quad dW_t \cdot dt = 0, \quad (dW_t)^2 = dt.$$

Deriving the Itô Formula

$$dX_t = \mu_t\, dt + \sigma_t\, dW_t, \quad Z_t = f(t, X_t)$$

We want to compute $df(t, X_t)$. Make a Taylor expansion of $f(t, X_t)$ including second order terms:
$$df = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}(dt)^2 + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}(dX_t)^2 + \frac{\partial^2 f}{\partial t\, \partial x}\, dt\, dX_t$$

Plug in the expression for $dX$, expand, and use the multiplication table!

$$df = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}[\mu\, dt + \sigma\, dW] + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}(dt)^2 + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}[\mu\, dt + \sigma\, dW]^2 + \frac{\partial^2 f}{\partial t\, \partial x}\, dt\,[\mu\, dt + \sigma\, dW]$$

Expanding:

$$df = \frac{\partial f}{\partial t}\, dt + \mu\frac{\partial f}{\partial x}\, dt + \sigma\frac{\partial f}{\partial x}\, dW + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}(dt)^2 + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\left[\mu^2(dt)^2 + \sigma^2(dW)^2 + 2\mu\sigma\, dt\, dW\right] + \mu\frac{\partial^2 f}{\partial t\, \partial x}(dt)^2 + \sigma\frac{\partial^2 f}{\partial t\, \partial x}\, dt\, dW$$

Using the multiplication table this reduces to:

$$df = \left\{\frac{\partial f}{\partial t} + \mu\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 f}{\partial x^2}\right\} dt + \sigma\frac{\partial f}{\partial x}\, dW$$

The Itô Formula

Theorem: With $X$ dynamics given by
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t,$$
we have
$$df(t, X_t) = \left\{\frac{\partial f}{\partial t} + \mu\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2\frac{\partial^2 f}{\partial x^2}\right\} dt + \sigma\frac{\partial f}{\partial x}\, dW_t.$$

Alternatively,
$$df(t, X_t) = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dX_t + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}(dX_t)^2,$$
where we use the multiplication table.
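
The drift and diffusion terms of the Itô formula can also be assembled symbolically (a sketch of mine using sympy; the coefficients $\mu S$ and $\sigma S$ anticipate the GBM example on the next slide):

```python
import sympy as sp

# Ito formula for f(t, s) = log(s) under dS = mu*S dt + sigma*S dW:
# drift term f_t + (mu s) f_s + (1/2)(sigma s)^2 f_ss, diffusion term (sigma s) f_s.
t, s, mu, sigma = sp.symbols('t s mu sigma', positive=True)
f = sp.log(s)
drift = sp.diff(f, t) + mu * s * sp.diff(f, s) \
        + sp.Rational(1, 2) * (sigma * s)**2 * sp.diff(f, s, 2)
diffusion = sigma * s * sp.diff(f, s)
print(sp.simplify(drift))       # mu - sigma**2/2
print(sp.simplify(diffusion))   # sigma
```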

Example: GBM

$$dS_t = \mu S_t\, dt + \sigma S_t\, dW_t$$

We smell something exponential! Natural Ansatz: $S_t = e^{Z_t}$, i.e. $Z_t = \ln S_t$. Itô applied to $f(t,s) = \ln(s)$ gives us
$$\frac{\partial f}{\partial s} = \frac{1}{s}, \quad \frac{\partial f}{\partial t} = 0, \quad \frac{\partial^2 f}{\partial s^2} = -\frac{1}{s^2},$$
so
$$dZ_t = \frac{1}{S_t}\, dS_t - \frac{1}{2}\frac{1}{S_t^2}(dS_t)^2 = \left(\mu - \tfrac{1}{2}\sigma^2\right) dt + \sigma\, dW_t.$$

Recall
$$dZ_t = \left(\mu - \tfrac{1}{2}\sigma^2\right) dt + \sigma\, dW_t.$$
Integrate!
$$Z_t - Z_0 = \int_0^t \left(\mu - \tfrac{1}{2}\sigma^2\right) ds + \sigma \int_0^t dW_s = \left(\mu - \tfrac{1}{2}\sigma^2\right) t + \sigma W_t.$$
Using $S_t = e^{Z_t}$ gives us
$$S_t = S_0\, e^{\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma W_t}.$$
Since $W_t$ is $N[0, t]$, we see that $S_t$ has a lognormal distribution.
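
Since the solution is explicit, GBM can be simulated exactly at any horizon (a sketch of mine), and the lognormal claim checked directly:

```python
import numpy as np

# Exact GBM samples: S_T = S_0 * exp((mu - sigma^2/2) T + sigma W_T).
rng = np.random.default_rng(5)
s0, mu, sigma, T, n = 100.0, 0.08, 0.2, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), size=n)
S_T = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
print(S_T.mean(), s0 * np.exp(mu * T))          # E[S_T] = S_0 e^{mu T}
print(np.log(S_T).std(), sigma * np.sqrt(T))    # log S_T is Gaussian
```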

Changing Measures

Consider a probability measure $P$ on $(\Omega, \mathcal{F})$, and assume that $L \in \mathcal{F}$ is a random variable with the properties that $L \ge 0$ and $E^P[L] = 1$. For every event $A \in \mathcal{F}$ we now define the real number $Q(A)$ by the prescription
$$Q(A) = E^P[L \cdot I_A],$$
where the random variable $I_A$ is the indicator for $A$, i.e.
$$I_A = \begin{cases} 1 & \text{if } A \text{ occurs} \\ 0 & \text{if } A^c \text{ occurs} \end{cases}$$

Recall that $Q(A) = E^P[L \cdot I_A]$. We now see that $Q(A) \ge 0$ for all $A$, and that
$$Q(\Omega) = E^P[L \cdot I_\Omega] = E^P[L \cdot 1] = 1.$$
We also see that if $A \cap B = \emptyset$ then
$$Q(A \cup B) = E^P[L \cdot I_{A \cup B}] = E^P[L \cdot (I_A + I_B)] = E^P[L \cdot I_A] + E^P[L \cdot I_B] = Q(A) + Q(B).$$
Furthermore we see that
$$P(A) = 0 \;\Rightarrow\; Q(A) = 0.$$
We have thus more or less proved the following.

Proposition 2: If $L \in \mathcal{F}$ is a nonnegative random variable with $E^P[L] = 1$ and $Q$ is defined by
$$Q(A) = E^P[L \cdot I_A],$$
then $Q$ will be a probability measure on $\mathcal{F}$ with the property that
$$P(A) = 0 \;\Rightarrow\; Q(A) = 0.$$

It turns out that the property above is a very important one, so we give it a name.

Absolute Continuity

Definition: Given two probability measures $P$ and $Q$ on $\mathcal{F}$, we say that $Q$ is absolutely continuous w.r.t. $P$ on $\mathcal{F}$ if, for all $A \in \mathcal{F}$, we have
$$P(A) = 0 \;\Rightarrow\; Q(A) = 0.$$
We write this as $Q \ll P$.

If $Q \ll P$ and $P \ll Q$ then we say that $P$ and $Q$ are equivalent and write $Q \sim P$.

Equivalent Measures

It is easy to see that $P$ and $Q$ are equivalent if and only if
$$P(A) = 0 \;\Leftrightarrow\; Q(A) = 0$$
or, equivalently,
$$P(A) = 1 \;\Leftrightarrow\; Q(A) = 1.$$
Two equivalent measures thus agree on all certain events and on all impossible events, but can disagree on all other events.

Simple examples:

All non-degenerate Gaussian distributions on $R$ are equivalent.

If $P$ is Gaussian on $R$ and $Q$ is exponential, then $Q \ll P$ but not the other way around.

Absolute Continuity, ct'd

We have seen that if we are given $P$ and define $Q$ by
$$Q(A) = E^P[L \cdot I_A]$$
for some $L \ge 0$ with $E^P[L] = 1$, then $Q$ is a probability measure and $Q \ll P$.

A natural question is now whether all measures $Q \ll P$ are obtained in this way. The answer is yes, and the precise (quite deep) result is as follows. The proof is difficult and therefore omitted.

The Radon-Nikodym Theorem

Consider two probability measures $P$ and $Q$ on $(\Omega, \mathcal{F})$, and assume that $Q \ll P$ on $\mathcal{F}$. Then there exists a unique random variable $L$ with the following properties:

1. $Q(A) = E^P[L \cdot I_A]$ for all $A \in \mathcal{F}$
2. $L \ge 0$, $P$-a.s.
3. $E^P[L] = 1$
4. $L \in \mathcal{F}$

The random variable $L$ is denoted by
$$L = \frac{dQ}{dP} \quad \text{on } \mathcal{F},$$
and it is called the Radon-Nikodym derivative of $Q$ w.r.t. $P$ on $\mathcal{F}$, or the likelihood ratio between $Q$ and $P$ on $\mathcal{F}$.

A Simple Example

The Radon-Nikodym derivative $L$ is intuitively the local scale factor between $P$ and $Q$. If the sample space $\Omega$ is finite, so $\Omega = \{\omega_1, \ldots, \omega_n\}$, then $P$ is determined by the probabilities $p_1, \ldots, p_n$ where
$$p_i = P(\omega_i), \quad i = 1, \ldots, n.$$
Now consider a measure $Q$ with probabilities
$$q_i = Q(\omega_i), \quad i = 1, \ldots, n.$$
If $Q \ll P$ this simply says that
$$p_i = 0 \;\Rightarrow\; q_i = 0,$$
and it is easy to see that the Radon-Nikodym derivative $L = dQ/dP$ is given by
$$L(\omega_i) = \frac{q_i}{p_i}, \quad i = 1, \ldots, n.$$

If $p_i = 0$ then we also have $q_i = 0$, and we can define the ratio $q_i/p_i$ arbitrarily. If $p_1, \ldots, p_n$ as well as $q_1, \ldots, q_n$ are all positive, then we see that $Q \sim P$ and in fact
$$\frac{dP}{dQ} = \frac{1}{L} = \left(\frac{dQ}{dP}\right)^{-1},$$
as could be expected.
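
On a finite sample space the whole change-of-measure machinery fits in a few lines (a sketch of mine, with hypothetical numbers for $p$ and $q$):

```python
import numpy as np

# Finite Omega: L(omega_i) = q_i / p_i, and E^Q[X] = E^P[L X] for any X.
p = np.array([0.2, 0.3, 0.5])    # P(omega_i), all positive
q = np.array([0.4, 0.4, 0.2])    # Q(omega_i), so here Q ~ P
L = q / p                        # Radon-Nikodym derivative dQ/dP
X = np.array([1.0, -2.0, 3.0])   # an arbitrary random variable on Omega
print((p * L).sum())                      # E^P[L] = 1
print((p * L * X).sum(), (q * X).sum())   # E^P[L X] = E^Q[X]
```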

The Likelihood Process on a Filtered Space

We now consider the case when we have a probability measure $P$ on some space $\Omega$ and, instead of just one $\sigma$-algebra $\mathcal{F}$, we have a filtration, i.e. an increasing family of $\sigma$-algebras $\{\mathcal{F}_t\}_{t \ge 0}$. The interpretation is as usual that $\mathcal{F}_t$ is the information available to us at time $t$, and that we have $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$.

Now assume that we also have another measure $Q$, and that for some fixed $T$ we have $Q \ll P$ on $\mathcal{F}_T$. We define the random variable $L_T$ by
$$L_T = \frac{dQ}{dP} \quad \text{on } \mathcal{F}_T.$$
Since $Q \ll P$ on $\mathcal{F}_T$ we also have $Q \ll P$ on $\mathcal{F}_t$ for all $t \le T$, and we define
$$L_t = \frac{dQ}{dP} \quad \text{on } \mathcal{F}_t, \quad 0 \le t \le T.$$
For every $t$ we have $L_t \in \mathcal{F}_t$, so $L$ is an adapted process, known as the likelihood process.

The L Process is a P-Martingale

We recall that
$$L_t = \frac{dQ}{dP} \quad \text{on } \mathcal{F}_t, \quad 0 \le t \le T.$$
Since $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$, we can use Proposition 5 and deduce that
$$L_s = E^P[L_t \mid \mathcal{F}_s], \quad s \le t \le T,$$
and we have thus proved the following result.

Proposition: Given the assumptions above, the likelihood process $L$ is a $P$-martingale.

Where Are We Heading?

We are now going to perform measure transformations on Wiener spaces, where $P$ will correspond to the objective measure and $Q$ will be the risk neutral measure. For this we need to define the proper likelihood process $L$ and, since $L$ is a $P$-martingale, we have the following natural questions:

What does a martingale look like in a Wiener driven framework?

Suppose that we have a $P$-Wiener process $W$ and then change measure from $P$ to $Q$. What are the properties of $W$ under the new measure $Q$?

These questions are handled by the Martingale Representation Theorem and the Girsanov Theorem, respectively.

4. The Martingale Representation Theorem

Intuition

Suppose that we have a Wiener process $W$ under the measure $P$. We recall that if $h$ is adapted (and integrable enough) and the process $X$ is defined by
$$X_t = x_0 + \int_0^t h_s\, dW_s,$$
then $X$ is a martingale. We now have the following natural question:

Question: Assume that $X$ is an arbitrary martingale. Does it then follow that $X$ has the form
$$X_t = x_0 + \int_0^t h_s\, dW_s$$
for some adapted process $h$?

In other words: are all martingales stochastic integrals w.r.t. $W$?

Answer

It is immediately clear that not all martingales can be written as stochastic integrals w.r.t. $W$. Consider for example the process $X$ defined by
$$X_t = \begin{cases} 0 & \text{for } 0 \le t < 1 \\ Z & \text{for } t \ge 1 \end{cases}$$
where $Z$ is a random variable, independent of $W$, with $E[Z] = 0$. $X$ is then a martingale (why?), but it is clear (how?) that it cannot be written as
$$X_t = x_0 + \int_0^t h_s\, dW_s$$
for any process $h$.

Intuition

The intuitive reason why we cannot write
$$X_t = x_0 + \int_0^t h_s\, dW_s$$
in the example above is of course that the random variable $Z$ has nothing to do with the Wiener process $W$. In order to exclude examples like this, we thus need an assumption which guarantees that our probability space only contains the Wiener process $W$ and nothing else. This idea is formalized by assuming that the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ is the one generated by the Wiener process $W$.

The Martingale Representation Theorem

Theorem: Let $W$ be a $P$-Wiener process and assume that the filtration is the internal one, i.e.
$$\mathcal{F}_t = \mathcal{F}_t^W = \sigma\{W_s;\ 0 \le s \le t\}.$$
Then, for every $(P, \mathcal{F}_t)$-martingale $X$, there exists a real number $x$ and an adapted process $h$ such that
$$X_t = x + \int_0^t h_s\, dW_s,$$
i.e.
$$dX_t = h_t\, dW_t.$$

Proof: Hard. This is a very deep result.

Note

For a given martingale $X$, the Representation Theorem above guarantees the existence of a process $h$ such that
$$X_t = x + \int_0^t h_s\, dW_s.$$
The Theorem does not, however, tell us how to find or construct the process $h$.

5. The Girsanov Theorem

Setup

Let $W$ be a $P$-Wiener process and fix a time horizon $T$. Suppose that we want to change measure from $P$ to $Q$ on $\mathcal{F}_T$. For this we need a $P$-martingale $L$ with $L_0 = 1$ to use as a likelihood process, and a natural way of constructing this is to choose a process $g$ and then define $L$ by
$$dL_t = g_t\, dW_t, \quad L_0 = 1.$$
This definition does not guarantee that $L \ge 0$, so we make a small adjustment. We choose a process $\varphi$ and define $L$ by
$$dL_t = L_t \varphi_t\, dW_t, \quad L_0 = 1.$$
The process $L$ will again be a martingale, and we easily obtain
$$L_t = e^{\int_0^t \varphi_s\, dW_s - \frac{1}{2}\int_0^t \varphi_s^2\, ds}.$$

Thus we are guaranteed that $L \ge 0$. We now change measure from $P$ to $Q$ by setting
$$dQ = L_t\, dP \quad \text{on } \mathcal{F}_t, \quad 0 \le t \le T.$$
The main problem is to find out what the properties of $W$ are under the new measure $Q$. This problem is resolved by the Girsanov Theorem.

The Girsanov Theorem

Let $W$ be a $P$-Wiener process. Fix a time horizon $T$.

Theorem: Choose an adapted process $\varphi$, and define the process $L$ by
$$dL_t = L_t \varphi_t\, dW_t, \quad L_0 = 1.$$
Assume that $E^P[L_T] = 1$, and define a new measure $Q$ on $\mathcal{F}_T$ by
$$dQ = L_t\, dP \quad \text{on } \mathcal{F}_t, \quad 0 \le t \le T.$$
Then $Q \ll P$ and the process $W^Q$, defined by
$$W_t^Q = W_t - \int_0^t \varphi_s\, ds,$$
is $Q$-Wiener. We can also write this as
$$dW_t = \varphi_t\, dt + dW_t^Q.$$
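
For a constant kernel $\varphi$ the theorem can be checked by Monte Carlo (a sketch of mine): sample under $P$, weight by $L_T$, and verify that $W_T^Q = W_T - \varphi T$ has the moments of a $Q$-Wiener process at time $T$.

```python
import numpy as np

# Girsanov with constant phi: L_T = exp(phi*W_T - phi^2*T/2), and under Q
# (i.e. P-expectations weighted by L_T), W_T - phi*T ~ N[0, T].
rng = np.random.default_rng(6)
phi, T, n = 0.7, 1.0, 500_000
W_T = rng.normal(0.0, np.sqrt(T), size=n)        # P-samples of W_T
L_T = np.exp(phi * W_T - 0.5 * phi**2 * T)       # likelihood ratio
WQ_T = W_T - phi * T
print(L_T.mean())                  # E^P[L_T] = 1
print((L_T * WQ_T).mean())         # E^Q[W_T^Q] ~ 0
print((L_T * WQ_T**2).mean(), T)   # E^Q[(W_T^Q)^2] ~ T
```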

Changing the Drift in an SDE

The single most common use of the Girsanov Theorem is as follows. Suppose that we have a process $X$ with $P$-dynamics
$$dX_t = \mu_t\, dt + \sigma_t\, dW_t,$$
where $\mu$ and $\sigma$ are adapted and $W$ is $P$-Wiener. We now do a Girsanov transformation as above, and the question is what the $Q$-dynamics look like. From the Girsanov Theorem we have
$$dW_t = \varphi_t\, dt + dW_t^Q,$$
and substituting this into the $P$-dynamics we obtain the $Q$-dynamics as
$$dX_t = \{\mu_t + \sigma_t \varphi_t\}\, dt + \sigma_t\, dW_t^Q.$$

Moral: The drift changes but the diffusion is unaffected.
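
The moral is visible in simulation (a sketch of mine with constant coefficients, for which summing the increments is exact): the terminal mean shifts from $\mu T$ to $(\mu + \sigma\varphi)T$ while the standard deviation stays $\sigma\sqrt{T}$.

```python
import numpy as np

# Simulate X_T under the P-drift mu and under the Q-drift mu + sigma*phi.
rng = np.random.default_rng(7)
mu, sigma, phi, T, n_steps, n_paths = 0.1, 0.2, 0.5, 1.0, 500, 50_000
dt = T / n_steps
for drift in (mu, mu + sigma * phi):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X_T = drift * T + sigma * dW.sum(axis=1)
    print(X_T.mean(), X_T.std())   # mean moves with the drift; std ~ sigma*sqrt(T)
```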

1. Dynamic Programming

The basic idea.
Deriving the HJB equation.
The verification theorem.
The linear quadratic regulator.

Problem Formulation

$$\max_u\; E\left[\int_0^T F(t, X_t, u_t)\, dt + \Phi(X_T)\right]$$
subject to
$$dX_t = \mu(t, X_t, u_t)\, dt + \sigma(t, X_t, u_t)\, dW_t,$$
$$X_0 = x_0,$$
$$u_t \in U(t, X_t), \quad \forall t.$$

We will only consider feedback control laws, i.e. controls of the form
$$u_t = u(t, X_t).$$

Terminology:
$X$ = state variable
$u$ = control variable
$U$ = control constraint

Note: No state space constraints.

Main Idea

Embed the problem above in a family of problems indexed by starting point in time and space.

Tie all these problems together by a PDE: the Hamilton-Jacobi-Bellman equation.

The control problem is reduced to the problem of solving the deterministic HJB equation.

Some Notation

For any fixed vector $u \in R^k$, the functions $\mu^u$, $\sigma^u$ and $C^u$ are defined by
$$\mu^u(t,x) = \mu(t,x,u), \quad \sigma^u(t,x) = \sigma(t,x,u), \quad C^u(t,x) = \sigma(t,x,u)\,\sigma(t,x,u)'.$$

For any control law $u$, the functions $\mu^u$, $\sigma^u$, $C^u(t,x)$ and $F^u(t,x)$ are defined by
$$\mu^u(t,x) = \mu(t,x,u(t,x)), \quad \sigma^u(t,x) = \sigma(t,x,u(t,x)),$$
$$C^u(t,x) = \sigma(t,x,u(t,x))\,\sigma(t,x,u(t,x))', \quad F^u(t,x) = F(t,x,u(t,x)).$$

More Notation

For any fixed vector $u \in R^k$, the partial differential operator $A^u$ is defined by
$$A^u = \sum_{i=1}^n \mu_i^u(t,x)\frac{\partial}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n C_{ij}^u(t,x)\frac{\partial^2}{\partial x_i\, \partial x_j}.$$

For any control law $u$, the partial differential operator $A^u$ is defined analogously, with $\mu^u(t,x)$ and $C^u(t,x)$ evaluated along the control law.

For any control law $u$, the process $X^u$ is the solution of the SDE
$$dX_t^u = \mu(t, X_t^u, u_t)\, dt + \sigma(t, X_t^u, u_t)\, dW_t,$$
where $u_t = u(t, X_t^u)$.

Embedding the Problem

For every fixed $(t,x)$ the control problem $\mathcal{P}_{t,x}$ is defined as the problem to maximize
$$E_{t,x}\left[\int_t^T F(s, X_s^u, u_s)\, ds + \Phi(X_T^u)\right]$$
given the dynamics
$$dX_s^u = \mu(s, X_s^u, u_s)\, ds + \sigma(s, X_s^u, u_s)\, dW_s, \quad X_t = x,$$
and the constraints
$$u(s,y) \in U, \quad \forall (s,y) \in [t,T] \times R^n.$$

The original problem was $\mathcal{P}_{0,x_0}$.

The Optimal Value Function

The value function
$$J : R_+ \times R^n \times \mathcal{U} \to R$$
is defined by
$$J(t,x,u) = E\left[\int_t^T F(s, X_s^u, u_s)\, ds + \Phi(X_T^u)\right]$$
given the dynamics above. The optimal value function
$$V : R_+ \times R^n \to R$$
is defined by
$$V(t,x) = \sup_{u \in \mathcal{U}} J(t,x,u).$$

We want to derive a PDE for $V$.

Assumptions

We assume:

There exists an optimal control law $\hat{u}$.

The optimal value function $V$ is regular in the sense that $V \in C^{1,2}$.

A number of limiting procedures in the following arguments can be justified.

Bellman Optimality Principle

Theorem: If a control law $\hat{u}$ is optimal for the time interval $[t,T]$, then it is also optimal for all smaller intervals $[s,T]$ where $s \ge t$.

Proof: Exercise.

Basic Strategy

To derive the PDE, do as follows:

Fix $(t,x) \in (0,T) \times R^n$.

Choose a real number $h$ (interpreted as a small time increment).

Choose an arbitrary control law $u$ on the time interval $[t, t+h]$.

Now define the control law $u^\star$ by
$$u^\star(s,y) = \begin{cases} u(s,y), & (s,y) \in [t, t+h] \times R^n \\ \hat{u}(s,y), & (s,y) \in (t+h, T] \times R^n. \end{cases}$$

In other words, if we use $u^\star$ then we use the arbitrary control $u$ during the time interval $[t, t+h]$, and then we switch to the optimal control law during the rest of the time period.

Basic Idea

The whole idea of DynP boils down to the following procedure. Given the point $(t,x)$ above, we consider the following two strategies over the time interval $[t,T]$:

I: Use the optimal law $\hat{u}$.
II: Use the control law $u^\star$ defined above.

Compute the expected utilities obtained by the respective strategies. Using the obvious fact that $\hat{u}$ is at least as good as $u^\star$, and letting $h$ tend to zero, we obtain our fundamental PDE.

Strategy Values

Expected utility for $\hat{u}$:
$$J(t,x,\hat{u}) = V(t,x).$$

Expected utility for $u^\star$:

The expected utility for $[t, t+h)$ is given by
$$E_{t,x}\left[\int_t^{t+h} F(s, X_s^u, u_s)\, ds\right].$$

The conditional expected utility over $[t+h, T]$, given $(t,x)$, is
$$E_{t,x}\left[V(t+h, X_{t+h}^u)\right].$$

The total expected utility for Strategy II is
$$E_{t,x}\left[\int_t^{t+h} F(s, X_s^u, u_s)\, ds + V(t+h, X_{t+h}^u)\right].$$

Comparing Strategies

We have trivially
$$V(t,x) \ge E_{t,x}\left[\int_t^{t+h} F(s, X_s^u, u_s)\, ds + V(t+h, X_{t+h}^u)\right].$$

Remark: We have equality above if and only if the control law $u$ is the optimal law $\hat{u}$.

Now use Itô to obtain
$$V(t+h, X_{t+h}^u) = V(t,x) + \int_t^{t+h} \left\{\frac{\partial V}{\partial t}(s, X_s^u) + A^u V(s, X_s^u)\right\} ds + \int_t^{t+h} \nabla_x V(s, X_s^u)\, \sigma^u\, dW_s,$$
and plug into the formula above.

We obtain
$$E_{t,x}\left[\int_t^{t+h} \left\{F(s, X_s^u, u_s) + \frac{\partial V}{\partial t}(s, X_s^u) + A^u V(s, X_s^u)\right\} ds\right] \le 0.$$

Going to the limit: divide by $h$, move $h$ within the expectation, and let $h$ tend to zero. We get
$$F(t,x,u) + \frac{\partial V}{\partial t}(t,x) + A^u V(t,x) \le 0.$$

Recall
$$F(t,x,u) + \frac{\partial V}{\partial t}(t,x) + A^u V(t,x) \le 0.$$
This holds for all $u = u(t,x)$, with equality if and only if $u = \hat{u}$. We thus obtain the HJB equation
$$\frac{\partial V}{\partial t}(t,x) + \sup_{u \in U}\left\{F(t,x,u) + A^u V(t,x)\right\} = 0.$$

The HJB Equation

Theorem: Under suitable regularity assumptions the following hold:

I: $V$ satisfies the Hamilton-Jacobi-Bellman equation
$$\frac{\partial V}{\partial t}(t,x) + \sup_{u \in U}\left\{F(t,x,u) + A^u V(t,x)\right\} = 0, \quad V(T,x) = \Phi(x).$$

II: For each $(t,x) \in [0,T] \times R^n$ the supremum in the HJB equation above is attained by $u = \hat{u}(t,x)$, i.e. by the optimal control.

Logic and Problem

Note: We have shown that if $V$ is the optimal value function, and if $V$ is regular enough, then $V$ satisfies the HJB equation. The HJB equation is thus derived as a necessary condition, and requires strong ad hoc regularity assumptions, or alternatively the use of viscosity solution techniques.

Problem: Suppose we have solved the HJB equation. Have we then found the optimal value function and the optimal control law? In other words, is HJB a sufficient condition for optimality?

Answer: Yes! This follows from the Verification Theorem.

The Verification Theorem

Suppose that we have two functions $H(t,x)$ and $g(t,x)$ such that:

$H$ is sufficiently integrable and solves the HJB equation
$$\frac{\partial H}{\partial t}(t,x) + \sup_{u \in U}\left\{F(t,x,u) + A^u H(t,x)\right\} = 0, \quad H(T,x) = \Phi(x).$$

For each fixed $(t,x)$, the supremum in the expression
$$\sup_{u \in U}\left\{F(t,x,u) + A^u H(t,x)\right\}$$
is attained by the choice $u = g(t,x)$.

Then the following hold:

1. The optimal value function $V$ to the control problem is given by $V(t,x) = H(t,x)$.
2. There exists an optimal control law $\hat{u}$, and in fact $\hat{u}(t,x) = g(t,x)$.

Handling the HJB Equation

1. Consider the HJB equation for $V$.

2. Fix $(t,x) \in [0,T] \times R^n$ and solve the static optimization problem
$$\max_{u \in U}\ [F(t,x,u) + A^u V(t,x)].$$
Here $u$ is the only variable, whereas $t$ and $x$ are fixed parameters. The functions $F$, $\mu$, $\sigma$ and $V$ are considered as given.

3. The optimal $\hat{u}$ will depend on $t$ and $x$, and on the function $V$ and its partial derivatives. We thus write $\hat{u}$ as
$$\hat{u} = \hat{u}(t,x;V). \tag{4}$$

4. The function $\hat{u}(t,x;V)$ is our candidate for the optimal control law, but since we do not know $V$ this description is incomplete. Therefore we substitute the expression for $\hat{u}$ into the PDE, giving us the highly nonlinear (why?) PDE
$$\frac{\partial V}{\partial t}(t,x) + F^{\hat{u}}(t,x) + A^{\hat{u}} V(t,x) = 0, \quad V(T,x) = \Phi(x).$$

5. Now we solve the PDE above! Then we put the solution $V$ into expression (4). Using the verification theorem we can identify $V$ as the optimal value function, and $\hat{u}$ as the optimal control law.

Making an Ansatz

The hard work of dynamic programming consists in solving the highly nonlinear HJB equation. There are no general analytic methods available for this, so the number of known optimal control problems with an analytic solution is very small indeed.

In an actual case one usually tries to guess a solution, i.e. we typically make a parameterized Ansatz for $V$ and then use the PDE in order to identify the parameters.

Hint: $V$ often inherits some structural properties from the boundary function $\Phi$ as well as from the instantaneous utility function $F$.

Most of the known solved control problems have, to some extent, been rigged in order to be analytically solvable.

The Linear Quadratic Regulator

$$\min_{u \in R^k}\; E\left[\int_0^T \left\{X_t' Q X_t + u_t' R u_t\right\} dt + X_T' H X_T\right],$$
with dynamics
$$dX_t = \{A X_t + B u_t\}\, dt + C\, dW_t.$$

We want to control a vehicle in such a way that it stays close to the origin (the terms $x'Qx$ and $x'Hx$) while at the same time keeping the energy $u'Ru$ small.

Here $X_t \in R^n$ and $u_t \in R^k$, and we impose no control constraints on $u$. The matrices $Q$, $R$, $H$, $A$, $B$ and $C$ are assumed to be known. We may WLOG assume that $Q$, $R$ and $H$ are symmetric, and we assume that $R$ is positive definite (and thus invertible).

Handling the Problem

The HJB equation becomes
$$\frac{\partial V}{\partial t}(t,x) + \inf_{u \in R^k}\left\{x'Qx + u'Ru + [\nabla_x V](t,x)\,[Ax + Bu]\right\} + \frac{1}{2}\sum_{i,j} \frac{\partial^2 V}{\partial x_i\, \partial x_j}(t,x)\,[CC']_{i,j} = 0,$$
$$V(T,x) = x'Hx.$$

For each fixed choice of $(t,x)$ we now have to solve the static unconstrained optimization problem to minimize
$$x'Qx + u'Ru + [\nabla_x V](t,x)\,[Ax + Bu].$$

The problem was:
$$\min_u\ x'Qx + u'Ru + [\nabla_x V](t,x)\,[Ax + Bu].$$
Since $R > 0$ we set the gradient to zero and obtain
$$2u'R = -(\nabla_x V) B,$$
which gives us the optimal $u$ as
$$\hat{u} = -\frac{1}{2} R^{-1} B' (\nabla_x V)'.$$

Note: This is our candidate optimal control law, but it depends on the unknown function $V$. We now make an educated guess about the structure of $V$.

From the boundary function $x'Hx$ and the term $x'Qx$ in the cost function, we make the Ansatz
$$V(t,x) = x' P(t) x + q(t),$$
where $P(t)$ is a symmetric matrix function and $q(t)$ is a scalar function. With this trial solution we have
$$\frac{\partial V}{\partial t}(t,x) = x' \dot{P} x + \dot{q}, \quad \nabla_x V(t,x) = 2x'P, \quad \nabla_{xx} V(t,x) = 2P,$$
$$\hat{u} = -R^{-1} B' P x.$$
Inserting these expressions into the HJB equation we get
$$x'\left\{\dot{P} + Q - P B R^{-1} B' P + A'P + PA\right\} x + \dot{q} + \mathrm{tr}[C'PC] = 0.$$

We thus get the following matrix ODE for $P$:
$$\dot{P} = P B R^{-1} B' P - A'P - PA - Q, \quad P(T) = H,$$
and we can integrate directly for $q$:
$$\dot{q} = -\mathrm{tr}[C'PC], \quad q(T) = 0.$$
The matrix equation is a Riccati equation; the equation for $q$ can then be integrated directly.

Final result for LQ:
$$V(t,x) = x' P(t) x + \int_t^T \mathrm{tr}[C'P(s)C]\, ds,$$
$$\hat{u}(t,x) = -R^{-1} B' P(t) x.$$
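
In the scalar case the Riccati equation is easy to integrate numerically (a sketch of mine assuming scipy is available; all coefficient values are hypothetical one-dimensional choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar LQ: integrate P' = P B R^{-1} B' P - A'P - PA - Q backwards from
# P(T) = H, then form the optimal feedback u(t, x) = -R^{-1} B' P(t) x.
A, B, Q, R, H, T = 1.0, 1.0, 1.0, 1.0, 2.0, 1.0

def riccati_rhs(P):
    return P * B / R * B * P - 2.0 * A * P - Q

# Substitute s = T - t so the terminal condition becomes an initial condition.
sol = solve_ivp(lambda s, P: -riccati_rhs(P), (0.0, T), [H])
P0 = sol.y[0, -1]        # P(0)
print(P0)                # value function coefficient at t = 0
print(-B / R * P0)       # feedback gain: u(0, x) = gain * x
```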
