Introduction to Algorithmic Trading Strategies Lecture 4

1 Introduction to Algorithmic Trading Strategies Lecture 4 Optimal Pairs Trading by Stochastic Control Haksun Li haksun.li@numericalmethod.com

2 Outline: problem formulation, Ito's lemma, dynamic programming, the Hamilton-Jacobi-Bellman equation, the Riccati equation, the integrating factor.

3 Reference: Mudchanatongsuk, S., Primbs, J. A., and Wong, W., "Optimal Pairs Trading: A Stochastic Control Approach," Dept. of Management Science & Engineering, Stanford University, Stanford, CA.

4 Basket Creation vs. Trading. In lecture 3, we discussed a few ways to construct a mean-reverting basket. In this lecture and the next, we discuss how to trade a mean-reverting asset, if one exists.

5 Stochastic Control. We model the difference between the log prices of two assets as an Ornstein-Uhlenbeck process. We compute the optimal position to take as a function of the deviation from the equilibrium. This is done by solving the corresponding Hamilton-Jacobi-Bellman equation.

6 Formulation. Assume a risk-free asset M_t which satisfies dM_t = r M_t dt. Assume two assets, A_t and B_t, where B_t follows a geometric Brownian motion:
dB_t = \mu B_t \, dt + \sigma B_t \, dz_t
x_t is the spread between the two assets:
x_t = \log A_t - \log B_t

7 Ornstein-Uhlenbeck Process. We assume the spread, i.e., the basket that we want to trade, follows a mean-reverting process:
dx_t = k(\theta - x_t) \, dt + \eta \, d\omega_t
\theta is the long-term equilibrium to which the spread reverts. k is the rate of reversion; it must be positive to ensure stability around the equilibrium value.
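As a concrete illustration (an addition, not part of the slides), the sketch below simulates such an OU spread with an Euler-Maruyama scheme; the parameter values are arbitrary assumptions.

```python
# Minimal sketch: Euler-Maruyama simulation of dx = k(theta - x) dt + eta dW.
# Parameter values are illustrative assumptions, not from the lecture.
import numpy as np

def simulate_ou(x0, k, theta, eta, T=1.0, n_steps=252, seed=0):
    """Return an Euler-Maruyama path of the OU spread on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[i + 1] = x[i] + k * (theta - x[i]) * dt + eta * dw
    return x

path = simulate_ou(x0=0.5, k=5.0, theta=0.0, eta=0.3)
print(path[:5], path[-1])   # the path drifts back toward theta = 0
```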

8 Instantaneous Correlation. Let \rho denote the instantaneous correlation coefficient between z and \omega:
E[d\omega_t \, dz_t] = \rho \, dt
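For simulation purposes, correlated Brownian increments with exactly this property can be generated as below; the sketch is an addition to the slide, and rho, dt are illustrative assumptions.

```python
# Minimal sketch: draw increments dz, d(omega) with E[d(omega) dz] = rho * dt.
import numpy as np

rng = np.random.default_rng(1)
rho, dt, n = 0.6, 1.0 / 252, 100_000

dz = rng.normal(0.0, np.sqrt(dt), n)
eps = rng.normal(0.0, np.sqrt(dt), n)            # independent of dz
domega = rho * dz + np.sqrt(1.0 - rho**2) * eps  # Corr(dz, domega) = rho

print(np.mean(dz * domega) / dt)                 # should be close to rho
```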

9 Univariate Ito's Lemma. Assume dX_t = \mu_t \, dt + \sigma_t \, dB_t and that f(t, X_t) is a twice-differentiable function of two real variables. We have
df(t, X_t) = \left(f_t + \mu_t f_x + \tfrac{1}{2}\sigma_t^2 f_{xx}\right) dt + \sigma_t f_x \, dB_t

10 Log Example. For a G.B.M., dX_t = \mu X_t \, dt + \sigma X_t \, dB_t, what is d\log X_t? Take f = \log x, so f_t = 0, f_x = \tfrac{1}{x}, f_{xx} = -\tfrac{1}{x^2}. Then
d\log X_t = \left(\mu X_t \tfrac{1}{X_t} - \tfrac{1}{2}\sigma^2 X_t^2 \tfrac{1}{X_t^2}\right) dt + \sigma X_t \tfrac{1}{X_t} \, dB_t = \left(\mu - \tfrac{\sigma^2}{2}\right) dt + \sigma \, dB_t
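A quick Monte Carlo check of this result (an addition, not from the slides): simulate one Euler step of the GBM and compare the mean log increment with (\mu - \sigma^2/2)\Delta t; the parameter values below are illustrative.

```python
# Minimal sketch: verify that E[log(X_{t+dt}/X_t)] ~ (mu - sigma^2/2) dt for a GBM.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, dt, n = 0.10, 0.20, 1.0 / 252, 1_000_000

x0 = 1.0
db = rng.normal(0.0, np.sqrt(dt), n)
x1 = x0 + mu * x0 * dt + sigma * x0 * db        # one Euler step of dX = mu X dt + sigma X dB
log_incr = np.log(x1 / x0)

print(log_incr.mean(), (mu - 0.5 * sigma**2) * dt)   # the two should be close
```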

11 Multivariate Ito's Lemma. Assume X_t = (X_{1t}, X_{2t}, \ldots, X_{nt}) is a vector Ito process and f(x_{1t}, x_{2t}, \ldots, x_{nt}) is twice differentiable. We have
df(X_{1t}, \ldots, X_{nt}) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} \, dX_{it} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 f}{\partial x_i \partial x_j} \, d\langle X_i, X_j\rangle_t

12 Multivariate Example. \log A_t = x_t + \log B_t, so A_t = \exp(x_t + \log B_t) = B_t e^{x_t}. The partial derivatives are:
\frac{\partial A_t}{\partial x_t} = A_t, \quad \frac{\partial A_t}{\partial B_t} = \frac{A_t}{B_t}, \quad \frac{\partial^2 A_t}{\partial x_t^2} = A_t, \quad \frac{\partial^2 A_t}{\partial B_t^2} = 0, \quad \frac{\partial^2 A_t}{\partial x_t \partial B_t} = \frac{A_t}{B_t}

13 What is the Dynamic of Asset A_t? By the multivariate Ito's lemma,
dA_t = \frac{\partial A_t}{\partial x_t} dx_t + \frac{\partial A_t}{\partial B_t} dB_t + \frac{1}{2}\frac{\partial^2 A_t}{\partial x_t^2}(dx_t)^2 + \frac{\partial^2 A_t}{\partial x_t \partial B_t} dx_t \, dB_t
= A_t \, dx_t + \frac{A_t}{B_t} dB_t + \frac{1}{2} A_t (dx_t)^2 + \frac{A_t}{B_t} dx_t \, dB_t
= A_t[k(\theta - x_t) dt + \eta \, d\omega_t] + \frac{1}{2} A_t \eta^2 dt + \frac{A_t}{B_t}\rho\eta\sigma B_t \, dt + \frac{A_t}{B_t}(\mu B_t \, dt + \sigma B_t \, dz_t)

14 Dynamic of Asset A_t.
dA_t = A_t[k(\theta - x_t) dt + \eta \, d\omega_t] + A_t(\mu \, dt + \sigma \, dz_t) + \frac{1}{2} A_t\eta^2 dt + A_t\rho\eta\sigma \, dt
= A_t\left[k(\theta - x_t) + \mu + \tfrac{1}{2}\eta^2 + \rho\eta\sigma\right] dt + A_t\eta \, d\omega_t + A_t\sigma \, dz_t
= A_t\left\{\left[\mu + k(\theta - x_t) + \tfrac{1}{2}\eta^2 + \rho\eta\sigma\right] dt + \sigma \, dz_t + \eta \, d\omega_t\right\}

15 Notations. V_t: the value of a self-financing pairs-trading portfolio. h_t: the portfolio weight for stock A. \tilde{h}_t = -h_t: the portfolio weight for stock B.

16 Self-Financing Portfolio Dynamic.
\frac{dV_t}{V_t} = h_t \frac{dA_t}{A_t} + \tilde{h}_t \frac{dB_t}{B_t} + \frac{dM_t}{M_t}
= h_t\left\{\left[\mu + k(\theta - x_t) + \tfrac{1}{2}\eta^2 + \rho\eta\sigma\right] dt + \sigma \, dz_t + \eta \, d\omega_t\right\} - h_t(\mu \, dt + \sigma \, dz_t) + r \, dt
= h_t\left\{\left[k(\theta - x_t) + \tfrac{1}{2}\eta^2 + \rho\eta\sigma\right] dt + \eta \, d\omega_t\right\} + r \, dt
= \left\{h_t\left[k(\theta - x_t) + \tfrac{1}{2}\eta^2 + \rho\eta\sigma\right] + r\right\} dt + h_t\eta \, d\omega_t

17 Power Utility. Investor preference: U(x) = x^\gamma, x \geq 0, 0 < \gamma < 1.

18 Problem Formulation.
\max_{h_t} E[V_T^\gamma], \quad \text{s.t. } V_0 = v_0, \; x_0 = x_0,
dx_t = k(\theta - x_t) dt + \eta \, d\omega_t
dV_t = h_t \, dx_t = h_t k(\theta - x_t) dt + h_t\eta \, d\omega_t
Note that we simplify the GBM dynamics of V_t to a Brownian motion and drop some constants.

19 Dynamic Programming. Consider an N-stage problem: minimize (or maximize) the accumulated cost over a system path, Cost = \sum_{t} c_t. [Figure: a shortest-path diagram over stages t = 0, 1, 2, with states s_0, s_{11}, s_{12}, s_{22}, \ldots and edge costs.]

20 Dynamic Programming Formulation. State transition: x_{k+1} = f_k(x_k, u_k, \omega_k), where k is the time index, x_k the state, u_k the control decision selected at time k, and \omega_k a random noise. Cost: g_N(x_N) + \sum_{k=0}^{N-1} g_k(x_k, u_k, \omega_k). Objective: minimize (maximize) the expected cost. We need to take an expectation to account for the noise \omega_k.

21 Principle of Optimality. Let \pi = \{\mu_0, \mu_1, \ldots, \mu_{N-1}\} be an optimal policy for the basic problem, and assume that when using \pi, a given state x_i occurs at time i with positive probability. Consider the sub-problem whereby we are at x_i at time i and wish to minimize the cost-to-go from time i to time N,
E\left[g_N(x_N) + \sum_{k=i}^{N-1} g_k(x_k, \mu_k(x_k), \omega_k)\right]
Then the truncated policy \{\mu_i, \mu_{i+1}, \ldots, \mu_{N-1}\} is optimal for this sub-problem.

22 Dynamic Programming Algorithm. For every initial state x_0, the optimal cost J^*(x_0) of the basic problem is equal to J_0(x_0), given by the last step of the following algorithm, which proceeds backward in time from period N-1 to period 0:
J_N(x_N) = g_N(x_N)
J_k(x_k) = \min_{u_k} E\left[g_k(x_k, u_k, \omega_k) + J_{k+1}(f_k(x_k, u_k, \omega_k))\right]
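The backward recursion is easy to code. The toy problem below is an addition, not from the lecture; its dynamics, stage costs, and noise distribution are invented purely to illustrate the algorithm.

```python
# Minimal sketch of backward DP: J_N = g_N, J_k = min_u E[g_k + J_{k+1}(f_k)].
# Toy dynamics, costs and noise are illustrative assumptions.
import numpy as np

N = 5                               # horizon
states = np.arange(-3, 4)           # x in {-3, ..., 3}
controls = np.array([-1, 0, 1])
noises = np.array([-1, 0, 1])       # omega, each with probability 1/3

def f(x, u, w):                     # state transition, clipped to the grid
    return int(np.clip(x + u + w, states[0], states[-1]))

def g(x, u, w):                     # stage cost: distance from 0 plus control effort
    return x**2 + 0.5 * u**2

J = [dict() for _ in range(N + 1)]
policy = [dict() for _ in range(N)]
J[N] = {x: x**2 for x in states}    # terminal cost g_N(x_N)

for k in range(N - 1, -1, -1):      # backward in time
    for x in states:
        best_u, best_cost = None, np.inf
        for u in controls:
            cost = np.mean([g(x, u, w) + J[k + 1][f(x, u, w)] for w in noises])
            if cost < best_cost:
                best_u, best_cost = u, cost
        J[k][x] = best_cost
        policy[k][x] = best_u

print(J[0][3], policy[0][3])        # optimal cost-to-go and first action from x0 = 3
```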

23 Value Function. Terminal condition: G(T, V, x) = V^\gamma. DP equation:
G(t, V_t, x_t) = \max_{h_t} E[G(t + dt, V_{t+dt}, x_{t+dt})]
G(t, V_t, x_t) = \max_{h_t} E[G(t, V_t, x_t) + \Delta G]
By Ito's lemma:
\Delta G = G_t \, dt + G_V \, dV + G_x \, dx + \tfrac{1}{2} G_{VV}(dV)^2 + \tfrac{1}{2} G_{xx}(dx)^2 + G_{Vx} \, dV \, dx

24 Hamilton-Jacobi-Bellman Equation. Cancel G(t, V_t, x_t) on both the LHS and the RHS, divide by the time discretization \Delta t, and take the limit as \Delta t \to 0 (hence Ito):
0 = \max_{h_t} E[\Delta G]
\max_{h_t} E\left[G_t \, dt + G_V \, dV + G_x \, dx + \tfrac{1}{2} G_{VV}(dV)^2 + \tfrac{1}{2} G_{xx}(dx)^2 + G_{Vx} \, dV \, dx\right] = 0
The maximizer is the optimal portfolio position h_t^*.

25 HJB for Our Portfolio Value. Substituting dV_t = h_t k(\theta - x_t) dt + h_t\eta \, d\omega_t and dx_t = k(\theta - x_t) dt + \eta \, d\omega_t into
\max_{h_t} E\left[G_t \, dt + G_V \, dV + G_x \, dx + \tfrac{1}{2} G_{VV}(dV)^2 + \tfrac{1}{2} G_{xx}(dx)^2 + G_{Vx} \, dV \, dx\right] = 0
gives
\max_{h_t} E\Big[G_t \, dt + G_V(h_t k(\theta - x_t) dt + h_t\eta \, d\omega_t) + G_x(k(\theta - x_t) dt + \eta \, d\omega_t) + \tfrac{1}{2} G_{VV}(h_t k(\theta - x_t) dt + h_t\eta \, d\omega_t)^2 + \tfrac{1}{2} G_{xx}(k(\theta - x_t) dt + \eta \, d\omega_t)^2 + G_{Vx}(h_t k(\theta - x_t) dt + h_t\eta \, d\omega_t)(k(\theta - x_t) dt + \eta \, d\omega_t)\Big] = 0

26 Taking Expectation. All \eta \, d\omega_t terms disappear because of the expectation operator; only the deterministic dt terms remain. Dividing the LHS and RHS by dt:
\max_{h_t}\left[G_t + G_V h_t k(\theta - x_t) + G_x k(\theta - x_t) + \tfrac{1}{2} G_{VV} h_t^2\eta^2 + \tfrac{1}{2} G_{xx}\eta^2 + G_{Vx} h_t\eta^2\right] = 0

27 Dynamic Programming Solution. Solve for the cost-to-go function G(t, V, x), assuming the optimal policy h_t^* is used.

28 First Order Condition. Differentiate w.r.t. h_t:
G_V k(\theta - x_t) + h_t G_{VV}\eta^2 + G_{Vx}\eta^2 = 0
h_t^* = -\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2}
In order to determine the optimal position h_t^*, we need to solve for G to get G_V, G_{Vx}, and G_{VV}.
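The first-order condition can also be checked symbolically. The sketch below (an addition, not from the slides) differentiates the bracketed expression from slide 26 with respect to h and solves for it.

```python
# Minimal symbolic check of the first-order condition with sympy.
import sympy as sp

h, k, theta, x, eta = sp.symbols('h k theta x eta')
G_t, G_V, G_x, G_VV, G_xx, G_Vx = sp.symbols('G_t G_V G_x G_VV G_xx G_Vx')

# the bracketed HJB expression from slide 26
expr = (G_t + G_V * h * k * (theta - x) + G_x * k * (theta - x)
        + sp.Rational(1, 2) * G_VV * h**2 * eta**2
        + sp.Rational(1, 2) * G_xx * eta**2
        + G_Vx * h * eta**2)

h_star = sp.solve(sp.diff(expr, h), h)[0]
print(sp.simplify(h_star))   # expected: -(G_V*k*(theta - x) + G_Vx*eta**2)/(G_VV*eta**2)
```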

29 The Partial Differential Equation (1). Substituting h_t^* back into the HJB equation:
G_t - G_V\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2}k(\theta - x_t) + G_x k(\theta - x_t) + \frac{1}{2} G_{VV}\eta^2\left(\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2}\right)^2 + \frac{1}{2} G_{xx}\eta^2 - G_{Vx}\eta^2\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2} = 0

30 The Partial Differential Equation (2).
G_t - G_V k(\theta - x_t)\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2} + G_x k(\theta - x_t) + \frac{(G_V k(\theta - x_t) + G_{Vx}\eta^2)^2}{2 G_{VV}\eta^2} + \frac{1}{2} G_{xx}\eta^2 - G_{Vx}\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}} = 0

31 Dis-equilibrium. Let b = k(\theta - x_t). Rewrite:
G_t - G_V b\frac{G_V b + G_{Vx}\eta^2}{G_{VV}\eta^2} + G_x b + \frac{(G_V b + G_{Vx}\eta^2)^2}{2 G_{VV}\eta^2} + \frac{1}{2} G_{xx}\eta^2 - G_{Vx}\frac{G_V b + G_{Vx}\eta^2}{G_{VV}} = 0
Multiply by G_{VV}\eta^2:
G_t G_{VV}\eta^2 - G_V b(G_V b + G_{Vx}\eta^2) + G_x b G_{VV}\eta^2 + \frac{1}{2}(G_V b + G_{Vx}\eta^2)^2 + \frac{1}{2} G_{xx} G_{VV}\eta^4 - G_{Vx}\eta^2(G_V b + G_{Vx}\eta^2) = 0

32 Simplification. Note that
-G_V b(G_V b + G_{Vx}\eta^2) + \frac{1}{2}(G_V b + G_{Vx}\eta^2)^2 - G_{Vx}\eta^2(G_V b + G_{Vx}\eta^2) = -\frac{1}{2}(G_V b + G_{Vx}\eta^2)^2
The PDE becomes
G_t G_{VV}\eta^2 + G_x b G_{VV}\eta^2 + \frac{1}{2} G_{xx} G_{VV}\eta^4 - \frac{1}{2}(G_V b + G_{Vx}\eta^2)^2 = 0

33 The Partial Differential Equation (3).
G_t G_{VV}\eta^2 + G_x b G_{VV}\eta^2 + \frac{1}{2} G_{xx} G_{VV}\eta^4 - \frac{1}{2}(G_V b + G_{Vx}\eta^2)^2 = 0

34 Ansatz for G. Let G(t, V, x) = f(t, x) V^\gamma; the terminal condition G(T, V, x) = V^\gamma then requires f(T, x) = 1. The partial derivatives are:
G_t = V^\gamma f_t, \quad G_V = \gamma V^{\gamma-1} f, \quad G_{VV} = \gamma(\gamma - 1)V^{\gamma-2} f, \quad G_x = V^\gamma f_x, \quad G_{Vx} = \gamma V^{\gamma-1} f_x, \quad G_{xx} = V^\gamma f_{xx}

35 Another PDE (1). Substituting the ansatz:
V^\gamma f_t\,\gamma(\gamma - 1)V^{\gamma-2} f\eta^2 + V^\gamma f_x b\,\gamma(\gamma - 1)V^{\gamma-2} f\eta^2 + \frac{1}{2} V^\gamma f_{xx}\,\gamma(\gamma - 1)V^{\gamma-2} f\eta^4 - \frac{1}{2}\left(\gamma V^{\gamma-1} f b + \gamma V^{\gamma-1} f_x\eta^2\right)^2 = 0
Divide by \gamma(\gamma - 1)\eta^2 V^{2\gamma-2}:
f_t f + b f f_x + \frac{1}{2}\eta^2 f f_{xx} - \frac{\gamma}{2(\gamma - 1)}\left(\frac{b}{\eta} f + \eta f_x\right)^2 = 0

36 Ansatz for f.
f f_t + b f f_x + \frac{1}{2}\eta^2 f f_{xx} - \frac{\gamma}{2(\gamma - 1)}\left(\frac{b}{\eta} f + \eta f_x\right)^2 = 0
Try f(t, x) = g(t)\exp(x\beta(t) + x^2\alpha(t)) = g\exp(x\beta + x^2\alpha). Then:
f_t = g_t\exp(x\beta + x^2\alpha) + g\exp(x\beta + x^2\alpha)(x\beta_t + x^2\alpha_t)
f_x = g\exp(x\beta + x^2\alpha)(\beta + 2x\alpha)
f_{xx} = g\exp(x\beta + x^2\alpha)(\beta + 2x\alpha)^2 + g\exp(x\beta + x^2\alpha)\,2\alpha
f_x / f = \beta + 2\alpha x

37 Boundary Conditions. f(T, x) = g(T)\exp(x\beta(T) + x^2\alpha(T)) = 1 for all x, hence g(T) = 1, \alpha(T) = 0, \beta(T) = 0.

38 Yet Another PDE (1). Substituting the ansatz into the PDE and dividing by g\exp(x\beta + x^2\alpha)\cdot\exp(x\beta + x^2\alpha):
g_t + g(x\beta_t + x^2\alpha_t) + b g(\beta + 2x\alpha) + \frac{1}{2}\eta^2 g(\beta + 2x\alpha)^2 + \eta^2 g\alpha - \frac{\gamma}{2(\gamma - 1)}\,g\left(\frac{b}{\eta} + \eta(\beta + 2x\alpha)\right)^2 = 0

39 Yet Another PDE (2). Let \lambda = \frac{\gamma}{2(\gamma - 1)}. Then
g_t + g(x\beta_t + x^2\alpha_t) + b g(\beta + 2x\alpha) + \frac{1}{2}\eta^2 g(\beta + 2x\alpha)^2 + \eta^2 g\alpha - \lambda g\left(\frac{b}{\eta} + \eta(\beta + 2x\alpha)\right)^2 = 0

40 Expansion in x. Expanding the squares:
g_t + g x\beta_t + g x^2\alpha_t + b g\beta + 2x\alpha b g + \frac{1}{2}\eta^2 g(\beta^2 + 4x^2\alpha^2 + 4x\alpha\beta) + \eta^2 g\alpha - \lambda g\left[\frac{b^2}{\eta^2} + \eta^2\beta^2 + 4\eta^2 x^2\alpha^2 + 2b\beta + 4bx\alpha + 4\eta^2 x\alpha\beta\right] = 0
Substituting b = k(\theta - x):
g_t + g x\beta_t + g x^2\alpha_t + k g\beta\theta - k g\beta x + 2x\alpha k g\theta - 2\alpha k g x^2 + \frac{1}{2}\eta^2 g\beta^2 + 2\eta^2 g x^2\alpha^2 + 2\eta^2 g x\alpha\beta + \eta^2 g\alpha - \frac{\lambda g}{\eta^2}k^2\theta^2 + \frac{2\lambda g}{\eta^2}k^2\theta x - \frac{\lambda g}{\eta^2}k^2 x^2 - \lambda g\eta^2\beta^2 - 4\lambda g\eta^2 x^2\alpha^2 - 2\lambda g k\beta\theta + 2\lambda g k\beta x - 4\lambda g k\theta x\alpha + 4\lambda g k x^2\alpha - 4\lambda g\eta^2 x\alpha\beta = 0

41 Grouping in x. Collecting the constant, x, and x^2 terms:
\left[g_t + k g\beta\theta + \frac{1}{2}\eta^2 g\beta^2 + \eta^2 g\alpha - \frac{\lambda g}{\eta^2}k^2\theta^2 - \lambda g\eta^2\beta^2 - 2\lambda g k\beta\theta\right]
+ \left[g\beta_t - k g\beta + 2\alpha k g\theta + 2\eta^2 g\alpha\beta + \frac{2\lambda g}{\eta^2}k^2\theta + 2\lambda g k\beta - 4\lambda g k\theta\alpha - 4\lambda g\eta^2\alpha\beta\right]x
+ \left[g\alpha_t - 2\alpha k g + 2\eta^2 g\alpha^2 - \frac{\lambda g}{\eta^2}k^2 - 4\lambda g\eta^2\alpha^2 + 4\lambda g k\alpha\right]x^2 = 0

42 The Three ODEs (1). Since the equation must hold for all x, each bracket must vanish:
g\alpha_t - 2\alpha k g + 2\eta^2 g\alpha^2 - \frac{\lambda g}{\eta^2}k^2 - 4\lambda g\eta^2\alpha^2 + 4\lambda g k\alpha = 0
g\beta_t - k g\beta + 2\alpha k g\theta + 2\eta^2 g\alpha\beta + \frac{2\lambda g}{\eta^2}k^2\theta + 2\lambda g k\beta - 4\lambda g k\theta\alpha - 4\lambda g\eta^2\alpha\beta = 0
g_t + k g\beta\theta + \frac{1}{2}\eta^2 g\beta^2 + \eta^2 g\alpha - \frac{\lambda g}{\eta^2}k^2\theta^2 - \lambda g\eta^2\beta^2 - 2\lambda g k\beta\theta = 0

43 ODE in \alpha. Dividing the first equation by g:
\alpha_t + (2\eta^2 - 4\lambda\eta^2)\alpha^2 + (4\lambda k - 2k)\alpha - \frac{\lambda}{\eta^2}k^2 = 0
\alpha_t = \frac{\lambda}{\eta^2}k^2 + 2k(1 - 2\lambda)\alpha + 2\eta^2(2\lambda - 1)\alpha^2

44 ODE in \beta, \alpha. Dividing the second equation by g:
\beta_t - k\beta + 2\eta^2\alpha\beta + 2\lambda k\beta - 4\lambda\eta^2\alpha\beta - 4\lambda k\theta\alpha + \frac{2\lambda}{\eta^2}k^2\theta + 2\alpha k\theta = 0
\beta_t = (k - 2\eta^2\alpha - 2\lambda k + 4\lambda\eta^2\alpha)\beta + 4\lambda k\theta\alpha - \frac{2\lambda}{\eta^2}k^2\theta - 2\alpha k\theta

45 ODE in \beta, \alpha, g. The third equation:
g_t + k g\beta\theta + \frac{1}{2}\eta^2 g\beta^2 + \eta^2 g\alpha - \frac{\lambda g}{\eta^2}k^2\theta^2 - \lambda g\eta^2\beta^2 - 2\lambda g k\beta\theta = 0
g_t = -k g\beta\theta - \frac{1}{2}\eta^2 g\beta^2 - \eta^2 g\alpha + \frac{\lambda g}{\eta^2}k^2\theta^2 + \lambda g\eta^2\beta^2 + 2\lambda g k\beta\theta
g_t = g\left(-k\beta\theta - \frac{1}{2}\eta^2\beta^2 - \eta^2\alpha + \frac{\lambda}{\eta^2}k^2\theta^2 + \lambda\eta^2\beta^2 + 2\lambda k\beta\theta\right)
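The closed-form solutions for \alpha and \beta are derived in the following slides. As an independent cross-check (an addition, not in the lecture), the same functions can be obtained by integrating these ODEs numerically backward from the terminal conditions \alpha(T) = \beta(T) = 0; the parameter values below are illustrative assumptions.

```python
# Minimal sketch: integrate the alpha and beta ODEs backward from t = T with scipy.
import numpy as np
from scipy.integrate import solve_ivp

k, theta, eta, gamma, T = 5.0, 0.1, 0.3, 0.5, 1.0   # illustrative assumptions
lam = gamma / (2.0 * (gamma - 1.0))                  # lambda = gamma / (2(gamma - 1)) < 0

def rhs(t, y):
    a, b = y                                         # y = (alpha(t), beta(t))
    da = (lam * k**2 / eta**2
          + 2.0 * k * (1.0 - 2.0 * lam) * a
          + 2.0 * eta**2 * (2.0 * lam - 1.0) * a**2)
    db = ((k - 2.0 * eta**2 * a - 2.0 * lam * k + 4.0 * lam * eta**2 * a) * b
          + 4.0 * lam * k * theta * a
          - 2.0 * lam * k**2 * theta / eta**2
          - 2.0 * a * k * theta)
    return [da, db]

# integrate from t = T down to t = 0, starting from alpha(T) = beta(T) = 0
sol = solve_ivp(rhs, (T, 0.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
alpha0, beta0 = sol.y[:, -1]
print(alpha0, beta0)                                 # alpha(0), beta(0)
```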

46 Riccati Equation. A Riccati equation is any ordinary differential equation that is quadratic in the unknown function:
\alpha_t = \frac{\lambda}{\eta^2}k^2 + 2k(1 - 2\lambda)\alpha + 2\eta^2(2\lambda - 1)\alpha^2
\alpha_t = A_0 + A_1\alpha + A_2\alpha^2, \quad A_0 = \frac{\lambda k^2}{\eta^2}, \; A_1 = 2k(1 - 2\lambda), \; A_2 = 2\eta^2(2\lambda - 1)

47 Solving a Riccati Equation by Integration. Suppose a particular solution \alpha_1 can be found. Then \alpha = \alpha_1 + \frac{1}{z} is the general solution, where z satisfies a first-order linear ODE, subject to the boundary condition.

48 Particular Solution. Either \alpha_1 or \alpha_2 below is a particular (constant) solution to the ODE; this can be verified by mere substitution:
\alpha_{1,2} = \frac{-A_1 \pm \sqrt{A_1^2 - 4A_2A_0}}{2A_2}
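A quick numerical version of that substitution check (an addition, not from the slides); the parameter values are illustrative.

```python
# Minimal sketch: verify that alpha_{1,2} solve A_0 + A_1*alpha + A_2*alpha^2 = 0.
import numpy as np

k, eta, gamma = 5.0, 0.3, 0.5                        # illustrative assumptions
lam = gamma / (2.0 * (gamma - 1.0))

A0 = lam * k**2 / eta**2
A1 = 2.0 * k * (1.0 - 2.0 * lam)
A2 = 2.0 * eta**2 * (2.0 * lam - 1.0)

disc = np.sqrt(A1**2 - 4.0 * A2 * A0)
alpha1 = (-A1 + disc) / (2.0 * A2)
alpha2 = (-A1 - disc) / (2.0 * A2)

for a in (alpha1, alpha2):
    print(a, A0 + A1 * a + A2 * a**2)                # residual should be ~ 0
```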

49 z Substitution. Suppose \alpha = \alpha_1 + \frac{1}{z}. Then \alpha_t = -\frac{z_t}{z^2}, and
-\frac{z_t}{z^2} = A_0 + A_1\left(\alpha_1 + \frac{1}{z}\right) + A_2\left(\alpha_1 + \frac{1}{z}\right)^2
= (A_0 + A_1\alpha_1 + A_2\alpha_1^2) + \frac{A_1 + 2\alpha_1 A_2}{z} + \frac{A_2}{z^2}
= \frac{A_1 + 2\alpha_1 A_2}{z} + \frac{A_2}{z^2}
since A_0 + A_1\alpha_1 + A_2\alpha_1^2 = 0 by the definition of \alpha_1.

50 Solving z.
-\frac{z_t}{z^2} = \frac{A_1 + 2\alpha_1 A_2}{z} + \frac{A_2}{z^2}
Multiplying by -z^2 gives a first-order linear ODE:
z_t + (A_1 + 2\alpha_1 A_2)z = -A_2
z(t) = \frac{-A_2}{A_1 + 2\alpha_1 A_2} + C\exp\left(-(A_1 + 2\alpha_1 A_2)t\right)

51 Solving for \alpha.
\alpha = \alpha_1 + \cfrac{1}{\frac{-A_2}{A_1 + 2\alpha_1 A_2} + C\exp\left(-(A_1 + 2\alpha_1 A_2)t\right)}
Boundary condition \alpha(T) = 0:
\alpha_1 + \cfrac{1}{\frac{-A_2}{A_1 + 2\alpha_1 A_2} + C\exp\left(-(A_1 + 2\alpha_1 A_2)T\right)} = 0
C\exp\left(-(A_1 + 2\alpha_1 A_2)T\right) = \frac{A_2}{A_1 + 2\alpha_1 A_2} - \frac{1}{\alpha_1}
C = \exp\left((A_1 + 2\alpha_1 A_2)T\right)\left(\frac{A_2}{A_1 + 2\alpha_1 A_2} - \frac{1}{\alpha_1}\right)

52 \alpha Solution (1). Substituting C back:
\alpha(t) = \alpha_1 + \cfrac{1}{\frac{-A_2}{A_1 + 2\alpha_1 A_2} + \left(\frac{A_2}{A_1 + 2\alpha_1 A_2} - \frac{1}{\alpha_1}\right)\exp\left((A_1 + 2\alpha_1 A_2)(T - t)\right)}

53 \alpha Solution (2). Multiplying the numerator and the denominator by \alpha_1(A_1 + 2\alpha_1 A_2):
\alpha(t) = \alpha_1 - \frac{\alpha_1(A_1 + 2\alpha_1 A_2)}{\alpha_1 A_2 + (A_1 + \alpha_1 A_2)\exp\left((A_1 + 2\alpha_1 A_2)(T - t)\right)}

54 \alpha Solution (3). With \alpha_1 = \frac{k(1 - \sqrt{1 - \gamma})}{2\eta^2} and A_1 + 2\alpha_1 A_2 = \frac{2k}{\sqrt{1 - \gamma}}, this simplifies to
\alpha(t) = \frac{k\gamma}{2\eta^2}\cdot\frac{1 - \exp\left(\frac{2k}{\sqrt{1 - \gamma}}(T - t)\right)}{(1 - \sqrt{1 - \gamma}) - (1 + \sqrt{1 - \gamma})\exp\left(\frac{2k}{\sqrt{1 - \gamma}}(T - t)\right)}
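As a sanity check (an addition, not from the slides), the closed-form \alpha(t) can be verified numerically: it should vanish at t = T and satisfy the Riccati ODE; the parameter values below are illustrative.

```python
# Minimal sketch: check alpha(T) = 0 and d(alpha)/dt = A_0 + A_1*alpha + A_2*alpha^2.
import numpy as np

k, eta, gamma, T = 5.0, 0.3, 0.5, 1.0               # illustrative assumptions
lam = gamma / (2.0 * (gamma - 1.0))
s = np.sqrt(1.0 - gamma)

A0 = lam * k**2 / eta**2
A1 = 2.0 * k * (1.0 - 2.0 * lam)
A2 = 2.0 * eta**2 * (2.0 * lam - 1.0)

def alpha(t):
    E = np.exp(2.0 * k * (T - t) / s)
    return (k * gamma / (2.0 * eta**2)) * (1.0 - E) / ((1.0 - s) - (1.0 + s) * E)

print(alpha(T))                                      # should be 0
t, h = 0.8, 1e-6
lhs = (alpha(t + h) - alpha(t - h)) / (2.0 * h)      # finite-difference d(alpha)/dt
rhs = A0 + A1 * alpha(t) + A2 * alpha(t)**2
print(lhs, rhs)                                      # the two should agree closely
```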

55 Solving \beta.
\beta_t = (k - 2\eta^2\alpha - 2\lambda k + 4\lambda\eta^2\alpha)\beta + 4\lambda k\theta\alpha - \frac{2\lambda}{\eta^2}k^2\theta - 2\alpha k\theta
Let \tau = T - t, so \beta(\tau) = \beta(T - t) and \frac{\partial\beta}{\partial\tau} = -\beta_t. Then
\frac{\partial\beta}{\partial\tau} = -(k - 2\eta^2\alpha - 2\lambda k + 4\lambda\eta^2\alpha)\beta - 4\lambda k\theta\alpha + \frac{2\lambda}{\eta^2}k^2\theta + 2\alpha k\theta
= \left[(2\lambda - 1)k + 2\eta^2\alpha(1 - 2\lambda)\right]\beta + 2\alpha k\theta(1 - 2\lambda) + \frac{2\lambda}{\eta^2}k^2\theta

56 First Order, Non-Constant Coefficients.
\frac{\partial\beta}{\partial\tau} = B_1\beta + B_2, \quad B_1(\tau) = (2\lambda - 1)k + 2\eta^2\alpha(1 - 2\lambda), \quad B_2(\tau) = 2\alpha k\theta(1 - 2\lambda) + \frac{2\lambda}{\eta^2}k^2\theta

57 Integrating Factor (1).
\frac{\partial\beta}{\partial\tau} - B_1\beta = B_2
We try to find an integrating factor \mu = \mu(\tau) such that
\frac{d}{d\tau}(\mu\beta) = \mu\frac{d\beta}{d\tau} + \beta\frac{d\mu}{d\tau} = \mu B_2
Dividing the LHS and RHS by \mu\beta:
\frac{1}{\beta}\frac{d\beta}{d\tau} + \frac{1}{\mu}\frac{d\mu}{d\tau} = \frac{B_2}{\beta}
By comparison with the original equation, -B_1 = \frac{1}{\mu}\frac{d\mu}{d\tau}.

58 Integrating Factor (2).
-\int B_1 \, d\tau = \int\frac{d\mu}{\mu} = \log\mu + C
\mu = \exp\left(-\int B_1 \, d\tau\right)
\mu\beta = \int\mu B_2 \, d\tau + C
\beta = \frac{\int\mu B_2 \, d\tau + C}{\mu} = \frac{\int\exp\left(-\int B_1 \, du\right)B_2 \, d\tau + C}{\exp\left(-\int B_1 \, d\tau\right)}

59 \beta Solution.
\beta(\tau) = \exp\left(\int_0^\tau B_1(u)\,du\right)\left[\int_0^\tau\exp\left(-\int_0^s B_1(u)\,du\right)B_2(s)\,ds + C\right]
= \int_0^\tau\exp\left(\int_s^\tau B_1(u)\,du\right)B_2(s)\,ds
where C = 0 because \beta(\tau = 0) = \beta(t = T) = 0.

60 B_1, B_2. The integrals needed are
\int_0^\tau B_1(s)\,ds = \int_0^\tau\left[(2\lambda - 1)k + 2\eta^2(1 - 2\lambda)\alpha(s)\right]ds
I_2 = \int_0^\tau\exp\left(-\int_0^s B_1(u)\,du\right)B_2(s)\,ds

61 \beta Solution. Carrying out the integration gives
\beta(t) = \frac{\gamma k\theta}{\eta^2}\cdot\frac{\exp\left(\frac{2k}{\sqrt{1 - \gamma}}(T - t)\right) - 1}{(1 - \sqrt{1 - \gamma}) - (1 + \sqrt{1 - \gamma})\exp\left(\frac{2k}{\sqrt{1 - \gamma}}(T - t)\right)}

62 Solving g.
g_t = g\left(-k\beta\theta - \frac{1}{2}\eta^2\beta^2 - \eta^2\alpha + \frac{\lambda}{\eta^2}k^2\theta^2 + \lambda\eta^2\beta^2 + 2\lambda k\beta\theta\right)
With \alpha and \beta solved, we are now ready to solve for g(t). Write g_t/g = G(t), where G(t) denotes the bracket above. Then
\frac{d}{dt}\log g(t) = \frac{g_t}{g} = G(t)
\log g(t) = \int G(s)\,ds + C
g(t) = C\exp\left(\int_0^t G(s)\,ds\right)
Using g(T) = 1: g(t) = \exp\left(-\int_t^T G(s)\,ds\right).

63 Computing the Optimal Position.
h_t^* = -\frac{G_V k(\theta - x_t) + G_{Vx}\eta^2}{G_{VV}\eta^2}
= -\frac{\gamma V^{\gamma-1}f k(\theta - x_t) + \gamma V^{\gamma-1}f_x\eta^2}{\gamma(\gamma - 1)V^{\gamma-2}f\eta^2}
= -\frac{V f k(\theta - x_t) + V f_x\eta^2}{(\gamma - 1)f\eta^2}
= \frac{V}{(1 - \gamma)\eta^2}\left[k(\theta - x_t) + \frac{f_x}{f}\eta^2\right]
= \frac{V}{1 - \gamma}\left[-\frac{k}{\eta^2}(x_t - \theta) + \beta + 2\alpha x_t\right]

64 The Optimal Position.
h_t^* = \frac{V_t}{1 - \gamma}\left[-\frac{k}{\eta^2}(x_t - \theta) + 2\alpha(t)x_t + \beta(t)\right]
h_t^* \sim -(x_t - \theta): the position is proportional to the negative of the deviation of the spread from its equilibrium.

65 P&L for Simulated Data. The portfolio increases from $1,000 to $4,625 in one year.
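A self-contained sketch of such a simulation is given below. It is an addition, not the simulation behind the slide's figure: it uses the closed-form \alpha(t) and \beta(t) from slides 54 and 61 and the optimal position from slide 64 with illustrative parameter values, so the numbers it produces will not match $4,625.

```python
# Minimal sketch: simulate the spread, apply h_t = V_t/(1-gamma) * [-(k/eta^2)(x_t - theta)
# + 2*alpha(t)*x_t + beta(t)], and accumulate dV_t = h_t dx_t (simplified formulation).
import numpy as np

k, theta, eta, gamma, T = 5.0, 0.1, 0.3, 0.5, 1.0    # illustrative assumptions
s = np.sqrt(1.0 - gamma)

def alpha(t):
    E = np.exp(2.0 * k * (T - t) / s)
    return (k * gamma / (2.0 * eta**2)) * (1.0 - E) / ((1.0 - s) - (1.0 + s) * E)

def beta(t):
    E = np.exp(2.0 * k * (T - t) / s)
    return (gamma * k * theta / eta**2) * (E - 1.0) / ((1.0 - s) - (1.0 + s) * E)

rng = np.random.default_rng(42)
n = 252
dt = T / n
x, V = theta + 0.2, 1000.0                           # spread starts 0.2 above equilibrium
for i in range(n):
    t = i * dt
    h = V / (1.0 - gamma) * (-(k / eta**2) * (x - theta)
                             + 2.0 * alpha(t) * x + beta(t))
    dw = rng.normal(0.0, np.sqrt(dt))
    dx = k * (theta - x) * dt + eta * dw
    V += h * dx                                      # dV_t = h_t dx_t
    x += dx

print(V)                                             # terminal wealth for this sample path
```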

66 Parameter Estimation. Estimation can be done using maximum likelihood. Evaluation of parameter sensitivity can be done by Monte Carlo simulation. In real trading, it is better to be conservative about the parameters: better to underestimate the mean-reverting speed, and better to overestimate the noise. To account for parameter regime changes, we can use a hidden Markov chain model or a moving calibration window.
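A minimal sketch of the maximum-likelihood step (an addition, not the lecture's code): for an OU process the exact transition density is Gaussian AR(1), so the MLE of (k, \theta, \eta) reduces to a linear regression of x_{t+\Delta} on x_t; the test parameters below are illustrative.

```python
# Minimal sketch: MLE of (k, theta, eta) for dx = k(theta - x) dt + eta dW from
# discretely sampled data, via the exact AR(1) discretization.
import numpy as np

def fit_ou(x, dt):
    """Return MLE estimates (k, theta, eta) from a path sampled every dt."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)          # AR(1) regression x1 = a + b*x0 + eps
    resid = x1 - (a + b * x0)
    k = -np.log(b) / dt
    theta = a / (1.0 - b)
    eta = np.sqrt(np.var(resid) * 2.0 * k / (1.0 - b**2))
    return k, theta, eta

# quick self-test on a simulated path with known parameters
rng = np.random.default_rng(7)
k_true, theta_true, eta_true, dt, n = 5.0, 0.1, 0.3, 1.0 / 252, 50_000
b = np.exp(-k_true * dt)
sd = eta_true * np.sqrt((1.0 - b**2) / (2.0 * k_true))
x = np.empty(n)
x[0] = theta_true
for i in range(n - 1):
    x[i + 1] = theta_true + (x[i] - theta_true) * b + sd * rng.normal()

print(fit_ou(x, dt))                      # should be close to (5.0, 0.1, 0.3)
```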
