Weierstrass Institute for Applied Analysis and Stochastics
Simulation of conditional diffusions via forward-reverse stochastic representations
Christian Bayer and John Schoenmakers
Numerical methods for stochastic differential equations, TU Vienna, 2-4 September 2013
Mohrenstrasse 39, 10117 Berlin, Germany · Tel. +49 30 20372 0 · www.wias-berlin.de · 19.04.2013
Outline
1 Introduction
2 The forward-reverse method for transition density estimation
3 A forward-reverse representation for conditional expectations
4 The forward-reverse estimator
5 Numerical examples
1 Introduction
Conditional expectations

Given a : [0,T] × ℝ^d → ℝ^d, σ : [0,T] × ℝ^d → ℝ^{d×m} and a standard m-dimensional Brownian motion B, consider
  dX(s) = a(s, X(s)) ds + σ(s, X(s)) dB(s),  0 ≤ s ≤ T.

Goal: Given a grid D = {0 = s_0 < ⋯ < s_{K+L+1} = T}, f : ℝ^{(K+L)d} → ℝ and x, y ∈ ℝ^d, compute
  E[ f(X(s_1), …, X(s_{K+L})) | X(0) = x, X(T) = y ].
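As an aside (not part of the original slides), the SDE above can be simulated with a plain Euler-Maruyama scheme; the following minimal Python sketch, for d = m = 1, checks the scheme on an Ornstein-Uhlenbeck example whose mean is known in closed form.

```python
import math
import random

def euler_maruyama(a, sigma, x0, T, n_steps, rng):
    """Euler-Maruyama for dX = a(s, X) ds + sigma(s, X) dB (d = m = 1)."""
    dt = T / n_steps
    x, s = x0, 0.0
    for _ in range(n_steps):
        x += a(s, x) * dt + sigma(s, x) * rng.gauss(0.0, math.sqrt(dt))
        s += dt
    return x

# Sanity check on an Ornstein-Uhlenbeck process dX = -X ds + dB, X(0) = 1:
# the exact mean is E[X(T)] = exp(-T).
rng = random.Random(0)
n_paths = 5000
est = sum(euler_maruyama(lambda s, x: -x, lambda s, x: 1.0, 1.0, 1.0, 100, rng)
          for _ in range(n_paths)) / n_paths
print(est)  # close to exp(-1) ≈ 0.368
```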
Conditional expectations

Given a : [0,T] × ℝ^d → ℝ^d, σ : [0,T] × ℝ^d → ℝ^{d×m} and a standard m-dimensional Brownian motion B, consider
  dX(s) = a(s, X(s)) ds + σ(s, X(s)) dB(s),  0 ≤ s ≤ T.

Goal (extended): Given a grid D = {0 = s_0 < ⋯ < s_{K+L+1} = T}, f : ℝ^{(K+L)d} → ℝ and A, B ⊂ ℝ^d, compute
  E[ f(X(s_1), …, X(s_{K+L})) | X(0) ∈ A, X(T) ∈ B ],
with A, B of positive measure or d′-dimensional hyperplanes, 0 ≤ d′ ≤ d.
Bridge simulation using a modified drift (Delyon and Hu 2006)

  dY(s) = ( a(s, Y(s)) − (Y(s) − y)/(T − s) ) ds + σ(s, Y(s)) dB(s),  0 ≤ s < T,  Y(0) = x

- The SDE admits a unique solution Y on [0, T) with lim_{s→T} Y(s) = y.
- The law of Y on path space is absolutely continuous w.r.t. the law of X conditioned on X(0) = x, X(T) = y.
- The Radon-Nikodym derivative is given explicitly (up to a constant) as an integral involving Y(s), σ^{−1}(s, Y(s)) and quadratic covariations between them.
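The modified-drift SDE lends itself to a simple Euler discretization. Below is an illustrative sketch (not from the slides) for d = 1; since the drift blows up at T, the scheme stops one step early and pins the last point to y, which is justified by the convergence Y(s) → y.

```python
import math
import random

def delyon_hu_bridge(a, sigma, x, y, T, n_steps, rng):
    """Euler scheme for dY = (a(s,Y) - (Y - y)/(T - s)) ds + sigma(s,Y) dB,
    Y(0) = x, in d = 1.  The last point is pinned to y, since the modified
    drift forces Y(s) -> y as s -> T."""
    dt = T / n_steps
    s, z, path = 0.0, x, [x]
    for _ in range(n_steps - 1):          # stop one step before T
        drift = a(s, z) - (z - y) / (T - s)
        z += drift * dt + sigma(s, z) * rng.gauss(0.0, math.sqrt(dt))
        s += dt
        path.append(z)
    path.append(y)
    return path

# Bridge of dX = 0.5 ds + dB from x = 0 to y = 1 over [0, 1].
rng = random.Random(1)
path = delyon_hu_bridge(lambda s, z: 0.5, lambda s, z: 1.0, 0.0, 1.0, 1.0, 500, rng)
print(path[0], path[-1])  # 0.0 1.0: the path starts at x and ends at y
```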
Bridge simulation by time reversal (Bladt and Sørensen 2012)

Dimension d = 1:
- X^(1)_t: solution of the SDE started at X^(1)_0 = x
- X^(2)_t: solution of the SDE started at X^(2)_0 = y
- τ := inf{ 0 ≤ t ≤ T : X^(1)_t = X^(2)_{T−t} }
- Z_t := X^(1)_t for 0 ≤ t ≤ τ and Z_t := X^(2)_{T−t} for τ ≤ t ≤ T, on {τ ≤ T}

Theorem (Bladt and Sørensen 2009). The distribution of Z given {τ ≤ T} is equal to the distribution of a bridge process given that the bridge is hit by an independent realization of the SDE with initial distribution p(0, y, T, x) dx.

Crucial issue: the probability that the bridge and the time-reversed diffusion hit each other is hard to estimate.
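A rough illustration of the construction (again not from the slides): simulate the two diffusions on a common Euler grid, detect the crossing with the time-reversed second path by a sign change, and glue the pieces; attempts without a crossing before T are rejected. This sketch glosses over the regularity conditions under which the time-reversed path has the correct law, and crossing detection on a grid is only approximate.

```python
import math
import random

def bridge_attempt(a, sigma, x, y, T, n, rng):
    """One attempt of the construction: simulate X1 from x and X2 from y on
    the same grid, time-reverse X2, and glue the paths at the first
    (grid-detected) crossing; return None if no crossing occurs before T."""
    dt = T / n
    x1, x2 = [x], [y]
    for i in range(n):
        s = i * dt
        x1.append(x1[-1] + a(s, x1[-1]) * dt + sigma(s, x1[-1]) * rng.gauss(0.0, math.sqrt(dt)))
        x2.append(x2[-1] + a(s, x2[-1]) * dt + sigma(s, x2[-1]) * rng.gauss(0.0, math.sqrt(dt)))
    rev = x2[::-1]                         # rev[i] corresponds to X2(T - t_i)
    sign0 = x1[0] - rev[0]
    for i in range(1, n + 1):
        if (x1[i] - rev[i]) * sign0 <= 0:  # sign change: crossing time tau
            return x1[:i] + rev[i:]
    return None

# Rejection sampling for an OU bridge dX = -X ds + dB from 0 to 1 on [0, 1].
rng = random.Random(2)
z = None
for _ in range(1000):                      # retry until tau <= T occurs
    z = bridge_attempt(lambda s, u: -u, lambda s, u: 1.0, 0.0, 1.0, 1.0, 200, rng)
    if z is not None:
        break
print(z[0], z[-1])  # 0.0 1.0: endpoints of the glued path
```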
2 The forward-reverse method for transition density estimation
Forward process

Notation: X_{t,x}(s), the solution of the SDE started at X_{t,x}(t) = x, t < s.

Generator of the SDE:
  L_t f(x) = ⟨∇f(x), a(t, x)⟩ + ½ Σ_{i,j=1}^d b^{ij}(t, x) ∂_{x_i}∂_{x_j} f(x),
where b(t, x) = σ(t, x) σ(t, x)^T.

Transition density: p(t, x, T, y).

Forward representation (Feynman-Kac formula):
  u(t, x) = E[ f(X_{t,x}(T)) ] = ∫ p(t, x, T, y) f(y) dy =: I(f),
  ∂_t u(t, x) + L_t u(t, x) = 0,  u(T, x) = f(x).
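A quick Monte Carlo sanity check of the forward representation (an illustration added here, not in the slides): for standard Brownian motion (a = 0, σ = 1) and f(x) = x², the solution of the backward PDE is u(t, x) = x² + (T − t).

```python
import math
import random

# Check u(t, x) = E[f(X_{t,x}(T))] for standard Brownian motion and
# f(x) = x^2, where the closed-form solution is u(t, x) = x^2 + (T - t).
rng = random.Random(3)
t, T, x = 0.25, 1.0, 0.7
N = 100000
# X_{t,x}(T) can be sampled exactly as x + sqrt(T - t) * Z with Z standard normal.
est = sum((x + rng.gauss(0.0, math.sqrt(T - t))) ** 2 for _ in range(N)) / N
exact = x * x + (T - t)
print(est, exact)  # both ≈ 1.24
```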
Reverse process

Consider v(s, y) := ∫ g(x) p(t, x, s, y) dx.

Fokker-Planck equation:
  ∂_s p(t, x, s, y) = ½ Σ_{i,j=1}^d ∂_{y_i}∂_{y_j}( b^{ij}(s, y) p(t, x, s, y) ) − Σ_{i=1}^d ∂_{y_i}( a^i(s, y) p(t, x, s, y) )

Cauchy problem for v:
  ∂_s v(s, y) = ½ Σ_{i,j=1}^d ∂_{y_i}∂_{y_j}( b^{ij}(s, y) v(s, y) ) − Σ_{i=1}^d ∂_{y_i}( a^i(s, y) v(s, y) ),  v(t, y) = g(y)

Time reversal: with ṽ(s, y) := v(t + T − s, y), b̃(s, y) := b(t + T − s, y), ã(s, y) := a(t + T − s, y), ṽ solves the backward problem
  ∂_s ṽ(s, y) + ½ Σ_{i,j=1}^d ∂_{y_i}∂_{y_j}( b̃^{ij}(s, y) ṽ(s, y) ) − Σ_{i=1}^d ∂_{y_i}( ã^i(s, y) ṽ(s, y) ) = 0,  ṽ(T, y) = g(y)

In non-divergence form:
  ∂_s ṽ + ½ Σ_{i,j=1}^d b̃^{ij}(s, y) ∂_{y_i}∂_{y_j} ṽ + Σ_{i=1}^d α^i(s, y) ∂_{y_i} ṽ + c(s, y) ṽ = 0,  ṽ(T, y) = g(y),
where
  α^i(s, y) = Σ_{j=1}^d ∂_{y_j} b̃^{ij}(s, y) − ã^i(s, y),
  c(s, y) = ½ Σ_{i,j=1}^d ∂_{y_i}∂_{y_j} b̃^{ij}(s, y) − Σ_{i=1}^d ∂_{y_i} ã^i(s, y)

Reverse representation (Feynman-Kac formula):
  Ĩ(g) := v(T, y) = ṽ(t, y) = E[ g(Y_{t,y}(T)) 𝒴_{t,y}(T) ],
where
  dY(s) = α(s, Y(s)) ds + σ̃(s, Y(s)) dB(s),  Y(t) = y,
  d𝒴(s) = c(s, Y(s)) 𝒴(s) ds,  𝒴(t) = 1,
with σ̃(s, y) := σ(t + T − s, y).
Forward-reverse representation

Theorem (Milstein, Schoenmakers, Spokoiny 2004). Let X and (Y, 𝒴) be independent and t < t* < T. Then
  E[ f(X_{t,x}(t*), Y_{t*,y}(T)) 𝒴_{t*,y}(T) ] = ∫∫ p(t, x, t*, x′) f(x′, y′) p(t*, y′, T, y) dx′ dy′ =: J(f).

Proof.
- Condition on X_{t,x}(t*) and apply the reverse representation.
- Integrate with respect to the law of X_{t,x}(t*).
Forward-reverse estimator for p(t, x, T, y)

- Formally inserting f(x′, y′) = δ_0(x′ − y′) gives J(f) = p(t, x, T, y).
- Use a kernel, f(x′, y′) = ε^{−d} K((x′ − y′)/ε), with bandwidth ε > 0.
- Define the estimator
    p̂_{N,M,ε} := (1/(M N ε^d)) Σ_{n=1}^N Σ_{m=1}^M 𝒴^m_{t*,y}(T) K( (X^n_{t,x}(t*) − Y^m_{t*,y}(T)) / ε ).

Theorem (Milstein, Schoenmakers, Spokoiny 2004). Assume the coefficients of the SDE are C^∞-bounded and satisfy a uniform ellipticity (or uniform Hörmander) condition.
- If d ≤ 4, choose M = N and ε_N = C N^{−1/4}; then the MSE of p̂_{N,N,ε_N} is of order N^{−1}.
- If d > 4, choose M = N and ε_N = C N^{−2/(4+d)}; then the MSE of p̂_{N,N,ε_N} is of order N^{−8/(4+d)}.
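For standard Brownian motion the reverse process is again a Brownian motion and c = 0, so 𝒴 ≡ 1 and the estimator reduces to a two-sample kernel average. The sketch below (an added illustration for d = 1, with a uniform kernel and illustrative parameter choices) can be checked against the exact Gaussian transition density.

```python
import math
import random

# Forward-reverse density estimator for standard Brownian motion in d = 1:
# the reverse process is a Brownian motion and the weight process is 1.
# Exact density: p(0, x, T, y) = Gaussian with mean x and variance T.
rng = random.Random(4)
x, y, T, t_star = 0.0, 1.0, 1.0, 0.5
N = M = 1000
eps = N ** -0.25                                  # bandwidth rule for d <= 4

X = [x + rng.gauss(0.0, math.sqrt(t_star)) for _ in range(N)]      # X_{0,x}(t*)
Y = [y + rng.gauss(0.0, math.sqrt(T - t_star)) for _ in range(M)]  # Y_{t*,y}(T)

def K(u):                                         # uniform kernel on [-1, 1]
    return 0.5 if abs(u) <= 1.0 else 0.0

p_hat = sum(K((xn - ym) / eps) for xn in X for ym in Y) / (N * M * eps)
p_exact = math.exp(-((y - x) ** 2) / (2.0 * T)) / math.sqrt(2.0 * math.pi * T)
print(p_hat, p_exact)  # both ≈ 0.24
```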
3 A forward-reverse representation for conditional expectations
Generalization to multiple time steps

Fix a grid 0 < t* < t_1 < ⋯ < t_L := T.

Theorem (generalization of MSS 2004). Introduce the reordered grid t* < t̂_1 < ⋯ < t̂_L = T defined by t̂_i := T + t* − t_{L−i}, i = 1, …, L. Then (with t_0 := t* and y_{L+1} := y)
  E[ f(Y_{t*,y}(T), Y_{t*,y}(t̂_{L−1}), …, Y_{t*,y}(t̂_1)) 𝒴_{t*,y}(T) ]
    = ∫_{ℝ^{dL}} f(y_1, y_2, …, y_L) Π_{i=1}^L p(t_{i−1}, y_i, t_i, y_{i+1}) dy_1 ⋯ dy_L.
Generalization to multiple time steps, cont.

Key elements of the original proof:
- We first proved the autonomous case by direct induction on L.
- Direct induction on L fails in the non-autonomous case, due to the dependence of Y_{t*,y} on both t* and T.
- For the non-autonomous case we lifted the SDE to an autonomous one in the standard way, adding noise δ dW to preserve non-degeneracy.
- Apply the autonomous result and let δ → 0.
- About five pages of proof…
Generalization to multiple time steps, cont.

New proof. Get rid of the dependence of Y_{t*,y} on t* by observing that
  Y_{t*,y}(s) = Y_{0,y}(s − t*) =: Y_y(s − t*),  𝒴_{t*,y}(s) = 𝒴_{0,y}(s − t*) =: 𝒴_y(s − t*),  t* ≤ s ≤ T,
and consequently
  E[ f(Y_y(T − t*)) 𝒴_y(T − t*) ] = ∫ p(t*, y′, T, y) f(y′) dy′
holds for any 0 < t* < T in terms of one and the same reverse process Y_y.

Direct induction works… only two pages!
A forward-reverse representation of the conditional expectation

- Consider a grid 0 = s_0 < ⋯ < s_{K+L} = T.
- Choose t* := s_K and rename t_i := s_{i+K}, 0 ≤ i ≤ L.
- Assume p(s_0, x, T, y) > 0.
- Let K : ℝ^d → ℝ be a kernel with ∫ K(u) du = 1.

Theorem. Let t̂_i := T + t* − t_{L−i}, X_t := X_{s_0,x}(t), Y_t := Y_{t*,y}(t), 𝒴_t := 𝒴_{t*,y}(t), with X and (Y, 𝒴) independent. Then
  E[ g(X_{s_1}, …, X_{s_{K+L−1}}) | X_T = y ]
    = (1 / p(s_0, x, T, y)) lim_{ε→0} E[ g(X_{s_1}, …, X_{t*}, Y_{t̂_{L−1}}, …, Y_{t̂_1}) ε^{−d} K( (Y_T − X_{t*}) / ε ) 𝒴_T ].
4 The forward-reverse estimator
Assumptions

Conditions on the diffusion process:
- The transition densities p(s, x, t, y) and q(s, x, t, y) of X and Y exist and satisfy
    |∂_x^α ∂_y^β p(s, x, t, y)| ≤ C (t − s)^{−ν} exp( −C₁ |y − x|² / (2(t − s)) )
  for multi-indices |α| + |β| ≤ 2 (and similarly for q). Moreover, p(s_0, x, T, y) > 0.

Conditions on the kernel:
- ∫ K(v) dv = 1, ∫ v K(v) dv = 0 (second order)
- |K(v)| ≤ C exp(−α |v|^{2+β}), C, α, β > 0, v ∈ ℝ^d

Conditions on the function:
- g and its first and second derivatives are polynomially bounded.
The forward-reverse estimator for the conditional expectation

Stochastic representation:
  H := E[ g(X_{s_1}, …, X_{s_{K+L−1}}) | X_T = y ]
     = lim_{ε→0} E[ g(X_{s_1}, …, X_{t*}, Y_{t̂_{L−1}}, …, Y_{t̂_1}) ε^{−d} K( (Y_T − X_{t*}) / ε ) 𝒴_T ] / p(s_0, x, T, y)

Estimator: for (X^n) i.i.d. and (Y^m, 𝒴^m) i.i.d.,
  Ĥ_{ε,M,N} := [ Σ_{n=1}^N Σ_{m=1}^M g(X^n_{s_1}, …, X^n_{s_K}, Y^m_{t̂_{L−1}}, …, Y^m_{t̂_1}) K( (Y^m_T − X^n_{t*}) / ε ) 𝒴^m_T ]
                / [ Σ_{n=1}^N Σ_{m=1}^M K( (Y^m_T − X^n_{t*}) / ε ) 𝒴^m_T ]
                × 1{ (N M ε^d)^{−1} Σ_{n=1}^N Σ_{m=1}^M K( (Y^m_T − X^n_{t*}) / ε ) 𝒴^m_T > p/2 }

Theorem. Assume p(s_0, x, T, y) ≥ p > 0 and choose M = N.
- For d ≤ 4, set ε_N = C N^{−1/4}; then E[(H − Ĥ_{N,N,ε_N})²] = O(N^{−1}).
- For d > 4, set ε_N = C N^{−2/(4+d)}; then E[(H − Ĥ_{N,N,ε_N})²] = O(N^{−8/(4+d)}).
Implementation

Assume K has compact support in B_r(0).

Algorithm:
1. Simulate N independent trajectories (X^n)_{n=1}^N started at X_{s_0} = x and (Y^m)_{m=1}^N started at Y_{t*} = y, on the grids D ∩ [s_0, t*] and D ∩ [t*, T], respectively.
2. For each fixed m ∈ {1, …, N}, find the sub-sample
     { X^{n^m_k}_{s_0,x}(t*) : k = 1, …, l_m } := { X^n_{s_0,x}(t*) : n = 1, …, N } ∩ B_{rε}(Y^m_{t*,y}(T)).
3. Evaluate
     Ĥ_{ε,N,N} = [ Σ_{m=1}^N Σ_{k=1}^{l_m} g(X^{n^m_k}_{s_1}, …, X^{n^m_k}_{s_K}, Y^m_{t̂_{L−1}}, …, Y^m_{t̂_1}) K( (Y^m_T − X^{n^m_k}_{t*}) / ε ) 𝒴^m_T ]
       / [ Σ_{m=1}^N Σ_{k=1}^{l_m} K( (Y^m_T − X^{n^m_k}_{t*}) / ε ) 𝒴^m_T ].
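An added sketch of the algorithm in the simplest Brownian-motion case (d = 1, K = 1 intermediate time, g(X_{s_1}) = X(t*), uniform kernel, 𝒴 ≡ 1), with sorting plus binary search standing in for the box-ordering step; the exact conditional expectation is the Brownian-bridge mean x + (t*/T)(y − x).

```python
import bisect
import math
import random

# Forward-reverse estimate of H = E[X(T/2) | X(0) = x, X(T) = y] for
# standard Brownian motion.  Exact value: x + (t*/T)(y - x) = 0.5 here.
rng = random.Random(5)
x, y, T, t_star = 0.0, 1.0, 1.0, 0.5
N = 2000
eps, r = 0.1, 1.0                        # bandwidth; kernel support radius

Xf = sorted(x + rng.gauss(0.0, math.sqrt(t_star)) for _ in range(N))   # X^n(t*), sorted
Yr = [y + rng.gauss(0.0, math.sqrt(T - t_star)) for _ in range(N)]     # Y^m(T)

num = den = 0.0
for ym in Yr:
    # neighbor search replacing the box-ordering step: binary search for
    # the forward points inside the ball of radius r*eps around ym
    lo = bisect.bisect_left(Xf, ym - r * eps)
    hi = bisect.bisect_right(Xf, ym + r * eps)
    for k in range(lo, hi):
        w = 0.5                          # uniform kernel value; weight is 1
        num += Xf[k] * w                 # g(X_{s_1}) = X(t*)
        den += w
H_hat = num / den
print(H_hat)  # ≈ 0.5
```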
Complexity

Assume that X_{s_0,x}(s) and (Y_{t*,y}(t), 𝒴_{t*,y}(t)) can be simulated exactly at constant cost.
- Cost of the simulation step: O(N)
- Cost of the box-ordering step: O(N log N)
- Cost of the evaluation step: O(N² ε^d)

Complexity estimate:
- Case d ≤ 4: choose ε = (log N / N)^{1/d}; achieve MSE O(N^{−1}) at cost O(N log N).
- Case d > 4: choose ε = N^{−2/(4+d)}; achieve MSE O(N^{−8/(4+d)}) at cost O(N log N).
5 Numerical examples
Conditional expectations of the realized variance

Heston model:
  dS_t = μ S_t dt + √v_t S_t dB¹_t,
  dv_t = (α v_t + β) dt + ξ √v_t ( ρ dB¹_t + √(1 − ρ²) dB²_t ),
i.e., with x = (x₁, x₂) = (S, v),
  a(x) = ( μ x₁, α x₂ + β ),
  σ(x) = [ √x₂ x₁, 0 ; ξρ √x₂, ξ √(1 − ρ²) √x₂ ].

Reverse drift and potential:
  α(x) = ( (2x₂ + ρξ − μ) x₁, (ρξ − α) x₂ + ξ² − β ),
  c(x) = x₂ + ρξ − μ − α.

Realized variance: RV := Σ_{i=1}^{30} ( log S_{t_{i+1}} − log S_{t_i} )².

Objective: E[RV | S_T = s], T = 1/12.
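A sketch of the forward simulation only (an added illustration: Euler with full truncation for the variance; the parameter values below are illustrative assumptions, not those used in the talk), computing one realized-variance sample on the 30-step grid.

```python
import math
import random

# Heston forward simulation by an Euler scheme with full truncation for
# the variance, and one sample of the realized variance RV over a 30-step
# grid on [0, 1/12].  Parameters are illustrative assumptions.
rng = random.Random(6)
mu, alpha, beta, xi, rho = 0.05, -2.0, 0.08, 0.3, -0.5
T, n = 1.0 / 12.0, 30
dt = T / n

def heston_log_path(s0, v0):
    logs, v, path = math.log(s0), v0, [math.log(s0)]
    for _ in range(n):
        vp = max(v, 0.0)                 # full truncation: use v+ in the coefficients
        dB1 = rng.gauss(0.0, math.sqrt(dt))
        dB2 = rng.gauss(0.0, math.sqrt(dt))
        logs += (mu - 0.5 * vp) * dt + math.sqrt(vp) * dB1
        v += (alpha * vp + beta) * dt + xi * math.sqrt(vp) * (rho * dB1 + math.sqrt(1.0 - rho * rho) * dB2)
        path.append(logs)
    return path

lp = heston_log_path(100.0, 0.04)
rv = sum((lp[i + 1] - lp[i]) ** 2 for i in range(n))   # realized variance
print(rv)  # a small positive number, on the order of v0 * T ≈ 0.003
```

Conditioning this forward simulation on S_T = s is exactly what the forward-reverse estimator of the previous section provides, using the reverse drift and potential stated above.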
Numerical experiment

[Figure: relative MSE (axis range 1e−04 to 1e+00) versus the number of samples N (5 to 5000), log-log scale, for two settings labeled "t x = 15" and "t x = 29".]
Outlook: application to EM algorithms (nutshell sketch)

An EM algorithm for parameter estimation of a diffusion can be decomposed into problems of the following structure. Consider a diffusion (X_{0,x}(s))_{0 ≤ s ≤ T} under P_θ with unknown parameter θ, observed at X_{0,x}(T) = y. Let g(x_1, …, x_K, θ) be a suitable known (likelihood) function.

Goal: for a finer grid 0 = s_0 < ⋯ < s_{K+1} = T, compute the estimator
  θ* := argmax_θ E_{θ*}[ g(X_{0,x}(s_1), …, X_{0,x}(s_K), θ) | X_{0,x}(T) = y ]
via the following iterative algorithm: start with an initial guess θ_0; after having determined θ_n, compute
  θ_{n+1} = argmax_θ E_{θ_n}[ g(X_{0,x}(s_1), …, X_{0,x}(s_K), θ) | X_{0,x}(T) = y ].
References

- Bayer, C., Schoenmakers, J.: Simulation of conditional diffusions via forward-reverse stochastic representations, WIAS preprint 1764, 2013. New version at www.wias-berlin.de/people/schoenma/
- Bladt, M., Sørensen, M.: Simple simulation of diffusion bridges with application to likelihood inference for diffusions, preprint, 2012.
- Delyon, B., Hu, Y.: Simulation of conditioned diffusion and application to parameter estimation, Stochastic Process. Appl. 116(11), 1660-1675, 2006.
- Kusuoka, S., Stroock, D.: Applications of the Malliavin calculus, Part II, J. Fac. Sci. Univ. Tokyo Sect. IA 32, 1-76, 1985.
- Milstein, G., Schoenmakers, J. G. M., Spokoiny, V.: Transition density estimation for stochastic differential equations via forward-reverse representations, Bernoulli 10(2), 281-312, 2004.