Uniform approximation of the Cox-Ingersoll-Ross process
Uniform approximation of the Cox-Ingersoll-Ross process

G.N. Milstein, J.G.M. Schoenmakers

arXiv preprint v1 [math.PR], 3 Dec 2013. December 4, 2013

Abstract. The Doss-Sussmann (DS) approach is used for uniform simulation of the Cox-Ingersoll-Ross (CIR) process. The DS formalism allows one to express trajectories of the CIR process through solutions of some ordinary differential equation (ODE) depending on realizations of the Wiener process involved. By simulating the first-passage times of the increments of the Wiener process to the boundary of an interval and solving the ODE, we uniformly approximate the trajectories of the CIR process. In this respect special attention is paid to the simulation of trajectories near zero. From a conceptual point of view the proposed method gives a better quality of approximation (from a path-wise point of view) than standard, or even exact, simulation of the SDE at some discrete time grid.

AMS 2000 subject classification. Primary 65C30; secondary 60H35.

Keywords. Cox-Ingersoll-Ross process, Doss-Sussmann formalism, Bessel functions, confluent hypergeometric equation.

1 Introduction

The Cox-Ingersoll-Ross process $V(t)$ is determined by the following stochastic differential equation (SDE)
$$dV(t) = k(\lambda - V(t))\,dt + \sigma\sqrt{V(t)}\,dw(t), \qquad V(t_0) = V_0, \qquad (1)$$
where $k$, $\lambda$, $\sigma$ are positive constants, and $w$ is a scalar Brownian motion. Due to [6] this process has become very popular in financial mathematical applications. The CIR process is used in particular as the volatility process in the Heston model [13]. It is known ([14], [15]) that for $V_0 > 0$ there exists a unique strong solution $V_{t_0,V_0}(t)$ of (1) for all $t \ge t_0 \ge 0$. The CIR process $V(t) = V_{t_0,V_0}(t)$ is positive in the case $2k\lambda \ge \sigma^2$ and nonnegative in the case $2k\lambda < \sigma^2$. Moreover, in the latter case the origin is a reflecting boundary. As a matter of fact, (1) does not satisfy the global Lipschitz assumption.
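The lack of global Lipschitz continuity and the positivity requirement are not academic concerns: the most naive discretization of (1) can leave the state space in a single step. The following minimal sketch (our own illustration, not part of the paper; parameter values are chosen only for demonstration) shows an explicit Euler step producing a negative state:

```python
import math

def euler_step(v, dw, k, lam, sigma, dt):
    """One explicit Euler step for the CIR SDE dV = k(lam - V) dt + sigma*sqrt(V) dW.
    Requires v >= 0 on input; the output, however, may well be negative, which is
    exactly the difficulty with naive discretizations of equation (1)."""
    return v + k * (lam - v) * dt + sigma * math.sqrt(v) * dw

# A single step from a small positive state with an unlucky (but entirely
# possible) negative Brownian increment already exits the state space:
v_next = euler_step(v=0.01, dw=-0.5, k=0.75, lam=1.0, sigma=1.0, dt=0.01)
# v_next < 0, so sqrt(v_next) in the next step would fail.
```

This is why CIR-specific schemes (truncation, implicitness, exact sampling, or, as in this paper, the Doss-Sussmann route) are needed.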
(G.N. Milstein) Ural Federal University, Lenin Str. 51, Ekaterinburg, Russia; Grigori.Milstein@usu.ru
(J.G.M. Schoenmakers) Weierstrass-Institut für Angewandte Analysis und Stochastik, Mohrenstrasse 39, Berlin, Germany; schoenma@wias-berlin.de

The difficulties arising in a simulation method for (1) are connected with this fact and with the natural requirement of preserving nonnegativity of the approximations. Many approximation
methods for the CIR process have been proposed. For an extensive list of articles on this subject we refer to [3] and [7]. Besides [3] and [7] we also refer to [1, 2, 11, 12], where a number of discretization schemes for the CIR process can be found. Further we note that in [17] a weakly convergent fully implicit method is implemented for the Heston model. Exact simulation of (1) is considered in [5, 9] (see [3] as well).

In this paper, we consider uniform pathwise approximation of $V(t)$ on an interval $[t_0, t_0+T]$ using the Doss-Sussmann transformation ([8], [20], [19]), which allows for expressing any trajectory of $V(t)$ by the solution of some ordinary differential equation that depends on the realization of $w(t)$. The approximation $\bar V(t)$ will be uniform in the sense that the path-wise error is uniformly bounded, i.e.,
$$\sup_{t_0 \le t \le t_0+T} |V(t) - \bar V(t)| \le r \quad \text{almost surely}, \qquad (2)$$
where $r > 0$ is fixed in advance. In fact, by simulating the first-passage times of the increments of the Wiener process to the boundary of an interval and solving this ODE, we approximately construct a generic trajectory of $V(t)$. This kind of simulation is simpler than the one proposed in [5] and moreover has the advantage of its uniform nature.

Let us consider the simulation of a standard Brownian motion $W$ on a fixed time grid $t_0 < t_1 < \ldots < t_n = t_0 + T$. Although $W$ may even be exactly simulated at the grid points, the usual piecewise linear interpolation
$$\bar W(t) = \frac{t_{i+1}-t}{t_{i+1}-t_i}\, W(t_i) + \frac{t-t_i}{t_{i+1}-t_i}\, W(t_{i+1})$$
is not uniform in the sense of (2). Put differently, for any (large) positive number $A$, there is always a positive probability (though possibly small) that
$$\sup_{t_0 \le t \le t_0+T} |W(t) - \bar W(t)| > A.$$
Therefore, for path-dependent applications for instance, such a standard, even exact, simulation method may be undesirable and a uniform method preserving (2) may be preferred. We note that the original DS results rely on a global Lipschitz assumption that is not fulfilled for (1).
We therefore introduce a version of the DS formalism that yields a corresponding ODE whose solutions are defined on random time intervals. If $V$ gets close to zero, however, the ODE becomes intractable for numerical integration and so, for the parts of a trajectory $V(t)$ that are close to zero, we are forced to use some other (not DS) approach. For such parts we here propose a different uniform simulation method. Another restriction is connected with the condition $\Delta := (4k\lambda - \sigma^2)/8 > 0$. We note that the case $\Delta > 0$ is more general than the case $2k\lambda \ge \sigma^2$ that ensures positivity of $V(t)$, and stress that in the literature virtually all convergence proofs for methods for numerical integration of (1) are based on the assumption $2k\lambda \ge \sigma^2$. We expect that the results obtained here for $\Delta > 0$ can be extended to the case $\Delta \le 0$, however in a highly nontrivial way. Therefore, the case $\Delta \le 0$ will be considered in a subsequent work.

The next two sections are devoted to the DS formalism in connection with (1) and to some auxiliary propositions. In Sections 4 and 5 we deal with the one-step approximation and the convergence of the proposed method, respectively. Section 6 is dedicated to the uniform construction of trajectories close to zero.
2 The Doss-Sussmann transformation

2.1 Due to the Doss-Sussmann approach ([8], [14], [19], [20]), the solution of (1) may be expressed in the form
$$V(t) = F(X(t), w(t)), \qquad (3)$$
where $F = F(x,y)$ is some deterministic function and $X(t)$ is the solution of some ordinary differential equation depending on the part $w(s)$, $0 \le s \le t$, of the realization $w(\cdot)$ of the Wiener process $w(t)$. Let us recall the Doss-Sussmann formalism according to [19], V.8. In [19] one considers the Stratonovich SDE
$$dV(t) = b(V)\,dt + \gamma(V) \circ dw(t). \qquad (4)$$
The function $F = F(x,y)$ is found from the equation
$$\frac{\partial F}{\partial y} = \gamma(F), \qquad F(x,0) = x, \qquad (5)$$
and $X(t)$ is found from the ODE
$$\frac{dX}{dt} = \frac{b(F(X(t), w(t)))}{(\partial F/\partial x)(X(t), w(t))}, \qquad X(0) = V(0). \qquad (6)$$
It turns out that application of the DS formalism after the Lamperti transformation $U(t) = \sqrt{V(t)}$ (see [7]) leads to simpler equations. The Lamperti transformation yields the following SDE with additive noise
$$dU = \Big(\frac{\Delta}{U} - \frac{k}{2}\,U\Big)\,dt + \frac{\sigma}{2}\,dw, \qquad U(0) = \sqrt{V(0)} > 0, \quad \text{where} \qquad (7)$$
$$\Delta = \frac{4k\lambda - \sigma^2}{8}. \qquad (8)$$
Let us seek the solution of (7) in the form
$$U(t) = G(Y(t), w(t)) \qquad (9)$$
in accordance with (3)-(6). Because the Ito and Stratonovich forms of equation (7) coincide, we have $b(U) = \Delta/U - (k/2)U$, $\gamma(U) = \sigma/2$. The function $G = G(y,z)$ is found from the equation $\partial G/\partial z = \sigma/2$, $G(y,0) = y$, i.e.,
$$G(y,z) = y + \frac{\sigma}{2}\,z, \qquad (10)$$
and $Y(t)$ is found from the ODE
$$\frac{dY}{dt} = \frac{\Delta}{Y + \frac{\sigma}{2} w(t)} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\,w(t)\Big), \qquad Y(0) = U(0) = \sqrt{V(0)} > 0. \qquad (11)$$
From (9), (10), and the solution of (11), we formally obtain the solution $U(t)$ of (7):
$$U(t) = Y(t) + \frac{\sigma}{2}\,w(t). \qquad (12)$$
Hence
$$V(t) = U^2(t) = \Big(Y(t) + \frac{\sigma}{2}\,w(t)\Big)^2. \qquad (13)$$

2.2 Since the Doss-Sussmann results rely on a global Lipschitz assumption that is not fulfilled for (1), solution (13) has to be considered only formally. In this section we therefore give a direct proof of the following more precise result.

Proposition 1 Let $Y(0) = U(0) = \sqrt{V(0)} > 0$. Let $\tau$ be the following stopping time: $\tau := \inf\{t : V(t) = 0\}$. Then equation (11) has a unique solution $Y(t)$ on the interval $[0, \tau)$, the solution $U(t)$ of (7) is expressed by formula (12) on this interval, and $V(t)$ is expressed by (13).

Proof. Let $(w(t), V(t))$ be the solution of the SDE system
$$dw = dw(t), \qquad dV = k(\lambda - V)\,dt + \sigma\sqrt{V(t)}\,dw(t),$$
which satisfies the initial conditions $w(0) = 0$, $V(0) > 0$. Then $U(t) = \sqrt{V(t)} > 0$ is a solution of (7) on the interval $[0,\tau)$. Consider the function
$$Y(t) = U(t) - \frac{\sigma}{2}\,w(t), \qquad 0 \le t < \tau.$$
Clearly, $Y(t) + \frac{\sigma}{2} w(t) > 0$ on $[0,\tau)$. Due to Ito's formula, we get
$$dY(t) = dU(t) - \frac{\sigma}{2}\,dw(t) = \Big[\frac{\Delta}{Y + \frac{\sigma}{2} w(t)} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\,w(t)\Big)\Big]\,dt,$$
i.e., the function $U(t) - \frac{\sigma}{2} w(t)$ is a solution of (11). The uniqueness of $Y(t)$ follows from the uniqueness of $V(t)$.

2.3 So far we were starting at the moment $t = 0$. It is useful to consider the Doss-Sussmann transformation with an arbitrary initial time $t_0 > 0$, which may even be a stopping time (for example, $0 \le t_0 < \tau$). In this case we obtain, instead of (11), for
$$Y = Y(t; t_0) = U(t) - \frac{\sigma}{2}\big(w(t) - w(t_0)\big) = \sqrt{V(t)} - \frac{\sigma}{2}\big(w(t) - w(t_0)\big), \qquad t_0 \le t < \tau,$$
the equation
$$\frac{dY}{dt} = \frac{\Delta}{Y + \frac{\sigma}{2}(w(t) - w(t_0))} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\big(w(t) - w(t_0)\big)\Big), \qquad Y(t_0; t_0) = \sqrt{V(t_0)}, \quad t_0 \le t < \tau, \qquad (14)$$
with $\Delta$ given by (8). Clearly,
$$V(t) = \Big(Y(t; t_0) + \frac{\sigma}{2}\big(w(t) - w(t_0)\big)\Big)^2, \qquad t_0 \le t < \tau. \qquad (15)$$
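Formulas (11)-(13) translate directly into a numerical recipe once a realization of $w$ is fixed: integrate the ODE (11) and square the sum (12). The following sketch (our own illustration; the paper's actual scheme, developed below, deliberately avoids direct numerical integration of the non-smooth ODE) applies explicit Euler to (11) for a Brownian path given on a fine grid:

```python
import math

def cir_via_doss_sussmann(V0, k, lam, sigma, w, dt):
    """Given a path w[i] = w(t_i) on a uniform grid of step dt, integrate the
    Doss-Sussmann ODE (11), dY/dt = Delta/(Y + (sigma/2) w) - (k/2)(Y + (sigma/2) w),
    by explicit Euler and return V(t_i) = (Y(t_i) + (sigma/2) w(t_i))**2 as in (13).
    Valid only while V stays away from zero (Y + (sigma/2) w > 0)."""
    Delta = (4.0 * k * lam - sigma ** 2) / 8.0
    Y = math.sqrt(V0)                      # Y(0) = U(0) = sqrt(V(0))
    V = [V0]
    for i in range(len(w) - 1):
        shifted = Y + 0.5 * sigma * w[i]   # U(t_i) = Y(t_i) + (sigma/2) w(t_i)
        Y += dt * (Delta / shifted - 0.5 * k * shifted)
        V.append((Y + 0.5 * sigma * w[i + 1]) ** 2)
    return V
```

Note that the returned values are squares, hence nonnegative by construction — the point of the transformation. For the degenerate path $w \equiv 0$, (11) reduces to the ODE (16) below, whose exact solution (17) can serve as a correctness check.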
3 Auxiliary propositions

3.1 In view of (14), let us consider solutions of the ordinary differential equation
$$\frac{dy^0}{dt} = \frac{\Delta}{y^0} - \frac{k}{2}\,y^0, \qquad y^0(t_0) = y_0 > 0, \quad t \ge t_0 \ge 0, \qquad (16)$$
which are given by
$$y^0(t) = y^0_{t_0,y_0}(t) = \Big[y_0^2\,e^{-k(t-t_0)} + \frac{2\Delta}{k}\big(1 - e^{-k(t-t_0)}\big)\Big]^{1/2}, \qquad t \ge t_0. \qquad (17)$$
In the case $\Delta > 0$, i.e. $4k\lambda > \sigma^2$, we have: if $y_0 > \sqrt{2\Delta/k}$ then $y^0_{t_0,y_0}(t) \downarrow \sqrt{2\Delta/k}$ as $t \to \infty$, and if $0 < y_0 < \sqrt{2\Delta/k}$ then $y^0_{t_0,y_0}(t) \uparrow \sqrt{2\Delta/k}$ as $t \to \infty$. Further, $y^0(t) \equiv \sqrt{2\Delta/k}$ is a solution of (16). In the case $\Delta = 0$, the solution $y^0_{t_0,y_0}(t)$ tends to $0$ as $t \to \infty$ for any $y_0 > 0$. We note that the case $\Delta \ge 0$ is more general than the case $2k\lambda \ge \sigma^2$ (we recall that in the latter case $V(t) > 0$, $t \ge t_0$). In the case $\Delta < 0$, the solution $y^0_{t_0,y_0}(t)$ is decreasing (convexly for $y_0$ not too large). It attains zero at the moment $t^*$ given by
$$t^* = t_0 + \frac{1}{k}\,\ln\frac{y_0^2 - 2\Delta/k}{-2\Delta/k}, \qquad (18)$$
and $y^0_{t_0,y_0}(t^*) = 0$. In what follows we deal with the case
$$4k\lambda - \sigma^2 \ge 0. \qquad (19)$$
Our next goal is to obtain estimates for solutions of the equation
$$\frac{dy}{dt} = \frac{\Delta}{y + \frac{\sigma}{2}\varphi(t)} - \frac{k}{2}\Big(y + \frac{\sigma}{2}\,\varphi(t)\Big), \qquad y(t_0) = y_0, \quad t_0 \le t \le t_0 + \theta \qquad (20)$$
(cf. (14)) for a given continuous function $\varphi(t)$.

Lemma 2 Let $\Delta \ge 0$. Let $y_i(t)$, $i = 1, 2$, be two solutions of (20) such that $y_i(t) + \frac{\sigma}{2}\varphi(t) > 0$ on $[t_0, t_0+\theta]$, for some $\theta$ with $0 \le \theta \le T$. Then
$$|y_2(t) - y_1(t)| \le |y_2(t_0) - y_1(t_0)|, \qquad t_0 \le t \le t_0 + \theta. \qquad (21)$$
Proof. We have
$$d\big(y_2(t) - y_1(t)\big)^2 = -2\big(y_2(t) - y_1(t)\big)^2\Big[\frac{\Delta}{\big(y_1(t) + \frac{\sigma}{2}\varphi(t)\big)\big(y_2(t) + \frac{\sigma}{2}\varphi(t)\big)} + \frac{k}{2}\Big]\,dt.$$
From here
$$\big(y_2(t) - y_1(t)\big)^2 = \big(y_2(t_0) - y_1(t_0)\big)^2 - 2\int_{t_0}^t \big(y_2(s) - y_1(s)\big)^2\Big[\frac{\Delta}{\big(y_1(s) + \frac{\sigma}{2}\varphi(s)\big)\big(y_2(s) + \frac{\sigma}{2}\varphi(s)\big)} + \frac{k}{2}\Big]\,ds \le \big(y_2(t_0) - y_1(t_0)\big)^2,$$
whence (21) follows.
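Both the closed form (17) and the non-expansiveness of Lemma 2 (here checked in the unperturbed case $\varphi \equiv 0$, where (20) reduces to (16)) are easy to verify numerically. A small sketch (function names are ours):

```python
import math

def y0(t, t0, y_init, k, Delta):
    """Closed-form solution (17) of the ODE (16): dy/dt = Delta/y - (k/2) y,
    y(t0) = y_init > 0, valid for Delta >= 0 (square root stays real)."""
    e = math.exp(-k * (t - t0))
    return math.sqrt(y_init ** 2 * e + (2.0 * Delta / k) * (1.0 - e))
```

Direct checks: the central difference of `y0` reproduces the right-hand side of (16); as $t \to \infty$ the solution approaches the equilibrium $\sqrt{2\Delta/k}$; and two solutions started at different points never move farther apart, in accordance with (21).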
Remark 3 It is known that for $\delta > 1$ the Bessel process BES$^\delta$ has the representation
$$Z(t) = Z(0) + \frac{\delta - 1}{2}\int_0^t \frac{1}{Z(s)}\,ds + W(t), \qquad 0 \le t < \infty,$$
where $W$ is a standard Brownian motion, $Z(t) \ge 0$ a.s., and in particular
$$E\int_0^t \frac{1}{Z(s)}\,ds < \infty.$$
(See [18]; for $\delta \le 1$ the representation of BES$^\delta$ is less simple and involves the concept of local time.) From this fact it is not difficult to show that for $\Delta > 0$ the solution of (7) may be represented as
$$U(t) = U(t_0) + \int_{t_0}^t \Big(\frac{\Delta}{U(s)} - \frac{k}{2}\,U(s)\Big)\,ds + \frac{\sigma}{2}\big(w(t) - w(t_0)\big), \qquad U(0) > 0, \quad t_0 \le t < \infty.$$
Thus, with $Y(t) = U(t) - \frac{\sigma}{2}(w(t) - w(t_0))$, it holds that
$$Y(t) = Y(t_0) + \int_{t_0}^t \Big[\frac{\Delta}{Y(s) + \frac{\sigma}{2}(w(s) - w(t_0))} - \frac{k}{2}\Big(Y(s) + \frac{\sigma}{2}\big(w(s) - w(t_0)\big)\Big)\Big]\,ds \qquad (22)$$
for $Y(0) = U(0) > 0$, $0 \le t < \infty$, and in particular $Y$ is continuous and of bounded variation. From this it follows that (22) holds for $t_0 \le t \le t_0 + T$ when $\Delta > 0$ and $\varphi(t) = w(t) - w(t_0)$ is an arbitrary Brownian trajectory, and then inequality (21) in Lemma 2 goes through for $\theta = T$.

3.3 Now consider (20) for a continuous function $\varphi$ satisfying
$$|\varphi(t)| \le r, \qquad t_0 \le t \le t_0 + \theta \le t_0 + T, \qquad (23)$$
for some $r > 0$ and $0 \le \theta \le T$. Along with (16) and (20) with (23), we further consider the equations
$$\frac{dy^-}{dt} = \frac{\Delta}{y^- + \frac{\sigma}{2} r} - \frac{k}{2}\Big(y^- + \frac{\sigma}{2}\,r\Big), \qquad y^-(t_0) = y_0, \qquad (24)$$
$$\frac{dy^+}{dt} = \frac{\Delta}{y^+ - \frac{\sigma}{2} r} - \frac{k}{2}\Big(y^+ - \frac{\sigma}{2}\,r\Big), \qquad y^+(t_0) = y_0. \qquad (25)$$
Let us assume that $y_0 - \sigma r > 0$, and consider an $\eta > 0$, to be specified below, that satisfies
$$y_0 \ge \eta \ge \sigma r > 0. \qquad (26)$$
The solutions of (16), (20) with (23), (24), and (25) are denoted by $y^0(t)$, $y(t)$, $y^-(t)$, and $y^+(t)$, respectively, where $y^0(t)$ is given by (17). By using (17) we derive straightforwardly that
$$y^-(t) = \Big[\big(y_0 + \tfrac{\sigma}{2} r\big)^2 e^{-k(t-t_0)} + \frac{2\Delta}{k}\big(1 - e^{-k(t-t_0)}\big)\Big]^{1/2} - \frac{\sigma}{2}\,r, \qquad t_0 \le t \le t_0 + \theta, \qquad (27)$$
$$y^+(t) = \Big[\big(y_0 - \tfrac{\sigma}{2} r\big)^2 e^{-k(t-t_0)} + \frac{2\Delta}{k}\big(1 - e^{-k(t-t_0)}\big)\Big]^{1/2} + \frac{\sigma}{2}\,r, \qquad t_0 \le t \le t_0 + \theta. \qquad (28)$$
Note that $y^-(t) + \sigma r/2 > 0$ and $y^+(t) > \sigma r/2$, $t_0 \le t \le t_0 + \theta$. Due to the comparison theorem for ODEs (see, e.g., [10], Ch. 3), the inequality
$$\frac{\Delta}{y + \frac{\sigma}{2} r} - \frac{k}{2}\Big(y + \frac{\sigma}{2}\,r\Big) \;\le\; \frac{\Delta}{y + \frac{\sigma}{2}\varphi(t)} - \frac{k}{2}\Big(y + \frac{\sigma}{2}\,\varphi(t)\Big) \;\le\; \frac{\Delta}{y - \frac{\sigma}{2} r} - \frac{k}{2}\Big(y - \frac{\sigma}{2}\,r\Big),$$
which is fulfilled in view of (23) for $y > \sigma r/2$, then implies that
$$y^-(t) \le y(t) \le y^+(t), \qquad t_0 \le t \le t_0 + \theta. \qquad (29)$$
The same inequality holds for $y(t)$ replaced by $y^0(t)$. We thus get
$$|y(t) - y^0(t)| \le y^+(t) - y^-(t), \qquad t_0 \le t \le t_0 + \theta. \qquad (30)$$

Proposition 4 Let $\Delta = \frac{4k\lambda - \sigma^2}{8} \ge 0$, let the inequalities (23) and (26) be fulfilled for a fixed $\eta > 0$, and let $\theta \le T$. We then have
$$|y(t) - y^0(t)| \le C r (t - t_0) \le C r \theta, \qquad t_0 \le t \le t_0 + \theta, \qquad (31)$$
with
$$C = \frac{\sigma k}{2} + \frac{4\sigma\Delta}{3\eta^2}\,e^{kT/2}.$$
In particular, $C$ is independent of $t_0$, $y_0$, and $r$ (provided (26) holds).

Proof. We estimate the difference $y^+(t) - y^-(t)$. It holds
$$y^+(t) = z^-(t) + \frac{\sigma}{2}\,r, \qquad y^-(t) = z^+(t) - \frac{\sigma}{2}\,r, \qquad y^+(t) - y^-(t) = \sigma r - \big(z^+(t) - z^-(t)\big), \qquad (32)$$
where
$$z^\pm(t) = \Big[\big(y_0 \pm \tfrac{\sigma}{2} r\big)^2 e^{-k(t-t_0)} + \frac{2\Delta}{k}\big(1 - e^{-k(t-t_0)}\big)\Big]^{1/2}.$$
Further,
$$z^+(t) - z^-(t) = \frac{\big(z^+(t)\big)^2 - \big(z^-(t)\big)^2}{z^+(t) + z^-(t)} = \frac{2 y_0 \sigma r\,e^{-k(t-t_0)}}{z^+(t) + z^-(t)}. \qquad (33)$$
Using the inequality $(a^2 + b)^{1/2} \le a + b/(2a)$ for any $a > 0$ and $b \ge 0$, we get
$$z^\pm(t) \le \big(y_0 \pm \tfrac{\sigma}{2} r\big)\,e^{-k(t-t_0)/2} + \frac{2\Delta}{k}\cdot\frac{1 - e^{-k(t-t_0)}}{2\big(y_0 \pm \tfrac{\sigma}{2} r\big)\,e^{-k(t-t_0)/2}},$$
whence
$$z^+(t) + z^-(t) \le 2 y_0\,e^{-k(t-t_0)/2} + \frac{2\Delta}{k}\big(1 - e^{-k(t-t_0)}\big)\,e^{k(t-t_0)/2}\,\frac{y_0}{y_0^2 - \frac{\sigma^2}{4} r^2}.$$
Therefore
$$\frac{1}{z^+(t) + z^-(t)} \ge \frac{e^{k(t-t_0)/2}}{2 y_0}\Big[1 - \frac{\Delta\big(e^{k(t-t_0)} - 1\big)}{k\big(y_0^2 - \frac{\sigma^2}{4} r^2\big)}\Big].$$
From (33) we have that
$$z^+(t) - z^-(t) \ge \sigma r\,e^{-k(t-t_0)/2}\Big[1 - \frac{\Delta\big(e^{k(t-t_0)} - 1\big)}{k\big(y_0^2 - \frac{\sigma^2}{4} r^2\big)}\Big],$$
and so due to (32) we get
$$0 \le y^+(t) - y^-(t) \le \sigma r\big(1 - e^{-k(t-t_0)/2}\big) + \frac{\sigma r\,\Delta}{k\big(y_0^2 - \frac{\sigma^2}{4} r^2\big)}\big(e^{k(t-t_0)/2} - e^{-k(t-t_0)/2}\big).$$
Since $1 - e^{-q\vartheta} \le q\vartheta$ for any $q \ge 0$, $\vartheta \ge 0$, and $y_0^2 - \frac{\sigma^2}{4} r^2 \ge \frac{3}{4}\eta^2$ under (26), we obtain
$$0 \le y^+(t) - y^-(t) \le \frac{\sigma k}{2}\,r(t - t_0) + \frac{4\sigma\Delta}{3\eta^2}\,e^{k(t-t_0)/2}\,r(t - t_0).$$
From this and (30), (31) follows with $C = \frac{\sigma k}{2} + \frac{4\sigma\Delta}{3\eta^2}\,e^{kT/2}$.

Corollary 5 Under the assumptions of Proposition 4 we get, by taking $\eta = y_0/2$ (admissible in (26) when $y_0 \ge 2\sigma r$),
$$|y(t) - y^0(t)| \le \Big(\frac{\sigma k}{2} + \frac{16\sigma\Delta}{3 y_0^2}\,e^{kT/2}\Big)\,r\theta =: \Big(D_1 + \frac{D_2}{y_0^2}\Big)\,r\theta, \qquad t_0 \le t \le t_0 + \theta,$$
where $D_1$ and $D_2$ only depend on the parameters of the CIR process under consideration and the time horizon $T$.

4 One-step approximation

Let us suppose that for $t_m$, $t_0 \le t_m < t_0 + T$, $V(t_m)$ is known exactly. In fact, $t_m$ may be considered as a realization of a certain stopping time. Consider $Y = Y(t; t_m)$ on some interval $[t_m, t_m + \theta_m]$ with $y_m := Y(t_m; t_m) = \sqrt{V(t_m)}$, given by the ODE (cf. (14))
$$\frac{dY}{dt} = \frac{\Delta}{Y + \frac{\sigma}{2}(w(t) - w(t_m))} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\big(w(t) - w(t_m)\big)\Big), \qquad Y(t_m; t_m) = \sqrt{V(t_m)}, \quad t_m \le t \le t_m + \theta_m. \qquad (34)$$
Assume that
$$y_m = \sqrt{V(t_m)} \ge 2\sigma r. \qquad (35)$$
Due to (15), the solution $V(t)$ of (1) on $[t_m, t_m + \theta_m]$ is obtained via
$$V(t) = \Big(Y(t; t_m) + \frac{\sigma}{2}\big(w(t) - w(t_m)\big)\Big)^2, \qquad t_m \le t \le t_m + \theta_m. \qquad (36)$$
Though equation (34) is (just) an ODE, it is not easy to solve it numerically in a straightforward way because of the non-smoothness of $w(t)$. We are here going to construct an
approximation $y_m(t)$ of $Y(t; t_m)$ via Proposition 4. To this end we simulate the point $(t_m + \theta_m,\, w(t_m + \theta_m) - w(t_m))$ by simulating $\theta_m$ as the first-passage (stopping) time of the Wiener process $w(t) - w(t_m)$, $t \ge t_m$, to the boundary of the interval $[-r, r]$. So, $|w(t) - w(t_m)| \le r$ for $t_m \le t \le t_m + \theta_m$ and, moreover, the random variable $w(t_m + \theta_m) - w(t_m)$, which equals either $-r$ or $+r$ with probability $1/2$, is independent of the stopping time $\theta_m$. A method for simulating the stopping time $\theta_m$ is given in Subsection 4.1 below. Proposition 4 and Corollary 5 then yield
$$|Y(t; t_m) - y_m(t)| \le \Big(D_1 + \frac{D_2}{y_m^2}\Big)\,r\,(t_{m+1} - t_m), \quad t_m \le t \le t_{m+1}, \quad \text{with} \quad t_{m+1} := \min(t_m + \theta_m,\, t_0 + T), \qquad (37)$$
where $y_m(t)$ is the solution of the problem
$$\frac{dy_m}{dt} = \frac{\Delta}{y_m} - \frac{k}{2}\,y_m, \qquad y_m(t_m) = Y(t_m; t_m) = \sqrt{V(t_m)},$$
that is given by (17) with $(t_m, y_m) = (t_m, \sqrt{V(t_m)})$. We so have
$$\sqrt{V(t)} = Y(t; t_m) + \frac{\sigma}{2}\big(w(t) - w(t_m)\big) = y_m(t) + \frac{\sigma}{2}\big(w(t) - w(t_m)\big) + \rho_m(t),$$
where due to (37),
$$|\rho_m(t)| \le \Big(D_1 + \frac{D_2}{y_m^2}\Big)\,r\,(t_{m+1} - t_m), \qquad t_m \le t \le t_{m+1}. \qquad (38)$$
We next introduce the one-step approximation $\bar V(t)$ of $V(t)$ on $[t_m, t_{m+1}]$ by
$$\bar V(t) := \Big(y_m(t) + \frac{\sigma}{2}\big(w(t) - w(t_m)\big)\Big)^2, \qquad t_m \le t \le t_{m+1}. \qquad (39)$$
Since $w(t_{m+1}) - w(t_m) = \pm r$ if $t_{m+1} = t_m + \theta_m < t_0 + T$, and $|w(t_{m+1}) - w(t_m)| \le r$ if $t_{m+1} = t_0 + T$, the one-step approximation (39) for $t = t_{m+1}$ is given by
$$\sqrt{\bar V(t_{m+1})} := y_m(t_{m+1}) + \frac{\sigma}{2}\big(w(t_{m+1}) - w(t_m)\big) = y_m(t_{m+1}) + \frac{\sigma}{2}\begin{cases} r\xi_m \text{ with } P(\xi_m = \pm 1) = 1/2, & \text{if } t_{m+1} = t_m + \theta_m < t_0 + T, \\ \zeta_m, & \text{if } t_{m+1} = t_0 + T, \end{cases} \qquad (40)$$
with $\zeta_m = w(t_0 + T) - w(t_m)$ being drawn from the distribution of $W_{t_0 + T - t_m}$ conditional on
$$\max_{0 \le s \le t_0 + T - t_m} |W_s| \le r, \qquad (41)$$
where $W$ is an independent standard Brownian motion. For details see Subsection 4.1 below. We so have the following theorem.

Theorem 6 For the one-step approximation $\bar V(t_{m+1})$ due to the exact starting value $\bar V(t_m) = V(t_m) = y_m^2$, we have the one-step error
$$\Big|\sqrt{V(t_{m+1})} - \sqrt{\bar V(t_{m+1})}\Big| \le \Big(D_1 + \frac{D_2}{V(t_m)}\Big)\,r\,(t_{m+1} - t_m). \qquad (42)$$
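Given a realization of the pair $(\theta_m, \xi_m)$, the one-step map (40) is fully explicit: it composes the closed form (17) with the Wiener displacement $\pm\sigma r/2$. A sketch (our own function name and argument order; sampling of $\theta_m$ and $\xi_m$ is treated separately in Subsection 4.1):

```python
import math

def one_step(v_m, theta_m, xi_m, k, lam, sigma, r):
    """One-step approximation (40): starting from bar-V(t_m) = v_m, propagate the
    deterministic ODE solution y_m over the first-passage interval theta_m using
    the closed form (17), then add the Wiener displacement (sigma/2) * r * xi_m,
    where xi_m = +1 or -1. Returns bar-V(t_{m+1})."""
    Delta = (4.0 * k * lam - sigma ** 2) / 8.0
    y_m = math.sqrt(v_m)
    e = math.exp(-k * theta_m)
    y_end = math.sqrt(y_m ** 2 * e + (2.0 * Delta / k) * (1.0 - e))  # (17)
    return (y_end + 0.5 * sigma * r * xi_m) ** 2
```

Note that the two possible outcomes differ by exactly $\sigma r$ on the square-root scale, mirroring the $\pm r$ boundary values of the Wiener increment.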
4.1 Simulation of $\theta_m$ and $\zeta_m$

For simulating $\theta_m$ we utilize the distribution function $\mathcal P(t) := P(\tau < t)$, where $\tau$ is the first-passage time of the Wiener process $W(t)$ to the boundary of the interval $[-1, 1]$. A very accurate approximation $\hat{\mathcal P}(t)$ of $\mathcal P(t)$ is the following one: $\hat{\mathcal P}(t) = \int_0^t \hat{\mathcal P}'(s)\,ds$ with
$$\hat{\mathcal P}'(t) = \begin{cases} \sqrt{\dfrac{2}{\pi t^3}}\Big(e^{-1/(2t)} - 3 e^{-9/(2t)} + 5 e^{-25/(2t)}\Big), & 0 < t \le 2/\pi, \\[2mm] \dfrac{\pi}{2}\Big(e^{-\pi^2 t/8} - 3 e^{-9\pi^2 t/8} + 5 e^{-25\pi^2 t/8}\Big), & t > 2/\pi, \end{cases}$$
and the errors $\sup_t |\hat{\mathcal P}'(t) - \mathcal P'(t)|$ and $\sup_t |\hat{\mathcal P}(t) - \mathcal P(t)|$ are negligibly small (see [16], Ch. 5, Sect. 3 and Appendix A3 for precise bounds). Now simulate a random variable $U$ uniformly distributed on $[0, 1]$ and compute $\bar\tau = \hat{\mathcal P}^{-1}(U)$, which is distributed according to $\hat{\mathcal P}$. That is, we have to solve the equation $\hat{\mathcal P}(\bar\tau) = U$, for instance by Newton's method or any other efficient solving routine. Next set $\theta_m = r^2\,\bar\tau_m$.

For simulating $\zeta_m$ in (40) we observe that (41) is equivalent with sampling $r\,W_{r^{-2}(t_0+T-t_m)}$ conditional on
$$\max_{0 \le u \le r^{-2}(t_0+T-t_m)} |W_u| \le 1.$$
We next sample $\vartheta$ from the distribution function $Q(x;\, r^{-2}(t_0+T-t_m))$, where $Q(x;t)$ is the known conditional distribution function (see [16], Ch. 5, Sect. 3)
$$Q(x; t) := P\Big(W(t) < x \;\Big|\; \max_{0 \le s \le t} |W(s)| < 1\Big), \qquad -1 \le x \le 1, \qquad (43)$$
and set $\zeta_m = r\vartheta$. The simulation of the last step looks rather complicated and may be computationally expensive. However, it is possible to take for $w(t_0+T) - w(t_\nu)$ simply any value between $-r$ and $r$, e.g. zero. This may enlarge the one-step error on the last step but does not influence the convergence order of the elaborated method. Indeed, if we set $w(t_0+T) - w(t_\nu)$ to be zero, for instance, on the last step, we get $\sqrt{\bar V(t_0+T)} = y_\nu(t_0+T)$ instead of (40), and
$$\Big|\sqrt{V(t_0+T)} - \sqrt{\bar V(t_0+T)}\Big| \le r\sum_{m=0}^{\nu}\Big(D_1 + \frac{D_2}{V(t_m)}\Big)(t_{m+1} - t_m) + \frac{\sigma r}{2}. \qquad (44)$$

Remark 7 We have in any step $E\theta_n = r^2$, the (random) number of steps before reaching $t_0 + T$, say $\nu + 1$, is finite with probability one, and $E\nu = O(1/r^2)$. For details see [16], Ch. 5, Lemma 1.5. In a heuristic sense this means that, if we have convergence of order $O(r)$, we obtain accuracy $O(\sqrt h)$ for an (expected) number of steps $O(1/h)$ with $h = r^2$, similar to the standard Euler scheme.
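The truncated density and its inversion are straightforward to implement. The sketch below (our own reconstruction under stated assumptions: three series terms on each side, the two expansions switched at $t = 2/\pi$ where they coincide, and bisection instead of Newton's method) builds $\hat{\mathcal P}$ by numerical integration and samples $\theta_m = r^2\hat{\mathcal P}^{-1}(U)$:

```python
import math

def exit_density(t):
    """Three-term truncations of the first-passage density of a standard Wiener
    process to the boundary of [-1, 1]; small-t and large-t expansions are
    switched at t = 2/pi, where the two truncations agree (cf. [16], Ch. 5)."""
    if t <= 0.0:
        return 0.0
    if t <= 2.0 / math.pi:
        c = math.sqrt(2.0 / (math.pi * t ** 3))
        return c * (math.exp(-1.0 / (2.0 * t))
                    - 3.0 * math.exp(-9.0 / (2.0 * t))
                    + 5.0 * math.exp(-25.0 / (2.0 * t)))
    q = math.pi ** 2 * t / 8.0
    return (math.pi / 2.0) * (math.exp(-q)
                              - 3.0 * math.exp(-9.0 * q)
                              + 5.0 * math.exp(-25.0 * q))

def exit_cdf(t, n=4000):
    """hat-P(t): trapezoidal integration of the density on [0, t]."""
    if t <= 0.0:
        return 0.0
    h = t / n
    s = 0.5 * (exit_density(0.0) + exit_density(t))
    s += sum(exit_density(i * h) for i in range(1, n))
    return h * s

def sample_theta(u, r, tol=1e-10):
    """theta = r**2 * hat-P^{-1}(u) for u in (0, 1), found by bisection."""
    lo, hi = 0.0, 1.0
    while exit_cdf(hi) < u:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if exit_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return r * r * 0.5 * (lo + hi)
```

Two built-in consistency checks: the density integrates to one, and the exact mean exit time from $[-1,1]$ is $E\tau = 1$, both of which the truncation reproduces to high accuracy.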
5 Convergence theorem

In this section we develop a scheme that generates approximations $\bar V(t_0) = V(t_0), \bar V(t_1), \ldots, \bar V(t_{n+1})$, where $n = 0, 1, 2, \ldots$, and $t_1, \ldots, t_{n+1}$ are realizations of a sequence of stopping times, and show that the global error of the approximation $\bar V(t_{n+1})$ is in fact an aggregated sum of local errors, i.e., bounded by
$$r\sum_{m=0}^{n}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m) \le rT\Big(D_1 + \frac{D_2}{\eta_n^2}\Big),$$
with $y_m = \sqrt{\bar V(t_m)}$, provided that $y_m \ge 2\sigma r$ for $m = 0, \ldots, n$, and so $\eta_n := \min_{0 \le m \le n} y_m \ge 2\sigma r$.

Let us now describe an algorithm for the solution of (1) on the interval $[t_0, t_0+T]$ in the case $\Delta \ge 0$. Suppose we are given $V(t_0)$ and $r$ such that $\sqrt{V(t_0)} \ge 2\sigma r$. For the initial step we use the one-step approximation according to the previous section and thus obtain (see (40) and (42))
$$\sqrt{\bar V(t_1)} = y_0(t_1) + \frac{\sigma}{2}\big(w(t_1) - w(t_0)\big), \qquad \sqrt{V(t_1)} = \sqrt{\bar V(t_1)} + \rho_0(t_1),$$
where
$$|\rho_0(t_1)| \le \Big(D_1 + \frac{D_2}{V(t_0)}\Big)\,r\,(t_1 - t_0) =: C_0\, r\,(t_1 - t_0). \qquad (45)$$
Suppose that
$$\sqrt{\bar V(t_1)} \ge 2\sigma r.$$
We then go to the next step and consider the expression
$$\sqrt{V(t)} = Y(t; t_1) + \frac{\sigma}{2}\big(w(t) - w(t_1)\big), \qquad (46)$$
where $Y(t; t_1)$ is the solution of the problem (see (34))
$$\frac{dY}{dt} = \frac{\Delta}{Y + \frac{\sigma}{2}(w(t) - w(t_1))} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\big(w(t) - w(t_1)\big)\Big), \qquad Y(t_1; t_1) = \sqrt{V(t_1)}, \quad t_1 \le t \le t_1 + \theta_1. \qquad (47)$$
Now, in contrast to the initial step, the value $V(t_1)$ is unknown and we are forced to use $\bar V(t_1)$ instead. Therefore we introduce $\bar Y(t; t_1)$ as the solution of equation (47) with initial value $\bar Y(t_1; t_1) = \sqrt{\bar V(t_1)}$. From the previous step we have that
$$\big|\bar Y(t_1; t_1) - Y(t_1; t_1)\big| = \Big|\sqrt{\bar V(t_1)} - \sqrt{V(t_1)}\Big| = |\rho_0(t_1)| \le C_0\,r\,(t_1 - t_0).$$
Hence, due to Lemma 2,
$$\big|\bar Y(t; t_1) - Y(t; t_1)\big| \le |\rho_0(t_1)| \le C_0\,r\,(t_1 - t_0), \qquad t_1 \le t \le t_1 + \theta_1. \qquad (48)$$
Let $\theta_1$ be the first-passage time of the Wiener process $w(t_1 + \cdot) - w(t_1)$ to the boundary of the interval $[-r, r]$. If $t_1 + \theta_1 < t_0 + T$ then set $t_2 := t_1 + \theta_1$, else set $t_2 := t_0 + T$. In order to approximate $\bar Y(t; t_1)$ for $t_1 \le t \le t_2$, let us consider along with equation (47) the equation
$$\frac{dy_1}{dt} = \frac{\Delta}{y_1} - \frac{k}{2}\,y_1, \qquad y_1(t_1) = \bar Y(t_1; t_1) = \sqrt{\bar V(t_1)}.$$
Due to Proposition 4 and Corollary 5 it holds that
$$\big|\bar Y(t; t_1) - y_1(t)\big| \le \Big(D_1 + \frac{D_2}{\bar V(t_1)}\Big)\,r\,(t_2 - t_1) =: C_1\,r\,(t_2 - t_1), \qquad t_1 \le t \le t_2, \qquad (49)$$
and so by (48) we have
$$\big|Y(t; t_1) - y_1(t)\big| \le r\big(C_0(t_1 - t_0) + C_1(t_2 - t_1)\big), \qquad t_1 \le t \le t_2. \qquad (50)$$
We also have (see (46))
$$\sqrt{V(t)} = Y(t; t_1) + \frac{\sigma}{2}\big(w(t) - w(t_1)\big) = y_1(t) + \frac{\sigma}{2}\big(w(t) - w(t_1)\big) + R_1(t), \qquad (51)$$
where
$$|R_1(t)| \le r\big(C_0(t_1 - t_0) + C_1(t_2 - t_1)\big), \qquad t_1 \le t \le t_2. \qquad (52)$$
We so define the approximation
$$\bar V(t) := \Big(y_1(t) + \frac{\sigma}{2}\big(w(t) - w(t_1)\big)\Big)^2, \qquad (53)$$
which satisfies
$$\sqrt{V(t)} = \sqrt{\bar V(t)} + R_1(t), \qquad t_1 \le t \le t_2, \qquad (54)$$
and then set
$$\sqrt{\bar V(t_2)} = y_1(t_2) + \frac{\sigma}{2}\big(w(t_2) - w(t_1)\big) = y_1(t_2) + \frac{\sigma}{2}\begin{cases} r\xi_1 \text{ with } P(\xi_1 = \pm 1) = 1/2, & \text{if } t_2 = t_1 + \theta_1 < t_0 + T, \\ \zeta_1, & \text{if } t_2 = t_0 + T, \end{cases} \qquad (55)$$
cf. (40) and (41). We thus end up with a next approximation $\bar V(t_2)$ such that
$$\Big|\sqrt{V(t_2)} - \sqrt{\bar V(t_2)}\Big| = |R_1(t_2)| \le r\big(C_0(t_1 - t_0) + C_1(t_2 - t_1)\big). \qquad (56)$$
From the above description it is obvious how to proceed analogously, given a generic sequence of approximations $\bar V(t_m)$, $m = 0, 1, 2, \ldots, n$, with $\bar V(t_0) = V(t_0)$,
that satisfies by assumption
$$\sqrt{\bar V(t_m)} \ge 2\sigma r, \qquad \text{for } m = 0, \ldots, n, \qquad (57)$$
and
$$\Big|\sqrt{V(t_n)} - \sqrt{\bar V(t_n)}\Big| \le r\sum_{m=0}^{n-1}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m) =: r\sum_{m=0}^{n-1} C_m\,(t_{m+1} - t_m). \qquad (58)$$
Indeed, consider the expression
$$\sqrt{V(t)} = Y(t; t_n) + \frac{\sigma}{2}\big(w(t) - w(t_n)\big),$$
where $Y(t; t_n)$ is the solution of the problem
$$\frac{dY}{dt} = \frac{\Delta}{Y + \frac{\sigma}{2}(w(t) - w(t_n))} - \frac{k}{2}\Big(Y + \frac{\sigma}{2}\big(w(t) - w(t_n)\big)\Big), \qquad Y(t_n; t_n) = \sqrt{V(t_n)}, \quad t_n \le t \le t_n + \theta_n, \qquad (59)$$
for a $\theta_n > 0$ to be determined. Since $V(t_n)$ is unknown, we consider $\bar Y(t; t_n)$ as the solution of equation (59) with initial value $\bar Y(t_n; t_n) = \sqrt{\bar V(t_n)}$. Due to (58) and Lemma 2 again, we have
$$\big|\bar Y(t; t_n) - Y(t; t_n)\big| \le r\sum_{m=0}^{n-1} C_m\,(t_{m+1} - t_m), \qquad t_n \le t \le t_n + \theta_n.$$
In order to approximate $\bar Y(t; t_n)$ for $t_n \le t \le t_n + \theta_n$, we consider the equation
$$\frac{dy_n}{dt} = \frac{\Delta}{y_n} - \frac{k}{2}\,y_n, \qquad y_n(t_n) = \bar Y(t_n; t_n) = \sqrt{\bar V(t_n)}. \qquad (60)$$
By repeating the procedure (49)-(56) we arrive at
$$\bar V(t) := \Big(y_n(t) + \frac{\sigma}{2}\big(w(t) - w(t_n)\big)\Big)^2, \qquad t_n \le t \le t_{n+1}, \qquad (61)$$
satisfying
$$\Big|\sqrt{V(t)} - \sqrt{\bar V(t)}\Big| = |R_n(t)| \le r\sum_{m=0}^{n}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m), \qquad t_n \le t \le t_{n+1}, \qquad (62)$$
with
$$R_n(t) := Y(t; t_n) - y_n(t), \qquad t_n \le t \le t_{n+1},$$
and in particular
$$\Big|\sqrt{V(t_{n+1})} - \sqrt{\bar V(t_{n+1})}\Big| \le r\sum_{m=0}^{n}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m). \qquad (63)$$
Remark 8 In principle it is possible to use the distribution function $Q$ (see (43)) for constructing $\bar V(t)$ for $t_n < t < t_{n+1}$. However, we rather consider for $t_n \le t \le t_{n+1}$ the approximation
$$\tilde V(t) := \Big(y_n(t) + \frac{\sigma}{2}\,\bar w_n(t)\Big)^2, \qquad t_n \le t \le t_{n+1},$$
where (a) for $t_{n+1} < t_0 + T$, $\bar w_n$ is an arbitrary continuous function satisfying $\bar w_n(t_n) = 0$, $\bar w_n(t_{n+1}) = w(t_{n+1}) - w(t_n) = r\xi_n$, $\max_{t_n \le t \le t_{n+1}} |\bar w_n(t)| \le r$, and (b) for $t_{n+1} = t_0 + T$, one may take $\bar w_n(t) \equiv 0$. As a result we get, similar to (44), an insignificant increase of the error:
$$\Big|\sqrt{V(t)} - \sqrt{\tilde V(t)}\Big| \le r\sum_{m=0}^{n}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m) + \sigma r, \qquad t_n < t < t_{n+1}.$$
Let us consolidate the above procedure in a concise way.

5.1 Simulation algorithm

Set $\bar V(t_0) = V(t_0)$. Let the point $(t_n, \bar V(t_n))$ be known for an $n \ge 0$. Simulate independent random variables $\xi_n$ with $P(\xi_n = \pm 1) = 1/2$, and $\theta_n$ as described in Subsection 4.1. If $t_n + \theta_n < t_0 + T$, set $t_{n+1} = t_n + \theta_n$, else set $t_{n+1} = t_0 + T$. Solve equation (60) on the interval $[t_n, t_{n+1}]$ with solution $y_n$ and set
$$\sqrt{\bar V(t_{n+1})} = y_n(t_{n+1}) + \frac{\sigma}{2}\begin{cases} r\xi_n, & \text{if } t_{n+1} < t_0 + T, \\ 0, & \text{if } t_{n+1} = t_0 + T. \end{cases}$$
So, under the assumption (57) we obtain the estimate (62) (possibly enlarged with a term $\sigma r$). The next theorem shows that if a trajectory of $V(t)$ under consideration is positive on $[t_0, t_0+T]$, then the algorithm is convergent on this trajectory. We recall that in the case $2k\lambda \ge \sigma^2$ almost all trajectories are positive; hence in this case the proposed method is almost surely convergent.

Theorem 9 Let $4k\lambda \ge \sigma^2$ (i.e., $\Delta \ge 0$). Then for any positive trajectory $V(t) > 0$ on $[t_0, t_0+T]$ the proposed method is convergent on this trajectory. In particular, there exist $\eta > 0$ depending on the trajectory $V(\cdot)$ only, and $r_0 > 0$ depending on $\eta$, such that
$$\sqrt{\bar V(t_m)} \ge \eta \ge 2r\sigma, \qquad m = 0, 1, 2, \ldots,$$
for any $r < r_0$. So in particular (57) is fulfilled for all $m = 0, 1, \ldots$, and the estimate (62) implies that for any $r < r_0$,
$$\Big|\sqrt{V(t_{n+1})} - \sqrt{\bar V(t_{n+1})}\Big| \le r\Big(D_1 + \frac{D_2}{\eta^2}\Big)\,T, \qquad n = 0, 1, 2, \ldots, \nu.$$
Proof. Let us define
$$\eta := \frac{1}{2}\min_{t_0 \le t \le t_0+T}\sqrt{V(t)} \qquad \text{and} \qquad r_0 := \min\Big(\frac{\eta}{2\sigma},\; \frac{\eta}{(D_1 + D_2/\eta^2)\,T}\Big), \qquad (64)$$
and let $r < r_0$. We then claim that for all $m$,
$$\sqrt{\bar V(t_m)} \ge \eta \ge 2r\sigma. \qquad (65)$$
For $m = 0$ we trivially have $\sqrt{\bar V(t_0)} = \sqrt{V(t_0)} \ge 2\eta \ge \eta \ge 2r_0\sigma \ge 2r\sigma$. Now suppose by induction that $\sqrt{\bar V(t_j)} \ge \eta$ for $j = 0, \ldots, m$. Then due to (63) we have
$$\Big|\sqrt{V(t_{m+1})} - \sqrt{\bar V(t_{m+1})}\Big| \le r\Big(D_1 + \frac{D_2}{\eta^2}\Big)\,T \le r_0\Big(D_1 + \frac{D_2}{\eta^2}\Big)\,T \le \eta$$
because of (64). Thus, since $\sqrt{V(t_{m+1})} \ge 2\eta$, it follows that $\sqrt{\bar V(t_{m+1})} \ge \eta \ge 2r\sigma$. This proves (65) and the convergence for $r \downarrow 0$.

Remark 10 In the case where $4k\lambda \ge \sigma^2 > 2k\lambda$, trajectories will reach zero with positive probability, that is, convergence on such trajectories is not guaranteed by Theorem 9. So it is important to develop some method for continuing the simulation in cases of very small $\bar V(t_m)$. One can propose different procedures; for instance, one can proceed with standard SDE approximation methods relying on some known scheme suitable for small $V$ (e.g. see [3]). However, the uniformity of the simulation would be destroyed in this way. We therefore propose in the next section a uniform simulation method that may be started in a value $\bar V(t_m)$ close to zero.

6 Simulation of trajectories near zero

Henceforth we assume that $\Delta > 0$. Let us suppose that $\sqrt{\bar V(t_n)} = y_n \ge \sigma r$ and consider conditions that guarantee that $\sqrt{\bar V(t_{n+1})} \ge \sigma r$ for $t_{n+1} \le t_0 + T$. Of course, in the case $\xi_n = 1$ this is trivially fulfilled, and we thus consider the case $\xi_n = -1$, yielding
$$\sqrt{\bar V(t_{n+1})} = y_n(t_{n+1}) - \frac{\sigma r}{2} = \Big[y_n^2\,e^{-k(t_{n+1}-t_n)} + \frac{2\Delta}{k}\big(1 - e^{-k(t_{n+1}-t_n)}\big)\Big]^{1/2} - \frac{\sigma r}{2}.$$
We so need
$$y_n^2\,e^{-k(t_{n+1}-t_n)} + \frac{2\Delta}{k}\big(1 - e^{-k(t_{n+1}-t_n)}\big) \ge \frac{9\sigma^2 r^2}{4}, \quad \text{i.e.,} \quad e^{-k(t_{n+1}-t_n)}\Big(y_n^2 - \frac{2\Delta}{k}\Big) \ge \frac{9\sigma^2 r^2}{4} - \frac{2\Delta}{k}. \qquad (66)$$
Since we are interested in properties of the algorithm as $r \downarrow 0$, we may further assume w.l.o.g. that $9\sigma^2 r^2/4 - 2\Delta/k < 0$, i.e.,
$$r < \frac{2}{3\sigma}\sqrt{\frac{2\Delta}{k}}. \qquad (67)$$
Under assumption (67), (66) is obviously fulfilled when $y_n \ge \sqrt{2\Delta/k}$. If $y_n < \sqrt{2\Delta/k}$ we need
$$e^{-k(t_{n+1}-t_n)} \le \frac{2\Delta/k - 9\sigma^2 r^2/4}{2\Delta/k - y_n^2}, \qquad \text{hence} \qquad t_{n+1} - t_n \ge \frac{1}{k}\ln\frac{2\Delta/k - y_n^2}{2\Delta/k - 9\sigma^2 r^2/4},$$
which is fulfilled for all $t_{n+1} \ge t_n$ if
$$y_n \ge \frac{3}{2}\,\sigma r. \qquad (68)$$
Note that (67) is equivalent with $3\sigma r/2 < \sqrt{2\Delta/k}$, and so (68) is the condition we were looking for. Conversely, if $\sigma r \le y_n < 3\sigma r/2$, then $\sqrt{\bar V(t_{n+1})} < \sigma r$ with positive probability.

In view of the above considerations, one may carry out the algorithm of Subsection 5.1 as long as (68) is fulfilled. Let us say that $\bar n$ was the last step where (68) was true. Then the aggregated error of $\bar V(t_{\bar n+1})$ due to the algorithm up to step $\bar n$ may be estimated by (cf. (57) and (58))
$$\Big|\sqrt{V(t_{\bar n+1})} - \sqrt{\bar V(t_{\bar n+1})}\Big| \le r\sum_{m=0}^{\bar n}\Big(D_1 + \frac{D_2}{y_m^2}\Big)(t_{m+1} - t_m) = r\sum_{m=0}^{\bar n}\Big(D_1 + \frac{D_2}{\bar V(t_m)}\Big)(t_{m+1} - t_m). \qquad (69)$$
Let us recall that our primal goal is a scheme where $|\bar V(t) - V(t)| \to 0$ almost surely and uniformly in $t_0 \le t \le t_0 + T$. In this respect, and in particular in the case $\sigma^2 > 2k\lambda$ where trajectories may attain zero with positive probability, it is not recommended to carry out scheme 5.1 all the way until (68) is not satisfied anymore. Indeed, if the trajectory attains zero, the worst-case almost sure error bound would arise when all the $y_m$ are close to $3\sigma r/2$, hence would be of order $O(1/r)$ in view of (69). That is, no convergence on such trajectories. We therefore propose to perform scheme 5.1 up to a (stopping) index $\bar m$, defined by
$$\sqrt{\bar V(t_k)} \ge \frac{1}{2}\,A r^a > \frac{3}{2}\,\sigma r, \quad k = 0, 1, \ldots, \bar m, \qquad \text{and} \qquad \sqrt{\bar V(t_{\bar m+1})} < \frac{1}{2}\,A r^a, \qquad (70)$$
where $A$ is a positive constant and $0 < a < 1/2$ is to be determined suitably. A pragmatic choice would be $a = 1/3$ (see Remark 11). Due to (70) and (69) with $\bar n$ replaced by $\bar m$ we then have
$$\Big|\sqrt{V(t_{\bar m+1})} - \sqrt{\bar V(t_{\bar m+1})}\Big| \le r\sum_{m=0}^{\bar m}\Big(D_1 + \frac{4 D_2}{A^2 r^{2a}}\Big)(t_{m+1} - t_m) \le D_3\,r^{1-2a} \qquad (71)$$
for some constant $D_3 > 0$.
Let us now fix a realization $t_{\bar m+1} := \bar t_{\bar m+1}$, and consider two solutions of equation (1) starting at the moment $t_{\bar m+1}$ from $\bar v := \bar V(t_{\bar m+1})$ (known value) and $v := V(t_{\bar m+1})$ (true but unknown value), denoted by $V_{t_{\bar m+1},\bar v}$ and $V_{t_{\bar m+1},v}$, respectively. Let $\vartheta_x$, $0 \le x < A^2 r^{2a}$, be the first time at which the solution $V_{t_{\bar m+1},x}(t_{\bar m+1} + t)$ of (1) attains the level $A^2 r^{2a}$, hence
$$0 \le V_{t_{\bar m+1},x}(t_{\bar m+1} + t) < A^2 r^{2a}, \quad 0 \le t < \vartheta_x, \qquad V_{t_{\bar m+1},x}(t_{\bar m+1} + \vartheta_x) = A^2 r^{2a}.$$
A construction of the distribution function of $\vartheta_x$ is worked out in Section 6.1. Let us now denote $t_{\bar m+2} := t_{\bar m+1} + \vartheta$ with $\vartheta = \vartheta_{\bar v}$. (For simplicity and w.l.o.g. we assume that $t_{\bar m+2} < t_0 + T$.) We then naturally set
$$\bar V(t_{\bar m+2}) = V_{t_{\bar m+1},\bar v}(t_{\bar m+1} + \vartheta) = A^2 r^{2a}.$$
The solutions $V_{t_{\bar m+1},\bar v}$ and $V_{t_{\bar m+1},v}$ correspond to two solutions $\bar Y(t, t_{\bar m+1})$ and $Y(t, t_{\bar m+1})$ of (20) with $\varphi(t) = w(t) - w(t_{\bar m+1})$, $t \ge t_{\bar m+1}$, starting in $\bar Y(t_{\bar m+1}, t_{\bar m+1}) = \sqrt{\bar v}$ and $Y(t_{\bar m+1}, t_{\bar m+1}) = \sqrt{v}$, respectively. Due to Lemma 2 (see Remark 3) and (71) it thus follows that
$$\Big|\sqrt{V_{t_{\bar m+1},\bar v}(t)} - \sqrt{V_{t_{\bar m+1},v}(t)}\Big| = \big|\bar Y(t, t_{\bar m+1}) - Y(t, t_{\bar m+1})\big| \le \Big|\sqrt{\bar V(t_{\bar m+1})} - \sqrt{V(t_{\bar m+1})}\Big| \le D_3\,r^{1-2a}, \quad t_{\bar m+1} \le t \le t_{\bar m+2}, \qquad (72)$$
and in particular
$$\Big|\sqrt{\bar V(t_{\bar m+2})} - \sqrt{V(t_{\bar m+2})}\Big| \le D_3\,r^{1-2a}.$$
In contrast to the previous steps, we now specify the behavior of $\bar V(t)$ on $[t_{\bar m+1}, t_{\bar m+2}]$ by
$$\bar V(t) = V_{t_{\bar m+1},\bar v}(t), \qquad t_{\bar m+1} \le t \le t_{\bar m+2}, \qquad (73)$$
which we actually do not know. However, we do know that $\bar V(t_{\bar m+2}) = V_{t_{\bar m+1},\bar v}(t_{\bar m+2}) = A^2 r^{2a}$, and that $\bar V$ is bounded on $[t_{\bar m+1}, t_{\bar m+2}]$ by $A^2 r^{2a}$. Therefore, if we just take the straight line $L(t)$ that connects the points $(t_{\bar m+1}, \bar v)$ and $(t_{\bar m+2}, A^2 r^{2a})$ as an approximation for $\bar V(t)$, then
$$|\bar V(t) - L(t)| \le A^2 r^{2a}, \qquad t_{\bar m+1} \le t \le t_{\bar m+2}.$$
By (72) and (73) we then also have $|V(t) - L(t)| \le A^2 r^{2a}$ up to terms of higher order, $t_{\bar m+1} \le t \le t_{\bar m+2}$. Thus, the accuracy of the approximation to $\sqrt{V}$ for $t_0 \le t \le t_{\bar m+1}$, outside the band $\big(0, \frac{1}{2} A r^a\big)$, is of order $O(r^{1-2a})$, and for $t_{\bar m+1} < t < t_{\bar m+2}$, inside the band $(0, A r^a)$, of order $O(r^a)$. But at the boundary point $\sqrt{\bar V(t_{\bar m+2})} = A r^a$ the accuracy is of order $O(r^{1-2a})$ again. Finally, the scheme may be continued from the state $\bar V(t_{\bar m+2}) = A^2 r^{2a}$ with the algorithm of Subsection 5.1.
Remark 11 From the above construction it is clear that for $a = 1/3$ in (70) the accuracy for $t_0 \le t \le t_{\bar m+1}$ outside the band $\big(0, \frac{1}{2} A r^a\big)$ and the accuracy for $t_{\bar m+1} < t < t_{\bar m+2}$ inside the band $(0, A r^a)$ are of the same order. However, an exponent $0 < a < 1/3$ would give a higher accuracy outside the band $\big(0, \frac{1}{2} A r^a\big)$ and at the exit points of the band $(0, A r^a)$, while inside the band the accuracy is worse but uniformly bounded by $A r^a$.
6.1 Simulation of $\vartheta_x$

In order to carry out the above simulation method for trajectories near zero we have to find the distribution function of $\vartheta_x = \vartheta_{x,l}$, where $\vartheta_{x,l}$ is the first-passage time of the trajectory $X_{0,x}(s)$ to the level $l$. For this it is more convenient to change notation and to write (1) in the form
$$dX(s) = k(\lambda - X(s))\,ds + \sigma\sqrt{X}\,dw(s), \qquad X(0) = x, \qquad (74)$$
where without loss of generality we take the initial time to be $s = 0$. The function $u(t,x) := P(\vartheta_{x,l} < t)$ is the solution of the first boundary value problem of parabolic type ([16], Ch. 5, Sect. 3)
$$\frac{\partial u}{\partial t} = \frac{1}{2}\,\sigma^2 x\,\frac{\partial^2 u}{\partial x^2} + k(\lambda - x)\,\frac{\partial u}{\partial x}, \qquad t > 0, \quad 0 < x < l, \qquad (75)$$
with initial data and boundary conditions
$$u(0, x) = 0, \qquad (76)$$
$$u(t, 0) \text{ is bounded}, \qquad u(t, l) = 1. \qquad (77)$$
To get homogeneous boundary conditions we introduce $v = 1 - u$. The function $v$ then satisfies
$$\frac{\partial v}{\partial t} = \frac{1}{2}\,\sigma^2 x\,\frac{\partial^2 v}{\partial x^2} + k(\lambda - x)\,\frac{\partial v}{\partial x}, \qquad t > 0, \quad 0 < x < l, \qquad (78)$$
$$v(0, x) = 1; \qquad v(t, 0) \text{ is bounded}, \qquad v(t, l) = 0. \qquad (79)$$
The problem (78)-(79) can be solved by the method of separation of variables. In this way the Sturm-Liouville problem for the confluent hypergeometric equation (the Kummer equation) arises. This problem is rather complicated, however. Below we are going to solve an easier problem as a good approximation to (78)-(79). Along with (74), let us consider the equations
$$dX^+(s) = k\lambda\,ds + \sigma\sqrt{X^+}\,dw(s), \qquad X^+(0) = x, \qquad (80)$$
$$dX^-(s) = k(\lambda - l)\,ds + \sigma\sqrt{X^-}\,dw(s), \qquad X^-(0) = x, \qquad (81)$$
with $0 \le l < \lambda$. It is not difficult to prove the inequalities
$$X^-(s) \le X(s) \le X^+(s). \qquad (82)$$
According to (82), we consider three boundary value problems: first (75)-(77) and next similar ones for the equations
$$\frac{\partial u^+}{\partial t} = \frac{1}{2}\,\sigma^2 x\,\frac{\partial^2 u^+}{\partial x^2} + k\lambda\,\frac{\partial u^+}{\partial x}, \qquad t > 0, \quad 0 < x < l, \qquad (83)$$
$$\frac{\partial u^-}{\partial t} = \frac{1}{2}\,\sigma^2 x\,\frac{\partial^2 u^-}{\partial x^2} + k(\lambda - l)\,\frac{\partial u^-}{\partial x}, \qquad t > 0, \quad 0 < x < l.$$
From (82) it follows that
$$u^-(t,x) \le u(t,x) \le u^+(t,x),$$
hence
$$v^+(t,x) \le v(t,x) \le v^-(t,x),$$
where $v^- = 1 - u^-$, $v^+ = 1 - u^+$. As the band $0 < x < l = A^2 r^{2a}$, for a certain $a > 0$, is narrow for small enough $r$, the difference $v^- - v^+$ will be small and so we can consider the following problem,
$$\frac{\partial v^+}{\partial t} = \frac{1}{2}\,\sigma^2 x\,\frac{\partial^2 v^+}{\partial x^2} + k\lambda\,\frac{\partial v^+}{\partial x}, \qquad t > 0, \quad 0 < x < l, \qquad (84)$$
$$v^+(0, x) = 1; \qquad v^+(t, 0) \text{ is bounded}, \qquad v^+(t, l) = 0, \qquad (85)$$
as a good approximation of (78)-(79). Henceforth we write $v := v^+$. By separation of variables we get as elementary independent solutions to (84)
$$T(t)\,\mathcal X(x), \qquad \text{where } T'(t) + \mu T(t) = 0, \text{ i.e. } T(t) = T_0\,e^{-\mu t}, \quad \mu > 0, \qquad (86)$$
and
$$\frac{1}{2}\,\sigma^2 x\,\mathcal X'' + k\lambda\,\mathcal X' + \mu\,\mathcal X = 0, \qquad \mathcal X(0+) \text{ bounded}, \quad \mathcal X(l) = 0. \qquad (87)$$
It can be verified straightforwardly that the solutions of (87) can be obtained in terms of Bessel functions of the first kind (e.g. see [4]),
$$\mathcal X(x) = \mathcal X^\pm_\gamma(x) := x^\gamma\,J_{\pm 2\gamma}\Big(\frac{1}{\sigma}\sqrt{8\mu x}\Big) = O\big(x^{\gamma \pm \gamma}\big) \text{ as } x \downarrow 0, \qquad \text{with} \quad \gamma := \frac{1}{2} - \frac{k\lambda}{\sigma^2}. \qquad (88)$$
Since $\mathcal X(x)$ has to be bounded for $x \downarrow 0$ we may take (regardless of the sign of $\gamma$!)
$$\mathcal X(x) = \mathcal X^-_\gamma(x) =: \mathcal X_\gamma(x) = x^\gamma\,J_{-2\gamma}\Big(\frac{1}{\sigma}\sqrt{8\mu x}\Big). \qquad (89)$$
In our setting we have $\Delta > 0$, i.e. $\gamma < 1/4$. The following derivation of a Fourier-Bessel series for $v$ is standard but included for the convenience of the reader. Denote the positive zeros of $J_\nu$ by $\pi_{\nu,m}$; for example,
$$J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin x \qquad \text{and} \qquad \pi_{1/2,m} = m\pi, \quad m = 1, 2, \ldots \qquad (90)$$
Then the (homogeneous) boundary condition $\mathcal X_\gamma(l) = 0$ yields
$$\frac{1}{\sigma}\sqrt{8\mu l} = \pi_{-2\gamma,m}, \qquad \text{i.e.,} \qquad \mu_m := \frac{\sigma^2\,\pi^2_{-2\gamma,m}}{8l},$$
and we have
$$\mathcal X_{\gamma,m}(x) := x^\gamma\,J_{-2\gamma}\Big(\frac{1}{\sigma}\sqrt{8\mu_m x}\Big) = x^\gamma\,J_{-2\gamma}\Big(\pi_{-2\gamma,m}\sqrt{\frac{x}{l}}\Big). \qquad (91)$$
By the well-known orthogonality relation
$$\int_0^1 z\,J_{-2\gamma}(\pi_{-2\gamma,k}\,z)\,J_{-2\gamma}(\pi_{-2\gamma,k'}\,z)\,dz = \frac{\delta_{k,k'}}{2}\,J^2_{-2\gamma+1}(\pi_{-2\gamma,k}),$$
we get by setting z = √(x/l)

  ∫₀ˡ J_{−2γ}(π_{−2γ,m} √(x/l)) J_{−2γ}(π_{−2γ,m′} √(x/l)) dx = l δ_{m,m′} J²_{1−2γ}(π_{−2γ,m}),

hence

  ∫₀ˡ 𝒳_{γ,m}(x) 𝒳_{γ,m′}(x) x^{−2γ} dx = l δ_{m,m′} J²_{1−2γ}(π_{−2γ,m}).

Now set

  v(t, x) = Σ_{m=1}^∞ β_m e^{−μ_m t} 𝒳_{γ,m}(x),  0 ≤ x ≤ l.   (92)

For t = 0 we have due to the initial condition v(0, x) = −1,

  −1 = Σ_{m=1}^∞ β_m 𝒳_{γ,m}(x).

So for any p = 1, 2, ...,

  −∫₀ˡ 𝒳_{γ,p}(x) x^{−2γ} dx = β_p l J²_{1−2γ}(π_{−2γ,p}), i.e. β_p = −(∫₀ˡ 𝒳_{γ,p}(x) x^{−2γ} dx) / (l J²_{1−2γ}(π_{−2γ,p})).   (93)

Further it holds that

  ∫₀ˡ 𝒳_{γ,p}(x) x^{−2γ} dx = ∫₀ˡ x^{−γ} J_{−2γ}(π_{−2γ,p} √(x/l)) dx
   = 2 l^{1−γ} ∫₀¹ z^{1−2γ} J_{−2γ}(π_{−2γ,p} z) dz
   = 2 l^{1−γ} J_{1−2γ}(π_{−2γ,p}) / π_{−2γ,p}

by well-known identities for Bessel functions (e.g. see [4]), and (93) thus becomes

  β_p = −2 / (l^γ π_{−2γ,p} J_{1−2γ}(π_{−2γ,p})),  p = 1, 2, ...   (94)

So, from v = u − 1, (86)-(89), (91), (94), and (92) we finally obtain

  u(t, x) = 1 − 2 (x/l)^γ Σ_{m=1}^∞ [J_{−2γ}(π_{−2γ,m} √(x/l)) / (π_{−2γ,m} J_{1−2γ}(π_{−2γ,m}))] exp[−σ²π²_{−2γ,m} t/(8l)],  0 ≤ x ≤ l.   (95)

Example 12 For γ = −1/4 we get from (95) by (90) straightforwardly,

  u(t, x) = 1 + (2/π) √(l/x) Σ_{m=1}^∞ ((−1)^m/m) sin(πm √(x/l)) exp[−σ²π²m² t/(8l)].
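The closed form of Example 12 is cheap to evaluate numerically. The following sketch (the function name u_exit, the default parameters, and the truncation level are our own choices) sums the series and reproduces the qualitative behaviour of a first-passage distribution: u(t, x) increases in t towards 1, and u(t, l) = 1:

```python
import math

def u_exit(t, x, l=0.1, sigma=1.0, n_terms=200):
    """Fourier-Bessel series (95) in the elementary case gamma = -1/4 (Example 12):
    the distribution function of the first-passage time of X^+ to the level l."""
    if not 0.0 < x <= l:
        raise ValueError("need 0 < x <= l")
    z = math.sqrt(x / l)
    s = sum(
        (-1) ** m / m
        * math.sin(m * math.pi * z)
        * math.exp(-sigma**2 * math.pi**2 * m**2 * t / (8.0 * l))
        for m in range(1, n_terms + 1)
    )
    return 1.0 + (2.0 / math.pi) / z * s   # sqrt(l/x) = 1/z
```

For moderate t the exponential factors decay fast, so only a handful of terms contribute, matching the remark in Example 13 that five terms already sufficed.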
For solving (83) we set λ̃ := λ − l, and then apply the Fourier-Bessel series (95) with γ replaced by

  γ̃ := 1/2 − kλ̃/σ² = γ + kl/σ².   (96)

Example 13 We now consider some numerical examples concerning u⁺ = u in (95) and u⁻ given by (95) due to (96). Note that actually in (95) the function u only depends on σ, l, and γ; that is, u depends on σ, l, and the product kλ. Let us consider a CIR process with σ = 1, λ = 1, k = 0.75, and let us take l = 0.1. We then compare u⁺, which is given by (95) for γ = −0.25 due to (88) (see Example 12), with u⁻ given by (95) for γ̃ = −0.175 due to (96). The results are depicted in Figure 1. The sums corresponding to (95) are computed with five terms (more terms did not give any improvement).

Normalization of u(t, x)

For practical applications it is useful to normalize (95) in the following way. Let us treat γ as an essential but fixed parameter, introduce as new parameters

  x̃ = x/l, 0 < x̃ ≤ 1,  t̃ = σ²t/(8l), t̃ ≥ 0,

and consider the function

  ũ(t̃, x̃) := 1 − 2 x̃^γ Σ_{m=1}^∞ [J_{−2γ}(π_{−2γ,m} √x̃) / (π_{−2γ,m} J_{1−2γ}(π_{−2γ,m}))] exp[−π²_{−2γ,m} t̃],  0 < x̃ ≤ 1, t̃ ≥ 0,

that is connected to (95) via

  ũ(t̃, x̃) = ũ(σ²t/(8l), x/l) = u(8l t̃/σ², l x̃).

For simulation of ϑ_x we need to solve the equation u(ϑ_x, x) = U, where U ∼ Uniform[0, 1]. For this we set x̃ = x/l, solve the normalized equation ũ(ϑ̃_{x̃}, x̃) = U, and then take

  ϑ_x = (8l/σ²) ϑ̃_{x̃}.

Note that

  P(ϑ_x < t) = P(ϑ̃_{x̃} < σ²t/(8l)) = ũ(σ²t/(8l), x/l).

We have plotted in Figure 2 the normalized function ũ(t̃, x̃) for γ = −1/4.
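The inversion step just described can be sketched as follows (our own illustration, again restricted to the elementary case γ = −1/4; the bisection tolerance, the bracket doubling, and the names u_tilde and sample_theta are assumptions). Given a uniform draw U we solve ũ(ϑ̃, x̃) = U and rescale by 8l/σ²:

```python
import math

def u_tilde(t, x, n_terms=200):
    """Normalized distribution function u~(t~, x~) for gamma = -1/4 (cf. Example 12)."""
    z = math.sqrt(x)
    s = sum(
        (-1) ** m / m * math.sin(m * math.pi * z) * math.exp(-math.pi**2 * m**2 * t)
        for m in range(1, n_terms + 1)
    )
    return 1.0 + (2.0 / math.pi) / z * s

def sample_theta(u, x, l, sigma, tol=1e-12):
    """Given a uniform draw u, solve u~(t~, x/l) = u by bisection (u~ is increasing
    in t~ from 0 to 1), then undo the time scaling: theta = 8 l t~ / sigma^2."""
    xt = x / l
    lo, hi = 0.0, 1.0
    while u_tilde(hi, xt) < u:          # expand the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if u_tilde(mid, xt) < u:
            lo = mid
        else:
            hi = mid
    return 8.0 * l * 0.5 * (lo + hi) / sigma**2

theta = sample_theta(0.5, x=0.05, l=0.1, sigma=1.0)
```

In practice one would pass u = random.random() for each sample; a fixed u keeps the example deterministic.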
Figure 1: Upper panel: u⁺(0.1, x); lower panel: u⁺(0.1, x) − u⁻(0.1, x), for 0 ≤ x ≤ 0.1.

Figure 2: Normalized distribution function ũ(t̃, x̃) for γ = −1/4.
References

[1] A. Alfonsi (2005). On the discretization schemes for the CIR (and Bessel squared) processes. Monte Carlo Methods Appl., v. 11, no. 4.

[2] A. Alfonsi (2010). High order discretization schemes for the CIR process: Application to affine term structure and Heston models. Math. Comput., v. 79.

[3] L. Andersen (2008). Simple and efficient simulation of the Heston stochastic volatility model. J. Comp. Fin., v. 11.

[4] H. Bateman, A. Erdélyi (1953). Higher Transcendental Functions. McGraw-Hill Book Company.

[5] M. Broadie, Ö. Kaya (2006). Exact simulation of stochastic volatility and other affine jump diffusion processes. Oper. Res., v. 54.

[6] J. Cox, J. Ingersoll, S.A. Ross (1985). A theory of the term structure of interest rates. Econometrica, v. 53, no. 2.

[7] S. Dereich, A. Neuenkirch, L. Szpruch (2012). An Euler-type method for the strong approximation of the Cox-Ingersoll-Ross process. Proc. R. Soc. A, v. 468, no. 2140.

[8] H. Doss (1977). Liens entre équations différentielles stochastiques et ordinaires. Ann. Inst. H. Poincaré Sect. B (N.S.), v. 13, no. 2.

[9] P. Glasserman (2003). Monte Carlo Methods in Financial Engineering. Springer.

[10] P. Hartman (1964). Ordinary Differential Equations. John Wiley & Sons.

[11] D.J. Higham, X. Mao (2005). Convergence of Monte Carlo simulations involving the mean-reverting square root process. J. Comp. Fin., v. 8, no. 3.

[12] D.J. Higham, X. Mao, A.M. Stuart (2002). Strong convergence of Euler-type methods for nonlinear stochastic differential equations. SIAM J. Numer. Anal., v. 40, no. 3.

[13] S.L. Heston (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, v. 6, no. 2.

[14] N. Ikeda, S. Watanabe (1981). Stochastic Differential Equations and Diffusion Processes. North-Holland/Kodansha.

[15] I. Karatzas, S.E. Shreve (1991). Brownian Motion and Stochastic Calculus. Springer.

[16] G.N. Milstein, M.V. Tretyakov (2004). Stochastic Numerics for Mathematical Physics. Springer.

[17] G.N. Milstein, M.V. Tretyakov (2005). Numerical analysis of Monte Carlo evaluation of Greeks by finite differences. J. Comp. Fin., v. 8, no. 3.

[18] D. Revuz, M. Yor (1991). Continuous Martingales and Brownian Motion. Springer.

[19] L.C.G. Rogers, D. Williams (1987). Diffusions, Markov Processes, and Martingales, v. 2: Itô Calculus. John Wiley & Sons.

[20] H.J. Sussmann (1978). On the gap between deterministic and stochastic ordinary differential equations. Annals of Probability, v. 6.