Central European University
Department of Applied Mathematics

Thesis

Pathwise approximation of the Feynman-Kac formula

Author: Denis Nagovitcyn
Supervisor: Tamás Szabados

May 18, 2015

Contents

Introduction
1 Discretization of an Itô diffusion
  1.1 Twist & shrink construction
  1.2 Convergence results
2 Discretization of the Feynman-Kac formula
  2.1 Solving the real-valued Schrödinger equation
  2.2 Application to the Black-Scholes model
Conclusion
Bibliography

Introduction

In the middle of the twentieth century there were essentially two different mathematical formulations of quantum mechanics: the differential equation of Schrödinger and the matrix algebra of Heisenberg. The two apparently dissimilar approaches were proved to be mathematically equivalent, and the paradigm of non-relativistic quantum theory remained unchanged until an article by Richard Feynman [7] was published in 1948. His article was an attempt to present a third approach. In essence, the new idea was to associate the quantity known as a probability amplitude (which describes the position of an elementary particle) with an entire path of the particle in space-time. Prior to that idea, the probability amplitude was considered to depend on the position of the particle at specific points in time. As was noted by the author: "There is no practical advantage to this, but the formula (1) is very suggestive if a generalization to a wider class of action functionals is contemplated." We will see in an instant that the author's remark indeed turned out to be true. Based on physical intuition, Feynman embodied his thoughts in the following formula, sometimes referred to as the Feynman integral:
\[
\psi(x,t) = \frac{1}{A}\int_{\Omega_x}\exp\left\{\frac{i}{\hbar}\int_0^t\Big[\tfrac{1}{2}\dot w_s^2 - V(w_s)\Big]\,ds\right\} g(w_t)\prod_{s\le t} dw_s, \tag{1}
\]
where $\hbar$ is the Planck constant, $g(w)=\psi(w,0)$ is an initial condition, and $A$ is a normalizing constant for the Lebesgue-type infinite product measure $\prod_{s\le t} dw_s$ over the space of trajectories $\Omega_x[0,t]$. It should be noted that the quantity $L(\dot w_s,s)=\tfrac{1}{2}\dot w_s^2 - V(w_s)$ is known as the Lagrangian, and the integral $\int_0^t L(\dot w_s,s)\,ds$ is the classical action integral along the path $W=(w_\tau,\ 0<\tau\le t)$. Notice that the infinite product measure is not a well-defined mathematical object, and Feynman was well aware of that. His hope was that a cleverly defined normalizing constant would make it sensible. Unfortunately, he did not present a mathematically valid form for the constant; instead he proceeded with the famous conjecture that the function (1) solves a complex-valued Schrödinger equation:
\[
\frac{1}{i}\frac{\partial\psi}{\partial t} = \frac{1}{2}\frac{\partial^2\psi}{\partial x^2} - V(x)\,\psi. \tag{2}
\]
As was noted in 1961 by Kiyosi Itô [8]: "It is easy to see that (1) solves (2) unless we require mathematical rigor." It should be emphasized that, despite numerous attempts, a mathematically rigorous solution of (2) still has not been presented. Nevertheless there is a silver lining: having in mind the real-valued Feynman integral
\[
\psi(x,t) = \frac{1}{A}\int_{\Omega_x}\exp\left\{-\int_0^t\Big[\tfrac{1}{2}\dot w_s^2 + V(w_s)\Big]\,ds\right\} g(w_t)\prod_{s\le t} dw_s, \tag{3}
\]

Mark Kac [11] was able to solve a real-valued Schrödinger equation (also referred to as the heat equation),
\[
\frac{\partial\psi}{\partial t} = \frac{1}{2}\Delta\psi - V(x)\,\psi,
\]
where $\Delta$ stands for the Laplace operator. The solution is the celebrated Feynman-Kac formula
\[
\psi(x,t) = \int_{C[0,t]}\exp\left\{-\int_0^t V[w(s)]\,ds\right\} g[w(t)]\,dP_x(w), \tag{4}
\]
where $C[0,t]$ is the space of continuous trajectories and $P_x$ is the Wiener measure, which comes out of (3) by moving the term $e^{-\frac{1}{2}\int_0^t\dot w_s^2\,ds}$ into the integral measure.

The Feynman-Kac formula plays a very important role in science and has been applied to a variety of problems across disciplines. As an illustrative example, in the last section we will demonstrate the derivation of an option pricing formula.

The aim of the current thesis is to provide a discrete analog of formula (4) based on the strong approximation of Brownian motion by simple symmetric random walks. The underlying process $w(t)$ will take the form of an Itô diffusion with non-constant coefficients. Weakly convergent (in distribution) approximations were given earlier by M. Kac [11] and E. Csáki [6]. This research may be considered an extension of the special case introduced in [16], where the underlying stochastic process $w(t)$ was assumed to have constant coefficients.

The results of the current thesis may be useful for giving a rigorous proof in the complex-valued case (2). According to work in progress by Tamás Szabados, the normalizing constant $A$ in formula (1) might be set in such a way that a discrete analog of the Lebesgue-type product measure $\frac{1}{A}\prod dw$ has a binomial distribution, which would justify the application of a construction similar to the one presented in the current thesis. There exists a vast number of articles intended to solve the complex-valued case; the most significant ones are [1], [3], [4], [8], [15]. For example, in [8] Itô solves equation (2) for the case of the function $V(\cdot)$ being a constant; despite a substantial degree of mathematical rigor, his solution uses heuristic arguments. The same might be concluded about the majority of existing research on this topic.

In order to fulfill the stated goal we will follow this outline: using the twist & shrink construction [19] of Brownian motion, we will establish a discrete analog of the real-valued Schrödinger equation and of its solution, which converges to the continuous case.

Chapter 1
Discretization of an Itô diffusion

In the current chapter we present a discrete version of a time-homogeneous Itô diffusion. In the first section a discretization of the Wiener process is given; based on it, in the second section several approximation schemes are discussed and their convergence to the continuous process is established.

1.1 Twist & shrink construction

A basic tool of the present paper is an elementary construction of Brownian motion. This construction, taken from [19], is based on a nested sequence of simple, symmetric random walks that converges uniformly to Brownian motion on bounded time intervals with probability 1. It will be called the twist & shrink construction. The method is a modification of the one given by Frank Knight in [13] and of its simplification by Pál Révész. We summarize the major steps of the twist & shrink construction here.

We start with a sequence of independent simple symmetric random walks (RW),
\[
S_m(0)=0, \qquad S_m(n)=\sum_{k=1}^{n} X_m(k) \quad (n\ge 1),
\]
based on an infinite matrix of independent and identically distributed random variables $X_m(k)$, $\mathbb{P}\{X_m(k)=\pm 1\}=\tfrac{1}{2}$ $(m\ge 0,\ k\ge 1)$, defined on the same complete probability space $(\Omega,\mathcal{F},\mathbb{P})$.

First, we would like to decrease the size of a step in time as we move along the sequence, since ultimately we are pursuing Brownian motion, which takes values at every real time instant. That is, for the $m$-th random walk $S_m(n)$ we would like to have time steps of length $1/2^{2m}$; but then a natural question arises: by how much do we have to decrease the size of a step in space correspondingly, in order to preserve the essential properties of a random walk? Surprisingly, the answer is that we have to care about only one property, namely we have to preserve the square root of the expected squared distance from the origin, which is the standard deviation of a simple symmetric RW:
\[
\sqrt{\operatorname{Var}[S_m(n)]} = \sqrt{\mathbb{E}\big[(S_m(n)-\mathbb{E}[S_m(n)])^2\big]} = \sqrt{\mathbb{E}\big[S_m(n)^2\big]} = \sqrt{n}.
\]

Thus we may conclude that after $n$ steps in time, on average, our random walk will be at distance $\sqrt{n}$ from the origin. So it follows that in order to have $n$ steps in one time unit, the step size in space has to be $1/\sqrt{n}$. This is called shrinking.

Next, from the independent RW's we want to create dependent ones, in such a way that after shrinking each consecutive RW becomes a refinement of the previous one. Since the spatial unit will be halved at each consecutive row, we define stopping times by $T_m(0)=0$ and, for $k\ge 0$,
\[
T_m(k+1) = \min\{n : n > T_m(k),\ |S_m(n)-S_m(T_m(k))| = 2\} \qquad (m\ge 1).
\]
These are the random time instants when a RW visits even integers. After shrinking the spatial unit by half, a suitable modification of this RW will visit the same integers in the same order as the previous RW. This is called twisting.

We operate here on each point $\omega\in\Omega$ of the sample space separately, i.e. we fix a sample path of each RW. We define the twisted RW's $\tilde S_m$ recursively for $k=1,2,\dots$ using $\tilde S_{m-1}$, starting with $\tilde S_0(n)=S_0(n)$ $(n\ge 0)$ and $\tilde S_m(0)=0$ for any $m\ge 0$. With each fixed $m$ we proceed for $k=0,1,2,\dots$ successively, and for every $n$ in the corresponding bridge, $T_m(k) < n \le T_m(k+1)$. Each bridge is flipped if its sign differs from the desired one: $\tilde X_m(n) = \pm X_m(n)$, depending on whether $S_m(T_m(k+1)) - S_m(T_m(k)) = 2\,\tilde X_{m-1}(k+1)$ or not. So $\tilde S_m(n) = \tilde S_m(n-1) + \tilde X_m(n)$. Then $(\tilde S_m(n))_{n\ge 0}$ is still a simple symmetric RW [19, Lemma 1]. The twisted RW's have the desired refinement property:
\[
\tfrac{1}{2}\,\tilde S_{m+1}(T_{m+1}(k)) = \tilde S_m(k) \qquad (m\ge 0,\ k\ge 0).
\]
The sample paths of $\tilde S_m(n)$ $(n\ge 0)$ can be extended to continuous functions by linear interpolation; this way one gets $\tilde S_m(t)$ for $t\in\mathbb{R}_+$. Putting it all together, the $m$-th twist & shrink RW is defined by
\[
B_m(t) = 2^{-m}\,\tilde S_m\!\left(t\,2^{2m}\right).
\]

As an illustration of the twist & shrink construction we present the step-one approximation of an initially simulated random walk. For two independent simple symmetric random walks please refer to Figure 1.1. As the reader may anticipate, our goal is to refine the random walk $S_0(t)$ with another random walk $S_1(t)$ using the described method. Notice that we took $S_0(t)$ up to time 4; to refine it, in the case of this particular simulation, a longer stretch of $S_1(t)$ was needed. In the pictures, dots on the graph correspond to the values of the initial random walk. We start our process with $S_1(0)$ and wait until the walk moves by $2$; in this simulation it happens at time $t=2$. Then we wait for another move of $S_1(t)$ of magnitude $2$, and it happens at $t=4$; however, the corresponding move of $S_0(t)$ was down, whereas $S_1(t)$ moved up. So we reflect the whole path of $S_1(t)$ from $t=2$ until the end, and, as it turns out, one reflection was enough to mimic the direction of the moves of $S_0(t)$. Thus we are ready to properly scale the twisted process $\tilde S_1(t)$: namely, we divide its time units by $2^2$ and its spatial units by $2^1$. As a result we obtain $B_1(t)$, the first-step approximation of a Brownian motion.
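To make the construction concrete, the following Python sketch simulates the twist & shrink recursion; it is an illustration added here, not part of the original development, and the names `refine` and `twist_and_shrink` are ours. It builds each bridge of a fresh walk until the walk has moved by 2, flips the bridge if its direction disagrees with the already-twisted coarser walk, and then shrinks time by $4$ and space by $2$ per level.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(tS_prev):
    """One twist step: draw a fresh simple symmetric walk and flip its bridges
    (excursions between visits to even integers) so that they follow the +-1
    moves of the already-twisted coarser walk tS_prev."""
    tX = []
    for k in range(len(tS_prev) - 1):
        bridge, disp = [], 0
        while abs(disp) < 2:                      # run the bridge until the walk moved by 2
            step = int(rng.choice((-1, 1)))
            bridge.append(step)
            disp += step
        if disp // 2 != tS_prev[k + 1] - tS_prev[k]:   # wrong direction: twist (flip) it
            bridge = [-s for s in bridge]
        tX.extend(bridge)
    return np.concatenate(([0], np.cumsum(tX)))   # twisted walk S~_m

def twist_and_shrink(M, T0=8):
    """B_M(t) = 2^{-M} * S~_M(t * 4^M); returns the dyadic grid and the values.
    The horizon actually covered is random (about T0 on average)."""
    tS = np.concatenate(([0], np.cumsum(rng.choice((-1, 1), size=T0))))   # S~_0
    for _ in range(M):
        tS = refine(tS)
    return np.arange(len(tS)) * 4.0 ** (-M), tS * 2.0 ** (-M)

t, B = twist_and_shrink(M=6)
print(f"covered [0, {t[-1]:.2f}] with {len(t)} dyadic points; B_6 near t=1: {B[t <= 1][-1]:.3f}")
```

The refinement property of the text holds by construction: after every completed bridge the twisted walk sits at twice the value of the coarser walk.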

Figure 1.1: Two independent random walks.
Figure 1.2: The twisted and shrunk random walk $\tilde S_1(t)$.

Convergence of the twist & shrink sequence to Brownian motion is provided by the next theorem.

Theorem A. The sequence of random walks $B_m(t)$ converges uniformly to the Brownian motion $W(t)$ on bounded intervals of time with probability 1 as $m\to\infty$:
\[
\sup_{0\le t\le T}|W(t)-B_m(t)| = O\!\left(m^{3/4}\,2^{-m/2}\right) \quad \text{a.s., for any } T>0.
\]
The proof may be found in [20, p. 84].

Conversely, given a Wiener process $W(t)$, one can define stopping times which yield the Skorokhod embedded random walks $\tilde B_m(k\,2^{-2m})$ in $W(t)$. For every $m\ge 0$ let $\tau_m(0)=0$ and
\[
\tau_m(k+1) = \inf\{s : s > \tau_m(k),\ |W(s)-W(\tau_m(k))| = 2^{-m}\} \qquad (k\ge 0).
\]
With these stopping times the embedded dyadic walks are, by definition,
\[
\tilde B_m(k\,2^{-2m}) = W(\tau_m(k)) \qquad (m,k\ge 0).
\]
This definition of $\tilde B_m$ can be extended to any real $t\ge 0$ by pathwise linear interpolation.

If a Wiener process is built by the twist & shrink construction described above, using a sequence $B_m$ of nested random walks, and one then constructs the Skorokhod embedded random walks $\tilde B_m$, it is natural to ask about their relationship.
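For comparison, here is a small sketch of the Skorokhod embedding just described (again ours, not from the thesis; the Wiener path is approximated on a fine Euler grid, which is an extra assumption). It records the successive first-passage times $\tau_m(k)$ at which the path has moved by $2^{-m}$ and reads off the embedded dyadic walk $\tilde B_m(k\,2^{-2m}) = W(\tau_m(k))$.

```python
import numpy as np

rng = np.random.default_rng(1)

def embedded_walk(m, T=1.0, dt=1e-5):
    """Approximate Skorokhod embedding: simulate W on a fine grid and record the
    first-passage times tau_m(k) at which |W - W(tau_m(k-1))| reaches 2^{-m}."""
    n = int(T / dt)
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))
    taus, vals, last = [0.0], [0.0], 0.0
    for i in range(1, n + 1):
        if abs(W[i] - last) >= 2.0 ** (-m):          # tau_m(k+1) reached (up to grid error)
            last += 2.0 ** (-m) * np.sign(W[i] - last)
            taus.append(i * dt)
            vals.append(last)                        # B~_m(k 2^{-2m}) = W(tau_m(k))
    return np.array(taus), np.array(vals)

taus, Bm = embedded_walk(m=4)
print(len(taus) - 1, "stopping times; mean gap", np.diff(taus).mean(), "vs 4^{-m} =", 4.0 ** -4)
```

The printed mean gap illustrates that the expected spacing of the stopping times is $2^{-2m}$, i.e. the Skorokhod partition mimics the dyadic one on average.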

The next theorem demonstrates that they are asymptotically equivalent; the proof may be found in [19, pp. 24-31].

Theorem B. For any $E>1$ and $F>0$, and for $m\ge 1$ such that $F\,2^{2m}\ge N(E)$, take the following subset of $\Omega$:
\[
A_{F,m} = \left\{ \sup_{n>m}\ \sup_{0\le k2^{-2m}\le F}\ \big|2^{-2n}T_{m,n}(k) - k\,2^{-2m}\big| < C_{E,F}\, m^{1/2}\, 2^{-m} \right\},
\]
where $C_{E,F}$ is a constant factor depending only on $E$ and $F$, and $T_{m,n}(k) = T_n\circ T_{n-1}\circ\cdots\circ T_{m+1}(k)$ for $n>m$ and $k\in[0, F\,2^{2m}]$. Then
\[
\mathbb{P}\{(A_{F,m})^c\} \le E\,(F\,2^{2m})^{1-E}.
\]
Moreover, $\lim_{n\to\infty} 2^{-2n}T_{m,n}(k) = t_m(k)$ exists almost surely, and on $A_{F,m}$ we have $B_m(k\,2^{-2m}) = W(t_m(k))$ $(0\le k\,2^{-2m}\le F)$. Further, almost everywhere on $A_{F,m}$ and for any $0<\delta<1$, we have $\tau_m(k) = t_m(k)$, and
\[
\sup_{0\le k2^{-2m}\le K}\big|\tau_m(k) - k\,2^{-2m}\big| \le C_{E,F}\, m^{1/2}\, 2^{-m},
\qquad
\max_{1\le k2^{-2m}\le K}\big|\tau_m(k)-\tau_m(k-1)\big| \le (7/\delta)\, 2^{-2m(1-\delta)}.
\]

Essentially the second theorem tells us two important things. First, the distance between the time instants of the twist & shrink process $B_m$ and those of the Skorokhod embedded process $\tilde B_m$ approaches $0$; in other words, for $m$ big enough the two partitions of the time axis have essentially the same mesh. Second, whenever we are on the subset $A_{F,m}$, we may use the stopped Brownian motion instead of the twist & shrink process.

1.2 Convergence results

Consider the following Itô diffusion,
\[
dX_t = a(X_t)\,dt + b(X_t)\,dW_t,
\]
or, in integral form,
\[
X_t = x_0 + \int_0^t a(X_s)\,ds + \int_0^t b(X_s)\,dW_s, \tag{1.1}
\]
where $x_0$ is an initial state which we assume to be deterministic for the sake of simplicity, $W_t$ stands for the Wiener process, and $a(X_t)$ and $b(X_t)$ are the drift and diffusion coefficients respectively, satisfying certain conditions which we specify below. All the requirements that we are going to list are very natural, since we would like every term of (1.1) to make sense. First, we describe a class of functions for which the Itô integral is defined. Using the notation stated above, the function $b(\cdot)$ has to satisfy the following properties:

(i) $(t,\omega)\mapsto b(t,\omega)$ is $\mathcal{B}\times\mathcal{F}$-measurable, where $\mathcal{B}$ denotes the Borel $\sigma$-algebra on $[0,\infty)$;

(ii) $b(t,\omega)$ is $\mathcal{F}_t$-adapted, where $\mathcal{F}_t$ stands for the natural filtration generated by the Brownian motion;

(iii) $\mathbb{E}\big[\int_0^t b^2(s,\omega)\,ds\big] < \infty$.

The reader should note that the class of functions satisfying conditions (i)-(iii) is not the largest one for which the Itô integral is defined; for a detailed discussion refer to [9, p. 25]. Second, we would like the function $a(\cdot)$ to be such that $\mathbb{E}\big[\int_0^t |a(s,\omega)|\,ds\big] < \infty$.

As is known from the theory of stochastic differential equations, global Lipschitz continuity is sufficient to guarantee existence and uniqueness of the solution of an SDE: let $T>0$ and let $a(\cdot):\mathbb{R}\to\mathbb{R}$, $b(\cdot):\mathbb{R}\to\mathbb{R}$ be measurable functions satisfying
\[
|a(x)-a(y)| \le K|x-y|, \tag{1.2}
\]
\[
|b(x)-b(y)| \le K|x-y|, \tag{1.3}
\]
for some positive constant $K$ and any $x,y\in\mathbb{R}$. It is worth mentioning that the linear growth condition for the functions $a(\cdot)$, $b(\cdot)$ is a consequence of global Lipschitz continuity (taking $y=0$); more precisely,
\[
|a(x)| \le C(1+|x|), \tag{1.4}
\]
\[
|b(x)| \le C(1+|x|), \tag{1.5}
\]
for some positive constant $C$ and any $x\in\mathbb{R}$.

To proceed we need a useful upper bound for the second moment of the Itô diffusion $X_t$. We are going to give a proof for the case of a one-dimensional Itô diffusion; the reader may refer to [12, p. 136] for the case of a general Itô process. The proof that we are about to give uses a very straightforward approach and, as a result, gives an upper bound which is a little different from the conventional one; however, it is still usable.

Lemma 1. Assume that we have an Itô process $X_t$ satisfying conditions (1.2)-(1.3). Then
\[
\mathbb{E}\,X_t^2 \le e^{C_1 t}\,(3x_0^2+1), \qquad x_0\in\mathbb{R},\ t\in[0,T],
\]
where $C_1 = 6\,C^2\,(T+1)$.

Proof. Take the process $X_t$ in the form of equation (1.1) and square it to get
\[
X_t^2 \le 3\left( x_0^2 + \Big(\int_0^t a(X_s)\,ds\Big)^2 + \Big(\int_0^t b(X_s)\,dW_s\Big)^2 \right).
\]
Now taking expectations of both sides of the above inequality and using some basic results leads to the desired upper bound:
\[
\mathbb{E}\,X_t^2 \le 3x_0^2 + 3\,t\,\mathbb{E}\int_0^t a^2(X_s)\,ds + 3\,\mathbb{E}\int_0^t b^2(X_s)\,ds
\le 3x_0^2 + 6\,T\,C^2\,\mathbb{E}\int_0^t (1+X_s^2)\,ds + 6\,C^2\,\mathbb{E}\int_0^t (1+X_s^2)\,ds
\]
\[
\le 3x_0^2 + 6\,t\,C^2(T+1) + 6\,C^2(T+1)\,\mathbb{E}\int_0^t X_s^2\,ds;
\]
notice that to get the first inequality we used the Cauchy-Schwarz inequality and the Itô isometry, while to obtain the second we used the linear growth condition for the functions $a(\cdot)$, $b(\cdot)$. Now to complete the proof use Gronwall's lemma to get
\[
\mathbb{E}\,X_t^2 \le 3x_0^2 + 6\,t\,C^2(T+1) + 6\,C^2(T+1)\int_0^t e^{6C^2(T+1)(t-s)}\big(3x_0^2 + 6\,s\,C^2(T+1)\big)\,ds.
\]
The result follows after easy integral calculus.
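As a quick numerical sanity check of the (deliberately crude) bound of Lemma 1, the following sketch estimates $\mathbb{E}\,X_T^2$ by a standard Euler-Maruyama Monte Carlo simulation. This is our illustration, not the scheme developed in this thesis; the drift $a(x)=-x$, the diffusion $b(x)=1+0.5\sin x$ and the constant $C=1.5$ are illustrative choices satisfying (1.2)-(1.5).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy diffusion dX = a(X) dt + b(X) dW with a(x) = -x, b(x) = 1 + 0.5*sin(x):
# both are Lipschitz and satisfy |a(x)|, |b(x)| <= C (1 + |x|) with C = 1.5.
a, b = lambda x: -x, lambda x: 1.0 + 0.5 * np.sin(x)
x0, T, n, paths, C = 1.0, 1.0, 1000, 20000, 1.5
dt = T / n

X = np.full(paths, x0)
for _ in range(n):                                   # Euler-Maruyama, vectorised over paths
    X = X + a(X) * dt + b(X) * rng.normal(0.0, np.sqrt(dt), size=paths)

C1 = 6.0 * C**2 * (T + 1.0)                          # constant from Lemma 1
print("MC estimate of E[X_T^2]:", (X**2).mean())
print("Lemma 1 bound e^{C1 T}(3 x0^2 + 1):", np.exp(C1 * T) * (3 * x0**2 + 1))
```

The bound is exponentially loose, as expected; its only role in the thesis is to control second moments uniformly on $[0,T]$.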

Now let us see how we can construct a discrete approximation that converges strongly to the Itô diffusion $X_t$. First we are going to use the Skorokhod embedded random walks. Let $\tilde B_m$ be the $m$-th step simple symmetric random walk and let $\tilde X^m$ be the $m$-th approximation of $X$, given by
\[
\tilde X^m_{\tau_{n+1}} = \tilde X^m_{\tau_n} + a(\tilde X^m_{\tau_n})\,\Delta\tau_{n+1} + b(\tilde X^m_{\tau_n})\,\Delta\tilde B^m_{\tau_{n+1}},
\]
where $a(\cdot)$ and $b(\cdot)$ are the functions specified above, $\Delta\tau_{n+1} = \tau_{n+1}-\tau_n$ with the $\tau_i$ being the Skorokhod embedding times, $\Delta\tilde B^m_{\tau_{n+1}} = \tilde B^m_{\tau_{n+1}} - \tilde B^m_{\tau_n}$, and the initial condition is $\tilde X^m_0 = x_0$ for all $m\ge 0$. Alternatively we may rewrite the approximation in summation form:
\[
\tilde X^m_{\tau_n} = x_0 + \sum_{i=0}^{n-1} a(\tilde X^m_{\tau_i})\,\Delta\tau_{i+1} + \sum_{i=0}^{n-1} b(\tilde X^m_{\tau_i})\,\Delta\tilde B^m_{\tau_{i+1}}.
\]
Furthermore, instead of working on the whole probability space, let us restrict ourselves to the subset $A_{F,m}$ which was defined before. Then $\Delta\tilde B^m_{\tau_i} = \Delta W_{\tau_i}$ and we may rewrite the discrete approximation as
\[
\tilde X^m_{\tau_n} = x_0 + \sum_{i=0}^{n-1} a(\tilde X^m_{\tau_i})\,\Delta\tau_{i+1} + \sum_{i=0}^{n-1} b(\tilde X^m_{\tau_i})\,\Delta W_{\tau_{i+1}}. \tag{1.6}
\]
Observe that by linear interpolation we can extend the approximation to an arbitrary time $t\in[0,T]$; then
\[
\tilde X^m_t = x_0 + \int_0^{\tau_{N_t}} a(\tilde X^m_s)\,ds + \int_0^{\tau_{N_t}} b(\tilde X^m_s)\,dW_s, \tag{1.7}
\]
where $\tau_{N_t} := \inf\{\tau_i : \tau_i > t\}$. Notice that the approximations (1.6) and (1.7) are very close to each other; their $L^2$-closeness can be checked rigorously by techniques similar to the one in Theorem 2 below. Intuitively, since by Theorem B $\tau_i$ is arbitrarily close to $i\,2^{-2m}$ for $m$ large enough, the first sum in (1.6) is a Riemann sum and the second sum approaches the Itô integral as $m\to\infty$ by construction. We are going to check convergence only in the case of scheme (1.7). Let us also introduce the notation $\delta := \max_i\{\tau_{i+1}-\tau_i\}$, which stands for the maximum mesh size of the given approximation at level $m$.

Theorem 1. Given an Itô diffusion $X_t$ satisfying conditions (1.2), (1.3) and the discrete approximation $\tilde X^m_t$ described above, as $m\to\infty$
\[
\sup_{t\in[0,T]} \mathbb{E}\big[(X_t - \tilde X^m_t)^2\big] \to 0.
\]

Proof. Let us denote
\[
Z(T) := \sup_{t\le T}\mathbb{E}\,(X_t-\tilde X^m_t)^2 \le \mathbb{E}\sup_{t\le T}(X_t-\tilde X^m_t)^2;
\]
then
\[
Z(T) \le \mathbb{E}\sup_{t\le T}\left( \int_0^{\tau_{N_t}}\big[a(X_s)-a(\tilde X^m_s)\big]ds + \int_0^{\tau_{N_t}}\big[b(X_s)-b(\tilde X^m_s)\big]dW_s + \int_{\tau_{N_t}}^t a(X_s)\,ds + \int_{\tau_{N_t}}^t b(X_s)\,dW_s \right)^2 \le 3\,(R_1+R_2+R_3),
\]
where $R_1$, $R_2$ and $R_3$ will be given explicitly below.

\[
R_1 := \mathbb{E}\sup_{t\le T}\Big(\int_0^{\tau_{N_t}}\big[a(X_s)-a(\tilde X^m_s)\big]ds\Big)^2
\le \mathbb{E}\sup_{t\le T}\ \tau_{N_t}\int_0^{\tau_{N_t}}\big(a(X_s)-a(\tilde X^m_s)\big)^2 ds
\le K^2(T+\epsilon)\int_0^{T+\epsilon}\sup_{u\le s}\mathbb{E}\,(X_u-\tilde X^m_u)^2\,ds
= K^2(T+\epsilon)\int_0^{T+\epsilon} Z(s)\,ds,
\]
since for $m$ large enough $\tau_{N_T}\le T+\epsilon$ by Theorem B; during the derivation we used the Cauchy-Schwarz inequality, Lipschitz continuity and the Fubini theorem. In a similar manner we deal with the second term:
\[
R_2 := \mathbb{E}\sup_{t\le T}\Big(\int_0^{\tau_{N_t}}\big[b(X_s)-b(\tilde X^m_s)\big]dW_s\Big)^2
\le 4\,\mathbb{E}\Big(\int_0^{\tau_{N_T}}\big[b(X_s)-b(\tilde X^m_s)\big]dW_s\Big)^2
\le 4K^2\int_0^{T+\epsilon}\sup_{u\le s}\mathbb{E}\,(X_u-\tilde X^m_u)^2\,ds
= 4K^2\int_0^{T+\epsilon} Z(s)\,ds,
\]
where we used Doob's inequality, the Itô isometry and Lipschitz continuity. As the reader may have noticed, the first two terms involve integration up to time $\tau_{N_t}$, while the third term takes care of the remainder:
\[
R_3 := \mathbb{E}\sup_{t\le T}\Big(\int_{\tau_{N_t}}^t a(X_s)\,ds + \int_{\tau_{N_t}}^t b(X_s)\,dW_s\Big)^2
\le 2\,\mathbb{E}\sup_{t\le T}\Big[(t-\tau_{N_t})\int_{\tau_{N_t}}^t a^2(X_s)\,ds\Big] + 8\,\mathbb{E}\int_{\tau_{N_T}}^T b^2(X_s)\,ds
\]
\[
\le (4\,\delta\,C^2 + 16\,C^2)\int_{\tau_{N_T}}^T\big(1+\mathbb{E}X_s^2\big)\,ds
\le (4\,T\,C^2 + 16\,C^2)\,e^{TC_1}(3x_0^2+1)\,\delta.
\]
Denoting $C_{1,T} := (4TC^2+16C^2)\,e^{TC_1}(3x_0^2+1)$ and $C_{2,T} := 3K^2(T+\epsilon+4)$, and collecting all the estimates, for constants $C_{1,T}$ and $C_{2,T}$ depending solely on $T$ and not on $\delta$, we have
\[
Z(T) \le C_{1,T}\,\delta + C_{2,T}\int_0^{T+\epsilon} Z(s)\,ds.
\]
Now using Gronwall's lemma one gets $Z(T)\le C_T\,\delta$. Next, by the consequence of Jensen's inequality known as Lyapunov's inequality, for $0<s<t$, $\big(\mathbb{E}|f|^s\big)^{1/s}\le\big(\mathbb{E}|f|^t\big)^{1/t}$; setting $s=1$, $t=2$ and $f=X_t-\tilde X^m_t$ we also have
\[
\sup_{t\le T}\mathbb{E}\,|X_t-\tilde X^m_t| \le \sqrt{Z(T)} \le \sqrt{C_T\,\delta}.
\]

Thus from Theorem B we may infer that $\delta\to 0$ as $m\to\infty$, which gives convergence on the subset $A_{F,m}$ of $\Omega$. Moreover, since $\mathbb{P}\{(A_{F,m})^c\}$ goes to zero, the claim of the theorem holds on the whole space.

For the discussions in the sequel of the paper it is unfortunate to have random time instants. In fact, it is more convenient to have dyadic rational points instead of a sequence of stopping times. That is why we are motivated to upgrade the discrete approximation in (1.6). Let us use the twist & shrink construction of Brownian motion and denote the new approximation by $X^m$; then
\[
\Delta X^m_{t_i} = a(X^m_{t_i})\,\Delta t_{i+1} + b(X^m_{t_i})\,\Delta B^m_{t_{i+1}},
\]
with $X^m_0 = x_0$ for all $m\ge 0$. Once again we may rewrite it as
\[
X^m_{t_n} = x_0 + \sum_{i=0}^{n-1} a(X^m_{t_i})\,\Delta t_{i+1} + \sum_{i=0}^{n-1} b(X^m_{t_i})\,\Delta B^m_{t_{i+1}}. \tag{1.8}
\]
Using linear interpolation we may write the discrete scheme in the following form:
\[
X^m_t = x_0 + \int_0^{t_{N_t}} a(X^m_s)\,ds + \sum_{i=0}^{N_t-1} b(X^m_{t_i})\,\Delta B^m_{t_{i+1}}, \tag{1.9}
\]
where $N_t$ is such that $t_{N_t} = \inf\{t_i : t_i > t\}$. Note that the discrete schemes (1.8) and (1.9) are also very close to each other, because now we have a Riemann sum in (1.8) from the start. It is natural to ask about the closeness of the two discrete approximations $\tilde X^m_t$ and $X^m_t$ given by (1.7) and (1.9). To demonstrate that they are asymptotically (in the $L^2$ sense) close, we will use a technique similar to the one in Theorem 1.

Theorem 2. Given the two discrete approximations $\tilde X^m_t$ and $X^m_t$ described above, as $m\to\infty$
\[
\sup_{t\in[0,T]} \mathbb{E}\big[(\tilde X^m_t - X^m_t)^2\big] \to 0.
\]

Proof. Let us restrict our working space to the subset $A_{F,m}$ of $\Omega$, so that $\Delta W_{\tau_i} = \Delta\tilde B^m_{\tau_i}$. Without loss of generality we may assume that $\tau_{N_t}\ge t_{N_t}$. Let us denote
\[
U(T) := \sup_{t\le T}\mathbb{E}\,(\tilde X^m_t - X^m_t)^2 \le \mathbb{E}\sup_{t\le T}(\tilde X^m_t - X^m_t)^2;
\]
then
\[
U(T) \le \mathbb{E}\sup_{t\le T}\left( \int_0^{t_{N_t}}\big[a(\tilde X^m_s)-a(X^m_s)\big]ds + \sum_{i=0}^{N_t-1}\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s + \int_{t_{N_t}}^{\tau_{N_t}} a(\tilde X^m_s)\,ds + \sum_{i=0}^{N_t-1} b(X^m_{t_i})\,\Delta W_{\tau_{i+1}} - \sum_{i=0}^{N_t-1} b(X^m_{t_i})\,\Delta B^m_{t_{i+1}} \right)^2 \le 4\,(E_1+E_2+E_3+E_4),
\]
where $E_1$, $E_2$, $E_3$ and $E_4$ will be given explicitly below.
\[
E_1 := \mathbb{E}\sup_{t\le T}\Big(\int_0^{t_{N_t}}\big[a(\tilde X^m_s)-a(X^m_s)\big]ds\Big)^2
\le \mathbb{E}\sup_{t\le T}\ t_{N_t}\int_0^{t_{N_t}}\big(a(\tilde X^m_s)-a(X^m_s)\big)^2 ds
\le K^2\,T\int_0^{T}\sup_{u\le s}\mathbb{E}\,(\tilde X^m_u - X^m_u)^2\,ds
= K^2\,T\int_0^{T} U(s)\,ds,
\]
where we used the Cauchy-Schwarz inequality, Lipschitz continuity and the Fubini theorem.

\[
E_2 := \mathbb{E}\sup_{t\le T}\Big(\sum_{i=0}^{N_t-1}\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s\Big)^2
\le 4\,\mathbb{E}\Big(\sum_{i=0}^{N_T-1}\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s\Big)^2
\]
\[
= 4\,\mathbb{E}\left\{\sum_{i=0}^{N_T-1}\Big(\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s\Big)^2
+ 2\sum_{i<j}\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s\int_{\tau_j}^{\tau_{j+1}}\big[b(\tilde X^m_s)-b(X^m_{t_j})\big]dW_s\right\},
\]
where we used Doob's martingale inequality to get rid of the supremum. It is easy to show that the expectation of the second sum is $0$, by introducing a conditional expectation and using the martingale property of the Itô integral. Thus we have
\[
E_2 \le 4\sum_{i=0}^{N_T-1}\mathbb{E}\Big(\int_{\tau_i}^{\tau_{i+1}}\big[b(\tilde X^m_s)-b(X^m_{t_i})\big]dW_s\Big)^2
= 4\sum_{i=0}^{N_T-1}\mathbb{E}\int_{\tau_i}^{\tau_{i+1}}\big(b(\tilde X^m_s)-b(X^m_{t_i})\big)^2 ds
\le 4K^2\int_0^{T+\epsilon}\sup_{u\le s}\mathbb{E}\,(\tilde X^m_u - X^m_u)^2\,ds
= 4K^2\int_0^{T+\epsilon} U(s)\,ds,
\]
where we used the Itô isometry, the Fubini theorem and Lipschitz continuity. The reader should note that in the current case the third term takes care of the remainder:
\[
E_3 := \mathbb{E}\sup_{t\le T}\Big(\int_{t_{N_t}}^{\tau_{N_t}} a(\tilde X^m_s)\,ds\Big)^2
\le \mathbb{E}\sup_{t\le T}\,(\tau_{N_t}-t_{N_t})\int_{t_{N_t}}^{\tau_{N_t}} a^2(\tilde X^m_s)\,ds
\le 2\,C^2\rho\int_{t_*}^{\tau_*}\big(1+\mathbb{E}(\tilde X^m_s)^2\big)\,ds
\le 2\,C^2\rho^2\,\big(1+e^{TC_1}(3x_0^2+1)\big)
\le C_{3,T}\,\rho,
\]
where we used the Cauchy-Schwarz inequality, the linear growth consequence of Lipschitz continuity and Lemma 1, $\rho := \sup_{t\le T}(\tau_{N_t}-t_{N_t})$, and the limits of integration $t_*$ and $\tau_*$ are the ones at which $\rho$ is attained. Also notice that by Theorem B, $\rho \le C_{E,F}\,m^{1/2}\,2^{-m}$, so $\rho\to 0$ as $m\to\infty$. Fortunately the last term can be handled easily:
\[
E_4 := \mathbb{E}\Big(\sum_{i=0}^{N_t-1} b(X^m_{t_i})\,\Delta W_{\tau_{i+1}} - \sum_{i=0}^{N_t-1} b(X^m_{t_i})\,\Delta B^m_{t_{i+1}}\Big)^2 = 0,
\]
since on $A_{F,m}$ the two sums are identical. Using the same logic as at the end of the previous theorem, we obtain the claim.

Theorem 3. Given the discrete approximation scheme $X^m_{t_k}$ and Lipschitz continuous functions $a(\cdot)$, $b(\cdot)$, as $m\to\infty$
\[
\sup_{t\in[0,T]} \mathbb{E}\big[(X_t - X^m_{t_k})^2\big] \to 0.
\]
Proof. By applying the simple estimate for the square of a sum we have
\[
\sup_{t\in[0,T]}\mathbb{E}\big[(X_t - X^m_{t_k})^2\big] \le 2\left(\sup_{t\in[0,T]}\mathbb{E}\big[(X_t - \tilde X^m_{t_k})^2\big] + \sup_{t\in[0,T]}\mathbb{E}\big[(\tilde X^m_{t_k} - X^m_{t_k})^2\big]\right).
\]
Take the limit of both sides as $m\to\infty$ and use Theorems 1 and 2.

Thus we may choose the scheme $X^m_{t_k}$ as the discrete approximation of $X_t$ for further analysis, because it is more advantageous to have a deterministic partition in time.
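To make the dyadic-grid scheme (1.8)-(1.9) concrete, here is a minimal sketch of the Euler-type recursion; it is our illustration, not part of the thesis, and the names `discrete_diffusion`, the drift $a(x)=-x$ and the diffusion $b(x)=1+0.5\sin x$ are ours. For simplicity the driving walk $B_m$ at a fixed level is generated directly as a simple symmetric walk with space step $2^{-m}$ on the grid $t_i = i\,2^{-2m}$, which is its distribution at one level; the nested construction of Section 1.1 would additionally couple consecutive levels.

```python
import numpy as np

rng = np.random.default_rng(3)

def discrete_diffusion(a, b, x0, B, dt):
    """Scheme (1.8): X_{t_{i+1}} = X_{t_i} + a(X_{t_i}) dt + b(X_{t_i}) (B_{t_{i+1}} - B_{t_i})."""
    X = np.empty_like(B)
    X[0] = x0
    for i in range(len(B) - 1):
        X[i + 1] = X[i] + a(X[i]) * dt + b(X[i]) * (B[i + 1] - B[i])
    return X

m, T = 6, 1.0
dt = 4.0 ** (-m)                                   # dyadic time step 2^{-2m}
n = int(T / dt)
# one level of the twist & shrink walk: +-2^{-m} steps on the dyadic grid
B = 2.0 ** (-m) * np.concatenate(([0], np.cumsum(rng.choice((-1, 1), size=n))))
t = np.arange(n + 1) * dt

X = discrete_diffusion(lambda x: -x, lambda x: 1.0 + 0.5 * np.sin(x), 1.0, B, dt)
print("X^m at t = 1:", X[-1])
```

By Theorem 3, as $m$ grows this recursion converges in $L^2$, uniformly on $[0,T]$, to the Itô diffusion with the same coefficients.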

Chapter 2
Discretization of the Feynman-Kac formula

In this chapter we are going to prove the existence part of the solution of the real-valued Schrödinger equation, based on the discrete approximation. In the first section a discrete version of the Feynman-Kac formula is given and its convergence to the continuous case is shown. In the second section an application of the Feynman-Kac formula to the option pricing theory of mathematical finance is demonstrated.

2.1 Solving the real-valued Schrödinger equation

We are going to prove that if the functions $g(\cdot), r(\cdot)\in C(\mathbb{R})$, then the differential equation
\[
\frac{\partial f}{\partial t} = A f - r f, \qquad t>0,\ x\in\mathbb{R}, \tag{2.1}
\]
\[
f(0,x) = g(x), \qquad x\in\mathbb{R},
\]
has a solution known as the Feynman-Kac functional,
\[
f(t,x_0) = \mathbb{E}^{x_0}\Big[ e^{-\int_0^t r(X_s)\,ds}\, g(X_t) \Big].
\]
Here $A$ is the operator called the infinitesimal generator, defined by
\[
A f(x_0) = \lim_{t\downarrow 0}\frac{\mathbb{E}^{x_0}[f(X_t)] - f(x_0)}{t},
\]
where $D_A$ is the domain of $A$ and $f\in D_A$. For a brief introduction to the concept of infinitesimal generators the reader may refer to [9, p. 117], and for a more extensive treatment to [14, p. 16]. For an arbitrary function $f\in D_A$ there is no direct way to express the generator $A$ in the usual sense as a sum of derivative operators; it is only possible using results from distribution theory. A brief discussion of this approach may be found in [18, p. 18]; however, handling this case is not an easy task. That is why we analyze the heat equation with dissipation term given in the most general form by (2.1). For the special case $f\in C^2$ there exists a way to compute $A$, and it turns out that
\[
A f(x) = a(x)\,\partial_x f + \tfrac{1}{2}\,b^2(x)\,\partial_{xx} f,
\]
where $\partial_x$ and $\partial_{xx}$ denote the first and second derivatives with respect to $x$.

If one can show that the Feynman-Kac functional $f(t,x_0)$ is in $C^2$, then the differential equation (2.1) is equivalent to
\[
\frac{\partial f}{\partial t} = a(x)\,\partial_x f + \tfrac{1}{2}\,b^2(x)\,\partial_{xx} f - r(x)\,f. \tag{2.2}
\]
Unfortunately, as pointed out in [18, p. 119], smoothness of $f(t,x_0)$ is not something easy to check when $X_t$ is a general Itô diffusion. For the simple case of an Itô diffusion with constant coefficients $a$, $b$ the reader should refer to [16] to see that in that situation the Feynman-Kac functional and its discrete analog are $C^2$ functions and hence the heat equation has a solution of the form (2.2). Since we are working with a more general case than [16], we are going to solve the differential equation (2.1), where the generator $A$ is not specified.

We would like to continue the current section by presenting a discrete version of the heat equation (2.1). In what follows $\Delta t$ denotes the time step of the dyadic partition, $t_{k+1} = t_k + \Delta t$.

Lemma 2 (Discrete Feynman-Kac formula). Let $X^m_{t_k}$ be the time-homogeneous discrete Itô diffusion defined above, with Lipschitz continuous coefficients $a(\cdot)$, $b(\cdot)$, and let $r(\cdot), g(\cdot)\in C^2_0(\mathbb{R})$. Then the discrete Feynman-Kac functional
\[
f_m(t_k,x_0) = \mathbb{E}^{x_0}\Big[ e^{-\sum_{i=0}^{k} r(X^m_{t_i})\,\Delta t}\, g(X^m_{t_k}) \Big] \tag{2.3}
\]
is the unique solution of the following difference equation:
\[
\frac{f_m(t_{k+1},x_0) - f_m(t_k,x_0)}{\Delta t} = \frac{\mathbb{E}^{x_0}[f_m(t_k,X_{t_1})] - f_m(t_k,x_0)}{\Delta t} - \frac{e^{r(x_0)\Delta t}-1}{\Delta t}\, f_m(t_{k+1},x_0), \tag{2.4}
\]
\[
f_m(0,x_0) = g(x_0).
\]

Proof. (Existence) Consider the difference quotient from the definition of the generator of a discrete Itô diffusion and use the notation $Z(t_k) := e^{-\sum_{i=0}^{k} r(X^m_{t_i})\Delta t}$:
\[
\frac{\mathbb{E}^{x_0}[f_m(t_k,X_{t_1})] - f_m(t_k,x_0)}{\Delta t}
= \frac{1}{\Delta t}\Big(\mathbb{E}^{x_0}\big\{\mathbb{E}^{X_{t_1}}\big[Z(t_k)\,g(X^m_{t_k})\big]\big\} - \mathbb{E}^{x_0}\big[Z(t_k)\,g(X^m_{t_k})\big]\Big)
\]
\[
= \frac{1}{\Delta t}\left\{\mathbb{E}^{x_0}\Big[\mathbb{E}^{x_0}\big[g(X^m_{t_{k+1}})\,e^{-\sum_{i=0}^{k} r(X^m_{t_{i+1}})\Delta t}\,\big|\,\mathcal{F}_{t_1}\big]\Big] - \mathbb{E}^{x_0}\big[Z(t_k)\,g(X^m_{t_k})\big]\right\}
\]
\[
= \frac{1}{\Delta t}\,\mathbb{E}^{x_0}\Big[g(X^m_{t_{k+1}})\,Z(t_{k+1})\,e^{r(X^m_{t_0})\Delta t} - Z(t_k)\,g(X^m_{t_k})\Big]
\]
\[
= \frac{1}{\Delta t}\,\mathbb{E}^{x_0}\Big[g(X^m_{t_{k+1}})\,Z(t_{k+1}) - g(X^m_{t_k})\,Z(t_k)\Big]
+ \mathbb{E}^{x_0}\Big[g(X^m_{t_{k+1}})\,Z(t_{k+1})\Big]\,\frac{e^{r(x_0)\Delta t}-1}{\Delta t}.
\]
Rearranging the terms finishes the proof of the existence part.

(Uniqueness) Consider the following version of the difference equation (2.4):
\[
e^{r(x_0)\Delta t}\, f_m(t_{k+1},x_0) = \mathbb{E}^{x_0}\big[f_m(t_k,X_{t_1})\big], \tag{2.5}
\]
and suppose that there exists another solution $w(t_k,x_0)$ satisfying equation (2.5) with the initial condition $w(0,x_0)=g(x_0)$. We are going to prove the claim by induction on $k$.

For the base case take $k=0$ and use equation (2.5) to deduce that
\[
e^{r(x_0)\Delta t}\, w(t_1,x_0) = \mathbb{E}^{x_0}[w(0,X_{t_1})] = \mathbb{E}^{x_0}[g(X_{t_1})] = \mathbb{E}^{x_0}[f_m(0,X_{t_1})] = e^{r(x_0)\Delta t}\, f_m(t_1,x_0),
\]
hence $w(t_1,x_0) = f_m(t_1,x_0)$. Now assume that $w(t_k,x_0) = f_m(t_k,x_0)$ holds for $k$ and let us check whether it is true for $k+1$:
\[
e^{r(x_0)\Delta t}\, w(t_{k+1},x_0) = \mathbb{E}^{x_0}[w(t_k,X_{t_1})] = \mathbb{E}^{x_0}[f_m(t_k,X_{t_1})] = e^{r(x_0)\Delta t}\, f_m(t_{k+1},x_0).
\]
So by induction it follows that $w(t_k,x_0) = f_m(t_k,x_0)$ for all $k$.

Recall that our aim is to approximate the differential equation (2.1), written equivalently as
\[
A f(t,x_0) = \frac{\partial f}{\partial t}(t,x_0) + r(x_0)\,f(t,x_0). \tag{2.6}
\]
For that purpose let us first rewrite the difference equation (2.4) in the equivalent form
\[
\frac{\mathbb{E}^{x_0}[f_m(t_k,X_{t_1})] - f_m(t_k,x_0)}{\Delta t} = \frac{f_m(t_{k+1},x_0) - f_m(t_k,x_0)}{\Delta t} + \frac{e^{r(x_0)\Delta t}-1}{\Delta t}\, f_m(t_{k+1},x_0). \tag{2.7}
\]
We would like to take the limit of the above equation as $m\to\infty$. Our strategy is to establish convergence of the right-hand side of equation (2.7), proceeding term by term. Let us introduce the following notation:
\[
\psi_m(t_k,x) := e^{-\sum_{i=0}^{k} r(X^m_{t_i})\Delta t}\, g(X^m_{t_k}),
\qquad
\psi(t,x) := e^{-\int_0^t r(X_s)\,ds}\, g(X_t).
\]

Theorem 4. Assume that the functions $r(\cdot), g(\cdot):\mathbb{R}\to\mathbb{R}$ belong to $C^2_0(\mathbb{R})$, and suppose that $a(\cdot), b(\cdot):\mathbb{R}\to\mathbb{R}$ are Lipschitz continuous. Then as $m\to\infty$ we have uniform $L^2$-convergence on $[0,T]\times\mathbb{R}$:
\[
\sup_{t\in[0,T]}\big[f_m(t_k,x) - f(t,x)\big]^2 \to 0.
\]

Proof. By the fact that for any random variable $X$, $(\mathbb{E}[X])^2\le\mathbb{E}[X^2]$, we have
\[
\sup_{t\in[0,T]}\big[f_m(t_k,x_0) - f(t,x_0)\big]^2 = \sup_{t\in[0,T]}\big(\mathbb{E}^{x_0}[\psi_m(t_k,x_0) - \psi(t,x_0)]\big)^2 \le \sup_{t\in[0,T]}\mathbb{E}^{x_0}\big[(\psi_m(t_k,x_0) - \psi(t,x_0))^2\big].
\]
Hence it is enough to show convergence of the right-hand side of the above inequality. For that purpose we are going to use the following simple estimate:
\[
(e^{-b} d - e^{-c} a)^2 = e^{-2b}(d - e^{b-c} a)^2 = e^{-2b}(d - a + a - e^{b-c} a)^2 \le 2\,e^{-2b}\big((d-a)^2 + a^2(1-e^{b-c})^2\big). \tag{2.8}
\]
Then apply a first-order Taylor expansion of the function $e^{b-c}$ around $0$ to get $e^{b-c} = 1 + e^{s}(b-c)$, where $s\in[0, b-c]$. Substituting this expansion for $e^{b-c}$ into (2.8), one gets
\[
(e^{-b} d - e^{-c} a)^2 \le 2\,e^{-2b}\big((d-a)^2 + a^2 e^{2s}(b-c)^2\big).
\]
So using this estimate and the definitions of $\psi_m(t_k,x)$ and $\psi(t,x)$, one gets
\[
\sup_{t\in[0,T]}\mathbb{E}\big[(\psi_m(t_k,x) - \psi(t,x))^2\big]
\le 2\sup_{t\in[0,T]}\mathbb{E}\Big[e^{-2\sum_{i=0}^{k} r(X^m_{t_i})\Delta t}\Big\{\big(g(X^m_{t_k}) - g(X_t)\big)^2 + g^2(X_t)\,e^{2s}\Big(\sum_{i=0}^{k} r(X^m_{t_i})\Delta t - \int_0^t r(X_s)\,ds\Big)^2\Big\}\Big],
\]
where $s\in\big[0, \sum_{i=0}^{k} r(X^m_{t_i})\Delta t - \int_0^t r(X_s)\,ds\big]$. By assumption $r(\cdot)\in C^2_0$, hence $|r(\cdot)|\le R$, and then
\[
\sup_{t\in[0,T]}\mathbb{E}\big[(\psi_m(t_k,x) - \psi(t,x))^2\big] \le 2\,e^{2TR}\sup_{t\in[0,T]}\mathbb{E}\Big[\big(g(X^m_{t_k}) - g(X_t)\big)^2 + g^2(X_t)\,e^{2s}\Big(\sum_{i=0}^{k} r(X^m_{t_i})\Delta t - \int_0^t r(X_s)\,ds\Big)^2\Big]. \tag{2.9}
\]

17 So using this estimate and definition of ψ m (t k, x) and ψ(t, x) one can get: e k i= r(xm t ) i g(xt m k ) e r(xs) ds g(x t ) sup E ψ m (t k, x) ψ(t, x)] = sup E t,t ] t,t ] sup E t,t ] e k i= r(xm t i ) {(g(x m t k ) g(x t )) + +g (X t ) e s ( k i= r(x m t i ) ] ) r(x s ) ds, where s, k i= r(xm t i ) r(x s) ds]. By assumption r( ) C hence r( ) R, then sup E ψ m (t k, x) ψ(t, x)] e T R sup E (g(xt m k ) g(x t )) + t,t ] t,t ] +g (X t ) e s ( k i= r(x m t i ) r(x s ) ds ). (.9) The next step is to take limit of both sides of the above inequality and we are going to do it term by term. Let us start with the first term and apply first order Taylor series expansion of function g(x m t k ) around X t : g(x m t k ) g(x t ) = g(z) (X m t k X t ), where z X t, X m t k ]. Then sup Eg(Xt m k ) g(x t )] = sup Eg (z) (Xt m k X t ) ] G sup EXt m k X t ], t,t ] t,t ] t,t ] since by assumption g( ) C hence g( ) G for some positive constant G. Now take limit as m and use Theorem 3 to conclude that: lim sup m t,t ] Eg(X m t k ) g(x t )] =. For the second term notice that k i= r(xm t i ) = k i=1 k i i=1 t i 1 r(x s ) ds. Then k t k sup E r(xt m i ) r(x s ) ds] = sup E t,t ] i= t,t ] i=1 k i T i=1 i t i 1 r(x m t i ) ds and r(x s) ds = i t i 1 r(x m t i ) r(x s )] ds t i 1 r(x m t i ) r(x s )] ds, where we have used simple estimate for the square of the sum and Cauchy-Schwartz inequality. As in the case of function g( ), using first order Taylor series expansion of r(xt m k ) around X t and taking limit of the above inequality as m one can easily derive that k t sup E r(xt m i ) r(x s ) ds]. t,t ] i= To finish the proof notice that in the inequality (.9) as m e s = a.s. and in L and since by assumption g( ) C all the terms converge to. A straightforward corollary of the Theorem 4 is that er(x ) 1f m (t k+1, x ) converges ] 17

The next step is to show that the difference quotient $\frac{f_m(t_{k+1},x_0)-f_m(t_k,x_0)}{\Delta t}$ in equation (2.7) converges to the time derivative of the continuous Feynman-Kac functional as $m\to\infty$. In order to achieve that, however, we need to develop some theory. We would like to continue by proving a discrete version of the Itô formula. Despite the fact that this result will be used later on, it is very important on its own. The proof that we are about to present goes along the lines of the proof of Theorem 4.1.2 in [9, p. 44]; more precisely, we are going to consider the case of a discrete process $X_{t_k}$ given by
\[
\Delta X_{t_k} = a(t_{k-1},\omega)\,\Delta t + b(t_{k-1},\omega)\,\Delta B^m_{t_k},
\]
where the functions $a(t_{k-1},\cdot)$, $b(t_{k-1},\cdot)$ are simple processes adapted to the filtration $\mathcal{F}_{t_k}$ generated by $B^m_{t_k}$. We define a simple discrete process $Y$ in the standard way used in many textbooks.

Definition. We say that $Y$ is a simple process if there exists a sequence of times $t_0 < t_1 \le t_2 \le \dots$ increasing to $\infty$ a.s. and random variables $\xi_0, \xi_1, \dots$ such that $\xi_j$ is $\mathcal{F}_{t_j}$-measurable, $\mathbb{E}[\xi_j^2]<\infty$ for all $j$, and
\[
Y(t) = \xi_0\,\mathbb{1}_{\{0\}}(t) + \sum_{i=1}^{\infty}\xi_{i-1}\,\mathbb{1}_{(t_{i-1},t_i]}(t) \qquad (t\ge 0).
\]
The generalization of the next theorem to the case of square-integrable adapted processes $a(\cdot,\cdot)$, $b(\cdot,\cdot)$ follows from the corresponding lemma of [12].

Theorem 5 (Discrete Itô formula). Let $X_{t_k}$ be a discrete Itô process with coefficients $a(\cdot,\cdot)$, $b(\cdot,\cdot)$ being simple processes, and let the function $g\in C^2([0,\infty)\times\mathbb{R})$. Then $g(t_k, X_{t_k})$ is again a discrete Itô process, given by
\[
\Delta g(t_j, X_{t_j}) = \Big(\partial_t g(t_j,X_{t_j}) + \partial_x g(t_{j+1},X_{t_j})\,a(t_j,\omega) + \tfrac{1}{2}\,\partial_{xx} g(t_{j+1},X_{t_j})\,b^2(t_j,\omega)\Big)\Delta t + \partial_x g(t_{j+1},X_{t_j})\,b(t_j,\omega)\,\Delta B^m_{t_{j+1}} + o(\Delta t).
\]

Proof.
\[
\Delta g(t_j, X_{t_j}) = g(t_{j+1}, X_{t_{j+1}}) - g(t_j, X_{t_j}) = \big[g(t_{j+1}, X_{t_{j+1}}) - g(t_{j+1}, X_{t_j})\big] + \big[g(t_{j+1}, X_{t_j}) - g(t_j, X_{t_j})\big].
\]
Let us start by applying a first-order Taylor expansion in time of $g(t_{j+1}, X_{t_j})$ around $t_j$, using the $o(\cdot)$ form of the remainder:
\[
g(t_{j+1}, X_{t_j}) = g(t_j, X_{t_j}) + \partial_t g(t_j, X_{t_j})\,\Delta t + o(\Delta t),
\]
where $o(\Delta t) := \big[\partial_t g(s_j, X_{t_j}) - \partial_t g(t_j, X_{t_j})\big]\Delta t$ with $s_j\in[t_j, t_{j+1}]$; since by assumption $g\in C^2$, it follows that $\lim_{\Delta t\to 0} o(\Delta t)/\Delta t = 0$. Similarly, consider a second-order Taylor expansion of $g(t_{j+1}, X_{t_{j+1}})$ in space around $X_{t_j}$:
\[
g(t_{j+1}, X_{t_{j+1}}) = g(t_{j+1}, X_{t_j}) + \partial_x g(t_{j+1}, X_{t_j})\,\Delta X_{t_j} + \tfrac{1}{2}\,\partial_{xx} g(t_{j+1}, X_{t_j})\,(\Delta X_{t_j})^2 + o\big((\Delta X_{t_j})^2\big),
\]

where $o\big((\Delta X_{t_j})^2\big) := \tfrac{1}{2}\big[\partial_{xx} g(t_{j+1}, S_{t_j}) - \partial_{xx} g(t_{j+1}, X_{t_j})\big](\Delta X_{t_j})^2$ with $S_{t_j}\in[X_{t_j}, X_{t_{j+1}}]$. Using the definition of the process $\Delta X_{t_j}$:
\[
(\Delta X_{t_j})^2 = \big(a(t_j,\omega)\,\Delta t + b(t_j,\omega)\,\Delta B^m_{t_{j+1}}\big)^2
= a^2(t_j,\omega)\,(\Delta t)^2 + b^2(t_j,\omega)\,(\Delta B^m_{t_{j+1}})^2 + 2\,a(t_j,\omega)\,b(t_j,\omega)\,\Delta t\,\Delta B^m_{t_{j+1}}.
\]
It is clear that $o\big((\Delta X_{t_j})^2\big) = o(\Delta t)$, since $(\Delta B^m_{t_{j+1}})^2 = \Delta t$. Then $(\Delta X_{t_j})^2 = b^2(t_j,\omega)\,\Delta t + o(\Delta t)$. Putting everything together we have the claim:
\[
\Delta g(t_j, X_{t_j}) = \Big(\partial_t g(t_j,X_{t_j}) + \partial_x g(t_{j+1},X_{t_j})\,a(t_j,\omega) + \tfrac{1}{2}\,\partial_{xx} g(t_{j+1},X_{t_j})\,b^2(t_j,\omega)\Big)\Delta t + \partial_x g(t_{j+1},X_{t_j})\,b(t_j,\omega)\,\Delta B^m_{t_{j+1}} + o(\Delta t).
\]
As pointed out in Øksendal [9, p. 46], we may extend Theorem 5 to the case of $g(\cdot)\in C^2$ by approximating it with a sequence of functions $g_n(\cdot)\in C^\infty$. Use a telescopic sum to see the connection to the continuous Itô formula:
\[
g(t_k, X_{t_k}) = g(0,x_0) + \sum_{i=0}^{k-1}\Delta g(t_i, X_{t_i})
= g(0,x_0) + \sum_{i=0}^{k-1}\Big(\partial_t g(t_i,X_{t_i}) + \partial_x g(t_{i+1},X_{t_i})\,a(t_i,\omega) + \tfrac{1}{2}\,\partial_{xx} g(t_{i+1},X_{t_i})\,b^2(t_i,\omega)\Big)\Delta t
\]
\[
+ \sum_{i=0}^{k-1}\partial_x g(t_{i+1},X_{t_i})\,b(t_i,\omega)\,\Delta B^m_{t_{i+1}} + \sum_{i=0}^{k-1} o(\Delta t). \tag{2.10}
\]
Using techniques similar to Theorem 4, upon taking the limit of (2.10) as $\Delta t\to 0$, one obtains convergence to the continuous Itô formula for simple processes $a(\cdot,\cdot)$, $b(\cdot,\cdot)$.

In the sequel of the section we return to the time-homogeneous case of a discrete Itô diffusion with coefficients $a(X_{t_k})$ and $b(X_{t_k})$. We would like to apply the discrete Itô formula to the discrete Feynman-Kac functional, in order to represent it as the expectation of an Itô process. For that purpose recall the definition of the discrete Feynman-Kac functional:
\[
f_m(t_k,x) = \mathbb{E}^{x}\Big[ e^{-\sum_{i=0}^{k} r(X^m_{t_i})\,\Delta t}\, g(X^m_{t_k}) \Big].
\]
For the further analysis assume that $r(\cdot), g(\cdot)\in C^2_0(\mathbb{R})$ and that the functions $a(\cdot)$, $b(\cdot)$ are bounded and Lipschitz continuous. It is clear that $g(X^m_{t_k})$ is a discrete Itô process, since the function $g(\cdot)$ satisfies the requirements of Theorem 5.

It follows that
\[
\Delta g(X^m_{t_j}) = \Big(\partial_x g(X^m_{t_j})\,a(X^m_{t_j}) + \tfrac{1}{2}\,\partial_{xx} g(X^m_{t_j})\,b^2(X^m_{t_j})\Big)\Delta t + \partial_x g(X^m_{t_j})\,b(X^m_{t_j})\,\Delta B^m_{t_{j+1}} + o(\Delta t). \tag{2.11}
\]
Let us introduce the notation $Z(t_j) := e^{-\sum_{i=0}^{j} r(X^m_{t_i})(t_{i+1}-t_i)}$ and compute $\Delta Z(t_j)$:
\[
\Delta Z(t_j) = Z(t_{j+1}) - Z(t_j) = e^{-\sum_{i=0}^{j+1} r(X^m_{t_i})\Delta t} - e^{-\sum_{i=0}^{j} r(X^m_{t_i})\Delta t}
= e^{-\sum_{i=0}^{j} r(X^m_{t_i})\Delta t}\Big(e^{-r(X^m_{t_{j+1}})\Delta t} - 1\Big)
\]
\[
= e^{-\sum_{i=0}^{j} r(X^m_{t_i})\Delta t}\Big(-r(X^m_{t_{j+1}})\,\Delta t + o(\Delta t)\Big)
= -Z(t_j)\,r(X^m_{t_{j+1}})\,\Delta t + o(\Delta t), \tag{2.12}
\]
where $o(\Delta t) := \big[e^{-s}-1\big]\,r(X^m_{t_{j+1}})\,\Delta t$ with $s\in[0, r(X^m_{t_{j+1}})\Delta t]$. Notice that inside the Feynman-Kac functional we have the product $Z(t_k)\,g(X^m_{t_k})$, so it is advantageous to express it as a discrete Itô process as well:
\[
\Delta\big[Z(t_j)\,g(X^m_{t_j})\big] = Z(t_{j+1})\,g(X^m_{t_{j+1}}) - Z(t_j)\,g(X^m_{t_j})
= Z(t_{j+1})\big(g(X^m_{t_{j+1}}) - g(X^m_{t_j})\big) + g(X^m_{t_j})\big(Z(t_{j+1}) - Z(t_j)\big)
= Z(t_{j+1})\,\Delta g(X^m_{t_j}) + g(X^m_{t_j})\,\Delta Z(t_j). \tag{2.13}
\]
Now we are ready to present the difference quotient $\frac{f_m(t_{k+1},x_0)-f_m(t_k,x_0)}{\Delta t}$ in a useful form.

Lemma 3. Given that the functions $r(\cdot), g(\cdot)\in C^2_0$ and that $a(\cdot)$, $b(\cdot)$ are bounded and Lipschitz continuous,
\[
\frac{f_m(t_{k+1},x_0) - f_m(t_k,x_0)}{\Delta t} = \mathbb{E}^{x_0}\Big[-r(X^m_{t_{k+1}})\,Z(t_k)\,g(X^m_{t_k})
+ Z(t_{k+1})\Big\{\partial_x g(X^m_{t_k})\,a(X^m_{t_k}) + \tfrac{1}{2}\,\partial_{xx} g(X^m_{t_k})\,b^2(X^m_{t_k})\Big\}\Big] + \frac{o(\Delta t)}{\Delta t}. \tag{2.14}
\]
Proof. Using the expressions for $\Delta g(X^m_{t_j})$ and $\Delta Z(t_j)$ given by (2.11) and (2.12), and summing in equation (2.13), we can write the product $Z(t_k)\,g(X^m_{t_k})$ as
\[
Z(t_k)\,g(X^m_{t_k}) = g(x_0) + \sum_{i=0}^{k-1} g(X^m_{t_i})\,\Delta Z(t_i) + \sum_{i=0}^{k-1} Z(t_{i+1})\,\Delta g(X^m_{t_i})
\]
\[
= g(x_0) - \sum_{i=0}^{k-1} r(X^m_{t_{i+1}})\,Z(t_i)\,g(X^m_{t_i})\,\Delta t
+ \sum_{i=0}^{k-1} Z(t_{i+1})\Big\{\partial_x g(X^m_{t_i})\,a(X^m_{t_i}) + \tfrac{1}{2}\,\partial_{xx} g(X^m_{t_i})\,b^2(X^m_{t_i})\Big\}\Delta t
+ \sum_{i=0}^{k-1} Z(t_{i+1})\,\partial_x g(X^m_{t_i})\,b(X^m_{t_i})\,\Delta B^m_{t_{i+1}} + \sum_{i=0}^{k-1} o(\Delta t).
\]
Next take expectations of both sides; the expectation of the martingale-increment sum vanishes, and we get
\[
\mathbb{E}^{x_0}\big[Z(t_k)\,g(X^m_{t_k})\big] = g(x_0) + \mathbb{E}^{x_0}\sum_{i=0}^{k-1}\Big[-r(X^m_{t_{i+1}})\,Z(t_i)\,g(X^m_{t_i})
+ Z(t_{i+1})\Big\{\partial_x g(X^m_{t_i})\,a(X^m_{t_i}) + \tfrac{1}{2}\,\partial_{xx} g(X^m_{t_i})\,b^2(X^m_{t_i})\Big\}\Big]\Delta t + \sum_{i=0}^{k-1} o(\Delta t).
\]
Similarly one can compute the expression for $\mathbb{E}^{x_0}\big[Z(t_{k+1})\,g(X^m_{t_{k+1}})\big]$, and the claim follows easily after subtraction.

For the continuous case, existence of the time derivative of the Feynman-Kac functional can be shown by applying logic similar to the one in the proof of the corresponding lemma in [9, p. 118]. Moreover, an exact form of the expression can also be inferred (using notation from Øksendal's book):
\[
\frac{\partial}{\partial t} f(t,x_0) = \mathbb{E}^{x_0}\Big[-r(X_t)\,g(X_t)\,Z_t + Z_t\Big(\partial_x g(X_t)\,a(X_t) + \tfrac{1}{2}\,\partial_{xx} g(X_t)\,b^2(X_t)\Big)\Big]. \tag{2.15}
\]
In the next theorem we prove convergence of (2.14) to (2.15) as $m\to\infty$, which is the last step on the way to establishing convergence of the difference heat equation to the continuous case.

Theorem 6. Suppose that the functions $r(\cdot), g(\cdot)\in C^2_0$ and that $a(\cdot)$, $b(\cdot)$ are Lipschitz continuous. Then as $m\to\infty$
\[
\sup_{t\in[0,T]}\Big[\frac{f_m(t_{k+1},x_0)-f_m(t_k,x_0)}{\Delta t} - \frac{\partial}{\partial t}f(t,x_0)\Big]^2 \to 0.
\]
Proof. Use the fact that $(\mathbb{E}[X])^2\le\mathbb{E}[X^2]$ for any random variable $X$. Then
\[
\sup_{t\in[0,T]}\Big[\frac{f_m(t_{k+1},x_0)-f_m(t_k,x_0)}{\Delta t} - \frac{\partial}{\partial t}f(t,x_0)\Big]^2
\le \sup_{t\in[0,T]}\mathbb{E}\Big[ r(X_t)\,g(X_t)\,Z_t - r(X^m_{t_{k+1}})\,Z(t_k)\,g(X^m_{t_k})
\]
\[
+ Z(t_{k+1})\,\partial_x g(X^m_{t_k})\,a(X^m_{t_k}) - Z_t\,\partial_x g(X_t)\,a(X_t)
+ \tfrac{1}{2}\,Z(t_{k+1})\,\partial_{xx} g(X^m_{t_k})\,b^2(X^m_{t_k}) - \tfrac{1}{2}\,Z_t\,\partial_{xx} g(X_t)\,b^2(X_t) + \frac{o(\Delta t)}{\Delta t}\Big]^2.
\]
Using the simple estimate $(a+b+c+d)^2\le 4(a^2+b^2+c^2+d^2)$ we have
\[
\sup_{t\in[0,T]}\Big[\frac{f_m(t_{k+1},x_0)-f_m(t_k,x_0)}{\Delta t} - \frac{\partial}{\partial t}f(t,x_0)\Big]^2
\le 4\left( I_1 + I_2 + I_3 + \sup_{t\in[0,T]}\mathbb{E}\Big[\frac{o(\Delta t)}{\Delta t}\Big]^2 \right), \tag{2.16}
\]

where $I_1$, $I_2$, $I_3$ will be defined below. Let us start with $I_1$:
\[
I_1 := \sup_{t\in[0,T]}\mathbb{E}\big[r(X_t)\,g(X_t)\,Z_t - r(X^m_{t_{k+1}})\,Z(t_k)\,g(X^m_{t_k})\big]^2
\]
\[
= \sup_{t\in[0,T]}\mathbb{E}\big[r(X_t)\,g(X_t)\,Z_t - r(X_t)\,Z(t_k)\,g(X^m_{t_k}) + r(X_t)\,Z(t_k)\,g(X^m_{t_k}) - r(X^m_{t_{k+1}})\,Z(t_k)\,g(X^m_{t_k})\big]^2
\]
\[
\le 2\sup_{t\in[0,T]}\mathbb{E}\big[r^2(X_t)\big(g(X_t)Z_t - g(X^m_{t_k})Z(t_k)\big)^2\big]
+ 2\sup_{t\in[0,T]}\mathbb{E}\big[Z^2(t_k)\,g^2(X^m_{t_k})\big(r(X_t) - r(X^m_{t_{k+1}})\big)^2\big].
\]
Since by assumption $|r(\cdot)|\le R$ and $|g(\cdot)|\le G$ for some positive constants $R, G>0$,
\[
I_1 \le 2R^2\sup_{t\in[0,T]}\mathbb{E}\big[\big(g(X_t)Z_t - g(X^m_{t_k})Z(t_k)\big)^2\big]
+ 2G^2 e^{2TR}\sup_{t\in[0,T]}\mathbb{E}\big[\big(r(X_t) - r(X^m_{t_{k+1}})\big)^2\big].
\]
So, taking the limit as $m\to\infty$, by Theorem 4 (and the same Taylor-expansion argument applied to $r$) we have
\[
\lim_{m\to\infty} I_1 = 0.
\]
We use a similar technique for the second term:
\[
I_2 := \sup_{t\in[0,T]}\mathbb{E}\big[Z(t_{k+1})\,\partial_x g(X^m_{t_k})\,a(X^m_{t_k}) - Z_t\,\partial_x g(X_t)\,a(X_t)\big]^2
\]
\[
= \sup_{t\in[0,T]}\mathbb{E}\big[Z(t_{k+1})\,\partial_x g(X^m_{t_k})\,a(X^m_{t_k}) - Z(t_{k+1})\,\partial_x g(X^m_{t_k})\,a(X_t) + Z(t_{k+1})\,\partial_x g(X^m_{t_k})\,a(X_t) - Z_t\,\partial_x g(X_t)\,a(X_t)\big]^2
\]
\[
\le 2\sup_{t\in[0,T]}\mathbb{E}\big[Z^2(t_{k+1})\big(\partial_x g(X^m_{t_k})\big)^2\big(a(X^m_{t_k}) - a(X_t)\big)^2\big]
+ 2\sup_{t\in[0,T]}\mathbb{E}\big[a^2(X_t)\big(Z(t_{k+1})\,\partial_x g(X^m_{t_k}) - Z_t\,\partial_x g(X_t)\big)^2\big].
\]
Using the linear growth of $a(\cdot)$ and the assumption $|\partial_x g(\cdot)|\le G_1$ for some positive constant $G_1$, for $m$ large enough we get
\[
I_2 \le 2K^2 G_1^2 e^{2TR}\sup_{t\in[0,T]}\mathbb{E}\big[(X^m_{t_k} - X_t)^2\big]
+ 4C^2\sup_{t\in[0,T]}\mathbb{E}\big[(1 + X_t^2)\big(Z(t_{k+1})\,\partial_x g(X^m_{t_k}) - Z_t\,\partial_x g(X_t)\big)^2\big]
\]
\[
\le 2K^2 G_1^2 e^{2TR}\sup_{t\in[0,T]}\mathbb{E}\big[(X^m_{t_k} - X_t)^2\big]
+ 4(1+K_1^2)\,C^2\sup_{t\in[0,T]}\mathbb{E}\big[\big(Z(t_{k+1})\,\partial_x g(X^m_{t_k}) - Z_t\,\partial_x g(X_t)\big)^2\big],
\]
where we used the assumption that $\partial_x g(X_t)$ has compact support: $X_t$ is bounded by $K_1$ when it belongs to the support of $\partial_x g$, and otherwise $\partial_x g(X_t)$ and $\partial_x g(X^m_t)$ are equal to $0$ upon taking the limit as $m\to\infty$. Notice that by Theorem 3 the first term on the right-hand side goes to $0$ as $m\to\infty$, and for the second we apply a technique similar to the one in Theorem 4 to see the $L^2$-convergence of $Z(t_{k+1})\,\partial_x g(X^m_{t_k})$ to $Z_t\,\partial_x g(X_t)$; and so
\[
\lim_{m\to\infty} I_2 = 0.
\]

For the third term,
\[
I_3 := \tfrac{1}{4}\sup_{t\in[0,T]}\mathbb{E}\big[Z(t_{k+1})\,\partial_{xx} g(X^m_{t_k})\,b^2(X^m_{t_k}) - Z_t\,\partial_{xx} g(X_t)\,b^2(X_t)\big]^2,
\]
apply similar logic and conclude that $\lim_{m\to\infty} I_3 = 0$. Thus, upon taking the limit of inequality (2.16) as $m\to\infty$, the proof is completed.

We finish the current section by presenting a convergence result which proves that the continuous Feynman-Kac functional solves the continuous-time heat equation and can be approximated in $L^2$ by (2.3).

Theorem 7 (Convergence of the real-valued Schrödinger equation). Assume that the functions $r(\cdot), g(\cdot)\in C^2_0$ and that $a(\cdot)$, $b(\cdot)$ are bounded and Lipschitz continuous. Then as $m\to\infty$
\[
\sup_{t\in[0,T]}\Big[\frac{\mathbb{E}^{x_0}[f_m(t_k,X_{t_1})] - f_m(t_k,x_0)}{\Delta t} - A f(t,x_0)\Big]^2 \to 0.
\]
Proof. Use equations (2.6), (2.7) and apply Theorems 4 and 6.

2.2 Application to the Black-Scholes model

In the previous sections we were dealing with the so-called forward case; its name stems from the fact that the initial condition of the system was given at time $t=0$. However, in some applications of the Feynman-Kac formula (particularly in finance) we have a terminal condition instead, which gives the value of the functional at time $t=T$; that is why this situation is referred to in the literature as the backward case. To put it simply, in the first case we know the starting point of the system, whereas in the second case we know only the end state. Since we will be working with an option pricing model from financial mathematics, we will face the backward case. So it is necessary to define the backward version of the discrete Feynman-Kac functional:
\[
f^b_m(t_k,x) = \mathbb{E}\Big[ e^{-\sum_{i=k}^{N} r(X^m_{t_i})\,\Delta t}\, g(X^m_{t_N}) \,\Big|\, X^m_{t_k} = x \Big],
\]
where $N := T\,2^{2m}$, and let $Z^b(t_k) := e^{-\sum_{i=k}^{N} r(X^m_{t_i})\,\Delta t}$. To proceed further we need a version of Lemma 2 with a boundary condition.

Lemma 4. Given the time-homogeneous discrete Itô diffusion $X^m_{t_k}$ defined above, the backward discrete Feynman-Kac functional $f^b_m(t_k,x)$ is the unique solution of the following difference equation:
\[
\frac{f^b_m(t_{k+1},x) - f^b_m(t_k,x)}{\Delta t} = -\frac{\mathbb{E}\big[f^b_m(t_{k+1},X_{t_{k+1}})\,\big|\,X^m_{t_k}=x\big] - f^b_m(t_{k+1},x)}{\Delta t} + \frac{e^{r(x)\Delta t}-1}{\Delta t}\, f^b_m(t_k,x), \tag{2.17}
\]
\[
f^b_m(T,x) = g(x),
\]
where $t_{k+1} = t_k + \Delta t$.

Proof. (Existence)

Consider the conditional expectation appearing in (2.17) and use the $Z^b(t_k)$ notation:
\[
\frac{\mathbb{E}\big[f^b_m(t_{k+1},X_{t_{k+1}})\mid X^m_{t_k}=x\big] - f^b_m(t_{k+1},x)}{\Delta t}
= \frac{1}{\Delta t}\,\mathbb{E}\Big[\mathbb{E}\big[Z^b(t_{k+1})\,g(X^m_{t_N})\mid \mathcal{F}_{t_{k+1}}\big] \,\Big|\, X^m_{t_k}=x\Big] - \frac{f^b_m(t_{k+1},x)}{\Delta t}
\]
\[
= \frac{1}{\Delta t}\,\mathbb{E}\Big[e^{\,r(X^m_{t_k})\Delta t}\,Z^b(t_k)\,g(X^m_{t_N}) \,\Big|\, X^m_{t_k}=x\Big] - \frac{f^b_m(t_{k+1},x)}{\Delta t}
= \frac{e^{\,r(x)\Delta t}\,f^b_m(t_k,x) - f^b_m(t_{k+1},x)}{\Delta t}
\]
\[
= -\,\frac{f^b_m(t_{k+1},x) - f^b_m(t_k,x)}{\Delta t} + \frac{e^{\,r(x)\Delta t}-1}{\Delta t}\, f^b_m(t_k,x).
\]
Rearranging the terms finishes the proof of the existence part.

(Uniqueness) To prove the uniqueness part, consider the following version of the difference equation (2.17):
\[
e^{\,r(x)\Delta t}\, f^b_m(t_k,x) = \mathbb{E}\big[f^b_m(t_{k+1},X_{t_{k+1}})\mid X^m_{t_k}=x\big],
\]
and apply induction starting from the boundary condition at time $T$ and proceeding step by step backward in time.

Next, following lines similar to those of the previous section, one can show the $L^2$-convergence, as $m\to\infty$, of the two quantities
\[
f^b_m(t_k,x) \to f^b(t,x), \qquad \text{where } f^b(t,x) := \mathbb{E}\Big[e^{-\int_t^T r(X_s)\,ds}\,g(X_T)\,\Big|\,X_t=x\Big],
\]
\[
\frac{f^b_m(t_{k+1},x) - f^b_m(t_k,x)}{\Delta t} \to \frac{\partial f^b}{\partial t}(t,x);
\]
hence we conclude that the difference equation (2.17) is a discretization of the following differential equation:
\[
\frac{\partial f^b}{\partial t}(t,x) = -A f^b(t,x) + r(x)\,f^b(t,x),
\]
with the boundary condition $f^b(T,x) = g(x)$.

Now we provide the connection of the above equation to option pricing theory. For simplicity assume that there are two assets traded on the financial market: a risky one (equity) and a riskless one (bond). Let the function $r(\cdot)$ be constant; it represents the risk-free interest rate in the economy, to be precise the return of a zero-coupon default-free bond. Let $S_t$ be the price process of the risky asset, given by
\[
dS_t = a(S_t)\,dt + b(S_t)\,dW_t,
\]
where $a(\cdot)$, $b(\cdot)$ are functions as described in the previous sections and $W_t$ is a Brownian motion; we call $a(\cdot)$ and $b(\cdot)$ the drift and diffusion coefficients respectively. The dynamics of the risk-free bond, call it $\beta_t$, is entirely deterministic and follows the simple differential equation $d\beta_t = r\,\beta_t\,dt$, hence $\beta_t = e^{rt}$.

Our task is to find a fair price of the financial instrument known as an option issued on the equity, whose value at time $t$ depends solely on the price of the underlying asset $S_t$.

In the simplest case the only payment to the holder of the option occurs at the maturity date $T$; let us denote it by $g(S_T)$. Our task is to determine the price of the option at any time $t\le T$. In the spirit of the original paper by Fischer Black and Myron Scholes [2], we assume that the payoff of the option may be replicated by holding a suitable amount of stocks, $\alpha_1(t)$, and bonds, $\alpha_2(t)$; let us call such a portfolio $V(t)$:
\[
V(t) = \alpha_1(t)\,S_t + \alpha_2(t)\,\beta_t.
\]
Then the value of the option at time $t$, denoted by $f(t,S_t)$, should be equal to the value of the replicating portfolio $V(t)$; such a condition prevents an arbitrage opportunity. We also have to assume the self-financing condition for the portfolio $V(t)$, which means that until time $T$ no funds are added to or taken out of the portfolio; then the value of the portfolio satisfies
\[
dV(t) = \alpha_1(t)\,dS_t + \alpha_2(t)\,d\beta_t. \tag{2.18}
\]
The key step in the process is the so-called change of measure; the reader may find the comprehensive theory behind it in [9], and the legitimacy of the procedure is essentially guaranteed by the second Girsanov theorem [9, p. 157]. Assume that the Novikov condition is satisfied:
\[
\mathbb{E}\Big[e^{\frac{1}{2}\int_0^T\big(\frac{a(S_u)-r S_u}{b(S_u)}\big)^2 du}\Big] < \infty;
\]
then the new probability measure $Q$ is defined via the Radon-Nikodym derivative
\[
\frac{dQ}{dP} = e^{-\int_0^t \frac{a(S_u)-r S_u}{b(S_u)}\,dW_u - \frac{1}{2}\int_0^t\big(\frac{a(S_u)-r S_u}{b(S_u)}\big)^2 du}.
\]
Under the new probability measure, $\tilde W_t := \int_0^t \frac{a(S_u)-r S_u}{b(S_u)}\,du + W_t$ is a $Q$-Brownian motion and the price process $S_t$ has the stochastic representation
\[
dS_t = r\,S_t\,dt + b(S_t)\,d\tilde W_t. \tag{2.19}
\]
One can solve the SDE (2.19) by applying the general Itô lemma to the function $\ln(S_t)$ to get
\[
S_t = S_0\, e^{\int_0^t\big(r - \frac{1}{2}\frac{b^2(S_u)}{S_u^2}\big)du + \int_0^t\frac{b(S_u)}{S_u}\,d\tilde W_u},
\]
and hence the discounted price process has the following form:
\[
\frac{S_t}{\beta_t} = S_0\, e^{\int_0^t\frac{b(S_u)}{S_u}\,d\tilde W_u - \frac{1}{2}\int_0^t\frac{b^2(S_u)}{S_u^2}\,du}.
\]
Notice that if the function $b(\cdot)$ satisfies the Novikov condition
\[
\mathbb{E}_Q\Big[e^{\frac{1}{2}\int_0^T\frac{b^2(S_u)}{S_u^2}\,du}\Big] < \infty,
\]
then the discounted price process $S_t/\beta_t$ is a $Q$-martingale, and hence from equation (2.18) it follows that the discounted replicating portfolio $V(t)/\beta_t$ is also a martingale. Since there is no arbitrage, the discounted price of the option is a martingale, and we know that at maturity it pays $g(S_T)$, or in discounted form $e^{-r(T-t)}g(S_T)$; then by the martingale property it follows that
\[
f(t,S_t) = \mathbb{E}_Q\big[e^{-r(T-t)}\,g(S_T)\,\big|\,S_t = x\big]. \tag{2.20}
\]
Notice that (2.20) defines a backward Feynman-Kac functional and hence is a solution of the following differential equation:
\[
\frac{\partial f}{\partial t}(t,x) = -A f(t,x) + r(x)\,f(t,x),
\]

or, provided that the payoff function $g(S_T)\in C^2$, we may replace the generator $A$ by its exact expression:
\[
\frac{\partial f}{\partial t}(t,x) = -r\,x\,\partial_x f(t,x) - \tfrac{1}{2}\,b^2(x)\,\partial_{xx} f(t,x) + r\,f(t,x),
\]
which is known as the Black-Scholes equation.

It is important to know the exact number of equities and bonds in the replicating portfolio at any time $t\le T$ in order to guarantee that the price of the option given by equation (2.20) will be realized. So we may apply the generalized Itô lemma to $f(t,S_t)$, assuming that $f(t,S_t)\in C^{1,2}$, to get
\[
df(t,S_t) = \partial_t f(t,S_t)\,dt + \partial_s f(t,S_t)\,dS_t + \tfrac{1}{2}\,\partial_{ss} f(t,S_t)\,(dS_t)^2;
\]
substituting (2.19) for $dS_t$ we get
\[
df(t,S_t) = \Big(\partial_t f(t,S_t) + r\,S_t\,\partial_s f(t,S_t) + \tfrac{1}{2}\,b^2(S_t)\,\partial_{ss} f(t,S_t)\Big)dt + b(S_t)\,\partial_s f(t,S_t)\,d\tilde W_t.
\]
On the other hand, using equation (2.18) one can derive
\[
df(t,S_t) = \alpha_1(t)\,dS_t + \alpha_2(t)\,d\beta_t = \big(\alpha_1(t)\,r\,S_t + \alpha_2(t)\,r\,\beta_t\big)\,dt + \alpha_1(t)\,b(S_t)\,d\tilde W_t.
\]
In order for the two expressions for the dynamics of the option price to be equal we must have
\[
\alpha_1(t) = \partial_s f(t,S_t), \qquad
\alpha_2(t) = \frac{1}{r\,\beta_t}\Big(\partial_t f(t,S_t) + \tfrac{1}{2}\,b^2(S_t)\,\partial_{ss} f(t,S_t)\Big).
\]
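To connect formula (2.20) with the classical constant-volatility case, the following sketch (ours; all parameter values are illustrative) prices a European call by Monte Carlo under the risk-neutral dynamics (2.19) with $b(s)=\sigma s$, i.e. geometric Brownian motion, and compares the result with the closed-form Black-Scholes price.

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(5)

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call (case b(s) = sigma * s)."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, paths=400_000):
    """Risk-neutral Monte Carlo for (2.20): f(0, S0) = E_Q[ e^{-rT} g(S_T) ], g(s) = (s - K)^+."""
    Z = rng.standard_normal(paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)   # exact GBM sample
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
print("Monte Carlo:", mc_call(S0, K, r, sigma, T), " Black-Scholes:", bs_call(S0, K, r, sigma, T))
```

For a general diffusion coefficient $b(\cdot)$ no closed form is available, and (2.20) would instead be approximated by the discrete backward Feynman-Kac functional of Lemma 4.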

Conclusion

Using the twist & shrink construction of Brownian motion we have provided several discrete approximation schemes for a time-homogeneous Itô diffusion. Assuming that its coefficients satisfy the global Lipschitz continuity condition, we have established $L^2$-convergence to the continuous case. A discrete Itô diffusion with a deterministic partition of the time scale was then used to present a discretization of the Feynman-Kac functional. Assuming that the functions $r(\cdot)$ and $g(\cdot)$ are twice continuously differentiable with compact support, we have proved $L^2$-convergence of the discrete Feynman-Kac functional and of its time derivative to their continuous analogs. Consequently, the existence of a solution of the original heat equation followed. A separate treatment of the uniqueness part may be found in [9, p. 137]. In the last section we have given an alternative proof of the Black-Scholes option pricing formula for the case of a diffusion price process. A natural continuation of this work may be a theory similar to the discrete model of Cox, Ross and Rubinstein [5].

Bibliography

[1] Albeverio S. (1997) Wiener and Feynman path integrals and their applications. Proc. Symp. Appl. Math. 52.
[2] Black F., Scholes M. (1973) The pricing of options and corporate liabilities. J. Polit. Econ. 81, 637-654.
[3] Cameron R. (1960) A family of integrals serving to connect the Wiener and Feynman integrals. J. Math. Phys. 39.
[4] Cameron R., Storvick D. (1983) A simple definition of the Feynman integral, with applications. Mem. Amer. Math. Soc., Providence, R.I.
[5] Cox J., Ross S., Rubinstein M. (1979) Option pricing: A simplified approach. J. Financ. Econ. 7, 229-263.
[6] Csáki E. (1993) A discrete Feynman-Kac formula. J. Statist. Plann. Inference 34.
[7] Feynman R. (1948) Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys. 20, 367-387.
[8] Itô K. (1961) Wiener integral and Feynman integral. Proc. Fourth Berkeley Symp. on Math. Stat. Probab. II.
[9] Øksendal B. (2007) Stochastic Differential Equations. Sixth Edition, Springer, Berlin.
[10] Kac M. (1949) On distributions of certain Wiener functionals. Trans. Amer. Math. Soc. 65, 1-13.
[11] Kac M. (1951) On some connections between probability theory and differential and integral equations. Proc. Second Berkeley Symp. on Math. Stat. Prob., 189-215.
[12] Kloeden P., Platen E. (1999) Numerical Solution of Stochastic Differential Equations. Springer, Berlin.
[13] Knight F. (1962) On the random walk and Brownian motion. Trans. Amer. Math. Soc. 103, 218-228.
[14] Kuo H. (2006) Introduction to Stochastic Integration. Springer.
[15] Nelson E. (1964) Feynman integrals and the Schrödinger equation. J. Math. Phys. 5, 332-343.
[16] Nika Z., Szabados T. (2014) Strong approximation of Black-Scholes theory based on simple random walks.


More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Optimal Stopping Problems and American Options

Optimal Stopping Problems and American Options Optimal Stopping Problems and American Options Nadia Uys A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

5 December 2016 MAA136 Researcher presentation. Anatoliy Malyarenko. Topics for Bachelor and Master Theses. Anatoliy Malyarenko

5 December 2016 MAA136 Researcher presentation. Anatoliy Malyarenko. Topics for Bachelor and Master Theses. Anatoliy Malyarenko 5 December 216 MAA136 Researcher presentation 1 schemes The main problem of financial engineering: calculate E [f(x t (x))], where {X t (x): t T } is the solution of the system of X t (x) = x + Ṽ (X s

More information

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University

Lecture 4: Ito s Stochastic Calculus and SDE. Seung Yeal Ha Dept of Mathematical Sciences Seoul National University Lecture 4: Ito s Stochastic Calculus and SDE Seung Yeal Ha Dept of Mathematical Sciences Seoul National University 1 Preliminaries What is Calculus? Integral, Differentiation. Differentiation 2 Integral

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

Malliavin Calculus in Finance

Malliavin Calculus in Finance Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1)

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1) Question 1 The correct answers are: a 2 b 1 c 2 d 3 e 2 f 1 g 2 h 1 Question 2 a Any probability measure Q equivalent to P on F 2 can be described by Q[{x 1, x 2 }] := q x1 q x1,x 2, 1 where q x1, q x1,x

More information

Some Properties of NSFDEs

Some Properties of NSFDEs Chenggui Yuan (Swansea University) Some Properties of NSFDEs 1 / 41 Some Properties of NSFDEs Chenggui Yuan Swansea University Chenggui Yuan (Swansea University) Some Properties of NSFDEs 2 / 41 Outline

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009

A new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009 A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance

More information

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES BRIAN D. EWALD 1 Abstract. We consider the weak analogues of certain strong stochastic numerical schemes considered

More information

B8.3 Mathematical Models for Financial Derivatives. Hilary Term Solution Sheet 2

B8.3 Mathematical Models for Financial Derivatives. Hilary Term Solution Sheet 2 B8.3 Mathematical Models for Financial Derivatives Hilary Term 18 Solution Sheet In the following W t ) t denotes a standard Brownian motion and t > denotes time. A partition π of the interval, t is a

More information

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Itô s formula Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Itô s formula Probability Theory

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model.

The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market model. 1 2. Option pricing in a nite market model (February 14, 2012) 1 Introduction The main purpose of this chapter is to prove the rst and second fundamental theorem of asset pricing in a so called nite market

More information

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES

MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES MULTIDIMENSIONAL WICK-ITÔ FORMULA FOR GAUSSIAN PROCESSES D. NUALART Department of Mathematics, University of Kansas Lawrence, KS 6645, USA E-mail: nualart@math.ku.edu S. ORTIZ-LATORRE Departament de Probabilitat,

More information

Stochastic Differential Equations

Stochastic Differential Equations CHAPTER 1 Stochastic Differential Equations Consider a stochastic process X t satisfying dx t = bt, X t,w t dt + σt, X t,w t dw t. 1.1 Question. 1 Can we obtain the existence and uniqueness theorem for

More information

Squared Bessel Process with Delay

Squared Bessel Process with Delay Southern Illinois University Carbondale OpenSIUC Articles and Preprints Department of Mathematics 216 Squared Bessel Process with Delay Harry Randolph Hughes Southern Illinois University Carbondale, hrhughes@siu.edu

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

Introduction to Diffusion Processes.

Introduction to Diffusion Processes. Introduction to Diffusion Processes. Arka P. Ghosh Department of Statistics Iowa State University Ames, IA 511-121 apghosh@iastate.edu (515) 294-7851. February 1, 21 Abstract In this section we describe

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Ram Sharan Adhikari Assistant Professor Of Mathematics Rogers State University Mathematical

More information

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications The multidimensional Ito Integral and the multidimensional Ito Formula Eric Mu ller June 1, 215 Seminar on Stochastic Geometry and its applications page 2 Seminar on Stochastic Geometry and its applications

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012 Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of

More information

Trajectorial Martingales, Null Sets, Convergence and Integration

Trajectorial Martingales, Null Sets, Convergence and Integration Trajectorial Martingales, Null Sets, Convergence and Integration Sebastian Ferrando, Department of Mathematics, Ryerson University, Toronto, Canada Alfredo Gonzalez and Sebastian Ferrando Trajectorial

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

A numerical method for solving uncertain differential equations

A numerical method for solving uncertain differential equations Journal of Intelligent & Fuzzy Systems 25 (213 825 832 DOI:1.3233/IFS-12688 IOS Press 825 A numerical method for solving uncertain differential equations Kai Yao a and Xiaowei Chen b, a Department of Mathematical

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Backward martingale representation and endogenous completeness in finance

Backward martingale representation and endogenous completeness in finance Backward martingale representation and endogenous completeness in finance Dmitry Kramkov (with Silviu Predoiu) Carnegie Mellon University 1 / 19 Bibliography Robert M. Anderson and Roberto C. Raimondo.

More information

Stochastic Calculus for Finance II - some Solutions to Chapter VII

Stochastic Calculus for Finance II - some Solutions to Chapter VII Stochastic Calculus for Finance II - some Solutions to Chapter VII Matthias hul Last Update: June 9, 25 Exercise 7 Black-Scholes-Merton Equation for the up-and-out Call) i) We have ii) We first compute

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS

ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS PORTUGALIAE MATHEMATICA Vol. 55 Fasc. 4 1998 ON THE PATHWISE UNIQUENESS OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS C. Sonoc Abstract: A sufficient condition for uniqueness of solutions of ordinary

More information

Numerical methods for solving stochastic differential equations

Numerical methods for solving stochastic differential equations Mathematical Communications 4(1999), 251-256 251 Numerical methods for solving stochastic differential equations Rózsa Horváth Bokor Abstract. This paper provides an introduction to stochastic calculus

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

Citation Osaka Journal of Mathematics. 41(4)

Citation Osaka Journal of Mathematics. 41(4) TitleA non quasi-invariance of the Brown Authors Sadasue, Gaku Citation Osaka Journal of Mathematics. 414 Issue 4-1 Date Text Version publisher URL http://hdl.handle.net/1194/1174 DOI Rights Osaka University

More information

Stochastic differential equation models in biology Susanne Ditlevsen

Stochastic differential equation models in biology Susanne Ditlevsen Stochastic differential equation models in biology Susanne Ditlevsen Introduction This chapter is concerned with continuous time processes, which are often modeled as a system of ordinary differential

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Logarithmic scaling of planar random walk s local times

Logarithmic scaling of planar random walk s local times Logarithmic scaling of planar random walk s local times Péter Nándori * and Zeyu Shen ** * Department of Mathematics, University of Maryland ** Courant Institute, New York University October 9, 2015 Abstract

More information

Multiple points of the Brownian sheet in critical dimensions

Multiple points of the Brownian sheet in critical dimensions Multiple points of the Brownian sheet in critical dimensions Robert C. Dalang Ecole Polytechnique Fédérale de Lausanne Based on joint work with: Carl Mueller Multiple points of the Brownian sheet in critical

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

FE610 Stochastic Calculus for Financial Engineers. Stevens Institute of Technology

FE610 Stochastic Calculus for Financial Engineers. Stevens Institute of Technology FE610 Stochastic Calculus for Financial Engineers Lecture 3. Calculaus in Deterministic and Stochastic Environments Steve Yang Stevens Institute of Technology 01/31/2012 Outline 1 Modeling Random Behavior

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

An Uncertain Control Model with Application to. Production-Inventory System

An Uncertain Control Model with Application to. Production-Inventory System An Uncertain Control Model with Application to Production-Inventory System Kai Yao 1, Zhongfeng Qin 2 1 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China 2 School of Economics

More information

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Group 4: Bertan Yilmaz, Richard Oti-Aboagye and Di Liu May, 15 Chapter 1 Proving Dynkin s formula

More information

Stochastic Calculus. Kevin Sinclair. August 2, 2016

Stochastic Calculus. Kevin Sinclair. August 2, 2016 Stochastic Calculus Kevin Sinclair August, 16 1 Background Suppose we have a Brownian motion W. This is a process, and the value of W at a particular time T (which we write W T ) is a normally distributed

More information

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Ehsan Azmoodeh University of Vaasa Finland 7th General AMaMeF and Swissquote Conference September 7 1, 215 Outline

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information