Itô's Theory of Stochastic Integration


Up to this point, I have been recognizing but not confronting the challenge posed by integrals of the sort in (…). There are several reasons for my decision to postpone doing so until now, perhaps the most important of which is my belief that, in spite of, or maybe because of, its elegance, Itô's theory of stochastic integration tends to mask the essential simplicity and beauty of his ideas as we have been developing them heretofore. However, it is high time that I explain his theory of integration, and that is what I will be doing in this chapter. We will not deal with the theory in full generality, but will restrict our attention to the case when the paths are continuous. In terms of equations like (…), this means that we will not try to rationalize the dp(t) integral except in the case when p is Brownian motion (i.e., M = 0 and p = w). The general theory is beautiful and has been fully developed by the French school, particularly by C. Dellacherie and P.A. Meyer, who have published a detailed account of their findings in [5]. Because it already contains most of the essential ideas, we will devote this chapter to stochastic integration with respect to Brownian motion.

5.1 Brownian Stochastic Integrals

Let (Ω, F, P) be a complete probability space. Then (β(t), F_t, P) will be called an R^n-valued Brownian motion if {F_t : t ≥ 0} is a non-decreasing family of P-complete sub-σ-algebras of the σ-algebra F and β : [0, ∞) × Ω → R^n is a B_[0,∞) × F-measurable map with the properties that:¹

  (a) β(0) = 0 and t ↦ β(t, ω) is continuous for P-almost every ω;
  (b) ω ↦ β(t, ω) is F_t-measurable for each t ∈ [0, ∞);
  (c) for all s ∈ [0, ∞) and t ∈ (0, ∞), β(s + t) − β(s) is P-independent of F_s and has the distribution of a centered Gaussian with covariance tI_{R^n} under P.

¹ It should be noticed that our insistence on the completeness of all σ-algebras imposes no restriction. Indeed, if F or the F_t's are not complete but (a), (b), and (c) hold for some ω ↦ β(·, ω), then they will continue to hold after all σ-algebras have been completed.
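Properties (a) through (c) translate directly into a simulation recipe: sample independent centered Gaussian increments with covariance (Δt)I and accumulate them. A minimal sketch, with all names and grid parameters my own rather than anything from the text:

```python
import numpy as np

def brownian_motion(n_dim, T, n_steps, rng):
    """Sample an R^n-valued Brownian path on [0, T] at n_steps + 1 grid points.

    Property (a): the path starts at 0; property (c): increments over disjoint
    intervals are independent centered Gaussians with covariance (dt) I.
    """
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_dim))
    path = np.vstack([np.zeros((1, n_dim)), np.cumsum(increments, axis=0)])
    return path  # shape (n_steps + 1, n_dim)

rng = np.random.default_rng(0)
beta = brownian_motion(n_dim=2, T=1.0, n_steps=1000, rng=rng)
```

Of course this produces only the values of one path on a finite grid; the point of the definition above is precisely that the measure-theoretic structure (the filtration {F_t}) is part of the data, not something a simulation sees.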

Notice that the preceding definition is very close to saying that β(·, ω) is continuous for all ω and that the P-distribution of ω ∈ Ω ↦ β(·, ω) ∈ C([0, ∞); R^n) is given by Wiener measure, the measure which would have been denoted by P_(I,0,0) in §2.4 and by P starting in §(…). To be more precise, first observe that, without loss in generality, one can assume that β(·, ω) ∈ C([0, ∞); R^n) for all ω, in which case (a), (b), and (c) guarantee that P_(I,0,0) is the P-distribution of ω ↦ β(·, ω). Conversely, if Ω = C([0, ∞); R^n), P = P_(I,0,0), and F and F_t are, respectively, the P-completions of B and B_t, then one gets a Brownian motion by taking β(t, p) = p(t). Thus, the essential generalization afforded by the preceding definition is that the σ-algebras need not be inextricably tied to the random variables β(t). That is, F_t must contain, but need not be, the completion of σ({β(τ) : τ ∈ [0, t]}).

A Review of the Paley–Wiener Integral. As an aid to understanding Itô's theory, it may be helpful to recall the theory of stochastic integration which was introduced by Paley and Wiener. Namely, let (β(t), F_t, P) be a Brownian motion on the complete probability space (Ω, F, P), and assume β(·, ω) ∈ C([0, ∞); R^n) for all ω ∈ Ω. Given a Borel measurable function θ : [0, ∞) → R^n which has bounded variation on each finite interval, one can use Riemann–Stieltjes theory² to define

  I_θ(t, ω) = ∫_0^t ⟨θ(τ), dβ(τ, ω)⟩_{R^n},  t ∈ [0, ∞).

Because it is given by a Riemann–Stieltjes integral, we can say that

  I_θ(t, ω) = lim_{N→∞} Σ_{m=0}^∞ ⟨θ(m2^{−N}), Δ_{m,N}β(t, ω)⟩_{R^n},

where Δ_{m,N}β(t, ω) ≡ β(t ∧ (m+1)2^{−N}, ω) − β(t ∧ m2^{−N}, ω). By (c), we know that, for each N ∈ N,

  ω ↦ Σ_{m=0}^∞ ⟨θ(m2^{−N}), Δ_{m,N}β(t, ω)⟩_{R^n}

is a centered Gaussian with variance ∫_0^t |θ([τ]_N)|² dτ, and so it is an easy step to the conclusion that ω ↦ I_θ(t, ω) is a centered Gaussian with variance equal to ∫_0^t |θ(τ)|² dτ. In fact, with only a little more effort, one sees

² What is needed here is the fact (cf. Theorem (…) in [34]) that Riemann–Stieltjes theory is completely symmetric: ϕ is Riemann–Stieltjes integrable with respect to ψ if and only if ψ is with respect to ϕ. In fact, the integration by parts formula is what allows one to exchange the two.

that I_θ(s + t) − I_θ(s) is P-independent of σ({I_θ(σ) : σ ∈ [0, s]}) and that its P-distribution is that of a centered Gaussian with variance ∫_s^{s+t} |θ(τ)|² dτ. Moreover, I_θ(·, ω) ∈ C([0, ∞); R), and so we now know that (I_θ(t), F_t, P) is a continuous, square integrable martingale. In particular, by Doob's inequality (with ‖·‖_{[0,T]} denoting the uniform norm over [0, T]),

  ‖θ‖²_{L²([0,∞);R^n)} = lim_{t→∞} E^P[I_θ(t)²] ≤ E^P[‖I_θ‖²_{[0,∞)}] ≤ 4‖θ‖²_{L²([0,∞);R^n)}.

These relations can be used as the basis on which to extend the definition of θ ↦ I_θ to square integrable θ's which do not necessarily possess locally bounded variation. Indeed, the preceding says that, as a map taking θ ∈ L²([0,∞); R^n) with locally bounded variation into continuous square integrable martingales on (Ω, F_t, P), θ ↦ I_θ is continuous. Hence, because the smooth elements of L²([0,∞); R^n) are dense there, this map admits a unique continuous extension. To be precise, define M²(P; R) to be the space of all R-valued, square integrable, P-almost surely continuous P-martingales M relative to {F_t : t ≥ 0} such that

  ‖M‖_{M²(P;R)} ≡ sup_{t∈[0,∞)} E^P[M(t)²]^{1/2} < ∞.

Although it may not be apparent, M²(P; R) is actually a Hilbert space. In fact, it can be isometrically embedded as a closed subspace of L²(P; R). Namely, if M ∈ M²(P; R), then M is an L²-bounded martingale and therefore, by the L²-martingale convergence theorem (cf. Theorem (…) in [36]), there exists an M(∞) ∈ L²(P; R) to which {M(t) : t ≥ 0} converges both P-almost surely and in L²(P; R). In particular, this means that, for each t ≥ 0, M(t) = E^P[M(∞) | F_t] P-almost surely. Moreover, because (M(t)², F_t, P) is a submartingale, ‖M(t)‖_{L²(P;R)} ↗ ‖M‖_{M²(P;R)} = ‖M(∞)‖_{L²(P;R)}. Hence, the map M ∈ M²(P; R) ↦ M(∞) ∈ L²(P; R) is a linear isometry. Finally, to see that {M(∞) : M ∈ M²(P; R)} is closed in L²(P; R), suppose that {M_k}_{k=1}^∞ ⊆ M²(P; R) and that M_k(∞) → X in L²(P; R). By Doob's inequality,

  sup_{l>k} E^P[‖M_l − M_k‖²_{[0,∞)}] ≤ 4 sup_{l>k} ‖M_l(∞) − M_k(∞)‖²_{L²(P;R)} → 0

as k → ∞. Hence, there exists an F-measurable map ω ∈ Ω ↦ M(·, ω) ∈ C([0, ∞); R) such that lim_{k→∞} E^P[‖M − M_k‖²_{[0,∞)}] = 0. But clearly, for each t, M(t) = E^P[X | F_t] P-almost surely, and so not only is M ∈ M²(P; R) but also, since X is σ(⋃_t F_t)-measurable, X = M(∞). For future reference, we will collect these observations in a lemma.
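The Gaussian character of the Paley–Wiener integral just reviewed can be checked numerically: for a deterministic θ of locally bounded variation, the Riemann-sum approximations to I_θ(T) should be centered with variance ∫_0^T |θ(τ)|² dτ. A Monte Carlo sketch, with the particular θ and all sample sizes chosen by me for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths, T = 200, 10000, 1.0
dt = T / n_steps
t = np.arange(n_steps) * dt
theta = np.stack([np.cos(t), np.sin(t)], axis=1)      # deterministic, locally BV
target_var = float(np.sum(theta**2) * dt)             # discrete int_0^T |theta|^2 dtau = 1

# one Riemann-Stieltjes sum  sum_m <theta(t_m), beta(t_{m+1}) - beta(t_m)>  per path
dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
samples = np.einsum('mi,pmi->p', theta, dbeta)
```

With these choices |θ(τ)|² = cos²τ... no, = cos²(τ) + sin²(τ) = 1, so the target variance is exactly T = 1, and the empirical mean and variance of `samples` should agree with 0 and 1 up to Monte Carlo error.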

Lemma. The space M²(P; R) with the norm given above is a Hilbert space. Moreover, for each M ∈ M²(P; R), M(∞) ≡ lim_{t→∞} M(t) exists both P-almost surely and in L²(P; R), and the map M ∈ M²(P; R) ↦ M(∞) ∈ L²(P; R) is a linear isometry.

By combining this lemma with the remarks which precede it, we arrive at the following statement, which summarizes the Paley–Wiener theory of stochastic integration.

Theorem. There is a unique, linear isometry θ ∈ L²([0,∞); R^n) ↦ I_θ ∈ M²(P; R) with the property that I_θ(t, ω) is given by the Riemann–Stieltjes expression above when θ has locally bounded variation. In particular, for each T ≥ 0, I_θ(T) = I_{1_{[0,T]}θ}(∞) P-almost surely. Finally, for each θ ∈ L²([0,∞); R^n) and all s < t, I_θ(t) − I_θ(s) is P-independent of F_s and its P-distribution is that of a centered Gaussian with variance ∫_s^t |θ(τ)|² dτ.

Itô's Extension. Itô's extension of the preceding to θ's which may depend on ω as well as t is completely natural if one keeps in mind the reason for his wanting to make such an extension. Namely, he was trying to make sense out of integrals which appear in his method of constructing Markov processes. Thus, he wanted to find a notion of integration which would allow him to interpret

  lim_{N→∞} Σ_{m=0}^∞ σ(X_N(m2^{−N}, x)) Δ_{m,N} p(t)

as an integral. In particular, he had reason to suppose that it was best to make sure that the integrand is independent of the differential by which it is being multiplied. With this in mind, we say that a map F on [0, ∞) × Ω into a measurable space is progressively measurable if F restricted to [0, T] × Ω is B_{[0,T]} × F_T-measurable for each T ∈ [0, ∞).³ The following elementary facts about progressive measurability are proved in Lemma (…) of [36].

Lemma 5.1.6. Let PM denote the collection of all A ⊆ [0, ∞) × Ω such that 1_A is an R-valued, progressively measurable function. Then PM is a sub-σ-algebra of B_{[0,∞)} × F, and a function on [0, ∞) × Ω is progressively measurable if and only if it is measurable with respect to PM. Furthermore, if F :

³ So far as I know, the notion of progressive measurability is one of the many contributions which P.A. Meyer made to the subject of stochastic integration. In particular, Itô dealt with adapted functions: F's for which ω ↦ F(T, ω) is F_T-measurable for each T. Even though adaptedness is more intuitively appealing, there are compelling technical reasons for preferring progressive measurability. See Remark (…) in [36] for further comments on this matter.

maps [0, ∞) × Ω into E, where E is a metric space, with the properties that F(·, ω) is continuous for each ω and F(T, ·) is F_T-measurable for each T, then F is progressively measurable. In fact, if F_t is P-complete for each t and F(·, ω) is continuous for P-almost every ω, then F is progressively measurable if F(T, ·) is F_T-measurable for each T.

Now let Θ²(P; R^n) denote the space of all progressively measurable θ : [0, ∞) × Ω → R^n with the property that

  ‖θ‖_{Θ²(P;R^n)} ≡ E^P[∫_0^∞ |θ(t)|² dt]^{1/2} < ∞.

Since, by Lemma 5.1.6, an equivalent way to describe Θ²(P; R^n) is as the subspace of progressively measurable elements of L²(Leb_{[0,∞)} × P), we know that Θ²(P; R^n) is a Hilbert space. Our goal is to show (cf. the exercises below) that there is a unique linear isometry θ ∈ Θ²(P; R^n) ↦ I_θ ∈ M²(P; R) with the property that, when θ is an element of Θ²(P; R^n) such that θ(·, ω) has locally bounded variation for each ω,

  I_θ(t, ω) = ∫_0^t ⟨θ(τ, ω), dβ(τ, ω)⟩_{R^n} = ⟨θ(t, ω), β(t, ω)⟩_{R^n} − ∫_0^t ⟨β(τ, ω), dθ(τ, ω)⟩_{R^n},

where the integrals are taken in the sense of Riemann–Stieltjes.

Lemma. Let SΘ²(P; R^n)⁴ be the subspace of uniformly bounded θ ∈ Θ²(P; R^n) for which there exists an N ∈ N with the property that θ(t, ω) = θ([t]_N, ω) for all (t, ω) ∈ [0, ∞) × Ω. Then SΘ²(P; R^n) is dense in Θ²(P; R^n). Moreover, if θ ∈ SΘ²(P; R^n), I_θ is given as above, and

  A_θ(t, ω) ≡ ∫_0^t |θ(τ, ω)|² dτ,

then (E_θ(t), F_t, P) is a martingale, where

  E_θ(t, ω) ≡ exp(I_θ(t, ω) − ½ A_θ(t, ω)).

In particular, both (I_θ(t), F_t, P) and (I_θ(t)² − A_θ(t), F_t, P) are martingales.

⁴ The S here stands for simple.
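For a simple integrand, the defining sums use the value of θ at the left endpoint of each subinterval, multiplied by the forward increment of β. The martingale property (mean zero) and the isometry E^P[I_θ(T)²] = E^P[A_θ(T)] asserted in the lemma can then be seen by Monte Carlo; a sketch, with a hypothetical sign integrand of my own choosing (so that |θ| = 1 and A_θ(T) = T identically):

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_paths, T = 100, 20000, 1.0
dt = T / n_steps

dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
beta = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dbeta, axis=1)], axis=1)

# simple integrand: on [t_m, t_{m+1}) take theta = sign(beta(t_m)),
# which is F_{t_m}-measurable, hence independent of the increment it multiplies
theta = np.where(beta[:, :-1] >= 0.0, 1.0, -1.0)
I_T = np.sum(theta * dbeta, axis=1)        # I_theta(T): forward increments only
A_T = np.sum(theta**2, axis=1) * dt        # A_theta(T) = T here, since |theta| = 1
```

Had the integrand been evaluated at the right endpoint instead, the independence of integrand and increment would fail and neither the mean-zero nor the isometry check would survive; this is exactly the point of Itô's left-endpoint choice.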

Proof: To prove the density statement, first observe that, by a standard truncation procedure, it is easy to check that the uniformly bounded elements of Θ²(P; R^n) which are supported on [0, T] × Ω for some T ∈ [0, ∞) form a dense subspace of Θ²(P; R^n). Thus, suppose that θ ∈ Θ²(P; R^n) is uniformly bounded and vanishes off of [0, T] × Ω. Choose a ψ ∈ C_c^∞(R; [0, ∞)) so that ψ ≡ 0 on (−∞, 0] and ∫_R ψ(t) dt = 1, and set

  θ_k(t, ω) = k ∫ ψ(k(t − τ)) θ(τ, ω) dτ.

Then, for each k ∈ Z⁺ and ω ∈ Ω, θ_k(·, ω) is smooth and vanishes off of [0, T + 1]. In addition, because ψ is supported on the right half line, it follows from the preceding lemma that θ_k is progressively measurable. At the same time, for each ω, ‖θ_k(·, ω)‖_{L²([0,∞);R^n)} ≤ ‖θ(·, ω)‖_{L²([0,∞);R^n)} and ‖θ_k(·, ω) − θ(·, ω)‖_{L²([0,∞);R^n)} → 0 as k → ∞. Hence, by Lebesgue's Dominated Convergence Theorem, θ_k → θ in Θ²(P; R^n). In other words, we have now proved the density of the uniformly bounded elements of Θ²(P; R^n) which are supported on [0, T] × Ω for some T and are smooth as functions of t ∈ [0, ∞) for each ω ∈ Ω. But clearly, if θ is such an element of Θ²(P; R^n) and θ_N(t, ω) ≡ θ([t]_N, ω), then θ_N → θ in Θ²(P; R^n).

To prove the second assertion, let θ be a uniformly bounded element of Θ²(P; R^n) which satisfies θ(t, ω) = θ([t]_N, ω). Then, for m2^{−N} ≤ s < t ≤ (m+1)2^{−N},

  E^P[E_θ(t) | F_s] = E_θ(s) e^{−(t−s)|θ(m2^{−N})|²/2} E^P[exp⟨θ(m2^{−N}), β(t) − β(s)⟩_{R^n} | F_s] = E_θ(s),

since β(t) − β(s) is P-independent of F_s. Clearly, for general s < t, one gets the same conclusion by iterating the preceding result. Hence, we now know that (E_θ(t), F_t, P) is a martingale. To complete the proof from here, first observe that we can replace θ by λθ for any λ ∈ R. In particular, this means that

  E^P[e^{|I_θ(t)|}] ≤ e^{A²t/2} E^P[E_θ(t) + E_{−θ}(t)] = 2e^{A²t/2},

where A is the uniform upper bound on |θ(t, ω)|. At the same time, by Taylor's Theorem,

  |E_{λθ}(t) − 1 − λI_θ(t)| ≲ λ² e^{|I_θ(t)|} and |E_{λθ}(t) + E_{−λθ}(t) − 2 − λ²(I_θ(t)² − A_θ(t))| ≲ λ³ e^{|I_θ(t)|}

for 0 < λ ≤ 1. Hence, by Lebesgue's Dominated Convergence Theorem, we get the desired conclusion after letting λ ↘ 0.
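The exponential martingale E_θ in the proof above has mean one at every time, and for a simple integrand this holds even at the discrete level, since each factor exp(θΔβ − θ²Δt/2) has conditional mean one. A Monte Carlo sketch of this fact, with a bounded adapted integrand of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_paths, T = 200, 20000, 1.0
dt = T / n_steps

dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
beta = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dbeta, axis=1)], axis=1)

theta = np.tanh(beta[:, :-1])              # bounded and adapted: uses the past only
I_T = np.sum(theta * dbeta, axis=1)        # I_theta(T)
A_T = np.sum(theta**2, axis=1) * dt        # A_theta(T)
E_T = np.exp(I_T - 0.5 * A_T)              # E_theta(T) = exp(I - A/2)
```

Because θ here is uniformly bounded, the theorem below guarantees that E_θ is a true martingale rather than merely a supermartingale, so the empirical mean of `E_T` should be close to 1.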

Theorem. There is a unique linear, isometric map θ ∈ Θ²(P; R^n) ↦ I_θ ∈ M²(P; R) with the property that I_θ is given by the Riemann–Stieltjes expression above for each θ ∈ SΘ²(P; R^n). Moreover, given θ₁, θ₂ ∈ Θ²(P; R^n),

  (I_{θ₁}(t) I_{θ₂}(t) − ∫_0^t ⟨θ₁(τ), θ₂(τ)⟩_{R^n} dτ, F_t, P)

is a martingale. Finally, if θ ∈ Θ²(P; R^n) and E_θ(t) is defined as above, then (E_θ(t), F_t, P) is always a supermartingale and is a martingale if A_θ(T) is bounded for each T ∈ [0, ∞). (See the exercises for more refined information.)

Proof: The existence and uniqueness of θ ↦ I_θ is immediate from the preceding lemma. Indeed, from that lemma, we know that this map is linear and isometric on SΘ²(P; R^n) and that SΘ²(P; R^n) is dense in Θ²(P; R^n). Furthermore, the lemma says that (I_θ(t)² − A_θ(t), F_t, P) is a martingale when θ ∈ SΘ²(P; R^n), and so the general case follows from the fact that I_{θ_k}(t)² − A_{θ_k}(t) → I_θ(t)² − A_θ(t) in L¹(P; R) when θ_k → θ in Θ²(P; R^n). Knowing that (I_θ(t)² − A_θ(t), F_t, P) is a martingale for each θ ∈ Θ²(P; R^n), we get the second assertion by polarization. That is, one uses the identity

  I_{θ₁}(t) I_{θ₂}(t) − ∫_0^t ⟨θ₁(τ), θ₂(τ)⟩_{R^n} dτ = ¼[(I_{θ₁+θ₂}(t)² − A_{θ₁+θ₂}(t)) − (I_{θ₁−θ₂}(t)² − A_{θ₁−θ₂}(t))].

Finally, to prove the last assertion, choose {θ_k}₁^∞ ⊆ SΘ²(P; R^n) so that θ_k → θ in Θ²(P; R^n). Because (E_{θ_k}(t), F_t, P) is a martingale for each k and E_{θ_k}(t) → E_θ(t) in P-probability for each t, Fatou's Lemma implies that (E_θ(t), F_t, P) is a supermartingale. Next suppose that θ is uniformly bounded by a constant Λ < ∞, and choose the θ_k's so that they are all uniformly bounded by Λ as well. Then, for each t ∈ [0, T],

  E^P[E_{θ_k}(t)²] ≤ e^{Λ²t} E^P[E_{2θ_k}(t)] = e^{Λ²t},

and so E_{θ_k}(t) → E_θ(t) in L¹(P; R) for each t ∈ [0, T]. Hence, we now know that (E_θ(t), F_t, P) is a martingale when θ is bounded. Finally, if A_θ(t, ω) ≤ Λ(t) < ∞ for each t ∈ [0, ∞) and θ_m(t, ω) ≡ 1_{[0,m]}(|θ(t, ω)|) θ(t, ω), then E_{θ_m}(t) → E_θ(t) in P-probability and E^P[E_{θ_m}(t)²] ≤ e^{Λ(t)} for each t, which again is sufficient to show that (E_θ(t), F_t, P) is a martingale.

Remark. With the preceding theorem, we have completed the basic construction in Itô's theory of Brownian stochastic integration, and, as time
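The polarized identity in the proof says, in particular, that E^P[I_{θ₁}(t) I_{θ₂}(t)] = E^P[∫_0^t ⟨θ₁(τ), θ₂(τ)⟩ dτ]. For deterministic integrands this can be checked directly by Monte Carlo; a sketch, with a pair of integrands I chose so that the discrete cross-term vanishes exactly:

```python
import numpy as np

rng = np.random.default_rng(9)
n_steps, n_paths, T = 100, 40000, 1.0
dt = T / n_steps
t = np.arange(n_steps) * dt
th1 = np.cos(2 * np.pi * t)                 # one full period on [0, 1)
th2 = np.ones(n_steps)
target = float(np.sum(th1 * th2) * dt)      # discrete int <theta1, theta2> dtau (= 0 here)

dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
I1 = dbeta @ th1                            # I_{theta1}(T), one value per path
I2 = dbeta @ th2                            # I_{theta2}(T)
cross = float(np.mean(I1 * I2))
```

Here `target` is 0 because the cosine sums to zero over the grid, so the two stochastic integrals are uncorrelated (in fact, being jointly Gaussian, independent), even though they are built from the same Brownian path.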

goes on, we will increasingly often replace the notation I_θ(t) by the more conventional notation

  ∫_0^t ⟨θ(τ), dβ(τ)⟩_{R^n} = I_θ(t).

Because it recognizes that Itô's theory is very like a classical integration theory, this is good notation. On the other hand, it can be misleading. Indeed, one has to keep in mind that, in reality, Itô's integral is, like the Fourier transform on R, defined only up to a set of measure 0 and via an L²-completion procedure. In addition, for the cautionary reasons discussed in §(…) and §3.3.3, it is a serious mistake to put too much credence in the notion that an Itô integral behaves like a Riemann–Stieltjes integral.

Stopping Stochastic Integrals and a Further Extension. The notation ∫_0^t ⟨θ(τ), dβ(τ)⟩_{R^n} for I_θ(t) should make one wonder to what extent it is true that, for ζ₁ ≤ ζ₂,

  ∫_{ζ₁}^{ζ₂∧t} ⟨θ(τ), dβ(τ)⟩_{R^n} ≡ I_θ(t ∧ ζ₂) − I_θ(t ∧ ζ₁) = ∫_0^t 1_{[ζ₁,ζ₂)}(τ) ⟨θ(τ), dβ(τ)⟩_{R^n}.

Of course, in order for the right hand side of the preceding to even make sense, it is necessary that 1_{[ζ₁,ζ₂)}θ be progressively measurable, which is more or less equivalent to insisting that ζ₁ and ζ₂ be stopping times.

Lemma. Given θ ∈ Θ²(P; R^n) and stopping times ζ₁ ≤ ζ₂, the preceding identity holds P-almost surely. In fact, if α is a bounded, F_{ζ₁}-measurable function and (α1_{[ζ₁,ζ₂)}θ)(t, ω) equals α(ω)θ(t, ω) or 0 depending on whether t is or is not in [ζ₁(ω), ζ₂(ω)), then α1_{[ζ₁,ζ₂)}θ ∈ Θ²(P; R^n) and

  α(I_θ(t ∧ ζ₂) − I_θ(t ∧ ζ₁)) = ∫_0^t ⟨(α1_{[ζ₁,ζ₂)}θ)(τ), dβ(τ)⟩_{R^n}.

Proof: Clearly, to check that α1_{[ζ₁,ζ₂)}θ is in Θ²(P; R^n), it is enough to check that it is progressively measurable, and this is an elementary exercise. Next, set Δ(t) ≡ I_θ(t ∧ ζ₂) − I_θ(t ∧ ζ₁) and Ĩ(t) ≡ ∫_0^t ⟨(α1_{[ζ₁,ζ₂)}θ)(τ), dβ(τ)⟩_{R^n}.

Then, by Doob's Stopping Time Theorem and the preceding theorem,

  E^P[(αΔ(t) − Ĩ(t))²] = E^P[α²Δ(t)²] − 2E^P[αΔ(t)Ĩ(t)] + E^P[Ĩ(t)²]
    = E^P[α² ∫_{t∧ζ₁}^{t∧ζ₂} |θ(τ)|² dτ] − 2E^P[α² ∫_{t∧ζ₁}^{t∧ζ₂} |θ(τ)|² dτ] + E^P[α² ∫_{t∧ζ₁}^{t∧ζ₂} |θ(τ)|² dτ] = 0.

The preceding makes it possible to introduce the following extension of Itô's theory. Namely, define Θ²_loc(P; R^n) to be the space of progressively measurable θ : [0, ∞) × Ω → R^n with the property that, P-almost surely,

  A_θ(T) ≡ ∫_0^T |θ(t)|² dt < ∞ for all T ∈ [0, ∞).

At the same time, define M_loc(P; R) to be the space of continuous local martingales. That is, M ∈ M_loc(P; R) if M : [0, ∞) × Ω → R is a progressively measurable function, M(·, ω) is continuous for P-almost every ω, and there exists a sequence {ζ_k}₁^∞ of stopping times such that ζ_k ↗ ∞ P-almost surely and, for each k ∈ Z⁺, (M(t ∧ ζ_k), F_t, P) is a martingale.

Theorem. There is a unique linear map θ ∈ Θ²_loc(P; R^n) ↦ I_θ ∈ M_loc(P; R) with the property that, for any θ ∈ Θ²_loc(P; R^n) and stopping time ζ,

  E^P[∫_0^ζ |θ(τ)|² dτ] < ∞ ⟹ I_θ(t ∧ ζ) = ∫_0^t 1_{[0,ζ)}(τ) ⟨θ(τ), dβ(τ)⟩_{R^n}.

Because it is completely consistent to do so, we will continue to use the notation ∫_0^t ⟨θ(τ), dβ(τ)⟩_{R^n} to denote I_θ(t), even when θ ∈ Θ²_loc(P; R^n).

Another direction in which it is useful to extend Itô's theory is to matrix-valued integrands. Namely, we have the following, which is a more or less trivial corollary of the preceding theorem.

Corollary. Let (β(t), F_t, P) be an R^n-valued Brownian motion, and suppose that σ : [0, ∞) × Ω → Hom(R^n; R^n) is a progressively measurable function with the property that ∫_0^T ‖σ(t)‖²_{H.S.} dt < ∞ P-almost surely for all T ∈ (0, ∞). Then there is a P-almost surely unique, R^n-valued, progressively measurable function t ↦ ∫_0^t σ(τ) dβ(τ) with the property that, P-almost surely,

  ⟨ξ, ∫_0^t σ(τ) dβ(τ)⟩_{R^n} = ∫_0^t ⟨σ(τ)^⊤ξ, dβ(τ)⟩_{R^n} for all t ∈ [0, ∞) and ξ ∈ R^n.
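At the level of the discrete forward-point sums, the stopping identity of the lemma is exact, not merely almost sure in a limit: truncating the sum at a stopping index gives the same number as summing the killed integrand 1_{[0,ζ)}θ over the whole interval. A single-path sketch, with a hypothetical exit time from (−1, 1):

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, T = 1000, 1.0
dt = T / n_steps
dbeta = rng.normal(0.0, np.sqrt(dt), n_steps)
beta = np.concatenate([[0.0], np.cumsum(dbeta)])
theta = np.cos(beta[:-1])                       # adapted integrand (left endpoints)

# discrete stopping index: first grid time the path leaves (-1, 1), else n_steps
hits = np.flatnonzero(np.abs(beta[:-1]) >= 1.0)
stop = int(hits[0]) if hits.size else n_steps

# left side: run the Ito sum only up to the stopping index
lhs = np.sum(theta[:stop] * dbeta[:stop])
# right side: integrate the killed integrand 1_{[0,zeta)} theta over everything
killed = np.where(np.arange(n_steps) < stop, theta, 0.0)
rhs = np.sum(killed * dbeta)
```

The two sums agree term by term, which is the discrete shadow of the fact that stopping commutes with Itô integration.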

In particular, if ζ is a stopping time and E^P[∫_0^ζ ‖σ(τ)‖²_{H.S.} dτ] < ∞, then

  (∫_0^{t∧ζ} σ(τ) dβ(τ), F_t, P)

is an R^n-valued martingale, and

  (|∫_0^{t∧ζ} σ(τ) dβ(τ)|² − ∫_0^{t∧ζ} ‖σ(τ)‖²_{H.S.} dτ, F_t, P)

is an R-valued martingale.

Exercises.

Exercise. We claimed, but did not prove, that Itô's integration theory coincides with Riemann–Stieltjes's when the integrand has locally bounded variation. To be more precise, let θ ∈ Θ²_loc(P; R^n) and assume that θ(·, ω) has locally bounded variation for each ω ∈ Ω. Show that one version of the random variable ω ↦ I_θ(·, ω) is given by the indefinite Riemann–Stieltjes integral of θ(·, ω) with respect to β(·, ω).

Hint: First show that there is no loss in generality to assume that θ is uniformly bounded and that, for each ω, θ(·, ω) is right continuous at 0 and left continuous on (0, ∞). Second, because θ(·, ω) is Riemann–Stieltjes integrable with respect to β(·, ω) on each compact interval, verify that the Riemann–Stieltjes integral of θ(·, ω) on [0, t] with respect to β(·, ω) can be computed as the limit of the Riemann sums

  Σ_{m=0}^∞ ⟨θ(m2^{−N}, ω), Δ_{m,N}β(t, ω)⟩_{R^n}.

Finally, use the boundedness of θ and the left continuity of θ(·, ω) to see that

  lim_{N→∞} E^P[∫_0^t |θ(τ) − θ([τ]_N)|² dτ] = 0.

Exercise. One of the most important applications of the Paley–Wiener integral was made by Cameron and Martin. To explain their application, use, as in §3.1.1, P to denote the measure on C([0, ∞); R^n) corresponding to the Lévy system (I, 0, 0). That is, P is the standard

Wiener measure on C([0, ∞); R^n), and so (p(t), B_t, P) is a Brownian motion. Next, given η ∈ L²([0, ∞); R^n), set h(t) = ∫_0^t η(τ) dτ, and let τ_h : C([0, ∞); R^n) → C([0, ∞); R^n) denote the translation map given by τ_h p = p + h. Then the theorem of Cameron and Martin states that the measure τ_h P is equivalent to P and that its Radon–Nikodym derivative R_h is given by the Cameron–Martin formula

  R_h = exp(I_η − ½ ‖η‖²_{L²([0,∞);R^n)}).

Prove their theorem.

Hint: There are several ways in which to prove their theorem. Perhaps the one best suited to the development here is to first prove that P can be characterized as the unique probability measure on C([0, ∞); R^n) with the property that

  E^P[e^{I_θ}] = exp(½ ‖θ‖²_{L²([0,∞);R^n)})

for all piecewise constant, compactly supported θ : [0, ∞) → R^n. In this expression, I_θ(p) denotes the Riemann–Stieltjes integral of θ with respect to p, taken over the support of θ. Knowing this, it is easy to check that R_h^{−1} d(τ_h P) = dP for compactly supported, piecewise constant η's. To complete the proof, one need only construct a sequence {η_k}₁^∞ of compactly supported, piecewise constant functions so that η_k → η in L²([0, ∞); R^n). Since the corresponding functions h_k tend uniformly on compacts to h while the corresponding R_{h_k}'s tend in L¹(P; R) to R_h, the general case follows.

Exercise. Because we are dealing with processes which are almost surely continuous, most of the subtlety in the notion of a local martingale is absent. Indeed, given any progressively measurable M : [0, ∞) × Ω → R with the property that M(·, ω) is continuous for P-almost every ω, show that (M(t), F_t, P) is a local martingale if and only if, for each k ∈ Z⁺, (M(t ∧ ζ_k), F_t, P) is a martingale, where ζ_k(ω) ≡ inf{t ≥ 0 : |M(t, ω)| ≥ k}. In this connection, show that a local martingale (M(t), F_t, P) is a martingale if ‖M‖_{[0,T]} ∈ L¹(P) for each T ∈ [0, ∞). In particular, use this to see that an Itô stochastic integral I_θ(t) is a P-square integrable martingale if and only if

  E^P[∫_0^T |θ(t)|² dt] < ∞ for all T ∈ [0, ∞).

In a slightly different direction, show that a local martingale (M(t), F_t, P) is a supermartingale if it is uniformly bounded from below.
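The Cameron–Martin formula in the exercise above can be tested by Monte Carlo at a single time: for η ≡ 1 on [0, 1], translation by h(t) = t shifts p(1) by 1, and the density reduces to exp(p(1) − ½). A sketch, with the bounded test function and sample size chosen by me:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200000
B1 = rng.normal(0.0, 1.0, n)            # p(1) under Wiener measure
eta = 1.0                               # eta constant on [0, 1], so h(1) = 1
R_h = np.exp(eta * B1 - 0.5 * eta**2)   # Cameron-Martin density (here I_eta = p(1))

f = np.tanh                             # any bounded measurable test function
lhs = f(B1 + 1.0).mean()                # E[f(p(1) + h(1))], i.e. f under tau_h P
rhs = (R_h * f(B1)).mean()              # E[R_h f(p(1))] under P
```

Agreement of `lhs` and `rhs` for every bounded f is exactly the statement d(τ_h P) = R_h dP, restricted to functionals of p(1); the full theorem upgrades this to functionals of the whole path.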

Exercise. Assume that σ : [0, ∞) × Ω → Hom(R^n; R^n) is a progressively measurable function for which the integrability condition above holds, set X(t) = ∫_0^t σ(τ) dβ(τ) and A(t) = ∫_0^t σ(τ)σ(τ)^⊤ dτ, and assume that, for some T ∈ (0, ∞) and Λ(T) ∈ (0, ∞),

  (ξ, A(T)ξ)_{R^n} ≤ Λ(T)|ξ|², ξ ∈ R^n.

Show that, for each ɛ ∈ (0, 1),

  E^P[exp(ɛ‖X‖²_{[0,T]} / 2Λ(T))] ≤ e (1 − ɛ)^{−n/2}.

In particular, conclude that

  P(‖X‖_{[0,T]} ≥ R) ≤ e (1 − ɛ)^{−n/2} e^{−ɛR²/2Λ(T)} for all R > 0,

and E^P[exp(‖X‖²_{[0,T]} / 2nΛ(T))] ≤ e^{3/2} for all n ≥ 1.

Hint: Given ξ ∈ R^n, set X_ξ(t) = (ξ, X(t))_{R^n} and E(t, ξ) = exp(X_ξ(t) − ½(ξ, A(t)ξ)_{R^n}), and show, using the last part of the theorem above and Doob's inequality, that, for each q ∈ (1, ∞),

  E^P[sup_{t∈[0,T]} e^{q(ξ,X(t))_{R^n}}] ≤ e^{qΛ(T)|ξ|²/2} E^P[‖E(·, ξ)‖^q_{[0,T]}]
    ≤ (q/(q−1))^q e^{qΛ(T)|ξ|²/2} E^P[E(T, ξ)^q] ≤ (q/(q−1))^q e^{q²Λ(T)|ξ|²/2} E^P[E(T, qξ)] = (q/(q−1))^q e^{q²Λ(T)|ξ|²/2}.

Next, multiply the preceding through by (2πτ)^{−n/2} e^{−|ξ|²/2τ}, where τ = ɛ/(q²Λ(T)), and integrate with respect to ξ over R^n. Finally, let q → ∞.

Exercise. In this exercise we will develop some of the intriguing relations between Itô stochastic integrals and Hermite polynomials. These relations reflect the residual Gaussian characteristics which stochastic integrals inherit from their driving Brownian forebearers. Deeper examples of these relations will be investigated in Exercise (…).
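The polynomials H_m(x, a) developed in parts (i) through (iii) below are determined by a generating function, and differentiating that generating function in λ yields the three-term recursion H_m = xH_{m−1} − a(m−1)H_{m−2}. A sketch that generates them this way and checks them against the generating function (function names are my own):

```python
import math

def hermite(m, x, a):
    """H_m(x, a) from exp(lam*x - lam^2*a/2) = sum_m lam^m/m! H_m(x, a).

    Differentiating the generating function in lam gives the recursion
    H_m = x H_{m-1} - a (m - 1) H_{m-2}, with H_0 = 1 and H_1 = x.
    """
    h_prev, h = 1.0, x
    if m == 0:
        return h_prev
    for k in range(2, m + 1):
        h_prev, h = h, x * h - a * (k - 1) * h_prev
    return h

x, a, lam = 0.7, 1.3, 0.4
series = sum(lam**m / math.factorial(m) * hermite(m, x, a) for m in range(30))
closed = math.exp(lam * x - lam**2 * a / 2)
```

Note that a = 0 makes the recursion collapse to H_m(x, 0) = x^m, and a = 1 recovers the classical (probabilists') Hermite polynomials, e.g. H_2(x, 1) = x² − 1, in agreement with part (i).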

(i) Given a ≥ 0, define H_m(x, a) for x ∈ R so that the identity

  e^{λx − λ²a/2} = Σ_{m=0}^∞ (λ^m/m!) H_m(x, a), λ ∈ R,

holds. Show that H_m(x, 0) = x^m and, for a > 0, H_m(x, a) = a^{m/2} H_m(a^{−1/2}x), where H_m(x) ≡ H_m(x, 1). In addition, show that

  H_m(x) = (−1)^m e^{x²/2} ∂_x^m e^{−x²/2},

and conclude that the H_m's can be generated inductively from H_0(x) = 1 and H_m = (x − ∂_x)H_{m−1}. In particular, use this to conclude that H_m is an mth order polynomial with the properties that: the coefficient of x^m is 1, the coefficient of x^l is 0 unless l has the same parity as m, and the constant term in H_{2m} is (−1)^m (2m)!/(2^m m!).

(ii) Assume that θ ∈ Θ²(P; R^n) with A_θ(T) uniformly bounded, and show that e^{|I_θ(T)|} is P-integrable. Use this along with the last part of the theorem above to justify

  E^P[H_m(I_θ(T), A_θ(T))] = (d^m/dλ^m) E^P[E_{λθ}(T)] |_{λ=0} = 0 for all m ≥ 1.

(iii) By combining (i) and (ii), show that there exist universal constants B_{2m} for m ∈ Z⁺ such that

  B_{2m}^{−1} E^P[I_θ(T)^{2m}] ≤ E^P[A_θ(T)^m] ≤ B_{2m} E^P[I_θ(T)^{2m}],

which is a primitive version of Burkholder's inequality. Finally, check that these inequalities continue to hold for any θ ∈ Θ²_loc(P; R^n), not just those for which A_θ(T) is uniformly bounded.

5.2 Itô's Integral Applied to Itô's Construction Method

Because our motivation for introducing stochastic integration was the desire to understand equations like (…), it seems reasonable to ask whether the theory developed in §5.1 does in fact do anything to increase our understanding. Of course, because we have been dealing with nothing but Brownian stochastic integrals, we will have to restrict our attention to processes for which the Lévy measure is 0.

Basic Existence and Uniqueness Result for S.D.E.'s. Let σ : [0, ∞) × Ω × R^n → Hom(R^n; R^n) and b : [0, ∞) × Ω × R^n → R^n be functions with the properties that, for each x ∈ R^n, (t, ω) ↦ σ(t, x, ω)

and (t, ω) ↦ b(t, x, ω) are progressively measurable, (t, ω) ↦ σ(t, 0, ω) and (t, ω) ↦ b(t, 0, ω) are bounded, and

  sup_{x₁≠x₂} sup_{(t,ω)} (‖σ(t, x₂, ω) − σ(t, x₁, ω)‖_{H.S.} ∨ |b(t, x₂, ω) − b(t, x₁, ω)|) / |x₂ − x₁| < ∞.

Theorem. Given functions σ and b satisfying the preceding Lipschitz condition and an R^n-valued Brownian motion (β(t), F_t, P), there exists for each x ∈ R^n a P-almost surely unique, R^n-valued, progressively measurable function X(·, x) which solves⁵ (cf. the corollary above)

  X(t, x) = x + ∫_0^t σ(τ, X(τ, x)) dβ(τ) + ∫_0^t b(τ, X(τ, x)) dτ, t ≥ 0.

Moreover, there exists a C ∈ (0, ∞), depending only on the Lipschitz condition and

  sup_{x∈R^n} sup_{(t,ω)} (‖σ(t, x, ω)‖_{H.S.} ∨ |b(t, x, ω)|) / (1 + |x|),

such that, for all T ∈ (0, ∞),

  E^P[‖X(·, x)‖²_{[0,T]}] ≤ C(1 + |x|²) e^{CT²}

and

  E^P[|X(t, x) − X(s, x)|²] ≤ C(1 + |x|²) e^{CT²} (t − s), 0 ≤ s < t ≤ T.

Proof: Set X_0(t, x) ≡ x. Assuming that X_N(·, x) has been defined and that t ↦ σ(t, X_N(t, x)) satisfies the integrability condition above, set

  (*) X_{N+1}(t, x) = x + ∫_0^t σ(τ, X_N(τ, x)) dβ(τ) + ∫_0^t b(τ, X_N(τ, x)) dτ,

and observe that t ↦ σ(t, X_{N+1}(t, x)) will again satisfy that condition. Hence, by induction on N, we can produce the sequence {X_N(·, x) : N ≥ 0} so

⁵ Just in case it is not clear, the condition that X(·, x) satisfy the equation implicitly contains the condition that X(·, x) is P-almost surely continuous. In particular, this guarantees that the right hand side makes sense.

that, for each N ∈ N, t ↦ σ(t, X_N(t, x)) satisfies the integrability condition and (*) holds. Moreover, by the last part of the corollary above plus Doob's inequality,

  E^P[‖X_1(·, x) − X_0(·, x)‖²_{[0,T]}] ≤ 8E^P[∫_0^T ‖σ(τ, x)‖²_{H.S.} dτ] + 2T E^P[∫_0^T |b(τ, x)|² dτ] < ∞,

and, for any N ≥ 1:

  E^P[‖X_{N+1}(·, x) − X_N(·, x)‖²_{[0,T]}] ≤ 8E^P[∫_0^T ‖σ(τ, X_N(τ, x)) − σ(τ, X_{N−1}(τ, x))‖²_{H.S.} dτ]
    + 2T E^P[∫_0^T |b(τ, X_N(τ, x)) − b(τ, X_{N−1}(τ, x))|² dτ].

Thus, by the Lipschitz condition, there is a K < ∞ such that

  E^P[‖X_{N+1}(·, x) − X_N(·, x)‖²_{[0,T]}] ≤ K(1 + T) ∫_0^T E^P[‖X_N(·, x) − X_{N−1}(·, x)‖²_{[0,t]}] dt

for all N ≥ 1; and so, by induction, we see that, for each T ∈ (0, ∞),

  E^P[‖X_{N+1}(·, x) − X_N(·, x)‖²_{[0,T]}] ≤ (K^N (1 + T)^N T^N / N!) E^P[‖X_1(·, x) − X_0(·, x)‖²_{[0,T]}].

From the preceding it is clear that, for each T ∈ (0, ∞),

  lim_{N→∞} sup_{N′>N} E^P[‖X_{N′}(·, x) − X_N(·, x)‖²_{[0,T]}] = 0.

Hence, we have now verified the existence statement. To prove the uniqueness assertion, suppose that X(·, x) and X′(·, x) are two solutions, and note that, for each k ≥ 1,

  E^P[‖X(·, x) − X′(·, x)‖²_{[0,T∧ζ_k]}] ≤ K(1 + T) ∫_0^T E^P[‖X(·, x) − X′(·, x)‖²_{[0,t∧ζ_k]}] dt,

where ζ_k ≡ inf{t ≥ 0 : |X(t, x)| ∨ |X′(t, x)| ≥ k}. But (e.g., by Gronwall's inequality) this means that

  E^P[‖X(·, x) − X′(·, x)‖²_{[0,T∧ζ_k]}] = 0,

and so we obtain the desired conclusion after letting k → ∞.

Turning to the asserted estimates, note that

  E^P[‖X(·, x)‖²_{[0,T]}] ≤ 3|x|² + 12E^P[∫_0^T ‖σ(t, X(t, x))‖²_{H.S.} dt] + 3T E^P[∫_0^T |b(t, X(t, x))|² dt],

and so there exists a K, with the required dependence, such that

  E^P[1 + ‖X(·, x)‖²_{[0,T]}] ≤ 3(1 + |x|²) + K(1 + T) ∫_0^T E^P[1 + ‖X(·, x)‖²_{[0,t]}] dt.

Hence the first estimate follows from Gronwall's inequality. As for the second estimate, assume that 0 ≤ s < t ≤ T, and use

  E^P[|X(t, x) − X(s, x)|²] ≤ 2E^P[∫_s^t ‖σ(τ, X(τ, x))‖²_{H.S.} dτ] + 2T E^P[∫_s^t |b(τ, X(τ, x))|² dτ]

together with the first estimate to arrive at the second estimate.

Corollary. Assume that σ : R^n → Hom(R^n; R^n) and b : R^n → R^n are continuous functions which satisfy

  sup_{x₁≠x₂} (‖σ(x₂) − σ(x₁)‖_{H.S.} ∨ |b(x₂) − b(x₁)|) / |x₂ − x₁| < ∞.

Refer to the setting in §3.1.2, and take M = 0 and F ≡ 0. Then (p(t), B_t, P) is an R^n-valued Brownian motion and the function p ↦ X(·, x, p) ∈ C([0, ∞); R^n) described in (G1) is the P-almost surely unique, {B_t : t ≥ 0}-progressively measurable solution to

  X(t, x) = x + ∫_0^t σ(X(τ, x)) dp(τ) + ∫_0^t b(X(τ, x)) dτ, t ≥ 0.

Proof: Let X(·, x) be the solution to the stochastic integral equation above, and define {X_N(·, x) : N ≥ 0} as in (H4) of §(…). Note that

  X_N(t, x, p) = x + ∫_0^t σ(X_N([τ]_N, x, p)) dp(τ) + ∫_0^t b(X_N([τ]_N, x, p)) dτ.

Thus,

  E^P[‖X(·, x) − X_N(·, x)‖²_{[0,t]}] ≤ 16E^P[∫_0^t ‖σ(X([τ]_N, x)) − σ(X_N([τ]_N, x))‖²_{H.S.} dτ]
    + 4tE^P[∫_0^t |b(X([τ]_N, x)) − b(X_N([τ]_N, x))|² dτ]
    + 16E^P[∫_0^t ‖σ(X(τ, x)) − σ(X([τ]_N, x))‖²_{H.S.} dτ]
    + 4tE^P[∫_0^t |b(X(τ, x)) − b(X([τ]_N, x))|² dτ];

and so, by the last estimates in the theorem above and the Lipschitz estimates on σ and b, we see that, for each (T, x) ∈ (0, ∞) × R^n, there is a K(T, x) < ∞ such that

  E^P[‖X(·, x) − X_N(·, x)‖²_{[0,t]}] ≤ K(T, x)2^{−N} + K(T, x) ∫_0^t E^P[‖X(·, x) − X_N(·, x)‖²_{[0,τ]}] dτ

for all t ∈ [0, T]. Finally, apply Gronwall's inequality to conclude that

  lim_{N→∞} E^P[‖X(·, x) − X_N(·, x)‖²_{[0,t]}] = 0.

Remark. In the literature, the equation above is called the stochastic integral equation for a diffusion process with diffusion coefficient σσ^⊤ and drift coefficient b. Often such equations are written in differential notation:

  dX(t, x) = σ(X(t, x)) dp(t) + b(X(t, x)) dt with X(0, x) = x,

in which case they are called a stochastic differential equation. It is important to recognize that the joint distribution of p ↦ (p, X(·, x, p)) under P would be the same were we to replace the canonical Brownian motion (p(t), B_t, P) by any other R^n-valued Brownian motion (β(t), F_t, P). Indeed, as both the construction in the theorem above and the one in §(…) make clear, this joint distribution depends only on the distribution of ω ↦ β(·, ω), which is the same no matter which realization of Brownian motion is used.
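The Picard iteration in the existence proof can be carried out verbatim on a discretized path, where its fixed point is exactly the Euler (Euler-Maruyama) recursion for the same increments. A sketch, with coefficients and step counts of my own choosing, that iterates the discrete Picard map and compares the result to the Euler path:

```python
import numpy as np

def picard_iterates(x0, sigma, b, dbeta, dt, n_iter):
    """Discrete Picard iteration for X(t) = x + int sigma(X) dbeta + int b(X) dt.

    Each iterate plugs the previous path into forward-point Ito and Riemann
    sums, mirroring the scheme (*) in the existence proof.
    """
    n = len(dbeta)
    X = np.full(n + 1, x0)                            # X_0(t) = x
    for _ in range(n_iter):
        drift = np.cumsum(b(X[:-1]) * dt)
        noise = np.cumsum(sigma(X[:-1]) * dbeta)
        X = np.concatenate([[x0], x0 + drift + noise])
    return X

rng = np.random.default_rng(6)
n_steps, T = 1000, 1.0
dt = T / n_steps
dbeta = rng.normal(0.0, np.sqrt(dt), n_steps)
sigma = lambda x: 0.1 * np.sin(x)                     # Lipschitz coefficients
b = lambda x: -0.1 * x

X_pic = picard_iterates(1.0, sigma, b, dbeta, dt, 20)

# the fixed point of the discrete Picard map is exactly the Euler path
X_eul = np.empty(n_steps + 1); X_eul[0] = 1.0
for m in range(n_steps):
    X_eul[m + 1] = X_eul[m] + sigma(X_eul[m]) * dbeta[m] + b(X_eul[m]) * dt
```

The factorially decaying bound K^N(1 + T)^N T^N / N! in the proof is visible here as the extremely rapid agreement of the iterates with the fixed point.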

Subordination. When solving a system of ordinary differential equations, one can sometimes take advantage of an inherent lower triangularity in the system. That is, one may be able to arrange the equations in such a way that the system contains a subsystem which is autonomous unto itself. In this case, the entire system can be solved by first solving the autonomous subsystem, plugging the solution into the remaining equations, and then solving the resulting, now time-dependent, equations. The same device applies when solving stochastic differential equations, and it is particularly interesting in the following situation.

Let σ : R^n → Hom(R^n; R^n) and b : R^n → R^n be as in the corollary above. Suppose that n = k + l and

  σ(z) = [ α(x) 0 ; 0 β(x, y) ] and b(z) = [ v(x) ; w(x, y) ] for z = (x, y) ∈ R^k × R^l,

where v : R^k → R^k, w : R^n → R^l, α : R^k → Hom(R^k; R^k), and β : R^n → Hom(R^l; R^l). Next, define Σ : [0, ∞) × R^l × C([0, ∞); R^k) → Hom(R^l; R^l) and B : [0, ∞) × R^l × C([0, ∞); R^k) → R^l so that

  Σ(t, y; p) = β(p(t), y) and B(t, y; p) ≡ w(p(t), y).

Finally, write r(t) = (p(t), q(t)), where p(t) and q(t) are, respectively, the orthogonal projections of r(t) onto R^k and R^l, thought of as subspaces of R^n = R^k × R^l, and note that, when C([0, ∞); R^n) is identified with C([0, ∞); R^k) × C([0, ∞); R^l), P can be identified with P ⊗ Q, where P and Q are the standard Wiener measures on C([0, ∞); R^k) and C([0, ∞); R^l), respectively.

Under these conditions, we want to go about solving the original equation by first solving

  X(t, x, p) = x + ∫_0^t α(X(τ, x, p)) dp(τ) + ∫_0^t v(X(τ, x, p)) dτ

relative to the Wiener measure P; then, for each p ∈ C([0, ∞); R^k) and y ∈ R^l, solving

  Y(t, y, q; p) = y + ∫_0^t Σ(τ, Y(τ, y, q; p); p) dq(τ) + ∫_0^t B(τ, Y(τ, y, q; p); p) dτ

relative to Q, and finally showing that

  t ↦ Z(t, z, (p, q)) ≡ (X(t, x, p), Y(t, y, q; X(·, x, p)))

is a solution to the original stochastic integral equation whose coefficients were σ and b. The following theorem says that this subordination procedure works.

Theorem. Let r = (p, q) ↦ Z(·, z, r) be defined for z = (x, y) as above. Then r ↦ Z(·, z, r) is the P-almost surely unique solution to the stochastic integral equation

  Z(t, z, r) = z + ∫_0^t σ(Z(τ, z, r)) dr(τ) + ∫_0^t b(Z(τ, z, r)) dτ, t ≥ 0.

Hence, if Φ : C([0, ∞); R^k) × C([0, ∞); R^l) → R is measurable and either bounded or non-negative, then

  E^P[Φ(Z(·, z, r))] = ∫∫ Φ(X(·, x, p), Y(·, y, q; X(·, x, p))) Q(dq) P(dp).

Proof: The proof of this result is really just an exercise in the use of Fubini's Theorem. Namely, let r ↦ Z̃(·, z, r) be the P-almost surely unique solution to the stochastic integral equation

  Z̃(t, z, r) = z + ∫_0^t σ(Z̃(τ, z, r)) dr(τ) + ∫_0^t b(Z̃(τ, z, r)) dτ, t ≥ 0.

What we have to show is that Z̃(·, z, r) = Z(·, z, r) for P-almost every r ∈ C([0, ∞); R^n). Let X̃(t, z, r) and Ỹ(t, z, r) be the orthogonal projections of Z̃(t, z, r) onto R^k and R^l. By Fubini's Theorem, we know that, for Q-almost every q, the function p ↦ X̃(·, z, (p, q)) satisfies

  X̃(t, z, (p, q)) = x + ∫_0^t α(X̃(τ, z, (p, q))) dp(τ) + ∫_0^t v(X̃(τ, z, (p, q))) dτ, t ≥ 0,

P-almost surely. Thus, by uniqueness, for Q-almost every q, X̃(·, z, (p, q)) = X(·, x, p) for P-almost every p.

Starting from the preceding and making another application of Fubini's Theorem, we see that, for $P$-almost every $p$, the function $q \rightsquigarrow \tilde{Y}\bigl(\,\cdot\,,z,(p,q)\bigr)$ satisfies
\[
\tilde{Y}\bigl(t,z,(p,q)\bigr) = y + \int_0^t \beta\Bigl(X(\tau,x,p),\tilde{Y}\bigl(\tau,z,(p,q)\bigr)\Bigr)\,dq(\tau) + \int_0^t w\Bigl(X(\tau,x,p),\tilde{Y}\bigl(\tau,z,(p,q)\bigr)\Bigr)\,d\tau, \qquad t \ge 0,
\]
$Q$-almost surely. Hence, by uniqueness, for $P$-almost every $p$, $\tilde{Y}\bigl(\,\cdot\,,z,(p,q)\bigr) = Y\bigl(\,\cdot\,,y,q;X(\,\cdot\,,x,p)\bigr)$ $Q$-almost surely. After combining these two and making yet another application of Fubini's Theorem, we have now shown that $\tilde{Z}(\,\cdot\,,z,r) = Z(\,\cdot\,,z,r)$ for $\mathcal{P}$-almost every $r$. $\square$

Exercises.

Exercise. Here is an example which indicates how the preceding theorem gets applied. Namely, referring to the setting in that theorem, suppose that $\beta$ and $w$ are independent of $y$; that is, $\beta(x,y) = \beta(x)$ and $w(x,y) = w(x)$. Next, define $V : [0,\infty) \times C([0,\infty);\mathbb{R}^k) \longrightarrow \operatorname{Hom}(\mathbb{R}^\ell;\mathbb{R}^\ell)$ and $B : [0,\infty) \times C([0,\infty);\mathbb{R}^k) \longrightarrow \mathbb{R}^\ell$ so that
\[
V(t,p) = \int_0^t \beta\bigl(p(\tau)\bigr)\beta\bigl(p(\tau)\bigr)^\top\,d\tau
\quad\&\quad
B(t,p) = \int_0^t w\bigl(p(\tau)\bigr)\,d\tau,
\]
and let $\Gamma_t(p,d\eta)$ denote the Gaussian measure on $\mathbb{R}^\ell$ with mean $B(t,p)$ and covariance $V(t,p)$. Given a stopping time $\zeta : C([0,\infty);\mathbb{R}^k) \longrightarrow [0,\infty]$ and a bounded, measurable $f : \mathbb{R}^n \longrightarrow \mathbb{R}$, show that
\[
E^{\mathcal{P}}\Bigl[ f\bigl(Z(\zeta(p),z,r)\bigr),\, \zeta(p) < \infty \Bigr] = E^{P}\bigl[\Phi,\, \zeta < \infty\bigr],
\]
where
\[
\Phi(p) = \int_{\mathbb{R}^\ell} f\bigl( X(\zeta(p),x,p),\, y + \eta \bigr)\, \Gamma_{\zeta(p)}\bigl(X(\,\cdot\,,x,p), d\eta\bigr) \qquad\text{for } \zeta(p) < \infty.
\]

5.3 Itô's Formula

The jewel in the crown of Itô's stochastic integration theory is Itô's formula. Depending on one's point of view, his formula can be seen as the solution to any one of a variety of problems. From the point of view which comes out of the considerations in Chapters 3 and 4, especially (G2) in 3.1.2, it gives us a representation for the martingales being discussed there. From a more general standpoint, Itô's formula is the fundamental theorem of calculus for which his integration theory is the integral, and, as such, it is the identity on which nearly everything else relies.
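Returning to the exercise at the end of the preceding section: its content is that, conditionally on the $x$-path, $Y(t)$ is Gaussian with mean $y + B(t,\cdot)$ and covariance $V(t,\cdot)$ evaluated along that path. In the scalar case $\ell = 1$ this can be checked by simulation. The sketch below (stdlib Python; the names are ours and the fed-in path plays the role of $X(\,\cdot\,,x,p)$) freezes one path and compares the empirical law of $Y(T)$ over many independent $q$'s with the predicted Gaussian.

```python
import math
import random

rng = random.Random(1)

def conditional_law_check(x_path, beta, w, y0=0.0, T=1.0, n_samples=20000):
    """With beta and w functions of x alone (scalar case), Y(T) given the
    x-path is Gaussian with mean y0 + B(T) and variance V(T).  Freeze one
    path, simulate Y(T) over many independent q's, and compare."""
    n = len(x_path) - 1
    dt = T / n
    b = [beta(x) for x in x_path[:-1]]
    c = [w(x) for x in x_path[:-1]]
    B = sum(c) * dt                       # B(T) = integral of w along the path
    V = sum(bi * bi for bi in b) * dt     # V(T) = integral of beta^2 along the path
    sd = math.sqrt(dt)
    samples = []
    for _ in range(n_samples):            # Y(T) = y0 + int beta dq + B(T)
        samples.append(y0 + B + sum(bi * rng.gauss(0.0, sd) for bi in b))
    return samples, y0 + B, V
```

With a constant path and constant coefficients the predicted mean and variance are exact, so the empirical moments should match them up to Monte Carlo error.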

In the following statement:

(X) $X : [0,\infty) \times \Omega \longrightarrow \mathbb{R}^k$ is progressively measurable and, for $P$-almost every $\omega$, $X(\,\cdot\,,\omega)$ is a continuous function of locally bounded variation;

(Y) $Y : [0,\infty) \times \Omega \longrightarrow \mathbb{R}^\ell$ is progressively measurable and, for each $1 \le j \le \ell$, the $j$th component $Y_j$ of $Y$ is the $d\beta(t)$-Itô stochastic integral $I_{\theta_j}$ of some $\theta_j \in \Theta^2_{\mathrm{loc}}(P;\mathbb{R}^n)$;

(Z) $Z(t) = \bigl(X(t),Y(t)\bigr)$.

With this notation, Itô's Formula is the formula given in the next theorem. Our proof is based on the technique introduced by Kunita and Watanabe in their now famous article [21].

Theorem. Given any $F \in C^{1,2}(\mathbb{R}^k \times \mathbb{R}^\ell;\mathbb{R})$,
\[
F\bigl(Z(t)\bigr) - F\bigl(Z(0)\bigr)
= \sum_{i=1}^k \int_0^t \partial_{x_i}F\bigl(Z(\tau)\bigr)\,dX_i(\tau)
+ \sum_{j=1}^\ell \int_0^t \partial_{y_j}F\bigl(Z(\tau)\bigr)\,\bigl(\theta_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n}
+ \frac{1}{2}\sum_{j,j'=1}^\ell \int_0^t \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,\partial_{y_j}\partial_{y_{j'}}F\bigl(Z(\tau)\bigr)\,d\tau,
\qquad t \ge 0,
\]
$P$-almost surely. Here, the $dX_i$-integrals are taken in the sense of Riemann–Stieltjes and the $d\beta$-integrals are taken in the sense of Itô.

Proof: We begin by making several reductions. In the first place, without loss in generality, we may assume that $Z(\,\cdot\,,\omega)$ is continuous for all $\omega \in \Omega$. Secondly, we may assume that all the $X_i$'s have uniformly bounded variation and that all the $\theta_j$'s are elements of $\Theta^2(P;\mathbb{R}^n)$. Indeed, if this is not the case already, then we can introduce the stopping times
\[
\zeta_R = \inf\Biggl\{ t \ge 0 : \sum_{i=1}^k \operatorname{var}_{[0,t]}(X_i) + \sum_{j=1}^\ell \int_0^t |\theta_j(\tau)|^2\,d\tau \ge R \Biggr\},
\]
replace $Z(t)$ by $Z(t \wedge \zeta_R)$, and, at the end, let $R \nearrow \infty$. Similarly, we may assume that $F$ has compact support. Finally, under these assumptions, a standard mollification procedure makes it clear that we need only handle $F$'s which are both smooth and compactly supported. Thus, from now on, we will be assuming that: $Z(\,\cdot\,,\omega)$ is continuous for all $\omega$, the $X_i$'s have uniformly bounded variation, the $\theta_j$'s are elements of $\Theta^2(P;\mathbb{R}^n)$, and $F$ is smooth and compactly supported. Finally, by continuity, it suffices to prove that the identity holds $P$-almost surely for each $t \ge 0$.
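Before turning to the main estimates, the identity being proved can be spot-checked numerically in its simplest non-trivial instance: $k = 0$, $\ell = n = 1$, $\theta \equiv 1$, and $F(y) = y^2$, where it reads $\beta(T)^2 = 2\int_0^T \beta\,d\beta + T$. The sketch below (stdlib Python; the function name is ours) approximates the Itô integral by left-endpoint Riemann sums and compares the two sides.

```python
import math
import random

rng = random.Random(2)

def ito_formula_check(T=1.0, n=100000):
    """Itô's formula for F(y) = y^2 applied to Brownian motion:
    beta(T)^2 - beta(0)^2 = 2 * int_0^T beta dbeta + T, the final T coming
    from the second-derivative term (1/2) F'' dtau."""
    dt = T / n
    sd = math.sqrt(dt)
    b, ito_sum = 0.0, 0.0
    for _ in range(n):
        db = rng.gauss(0.0, sd)
        ito_sum += 2.0 * b * db       # left-endpoint (Itô) Riemann sum
        b += db
    return b * b, ito_sum + T         # the two sides of the identity
```

The discrepancy between the two sides is exactly the difference between the sum of squared increments and $T$, which is small for a fine partition.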

Given $N \in \mathbb{N}$ and $t \in (0,\infty)$, define the stopping times $\{\zeta^N_m : m \ge 0\}$ so that $\zeta^N_0 = 0$ and
\[
\zeta^N_{m+1} = \inf\Biggl\{ \tau \ge \zeta^N_m : \bigl|Z(\tau) - Z(\zeta^N_m)\bigr| \vee \max_{1 \le j \le \ell} \int_{\zeta^N_m}^{\tau} |\theta_j(\sigma)|^2\,d\sigma \ge 2^{-N} \Biggr\} \wedge t.
\]
Next, set $Z^N_m = (X^N_m,Y^N_m) = Z(\zeta^N_m)$ and $\Delta^N_{m,j} = Y_j(\zeta^N_{m+1}) - Y_j(\zeta^N_m)$. By continuity, we know that, for each $\omega$, $\zeta^N_m(\omega) = t$ for all but a finite number of $m$'s. Hence,
\[
F\bigl(Z(t)\bigr) - F\bigl(Z(0)\bigr) = \sum_{m=0}^{\infty} \Bigl[ F\bigl(Z^N_{m+1}\bigr) - F\bigl(Z^N_m\bigr) \Bigr].
\]
Furthermore, by the Fundamental Theorem of Calculus,
\[
F\bigl(Z^N_{m+1}\bigr) - F\bigl(Z^N_m\bigr) = F\bigl(X^N_m,Y^N_{m+1}\bigr) - F\bigl(Z^N_m\bigr) + \sum_{i=1}^k \int_{\zeta^N_m}^{\zeta^N_{m+1}} \partial_{x_i}F\bigl(X(\tau),Y^N_{m+1}\bigr)\,dX_i(\tau),
\]
and, by Taylor's Theorem,
\[
F\bigl(X^N_m,Y^N_{m+1}\bigr) - F\bigl(Z^N_m\bigr)
= \sum_{j=1}^\ell \partial_{y_j}F\bigl(Z^N_m\bigr)\,\Delta^N_{m,j}
+ \frac{1}{2}\sum_{j,j'=1}^\ell \partial_{y_j}\partial_{y_{j'}}F\bigl(Z^N_m\bigr)\,\Delta^N_{m,j}\Delta^N_{m,j'} + E^N_m,
\]
where there exists a $C_3 < \infty$, depending only on the bound on the third order derivatives of $F$, such that
\[
|E^N_m| \le C_3\,2^{-N} \sum_{j=1}^\ell \bigl(\Delta^N_{m,j}\bigr)^2.
\]
Next, we use the lemma above to first write
\[
\partial_{y_j}F\bigl(Z^N_m\bigr)\,\Delta^N_{m,j} = \int_0^t \mathbf{1}_{[\zeta^N_m,\zeta^N_{m+1})}(\tau)\,\partial_{y_j}F\bigl(Z^N_m\bigr)\,\bigl(\theta_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n},
\]
and then conclude that
\[
F\bigl(Z(t)\bigr) - F\bigl(Z(0)\bigr)
= \sum_{i=1}^k \int_0^t F^N_i(\tau)\,dX_i(\tau)
+ \sum_{j=1}^\ell \int_0^t \bigl(\tilde{\theta}^N_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n}
+ \frac{1}{2}\sum_{m=0}^\infty \sum_{j,j'=1}^\ell \partial_{y_j}\partial_{y_{j'}}F\bigl(Z^N_m\bigr)\,\Delta^N_{m,j}\Delta^N_{m,j'}
+ \sum_{m=0}^\infty E^N_m,
\]

where
\[
F^N_i(\tau) \equiv \partial_{x_i}F\bigl(X(\tau),Y^N_{m+1}\bigr)
\quad\&\quad
\tilde{\theta}^N_j(\tau) \equiv \partial_{y_j}F\bigl(Z^N_m\bigr)\,\theta_j(\tau)
\qquad\text{for } \tau \in [\zeta^N_m,\zeta^N_{m+1}).
\]
Because $|E^N_m| \le C_3\,2^{-N} \sum_{j=1}^\ell (\Delta^N_{m,j})^2$ and
\[
\sum_{m=0}^\infty E^P\Bigl[\bigl(\Delta^N_{m,j}\bigr)^2\Bigr] = E^P\Biggl[\sum_{m=0}^\infty \bigl(\Delta^N_{m,j}\bigr)^2\Biggr] = E^P\Bigl[\bigl(Y_j(t) - Y_j(0)\bigr)^2\Bigr],
\]
we know that $\sum_{m=0}^\infty E^N_m$ tends to $0$ in $L^1(P;\mathbb{R})$ as $N \to \infty$. At the same time, it is clear that, if $C_2$ is a bound on the second derivatives of $F$, then
\[
\biggl| \int_0^t F^N_i(\tau)\,dX_i(\tau) - \int_0^t \partial_{x_i}F\bigl(Z(\tau)\bigr)\,dX_i(\tau) \biggr| \le C_2\,2^{-N} \operatorname{var}_{[0,t]}(X_i)
\]
and
\[
E^P\Biggl[ \biggl( \int_0^t \bigl(\tilde{\theta}^N_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n} - \int_0^t \partial_{y_j}F\bigl(Z(\tau)\bigr)\,\bigl(\theta_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n} \biggr)^2 \Biggr]
= E^P\biggl[ \int_0^t \bigl|\tilde{\theta}^N_j(\tau) - \partial_{y_j}F\bigl(Z(\tau)\bigr)\theta_j(\tau)\bigr|^2\,d\tau \biggr]
\le C_2^2\,4^{-N}\,\|\theta_j\|^2_{\Theta^2(P;\mathbb{R}^n)}.
\]
Hence, since it is obvious that
\[
\sum_{m=0}^\infty \int_{\zeta^N_m}^{\zeta^N_{m+1}} \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,\partial_{y_j}\partial_{y_{j'}}F\bigl(Z(\zeta^N_m)\bigr)\,d\tau
\longrightarrow \int_0^t \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,\partial_{y_j}\partial_{y_{j'}}F\bigl(Z(\tau)\bigr)\,d\tau,
\]
all that remains is to show that, for all $1 \le j,j' \le \ell$,
\[
\sum_{m=0}^\infty \partial_{y_j}\partial_{y_{j'}}F\bigl(Z^N_m\bigr) \Biggl( \Delta^N_{m,j}\Delta^N_{m,j'} - \int_{\zeta^N_m}^{\zeta^N_{m+1}} \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,d\tau \Biggr) \longrightarrow 0
\]
in $P$-measure. To this end, remark that
\[
E^P\Bigl[ \Delta^N_{m,j}\Delta^N_{m,j'} \,\Big|\, \mathcal{F}_{\zeta^N_m} \Bigr]
= E^P\Bigl[ Y_j(\zeta^N_{m+1})Y_{j'}(\zeta^N_{m+1}) - Y_j(\zeta^N_m)Y_{j'}(\zeta^N_m) \,\Big|\, \mathcal{F}_{\zeta^N_m} \Bigr]
= E^P\Biggl[ \int_{\zeta^N_m}^{\zeta^N_{m+1}} \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,d\tau \,\Bigg|\, \mathcal{F}_{\zeta^N_m} \Biggr],
\]

and conclude that the terms
\[
S^N_{m,j,j'} \equiv \partial_{y_j}\partial_{y_{j'}}F\bigl(Z^N_m\bigr) \Biggl( \Delta^N_{m,j}\Delta^N_{m,j'} - \int_{\zeta^N_m}^{\zeta^N_{m+1}} \bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,d\tau \Biggr)
\]
are orthogonal for different $m$'s. Hence,
\[
E^P\Biggl[ \biggl( \sum_{m=0}^\infty S^N_{m,j,j'} \biggr)^2 \Biggr] = \sum_{m=0}^\infty E^P\Bigl[ \bigl(S^N_{m,j,j'}\bigr)^2 \Bigr].
\]
At the same time,
\[
\bigl(S^N_{m,j,j'}\bigr)^2 \le C_2^2\,2^{-N+2} \Biggl( \bigl(\Delta^N_{m,j}\bigr)^2 + \bigl(\Delta^N_{m,j'}\bigr)^2 + \int_{\zeta^N_m}^{\zeta^N_{m+1}} \Bigl( |\theta_j(\tau)|^2 + |\theta_{j'}(\tau)|^2 \Bigr)\,d\tau \Biggr).
\]
Hence,
\[
E^P\Biggl[ \biggl( \sum_{m=0}^\infty S^N_{m,j,j'} \biggr)^2 \Biggr]
\le C_2^2\,2^{-N+2}\, E^P\Biggl[ \bigl(Y_j(t)-Y_j(0)\bigr)^2 + \bigl(Y_{j'}(t)-Y_{j'}(0)\bigr)^2 + \int_0^t \Bigl( |\theta_j(\tau)|^2 + |\theta_{j'}(\tau)|^2 \Bigr)\,d\tau \Biggr]
\le C_2^2\,2^{-N+4}\, E^P\Bigl[ \bigl|Y(t)-Y(0)\bigr|^2 \Bigr] \longrightarrow 0,
\]
and the proof is complete. $\square$

Remark. If one likes to think about the Fundamental Theorem of Calculus in terms of differentials, then the following is an appealing way to think about Itô's formula. In the first place, given a progressively measurable function $\alpha : [0,\infty) \times \Omega \longrightarrow \mathbb{R}$ which satisfies
\[
P\biggl( \int_0^t \alpha(\tau)^2\,|\theta_j(\tau)|^2\,d\tau < \infty \biggr) = 1 \qquad\text{for all } t \ge 0,
\]
one should introduce the notation
\[
\int_0^t \alpha(\tau)\,dY_j(\tau) \equiv \int_0^t \alpha(\tau)\,\bigl(\theta_j(\tau),d\beta(\tau)\bigr)_{\mathbb{R}^n}.
\]

That is, in terms of differentials, $dY_j(t) = \bigl(\theta_j(t),d\beta(t)\bigr)_{\mathbb{R}^n}$. At the same time, one should define
\[
\int_0^t \alpha(\tau)\,dY_j(\tau)\,dY_{j'}(\tau) \equiv \int_0^t \alpha(\tau)\,\bigl(\theta_j(\tau),\theta_{j'}(\tau)\bigr)_{\mathbb{R}^n}\,d\tau
\]
for progressively measurable $\alpha$'s satisfying
\[
P\biggl( \int_0^t |\alpha(\tau)|\,\Bigl( |\theta_j(\tau)|^2 + |\theta_{j'}(\tau)|^2 \Bigr)\,d\tau < \infty \biggr) = 1 \qquad\text{for all } t \ge 0.
\]
In terms of differentials, this means that we are taking $dY_j(t)\,dY_{j'}(t) = \bigl(\theta_j(t),\theta_{j'}(t)\bigr)_{\mathbb{R}^n}\,dt$. Finally, define $dX_i(t)\,dX_{i'}(t) = 0 = dX_i(t)\,dY_j(t)$ for all $1 \le i,i' \le k$ and $1 \le j \le \ell$. With this notation, a differential version of Itô's formula becomes
\[
dF\bigl(Z(t)\bigr) = \Bigl( \operatorname{grad}F\bigl(Z(t)\bigr),\, dZ(t) \Bigr)_{\mathbb{R}^{k+\ell}} + \frac{1}{2} \Bigl( dZ(t),\, \operatorname{Hess}F\bigl(Z(t)\bigr)\,dZ(t) \Bigr)_{\mathbb{R}^{k+\ell}}.
\]
Although it looks a little questionable, this way of writing Itô's formula has more than mnemonic value. In fact, it highlights the essential fact on which his formula rests. Namely, $d\beta_m(t)$ may be a differential, but it is not, in a classical sense, an infinitesimal. Indeed, there is abundant evidence that $d\beta_m(t)$ is of order $(dt)^{\frac12}$.$^6$ Thus, it is only reasonable that, when dealing with Brownian paths, one will not get a differential relation in which the left-hand side differs from the right-hand side by $o(dt)$ unless one uses a second order Taylor expansion. Furthermore, it is clear why $d\beta_m(t)^2 = dt$. In order to figure out how $d\beta_m\,d\beta_{m'}$ should be interpreted when $m \ne m'$, remember that $\frac{\beta_m+\beta_{m'}}{\sqrt2}$ and $\frac{\beta_m-\beta_{m'}}{\sqrt2}$ are both Brownian motions. Hence,
\[
2\,d\beta_m(t)\,d\beta_{m'}(t) = \biggl( d\tfrac{\beta_m+\beta_{m'}}{\sqrt2}(t) \biggr)^2 - \biggl( d\tfrac{\beta_m-\beta_{m'}}{\sqrt2}(t) \biggr)^2 = dt - dt = 0.
\]
In other words, $d\beta_m(t)\,d\beta_{m'}(t) = \delta_{m,m'}\,dt$, which now explains why the preceding works. Namely, $dX_i(t)\,dX_{i'}(t) = 0$ and $dX_i(t)\,dY_j(t) = dX_i(t)\,\bigl(\theta_j(t),d\beta(t)\bigr)_{\mathbb{R}^n} = 0$ because $dX_i(t)\,dX_{i'}(t)$ and $dX_i(t)\,d\beta(t)$ are $o(dt)$, and
\[
dY_j(t)\,dY_{j'}(t) = \sum_{m,m'} \theta_j(t)_m\,\theta_{j'}(t)_{m'}\,d\beta_m(t)\,d\beta_{m'}(t) = \bigl(\theta_j(t),\theta_{j'}(t)\bigr)_{\mathbb{R}^n}\,dt
\]
because $d\beta_m(t)\,d\beta_{m'}(t) = \delta_{m,m'}\,dt$.

$^6$ Lévy understood this point thoroughly and in fact made systematic use of the notation $(dt)^{\frac12}$.
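The rule $d\beta_m(t)\,d\beta_{m'}(t) = \delta_{m,m'}\,dt$ is visible on any fine partition: the sum of squared increments of one component stabilizes at $t$, while the cross sum for two independent components collapses to $0$. A minimal check (stdlib Python, illustrative names):

```python
import math
import random

rng = random.Random(3)

def quadratic_covariation(T=1.0, n=100000):
    """Partition sums illustrating dbeta_m dbeta_m' = delta_{m,m'} dt:
    the sum of (dbeta_1)^2 is close to T, while the cross sum over two
    independent components is close to 0."""
    sd = math.sqrt(T / n)
    qv, cross = 0.0, 0.0
    for _ in range(n):
        d1 = rng.gauss(0.0, sd)       # increment of beta_1
        d2 = rng.gauss(0.0, sd)       # increment of beta_2, independent of beta_1
        qv += d1 * d1
        cross += d1 * d2
    return qv, cross
```

Both sums have fluctuations of order $n^{-1/2}$, which is exactly the statement that $d\beta_m$ is of order $(dt)^{1/2}$ while the compensated products are $o(dt)$ in the aggregate.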

Exercises.

Exercise. Given an $\mathbb{R}^n$-valued Brownian motion $(\beta(t),\mathcal{F}_t,P)$ and a $\theta \in \Theta^2_{\mathrm{loc}}(P;\mathbb{R}^n)$, define, as earlier,
\[
E_\theta(t) \equiv \exp\Bigl( I_\theta(t) - \tfrac12 A_\theta(t) \Bigr), \qquad\text{where } A_\theta(t) \equiv \int_0^t |\theta(\tau)|^2\,d\tau.
\]

(i) Show that $E_\theta$ is $P$-uniquely determined by the fact that it is progressively measurable and satisfies
\[
dE_\theta(t) = E_\theta(t)\,\bigl(\theta(t),d\beta(t)\bigr)_{\mathbb{R}^n} \quad\text{with}\quad E_\theta(0) = 1.
\]
For this reason, $E_\theta$ is sometimes called the Itô exponential.
Hint: Checking that $E_\theta$ is a solution is an elementary application of Itô's formula. To see that it is the only solution, suppose that $X$ is a second solution, and apply Itô's formula to see that $d\bigl(X(t)/E_\theta(t)\bigr) = 0$.

(ii) Show that $(E_\theta(t),\mathcal{F}_t,P)$ is always a supermartingale and that it is a martingale if and only if $E^P[E_\theta(t)] = 1$ for all $t \ge 0$.

(iii) From now on, assume that $E^P\bigl[e^{\frac12 A_\theta(T)}\bigr] < \infty$ for each $T \ge 0$. First observe that $(I_\theta(t),\mathcal{F}_t,P)$ is a square integrable martingale, and conclude that, for every $\alpha \ge 0$, $(e^{\alpha I_\theta(t)},\mathcal{F}_t,P)$ is a submartingale. In addition, show that
\[
E^P\bigl[ e^{\frac12 I_\theta(T)} \bigr] \le E^P\bigl[ e^{\frac12 A_\theta(T)} \bigr]^{\frac12} < \infty \qquad\text{for all } T \ge 0.
\]
Hint: Write $e^{\frac12 I_\theta} = E_\theta^{\frac12}\,e^{\frac14 A_\theta}$, apply Schwarz's inequality, and remember that $E^P[E_\theta(T)] \le 1$.

(iv) Given $\lambda \in (0,1)$, determine $p_\lambda \in (1,\infty)$ so that $\lambda(2-\lambda)p_\lambda^2 = 1$. For $p \in [1,p_\lambda]$, note that
\[
E_{\lambda\theta}^{\,p^2} = \bigl(E_\theta\bigr)^{\lambda^2 p^2}\,e^{\lambda(1-\lambda)p^2 I_\theta},
\]
and use Hölder's inequality, together with the submartingale property of $e^{\frac12 I_\theta}$, to conclude that, for any stopping time $\zeta$,
\[
E^P\Bigl[ E_{\lambda\theta}(T\wedge\zeta)^{p^2} \Bigr] \le E^P\bigl[ e^{\frac12 I_\theta(T)} \bigr]^{2\lambda(1-\lambda)p^2}\, E^P\bigl[ E_\theta(T\wedge\zeta) \bigr]^{\lambda^2 p^2} \le E^P\bigl[ e^{\frac12 I_\theta(T)} \bigr]^{2\lambda(1-\lambda)p^2}.
\]
By taking $p = p_\lambda$ and applying part (iii), see that this provides sufficient uniform integrability to check that $(E_{\lambda\theta}(t),\mathcal{F}_t,P)$ is a martingale for each $\lambda \in (0,1)$.

(v) By taking $p = 1$ in the preceding, justify
\[
1 = E^P\bigl[ E_{\lambda\theta}(T) \bigr] \le E^P\bigl[ e^{\frac12 I_\theta(T)} \bigr]^{2\lambda(1-\lambda)}\, E^P\bigl[ E_\theta(T) \bigr]^{\lambda^2}
\]
for all $\lambda \in (0,1)$. After letting $\lambda \nearrow 1$, conclude first that $E^P[E_\theta(T)] = 1$ and then that $(E_\theta(t),\mathcal{F}_t,P)$ is a martingale. The fact that $E^P\bigl[e^{\frac12 A_\theta(T)}\bigr] < \infty$ for all $T \ge 0$ implies that $(E_\theta(t),\mathcal{F}_t,P)$ is a martingale is known as Novikov's criterion.
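For a constant $\theta$, the Itô exponential of the exercise reduces to $E_\theta(T) = \exp\bigl(\theta\,\beta(T) - \tfrac12\theta^2 T\bigr)$, Novikov's condition holds trivially, and the martingale property forces $E^P[E_\theta(T)] = 1$. A quick Monte Carlo corroboration (stdlib Python; the name is ours):

```python
import math
import random

rng = random.Random(4)

def ito_exponential_mean(theta=0.5, T=1.0, n_samples=200000):
    """For constant theta, E_theta(T) = exp(theta*beta(T) - theta^2*T/2).
    Novikov's criterion applies, so the sample mean should be close to 1."""
    total = 0.0
    for _ in range(n_samples):
        beta_T = rng.gauss(0.0, math.sqrt(T))   # beta(T) ~ N(0, T)
        total += math.exp(theta * beta_T - 0.5 * theta * theta * T)
    return total / n_samples
```

For unbounded $\theta$'s the supermartingale inequality $E^P[E_\theta(T)] \le 1$ can be strict, which is precisely why a criterion such as Novikov's is needed.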


More information

Feller Processes and Semigroups

Feller Processes and Semigroups Stat25B: Probability Theory (Spring 23) Lecture: 27 Feller Processes and Semigroups Lecturer: Rui Dong Scribe: Rui Dong ruidong@stat.berkeley.edu For convenience, we can have a look at the list of materials

More information

Tools of stochastic calculus

Tools of stochastic calculus slides for the course Interest rate theory, University of Ljubljana, 212-13/I, part III József Gáll University of Debrecen Nov. 212 Jan. 213, Ljubljana Itô integral, summary of main facts Notations, basic

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

Brownian Motion. Chapter Stochastic Process

Brownian Motion. Chapter Stochastic Process Chapter 1 Brownian Motion 1.1 Stochastic Process A stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ,P and a real valued stochastic

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON GEORGIAN MATHEMATICAL JOURNAL: Vol. 3, No. 2, 1996, 153-176 THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON M. SHASHIASHVILI Abstract. The Skorokhod oblique reflection problem is studied

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Chapter 4. The dominated convergence theorem and applications

Chapter 4. The dominated convergence theorem and applications Chapter 4. The dominated convergence theorem and applications The Monotone Covergence theorem is one of a number of key theorems alllowing one to exchange limits and [Lebesgue] integrals (or derivatives

More information

Measurable functions are approximately nice, even if look terrible.

Measurable functions are approximately nice, even if look terrible. Tel Aviv University, 2015 Functions of real variables 74 7 Approximation 7a A terrible integrable function........... 74 7b Approximation of sets................ 76 7c Approximation of functions............

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

Stochastic Integration and Continuous Time Models

Stochastic Integration and Continuous Time Models Chapter 3 Stochastic Integration and Continuous Time Models 3.1 Brownian Motion The single most important continuous time process in the construction of financial models is the Brownian motion process.

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Solving Stochastic Partial Differential Equations as Stochastic Differential Equations in Infinite Dimensions - a Review

Solving Stochastic Partial Differential Equations as Stochastic Differential Equations in Infinite Dimensions - a Review Solving Stochastic Partial Differential Equations as Stochastic Differential Equations in Infinite Dimensions - a Review L. Gawarecki Kettering University NSF/CBMS Conference Analysis of Stochastic Partial

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim ***

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No. 4, December 26 GIRSANOV THEOREM FOR GAUSSIAN PROCESS WITH INDEPENDENT INCREMENTS Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** Abstract.

More information

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations

Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Lecture 4: Numerical Solution of SDEs, Itô Taylor Series, Gaussian Approximations Simo Särkkä Aalto University, Finland November 18, 2014 Simo Särkkä (Aalto) Lecture 4: Numerical Solution of SDEs November

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information