STOCHASTIC ANALYSIS. F. den Hollander H. Maassen. Mathematical Institute University of Nijmegen Toernooiveld ED Nijmegen The Netherlands


References:

1. F. Black and M. Scholes, The pricing of options and corporate liabilities, J. Polit. Econom. 81.
2. R. Cameron and W.T. Martin, Transformations of Wiener integrals under translations, Ann. Math. (2) 45.
3. R.A. Carmona and S.A. Molchanov, Parabolic Anderson problem and intermittency, AMS Memoir 518, American Mathematical Society, Providence RI.
4. K.L. Chung and R. Williams, Introduction to Stochastic Integration, 2nd edition, Birkhäuser, Boston.
5. E.B. Davies, One-Parameter Semigroups, Academic Press.
6. R. Durrett, Stochastic Calculus: A Practical Introduction, Probability and Stochastics Series, CRC Press.
7. E.B. Dynkin, The optimum choice of the instant for stopping a Markov process, Soviet Mathematics 4.
8. I.V. Girsanov, On transforming a certain class of stochastic processes by absolutely continuous substitution of measures, Theory of Probability and Applications 5.
9. P.R. Halmos, Measure Theory, van Nostrand Reinhold, New York.
10. R. van der Hofstad, F. den Hollander and W. König, Central limit theorem for the Edwards model, Ann. Probab. 25.
11. K. Itô, Lectures on Stochastic Processes, Tata Institute of Fundamental Research, Bombay.
12. G. Kallianpur, Stochastic Filtering Theory, Springer, New York.
13. R.E. Kalman and R.S. Bucy, New results in linear filtering and prediction theory, Trans. ASME Ser. D: J. Basic Eng. 83.
14. T. Mikosch, Elementary Stochastic Calculus with Finance in View, Advanced Series on Statistical Science & Applied Probability 6.
15. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, Berlin, Heidelberg; 1st edition 1985, 5th edition.
16. L.S. Ornstein and G.E. Uhlenbeck, On the theory of Brownian motion, Phys. Rev. 36.
17. N. Wiener, Differential space, J. Math. and Phys. 2.
18. N. Wiener, The homogeneous chaos, Amer. J. Math. 6.
19. D. Williams, Probability with Martingales, Cambridge University Press, Cambridge.

The notes in this syllabus are based primarily on the book by Øksendal.

Contents

1 Introduction
  1.1 Applications
  1.2 What is noise?

2 Brownian motion
  2.1 Notations
  2.2 Definition of Brownian motion
  2.3 Continuity of paths and the problem of versions
  2.4 Roughness of paths

3 Stochastic integration: the Gaussian case
  3.1 The stochastic integral of a non-random function
  3.2 The Ornstein-Uhlenbeck process
  3.3 Some hocus pocus

4 The Itô-integral
  4.1 Step functions
  4.2 Arbitrary functions
  4.3 Martingales
  4.4 Continuity of paths

5 Stochastic integrals and the Itô-formula
  5.1 The one-dimensional Itô-formula
  5.2 Some examples
  5.3 The multi-dimensional Itô-formula
  5.4 Local times of Brownian motion

6 Stochastic differential equations
  6.1 Strong solutions
  6.2 Weak solutions

7 Itô-diffusions, generators and semigroups
  7.1 Introduction and motivation
  7.2 Basic properties
  7.3 Generalities on generators
  7.4 Dynkin's formula and applications

8 Transformations of diffusions
  8.1 The Feynman-Kac formula
  8.2 The Cameron-Martin-Girsanov formula

9 The linear Kalman-Bucy filter
  9.1 Example 1: observation of a single random variable
  9.2 Example 2: repeated observation of a single random variable
  9.3 Example 3: continuous observation of an Ornstein-Uhlenbeck process

10 The Black and Scholes option pricing formula
  10.1 Stocks, bonds and stock options
  10.2 The martingale case
  10.3 The effect of stock trading
    Motivation
    Results
  10.4 Inclusion of the interest rate

1 Introduction

In stochastic analysis one studies random functions of one variable and various kinds of integrals and derivatives thereof. The argument of these functions is usually interpreted as time, so that the functions themselves can be thought of as paths of random processes. Here, as in other areas of mathematics, going from the discrete to the continuous yields a pay-off in simplicity and smoothness, at the price of more complicated analysis. Compare, to make an analogy, the integral ∫ x^3 dx with the sum Σ_{k=1}^n k^3. The integral requires a more refined analysis for its definition and for the proof of its properties, but once this has been done the integral is easier to calculate. Similarly, in stochastic analysis you will become acquainted with a convenient differential calculus as a reward for some hard work in analysis.

1.1 Applications

Stochastic analysis can be applied in a wide variety of situations. We sketch a few examples below.

1. Some differential equations become more realistic when we allow some randomness in their coefficients. Consider for example the following growth equation, used among other areas in population biology:

    d/dt S_t = (r + N_t) S_t.    (1.1)

Here, S_t is the size of the population at time t, r is the average growth rate of the population, and the noise N_t models random fluctuations in the growth rate.

2. The Langevin equation describes the behaviour of a dust particle suspended in a liquid:

    m d/dt V_t = −η V_t + N_t.    (1.2)

Here, V_t is the velocity at time t of the dust particle, the friction exerted on the particle due to the viscosity η of the liquid is −η V_t, and the noise N_t stands for the disturbance due to the thermal motion of the surrounding liquid molecules colliding with the particle. This equation is fundamental in physics.

3. The path of the dust particle in example 2 is observed with some inaccuracy. One measures the perturbed signal Z_t given by

    Z_t = V_t + Ñ_t.    (1.3)

Here Ñ_t is again a noise.
One is interested in the best guess for the actual value of V_t, given the observations Z_s for s ≤ t. This is called a filtering problem: how to filter away the noise Ñ_t. Kalman and Bucy (1961) found a linear algorithm, which was almost immediately applied in aerospace engineering. Filtering theory is now a flourishing and extremely useful discipline.

4. Stochastic analysis can help solve boundary value problems such as the Dirichlet problem. If the value of a harmonic function f on the boundary of some bounded regular region D ⊂ R^n (n ≥ 1) is known, then the value of f in the interior of D can be expressed as follows:

    E f(B^x_τ) = f(x),    (1.4)

where B^x_t := x + ∫_0^t N_s ds is an integrated noise or Brownian motion, starting at x, and τ denotes the time when this Brownian motion first reaches the boundary. A harmonic function f is a function satisfying Δf = 0, with Δ the Laplacian.

5. Stochastic analysis has found extensive application nowadays in finance. A typical problem is the following. At time t = 0 an investor buys stocks and bonds on the financial market, i.e., he divides his initial capital C_0 into A_0 shares of stock and B_0 shares of bonds. The bonds will yield a guaranteed interest rate r, whereas the stock price S_t is assumed to satisfy the growth equation (1.1). With a keen eye on the market the investor sells stocks to buy bonds and vice versa. Such dealings require no extra investments, and are called self-financing. Let A_t and B_t be the amounts of stocks and bonds held at time t. The total value C_t of stocks and bonds at time t is

    C_t = A_t S_t + B_t e^{rt}.    (1.5)

The assumption that the tradings are self-financing can be expressed as:

    dC_t = A_t dS_t + B_t d(e^{rt}).    (1.6)

An interesting question is now: what would our investor be prepared to pay at time 0 for a so-called European call option, i.e., the right to buy at some later time T a share of stock at a predetermined price K? The rational answer, q say, was found by Black and Scholes (1973) through an analysis of the possible self-financing strategies leading from an initial investment q to the same payoff as the option would give. Their formula is now being used on stock markets all over the world.

The goal of this course is to first make sense of the above equations, and then to work with them.

1.2 What is noise?

In all the above examples the unexplained symbol N_t occurs, which is to be thought of as a completely random function of t, in other words, the continuous-time analogue of a sequence of independent identically distributed random variables.
In a first attempt to catch this concept, it is tempting to try and meet the following requirements:

1. N_t is independent of N_s for t ≠ s.
2. The random variables N_t (t ≥ 0) all have the same probability distribution µ.
3. E(N_t) = 0.

However, these requirements do not produce what we want. We shall show that such a continuous i.i.d. sequence N_t is not measurable in t, unless it is identically zero. Let µ denote the probability distribution of N_t, which by requirement 2 does not depend on t. If N_t is not a sure constant function of t, then there must be a value a ∈ R such that p := P[N_t < a] is neither 0 nor 1. Now consider the set E of time points where the noise is

less than a. It can be shown that with probability 1 the set E is not Lebesgue measurable. Without giving a full proof we can understand this as follows. Let λ denote the Lebesgue measure on R. If E were measurable, then by the independence requirement 1 we would expect its relative share in any interval (c, d) to be p, by a continuous analogue of the law of large numbers:

    λ(E ∩ (c, d)) = p (d − c).    (1.7)

On the other hand, it is known from measure theory that every measurable set B is arbitrarily thick somewhere with respect to λ, i.e., for every α < 1 an interval (c, d) can be found such that

    λ(B ∩ (c, d)) > α (d − c)

(cf. Halmos 1974, Theorem III.16.A: Lebesgue's density theorem). So by (1.7), E is not measurable. This is a bad property of N_t, because in view of (1.1), (1.2), (1.3) and (1.4) we would like to be able to consider integrals of N_t. For this reason, let us approach the problem from a different angle. Instead of N_t itself, let us consider directly the integral of N_t, and give it a name:

    B_t := ∫_0^t N_s ds.

The three requirements on the evasive object N_t then translate into three quite sensible requirements for B_t:

BM1. For any 0 = t_0 ≤ t_1 ≤ ... ≤ t_n the random variables B_{t_{j+1}} − B_{t_j} (j = 0, ..., n−1) are independent.

BM2. B_t has stationary increments, i.e., the joint probability distribution of B_{t_1+s} − B_{u_1+s}, B_{t_2+s} − B_{u_2+s}, ..., B_{t_n+s} − B_{u_n+s} does not depend on s, where t_i > u_i ≥ 0 for i = 1, 2, ..., n are arbitrary.

BM3. E(B_t) = 0 for all t.

We add a normalisation:

BM4. E(B_1^2) = 1.

Still, these four requirements do not determine B_t. For example, the compensated Poisson jump process also satisfies them. Our fifth requirement fixes the process B_t uniquely, as will become clear later on:

BM5. t ↦ B_t is continuous a.s. (with probability 1).

The stochastic process B_t satisfying these requirements is called the Wiener process or Brownian motion. In the next chapter we shall give an explicit construction.
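Before the rigorous construction, the requirements BM1-BM4 can already be checked on the obvious discrete-time approximation, a cumulative sum of independent Gaussian increments of variance Δt. A small simulation sketch (seed, step counts and path counts are our own choices, purely for illustration):

```python
import math
import random

random.seed(7)

def increments(n, dt):
    """n independent Gaussian increments with mean 0 and variance dt."""
    return [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

n, T = 200, 1.0
N = 4000                      # number of simulated paths
first, second = [], []        # B_{1/2} - B_0  and  B_1 - B_{1/2}
for _ in range(N):
    xs = increments(n, T / n)
    first.append(sum(xs[: n // 2]))
    second.append(sum(xs[n // 2:]))

b1 = [f + s for f, s in zip(first, second)]          # B_1
mean = sum(b1) / N                                   # BM3: should be near 0
var = sum(x * x for x in b1) / N                     # BM4: should be near 1
cov = sum(f * s for f, s in zip(first, second)) / N  # BM1: should be near 0

print(round(mean, 2), round(var, 2), round(cov, 2))
```

The estimated mean and the covariance of disjoint increments hover around 0 and the variance at time 1 around 1, as BM1, BM3 and BM4 demand; what this discrete picture cannot settle is BM5, the continuity of a limiting path.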

2 Brownian motion

In this section we shall construct Brownian motion on [0, T] for some T > 0. We follow the original idea of Norbert Wiener in the 1930's. As we shall see, it clearly displays the noise N_t as a random distribution in the sense of Schwartz, rather than a random function.

[Photo: Norbert Wiener.  Fig.: the function e_n (n = 5).]

Consider the orthonormal basis (e_1, e_2, e_3, ...) of the Hilbert space L^2[0, T] given by

    e_n(t) = sqrt(2/T) sin(nπt/T)    (n ≥ 1, 0 ≤ t ≤ T).

For a function f ∈ L^2[0, T] the coefficients on this basis (i.e. the Fourier coefficients) are the weights with which the frequencies n = 1, 2, 3, ... are represented in f. Now, white noise should contain all frequencies with equal weights and in a random way. This leads to the following tentative definition of N_t:

    N_t := Σ_{n=1}^∞ ω_n e_n(t)    (0 ≤ t ≤ T),    (2.1)

where ω_1, ω_2, ω_3, ... is a sequence of independent standard Gaussian random variables. Now, the probability that the sequence (ω_1, ω_2, ω_3, ...) is square summable is 0. So N_t will almost surely not be a function in L^2[0, T]. But we shall be able to make sense of (2.1) in the sense of Schwartz distributions.

2.1 Notations

Let S_m (m ∈ Z) denote the space of all sequences ω = (ω_1, ω_2, ...) ∈ R^N for which

    ‖ω‖_m^2 := Σ_{n=1}^∞ n^{2m} ω_n^2 < ∞.

Note that S_0 is the space l^2(N) of square summable sequences. Let us denote by S the intersection

    S := ∩_{m∈Z} S_m,

and by S' the union

    S' := ∪_{m∈Z} S_m.

S consists of the rapidly decreasing sequences and S' of all the polynomially bounded ones. Under the pairing

    ⟨ω, x⟩ := Σ_{n=1}^∞ ω_n x_n,

the spaces S_m and S_{−m} are each other's dual, and S' is the dual of S. We have the following inclusions:

    S ⊂ ... ⊂ S_2 ⊂ S_1 ⊂ S_0 = l^2 ⊂ S_{−1} ⊂ S_{−2} ⊂ ... ⊂ S'.

Next let P denote the infinite product measure on R^N that makes ω = (ω_1, ω_2, ...) into a sequence of independent standard Gaussian random variables. In a formula:

    P(dω) := Π_{n=1}^∞ (1/sqrt(2π)) e^{−ω_n^2/2} dω_n.

The following lemma says that P-almost surely ω_n increases slower than n, making the square of their quotient summable.

Lemma 2.1  P(S_{−1}) = P{ω : Σ_{n=1}^∞ ω_n^2/n^2 < ∞} = 1.

Proof. For n ∈ N, let M_n : R^N → R denote the random variable

    M_n(ω) := Σ_{j=1}^n ω_j^2/j^2.

Since the ω_j's are standard Gaussian, we have for all n ∈ N,

    E(M_n) = Σ_{j=1}^n 1/j^2 ≤ π^2/6 =: c,

and therefore, for all k ∈ N and n ∈ N, by the Markov inequality,

    P[M_n > k] ≤ (1/k) E(M_n) ≤ c/k.

Since M_n(ω) is increasing in n for all ω, it follows that

    P[M_n ≤ k for all n] = P(∩_{n∈N} [M_n ≤ k]) = lim_{n→∞} P[M_n ≤ k] ≥ 1 − c/k,

and hence

    P(S_{−1}) = P(∪_{k∈N} ∩_{n∈N} [M_n ≤ k]) ≥ lim_{k→∞} (1 − c/k) = 1.
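Lemma 2.1 can be illustrated numerically: truncating the sum at a large n, the sample average of M_n over many drawn sequences stays near E(M_n) = Σ 1/j² ≤ π²/6, instead of blowing up. A sketch (truncation level and sample size are our own choices):

```python
import math
import random

random.seed(1)

def M(n):
    """Truncated sum M_n(omega) = sum_{j<=n} omega_j^2 / j^2 for one sampled omega."""
    return sum(random.gauss(0.0, 1.0) ** 2 / j ** 2 for j in range(1, n + 1))

K, n = 500, 2000
samples = [M(n) for _ in range(K)]
avg = sum(samples) / K

c = math.pi ** 2 / 6          # E M_n increases to sum_j 1/j^2 = pi^2/6
print(round(avg, 2), round(c, 2))
```

Each sampled sequence gives a finite value of the same order as π²/6 ≈ 1.64, which is exactly the almost sure summability the lemma asserts.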

Since S_{−1} ⊂ S', ω also lies in S' with probability 1. Now, S' consists of the sequences of Fourier coefficients of the tempered distributions, and the latter are a good class to work with. So we shall henceforth use S' as our sample space Ω.

The connection with the space of distributions is as follows. Let D[0, T] denote the space of infinitely differentiable functions on [0, T] with compact support inside (0, T). A function f ∈ D[0, T] determines a sequence of real numbers

    ν_n := ⟨e_n, f⟩ := ∫_0^T e_n(t) f(t) dt    (n ∈ N).

For all even m ∈ N, the property that f is m times continuously differentiable implies that ν ∈ S_m, by partial integration:

    ∫_0^T f^{(m)}(t)^2 dt = Σ_{n=1}^∞ ⟨e_n, f^{(m)}⟩^2 = Σ_{n=1}^∞ ⟨e_n^{(m)}, f⟩^2 = Σ_{n=1}^∞ (nπ/T)^{2m} ⟨e_n, f⟩^2 = (π/T)^{2m} ‖ν‖_m^2.

Conversely, ν ∈ S_m implies that f^{(m)} ∈ L^2[0, T], so f is (m−1) times continuously differentiable. Now, a sequence ν^1, ν^2, ... is said to converge to an element ν of S (ν is itself a sequence of numbers!) if it converges in S_m for every m. On the other hand, a sequence f_1, f_2, ... in D[0, T] is said to converge to an f ∈ D[0, T] if for all m ∈ N the sequence f_1^{(m)}, f_2^{(m)}, ... converges uniformly to f^{(m)}. Therefore the topology on S is carried over to D[0, T].

2.2 Definition of Brownian motion

We now define white noise Ñ on [0, T] by

    Ñ : Ω × D[0, T] → R : (ω, f) ↦ Σ_{n=1}^∞ ω_n ⟨e_n, f⟩.

For fixed ω ∈ Ω, the map Ñ(ω) : f ↦ Ñ(ω, f) is a distribution on [0, T]. Hence, with the measure P being defined on Ω, our map Ñ becomes a random distribution. On the other hand, for fixed f ∈ D[0, T] the map

    Ñ_f : ω ↦ Ñ(ω, f)

is a Gaussian random variable with mean zero and variance ‖f‖^2:

    E(Ñ_f) = E(Σ_{i=1}^∞ ω_i ⟨e_i, f⟩) = 0,
    E(Ñ_f^2) = E(Σ_{i=1}^∞ Σ_{j=1}^∞ ω_i ω_j ⟨e_i, f⟩ ⟨e_j, f⟩) = Σ_{i=1}^∞ ⟨e_i, f⟩^2 = ‖f‖^2.

It follows that the map f ↦ Ñ_f can be extended to a Hilbert space isometry L^2[0, T] → L^2(Ω, P). Indeed, if (f_i) is a sequence of functions in D[0, T] such that f_i → f (i → ∞) for some f ∈ L^2[0, T], then the sequence (Ñ_{f_i})_{i=1}^∞ is a Cauchy sequence in L^2(Ω, P):

    ‖Ñ_{f_i} − Ñ_{f_j}‖^2 = E(Ñ_{f_i} − Ñ_{f_j})^2 = E(Ñ_{f_i − f_j}^2) = ‖f_i − f_j‖^2.

So it has a limit, N_f say. Moreover, the map f ↦ N_f is isometric:

    E(N_f^2) = ‖f‖^2    (f ∈ L^2[0, T]).    (2.2)

Hence, again for all f, g ∈ L^2[0, T]:

    E(N_f) = 0,    E(N_f N_g) = ⟨f, g⟩.    (2.3)

However, the extension from Ñ on Ω × D[0, T] to N on Ω × L^2[0, T] has a price: whereas Ñ_f (f ∈ D[0, T]) is a random element of D'[0, T], N_f (f ∈ L^2[0, T]) is not a random element of L^2[0, T]' = L^2[0, T], the reason being that N_f(ω) is only defined for fixed f ∈ L^2[0, T] and almost all ω ∈ Ω. The set of those ω for which N_f(ω) is defined for all f ∈ L^2[0, T] has measure 0.

We employ the extension from Ñ to N in order to define Brownian motion:

Definition. Brownian motion over the time interval [0, T] is the family (B_t)_{t∈[0,T]} of random variables Ω → R given by

    B_t(ω) := N_{1_{[0,t]}}(ω).

We finally show that this definition meets all our requirements.

Proposition 2.2  (B_t) satisfies the conditions BM1-BM4 of Section 1. Moreover:

BM5. There exists a version of (B_t)_{t∈[0,T]} such that t ↦ B_t is continuous.

BM6. With probability 1, t ↦ B_t has infinite variation over every time interval and is therefore nowhere differentiable.

Proof. From the construction of N it is clear that for any f_1, f_2, ..., f_k ∈ L^2[0, T] the random variables N_{f_1}, N_{f_2}, ..., N_{f_k} are jointly Gaussian. A standard result in probability theory ensures that (i) jointly Gaussian random variables are independent as soon as they are uncorrelated, and (ii) their joint probability distribution depends only on their expectations and their correlations.
So the independence of the increments of Brownian motion (BM1) follows from the orthogonality of the functions

    1_{[t_j, t_{j+1})} = 1_{[0, t_{j+1})} − 1_{[0, t_j)}    (j = 0, ..., n−1),

since N maps orthogonal functions f and g to uncorrelated random variables by (2.3). The stationarity of the increments (BM2) follows from the fact that the inner products of the functions 1_{[u_j+s, t_j+s)} (j = 1, ..., n) do not depend on s. Properties BM3 and BM4 are obvious:

    E(B_t) = E(N_{1_{[0,t]}}) = 0,    E(B_t^2) = ‖1_{[0,t]}‖^2 = t.

The proof of properties BM5 and BM6 requires some preparation and is deferred to Sections 2.3-2.4. From (2.3) it follows that for all s, t ≥ 0:

    E(B_s B_t) = ⟨1_{[0,s]}, 1_{[0,t]}⟩ = s ∧ t.    (2.4)
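The covariance (2.4) can be checked directly on the Fourier construction: by Parseval, ⟨1_{[0,s]}, 1_{[0,t]}⟩ = Σ_n ⟨e_n, 1_{[0,s]}⟩ ⟨e_n, 1_{[0,t]}⟩, and the coefficients ⟨e_n, 1_{[0,t]}⟩ = ∫_0^t e_n(s) ds are elementary integrals. A deterministic numerical check (the truncation level is our own choice):

```python
import math

def coeff(n, t, T=1.0):
    """<e_n, 1_[0,t]> = integral_0^t sqrt(2/T) sin(n pi s / T) ds."""
    return math.sqrt(2.0 / T) * (T / (n * math.pi)) * (1.0 - math.cos(n * math.pi * t / T))

s, t = 0.3, 0.7
cov = sum(coeff(n, s) * coeff(n, t) for n in range(1, 5001))
print(round(cov, 4))   # should be close to s ∧ t = 0.3
```

The terms decay like 1/n², so the truncated sum already agrees with s ∧ t to a few decimal places.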

2.3 Continuity of paths and the problem of versions

The continuity of paths t ↦ B_t(ω) (ω ∈ Ω) is a somewhat subtle matter. First, let us make clear that the continuity of the curve t ↦ B_t in L^2(Ω, P) is not at all sufficient to ensure continuity of the paths t ↦ B_t(ω).

Example 2.1. Let Ω = [0, 1] with Lebesgue measure P. Let X_t (t ∈ [0, 1]) denote the process

    X_t(ω) := 1_{[0,t)}(ω) = { 0 if t ≤ ω, 1 if t > ω }.

So X_t(ω), far from being continuous, moves by making a single jump at a random time. However, t ↦ X_t is a continuous curve in L^2(Ω, P), since

    ‖X_t − X_s‖^2 = ∫_s^t du = t − s    (s ≤ t).

Second, let us observe that a description of a curve in L^2(Ω, P), such as t ↦ B_t, can never be sufficient to ensure pathwise continuity. This is shown by the following example.

Example 2.2. Take (Ω, P) as in the previous example. Now let Y_t(ω) be given by

    Y_t(ω) = { 1 if ω + t is irrational, 0 if ω + t is rational }.

Let Z_t(ω) = 1 for all ω ∈ Ω, t ∈ [0, 1]. Clearly, t ↦ Z_t(ω) is constant, hence continuous, for all ω ∈ Ω, whereas t ↦ Y_t(ω) is discontinuous for all ω ∈ Ω. Nevertheless, for any fixed value of t, Y_t and Z_t are almost surely equal, so they correspond to the same curve in L^2(Ω, P).

These observations lead us to the following definition:

Definition. We say that a process (Y_t ∈ L^2(Ω, P))_{t∈[0,T]} is a version of a process (Z_t ∈ L^2(Ω, P))_{t∈[0,T]} if

    ∀ t ∈ [0, T] : Y_t(ω) = Z_t(ω) for almost all ω ∈ Ω.

This said, we proceed to prove that Brownian motion has a version with continuous paths, as claimed in property BM5 in Proposition 2.2.

Proof. For the fourth moment of a centered Gaussian random variable ξ we have E(ξ^4) = 3 (Var ξ)^2. Hence for the process B_t we have

    E(B_t − B_s)^4 = 3 (t − s)^2.

Now fix α ∈ (0, 1/4) and put ε := 1 − 4α > 0. Then, for all s, t ∈ [0, T], we have by the Markov inequality,

    P[|B_t − B_s| ≥ |t − s|^α] ≤ |t − s|^{−4α} E(B_t − B_s)^4 = 3 |t − s|^{2−4α} = 3 |t − s|^{1+ε},

so that for all n ∈ N and k < 2^n,

    P[|B_{(k+1)2^{−n}T} − B_{k2^{−n}T}| ≥ 2^{−nα}] ≤ 3 T^2 (2^{−n})^{1+ε},

and hence

    Σ_{n=1}^∞ Σ_{k=0}^{2^n−1} P[|B_{(k+1)2^{−n}T} − B_{k2^{−n}T}| ≥ 2^{−nα}] ≤ Σ_{n=1}^∞ 2^n · 3 T^2 (2^{−n})^{1+ε} = 3 T^2 Σ_{n=1}^∞ 2^{−nε} = 3 T^2 / (2^ε − 1) < ∞.

By the first Borel-Cantelli lemma it follows that if A_n denotes the event

    A_n := [∃ k < 2^n : |B_{(k+1)2^{−n}T} − B_{k2^{−n}T}| ≥ 2^{−nα}],

then with probability 1 only finitely many A_n's happen. So for almost all ω ∈ Ω there exists M(ω) ∈ N such that A_n does not occur for n ≥ M(ω), i.e., for all n ≥ M(ω) we have

    ∀ k < 2^n : |B_{(k+1)2^{−n}T}(ω) − B_{k2^{−n}T}(ω)| < 2^{−nα}.    (2.5)

Now fix an ω for which the latter holds, and let s and t be two points in the set Q of dyadic rationals of [0, T],

    Q := { k 2^{−n} T : n, k ∈ N, k ≤ 2^n },

such that |s − t| ≤ 2^{−M(ω)} T. Choose n ≥ M(ω) such that 2^{−(n+1)} T ≤ |s − t| < 2^{−n} T. Let k ∈ N be such that k 2^{−n} T has distance less than 2^{−n} T to both s and t. We may then write the dyadic representations

    s = k 2^{−n} T + Σ_{i=1}^l σ_i 2^{−(n+i)} T,
    t = k 2^{−n} T + Σ_{j=1}^m τ_j 2^{−(n+j)} T,

where l, m ∈ N and σ_i, τ_j ∈ {−1, 0, 1} for i = 1, ..., l and j = 1, ..., m. By (2.5) we may now conclude that

    |B_t(ω) − B_s(ω)| < Σ_{i=1}^l 2^{−(n+i)α} + Σ_{j=1}^m 2^{−(n+j)α} ≤ 2 · 2^{−(n+1)α} / (1 − 2^{−α}) ≤ (2 T^{−α} / (1 − 2^{−α})) |s − t|^α.

It follows that the restriction of the function t ↦ B_t(ω) to Q may be extended to a continuous function t ↦ B̃_t(ω) on all of [0, T].

It remains to show that B̃_t is a version of B_t. Choose t ∈ [0, T] and let t_1, t_2, ... be a sequence in Q tending to t. Let φ : R → R be a bounded, continuous and strictly increasing function. Then

    ∫_Ω (φ(B_t(ω)) − φ(B̃_t(ω)))^2 P(dω) = ‖φ(B_t) − φ(B̃_t)‖^2 = lim_{i→∞} ‖φ(B_{t_i}) − φ(B̃_{t_i})‖^2 = 0,

since t ↦ B_t is continuous in L^2(Ω, P) and t ↦ B̃_t is pointwise continuous, and therefore also continuous in L^2(Ω, P) by bounded convergence. Henceforth, when speaking of B_t we shall always mean its continuous version, and we drop the tilde from the notation.

2.4 Roughness of paths

Although the path of Brownian motion is continuous, it is extremely rough. This is expressed by the following basic lemma, which we shall need to prove property BM6 in Proposition 2.2.

Lemma 2.3  Let (0 = t_0^n < t_1^n < ... < t_n^n = T) be a family of partitions of the interval [0, T] that gets arbitrarily fine as n → ∞, in the sense that

    lim_{n→∞} max_{0≤j<n} (t_{j+1}^n − t_j^n) = 0.

Then

    L^2-lim_{n→∞} Σ_{j=0}^{n−1} (B_{t_{j+1}^n} − B_{t_j^n})^2 = T.

Proof. Abbreviate ΔB_j = B_{t_{j+1}^n} − B_{t_j^n} and Δt_j = t_{j+1}^n − t_j^n, and let δ_n = max_j Δt_j. Then

    E[(Σ_j ΔB_j^2 − T)^2] = Σ_{i,j} E(ΔB_i^2 ΔB_j^2) − 2T Σ_j E(ΔB_j^2) + T^2
    = Σ_j E(ΔB_j^4) + Σ_{i≠j} E(ΔB_i^2) E(ΔB_j^2) − 2T Σ_j Δt_j + T^2
    = Σ_j 3 (Δt_j)^2 + Σ_{i≠j} Δt_i Δt_j − T^2
    = 2 Σ_j (Δt_j)^2 ≤ 2 δ_n Σ_j Δt_j = 2 δ_n T,

where again we have used the fact that the fourth moment of a centered Gaussian random variable ξ is given by E(ξ^4) = 3 (Var ξ)^2. As lim_{n→∞} δ_n = 0, the statement follows.

We may write the message of Lemma 2.3 symbolically as

    (dB_t)^2 = dt,    (2.6)

saying that Brownian motion has quadratic variation growing linearly with time. This expression will acquire a precise meaning during the sequel of this course. For the moment, let us just say that B_t has large fluctuations on a small scale, namely dB_t is of order sqrt(dt) ≫ dt. To prove BM6 of Proposition 2.2 we need one more preparatory lemma, which applies to a general sequence of random variables.

Lemma 2.4  Let X_1, X_2, X_3, ... be a sequence of real-valued random variables such that for some p > 0,

    lim_{n→∞} E|X_n|^p = 0.

Then there exists a subsequence (X_{n_k})_{k=1}^∞ tending to 0 almost surely.

Proof. Choose a subsequence such that Σ_{k=1}^∞ E|X_{n_k}|^p < ∞. By Chebyshev's inequality we have, for all m ∈ N,

    P[|X_{n_k}| ≥ 1/m] ≤ m^p E|X_{n_k}|^p,

and therefore, for all m ∈ N,

    Σ_k P[|X_{n_k}| ≥ 1/m] < ∞.

The first Borel-Cantelli lemma now implies that for any m,

    P[|X_{n_k}| ≥ 1/m for only finitely many k] = 1.

By σ-additivity the intersection over m of the events between brackets also has probability 1:

    P[∀ m ∈ N ∃ K ∈ N ∀ k ≥ K : |X_{n_k}| < 1/m] = 1,

i.e., X_{n_k} → 0 almost surely as k → ∞.

We are finally ready to finish our proof of Proposition 2.2:

Proof. By Lemmas 2.3 and 2.4 with p = 2 there exists an increasing sequence (n_k) such that, for almost all ω ∈ Ω,

    lim_{k→∞} Σ_{j=0}^{n_k−1} (B_{t_{j+1}^{n_k}}(ω) − B_{t_j^{n_k}}(ω))^2 = T.

Fix an ω ∈ Ω for which this holds. Let

    ε_{n_k} := max_j |ΔB_j| := max_{0≤j<n_k} |B_{t_{j+1}^{n_k}}(ω) − B_{t_j^{n_k}}(ω)|.

Then lim_{k→∞} ε_{n_k} = 0 by the uniform continuity of t ↦ B_t(ω), which is a continuous function on the compact interval [0, T]. It follows that

    Σ_{j=0}^{n_k−1} |ΔB_j| ≥ (1/ε_{n_k}) Σ_{j=0}^{n_k−1} (ΔB_j)^2 → ∞    as k → ∞,

since the second sum tends to T while ε_{n_k} → 0. This concludes the construction of Brownian motion. In the next sections we shall see that Brownian motion is the building block of stochastic analysis.

The construction of Brownian motion induces a probability measure on the set of all continuous functions [0, T] → R, which is called the Wiener measure. It can be proved that all random variables in L^2(Ω, P) can be represented in a natural way as integrals of products of increments of Brownian motion. This is Wiener's chaos expansion (Wiener, The homogeneous chaos).
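Lemma 2.3 and BM6 are easy to see in simulation: along a sampled discrete path, the quadratic variation Σ_j (ΔB_j)² stays close to T, while the absolute variation Σ_j |ΔB_j| grows like sqrt(n) and so diverges as the partition is refined. A sketch (seed and step count are our own choices):

```python
import math
import random

random.seed(3)

T, n = 1.0, 1 << 14                       # 16384 increments on [0, T]
dt = T / n
db = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

quad_var = sum(x * x for x in db)         # close to T (Lemma 2.3)
abs_var = sum(abs(x) for x in db)         # of order sqrt(2 n T / pi): diverges as n grows

print(round(quad_var, 3), round(abs_var, 1))
```

Doubling n leaves the quadratic variation essentially unchanged but multiplies the absolute variation by roughly sqrt(2), which is the numerical face of infinite variation.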

3 Stochastic integration: the Gaussian case

This section serves as a motivation for the Itô-calculus presented in Sections 4 and 5.

3.1 The stochastic integral of a non-random function

If f_n ∈ L^2[0, T] is a step function, i.e.,

    f_n(t) = Σ_{j=0}^{n−1} c_j^n 1_{[t_j, t_{j+1})}(t)    (0 = t_0 < t_1 < ... < t_n = T),

then from the definition B_t := N_{1_{[0,t]}} it follows that

    N_{f_n} = Σ_{j=0}^{n−1} c_j^n (B_{t_{j+1}} − B_{t_j}),

which is precisely what we would mean by the integral ∫_0^T f_n(t) dB_t. Moreover, if f_n → f in L^2[0, T], then N_{f_n} → N_f in L^2(Ω, P) by the isometric property of N described in Section 2.2. So f ↦ N_f maps L^2[0, T] into the Gaussian random variables on (Ω, P). It is therefore natural to define for all f ∈ L^2[0, T]:

    ∫_0^T f(t) dB_t(ω) := N_f(ω).

3.2 The Ornstein-Uhlenbeck process

In 1908 Langevin proposed an equation in order to describe the motion of a particle suspended in a liquid. He wanted to give a treatment which was more refined and more in harmony with Newtonian mechanics than the above Brownian motion model. Let V_t denote the velocity of the particle at time t. According to Newton's second law the derivative d/dt V_t should be equal to the sum of the forces acting on the particle (see example 2 in Section 1; we take the mass of the particle equal to 1). Langevin wrote

    d/dt V_t = −η V_t + σ N_t,

where the first term on the r.h.s. models the friction due to the viscosity η of the liquid and the second term describes the sum total of the erratic collisions of liquid molecules against the particle, σ being the strength of the noise. In the spirit of the preceding discussion we rewrite the Langevin equation as

    dV_t = −η V_t dt + σ dB_t,    (3.1)

which again is shorthand for the integral equation

    V_t − V_s = −η ∫_s^t V_u du + σ (B_t − B_s)    (3.2)

in L^2(Ω, P). This equation was solved by Ornstein and Uhlenbeck (1930) in the following way. Rewrite (3.1) as

    d(e^{ηt} V_t) = σ e^{ηt} dB_t.

Figure 1: The function f_t.

Integrating we obtain, for 0 ≤ t ≤ T,

    e^{ηt} V_t − V_0 = σ ∫_0^t e^{ηs} dB_s,

so we find

    V_t = e^{−ηt} V_0 + σ ∫_0^t e^{−η(t−s)} dB_s = e^{−ηt} V_0 + σ N_{f_t},    (3.3)

where f_t : s ↦ 1_{[0,t]}(s) exp[−η(t − s)] is shown in Figure 1. Assuming V_0 = 0, we obtain the following first and second moments for the Gaussian random variable V_t (recall (2.2)):

    E(V_t) = 0,
    E(V_t^2) = σ^2 ‖f_t‖^2 = σ^2 ∫_0^t e^{−2η(t−s)} ds = (σ^2/2η)(1 − e^{−2ηt}).

Ignoring for the moment that time stops at T (T is arbitrary and could be taken very large), we take the limit t → ∞:

    lim_{t→∞} E(V_t^2) = σ^2/2η.    (3.4)

This can be read as saying that the particle will eventually attain a certain mean energy. Let us now show that the above informal calculation is indeed correct.

Proposition 3.1  For all V_0 ∈ R the process (V_t)_{t∈[0,T]} defined by (3.3) satisfies the stochastic integral equation (3.2).

Proof. After substitution of (3.3) into (3.2), the latter is found to consist of a deterministic part

    e^{−ηt} V_0 − e^{−ηs} V_0 = −η ∫_s^t e^{−ηu} V_0 du,

which is obviously valid, and a stochastic part

    σ N_{f_t} − σ N_{f_s} = −η σ ∫_s^t N_{f_u} du + σ (B_t − B_s),

which can be obtained by applying the map N to both sides of the following identity in L^2[0, T]:

    f_t − f_s = −η ∫_s^t f_u du + 1_{[s,t]}    (0 ≤ s ≤ t ≤ T).

The proof of this identity is left as an exercise.
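Formula (3.3) gives an exact simulation recipe for the Ornstein-Uhlenbeck process: over a step of length Δ, V_{t+Δ} = e^{−ηΔ} V_t plus an independent Gaussian of variance σ²(1 − e^{−2ηΔ})/(2η). A sketch (all parameter values are our own choices) that checks the equilibrium variance (3.4):

```python
import math
import random

random.seed(11)

eta, sigma = 1.0, 1.0
dt, n_steps, burn = 0.1, 200_000, 1_000

decay = math.exp(-eta * dt)
noise_sd = math.sqrt(sigma ** 2 * (1.0 - decay ** 2) / (2.0 * eta))

v, samples = 0.0, []
for k in range(n_steps):
    v = decay * v + random.gauss(0.0, noise_sd)   # exact one-step transition from (3.3)
    if k >= burn:
        samples.append(v)

var_est = sum(x * x for x in samples) / len(samples)
print(round(var_est, 3))   # compare with sigma^2 / (2 eta) = 0.5 from (3.4)
```

Unlike a naive Euler discretisation, this transition introduces no time-step bias, because the conditional law of V_{t+Δ} given V_t is exactly Gaussian with these parameters.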

19 which can be obtained by applying the map N to both sides of the following identity in L 2 [, T ]: f t f s = η s f u du + 1 [s,t] s t T. The proof of this identity is left as an exercise. 3.3 Some hocus pocus Let us again consider the Ornstein-Uhlenbeck process V t t [,T ] and perform some suggestive formal manipulations. It is tempting to write dv 2 t = 2V t dv t = 2V t ηv t dt + σdb t, which, under the formal assumption that V t and db t are independent quantities, leads to dev 2 t = 2ηEV 2 t dt + 2σEV t EdB t = 2ηEV 2 t dt. However, this result is false since it would imply that EV 2 t tends to as t, contradicting 3.4. What went wrong? How should we change the rules of formal manipulation of differentials? Should we reject the independence assumption V t db t? This is indeed a possibility, and leads to the so-called stochastic calculus of Stratonovic. However, we shall not reject this independence assumption, but instead follow Itô and reconsider the first step dv 2 t = 2V t dv t. Let us take seriously the formula db t 2 = dt in 2.6, implying that dv t 2 = σ 2 dt, and let us expand dv 2 t to second order in dv t : dv 2 t = 2V t dv t + dv t 2 This gives, upon calculation of expectations, = V t + dv t 2 V 2 t = 2V t ηv t dt + σdb t + σ 2 dt. dev 2 t = 2ηEV 2 t dt + σ 2 dt, leading indeed to the correct equilibrium value 3.4. Note the occurrence of the extra term σ 2 dt compared to the previous calculation. This term is the essential ingredient to make ends meet. The above argument is heuristic at this stage, because both B t and V t are functions of infinite variance, and hence nowhere differentiable. Still, we shall see that it makes perfect sense in the right interpretation. This will be clarified in Sections 4 and 5. 18

4 The Itô-integral

[Photo: Kiyosi Itô]

In this section we shall extend the concept of stochastic integration by allowing the integrand f(t, ω) in the integral ∫_0^T f(t, ω) dB_t(ω) to become stochastic as well. However, we shall see that f(t, ω) cannot be completely arbitrary, but has in some way to be fitting to ω ↦ B_t(ω). The construction was pioneered by Kiyosi Itô in the 1940's and leads to what is nowadays called Itô's stochastic calculus.

4.1 Step functions

We define the stochastic integral of a step function of the form

    ϕ(t, ω) = Σ_{j=0}^{n−1} c_j(ω) 1_{[t_j, t_{j+1})}(t)

by

    ∫_0^T ϕ(t, ω) dB_t(ω) := Σ_{j=0}^{n−1} c_j(ω) (B_{t_{j+1}}(ω) − B_{t_j}(ω)).    (4.1)

The next thing to do would be to approximate f by step functions f_n and define ∫ f dB_t to be the limit of ∫ f_n dB_t. But here we meet a difficulty!

Example 4.1. Put f(t, ω) := B_t(ω). Two reasonable approximations of f are ϕ_n and ψ_n given by

    ϕ_n(t, ω) := Σ_{j=0}^{n−1} B_{t_j}(ω) 1_{[t_j, t_{j+1})}(t),
    ψ_n(t, ω) := Σ_{j=0}^{n−1} B_{t_{j+1}}(ω) 1_{[t_j, t_{j+1})}(t),

where t_0, t_1, ..., t_n are defined as in Lemma 2.3 in Section 2.4 and are not ω-dependent. However, from our definition (4.1) we find that

    ∫_0^T ψ_n dB_t − ∫_0^T ϕ_n dB_t = Σ_{j=0}^{n−1} (ΔB_j)^2,

which, according to Lemma 2.3, does not tend to 0 as n → ∞ but to the constant T. In other words, the variation of the path t ↦ B_t is too large for ∫ B_t dB_t to be defined in a casual way.

We now introduce a requirement on our function f for the approximation by step functions f_n to work nicely.

Definition. Let F_t denote the σ-field generated by {B_s : s ≤ t}. Let F := F_T. A stochastic process on the probability space (Ω, F, P) of Brownian motion is a measurable map [0, T] × Ω → R. The process will be called adapted to the family of σ-fields (F_t)_{t∈[0,T]} if ω ↦ f(t, ω) is F_t-measurable for all t ∈ [0, T].

The space L^2(Ω, F_t, P) of square-integrable F_t-measurable functions may be thought of as those functions of ω ∈ Ω that are fully determined by the initial segment [0, t] → R : s ↦ B_s(ω) of the Brownian motion. In other words, g ∈ L^2(Ω, F_t, P) when g(ω) = g(ω') as soon as B_s(ω) = B_s(ω') for all s ∈ [0, t].

Let L^2_B([0, T]) denote the space of all adapted stochastic processes f : [0, T] × Ω → R that are square integrable:

    ‖f‖^2 := ∫_0^T dt ∫_Ω f(t, ω)^2 P(dω) < ∞.

The natural inner product that makes L^2_B([0, T]) into a real Hilbert space is

    ⟨f, g⟩ := ∫_0^T dt ∫_Ω f(t, ω) g(t, ω) P(dω) = ∫_0^T E(f(t, ·) g(t, ·)) dt.

We note that the step functions ϕ_n in the last example are adapted, since ϕ_n(t, ω) = B_{t_j}(ω) for t ∈ [t_j, t_{j+1}), so that ϕ_n(t, ω) only depends on past values of B. On the other hand, ψ_n is not adapted, since at time t ∈ [t_j, t_{j+1}) it already anticipates the Brownian motion at time t_{j+1}: ψ_n(t, ω) = B_{t_{j+1}}(ω).

The next proposition is a crucial property of stochastic integrals of adapted step functions.

Proposition 4.1 (The Itô-isometry)  Let ϕ be a step function in L^2_B([0, T]), and let I(ϕ)(ω) := ∫_0^T ϕ(t, ω) dB_t(ω) be its stochastic integral according to (4.1). Then I is an isometry:

    ‖I(ϕ)‖_{L^2(Ω,P)} = ‖ϕ‖_{L^2_B},    (4.2)

i.e.,

    ∫_Ω P(dω) (∫_0^T ϕ(t, ω) dB_t(ω))^2 = ∫_0^T dt ∫_Ω P(dω) ϕ^2(t, ω).

Proof. By adaptedness, c_i in (4.1) is independent of ΔB_j := B_{t_{j+1}} − B_{t_j} for i ≤ j. Therefore

    ‖I(ϕ)‖^2_{L^2(Ω,P)} = E(Σ_{j=0}^{n−1} c_j ΔB_j)^2
    = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} E(c_i c_j ΔB_i ΔB_j)
    = Σ_{j=0}^{n−1} E(c_j^2 ΔB_j^2) + 2 Σ_{i<j} E(c_i c_j ΔB_i) E(ΔB_j)
    = Σ_{j=0}^{n−1} E(c_j^2) E(ΔB_j^2)
    = Σ_{j=0}^{n−1} E(c_j^2) Δt_j,

where we use that E(ΔB_j) = 0 and E(ΔB_j^2) = Δt_j (recall BM3-BM4 in Section 1). On the other hand,

    ‖ϕ‖^2_{L^2_B([0,T])} = ∫_0^T E[(Σ_{j=0}^{n−1} c_j 1_{[t_j, t_{j+1})}(t))^2] dt
    = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} ∫_0^T 1_{[t_i, t_{i+1})}(t) 1_{[t_j, t_{j+1})}(t) dt · E(c_i c_j)
    = Σ_{j=0}^{n−1} Δt_j E(c_j^2).

The two expressions are the same.

4.2 Arbitrary functions

To go from step functions to arbitrary functions we need the following.

Lemma 4.2  Every function f ∈ L^2_B([0, T]) can be approximated arbitrarily well by step functions in L^2_B([0, T]).

Proof of Lemma 4.2. We divide the proof into three steps of successive approximation.

Step 1. Every bounded pathwise continuous g ∈ L^2_B([0, T]) can be approximated by a sequence of step functions.

Proof. Partition the interval $[0,T]$ into $n$ pieces by times $(t_j)_{j=0}^n$ in the customary way. Define
$$\varphi_n(t,\omega) := \sum_{j=0}^{n-1} g(t_j,\omega)\,1_{[t_j,t_{j+1})}(t).$$
Then, since $t \mapsto g(t,\omega)$ is continuous and $\max_j \Delta t_j \to 0$, we have for all $\omega \in \Omega$
$$\lim_{n\to\infty} \int_0^T \big(g(t,\omega) - \varphi_n(t,\omega)\big)^2\,dt = 0.$$
Hence, by bounded convergence,
$$\lim_{n\to\infty} E\int_0^T \big(g(t,\omega) - \varphi_n(t,\omega)\big)^2\,dt = 0.$$

Step 2. Every bounded $h \in L^2(B,[0,T])$ can be approximated by a sequence of bounded continuous functions in $L^2(B,[0,T])$.

Proof. Suppose $|h| \le M$. For $n \in \mathbb N$, let $\psi_n$ be a non-negative continuous function of the form given in Figure 2, with the properties $\psi_n(x) = 0$ for $x \notin [0, 1/n]$ and $\int \psi_n(x)\,dx = 1$. Define
$$g_n(t,\omega) := \int_0^t \psi_n(t-s)\,h(s,\omega)\,ds.$$
Think of $\psi_n$ as a mollifier of $h$. Then $t \mapsto g_n(t,\omega)$ is continuous for all $\omega$, and $|g_n| \le M$. (Note that $g_n$ is again adapted: since $\psi_n$ puts its mass on $[0,1/n]$, the value $g_n(t,\cdot)$ depends only on $h(s,\cdot)$ for $s \le t$.) Moreover, for all $\omega$,
$$\lim_{n\to\infty} \int_0^T \big(g_n(s,\omega) - h(s,\omega)\big)^2\,ds = 0,$$
since $(\psi_n)$ constitutes an approximate identity. Again, by bounded convergence,
$$\lim_{n\to\infty} E\int_0^T \big(g_n(s,\omega) - h(s,\omega)\big)^2\,ds = 0.$$

Step 3. Every $f \in L^2(B,[0,T])$ can be approximated by bounded functions in $L^2(B,[0,T])$. This is in fact a general property of $L^2$-spaces.

Proof. Let $f \in L^2(B,[0,T])$ and put $h_n(t,\omega) := (-n) \vee \big(f(t,\omega) \wedge n\big)$. Then
$$\|f - h_n\|^2_{L^2(B,[0,T])} \le \int_0^T dt \int_\Omega P(d\omega)\,1_{[n,\infty)}\big(|f(t,\omega)|\big)\,f(t,\omega)^2,$$
which tends to $0$ as $n \to \infty$ by dominated convergence. $\square$
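Proposition 4.1 can be illustrated numerically for the adapted step function $\varphi_n(t) = B_{t_j}$ on $[t_j, t_{j+1})$ from Section 4.1. The following sketch is an illustration, not part of the notes; the grid and sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, paths = 1.0, 50, 20000
dt = T / n

# Brownian increments and paths on the grid t_j = j*T/n
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # B_{t_j}, adapted

# I(phi) = sum_j B_{t_j} (B_{t_{j+1}} - B_{t_j})
I_phi = np.sum(B_left * dB, axis=1)

lhs = np.mean(I_phi ** 2)                        # E[ I(phi)^2 ]
rhs = dt * np.sum(np.mean(B_left ** 2, axis=0))  # E int_0^T phi^2 dt
print(lhs, rhs)
```

Both printed values approximate $\sum_j t_j\,\Delta t_j \approx T^2/2$, in agreement with the isometry (4.2).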

Figure 2: The function $\psi_n$.

On the basis of Proposition 4.1 and Lemma 4.2 we can now define the Itô integral of a function $g \in L^2(B,[0,T])$ as follows. Approximate $g$ by step functions $\varphi_n \in L^2(B,[0,T])$, i.e., $\varphi_n \to g$ in $L^2(B,[0,T])$. Apply $I$ to each of the $\varphi_n$. Since $I$ is an isometry, the sequence $I(\varphi_n)$ has a limit in $L^2(\Omega,P)$. This is what we define to be the Itô integral $I(g)$ of $g$:
$$\int_0^T g(t,\omega)\,dB_t(\omega) := I(g)(\omega) = L^2\text{-}\lim_{n\to\infty} I(\varphi_n)(\omega).$$

Here is an example of a stochastic integral.

Example 4.2. The following identity holds:
$$\int_0^T B_t\,dB_t = \tfrac12 B_T^2 - \tfrac12 T.$$

Proof. We choose an adapted approximation of $B_t(\omega)$, namely the $\varphi_n(t,\omega)$ of the example in Section 4.1. By definition,
$$\int_0^T \varphi_n(t)\,dB_t = \sum_{j=0}^{n-1} B_j\,\Delta B_j,$$
where we use the shorthand notation $B_j := B_{t_j}$ and $\Delta B_j := B_{t_{j+1}} - B_{t_j}$. Note that $B_i = \sum_{j<i} \Delta B_j$. We therefore have
$$B_T^2 = \Big(\sum_j \Delta B_j\Big)^2 = \sum_j (\Delta B_j)^2 + 2\sum_{i<j} \Delta B_i\,\Delta B_j = \sum_j (\Delta B_j)^2 + 2\sum_j B_j\,\Delta B_j = \sum_j (\Delta B_j)^2 + 2\int_0^T \varphi_n(t)\,dB_t.$$
From Lemma 2.3 in Section 2.4 it now follows that
$$I(B) = \int_0^T B_t\,dB_t = \lim_{n\to\infty} \int_0^T \varphi_n(t)\,dB_t = \tfrac12 B_T^2 - \tfrac12 T. \qquad\square$$
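Example 4.2 can be checked by simulation: the left-endpoint sums $\sum_j B_{t_j}\,\Delta B_j$ converge in $L^2$ to $\tfrac12 B_T^2 - \tfrac12 T$. A minimal sketch (discretisation parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 2000, 500
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # B_{t_j}

ito_sum = np.sum(B_left * dB, axis=1)        # left-point (Ito) Riemann sums
closed_form = 0.5 * B[:, -1] ** 2 - 0.5 * T  # (1/2) B_T^2 - (1/2) T

err = np.mean((ito_sum - closed_form) ** 2)  # mean-square error, O(1/n)
print(err)
```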

Note that the integral in the above example is actually different from what it would be for a smooth function $f$ with $f(0) = 0$, namely:
$$\int_0^T f(t)\,df(t) = \tfrac12 f(T)^2.$$
What the example shows is that stochastic integration is ordinary integration except that the diagonal terms must be left out. This will be made precise in Section 5, where we shall encounter a faster way to calculate stochastic integrals.

4.3 Martingales

In Section 4.4 we shall prove that the Itô integral w.r.t. Brownian motion of an adapted square-integrable stochastic process always has a continuous version. For this we shall need the interlude on martingales described in this section. For a general introduction to martingales we refer to the book by D. Williams (1991).

Definition. By the conditional expectation at time $t \in [0,T]$ of a random variable $X \in L^2(\Omega, P)$ we shall mean its orthogonal projection onto $L^2(\Omega, \mathcal F_t, P)$, the space of random variables that are determined by the Brownian motion up to time $t$. We denote this projection by $E(X \mid \mathcal F_t)$, or briefly $E_t X$. In words, $E_t X(\omega)$ is the best estimate, in the sense of least mean square error, that can be made of $X(\omega)$ on the basis of the knowledge of $B_s(\omega)$ for $s \le t$.

Definition. An adapted process $M \in L^2(B,[0,T])$ is called a martingale w.r.t. Brownian motion if
$$E_s M_t = M_s \quad \text{for } 0 \le s \le t \le T.$$
A martingale is a "fair game": the expected value at any time in the future is equal to the current value. Note that Brownian motion itself is a martingale, since for $0 \le s \le t \le T$,
$$E_s B_t = E_s\big(B_s + (B_t - B_s)\big) = B_s + E_s(B_t - B_s) = B_s,$$
because $B_t - B_s$ is independent of, and hence orthogonal to, any function in $L^2(\Omega, \mathcal F_s, P)$.

Theorem 4.3 The stochastic integral of an adapted step function is a martingale with continuous paths.

Proof. This follows directly from the fact that Brownian motion has continuous paths and satisfies the martingale property; use the definition (4.1) in Section 4.1 of the stochastic integral of a step function. $\square$
The following powerful tool will help us prove that the Itô integral of any process in $L^2(B,[0,T])$ possesses a continuous version.

Theorem 4.4 (The Doob martingale inequality) If $M_t$ is a martingale with continuous paths, then for all $p \ge 1$ and $\lambda > 0$,
$$P\Big[\sup_{0 \le t \le T} |M_t| \ge \lambda\Big] \le \frac{1}{\lambda^p}\,E|M_T|^p.$$
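Theorem 4.4 is easy to test by simulation for $M_t = B_t$ and $p = 2$. In the sketch below (sample sizes and the threshold $\lambda$ are arbitrary illustrative choices) the left-hand side is estimated empirically and compared with $E|B_T|^2/\lambda^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, paths = 1.0, 1000, 10000
lam = 1.5
dt = T / n

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)

lhs = np.mean(np.max(np.abs(B), axis=1) >= lam)  # P[ sup |B_t| >= lam ]
rhs = np.mean(B[:, -1] ** 2) / lam ** 2          # E|B_T|^2 / lam^2
print(lhs, rhs)  # lhs should not exceed rhs
```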

Proof. We may assume that $E|M_T|^p < \infty$, otherwise the statement is trivially true. Let $Z_t := |M_t|^p$. Then, since $x \mapsto |x|^p$ is a convex function, Jensen's inequality gives that $Z_t$ is a sub-martingale: for all $0 \le s \le t \le T$,
$$E_s Z_t = E_s |M_t|^p \ge |E_s M_t|^p = |M_s|^p = Z_s.$$
It follows in particular that $E|M_s|^p < \infty$ for all $s \in [0,T]$.

Let us discretise time and first prove a discrete version of the Doob inequality. To that end we fix $n \in \mathbb N$ and put $t_k = kT/n$, $k = 0, 1, \dots, n$. Let $K(\omega)$ denote the smallest value of $k$ for which $Z_{t_k}(\omega) \ge \lambda^p$, if this occurs at all; otherwise, put $K(\omega) = \infty$. Then we may write, since $[K = k] \in \mathcal F_{t_k}$,
$$P\Big[\max_{0 \le k \le n} |M_{t_k}| \ge \lambda\Big] = \sum_{k=0}^n P[K = k] \le \sum_{k=0}^n \frac{1}{\lambda^p}\,E\big(1_{[K=k]}\,Z_{t_k}\big) \le \sum_{k=0}^n \frac{1}{\lambda^p}\,E\big(1_{[K=k]}\,E(Z_T \mid \mathcal F_{t_k})\big) = \sum_{k=0}^n \frac{1}{\lambda^p}\,E\big(1_{[K=k]}\,Z_T\big) \le \frac{1}{\lambda^p}\,E Z_T = \frac{1}{\lambda^p}\,E|M_T|^p.$$
Here, the second inequality uses the sub-martingale property at time $t_k$.

To get the same for continuous time, let $A_n$ denote the event $[\max_{0 \le k \le n} |M_{t_k}| \ge \lambda]$. Then we have $A_1 \subset A_2 \subset A_4 \subset A_8 \subset \cdots$, and so, $t \mapsto M_t$ being continuous,
$$P\Big[\sup_{0 \le t \le T} |M_t| \ge \lambda\Big] = P\Big(\bigcup_{n=0}^\infty A_{2^n}\Big) = \lim_{n\to\infty} P(A_{2^n}) \le \frac{1}{\lambda^p}\,E|M_T|^p. \qquad\square$$

4.4 Continuity of paths

We shall use the martingale inequality of Theorem 4.4 in Section 4.3 to prove the existence of a continuous version for our stochastic integrals. Two stochastic processes $I_t$ and $J_t$ are called versions of each other if $I_t(\omega)$ and $J_t(\omega)$ are equal for all $t \in [0,T]$ and almost all $\omega \in \Omega$ (recall Definition 2.3 in Section 2.3).

Theorem 4.5 Let $f \in L^2(B,[0,T])$. Let $I_t(\omega) = \int_0^t f(s,\omega)\,dB_s(\omega)$. Then there exists a version $J_t$ of $I_t$ such that $t \mapsto J_t(\omega)$ is continuous for almost all $\omega \in \Omega$.

Proof. The point of the proof is to turn continuity in $L^2(\Omega,P)$ into continuity of paths. This requires several estimates.

Let $\varphi_n \in L^2(B,[0,T])$ be an approximation of $f$ by step functions. Put $I_n(t,\omega) = \int_0^t \varphi_n(s,\omega)\,dB_s(\omega)$. By Theorem 4.3 in Section 4.3, $I_n$ is a pathwise continuous martingale for all $n$. The same holds for the differences $I_n - I_m$. Therefore, by Doob's martingale inequality (with $p = 2$ and $\lambda = \varepsilon$) and the Itô isometry, we have
$$P\Big[\sup_{0 \le t \le T} |I_n(t) - I_m(t)| \ge \varepsilon\Big] \le \frac{1}{\varepsilon^2}\,E\big(|I_n(T) - I_m(T)|^2\big) = \frac{1}{\varepsilon^2}\,E\int_0^T \big(\varphi_n(t) - \varphi_m(t)\big)^2\,dt = \frac{1}{\varepsilon^2}\,\|\varphi_n - \varphi_m\|^2_{L^2(B,[0,T])},$$
which tends to $0$ as $n, m \to \infty$ because $(\varphi_n)$ is a Cauchy sequence. We can therefore choose an increasing sequence $n_1, n_2, n_3, \dots$ of natural numbers such that
$$P\Big[\sup_{0 \le t \le T} |I_{n_{k+1}}(t) - I_{n_k}(t)| > 2^{-k}\Big] \le 2^{-k}.$$
By the first Borel–Cantelli lemma it follows that
$$P\Big[\sup_{0 \le t \le T} |I_{n_{k+1}}(t) - I_{n_k}(t)| > 2^{-k} \text{ for infinitely many } k\Big] = 0.$$
Hence for almost all $\omega$ there exists $K(\omega)$ such that for all $k \ge K(\omega)$,
$$\sup_{0 \le t \le T} |I_{n_{k+1}}(t,\omega) - I_{n_k}(t,\omega)| \le 2^{-k},$$
so that for $l > k > K(\omega)$,
$$\sup_{0 \le t \le T} |I_{n_l}(t,\omega) - I_{n_k}(t,\omega)| \le \sum_{j=k}^{l-1} 2^{-j} \le 2^{-k+1}.$$
This implies that $t \mapsto I_{n_k}(t,\omega)$ converges uniformly, as $k \to \infty$, to some function $t \mapsto J(t,\omega)$, which must therefore be continuous. It remains to show that $J(t,\omega)$ is a version of $I(t,\omega)$. This can be done by Fatou's lemma: for all $t \in [0,T]$,
$$\int_\Omega |J(t,\omega) - I(t,\omega)|^2\,P(d\omega) = \int_\Omega \liminf_{k\to\infty} |I_{n_k}(t,\omega) - I(t,\omega)|^2\,P(d\omega) \le \liminf_{k\to\infty} \int_\Omega |I_{n_k}(t,\omega) - I(t,\omega)|^2\,P(d\omega) = 0. \qquad\square$$

From now on we shall always take $\int_0^t f(s,\omega)\,dB_s$ to mean a $t$-continuous version of the integral. We have completed our construction of stochastic integrals. In Sections 5–8 we shall investigate their main properties.

5 Stochastic integrals and the Itô formula

In this chapter we shall treat the Itô formula, a stochastic chain rule that is of great help in the formal manipulation of stochastic integrals.

We say that a process $X_t$ is a stochastic integral if there exist square-integrable adapted processes $U_t, V_t \in L^2(B,[0,T])$ such that, for all $t \in [0,T]$,
$$X_t = X_0 + \int_0^t U_s\,ds + \int_0^t V_s\,dB_s. \tag{5.1}$$
The first integral on the r.h.s. is of finite variation, being pathwise differentiable almost everywhere. The second integral is an Itô integral and therefore a martingale. A decomposition of a process into a martingale and a process of finite variation is called a Doob–Meyer decomposition. Processes in $L^2(B,[0,T])$ that have such a decomposition are called semi-martingales. Equation (5.1) is conveniently rewritten in differential form:
$$dX_t = U_t\,dt + V_t\,dB_t. \tag{5.2}$$

Example 5.1. In Section 4.2 it was shown that the process $B_t^2$ satisfies the equation
$$d(B_t^2) = dt + 2B_t\,dB_t. \tag{5.3}$$

5.1 The one-dimensional Itô formula

Relation (5.3) is an instance of a general chain rule for functions of stochastic integrals that can be stated as follows: the differential must be expanded to second order and every occurrence of $(dB_t)^2$ must be replaced by $dt$. Here is the precise rule.

Theorem 5.1 (Itô formula) Let $X_t$ be a stochastic integral. Let $g : [0,\infty) \times \mathbb R \to \mathbb R$ be twice continuously differentiable. Then the process $Y_t := g(t, X_t)$ satisfies
$$dY_t = \frac{\partial g}{\partial t}(t, X_t)\,dt + \frac{\partial g}{\partial x}(t, X_t)\,dX_t + \tfrac12\,\frac{\partial^2 g}{\partial x^2}(t, X_t)\,(dX_t)^2, \tag{5.4}$$
where $(dX_t)^2$ is to be evaluated according to the multiplication table:
$$(dt)^2 = 0, \qquad dt\,dB_t = 0, \qquad (dB_t)^2 = dt,$$
i.e., with the Doob–Meyer decomposition (5.1):
$$dX_t = U_t\,dt + V_t\,dB_t \implies (dX_t)^2 = V_t^2\,dt.$$

In terms of the explicit form (5.2) of $X_t$, we can write (5.4) as $dY_t = U'_t\,dt + V'_t\,dB_t$ with
$$U'_t = \frac{\partial g}{\partial t}(t, X_t) + \frac{\partial g}{\partial x}(t, X_t)\,U_t + \tfrac12\,\frac{\partial^2 g}{\partial x^2}(t, X_t)\,V_t^2, \qquad V'_t = \frac{\partial g}{\partial x}(t, X_t)\,V_t,$$
which in turn stands for
$$Y_T = Y_0 + \int_0^T U'_s\,ds + \int_0^T V'_s\,dB_s.$$
The third term in the r.h.s. of (5.4) is called the Itô correction.

We shall prove Theorem 5.1 via the following extension of Lemma 2.3 in Section 2.4.

Lemma 5.2 If $A_t$ is a process in $L^2(B,[0,T])$, then
$$\sum_{j=0}^{n-1} A_{t_j}\,(\Delta B_j)^2 \to \int_0^T A_t\,dt \quad \text{in } L^2(\Omega,P) \text{ as } n \to \infty.$$

Proof. We leave this as an exercise.

We now give the proof of Theorem 5.1. It will be a bit sketchy, but the details are easily filled in.

Proof. We shall use the by now standard notation $t_j = t_j^{(n)}$, $\Delta t_j = t_{j+1}^{(n)} - t_j^{(n)}$ and $\Delta B_j = B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}}$, where the points $0 = t_0^{(n)} < t_1^{(n)} < \dots < t_n^{(n)} = T$, $n = 1, 2, 3, \dots$, form a sequence of partitions of $[0,T]$ whose meshes $\delta_n = \max_{0 \le j < n} \Delta t_j^{(n)}$ tend to zero as $n \to \infty$. We shall generally write $X_j$ for $X_{t_j}$.

1. First, we may assume that $g$, $\frac{\partial g}{\partial t}$, $\frac{\partial g}{\partial x}$, $\frac{\partial^2 g}{\partial t^2}$, $\frac{\partial^2 g}{\partial x^2}$ and $\frac{\partial^2 g}{\partial x\,\partial t}$ are all bounded. If they are not, then we stop the process as soon as the absolute value of one of them reaches the value $N$, and afterwards take the limit $N \to \infty$.

2. Next, using Taylor's theorem we obtain
$$g(T, X_T) - g(0, X_0) = \sum_j \big(g(t_{j+1}, X_{j+1}) - g(t_j, X_j)\big) = \sum_j \frac{\partial g}{\partial t}(t_j, X_j)\,\Delta t_j + \sum_j \frac{\partial g}{\partial x}(t_j, X_j)\,\Delta X_j + \tfrac12 \sum_j \frac{\partial^2 g}{\partial t^2}(t_j, X_j)\,(\Delta t_j)^2 + \sum_j \frac{\partial^2 g}{\partial t\,\partial x}(t_j, X_j)\,\Delta t_j\,\Delta X_j + \tfrac12 \sum_j \frac{\partial^2 g}{\partial x^2}(t_j, X_j)\,(\Delta X_j)^2 + \sum_j \varepsilon_j,$$
where $\varepsilon_j = o\big((\Delta t_j)^2 + (\Delta X_j)^2\big)$ for all $j$.

3. The first two terms converge because $\delta_n \to 0$:
$$\sum_j \frac{\partial g}{\partial t}(t_j, X_j)\,\Delta t_j \to \int_0^T \frac{\partial g}{\partial t}(t, X_t)\,dt,$$
$$\sum_j \frac{\partial g}{\partial x}(t_j, X_j)\,\Delta X_j = \sum_j \frac{\partial g}{\partial x}(t_j, X_j)\,\big(U_j\,\Delta t_j + V_j\,\Delta B_j\big) + o(1) \to \int_0^T \frac{\partial g}{\partial x}(t, X_t)\,U_t\,dt + \int_0^T \frac{\partial g}{\partial x}(t, X_t)\,V_t\,dB_t.$$

4. The third and the fourth term tend to zero. For instance, if in the fourth term we substitute $\Delta X_j = U_j\,\Delta t_j + V_j\,\Delta B_j$, then a term
$$\sum_j \frac{\partial^2 g}{\partial t\,\partial x}(t_j, X_j)\,V_j\,\Delta t_j\,\Delta B_j =: \sum_j c_j\,\Delta t_j\,\Delta B_j \tag{5.5}$$
arises. But, because $c_j$ is $\mathcal F_{t_j}$-measurable and $|c_j| \le M$ for all $j$, it follows that (5.5) tends to zero because
$$E\Big(\sum_j c_j\,\Delta t_j\,\Delta B_j\Big)^2 = \sum_j E(c_j^2)\,(\Delta t_j)^3 \to 0.$$

5. The fifth term again converges:
$$\tfrac12 \sum_j \frac{\partial^2 g}{\partial x^2}(t_j, X_j)\,(\Delta X_j)^2 = \tfrac12 \sum_j \frac{\partial^2 g}{\partial x^2}(t_j, X_j)\,\big(U_j\,\Delta t_j + V_j\,\Delta B_j\big)^2 + o(1) = \tfrac12 \sum_j \frac{\partial^2 g}{\partial x^2}(t_j, X_j)\,\big(U_j^2\,(\Delta t_j)^2 + 2U_j V_j\,\Delta t_j\,\Delta B_j + V_j^2\,(\Delta B_j)^2\big) + o(1) \to \tfrac12 \int_0^T \frac{\partial^2 g}{\partial x^2}(t, X_t)\,V_t^2\,dt$$
by Lemma 5.2 (recall the multiplication table in Theorem 5.1). $\square$
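The statement of Theorem 5.1 can be verified numerically for a concrete $g$, say $g(t,x) = x^3$ with $X_t = B_t$, for which the formula gives $d(B_t^3) = 3B_t^2\,dB_t + 3B_t\,dt$. The following sketch (discretisation choices are illustrative) checks that the discrepancy between $B_T^3$ and the discretised right-hand side vanishes in mean square:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, paths = 1.0, 4000, 200
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # B_{t_j}

# Ito formula for g(x) = x^3: B_T^3 = int 3B^2 dB + int 3B dt
rhs = np.sum(3 * B_left ** 2 * dB + 3 * B_left * dt, axis=1)
lhs = B[:, -1] ** 3

err = np.mean((lhs - rhs) ** 2)  # mean-square error, shrinks as n grows
print(err)
```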

5.2 Some examples

Example 5.2. We can now generalise (5.3) as follows: use Theorem 5.1 with $g(t,x) = x^n$, $U_t \equiv 0$, $V_t \equiv 1$:
$$d(B_t^n) = n B_t^{n-1}\,dB_t + \tfrac12\,n(n-1)\,B_t^{n-2}\,dt. \tag{5.6}$$

Example 5.3. More generally, we have for every twice continuously differentiable function $f : \mathbb R \to \mathbb R$,
$$df(B_t) = f'(B_t)\,dB_t + \tfrac12 f''(B_t)\,dt. \tag{5.7}$$

Example 5.4. With the help of Itô's formula (5.6) it is possible to quickly calculate an integral like $\int_0^T B_t^2\,dB_t$, in much the same way as ordinary integrals. We make a guess for the indefinite integral, calculate its derivative, and where needed we apply a correction. In the case at hand we would guess the integral to be something like $B_t^3$, so we calculate:
$$d(B_t^3) = 3B_t^2\,dB_t + 3B_t\,(dB_t)^2 = 3B_t^2\,dB_t + 3B_t\,dt \implies B_t^2\,dB_t = \tfrac13\,d(B_t^3) - B_t\,dt \implies \int_0^T B_t^2\,dB_t = \tfrac13 B_T^3 - \int_0^T B_t\,dt.$$

Example 5.5. In the same way it is found that
$$\int_0^T \sin B_t\,dB_t = 1 - \cos B_T - \tfrac12 \int_0^T \cos B_t\,dt.$$

Example 5.6. Let $f$ be differentiable. We can rewrite $N(f) = \int_0^T f(t)\,dB_t$ as follows; use Theorem 5.1 with $g(t,x) = f(t)\,x$, $U_t \equiv 0$, $V_t \equiv 1$:
$$d\big(f(t)B_t\big) = f'(t)\,B_t\,dt + f(t)\,dB_t \implies \int_0^T f(t)\,dB_t = f(T)\,B_T - f(0)\,B_0 - \int_0^T f'(t)\,B_t\,dt.$$
This is a partial integration formula for the integration of functions with respect to Brownian motion.

Example 5.7. Let us solve the stochastic differential equation
$$dX_t = \beta X_t\,dB_t,$$

which is a special case of the growth equation in Example 1 in Section 1. We try $Y_t = \exp(\beta B_t)$, and obtain with the help of the Itô formula that
$$dY_t = \beta Y_t\,dB_t + \tfrac12 \beta^2 Y_t\,dt.$$
This is obviously growing too fast: the second term in the r.h.s., which is the Itô correction, must be compensated. We therefore try next $X_t = \exp(\alpha t)\,Y_t$, finding
$$dX_t = \alpha e^{\alpha t} Y_t\,dt + e^{\alpha t}\,dY_t = \alpha X_t\,dt + \beta X_t\,dB_t + \tfrac12 \beta^2 X_t\,dt.$$
The $dt$ terms cancel if $\alpha = -\tfrac12 \beta^2$, and we find the solution
$$X_t = e^{\beta B_t - \frac12 \beta^2 t}.$$
This process is called the exponential martingale.

5.3 The multi-dimensional Itô formula

Theorem 5.3 Let $B(t,\omega) = \big(B_1(t,\omega), B_2(t,\omega), \dots, B_m(t,\omega)\big)$ be Brownian motion in $\mathbb R^m$, consisting of $m$ independent copies of Brownian motion. Let $X(t,\omega) = \big(X_1(t,\omega), \dots, X_n(t,\omega)\big)$ be the stochastic integral given by
$$dX_i(t) = U_i(t)\,dt + \sum_{j=1}^m V_{ij}(t)\,dB_j(t) \qquad (i = 1, \dots, n), \tag{5.8}$$
for some processes $U_i(t)$ and $V_{ij}(t)$ in $L^2(B,[0,T])$. Abbreviate (5.8) in the vector notation $dX = U\,dt + V\,dB$. Let $g : [0,T] \times \mathbb R^n \to \mathbb R^p$ be a $C^2$-function. Then the process $Y(t,\omega) := g\big(t, X_1(t), \dots, X_n(t)\big)$ satisfies the stochastic differential equation
$$dY_i(t) = \frac{\partial g_i}{\partial t}(t, X_t)\,dt + \sum_{j=1}^n \frac{\partial g_i}{\partial x_j}(t, X_t)\,dX_j(t) + \tfrac12 \sum_{j,k=1}^n \frac{\partial^2 g_i}{\partial x_j\,\partial x_k}(t, X_t)\,dX_j(t)\,dX_k(t) \tag{5.9}$$
$(i = 1, \dots, p)$, where the product $dX_j\,dX_k$ has to be evaluated according to the rules
$$dB_j\,dB_k = \delta_{jk}\,dt, \qquad dB_j\,dt = (dt)^2 = 0.$$
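The martingale property of the solution of Example 5.7, $X_t = e^{\beta B_t - \frac12 \beta^2 t}$, means in particular that $E X_t = X_0 = 1$ for all $t$; this is easy to confirm by simulation (the parameter values below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n, paths, beta = 1.0, 200, 20000, 0.8
dt = T / n

B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)
t = dt * np.arange(1, n + 1)

# exponential martingale X_t = exp(beta B_t - beta^2 t / 2)
X = np.exp(beta * B - 0.5 * beta ** 2 * t)

means = X.mean(axis=0)  # E[X_t] should stay close to 1 for every t
print(means[0], means[-1])
```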

Equation (5.9) is the multi-dimensional version of Itô's formula, and can be proved in the same way as its one-dimensional counterpart. Be careful to keep track of all the indices. The easiest case is $m = n$, $p = 1$.

Example 5.8 (Bessel process). Let $R_t(\omega) = \|B_t(\omega)\|$, where $B_t$ is $m$-dimensional Brownian motion and $\|\cdot\|$ is the Euclidean norm. Apply the Itô formula to the function $r : \mathbb R^m \to \mathbb R_+ : x \mapsto \|x\|$. We compute
$$\frac{\partial r}{\partial x_i} = \frac{x_i}{r} \qquad\text{and}\qquad \frac{\partial^2 r}{\partial x_i^2} = \frac{1}{r} - \frac{x_i^2}{r^3}.$$
So we find
$$dR = \sum_{j=1}^m \frac{B_j}{R}\,dB_j + \tfrac12 \sum_{j=1}^m \Big(\frac{1}{R} - \frac{B_j^2}{R^3}\Big)\,dt = \sum_{j=1}^m \frac{B_j}{R}\,dB_j + \frac{m-1}{2R}\,dt.$$
For notational convenience we have dropped here the arguments $t$ and $\omega$.

The next theorem gives a way to construct martingales out of Brownian motion. A function $f$ on $\mathbb R^m$ is called harmonic if $\Delta f = 0$, where $\Delta = \sum_{i=1}^m \frac{\partial^2}{\partial x_i^2}$ denotes the Laplace operator.

Theorem 5.4 If $f : \mathbb R^m \to \mathbb R$ is harmonic, then $f(B_t)$ is a martingale.

Proof. Write out
$$df(B_t) = \sum_{i=1}^m \frac{\partial f}{\partial x_i}(B_t)\,dB_i(t) + \tfrac12 \sum_{i=1}^m \sum_{j=1}^m \frac{\partial^2 f}{\partial x_i\,\partial x_j}(B_t)\,dB_i(t)\,dB_j(t).$$
The second part in the r.h.s. is zero because $dB_i(t)\,dB_j(t) = \delta_{ij}\,dt$ and $\Delta f = 0$. Integration yields
$$f(B_t) - f(B_0) = \sum_{i=1}^m \int_0^t \frac{\partial f}{\partial x_i}(B_s)\,dB_i(s).$$
This is an Itô integral and hence a martingale. $\square$

We can understand Theorem 5.4 intuitively as follows. A harmonic function $f$ has the property that its value in a point $x$ is the average over its values on any sphere around $x$. This property, together with the fact that multi-dimensional Brownian motion is isotropic, explains why $f(B_t)$ is a "fair game".

The following technical extension of Itô's formula (5.7) will be useful later on.

Figure 3: The graph of $f''_k$ approximating $f''$ having a jump.
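Theorem 5.4 can be illustrated with the harmonic function $f(x, y) = x^2 - y^2$ in $\mathbb R^2$: along a two-dimensional Brownian motion started at $x_0$, the mean $E f(B_t)$ should stay equal to $f(x_0)$. A Monte Carlo sketch (the starting point and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
T, n, paths = 1.0, 200, 10000
dt = T / n
x0 = np.array([1.0, 2.0])

# two independent Brownian components started at x0
W = x0 + np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n, 2)), axis=1)

f = W[:, :, 0] ** 2 - W[:, :, 1] ** 2  # f(x, y) = x^2 - y^2 is harmonic
means = f.mean(axis=0)                  # should stay near f(x0) = 1 - 4 = -3
print(means[0], means[-1])
```

By contrast, a non-harmonic choice such as $f(x, y) = x^2 + y^2$ would drift (its mean grows like $2t$, by the Itô correction).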

Lemma 5.5 (Itô's formula for a function of Brownian motion) The formula
$$f(B_t) = f(B_0) + \int_0^t f'(B_s)\,dB_s + \tfrac12 \int_0^t f''(B_s)\,ds \tag{5.10}$$
still holds if $f : \mathbb R \to \mathbb R$ is $C^1$ everywhere and $C^2$ outside a finite set $\{z_1, \dots, z_N\}$, with $f''$ bounded on some neighbourhood of this set.

Proof. Take $f_k \in C^2(\mathbb R)$ such that $f_k \to f$ and $f'_k \to f'$ as $k \to \infty$, both uniformly, and such that for $x \notin \{z_1, \dots, z_N\}$:
$$f''_k(x) \to f''(x) \text{ as } k \to \infty, \qquad |f''_k(x)| \le M \text{ in a neighbourhood of } \{z_1, \dots, z_N\}.$$
Fig. 3 shows the graph of $f''_k$ approximating some $f''$ with a jump. For $f_k$ we have the Itô formula
$$f_k(B_t) = f_k(B_0) + \int_0^t f'_k(B_s)\,dB_s + \tfrac12 \int_0^t f''_k(B_s)\,ds.$$
In the limit as $k \to \infty$ this equality tends, term by term in $L^2(\Omega,P)$, to (5.10). Indeed, the distances $\|f_k(B_t) - f(B_t)\|$ and $\|f_k(B_0) - f(B_0)\|$ tend to $0$ by the uniformity of the convergence $f_k \to f$; the norm difference $\|\int_0^t f'_k(B_s)\,dB_s - \int_0^t f'(B_s)\,dB_s\|$ tends to $0$ by the uniformity of the convergence $f'_k \to f'$ plus Itô's isometry; and the difference
$$E\Big(\int_0^t f''_k(B_s)\,ds - \int_0^t f''(B_s)\,ds\Big)^2 \le t \int_0^t \int_\Omega \big(f''_k(B_s(\omega)) - f''(B_s(\omega))\big)^2\,P(d\omega)\,ds$$
tends to $0$ by dominated convergence, combined with the fact that
$$(P \otimes \lambda)\big\{(\omega, s) \in \Omega \times [0,t] : B_s(\omega) \in \{z_1, \dots, z_N\}\big\} = 0. \qquad\square$$

5.4 Local times of Brownian motion

As an application of Lemma 5.5 we shall prove Tanaka's formula for the local times of Brownian motion.

Theorem 5.6 (Tanaka) Let $t \mapsto B_t$ be a one-dimensional Brownian motion. Let $\lambda$ denote the Lebesgue measure on $[0,T]$. Then the limit
$$L_t := \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon}\,\lambda\big\{s \in [0,t] : B_s \in (-\varepsilon, \varepsilon)\big\}$$
exists in $L^2(\Omega,P)$ and is equal to
$$L_t = |B_t| - |B_0| - \int_0^t \operatorname{sgn}(B_s)\,dB_s.$$
Think of $L_t$ as the density per unit length of the total time spent close to the origin up to time $t$.

Figure 4: The function $g_\varepsilon$.

Proof. For $\varepsilon > 0$, consider the function
$$g_\varepsilon(x) = \begin{cases} |x| & \text{if } |x| \ge \varepsilon, \\ \tfrac12\big(\varepsilon + x^2/\varepsilon\big) & \text{if } |x| < \varepsilon, \end{cases}$$
as shown in Fig. 4. Then $g_\varepsilon$ is $C^2$, except in the points $\{-\varepsilon, \varepsilon\}$, and it is $C^1$ everywhere on $\mathbb R$. Apply Lemma 5.5 to get
$$\tfrac12 \int_0^t g''_\varepsilon(B_s)\,ds = g_\varepsilon(B_t) - g_\varepsilon(B_0) - \int_0^t g'_\varepsilon(B_s)\,dB_s, \tag{5.11}$$
where $g'_\varepsilon$ and $g''_\varepsilon$ are given by
$$g'_\varepsilon(x) = \begin{cases} \operatorname{sgn}(x) & \text{if } |x| \ge \varepsilon, \\ x/\varepsilon & \text{if } |x| < \varepsilon, \end{cases} \qquad\text{and}\qquad g''_\varepsilon(x) = \tfrac{1}{\varepsilon}\,1_{(-\varepsilon,\varepsilon)}(x) \quad \text{if } x \notin \{-\varepsilon, \varepsilon\}.$$
Now, the limit as $\varepsilon \downarrow 0$ of the l.h.s. of (5.11) is precisely $L_t$. Moreover, we trivially have $g_\varepsilon(B_t) \to |B_t|$ and $g_\varepsilon(B_0) \to |B_0|$ as $\varepsilon \downarrow 0$. Hence it suffices to prove that the integral in the r.h.s. of (5.11) converges to the appropriate limit:
$$\int_0^t g'_\varepsilon(B_s)\,dB_s \to \int_0^t \operatorname{sgn}(B_s)\,dB_s \quad \text{in } L^2(\Omega,P).$$
To see why the latter is true, estimate
$$E\Big(\int_0^t \big(g'_\varepsilon(B_s) - \operatorname{sgn}(B_s)\big)\,dB_s\Big)^2 = E\int_0^t \big(g'_\varepsilon(B_s) - \operatorname{sgn}(B_s)\big)^2\,ds \le \int_0^t P\big(B_s \in (-\varepsilon, \varepsilon)\big)\,ds \to 0 \quad \text{as } \varepsilon \downarrow 0,$$

where in the equality we use the Itô isometry, in the inequality we use that $|B_s| < \varepsilon$ implies $|\varepsilon^{-1}B_s - \operatorname{sgn}(B_s)| \le 1$, and the last statement holds because the Gaussian random variable $B_s$ ($s > 0$) has finite density. It follows that $L_t$ exists and can be expressed as in the statement of the theorem. $\square$

Note that for smooth functions $f$,
$$|f(t)| - |f(0)| - \int_0^t \operatorname{sgn}\big(f(s)\big)\,f'(s)\,ds = 0,$$
because $\frac{d}{dt}|f(t)| = \operatorname{sgn}(f(t))\,f'(t)$. Thus, the local time is an Itô correction to this relation, caused by the fact that $d|B_t| \ne \operatorname{sgn}(B_t)\,dB_t$: if $B_t$ passes the origin during the time interval $\Delta t_j$, then $|B_{t_{j+1}}| - |B_{t_j}|$ need not be equal to $\operatorname{sgn}(B_{t_j})\,\Delta B_j$. The difference is a measure of the time spent close to the origin.

The existence of the local times of Brownian motion was proved by Lévy in the 1930s using hard estimates. The above approach is shorter and more elegant.

What is described above is the local time at the origin: $L_t = L_t(0)$. In a completely analogous way one can prove the existence of the local time $L_t(x)$ at any site $x \in \mathbb R$. The process plays a key role in many applications associated with Brownian motion. For instance, it is used to define path measures that model the behaviour of polymer chains. Let $P_T$ denote the Wiener measure on the time interval $[0,T]$. Fix a number $\beta > 0$, and define a new measure $Q_T^\beta$ on path space by setting
$$\frac{dQ_T^\beta}{dP_T} = \frac{1}{Z_T^\beta}\,\exp\Big[-\beta \int_{\mathbb R} L_T^2(x)\,dx\Big],$$
where $Z_T^\beta$ is the normalising constant. What the measure $Q_T^\beta$ does is reward paths that spread themselves out compared to Brownian motion. We may think of $P_T$ as modeling the erratic spatial distribution of the polymer and of the exponential weight factor as modeling its stiffness or self-repellence. The parameter $\beta$ is the strength of the self-repellence. It is known that
$$\lim_{T\to\infty} Q_T^\beta\Big(\frac{X_T}{T} \in [\vartheta(\beta) - \varepsilon,\, \vartheta(\beta) + \varepsilon]\Big) = 1 \qquad \forall\,\varepsilon > 0$$
for some constant $\vartheta(\beta)$, called the asymptotic speed of the polymer, given by $\vartheta(\beta) = c\,\beta^{1/3}$ for a constant $c$. See R. van der Hofstad, F. den Hollander and W. König (1997). The Brownian motion corresponds to $\beta = 0$ and has zero speed.
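Both descriptions of the local time in Theorem 5.6 (the occupation-time limit and the Tanaka formula) can be compared in simulation. The sketch below is illustrative only: the smoothing width $\varepsilon$ and the discretisation are arbitrary, and the two estimators agree only up to errors of order $\sqrt{\varepsilon}$ and the grid size:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n, paths, eps = 1.0, 10000, 500, 0.05
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])

# occupation-time approximation (1/2eps) * lambda{ s <= T : |B_s| < eps }
occ = dt * np.sum(np.abs(B_left) < eps, axis=1) / (2 * eps)

# Tanaka formula: L_T = |B_T| - int_0^T sgn(B_s) dB_s   (B_0 = 0)
tanaka = np.abs(B[:, -1]) - np.sum(np.sign(B_left) * dB, axis=1)

err = np.mean(np.abs(occ - tanaka))
print(occ.mean(), tanaka.mean(), err)
```

Both sample means approximate $E\,L_T = E|B_T| = \sqrt{2T/\pi} \approx 0.80$ for $T = 1$.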

6 Stochastic differential equations

A stochastic differential equation for a process $X(t)$ with values in $\mathbb R^n$ is an equation of the form
$$dX_i(t) = b_i(t, X_t)\,dt + \sum_{j=1}^m \sigma_{ij}(t, X_t)\,dB_j(t) \qquad (i = 1, \dots, n). \tag{6.1}$$
Here, $B(t) = \big(B_1(t), B_2(t), \dots, B_m(t)\big)$ is an $m$-dimensional Brownian motion, i.e., an $m$-tuple of independent Brownian motions on $\mathbb R$. The functions $b_i$ and $\sigma_{ij}$ from $\mathbb R_+ \times \mathbb R^n$ to $\mathbb R$, with $i = 1, \dots, n$ and $j = 1, \dots, m$, form a field $b$ of $n$-vectors and a field $\sigma$ of $n \times m$-matrices. A process $X \in L^2(B,[0,T])$ for which (6.1) holds is called a strong solution of the equation. In more pictorial language, such a solution is called an Itô diffusion with drift $b$ and diffusion matrix $\sigma\sigma^T$.

In this section we shall formulate a result on the existence and the uniqueness of Itô diffusions. It will be convenient to employ the following notation for the norms on vectors and matrices:
$$\|x\|^2 := \sum_{i=1}^n x_i^2 \quad (x \in \mathbb R^n); \qquad \|\sigma\|^2 := \sum_{i=1}^n \sum_{j=1}^m \sigma_{ij}^2 = \operatorname{tr}(\sigma\sigma^T) \quad (\sigma \in \mathbb R^{n \times m}).$$
Also, we would like to take into account an initial condition $X_0 = Z$, where $Z$ is an $\mathbb R^n$-valued random variable independent of the Brownian motion. All in all, we enlarge our probability space and our space of adapted processes as follows. We choose a probability measure $\mu$ on $\mathbb R^n$ and put
$$\Omega' := \mathbb R^n \times \Omega^m; \qquad \mathcal F'_t := \mathcal B(\mathbb R^n) \otimes \mathcal F_t^{\otimes m} \quad (t \in [0,T]); \qquad P' := \mu \otimes P^{\otimes m}; \qquad Z : \Omega' \to \mathbb R^n : (z, \omega) \mapsto z,$$
and we let $L^2(B,[0,T])'$ consist of the square-integrable processes $X$ on $\Omega'$ such that $\omega' \mapsto X_{t,i}(\omega')$ is $\mathcal F'_t$-measurable. This change of notation being understood, we shall drop the primes again.

6.1 Strong solutions

Theorem 6.1 Fix $T > 0$. Let $b : [0,T] \times \mathbb R^n \to \mathbb R^n$ and $\sigma : [0,T] \times \mathbb R^n \to \mathbb R^{n \times m}$ be measurable functions, satisfying the growth conditions
$$\|b(t,x)\| \vee \|\sigma(t,x)\| \le C\,(1 + \|x\|) \qquad (t \in [0,T],\ x \in \mathbb R^n),$$
as well as the Lipschitz conditions
$$\|b(t,x) - b(t,y)\| \vee \|\sigma(t,x) - \sigma(t,y)\| \le D\,\|x - y\| \qquad (t \in [0,T],\ x, y \in \mathbb R^n),$$
for some positive constants $C$ and $D$. Let $(B_t)_{t \in [0,T]}$ be $m$-dimensional Brownian motion and let $\mu$ be a probability measure on $\mathbb R^n$ with $\int_{\mathbb R^n} \|x\|^2\,\mu(dx) < \infty$.
Then the stochastic differential equation (6.1) has a unique continuous adapted solution $(X_t)_{t \in [0,T]}$, given that the law of $X_0$ is equal to $\mu$.
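Before turning to the proof, the content of Theorem 6.1 can be made concrete numerically. The notes construct the solution by Picard iteration; in practice one usually discretises instead, e.g. with the Euler–Maruyama scheme (a standard method, not discussed in the notes). A sketch for the Ornstein–Uhlenbeck equation $dX_t = -\theta X_t\,dt + \sigma\,dB_t$, whose coefficients satisfy the growth and Lipschitz conditions; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, sigma, T, n, paths = 1.0, 0.5, 1.0, 1000, 20000
dt = T / n

X = np.zeros(paths)  # X_0 = 0
for _ in range(n):
    dB = rng.normal(0.0, np.sqrt(dt), size=paths)
    X = X + (-theta * X) * dt + sigma * dB  # Euler-Maruyama step

# variance of the exact OU solution at time T, for comparison
var_exact = sigma ** 2 / (2 * theta) * (1 - np.exp(-2 * theta * T))
print(X.var(), var_exact)
```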

38 Proof. The proof comes in three parts. 1. Uniqueness. Suppose X, Y L 2 B, [, T ] are solutions of 6.1 with continuous paths. Continuity will be proved in part 3. Put b i t := b i t, X t b i t, Y t σ ij t := σ ij t, X t σ ij t, Y t. Applying the inequality a + b 2 2a 2 + b 2 for real numbers a and b, the independence of the components of Bt, the Cauchy-Schwarz inequality gsds2 t gs2 ds for an L 2 -function g, the multi-dimensional Itô-isometry and finally the Lipschitz condition, we find E X t Y t 2 := = 2 = 2 n E X i t Y i t 2 i=1 n E i=1 2t = 2t b i sds + m j=1 n 2 m E b i sds + i=1 n E i=1 2 b i sds + 2 n E b i s 2 ds + 2 i=1 E bs 2 ds + 2 2t + 1D 2 E X s Y s 2 ds. 2 σ ij sdb j s j=1 2 σ ij sdb j s n m E i=1 j=1 i=1 j=1 σ ij s 2 ds n m E σ ij s 2 ds E σs 2 ds So the function f : t E X t Y t 2 satisfies the integral inequality for the constant A = 2D 2 T + 1. ft A fsds t [, T ] 6.2 Lemma 6.2 Gronwall s lemma Inequality 6.2 implies that f =. Proof. Put F t = fsds. Then F t is C1 and F t = ft AF t. Therefore d e ta F t = e ta ft AF t. dt Since F =, it follows that e ta F t implying F t. So we have ft AF t and hence ft =. 37

39 Now let Thus, we have E X t Y t 2 = for all t [, T ]. In particular, t [, T ] Q : X t ω = Y t ω for almost all ω. N := { ω Ω t [, T ] Q : X t ω Y t ω }. Then N is a countable union of null sets, so PN =. For Ω\N we have for all t [, T ] Q, X t ω = Y t ω, and since X t and Y t have continuous paths we conclude that for almost all ω the equality extends to all t [, T ]. This completes the proof of uniqueness. 2. Existence. We shall find a solution of 6.1 by iterating the map defined by L 2 B, [, T ] L 2 B, [, T ]: X X X t = Z + bs, X s ds + σs, X s db s. Let us start with the constant process X t := Z, and define recursively X k+1 t := X k t k. The calculation in part 1 can be used to conclude that E Xt Ỹt 2 A E X s Y s 2 ds for any X s, Y s L 2 B. Iteration of this result yields, for k 1 and the choice X t = X k+1 E t where we insert X k 2 = E t A Xk t E X k s X k 1 t 2 X k 1 s X k t 2 ds sk 1 A k ds k 1 ds k 2 Ak t k K t [, T ], k! X s = Z + bs, Zds + s1 σs, ZdB s and Ỹt = X 1 ds E s X k 1 t, Z 2 38

40 and K is given by K := sup E t [,T ] 2 sup t [,T ] { E bs, Zds + σs, ZdB s 2 2 bs, Zds + E σs, ZdB s 2 } 2T 2 C 2 E 1 + Z 2 + 2T C 2 E 1 + Z 2 2C 2 T 2 + T E 1 + Z 2, which is finite by the growth condition and the requirement that Z has finite variance. Now let X = L 2 B, [, T ]- lim k X k. The existence of this limit follows because k A k t k /k! < Cauchy sequence. Then X = L 2 B, [, T ]- lim k Xk = L 2 B, [, T ]- lim k Xk+1 = X. So X is a solution of Continuity and adaptedness. By Theorem 4.5 the paths t X t ω can be assumed to be continuous for almost all ω Ω. The fact that the solution is adapted is immediate from the construction. 6.2 Weak solutions What we have shown in Section 6.1 is that there exists a strong solution of 6.1, i.e., a solution that is an adapted functional of the Brownian motion. In the literature on stochastic differential equations often a different point of view is taken, namely one where the Brownian motion itself is also considered unknown. This leads to a so-called weak solution of stochastic differential equations, i.e., a solution in distribution. Definition. A weak solution of 6.1 is a pair B t, X t, measurable w.r.t. some filtration G t t [,T ] on some probability space Ω, G, P, such that B t is m-dimensional Brownian motion and such that 6.1 holds. The key point here is that the filtration need not be F t t [,T ] = σb s s [,t] : if it is, then we have a strong solution. Definition. 1. Strong uniqueness is uniqueness of a strong solution. 2. Weak uniqueness holds if all weak solutions have the same law. By the following example we illustrate the difference between these two concepts. 39

Example 6.1 (Tanaka). This example is related to our example of Brownian local time in Section 5.4. Let $B_t$ be a Brownian motion and define
$$\widehat B_t := \int_0^t \operatorname{sgn}(B_s)\,dB_s.$$
Since $d\widehat B_t = \operatorname{sgn}(B_t)\,dB_t$ we have, by Itô's formula,
$$d(\widehat B_t^2) = 2\widehat B_t\,d\widehat B_t + (d\widehat B_t)^2 \quad\text{with}\quad (d\widehat B_t)^2 = \operatorname{sgn}(B_t)^2\,(dB_t)^2 = (dB_t)^2 = dt.$$
Hence $\widehat B_t$ is a martingale with quadratic variation $t$ (see Section 2.4):
$$\widehat B_t^2 - 2\int_0^t \widehat B_s\,d\widehat B_s = t.$$
Since $\widehat B_t$ satisfies the requirements BM1–BM5 of Section 1, it must be a Brownian motion. Turning matters around, $B_t$ is itself a solution of the stochastic differential equation
$$dX_t = \operatorname{sgn}(X_t)\,d\widehat B_t, \tag{6.3}$$
since
$$B_t = \int_0^t dB_s = \int_0^t \operatorname{sgn}(B_s)^2\,dB_s = \int_0^t \operatorname{sgn}(B_s)\,d\widehat B_s.$$
Because every solution of (6.3) is a Brownian motion, we have weak uniqueness. However, because $-B_t$ obviously is also a solution of (6.3), there are two solutions of (6.3) for a given process $\widehat B_t$. In other words, we do not have strong uniqueness.

Taking this argument one step further we find (cf. Theorem 5.6):
$$\widehat B_t = |B_t| - |B_0| - L_t,$$
where $L_t$ is the local time at the origin. By its definition, $L_t$ is adapted to the filtration $(\mathcal G_t)_{t \in [0,T]}$ generated by $(|B_t|)_{t \in [0,T]}$, and hence so is $\widehat B_t$. It follows that $\mathcal G_t \subset \mathcal F_t$, where $(\mathcal F_t)_{t \in [0,T]}$ is the filtration generated by $(B_t)_{t \in [0,T]}$. However, since $B_t$ is not $\mathcal G_t$-measurable (its sign is not determined by $|B|$), the solution $B_t$ is not an adapted functional of the driving Brownian motion $\widehat B_t$, i.e., it is not a strong solution.

Note that (6.3) is an Itô diffusion with $b(t,x) = 0$ and $\sigma(t,x) = \operatorname{sgn}(x)$. The latter is not Lipschitz, which is why Theorem 6.1 does not apply.
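The first half of Example 6.1 — that $\widehat B_t = \int_0^t \operatorname{sgn}(B_s)\,dB_s$ is again a Brownian motion — can be checked by simulation: $\widehat B_T$ should have mean $0$ and variance $T$. A sketch with arbitrary discretisation parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, paths = 1.0, 500, 10000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # B_{t_j}

# \hat B_T = int_0^T sgn(B_s) dB_s, discretised with left endpoints
Bhat_T = np.sum(np.sign(B_left) * dB, axis=1)

print(Bhat_T.mean(), Bhat_T.var())  # ~0 and ~T: \hat B looks like a BM
```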

42 7 Itô-diffusions, generators and semigroups The goal of this section is to give a functional analytic description of Itô-diffusions that will allow us to bring powerful analytic tools into play. 7.1 Introduction and motivation An Itô-diffusion Xt x with initial condition X x = x Rn is a Markov process in continuous time. If we suppose that the field b of drift vectors and the field σ of diffusion matrices are both time-independent, i.e., Xt x is the solution starting at x of the stochastic differential equation dx t = bx t dt + σx t db t 7.1 where b and σ satisfy the growth and the Lipschitz conditions mentioned in Theorem 6.1, then this Markov process is stationary = time-homogeneous and can be characterised by its transition probabilities P t x, B := P [X x t B] t, where B runs through the Borel subsets of R n. These transition probabilities satisfy the one-parameter Chapman-Kolmogorov equation P t+s x, B = P t x, dyp s y, B. 7.2 y R n An alternative way to characterise an Itô-diffusion is by its action on a sufficiently rich class of functions f : R n R n. Namely, if we define S t by S t fx := E fxt x = P t x, dyfy, 7.3 y R n then 7.2 leads to S t+s = S t S s, 7.4 i.e., the transformations S t t form a one-parameter semigroup. Such a semigroup is determined by its generator A, defined by 1 Af := lim t t S tf f, 7.5 which describes the action of the semigroup over infinitesimal time intervals. In what follows we make the above observations more precise and we study the interplay between the diffusion X x t, its generator A and its semigroup S t t. 7.2 Basic properties Let t, and for s t let Xs t,x denote the solution of the stochastic differential equation 7.1 with initial condition X t,x t = x. Lemma 7.1 Stationarity The processes s Xs t,x s t. and s X x s t are equal in law for 41

43 Proof. These processes are solutions of the equations dxs t,x = bxs t,x ds + σxs t,x db s B t dxs t x = bxs tds x + σxs tdb x s t respectively, for s t. Since B s B t and B s t are Brownian motions on [t, T ], both starting at, the two processes have the same law by weak uniqueness which is implied by strong uniqueness obtained in Theorem 6.1. Proposition 7.2 Markov property Let Xt x t denote the solution of 7.1 with initial condition X x = x. Fix t. Let C R n be the space of all continuous functions R n R that tend to at infinity. Then for all s the conditional expectation EfXt+s F x t depends on ω Ω only via Xt x. In fact, EfX x t+s F t = S s fx x t. Proof. Fix t. For y R n and s t, let Y s y := X t,y s+t. Then, according to Lemma 7.1, Y s y and X y s have the same law. Hence, for all f C R n, E fy s y = E fx y s = S s fy. On the other hand, the random variable Y s y depends on ω Ω only via the Brownian motion u B u ω B t ω for u t, so Y s y is stochastically independent of F t. By strong uniqueness we must have, for s, Y s X x t = X x t+s a.s., since both sides are solutions of the same stochastic differential equation 7.1 for s with initial condition Y Xt x = Xt x. As Xt x is F t -measurable, we now have E fxt+s F x t ω = f Y s Xt x ωω Pdω = S s f Xt x ω, ω Ω where ω represents the randomness before time t and ω the randomness after time t. Definition. Let B be a Banach space. A jointly continuous one-parameter semigroup of contractions on B is a family S t t of linear operators S t : B B satisfying the following requirements: CS1. S t f f for all t, f B; CS2. S f = f for all f B; CS3. S t+s = S t S s for all t, s ; CS4. the map t, f S t f is jointly continuous [, B B. 42

44 We choose B = C R n. The natural norm on this space is the supremum norm For f C R n we define f := sup x R n fx. S t fx := E fx x t, where X x t t denotes the solution of 7.1 with initial condition X x = x. Theorem 7.3 The operators S t with t form a jointly continuous one-parameter semigroup of contractions C R n C R n. Proof. We prove properties 1 4 in Definition Fix t and f C R n. We show that the function S t f again lies in C R n, by showing that it is continuous and tends to at infinity. The inequality S t f f is obvious from the definition of S t. Continuity: Choose x, y R n and consider the processes Xt x Section 6.1 yields E Xt x X y t 2 2 x y 2 + A and X y t. The basic estimate in E Xs x Xs y 2 ds for some constant A. Putting F t := E Xx s X y s 2 ds, we may write this as So Hence, as F =, or We thus obtain that d dt F t 2 x y 2 + 2AF t. d e 2At F t 2 x y 2 e 2At. dt e 2At F t x y 2 1 A 1 e 2At, AF t x y 2 e 2At 1. E X x t X y t 2 2 x y 2 + 2AF t 2 x y 2 e 2At. 7.6 Now choose ε >. Since f is uniformly continuous on R n, there exists a δ > such that x, y R n : x y δ = fx fy ε 2. 43

45 It follows that for x, y R n not more than δ := δ εe At /2 2 f apart we may estimate S t fx S t fy = EfXt x EfX y t E fx x t fx y t ε f P [ X x t X y t > δ ] ε f 1 δ 2 e2at x y 2 < ε, where the third inequality uses the Markov inequality and 7.6. We conclude that S t f is uniformly continuous as well. Approach to at infinity: It suffices to prove that there exists a τ > such that for all t [, τ] we have lim x S tfx =. Indeed, the same then holds for all t by the semigroup property. Heuristically speaking, we must prove that the diffusion cannot escape to infinity in a finite time. We start with the estimate E X x,k+1 t X x,k t 2 Ak t k Dt1 + x 2 k! for the iterates of the map X X in Section 6.1, starting from the constant process X x, t = x. Since Xt x = x + k Xx,k+1 t X x,k because X x,k t Xt x as k, it follows that E X x t x 2 D t 1 + x with D t = Dt k A k t k /k! 2. Now choose ε >. Let M > be such that fx ε 2 for all x R n with x M, and let τ > be such that 16D τ f < 1 2ε. Then, by the triangle inequality in R n and by Chebyshev s inequality, we have for all x R n with x > 2M 1 and all t [, τ]: S t fx ε 2 + f P [ Xx t < M] ε 2 + f P [ X x t x > 1 2 x ] ε 2 + f 4 x 2 E Xx t x 2 ε 2 + f 4 x 2 D τ 1 + x 2 ε f 4D τ x + 1 < ε. Since ε is arbitrary, this proves the claim. 2. From the initial condition X x = x it is obvious that S fx = E fx x = fx. 44

3. The semigroup property of $S_t$ is a consequence of the Markov property of $X_t$. Namely, for all $s, t \ge 0$ and all $x \in \mathbb{R}^n$:
$$(S_{t+s} f)(x) = \mathbb{E}\, f(X_{t+s}^x) = \mathbb{E}\,\mathbb{E}\big[f(X_{t+s}^x)\,\big|\,\mathcal{F}_t\big] = \mathbb{E}\,(S_s f)(X_t^x) = (S_t S_s f)(x).$$

4. To prove the joint continuity of the map $(t, f) \mapsto S_t f$, it in fact suffices to show that $S_t f \to f$ as $t \downarrow 0$. Indeed, for all $s > t$,
$$\|S_t f - S_s g\| = \|S_t(f - S_{s-t} g)\| \le \|f - S_{s-t} g\| \le \|f - g\| + \|g - S_{s-t} g\|.$$
Note that from (7.7) it follows that $\lim_{t\downarrow0}\mathbb{E}|X_t^x - x|^2 = 0$. Now again take $\varepsilon > 0$. Let $\delta > 0$ be such that $|x-y| < \delta \implies |f(x) - f(y)| < \varepsilon/2$. Let $t_0 > 0$ be such that $\mathbb{E}|X_t^x - x|^2 < \varepsilon\delta^2/4\|f\|$ for $t \le t_0$. Then for $t \in [0, t_0]$ we have, again by Chebyshev's inequality,
$$|S_t f(x) - f(x)| = \Big|\int_\Omega \big(f(X_t^x(\omega)) - f(x)\big)\,\mathbb{P}(d\omega)\Big| \le \frac{\varepsilon}{2} + 2\|f\|\,\mathbb{P}\big[|X_t^x - x| \ge \delta\big] \le \frac{\varepsilon}{2} + \frac{2\|f\|}{\delta^2}\,\mathbb{E}\big|X_t^x - x\big|^2 < \varepsilon. \qquad\square$$

7.3 Generalities on generators

We next describe some basic theory for one-parameter semigroups on Banach spaces. We refer to the book of E.B. Davies (1980) for further details.

Definition. The domain of the generator $A$ of a one-parameter semigroup $(S_t)_{t\ge0}$ on a Banach space $B$ is defined by
$$\mathrm{Dom}(A) := \Big\{ f \in B \;\Big|\; \lim_{t\downarrow0}\tfrac1t\big(S_t f - f\big) \text{ exists}\Big\},$$
and for $f$ in this domain, $Af$ denotes the limit. This definition leads to the following.

Proposition 7.4 Let $A$ be the generator of a jointly continuous one-parameter semigroup $(S_t)_{t\ge0}$ on a Banach space $B$. Then
(i) $\mathrm{Dom}(A)$ is dense in $B$;
(ii) $S_t$ leaves $\mathrm{Dom}(A)$ invariant: $S_t\,\mathrm{Dom}(A) \subset \mathrm{Dom}(A)$;
(iii) for all $f \in \mathrm{Dom}(A)$: $S_t A f = A S_t f = \frac{d}{dt} S_t f$.

Proof. (i) For $f \in B$ and $\varepsilon > 0$ we define
$$f_\varepsilon := \frac1\varepsilon\int_0^\varepsilon S_t f\,dt.$$
Then $\lim_{\varepsilon\downarrow0} f_\varepsilon = f$, so $\{f_\varepsilon \mid \varepsilon > 0,\ f \in B\}$ is dense in $B$. But we also have
$$\frac1h\big(S_h - 1\!\mathrm{I}\big) f_\varepsilon = \frac1h\big(S_h - 1\!\mathrm{I}\big)\frac1\varepsilon\int_0^\varepsilon S_t f\,dt = \frac{1}{h\varepsilon}\Big(\int_h^{h+\varepsilon} S_t f\,dt - \int_0^\varepsilon S_t f\,dt\Big) = \frac{1}{h\varepsilon}\Big(\int_0^h S_t S_\varepsilon f\,dt - \int_0^h S_t f\,dt\Big) \longrightarrow \frac1\varepsilon\big(S_\varepsilon f - f\big) \quad\text{as } h \downarrow 0.$$
So $f_\varepsilon \in \mathrm{Dom}(A)$, and hence $\mathrm{Dom}(A)$ is dense in $B$.

(ii), (iii) If $f \in \mathrm{Dom}(A)$, then
$$\frac1h\big(S_h - 1\!\mathrm{I}\big) S_t f = S_t\,\frac1h\big(S_h - 1\!\mathrm{I}\big) f \longrightarrow S_t A f \quad\text{as } h \downarrow 0.$$
So $S_t f \in \mathrm{Dom}(A)$ and $A S_t f = S_t A f$. The identity $A S_t f = \frac{d}{dt} S_t f$ follows from the definition of $A$. $\square$

Since $S_t$ is the solution of the differential equation $\frac{d}{dt} S_t f = A S_t f$ with initial condition $S_0 = 1\!\mathrm{I}$, it is customary to write $S_t = e^{tA}$.

The next theorem gives us an explicit formula for the generator of an Itô-diffusion on a large subset of its domain, namely, all functions that are twice continuously differentiable and have compact support.

Theorem 7.5 Let $(S_t)_{t\ge0}$ be the one-parameter semigroup on $C_0(\mathbb{R}^n)$ associated to an Itô-diffusion with coefficients $b$ and $\sigma$. Let $A$ be the generator of $(S_t)_{t\ge0}$. Then $C_c^2(\mathbb{R}^n) \subset \mathrm{Dom}(A)$, and for all $f \in C_c^2(\mathbb{R}^n)$:
$$Af(x) = \sum_i b_i(x)\,\frac{\partial f}{\partial x_i}(x) + \frac12\sum_{i,j}\big(\sigma(x)\sigma(x)^T\big)_{ij}\,\frac{\partial^2 f}{\partial x_i\partial x_j}(x). \tag{7.8}$$

Proof. Denote the r.h.s. of (7.8) by $A_0 f$. By Itô's formula we have
$$df(X_t) = \sum_i \frac{\partial f}{\partial x_i}(X_t)\,dX_t^i + \frac12\sum_{i,j}\frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,dX_t^i\,dX_t^j.$$
Since $dX_t^i = b_i(X_t)\,dt + \sum_j \sigma_{ij}(X_t)\,dB_t^j$, we have
$$dX^i\,dX^j = \Big(\sum_k \sigma_{ik}\,dB^k\Big)\Big(\sum_l \sigma_{jl}\,dB^l\Big) = \sum_k \sigma_{ik}\sigma_{jk}\,dt = \big(\sigma\sigma^T\big)_{ij}\,dt.$$

Using the condition $X_0^x = x$, we conclude that for any $f \in C_c^2(\mathbb{R}^n)$,
$$f(X_t^x) - f(x) = \sum_{i,j}\int_0^t \frac{\partial f}{\partial x_i}(X_s^x)\,\sigma_{ij}(X_s^x)\,dB_s^j + \int_0^t A_0 f(X_s^x)\,ds. \tag{7.9}$$
The first part on the r.h.s. is a martingale, and therefore
$$Af(x) = \lim_{t\downarrow0}\frac1t\,\mathbb{E}\big(f(X_t^x) - f(x)\big) = \lim_{t\downarrow0}\frac1t\,\mathbb{E}\int_0^t A_0 f(X_s^x)\,ds. \tag{7.10}$$
Since $A_0 f$ is a continuous function, we have almost surely
$$\lim_{t\downarrow0}\frac1t\int_0^t A_0 f(X_s^x)\,ds = A_0 f(x),$$
and since $A_0 f$ is bounded we can apply dominated convergence to the r.h.s. of (7.10) and conclude that $Af(x) = A_0 f(x)$. $\square$

Theorem 7.5 shows that on $C_c^2(\mathbb{R}^n)$ the generator $A$ acts as a second-order differential operator, with a first-order part coming from the drift and a second-order part coming from the diffusion. This fundamental fact forms a bridge between the theory of partial differential equations and the study of diffusions.

In general it is difficult to calculate the evolution semigroup $S_t$, but in a few exceptional cases, such as that of the Ornstein-Uhlenbeck process, the differential equation $\frac{d}{dt} S_t = S_t A$ can be explicitly solved.

Proposition 7.6 Let $\eta$ and $\sigma$ be positive numbers, and let $(S_t)_{t\ge0}$ be the semigroup of contractions $C_0(\mathbb{R}) \to C_0(\mathbb{R})$ associated to the stochastic differential equation
$$dX_t = -\eta X_t\,dt + \sigma\,dB_t. \tag{7.11}$$
Then for all $f \in C_0(\mathbb{R})$ and $x \in \mathbb{R}$,
$$S_t f(x) = \frac1\sigma\sqrt{\frac{\eta}{\pi\big(1 - e^{-2\eta t}\big)}}\int_{-\infty}^{\infty}\exp\Big(-\frac{\eta\big(x e^{-\eta t} - y\big)^2}{\sigma^2\big(1 - e^{-2\eta t}\big)}\Big) f(y)\,dy.$$

Exercise. Prove this proposition by checking explicitly that for all $t \ge 0$: $\frac{d}{dt} S_t f = S_t A f$, where $A$ is the generator of the Ornstein-Uhlenbeck process, given for $f \in C^2(\mathbb{R})$ by
$$Af(x) := -\eta x f'(x) + \frac{\sigma^2}{2} f''(x).$$
Below we shall give a stochastic proof.
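The kernel in Proposition 7.6 can also be checked numerically. The following sketch (with arbitrary illustrative parameter values) writes the kernel as the Gaussian density with mean $xe^{-\eta t}$ and variance $\sigma^2(1 - e^{-2\eta t})/2\eta$, integrates it against $f(y) = y$ and $f(y) = y^2$ on a fine grid, and compares the result with the first two moments of $X_t^x$:

```python
import numpy as np

# Illustrative parameters for the Ornstein-Uhlenbeck semigroup check
eta, sigma, t, x = 0.7, 1.3, 0.5, 2.0

# Kernel of Proposition 7.6 = Gaussian density with these mean and variance
mean = x * np.exp(-eta * t)
var = sigma**2 * (1 - np.exp(-2 * eta * t)) / (2 * eta)

# Fine grid, symmetric around the mean, covering +-10 standard deviations
h = 20 * np.sqrt(var) / 200000
y = mean + h * np.arange(-100000, 100001)
kernel = np.exp(-(y - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

St_id = h * np.sum(y * kernel)       # S_t f for f(y) = y
St_sq = h * np.sum(y**2 * kernel)    # S_t f for f(y) = y^2

print(abs(St_id - mean) < 1e-8)              # first moment recovered
print(abs(St_sq - (var + mean**2)) < 1e-8)   # second moment recovered
```

Both comparisons succeed, confirming that the kernel has exactly the mean and variance read off from the explicit solution of (7.11).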

Proof. First note that the solution $X_t^x$ of (7.11) with initial condition $X_0^x = x$ is given by
$$X_t^x = x e^{-\eta t} + \sigma N(f_t), \quad\text{where } f_t(u) = 1\!\mathrm{I}_{[0,t]}(u)\,e^{\eta(u-t)}$$
(cf. above). For $t \ge 0$, let $\gamma_t$ denote the Gaussian density with mean $0$ and variance $\sigma^2\|f_t\|^2 = \sigma^2\big(1 - e^{-2\eta t}\big)/2\eta$, and for $f \in C_0^2(\mathbb{R})$, let $\hat f$ denote the Fourier transform of $f$, so that
$$f(x) = \frac{1}{2\pi}\int e^{i\omega x}\,\hat f(\omega)\,d\omega.$$
Then, since $\mathbb{E}\,e^{i\omega\sigma N(f_t)} = \hat\gamma_t(\omega)$, we have, using Fubini's theorem and the property that the Fourier transform of a convolution product equals the product of the Fourier transforms,
$$S_t f(x) = \mathbb{E}\, f(X_t^x) = \frac{1}{2\pi}\int \mathbb{E}\,e^{i\omega X_t^x}\,\hat f(\omega)\,d\omega = \frac{1}{2\pi}\int \mathbb{E}\,e^{i\omega x e^{-\eta t} + i\omega\sigma N(f_t)}\,\hat f(\omega)\,d\omega = \frac{1}{2\pi}\int e^{i\omega x e^{-\eta t}}\,\hat\gamma_t(\omega)\,\hat f(\omega)\,d\omega = \int \gamma_t\big(x e^{-\eta t} - y\big)\,f(y)\,dy.$$
By writing out the function $\gamma_t$ we obtain the statement. $\square$

7.4 Dynkin's formula and applications

Having thus identified the generator associated with Itô-diffusions, we next formulate an important formula for stopped Itô-diffusions. A stopping time $\tau$ for $(X_t^x)_{t\ge0}$ is a random variable such that for all $t \ge 0$ the event $[\tau \le t] := \{\omega \in \Omega \mid \tau(\omega) \le t\}$ is an element of the $\sigma$-algebra $\mathcal{F}_t$ generated by $(X_s^x)_{s\in[0,t]}$.

Theorem 7.7 (Dynkin's formula) Let $X$ be a diffusion in $\mathbb{R}^n$ with generator $A$. Let $\tau$ be a stopping time with $\mathbb{E}\tau < \infty$. Then for all $f \in C_c^2(\mathbb{R}^n)$ and all $x \in \mathbb{R}^n$,
$$\mathbb{E}^x f(X_\tau) = f(x) + \mathbb{E}^x\int_0^\tau Af(X_s)\,ds.$$

Proof. The theorem is intuitive, but its proof is rather technical. We limit ourselves here to a very brief sketch. By applying (7.9) to the stopped process $t \mapsto X^x_{t\wedge\tau}$ and letting $t$ tend to infinity afterwards, we find:
$$f(X_\tau^x) = f(x) + \sum_{i,j}\int_0^\tau \frac{\partial f}{\partial x_i}(X_s^x)\,\sigma_{ij}(X_s^x)\,dB_s^j + \int_0^\tau Af(X_s^x)\,ds.$$

After taking expectations we get the statement, since stopped martingales starting at $0$ have expectation $0$. See Dynkin (1963). $\square$

The simplest example of a diffusion is Brownian motion itself: $X_t = B_t$ ($b \equiv 0$, $\sigma \equiv \mathrm{id}$). Its generator is $1/2$ times the Laplacian. We investigate two problems.

Example 7.1. Consider Brownian motion $B_t^a := a + B_t$ starting at $a \in \mathbb{R}^n$. Let $R > |a|$. What is the average time $B_t^a$ spends inside the ball $D_R = \{x \in \mathbb{R}^n : |x| \le R\}$?

Solution: Choose $f \in C_c^2(\mathbb{R}^n)$ such that $f(x) = |x|^2$ for $|x| \le R$. Let $\tau_R^a$ denote the first time the Brownian motion hits the sphere $\partial D_R$. Then $\tau_R^a$ is a stopping time. Put $\tau := \tau_R^a \wedge T$ and apply Dynkin's formula, to obtain
$$\mathbb{E}\, f(B_\tau^a) = f(a) + \mathbb{E}\int_0^\tau \tfrac12\Delta f(B_s^a)\,ds = |a|^2 + n\,\mathbb{E}\tau,$$
where we use that $\tfrac12\Delta f \equiv n$ on $D_R$. Obviously, $\mathbb{E}\, f(B_\tau^a) \le R^2$. Therefore
$$\mathbb{E}\tau \le \frac1n\big(R^2 - |a|^2\big).$$
As this holds uniformly in $T$, it follows that $\mathbb{E}\tau_R^a \le \frac1n(R^2 - |a|^2) < \infty$. Hence $\tau \uparrow \tau_R^a$ as $T \to \infty$, by monotone convergence. But then we must have $f(B_\tau^a) \to |B^a_{\tau_R^a}|^2 = R^2$ as $T \to \infty$, and so in fact
$$\mathbb{E}\,\tau_R^a = \frac1n\big(R^2 - |a|^2\big)$$
by dominated convergence.

Example 7.2. Let $b \in \mathbb{R}^n$ be a point outside the ball $D_R$. What is the probability that the Brownian motion starting in $b$ ever hits $D_R$?

Solution: We cannot use Dynkin's formula directly, because we do not know whether the Brownian motion will ever hit $D_R$, i.e., whether $\tau < \infty$. In order to obtain a well-defined stopping time, we need a bigger ball $D_M := \{x \in \mathbb{R}^n \mid |x| \le M\}$ enclosing the point $b$. Let $\sigma^b_{M,R}$ be the first exit time from the annulus $A_{M,R} := D_M \setminus D_R$ starting from $b$. Then clearly $\sigma^b_{M,R} = \tau^b_M \wedge \tau^b_R$. Now take $A = \tfrac12\Delta$, the generator of Brownian motion, and suppose that $f_{M,R} : A_{M,R} \to \mathbb{R}$ satisfies the following requirements:
(i) $\Delta f_{M,R} = 0$, i.e., $f_{M,R}$ is harmonic;
(ii) $f_{M,R}(x) = 1$ for $|x| = R$;
(iii) $f_{M,R}(x) = 0$ for $|x| = M$.
We then find
$$\mathbb{P}\Big[\big|B^b_{\sigma^b_{M,R}}\big| = R\Big] = \mathbb{E}\Big[f_{M,R}\big(B^b_{\sigma^b_{M,R}}\big)\Big] = f_{M,R}(b),$$
where the first equality uses (ii) and (iii) and the second equality uses Dynkin's formula in combination with (i). Incidentally, this equation says that $f_{M,R}$ is uniquely determined if it exists.

Next, we let $M \to \infty$ to obtain
$$\mathbb{P}\big[\tau_R^b < \infty\big] = \mathbb{P}\big[\exists M > |b| : \tau_R^b < \tau_M^b\big],$$
because any path of Brownian motion that hits $D_R$ in finite time must be bounded. From the latter we in turn obtain
$$\mathbb{P}\big[\tau_R^b < \infty\big] = \mathbb{P}\Big[\bigcup_{M > |b|}\big[\tau_R^b < \tau_M^b\big]\Big] = \lim_{M\to\infty}\mathbb{P}\big[\tau_R^b < \tau_M^b\big] = \lim_{M\to\infty}\mathbb{P}\Big[\big|B^b_{\sigma^b_{M,R}}\big| = R\Big] = \lim_{M\to\infty} f_{M,R}(b).$$
Thus, the only thing that remains to be done is to calculate this limit, i.e., we must solve (i)–(iii). For $n = 2$ we find, after some calculation:
$$f_{M,R}(b) = \frac{\log|b| - \log M}{\log R - \log M} \longrightarrow 1 \quad\text{as } M \to \infty.$$
For $n \ne 2$, on the other hand, we find:
$$f_{M,R}(b) = \frac{|b|^{2-n} - M^{2-n}}{R^{2-n} - M^{2-n}} \longrightarrow \begin{cases} 1 & \text{if } n = 1, \\ \big(|b|/R\big)^{2-n} & \text{if } n \ge 3. \end{cases}$$
It follows that Brownian motion with $n = 1$ or $n = 2$ is recurrent, since it will hit any sphere with probability $1$. But for $n \ge 3$ Brownian motion is transient.
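The exit-time identity of Example 7.1 lends itself to a quick Monte Carlo sanity check. The sketch below uses Euler time-stepping with illustrative parameters (the discretization scheme is our own choice, not part of the notes): for planar Brownian motion started at the centre of the unit ball, $\mathbb{E}\,\tau_R^0$ should be close to $(R^2 - |a|^2)/n = 1/2$.

```python
import numpy as np

# Monte Carlo estimate of E[tau] for Brownian motion exiting the unit ball,
# started at the origin in dimension n = 2 (Euler scheme, illustrative).
rng = np.random.default_rng(0)
n, R, dt, n_paths = 2, 1.0, 2e-4, 5000

pos = np.zeros((n_paths, n))            # all paths start at a = 0
tau = np.full(n_paths, np.nan)
alive = np.ones(n_paths, dtype=bool)
t = 0.0
while alive.any() and t < 10.0:         # time cap as a safety net
    k = alive.sum()
    pos[alive] += np.sqrt(dt) * rng.standard_normal((k, n))
    t += dt
    exited = alive & (np.linalg.norm(pos, axis=1) >= R)
    tau[exited] = t
    alive &= ~exited

estimate = np.nanmean(tau)
print(abs(estimate - (R**2 - 0.0) / n) < 0.08)   # expect roughly 1/2
```

The tolerance is generous because discrete monitoring of the exit slightly biases the estimate upward.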

8 Transformations of diffusions

In this section we treat two formulas from the theory of diffusions, both of which describe ways to transform diffusions and are proved using Itô-calculus.

8.1 The Feynman-Kac formula

Theorem 8.1 (Feynman-Kac formula) For $x \in \mathbb{R}^n$, let $X_t^x$ be an Itô-diffusion with generator $A$ and initial condition $X_0^x = x$. Let $v : \mathbb{R}^n \to [0,\infty)$ be continuous, and let $S_t^v f$ for $f \in C_0(\mathbb{R}^n)$ and $t \ge 0$ be given by
$$\big(S_t^v f\big)(x) = \mathbb{E}\Big[f(X_t^x)\exp\Big(-\int_0^t v(X_u^x)\,du\Big)\Big].$$
Then $(S_t^v)_{t\ge0}$ is a jointly continuous semigroup on $C_0(\mathbb{R}^n)$ in the sense of Definition 7.2, with generator $A - v$.

Proof. It is not difficult to show, by the techniques used in the proof of Theorem 7.3, that $S_t^v f$ lies again in $C_0(\mathbb{R}^n)$, and that the properties 1, 2 and 4 in Definition 7.2 hold (note that $v$ is non-negative). It is illuminating, however, to explicitly prove Property 3.

For $0 \le s \le t$, let
$$Z^x_{s,t} := \exp\Big(-\int_s^t v\big(X_u^{s,x}\big)\,du\Big),$$
where $X^{s,x}$ denotes the diffusion started at $x$ at time $s$. Property 3 in Definition 7.2 is preserved due to the particular form of the process $Z^x_{0,t}$. In fact, let $f \in C_0(\mathbb{R}^n)$. Then
$$\big(S^v_{t+s} f\big)(x) = \mathbb{E}\big[Z^x_{0,t+s}\,f(X^x_{t+s})\big] = \mathbb{E}\big[Z^x_{0,t}\,Z^{X^x_t}_{t,t+s}\,f\big(X^{t,X^x_t}_{t+s}\big)\big] = \mathbb{E}\big[Z^x_{0,t}\,\mathbb{E}\big(Z^{X^x_t}_{t,t+s}\,f\big(X^{t,X^x_t}_{t+s}\big)\,\big|\,\mathcal{F}_t\big)\big] = \mathbb{E}\big[Z^x_{0,t}\,g(X^x_t)\big] = \big(S_t^v g\big)(x),$$
where, by stationarity,
$$g(y) = \mathbb{E}\big[Z^y_{t,t+s}\,f\big(X^{t,y}_{t+s}\big)\big] = \mathbb{E}\big[Z^y_{0,s}\,f(X^y_s)\big] = \big(S_s^v f\big)(y).$$
So indeed
$$\big(S^v_{t+s} f\big)(x) = \big(S_t^v S_s^v f\big)(x).$$
Let us finally show that $A - v$ is the generator of $(S_t^v)_{t\ge0}$. To that end we calculate (dropping the upper index $x$):
$$d\big(Z_{0,t}\,f(X_t)\big) = f(X_t)\,dZ_{0,t} + Z_{0,t}\,df(X_t) = \big[-v(X_t)f(X_t) + Af(X_t)\big]\,Z_{0,t}\,dt + Z_{0,t}\,\big\langle\sigma(X_t)^T\nabla f(X_t),\,dB_t\big\rangle. \tag{8.1}$$

Here we use Itô's formula and $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$ to compute $df(X_t)$:
$$df(X_t) = \sum_i \frac{\partial f}{\partial x_i}(X_t)\,dX_t^i + \frac12\sum_{i,j}\frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,dX_t^i\,dX_t^j = \sum_i \frac{\partial f}{\partial x_i}(X_t)\,b_i(X_t)\,dt + \sum_{i,j}\frac{\partial f}{\partial x_i}(X_t)\,\sigma_{ij}(X_t)\,dB_t^j + \frac12\sum_{i,j}\frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,\big(\sigma\sigma^T\big)_{ij}(X_t)\,dt, \tag{8.2}$$
where the first and third term are precisely $Af(X_t)\,dt$ according to Theorem 7.5. Taking expectations on both sides of (8.1), we get
$$d\,\mathbb{E}\big[Z_{0,t}\,f(X_t)\big] = \mathbb{E}\big[\big((A - v)f\big)(X_t)\,Z_{0,t}\big]\,dt.$$
In short, $dS_t^v f = S_t^v(A - v)f\,dt$, which means that $A - v$ is indeed the generator of the semigroup $(S_t^v)_{t\ge0}$ (recall Proposition 7.4(iii)). $\square$

We can give the semigroup $(S_t^v)_{t\ge0}$ a clear probabilistic interpretation. The positive number $v(y)$ is viewed as a hazard rate at $y \in \mathbb{R}^n$: the probability per unit time for the diffusion process to be killed. Let us extend the state space $\mathbb{R}^n$ by a single point $\dagger$, the coffin state, where the system ends up after being killed. Then it can be shown that there exists a stopping time $\tau$, the killing time, such that the process $Y_t^x$ given by
$$Y_t^x := \begin{cases} X_t^x & \text{if } t \le \tau, \\ \dagger & \text{if } t > \tau, \end{cases}$$
satisfies
$$\mathbb{E}\, f(Y_t^x) = \mathbb{E}\Big[f(X_t^x)\exp\Big(-\int_0^t v(X_u^x)\,du\Big)\Big],$$
provided we define $f(\dagger) := 0$. The proof of this requires an explicit construction of the killing time $\tau$, which we shall not give here.

Example 8.1. The Feynman-Kac formula was originally formulated as a non-rigorous path-integral formula in quantum mechanics by R. Feynman, and was later reformulated in terms of diffusions by M. Kac. The connection with quantum mechanics can be stated as follows. If $X_t$ is Brownian motion, then the generator of $(S_t^v)_{t\ge0}$ is $\tfrac12\Delta - v$. This is $(-1)$ times the Hamilton operator of a particle in a potential $v$ in $\mathbb{R}^n$. According to Schrödinger, the evolution in time of the wave function $\psi \in L^2(\mathbb{R}^n)$ describing such a particle is given by $\psi \mapsto U_t^v\psi$, where $(U_t^v)$ is a group of unitary operators given by
$$U_t^v = \exp\big(it\big(\tfrac12\Delta - v\big)\big), \quad t \in \mathbb{R}.$$
This group can be obtained by analytic continuation in $t$ into the complex domain of the semigroup
$$S_t^v = \exp\big(t\big(\tfrac12\Delta - v\big)\big), \quad t \in [0,\infty).$$
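As a numerical illustration of Theorem 8.1, one can estimate $S_t^v f$ for Brownian motion with $f \equiv 1$ and $v(x) = x^2$ by Monte Carlo. The benchmark is the classical closed form $\mathbb{E}\exp\big(-\int_0^t B_s^2\,ds\big) = \cosh(\sqrt2\,t)^{-1/2}$, a known Cameron-Martin-type identity assumed here for comparison; parameters and discretization are illustrative:

```python
import numpy as np

# Monte Carlo evaluation of the Feynman-Kac expectation for killing rate
# v(x) = x^2 along Brownian paths started at 0 (Euler scheme, illustrative).
rng = np.random.default_rng(1)
t, steps, n_paths = 0.5, 500, 200_000
dt = t / steps

B = np.zeros(n_paths)
integral = np.zeros(n_paths)
for _ in range(steps):
    integral += B**2 * dt                       # Riemann sum of v(B_s) ds
    B += np.sqrt(dt) * rng.standard_normal(n_paths)

mc = np.exp(-integral).mean()
exact = np.cosh(np.sqrt(2) * t) ** -0.5         # assumed closed form
print(abs(mc - exact) < 0.01)                   # agreement within MC error
```

The agreement illustrates that the exponential weight in the Feynman-Kac formula correctly encodes the killing.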

Example 8.2. Consider the partial differential equation with initial condition
$$\frac{\partial}{\partial t} u(x,t) = \tfrac12\Delta u(x,t) + \xi(x)\,u(x,t), \quad x \in \mathbb{R}^d,\ t \ge 0, \tag{8.3}$$
$$u(x,0) = 1, \quad x \in \mathbb{R}^d. \tag{8.4}$$
Here, $\Delta$ is the Laplacian acting on the spatial variable and $\{\xi(x) : x \in \mathbb{R}^d\}$ is a field of random variables that plays the role of a random medium. Depending on the choice that is made for the probability law of this random field, (8.3) can be used in various areas of chemistry and biology. For instance, $u(x,t)$ describes the flow of heat in a medium with randomly located sources and sinks. A formal solution of (8.3) and (8.4) can be written down with the help of the Feynman-Kac formula:
$$u(x,t) = \mathbb{E}\Big[\exp\Big(\int_0^t \xi(B_s^x)\,ds\Big)\Big].$$
This representation expresses $u(x,t)$ in terms of a Brownian motion starting at $x$ and is the starting point for a detailed analysis of the behaviour of the random field $\{u(x,t) : x \in \mathbb{R}^d\}$ as $t \to \infty$. See Carmona and Molchanov (1994).

Finally, if $v$ fails to be nonnegative, then the Feynman-Kac formula may still hold. For instance, it suffices that $v$ be bounded from below. This guarantees that the properties in Definition 7.2 hold.

8.2 The Cameron-Martin-Girsanov formula

Brownian motion starting at $x \in \mathbb{R}^n$ can be represented naturally on the probability space $(\Omega, \mathcal{F}, \mathbb{P}^x)$, where $\Omega = C([0,T] \to \mathbb{R}^n)$ is the space of its paths, $\mathcal{F}$ is the $\sigma$-algebra generated by the cylinder sets $\{\omega \in \Omega \mid a \le \omega(t) \le b\}$ ($a, b \in \mathbb{R}^n$, $t \in [0,T]$), and $\mathbb{P}^x$ is the probability distribution of $B_t^x := x + B_t$ ($t \in [0,T]$) constructed in Section 2.

We want to compare the probability distribution $\mathbb{P}^x$ with that of another process derived from it, namely, Brownian motion with a drift. Let $X_t^x$ denote the solution of
$$dX_t^x = b(X_t^x)\,dt + dB_t, \quad X_0^x = x, \tag{8.5}$$
where $b$ is bounded and Lipschitz continuous. The process $X_t^x$ induces a probability measure $\mathbb{P}_b^x$ on $\Omega$.

Theorem 8.2 (Cameron-Martin-Girsanov formula) For all $x \in \mathbb{R}^n$ the probability measures $\mathbb{P}^x$ and $\mathbb{P}_b^x$ are mutually absolutely continuous, with Radon-Nikodym derivative given by
$$\frac{d\mathbb{P}_b^x}{d\mathbb{P}^x}\big((B_s^x)_{s\in[0,T]}\big) = \exp\Big(\int_0^T\big\langle b(B_s^x),\,dB_s^x\big\rangle - \frac12\int_0^T\big|b(B_s^x)\big|^2\,ds\Big).$$

Proof. Fix $x \in \mathbb{R}^n$. The idea of the proof is to show that $X_t^x$ has the same distribution under $\mathbb{P}^x$ as $B_t^x$ has under $\rho_T\,\mathbb{P}^x$, where $\rho_T$ denotes the Radon-Nikodym derivative of the

theorem. In other words, we shall show that for all $0 \le t_1 \le t_2 \le \cdots \le t_n \le T$ and all $f_1, f_2, \ldots, f_n \in C_0(\mathbb{R}^n)$:
$$\mathbb{E}^x\big[f_1(X_{t_1})\cdots f_n(X_{t_n})\big] = \mathbb{E}^x\big[\rho_T\,f_1(B^x_{t_1})\cdots f_n(B^x_{t_n})\big]. \tag{8.6}$$
Indeed, by the definition of $\mathbb{P}^x_b$ the l.h.s. is equal to
$$\int_\Omega f_1(\omega(t_1))\cdots f_n(\omega(t_n))\,\mathbb{P}^x_b(d\omega)$$
and the r.h.s. is equal to
$$\int_\Omega f_1(\omega(t_1))\cdots f_n(\omega(t_n))\,\rho_T(\omega)\,\mathbb{P}^x(d\omega),$$
while the functions $\omega \mapsto f_1(\omega(t_1))\cdots f_n(\omega(t_n))$ generate the $\sigma$-algebra $\mathcal{F}$ when $t_1, \ldots, t_n$ are running. We shall prove (8.6) by showing that both sides are equal to
$$\Big(S_{t_1}\big(f_1\,S_{t_2-t_1}\big(f_2\cdots S_{t_n-t_{n-1}} f_n\big)\cdots\big)\Big)(x). \tag{8.7}$$
Let us start with the l.h.s. of (8.6). First we note that, for all $0 \le s \le t \le T$, all $F$ in the algebra $\mathcal{A}_s$ of bounded $\mathcal{F}_s$-measurable functions on $\Omega$ and all $f \in C_0(\mathbb{R}^n)$, by the property of conditional expectations and the Markov property we have
$$\mathbb{E}^x\big[F\,f(X_t)\big] = \mathbb{E}^x\big[\mathbb{E}\big(F\,f(X_t)\,\big|\,\mathcal{F}_s\big)\big] = \mathbb{E}^x\big[F\,\mathbb{E}\big(f(X_t)\,\big|\,\mathcal{F}_s\big)\big] = \mathbb{E}^x\big[F\,\big(S_{t-s}f\big)(X_s)\big].$$
If we apply this result repeatedly to the product $f_1(X_{t_1})\cdots f_n(X_{t_n})$, first projecting onto $\mathcal{A}_{t_{n-1}}$, then onto $\mathcal{A}_{t_{n-2}}$, and all the way down to $\mathcal{A}_{t_1}$, then we obtain (8.7).

Next consider the r.h.s. of (8.6), which is more difficult to handle. We shall show that it is also given by (8.7), in three steps:

Step 1. For all $t \le T$ and $F \in \mathcal{A}_t$:
$$\mathbb{E}^x\big[F\rho_T\big] = \mathbb{E}^x\big[F\rho_t\big]. \tag{8.8}$$
Hence the r.h.s. of (8.6) does not depend on $T$, as long as $T \ge t_n$.

Step 2. For all $s \le t \le T$, $F \in \mathcal{A}_s$ and $f \in C_0(\mathbb{R}^n)$:
$$\mathbb{E}^x\big[F\,f(B_t)\,\rho_T\big] = \mathbb{E}^x\big[F\,\big(S_{t-s}f\big)(B_s)\,\rho_T\big]. \tag{8.9}$$

Step 3. We apply (8.9) repeatedly, first with $t = t_n$, $s = t_{n-1}$, $F \in \mathcal{A}_{t_{n-1}}$, then with $t = t_{n-1}$, $s = t_{n-2}$, $F \in \mathcal{A}_{t_{n-2}}$, continuing all the way down to $t = t_1$, and we obtain (8.7).

Thus, to complete the proof it remains to prove (8.8) and (8.9).

Proof of Step 1. Put $\rho_t := \exp(Z_t)$ with
$$dZ_t = \big\langle b(B_t),\,dB_t\big\rangle - \tfrac12\big|b(B_t)\big|^2\,dt.$$

Then $\rho_T$ is as defined above, and Itô's formula gives
$$d\rho_t = \exp(Z_t)\,dZ_t + \tfrac12\exp(Z_t)\,(dZ_t)^2 = \rho_t\,\big\langle b(B_t),\,dB_t\big\rangle,$$
where two terms cancel because $(dZ_t)^2 = |b(B_t)|^2\,dt$. It follows that $t \mapsto \rho_t$ is a martingale. Therefore, for $F \in \mathcal{A}_t$,
$$\mathbb{E}^x\big[F\rho_T\big] = \mathbb{E}^x\big[\mathbb{E}\big(F\rho_T\,\big|\,\mathcal{F}_t\big)\big] = \mathbb{E}^x\big[F\,\mathbb{E}\big(\rho_T\,\big|\,\mathcal{F}_t\big)\big] = \mathbb{E}^x\big[F\rho_t\big].$$

Proof of Step 2. Note that, by Itô's formula,
$$d\big(\rho_t\,f(B_t)\big) = f(B_t)\,d\rho_t + \rho_t\,df(B_t) + d\rho_t\,df(B_t) = \rho_t\Big(f(B_t)\big\langle b(B_t),\,dB_t\big\rangle + \big\langle\nabla f(B_t),\,dB_t\big\rangle + \tfrac12\Delta f(B_t)\,dt + \big\langle b(B_t),\,\nabla f(B_t)\big\rangle\,dt\Big),$$
following a calculation similar to (8.1) and (8.2). Hence, for $s \le t$ and $F \in \mathcal{A}_s$ we have, by (8.8),
$$\frac{d}{dt}\,\mathbb{E}^x\big[F\rho_T\,f(B_t)\big] = \frac{d}{dt}\,\mathbb{E}^x\big[F\rho_t\,f(B_t)\big] = \mathbb{E}^x\big[F\rho_t\,Af(B_t)\big] = \mathbb{E}^x\big[F\,Af(B_t)\,\rho_T\big],$$
where $A := \tfrac12\Delta + \langle b, \nabla\rangle$ is the generator of $X_t$. Therefore the l.h.s. and the r.h.s. of (8.9) have the same derivative with respect to $t$ for any $t \ge s$. Since they are equal for $t = s$, they must be equal for all $t \ge s$. $\square$

The Cameron-Martin-Girsanov formula is a generalisation by Girsanov (1960) of the original formula of Cameron and Martin (1944) for a translation in Wiener space, which we now give as a special case. Let $h$ be a square integrable function $[0,T] \to \mathbb{R}$, and consider the shifted noise $\tilde N$ given by
$$\tilde N(f) := \langle h, f\rangle + N(f).$$
Clearly this corresponds to a shift in the associated Brownian motion given by
$$\tilde B_t = \int_0^t h(s)\,ds + B_t.$$
Now, since this shifted Brownian motion satisfies the stochastic differential equation
$$d\tilde B_t = h(t)\,dt + dB_t,$$
the Cameron-Martin-Girsanov formula says that the probability distributions $\tilde{\mathbb{P}}$ and $\mathbb{P}$ are related by
$$\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}} = \exp\Big(\int_0^T h(t)\,dB_t - \frac12\int_0^T h(t)^2\,dt\Big) = e^{N(h) - \frac12\|h\|^2}.$$
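A minimal numerical sanity check of the Cameron-Martin density: since $e^{N(h) - \frac12\|h\|^2}$ is the terminal value of the martingale $\rho_t$, its expectation under $\mathbb{P}$ must equal $1$. The discretization below is an illustrative sketch (the choice of $h$ is arbitrary), not part of the notes:

```python
import numpy as np

# Check E[exp(N(h) - ||h||^2/2)] = 1 for a discretized Brownian motion.
rng = np.random.default_rng(3)
T, steps, n_paths = 1.0, 200, 100_000
dt = T / steps
t = np.arange(steps) * dt
h = np.sin(2 * np.pi * t)                 # arbitrary square-integrable h

dB = np.sqrt(dt) * rng.standard_normal((n_paths, steps))
N_h = dB @ h                              # discretized Ito integral of h dB
rho = np.exp(N_h - 0.5 * np.sum(h**2) * dt)

print(abs(rho.mean() - 1.0) < 0.01)       # mean of the density is one
```

In the discretized model the identity is exact (a lognormal mean computation), so only Monte Carlo noise remains.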

This Radon-Nikodym derivative can be understood as a quotient of Gaussian densities, as the following analogy shows. Let $\gamma$ be the standard Gaussian density on the real line, and let $\gamma_\mu$ be its translate over a distance $\mu \in \mathbb{R}$. Then
$$\frac{\gamma_\mu(x)}{\gamma(x)} = \frac{e^{-\frac12(x-\mu)^2}}{e^{-\frac12 x^2}} = e^{\mu x - \frac12\mu^2}.$$
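The Gaussian-density quotient above is easy to verify numerically; the following sketch (with an arbitrary shift $\mu$) checks the identity pointwise on a grid:

```python
import numpy as np

# Pointwise check of gamma_mu(x) / gamma(x) = exp(mu x - mu^2 / 2).
mu = 1.7
x = np.linspace(-4, 4, 9)

gamma = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
lhs = gamma(x - mu) / gamma(x)            # quotient of translated densities
rhs = np.exp(mu * x - mu**2 / 2)          # closed form from the text

print(np.allclose(lhs, rhs))              # identical up to rounding
```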

9 The linear Kalman-Bucy filter

The Kalman-Bucy filter is an algorithm that filters out random errors occurring in the observation of a stochastic process. When put on a fast computer, the algorithm follows the observations "in real time": at each moment it produces the best possible estimate of the actual value of the process under observation, given certain probabilistic assumptions on the process itself and on the observation errors. The filter is called linear because the estimates depend linearly on the observations. In fact, the model on which the algorithm is based is situated in a Hilbert space of jointly Gaussian random variables with mean zero. In such a space independence is equivalent to orthogonality. For the general nonlinear theory of stochastic filtering we refer to the book of Kallianpur (1980).

The basic idea of linear filtering is best understood by a simple example. From there we shall build up the full model of Kalman and Bucy.

9.1 Example 1: observation of a single random variable

Let $X$ be a Gaussian random variable with mean $0$ and variance $a^2$. Suppose that we cannot observe $X$ directly, but only with some small Gaussian error $W$, independent of $X$, that has mean $0$ and variance $m^2$. So we observe
$$Z := X + W.$$
We are interested in the best estimate $\hat X$ of $X$, based on our observation of $Z$ and the above assumptions. It is reasonable to interpret "best" as the requirement that $\mathbb{E}(X - \hat X)^2$ be minimal, which leads to $\hat X := \mathbb{E}(X \mid \mathcal{F}_Z)$, i.e., the orthogonal projection of $X$ onto $L^2(\Omega, \mathcal{F}_Z, \mathbb{P})$, where $\mathcal{F}_Z$ is the $\sigma$-algebra generated by $Z$. This is equivalent to projecting $X$ onto the one-dimensional space $\mathbb{R}Z$.

Proposition 9.1 The best estimate of $X$ is given by (see Fig. 5)
$$\hat X = \frac{\mathbb{E}(XZ)}{\mathbb{E}(Z^2)}\,Z = \frac{a^2}{a^2 + m^2}\,Z.$$

Proof. Put
$$Y := \frac{a^2}{a^2 + m^2}\,Z.$$
Then, because $\mathbb{E}(XW) = 0$,
$$\mathbb{E}\big[(X - Y)Z\big] = \mathbb{E}\Big[\frac{m^2 X - a^2 W}{a^2 + m^2}\,(X + W)\Big] = 0.$$
So $X - Y \perp Z$. For jointly Gaussian random variables with mean zero, orthogonality (i.e., absence of correlation) implies independence. So $X - Y$ is independent of $Z$.
It follows that $X - Y \perp F$ for all $F \in L^2(\Omega, \mathcal{F}_Z, \mathbb{P})$, and since $Y \in L^2(\Omega, \mathcal{F}_Z, \mathbb{P})$, it follows that $Y = \mathbb{E}(X \mid \mathcal{F}_Z) =: \hat X$ (see Fig. 5). $\square$
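Proposition 9.1 can be confirmed by simulation. The sketch below (illustrative values of $a$ and $m$) estimates the optimal coefficient $\mathbb{E}(XZ)/\mathbb{E}(Z^2)$ empirically and checks the mean-square error $a^2m^2/(a^2+m^2)$ derived below:

```python
import numpy as np

# Monte Carlo check of the best linear estimate of X from Z = X + W.
rng = np.random.default_rng(2)
a, m, n = 2.0, 0.5, 1_000_000

X = a * rng.standard_normal(n)
Z = X + m * rng.standard_normal(n)

c_hat = (X * Z).mean() / (Z**2).mean()          # empirical E[XZ]/E[Z^2]
c = a**2 / (a**2 + m**2)                        # Proposition 9.1
mse = ((X - c * Z) ** 2).mean()                 # mean-square error

print(abs(c_hat - c) < 0.01)
print(abs(mse - a**2 * m**2 / (a**2 + m**2)) < 0.01)
```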

[Figure 5: Relative positions of $X = (a, 0)$, $W = (0, m)$, $Z = (a, m)$ and $\hat X$ in the space spanned by $X$ and $W$. Orthogonality corresponds to independence. Distances in the plane correspond to distances in mean square.]

The mean square distance between $X$ and $\hat X$ can be read off from Fig. 5:
$$\|X - \hat X\|^2 = \|X\|^2 - \|\hat X\|^2 = a^2 - \frac{a^4}{a^2 + m^2} = \frac{a^2 m^2}{a^2 + m^2} = \Big(\frac{1}{a^2} + \frac{1}{m^2}\Big)^{-1}.$$
Note that $\|X - \hat X\| \le \min(a, m)$.

9.2 Example 2: repeated observation of a single random variable

Next, let us measure $X$ several times, with i.i.d. Gaussian observation errors $W_1, W_2, \ldots, W_k$, all independent of $X$. This gives us the sequence of outcomes
$$Z_j = X + W_j, \quad j = 1, \ldots, k.$$
Again we ask for the best estimate $\hat X_k$ based on these observations.

Theorem 9.2 The best estimate of $X$ after $k$ observations is given by
$$\hat X_k := \mathbb{E}\big(X \mid \mathcal{F}_{Z_1,\ldots,Z_k}\big) = \frac{a^2}{a^2 + \frac{m^2}{k}}\cdot\frac1k\sum_{j=1}^k Z_j.$$
Note that $\hat X_k \to X$ almost surely as $k \to \infty$, because $\frac1k\sum_{j=1}^k W_j \to 0$ almost surely as $k \to \infty$. Thus, frequent observation allows us to retrieve $X$ without error.

We shall prove Theorem 9.2 in two ways. First we shall give a direct proof. After that we shall give a recursive proof, suggesting a way to implement this estimation procedure in a machine in real time.

Direct proof. As the problem is linear, and as there is no reason to prefer any one of the $Z_j$ to another, let us try as our best estimate
$$Y_k = \alpha_k\sum_{j=1}^k Z_j$$

for some constant $\alpha_k$. Again, we require $\mathbb{E}\big[(X - Y_k)Z_i\big] = 0$ for $i = 1, \ldots, k$. So, because $\mathbb{E}(XZ_i) = \mathbb{E}(X^2) = a^2$,
$$0 = \mathbb{E}\big[(X - Y_k)Z_i\big] = \mathbb{E}\Big[\Big(X - \alpha_k\sum_{j=1}^k Z_j\Big)Z_i\Big] = a^2 - \alpha_k\sum_{j=1}^k \mathbb{E}(Z_i Z_j).$$
Now
$$\mathbb{E}(Z_i Z_j) = \mathbb{E}\big[(X + W_i)(X + W_j)\big] = \begin{cases} a^2 & \text{if } i \ne j; \\ a^2 + m^2 & \text{if } i = j, \end{cases}$$
which gives
$$\sum_{j=1}^k \mathbb{E}(Z_i Z_j) = ka^2 + m^2.$$
Using this result we obtain
$$\alpha_k = \frac{a^2}{ka^2 + m^2} = \frac1k\cdot\frac{a^2}{a^2 + \frac{m^2}{k}}.$$
Therefore $Y_k = \hat X_k$ if $\alpha_k$ is given the above value. $\square$

Recursive proof. The goal of this proof is to find a recurrence relation in $k$ for $\hat X_k$. This is done by means of a procedure that is essential to linear filtering: one identifies what is new in each of the consecutive observations by means of the so-called innovation process $(N_j)_{j\ge1}$. In terms of $P_j$, the projection onto $\mathcal{Z}_j := \mathrm{span}(Z_1, \ldots, Z_j)$ in the spirit of Fig. 5, i.e. $P_j F = \mathbb{E}\big(F \mid \mathcal{F}_{Z_1,\ldots,Z_j}\big)$, $N_j$ is defined as follows:
$$N_j := Z_j - P_{j-1}Z_j = Z_j - P_{j-1}(X + W_j) = Z_j - P_{j-1}X = Z_j - \hat X_{j-1}.$$
The variables $N_1, \ldots, N_{j-1}$ all lie in the space $\mathcal{Z}_{j-1}$, whereas $N_j$ is orthogonal to it. It follows that the $N_j$ are orthogonal. Since $\dim\mathcal{Z}_j = j$, the variables $N_1, \ldots, N_j$ form an orthogonal basis of $\mathcal{Z}_j$. Therefore
$$\hat X_k := P_k X = \sum_{j=1}^k \frac{\mathbb{E}(XN_j)}{\mathbb{E}(N_j^2)}\,N_j, \tag{9.1}$$
with the coefficients determined by the orthogonality property. Now,
$$\mathbb{E}(XN_j) = \mathbb{E}\big[X(X + W_j - \hat X_{j-1})\big] = \mathbb{E}\big[X(X - \hat X_{j-1})\big] = \mathbb{E}\big[(X - \hat X_{j-1})^2\big] =: \sigma_{j-1}^2,$$
because $\hat X_{j-1} = P_{j-1}X$, so that $\hat X_{j-1} \perp X - \hat X_{j-1}$, and
$$\mathbb{E}(N_j^2) = \mathbb{E}\big[(X + W_j - \hat X_{j-1})^2\big] = m^2 + \sigma_{j-1}^2,$$

because $W_j \perp X$ and $W_j \perp \hat X_{j-1}$. This gives us a recursion for $\hat X_k$:
$$\hat X_{k+1} = \hat X_k + \frac{\sigma_k^2}{m^2 + \sigma_k^2}\big(Z_{k+1} - \hat X_k\big) = \frac{m^2}{m^2 + \sigma_k^2}\,\hat X_k + \frac{\sigma_k^2}{m^2 + \sigma_k^2}\,Z_{k+1},$$
where the r.h.s. is a convex combination. We are able to calculate $\sigma_k^2$ by means of the recursion
$$\sigma_k^2 - \sigma_{k+1}^2 = \|X - \hat X_k\|^2 - \|X - \hat X_{k+1}\|^2 = \|X - P_k X\|^2 - \|X - P_{k+1}X\|^2 = \|P_k X - P_{k+1}X\|^2 = \|\hat X_{k+1} - \hat X_k\|^2 = \frac{\big(\mathbb{E}(XN_{k+1})\big)^2}{\mathbb{E}(N_{k+1}^2)} = \frac{\sigma_k^4}{m^2 + \sigma_k^2}.$$
This equation can be simplified by changing to the reciprocal of $\sigma_{k+1}^2$:
$$\frac{1}{\sigma_{k+1}^2} = \frac{1}{\sigma_k^2}\Big(1 + \frac{\sigma_k^2}{m^2}\Big) = \frac{1}{\sigma_k^2} + \frac{1}{m^2}, \qquad \sigma_0^2 = a^2.$$
We thus find for $\sigma_k$ the relation
$$\sigma_k^2 = \frac{m^2 a^2}{m^2 + k a^2}. \tag{9.2}$$
If (9.2) is substituted into the recursion above, then we find the following recursion relation for $\hat X_{k+1}$:
$$\hat X_{k+1} = \frac{m^2 + ka^2}{m^2 + (k+1)a^2}\,\hat X_k + \frac{a^2}{m^2 + (k+1)a^2}\,Z_{k+1}. \tag{9.3}$$
This relation expresses the new estimate as a convex combination of the old estimate and the new observation, which is clearly useful for calculation in real time. $\square$

9.3 Example 3: continuous observation of an Ornstein-Uhlenbeck process

We now apply the above recursive program to the observation $Z_t$ of an Ornstein-Uhlenbeck process $X_t$:
$$dX_t = \alpha X_t\,dt + \beta\,dB_t, \quad X_0 \text{ Gaussian},\ X_0 \perp (B_t)_{t\ge0}, \tag{9.4}$$
and
$$dZ_t = \gamma X_t\,dt + \delta\,dW_t, \quad Z_0 = 0. \tag{9.5}$$
Here $B_t$ and $W_t$ are independent Brownian motions, and $\alpha, \beta, \gamma, \delta \in \mathbb{R}$ are parameters. Note that $W_t$ plays the same role as the i.i.d. random variables $W_j$ in the previous example, and $Z_t$ plays the role of $Z_j$. The parameters $\alpha$ and $\beta$ are the drift and the diffusion coefficients of the observed process $X_t$. We can think of $\gamma$ as some kind of coupling constant that indicates how strongly the process $Z_t$ is influenced by $X_t$. The parameter $\delta$ plays the role of $m$.

Theorem 9.3 (Linear Kalman-Bucy filter) Let the Ornstein-Uhlenbeck process $X_t$ and its observation process $Z_t$ be given by (9.4) and (9.5). Then the projection $\hat X_t$ of $X_t$ onto $\mathcal{Z}_t$, the closed linear span of $(Z_s)_{s\le t}$, satisfies the stochastic differential equation
$$d\hat X_t = \Big(\alpha - \frac{\gamma^2}{\delta^2}\sigma_t^2\Big)\hat X_t\,dt + \frac{\gamma}{\delta^2}\sigma_t^2\,dZ_t, \tag{9.6}$$

where $\sigma_t^2$ satisfies the ordinary differential equation
$$\frac{d}{dt}\sigma_t^2 = 2\alpha\sigma_t^2 + \beta^2 - \frac{\gamma^2}{\delta^2}\sigma_t^4. \tag{9.7}$$
Equation (9.6) is the analogue of the recursion (9.3) in Example 2 above. Equation (9.7) will be solved at the end of this section. We shall build up the proof of Theorem 9.3 via several lemmas.

In order to calculate $\hat X_t$, we first identify the innovation process of $Z_t$, as the closest analogue to an orthogonal basis. This goes as follows. Let
$$V_t := W_t + \frac{\gamma}{\delta}\int_0^t\big(X_s - \hat X_s\big)\,ds. \tag{9.8}$$
Then, since $Z_t = \delta W_t + \gamma\int_0^t X_s\,ds$, we can write
$$V_t = \frac1\delta\Big(Z_t - \gamma\int_0^t \hat X_s\,ds\Big).$$
Hence
$$\delta\,dV_t = dZ_t - \gamma\hat X_t\,dt = \big(1\!\mathrm{I} - P_t\big)\,dZ_t, \tag{9.9}$$
provided we take $P_t$ the projection onto $\mathcal{Z}_t$ just before $dt$, so that $P_t\,dW_t = 0$. Because of (9.9), $V_t$ indeed deserves the name of innovation process: it isolates from the observations that part which is independent of the past.

The idea of the proof of Theorem 9.3 is now to write $\hat X_t$ as a stochastic integral over the innovation process $V_t$, analogous to the expansion (9.1) in the orthogonal basis $(N_j)$. This will be done in Lemma 9.5 and requires the following fact.

Lemma 9.4 $V_t$ is a Brownian motion.

Proof. The family $\{V_t\}_{t\in[0,T]}$ is jointly Gaussian. It therefore suffices to show that the covariance is the same as for a Brownian motion (cf. (2.4)):
$$\mathbb{E}(V_t V_s) = s \wedge t, \quad s, t \ge 0.$$
First note that $V_t \in \mathcal{Z}_t$, and that $V_s - V_t \perp \mathcal{Z}_t$ for $s \ge t$, because
$$V_s - V_t = \int_t^s dV_u = \int_t^s\Big[dW_u + \frac\gamma\delta\big(X_u - \hat X_u\big)\,du\Big]$$
is orthogonal to $\mathcal{Z}_t$ by construction. Taking expectations we get
$$\mathbb{E}(V_t V_s) - \mathbb{E}(V_t^2) = \mathbb{E}\big[V_t(V_s - V_t)\big] = 0,$$
so $\mathbb{E}(V_t V_s) = \mathbb{E}(V_t^2)$. We next calculate the derivative of $V_t^2$:
$$dV_t^2 = 2V_t\,dV_t + (dV_t)^2 = 2V_t\,dV_t + dt.$$
Taking expectations we get
$$d\,\mathbb{E}(V_t^2) = 2\,\mathbb{E}(V_t\,dV_t) + dt = dt,$$
because $V_t \perp dV_t$. Integration now yields the desired result: for $s \ge t$,
$$\mathbb{E}(V_t V_s) = \mathbb{E}(V_t^2) = t = s \wedge t. \qquad\square$$

We now make the following claim, which is the analogue of (9.1).

Lemma 9.5 For all $F \in L^2(\Omega, \mathcal{F}, \mathbb{P})$, the projection $P_t$ onto $\mathcal{Z}_t$ acting on $F$ is given by
$$P_t F = \int_0^t f'(s)\,dV_s, \quad\text{where } f(s) := \mathbb{E}(FV_s).$$
In particular, $f' \in L^2([0,t])$.

The proof is deferred to later. An explanation for this lemma can be given by comparing the integral with the sum
$$P_k F = \sum_{j=1}^k \mathbb{E}(FN_j)\,N_j,$$
where $(N_j)$ is an orthonormal basis ($\mathbb{E}(N_j^2) = 1$). The continuous analogue is the formal expression
$$P_t F = \int_0^t \mathbb{E}(F N_s)\,N_s\,ds = \int_0^t \frac{d}{ds}\mathbb{E}(FV_s)\,dV_s, \quad\text{with } N_s = \frac{dV_s}{ds}.$$
To prove Lemma 9.5 we need the following technical lemma.

Lemma 9.6 The following three spaces are equal for all $t \ge 0$:
(i) $\mathcal{Z}_t$;
(ii) $\big\{\int_0^t f(s)\,dZ_s \mid f \in L^2([0,t])\big\}$;
(iii) $\big\{\int_0^t g(s)\,dV_s \mid g \in L^2([0,t])\big\}$.

Proof. 1. First we show that (ii) is a closed linear space. Let $\mathcal{Z}$ be the operator $L^2([0,t]) \to L^2(\Omega, \mathbb{P})$ defined as $\mathcal{Z}f = \int_0^t f(s)\,dZ_s$. Then, because $(dZ_s)^2 = \delta^2\,ds$,
$$\delta^2\|f\|^2 \le \|\mathcal{Z}f\|^2 \le \Big(\delta^2 + \gamma^2\int_0^t \mathbb{E}(X_s^2)\,ds\Big)\|f\|^2.$$
Indeed, by the Itô isometry and the independence of $X$ and $W$, we have
$$\mathbb{E}\Big(\int_0^t f(s)\,dZ_s\Big)^2 = \mathbb{E}\Big(\gamma\int_0^t f(s)X_s\,ds + \delta\int_0^t f(s)\,dW_s\Big)^2 = \gamma^2\,\mathbb{E}\Big(\int_0^t f(s)X_s\,ds\Big)^2 + \delta^2\|f\|^2.$$
The first term in the r.h.s. lies between $0$ and $\gamma^2\int_0^t f(s)^2\,ds\int_0^t\mathbb{E}(X_s^2)\,ds$. This implies that $\mathcal{Z} : L^2([0,t]) \to \mathrm{Ran}\,\mathcal{Z}$ is bounded and has a bounded inverse, so $\mathrm{Ran}\,\mathcal{Z}$ is closed.

2. Clearly, $\mathcal{Z}_t$ is contained in (ii) because, for all $s \le t$,
$$Z_s = \int_0^t 1\!\mathrm{I}_{[0,s]}(u)\,dZ_u = \mathcal{Z}\big(1\!\mathrm{I}_{[0,s]}\big) \in \mathrm{Ran}\,\mathcal{Z}.$$
Conversely, suppose that $f = \lim_{j\to\infty} f_j$ with $f_j$ simple. Then $\mathcal{Z}f_j \in \mathrm{span}(Z_s)_{s\le t}$, which means that $\mathcal{Z}f_j$ is a linear combination of $Z_s$, $s \le t$. Hence $\mathcal{Z}f \in \mathcal{Z}_t$, so that also $\mathrm{Ran}\,\mathcal{Z} \subset \mathcal{Z}_t$. Thus (i) = (ii).

3. Obviously, (iii) is closed because $\mathcal{V} : L^2([0,t]) \to L^2(\Omega, \mathbb{P}) : g \mapsto \int_0^t g(s)\,dV_s$ is an isometry. We know that $\mathrm{Ran}\,\mathcal{V} \subset \mathrm{Ran}\,\mathcal{Z}$, since $V_s \in \mathcal{Z}_t = \mathrm{Ran}\,\mathcal{Z}$ for all $s \le t$. To show equality of (i) and (iii) we must show that $Z_s \in \mathrm{Ran}\,\mathcal{V}$ for all $s \le t$. Now, omitting $\gamma$ and $\delta$ for the moment, we may write
$$dZ_s = dV_s + \hat X_s\,ds = dV_s + \Big(\int_0^s h_s(r)\,dZ_r\Big)\,ds$$
for some $h_s \in L^2([0,s])$, because $\hat X_s$ lies in the closed span of $(Z_r)_{r\le s}$, so that, for all $g \in L^2([0,t])$,
$$\int_0^t g(s)\,dV_s = \int_0^t g(s)\,dZ_s - \int_0^t\Big(\int_0^s h_s(r)\,dZ_r\Big)g(s)\,ds = \int_0^t\Big(g(r) - \int_r^t h_s(r)\,g(s)\,ds\Big)\,dZ_r.$$
By the theory of Volterra integral equations, it is possible, for any $u \le t$ and $(h_s)_{s\le t}$, to choose $g$ such that
$$g(r) - \int_r^t h_s(r)\,g(s)\,ds = 1\!\mathrm{I}_{[0,u]}(r), \quad r \le t.$$
Then
$$Z_u = \int_0^t 1\!\mathrm{I}_{[0,u]}(r)\,dZ_r = \int_0^t g(s)\,dV_s \in \mathrm{Ran}\,\mathcal{V}. \qquad\square$$

Proof of Lemma 9.5. Use the equality of (i) and (iii) from Lemma 9.6 to obtain a function $g \in L^2([0,t])$ such that $P_t F = \int_0^t g(s)\,dV_s$. Then, for $u \le t$, we find, by the isometric property of integration w.r.t. $V_t$ (Lemma 9.4),
$$f(u) := \mathbb{E}(FV_u) = \mathbb{E}\big[(P_t F)\,V_u\big] = \mathbb{E}\Big[\int_0^t g(s)\,dV_s\int_0^t 1\!\mathrm{I}_{[0,u]}(r)\,dV_r\Big] = \big\langle g,\,1\!\mathrm{I}_{[0,u]}\big\rangle = \int_0^u g(r)\,dr,$$
which implies $f' = g$. We thus conclude that $f' \in L^2([0,t])$ and $P_t F = \int_0^t f'(s)\,dV_s$. $\square$

Now we can prove Theorem 9.3.

Proof. By Lemma 9.5 we may write
$$\hat X_t = P_t X_t = \int_0^t \frac{\partial f_t}{\partial s}(s)\,dV_s, \quad\text{where } f_t(s) := \mathbb{E}(X_t V_s).$$
Let us calculate $f_t(s)$ for $s \le t$, using the solution of (9.4) for $X_t$, which can be written as
$$X_t = \beta\int_0^t e^{\alpha(t-s)}\,dB_s + e^{\alpha t} X_0.$$
We have:
$$f_t(s) = \mathbb{E}(X_t V_s) = \mathbb{E}\Big[X_t\Big(W_s + \frac\gamma\delta\int_0^s\big(X_u - \hat X_u\big)\,du\Big)\Big] = \frac\gamma\delta\int_0^s \mathbb{E}\big[X_t\big(X_u - \hat X_u\big)\big]\,du = \frac\gamma\delta\int_0^s e^{\alpha(t-u)}\,\mathbb{E}\big[X_u\big(X_u - \hat X_u\big)\big]\,du = \frac\gamma\delta\int_0^s e^{\alpha(t-u)}\,\mathbb{E}\big[\big(X_u - \hat X_u\big)^2\big]\,du.$$
Differentiating this expression w.r.t. $s$ we get
$$\frac{\partial f_t}{\partial s}(s) = \frac\gamma\delta\,e^{\alpha(t-s)}\,\mathbb{E}\big[\big(X_s - \hat X_s\big)^2\big] =: \frac\gamma\delta\,e^{\alpha(t-s)}\,\sigma_s^2.$$
This means that $\hat X_t$ satisfies the stochastic differential equation:
$$d\hat X_t = \Big[\int_0^t \frac{\partial}{\partial t}\frac{\partial f_t}{\partial s}(s)\,dV_s\Big]\,dt + \frac{\partial f_t}{\partial s}(t)\,dV_t = \alpha\hat X_t\,dt + \frac\gamma\delta\,\sigma_t^2\cdot\frac1\delta\big(dZ_t - \gamma\hat X_t\,dt\big) = \Big(\alpha - \frac{\gamma^2}{\delta^2}\sigma_t^2\Big)\hat X_t\,dt + \frac{\gamma}{\delta^2}\sigma_t^2\,dZ_t,$$
which is (9.6). To finish the proof of Theorem 9.3 we must find an expression for $\sigma_t^2$, given by
$$\sigma_t^2 := \mathbb{E}\big[\big(X_t - \hat X_t\big)^2\big] = \mathbb{E}(X_t^2) - \mathbb{E}(\hat X_t^2),$$
where (use Lemma 9.5)
$$\mathbb{E}(X_t^2) = e^{2\alpha t}\,\mathbb{E}(X_0^2) + \beta^2\int_0^t e^{2\alpha(t-s)}\,ds \implies \frac{d}{dt}\,\mathbb{E}(X_t^2) = 2\alpha\,\mathbb{E}(X_t^2) + \beta^2,$$

and
$$\frac{d}{dt}\,\mathbb{E}(\hat X_t^2) = \frac{d}{dt}\int_0^t\Big(\frac{\partial f_t}{\partial s}(s)\Big)^2\,ds = \int_0^t 2\,\frac{\partial f_t}{\partial s}(s)\,\frac{\partial}{\partial t}\frac{\partial f_t}{\partial s}(s)\,ds + \Big(\frac{\partial f_t}{\partial s}(t)\Big)^2 = 2\alpha\int_0^t\Big(\frac{\partial f_t}{\partial s}(s)\Big)^2\,ds + \frac{\gamma^2}{\delta^2}\sigma_t^4 = 2\alpha\,\mathbb{E}(\hat X_t^2) + \frac{\gamma^2}{\delta^2}\sigma_t^4.$$
It follows that $\sigma_t^2$ satisfies (9.7), and Theorem 9.3 is proved. $\square$

In the case treated above, where $\alpha, \beta, \gamma$ and $\delta$ are constants, (9.7) can be solved with $\sigma_0 = a$ to give
$$\sigma_t^2 = \frac{\alpha_1 - K\alpha_2\exp\big(\gamma^2(\alpha_2 - \alpha_1)t/\delta^2\big)}{1 - K\exp\big(\gamma^2(\alpha_2 - \alpha_1)t/\delta^2\big)},$$
where
$$\alpha_1 = \frac{1}{\gamma^2}\Big(\alpha\delta^2 - \delta\sqrt{\alpha^2\delta^2 + \gamma^2\beta^2}\Big), \qquad \alpha_2 = \frac{1}{\gamma^2}\Big(\alpha\delta^2 + \delta\sqrt{\alpha^2\delta^2 + \gamma^2\beta^2}\Big), \qquad K = \frac{a^2 - \alpha_1}{a^2 - \alpha_2}.$$
This is easily checked by substitution. The formulas are not easy, but they are explicit. We note that Theorem 9.3 remains valid when $\alpha, \beta, \gamma$ and $\delta$ are allowed to depend on time.
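The substitution check mentioned above can be carried out numerically. The sketch below (illustrative parameter values, including a mean-reverting drift $\alpha < 0$) integrates the Riccati equation (9.7) with a crude Euler scheme and compares the result with the closed form for $\sigma_t^2$:

```python
import numpy as np

# Verify numerically that the closed form solves (9.7):
#   d/dt sigma^2 = 2*alpha*sigma^2 + beta^2 - (gamma^2/delta^2)*sigma^4
alpha, beta, gamma, delta, a = -0.5, 1.0, 1.0, 0.7, 0.9

root = delta * np.sqrt(alpha**2 * delta**2 + gamma**2 * beta**2)
a1 = (alpha * delta**2 - root) / gamma**2
a2 = (alpha * delta**2 + root) / gamma**2
K = (a**2 - a1) / (a**2 - a2)

def closed_form(t):
    E = K * np.exp(gamma**2 * (a2 - a1) * t / delta**2)
    return (a1 - a2 * E) / (1 - E)

# Euler integration of (9.7) from sigma_0^2 = a^2
dt, T = 1e-5, 2.0
s2 = a**2
for _ in range(int(T / dt)):
    s2 += dt * (2 * alpha * s2 + beta**2 - gamma**2 / delta**2 * s2**2)

print(abs(s2 - closed_form(T)) < 1e-3)   # Euler and closed form agree
```

Note also that as $t \to \infty$ the closed form tends to the stable root $\alpha_2$ of the right-hand side of (9.7), which is the stationary filtering error.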

10 The Black and Scholes option pricing formula

In 1973 Myron Scholes and Fischer Black published a paper (after two rejections) containing a formula for the fair price of a European call option on stocks. This formula now forms the basis of pricing practice on the option market. It is a fine example of applied stochastic analysis, and marks the beginning of an era where banks employ probabilists that are well versed in Itô calculus.

A European call option on stocks is the right, but not the obligation, to buy at some future time $T$ a share of stock at price $K$. Both $T$ and $K$ are fixed ahead. Since at time $T$ the value of the stock may be higher than $K$, such a right-to-buy in itself has a value. The problem of this section is: what is a reasonable price for this right?

10.1 Stocks, bonds and stock options

Let $(B_t)_{t\in[0,T]}$ be standard Brownian motion on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with filtration $(\mathcal{F}_t)_{t\in[0,T]}$. We model the price of one share of a certain stock on the financial market by an Itô-diffusion $(S_t)_{t\in[0,T]}$ on the interval $(0,\infty)$, described by the stochastic differential equation
$$dS_t = S_t\big(\mu\,dt + \sigma\,dB_t\big). \tag{10.1}$$
The constant $\mu \in \mathbb{R}$ (usually positive) describes the relative rate of return of the shares. The constant $\sigma > 0$ is called the volatility and measures the size of the random fluctuations in the stock value. Shares of stock represent some real asset, for example partial ownership of a company. They can be bought or sold at any time $t$ at the current price $S_t$.

Let us suppose that on the market certain other securities, called bonds, are available that yield a riskless return rate $r$. This is comparable to the interest on bank accounts. The value $\beta_t$ of a bond satisfies the differential equation
$$d\beta_t = \beta_t\,r\,dt. \tag{10.2}$$
The question we are addressing is: how much would we be willing to pay at time $0$ for the right to buy at time $T$ one share of stock at the price $K > 0$ fixed ahead? Such a right is


More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

MA8109 Stochastic Processes in Systems Theory Autumn 2013

MA8109 Stochastic Processes in Systems Theory Autumn 2013 Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form

More information

Stochastic Integration and Stochastic Differential Equations: a gentle introduction

Stochastic Integration and Stochastic Differential Equations: a gentle introduction Stochastic Integration and Stochastic Differential Equations: a gentle introduction Oleg Makhnin New Mexico Tech Dept. of Mathematics October 26, 27 Intro: why Stochastic? Brownian Motion/ Wiener process

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim ***

Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No. 4, December 26 GIRSANOV THEOREM FOR GAUSSIAN PROCESS WITH INDEPENDENT INCREMENTS Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** Abstract.

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

Stochastic differential equation models in biology Susanne Ditlevsen

Stochastic differential equation models in biology Susanne Ditlevsen Stochastic differential equation models in biology Susanne Ditlevsen Introduction This chapter is concerned with continuous time processes, which are often modeled as a system of ordinary differential

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

Brownian Motion and Conditional Probability

Brownian Motion and Conditional Probability Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

An Overview of the Martingale Representation Theorem

An Overview of the Martingale Representation Theorem An Overview of the Martingale Representation Theorem Nuno Azevedo CEMAPRE - ISEG - UTL nazevedo@iseg.utl.pt September 3, 21 Nuno Azevedo (CEMAPRE - ISEG - UTL) LXDS Seminar September 3, 21 1 / 25 Background

More information

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES STEFAN TAPPE Abstract. In a work of van Gaans (25a) stochastic integrals are regarded as L 2 -curves. In Filipović and Tappe (28) we have shown the connection

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Tools of stochastic calculus

Tools of stochastic calculus slides for the course Interest rate theory, University of Ljubljana, 212-13/I, part III József Gáll University of Debrecen Nov. 212 Jan. 213, Ljubljana Itô integral, summary of main facts Notations, basic

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Part III Stochastic Calculus and Applications

Part III Stochastic Calculus and Applications Part III Stochastic Calculus and Applications Based on lectures by R. Bauerschmidt Notes taken by Dexter Chua Lent 218 These notes are not endorsed by the lecturers, and I have modified them often significantly

More information

1.1 Definition of BM and its finite-dimensional distributions

1.1 Definition of BM and its finite-dimensional distributions 1 Brownian motion Brownian motion as a physical phenomenon was discovered by botanist Robert Brown as he observed a chaotic motion of particles suspended in water. The rigorous mathematical model of BM

More information

Malliavin Calculus in Finance

Malliavin Calculus in Finance Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

Problem List MATH 5143 Fall, 2013

Problem List MATH 5143 Fall, 2013 Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018

EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018 EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018 While these notes are under construction, I expect there will be many typos. The main reference for this is volume 1 of Hörmander, The analysis of liner

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

One-Parameter Processes, Usually Functions of Time

One-Parameter Processes, Usually Functions of Time Chapter 4 One-Parameter Processes, Usually Functions of Time Section 4.1 defines one-parameter processes, and their variations (discrete or continuous parameter, one- or two- sided parameter), including

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Itô s formula Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Itô s formula Probability Theory

More information

EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS

EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS Qiao, H. Osaka J. Math. 51 (14), 47 66 EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS HUIJIE QIAO (Received May 6, 11, revised May 1, 1) Abstract In this paper we show

More information

MA50251: Applied SDEs

MA50251: Applied SDEs MA5251: Applied SDEs Alexander Cox February 12, 218 Preliminaries These notes constitute the core theoretical content of the unit MA5251, which is a course on Applied SDEs. Roughly speaking, the aim of

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

FE 5204 Stochastic Differential Equations

FE 5204 Stochastic Differential Equations Instructor: Jim Zhu e-mail:zhu@wmich.edu http://homepages.wmich.edu/ zhu/ January 20, 2009 Preliminaries for dealing with continuous random processes. Brownian motions. Our main reference for this lecture

More information

On the quantiles of the Brownian motion and their hitting times.

On the quantiles of the Brownian motion and their hitting times. On the quantiles of the Brownian motion and their hitting times. Angelos Dassios London School of Economics May 23 Abstract The distribution of the α-quantile of a Brownian motion on an interval [, t]

More information

17. Convergence of Random Variables

17. Convergence of Random Variables 7. Convergence of Random Variables In elementary mathematics courses (such as Calculus) one speaks of the convergence of functions: f n : R R, then lim f n = f if lim f n (x) = f(x) for all x in R. This

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Extension of continuous functions in digital spaces with the Khalimsky topology

Extension of continuous functions in digital spaces with the Khalimsky topology Extension of continuous functions in digital spaces with the Khalimsky topology Erik Melin Uppsala University, Department of Mathematics Box 480, SE-751 06 Uppsala, Sweden melin@math.uu.se http://www.math.uu.se/~melin

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion IEOR 471: Stochastic Models in Financial Engineering Summer 27, Professor Whitt SOLUTIONS to Homework Assignment 9: Brownian motion In Ross, read Sections 1.1-1.3 and 1.6. (The total required reading there

More information

OPTIMAL STOPPING OF A BROWNIAN BRIDGE

OPTIMAL STOPPING OF A BROWNIAN BRIDGE OPTIMAL STOPPING OF A BROWNIAN BRIDGE ERIK EKSTRÖM AND HENRIK WANNTORP Abstract. We study several optimal stopping problems in which the gains process is a Brownian bridge or a functional of a Brownian

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Stochastic Differential Equations

Stochastic Differential Equations Chapter 5 Stochastic Differential Equations We would like to introduce stochastic ODE s without going first through the machinery of stochastic integrals. 5.1 Itô Integrals and Itô Differential Equations

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

16.1. Signal Process Observation Process The Filtering Problem Change of Measure

16.1. Signal Process Observation Process The Filtering Problem Change of Measure 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral. They owe a great deal to Dan Crisan s Stochastic Calculus and

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Maximum Process Problems in Optimal Control Theory

Maximum Process Problems in Optimal Control Theory J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

Lecture 1: Brief Review on Stochastic Processes

Lecture 1: Brief Review on Stochastic Processes Lecture 1: Brief Review on Stochastic Processes A stochastic process is a collection of random variables {X t (s) : t T, s S}, where T is some index set and S is the common sample space of the random variables.

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

The Uniform Integrability of Martingales. On a Question by Alexander Cherny

The Uniform Integrability of Martingales. On a Question by Alexander Cherny The Uniform Integrability of Martingales. On a Question by Alexander Cherny Johannes Ruf Department of Mathematics University College London May 1, 2015 Abstract Let X be a progressively measurable, almost

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Jae Gil Choi and Young Seo Park

Jae Gil Choi and Young Seo Park Kangweon-Kyungki Math. Jour. 11 (23), No. 1, pp. 17 3 TRANSLATION THEOREM ON FUNCTION SPACE Jae Gil Choi and Young Seo Park Abstract. In this paper, we use a generalized Brownian motion process to define

More information

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model

Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Nested Uncertain Differential Equations and Its Application to Multi-factor Term Structure Model Xiaowei Chen International Business School, Nankai University, Tianjin 371, China School of Finance, Nankai

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

CLASSICAL PROBABILITY MODES OF CONVERGENCE AND INEQUALITIES

CLASSICAL PROBABILITY MODES OF CONVERGENCE AND INEQUALITIES CLASSICAL PROBABILITY 2008 2. MODES OF CONVERGENCE AND INEQUALITIES JOHN MORIARTY In many interesting and important situations, the object of interest is influenced by many random factors. If we can construct

More information