Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014

Outline of the course
1. Stochastic processes in continuous time.
2. Brownian motion.
3. Itô integral: preliminaries.
4. Brownian motion calculus.
5. Itô's formula.
6. Stochastic differential equations.
7. Strong and weak solutions of SDEs.
8. Change of measure.

Itô integral: preliminaries
3.1. Functions of bounded variation.
3.2. Lebesgue-Stieltjes/Riemann-Stieltjes integral.
3.3. The stochastic integral ∫_0^t B(s) dB(s).
3.4. Integration of simple previsible processes.
3.5. Some martingale results.

Functions of bounded variation
Definition
The variation of a real function g over the interval [a, b] is defined as
V_g([a, b]) := sup Σ_{i=1}^n |g(t_i) - g(t_{i-1})|,
where the supremum is taken over all partitions a = t_0 < t_1 < ... < t_n = b. If V_g(t) := V_g([0, t]) < ∞ for all t ≥ 0, then g is said to be a function of finite variation.

Functions of bounded variation
Example
If g is increasing then V_g(t) = g(t) - g(0). If g is decreasing then V_g(t) = g(0) - g(t). (Exercise.)
Example
If g is differentiable with continuous derivative then V_g(t) = ∫_0^t |g'(s)| ds. (Exercise.)
Example
The function g(t) = t sin(1/t) for t > 0 and g(0) = 0 is continuous, and differentiable at all points except 0, but has infinite variation. (Exercise.)
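A small numerical sketch (illustrative only, not part of the original slides; the helper function variation and the two test functions are choices made here): summing |g(t_i) - g(t_{i-1})| over finer and finer uniform partitions of [0, 1] settles near ∫_0^1 |g'(s)| ds for a smooth g, while for g(t) = t sin(1/t) the sums keep growing.

```python
# Sketch, not from the lecture: approximate V_g([0, 1]) by summing |g(t_i) - g(t_{i-1})|
# over uniform partitions; refining the partition approaches the variation from below.
import numpy as np

def variation(g, t, n):
    grid = np.linspace(0.0, t, n + 1)
    return np.sum(np.abs(np.diff(g(grid))))

def g_smooth(s):                      # differentiable with continuous derivative
    return np.sin(5.0 * s)

def g_rough(s):                       # t*sin(1/t), continuous but of infinite variation
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    out[s > 0] = s[s > 0] * np.sin(1.0 / s[s > 0])
    return out

mid = np.linspace(0.0, 1.0, 100_001)[:-1] + 0.5e-5
target = np.mean(np.abs(5.0 * np.cos(5.0 * mid)))   # midpoint rule for ∫_0^1 |g'(s)| ds (smooth g)
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n={n:6d}  smooth: {variation(g_smooth, 1.0, n):.4f} (target {target:.4f})"
          f"   t*sin(1/t): {variation(g_rough, 1.0, n):.3f}")
```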

Functions of bounded variation
Lemma (Jordan decomposition)
Any function g of finite variation can be written as the difference of two monotone functions. If g is right-continuous, then the two monotone functions can also be chosen right-continuous.
Proof. Take, for example, h_1(t) = (1/2)(V_g(t) + g(t)) and h_2(t) = (1/2)(V_g(t) - g(t)).
Remark
(1) The representation is not unique.
(2) If g(0) = 0 and h_1, h_2 are as above, then 0 ≤ h_i(t) ≤ V_g(t).

Lebesgue-Stieltjes/Riemann-Stieltjes integral
Suppose that g is right-continuous and of bounded variation. Then, with h_1, h_2 as before, we can define the two (σ-finite) measures
µ_1((a, b]) := h_1(b) - h_1(a) and µ_2((a, b]) := h_2(b) - h_2(a).
Then for f : [0, ∞) → R the Lebesgue-Stieltjes integral is defined as
∫_(0,t] f dg := ∫_(0,t] f dµ_1 - ∫_(0,t] f dµ_2,
provided ∫_(0,t] |f| dµ_i < ∞ for i = 1, 2. One can show that this definition is independent of the choice of h_1 and h_2.

Lebesgue-Stieltjes/Riemann-Stieltjes integral
If f is continuous, then the Lebesgue-Stieltjes integral coincides with the Riemann-Stieltjes integral
∫_0^t f dg = lim Σ_k f(t_k*) (g(t_k) - g(t_{k-1})), with t_{k-1} ≤ t_k* ≤ t_k,
where 0 = t_0 < t_1 < ... < t_n = t is a partition whose mesh tends to zero.
Remark
(1) The limit above is independent of the choice of the t_k*.
(2) One can show: if the Riemann-Stieltjes integral exists for all continuous f, then g is of finite variation.

Lebesgue-Stieltjes/Riemann-Stieltjes integral
(3) One can show: the Riemann-Stieltjes integral exists if one of the two functions f or g is continuous and the other one is of bounded variation.
(4) Due to (3): if f is of bounded variation and B a BM, then ∫_0^t f(s) dB_s(ω) is defined for all ω, i.e. we could define the integral pathwise.
(5) Integration by parts: suppose that f and g are continuous and of bounded variation. Then
f(t)g(t) = f(0)g(0) + ∫_(0,t] f dg + ∫_(0,t] g df.
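A quick numerical illustration of Remark (1) (a sketch, not part of the lecture; the helper rs_sum and the test functions are choices made here): for continuous f = cos and the bounded-variation function g(s) = s², the Riemann-Stieltjes sums converge to ∫_0^1 2s cos(s) ds no matter where f is evaluated inside each subinterval. Contrast this with the Brownian case on the next slides.

```python
# Sketch, not from the lecture: Riemann-Stieltjes sums sum_k f(t_k*) (g(t_k) - g(t_{k-1}))
# for f continuous and g of bounded variation; the evaluation point t_k* does not matter.
import numpy as np

def rs_sum(f, g, t, n, alpha):
    grid = np.linspace(0.0, t, n + 1)
    left, right = grid[:-1], grid[1:]
    nodes = alpha * left + (1.0 - alpha) * right   # t_k* = alpha*t_{k-1} + (1-alpha)*t_k
    return np.sum(f(nodes) * (g(right) - g(left)))

f = np.cos
g = lambda s: s ** 2                               # dg = 2s ds
exact = 2.0 * (np.sin(1.0) + np.cos(1.0) - 1.0)    # ∫_0^1 2s cos(s) ds
for alpha in (1.0, 0.5, 0.0):                      # left endpoint, midpoint, right endpoint
    print(f"alpha={alpha}: {rs_sum(f, g, 1.0, 100_000, alpha):.6f}   (exact {exact:.6f})")
```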

The stochastic integral t 0 B sdb s One of the main targets of this course is to define t 0 X s db s. By the previous statement, if X s (ω) were of bounded variation for all ω, we could define the integral pathwise as a Riemann-Stieltjes integral. However, this would already exclude the case where X s = B s as is implied by the following result. Lemma A BM path is a.s. not of bounded variation. Proof. Fix ω and write B t for B t (ω). Then if V B( ) (t) were < (B tk B tk 1 ) 2 V B( ) (t) sup B tk B tk 1 0. 1 k n

The stochastic integral t 0 B sdb s Let us check what happens. B tk 1 (B tk B tk 1 ) = 1 2 = 1 2 (B tk + B tk 1 )(B tk B tk 1 ) 1 2 (Bt 2 k Bt 2 k 1 ) 1 2 = 1 2 B2 t 1 2 (B tk B tk 1 )(B tk B tk 1 ) (B tk B tk 1 ) 2 (B tk B tk 1 ) 2 P 1 2 (B2 t B t ) = 1 2 (B2 t t). Compared to partial integration, we get extra term t/2.

The stochastic integral t 0 B sdb s Now consider n B t k (B t k B tk 1 ), with Then B t k (B tk B tk 1 ) = t k = αt k 1 + (1 α)t k, 0 α 1. B tk 1 (B tk B tk 1 ) + (B t k B tk 1 )(B tk B tk 1 ). The first summand converges to 1 2 (B2 t t) as we have shown. The second, converges to (1 α)t (exercise).

The stochastic integral t 0 B sdb s Hence n B t k (B t k B tk 1 ), with t k = αt k 1 + (1 α)t k, 0 α 1. converges (in probability) to 1 2 B2 t + ( ) 1 2 α t. Unlike the usual Riemann-Stieltjes integral, this limit depends on the choice of the nodes t k.

The stochastic integral t 0 B sdb s A way out of this problem is to fix the nodes. Different choices of α yield different integrals. 1. The choice α = 0 gives rise to the Itô integral. 2. Another popular choice is α = 1/2, which corresponds to the Stratonovich integral. As we will see, the Itô integral has the important advantage, that the resulting integrated process becomes a (local) martingale, which is a fundamental property.

Integration of simple previsible processes
Our previous investigations motivate defining the stochastic integral as a Riemann-Stieltjes-type sum in which the process is evaluated at the left endpoint of each interval.
Definition (Simple previsible)
A process Y is called simple previsible with respect to a filtration (F_t)_{t≥0} if it can be represented as
Y_t = ξ_0 I{t = 0} + Σ_{k=1}^n ξ_k I{t ∈ (t_{k-1}, t_k]},
where 0 = t_0 ≤ t_1 ≤ ... ≤ t_n < ∞, the ξ_k are bounded, F_{t_{k-1}}-measurable random variables, and ξ_0 is a bounded, F_0-measurable random variable.
In the sequel (F_t)_{t≥0} will be the filtration of a BM B(t), satisfying the usual conditions.

Integration of simple previsible processes
Example
Define Y_t = ξ_k := max{min{B_{t_{k-1}}, c}, -c} (c > 0) if t_{k-1} < t ≤ t_k, k ≤ n, and Y_0 = 0. Then Y is simple previsible.
We denote by E the set of simple previsible processes (SPPs).
Definition (Stochastic integral of SPPs)
Let Y ∈ E. Then the stochastic integral of Y with respect to B is defined as
I_B(Y) = ∫_0^∞ Y_s dB_s := Σ_{k=1}^n ξ_k (B_{t_k} - B_{t_{k-1}}).
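As an illustration (a sketch; the function simple_previsible_integral and the discretisation are choices made here, not the lecture's notation), the stochastic integral of a simple previsible process is just a finite sum of F_{t_{k-1}}-measurable weights times BM increments.

```python
# Sketch, not from the lecture: I_B(Y) = sum_k xi_k (B(t_k) - B(t_{k-1})) for a simple
# previsible process; each xi_k uses only information available at the left endpoint t_{k-1}.
import numpy as np

def simple_previsible_integral(B, xi):
    # xi[k] must be computable from the path up to the left endpoint of the
    # (k+1)-th interval (previsibility).
    return np.sum(np.asarray(xi) * np.diff(B))

rng = np.random.default_rng(2)
n, t, c = 1_000, 1.0, 2.0
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n), n))])
xi = np.clip(B[:-1], -c, c)          # xi_k = max{min{B(t_{k-1}), c}, -c}, as in the example above
print("I_B(Y) =", simple_previsible_integral(B, xi))
```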

Integration of simple previsible processes
It is trivial to see that E(I_B(Y))^2 < ∞. Hence I_B : E → L^2 = L^2(Ω, A, P). Here are some other simple properties:
Well defined: The definition is independent of the (non-unique) representation of Y.
Linear: If X and Y are simple previsible and α and β are real numbers, then I_B(αX + βY) = α I_B(X) + β I_B(Y).
Moments: E I_B(Y) = 0 and E(I_B(Y))^2 = E ∫_0^∞ Y_s^2 ds.

Integration of simple previsible processes
The first two properties are simple and left as an exercise. We prove the moment properties.
E[∫_0^∞ Y_s dB_s] = Σ_{k≤n} E[ξ_k E[B_{t_k} - B_{t_{k-1}} | F_{t_{k-1}}]] = 0.
Furthermore,
E[(∫_0^∞ Y_s dB_s)^2] = Σ_{k≤n} Σ_{l≤n} E[ξ_k ξ_l (B_{t_k} - B_{t_{k-1}})(B_{t_l} - B_{t_{l-1}})].

Integration of simple previsible processes
When k < l,
E[ξ_k ξ_l (B_{t_k} - B_{t_{k-1}})(B_{t_l} - B_{t_{l-1}})]
= E[E[ξ_k ξ_l (B_{t_k} - B_{t_{k-1}})(B_{t_l} - B_{t_{l-1}}) | F_{t_{l-1}}]]
= E[ξ_k ξ_l (B_{t_k} - B_{t_{k-1}}) E[B_{t_l} - B_{t_{l-1}} | F_{t_{l-1}}]] = 0.
The same argument applies when l < k. Hence,
E[(∫_0^∞ Y_s dB_s)^2] = Σ_k E[ξ_k^2 (B_{t_k} - B_{t_{k-1}})^2]
= Σ_k E[E[ξ_k^2 (B_{t_k} - B_{t_{k-1}})^2 | F_{t_{k-1}}]] = Σ_k E[ξ_k^2 E[(B_{t_k} - B_{t_{k-1}})^2 | F_{t_{k-1}}]]
= Σ_k E[ξ_k^2](t_k - t_{k-1}) = E[Σ_k ξ_k^2 (t_k - t_{k-1})] = E ∫_0^∞ Y_s^2 ds.
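These two moment identities can be checked by Monte Carlo simulation (an illustrative sketch, reusing the truncated-BM integrand from the example above; the discretisation and sample sizes are arbitrary choices): the sample mean of I_B(Y) should be near 0, and the sample mean of I_B(Y)² should match that of ∫ Y_s² ds.

```python
# Sketch, not from the lecture: Monte Carlo check of E[I_B(Y)] = 0 and of the isometry
# E[I_B(Y)^2] = E[ int_0^infty Y_s^2 ds ] for the integrand xi_k = clip(B(t_{k-1}), -c, c).
import numpy as np

rng = np.random.default_rng(3)
n_paths, n, t, c = 20_000, 200, 1.0, 2.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B_left = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)[:, :-1]])   # B(t_{k-1})
xi = np.clip(B_left, -c, c)                      # previsible: uses B(t_{k-1}) only
I = np.sum(xi * dB, axis=1)                      # I_B(Y), one value per simulated path
print("E[I_B(Y)]        ~", I.mean())            # close to 0
print("E[I_B(Y)^2]      ~", (I ** 2).mean())
print("E[int Y_s^2 ds]  ~", (np.sum(xi ** 2, axis=1) * dt).mean())   # should match the line above
```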

Integration of simple previsible processes
The mapping I_B transforms a process (Y_s) into a (random) number. We now wish to obtain a random process instead, which should represent ∫_0^t Y_s dB_s. To this end we define
I_B(Y)_t := ∫_0^∞ Y_s I{s ≤ t} dB_s.
For a simple previsible process this gives
I_B(Y)_t = Σ_{k=1}^n ξ_k (B_{t_k ∧ t} - B_{t_{k-1} ∧ t}).

Integration of simple previsible processes
We obtain the following further properties.
Continuity: The paths of I_B(Y)_t = ∫_0^t Y_s dB_s are continuous.
Martingale: I_B(Y)_t is an L^2-bounded martingale, i.e.
sup_{t≥0} E(I_B(Y)_t)^2 < ∞
and, for s ≤ t,
E[I_B(Y)_t | F_s] = I_B(Y)_s.

Integration of simple previsible processes
The continuity follows easily from the continuity of the BM paths. The L^2-boundedness follows from
sup_{t≥0} E(I_B(Y)_t)^2 = sup_{t≥0} E ∫_0^∞ Y_s^2 I{s ≤ t} ds = sup_{t≥0} E ∫_0^t Y_s^2 ds = E ∫_0^∞ Y_s^2 ds < ∞.
(In the last step we used the monotone convergence theorem.)

Integration of simple previsible processes
Finally, we can add t = t_j and s = t_k, k ≤ j, to the partition and get
E[I_B(Y)_t | F_s] = E[ Σ_{l=1}^j ξ_l (B_{t_l} - B_{t_{l-1}}) | F_s ]
= I_B(Y)_s + Σ_{l=k+1}^j E[ξ_l (B_{t_l} - B_{t_{l-1}}) | F_s]
= I_B(Y)_s + Σ_{l=k+1}^j E[ξ_l E[B_{t_l} - B_{t_{l-1}} | F_{t_{l-1}}] | F_s] = I_B(Y)_s.

Some martingale results
Having developed the stochastic integral for the case where the integrand is a simple previsible process is a first step towards a general definition. To extend the integral to a bigger class of integrands (next lecture), we need some background information about martingales.
Recall again that X = (X_t : t ≥ 0) is a martingale w.r.t. F = (F_t : t ≥ 0) if
(i) E|X_t| < ∞ for all t ≥ 0, and
(ii) E[X_t | F_s] = X_s a.s. for all 0 ≤ s ≤ t.
Examples: Let (B_t) be a Brownian motion and F_t = σ(B_s : s ≤ t). Then the following processes are martingales w.r.t. F:
1. B_t.
2. B_t^2 - t.
3. exp(θB_t - tθ^2/2).
4. If E|Y| < ∞, then E[Y | F_t] is a martingale.
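A quick sanity check of these examples (an illustrative sketch, not part of the lecture; the values of s, t, θ and the sample size are arbitrary): fixing one realisation of B_s and averaging X_t over many conditional continuations should reproduce X_s.

```python
# Sketch, not from the lecture: check E[X_t | F_s] = X_s numerically for the three examples,
# by fixing one value of B_s and averaging X_t over conditional samples B_t = B_s + N(0, t - s).
import numpy as np

rng = np.random.default_rng(4)
s, t, theta, n_samples = 1.0, 2.0, 0.7, 200_000
B_s = rng.normal(0.0, np.sqrt(s))                          # one realisation of B_s
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n_samples)     # conditional samples of B_t given F_s

checks = [
    ("B_t", B_t.mean(), B_s),
    ("B_t^2 - t", (B_t ** 2 - t).mean(), B_s ** 2 - s),
    ("exp(th*B_t - t*th^2/2)",
     np.exp(theta * B_t - t * theta ** 2 / 2).mean(),
     np.exp(theta * B_s - s * theta ** 2 / 2)),
]
for name, estimate, target in checks:
    print(f"{name:24s}  E[X_t | F_s] ~ {estimate:8.4f}   X_s = {target:8.4f}")
```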

Some martingale results
Many important properties of martingales can be shown under the assumption that the paths are càdlàg.
Definition (càdlàg)
A function f on [0, ∞) is called càdlàg (continue à droite, limite à gauche) if it is right-continuous and has left limits everywhere.
Example
Continuous functions are càdlàg. Distribution functions are càdlàg. f : [0, 2] → R with f(t) = sin(1/(1 - t)) I{t < 1} is not càdlàg.
Since one can prove that, if the filtration F fulfills the usual conditions, every martingale X w.r.t. F possesses a càdlàg version, this is not really a restriction.

Some martingale results
Theorem (Optional stopping theorem)
If X is a martingale with right-continuous paths and T is a bounded stopping time, then X_T is integrable and F_T-measurable. In addition we have E[X_T] = E[X_0].
Theorem (Optional sampling theorem)
If X is a martingale with right-continuous paths and S ≤ T are bounded stopping times, then E[X_T | F_S] = X_S a.s.

Some martingale results
Theorem (Doob's inequality)
If the martingale X possesses right-continuous paths, then
E[sup_{0≤s≤t} |X_s|^2] ≤ 4 E|X_t|^2.
Remark
Doob's inequality holds in a more general setting for p-th order moments, p > 1.
We now provide some results for the case where X is an L^2-bounded martingale, i.e. a martingale X for which sup_{t≥0} E X_t^2 < ∞.

Some martingale results
Theorem (Martingale convergence theorem)
If the L^2-bounded martingale X possesses right-continuous paths, then there exists X_∞ such that X_t → X_∞ in L^2. We have
E[X_∞ | F_t] = X_t a.s.
In addition, Doob's inequality holds:
E[sup_{0≤s<∞} |X_s|^2] ≤ 4 E|X_∞|^2.
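Doob's inequality from the slides above can be illustrated numerically (a sketch under the assumption that X is a Brownian motion, which is a continuous martingale; grid and sample sizes are arbitrary): the simulated value of E[sup_{s≤t} X_s²] stays below 4 E[X_t²] = 4t.

```python
# Sketch, not from the lecture: Doob's L^2 inequality E[sup_{s<=t} |X_s|^2] <= 4 E[|X_t|^2]
# illustrated with X = Brownian motion on [0, 1].
import numpy as np

rng = np.random.default_rng(5)
n_paths, n, t = 50_000, 500, 1.0
dB = rng.normal(0.0, np.sqrt(t / n), size=(n_paths, n))
B = np.cumsum(dB, axis=1)                       # discretised BM paths (B_0 = 0 adds nothing to the sup)
lhs = np.max(B ** 2, axis=1).mean()             # E[ sup_{s <= t} B_s^2 ]
rhs = 4.0 * (B[:, -1] ** 2).mean()              # 4 E[ B_t^2 ]  (= 4t for BM)
print(f"E[sup B_s^2] ~ {lhs:.3f}   <=   4 E[B_t^2] ~ {rhs:.3f}")
```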

Exercises
1. Let (M_t) be a square-integrable martingale. Show that for Y ∈ F_s with E(Y^2) < ∞ we have E(Y(M_t - M_s)) = 0 for t ≥ s. This shows that martingale increments are orthogonal.
2. Consider Σ_{k=1}^n B_{t_k*} (B_{t_k} - B_{t_{k-1}}), with t_k* = (1 - α)t_{k-1} + α t_k, 0 ≤ α ≤ 1. Show that if the mesh of the partition tends to zero, then the above term converges in probability to (1/2) B_t^2 + (α - 1/2) t.

Exercises
3. Show that the integral of a simple previsible process is independent of the representation of the integrand.
4. Show that the integral of a simple previsible process is linear, i.e. I_B(αX + βY) = α I_B(X) + β I_B(Y).
5. Show that if E|Y| < ∞, then E[Y | F_t] is a martingale.