
Stochastic Analysis I
S. Kotani, April 2006

To describe the time evolution of randomly developing phenomena, such as the motion of particles in random media or the variation of stock prices, we have to treat stochastic processes. The mathematical tools for analyzing stochastic processes have been well developed from various points of view. The purpose of this lecture is to give basic knowledge of Ito's calculus based on martingale theory.

1 Conditional expectations

1.1 σ-fields and information

In modern probability theory the notion of σ-fields is important because it is supposed to express a certain aspect of information. Let $(\Omega, \mathcal{F}, P)$ be a probability space; that is, $\Omega$ is a set (finite or infinite), $\mathcal{F}$ is a σ-field and $P$ is a probability measure. Recall that a σ-field $\mathcal{F}$ is a family of subsets of $\Omega$ satisfying the three properties:

(1) $\emptyset, \Omega \in \mathcal{F}$;
(2) $A \in \mathcal{F} \Rightarrow A^c \in \mathcal{F}$;
(3) $A_n \in \mathcal{F}$ $(n = 1, 2, \dots)$ $\Rightarrow \bigcup_{n=1}^\infty A_n \in \mathcal{F}$.

Suppose we have two sub-σ-fields $\mathcal{G}_1, \mathcal{G}_2$ of $\mathcal{F}$ such that $\mathcal{G}_1 \subset \mathcal{G}_2$. This can be interpreted as "$\mathcal{G}_2$ contains more information than $\mathcal{G}_1$", which is plausible for the following reason. If $\mathcal{G}_1$ and $\mathcal{G}_2$ consist of finite families of subsets of $\Omega$, then there exist partitions $\{A_1, A_2, \dots, A_m\}$, $\{B_1, B_2, \dots, B_n\}$ of $\Omega$ such that

$$\mathcal{G}_1 = \Big\{ \bigcup_{i \in I} A_i ;\ I \text{ an arbitrary subset of } \{1, 2, \dots, m\} \Big\}, \qquad \mathcal{G}_2 = \Big\{ \bigcup_{j \in J} B_j ;\ J \text{ an arbitrary subset of } \{1, 2, \dots, n\} \Big\}.$$

Then $\mathcal{G}_1 \subset \mathcal{G}_2$ if and only if the partition $\{B_1, B_2, \dots, B_n\}$ is finer than the partition $\{A_1, A_2, \dots, A_m\}$, that is, any $B_j$ is included in some $A_i$.

Example 1 Let $\Omega$ be the set of all Japanese and consider two partitions (classifications) of $\Omega$: one classification is by sex, the other by (sex, weight). Define appropriately the σ-fields $\mathcal{G}_1, \mathcal{G}_2$ associated with these two classifications.

The notion of random variables should also be understood in this context. A random variable $X$ on $(\Omega, \mathcal{F}, P)$ is an $\mathcal{F}$-measurable function on $\Omega$, that is, $X^{-1}((-\infty, a]) = \{\omega \in \Omega;\ X(\omega) \le a\} \in \mathcal{F}$ for any $a \in \mathbb{R}$.

Suppose a sub-σ-field $\mathcal{G}$ of $\mathcal{F}$ is generated by a finite partition $\{B_1, B_2, \dots, B_n\}$ of $\Omega$. Then a random variable $X$ is $\mathcal{G}$-measurable if and only if $X$ is constant on each $B_j$. In other words, a finite partition is equivalent to a random variable taking only finitely many values. Generally, the σ-field $\sigma(X)$ generated by a random variable $X$ is defined by $\sigma(X) = \{X^{-1}(F);\ F \in \mathcal{B}(\mathbb{R})\}$, where $\mathcal{B}(\mathbb{R})$ is the Borel σ-field of $\mathbb{R}$. Analogously, the σ-field $\sigma(X_1, X_2, \dots, X_m)$ generated by random variables $\{X_1, X_2, \dots, X_m\}$ is defined by $\sigma(X_1, X_2, \dots, X_m) = \{X^{-1}(F);\ F \in \mathcal{B}(\mathbb{R}^m)\}$ with $X = (X_1, X_2, \dots, X_m)$. This σ-field is supposed to contain the information of $X$.

1.2 Conditional expectations as random variables

In elementary probability theory, for $A, B \in \mathcal{F}$, the probability of $A$ conditioned on $B$ is defined by

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$

However, in modern probability theory information is nothing but a σ-field, so it is reasonable to define a conditional probability based on a sub-σ-field $\mathcal{G}$ of $\mathcal{F}$. For simplicity assume $\mathcal{G}$ is generated by a finite partition $\{B_1, B_2, \dots, B_n\}$ of $\Omega$. Then we define $P(A \mid \mathcal{G})$ as a random variable by

$$P(A \mid \mathcal{G})(\omega) = \frac{P(A \cap B_j)}{P(B_j)} \ \text{ if } \omega \in B_j, \qquad \text{that is,} \quad P(A \mid \mathcal{G})(\omega) = \sum_{j=1}^n \frac{P(A \cap B_j)}{P(B_j)}\, I_{B_j}(\omega),$$

where $I_B$ is the characteristic function of $B$: $I_B(\omega) = 1$ if $\omega \in B$, $I_B(\omega) = 0$ if $\omega \in B^c$. It should be remarked that $P(\cdot \mid \mathcal{G})(\omega)$ is a probability measure on $(\Omega, \mathcal{F})$ for each fixed $\omega \in \Omega$. Analogously, the conditional expectation $E[X \mid \mathcal{G}]$ of a random variable $X$ with finite expectation is defined by

$$E[X \mid \mathcal{G}](\omega) = \frac{E[I_{B_j} X]}{P(B_j)} \ \text{ if } \omega \in B_j, \qquad \text{that is,} \quad E[X \mid \mathcal{G}](\omega) = \sum_{j=1}^n \frac{E[I_{B_j} X]}{P(B_j)}\, I_{B_j}(\omega) = \sum_{j=1}^n \frac{\int_{B_j} X(\omega')\,P(d\omega')}{P(B_j)}\, I_{B_j}(\omega) = \int_\Omega X(\omega')\,P(d\omega' \mid \mathcal{G})(\omega).$$
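For a $\mathcal{G}$ generated by a finite partition, $E[X \mid \mathcal{G}]$ is simply the cell-wise average of $X$. The following minimal numerical sketch (Python/NumPy; not part of the original notes, and the sample space, the two-cell partition and all parameters are illustrative assumptions) makes this concrete:

    import numpy as np

    rng = np.random.default_rng(0)
    omega = rng.standard_normal(10_000)      # samples playing the role of points of Omega
    X = omega**2                             # a random variable X(omega)
    labels = (omega > 0).astype(int)         # a two-cell partition {B_0, B_1} generating G

    # E[X|G] is constant on each cell B_j, equal to the average of X over that cell.
    cond_exp = np.empty_like(X)
    for j in (0, 1):
        cell = labels == j
        cond_exp[cell] = X[cell].mean()

    # Check of property (2) below: E[I_B Y] = E[I_B X] for each cell B of the partition.
    for j in (0, 1):
        cell = labels == j
        print(j, X[cell].mean(), cond_exp[cell].mean())

The printed pairs coincide by construction: integrating $Y = E[X \mid \mathcal{G}]$ over each cell reproduces the integral of $X$ over that cell.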

Therefore the conditional expectation $E[X \mid \mathcal{G}]$ is also a random variable, measurable with respect to $\mathcal{G}$. If $X = I_A$, then $E[I_A \mid \mathcal{G}] = P(A \mid \mathcal{G})$. In this way, when a sub-σ-field $\mathcal{G}$ is generated by a finite partition, its conditional expectation is easily defined. When it is not, we have to take another approach. To see this, we pay attention to the following properties of the conditional expectation introduced above. Set $Y = E[X \mid \mathcal{G}]$. Then

(1) $Y$ is measurable with respect to $\mathcal{G}$;
(2) $Y$ has a finite expectation and $E[I_B Y] = E[I_B X]$ holds for any $B \in \mathcal{G}$.

Theorem 2 For any random variable $X$ with finite expectation there exists a unique random variable $Y$ satisfying (1), (2).

Proof. To prove the uniqueness, let $Z$ be another random variable satisfying (1), (2). Then $E[I_B(Y - Z)] = E[I_B X] - E[I_B X] = 0$ for any $B \in \mathcal{G}$. Choose $B_+ = \{\omega \in \Omega;\ Y(\omega) - Z(\omega) \ge 0\}$. Then $B_+ \in \mathcal{G}$ and $I_{B_+}(Y - Z) \ge 0$, hence we have

$$E[I_{B_+}(Y - Z)] = 0 \ \Rightarrow \ I_{B_+}(Y - Z) = 0 \ \text{a.s.}$$

Similarly, introducing $B_- = \{\omega \in \Omega;\ Y(\omega) - Z(\omega) < 0\}$, we see $I_{B_-}(Y - Z) = 0$ a.s. Combining these two equalities, we conclude $Y = Z$ a.s.

To prove the existence, we define a signed measure

$$\mu(B) = E[I_B X] \quad \text{for } B \in \mathcal{G}.$$

It clearly satisfies $\mu(B) = 0$ if $P(B) = 0$, that is, $\mu$ is absolutely continuous with respect to $P$. Applying the Radon-Nikodym theorem, we see that there exists a $\mathcal{G}$-measurable random variable $Y$ with finite expectation such that

$$\mu(B) = \int_B Y(\omega)\,P(d\omega) = E[I_B Y],$$

which completes the proof.

This unique random variable $Y$ is denoted by $E[X \mid \mathcal{G}]$. In addition to properties (1), (2), it satisfies:

(3) for any bounded $\mathcal{G}$-measurable random variable $G$, $E[GX \mid \mathcal{G}] = G\,E[X \mid \mathcal{G}]$;
(4) $X_1 \le X_2 \ \Rightarrow \ E[X_1 \mid \mathcal{G}] \le E[X_2 \mid \mathcal{G}]$; in particular $|E[X \mid \mathcal{G}]| \le E[|X| \mid \mathcal{G}]$;
(5) $\mathcal{G}_1 \subset \mathcal{G}_2 \ \Rightarrow \ E[E[X \mid \mathcal{G}_2] \mid \mathcal{G}_1] = E[X \mid \mathcal{G}_1]$;
(6) if $X$ is independent of $\mathcal{G}$, then $E[X \mid \mathcal{G}] = E[X]$.

2 Martingales (discrete time parameter)

2.1 Definitions

The notion of martingales was introduced by Doob in the 1950s, and nowadays it plays an indispensable role in the theory of stochastic analysis. It is a generalization of sums of independent random variables, and it is effective because it is flexible under many non-linear transformations.

A family of sub-σ-fields $\{\mathcal{F}_n;\ n = 0, 1, 2, \dots\}$ of $\mathcal{F}$ is called a filtration if $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_n \subset \cdots \subset \mathcal{F}$. A stochastic process (a one-parameter family of random variables) $\{X_n\}_{n \ge 0} = \{X_n;\ n = 0, 1, 2, \dots\}$ is called adapted to $\{\mathcal{F}_n\}_{n \ge 0}$ if $X_n$ is measurable with respect to $\mathcal{F}_n$ for each fixed $n$. An adapted process $\{X_n\}_{n \ge 0}$ is called a martingale with respect to $\{\mathcal{F}_n\}_{n \ge 0}$ if it satisfies

(1) $E|X_n| < \infty$ for $n = 0, 1, 2, \dots$;
(2) $E[X_{n+1} \mid \mathcal{F}_n] = X_n$ for $n = 0, 1, 2, \dots$.

From this definition martingales can be understood as a mathematical expression of a fair game, which will be made precise in the optional stopping theorem below. A simulation of the examples below follows the list.

Example 3 Let $\{Y_n\}_{n \ge 1}$ be a family of independent random variables with finite expectations.

(i) $X_n = (Y_1 - E[Y_1]) + (Y_2 - E[Y_2]) + \cdots + (Y_n - E[Y_n])$ for $n \ge 1$, $X_0 = 0$, is a martingale with respect to $\mathcal{F}_n = \sigma(X_1, X_2, \dots, X_n)$ and $\mathcal{F}_0 = \{\emptyset, \Omega\}$.

(ii) Moreover assume $E[Y_n^2] < \infty$ and $E[Y_n] = 0$ for $n \ge 1$. Then $X_n = (Y_1 + Y_2 + \cdots + Y_n)^2 - (E[Y_1^2] + E[Y_2^2] + \cdots + E[Y_n^2])$ for $n \ge 1$, $X_0 = 0$, is a martingale.

(iii) Suppose $E[Y_n] = 1$ for all $n$. Then $X_n = Y_1 Y_2 \cdots Y_n$ for $n \ge 1$, $X_0 = 1$, is a martingale.

Example 4 Let $X$ be a random variable with finite expectation and $\{\mathcal{F}_n\}_{n \ge 0}$ a filtration. Then $X_n = E[X \mid \mathcal{F}_n]$ becomes a martingale.
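Examples 3 (i) and (ii) are easy to probe by Monte Carlo: the expectation of a martingale is constant in $n$. A minimal NumPy sketch (the choice of distributions and all parameters are illustrative assumptions, not from the notes):

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_steps = 100_000, 20
    Y = rng.exponential(1.0, size=(n_paths, n_steps))    # E[Y] = 1
    X = np.cumsum(Y - 1.0, axis=1)                       # Example 3 (i): centered sums

    print(np.abs(X.mean(axis=0)).max())    # E[X_n] stays ~0 for every n

    # Example 3 (ii): (Y_1+...+Y_n)^2 - n stays centered when E[Y_n] = 0, E[Y_n^2] = 1.
    Z = rng.standard_normal((n_paths, n_steps))
    S = np.cumsum(Z, axis=1)
    M = S**2 - np.arange(1, n_steps + 1)
    print(np.abs(M.mean(axis=0)).max())    # E[M_n] stays ~0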

Example 5 (martingale transformation = discrete stochastic integral) Let $\{M_n\}_{n \ge 0}$ be a martingale and let $\{f_n\}_{n \ge 0}$ be random variables such that $f_n$ is bounded and measurable with respect to $\mathcal{F}_n$ for $n = 0, 1, 2, \dots$. Then

$$X_n = \sum_{k=0}^{n-1} f_k\,(M_{k+1} - M_k) \ \text{ if } n \ge 1, \qquad X_0 = 0,$$

becomes a martingale. (A numerical illustration follows below.)

It is convenient to introduce wider notions. $\{X_n\}_{n \ge 0}$ is called a submartingale (supermartingale) with respect to $\{\mathcal{F}_n\}_{n \ge 0}$ if, in addition to property (1), it satisfies

(3) $E[X_{n+1} \mid \mathcal{F}_n] \ge X_n$ for $n \ge 0$ (resp. $E[X_{n+1} \mid \mathcal{F}_n] \le X_n$ for $n \ge 0$).

Example 6 Let $\{X_n\}_{n \ge 0}$ be a martingale and $f$ a convex function, that is, $f(\alpha x + \beta y) \le \alpha f(x) + \beta f(y)$ if $\alpha + \beta = 1$, $\alpha, \beta \ge 0$ and $x, y \in \mathbb{R}$. Suppose $E|f(X_n)| < \infty$ for $n = 0, 1, 2, \dots$. Then $\{f(X_n)\}_{n \ge 0}$ becomes a submartingale.

Example 7 Let $\{Y_n\}_{n \ge 0}$ be a Markov process on $S$ with transition kernel $\{p(x, dy)\}$ and let $f$ be a subharmonic function on $S$ for this Markov process, that is, $f(x) \le \int_S f(y)\,p(x, dy)$. If $E|f(Y_n)| < \infty$, then $X_n = f(Y_n)$ is a submartingale.

2.2 Martingale inequality

In order to establish the law of large numbers in its full generality, Kolmogorov used a maximal inequality for sums of independent random variables with mean 0, as an analogue of an inequality in Fourier analysis. Doob pointed out that the inequality is valid also for submartingales, and it became a very useful tool in stochastic analysis.

Theorem 8 (martingale inequality) Let $\{X_n\}_{n \ge 0}$ be a submartingale. Then for any $\lambda > 0$ it holds that

$$P\Big( \max_{0 \le k \le n} X_k \ge \lambda \Big) \le \lambda^{-1} E\Big[ X_n ;\ \max_{0 \le k \le n} X_k \ge \lambda \Big] \le \lambda^{-1} E[X_n^+].$$

Proof. Since the second inequality is trivial, we show only the first one. Set

$$A = \Big\{ \max_{0 \le k \le n} X_k \ge \lambda \Big\}, \qquad A_m = \{X_0 < \lambda,\ X_1 < \lambda,\ \dots,\ X_{m-1} < \lambda,\ X_m \ge \lambda\}.$$
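The martingale transformation of Example 5 can be seen at work numerically: any bounded non-anticipating "betting strategy" $f_k$ applied to a fair game yields a fair game. A minimal NumPy sketch (the simple random walk and the strategy $f_k = \mathrm{sign}(M_k)$ are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps = 100_000, 50
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
    M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

    # f_k must be F_k-measurable: here f_k = sign(M_k), a bet based on the past only.
    f = np.sign(M[:, :-1])
    X = np.cumsum(f * (M[:, 1:] - M[:, :-1]), axis=1)   # X_n = sum_k f_k (M_{k+1} - M_k)

    print(np.abs(X.mean(axis=0)).max())   # stays ~0: the transform is again a martingale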

Then $\{A_m\}_{0 \le m \le n}$ are disjoint and

$$A = \bigcup_{m=0}^n A_m, \qquad A_m \in \mathcal{F}_m.$$

The definition of submartingales (property (3) of Section 2.1) implies for $n \ge m$

$$E[X_n ;\ A_m] \ge E[X_m ;\ A_m].$$

Hence

$$E[X_n ;\ A] = \sum_{m=0}^n E[X_n ;\ A_m] \ge \sum_{m=0}^n E[X_m ;\ A_m] \ge \lambda\,P(A).$$

Corollary 9 Suppose $\{X_n\}_{n \ge 0}$ is a martingale. Then

(i) $P\big( \max_{0 \le k \le n} |X_k| \ge \lambda \big) \le \lambda^{-p}\,E|X_n|^p$ for $p \ge 1$ (Kolmogorov-Doob);
(ii) $E\big[ \max_{0 \le k \le n} |X_k|^p \big] \le \big( \frac{p}{p-1} \big)^p E|X_n|^p$ for $p > 1$.

Proof. Since $f(x) = |x|^p$ is convex if $p \ge 1$, $\{|X_n|^p\}_{n \ge 0}$ becomes a submartingale due to Example 6. Applying Theorem 8, we see

$$P\Big( \max_{0 \le k \le n} |X_k| \ge \lambda \Big) = P\Big( \max_{0 \le k \le n} |X_k|^p \ge \lambda^p \Big) \le \lambda^{-p}\,E|X_n|^p.$$

To prove (ii), set $Y = \max_{0 \le k \le n} |X_k|$. Observe that for a non-negative random variable $Y$

$$E[Y^p] = E\Big[ \int_0^Y p\lambda^{p-1}\,d\lambda \Big] = \int_0^\infty p\lambda^{p-1}\,P(Y \ge \lambda)\,d\lambda$$

holds. Thus from Theorem 8 applied to the submartingale $\{|X_k|\}$ it follows that

$$E[Y^p] \le \int_0^\infty p\lambda^{p-1}\,\lambda^{-1}\,E[|X_n| ;\ Y \ge \lambda]\,d\lambda = \frac{p}{p-1}\,E[|X_n|\,Y^{p-1}] \le \frac{p}{p-1}\,\{E|X_n|^p\}^{1/p}\,\{E[Y^p]\}^{1 - 1/p},$$

hence

$$E[Y^p] \le \Big( \frac{p}{p-1} \Big)^p E|X_n|^p.$$
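Corollary 9 (i) with $p = 2$ can be checked directly by simulation. A minimal NumPy sketch (random walk martingale; the threshold and horizon are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(3)
    n_paths, n, lam = 200_000, 100, 15.0
    S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n)), axis=1)

    lhs = (np.abs(S).max(axis=1) >= lam).mean()    # P(max_k |X_k| >= lam)
    rhs = (S[:, -1]**2).mean() / lam**2            # lam^{-2} E|X_n|^2
    print(lhs, rhs)    # lhs <= rhs, as Corollary 9 (i) with p = 2 predicts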

2.3 Optional stopping theorem

For further investigation of stochastic processes, the notion of stopping times is crucial. A stopping time is not simply a non-negative random variable: it has to be non-anticipating, which will be specified now. Let $\{\mathcal{F}_n\}_{n \ge 0}$ be a filtration. A non-negative integer valued random variable $\tau$ (possibly taking the value $+\infty$) is called a stopping time with respect to $\{\mathcal{F}_n\}_{n \ge 0}$ if it satisfies

$$\{\tau \le n\} \in \mathcal{F}_n \ \text{for } n \ge 0 \quad \Longleftrightarrow \quad \{\tau = n\} \in \mathcal{F}_n \ \text{for } n \ge 0.$$

Typical examples of stopping times are given by hitting times of adapted stochastic processes.

Example 10 Let $\{X_n\}_{n \ge 0}$ be a stochastic process adapted to $\{\mathcal{F}_n\}_{n \ge 0}$ and $F \in \mathcal{B}(\mathbb{R})$. Define

$$\tau = \begin{cases} \inf\{n \ge 0;\ X_n \in F\} & \text{if } X_n \in F \text{ for some } n < \infty, \\ \infty & \text{otherwise.} \end{cases}$$

Then $\tau$ is called the first hitting time of $F$ and becomes a stopping time with respect to $\{\mathcal{F}_n\}_{n \ge 0}$. On the other hand, the last hitting time

$$\tau = \begin{cases} \sup\{n \ge 0;\ X_n \in F\} & \text{if } X_n \in F \text{ for some } n < \infty, \\ \infty & \text{otherwise,} \end{cases}$$

cannot be a stopping time.

We remark the following.

Lemma 11 Let $\sigma, \tau$ be stopping times. Then $\sigma \wedge \tau$, $\sigma \vee \tau$ and $\sigma + \tau$ are also stopping times.

Proof. We prove the statement only for $\sigma + \tau$. Observe

$$\{\sigma + \tau = n\} = \bigcup_{k=0}^n \{\sigma = k,\ \tau = n - k\} \in \mathcal{F}_n,$$

which concludes the proof.

Now the theorem is:

Theorem 12 (optional stopping theorem) Let $\{X_n\}_{n \ge 0}$ be a submartingale. Then for any stopping times $\tau, \sigma$ such that $\tau \ge \sigma$, $\{X_{n \wedge \tau} - X_{n \wedge \sigma}\}_{n \ge 0}$ becomes a submartingale. In particular, if $\tau$ is bounded, then $E[X_\tau] \ge E[X_\sigma]$. (A simulation illustrating the martingale case follows.)

Proof. Set $f_k = I_{\{\sigma \le k < \tau\}} = I_{\{\tau > k\}} - I_{\{\sigma > k\}}$. Then

$$X_{n \wedge \tau} - X_{n \wedge \sigma} = \sum_{k=0}^{n-1} f_k\,(X_{k+1} - X_k) \quad \text{if } n \ge 1,$$
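For a martingale and a bounded stopping time, the theorem gives $E[X_\tau] = E[X_0]$, i.e. no strategy of quitting can bias a fair game. A minimal NumPy sketch (simple random walk, exit from a band, time capped at $N$ so the stopping time is bounded; all parameters are assumptions):

    import numpy as np

    rng = np.random.default_rng(4)
    n_paths, N, a, b = 100_000, 500, 5, 10
    S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, N)), axis=1)

    hit = (S <= -a) | (S >= b)
    # first index where the band is left, or N-1 if never: a bounded stopping time
    tau = np.where(hit.any(axis=1), hit.argmax(axis=1), N - 1)
    print(S[np.arange(n_paths), tau].mean())   # ~0 = E[X_0]: a fair game stays fair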

and, since $f_k \ge 0$ and $f_k$ is $\mathcal{F}_k$-measurable, we see as in Example 5 that $\{X_{n \wedge \tau} - X_{n \wedge \sigma}\}_{n \ge 0}$ is a submartingale:

$$E\Big[ \sum_{k=0}^{n} f_k\,(X_{k+1} - X_k) \,\Big|\, \mathcal{F}_n \Big] = \sum_{k=0}^{n-1} f_k\,(X_{k+1} - X_k) + f_n\,E[X_{n+1} - X_n \mid \mathcal{F}_n] \ge \sum_{k=0}^{n-1} f_k\,(X_{k+1} - X_k),$$

which completes the proof.

The theorem says that if $\{X_n\}$ is a martingale and $\tau$ is a bounded stopping time, then $E[X_\tau] = E[X_0]$. In this sense, martingales are fair games.

3 Martingale convergence theorems

One advantage of using martingales in the analysis of stochastic processes is that they make proofs of convergence easy. The following argument, initiated by Doob, is the key to the convergence theorems for submartingales. For a sequence of real numbers $\{x_0, x_1, \dots, x_N\}$ and $a < b$, set $\tau_0 = -1$ and

$$\tau_{n+1} = \min\{k > \tau_n;\ x_k \le a\} \ \text{for even } n, \qquad \tau_{n+1} = \min\{k > \tau_n;\ x_k \ge b\} \ \text{for odd } n,$$

with the convention $\tau_{n+1} = N$ if the corresponding set is empty. Notice here $-1 = \tau_0 \le \tau_1 \le \tau_2 \le \cdots \le \tau_N = N$ and, if $\tau_n = N$ for some $n < N$, then $\tau_{n+1} = \tau_{n+2} = \cdots = \tau_N = N$. Define the upcrossing number of $\{x_0, x_1, \dots, x_N\}$ between $(a, b)$ by

$$U = U(x_0, x_1, \dots, x_N;\ a, b) = \begin{cases} \max\{k;\ x_{\tau_{2k-1}} \le a,\ x_{\tau_{2k}} \ge b\}, \\ 0 \ \text{ if } \{k;\ x_{\tau_{2k-1}} \le a,\ x_{\tau_{2k}} \ge b\} = \emptyset. \end{cases}$$

(An equivalent algorithmic formulation of this definition is given below.)

Lemma 13 (Doob) Let $\{X_0, X_1, \dots, X_N\}$ be a submartingale and $U$ its upcrossing number between $(a, b)$. Then

$$(b - a)\,E[U] \le E[(X_N - a)^+] - E[(X_0 - a)^+] \le E[(X_N - X_0)^+].$$
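The upcrossing number has a simple state-machine description, equivalent to the $\tau_n$ definition above: wait until the sequence drops to $a$ or below, then count when it next rises to $b$ or above. A short Python sketch (illustrative, not from the notes):

    def upcrossings(x, a, b):
        """Count complete upcrossings of the interval (a, b) by the sequence x."""
        count, below = 0, False
        for v in x:
            if v <= a:
                below = True
            elif v >= b and below:
                count += 1
                below = False
        return count

    print(upcrossings([0, -1, 2, -1, 3, 0], a=0, b=1))   # 2 upcrossings of (0, 1)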

Proof. For simplicity assume the last index used below is even. Observe, with the convention $X_{\tau_0} = X_0$,

$$X_N - X_0 = (X_{\tau_1} - X_{\tau_0}) + (X_{\tau_2} - X_{\tau_1}) + \cdots + (X_{\tau_N} - X_{\tau_{N-1}}) \ge (b - a)\,U + \sum_n (X_{\tau_{2n+1}} - X_{\tau_{2n}}),$$

since each completed upcrossing contributes at least $b - a$. Since the $\tau_n$ are stopping times, applying Theorem 12 we see

$$E[X_{\tau_{2n+1}} - X_{\tau_{2n}}] \ge 0.$$

Thus we have

$$E[X_N - X_0] \ge (b - a)\,E[U].$$

Set $Y_n = (X_n - a)^+$. Then it is easy to see that $\{Y_n\}$ is also a submartingale and $U(X_0, X_1, \dots, X_N;\ a, b) = U(Y_0, Y_1, \dots, Y_N;\ 0, b - a)$; applying the above estimate to $\{Y_n\}$ completes the proof.

Now the main result is:

Theorem 14 Suppose $\{X_n\}_{n \ge 0}$ is a submartingale satisfying $E[X_n^+] \le C$ for all $n$ (here $x^+ = x \vee 0$) with some $C > 0$. Then there exists a finite random variable $X_\infty$ such that

$$\lim_{n \to \infty} X_n = X_\infty \ \text{a.s.}, \qquad E|X_\infty| < \infty$$

hold.

Proof. For $a < b$, $U_N(\omega) = U(X_0(\omega), X_1(\omega), \dots, X_N(\omega);\ a, b)$ is non-decreasing as $N \to \infty$. Define $U(\omega) = \lim_{N \to \infty} U_N(\omega)$. Since

$$E[(X_N - X_0)^+] \le E[X_N^+] + E|X_0| \le C + E|X_0|,$$

from Lemma 13 it follows that

$$E[U] \le \frac{C + E|X_0|}{b - a}, \quad \text{hence} \quad U(\omega) < \infty \ \text{a.s.} \qquad (1)$$

However, setting

$$A_{a,b} = \Big\{ \omega \in \Omega;\ \liminf_{n \to \infty} X_n(\omega) < a < b < \limsup_{n \to \infty} X_n(\omega) \Big\},$$

we see $U(\omega) = \infty$ for $\omega \in A_{a,b}$. Hence $P(A_{a,b}) = 0$ from (1). Set

$$A = \bigcup_{a < b,\ a, b \in \mathbb{Q}} A_{a,b}.$$

Then $P(A) = 0$ and for $\omega \in A^c$ we have

$$\liminf_{n \to \infty} X_n(\omega) = \limsup_{n \to \infty} X_n(\omega) \equiv X_\infty(\omega) \in [-\infty, +\infty].$$

However, Fatou's lemma implies

$$E|X_\infty| \le \liminf_{n \to \infty} E|X_n| \le 2C - E[X_0] < \infty.$$

Here we have used

$$E|X_n| = 2E[X_n^+] - E[X_n] \le 2E[X_n^+] - E[X_0] \le 2C - E[X_0],$$

which completes the proof.

As for supermartingales we have:

Theorem 15 Let $\{\mathcal{F}_n\}_{-\infty < n \le 0}$ be a family of σ-fields such that $\mathcal{F}_n \subset \mathcal{F}_{n+1} \subset \cdots \subset \mathcal{F}_{-1} \subset \mathcal{F}_0$, and let $\{X_n\}_{-\infty < n \le 0}$ be a supermartingale adapted to $\{\mathcal{F}_n\}$. Then there exists a random variable $X_{-\infty} \in (-\infty, +\infty]$ such that

$$\lim_{n \to -\infty} X_n = X_{-\infty} \quad \text{a.s.}$$

Moreover, if $\lim_{n \to -\infty} E[X_n] < \infty$, then $E|X_{-\infty}| < \infty$, hence $X_{-\infty} < \infty$.

Proof. The number of downcrossings $D$ of the sequence $\{X_{-M}, X_{-M+1}, \dots, X_{-1}, X_0\}$ between $(a, b)$ can be defined analogously to the number of upcrossings, and we can prove the estimate

$$(b - a)\,E[D] \le E[(b - X_0)^+] - E[(b - X_{-M})^+] \le |b| + E|X_0|,$$

because $\{-X_n\}$ is a submartingale and its upcrossing number between $(-b, -a)$ is equal to the downcrossing number of $\{X_{-M}, X_{-M+1}, \dots, X_{-1}, X_0\}$ between $(a, b)$. Now the rest of the proof is the same as that of Theorem 14, except that $X_{-\infty}$ may take the value $+\infty$. However, if $\lim_{n \to -\infty} E[X_n] < \infty$ is valid, then

$$E|X_n| = E[X_n^+] + E[X_n^-] = E[X_n] + 2E[X_n^-] \le \lim_{m \to -\infty} E[X_m] + 2E[X_0^-] < \infty,$$

where $E[X_n^-] \le E[X_0^-]$ follows from the supermartingale property $X_n \ge E[X_0 \mid \mathcal{F}_n]$.

Corollary 16 Let $\{\mathcal{F}_n\}_{-\infty < n < +\infty}$ be a family of σ-fields such that $\mathcal{F}_n \subset \mathcal{F}_{n+1}$, and let $X$ be a random variable with finite expectation. Then

$$E[X \mid \mathcal{F}_n] \to \begin{cases} E[X \mid \mathcal{F}_{+\infty}] & \text{as } n \to +\infty, \\ E[X \mid \mathcal{F}_{-\infty}] & \text{as } n \to -\infty, \end{cases}$$

almost surely and in $L^1(\Omega, P)$, where $\mathcal{F}_{+\infty} = \sigma\big( \bigcup_n \mathcal{F}_n \big)$ and $\mathcal{F}_{-\infty} = \bigcap_n \mathcal{F}_n$.

4 Martingales (continuous time parameter)

In this section we discuss martingales with continuous time parameter. Let $\{\mathcal{F}_t\}_{t \ge 0}$ be a filtration with continuous parameter, that is, $\mathcal{F}_s \subset \mathcal{F}_t$ if $s \le t$. Let $\{X_t\}_{t \ge 0}$ be a stochastic process (a family of random variables) adapted to $\{\mathcal{F}_t\}$, that is, for each $t \ge 0$, $X_t$ is $\mathcal{F}_t$-measurable. We can define the notions of martingales, submartingales and supermartingales in this case just as in the discrete case. In the following arguments we need to impose the right continuity of $\{X_t\}_{t \ge 0}$, that is,

$$P\Big( \lim_{s \downarrow t} X_s = X_t \ \text{for every } t \ge 0 \Big) = 1. \qquad (2)$$

Throughout this lecture we assume that all stochastic processes are at least right continuous. For a filtration $\{\mathcal{F}_t\}$ set

$$\mathcal{F}_{t+} = \bigcap_{s > t} \mathcal{F}_s = \bigcap_{n \ge 1} \mathcal{F}_{t + \frac{1}{n}} \supset \mathcal{F}_t.$$

Then we have:

Lemma 17 Let $\{X_t\}_{t \ge 0}$ be a right continuous submartingale with respect to $\{\mathcal{F}_t\}$. Then $\{X_t\}_{t \ge 0}$ is a submartingale with respect to $\{\mathcal{F}_{t+}\}$.

Proof. Let $t > s$ and choose $n \ge 1$ such that $t > s + \frac{1}{n}$. Then

$$E\big[ X_t \,\big|\, \mathcal{F}_{s + \frac{1}{n}} \big] \ge X_{s + \frac{1}{n}}.$$

The right hand side converges to $X_s$ as $n \to \infty$ by right continuity. On the other hand, the left hand side converges to $E[X_t \mid \mathcal{F}_{s+}]$ due to Corollary 16, which completes the proof.

Keeping this lemma in mind, from now on we assume filtrations are always right continuous, that is, $\mathcal{F}_{t+} = \mathcal{F}_t$ for every $t \ge 0$. The three main results for martingales with discrete parameter are also valid for martingales with continuous parameter.

Theorem 18 (martingale inequalities) (i) Let $\{X_t\}_{t \ge 0}$ be a right continuous submartingale. Then for any $\lambda > 0$

$$P\Big( \sup_{s \le t} X_s \ge \lambda \Big) \le \lambda^{-1} E\Big[ X_t ;\ \sup_{s \le t} X_s \ge \lambda \Big] \le \lambda^{-1} E[X_t^+] \le \lambda^{-1} E|X_t|.$$

Suppose $\{X_t\}_{t \ge 0}$ is a right continuous martingale. Then

(ii) $P\big( \sup_{s \le t} |X_s| \ge \lambda \big) \le \lambda^{-p}\,E|X_t|^p$ for $p \ge 1$, $\lambda > 0$;
(iii) $E\big[ \sup_{s \le t} |X_s|^p \big] \le \big( \frac{p}{p-1} \big)^p E|X_t|^p$ for $p > 1$.

Proof. Let $\mathbb{Q}$ be the set of all rational numbers, and let $F_n$ be finite subsets of $(\mathbb{Q} \cap [0, t]) \cup \{t\}$ such that $F_n \subset F_{n+1}$, $\max F_n = t$ for every $n \ge 1$, and $\bigcup_{n=1}^\infty F_n = (\mathbb{Q} \cap [0, t]) \cup \{t\}$. Since $\{X_s\}_{s \in F_n}$ is a submartingale with discrete parameter, we can apply Theorem 8 and Corollary 9 to it. The right continuity of $\{X_t\}$ implies

$$\max_{s \in F_n} X_s \uparrow \sup_{s \in [0, t]} X_s \quad \text{as } n \to \infty,$$

which shows (i); (ii) and (iii) can be proved similarly.

To prove the optional stopping theorem we give a remark on stopping times. The notion of stopping times can be defined similarly: a non-negative random variable $\tau$ is called an $\{\mathcal{F}_t\}$ stopping time if $\{\tau \le t\} \in \mathcal{F}_t$ for every $t \ge 0$. The right continuity of $\{\mathcal{F}_t\}$ implies that this definition is equivalent to

$$\{\tau < t\} \in \mathcal{F}_t \ \text{for every } t > 0,$$

because

$$\{\tau < t\} = \bigcup_{n=1}^\infty \Big\{ \tau \le t - \tfrac{1}{n} \Big\} \in \mathcal{F}_t, \qquad \{\tau \le t\} = \bigcap_{n=1}^\infty \Big\{ \tau < t + \tfrac{1}{n} \Big\} \in \mathcal{F}_{t+} = \mathcal{F}_t.$$

We give examples.

Example 19 Let $\{X_t\}_{t \ge 0}$ be a right continuous stochastic process adapted to $\{\mathcal{F}_t\}$. For $A \subset \mathbb{R}$ set

$$\tau_A = \begin{cases} \inf\{t \ge 0;\ X_t \in A\} & \text{if } \{t \ge 0;\ X_t \in A\} \ne \emptyset, \\ \infty & \text{if } \{t \ge 0;\ X_t \in A\} = \emptyset. \end{cases}$$

If $A$ is an open set or a closed set, then $\tau_A$ becomes an $\{\mathcal{F}_t\}$ stopping time.

Proof. Assume first $A$ is open. Then by right continuity

$$\{\tau_A < t\} = \bigcup_{r \in \mathbb{Q},\ r < t} \{X_r \in A\} \in \mathcal{F}_t.$$

Suppose $A$ is closed. Then

$$\{\tau_A \le t\} = \bigcap_{n=1}^\infty \{\tau_{A_n} < t\} \in \mathcal{F}_t, \quad \text{where } A_n = \Big\{ x \in \mathbb{R};\ \mathrm{dist}(x, A) < \tfrac{1}{n} \Big\} \ \text{(an open set in } \mathbb{R}).$$

For a general Borel set $A$, the measurability of $\tau_A$ is a highly non-trivial problem, and we need extra properties of $\{X_t\}_{t \ge 0}$ and the completion of probability spaces.

Lemma 20 (i) Let $\sigma, \tau$ be stopping times. Then so are $\sigma \wedge \tau$, $\sigma \vee \tau$, $\sigma + \tau$. (ii) Let $\{\sigma_n\}_{n \ge 1}$ be a sequence of stopping times which is decreasing or increasing. Then $\lim_{n \to \infty} \sigma_n$ is also a stopping time.

Proof. We leave the proof to the reader.

Theorem 21 (optional stopping theorem) Let $\{X_t\}_{t \ge 0}$ be a right continuous submartingale and $\sigma, \tau$ stopping times such that $\tau \ge \sigma$. Assume

$$E\Big[ \sup_{t \le T} |X_t| \Big] < \infty \ \text{ for each } T > 0. \qquad (3)$$

Then $\{X_{t \wedge \tau} - X_{t \wedge \sigma}\}$ becomes a submartingale. In particular, if $\tau$ is bounded, we have $E[X_\tau] \ge E[X_\sigma]$.

Proof. For fixed $n \ge 1$, set

$$\tau^{(n)} = \frac{k+1}{2^n} \ \text{ if } \ \frac{k}{2^n} \le \tau < \frac{k+1}{2^n}, \qquad \sigma^{(n)} = \frac{k+1}{2^n} \ \text{ if } \ \frac{k}{2^n} \le \sigma < \frac{k+1}{2^n},$$

and $\mathcal{F}^n_k = \mathcal{F}_{k 2^{-n}}$. Then $\tau^{(n)}, \sigma^{(n)}$ are stopping times for the discrete filtration $\{\mathcal{F}^n_k\}$, because

$$\{\tau^{(n)} \le k 2^{-n}\} = \{\tau < k 2^{-n}\} \in \mathcal{F}_{k 2^{-n}} = \mathcal{F}^n_k.$$

Since $\{X_{k 2^{-n}}\}_k$ is a submartingale with respect to $\{\mathcal{F}^n_k\}$, we see from Theorem 12, for $k > l$,

$$E\big[ X_{k 2^{-n} \wedge \tau^{(n)}} - X_{k 2^{-n} \wedge \sigma^{(n)}} \,\big|\, \mathcal{F}^n_l \big] \ge X_{l 2^{-n} \wedge \tau^{(n)}} - X_{l 2^{-n} \wedge \sigma^{(n)}}.$$

Now assume $\frac{k-1}{2^n} \le t < \frac{k}{2^n}$, $\frac{l-1}{2^n} \le s < \frac{l}{2^n}$ with $s < t$, and let $A \in \mathcal{F}_s \subset \mathcal{F}^n_l$. Then the above inequality implies

$$E\big[ (X_{k 2^{-n} \wedge \tau^{(n)}} - X_{k 2^{-n} \wedge \sigma^{(n)}})\,I_A \big] \ge E\big[ (X_{l 2^{-n} \wedge \tau^{(n)}} - X_{l 2^{-n} \wedge \sigma^{(n)}})\,I_A \big].$$

On the other hand, since $\tau^{(n)} \downarrow \tau$, $\sigma^{(n)} \downarrow \sigma$ and $k 2^{-n} \to t$, $l 2^{-n} \to s$ as $n \to \infty$, the right continuity of $\{X_t\}$ shows

$$X_{k 2^{-n} \wedge \tau^{(n)}} \to X_{t \wedge \tau}, \quad X_{l 2^{-n} \wedge \tau^{(n)}} \to X_{s \wedge \tau}, \quad X_{k 2^{-n} \wedge \sigma^{(n)}} \to X_{t \wedge \sigma}, \quad X_{l 2^{-n} \wedge \sigma^{(n)}} \to X_{s \wedge \sigma} \quad \text{as } n \to \infty.$$

However, since $|X_{k 2^{-n} \wedge \tau^{(n)}}|, |X_{k 2^{-n} \wedge \sigma^{(n)}}| \le \sup_{u \le t+1} |X_u|$, which is integrable due to assumption (3), the dominated convergence theorem proves

$$E[(X_{t \wedge \tau} - X_{t \wedge \sigma})\,I_A] \ge E[(X_{s \wedge \tau} - X_{s \wedge \sigma})\,I_A] \quad \text{for } A \in \mathcal{F}_s.$$

Remark 22 A more detailed argument shows that $\{X_{k 2^{-n} \wedge \tau^{(n)}}\}$, $\{X_{k 2^{-n} \wedge \sigma^{(n)}}\}$ are uniformly integrable, so condition (3) is in fact unnecessary.

Theorem 23 (martingale convergence theorem) If $\{X_t\}_{t \ge 0}$ is a right continuous submartingale satisfying $E[X_t^+] \le C$ for any $t \ge 0$, then there exists a finite random variable $X_\infty$ such that

$$X_t \to X_\infty \ \text{as } t \to \infty \ \text{a.s.} \qquad \text{and} \qquad E|X_\infty| < \infty.$$

Proof. For fixed $n \ge 1$, set

$$T_n = \Big\{ \frac{k}{2^n};\ k = 0, 1, 2, \dots \Big\}, \qquad T = \bigcup_{n=1}^\infty T_n.$$

Then $T$ is countable and dense in $[0, \infty)$. For $N \ge 1$ and $a < b$, let $U^n_N(a, b)$ be the upcrossing number of $\{X_t\}_{t \in T_n \cap [0, N]}$ between $(a, b)$. Then from Lemma 13 we have

$$(b - a)\,E[U^n_N(a, b)] \le E[X_N^+] + E|X_0| \le C + E|X_0|.$$

Since $U^n_N(a, b)$ is increasing as $n, N \to \infty$, let $U(a, b)$ be its limit. Then we see

$$E[U(a, b)] \le \frac{C + E|X_0|}{b - a} < \infty,$$

which implies $U(a, b) < \infty$ a.s. Varying $a < b$ among all rational numbers, we see that with probability 1

$$\lim_{t \to \infty,\ t \in T} X_t \ \text{exists in } [-\infty, +\infty].$$

The right continuity of $\{X_t\}$ implies

$$\lim_{t \to \infty,\ t \in T} X_t = \lim_{t \to \infty} X_t.$$

On the other hand, we have

$$E|X_t| = 2E[X_t^+] - E[X_t] \le 2E[X_t^+] - E[X_0] \le 2C - E[X_0],$$

hence Fatou's lemma shows

$$E|X_\infty| \le 2C - E[X_0] < \infty.$$

5 Brownian motion

A typical and the most important martingale with continuous parameter is Brownian motion. In this section we introduce this process and study its basic properties.

Brownian motion was observed by the botanist Robert Brown in 1828 as the random motion of small particles contained in pollen. In 1905 Einstein used this process to study the existence of molecules, without knowing the previous discovery by Brown. Brownian motion was first defined mathematically, as a continuous stochastic process, by Wiener in 1923, and the process is therefore sometimes called the Wiener process. Lévy studied properties of the paths of Brownian motion from very original points of view. Kolmogorov tried to develop a dynamical theory of Markov processes and pointed out that Brownian motion could play a basic role for this purpose. Ito initiated a random dynamical theory based on Brownian motion in 1942 and started the rigorous treatment of stochastic differential equations.

A stochastic process $\{B_t\}_{t \ge 0}$ on a probability space $(\Omega, \mathcal{F}, P)$ is called a Brownian motion if it satisfies:

(1) each sample path $\{B_t\}_{t \ge 0}$ is continuous as a function of $t$;
(2) for any sequence $\{0 \le t_0 \le t_1 \le \cdots \le t_n\}$, the random variables $\{B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \dots, B_{t_n} - B_{t_{n-1}}\}$ are independent;
(3) the distribution of $B_t - B_s$ is $N(0, t - s)$, that is, a Gaussian distribution with expectation $0$ and variance $t - s$.

These three properties determine uniquely the distribution of a Brownian motion as a stochastic process. They are not independent: owing to the central limit theorem, properties (1) and (2) imply that the distribution of $B_t - B_s$ must be Gaussian.

The existence of a stochastic process satisfying the three properties is not trivial, and we had to wait for Wiener, who constructed Brownian motion as a Fourier series with random coefficients. Nowadays a more intuitive introduction of the process is possible, as a limit of simple random walks; we give a short explanation. Let $\{Y_n\}_{n \ge 1}$ be independent random variables taking the values $\pm 1$ with equal probability $1/2$. Set

$$X_n = Y_1 + Y_2 + \cdots + Y_n \ \text{ if } n \ge 1, \qquad X_0 = 0.$$

Define a continuous stochastic process $\{X_t\}_{t \ge 0}$ by linear interpolation:

$$X_t = (t - n)\,X_{n+1} + (n + 1 - t)\,X_n \quad \text{if } n \le t \le n + 1.$$

Since $E[X_t] = 0$ and $E[X_t^2] = (t - n)^2 + n$ if $n \le t \le n + 1$, we see

$$E\Big[ \Big( \frac{X_{mt}}{\sqrt{m}} \Big)^2 \Big] = \frac{(mt - n)^2 + n}{m} \quad \text{if } n \le mt \le n + 1.$$

Writing $t = \frac{n + \varepsilon}{m}$, we have

$$E\Big[ \Big( \frac{X_{mt}}{\sqrt{m}} \Big)^2 \Big] = \frac{\varepsilon^2 + n}{m}.$$

However, $0 \le \varepsilon \le 1$, therefore as $m \to \infty$ it holds that

$$E\Big[ \Big( \frac{X_{mt}}{\sqrt{m}} \Big)^2 \Big] \to t.$$

The central limit theorem then tells us that the distribution of $X_{mt}/\sqrt{m}$ converges to $N(0, t)$ as $m \to \infty$. If we take a closer look at the vector valued random variables

$$\Big( \frac{X_{mt_1} - X_{mt_0}}{\sqrt{m}},\ \frac{X_{mt_2} - X_{mt_1}}{\sqrt{m}},\ \dots,\ \frac{X_{mt_n} - X_{mt_{n-1}}}{\sqrt{m}} \Big),$$

we can show that their distributions converge to independent Gaussian distributions $N(0, t_1 - t_0), N(0, t_2 - t_1), \dots, N(0, t_n - t_{n-1})$. To construct a Brownian motion as a limit of $\{X_{mt}/\sqrt{m}\}$ this argument is not sufficient, and we have to discuss the convergence of the distributions $\mu_m$ of $\{X_{mt}/\sqrt{m}\}$ induced on the space

$$C([0, \infty)) = \{w;\ w \text{ is a continuous function on } [0, \infty)\}.$$

Donsker established this convergence, and the limit $\mu$ is a probability measure on $C([0, \infty))$ governing the probability law of Brownian motion. Although each sample path is continuous, it is known that almost every sample path is differentiable at no point of $[0, \infty)$, which makes it difficult to construct a stochastic analysis based on Brownian motion. Readers interested in a more detailed explanation of the construction of Brownian motion are encouraged to consult suitable textbooks.

A Brownian motion in $d$-dimensional space $\mathbb{R}^d$ ($d$-dim. B.M. for short) can be defined similarly. Let $B_t = (B_t^1, B_t^2, \dots, B_t^d)$ be an $\mathbb{R}^d$-valued stochastic process satisfying:

(1) each sample path $\{B_t\}_{t \ge 0}$ is continuous as a function of $t$;
(2) for any sequence $\{0 \le t_0 \le t_1 \le \cdots \le t_n\}$, the random variables $\{B_{t_1} - B_{t_0}, B_{t_2} - B_{t_1}, \dots, B_{t_n} - B_{t_{n-1}}\}$ are independent;
(3) the distribution of $B_t - B_s$ is $N(0, (t - s) I)$, that is, a Gaussian distribution with expectation $0$ and covariance matrix $(t - s) I$.

From this definition we can easily see that the components $\{B_t^1\}_{t \ge 0}, \{B_t^2\}_{t \ge 0}, \dots, \{B_t^d\}_{t \ge 0}$ are independent. Set $\mathcal{F}_t = \sigma\{B_s;\ s \le t\}$. We remark:

Proposition 24 For a $d$-dim. B.M. $\{B_t\}_{t \ge 0}$, each $\{B_t^i\}_{t \ge 0}$ is a 1-dim. B.M., and $\{B_t^i\}_{t \ge 0}$ and $\{B_t^i B_t^j - \delta_{ij} t\}_{t \ge 0}$ are martingales with respect to $\{\mathcal{F}_t\}$.
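The scaling $X_{mt}/\sqrt{m} \Rightarrow N(0, t)$ can be watched directly. A minimal NumPy sketch (the values of $m$, $t$ and the sample size are arbitrary assumptions; integer steps are used to keep memory small):

    import numpy as np

    rng = np.random.default_rng(5)
    m, t, n_paths = 2_500, 1.0, 20_000
    steps = 2 * rng.integers(0, 2, size=(n_paths, int(m * t)), dtype=np.int8) - 1
    X_mt = steps.sum(axis=1, dtype=np.int64)     # random walk at time mt

    scaled = X_mt / np.sqrt(m)
    print(scaled.mean(), scaled.var())   # ~0 and ~t = 1: consistent with N(0, t)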

Proof. For $t \ge s$,

$$E[B_t^i \mid \mathcal{F}_s] = E[B_t^i - B_s^i \mid \mathcal{F}_s] + E[B_s^i \mid \mathcal{F}_s] = E[B_t^i - B_s^i] + B_s^i = B_s^i.$$

We have used here the fact that $B_t^i - B_s^i$ is independent of $\mathcal{F}_s$. On the other hand,

$$E[B_t^i B_t^j \mid \mathcal{F}_s] = E[(B_t^i - B_s^i)(B_t^j - B_s^j) \mid \mathcal{F}_s] + E[B_s^i (B_t^j - B_s^j) \mid \mathcal{F}_s] + E[B_t^i B_s^j \mid \mathcal{F}_s]$$
$$= E[(B_t^i - B_s^i)(B_t^j - B_s^j)] + B_s^i\,E[B_t^j - B_s^j] + B_s^j\,E[B_t^i \mid \mathcal{F}_s] = \delta_{ij}(t - s) + B_s^i B_s^j.$$

A Brownian motion has a lot of symmetries. The following proposition reveals a part of these symmetries.

Proposition 25 Let $\{B_t\}_{t \ge 0}$ be a $d$-dim. B.M. starting from $0$. Then

(i) for a fixed $c > 0$, $\{B_{ct}/\sqrt{c}\}_{t \ge 0}$ is a $d$-dim. B.M. starting from $0$;
(ii) for a fixed $t_0 \ge 0$, $\{B_{t + t_0} - B_{t_0}\}_{t \ge 0}$ is a $d$-dim. B.M. starting from $0$;
(iii) $\{t B_{1/t}\}_{t \ge 0}$ (with the value $0$ at $t = 0$) is a $d$-dim. B.M. starting from $0$.

Proof. Only (iii) is not trivial to prove. To show this, set $\hat{B}_t = t B_{1/t}$ and consider a family of random vectors $(\hat{B}_{t_0}, \hat{B}_{t_1}, \dots, \hat{B}_{t_n})$. It is easy to see that their joint distribution is Gaussian in $\mathbb{R}^{(n+1)d}$ with expectation $0$. If we can show that the distributions of $(\hat{B}_{t_0}, \hat{B}_{t_1}, \dots, \hat{B}_{t_n})$ and $(B_{t_0}, B_{t_1}, \dots, B_{t_n})$ are equal, then we have the required property for $\{\hat{B}_t\}_{t \ge 0}$. Since Gaussian distributions in multi-dimensional space are determined by their expectations and covariances, we have only to compute, for $t \ge s$ and a 1-dim. B.M.,

$$E[\hat{B}_t \hat{B}_s] = ts\,E[B_{1/t} B_{1/s}] = ts\,(t^{-1} \wedge s^{-1}) = s = E[B_t B_s].$$

6 Stochastic integral

To define an integral based on a function $Y$ on the real line, usually the function has to be of bounded variation. However, as we will see, martingales such as Brownian motion do not have paths of bounded variation. Therefore the ordinary Lebesgue-Stieltjes integral cannot be applied to integration with respect to martingales. Fortunately martingales have an extended version of variation, which will be seen in this section, and we make use of this property for the definition of the stochastic integral.

6.1 Variational processes

Let $\{0 = t_0 < t_1 < t_2 < \cdots < t_n = T\}$ be a partition of $[0, T]$ and denote it by $\Pi$. For $p \ge 1$ and a function $\{Y_t\}$ on $[0, T]$ set

$$V_p(\Pi, Y) = \sum_{i=0}^{n-1} |Y_{t_{i+1}} - Y_{t_i}|^p, \qquad V_1(Y) = \sup_\Pi V_1(\Pi, Y).$$

A function $\{Y_t\}_{t \ge 0}$ is called of bounded variation on $[0, T]$ if $V_1(Y) < \infty$. Now, denoting $|\Pi| = \max\{t_{i+1} - t_i;\ 0 \le i \le n - 1\}$, we have for $p > 0$

$$V_{p+1}(\Pi, Y) \le V_1(\Pi, Y) \sup_{0 \le s \le t \le T,\ t - s \le |\Pi|} |Y_t - Y_s|^p \le V_1(Y) \sup_{0 \le s \le t \le T,\ t - s \le |\Pi|} |Y_t - Y_s|^p.$$

Therefore, if $\{Y_t\}_{t \ge 0}$ is continuous and of bounded variation, then $V_{p+1}(\Pi, Y) \to 0$ as $|\Pi| \to 0$.

What we would like to show in this section is the existence, for a square integrable martingale $M$, of a non-decreasing continuous process $A_t$ such that

$$V_2(\Pi_n, M) \to A_T \quad \text{if } |\Pi_n| \to 0.$$

Then a decomposition

$$M_t^2 = N_t + A_t \qquad (4)$$

is valid, where $\{N_t\}_{t \ge 0}$ is a continuous martingale. The process $\{A_t\}_{t \ge 0}$ is called the variational process associated with $\{M_t\}_{t \ge 0}$. This process is crucial for defining stochastic integrals based on martingales. Since $\{M_t^2\}$ is a submartingale, this process is usually introduced through the Doob-Meyer decomposition of submartingales; in this lecture, however, we employ another approach as a shortcut. (A numerical illustration of the second-order variation of Brownian paths follows below.)

Fix a right continuous filtration $\{\mathcal{F}_t\}_{t \ge 0}$ throughout this section. A stochastic process $\{f_t\}_{t \ge 0}$ adapted to the given filtration is called a step function if there exist an increasing sequence $\{t_n\}_{n \ge 0}$ and random variables $\{\varphi_n\}_{n \ge 0}$ such that

$$0 = t_0 < t_1 < t_2 < \cdots < t_n < \cdots, \quad t_n \to \infty \ \text{as } n \to \infty, \qquad f_t = \varphi_i \ \text{if } t \in (t_i, t_{i+1}],$$

where $\varphi_i$ is measurable w.r.t. $\mathcal{F}_{t_i}$. In this section we assume boundedness of step functions, that is, there exists a constant $C$ such that $|f_t(\omega)| \le C$ for all $t \ge 0$ and $\omega \in \Omega$. We denote the set of all bounded step functions by $\mathcal{L}_0 = \mathcal{L}_0(\mathcal{F})$.

Let $\{M_t\}_{t \ge 0}$ be a continuous martingale which is square integrable, that is, $E[M_t^2] < \infty$ for every $t \ge 0$, and which satisfies $M_0 = 0$. We define a stochastic integral $I_t(f)$ of a step function based on this martingale by $I_0(f) = 0$ and

$$I_t(f) = \sum_{i=0}^{k-1} \varphi_i \Delta_i + \varphi_k \Delta_t \quad \text{if } t_k < t \le t_{k+1}, \qquad (5)$$

where $\Delta_i = M_{t_{i+1}} - M_{t_i}$, $\Delta_t = M_t - M_{t_k}$. Since there are many ways to express a step function, it should be proved that the above integral does not depend on the expression; we leave this proof to the reader.
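For Brownian motion the picture is exactly as described: the first variation blows up while the second variation converges (to $\langle B \rangle_T = T$, as Example 32 below states). A minimal NumPy sketch (partition sizes and the horizon $T$ are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(6)
    T = 1.0
    for n in (100, 10_000, 1_000_000):
        dB = rng.standard_normal(n) * np.sqrt(T / n)    # increments over a mesh-T/n partition
        B = np.concatenate([[0.0], np.cumsum(dB)])
        V1 = np.abs(np.diff(B)).sum()                   # first variation: grows like sqrt(n)
        V2 = (np.diff(B)**2).sum()                      # second variation: -> T
        print(n, round(V1, 1), round(V2, 4))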

Lemma 26 $\{I_t(f)\}_{t \ge 0}$ is a continuous martingale satisfying, for $t > s$,

$$E[I_t(f)^2 \mid \mathcal{F}_s] = I_s(f)^2 + E\Big[ \sum_{i=l}^{k-1} \varphi_i^2 \Delta_i^2 + \varphi_k^2 \Delta_t^2 \,\Big|\, \mathcal{F}_s \Big], \qquad (6)$$

where we assume $t_l = s$ without loss of generality.

Proof. The continuity is clear from the definition. The computation

$$E[I_t(f) \mid \mathcal{F}_s] = E[I_s(f) \mid \mathcal{F}_s] + E\Big[ \sum_{i=l}^{k-1} \varphi_i \Delta_i + \varphi_k \Delta_t \,\Big|\, \mathcal{F}_s \Big] = I_s(f)$$

shows the martingale property of $\{I_t(f)\}$. Now we calculate the conditional expectation of the square:

$$E[I_t(f)^2 \mid \mathcal{F}_s] = E\Big[ \Big( I_s(f) + \sum_{i=l}^{k-1} \varphi_i \Delta_i + \varphi_k \Delta_t \Big)^2 \,\Big|\, \mathcal{F}_s \Big]$$
$$= I_s(f)^2 + E\Big[ \sum_{i=l}^{k-1} \varphi_i^2 \Delta_i^2 + \varphi_k^2 \Delta_t^2 \,\Big|\, \mathcal{F}_s \Big] + 2 I_s(f)\,E\Big[ \sum_{i=l}^{k-1} \varphi_i \Delta_i + \varphi_k \Delta_t \,\Big|\, \mathcal{F}_s \Big] + 2\,E\Big[ \sum_{l \le i < j \le k-1} \varphi_i \varphi_j \Delta_i \Delta_j + \sum_{l \le i \le k-1} \varphi_i \varphi_k \Delta_i \Delta_t \,\Big|\, \mathcal{F}_s \Big]$$
$$= I_s(f)^2 + E\Big[ \sum_{i=l}^{k-1} \varphi_i^2 \Delta_i^2 + \varphi_k^2 \Delta_t^2 \,\Big|\, \mathcal{F}_s \Big],$$

because for $i < j$ we see

$$E[\varphi_i \varphi_j \Delta_i \Delta_j \mid \mathcal{F}_s] = E\big[ E[\varphi_i \varphi_j \Delta_i \Delta_j \mid \mathcal{F}_{t_j}] \,\big|\, \mathcal{F}_s \big] = E\big[ \varphi_i \varphi_j \Delta_i\,E[\Delta_j \mid \mathcal{F}_{t_j}] \,\big|\, \mathcal{F}_s \big] = 0,$$

and

$$E[\varphi_i \varphi_k \Delta_i \Delta_t \mid \mathcal{F}_s] = E\big[ E[\varphi_i \varphi_k \Delta_i (M_t - M_{t_k}) \mid \mathcal{F}_{t_k}] \,\big|\, \mathcal{F}_s \big] = E\big[ \varphi_i \varphi_k \Delta_i\,E[M_t - M_{t_k} \mid \mathcal{F}_{t_k}] \,\big|\, \mathcal{F}_s \big] = 0,$$

which proves (6).

Since we have to discuss the convergence of sequences of continuous martingales, let $\mathcal{M}_T = \mathcal{M}_T(\mathcal{F})$ be the set of all continuous square integrable martingales on $[0, T]$ and introduce the norm of $M = \{M_t\}_{0 \le t \le T} \in \mathcal{M}_T$

$$\|M\| = \|M\|_{2,T} = \sqrt{E[M_T^2]}.$$

Lemma 27 With this norm the space $\mathcal{M}_T$ becomes a Hilbert space.

Proof. For $M = \{M_t\}_{0 \le t \le T} \in \mathcal{M}_T$ we have $M_t = E[M_T \mid \mathcal{F}_t]$ for $t \le T$. Therefore, if we have a Cauchy sequence $\{M^n\}_{n \ge 1} \subset \mathcal{M}_T$, whose terminal values $\{M^n_T\}$ converge to some $f \in L^2(\Omega, P)$ in $L^2$ norm, then the martingale defined by $M_t = E[f \mid \mathcal{F}_t]$ becomes the limit of $\{M^n\}_{n \ge 1}$ in $\mathcal{M}_T$, provided we can show the continuity of $\{M_t\}$. To see this we apply (ii) of Theorem 18 to the martingale $M^n - M^m$:

$$P\Big( \sup_{t \le T} |M^n_t - M^m_t| \ge \lambda \Big) \le \lambda^{-2}\,E[(M^n_T - M^m_T)^2] \qquad (7)$$

for any $\lambda > 0$. Since $\{M^n_T\}_{n \ge 1}$ is a Cauchy sequence, we can choose a subsequence $\{n_k\}_{k \ge 1}$ such that

$$E\big[ (M^{n_{k+1}}_T - M^{n_k}_T)^2 \big] \le \frac{1}{8^k}.$$

Set

$$A_k = \Big\{ \omega \in \Omega;\ \sup_{t \le T} |M^{n_{k+1}}_t(\omega) - M^{n_k}_t(\omega)| \ge \frac{1}{2^k} \Big\}, \qquad A = \bigcap_{n \ge 1} \bigcup_{k \ge n} A_k.$$

Then (7) implies $P(A_k) \le 2^{-k}$ for every $k \ge 1$, thus

$$P(A) \le P\Big( \bigcup_{k \ge n} A_k \Big) \le \sum_{k \ge n} P(A_k) \le \sum_{k \ge n} 2^{-k} \to 0 \quad \text{as } n \to \infty,$$

hence we see $P(A) = 0$. If $\omega \notin A$, then there exists $K \ge 1$ such that

$$\sup_{t \le T} |M^{n_{k+1}}_t(\omega) - M^{n_k}_t(\omega)| \le \frac{1}{2^k} \quad \text{for any } k \ge K.$$

For $k > l \ge K$, we have for every $t \in [0, T]$

$$|M^{n_k}_t(\omega) - M^{n_l}_t(\omega)| \le \sum_{l \le i \le k-1} |M^{n_{i+1}}_t(\omega) - M^{n_i}_t(\omega)| \le \sum_{l \le i \le k-1} \frac{1}{2^i} \to 0 \quad \text{as } k, l \to \infty,$$

hence $\{M^{n_k}_t(\omega)\}_{0 \le t \le T}$ is a Cauchy sequence in sup-norm and converges uniformly on $[0, T]$ to a continuous function $\{M'_t(\omega)\}_{0 \le t \le T}$. Since $M^n_t = E[M^n_T \mid \mathcal{F}_t]$, we see

$$E|M^n_t - M^m_t| = E\big| E[M^n_T - M^m_T \mid \mathcal{F}_t] \big| \le E|M^n_T - M^m_T| \to 0 \quad \text{as } n, m \to \infty,$$

thus for every $t \in [0, T]$, $\{M^n_t\}$ is also a Cauchy sequence converging to $M_t$, which shows $M_t = M'_t$.

The lemma below indicates that continuous martingales cannot be functions of bounded variation unless they are constant.

Lemma 28 Suppose $\{X_t\}_{t \ge 0}$ is an adapted continuous process which has a decomposition $X_t = M_t + A_t$ with a continuous martingale $\{M_t\}$ and a bounded variation process $\{A_t\}$ with $A_0 = 0$. Under the condition $E[V_1(A)] < \infty$, this decomposition is unique.

Proof. Suppose we have another such decomposition $\{M'_t, A'_t\}$ of $\{X_t\}_{t \ge 0}$ and set $Y_t = M_t - M'_t = A'_t - A_t$; then $\{Y_t\}_{t \ge 0}$ is a continuous martingale with bounded variation. First assume $\{Y_t\}_{t \ge 0}$ is bounded by $C$. Then, observing

$$V_2(\Pi, Y) \le \sup_{t - s \le |\Pi|} |Y_t - Y_s|\;V_1(Y) \le 2C\,V_1(Y), \qquad V_1(Y) \le V_1(A) + V_1(A') \in L^1,$$

we have by the dominated convergence theorem

$$E[V_2(\Pi, Y)] \to 0 \quad \text{as } |\Pi| \to 0. \qquad (8)$$

On the other hand, from the martingale property it follows for $t > s$ that

$$E[(Y_t - Y_s)^2] = E[Y_t^2 - 2 Y_t Y_s + Y_s^2] = E[Y_t^2 - Y_s^2],$$

hence we see

$$E[V_2(\Pi, Y)] = \sum_{i=0}^{n-1} E[(Y_{t_{i+1}} - Y_{t_i})^2] = \sum_{i=0}^{n-1} E[Y_{t_{i+1}}^2 - Y_{t_i}^2] = E[Y_T^2].$$

This together with (8) shows $E[Y_T^2] = 0$, which is nothing but $Y_t = 0$ for every $t$ by the martingale property of $\{Y_t\}$. If $\{Y_t\}_{t \ge 0}$ is not bounded, we have only to stop $\{Y_t\}$ by $\tau_c = \inf\{t \ge 0;\ |Y_t| > c\}$; then $\{Y_{t \wedge \tau_c}\}$ becomes a martingale, because condition (3) of Theorem 21 is satisfied:

$$E\Big[ \sup_{t \le T} |Y_t| \Big] \le E[V_1(Y)] \le E[V_1(A) + V_1(A')] < \infty.$$

Applying the above argument to $\{Y_{t \wedge \tau_c}\}$, we see $Y_{t \wedge \tau_c} = 0$. The rest of the proof is clear.

Now, for $\{M_t\}_{t \ge 0} \in \mathcal{M}_T$ and a fixed partition $\Pi = \{0 = t_0 < t_1 < t_2 < \cdots < t_n = T\}$ of $[0, T]$, set

$$f^\Pi_t = M_{t_i} \ \text{if } t_i < t \le t_{i+1} \ (i = 0, 1, \dots, n - 1), \quad f^\Pi_0 = 0, \qquad N^\Pi_t = 2 I_t(f^\Pi),$$

and

$$A^\Pi_t = \sum_{i=0}^{k-1} (M_{t_{i+1}} - M_{t_i})^2 \ \text{if } t_k \le t < t_{k+1} \ (k = 0, 1, \dots, n - 1), \qquad A^\Pi_T = \sum_{i=0}^{n-1} (M_{t_{i+1}} - M_{t_i})^2.$$

Then it holds that

$$M_t^2 = N^\Pi_t + A^\Pi_t + (M_t - M_{t_k})^2 \quad \text{if } t_k \le t < t_{k+1}. \qquad (9)$$

Lemma 29 Let $\{M_t\}_{t \ge 0} \in \mathcal{M}_T$ and suppose $\{M_t\}_{t \ge 0}$ is bounded, that is, there exists $C > 0$ such that $|M_t(\omega)| \le C$ holds for every $t$ and $\omega \in \Omega$. Then there exists a continuous and non-decreasing process $\{A_t\}_{0 \le t \le T}$ such that

$$E\Big[ \sup_{t \le T} |A^{\Pi_n}_t - A_t|^2 \Big] \to 0 \qquad (10)$$

holds if $|\Pi_n| \to 0$.

Proof. From (6) it follows that

$$E[(N^\Pi_t)^2] \le 4C^2\,E\big[ A^\Pi_t + (M_t - M_{t_k})^2 \big] = 4C^2\,E[M_t^2] \le 4C^4, \qquad (11)$$

$$E[(N^\Pi_t - N^{\Pi'}_t)^2] \le 4\,E\Big[ \sup_{s \le t} |f^\Pi_s - f^{\Pi'}_s|^2\,\big( A^{\Pi \vee \Pi'}_t + (M_t - M_{t_{k'}})^2 \big) \Big], \qquad (12)$$

where $\Pi \vee \Pi'$ is the partition of $[0, T]$ generated by $\Pi, \Pi'$ and $t_{k'}$ is the last point of $\Pi \vee \Pi'$ not exceeding $t$. From (9) and (11) we easily see

$$E\Big[ \big( A^{\Pi \vee \Pi'}_t + (M_t - M_{t_{k'}})^2 \big)^2 \Big] \le 2\big( E[M_t^4] + E[(N^{\Pi \vee \Pi'}_t)^2] \big) \le 10 C^4.$$

Thus (12) implies, by the Schwarz inequality,

$$E[(N^\Pi_t - N^{\Pi'}_t)^2] \le 4\,\Big( E\Big[ \sup_{s \le t} |f^\Pi_s - f^{\Pi'}_s|^4 \Big] \Big)^{1/2} \Big( E\Big[ \big( A^{\Pi \vee \Pi'}_t + (M_t - M_{t_{k'}})^2 \big)^2 \Big] \Big)^{1/2} \le 4\sqrt{10}\,C^2\,\Big( E\Big[ \sup_{s \le t} |f^\Pi_s - f^{\Pi'}_s|^4 \Big] \Big)^{1/2}. \qquad (13)$$

Since $\{f^{\Pi_n}_s\}$ converges to $\{M_s\}$ uniformly on $[0, T]$ as $|\Pi_n| \to 0$ and $\sup_s |f^{\Pi_n}_s - f^{\Pi_m}_s|^4 \le (2C)^4$, the bounded convergence theorem shows that the right hand side of (13) converges to $0$ as $n, m \to \infty$. Then Lemma 27 concludes that there exists $\{N_t\}_{t \ge 0} \in \mathcal{M}_T$ such that $\{N^{\Pi_n}\}$ converges to $\{N_t\}_{t \ge 0}$ in $\mathcal{M}_T$. Therefore $\{A^{\Pi_n}_t\}_{t \ge 0}$ also converges, in the sense of (10), to a continuous stochastic process $\{A_t\}_{t \ge 0}$, because the remainder term $(M_t - M_{t_k})^2$ clearly converges to $0$; being a limit of processes which are non-decreasing along partition points, $\{A_t\}$ is non-decreasing.

Now we can show the decomposition (4).

Theorem 30 For $\{M_t\}_{t \ge 0} \in \mathcal{M}_T$ there exist a unique martingale $\{N_t\}_{t \ge 0}$ and a non-decreasing stochastic process $\{A_t\}_{t \ge 0}$ with $A_0 = 0$ such that (4) holds.

Proof. The uniqueness has been proved in Lemma 28. The existence of the decomposition has already been proved in case $M$ is bounded. If $\{M_t\}_{t \ge 0}$ is not bounded, we truncate it by $\tau_c = \inf\{t \ge 0;\ |M_t| > c\}$. From (iii) of Theorem 18 we see

$$E\Big[ \sup_{s \le T} |M_s| \Big]^2 \le E\Big[ \sup_{s \le T} M_s^2 \Big] \le 4\,E[M_T^2], \qquad (14)$$

which assures condition (3) of Theorem 21. Then $\{M_{t \wedge \tau_c}\}$ becomes a bounded continuous martingale and by the previous argument we have a decomposition

$$M_{t \wedge \tau_c}^2 = N^{(c)}_t + A^{(c)}_t.$$

If $c' > c$, then $\tau_{c'} \ge \tau_c$, hence

$$N^{(c')}_{t \wedge \tau_c} + A^{(c')}_{t \wedge \tau_c} = N^{(c)}_t + A^{(c)}_t.$$

Then the uniqueness of the decomposition implies $N^{(c')}_{t \wedge \tau_c} = N^{(c)}_t$, $A^{(c')}_{t \wedge \tau_c} = A^{(c)}_t$; therefore we may define

$$N_t = N^{(c)}_t, \quad A_t = A^{(c)}_t \quad \text{if } t \le \tau_c.$$

Property (14) shows $E[M_{t \wedge \tau_c}^2] \le 4\,E[M_t^2]$, and hence

$$E[A_{t \wedge \tau_c}] = E[M_{t \wedge \tau_c}^2] \le 4\,E[M_t^2].$$

Letting $c \to \infty$, we see $A_{t \wedge \tau_c} \uparrow A_t$ from $\tau_c \uparrow \infty$, hence $E[A_t] \le 4\,E[M_t^2] < \infty$, and $N_t = M_t^2 - A_t \in L^1$. We have to check the martingale property of $\{N_t\}$. To verify this we have only to see the $L^1$ convergence $N^{(c)}_t \to N_t$ for every fixed $t$. However this is clear from

$$E|N^{(c)}_t - N_t| \le E|M_{t \wedge \tau_c}^2 - M_t^2| + E|A_{t \wedge \tau_c} - A_t| = E|M_{t \wedge \tau_c}^2 - M_t^2| + E[A_t] - E[A_{t \wedge \tau_c}]:$$

$|M_{t \wedge \tau_c}^2 - M_t^2|$ converges to $0$ a.s. as $c \to \infty$ and is dominated by the integrable $2\sup_{s \le T} M_s^2$, so the first term converges to $0$; the second term tends to $0$ because $\{A_t\}$ is non-decreasing and $E[A_{t \wedge \tau_c}] \uparrow E[A_t]$.

For a given $\{M_t\} \in \mathcal{M}_T$ the non-decreasing stochastic process $\{A_t\}_{t \ge 0}$ in Theorem 30 is called the variational process of $\{M_t\}_{t \ge 0}$ and is denoted by $\langle M \rangle_t$. It is convenient to define a variational process for a product of two martingales. Making use of the identity

$$M_t N_t = \frac{(M_t + N_t)^2 - (M_t - N_t)^2}{4},$$

for two such processes $\{M_t\}, \{N_t\}$ we introduce

$$\langle M, N \rangle_t = \frac{\langle M + N \rangle_t - \langle M - N \rangle_t}{4}.$$

$\{\langle M, N \rangle_t\}$ can be understood as the unique continuous process of bounded variation which makes $M_t N_t - \langle M, N \rangle_t$ a martingale. This remark implies the following.

Lemma 31 (i) $\langle \cdot, \cdot \rangle_t$ satisfies the properties of an inner product, that is, $\langle \cdot, \cdot \rangle_t$ is bilinear and $\langle M, M \rangle_t \ge 0$. (ii) For any stopping time $\tau$, let $M^\tau_t = M_{t \wedge \tau}$, $N^\tau_t = N_{t \wedge \tau}$. Then

$$\langle M^\tau, N^\tau \rangle_t = \langle M, N \rangle_{t \wedge \tau}.$$

From Proposition 24 we easily see:

Example 32 Let $B_t = (B_t^1, B_t^2, \dots, B_t^d)_{t \ge 0}$ be a $d$-dim. B.M. Then $\langle B^i, B^j \rangle_t = \delta_{ij}\,t$.

6.2 Stochastic integral

If a process $\{Y_t\}$ is of bounded variation (that is, $V_1(Y) < \infty$), an integral of suitable functions based on the process is possible: the Lebesgue-Stieltjes integral. In this section we define a stochastic integral based on continuous martingales, which was initiated by Ito. As we have seen in the last section, martingales are not of bounded variation, but they are of bounded variation of the second order in a weak sense. In this case we have to define the integral keeping in mind the quadratic structure of martingales developed in the last section.

For step functions we have already defined a stochastic integral (5) based on martingales. We now extend the integral to a wider class of random functions. Throughout this section, we fix a large $T > 0$ and define the integral on $[0, T]$. For $M \in \mathcal{M}_T$ and a function $f = f(t, \omega)$ on $[0, T] \times \Omega$ which is measurable with respect to $\mathcal{B}([0, T]) \times \mathcal{F}$, we define

$$\|f\|^2 = E\Big[ \int_0^T f(t, \omega)^2\,d\langle M \rangle_t \Big],$$

and denote the set of all such functions $f$ satisfying $\|f\| < \infty$ by $L^2(\Omega, P, \langle M \rangle)$. Clearly this space is a Hilbert space, and the space $\mathcal{L}_0$ of all bounded step functions restricted to $[0, T]$ is contained in $L^2(\Omega, P, \langle M \rangle)$. Define

$$\mathcal{L}_2(M) = \text{the closure of } \mathcal{L}_0 \text{ in } L^2(\Omega, P, \langle M \rangle). \qquad (15)$$

The following lemma exhibits a typical function belonging to $\mathcal{L}_2(M)$.

Lemma 33 Suppose a function $f = f(t, \omega)$ is left-continuous in $t$ for every fixed $\omega \in \Omega$ and $f(t, \cdot)$ is $\mathcal{F}_t$-measurable for each $t$. If $f \in L^2(\Omega, P, \langle M \rangle)$, then $f \in \mathcal{L}_2(M)$.

Proof. First suppose $f$ is bounded. For each $n \ge 1$, introduce

$$f_n(t, \omega) = \begin{cases} f(0, \omega) & \text{if } t = 0, \\ f(k 2^{-n}, \omega) & \text{if } t \in (k 2^{-n}, (k+1) 2^{-n}],\ k = 0, 1, 2, \dots \end{cases}$$

Then $f_n \in \mathcal{L}_0$, and the left-continuity implies $f_n(t, \omega) \to f(t, \omega)$ for every fixed $(t, \omega) \in [0, T] \times \Omega$. Therefore the bounded convergence theorem shows $\|f_n - f\| \to 0$, hence $f \in \mathcal{L}_2(M)$. If $f$ is not bounded, we prepare

$$\varphi_N(x) = \begin{cases} N & \text{if } x \ge N, \\ x & \text{if } |x| < N, \\ -N & \text{if } x \le -N, \end{cases}$$

and set $f_N = \varphi_N(f)$. Then $f_N \in \mathcal{L}_2(M)$ and it is easy to see that $\|f_N - f\| \to 0$ as $N \to \infty$, hence $f \in \mathcal{L}_2(M)$.

Now we look back at (6). Since $M_t^2 = N_t + \langle M \rangle_t$, we have

$$E[f_{t_i}^2 \Delta_i^2 \mid \mathcal{F}_{t_i}] = f_{t_i}^2\,E[\Delta_i^2 \mid \mathcal{F}_{t_i}] = f_{t_i}^2\,E[\langle M \rangle_{t_{i+1}} - \langle M \rangle_{t_i} \mid \mathcal{F}_{t_i}].$$

Therefore we see

$$E[I_t(f)^2 \mid \mathcal{F}_s] - I_s(f)^2 = E\Big[ \sum_{i=l}^{k-1} f_{t_i}^2 \Delta_i^2 + f_{t_k}^2 \Delta_t^2 \,\Big|\, \mathcal{F}_s \Big] = E\Big[ \int_s^t f_u(\omega)^2\,d\langle M \rangle_u \,\Big|\, \mathcal{F}_s \Big]. \qquad (16)$$

In particular, setting $s = 0$ and taking expectations, we have

$$E[I_t(f)^2] = E\Big[ \int_0^t f_u(\omega)^2\,d\langle M \rangle_u \Big]. \qquad (17)$$

Notice $I(f) \in \mathcal{M}_T$ and $\mathcal{M}_T$ is a Hilbert space (Lemma 27). For $f \in \mathcal{L}_2(M)$, choose a sequence $\{f_n\} \subset \mathcal{L}_0$ such that $\|f - f_n\| \to 0$ as $n \to \infty$. Then (17) shows

$$E\big[ (I_T(f_n) - I_T(f_m))^2 \big] = E\Big[ \int_0^T (f_{n,u}(\omega) - f_{m,u}(\omega))^2\,d\langle M \rangle_u \Big] = \|f_n - f_m\|^2,$$

hence $\{I(f_n)\}_{n \ge 1}$ converges in $\mathcal{M}_T$, and its limit is denoted by $I(f)$ (written $I^M(f)$ if necessary). This $I^M(f)$ is called the stochastic integral based on the martingale $M$.

Now suppose $M, N \in \mathcal{M}_T$ and $f \in \mathcal{L}_2(M)$, $g \in \mathcal{L}_2(N)$. Then $I^M(f)$ and $I^N(g)$ are in $\mathcal{M}_T$. The next problem is to compute their bracket $\langle \cdot, \cdot \rangle_t$. For this purpose, first assume $f, g \in \mathcal{L}_0$. Then a calculation similar to (6), (16) shows

$$E[I^M_t(f)\,I^N_t(g) \mid \mathcal{F}_s] - I^M_s(f)\,I^N_s(g) = E\Big[ \sum_{i=l}^{k-1} f_{t_i} g_{t_i} \Delta_i \Delta'_i + f_{t_k} g_{t_k} \Delta_t \Delta'_t \,\Big|\, \mathcal{F}_s \Big] = E\Big[ \int_s^t f_u(\omega) g_u(\omega)\,d\langle M, N \rangle_u \,\Big|\, \mathcal{F}_s \Big]$$

with $\Delta'_i = N_{t_{i+1}} - N_{t_i}$, $\Delta'_t = N_t - N_{t_k}$. Therefore we have

$$\langle I^M(f), I^N(g) \rangle_t = \int_0^t f_u(\omega) g_u(\omega)\,d\langle M, N \rangle_u. \qquad (18)$$

Since the convergence of the right hand side of (18) is not trivial for general $f \in \mathcal{L}_2(M)$, $g \in \mathcal{L}_2(N)$, we have to examine it. From (18) it follows that for $f, g \in \mathcal{L}_0$

$$\Big| E\Big[ \int_0^T f_u(\omega) g_u(\omega)\,d\langle M, N \rangle_u \Big] \Big| \le \|I^M(f)\|\,\|I^N(g)\| = \Big( E\Big[ \int_0^T f_u(\omega)^2\,d\langle M \rangle_u \Big] \Big)^{1/2} \Big( E\Big[ \int_0^T g_u(\omega)^2\,d\langle N \rangle_u \Big] \Big)^{1/2}. \qquad (19)$$

Let $h(s)$ be a non-random step function. Then for any bounded variation function $v(t)$ we see

$$\sup_{\|h\|_\infty \le 1} \int_0^T h(u)\,dv(u) = \int_0^T d|v|_u,$$

where $\|h\|_\infty = \sup\{|h(u)|;\ u \in [0, T]\}$ and $|v|_u$ is the total variation of $v$ on $[0, u]$. Let $h_1, h_2$ be two non-random step functions with $\|h_1\|_\infty, \|h_2\|_\infty \le 1$. Then, replacing $f, g$ in (19) by $h_1 f$, $h_2 g$ respectively, we obtain

$$E\Big[ \int_0^T |f_u(\omega) g_u(\omega)|\,d|\langle M, N \rangle|_u \Big] \le \Big( E\Big[ \int_0^T f_u(\omega)^2\,d\langle M \rangle_u \Big] \Big)^{1/2} \Big( E\Big[ \int_0^T g_u(\omega)^2\,d\langle N \rangle_u \Big] \Big)^{1/2}. \qquad (20)$$

Now it is easy to see that (20) holds for any $f \in \mathcal{L}_2(M)$, $g \in \mathcal{L}_2(N)$. Summing up these arguments, we obtain:

Theorem 34 Suppose $M, N \in \mathcal{M}_T$ and $f \in \mathcal{L}_2(M)$, $g \in \mathcal{L}_2(N)$. Then (18) is valid.

We remark the following.

Lemma 35 Let $M \in \mathcal{M}_T$, $f \in \mathcal{L}_2(M)$ and let $\tau$ be a stopping time. Then

$$I^M_{t \wedge \tau}(f) = I^{M^\tau}_t(f^\tau),$$

where $M^\tau_t = M_{t \wedge \tau}$, $f^\tau_t = I_{[0, \tau]}(t)\,f_t$.

Proof. For $f \in \mathcal{L}_0$ the statement is clearly valid; then pass to the limit. In the calculation notice (ii) of Lemma 31.

6.3 Localization of the stochastic integral

It is not convenient that the stochastic integral is defined only for square integrable martingales. In this section we extend the notion of martingales with the help of the optional stopping theorem. Set

$$\mathcal{M}_{loc,T} = \big\{ M = \{M_t\}_{0 \le t \le T};\ \text{there exist stopping times } \{\tau_n\}_{n \ge 1} \text{ with } \tau_n \le \tau_{n+1} \uparrow \infty \text{ such that } M^{\tau_n} = \{M_{t \wedge \tau_n}\}_{0 \le t \le T} \in \mathcal{M}_T \big\},$$

and

$$\mathcal{L}_{2,loc}(M) = \big\{ f = \{f_t\}_{0 \le t \le T};\ \text{there exist stopping times } \{\tau_n\}_{n \ge 1} \text{ with } \tau_n \le \tau_{n+1} \uparrow \infty \text{ such that } f^{\tau_n} = \{I_{[0, \tau_n]}(t)\,f_t\}_{0 \le t \le T} \in \mathcal{L}_2(M^{\tau_n}) \big\}.$$

An element of $\mathcal{M}_{loc,T}$ is called a local martingale. For $M \in \mathcal{M}_{loc,T}$, $f \in \mathcal{L}_{2,loc}(M)$, $I^{M^{\tau_n}}_t(f^{\tau_n})$ is defined as the stochastic integral introduced in Section 6.2. If $n > m$, then from Lemma 35 we have

$$I^{M^{\tau_m}}_t(f^{\tau_m}) = I^{(M^{\tau_n})^{\tau_m}}_t\big( (f^{\tau_n})^{\tau_m} \big) = I^{M^{\tau_n}}_{t \wedge \tau_m}(f^{\tau_n}).$$

Therefore we can define

$$I^M_t(f) = I^{M^{\tau_n}}_t(f^{\tau_n}) \quad \text{if } t \le \tau_n.$$

It is clear that $I^M(f) \in \mathcal{M}_{loc,T}$. For $M, N \in \mathcal{M}_{loc,T}$, we define

$$\langle M, N \rangle_t = \langle M^{\tau_n}, N^{\tau_n} \rangle_t \quad \text{if } t \le \tau_n.$$

This definition is seen to be well-defined through (ii) of Lemma 31. It is easy to see that

$$\langle I^M(f), I^N(g) \rangle_t = \int_0^t f_s g_s\,d\langle M, N \rangle_s \qquad (21)$$

for $M, N \in \mathcal{M}_{loc,T}$ and $f \in \mathcal{L}_{2,loc}(M)$, $g \in \mathcal{L}_{2,loc}(N)$.

6.4 Ito's formula

If $A_t$ is a continuous function of bounded variation (of the first order) and $f$ is a differentiable function whose derivative $f'$ is continuous, then the following formula is valid:

$$f(A_t) - f(A_0) = \int_0^t f'(A_s)\,dA_s. \qquad (22)$$

This can be seen as follows. Let $\Pi = \{0 = t_0 < t_1 < t_2 < \cdots < t_n = t\}$. Set

$$f(y) - f(x) = f'(x)(y - x) + \varepsilon_1(x, y)(y - x).$$

Then, for $C$ large enough that $A_s \in [-C, C]$ for any $s \in [0, t]$,

$$\rho_1(\varepsilon) \equiv \sup\{|\varepsilon_1(x, y)|;\ |y - x| \le \varepsilon,\ x, y \in [-C, C]\} \to 0 \quad \text{as } \varepsilon \to 0.$$

Thus

$$f(A_t) - f(A_0) = \sum_{k=0}^{n-1} \big( f(A_{t_{k+1}}) - f(A_{t_k}) \big) = \sum_{k=0}^{n-1} f'(A_{t_k})(A_{t_{k+1}} - A_{t_k}) + \sum_{k=0}^{n-1} \varepsilon_1(A_{t_k}, A_{t_{k+1}})(A_{t_{k+1}} - A_{t_k}).$$

Then as $|\Pi| \to 0$,

$$\sum_{k=0}^{n-1} f'(A_{t_k})(A_{t_{k+1}} - A_{t_k}) \to \int_0^t f'(A_s)\,dA_s,$$

and

$$\Big| \sum_{k=0}^{n-1} \varepsilon_1(A_{t_k}, A_{t_{k+1}})(A_{t_{k+1}} - A_{t_k}) \Big| \le \sum_{k=0}^{n-1} \big| \varepsilon_1(A_{t_k}, A_{t_{k+1}}) \big|\,|A_{t_{k+1}} - A_{t_k}| \le \rho_1(\varepsilon(\Pi, A))\,V_1(A) \to 0,$$

where $\varepsilon(\Pi, A) = \sup\{|A_u - A_v|;\ |u - v| \le |\Pi|\}$. If the process $A_t$ is replaced by a martingale $M_t$, the formula corresponding to (22) is no longer valid, and we have to take account of the next term of the Taylor expansion of $f$.

Theorem 36 Let $M^i = \{M^i_t\} \in \mathcal{M}_{loc,T}$ and let $\{A^i_t\}$ be adapted continuous processes of bounded variation, for $1 \le i \le d$. Then for $f = f(x_1, x_2, \dots, x_d)$ which has continuous derivatives up to the second order with respect to all variables, we have

$$f(X_t) - f(X_0) = \sum_{i=1}^d \int_0^t f_{x_i}(X_s)\,dX^i_s + \frac{1}{2} \sum_{i,j=1}^d \int_0^t f_{x_i x_j}(X_s)\,d\langle M^i, M^j \rangle_s, \qquad (23)$$

where $X_t = (X^1_t, X^2_t, \dots, X^d_t)$ and $X^i_t = M^i_t + A^i_t$.

Proof. For simplicity we assume $d = 1$. We first assume $M, A$ are bounded by $C$. From the assumption on $f$ it follows that

$$f(y) - f(x) = f'(x)(y - x) + \tfrac{1}{2} f''(x)(y - x)^2 + \varepsilon_2(x, y)(y - x)^2,$$

with

$$\rho_2(\varepsilon) \equiv \sup\{|\varepsilon_2(x, y)|;\ |y - x| \le \varepsilon,\ |x|, |y| \le 2C\} \to 0 \quad \text{as } \varepsilon \to 0.$$

Then for $\Pi = \{0 = t_0 < t_1 < t_2 < \cdots < t_n = t\}$ we see

$$f(X_t) - f(X_0) = \sum_{k=0}^{n-1} \big( f(X_{t_{k+1}}) - f(X_{t_k}) \big) = \sum_{k=0}^{n-1} f'(X_{t_k})(X_{t_{k+1}} - X_{t_k}) + \frac{1}{2} \sum_{k=0}^{n-1} f''(X_{t_k})(X_{t_{k+1}} - X_{t_k})^2 + \sum_{k=0}^{n-1} \varepsilon_2(X_{t_k}, X_{t_{k+1}})(X_{t_{k+1}} - X_{t_k})^2$$
$$\equiv I^\Pi_1 + I^\Pi_2 + I^\Pi_3.$$

Since $f'(X_t)$ is continuous and bounded, it is clear that

$$I^\Pi_1 = \sum_{k=0}^{n-1} f'(X_{t_k})(M_{t_{k+1}} - M_{t_k}) + \sum_{k=0}^{n-1} f'(X_{t_k})(A_{t_{k+1}} - A_{t_k}) \to \int_0^t f'(X_s)\,dM_s + \int_0^t f'(X_s)\,dA_s = \int_0^t f'(X_s)\,dX_s$$

as $|\Pi| \to 0$; here the first term converges in $L^2(\Omega, P)$ by Theorem 34. To treat $I^\Pi_3$, set

$$\varepsilon(\Pi, X) = \sup_{s, t \le T,\ |s - t| \le |\Pi|} |X_t - X_s|.$$

Then

$$|I^\Pi_3| \le \rho_2(\varepsilon(\Pi, X)) \sum_{k=0}^{n-1} (X_{t_{k+1}} - X_{t_k})^2. \qquad (24)$$

However

$$\sum_{k=0}^{n-1} (X_{t_{k+1}} - X_{t_k})^2 \le 2 \sum_{k=0}^{n-1} (A_{t_{k+1}} - A_{t_k})^2 + 2 \sum_{k=0}^{n-1} (M_{t_{k+1}} - M_{t_k})^2,$$

and

$$\sum_{k=0}^{n-1} (A_{t_{k+1}} - A_{t_k})^2 \le \varepsilon(\Pi, A)\,V_1(A) \to 0 \quad \text{as } |\Pi| \to 0.$$

The second term is nothing but

$$\sum_{k=0}^{n-1} (M_{t_{k+1}} - M_{t_k})^2 = A^\Pi_t$$

(see (9)). Then from Lemma 29 it follows that $A^{\Pi_n}_t \to \langle M \rangle_t$ in $L^2(\Omega, P)$ as $|\Pi_n| \to 0$. Hence, by choosing a suitable subsequence $\{n_k\}$, we see $A^{\Pi_{n_k}}_t \to \langle M \rangle_t$ uniformly on $[0, T]$ a.s. Then (24) shows $I^{\Pi_{n_k}}_3 \to 0$. As for $I^\Pi_2$, only the term

$$J^\Pi \equiv \frac{1}{2} \sum_{k=0}^{n-1} f''(X_{t_k})(M_{t_{k+1}} - M_{t_k})^2$$

has a non-trivial limit. However we have

$$J^\Pi = \frac{1}{2} \int_0^t f''(X_s)\,dA^\Pi_s.$$

As we have seen, $A^{\Pi_{n_k}}_t \to \langle M \rangle_t$ uniformly on $[0, T]$ a.s., hence

$$J^{\Pi_{n_k}} \to \frac{1}{2} \int_0^t f''(X_s)\,d\langle M \rangle_s.$$

Consequently we have proved (23) if $d = 1$ and $M$ is bounded. For a general $M \in \mathcal{M}_{loc,T}$, we have only to truncate it, namely $M^{\tau_n \wedge \tau_c}_t = M_{t \wedge \tau_n \wedge \tau_c}$. Set $\tau = \tau_n \wedge \tau_c$. Then (23) is valid for $M^\tau$, hence from (ii) of Lemma 31 and the definitions in Section 6.3 it follows that

$$f(X_{t \wedge \tau}) - f(X_0) = \int_0^t f'(X_{s \wedge \tau})\,dX_{s \wedge \tau} + \frac{1}{2} \int_0^t f''(X_{s \wedge \tau})\,d\langle M^\tau \rangle_s = \int_0^{t \wedge \tau} f'(X_s)\,dX_s + \frac{1}{2} \int_0^{t \wedge \tau} f''(X_s)\,d\langle M \rangle_s.$$

Since $\tau = \tau_n \wedge \tau_c \uparrow \infty$ as $n, c \to \infty$, we complete the proof for $d = 1$.

A process $X_t = M_t + A_t$, with $M$ a continuous local martingale and $A$ a continuous adapted process of bounded variation, is called a semimartingale. Ito's formula is the chain rule of stochastic analysis, and it can be understood more intuitively if we write (23) in differential form:

$$df(X_t) = \sum_{i=1}^d f_{x_i}(X_t)\,dX^i_t + \frac{1}{2} \sum_{i,j=1}^d f_{x_i x_j}(X_t)\,d\langle M^i, M^j \rangle_t.$$

The variational process is sometimes expressed as $d\langle M, N \rangle_t = dM_t\,dN_t$, and if $A, A'$ are continuous processes of bounded variation, we set

$$dA_t\,dM_t = dM_t\,dA_t = dA_t\,dA'_t = 0.$$

Then Ito's formula can be written as

$$df(X_t) = \sum_{i=1}^d f_{x_i}(X_t)\,dX^i_t + \frac{1}{2} \sum_{i,j=1}^d f_{x_i x_j}(X_t)\,dX^i_t\,dX^j_t.$$
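Ito's formula for $f(x) = x^2$ and $X = B$ reads $B_T^2 = 2 \int_0^T B_s\,dB_s + T$, and this can be verified numerically against a Riemann sum with left endpoints (the non-anticipating choice that defines the Ito integral). A minimal NumPy sketch (step count and horizon are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(7)
    n, T = 1_000_000, 1.0
    dB = rng.standard_normal(n) * np.sqrt(T / n)
    B = np.concatenate([[0.0], np.cumsum(dB)])

    # Ito's formula for f(x) = x^2:  B_T^2 = 2 int_0^T B_s dB_s + <B>_T, with <B>_T = T
    ito_integral = np.sum(B[:-1] * dB)       # left endpoints: the non-anticipating choice
    print(B[-1]**2, 2 * ito_integral + T)    # the two sides agree up to discretization error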

7 Applications

Ito's formula has many applications; here we give two of them. For other applications see the exercises.

7.1 Martingale characterization of Brownian motion

Lévy pointed out that Brownian motion can be characterized by martingales, and this led Stroock-Varadhan to the characterization of various stochastic processes through martingales, which is now called the martingale problem.

Theorem 37 Let $M^i \in \mathcal{M}_{loc,T}$ with $\langle M^i, M^j \rangle_t = \delta_{ij}\,t$ and $M^i_0 = 0$ for $i = 1, 2, \dots, d$. Then $M_t = (M^1_t, M^2_t, \dots, M^d_t)$ is a $d$-dimensional Brownian motion.

Proof. For $\xi \in \mathbb{R}^d$ apply Ito's formula to $e^{i\xi \cdot M_t}$:

$$e^{i\xi \cdot M_t} = 1 + i \sum_{k=1}^d \xi_k \int_0^t e^{i\xi \cdot M_s}\,dM^k_s - \frac{1}{2} \sum_{k,l=1}^d \xi_k \xi_l \int_0^t e^{i\xi \cdot M_s}\,d\langle M^k, M^l \rangle_s = 1 + i \sum_{k=1}^d \xi_k \int_0^t e^{i\xi \cdot M_s}\,dM^k_s - \frac{1}{2} |\xi|^2 \int_0^t e^{i\xi \cdot M_s}\,ds.$$

Thus we have for $t \ge s$

$$E[e^{i\xi \cdot M_t} \mid \mathcal{F}_s] = e^{i\xi \cdot M_s} - \frac{1}{2} |\xi|^2 \int_s^t E[e^{i\xi \cdot M_u} \mid \mathcal{F}_s]\,du.$$

Regarding this relation as an integral equation for the unknown $E[e^{i\xi \cdot M_t} \mid \mathcal{F}_s]$, we obtain

$$E[e^{i\xi \cdot M_t} \mid \mathcal{F}_s] = e^{i\xi \cdot M_s}\,e^{-\frac{1}{2}(t - s)|\xi|^2},$$

which is equivalent to

$$E[e^{i\xi \cdot (M_t - M_s)} \mid \mathcal{F}_s] = e^{-\frac{1}{2}(t - s)|\xi|^2}.$$

This shows that $M_t - M_s$ is independent of $\mathcal{F}_s$ and that $M_t - M_s$ is distributed as a Gaussian distribution with expectation $0$ and covariance matrix $(t - s)I$, which concludes that $M_t$ is a $d$-dim. B.M.

Corollary 38 Let $B_t$ be a $d$-dim. B.M. and let $(g_{ij}(t, \omega))_{1 \le i, j \le d}$ be an orthogonal matrix for every $t \ge 0$ and $\omega \in \Omega$. Assume $g_{ij} \in \mathcal{L}_2(B^j)$. Then $M_t = (M^1_t, M^2_t, \dots, M^d_t)$ with

$$M^i_t = \sum_{j=1}^d \int_0^t g_{ij}(s, \omega)\,dB^j_s$$

becomes a $d$-dim. B.M.

7.2 Brownian functionals and the stochastic integral

Let $B_t = (B^1_t, B^2_t, \dots, B^d_t)$ be a $d$-dim. B.M. starting from $0$ and set

$$\mathcal{F}_t = \sigma\{B_s;\ s \le t\} = \sigma\{B_s - B_u;\ 0 \le u \le s \le t\}.$$

In this section we show that any $\mathcal{F}_T$-measurable square integrable random variable can be expressed by a stochastic integral based on $\{B_t\}$. For simplicity assume $d = 1$. Set

$$\mathcal{F} = \Big\{ \text{linear combinations of } \prod_{k=1}^m e^{i\xi_k (B_{r_k} - B_{r_{k-1}})} \ \text{with } \xi_k \in \mathbb{R},\ 0 \le r_0 \le r_1 \le \cdots \le r_m \le T \Big\}.$$

Lemma 39 $\mathcal{F}$ is dense in $L^2(\Omega, \mathcal{F}_T, P)$.

Proof. Let

$$\mathcal{F}_1 = \Big\{ Y = f(B_{r_1} - B_{r_0}, \dots, B_{r_m} - B_{r_{m-1}});\ f \text{ is a Borel measurable function on } \mathbb{R}^m \text{ with } E\big| f(B_{r_1} - B_{r_0}, \dots, B_{r_m} - B_{r_{m-1}}) \big|^2 < \infty \Big\}.$$

Then $\mathcal{F} \subset \mathcal{F}_1$ and $\mathcal{F}_1$ is dense in $L^2(\Omega, \mathcal{F}_T, P)$. Hence we have only to prove that $\{\text{linear combinations of } e^{i\xi \cdot x} \text{ with } \xi \in \mathbb{R}^m\}$ is dense in $L^2(\mathbb{R}^m, N(0, \Lambda))$, where $N(0, \Lambda)$ is the Gaussian distribution with expectation $0$ and covariance matrix

$$\Lambda = \mathrm{diag}(r_1 - r_0,\ r_2 - r_1,\ \dots,\ r_m - r_{m-1}).$$

This is clear by the Fourier transform.

Theorem 40 For any $X \in L^2(\Omega, \mathcal{F}_T, P)$ there exists a unique $f \in \mathcal{L}_2(B)$ such that

$$X = E[X] + \int_0^T f(s, \omega)\,dB_s. \qquad (25)$$

Proof. Let $S$ be the set of all $X$ which can be represented as (25).

(1) $X = e^{i\xi(B_t - B_r)} \in S$ for $0 \le r \le t \le T$. Indeed, due to Ito's formula we have

$$X = e^{-\frac{1}{2}\xi^2(t - r)} + \int_0^T I_{[r,t]}(s)\,i\xi\,e^{i\xi(B_s - B_r)}\,e^{-\frac{1}{2}\xi^2(t - s)}\,dB_s,$$

hence setting $f(s, \omega) = I_{[r,t]}(s)\,i\xi\,e^{i\xi(B_s - B_r)}\,e^{-\frac{1}{2}\xi^2(t - s)}$, we obtain $e^{i\xi(B_t - B_r)} \in S$.

(2) Suppose $X_k \in S$ with bounded $f_k$ for $k = 1, 2, \dots, n$ and $f_k(s, \omega) f_l(s, \omega) = 0$ for $k \ne l$. Then $X_1 X_2 \cdots X_n \in S$. Let

$$X_k(t) = E[X_k] + \int_0^t f_k(s, \omega)\,dB_s.$$

Then Ito's formula implies

$$X_1(t) X_2(t) = X_1(0) X_2(0) + \int_0^t X_1(s)\,dX_2(s) + \int_0^t X_2(s)\,dX_1(s) + \langle X_1, X_2 \rangle_t$$
$$= E[X_1]\,E[X_2] + \int_0^t \big( X_1(s) f_2(s) + X_2(s) f_1(s) \big)\,dB_s + \int_0^t f_1(s) f_2(s)\,ds,$$

thus, since $f_1 f_2 = 0$,

$$X_1 X_2 = E[X_1]\,E[X_2] + \int_0^T \big( X_1(s) f_2(s) + X_2(s) f_1(s) \big)\,dB_s \in S.$$

Inductively we can show $X_1 X_2 \cdots X_n \in S$.

(3) Suppose $X_n \in S$ and $X_n \to X$ in $L^2(\Omega, P)$. Then $X \in S$. This follows immediately from Lemma 27.

Summing up the above arguments and Lemma 39 concludes the theorem.

For higher dimensional Brownian motions the corresponding theorem is also valid, that is:

Theorem 41 Let $(\Omega, \mathcal{F}_T, P)$ be a probability space generated by a $d$-dim. B.M. $\{B_t\}$. Then for any $X \in L^2(\Omega, \mathcal{F}_T, P)$ there exist unique $f_i \in \mathcal{L}_2(B^i)$ $(i = 1, 2, \dots, d)$ such that

$$X = E[X] + \sum_{i=1}^d \int_0^T f_i(s, \omega)\,dB^i_s.$$

This theorem was first proved by K. Ito by applying the Wiener-Ito expansion, and it is now used in the Black-Scholes theory to show the completeness of markets.

Corollary 42 Let $(\Omega, \mathcal{F}_T, P)$ be a probability space generated by a $d$-dim. B.M. $\{B_t\}$. Then $\mathcal{F}_{t+} = \mathcal{F}_t$ for every $t \ge 0$.

Proof. Let $X \in L^2(\Omega, \mathcal{F}_{t+}, P)$. Then for any $n \ge 1$, $X \in L^2(\Omega, \mathcal{F}_{t+1/n}, P)$, and applying Theorem 41 we have

$$X = E[X] + \sum_{i=1}^d \int_0^{t + 1/n} f^n_i(s, \omega)\,dB^i_s.$$

However, the uniqueness implies

$$f^n_i(s, \omega) = f^m_i(s, \omega) \quad \text{if } s \in [0, t + \tfrac{1}{n}] \ \text{and } n \ge m,$$

hence, letting $n \to \infty$, we see

$$X = E[X] + \sum_{i=1}^d \int_0^t f^1_i(s, \omega)\,dB^i_s,$$

which is $\mathcal{F}_t$-measurable.

Corollary 43 Let $(\Omega, \mathcal{F}_T, P)$ be a probability space generated by a $d$-dim. B.M. $\{B_t\}$. Any square integrable martingale $M = \{M_t\}$ on this probability space can be represented by a stochastic integral based on the Brownian motion, and hence is automatically continuous.

Proof. Since $M_T \in L^2(\Omega, \mathcal{F}_T, P)$, Theorem 41 implies

$$M_T = E[M_T] + \sum_{i=1}^d \int_0^T f_i(s, \omega)\,dB^i_s.$$

Therefore we see

$$M_t = E[M_T \mid \mathcal{F}_t] = E[M_T] + \sum_{i=1}^d \int_0^t f_i(s, \omega)\,dB^i_s.$$

8 Stochastic differential equations

Ito constructed the stochastic integral based on Brownian motion in order to define rigorously ordinary differential equations whose coefficients are derivatives of Brownian motion. His equations are now called stochastic differential equations (SDEs) and are used to describe various random phenomena. This section is devoted to a brief introduction to the theory of SDEs.

Let $\{a_{ij}(t, x), b_i(t, x)\}_{1 \le i, j \le d}$ be functions on $[0, \infty) \times \mathbb{R}^d$ satisfying certain smoothness conditions. An SDE is

$$dX^i_t = \sum_{j=1}^d a_{ij}(t, X_t)\,dB^j_t + b_i(t, X_t)\,dt, \quad 1 \le i \le d.$$

More precisely, this equation has to be understood in integral form:

$$X^i_t = x_i + \sum_{j=1}^d \int_0^t a_{ij}(s, X_s)\,dB^j_s + \int_0^t b_i(s, X_s)\,ds, \quad 1 \le i \le d. \qquad (26)$$

If $a_{ij}(t, x) \equiv 0$, then the above equation becomes an ordinary differential equation. We prove here the existence and uniqueness of solutions of equation (26). Assume $\{a_{ij}(t, x), b_i(t, x)\}_{1 \le i, j \le d}$ are continuous functions on $[0, T] \times \mathbb{R}^d$ satisfying

$$|a_{ij}(t, x) - a_{ij}(t, y)| + |b_i(t, x) - b_i(t, y)| \le K\,|x - y| \qquad (27)$$

with some $K > 0$, for any $x, y \in \mathbb{R}^d$, $t \in [0, T]$. Set

$$L_c = \big\{ X_t = (X^i_t(\omega))_{i=1}^d;\ X_t \text{ is continuous, adapted and satisfies } \|X\|_T < \infty \big\}, \quad \text{where } \|X\|_t^2 = E\Big[ \sup_{s \le t} |X_s|^2 \Big].$$

Theorem 44 Under condition (27) the equation (26) has a unique solution for any initial condition $(x_i)_{1 \le i \le d} \in \mathbb{R}^d$.

Proof. For $\{X_t\} \in L_c$ define

$$\Phi(X)^i_t = x_i + \sum_{j=1}^d \int_0^t a_{ij}(s, X_s)\,dB^j_s + \int_0^t b_i(s, X_s)\,ds, \quad 1 \le i \le d.$$

Since

$$|a_{ij}(t, x)| + |b_i(t, x)| \le K|x| + |a_{ij}(t, 0)| + |b_i(t, 0)| \le K|x| + K_1 \qquad (28)$$

with $K_1 = \max\{|a_{ij}(t, 0)| + |b_i(t, 0)|;\ 0 \le t \le T,\ 1 \le i, j \le d\}$, we see from (iii) of Theorem 18 and (28)

$$E\Big[ \sup_{t \le T} |\Phi(X)^i_t|^2 \Big] \le 3\,x_i^2 + 3\,E\Big[ \sup_{t \le T} \Big| \sum_{j=1}^d \int_0^t a_{ij}(s, X_s)\,dB^j_s \Big|^2 \Big] + 3\,E\Big[ \sup_{t \le T} \Big| \int_0^t b_i(s, X_s)\,ds \Big|^2 \Big]$$
$$\le 3\,x_i^2 + 12\,d \sum_{j=1}^d E\Big[ \int_0^T a_{ij}(s, X_s)^2\,ds \Big] + 3\,T\,E\Big[ \int_0^T b_i(s, X_s)^2\,ds \Big] \le K_2\big( 1 + \|X\|_T^2 \big)$$

with some other constant $K_2$. Therefore $\Phi(X) \in L_c$. Similarly from (27) we have for $X, Y \in L_c$

$$\|\Phi(X) - \Phi(Y)\|_t^2 \le K_3 \int_0^t \|X - Y\|_s^2\,ds \qquad (29)$$

with some constant $K_3$. Therefore, setting

$$X^0_t = (x_i), \qquad X^n_t = \Phi(X^{n-1})_t \ \text{for } n \ge 1,$$

we see

$$\|X^{n+1} - X^n\|_t^2 \le K_3 \int_0^t \|X^n - X^{n-1}\|_s^2\,ds \quad \text{for } n \ge 1,$$

which implies

$$\|X^{n+1} - X^n\|_t^2 \le \frac{(K_3 t)^n}{n!}\,\|X^1 - X^0\|_T^2.$$

Thus

$$\sum_{n=0}^\infty \|X^{n+1} - X^n\|_T < \infty.$$

Since $L_c$ is complete (by the same argument as in Lemma 27), there exists an $X \in L_c$ such that $X^n \to X$ in $L_c$. This $X$ satisfies $X = \Phi(X)$, hence (26). The uniqueness
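The Picard iteration above is a proof device; in practice a solution of (26) is approximated by the Euler-Maruyama scheme, which replaces the integrals on a small time step $\Delta t$ by $a(t, X_t)\,\Delta B + b(t, X_t)\,\Delta t$. A minimal one-dimensional NumPy sketch (the Ornstein-Uhlenbeck test case, its parameters and the step counts are illustrative assumptions, not from the notes):

    import numpy as np

    def euler_maruyama(a, b, x0, T, n, n_paths, rng):
        """Approximate X_t = x0 + int_0^t a(s, X_s) dB_s + int_0^t b(s, X_s) ds on [0, T]."""
        dt = T / n
        X = np.full(n_paths, x0, dtype=float)
        for k in range(n):
            dB = rng.standard_normal(n_paths) * np.sqrt(dt)
            X = X + a(k * dt, X) * dB + b(k * dt, X) * dt
        return X

    # Ornstein-Uhlenbeck: dX = dB - X dt; Var(X_t) = (1 - e^{-2t})/2 -> 1/2.
    rng = np.random.default_rng(8)
    X_T = euler_maruyama(lambda t, x: 1.0, lambda t, x: -x, 0.0, 5.0, 1_000, 50_000, rng)
    print(X_T.mean(), X_T.var())   # ~0 and ~0.5

Under the Lipschitz condition (27) this scheme is known to converge to the solution as the mesh tends to zero, mirroring the convergence of the Picard iterates.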

follows from a truncation argument. Let $\{X_t\}, \{X'_t\}$ be two solutions of (26) and set $\tau_c = \inf\{t \ge 0;\ |X_t| \vee |X'_t| \ge c\}$. Then we see

$$E\Big[ \sup_{s \le t} |X_{s \wedge \tau_c} - X'_{s \wedge \tau_c}|^2 \Big] \le K_3 \int_0^t E\Big[ \sup_{u \le s} |X_{u \wedge \tau_c} - X'_{u \wedge \tau_c}|^2 \Big]\,ds,$$

hence, iterating this inequality (Gronwall's argument),

$$E\Big[ \sup_{s \le t} |X_{s \wedge \tau_c} - X'_{s \wedge \tau_c}|^2 \Big] = 0,$$

which implies the uniqueness.

8.1 SDE and partial differential equations

Let $\{B_t\}$ be a $d$-dim. B.M. and let $u(t, x)$ be a smooth bounded function on $[0, \infty) \times \mathbb{R}^d$ satisfying

$$\frac{\partial u}{\partial t} = \frac{1}{2} \Delta u, \qquad u(0, x) = f(x). \qquad (30)$$

Then, applying Ito's formula to $M_t = u(T - t, B_t + x)$, we see

$$dM_t = -u_t(T - t, B_t + x)\,dt + \sum_{1 \le i \le d} u_{x_i}(T - t, B_t + x)\,dB^i_t + \frac{1}{2} \sum_{1 \le i, j \le d} u_{x_i x_j}(T - t, B_t + x)\,dB^i_t\,dB^j_t$$
$$= -u_t(T - t, B_t + x)\,dt + \sum_{1 \le i \le d} u_{x_i}(T - t, B_t + x)\,dB^i_t + \frac{1}{2} \Delta u(T - t, B_t + x)\,dt = \sum_{1 \le i \le d} u_{x_i}(T - t, B_t + x)\,dB^i_t,$$

hence $\{M_t\}$ turns out to be a martingale. Therefore we have

$$E[M_T] = E[M_0], \qquad \text{that is,} \qquad u(T, x) = E[f(B_T + x)].$$

This shows the uniqueness of bounded solutions of equation (30), and $u(t, x) = E[f(B_t + x)]$ satisfies (30).

This argument can be extended to solutions of SDEs whose coefficients do not depend on $t$. Under the conditions of Theorem 44 on the coefficients, set

$$L = \frac{1}{2} \sum_{1 \le i, j \le d} \sigma_{ij}(x)\,\frac{\partial^2}{\partial x_i \partial x_j} + \sum_{1 \le i \le d} b_i(x)\,\frac{\partial}{\partial x_i}, \qquad \text{where } \sigma_{ij}(x) = \sum_{1 \le k \le d} a_{ik}(x)\,a_{jk}(x).$$
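The representation $u(t, x) = E[f(B_t + x)]$ turns the heat equation into a Monte Carlo computation. A minimal one-dimensional NumPy sketch (the test function $f(y) = \cos y$, for which $u(t, x) = e^{-t/2}\cos x$ is the exact bounded solution, and all parameters are illustrative assumptions):

    import numpy as np

    # u(t, x) = E f(B_t + x) solves u_t = (1/2) u_xx with u(0, x) = f(x), d = 1.
    rng = np.random.default_rng(9)
    f = lambda y: np.cos(y)                   # then u(t, x) = exp(-t/2) cos(x) exactly
    t, x = 0.7, 0.3
    mc = f(np.sqrt(t) * rng.standard_normal(1_000_000) + x).mean()
    print(mc, np.exp(-t / 2) * np.cos(x))     # Monte Carlo estimate vs the closed form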


More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

16.1. Signal Process Observation Process The Filtering Problem Change of Measure

16.1. Signal Process Observation Process The Filtering Problem Change of Measure 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral. They owe a great deal to Dan Crisan s Stochastic Calculus and

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

Stochastic Calculus (Lecture #3)

Stochastic Calculus (Lecture #3) Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Brownian Motion. Chapter Definition of Brownian motion

Brownian Motion. Chapter Definition of Brownian motion Chapter 5 Brownian Motion Brownian motion originated as a model proposed by Robert Brown in 1828 for the phenomenon of continual swarming motion of pollen grains suspended in water. In 1900, Bachelier

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

1.1 Definition of BM and its finite-dimensional distributions

1.1 Definition of BM and its finite-dimensional distributions 1 Brownian motion Brownian motion as a physical phenomenon was discovered by botanist Robert Brown as he observed a chaotic motion of particles suspended in water. The rigorous mathematical model of BM

More information

Martingale Theory for Finance

Martingale Theory for Finance Martingale Theory for Finance Tusheng Zhang October 27, 2015 1 Introduction 2 Probability spaces and σ-fields 3 Integration with respect to a probability measure. 4 Conditional expectation. 5 Martingales.

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Stochastic Calculus February 11, / 33

Stochastic Calculus February 11, / 33 Martingale Transform M n martingale with respect to F n, n =, 1, 2,... σ n F n (σ M) n = n 1 i= σ i(m i+1 M i ) is a Martingale E[(σ M) n F n 1 ] n 1 = E[ σ i (M i+1 M i ) F n 1 ] i= n 2 = σ i (M i+1 M

More information

Stochastic Integration.

Stochastic Integration. Chapter Stochastic Integration..1 Brownian Motion as a Martingale P is the Wiener measure on (Ω, B) where Ω = C, T B is the Borel σ-field on Ω. In addition we denote by B t the σ-field generated by x(s)

More information

MA8109 Stochastic Processes in Systems Theory Autumn 2013

MA8109 Stochastic Processes in Systems Theory Autumn 2013 Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Multiple points of the Brownian sheet in critical dimensions

Multiple points of the Brownian sheet in critical dimensions Multiple points of the Brownian sheet in critical dimensions Robert C. Dalang Ecole Polytechnique Fédérale de Lausanne Based on joint work with: Carl Mueller Multiple points of the Brownian sheet in critical

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Part III Advanced Probability

Part III Advanced Probability Part III Advanced Probability Based on lectures by M. Lis Notes taken by Dexter Chua Michaelmas 2017 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

Admin and Lecture 1: Recap of Measure Theory

Admin and Lecture 1: Recap of Measure Theory Admin and Lecture 1: Recap of Measure Theory David Aldous January 16, 2018 I don t use bcourses: Read web page (search Aldous 205B) Web page rather unorganized some topics done by Nike in 205A will post

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Stochastic Calculus. Alan Bain

Stochastic Calculus. Alan Bain Stochastic Calculus Alan Bain 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral and some of its applications. They

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Itô s formula. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Itô s formula Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Itô s formula Probability Theory

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

CONVERGENCE OF RANDOM SERIES AND MARTINGALES

CONVERGENCE OF RANDOM SERIES AND MARTINGALES CONVERGENCE OF RANDOM SERIES AND MARTINGALES WESLEY LEE Abstract. This paper is an introduction to probability from a measuretheoretic standpoint. After covering probability spaces, it delves into the

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University

Survival Analysis: Counting Process and Martingale. Lu Tian and Richard Olshen Stanford University Survival Analysis: Counting Process and Martingale Lu Tian and Richard Olshen Stanford University 1 Lebesgue-Stieltjes Integrals G( ) is a right-continuous step function having jumps at x 1, x 2,.. b f(x)dg(x)

More information

Brownian Motion. Chapter Stochastic Process

Brownian Motion. Chapter Stochastic Process Chapter 1 Brownian Motion 1.1 Stochastic Process A stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ,P and a real valued stochastic

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Ecole Polytechnique, France April 4, 218 Outline The Principal-Agent problem Formulation 1 The Principal-Agent problem

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

On the submartingale / supermartingale property of diffusions in natural scale

On the submartingale / supermartingale property of diffusions in natural scale On the submartingale / supermartingale property of diffusions in natural scale Alexander Gushchin Mikhail Urusov Mihail Zervos November 13, 214 Abstract Kotani 5 has characterised the martingale property

More information

Risk-Minimality and Orthogonality of Martingales

Risk-Minimality and Orthogonality of Martingales Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2

More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

Generalized Gaussian Bridges of Prediction-Invertible Processes

Generalized Gaussian Bridges of Prediction-Invertible Processes Generalized Gaussian Bridges of Prediction-Invertible Processes Tommi Sottinen 1 and Adil Yazigi University of Vaasa, Finland Modern Stochastics: Theory and Applications III September 1, 212, Kyiv, Ukraine

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Math Probability Theory and Stochastic Process

Math Probability Theory and Stochastic Process Math 288 - Probability Theory and Stochastic Process Taught by Horng-Tzer Yau Notes by Dongryul Kim Spring 217 This course was taught by Horng-Tzer Yau. The lectures were given at MWF 12-1 in Science Center

More information

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications

The multidimensional Ito Integral and the multidimensional Ito Formula. Eric Mu ller June 1, 2015 Seminar on Stochastic Geometry and its applications The multidimensional Ito Integral and the multidimensional Ito Formula Eric Mu ller June 1, 215 Seminar on Stochastic Geometry and its applications page 2 Seminar on Stochastic Geometry and its applications

More information