Math 735: Stochastic Analysis


1. Introduction and review
2. Notions of convergence
3. Continuous time stochastic processes
4. Information and conditional expectation
5. Martingales
6. Poisson process and Brownian motion
7. Stochastic integration
8. Covariation and Itô's formula
9. Stochastic differential equations
10. Diffusion processes
11. General Markov processes
12. Probability distributions on function spaces
13. Numerical schemes
14. Change of measure
15. Filtering
16. Finance
17. Technical lemmas
18. Appendix

1. Introduction

The basic concepts of probability:
Models of experiments
Sample space and events
Probability measures
Random variables
Closure properties of collections of random variables
The distribution of a random variable
Definition of the expectation
Properties of expectations
Jensen's inequality

Experiments

Probability models experiments in which repeated trials typically result in different outcomes. As a means of understanding the real world, probability identifies surprising regularities in highly irregular phenomena. If we roll a die a large number of times, we anticipate that about a sixth of the rolls show 5. If that doesn't happen, we suspect that something is wrong with the die or the way it was rolled.

Probabilities of events

Events are statements about the outcome of the experiment: {the roll is 6}, {the rat died}, {the television set is defective}. The anticipated regularity is that

P(A) ≈ (# times A occurs)/(# of trials).

This presumption is called the relative frequency interpretation of probability.

Definition of probability

The probability of an event A should be

P(A) = lim_{n→∞} (# times A occurs in first n trials)/n.

The mathematical problem: make sense out of this. The real world relationship: probabilities are predictions about the future.
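
As a numerical illustration of the limiting relative frequency, the following sketch (assuming NumPy is available; the sample size and seed are arbitrary choices) simulates rolls of a fair die and tracks the running frequency of the event A = {the roll is 5}, which should settle near 1/6.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rolls = rng.integers(1, 7, size=n)            # n rolls of a fair die
hits = (rolls == 5)

# running relative frequency of the event A = {the roll is 5}
freq = np.cumsum(hits) / np.arange(1, n + 1)
for m in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {m:>6}: relative frequency = {freq[m - 1]:.4f}")
# the printed frequencies approach 1/6 ≈ 0.1667 as n grows
```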

Random variables

In performing an experiment, numerical measurements or observations are made. Call these random variables since they vary randomly. Give the quantity a name: X. Then {X = a} and {a < X < b} are statements about the outcome of the experiment, that is, they are events.

The distribution of a random variable

If X_k is the value of X observed on the kth trial, then we should have

P{X ∈ A} = lim_{n→∞} #{k ≤ n : X_k ∈ A}/n.

This collection of probabilities determines the distribution of X.

The sample space

The possible outcomes of the experiment form a set Ω called the sample space. Each event (statement about the outcome) can be identified with the subset of the sample space for which the statement is true.

The collection of events

If

A = {ω ∈ Ω : statement I is true for ω}
B = {ω ∈ Ω : statement II is true for ω}

then

A ∩ B = {ω ∈ Ω : statement I and statement II are true for ω}
A ∪ B = {ω ∈ Ω : statement I or statement II is true for ω}
A^c = {ω ∈ Ω : statement I is not true for ω}.

Let F be the collection of events. Then A, B ∈ F should imply that A ∩ B, A ∪ B, and A^c are all in F. F is an algebra of subsets of Ω. In fact, we assume that F is a σ-algebra (closed under countable unions and complements).

The probability measure

Each event A ∈ F is assigned a probability P(A). From the relative frequency interpretation, we must have P(A ∪ B) = P(A) + P(B) for disjoint events A and B, and by induction, if A_1, ..., A_m are disjoint,

P(∪_{k=1}^m A_k) = Σ_{k=1}^m P(A_k)   (finite additivity).

In fact, we assume countable additivity: if A_1, A_2, ... are disjoint events, then

P(∪_{k=1}^∞ A_k) = Σ_{k=1}^∞ P(A_k),   P(Ω) = 1.

A probability space is a measure space

A measure space (M, M, µ) consists of a set M, a σ-algebra M of subsets of M, and a nonnegative function µ defined on M that satisfies µ(∅) = 0 and countable additivity. A probability space is a measure space (Ω, F, P) satisfying P(Ω) = 1.

Random variables

If X is a random variable, then we must know the value of X if we know the outcome ω ∈ Ω of the experiment. Consequently, X is a function defined on Ω. The statement {X ≤ c} must be an event, so {X ≤ c} = {ω : X(ω) ≤ c} ∈ F. In other words, X is a measurable function on (Ω, F, P). R(X) will denote the range of X,

R(X) = {x ∈ R : x = X(ω), ω ∈ Ω}.

Borel sets

Definition 1.1 The Borel subsets B(R) of R form the smallest σ-algebra of subsets of R containing (−∞, c] for all c ∈ R. The Borel subsets B(R^d) form the smallest σ-algebra of subsets of R^d containing the open subsets of R^d.

Note that every continuous function on R^d is Borel measurable.

Closure properties of the collection of random variables

Lemma 1.2 Let X_1, X_2, ... be R-valued random variables.
a) If f is a Borel measurable function on R^d, then Y = f(X_1, ..., X_d) is a random variable.
b) sup_n X_n, inf_n X_n, lim sup_n X_n, and lim inf_n X_n are random variables.

Distributions

Definition 1.3 The distribution of an R-valued random variable X is the Borel measure defined by µ_X(B) = P{X ∈ B}, B ∈ B(R). µ_X is called the measure induced by the function X.

Discrete distributions

Definition 1.4 A random variable X is discrete or has a discrete distribution if and only if R(X) is countable. If X is discrete, the distribution of X is determined by the probability mass function p_X(x) = P{X = x}, x ∈ R(X). Note that Σ_{x∈R(X)} P{X = x} = 1.

Examples

Binomial distribution: for some positive integer n and some 0 ≤ p ≤ 1,

P{X = k} = (n choose k) p^k (1−p)^(n−k),   k = 0, 1, ..., n.

Poisson distribution: for some λ > 0,

P{X = k} = e^{−λ} λ^k / k!,   k = 0, 1, ....

Absolutely continuous distributions

Definition 1.5 The distribution of X is absolutely continuous if and only if there exists a nonnegative function f_X such that

P{a < X ≤ b} = ∫_a^b f_X(x) dx,   a < b ∈ R.

Then f_X is the probability density function for X.

Examples

Normal distribution:

f_X(x) = (1/(σ√(2π))) e^{−(x−µ)^2/(2σ^2)}.

Exponential distribution:

f_X(x) = λe^{−λx} for x ≥ 0, and f_X(x) = 0 for x < 0.

Expectations

If X is discrete, then letting R(X) = {a_1, a_2, ...},

X = Σ_i a_i 1_{A_i}, where A_i = {X = a_i}.

If Σ_i |a_i| P(A_i) < ∞, then

E[X] = Σ_i a_i P{X = a_i} = Σ_i a_i P(A_i).

For general X, let Y_n = ⌊nX⌋/n and Z_n = ⌈nX⌉/n. Then Y_n ≤ X ≤ Z_n, so we must have E[Y_n] ≤ E[X] ≤ E[Z_n]. Specifically, if Σ_k |k| P{k < X ≤ k+1} < ∞, which is true if and only if E[|Y_n|] < ∞ and E[|Z_n|] < ∞ for all n (we will say that X is integrable), then define

E[X] ≡ lim_{n→∞} E[Y_n] = lim_{n→∞} E[Z_n].   (1.1)

Notation

E[X] = ∫_Ω X dP = ∫_Ω X(ω) P(dω).
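
The bracketing E[Y_n] ≤ E[X] ≤ E[Z_n] can be seen numerically. A Monte Carlo sketch (assuming NumPy; the exponential distribution and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=1.0, size=1_000_000)   # samples of X with E[X] = 1

for n in (1, 2, 4, 8, 16):
    Yn = np.floor(n * X) / n    # Y_n <= X
    Zn = np.ceil(n * X) / n     # X <= Z_n
    print(f"n={n:2d}  E[Y_n]≈{Yn.mean():.4f}  E[X]≈{X.mean():.4f}  E[Z_n]≈{Zn.mean():.4f}")
# E[Y_n] and E[Z_n] squeeze the common limit E[X] as n grows
```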

Properties

Lemma 1.6 (Monotonicity) If P{X ≤ Y} = 1 and X and Y are integrable, then E[X] ≤ E[Y].

Lemma 1.7 (Positivity) If P{X ≥ 0} = 1 and X is integrable, then E[X] ≥ 0.

Lemma 1.8 (Linearity) If X and Y are integrable and a, b ∈ R, then aX + bY is integrable and E[aX + bY] = aE[X] + bE[Y].

Jensen's inequality

Lemma 1.9 Let X be a random variable and ϕ : R → R be convex. If E[|X|] < ∞ and E[|ϕ(X)|] < ∞, then ϕ(E[X]) ≤ E[ϕ(X)].

Proof. If ϕ is convex, then for each x the right derivative

ϕ⁺(x) = lim_{y→x+} (ϕ(y) − ϕ(x))/(y − x)

exists and ϕ(y) ≥ ϕ(x) + ϕ⁺(x)(y − x). Setting µ = E[X],

E[ϕ(X)] ≥ E[ϕ(µ) + ϕ⁺(µ)(X − µ)] = ϕ(µ) + ϕ⁺(µ)E[X − µ] = ϕ(µ).

Consequences of countable additivity

P(A^c) = 1 − P(A).

If A ⊂ B, then P(A) ≤ P(B).

If A_1 ⊂ A_2 ⊂ ···, then P(∪_{k=1}^∞ A_k) = lim_{n→∞} P(A_n):

P(∪_{k=1}^∞ A_k) = P(∪_{k=1}^∞ (A_k ∩ A_{k−1}^c)) = Σ_{k=1}^∞ P(A_k ∩ A_{k−1}^c)
 = lim_{n→∞} Σ_{k=1}^n P(A_k ∩ A_{k−1}^c) = lim_{n→∞} P(A_n),

where A_0 = ∅. If A_1 ⊃ A_2 ⊃ ···, then P(∩_{k=1}^∞ A_k) = lim_{n→∞} P(A_n); note that

A_n = (∩_{k=1}^∞ A_k) ∪ (∪_{k=n}^∞ (A_k ∩ A_{k+1}^c)).

Expectations of nonnegative functions

If P{X ≥ 0} = 1 and Σ_{l=0}^∞ l P{l < X ≤ l+1} = ∞ or P{X = ∞} > 0, we will define E[X] = ∞. Note, however, that whenever I write E[X] I mean that E[X] is finite unless I explicitly allow E[X] = ∞.

2. Notions of convergence

Convergence of random variables
Convergence in probability
Bounded Convergence Theorem
Monotone Convergence Theorem
Fatou's lemma
Dominated Convergence Theorem
Linear spaces and norms
L^p spaces

Convergence of random variables

a) X_n → X a.s. iff P{ω : lim_{n→∞} X_n(ω) = X(ω)} = 1.
b) X_n → X in probability iff for each ε > 0, lim_{n→∞} P{|X_n − X| > ε} = 0.
c) X_n converges to X in distribution (denoted X_n ⇒ X) iff

lim_{n→∞} P{X_n ≤ x} = P{X ≤ x} ≡ F_X(x)

for all x at which F_X is continuous.

Relationship among notions of convergence

Theorem 2.1 a) implies b) implies c).

Proof. (a ⇒ b)

P{|X_n − X| > ε} ≤ P{sup_{m≥n} |X_m − X| > ε}

and

lim sup_{n→∞} P{|X_n − X| > ε} ≤ P(∩_n {sup_{m≥n} |X_m − X| > ε}) ≤ P{lim_{n→∞} X_n ≠ X} = 0.

(b ⇒ c) Let ε > 0. Then

P{X_n ≤ x} − P{X ≤ x + ε} ≤ P{X_n ≤ x, X > x + ε} ≤ P{|X_n − X| > ε},

and hence lim sup_{n→∞} P{X_n ≤ x} ≤ P{X ≤ x + ε}. Similarly, lim inf_{n→∞} P{X_n ≤ x} ≥ P{X ≤ x − ε}. Since ε is arbitrary, the implication follows.

Convergence in probability

Lemma 2.2
a) If X_n → X in probability and Y_n → Y in probability, then aX_n + bY_n → aX + bY in probability.
b) If Q : R → R is continuous and X_n → X in probability, then Q(X_n) → Q(X) in probability.
c) If X_n → X in probability and X_n − Y_n → 0 in probability, then Y_n → X in probability.

Remark 2.3 (b) and (c) hold with convergence in probability replaced by convergence in distribution; however, (a) is not in general true for convergence in distribution.

Bounded Convergence Theorem

Theorem 2.4 Suppose that X_n ⇒ X and that there exists a constant b such that P{|X_n| ≤ b} = 1. Then E[X_n] → E[X].

Proof. Let {x_i} be a partition of R such that F_X is continuous at each x_i. Then

Σ_i x_i P{x_i < X_n ≤ x_{i+1}} ≤ E[X_n] ≤ Σ_i x_{i+1} P{x_i < X_n ≤ x_{i+1}},

and taking limits we have

Σ_i x_i P{x_i < X ≤ x_{i+1}} ≤ lim inf_{n→∞} E[X_n] ≤ lim sup_{n→∞} E[X_n] ≤ Σ_i x_{i+1} P{x_i < X ≤ x_{i+1}}.

As max |x_{i+1} − x_i| → 0, the left and right sides converge to E[X], giving the theorem.

Convergence of bounded truncations

Lemma 2.5 Let X ∈ [0, ∞] a.s. (allowing P{X = ∞} > 0). Then lim_{M→∞} E[X ∧ M] = E[X].

Proof. Check the result first for X having a discrete distribution and then extend to general X by approximation.

Monotone Convergence Theorem

Theorem 2.6 Suppose 0 ≤ X_n ≤ X ∈ [0, ∞] and X_n → X in probability. Then lim_{n→∞} E[X_n] = E[X] (allowing ∞ = ∞).

Proof. For M > 0,

E[X] ≥ E[X_n] ≥ E[X_n ∧ M] → E[X ∧ M],

where the convergence on the right follows from the bounded convergence theorem. It follows that

E[X ∧ M] ≤ lim inf_{n→∞} E[X_n] ≤ lim sup_{n→∞} E[X_n] ≤ E[X],

and the result follows by Lemma 2.5.

Example

Lemma 2.7 Suppose that P{Y_k ≥ 0} = 1 and Σ_{k=1}^∞ E[Y_k] < ∞. Then Y ≡ Σ_{k=1}^∞ Y_k < ∞ a.s. and E[Y] = Σ_{k=1}^∞ E[Y_k].

Proof. By the monotone convergence theorem,

E[Y] = lim_{n→∞} E[Σ_{k=1}^n Y_k] = Σ_{k=1}^∞ E[Y_k] < ∞.

Since E[Y] < ∞, P{Y < ∞} = 1.

Fatou's lemma

Lemma 2.8 If X_n ≥ 0 and X_n ⇒ X, then lim inf E[X_n] ≥ E[X].

Proof. Since E[X_n] ≥ E[X_n ∧ M], we have

lim inf E[X_n] ≥ lim inf E[X_n ∧ M] = E[X ∧ M].

By the Monotone Convergence Theorem, E[X ∧ M] → E[X], and the lemma follows.

Dominated Convergence Theorem

Theorem 2.9 Assume X_n ⇒ X, Y_n ⇒ Y, |X_n| ≤ Y_n, and E[Y_n] → E[Y] < ∞. Then E[X_n] → E[X].

Proof. For simplicity, assume in addition that X_n + Y_n ⇒ X + Y and Y_n − X_n ⇒ Y − X (otherwise consider subsequences along which (X_n, Y_n) ⇒ (X, Y)). Then by Fatou's lemma, lim inf E[X_n + Y_n] ≥ E[X + Y] and lim inf E[Y_n − X_n] ≥ E[Y − X]. From these observations, lim inf E[X_n] + lim E[Y_n] ≥ E[X] + E[Y], and hence lim inf E[X_n] ≥ E[X]. Similarly, lim inf E[−X_n] ≥ E[−X] and lim sup E[X_n] ≤ E[X].

Markov inequality

Lemma 2.10 P{|X| > a} ≤ E[|X|]/a, a > 0.

Proof. Note that |X| ≥ a 1_{{|X|>a}}. Taking expectations proves the desired inequality.

Linear spaces

A set L is a real linear space if there is a notion of scalar multiplication (a, f) ∈ R × L → af ∈ L and addition (f, g) ∈ L × L → f + g ∈ L with the following properties:
1. Associativity: f + (g + h) = (f + g) + h
2. Commutativity: f + g = g + f
3. Existence of an identity: f + 0 = f
4. Existence of an inverse: f + (−f) = 0
5. Distributivity: a(f + g) = af + ag and (a + b)f = af + bf
6. Compatibility with multiplication in R: a(bf) = (ab)f
7. Scalar identity: 1f = f

Norms

Definition 2.11 ‖·‖ : L → [0, ∞) is a norm if
1. ‖af‖ = |a| ‖f‖
2. ‖f + g‖ ≤ ‖f‖ + ‖g‖ (triangle inequality)
3. ‖f‖ = 0 implies f = 0.

L^p spaces

For 1 ≤ p < ∞, L^p is the collection of random variables X with E[|X|^p] < ∞, and the L^p-norm is defined by ‖X‖_p = E[|X|^p]^{1/p}. L^∞ is the collection of random variables X such that P{|X| ≤ c} = 1 for some c < ∞, and

‖X‖_∞ = inf{c : P{|X| ≤ c} = 1}.

Properties of L^p norms

1) ‖X‖_p = 0 implies X = 0 a.s.
2) E[|XY|] ≤ ‖X‖_p ‖Y‖_q, where 1/p + 1/q = 1.
3) ‖X + Y‖_p ≤ ‖X‖_p + ‖Y‖_p.

Inequalities for p = q = 2

Schwarz inequality: Note that

0 ≤ E[(aX + bY)^2] = a^2 E[X^2] + 2ab E[XY] + b^2 E[Y^2].

Assume that E[XY] ≤ 0 (otherwise replace X by −X) and take a, b > 0. Then

−E[XY] ≤ (a/2b) E[X^2] + (b/2a) E[Y^2].

Take a = ‖Y‖_2 and b = ‖X‖_2.

Triangle inequality: We have

‖X + Y‖_2^2 = E[(X + Y)^2] = E[X^2] + 2E[XY] + E[Y^2]
 ≤ ‖X‖_2^2 + 2‖X‖_2 ‖Y‖_2 + ‖Y‖_2^2 = (‖X‖_2 + ‖Y‖_2)^2.

Norms determine metrics

It follows that r_p(X, Y) = ‖X − Y‖_p defines a metric on L^p, the space of random variables satisfying E[|X|^p] < ∞. (Note that we identify two random variables that differ on a set of probability zero.) A sequence in a metric space is Cauchy if

lim_{n,m→∞} r_p(X_n, X_m) = 0,

and a metric space is complete if every Cauchy sequence has a limit.

Completeness of L^p spaces

Lemma 2.12 For 1 ≤ p ≤ ∞, L^p is complete.

Proof. For example, in the case p = 1, suppose {X_n} is Cauchy and let n_k satisfy

sup_{m>n_k} ‖X_m − X_{n_k}‖_1 = sup_{m>n_k} E[|X_m − X_{n_k}|] ≤ 4^{−k}.

Then Σ_{k=1}^∞ E[|X_{n_{k+1}} − X_{n_k}|] < ∞, so by Lemma 2.7,

Y ≡ Σ_{k=1}^∞ |X_{n_{k+1}} − X_{n_k}| < ∞ a.s.,

and with probability one the series

X ≡ X_{n_1} + Σ_{k=1}^∞ (X_{n_{k+1}} − X_{n_k}) = lim_{k→∞} X_{n_k}

is absolutely convergent. It follows that |X − X_{n_k}| ≤ Y, and by the dominated convergence theorem and the Cauchy property,

lim_{k,m→∞} ‖X_m − X‖_1 ≤ lim_{k,m→∞} (‖X − X_{n_k}‖_1 + ‖X_{n_k} − X_m‖_1) = 0.

More on convergence in probability

Since by the Markov inequality

P{|X_n − X| ≥ ε} ≤ E[|X_n − X|^p]/ε^p,

convergence in L^p implies convergence in probability.

Lemma 2.13 Convergence in probability is metrizable by taking

ρ_0(X, Y) = inf{ε > 0 : P{|X − Y| ≥ ε} ≤ ε},

and the space of real-valued random variables with metric ρ_0 (sometimes denoted L^0) is complete.

3. Continuous time stochastic processes

Random variables in a function space
Properties of cadlag functions
Filtrations
Stopping times
Poisson process
Brownian motion

General assumption: (Ω, F, P) is a complete probability space.

Random variables in a function space

A continuous time stochastic process is a random function defined on the time interval [0, ∞). For each ω ∈ Ω, X(·, ω) is a real or vector-valued function (or more generally, E-valued for some complete, separable metric space E). We assume that all stochastic processes are cadlag, that is, for each ω ∈ Ω, X(·, ω) is a right continuous function with left limits at each t > 0. D_E[0, ∞) will denote the collection of cadlag E-valued functions on [0, ∞). D_E[0, ∞) is sometimes referred to as Skorohod space.

Properties of cadlag functions

Lemma 3.1 For each ε > 0, a cadlag function has at most finitely many discontinuities of magnitude greater than ε in any compact time interval, and hence at most countably many discontinuities in [0, ∞).

Proof. If there were infinitely many values of t ∈ [0, T] with r(x(t), x(t−)) > ε, this set, call it Γ_{ε,T}, would have a right or left limit point, destroying the cadlag property. The collection of all discontinuities is the union of Γ_{ε,T} over rational ε and T and hence is countable.

Cadlag processes are determined by countably many time points

If X is a cadlag process, then it is completely determined by the countable family of random variables {X(t) : t rational}. It is possible to define a metric on D_E[0, ∞) so that it becomes a complete, separable metric space. The distribution of an E-valued cadlag process is then defined by µ_X(B) = P{X(·) ∈ B} for B ∈ B(D_E[0, ∞)).

Process distribution determined by finite dimensional distributions

Theorem 3.2 Let X be an E-valued, cadlag process. Then µ_X on D_E[0, ∞) is determined by its finite dimensional distributions

{µ_{t_1,t_2,...,t_n} : 0 ≤ t_1 ≤ t_2 ≤ ··· ≤ t_n; n ≥ 1},

where µ_{t_1,t_2,...,t_n}(Γ) = P{(X(t_1), X(t_2), ..., X(t_n)) ∈ Γ}, Γ ∈ B(E^n).

Filtrations

Definition 3.3 A collection of σ-algebras {F_t}, satisfying F_s ⊂ F_t ⊂ F for all s ≤ t, is called a filtration. A stochastic process X is adapted to a filtration {F_t} if X(t) is F_t-measurable for all t ≥ 0. A filtration {F_t} is complete if F_0 contains all events of probability zero, and is right continuous if F_t = ∩_{s>t} F_s.

F_t is interpreted as corresponding to the information available at time t (the amount of information increasing as time progresses). If a process is adapted, then the state of the process at time t is part of the information available at time t.

The natural filtration corresponding to a process

Let X be a stochastic process. Then F_t^X = σ(X(s) : s ≤ t) denotes the smallest σ-algebra such that X(s) is F_t^X-measurable for all s ≤ t. {F_t^X} is called the natural filtration generated by X. Sometimes the term natural filtration is used for the right continuous completion of {F_t^X}. We will denote the right continuous completion of {F_t^X} by {F̄_t^X}, that is, assuming (Ω, F, P) is complete,

F̄_t^X = ∩_{s>t} (σ(N) ∨ F_s^X),

where σ(N) denotes the σ-algebra generated by the null sets in F.

Classes of stochastic processes

An E-valued stochastic process X adapted to {F_t} is a Markov process with respect to {F_t} if

E[f(X(t+s)) | F_t] = E[f(X(t+s)) | X(t)]

for all t, s ≥ 0 and f ∈ B(E), the bounded, measurable functions on E.

A real-valued stochastic process X adapted to {F_t} is a martingale with respect to {F_t} if

E[X(t+s) | F_t] = X(t)   (3.1)

for all t, s ≥ 0.

Stopping times

Definition 3.4 A random variable τ with values in [0, ∞] is an {F_t}-stopping time if {τ ≤ t} ∈ F_t for all t ≥ 0.

Lemma 3.5 Let X be a cadlag stochastic process that is {F_t}-adapted. If K is closed, then τ_K = inf{t : X(t) ∈ K or X(t−) ∈ K} is a stopping time.

Proof.

{τ_K ≤ t} = {X(t) ∈ K} ∪ (∩_n ∪_{s<t, s∈Q} {X(s) ∈ K^{1/n}}),

where K^ε = {x : inf_{y∈K} |x − y| < ε}.

In general, for B ∈ B(R), τ_B = inf{t : X(t) ∈ B} is not a stopping time; however, if (Ω, F, P) is complete and the filtration {F_t} is complete and right continuous, then for any B ∈ B(R), τ_B is a stopping time.

Closure properties of the collection of stopping times

If τ, τ_1, τ_2, ... are stopping times and c ≥ 0 is a constant, then
1) τ_1 ∧ τ_2 and τ_1 ∨ τ_2 are stopping times.
2) τ + c, τ ∨ c, and τ ∧ c are stopping times.
3) sup_k τ_k is a stopping time.
4) If {F_t} is right continuous, then inf_k τ_k, lim inf_{k→∞} τ_k, and lim sup_{k→∞} τ_k are stopping times.

Discrete approximation of stopping times

Lemma 3.6 Let τ be a stopping time and for n = 1, 2, ..., define

τ_n = (k+1)/2^n   if k/2^n ≤ τ < (k+1)/2^n,   k = 0, 1, ....

Then {τ_n} is a decreasing sequence of stopping times converging to τ.

Proof. Observe that

{τ_n ≤ t} = {τ_n ≤ ⌊2^n t⌋/2^n} = {τ < ⌊2^n t⌋/2^n} ∈ F_t.
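
The dyadic approximation in Lemma 3.6 is easy to compute. A small sketch (assuming NumPy; the sampled "stopping times" are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
tau = rng.exponential(scale=1.0, size=5)          # sample values standing in for a stopping time

def tau_n(tau, n):
    # tau_n = (k+1)/2^n when k/2^n <= tau < (k+1)/2^n
    return (np.floor(tau * 2**n) + 1) / 2**n

for n in (1, 2, 4, 8, 16):
    print(f"n={n:2d}", np.round(tau_n(tau, n), 6))
print("tau  ", np.round(tau, 6))
# each row dominates the next and decreases to tau as n increases
```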

Information at a stopping time

Definition 3.7 For a stopping time τ, define

F_τ = {A ∈ F : A ∩ {τ ≤ t} ∈ F_t, t ≥ 0}.

Then F_τ is a σ-algebra and is interpreted as representing the information available to an observer at the random time τ. Occasionally, one also uses

F_{τ−} = σ{A ∩ {t < τ} : A ∈ F_t, t ≥ 0} ∨ F_0.

Properties of the information σ-algebras

Lemma 3.8 If τ_1 and τ_2 are stopping times and τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.

Proof. Let A ∈ F_{τ_1}. Then

A ∩ {τ_2 ≤ t} = (A ∩ {τ_1 ≤ t}) ∩ {τ_2 ≤ t} ∈ F_t,

and hence A ∈ F_{τ_2}.

Lemma 3.9 τ is F_τ-measurable.

Lemma 3.10 If X is cadlag and {F_t}-adapted and τ is a stopping time, then X(τ) is F_τ-measurable and X(τ ∧ ·) is {F_t}-adapted.

Proof. Let τ_n be as in Lemma 3.6. Then

{X(τ_n ∧ t) ≤ c} = ∪_k ({X((k/2^n) ∧ t) ≤ c} ∩ {τ_n ∧ t = (k/2^n) ∧ t}) ∈ F_t,

and X(τ_n ∧ t) is F_t-measurable. By the right continuity of X,

lim_{n→∞} X(τ_n ∧ t) = X(τ ∧ t),

and X(τ ∧ t) is F_t-measurable. To see that X(τ) is F_τ-measurable, note that

{X(τ) ≤ c} ∩ {τ ≤ t} = ({X(t) ≤ c} ∩ {τ = t}) ∪ ({X(τ ∧ t) ≤ c} ∩ {τ < t}) ∈ F_t.

4. Information and conditional expectation

Information
Independence
Conditional expectation
Properties of conditional expectations
Jensen's inequality
Functions of known and unknown random variables
Convergence of conditional expectations

Information

Information obtained by observations of the outcome of a random experiment is represented by a sub-σ-algebra D of the collection of events F. If D ∈ D, then the observer knows whether or not the outcome is in D.

Independence

Two σ-algebras D_1, D_2 are independent if

P(D_1 ∩ D_2) = P(D_1)P(D_2),   D_1 ∈ D_1, D_2 ∈ D_2.

An S-valued random variable Y is independent of a σ-algebra D if

P({Y ∈ B} ∩ D) = P{Y ∈ B}P(D),   B ∈ B(S), D ∈ D.

Random variables X and Y are independent if σ(X) and σ(Y) are independent, that is, if

P({X ∈ B_1} ∩ {Y ∈ B_2}) = P{X ∈ B_1}P{Y ∈ B_2}.

Conditional expectation

Interpretation of conditional expectation in L^2.

Problem: Approximate X ∈ L^2 using the information represented by D so that the mean square error is minimized, i.e., find the D-measurable random variable Y that minimizes E[(X − Y)^2].

Solution: Suppose Y is a minimizer. For any ε and any D-measurable random variable Z ∈ L^2,

E[|X − Y|^2] ≤ E[|X − Y − εZ|^2] = E[|X − Y|^2] − 2εE[Z(X − Y)] + ε^2 E[Z^2].

Hence 2εE[Z(X − Y)] ≤ ε^2 E[Z^2]. Since ε is arbitrary, E[Z(X − Y)] = 0 and hence

E[ZX] = E[ZY]   (4.1)

for every D-measurable Z with E[Z^2] < ∞.

Definition of conditional expectation

Let X be an integrable random variable (that is, E[|X|] < ∞). The conditional expectation of X, denoted E[X|D], is the unique (up to changes on events of probability zero) random variable Y satisfying
a) Y is D-measurable.
b) ∫_D X dP = ∫_D Y dP for all D ∈ D. (Here ∫_D X dP ≡ E[1_D X].)

Existence is discussed in the Appendix. Condition (b) implies that (4.1) holds for all bounded D-measurable random variables.

Verifying Condition (b)

Lemma 4.1 Let C ⊂ F be a collection of events such that Ω ∈ C and C is closed under intersections, that is, if D_1, D_2 ∈ C, then D_1 ∩ D_2 ∈ C. If X and Y are integrable and

∫_D X dP = ∫_D Y dP   (4.2)

for all D ∈ C, then (4.2) holds for all D ∈ σ(C) (the smallest σ-algebra containing C).

Proof. The lemma follows by the Dynkin class theorem.

Discrete case

Assume that D = σ(D_1, D_2, ...), where ∪_{i=1}^∞ D_i = Ω and D_i ∩ D_j = ∅ whenever i ≠ j. Let X be any F-measurable random variable. Then

E[X|D] = Σ_{i=1}^∞ (E[X 1_{D_i}]/P(D_i)) 1_{D_i}.

a) The right hand side is D-measurable.
b) Any D ∈ D can be written as D = ∪_{i∈A} D_i, where A ⊂ {1, 2, 3, ...}. Therefore,

∫_D Σ_{i=1}^∞ (E[X 1_{D_i}]/P(D_i)) 1_{D_i} dP = Σ_{i=1}^∞ (E[X 1_{D_i}]/P(D_i)) ∫_D 1_{D_i} dP   (monotone convergence theorem)
 = Σ_{i∈A} (E[X 1_{D_i}]/P(D_i)) P(D_i) = ∫_D X dP.
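
On each cell D_i of the partition, the formula above says that E[X|D] equals the average of X over D_i. A Monte Carlo sketch of this (assuming NumPy; the particular X, partition, and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
U = rng.uniform(size=N)                          # drives the partition
X = U**2 + rng.normal(scale=0.1, size=N)

# partition cells D_i = {(i-1)/4 <= U < i/4}, i = 1,...,4
labels = np.minimum((U * 4).astype(int), 3)

# E[X | D] takes the constant value E[X 1_{D_i}] / P(D_i) on each D_i,
# estimated here by the sample mean of X over the cell
cond_exp = np.array([X[labels == i].mean() for i in range(4)])
print("E[X | D] on the four cells:", np.round(cond_exp, 4))
print("E[E[X | D]] =", round(float(np.mean(cond_exp[labels])), 4), "  E[X] =", round(float(X.mean()), 4))
# the last two numbers agree, illustrating property (1) below: E[E[X|D]] = E[X]
```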

Properties of conditional expectation

Assume that X and Y are integrable random variables and that D is a sub-σ-algebra of F.

1) E[E[X|D]] = E[X]. Just take D = Ω in condition (b).

2) If X ≥ 0, then E[X|D] ≥ 0. The property holds because Y = E[X|D] is D-measurable and ∫_D Y dP = ∫_D X dP ≥ 0 for every D ∈ D. Therefore, Y must be nonnegative a.s.

3) E[aX + bY|D] = aE[X|D] + bE[Y|D]. It is obvious that the right hand side is D-measurable, being the linear combination of two D-measurable random variables. Also,

∫_D (aX + bY) dP = a∫_D X dP + b∫_D Y dP
 = a∫_D E[X|D] dP + b∫_D E[Y|D] dP
 = ∫_D (aE[X|D] + bE[Y|D]) dP.

4) If X ≤ Y, then E[X|D] ≤ E[Y|D]. Use properties (2) and (3) for Z = Y − X.

5) If X is D-measurable, then E[X|D] = X.

6) If Y is D-measurable and YX is integrable, then E[YX|D] = YE[X|D]. First assume that Y is a simple random variable, i.e., let {D_i}_{i=1}^∞ be a partition of Ω with D_i ∈ D, let c_i ∈ R for i ≥ 1, and define Y = Σ_{i=1}^∞ c_i 1_{D_i}. Then

∫_D YX dP = Σ_{i=1}^∞ c_i ∫_{D∩D_i} X dP = Σ_{i=1}^∞ c_i ∫_{D∩D_i} E[X|D] dP
 = ∫_D (Σ_{i=1}^∞ c_i 1_{D_i}) E[X|D] dP = ∫_D Y E[X|D] dP.

For general Y, approximate by a sequence {Y_n}_{n=1}^∞ of simple random variables, for example, defined by Y_n = k/n if k/n ≤ Y < (k+1)/n, k ∈ Z. Then Y_n converges to Y, and the result follows by the dominated convergence theorem.

7) If X is independent of D, then E[X|D] = E[X]. Independence implies that for D ∈ D, E[X1_D] = E[X]P(D), so

∫_D X dP = E[X1_D] = E[X] ∫_Ω 1_D dP = ∫_D E[X] dP.

Since E[X] is D-measurable, E[X] = E[X|D].

8) If D_1 ⊂ D_2, then E[E[X|D_2]|D_1] = E[X|D_1]. Note that if D ∈ D_1, then D ∈ D_2. Therefore,

∫_D X dP = ∫_D E[X|D_2] dP = ∫_D E[E[X|D_2]|D_1] dP.

Convex functions

A function φ : R → R is convex if and only if for all x and y in R and λ in [0, 1],

φ(λx + (1−λ)y) ≤ λφ(x) + (1−λ)φ(y).

Let x_1 < x_2 and y ∈ R. Then

(φ(x_2) − φ(y))/(x_2 − y) ≥ (φ(x_1) − φ(y))/(x_1 − y).   (4.3)

Now assume that x_1 < y < x_2 and let x_2 converge to y from above. The left side of (4.3) is bounded below, and its value decreases as x_2 decreases to y. Therefore, the right derivative φ^+ exists at y and

−∞ < φ^+(y) = lim_{x_2→y+} (φ(x_2) − φ(y))/(x_2 − y) < +∞.

Moreover,

φ(x) ≥ φ(y) + φ^+(y)(x − y),   x ∈ R.   (4.4)

Jensen's Inequality

Lemma 4.2 If φ is convex, then E[φ(X)|D] ≥ φ(E[X|D]).

Proof. Define M : Ω → R as M = φ^+(E[X|D]). From (4.4), φ(X) ≥ φ(E[X|D]) + M(X − E[X|D]), and

E[φ(X)|D] ≥ E[φ(E[X|D])|D] + E[M(X − E[X|D])|D]
 = φ(E[X|D]) + M E[(X − E[X|D])|D]
 = φ(E[X|D]) + M{E[X|D] − E[E[X|D]|D]}
 = φ(E[X|D]) + M{E[X|D] − E[X|D]}
 = φ(E[X|D]).

Functions of known and unknown random variables

Lemma 4.3 Let X be an S_1-valued, D-measurable random variable and Y be an S_2-valued random variable independent of D. Suppose that ϕ : S_1 × S_2 → R is a Borel measurable function and that ϕ(X, Y) is integrable. Define ψ(x) = E[ϕ(x, Y)]. Then E[ϕ(X, Y)|D] = ψ(X).

Proof. For C ∈ B(S_1 × S_2), define ψ_C(x) = E[1_C(x, Y)]. ψ_C(X) is D-measurable as X is. For D ∈ D, define µ(C) = E[1_D 1_C(X, Y)] and ν(C) = E[1_D ψ_C(X)]. (µ and ν are measures by the monotone convergence theorem.) If A ∈ B(S_1) and B ∈ B(S_2),

µ(A × B) = E[1_D 1_A(X) 1_B(Y)] = E[1_D 1_A(X)] E[1_B(Y)] = E[1_D 1_A(X) E[1_B(Y)]] = ν(A × B),

and µ = ν by Lemma 17.3, giving the lemma for ϕ = 1_C, C ∈ B(S_1 × S_2). For general ϕ, approximate by simple functions.

More general version

Lemma 4.4 Let Y be an S_2-valued random variable (not necessarily independent of D). Suppose that ϕ : S_1 × S_2 → R is a bounded measurable function. Then there exists a measurable ψ : Ω × S_1 → R such that for each x ∈ S_1,

ψ(ω, x) = E[ϕ(x, Y)|D](ω) a.s.

and

E[ϕ(X, Y)|D](ω) = ψ(ω, X(ω)) a.s.

for every D-measurable random variable X.

Example

Let Y : Ω → N be independent of the i.i.d. random variables {X_i}_{i=1}^∞. Then

E[Σ_{i=1}^Y X_i | σ(Y)] = Y E[X_1].   (4.5)

Identity (4.5) follows by taking ϕ(y, x_1, x_2, ...) = Σ_{i=1}^y x_i, so that ϕ(Y, X_1, X_2, ...)(ω) = Σ_{i=1}^{Y(ω)} X_i(ω), and noting that ψ(y) = E[Σ_{i=1}^y X_i] = y E[X_1].
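
Taking expectations in (4.5) gives E[Σ_{i=1}^Y X_i] = E[Y]E[X_1], which is easy to check by simulation. A sketch (assuming NumPy; the Poisson/normal choices and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 100_000
mu_Y, mu_X = 3.0, 0.5

Y = rng.poisson(mu_Y, size=trials)                               # N-valued, independent of the X_i
S = np.array([rng.normal(mu_X, 1.0, size=y).sum() for y in Y])   # S = sum_{i=1}^{Y} X_i

print("E[S]       ≈", round(float(S.mean()), 4))
print("E[Y] E[X1] =", mu_Y * mu_X)
# the two numbers agree up to Monte Carlo error
```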

Convergence of conditional expectations

Since

E[|E[X|D] − E[Y|D]|^p] = E[|E[X − Y|D]|^p]   (using linearity)
 ≤ E[E[|X − Y|^p |D]]   (using Jensen's inequality)
 = E[|X − Y|^p],

we have

Lemma 4.5 Let {X_n} be a sequence of random variables and p ≥ 1. If lim_{n→∞} E[|X − X_n|^p] = 0, then lim_{n→∞} E[|E[X|D] − E[X_n|D]|^p] = 0.

5. Martingales

Definitions
Optional sampling theorem
Doob's inequalities
Local martingales
Quadratic variation
Martingale convergence theorem

Definitions

A stochastic process X adapted to a filtration {F_t} is a martingale with respect to {F_t} if

E[X(t+s)|F_t] = X(t)   (5.1)

for all t, s ≥ 0. It is a submartingale if

E[X(t+s)|F_t] ≥ X(t)   (5.2)

and a supermartingale if

E[X(t+s)|F_t] ≤ X(t).   (5.3)

Optional sampling theorem

Theorem 5.1 Let X be a martingale and τ_1, τ_2 be stopping times. Then for every t ≥ 0,

E[X(t ∧ τ_2)|F_{τ_1}] = X(t ∧ τ_1 ∧ τ_2).

If τ_2 < ∞ a.s., E[|X(τ_2)|] < ∞, and lim_{t→∞} E[|X(t)| 1_{{τ_2>t}}] = 0, then

E[X(τ_2)|F_{τ_1}] = X(τ_1 ∧ τ_2).

The same results hold for sub- and supermartingales with = replaced by ≥ (submartingales) and ≤ (supermartingales).

Proof. See, for example, Ethier and Kurtz [2].

Doob's inequalities

Theorem 5.2 If X is a nonnegative submartingale, then

P{sup_{s≤t} X(s) ≥ x} ≤ E[X(t)]/x

and for α > 1,

E[sup_{s≤t} X(s)^α] ≤ (α/(α−1))^α E[X(t)^α].

Proof. Let τ_x = inf{t : X(t) ≥ x} and set τ_2 = t and τ_1 = τ_x. Then from the optional sampling theorem we have

E[X(t)|F_{τ_x}] ≥ X(t ∧ τ_x) ≥ 1_{{τ_x≤t}} X(τ_x) ≥ x 1_{{τ_x≤t}} a.s.,

so we have

E[X(t)] ≥ x P{τ_x ≤ t} = x P{sup_{s≤t} X(s) ≥ x}.

See Ethier and Kurtz [2] for the second inequality.

Convex transformations

Lemma 5.3 If M is a martingale and ϕ is convex with E[|ϕ(M(t))|] < ∞, then X(t) ≡ ϕ(M(t)) is a submartingale.

Proof. By Jensen's inequality,

E[ϕ(M(t+s))|F_t] ≥ ϕ(E[M(t+s)|F_t]) = ϕ(M(t)).

From the above lemma, it follows that if M is a martingale, then

P{sup_{s≤t} |M(s)| ≥ x} ≤ E[|M(t)|]/x   (5.4)

and

E[sup_{s≤t} M(s)^2] ≤ 4E[M(t)^2].   (5.5)

Local martingales

M is a local martingale if there exists a sequence of stopping times {τ_n} such that lim_{n→∞} τ_n = ∞ a.s. and for each n, M^{τ_n} ≡ M(· ∧ τ_n) is a martingale. {τ_n} is called a localizing sequence for M.

The total variation of Y up to time t is defined as

T_t(Y) ≡ sup Σ |Y(t_{i+1}) − Y(t_i)|,

where the supremum is over all partitions of the interval [0, t]. Y is an FV-process if T_t(Y) < ∞ for each t > 0.

Fundamental theorem of local martingales

Theorem 5.4 Let M be a local martingale, and let δ > 0. Then there exist local martingales M̃ and A satisfying M = M̃ + A such that A is FV and the discontinuities of M̃ are bounded by δ.

Proof. See Protter [3], Theorem III.13.

One consequence of this theorem is that any local martingale can be decomposed into an FV process and a local square integrable martingale. Specifically, if γ_c = inf{t : |M̃(t)| ≥ c}, then M̃(· ∧ γ_c) is a square integrable martingale. (Note that |M̃(· ∧ γ_c)| ≤ c + δ.)

Quadratic variation

The quadratic variation of a process Y is defined as

[Y]_t = lim_{max|t_{i+1}−t_i|→0} Σ (Y(t_{i+1}) − Y(t_i))^2,

where convergence is in probability. That is, the limit exists if for every ε > 0 there exists a δ > 0 such that for every partition {t_i} of the interval [0, t] satisfying max|t_{i+1} − t_i| ≤ δ,

P{|[Y]_t − Σ (Y(t_{i+1}) − Y(t_i))^2| ≥ ε} ≤ ε.

Quadratic variation for FV processes

Lemma 5.5 If Y is FV, then

[Y]_t = Σ_{s≤t} (Y(s) − Y(s−))^2 = Σ_{s≤t} ΔY(s)^2,

where the summation is over the points of discontinuity and ΔY(s) ≡ Y(s) − Y(s−) is the jump in Y at time s.

For any partition of [0, t],

|Σ (Y(t_{i+1}) − Y(t_i))^2 − Σ_{|Y(t_{i+1})−Y(t_i)|>ε} (Y(t_{i+1}) − Y(t_i))^2| ≤ ε T_t(Y).

Quadratic variation of martingales

Proposition 5.6
a) If M is a local martingale, then [M]_t exists and is right continuous.
b) If M is a square integrable martingale, then the limit

lim_{max|t_{i+1}−t_i|→0} Σ (M(t_{i+1}) − M(t_i))^2

exists in L^1, and if M(0) = 0, E[M(t)^2] = E[[M]_t].

Proof. See, for example, Ethier and Kurtz [2].

Square integrable martingales

Lemma 5.7 If M is a square integrable martingale with M(0) = 0, then E[M(t)^2] = E[[M]_t].

Proof. Write M(t) = Σ_{i=0}^{m−1} (M(t_{i+1}) − M(t_i)), 0 = t_0 < ··· < t_m = t. Then

E[M(t)^2] = E[(Σ_{i=0}^{m−1} (M(t_{i+1}) − M(t_i)))^2]   (5.6)
 = E[Σ_{i=0}^{m−1} (M(t_{i+1}) − M(t_i))^2 + Σ_{i≠j} (M(t_{i+1}) − M(t_i))(M(t_{j+1}) − M(t_j))].

For t_i < t_{i+1} ≤ t_j < t_{j+1},

E[(M(t_{i+1}) − M(t_i))(M(t_{j+1}) − M(t_j))]   (5.7)
 = E[E[(M(t_{i+1}) − M(t_i))(M(t_{j+1}) − M(t_j))|F_{t_j}]]
 = E[(M(t_{i+1}) − M(t_i))(E[M(t_{j+1})|F_{t_j}] − M(t_j))] = 0.

The lemma follows by the L^1 convergence in Proposition 5.6.

Examples

If M(t) = N(t) − λt, where N is a Poisson process with parameter λ, then [M]_t = N(t), and since M is square integrable, the limit defining [M]_t exists in L^1.

For standard Brownian motion W, [W]_t = t. To check this identity, apply the law of large numbers to

Σ_{k=1}^{⌊nt⌋} (W(k/n) − W((k−1)/n))^2.
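
The identity [W]_t = t can also be seen numerically: simulate one Brownian path on a fine grid and sum squared increments over partitions of shrinking mesh. A sketch (assuming NumPy; grid size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 1.0, 2**18
dW = rng.normal(0.0, np.sqrt(t / n), size=n)   # Brownian increments on a grid of mesh t/n
W = np.concatenate(([0.0], np.cumsum(dW)))     # path W(0), W(t/n), ..., W(t)

for step in (2**10, 2**6, 2**2, 1):            # successively finer partitions of [0, t]
    incr = np.diff(W[::step])
    qv = np.sum(incr**2)                       # sum of (W(t_{i+1}) - W(t_i))^2
    print(f"mesh = {step * t / n:.1e}   sum of squared increments ≈ {qv:.4f}")
# as the mesh shrinks, the sums approach [W]_t = t = 1
```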

Martingale properties

Proposition 5.8 If M is a square integrable {F_t}-martingale, then M(t)^2 − [M]_t is an {F_t}-martingale. In particular, if W is standard Brownian motion, then W(t)^2 − t is a martingale.

Proof. For t, s ≥ 0, let {u_i} be a partition of [t, t+s]. Since E[(M(u_{j+1}) − M(u_j))(M(u_{i+1}) − M(u_i))|F_t] = 0 for i ≠ j, by the L^1 convergence

E[M(t+s)^2 |F_t] = E[(M(t+s) − M(t))^2 |F_t] + M(t)^2
 = E[(Σ_i (M(u_{i+1}) − M(u_i)))^2 |F_t] + M(t)^2
 = E[Σ_i (M(u_{i+1}) − M(u_i))^2 |F_t] + M(t)^2
 = E[[M]_{t+s} − [M]_t |F_t] + M(t)^2.

Martingale convergence theorem

Theorem 5.9 Let X be a submartingale satisfying sup_t E[|X(t)|] < ∞. Then lim_{t→∞} X(t) exists a.s.

Proof. See, for example, Durrett [1].

6. Poisson process and Brownian motion

Poisson process
Basic assumptions
The Poisson process as a renewal process
Brownian motion

Poisson process

A Poisson process is a model for a series of random observations occurring in time. For example, the process could model the arrivals of customers in a bank, the arrivals of telephone calls at a switch, or the counts registered by radiation detection equipment.

Let N(t) denote the number of observations by time t. We assume that N is a counting process, that is, the observations come one at a time, so N is constant except for jumps of +1. For t < s, N(s) − N(t) is the number of observations in the time interval (t, s].

Basic assumptions

1) The observations occur one at a time.
2) Numbers of observations in disjoint time intervals are independent random variables, that is, N has independent increments.
3) The distribution of N(t+a) − N(t) does not depend on t.

Theorem 6.1 Under assumptions 1), 2), and 3), there is a constant λ such that, for t < s, N(s) − N(t) is Poisson distributed with parameter λ(s−t), that is,

P{N(s) − N(t) = k} = (λ(s−t))^k e^{−λ(s−t)} / k!.

If Theorem 6.1 holds, then we refer to N as a Poisson process with parameter λ. If λ = 1, we will call N the unit Poisson process.

Time inhomogeneous Poisson processes

Lemma 6.2 If 1) and 2) hold and Λ(t) = E[N(t)] is continuous and Λ(0) = 0, then N(t) = Y(Λ(t)), where Y is a unit Poisson process.

Jump times

Let N be a Poisson process with parameter λ, and let S_k be the time of the kth observation. Then

P{S_k ≤ t} = P{N(t) ≥ k} = 1 − Σ_{i=0}^{k−1} (λt)^i e^{−λt} / i!,   t ≥ 0.

Differentiating to obtain the probability density function gives

f_{S_k}(t) = λ(λt)^{k−1} e^{−λt} / (k−1)! for t ≥ 0, and f_{S_k}(t) = 0 for t < 0.

The Poisson process as a renewal process

The Poisson process can also be viewed as the renewal process based on a sequence of exponentially distributed random variables.

Theorem 6.3 Let T_1 = S_1 and for k > 1, T_k = S_k − S_{k−1}. Then T_1, T_2, ... are independent and exponentially distributed with parameter λ.
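
Theorem 6.3 gives the standard way to simulate a Poisson process: add up independent exponential interarrival times. The sketch below (assuming NumPy; λ, t, and the number of paths are arbitrary choices) does this and compares the empirical distribution of N(t) with the Poisson(λt) probabilities from Theorem 6.1.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(6)
lam, t, paths = 2.0, 3.0, 100_000

def N_t():
    # N(t) = number of renewal times S_k = T_1 + ... + T_k falling in [0, t]
    s, count = 0.0, 0
    while True:
        s += rng.exponential(1.0 / lam)        # T_k ~ Exp(lambda)
        if s > t:
            return count
        count += 1

counts = np.array([N_t() for _ in range(paths)])
for k in range(4):
    emp = np.mean(counts == k)
    pois = exp(-lam * t) * (lam * t)**k / factorial(k)
    print(f"P{{N(t)={k}}}: empirical {emp:.4f}   Poisson {pois:.4f}")
```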

Gaussian distributions

(ξ_1, ..., ξ_d) has a Gaussian distribution on R^d if Σ_{k=1}^d a_k ξ_k is a real-valued Gaussian random variable for each (a_1, ..., a_d) ∈ R^d. Recall that the density function of a Gaussian (normal) random variable with expectation µ and variance σ^2 is given by

f_{µ,σ}(x) = (1/√(2πσ^2)) exp{−(x−µ)^2/(2σ^2)}.

Lemma 6.4 If (ξ_1, ..., ξ_d) is Gaussian distributed, then the joint distribution is determined by µ_i = E[ξ_i] and Cov(ξ_i, ξ_j), i, j = 1, ..., d.

Brownian motion

Standard Brownian motion W is a Gaussian process with E[W(t)] = 0 and Cov(W(t), W(s)) = t ∧ s. Equivalently, standard Brownian motion is a Gaussian process with mean zero and stationary, independent increments satisfying Var(W(t+s) − W(t)) = s.
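
The covariance Cov(W(t), W(s)) = t ∧ s can be checked directly from paths built out of independent Gaussian increments. A sketch (assuming NumPy; grid, times, and number of paths are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
paths, n, T = 50_000, 200, 1.0
dt = T / n

# build many Brownian paths from independent N(0, dt) increments
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)                      # column i holds W((i+1) dt)

s_idx, t_idx = 49, 149                         # s = 0.25, t = 0.75
s, t = (s_idx + 1) * dt, (t_idx + 1) * dt
cov = np.mean(W[:, s_idx] * W[:, t_idx])       # E[W(s)W(t)], since E[W] = 0
print(f"empirical Cov(W({s:.2f}), W({t:.2f})) = {cov:.4f},   s ∧ t = {min(s, t):.2f}")
```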

Properties of Brownian motion

Proposition 6.5 Standard Brownian motion W is both a martingale and a Markov process.

Proof. Let F_t = σ(W(s) : s ≤ t). Then

E[W(t+s)|F_t] = E[W(t+s) − W(t) + W(t)|F_t]
 = E[W(t+s) − W(t)|F_t] + E[W(t)|F_t]
 = E[W(t+s) − W(t)] + E[W(t)|F_t]
 = E[W(t)|F_t] = W(t).

Define T(s)f(x) = E[f(x + W(s))], and note that

E[f(W(t+s))|F_t] = E[f(W(t+s) − W(t) + W(t))|F_t] = T(s)f(W(t)) = E[f(W(t+s))|W(t)].

7. Stochastic integrals

Definition
Existence for finite variation processes
Existence for square integrable martingales
L^2 isometry
General existence result
Semimartingales
Approximation of stochastic integrals
Change of integrator
Change of time variable
Other definitions

Stochastic integrals for cadlag processes

Let X and Y be cadlag processes, and let {t_i} denote a partition of the interval [0, t]. If the limit as max|t_{i+1} − t_i| → 0 exists in probability, define

∫_0^t X(s−) dY(s) ≡ lim Σ X(t_i)(Y(t_{i+1}) − Y(t_i)).   (7.1)

For W, standard Brownian motion,

∫_0^t W(s) dW(s) = lim Σ W(t_i)(W(t_{i+1}) − W(t_i))   (7.2)
 = lim Σ [(W(t_i)W(t_{i+1}) − (1/2)W(t_{i+1})^2 − (1/2)W(t_i)^2) + (1/2)(W(t_{i+1})^2 − W(t_i)^2)]
 = (1/2)W(t)^2 − lim (1/2) Σ (W(t_{i+1}) − W(t_i))^2
 = (1/2)W(t)^2 − (1/2)t.
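
The calculation (7.2), and the right-endpoint version on the next slide, can be reproduced numerically by forming the Riemann-type sums on a fine partition. A sketch (assuming NumPy; partition size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
t, n = 1.0, 100_000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))    # W(t_0), ..., W(t_n) with t_i = i t/n

left_sum  = np.sum(W[:-1] * np.diff(W))       # sum W(t_i)(W(t_{i+1}) - W(t_i))
right_sum = np.sum(W[1:]  * np.diff(W))       # evaluation at the right end point
print("left-endpoint sum  :", round(float(left_sum), 4),
      "   (1/2)W(t)^2 - t/2 =", round(0.5 * W[-1]**2 - 0.5 * t, 4))
print("right-endpoint sum :", round(float(right_sum), 4),
      "   (1/2)W(t)^2 + t/2 =", round(0.5 * W[-1]**2 + 0.5 * t, 4))
```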

Significance of evaluation at the left end point

If we replace t_i by t_{i+1} in (7.2), we obtain

lim Σ W(t_{i+1})(W(t_{i+1}) − W(t_i))
 = lim Σ [(1/2)(W(t_{i+1})^2 − W(t_i)^2) + (1/2)(W(t_{i+1}) − W(t_i))^2]
 = (1/2)W(t)^2 + lim (1/2) Σ (W(t_{i+1}) − W(t_i))^2
 = (1/2)W(t)^2 + (1/2)t.

Similarly, if N is a Poisson process,

lim Σ N(t_i)(N(t_{i+1}) − N(t_i)) = Σ_{k=1}^{N(t)} (k−1),

while

lim Σ N(t_{i+1})(N(t_{i+1}) − N(t_i)) = Σ_{k=1}^{N(t)} k.

Definition of the stochastic integral

For any partition {t_i} of [0, ∞), 0 = t_0 < t_1 < t_2 < ···, and any cadlag x and y, define

S(t, {t_i}, x, y) = Σ x(t_i)(y(t ∧ t_{i+1}) − y(t ∧ t_i)).

Definition 7.1 For stochastic processes X and Y, define Z = ∫ X− dY if for each T > 0 and each ε > 0, there exists a δ > 0 such that

P{sup_{t≤T} |Z(t) − S(t, {t_i}, X, Y)| ≥ ε} ≤ ε

for all partitions {t_i} satisfying max|t_{i+1} − t_i| ≤ δ.

Example

If X is piecewise constant, that is, for some collection of random variables {ξ_i} and random variables {τ_i} satisfying 0 = τ_0 < τ_1 < ···,

X = Σ ξ_i 1_{[τ_i, τ_{i+1})},

then

∫_0^t X(s−) dY(s) = Σ ξ_i (Y(t ∧ τ_{i+1}) − Y(t ∧ τ_i)) = Σ X(τ_i)(Y(t ∧ τ_{i+1}) − Y(t ∧ τ_i)).

Conditions for existence: finite variation processes

The total variation of Y up to time t is defined as

T_t(Y) ≡ sup Σ |Y(t_{i+1}) − Y(t_i)|,

where the supremum is over all partitions of the interval [0, t].

Proposition 7.2 T_t(f) < ∞ for each t > 0 if and only if there exist monotone increasing functions f_1, f_2 such that f = f_1 − f_2. If T_t(f) < ∞, then f_1 and f_2 can be selected so that T_t(f) = f_1(t) + f_2(t). If f is cadlag, then T_t(f) is cadlag.

Proof. Note that

T_t(f) − f(t) = sup Σ (|f(t_{i+1}) − f(t_i)| − (f(t_{i+1}) − f(t_i)))

is an increasing function of t, as is T_t(f) + f(t).

Existence

Theorem 7.3 If Y is of finite variation, then ∫ X− dY exists for all X, ∫ X− dY is cadlag, and if Y is continuous, ∫ X− dY is continuous. (Recall that we are assuming throughout that X is cadlag.)

Proof. Let {t_i}, {s_i} be partitions, and let {u_i} be a refinement of both. Then there exist k_i ≤ l_i and k'_i ≤ l'_i such that

Y(t_{i+1}) − Y(t_i) = Σ_{j=k_i}^{l_i} (Y(u_{j+1}) − Y(u_j)),
Y(s_{i+1}) − Y(s_i) = Σ_{j=k'_i}^{l'_i} (Y(u_{j+1}) − Y(u_j)).

Define t(u) = t_i for t_i ≤ u < t_{i+1} and s(u) = s_i for s_i ≤ u < s_{i+1}, so that

|S(t, {t_i}, X, Y) − S(t, {s_i}, X, Y)|   (7.3)
 = |Σ X(t(u_i))(Y(u_{i+1} ∧ t) − Y(u_i ∧ t)) − Σ X(s(u_i))(Y(u_{i+1} ∧ t) − Y(u_i ∧ t))|
 ≤ Σ |X(t(u_i)) − X(s(u_i))| |Y(u_{i+1} ∧ t) − Y(u_i ∧ t)|.

There is a measure µ_Y such that T_t(Y) = µ_Y(0, t]. Since |Y(b) − Y(a)| ≤ µ_Y(a, b], the right side of (7.3) is less than

Σ |X(t(u_i)) − X(s(u_i))| µ_Y(u_i ∧ t, u_{i+1} ∧ t] = ∫_{(0,t]} |X(t(u−)) − X(s(u−))| µ_Y(du).

Since lim |X(t(u−)) − X(s(u−))| = 0 as the meshes of the partitions go to zero,

∫_{(0,t]} |X(t(u−)) − X(s(u−))| µ_Y(du) → 0   (7.4)

by the bounded convergence theorem. Since the integral in (7.4) is monotone in t, the convergence is uniform on bounded time intervals.

Representation of quadratic variation

Note that

Σ (Y(t_{i+1}) − Y(t_i))^2 = Y(t)^2 − Y(0)^2 − 2 Σ Y(t_i)(Y(t_{i+1}) − Y(t_i)),

so that

[Y]_t = Y(t)^2 − Y(0)^2 − 2 ∫_0^t Y(s−) dY(s),

and [Y]_t exists if and only if ∫ Y− dY exists.

Conditions for existence: square integrable martingales

If M is a square integrable martingale and X is bounded (by a constant) and adapted, then for any partition {t_i},

Y(t) = S(t, {t_i}, X, M) = Σ X(t_i)(M(t ∧ t_{i+1}) − M(t ∧ t_i))

is a square integrable martingale. (In fact, each summand is a square integrable martingale.)

Theorem 7.4 Suppose M is a local square integrable {F_t}-martingale and X is cadlag and {F_t}-adapted. Then ∫ X− dM exists.

Reduction to bounded X and square integrable martingale M

Lemma 7.5 Let X and M be as in Theorem 7.4. Let {τ_n} be a localizing sequence for M and define X_k = (X ∧ k) ∨ (−k). Then

P{sup_{t≤T} |S(t, {t_i}, X, M) − S(t, {s_i}, X, M)| ≥ ε}
 ≤ P{τ_n ≤ T} + P{sup_{t≤T} |X(t)| > k}
  + P{sup_{t≤T} |S(t, {t_i}, X_k, M^{τ_n}) − S(t, {s_i}, X_k, M^{τ_n})| ≥ ε}.

Proof (of Theorem 7.4). By Lemma 7.5, it is enough to consider M a square integrable martingale and X satisfying |X(t)| ≤ C. Then, for any partition {t_i}, S(t, {t_i}, X, M) is a square integrable martingale. For two partitions {t_i} and {s_i}, define {u_i}, t(u), and s(u) as in the proof of Theorem 7.3. Recall that t(u_i), s(u_i) ≤ u_i, so X(t(u)) and X(s(u)) are {F_u}-adapted. By Doob's inequality and the properties of martingales,

E[sup_{t≤T} (S(t, {t_i}, X, M) − S(t, {s_i}, X, M))^2]   (7.5)
 ≤ 4E[(S(T, {t_i}, X, M) − S(T, {s_i}, X, M))^2]
 = 4E[(Σ (X(t(u_i)) − X(s(u_i)))(M(u_{i+1} ∧ T) − M(u_i ∧ T)))^2]
 = 4E[Σ (X(t(u_i)) − X(s(u_i)))^2 (M(u_{i+1} ∧ T) − M(u_i ∧ T))^2]
 = 4E[Σ (X(t(u_i)) − X(s(u_i)))^2 ([M]_{u_{i+1}∧T} − [M]_{u_i∧T})].

[M] is nondecreasing and so determines a measure by µ_{[M]}(0, t] = [M]_t, and it follows that

E[Σ (X(t(u_i)) − X(s(u_i)))^2 ([M]_{u_{i+1}∧T} − [M]_{u_i∧T})]   (7.6)
 = E[∫_{(0,T]} (X(t(u−)) − X(s(u−)))^2 µ_{[M]}(du)],

since X(t(u)) and X(s(u)) are constant between u_i and u_{i+1}. Since

∫_{(0,t]} (X(t(u−)) − X(s(u−)))^2 µ_{[M]}(du) ≤ 4C^2 µ_{[M]}(0, t],

the right side of (7.6) goes to zero as max|t_{i+1} − t_i| → 0 and max|s_{i+1} − s_i| → 0. Consequently, ∫_0^t X(s−) dM(s) exists by the completeness of L^2, or more precisely, by the completeness of the space of processes with norm

‖Z‖_T = √(E[sup_{t≤T} |Z(t)|^2]).

Completeness in a space of stochastic processes

Lemma 7.6 Let H_T be the space of cadlag, R-valued stochastic processes on [0, T] with norm ‖Z‖_T = √(E[sup_{t≤T} |Z(t)|^2]). Then H_T is complete.

Proof. Let {Z_n} be a Cauchy sequence, and let {n_k} be an increasing subsequence satisfying ‖Z_m − Z_{n_k}‖_T ≤ 4^{−k} for m > n_k. Since

P{sup_{t≤T} |Z_{n_{k+1}}(t) − Z_{n_k}(t)| ≥ 2^{−k}} ≤ 4^{−k},

the series

Z(t) = lim_{k→∞} Z_{n_k}(t) = Z_{n_1}(t) + Σ_{k=1}^∞ (Z_{n_{k+1}}(t) − Z_{n_k}(t))

converges almost surely, uniformly in t ∈ [0, T]. By Fatou's lemma,

E[sup_{t≤T} |Z(t) − Z_{n_k}(t)|^2] ≤ lim_{m→∞} E[sup_{t≤T} |Z_{n_m}(t) − Z_{n_k}(t)|^2] ≤ (Σ_{l=k}^∞ 4^{−l})^2,

and

lim sup_{m→∞} ‖Z − Z_m‖_T ≤ lim_{k→∞} lim sup_{m→∞} (‖Z − Z_{n_k}‖_T + ‖Z_{n_k} − Z_m‖_T) = 0.

Continuity properties of integrals

Corollary 7.7 If M is a square integrable martingale and X is adapted, then ∫ X− dM is cadlag. If, in addition, M is continuous, then ∫ X− dM is continuous. If |X| ≤ C for some constant C > 0, then ∫ X− dM is a square integrable martingale.

L^2 isometry

Proposition 7.8 Suppose M is a square integrable martingale and

E[∫_0^t X(s−)^2 d[M]_s] < ∞.

Then ∫_0^t X(s−) dM(s) is a square integrable martingale with

E[(∫_0^t X(s−) dM(s))^2] = E[∫_0^t X(s−)^2 d[M]_s].   (7.7)

Remark 7.9 If W is standard Brownian motion, the identity becomes

E[(∫_0^t X(s−) dW(s))^2] = E[∫_0^t X(s)^2 ds].

Proof for simple processes

Proof. Suppose X(t) = Σ ξ_i 1_{[t_i, t_{i+1})} is an adapted simple process. Then

E[(∫_0^t X(s−) dM(s))^2] = E[Σ X(t_i)^2 (M(t_{i+1}) − M(t_i))^2]
 = E[Σ X(t_i)^2 ([M]_{t_{i+1}} − [M]_{t_i})]
 = E[∫_0^t X(s−)^2 d[M]_s].
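
For M = W, Remark 7.9 says E[(∫_0^t X(s−) dW(s))^2] = E[∫_0^t X(s)^2 ds]. The following Monte Carlo sketch (assuming NumPy; the choice X = W and all parameter values are arbitrary illustrative choices) checks this with left-endpoint sums approximating the stochastic integral.

```python
import numpy as np

rng = np.random.default_rng(9)
paths, n, t = 10_000, 500, 1.0
dt = t / n

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.hstack([np.zeros((paths, 1)), np.cumsum(dW, axis=1)])   # W(t_0), ..., W(t_n)

# left-endpoint approximation of the stochastic integral with integrand X = W
ito_integral = np.sum(W[:, :-1] * dW, axis=1)

lhs = np.mean(ito_integral**2)                      # E[(∫ W dW)^2]
rhs = np.mean(np.sum(W[:, :-1]**2, axis=1) * dt)    # E[∫ W(s)^2 ds]
print(f"E[(∫W dW)^2] ≈ {lhs:.4f}   E[∫W^2 ds] ≈ {rhs:.4f}   exact value t^2/2 = {t**2/2:.4f}")
```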

Proof for bounded adapted processes

Let X be bounded with |X(t)| ≤ C, and for a sequence of partitions {t_i^n} with lim_{n→∞} sup_i |t_{i+1}^n − t_i^n| = 0, define X_n(t) = X(t_i^n) for t_i^n ≤ t < t_{i+1}^n. Then

∫_0^t X_n(s−) dM(s) = Σ X(t_i^n)(M(t ∧ t_{i+1}^n) − M(t ∧ t_i^n)) → ∫_0^t X(s−) dM(s),

where the convergence is in L^2. It follows that ∫ X− dM is a martingale, and

E[(∫_0^t X(s−) dM(s))^2] = lim_{n→∞} E[(∫_0^t X_n(s−) dM(s))^2]
 = lim_{n→∞} E[∫_0^t X_n(s−)^2 d[M]_s]
 = E[∫_0^t X(s−)^2 d[M]_s].

The last equality holds by the dominated convergence theorem.

General cadlag, adapted X

Define X_k(t) = (k ∧ X(t)) ∨ (−k). Then ∫_0^t X_k(s−) dM(s) → ∫_0^t X(s−) dM(s) in probability, and by Fatou's lemma,

lim inf_{k→∞} E[(∫_0^t X_k(s−) dM(s))^2] ≥ E[(∫_0^t X(s−) dM(s))^2].

Since (7.7) holds for bounded processes,

E[(∫_0^t X_k(s−) dM(s))^2] = E[∫_0^t X_k(s−)^2 d[M]_s],   (7.8)

and

lim_{k→∞} E[∫_0^t X_k(s−)^2 d[M]_s] = lim_{k→∞} E[∫_0^t (X(s−)^2 ∧ k^2) d[M]_s] = E[∫_0^t X(s−)^2 d[M]_s] < ∞.

Similarly,

E[(∫_0^t X_k(s−) dM(s) − ∫_0^t X_j(s−) dM(s))^2] = E[(∫_0^t (X_k(s−) − X_j(s−)) dM(s))^2]
 = E[∫_0^t |X_k(s−) − X_j(s−)|^2 d[M]_s].   (7.9)

Since |X_k(s) − X_j(s)|^2 ≤ 4X(s)^2, the dominated convergence theorem implies that the right side of (7.9) converges to zero as j, k → ∞. Consequently, ∫_0^t X_k(s−) dM(s) → ∫_0^t X(s−) dM(s) in L^2, and the left side of (7.8) converges to E[(∫_0^t X(s−) dM(s))^2], giving (7.7).

General existence

If ∫ X(s−) dY_1(s) and ∫ X(s−) dY_2(s) exist, then ∫ X(s−) d(Y_1(s) + Y_2(s)) exists and is given by the sum of the other integrals.

Corollary 7.10 If Y = M + V, where M is an {F_t}-local martingale and V is an {F_t}-adapted finite variation process, then ∫ X− dY exists for all cadlag, adapted X, ∫ X− dY is cadlag, and if Y is continuous, ∫ X− dY is continuous.

Proof. If M is a local square integrable martingale, then there exists a sequence of stopping times {τ_n} such that M^{τ_n}, defined by M^{τ_n}(t) = M(t ∧ τ_n), is a square integrable martingale. But for t < τ_n,

∫_0^t X(s−) dM(s) = ∫_0^t X(s−) dM^{τ_n}(s),

and hence ∫ X− dM exists. Linearity gives existence for any Y that is the sum of a local square integrable martingale and an adapted FV process. But Theorem 5.4 states that any local martingale is the sum of a local square integrable martingale and an adapted FV process, so the corollary follows.


More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

1. Probability Measure and Integration Theory in a Nutshell

1. Probability Measure and Integration Theory in a Nutshell 1. Probability Measure and Integration Theory in a Nutshell 1.1. Measurable Space and Measurable Functions Definition 1.1. A measurable space is a tuple (Ω, F) where Ω is a set and F a σ-algebra on Ω,

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013 Conditional expectations, filtration and martingales Content. 1. Conditional expectations 2. Martingales, sub-martingales

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Lecture 2: Random Variables and Expectation

Lecture 2: Random Variables and Expectation Econ 514: Probability and Statistics Lecture 2: Random Variables and Expectation Definition of function: Given sets X and Y, a function f with domain X and image Y is a rule that assigns to every x X one

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

16.1. Signal Process Observation Process The Filtering Problem Change of Measure

16.1. Signal Process Observation Process The Filtering Problem Change of Measure 1. Introduction The following notes aim to provide a very informal introduction to Stochastic Calculus, and especially to the Itô integral. They owe a great deal to Dan Crisan s Stochastic Calculus and

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

Probability Theory. Richard F. Bass

Probability Theory. Richard F. Bass Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

PROBABILITY THEORY II

PROBABILITY THEORY II Ruprecht-Karls-Universität Heidelberg Institut für Angewandte Mathematik Prof. Dr. Jan JOHANNES Outline of the lecture course PROBABILITY THEORY II Summer semester 2016 Preliminary version: April 21, 2016

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Solutions to the Exercises in Stochastic Analysis

Solutions to the Exercises in Stochastic Analysis Solutions to the Exercises in Stochastic Analysis Lecturer: Xue-Mei Li 1 Problem Sheet 1 In these solution I avoid using conditional expectations. But do try to give alternative proofs once we learnt conditional

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

Probability: Handout

Probability: Handout Probability: Handout Klaus Pötzelberger Vienna University of Economics and Business Institute for Statistics and Mathematics E-mail: Klaus.Poetzelberger@wu.ac.at Contents 1 Axioms of Probability 3 1.1

More information

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1 Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet. For ξ, ξ 2, i.i.d. with P(ξ i = ± = /2 define the discrete-time random walk W =, W n = ξ +... + ξ n. (i Formulate and prove the property

More information

I. ANALYSIS; PROBABILITY

I. ANALYSIS; PROBABILITY ma414l1.tex Lecture 1. 12.1.2012 I. NLYSIS; PROBBILITY 1. Lebesgue Measure and Integral We recall Lebesgue measure (M411 Probability and Measure) λ: defined on intervals (a, b] by λ((a, b]) := b a (so

More information

Notes on Stochastic Calculus

Notes on Stochastic Calculus Notes on Stochastic Calculus David Nualart Kansas University nualart@math.ku.edu 1 Stochastic Processes 1.1 Probability Spaces and Random Variables In this section we recall the basic vocabulary and results

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Convergence of Markov Processes. Amanda Turner University of Cambridge

Convergence of Markov Processes. Amanda Turner University of Cambridge Convergence of Markov Processes Amanda Turner University of Cambridge 1 Contents 1 Introduction 2 2 The Space D E [, 3 2.1 The Skorohod Topology................................ 3 3 Convergence of Probability

More information

Doléans measures. Appendix C. C.1 Introduction

Doléans measures. Appendix C. C.1 Introduction Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

CHAPTER 1. Martingales

CHAPTER 1. Martingales CHAPTER 1 Martingales The basic limit theorems of probability, such as the elementary laws of large numbers and central limit theorems, establish that certain averages of independent variables converge

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Brownian Motion and Conditional Probability

Brownian Motion and Conditional Probability Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical

More information

MATH 418: Lectures on Conditional Expectation

MATH 418: Lectures on Conditional Expectation MATH 418: Lectures on Conditional Expectation Instructor: r. Ed Perkins, Notes taken by Adrian She Conditional expectation is one of the most useful tools of probability. The Radon-Nikodym theorem enables

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Lecture 5: Expectation

Lecture 5: Expectation Lecture 5: Expectation 1. Expectations for random variables 1.1 Expectations for simple random variables 1.2 Expectations for bounded random variables 1.3 Expectations for general random variables 1.4

More information

1 Independent increments

1 Independent increments Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

7 Convergence in R d and in Metric Spaces

7 Convergence in R d and in Metric Spaces STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

CONVERGENCE OF RANDOM SERIES AND MARTINGALES

CONVERGENCE OF RANDOM SERIES AND MARTINGALES CONVERGENCE OF RANDOM SERIES AND MARTINGALES WESLEY LEE Abstract. This paper is an introduction to probability from a measuretheoretic standpoint. After covering probability spaces, it delves into the

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Metric Spaces. Exercises Fall 2017 Lecturer: Viveka Erlandsson. Written by M.van den Berg

Metric Spaces. Exercises Fall 2017 Lecturer: Viveka Erlandsson. Written by M.van den Berg Metric Spaces Exercises Fall 2017 Lecturer: Viveka Erlandsson Written by M.van den Berg School of Mathematics University of Bristol BS8 1TW Bristol, UK 1 Exercises. 1. Let X be a non-empty set, and suppose

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1) 1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Optional Stopping Theorem Let X be a martingale and T be a stopping time such

Optional Stopping Theorem Let X be a martingale and T be a stopping time such Plan Counting, Renewal, and Point Processes 0. Finish FDR Example 1. The Basic Renewal Process 2. The Poisson Process Revisited 3. Variants and Extensions 4. Point Processes Reading: G&S: 7.1 7.3, 7.10

More information

Probability Theory II. Spring 2016 Peter Orbanz

Probability Theory II. Spring 2016 Peter Orbanz Probability Theory II Spring 2016 Peter Orbanz Contents Chapter 1. Martingales 1 1.1. Martingales indexed by partially ordered sets 1 1.2. Martingales from adapted processes 4 1.3. Stopping times and

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

2 Lebesgue integration

2 Lebesgue integration 2 Lebesgue integration 1. Let (, A, µ) be a measure space. We will always assume that µ is complete, otherwise we first take its completion. The example to have in mind is the Lebesgue measure on R n,

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information