Math 635: An Introduction to Brownian Motion and Stochastic Calculus


1. Introduction and review
2. Notions of convergence and results from measure theory
3. Review of Markov chains
4. Change of measure
5. Information and conditional expectations
6. Martingales
7. Brownian motion
8. Stochastic integrals
9. Black-Scholes and other models
10. The multidimensional stochastic calculus
11. Stochastic differential equations
12. Markov property
13. SDEs and partial differential equations
14. Change of measure and asset pricing
15. Martingale representation and completeness
16. Applications and examples
17. Stationary distributions and forward equations
18. Processes with jumps
19. Assignments
20. Problems

February 22 review

Independence
Conditional expectations: basic properties
Jensen's inequality
Functions of known and unknown random variables
Filtrations and martingales
Optional sampling theorem
Doob's inequalities

1. Introduction and review

The basic concepts of probability:
Models of experiments
Sample space and events
Probability measures
Random variables
The distribution of a random variable
Definition of the expectation
Properties of expectations
Jensen's inequality

Experiments

Probability models experiments in which repeated trials typically result in different outcomes.

As a means of understanding the real world, probability identifies surprising regularities in highly irregular phenomena. If we roll a die 1000 times, we anticipate that about a sixth of the time the roll is 5. If that doesn't happen, we suspect that something is wrong with the die or the way it was rolled.

Probabilities of events

Events are statements about the outcome of the experiment: {the roll is 6}, {the rat died}, {the television set is defective}.

The anticipated regularity is that

P(A) ≈ (# times A occurs) / (# of trials)

This presumption is called the relative frequency interpretation of probability.

Definition of probability

The probability of an event A should be

P(A) = lim_{n→∞} (# times A occurs in first n trials) / n

The mathematical problem: make sense out of this.

The real world relationship: probabilities are predictions about the future.
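A minimal simulation sketch of this limiting relative frequency for a fair six-sided die (the trial count and seed are arbitrary choices): the fraction of trials on which an event occurs settles near its probability.

```python
import random

def relative_frequency(event, trials, seed=0):
    """Estimate P(event) as the fraction of fair-die rolls on which it occurs."""
    rng = random.Random(seed)
    return sum(event(rng.randint(1, 6)) for _ in range(trials)) / trials

# The event {the roll is 5} has probability 1/6 for a fair die.
freq = relative_frequency(lambda roll: roll == 5, 100_000)
```

With 100,000 trials the estimate is typically within a percent of 1/6.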

Random variables

In performing an experiment, numerical measurements or observations are made. Call these random variables since they vary randomly. Give the quantity a name: X.

{X = a} and {a < X < b} are statements about the outcome of the experiment, that is, they are events.

The distribution of a random variable

If X_k is the value of X observed on the kth trial, then we should have

P{X = a} = lim_{n→∞} #{k ≤ n : X_k = a} / n

If X has only finitely many possible values, then ∑_{a ∈ R(X)} P{X = a} = 1. This collection of probabilities determines the distribution of X.

Distribution function

More generally,

P{X ≤ x} = lim_{n→∞} (1/n) ∑_{k=1}^n 1_{(−∞,x]}(X_k)

F_X(x) ≡ P{X ≤ x} is the distribution function for X.

The law of averages

If R(X) = {a_1, ..., a_m} is finite, then

lim_{n→∞} (X_1 + ··· + X_n)/n = lim_{n→∞} ∑_{l=1}^m a_l #{k ≤ n : X_k = a_l}/n = ∑_{l=1}^m a_l P{X = a_l}

More generally, if R(X) ⊂ [c, d], −∞ < c < d < ∞, then for a partition c = x_0 < x_1 < ··· < x_{m+1} = d,

∑_l x_l P{x_l < X ≤ x_{l+1}} = lim_{n→∞} ∑_l x_l #{k ≤ n : x_l < X_k ≤ x_{l+1}}/n
≤ lim_{n→∞} (X_1 + ··· + X_n)/n
≤ lim_{n→∞} ∑_l x_{l+1} #{k ≤ n : x_l < X_k ≤ x_{l+1}}/n = ∑_l x_{l+1} P{x_l < X ≤ x_{l+1}}

Since ∑_l x_{l+1} P{x_l < X ≤ x_{l+1}} = ∑_l x_{l+1} (F_X(x_{l+1}) − F_X(x_l)), both bounds converge to ∫_c^d x dF_X(x) as the mesh of the partition goes to zero.

The expectation as a Stieltjes integral

If R(X) ⊂ [c, d], define E[X] = ∫_c^d x dF_X(x). If the relative frequency interpretation is valid, then

lim_{n→∞} (X_1 + ··· + X_n)/n = E[X].

A random variable without an expectation

Example 1.1 Suppose P{X ≤ x} = x/(1 + x), x ≥ 0. Then

lim_{n→∞} (X_1 + ··· + X_n)/n ≥ ∑_{l=0}^m l P{l < X ≤ l + 1} = ∑_{l=0}^m l ((l+1)/(l+2) − l/(l+1)) = ∑_{l=0}^m l/((l+2)(l+1)) → ∞ as m → ∞

One could say E[X] = ∞, and we will.
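The distribution in Example 1.1 can be sampled by inverse transform: if F(x) = x/(1+x), then F⁻¹(u) = u/(1−u). A small sketch (the sample size and seed are arbitrary choices) checks the cdf empirically; the running mean keeps growing with n because E[X] = ∞.

```python
import random

def sample_X(rng):
    """Inverse-cdf sample from F(x) = x/(1+x), x >= 0; F^{-1}(u) = u/(1-u)."""
    u = rng.random()          # uniform on [0, 1)
    return u / (1.0 - u)

rng = random.Random(1)
n = 100_000
samples = [sample_X(rng) for _ in range(n)]
frac_below_1 = sum(x <= 1.0 for x in samples) / n   # P{X <= 1} = 1/2
running_mean = sum(samples) / n                     # drifts upward: E[X] is infinite
```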

Review of basic calculus

Definition 1.2 A sequence {x_n} ⊂ R converges to x ∈ R (lim_{n→∞} x_n = x) if and only if for each ε > 0 there exists an n_ε > 0 such that n > n_ε implies |x_n − x| ≤ ε.

Let {a_k} ⊂ R. The series ∑_{k=1}^∞ a_k converges if lim_{n→∞} ∑_{k=1}^n a_k ∈ R exists. The series converges absolutely if ∑_{k=1}^∞ |a_k| converges. (Or we write ∑_{k=1}^∞ |a_k| < ∞.)

Examples of things you should know:

lim_{n→∞} (a x_n + b y_n) = a lim_{n→∞} x_n + b lim_{n→∞} y_n, if the two limits on the right exist.

If lim_{n,m→∞} |x_n − x_m| = 0 ({x_n} is a Cauchy sequence), then lim_{n→∞} x_n exists.

Examples

If α > 1, then ∑_{k=1}^∞ 1/k^α < ∞. For α = 1, however, ∑_{k=1}^∞ 1/k = ∞;

∑_{k=1}^∞ (−1)^k/k = lim_{m→∞} ∑_{l=1}^m (1/(2l) − 1/(2l−1)) = −∑_{l=1}^∞ 1/(2l(2l−1))

The sample space

The possible outcomes of the experiment form a set Ω called the sample space. Each event (statement about the outcome) can be identified with the subset of the sample space for which the statement is true.

The collection of events

If
A = {ω ∈ Ω : statement I is true for ω}
B = {ω ∈ Ω : statement II is true for ω}
then
A ∩ B = {ω ∈ Ω : statement I and statement II are true for ω}
A ∪ B = {ω ∈ Ω : statement I or statement II is true for ω}
A^c = {ω ∈ Ω : statement I is not true for ω}

Let F be the collection of events. Then A, B ∈ F should imply that A ∩ B, A ∪ B, and A^c are all in F. F is an algebra of subsets of Ω. In fact, we assume that F is a σ-algebra (closed under countable unions and complements).

The probability measure

Each event A ∈ F is assigned a probability P(A). From the relative frequency interpretation, we must have P(A ∪ B) = P(A) + P(B) for disjoint events A and B, and by induction, if A_1, ..., A_m are disjoint,

P(∪_{k=1}^m A_k) = ∑_{k=1}^m P(A_k)   (finite additivity)

In fact, we assume countable additivity: if A_1, A_2, ... are disjoint events, then

P(∪_{k=1}^∞ A_k) = ∑_{k=1}^∞ P(A_k)

and P(Ω) = 1.

A probability space is a measure space

A measure space (M, M, µ) consists of a set M, a σ-algebra M of subsets of M, and a nonnegative function µ defined on M that satisfies µ(∅) = 0 and countable additivity. A probability space is a measure space (Ω, F, P) satisfying P(Ω) = 1.

Random variables

If X is a random variable, then we must know the value of X if we know the outcome ω ∈ Ω of the experiment. Consequently, X is a function defined on Ω. The statement {X ≤ c} must be an event, so

{X ≤ c} = {ω : X(ω) ≤ c} ∈ F.

In other words, X is a measurable function on (Ω, F, P). R(X) will denote the range of X:

R(X) = {x ∈ R : x = X(ω), some ω ∈ Ω}

Distributions

Definition 1.3 The Borel subsets B(R) form the smallest σ-algebra of subsets of R containing (−∞, c] for all c ∈ R.

Definition 1.4 The distribution of an R-valued random variable X is the Borel measure defined by

µ_X(B) = P{X ∈ B}, B ∈ B(R).

µ_X is called the measure induced by the function X.

Discrete distributions

Definition 1.5 A random variable is discrete or has a discrete distribution if and only if R(X) is countable.

If X is discrete, the distribution of X is determined by the probability mass function

p_X(x) = P{X = x}, x ∈ R(X).

Note that ∑_{x ∈ R(X)} P{X = x} = 1.

Examples

Binomial distribution: for some positive integer n and some 0 ≤ p ≤ 1,

P{X = k} = (n choose k) p^k (1 − p)^{n−k}, k = 0, 1, ..., n

Poisson distribution: for some λ > 0,

P{X = k} = e^{−λ} λ^k / k!, k = 0, 1, ...
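Both probability mass functions are easy to check numerically. A brief sketch using Python's `math` module (the parameters n = 10, p = 0.3, λ = 2.5 are arbitrary choices): each pmf sums to 1, and the means come out to np and λ.

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P{X = k} for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P{X = k} for a Poisson(lam) random variable."""
    return exp(-lam) * lam**k / factorial(k)

total_binom = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
mean_binom = sum(k * binomial_pmf(k, 10, 0.3) for k in range(11))
# Truncate the Poisson sum; the tail beyond k = 100 is negligible for lam = 2.5.
total_pois = sum(poisson_pmf(k, 2.5) for k in range(100))
mean_pois = sum(k * poisson_pmf(k, 2.5) for k in range(100))
```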

Absolutely continuous distributions

Definition 1.6 The distribution of X is absolutely continuous if and only if there exists a nonnegative function f_X such that

P{a < X ≤ b} = ∫_a^b f_X(x) dx, a < b ∈ R.

Then f_X is the probability density function for X.

Examples

Normal distribution: f_X(x) = (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)}

Exponential distribution: f_X(x) = λ e^{−λx} for x ≥ 0, and f_X(x) = 0 for x < 0

Expectations

If X is discrete, then letting R(X) = {a_1, a_2, ...},

X = ∑_i a_i 1_{A_i}, where A_i = {X = a_i}.

If ∑_i |a_i| P(A_i) < ∞, then

E[X] = ∑_i a_i P{X = a_i} = ∑_i a_i P(A_i)

For general X, let Y_n = ⌊nX⌋/n and Z_n = ⌈nX⌉/n. Then Y_n ≤ X ≤ Z_n, so we must have E[Y_n] ≤ E[X] ≤ E[Z_n]. Specifically, if ∑_k k P{k < |X| ≤ k + 1} < ∞, which is true if and only if E[|Y_n|] < ∞ and E[|Z_n|] < ∞ for all n (we will say that X is integrable), then define

E[X] ≡ lim_{n→∞} E[Y_n] = lim_{n→∞} E[Z_n].   (1.1)

Notation: E[X] = ∫_Ω X dP = ∫_Ω X(ω) P(dω).

Properties

Lemma 1.7 (Monotonicity) If P{X ≤ Y} = 1 and X and Y are integrable, then E[X] ≤ E[Y].

Lemma 1.8 (Positivity) If P{X ≥ 0} = 1 and X is integrable, then E[X] ≥ 0.

Lemma 1.9 (Linearity) If X and Y are integrable and a, b ∈ R, then aX + bY is integrable and E[aX + bY] = aE[X] + bE[Y].

Jensen's inequality

Lemma 1.10 Let X be a random variable and ϕ : R → R be convex. If E[|X|] < ∞ and E[|ϕ(X)|] < ∞, then ϕ(E[X]) ≤ E[ϕ(X)].

Proof. If ϕ is convex, then for each x,

ϕ⁺(x) = lim_{y→x+} (ϕ(y) − ϕ(x))/(y − x)

exists and ϕ(y) ≥ ϕ(x) + ϕ⁺(x)(y − x). Setting µ = E[X],

E[ϕ(X)] ≥ E[ϕ(µ) + ϕ⁺(µ)(X − µ)] = ϕ(µ) + ϕ⁺(µ)E[X − µ] = ϕ(µ).
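A numerical illustration of Jensen's inequality for the convex function ϕ(x) = x² (the sample size and seed are arbitrary choices): for this ϕ the gap E[ϕ(X)] − ϕ(E[X]) is exactly the sample variance, hence nonnegative.

```python
import random

rng = random.Random(2)
xs = [rng.gauss(0.0, 1.0) for _ in range(50_000)]

def phi(x):
    """A convex function."""
    return x * x

mean_x = sum(xs) / len(xs)                      # empirical E[X]
mean_phi = sum(phi(x) for x in xs) / len(xs)    # empirical E[phi(X)]
# Jensen: phi(mean_x) <= mean_phi; the difference is the sample variance.
```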

Consequences of countable additivity

P(A^c) = 1 − P(A). If A ⊂ B, then P(A) ≤ P(B).

If A_1 ⊂ A_2 ⊂ ···, then P(∪_{k=1}^∞ A_k) = lim_{n→∞} P(A_n):

P(∪_{k=1}^∞ A_k) = P(∪_{k=1}^∞ (A_k ∩ A_{k−1}^c)) = ∑_{k=1}^∞ P(A_k ∩ A_{k−1}^c) = lim_{n→∞} ∑_{k=1}^n P(A_k ∩ A_{k−1}^c) = lim_{n→∞} P(A_n)

If A_1 ⊃ A_2 ⊃ ···, then P(∩_{k=1}^∞ A_k) = lim_{n→∞} P(A_n):

A_n = (∩_{k=1}^∞ A_k) ∪ (∪_{k=n}^∞ (A_k ∩ A_{k+1}^c))

Properties of cumulative distribution functions

If X is an R-valued random variable (P{−∞ < X < ∞} = 1), then

lim_{x→∞} F_X(x) = lim_{n→∞} P{X ≤ n} = P{X < ∞} = 1.

lim_{x→−∞} F_X(x) = lim_{n→∞} P{X ≤ −n} = P(∩_n {X ≤ −n}) = 0.

F_X(x) − F_X(x−) = lim_{n→∞} (F_X(x) − F_X(x − n^{−1})) = lim_{n→∞} P{x − n^{−1} < X ≤ x} = P{X = x}.

Since F_X(x) − F_X(x−) = P{X = x}, there can be only finitely many discontinuities with F_X(x) − F_X(x−) ≥ n^{−1}, n = 1, 2, .... Consequently, a cdf has at most countably many discontinuities.

Expectations of nonnegative functions

If P{X ≥ 0} = 1 and ∑_{l=0}^∞ l P{l < X ≤ l + 1} = ∞, we will define E[X] = ∞. Note, however, whenever I write E[X] I mean that E[X] is finite unless I explicitly allow E[X] = ∞.

2. Notions of convergence and results from measure theory

Three kinds of convergence of random variables
Limit theorems for expectations and integrals
Computation of expectations

Three kinds of convergence of random variables

Let {X_n} be a sequence of random variables and X another random variable. Let F_{X_n}(x) = P{X_n ≤ x} denote the cdf for X_n.

Definition 2.1

Almost sure convergence: lim_{n→∞} X_n = X almost surely (a.s.) if and only if P{ω : lim_{n→∞} X_n(ω) = X(ω)} = 1.

Convergence in probability: the sequence {X_n} converges to X in probability if and only if for each ε > 0, lim_{n→∞} P{|X_n − X| ≥ ε} = 0.

Convergence in distribution: the sequence {X_n} converges to X in distribution if and only if for each x such that F_X is continuous at x, lim_{n→∞} F_{X_n}(x) = F_X(x).

Examples

Theorem 2.2 (The strong law of large numbers) Suppose ξ_1, ξ_2, ... are independent and identically distributed with E[|ξ_i|] < ∞. Then

lim_{n→∞} (ξ_1 + ··· + ξ_n)/n = E[ξ_1]  a.s.

Theorem 2.3 (The weak law of large numbers) Suppose ξ_1, ξ_2, ... are iid with E[ξ_i²] < ∞. Then for ε > 0,

P{|(ξ_1 + ··· + ξ_n)/n − E[ξ_1]| ≥ ε} ≤ Var(ξ_1)/(nε²).

Theorem 2.4 (Central limit theorem) Let ξ_1, ξ_2, ... be iid with E[ξ_i] = µ and Var(ξ_i) = σ² < ∞. Then

lim_{n→∞} P{(∑_{i=1}^n ξ_i − nµ)/(√n σ) ≤ x} = ∫_{−∞}^x (1/√(2π)) e^{−y²/2} dy
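A quick simulation sketch of the central limit theorem for Bernoulli(1/2) summands (n, the number of replications, and the seed are arbitrary choices): about 95% of the standardized sums should land in [−1.96, 1.96], the central 95% interval of the standard normal.

```python
import random

rng = random.Random(3)
n, reps = 400, 2000
mu, sigma = 0.5, 0.5   # mean and standard deviation of one Bernoulli(1/2) summand

zs = []
for _ in range(reps):
    s = sum(rng.random() < 0.5 for _ in range(n))   # Binomial(n, 1/2) sum
    zs.append((s - n * mu) / (n ** 0.5 * sigma))    # standardized sum

frac_in_95 = sum(abs(z) <= 1.96 for z in zs) / reps
```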

Relationship among notions of convergence

Lemma 2.5 Almost sure convergence implies convergence in probability. Convergence in probability implies convergence in distribution.

Proof. Suppose X_n → X a.s. Then for ε > 0,

A_n ≡ {sup_{k≥n} |X_k − X| ≥ ε} ⊃ {|X_n − X| ≥ ε}

A_1 ⊃ A_2 ⊃ ···, so lim_{n→∞} P(A_n) = P(∩_{k=1}^∞ A_k). But ∩_{k=1}^∞ A_k = {ω : lim sup_{n→∞} |X_n − X| ≥ ε}, and hence lim_{n→∞} P(A_n) = 0.

Suppose X_n → X in probability. Then

F_{X_n}(x) ≤ F_X(x + ε) + P{X_n ≤ x, X > x + ε} ≤ F_X(x + ε) + P{|X_n − X| ≥ ε}.

Therefore,

lim sup_{n→∞} F_{X_n}(x) ≤ lim_{ε→0} F_X(x + ε) = F_X(x),

and similarly,

lim inf_{n→∞} F_{X_n}(x) ≥ lim_{ε→0} F_X(x − ε) = F_X(x−).

Limit theorems for expectations and integrals

Theorem 2.6 (Bounded convergence theorem) Let {X_n} be a sequence of random variables such that X_n → X in distribution. Suppose that there exists C > 0 such that P{|X_n| > C} = 0 for all n. Then lim_{n→∞} E[X_n] = E[X].

Proof. Let −C ≤ a_0 < a_1 < ··· < a_m, with C ≤ a_m, be points of continuity for F_X. Then

∑_{l=0}^{m−1} a_l P{a_l < X_n ≤ a_{l+1}} ≤ E[X_n] ≤ ∑_{l=0}^{m−1} a_{l+1} P{a_l < X_n ≤ a_{l+1}}.

The left and right sides converge to the left and right sides of

∑_{l=0}^{m−1} a_l P{a_l < X ≤ a_{l+1}} ≤ E[X] ≤ ∑_{l=0}^{m−1} a_{l+1} P{a_l < X ≤ a_{l+1}},

and lim sup_{n→∞} |E[X_n] − E[X]| ≤ max_l (a_{l+1} − a_l).

Lemma 2.7 If X ≥ 0 a.s., then lim_{K→∞} E[X ∧ K] = E[X].

Proof. If K > m ≥ n, then

E[X] ≥ E[X ∧ K] ≥ ∑_{k=1}^{mn−1} (k/n) P{k/n ≤ X < (k+1)/n},

and letting m → ∞ the right side converges to E[⌊nX⌋/n], which converges to E[X] as n → ∞.

More generally,

Theorem 2.8 (Monotone convergence theorem) If 0 ≤ X_1 ≤ X_2 ≤ ··· a.s. and X = lim_{n→∞} X_n a.s., then

E[X] = lim_{n→∞} E[X_n]

allowing ∞ = ∞.

Lemma 2.9 (Fatou's lemma) If P{X_n ≥ 0} = 1 and X_n converges in distribution to X, then

lim inf_{n→∞} E[X_n] ≥ E[X]

Proof. lim inf_{n→∞} E[X_n] ≥ lim_{n→∞} E[X_n ∧ K] = E[X ∧ K] → E[X] as K → ∞.

The inequality can be strict: let P{X_n = n} = 1 − P{X_n = 0} = n^{−1}. Then X_n → 0 in distribution, and E[X_n] = 1 for all n.

Theorem 2.10 (Dominated convergence theorem) Suppose there exists Y ≥ 0 such that |X_n| ≤ Y a.s. with E[Y] < ∞, and lim_{n→∞} X_n = X a.s. Then lim_{n→∞} E[X_n] = E[X].

Proof.

E[Y] − lim sup_n E[X_n] = lim inf_n (E[Y] − E[X_n]) = lim inf_n E[Y − X_n] ≥ E[Y − X] = E[Y] − E[X]

E[Y] + lim inf_n E[X_n] = lim inf_n (E[Y] + E[X_n]) = lim inf_n E[Y + X_n] ≥ E[Y + X] = E[Y] + E[X]

so lim sup_n E[X_n] ≤ E[X] ≤ lim inf_n E[X_n].

Computation of expectations

Recall µ_X(B) = P{X ∈ B}. g is Borel measurable if {x : g(x) ≤ c} ∈ B(R) for all c ∈ R. Note that (R, B(R), µ_X) is a probability space, and a Borel measurable function g is a random variable on this probability space.

Theorem 2.11 If X is a random variable on (Ω, F, P) and g is Borel measurable, then g(X) is a random variable, and if E[|g(X)|] < ∞,

E[g(X)] = ∫_R g(x) µ_X(dx)

Lebesgue measure

Lebesgue measure is the unique measure L on B(R) such that for a < b, L((a, b)) = b − a. Integration with respect to L is denoted

∫_R g(x) dx ≡ ∫_R g(x) L(dx)

If f(x) = ∑_{i=1}^m a_i 1_{A_i}, A_i ∈ B(R) with L(A_i) < ∞, a_i ∈ R, then

∫_R f(x) dx ≡ ∑_{i=1}^m a_i L(A_i).

Such an f is called a simple function.

General definition of the Lebesgue integral

For g ≥ 0,

∫_R g(x) dx ≡ sup{∫_R f(x) dx : f simple, 0 ≤ f ≤ g}.

If ∫_R |g(x)| dx < ∞, then setting g⁺(x) = g(x) ∨ 0 and g⁻(x) = (−g(x)) ∨ 0,

∫_R g(x) dx ≡ ∫_R g⁺(x) dx − ∫_R g⁻(x) dx.

Riemann integrals

Definition 2.12 Let −∞ < a < b < ∞. g defined on [a, b] is Riemann integrable if there exists I ∈ R so that for each ε > 0 there exists a δ > 0 such that a = t_0 < t_1 < ··· < t_m = b, s_k ∈ [t_k, t_{k+1}], and max_k (t_{k+1} − t_k) ≤ δ imply

|∑_{k=0}^{m−1} g(s_k)(t_{k+1} − t_k) − I| ≤ ε.

Then I is denoted by ∫_a^b g(x) dx. ∑_{k=0}^{m−1} g(s_k)(t_{k+1} − t_k) is called a Riemann sum. The Riemann integral is the limit of Riemann sums.
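A Riemann sum with s_k taken as the left endpoint t_k, sketched for the arbitrary example ∫_0^π sin x dx = 2 (integrand and mesh are choices made for illustration):

```python
from math import sin, pi

def riemann_sum(g, a, b, m):
    """Riemann sum of g over [a, b] with m equal subintervals,
    evaluating g at the left endpoint s_k = t_k of each."""
    h = (b - a) / m
    return sum(g(a + k * h) * h for k in range(m))

approx = riemann_sum(sin, 0.0, pi, 100_000)   # the integral of sin over [0, pi] is 2
```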

Some properties of Lebesgue integrals

Theorem 2.13 If g is Riemann integrable, then g is Lebesgue integrable and the integrals agree.

Fatou's lemma, the monotone convergence theorem, and the dominated convergence theorem all hold for the Lebesgue integral.

Distributions with Lebesgue densities

Suppose µ_X(B) = ∫_R 1_B(x) f_X(x) dx, B ∈ B(R). Then f_X is a probability density function for X.

Theorem 2.14 Let X be a random variable on (Ω, F, P) with a probability density function f_X. If g is Borel measurable and E[|g(X)|] < ∞, then

E[g(X)] = ∫_R g(x) f_X(x) dx

3. Review of Markov chains

Elementary definition of conditional probability
Independence
Markov property
Transition matrix
Simulation of a Markov chain

Elementary definition of conditional probability

For two events A, B ∈ F, the definition of the conditional probability of A given B (P(A|B)) is intended to capture how we would reassess the probability of A if we knew that B occurred. The relative frequency interpretation suggests that we only consider trials of the experiment on which B occurs. Consequently,

P(A|B) ≈ (# times that A and B occur) / (# times B occurs) = (# times that A and B occur / # trials) / (# times B occurs / # trials) ≈ P(A ∩ B) / P(B)

leading to the definition

P(A|B) = P(A ∩ B) / P(B)

Note that for each fixed B ∈ F, P(·|B) is a probability measure on F.

Independence

If knowing that B occurs doesn't change our assessment of the probability that A occurs (P(A|B) = P(A)), then we say A is independent of B. By the definition of P(A|B), independence is equivalent to

P(A ∩ B) = P(A)P(B).

{A_k, k ∈ I} are mutually independent if

P(A_{k_1} ∩ ··· ∩ A_{k_m}) = ∏_{i=1}^m P(A_{k_i})

for all choices of distinct {k_1, ..., k_m} ⊂ I.

Independence of random variables

Random variables {X_k, k ∈ I} are mutually independent if

P{X_{k_1} ∈ B_1, ..., X_{k_m} ∈ B_m} = ∏_{i=1}^m P{X_{k_i} ∈ B_i}

for all choices of distinct {k_1, ..., k_m} ⊂ I and B_1, ..., B_m ∈ B(R).

Joint distributions

The collection of Borel subsets of R^m (B(R^m)) is the smallest σ-algebra containing all sets of the form (−∞, c_1] × ··· × (−∞, c_m]. The joint distribution of (X_1, ..., X_m) is the measure on B(R^m) defined by

µ_{X_1,...,X_m}(B) = P{(X_1, ..., X_m) ∈ B}, B ∈ B(R^m).

Lemma 3.1 The joint distribution of (X_1, ..., X_m) is uniquely determined by the joint cdf

F_{X_1,...,X_m}(x_1, ..., x_m) = P{X_1 ≤ x_1, ..., X_m ≤ x_m}.

The Markov property

Let X_0, X_1, ... be positive, integer-valued random variables. Then

P{X_0 = i_0, ..., X_m = i_m} = P{X_m = i_m | X_0 = i_0, ..., X_{m−1} = i_{m−1}} P{X_{m−1} = i_{m−1} | X_0 = i_0, ..., X_{m−2} = i_{m−2}} ··· P{X_1 = i_1 | X_0 = i_0} P{X_0 = i_0}

X satisfies the Markov property if for each m and all choices of {i_k},

P{X_m = i_m | X_0 = i_0, ..., X_{m−1} = i_{m−1}} = P{X_m = i_m | X_{m−1} = i_{m−1}}

The transition matrix

Suppose P{X_m = j | X_{m−1} = i} = p_{ij}. (The transition probabilities do not depend on m.) The matrix P = ((p_{ij})) satisfies ∑_j p_{ij} = 1.

P{X_{m+2} = j | X_m = i} = ∑_k P{X_{m+2} = j, X_{m+1} = k | X_m = i}
= ∑_k P{X_{m+2} = j | X_{m+1} = k, X_m = i} P{X_{m+1} = k | X_m = i}
= ∑_k p_{ik} p_{kj} ≡ p_{ij}^{(2)}

Then ((p_{ij}^{(2)})) = P².
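The identity ((p_{ij}^{(2)})) = P² can be checked directly for a small transition matrix (the two-state matrix below is a made-up example; any stochastic matrix works):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A hypothetical two-state transition matrix (each row sums to 1).
P = [[0.7, 0.3], [0.2, 0.8]]
P2 = mat_mult(P, P)
# p^(2)_{11} = p_{11} p_{11} + p_{12} p_{21} = 0.49 + 0.06 = 0.55
direct = P[0][0] * P[0][0] + P[0][1] * P[1][0]
```

The rows of P² again sum to 1, as they must for a transition matrix.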

The joint distribution of a Markov chain

The joint distribution of X_0, X_1, ... is determined by the transition matrix and the initial distribution, ν_i = P{X_0 = i}:

P{X_0 = i_0, ..., X_m = i_m} = ν_{i_0} p_{i_0 i_1} p_{i_1 i_2} ··· p_{i_{m−1} i_m}.

In general,

P{X_{m+n} = j | X_m = i} = p_{ij}^{(n)}, where ((p_{ij}^{(n)})) = P^n.

Simulation of a Markov chain

Let E be the state space (the set of values the chain can assume) of the Markov chain. For simplicity, assume E = {1, ..., N} or {1, 2, ...}. Define H : E × [0, 1] → E by

H(i, u) = j,  ∑_{k=1}^{j−1} p_{ik} ≤ u < ∑_{k=1}^{j} p_{ik}

and V : [0, 1] → E by

V(u) = j,  ∑_{k=1}^{j−1} ν_k ≤ u < ∑_{k=1}^{j} ν_k.

If ξ is uniform on [0, 1], then

P{V(ξ) = j} = P{∑_{k=1}^{j−1} ν_k ≤ ξ < ∑_{k=1}^{j} ν_k} = ν_j.

Theorem 3.2 Let ξ_0, ξ_1, ... be iid uniform [0, 1] random variables, and define X_0 = V(ξ_0), X_{n+1} = H(X_n, ξ_{n+1}). Then {X_n} is a Markov chain with initial distribution {ν_k} and transition matrix ((p_{ij})).

Proof. As noted above, X_0 has distribution {ν_k}. Note that X_k is a function of {ξ_0, ..., ξ_k} and hence is independent of ξ_{k+1}, ξ_{k+2}, ....

P{X_m = i_m | X_0 = i_0, ..., X_{m−1} = i_{m−1}}
= P{H(X_{m−1}, ξ_m) = i_m | X_0 = i_0, ..., X_{m−1} = i_{m−1}}
= P{H(i_{m−1}, ξ_m) = i_m | X_0 = i_0, ..., X_{m−1} = i_{m−1}}
= P{H(i_{m−1}, ξ_m) = i_m} = p_{i_{m−1} i_m}.
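The construction X_0 = V(ξ_0), X_{n+1} = H(X_n, ξ_{n+1}) translates directly into code. A sketch, run on a hypothetical two-state transition matrix (made up for illustration) whose stationary distribution is (0.4, 0.6):

```python
import random

def make_sampler(weights):
    """Inverse-cdf map of the slides: u in [0,1) goes to j when the (j-1)st
    partial sum of the weights is <= u < the jth partial sum."""
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    def sampler(u):
        for j, c in enumerate(cumulative, start=1):
            if u < c:
                return j
        return len(cumulative)   # guard against floating-point round-off
    return sampler

def simulate_chain(P, nu, steps, seed=0):
    """X_0 = V(xi_0), X_{n+1} = H(X_n, xi_{n+1}) with iid uniform xi's."""
    rng = random.Random(seed)
    V = make_sampler(nu)
    H = [make_sampler(row) for row in P]   # H[i-1](u) samples from row i of P
    x = V(rng.random())
    path = [x]
    for _ in range(steps):
        x = H[x - 1](rng.random())
        path.append(x)
    return path

P = [[0.7, 0.3], [0.2, 0.8]]   # hypothetical example; stationary dist. (0.4, 0.6)
path = simulate_chain(P, nu=[0.5, 0.5], steps=50_000, seed=4)
frac_state1 = path.count(1) / len(path)   # long-run fraction of time in state 1
```

The occupation frequency of state 1 should settle near π_1 = 0.4.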

Stationary distributions

Definition 3.3 A probability distribution π = {π_k} is a stationary distribution for a Markov chain with transition matrix P = ((p_{ij})) if ∑_k π_k p_{kj} = π_j.

Lemma 3.4 If π is a stationary distribution for the Markov chain {X_n} and X_0 has distribution π, then for each n, X_n has distribution π.

Proof. Noting that

P{X_1 = j} = ∑_k P{X_1 = j, X_0 = k} = ∑_k π_k p_{kj} = π_j,

the lemma follows by induction.

Note that π^T P = π^T, so that π^T is a left eigenvector of P for the eigenvalue 1.
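The stationarity equation ∑_k π_k p_kj = π_j is easy to verify numerically; a sketch for a hypothetical two-state matrix (the entries are made up for illustration):

```python
# Check that pi is a left eigenvector of P for the eigenvalue 1.
P = [[0.7, 0.3], [0.2, 0.8]]   # hypothetical transition matrix
pi = [0.4, 0.6]                # candidate stationary distribution
pi_P = [sum(pi[k] * P[k][j] for k in range(2)) for j in range(2)]
# pi_P should equal pi, confirming sum_k pi_k p_kj = pi_j.
```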

The ergodic theorem

Definition 3.5 A Markov chain is irreducible if for each i, j ∈ E, there exists n such that p_{ij}^{(n)} > 0.

Theorem 3.6 If {X_n} is irreducible and has stationary distribution π, then for each bounded function f on E,

lim_{n→∞} (f(X_0) + ··· + f(X_n))/(n + 1) = ∑_k π_k f(k).

4. Change of measure

Defining a change of measure
Expectations under the new measure
Absolute continuity and the Radon-Nikodym theorem
Equivalent measures

Defining a change of measure

Lemma 4.1 Let Z be a nonnegative random variable on (Ω, F, P) such that E[Z] ≡ E^P[Z] = 1. Define

P̃(A) = E^P[1_A Z], A ∈ F. (4.1)

Then P̃ is a probability measure on F.

Proof. P̃(Ω) = E^P[1_Ω Z] = E^P[Z] = 1. Suppose A_1, A_2, ... ∈ F are disjoint. Then 1_{∪_{i=1}^∞ A_i} Z = ∑_{i=1}^∞ 1_{A_i} Z and

P̃(∪_{i=1}^∞ A_i) = E^P[1_{∪_{i=1}^∞ A_i} Z] = E^P[∑_{i=1}^∞ 1_{A_i} Z] = lim_{n→∞} E^P[∑_{i=1}^n 1_{A_i} Z] = lim_{n→∞} ∑_{i=1}^n E^P[1_{A_i} Z] = ∑_{i=1}^∞ P̃(A_i)

The third equality follows by the monotone convergence theorem.

Expectations under the new measure

Lemma 4.2 Let P̃ be given by (4.1). Suppose E^P[|X|Z] < ∞. Then E^{P̃}[X] = E^P[XZ].

Proof. Since E^{P̃}[1_{A_i}] = P̃(A_i) = E^P[1_{A_i} Z], if X = ∑_{i=1}^m a_i 1_{A_i}, then

E^{P̃}[X] = ∑_{i=1}^m a_i E^{P̃}[1_{A_i}] = ∑_{i=1}^m a_i E^P[1_{A_i} Z] = E^P[XZ].

The result follows for general X by approximation.

Absolute continuity and the Radon-Nikodym theorem

Definition 4.3 Let (Ω, F, P) be a probability space and let P̃ be another probability measure defined on F. Then P̃ is absolutely continuous with respect to P if and only if P(A) = 0 implies P̃(A) = 0.

Theorem 4.4 (Radon-Nikodym theorem) P̃ is absolutely continuous with respect to P if and only if there exists a nonnegative random variable Z such that

P̃(A) = E^P[1_A Z], A ∈ F.

Equivalent measures

Definition 4.5 Probability measures P and P̃ on F are equivalent if and only if P̃ is absolutely continuous with respect to P and P is absolutely continuous with respect to P̃.

Lemma 4.6 P and P̃ are equivalent if and only if there exists a nonnegative random variable Z such that P{Z > 0} = 1 and

P̃(A) = E^P[1_A Z], A ∈ F.

In addition, P(A) = E^{P̃}[1_A Z^{−1}].

Example

Let X be a standard normal random variable on (Ω, F, P), and for θ ∈ R, define Z_θ = exp{θX − θ²/2}. Then E^P[Z_θ] = 1. Define P^θ(A) = E^P[1_A Z_θ]. Then

P^θ{X ≤ x} = E^P[1_{(−∞,x]}(X) Z_θ] = E^P[1_{(−∞,x]}(X) exp{θX − θ²/2}]
= ∫_{−∞}^∞ 1_{(−∞,x]}(z) exp{θz − θ²/2} (1/√(2π)) e^{−z²/2} dz
= ∫_{−∞}^x (1/√(2π)) exp{−(z − θ)²/2} dz

so that under P^θ, X is normally distributed with E^{P^θ}[X] = θ and Var^{P^θ}(X) = 1.
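The change of measure can be checked by Monte Carlo: averaging 1_{X≤x} Z_θ over samples drawn under P should reproduce the N(θ, 1) cdf at x. A sketch (θ, x, the sample size, and the seed are arbitrary choices):

```python
import math
import random

def normal_cdf(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(5)
theta, x, n = 1.0, 0.5, 200_000
acc = 0.0
for _ in range(n):
    z = rng.gauss(0.0, 1.0)                            # X standard normal under P
    if z <= x:
        acc += math.exp(theta * z - 0.5 * theta ** 2)  # the density Z_theta

estimate = acc / n               # Monte Carlo E^P[1_{X<=x} Z_theta] = P^theta{X <= x}
target = normal_cdf(x - theta)   # cdf of N(theta, 1) at x
```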

5. Information and conditional expectations

Modeling information
Information obtained by observing a random variable
Information evolving in time
Information obtained by observing a stochastic process
Discrete observations
Approximating random variables using available information
Conditional expectations
Independence
Properties of conditional expectations
Definition of a Markov process

Modeling information

Recall, events correspond to statements, the event being the set of outcomes for which the statement is true. Available information corresponds to the collection of statements whose truth can be checked with that information. We model information by the collection D of events corresponding to those statements. Clearly, D is closed under finite intersections, finite unions, and complements, that is, D is an algebra. We assume that it is a σ-algebra.

Information obtained by observing a random variable

If X is a random variable and the available information is the information obtained by observing the value of X, then that information corresponds to σ(X), the smallest σ-algebra with respect to which X is measurable.

Lemma 5.1 σ(X) = {{X ∈ B} : B ∈ B(R)}.

Proof. Check that the right side is a σ-algebra.

Information evolving in time

We will assume that time is continuous and identified with [0, ∞). As time evolves, more information is obtained, and assuming that nothing is forgotten, the information available at time t, F_t, includes the information available at time s for all s < t, that is, for s < t, F_s ⊂ F_t.

Definition 5.2 A filtration is an increasing family {F_t} of σ-algebras indexed by [0, ∞): 0 ≤ s < t implies F_s ⊂ F_t.

Information obtained by observing a stochastic process

A (continuous time) stochastic process is a family of random variables {X(t), t ∈ [0, ∞)} indexed by [0, ∞).

Definition 5.3 The natural filtration corresponding to a stochastic process X is the filtration given by

F_t^X = σ(X(s), s ≤ t), t ≥ 0.

σ(X(s), s ≤ t) is the smallest σ-algebra with respect to which X(s) is measurable for each s ≤ t.

Definition 5.4 A stochastic process is adapted to a filtration {F_t} if and only if for each t ≥ 0, X(t) is F_t-measurable. In particular, F_t^X ⊂ F_t.

Discrete observations

Let {D_k, k = 1, 2, ...} be a partition of Ω, that is, a countable collection of disjoint sets whose union is Ω.

σ({D_k}) = {∪_{k∈K} D_k : K ⊂ {1, 2, ...}}

(Check that the right side is a σ-algebra.)

Approximating random variables using available information

Let X be a random variable and D a σ-algebra modeling the available information. We want to approximate X using the available information. If Y is the approximation, what must be true about Y? For example, {Y ≤ c} must be in D, that is, Y is D-measurable.

If D = σ({D_k}), then for each k, we know whether or not ω ∈ D_k. For each k, there must be a constant d_k such that if ω ∈ D_k, then Y(ω) = d_k, that is,

Y = ∑_k d_k 1_{D_k}.

Conditional expectations

We want Y to be a good approximation, that is, we want the error X − Y to be small, at least on average. Assuming E[X²] < ∞, select Y to minimize E[(X − Y)²]. Then for D-measurable Z,

E[(X − (Y + εZ))²] = E[(X − Y)²] − 2εE[Z(X − Y)] + ε²E[Z²]

has its minimum at ε = 0. Differentiating, we must have E[Z(X − Y)] = 0.

Definition of conditional expectation

Definition 5.5 Y = E[X|D] (the conditional expectation of X given D) if

a) Y is D-measurable.
b) For each D ∈ D, E[1_D X] = E[1_D Y].

Note that the definition makes sense for all X with E[|X|] < ∞.

Existence and uniqueness of the conditional expectation

Existence follows by the Radon-Nikodym theorem, since for nonnegative X, Q(D) = E[1_D X] defines a measure on D that is absolutely continuous with respect to P restricted to D.

Uniqueness follows by the observation that if Y and Ỹ satisfy the conditions of the definition, then

E[1_{{Y > Ỹ}}(Y − Ỹ)] = 0 = E[1_{{Y < Ỹ}}(Y − Ỹ)].

It follows that P{Y = Ỹ} = 1.

Example

Suppose that {D_k} is a partition and D = σ({D_k}). Then

E[X|D] = ∑_k (E[X 1_{D_k}] / P(D_k)) 1_{D_k}.
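The partition formula translates directly into code. A sketch on a hypothetical finite example (a fair die observed only up to parity; the example is made up for illustration):

```python
def conditional_expectation(probs, X, partition):
    """E[X | sigma({D_k})] on a countable sample space: on each D_k the
    conditional expectation takes the constant value E[X 1_{D_k}] / P(D_k)."""
    Y = {}
    for Dk in partition:
        p = sum(probs[w] for w in Dk)           # P(D_k)
        ex = sum(probs[w] * X[w] for w in Dk)   # E[X 1_{D_k}]
        for w in Dk:
            Y[w] = ex / p
    return Y

outcomes = [1, 2, 3, 4, 5, 6]
probs = {w: 1 / 6 for w in outcomes}            # fair die
X = {w: float(w) for w in outcomes}             # X = face value
partition = [{2, 4, 6}, {1, 3, 5}]              # D_1 = even, D_2 = odd
Y = conditional_expectation(probs, X, partition)
```

Here Y equals 4 on the even faces and 3 on the odd faces, and the defining property E[1_D X] = E[1_D Y] holds for each D in the partition.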

Independence

As in the elementary definition of independence, we could say that a random variable X is independent of the (information) σ-algebra D if E[g(X)|D] = E[g(X)] for every bounded, measurable g. This identity holds for all bounded measurable g if and only if

E[g(X) 1_D] = E[g(X)] P(D), g bounded and Borel measurable, D ∈ D.

To be precise:

Definition 5.6 σ-algebras D_1 and D_2 are independent if and only if P(D_1 ∩ D_2) = P(D_1)P(D_2) for all D_1 ∈ D_1, D_2 ∈ D_2. A random variable X is independent of a σ-algebra D if and only if

P({X ∈ B} ∩ D) = P{X ∈ B} P(D), B ∈ B(R), D ∈ D,

that is, the σ-algebras D and σ(X) are independent.

Consequences of independence

Lemma 5.7 If X and Y are independent and g, h : R → R are Borel measurable, then g(X) and h(Y) are independent. If, in addition, g(X) and h(Y) are integrable, then

E[g(X)h(Y)] = E[g(X)]E[h(Y)]. (5.1)

Proof. The first part of the lemma follows from the fact that {g(X) ∈ B} = {X ∈ g^{−1}(B)} and {h(Y) ∈ C} = {Y ∈ h^{−1}(C)}, where g^{−1}(B) = {x : g(x) ∈ B} is Borel for Borel B by the definition of Borel measurability.

To prove (5.1), check first for indicator functions, then for simple functions, and complete the proof by approximation. (In other words, apply what Shreve calls the "standard machine.")

Joint distributions of independent random variables

Lemma 5.8 If (X_1, ..., X_m) is independent of (Y_1, ..., Y_n) and C ∈ B(R^m), D ∈ B(R^n), then

µ_{X_1,...,X_m,Y_1,...,Y_n}(C × D) = µ_{X_1,...,X_m}(C) µ_{Y_1,...,Y_n}(D)

The Tonelli and Fubini theorems

Theorem 5.9 Suppose X and Y are independent and ψ : R² → [0, ∞) is Borel measurable. Define ϕ_X(y) = E[ψ(X, y)] and ϕ_Y(x) = E[ψ(x, Y)]. Then

E[ψ(X, Y)] = E[ϕ_X(Y)] = E[ϕ_Y(X)] (5.2)
= ∫_R ∫_R ψ(x, y) µ_X(dx) µ_Y(dy) = ∫_R ∫_R ψ(x, y) µ_Y(dy) µ_X(dx)

If ψ : R² → R is Borel measurable and E[|ψ(X, Y)|] < ∞, then (5.2) holds.

Note that the theorem also holds for independent random vectors.

Conditioning on random variables

If X_1, ..., X_m are random variables, then we will write

E[Y|σ(X_1, ..., X_m)] = E[Y|X_1, ..., X_m]

Lemma 5.10 Suppose that Z is σ(X_1, ..., X_m)-measurable. Then there exists a B(R^m)-measurable h such that Z = h(X_1, ..., X_m). Consequently, if Y is integrable, there exists a B(R^m)-measurable h_Y such that

E[Y|X_1, ..., X_m] = h_Y(X_1, ..., X_m)

84  Properties of conditional expectations

Theorem 5.11 Let D, G ⊂ F be σ-algebras, X, Y integrable random variables, and a, b ∈ R. Then

a) [Linearity] E[aX + bY | D] = aE[X | D] + bE[Y | D].

b) [Positivity] If P{X ≥ 0} = 1, then P{E[X | D] ≥ 0} = 1.

c) [Monotonicity] If P{X ≤ Y} = 1, then P{E[X | D] ≤ E[Y | D]} = 1.

d) [Factoring out a known quantity] If Y is D-measurable and X and XY are integrable, then E[XY | D] = Y E[X | D]. In particular, if Y is D-measurable and integrable, then E[Y | D] = Y.

85  e) [Iterated conditioning] If D ⊂ G, then E[E[X | G] | D] = E[X | D].

f) [Independence] If X is independent of D, then E[X | D] = E[X].

86  Proof. In each case, check the measurability requirement in Part (a) of the definition of conditional expectation and then verify the integral identity in Part (b).

a) E[(aE[X | D] + bE[Y | D])1_D] = aE[E[X | D]1_D] + bE[E[Y | D]1_D]
                                 = aE[X1_D] + bE[Y1_D] = E[(aX + bY)1_D].

d) We want E[Y E[X | D]1_D] = E[Y X1_D]. First let Y be an indicator, then a simple D-measurable random variable, and then complete the argument by approximation.

87  e) For D ∈ D ⊂ G,

    E[E[X | D]1_D] = E[X1_D] = E[E[X | G]1_D],

where the last equality follows from the fact that D ∈ G.

f) For D ∈ D,

    E[E[X]1_D] = E[X]E[1_D] = E[X1_D],

where the second equality uses the independence of X and 1_D.

88  Jensen's inequality

Theorem 5.12 If ϕ is convex and X and ϕ(X) are integrable, then

    E[ϕ(X) | D] ≥ ϕ(E[X | D]).

Proof. As before, if ϕ is convex, then for each x,

    ϕ⁺(x) = lim_{y→x+} (ϕ(y) − ϕ(x))/(y − x)

exists and ϕ(y) ≥ ϕ(x) + ϕ⁺(x)(y − x). Setting Y = E[X | D],

    E[ϕ(X) | D] ≥ E[ϕ(Y) + ϕ⁺(Y)(X − Y) | D] = ϕ(Y) + ϕ⁺(Y)E[X − Y | D] = ϕ(Y).

89  Functions of known and unknown random variables

Lemma 5.13 Suppose that X is independent of D and Y is D-measurable. Suppose that ϕ : R² → R and E[|ϕ(X, Y)|] < ∞. Define ψ(y) = E[ϕ(X, y)]. Then

    E[ϕ(X, Y) | D] = ψ(Y).

Proof. Since Y is D-measurable, ψ(Y) is D-measurable. For D ∈ D, X is independent of (Y, 1_D). Consequently, by the Fubini theorem,

    E[ϕ(X, Y)1_D] = ∫_{R×R} ϕ(x, y) z µ_X(dx) µ_{Y,1_D}(dy × dz) = E[ψ(Y)1_D].

90  Bayes formula

Theorem 5.14 Let P̃ be absolutely continuous with respect to P with dP̃ = L dP, and let X be a bounded random variable. Then

    E^P̃[X | D] = E^P[XL | D] / E^P[L | D].

Proof. Let D ∈ D. Then

    E^P̃[ (E^P[XL | D] / E^P[L | D]) 1_D ]
        = E^P[ (E^P[XL | D] / E^P[L | D]) 1_D L ]
        = E^P[ (E^P[XL | D] / E^P[L | D]) 1_D E^P[L | D] ]
        = E^P[ E^P[XL | D] 1_D ]
        = E^P[ XL 1_D ]
        = E^P̃[ X 1_D ].

91  Recursive generation of Markov chains

Theorem 5.15 Let X_0, ξ_1, ξ_2, ... be independent, R-valued random variables. For n = 1, 2, ..., let H_n : R² → R, and define X_n = H_n(X_{n−1}, ξ_n). Then {X_n} is Markov with respect to F_n = σ(X_0, ξ_1, ..., ξ_n).

Proof. Let f be bounded and measurable, and define

    T_n f(x) = ∫_R f(H_n(x, z)) µ_{ξ_n}(dz).

By Lemma 5.13,

    E[f(X_n) | F_{n−1}] = E[f(H_n(X_{n−1}, ξ_n)) | F_{n−1}] = T_n f(X_{n−1}).
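Theorem 5.15 can be illustrated numerically. The sketch below uses hypothetical choices of H and f (an AR(1)-type update and tanh, not from the notes) and checks by Monte Carlo that the conditional expectation E[f(X_n) | X_{n−1} = x] agrees with T f(x) = ∫ f(H(x, z)) µ_ξ(dz).

```python
import math
import random

# Monte Carlo illustration of T f(x) = ∫ f(H(x, z)) µ_ξ(dz).
# H and f are illustrative choices, not taken from the course notes.

def H(x, z):
    return 0.5 * x + z          # one step of the recursion, AR(1)-type

def f(x):
    return math.tanh(x)         # a bounded test function

rng = random.Random(0)
x_prev = 1.0
N = 200_000

# Estimate E[f(X_n) | X_{n-1} = x_prev] from fresh noise draws.
lhs = sum(f(H(x_prev, rng.gauss(0, 1))) for _ in range(N)) / N
# Estimate Tf(x_prev) by integrating f(H(x_prev, .)) against an
# independent sample from the noise distribution µ_ξ = N(0, 1).
rhs = sum(f(H(x_prev, rng.gauss(0, 1))) for _ in range(N)) / N

print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```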

92  Definition of a Markov process

Definition 5.16 If X is a stochastic process adapted to a filtration {F_t}, then X is {F_t}-Markov (or Markov with respect to {F_t}) if and only if for all s, t ≥ 0 and all bounded, measurable f,

    E[f(X(t + s)) | F_t] = E[f(X(t + s)) | X(t)].

93  6. Martingales

Definitions of martingale and sub/supermartingale
Stopping times
Optional sampling theorem
Doob's inequalities

94  Definitions of martingale and sub/supermartingale

Definition 6.1 Let {F_t} be a filtration and X a stochastic process adapted to {F_t} such that X(t) is integrable for each t.

a) X is an {F_t}-martingale if and only if E[X(s) | F_t] = X(t), t ≤ s.

b) X is an {F_t}-submartingale if and only if E[X(s) | F_t] ≥ X(t), t ≤ s.

c) X is an {F_t}-supermartingale if and only if E[X(s) | F_t] ≤ X(t), t ≤ s.

95  Examples

Let ξ_1, ξ_2, ... be iid, and define S_n = Σ_{k=1}^n ξ_k and F_n = σ(ξ_1, ..., ξ_n).

If E[ξ_k] = 0, then S_n is an {F_n}-martingale.

If ρ = E[e^{ξ_i}] < ∞, then

    X_n = e^{S_n} / ρ^n

is an {F_n}-martingale. Also, see Problem 5.
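A quick empirical check of the two examples, with the illustrative choice ξ_k = ±1 (so E[ξ_k] = 0 and ρ = E[e^ξ] = cosh 1): a martingale keeps its initial mean, so E[S_n] should stay near 0 and E[e^{S_n}/ρ^n] near 1.

```python
import math
import random

# Monte Carlo check of the two martingale examples with xi_k = ±1:
# S_n has mean 0, and X_n = exp(S_n)/rho^n with rho = cosh(1) has mean 1.

rng = random.Random(42)
n, trials = 10, 100_000
rho = math.cosh(1.0)

mean_S = 0.0
mean_X = 0.0
for _ in range(trials):
    S = sum(rng.choice((-1, 1)) for _ in range(n))
    mean_S += S
    mean_X += math.exp(S) / rho**n
mean_S /= trials
mean_X /= trials

print(mean_S, mean_X)  # near 0 and near 1, respectively
```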

96  Applications of Jensen's inequality

Lemma 6.2 Let X be a martingale and ϕ a convex function. If Y(t) = ϕ(X(t)) is integrable for each t, then Y is a submartingale. Let X be a submartingale and ϕ a convex, nondecreasing function. If Z(t) = ϕ(X(t)) is integrable for each t, then Z is a submartingale.

Proof. Suppose X is a martingale. By Jensen's inequality, for t ≤ s,

    E[ϕ(X(s)) | F_t] ≥ ϕ(E[X(s) | F_t]) = ϕ(X(t)).

Suppose X is a submartingale and ϕ is convex and nondecreasing. Then

    E[ϕ(X(s)) | F_t] ≥ ϕ(E[X(s) | F_t]) ≥ ϕ(X(t)).

97  Stopping times

Definition 6.3 Let {F_t} be a filtration. Then a nonnegative (or, more generally, [0, ∞]-valued) random variable τ is an {F_t}-stopping time if and only if {τ ≤ t} ∈ F_t for each t ≥ 0.

If τ is the time that an alarm clock goes off and {F_t} is the information available to an observer, then τ is a stopping time provided the observer hears the alarm clock go off, that is, at each time t the observer knows whether or not the alarm has already sounded.

98  Examples

Lemma 6.4 Let X be a continuous, R-valued process adapted to {F_t}, and define τ_c = inf{t : X(t) ≥ c}. Then τ_c is an {F_t}-stopping time.

Proof. For t ≥ 0,

    {τ_c ≤ t} = ∩_n ∪_{s∈Q, s≤t} {X(s) ≥ c − n⁻¹} ∈ F_t.

Lemma 6.5 Let X be a continuous, R^d-valued process adapted to {F_t}, and let K ⊂ R^d be closed. Define τ_K = inf{t : X(t) ∈ K}. Then τ_K is an {F_t}-stopping time.

Proof. Let O_n = {x : inf_{y∈K} |x − y| < n⁻¹}. For t ≥ 0,

    {τ_K ≤ t} = ∩_n ∪_{s∈Q, s≤t} {X(s) ∈ O_n} ∈ F_t.

99  Information up to a stopping time

Definition 6.6 Let τ be an {F_t}-stopping time. Then

    F_τ = {A ∈ F : A ∩ {τ ≤ t} ∈ F_t, t ≥ 0}.

Lemma 6.7

a) If τ is an {F_t}-stopping time, then F_τ is a σ-algebra.

b) If τ_1 and τ_2 are {F_t}-stopping times with τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.

c) If X is right continuous and {F_t}-adapted and τ is an {F_t}-stopping time, then X(t ∧ τ) is F_t-measurable and F_τ-measurable.

100  Optional sampling theorem

Theorem 6.8 Let X be a continuous {F_t}-submartingale and let τ_1 and τ_2 be {F_t}-stopping times. Then

    E[X(t ∧ τ_2) | F_{τ_1}] ≥ X(t ∧ τ_1 ∧ τ_2).

Note that if X is a supermartingale, the inequality is reversed, and if X is a martingale, the inequality can be replaced by equality.

101  Doob's inequalities

Theorem 6.9 Let X be a right continuous, nonnegative submartingale. Then for c > 0,

    P{sup_{s≤t} X(s) ≥ c} ≤ E[X(t)] / c.

Proof. Let τ_c = inf{t : X(t) ≥ c}. Then

    E[X(t)] ≥ E[X(t ∧ τ_c)] ≥ cP{τ_c ≤ t}.

(Actually, for right continuous processes, τ_c may not be a stopping time unless one first modifies the filtration {F_t}, but the modification can be done without affecting the statement of the theorem.)
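Doob's inequality can be checked numerically for the nonnegative submartingale X_k = |S_k|, where S_k is a simple symmetric random walk (|·| is convex, so |S_k| is a submartingale by Lemma 6.2). The threshold c and horizon n below are arbitrary illustrative choices.

```python
import random

# Monte Carlo check of Doob's maximal inequality
#     P{max_{k<=n} |S_k| >= c} <= E[|S_n|] / c
# for the submartingale |S_k|, S_k a simple symmetric random walk.

rng = random.Random(7)
n, c, trials = 100, 25, 20_000

exceed = 0
mean_Xn = 0.0
for _ in range(trials):
    S, hit = 0, False
    for _ in range(n):
        S += rng.choice((-1, 1))
        if abs(S) >= c:
            hit = True
    exceed += hit
    mean_Xn += abs(S)

p_max = exceed / trials            # left side of the inequality
bound = (mean_Xn / trials) / c     # right side of the inequality
print(p_max, bound)                # Doob: p_max <= bound
```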

102  Doob's inequalities

Theorem 6.10 Let X be a nonnegative submartingale. Then for p > 1,

    E[sup_{s≤t} X(s)^p] ≤ (p/(p − 1))^p E[X(t)^p].

Corollary 6.11 If M is a square integrable martingale, then

    E[sup_{s≤t} M(s)²] ≤ 4E[M(t)²].

103  7. Brownian motion

Moment generating functions
Gaussian distributions
Characterization by mean and covariance
Conditions for independence
Conditional expectations
Central limit theorem
Construction of Brownian motion
Functional central limit theorem
The binomial model and geometric Brownian motion
Martingale properties of Brownian motion
Lévy's characterization
Relationship to some partial differential equations

104  First passage time
Quadratic variation
Nowhere differentiability
Law of the iterated logarithm
Modulus of continuity
Markov property
Transition density and operator
Strong Markov property
Reflection principle

105  Moment generating functions

Definition 7.1 Let X be an R^d-valued random variable. Then the moment generating function for X (if it exists) is given by

    ϕ_X(λ) = E[e^{λ·X}],   λ ∈ R^d.

Lemma 7.2 If ϕ_X(λ) < ∞ for all λ ∈ R^d, then ϕ_X uniquely determines the distribution of X.

106  Gaussian distributions

Definition 7.3 An R-valued random variable X is Gaussian (normal) with mean µ and variance σ² if

    f_X(x) = (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)}.

The moment generating function for X is

    E[e^{λX}] = exp{σ²λ²/2 + µλ}.

107  Multidimensional Gaussian distributions

Definition 7.4 An R^d-valued random variable X is Gaussian if λ·X is Gaussian for every λ ∈ R^d.

Note that if E[X] = µ and

    cov(X) = E[(X − µ)(X − µ)^T] = Σ = ((σ_ij)),

then

    E[λ·X] = λ·µ,   var(λ·X) = λ^T Σ λ.

Note that if X = (X_1, ..., X_d) is Gaussian in R^d, then for any choice of i_1, ..., i_m ∈ {1, ..., d}, (X_{i_1}, ..., X_{i_m}) is Gaussian in R^m.

Lemma 7.5 If X is Gaussian in R^d and C is an m × d matrix, then

    Y = CX   (7.1)

is Gaussian in R^m.

108  Characterization by mean and covariance

If X is Gaussian in R^d, then

    ϕ_X(λ) = exp{λ^T Σ λ / 2 + λ·µ}.

Since the distribution of X is determined by its moment generating function, the distribution of a Gaussian random vector is determined by its mean and covariance matrix. Note that for Y given by (7.1), E[Y] = Cµ and cov(Y) = CΣC^T.

109  Conditions for independence

Lemma 7.6 If X = (X_1, X_2) ∈ R² is Gaussian, then X_1 and X_2 are independent if and only if cov(X_1, X_2) = 0. In general, if X is Gaussian in R^d, then the components of X are independent if and only if Σ is diagonal.

Proof. If R-valued random variables U and V are independent, then cov(U, V) = 0, so if the components of X are independent, then Σ is diagonal. If Σ is diagonal, then

    ϕ_X(λ) = Π_{i=1}^d exp{λ_i²σ_i²/2 + λ_i µ_i},

and since the moment generating function of X determines the joint distribution, the components of X must be independent.
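The "uncorrelated implies independent" direction can be probed numerically: with Z_1, Z_2 iid N(0, 1), the pair X_1 = Z_1 + Z_2, X_2 = Z_1 − Z_2 is jointly Gaussian with cov(X_1, X_2) = Var(Z_1) − Var(Z_2) = 0, hence X_1 and X_2 are independent and E[g(X_1)h(X_2)] factors. The test functions sin and cos are arbitrary choices.

```python
import math
import random

# Jointly Gaussian with zero covariance => independent (Lemma 7.6):
# check that the sample covariance is near 0 and that
# E[g(X1)h(X2)] ≈ E[g(X1)] E[h(X2)] for test functions g, h.

rng = random.Random(1)
trials = 200_000
g = math.sin
h = math.cos

sum_g = sum_h = sum_gh = cov = 0.0
for _ in range(trials):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1, x2 = z1 + z2, z1 - z2       # jointly Gaussian, cov = 0
    sum_g += g(x1)
    sum_h += h(x2)
    sum_gh += g(x1) * h(x2)
    cov += x1 * x2
cov /= trials
factored = (sum_g / trials) * (sum_h / trials)
joint = sum_gh / trials

print(cov, joint - factored)  # both near 0
```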

110  Conditional expectations

Lemma 7.7 Let X = (X_1, ..., X_d) be Gaussian. Then

    E[X_d | X_1, ..., X_{d−1}] = a + Σ_{i=1}^{d−1} b_i X_i,

where the b_i satisfy

    σ_dk = Σ_{i=1}^{d−1} b_i σ_ik,   k = 1, ..., d − 1,

and

    µ_d = a + Σ_{i=1}^{d−1} b_i µ_i.

Proof. The b_i are selected so that X_d − (a + Σ_{i=1}^{d−1} b_i X_i) is independent of X_1, ..., X_{d−1}.

111  Example

Suppose X_1, X_2, and X_3 are independent, Gaussian, mean zero, and variance one. Then

    E[X_1 | X_1 + X_2] = (X_1 + X_2)/2.

Let

    X̃_1 = (X_1 + X_2)/2 + X_3/√2,   X̃_2 = (X_1 + X_2)/2 − X_3/√2.   (7.2)

Then X̃_1 and X̃_2 are independent, Gaussian, mean zero, and variance one.

112  Central limit theorem

If {ξ_i} are iid with E[ξ_i] = 0 and Var(ξ_i) = 1, then Z_n = n^{−1/2} Σ_{i=1}^n ξ_i satisfies

    P{a < Z_n ≤ b} → ∫_a^b (1/√(2π)) e^{−x²/2} dx.

If we define

    Z_n(t) = n^{−1/2} Σ_{i=1}^{[nt]} ξ_i,

then the distribution of (Z_n(t_1), ..., Z_n(t_m)) is approximately Gaussian with

    cov(Z_n(t_i), Z_n(t_j)) ≈ t_i ∧ t_j.

Note that for 0 ≤ t_0 < t_1 < ... < t_m, the increments Z_n(t_k) − Z_n(t_{k−1}), k = 1, ..., m, are independent.

113  The CLT as a process

[figure omitted]

114  Interpolation lemma

Lemma 7.8 Let Y and Z be independent, R-valued Gaussian random variables with E[Y] = E[Z] = 0, Var(Z) = a, and Var(Y) = 1. Then

    U = Z/2 + (√a/2) Y,   V = Z/2 − (√a/2) Y

are independent Gaussian random variables satisfying Z = U + V.

Proof. Clearly, Z = U + V. To complete the proof, check that cov(U, V) = 0.

115  Construction of Brownian motion

We can construct standard Brownian motion by repeated application of the Interpolation Lemma 7.8. Let ξ_{k,n} be iid, Gaussian, with E[ξ_{k,n}] = 0 and Var(ξ_{k,n}) = 1. Define W(0) = 0 and W(1) = ξ_{1,0}, and define W(k/2^n) inductively by

    W(k/2^n) = W((k−1)/2^n) + [W((k+1)/2^n) − W((k−1)/2^n)]/2 + 2^{−(n+1)/2} ξ_{k,n}
             = (1/2) W((k−1)/2^n) + (1/2) W((k+1)/2^n) + 2^{−(n+1)/2} ξ_{k,n}

for k odd and 0 < k < 2^n.

116  By the Interpolation Lemma,

    W(k/2^n) − W((k−1)/2^n) = (1/2) W((k+1)/2^n) − (1/2) W((k−1)/2^n) + 2^{−(n+1)/2} ξ_{k,n}

and

    W((k+1)/2^n) − W(k/2^n) = (1/2) W((k+1)/2^n) − (1/2) W((k−1)/2^n) − 2^{−(n+1)/2} ξ_{k,n}

are independent, Gaussian random variables.
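The interpolation construction is easy to code. The sketch below builds W on the dyadic rationals up to a chosen level by the midpoint rule above, and checks one consequence, Var(W(1/2)) = 1/2, by Monte Carlo.

```python
import random

# Minimal sketch of the interpolation (Lévy) construction:
# W(0) = 0, W(1) ~ N(0,1), then fill dyadic midpoints via
#     W(mid) = (W(left) + W(right))/2 + 2^{-(n+1)/2} * xi.

def brownian_dyadic(levels, rng):
    W = {0.0: 0.0, 1.0: rng.gauss(0, 1)}
    for n in range(1, levels + 1):
        step = 1.0 / 2**n
        noise_sd = 2 ** (-(n + 1) / 2)
        for k in range(1, 2**n, 2):          # odd k: new midpoints
            t = k * step                     # dyadics are exact floats
            W[t] = 0.5 * (W[t - step] + W[t + step]) + noise_sd * rng.gauss(0, 1)
    return W

rng = random.Random(3)
# Var(W(1/2)) should be 1/2; estimate it over many constructed paths.
vals = [brownian_dyadic(4, rng)[0.5] for _ in range(50_000)]
var_half = sum(v * v for v in vals) / len(vals)
print(var_half)  # ≈ 0.5
```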

117  Convergence to a continuous process

Theorem 7.9 Let Γ be the set of dyadic rationals in [0, 1]. Define

    W_n(t) = W(k/2^n) + (t − k/2^n) 2^n (W((k+1)/2^n) − W(k/2^n)),   k/2^n ≤ t ≤ (k+1)/2^n.

Then

    sup_{t∈Γ} |W(t) − W_n(t)| → 0   a.s.,

and hence W extends to a continuous process on [0, 1].

118  Continuous functions on C[0, 1]

Definition 7.10 Let C[0, 1] denote the continuous, real-valued functions on [0, 1]. g : C[0, 1] → R is continuous if x_n, x ∈ C[0, 1] satisfying sup_{0≤t≤1} |x_n(t) − x(t)| → 0 implies lim_{n→∞} g(x_n) = g(x).

For example,

    g(x) = sup_{0≤t≤1} x(t),   g(x) = ∫_0^1 x(s) ds.

119  Functional central limit theorem

Theorem 7.11 (Donsker invariance principle) Let {ξ_i} be iid with E[ξ_i] = 0 and Var(ξ_i) = σ². Define

    Z_n(t) = n^{−1/2} Σ_{i=1}^{[nt]} ξ_i + (t − k/n) √n ξ_{k+1},   k/n ≤ t ≤ (k+1)/n.

Then for each continuous g : C[0, 1] → R, g(Z_n) converges in distribution to g(σW).

120  The binomial model and geometric Brownian motion

In the binomial model for a stock price, at each time step the price either goes up a fixed percentage or down a fixed percentage. Assume the time step is 1/n and the up-down probabilities are

    P{X_n((k+1)/n) = (1 + σ/√n) X_n(k/n)} = P{X_n((k+1)/n) = (1 − σ/√n) X_n(k/n)} = 1/2.

Then, taking X_n(0) = 1,

    log X_n(t) = Σ_{i=1}^{[nt]} log(1 + σξ_i/√n),   where P{ξ_i = 1} = P{ξ_i = −1} = 1/2.

By Taylor's formula,

    log X_n(t) ≈ σ n^{−1/2} Σ_{i=1}^{[nt]} ξ_i − ([nt]/(2n)) σ² → σW(t) − σ²t/2,

and hence

    X_n(t) → exp{σW(t) − σ²t/2}.
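A numerical check of the Taylor computation: with X_n(0) = 1, E[log X_n(1)] = (n/2) log(1 − σ²/n) ≈ −σ²/2, the drift of the geometric Brownian motion limit. A Monte Carlo sketch (σ, n, and the trial count are illustrative choices):

```python
import math
import random

# Binomial stock model at time 1:
#     X_n(1) = prod_{i=1}^n (1 + sigma*xi_i/sqrt(n)),  xi_i = ±1.
# The limit is exp(sigma*W(1) - sigma^2/2), so E[log X_n(1)] ≈ -sigma^2/2.

rng = random.Random(11)
sigma, n, trials = 0.3, 100, 20_000

mean_log = 0.0
for _ in range(trials):
    mean_log += sum(
        math.log1p(sigma * rng.choice((-1, 1)) / math.sqrt(n))
        for _ in range(n)
    )
mean_log /= trials

print(mean_log)  # ≈ -sigma**2 / 2 = -0.045
```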

121  Martingales

Recall that an {F_t}-adapted process M is an {F_t}-martingale if and only if

    E[M(t + r) | F_t] = M(t)

for all t, r ≥ 0. Note that this requirement is equivalent to

    E[M(t + r) − M(t) | F_t] = 0.

122  Convergence of conditional expectations

Lemma 7.12 Suppose X_n, X are R-valued random variables and E[|X_n − X|] → 0. Then for a sub-σ-algebra D,

    E[|E[X_n | D] − E[X | D]|] → 0.

Proof. By Jensen's inequality,

    E[|E[X_n | D] − E[X | D]|] = E[|E[X_n − X | D]|] ≤ E[E[|X_n − X| | D]] = E[|X_n − X|] → 0.

123  Martingale properties of Brownian motion

Let F_t = F_t^W = σ(W(s) : s ≤ t). Since W has independent increments,

    E[W(t + r) | F_t] = E[W(t + r) − W(t) + W(t) | F_t]
                      = E[W(t + r) − W(t) | F_t] + W(t) = W(t),

and W is a martingale. Similarly, let M(t) = W(t)² − t. Then

    E[M(t + r) − M(t) | F_t]
        = E[(W(t + r) − W(t))² + 2(W(t + r) − W(t))W(t) − r | F_t]
        = E[(W(t + r) − W(t))² | F_t] + 2W(t) E[W(t + r) − W(t) | F_t] − r
        = 0,

and hence M is a martingale.

124  An exponential martingale

Let M(t) = exp{W(t) − t/2}. Then

    E[M(t + r) | F_t] = E[exp{W(t + r) − W(t) − r/2} M(t) | F_t]
                      = M(t) e^{−r/2} E[exp{W(t + r) − W(t)} | F_t]
                      = M(t),

since E[exp{W(t + r) − W(t)}] = e^{r/2}.

125  A general family of martingales

Let f ∈ C_b³(R) (the bounded continuous functions with three bounded continuous derivatives), and let t = t_0 < t_1 < ... < t_m = t + r. Then

    E[f(W(t + r)) − f(W(t)) | F_t]
      = E[ Σ_i (f(W(t_{i+1})) − f(W(t_i))) | F_t ]
      = E[ Σ_i (f(W(t_{i+1})) − f(W(t_i)) − f′(W(t_i))(W(t_{i+1}) − W(t_i))
               − (1/2) f″(W(t_i))(W(t_{i+1}) − W(t_i))²) | F_t ]
        + E[ Σ_i (f′(W(t_i))(W(t_{i+1}) − W(t_i))
               + (1/2) f″(W(t_i))(W(t_{i+1}) − W(t_i))²) | F_t ]
      = Z_1 + Z_2.

126  Note that since F_t ⊂ F_{t_i},

    E[f′(W(t_i))(W(t_{i+1}) − W(t_i)) | F_t]
        = E[E[f′(W(t_i))(W(t_{i+1}) − W(t_i)) | F_{t_i}] | F_t]
        = E[f′(W(t_i)) E[W(t_{i+1}) − W(t_i) | F_{t_i}] | F_t] = 0,

    E[f″(W(t_i))(W(t_{i+1}) − W(t_i))² | F_t]
        = E[E[f″(W(t_i))(W(t_{i+1}) − W(t_i))² | F_{t_i}] | F_t]
        = E[f″(W(t_i)) E[(W(t_{i+1}) − W(t_i))² | F_{t_i}] | F_t]
        = E[f″(W(t_i))(t_{i+1} − t_i) | F_t],

and, as max_i (t_{i+1} − t_i) → 0,

    E[| Σ_i (1/2) f″(W(t_i))(t_{i+1} − t_i) − ∫_t^{t+r} (1/2) f″(W(s)) ds |] → 0,

    E[|Z_1|] ≤ C Σ_i E[|W(t_{i+1}) − W(t_i)|³] ≤ Ĉ Σ_i (t_{i+1} − t_i)^{3/2} → 0,

so

    f(W(t)) − f(W(0)) − ∫_0^t (1/2) f″(W(s)) ds

is an {F_t}-martingale.

127  Lévy's characterization

Theorem 7.13 Suppose M and Z, given by Z(t) = M(t)² − t, are continuous {F_t}-martingales and M(0) = 0. Then M is a standard Brownian motion.

Lemma 7.14 Under the assumptions of the theorem,

    E[(M(t + r) − M(t))² | F_t] = E[M(t + r)² − 2M(t)M(t + r) + M(t)² | F_t] = r;

if τ is an {F_t}-stopping time, then

    E[(M((t + r) ∧ τ) − M(t ∧ τ))² | F_t] = E[(t + r) ∧ τ − t ∧ τ | F_t] ≤ r;

and for f ∈ C_b³(R),

    M_f(t) = f(M(t)) − f(M(0)) − ∫_0^t (1/2) f″(M(s)) ds   (7.3)

is an {F_t}-martingale.

128  Proof. Taking f(x) = e^{iθx},

    e^{iθM(t)} − e^{iθM(0)} + ∫_0^t (1/2)θ² e^{iθM(s)} ds

is a martingale, and hence

    E[e^{iθM(t+r)} | F_t] = e^{iθM(t)} − ∫_t^{t+r} (1/2)θ² E[e^{iθM(s)} | F_t] ds

and

    ϕ_{t,θ}(r) ≡ E[e^{iθ(M(t+r) − M(t))} | F_t] = 1 − ∫_0^r (1/2)θ² ϕ_{t,θ}(s) ds.

Therefore

    E[e^{iθ(M(t+r) − M(t))} | F_t] = e^{−θ²r/2}.

It follows that M(t + r) − M(t) is Gaussian and independent of F_t.

129  Proof. [of lemma] To prove that (7.3) is a martingale, the problem is to show that

    E[ Σ_i (f(M(t_{i+1})) − f(M(t_i)) − f′(M(t_i))(M(t_{i+1}) − M(t_i))
           − (1/2) f″(M(t_i))(M(t_{i+1}) − M(t_i))²) | F_t ]

converges to zero. Let ε, δ > 0, and assume that t_{i+1} − t_i ≤ δ. Define

    τ_{ε,δ} = inf{t : sup_{t−δ≤s≤t} |M(t) − M(s)| ≥ ε}.

Then

    |M(t_{i+1} ∧ τ_{ε,δ}) − M(t_i ∧ τ_{ε,δ})|³ ≤ ε (M(t_{i+1} ∧ τ_{ε,δ}) − M(t_i ∧ τ_{ε,δ}))²,

and the expectation of the sum of the terms on the left is bounded by ε times the length of the time interval. Select ε_n → 0 and δ_n → 0 so that τ_{ε_n,δ_n} → ∞ a.s.

130  Applications of the optional sampling theorem

Let a, b > 0, and define γ_{a,b} = inf{t : W(t) ∉ (−a, b)}. Then

    E[W(γ_{a,b} ∧ t)] = 0

and

    E[γ_{a,b} ∧ t] = E[W(γ_{a,b} ∧ t)²] ≤ a² ∨ b².

Let t → ∞. Then E[γ_{a,b}] < ∞, so

    0 = E[W(γ_{a,b})] = −a P{W(γ_{a,b}) = −a} + b P{W(γ_{a,b}) = b}
                      = −a + (a + b) P{W(γ_{a,b}) = b}.

Consequently,

    P{W(γ_{a,b}) = b} = a/(a + b)

and

    E[γ_{a,b}] = E[W(γ_{a,b})²] = a² b/(a + b) + b² a/(a + b) = ab.
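The same identities hold exactly for the simple symmetric random walk, the discrete analogue used in the functional CLT: started at 0 with absorbing barriers at −a and b (integers), it hits b first with probability a/(a + b) and has expected absorption time ab. A simulation sketch:

```python
import random

# Discrete analogue of the exit computation: simple symmetric random walk
# from 0 with absorbing barriers at -a and b.
#     P{hit b first} = a/(a+b),  E[exit time] = a*b,
# matching P{W(gamma) = b} = a/(a+b) and E[gamma] = ab for Brownian motion.

rng = random.Random(5)
a, b, trials = 3, 5, 40_000

hits_b = 0
total_steps = 0
for _ in range(trials):
    S, steps = 0, 0
    while -a < S < b:
        S += rng.choice((-1, 1))
        steps += 1
    hits_b += (S == b)
    total_steps += steps

p_b = hits_b / trials
mean_time = total_steps / trials
print(p_b, mean_time)  # ≈ a/(a+b) = 0.375 and ≈ a*b = 15
```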


More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Measure-theoretic probability

Measure-theoretic probability Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is

More information

Advanced Probability

Advanced Probability Advanced Probability Perla Sousi October 10, 2011 Contents 1 Conditional expectation 1 1.1 Discrete case.................................. 3 1.2 Existence and uniqueness............................ 3 1

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Chapter 7. Basic Probability Theory

Chapter 7. Basic Probability Theory Chapter 7. Basic Probability Theory I-Liang Chern October 20, 2016 1 / 49 What s kind of matrices satisfying RIP Random matrices with iid Gaussian entries iid Bernoulli entries (+/ 1) iid subgaussian entries

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Weak convergence and large deviation theory

Weak convergence and large deviation theory First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

for all f satisfying E[ f(x) ] <.

for all f satisfying E[ f(x) ] <. . Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Elementary Probability. Exam Number 38119

Elementary Probability. Exam Number 38119 Elementary Probability Exam Number 38119 2 1. Introduction Consider any experiment whose result is unknown, for example throwing a coin, the daily number of customers in a supermarket or the duration of

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1

Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet 1 Stochastic Calculus and Black-Scholes Theory MTH772P Exercises Sheet. For ξ, ξ 2, i.i.d. with P(ξ i = ± = /2 define the discrete-time random walk W =, W n = ξ +... + ξ n. (i Formulate and prove the property

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

3 Continuous Random Variables

3 Continuous Random Variables Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random

More information

CHAPTER 3: LARGE SAMPLE THEORY

CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 1 CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 2 Introduction CHAPTER 3 LARGE SAMPLE THEORY 3 Why large sample theory studying small sample property is usually

More information

Useful Probability Theorems

Useful Probability Theorems Useful Probability Theorems Shiu-Tang Li Finished: March 23, 2013 Last updated: November 2, 2013 1 Convergence in distribution Theorem 1.1. TFAE: (i) µ n µ, µ n, µ are probability measures. (ii) F n (x)

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

1 Probability theory. 2 Random variables and probability theory.

1 Probability theory. 2 Random variables and probability theory. Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

1 Review of Probability

1 Review of Probability 1 Review of Probability Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F (x) = P (X x), < x

More information

BASICS OF PROBABILITY

BASICS OF PROBABILITY October 10, 2018 BASICS OF PROBABILITY Randomness, sample space and probability Probability is concerned with random experiments. That is, an experiment, the outcome of which cannot be predicted with certainty,

More information

Hattendorff s theorem and Thiele s differential equation generalized. by Reinhardt Messerschmidt

Hattendorff s theorem and Thiele s differential equation generalized. by Reinhardt Messerschmidt Hattendorff s theorem and Thiele s differential equation generalized by Reinhardt Messerschmidt Submitted in partial fulfillment of the requirements for the degree Magister Scientiae in the Department

More information

MATH 418: Lectures on Conditional Expectation

MATH 418: Lectures on Conditional Expectation MATH 418: Lectures on Conditional Expectation Instructor: r. Ed Perkins, Notes taken by Adrian She Conditional expectation is one of the most useful tools of probability. The Radon-Nikodym theorem enables

More information

Review of Probability Theory

Review of Probability Theory Review of Probability Theory Arian Maleki and Tom Do Stanford University Probability theory is the study of uncertainty Through this class, we will be relying on concepts from probability theory for deriving

More information

Lecture 22: Variance and Covariance

Lecture 22: Variance and Covariance EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce

More information

1 Probability and Random Variables

1 Probability and Random Variables 1 Probability and Random Variables The models that you have seen thus far are deterministic models. For any time t, there is a unique solution X(t). On the other hand, stochastic models will result in

More information

Joint Probability Distributions and Random Samples (Devore Chapter Five)

Joint Probability Distributions and Random Samples (Devore Chapter Five) Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Chapter 4 : Expectation and Moments

Chapter 4 : Expectation and Moments ECE5: Analysis of Random Signals Fall 06 Chapter 4 : Expectation and Moments Dr. Salim El Rouayheb Scribe: Serge Kas Hanna, Lu Liu Expected Value of a Random Variable Definition. The expected or average

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information