Stochastic Processes - lecture notes - Matteo Caiaffa


Contents

1 Introduction to Stochastic processes
   1.1 Propaedeutic definitions and theorems
   1.2 Some notes on probability theory
   1.3 Conditional probability

2 Gaussian Processes
   2.1 Generality
   2.2 Markov process
      2.2.1 Chapman-Kolmogorov relation
      2.2.2 Reductions of transition probability
   2.3 Wiener process
      2.3.1 Distribution of the Wiener Process
      2.3.2 Wiener processes are Markov processes
      2.3.3 Brownian bridge
      2.3.4 Exercises
   2.4 Poisson process

3 Kolmogorov's theorems
   3.1 Cylinder set
   3.2 Kolmogorov's theorem I
   3.3 Kolmogorov's theorem II

4 Martingales
   4.1 Conditional expectation
   4.2 Filtrations and Martingales
   4.3 Examples

5 Monotone classes
   5.1 The Dynkin lemma
   5.2 Some applications of the Dynkin lemma

6 Some stochastic differential equations
   6.1 Brownian bridge and stochastic differential equation
   6.2 Ornstein-Uhlenbeck equation

7 Integration
   7.1 The Itô integral, a particular case

8 Semigroup of linear operators

Chapter 1

Introduction to Stochastic processes

1.1 Propaedeutic definitions and theorems

Definition 1.1.1 (of probability space). A probability space is a triple (Ω, E, P) where

(i) Ω is any (non-empty) set, called the sample space,

(ii) E is a σ-algebra of subsets of Ω (whose elements are called events). Clearly we have {∅, Ω} ⊆ E ⊆ P(Ω) (= 2^Ω),

(iii) P is a map, called probability measure on E, associating a number with each element of E. P has the following properties: 0 ≤ P ≤ 1, P(Ω) = 1 and P is countably additive, that is, for any sequence A_i of disjoint elements of E, P(⋃_i A_i) = Σ_{i=1}^∞ P(A_i).

Definition 1.1.2 (of random variable). A random variable is a map X : Ω → ℝ such that X^{-1}(I) ∈ E for every Borel set I (that is, a random variable is an E-measurable map). Let us consider the family F_X of subsets of Ω defined by F_X = {X^{-1}(I), I ∈ B}, where B is the σ-algebra of the Borel sets of ℝ. It is easy to see that F_X is a σ-algebra, so we can say that X is a random variable if and only if F_X ⊆ E. We can also say that F_X is the least σ-algebra that makes X measurable. Definition 1.1.2 gives us the following relations: X maps Ω into ℝ, and X^{-1} maps B into E. We now introduce a measure (associated with the random variable X) on the Borel sets of ℝ:

μ_X(I) := P(X^{-1}(I))

and, summing up,

(Ω, E, P) → (ℝ, B, μ_X) under X (and back under X^{-1}).

Definition 1.1.3. The measure μ_X is called the probability distribution¹ of X and it is the image measure of P by means of the random variable X. The measure μ_X is completely and uniquely determined by the values that it assumes on the Borel sets I.² In other words, one can say that μ_X turns out to be uniquely determined by its distribution function³ F_X:

F_X(t) = μ_X((-∞, t]) = P(X ≤ t). (1.1.1)

Definition 1.1.4 (of random vector). A random vector is a random variable with values in ℝ^d, where d > 1. Formally, a random vector is a map X : Ω → ℝ^d such that X^{-1}(I) ∈ E for every I ∈ B. The definition just stated provides the relations: X maps Ω into ℝ^d and X^{-1} maps B into E, where B here is the smallest σ-algebra containing the open sets of ℝ^d. Moreover, P corresponds with an image measure μ over the Borel sets of ℝ^d.

Definition 1.1.5 (of product σ-algebra). Consider sets of the form I = I_1 × I_2 × I_3 × ... (rectangles in ℝ^d), where I_k ∈ B. The smallest σ-algebra containing the family of rectangles I_1 × I_2 × I_3 × ... is called the product σ-algebra, and it is denoted by B_d = B ⊗ B ⊗ B ⊗ ....

Theorem 1.1.6. The σ-algebra generated by the open sets of ℝ^d coincides with the product σ-algebra B_d.

Theorem 1.1.7. X = (X_1, X_2, ..., X_d) is a random vector if and only if each of its components X_k is a random variable.

Thanks to the previous theorem, given a random vector X = (X_1, X_2, ..., X_d) and the product σ-algebra B_d, for any component X_k the following relations hold: X_k maps Ω into ℝ and X_k^{-1} maps B into E. To complete this scheme of relations, we introduce a measure associated

¹ For the sake of simplicity, we will omit the index X where there is no possibility of misunderstanding.
² Moreover, μ_X is a σ-additive measure over ℝ.
³ In Italian: funzione di ripartizione.

with P. Consider d one-dimensional measures μ_k and define

ν(I_1 × I_2 × ... × I_d) := μ_1(I_1) μ_2(I_2) ... μ_d(I_d).

Theorem 1.1.8. The measure ν can be extended (uniquely) to a measure over B_d. ν is called the product measure and it is denoted by μ_1 ⊗ μ_2 ⊗ ... ⊗ μ_d.⁴

Definition 1.1.9. If μ = μ_1 ⊗ μ_2 ⊗ ... ⊗ μ_d, the d measures μ_k are said to be independent.

Example 1.1.10. Consider a 2-dimensional random vector X_{1,2} := (X_1, X_2) and the Borel set J = I × ℝ with I ⊆ ℝ. We obviously have: X maps Ω into ℝ^d and X_{1,2}^{-1} maps B_2 into E. Moreover, the map (π_1, π_2) : ℝ^d → ℝ², where π_1, π_2 are the projections of ℝ^d onto ℝ², fulfills a commutative diagram between Ω, ℝ^d and ℝ²: X maps Ω into ℝ^d, (π_1, π_2) maps ℝ^d onto ℝ², and X_{1,2} maps Ω into ℝ². By definition of image measure, we are allowed to write

μ_{1,2}(J) = P(X_{1,2}^{-1}(J)) = μ(J × ℝ × ... × ℝ)

and, in particular, μ_{1,2}(I × ℝ) = μ_1(I), μ_{2,1}(I × ℝ) = μ_2(I).

In addition to what we have recalled, using the well-known definition of Gaussian measure, the reader can also verify an important proposition:

Proposition 1.1.11. μ is a Gaussian measure.

⁴ In general, μ ≠ μ_1 ⊗ μ_2 ⊗ ... ⊗ μ_d.

1.2 Some notes on probability theory

We recall here some basic definitions and results which should be known to the reader from previous courses in probability theory. We have already established (Definition 1.1.3) how the probability distribution is defined. However, we can characterize it by means of the following complex-valued function:

φ(t) = ∫_ℝ e^{itx} μ(dx) = ∫_ℝ e^{itx} dF = E(e^{itX}), (1.2.1)

where E(X) = ∫_Ω X dP is the expectation value of X.

Definition 1.2.1 (of characteristic function). φ(t) (in equation 1.2.1) is called the characteristic function of μ_X (or the characteristic function of the random variable X).

Theorem 1.2.2 (of Bochner). The characteristic function has the following properties:

(i) φ(0) = 1,

(ii) φ is continuous in 0,

(iii) φ is positive definite, that is, for any integer n, for any choice of the n real numbers t_1, t_2, ..., t_n and of the n complex numbers z_1, z_2, ..., z_n, one has

Σ_{i,j=1}^n φ(t_i − t_j) z_i z̄_j ≥ 0.

We are now interested in establishing a fundamental relation between a measure and its characteristic function. To achieve this goal, let us show that, given a random variable X and a Borel set I ∈ B_d, one can define a measure μ using the following integral:

μ(I) = ∫_I g_{m,A}(x) dx, (1.2.2)

with dx the ordinary Lebesgue measure in ℝ^d, and

g_{m,A}(x) = (2π)^{-d/2} (det A)^{-1/2} e^{-½⟨A^{-1}(x−m), (x−m)⟩}, (1.2.3)

where m ∈ ℝ^d is a vector (the mean value) and A is a d × d covariance matrix, symmetric and positive definite. Notice that the Fourier transform of the measure is just the characteristic function:

φ(t) = ∫_{ℝ^d} e^{i⟨t,x⟩} μ(dx) (1.2.4)

and in particular, for a Gaussian measure, the characteristic function is given by the following expression:

φ_{g_{m,A}}(t) = e^{i⟨m,t⟩ − ½⟨At,t⟩}. (1.2.5)

Some basic inequalities of probability theory.

(i) Chebyshev's inequality: if λ > 0, P(|X| > λ) ≤ λ^{-p} E(|X|^p).

(ii) Schwarz's inequality: E(XY) ≤ √(E(X²) E(Y²)).

(iii) Hölder's inequality: E(XY) ≤ [E(|X|^p)]^{1/p} [E(|Y|^q)]^{1/q}, where p, q > 1 and 1/p + 1/q = 1.

(iv) Jensen's inequality: if φ : ℝ → ℝ is a convex function and the random variables X and φ(X) have finite expectation values, then φ(E(X)) ≤ E(φ(X)). In particular, for φ(x) = |x|^p with p ≥ 1, we obtain |E(X)|^p ≤ E(|X|^p).

Different types of convergence for a sequence of random variables X_n, n = 1, 2, 3, ....

(i) Almost sure (a.s.) convergence: X_n → X a.s. if lim_{n→∞} X_n(ω) = X(ω) for every ω ∉ N, where P(N) = 0.

(ii) Convergence in probability: X_n → X in probability if lim_{n→∞} P(|X_n − X| > ε) = 0 for every ε > 0.

(iii) Convergence in mean of order p ≥ 1: X_n → X in L^p if lim_{n→∞} E(|X_n − X|^p) = 0.

(iv) Convergence in law: X_n → X in law if lim_{n→∞} F_{X_n}(x) = F_X(x) at any point x where the distribution function F_X is continuous.

1.3 Conditional probability

Theorem 1.3.1 (of Radon-Nikodym). Let A be a σ-algebra over X and α, ν : A → ℝ two σ-additive real-valued measures. If α is positive and ν is absolutely continuous with respect to α, then there exists a map g ∈ L¹(α), called the density or derivative of ν with respect to α, such that

ν(A) = ∫_A g dα for any A ∈ A.

Moreover, any other density function is equal to g almost everywhere.

Theorem 1.3.2 (of monotone convergence). Let {f_k} be an increasing sequence of measurable non-negative functions, f_k : ℝ^n → [0, +∞]. Then, if f(x) = lim_k f_k(x), one has

∫_{ℝ^n} f(x) dμ(x) = lim_k ∫_{ℝ^n} f_k(x) dμ(x).

Lemma 1.3.3. Consider the probability space (Ω, E, P). Let F ⊆ E be a σ-algebra and A ∈ E. Then there exists a (P-almost everywhere) unique F-measurable function Y : Ω → ℝ such that for every F ∈ F

P(A ∩ F) = ∫_F Y(ω) P(dω).

Proof. F ↦ P(A ∩ F) is a measure absolutely continuous with respect to P; thus, by the Radon-Nikodym theorem, there exists Y such that P(A ∩ F) = ∫_F Y dP. Finally, if ∫_F Y dP = ∫_F Y′ dP for every F ∈ F, then ∫_F (Y − Y′) dP = 0 for any F, which implies Y = Y′ almost everywhere with respect to P. □

Definition 1.3.4 (of conditional probability). We call conditional probability of A with respect to F, denoted by P(A | F), the unique such Y, that is, the F-measurable function satisfying

P(A ∩ F) = ∫_F P(A | F)(ω) P(dω) for every F ∈ F.
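The definition is easiest to see on a finite sample space, where F is generated by a partition and P(A | F) equals P(A ∩ B)/P(B) on each block B of the partition. The following sketch (not from the notes: the six-point Ω, the uniform P, the partition and the event A are all made-up illustration data) checks the defining identity P(A ∩ F) = ∫_F P(A | F) dP on every element F of the generated σ-algebra:

```python
import itertools
from fractions import Fraction

# Hypothetical finite sample space with uniform probability (illustration only).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

partition = [{0, 1, 2}, {3, 4, 5}]   # generates the sub-sigma-algebra F
A = {0, 3, 4}                        # the event being conditioned

# Y = P(A | F) is constant on each block of the partition.
def Y(w):
    B = next(b for b in partition if w in b)
    return sum(P[v] for v in A & B) / sum(P[v] for v in B)

# Defining property: P(A ∩ F) = ∫_F Y dP for every F in sigma(partition).
for r in range(len(partition) + 1):
    for blocks in itertools.combinations(partition, r):
        F = set().union(*blocks) if blocks else set()
        lhs = sum(P[w] for w in A & F)           # P(A ∩ F)
        rhs = sum(Y(w) * P[w] for w in F)        # ∫_F Y dP
        assert lhs == rhs
print("P(A | F) verified on all elements of the sigma-algebra")
```

Here σ(partition) has only four elements, so the check is exhaustive; in the continuous setting of the lemma above, the same identity is what the Radon-Nikodym theorem guarantees.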

Chapter 2

Gaussian Processes

2.1 Generality

Definition 2.1.1 (of stochastic process). A stochastic process is a family of random variables {X_t}_{t∈T} defined on the same probability space (Ω, E, P). The set T is called the parameter set of the process. A stochastic process is said to be a discrete parameter process or a continuous parameter process according to the discrete or continuous nature of its parameter set. If t represents time, one can think of X_t as the state of the process at a given time t.

Definition 2.1.2 (of trajectory). For each element ω ∈ Ω, the mapping t ↦ X_t(ω) is called the trajectory of the process.

Consider a continuous parameter process {X_t}_{t∈[0,T]}. We can take out of this family a number d > 0 of random variables (X_{t_1}, X_{t_2}, ..., X_{t_d}). For example, for d = 1 we can take a random variable anywhere in the set [0, T] (this set being continuous) and, from what we know about a random variable and a probability space, we get the maps: X_t maps Ω into ℝ and X_t^{-1} maps B into E. Moreover, we can associate a measure μ_t with P. Proceeding in this way for all the (infinitely many) X_t, we obtain a family of 1-dimensional measures {μ_t}_{t∈[0,T]} (the 1-dimensional marginals).

Example 2.1.3. Let {X_t}_{t∈[0,T]} be a stochastic process defined on the probability space (Ω, E, P), then consider the random vector X_{1,2} = (X_{t_1}, X_{t_2}). We can associate with P a 2-dimensional measure μ_{t_1,t_2} and, for every choice of the components of the random vector, we construct an (infinite) family of 2-dimensional measures {μ_{t_1,t_2}}_{t_1,t_2∈[0,T]}.

In general, to each stochastic process corresponds a family M of marginals of finite

dimensional distributions:

M = {{μ_t}, {μ_{t_1,t_2}}, {μ_{t_1,t_2,t_3}}, ..., {μ_{t_1,t_2,...,t_n}}, ...}. (2.1.1)

A stochastic process is a model for something: it is a description of some random phenomenon evolving in time. Such a model does not exist by itself, but has to be built. In general, one starts from M to construct it (we shall see later, with some examples, how to deal with this construction). An important property of the family M is the so-called compatibility property (or consistency).

Definition 2.1.4 (of compatibility property). If {t_{k_1} < ... < t_{k_m}} ⊆ {t_1 < ... < t_n}, then μ_{t_{k_1},...,t_{k_m}} is the marginal of μ_{t_1,...,t_n} corresponding to the indexes k_1, k_2, ..., k_m.

Definition 2.1.5 (of Gaussian process). A process characterized by the property that each finite-dimensional measure μ ∈ M has a normal distribution is said to be a Gaussian stochastic process. It is customary to denote the mean value of such a process by m(t) := E(X_t) and the covariance function by φ(s, t) := E[(X_t − m(t))(X_s − m(s))], where s, t ∈ [0, T]. According to the previous definition, we can write the expression of the generic n-dimensional measure as

μ_{t_1,...,t_n}(I) := ∫_I (2π)^{-n/2} (det A)^{-1/2} e^{-½⟨A^{-1}(x − m_{t_1,...,t_n}), x − m_{t_1,...,t_n}⟩} dx, (2.1.2)

where x ∈ ℝ^n, I is a Borel set, m_{t_1,...,t_n} = (m(t_1), ..., m(t_n)) and A = (a_{ij}) = (φ(t_i, t_j)), i, j = 1, ..., n. In equation (2.1.2) the covariance matrix A must be invertible. To avoid this constraint, the characteristic function (which does not depend on A^{-1}) can turn out to be helpful:

φ_{t_1,...,t_n}(z) = ∫_{ℝ^n} e^{i⟨z,x⟩} μ_{t_1,...,t_n}(dx) = e^{i⟨m_{t_1,...,t_n}, z⟩ − ½⟨Az,z⟩}.

2.2 Markov process

Definition 2.2.1 (of Markov process). Given a process {X_t} and a family of transition probabilities p(s, x; t, I), {X_t} is said to be a Markov process if the following conditions are satisfied:

(i) (s, x, t) ↦ p(s, x; t, I) is a measurable function,

(ii) I ↦ p(s, x; t, I) is a probability measure,

(iii) p(s, x; t, I) = ∫_ℝ p(s, x; r, dz) p(r, z; t, I),

(iv) p(s, x; s, I) = δ_x(I),

(v) E(1_I(X_t) | F_s) = p(s, X_s; t, I) = E(1_I(X_t) | σ(X_s)), where F_t = σ{X_u : u ≤ t}.

Property (v) is equivalent to stating:

P(X_t ∈ I | X_s = x, X_{s_1} = x_1, ..., X_{s_n} = x_n) = P(X_t ∈ I | X_s = x).

One can observe that the family of finite dimensional measures M does not appear in the definition.¹ Anyway, M can be derived from the transition probabilities. Setting I = (I_1 × I_2 × ... × I_n) and G = (I_1 × I_2 × ... × I_{n−1}):

μ_{t_1,t_2,...,t_n}(I) = P((X_{t_1}, ..., X_{t_n}) ∈ I)
= P((X_{t_1}, ..., X_{t_{n−1}}) ∈ G, X_{t_n} ∈ I_n)
= ∫_G P(X_{t_n} ∈ I_n | X_{t_{n−1}} = x_{n−1}, ..., X_{t_1} = x_1) P(X_{t_1} ∈ dx_1, ..., X_{t_{n−1}} ∈ dx_{n−1})
= ∫_G p(t_{n−1}, x_{n−1}; t_n, I_n) P(X_{t_1} ∈ dx_1, ..., X_{t_{n−1}} ∈ dx_{n−1})
= ∫_G p(t_{n−1}, x_{n−1}; t_n, I_n) p(t_{n−2}, x_{n−2}; t_{n−1}, dx_{n−1}) P(X_{t_1} ∈ dx_1, ..., X_{t_{n−2}} ∈ dx_{n−2}).

By reiterating the same procedure:

μ_{t_1,t_2,...,t_n}(I) = ∫ P(X_0 ∈ dx_0) p(0, x_0; t_1, dx_1) p(t_1, x_1; t_2, dx_2) ... p(t_{n−1}, x_{n−1}; t_n, dx_n),

where P(X_0 ∈ dx_0) is the initial distribution.

2.2.1 Chapman-Kolmogorov relation

Let X_t be a Markov process; in particular, with s_1 < s_2 < ... < s_n ≤ s < t,

P(X_t ∈ I | X_s = x, X_{s_n} = x_n, ..., X_{s_1} = x_1) = P(X_t ∈ I | X_s = x).

Denote the probability P(X_t ∈ I | X_s = x) by p(s, x; t, I). Then the following equation,

p(s, x; t, I) = ∫_ℝ p(s, x; r, dz) p(r, z; t, I), s < r < t, (2.2.1)

holds, and it is called the Chapman-Kolmogorov relation. Under some assumptions, one can write this relation in different ways, giving birth to various families of transition probabilities.

¹ We will see later (first theorem of Kolmogorov) how M can characterise a stochastic process.
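The Chapman-Kolmogorov relation can be checked concretely on the Gaussian kernel p(s, x; t, dy) = g_{t−s}(y − x) dy of the Wiener process considered later in this chapter: integrating over the intermediate point must reproduce the same kernel. A minimal numerical sketch (the times s, r, t and the points x, y are arbitrary choices, and the integral is a plain Riemann sum):

```python
import numpy as np

# Gaussian transition density of variance tau (the Wiener kernel, as an example)
def g(tau, x):
    return np.exp(-x**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)

s, r, t = 0.0, 0.7, 1.5            # intermediate time s < r < t
x, y = 0.3, -0.8                   # start point x, end point y

# integrate over the intermediate point z with a fine Riemann sum
z = np.linspace(-15.0, 15.0, 200_001)
dz = z[1] - z[0]
lhs = np.sum(g(r - s, z - x) * g(t - r, y - z)) * dz   # ∫ p(s,x;r,dz) p(r,z;t,y)
rhs = g(t - s, y - x)                                  # p(s,x;t,y)
assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)
```

The agreement is just the statement that the convolution of two centred Gaussians of variances r − s and t − r is a centred Gaussian of variance t − s.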

2.2.2 Reductions of transition probability

In general there are two ways to study a Markov process:

(i) a stochastic method: studying the trajectories t ↦ X_t (e.g. by stochastic differential equations),

(ii) an analytic method: studying directly the family {p(s, x; t, I)} of transition probabilities.

Let us take account of the second way.

1st simplification: homogeneity in time. Consider the particular case in which we can express the dependence on s, t by means of a single function φ(τ, x, I) which depends only on the difference t − s =: τ. So we have²

{p(s, x; t, I)} ⟶ {p(t, x, I)}. (2.2.2)

It is easy to understand how properties (i), (ii) and (iv) in the definition of Markov process change: fixing I, the map (t, x) ↦ p(t, x, I) is measurable, I ↦ p(t, x, I) is a probability measure and p(0, x, I) = δ_x(I). As far as the Chapman-Kolmogorov relation is concerned, the previous change of variables produces

p(t_1 + t_2, x, I) = ∫ p(t_1, x, dz) p(t_2, z, I)

or, exchanging the time intervals, p(t_1 + t_2, x, I) = ∫ p(t_2, x, dz) p(t_1, z, I).

2nd simplification. This is the particular case of a transition probability that admits a density φ, such that p(s, x; t, I) = ∫_I φ(y) dy, where φ ≥ 0 and ∫_ℝ φ(y) dy = 1. Again, as is customary, we replace φ with p:

p(s, x; t, I) = ∫_I p(s, x; t, y) dy, with p(s, x; t, y) ≥ 0 and ∫_ℝ p(s, x; t, y) dy = 1.

Under these new hypotheses,

² From now on, according to the notation adopted by other authors, the reduced forms of the transition probability will be written with the same letters identifying the variables of the original expression. However here (in the second member of 2.2.2) t refers to the difference t − s and p is φ.

the Chapman-Kolmogorov relation becomes:

p(s, x; t, y) = ∫_ℝ p(s, x; r, z) p(r, z; t, y) dz.

3rd simplification. Starting from the assumption of homogeneity in time, there is another reduction:

p(t, x, I) = ∫_I p(t, x, y) dy.

A straightforward computation yields the new form of the Chapman-Kolmogorov relation,

p(t_1 + t_2, x, y) = ∫_ℝ p(t_1, x, z) p(t_2, z, y) dz,

or the equivalent expression exchanging t_1 with t_2.

4th simplification: homogeneity in space. This simplification allows us to write the transition probability as a function defined on ℝ⁺ × ℝ: p(t, x, y) = p(t, y − x), so that the new space variable actually represents the difference y − x. Property (iii) becomes

p(t_1 + t_2, x) = ∫ p(t_1, y) p(t_2, x − y) dy = ∫ p(t_2, y) p(t_1, x − y) dy,

and p(t_1 + t_2) = p(t_1) ∗ p(t_2) is just the convolution.

All these simplifications are summarised in the following scheme:

{p(s, x; t, I)} → {p(t, x, I)} → {p(t, x, y)} → {p(t, x)},
{p(s, x; t, I)} → {p(s, x; t, y)} → {p(t, x, y)}.

2.3 Wiener process

In 1827 Robert Brown (botanist, 1773-1858) observed the erratic and continuous motion of plant spores suspended in water. Later, in the 1920s, Norbert Wiener (mathematician, 1894-1964) proposed a mathematical model describing this motion: the Wiener process (also called Brownian motion).

Definition 2.3.1 (of Wiener process). A Gaussian, continuous parameter process characterized by mean value m(t) = 0 and covariance function φ(s, t) := min{s, t} = s ∧ t, for any s, t ∈ [0, T], is called a Wiener process (denoted by {W_t}).

We will often refer to a Wiener process through the characterisation just stated, but it is worth noticing that the next definition provides a more intuitive description of the fundamental properties of a process of this kind.

Definition 2.3.2. A stochastic process {W_t}_{t≥0} is called a Wiener process if it satisfies the properties listed below:

(i) W_0 = 0,

(ii) {W_t} has, almost surely, continuous trajectories,³

(iii) for all s, t ∈ [0, T] with 0 ≤ s < t, the increments W_t − W_s are stationary,⁴ independent random variables and W_t − W_s ∼ N(0, t − s).

Theorem 2.3.3. Definitions 2.3.1 and 2.3.2 are equivalent.

Proof. (2.3.1) ⇒ (2.3.2). The independence of the increments is a plain consequence of the Gaussian structure of W_t. As a matter of fact, the increments over disjoint intervals are linear combinations of Gaussian random variables; moreover, they are uncorrelated:⁵ for i < j,

a_{ij} = a_{ji} = E[(W_{t_i} − W_{t_{i−1}})(W_{t_j} − W_{t_{j−1}})] = t_i − t_i − t_{i−1} + t_{i−1} = 0,

thus independent. Now we can write a Wiener process as

W_t = (W_{t/N} − W_0) + (W_{2t/N} − W_{t/N}) + ... + (W_{tN/N} − W_{t(N−1)/N}),

where W_{kt/N} − W_{(k−1)t/N} ∼ N(0, t/N).

³ This statement is equivalent to saying that the mapping t ↦ W_t is continuous in t with probability 1.
⁴ The term stationary increments means that the distribution of the increment W_{t+s} − W_s, 0 < s, t < ∞, is the same as that of W_t − W_0.
⁵ Observe that under the assumption s < t, the relations E(W_t) = m(t) = 0, Var(W_t) = min{t, t} = t, E(W_t W_s) = s and E[(W_t − W_s)²] = t − 2s + s = t − s are valid.

(2.3.2) ⇒ (2.3.1). Conversely, the shape of the increments' distribution implies that a Wiener process is a Gaussian process. Sure enough, for any 0 < t_1 < t_2 < ... < t_n, the probability distribution of the random vector (W_{t_1}, W_{t_2}, ..., W_{t_n}) is normal because it is a linear combination of the vector (W_{t_1}, W_{t_2} − W_{t_1}, ..., W_{t_n} − W_{t_{n−1}}), whose components are Gaussian by definition. The mean value and the autocovariance function are:

E(W_t) = 0,
E(W_t W_s) = E[W_s (W_t − W_s + W_s)] = E[W_s (W_t − W_s)] + E(W_s²) = s = s ∧ t when s ≤ t. □

Remark 2.3.4. Referring to Definition 2.3.2, notice that the a priori existence of a Wiener process is not guaranteed, since the mutual consistency of properties (i)-(iii) has first to be shown. In particular, it must be proved that (iii) does not contradict (ii). However, there are several ways to prove the existence of a Wiener process.⁶ For example, one can invoke the Kolmogorov theorem that will be stated later or, more simply, one can show the existence by construction.

Figure 2.3.1: Simulation of the Wiener process.

⁶ Wiener himself proved the existence of the process in 1923.
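A simulation like the one in the figure is obtained by summing independent N(0, Δt) increments; the empirical moments can then be compared with m(t) = 0 and φ(s, t) = s ∧ t. A sketch (grid size, path count and the chosen times s, t are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, paths = 1.0, 200, 10_000
dt = T / N

# W_0 = 0 and independent stationary increments W_t - W_s ~ N(0, t - s)
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, N))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

s_i, t_i = N // 4, 3 * N // 4            # s = 0.25, t = 0.75
Ws, Wt = W[:, s_i], W[:, t_i]

assert abs(Ws.mean()) < 0.02             # m(s) = 0
assert abs(Wt.var() - 0.75) < 0.05       # Var(W_t) = t
assert abs((Ws * Wt).mean() - 0.25) < 0.05   # E(W_s W_t) = s ∧ t = s
```

Plotting a few rows of W against the time grid reproduces pictures like Figure 2.3.1.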

2.3.1 Distribution of the Wiener Process

Theorem 2.3.5. The finite dimensional distributions of the Wiener process are given by

μ_{t_1,...,t_n}(B) = ∫_B g_{t_1}(x_1) g_{t_2−t_1}(x_2 − x_1) ... g_{t_n−t_{n−1}}(x_n − x_{n−1}) dx_1 ... dx_n.

Proof. Given a Wiener process W_t, so that m(t) = 0 and φ(s, t) = min(s, t), let us compute μ_{t_1,...,t_n}(B). We already know that

μ_{t_1,...,t_n}(B) := ∫_B (2π)^{-n/2} (det A)^{-1/2} e^{-½⟨A^{-1}x, x⟩} dx,

where the covariance matrix is

A = | t_1 t_1 t_1 ... t_1 |
    | t_1 t_2 t_2 ... t_2 |
    | t_1 t_2 t_3 ... t_3 |
    | ...                 |
    | t_1 t_2 t_3 ... t_n |

Performing some operations one gets det(A) = t_1 (t_2 − t_1) ... (t_n − t_{n−1}). We can now compute the scalar product ⟨A^{-1}x, x⟩. Take y := A^{-1}x; then x = Ay and ⟨A^{-1}x, x⟩ = ⟨y, x⟩, where

x_1 = t_1 (y_1 + y_2 + ... + y_n)
x_2 = t_1 y_1 + t_2 (y_2 + ... + y_n)
...
x_n = t_1 y_1 + t_2 y_2 + ... + t_n y_n.

Manipulating the system, one gains the following expression for the scalar product:

⟨A^{-1}x, x⟩ = x_1²/t_1 + (x_2 − x_1)²/(t_2 − t_1) + ... + (x_n − x_{n−1})²/(t_n − t_{n−1});

then, introducing the function g_t(α) = (2πt)^{-1/2} e^{-α²/(2t)}, we finally obtain

μ_{t_1,...,t_n}(B) = ∫_B g_{t_1}(x_1) g_{t_2−t_1}(x_2 − x_1) ... g_{t_n−t_{n−1}}(x_n − x_{n−1}) dx_1 ... dx_n. □
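Both identities used in the proof, the factorised determinant of A and the telescoping form of ⟨A^{-1}x, x⟩, are easy to sanity-check numerically for an arbitrary choice of times and of the vector x; a sketch:

```python
import numpy as np

t = np.array([0.3, 0.7, 1.1, 2.0, 2.4])   # arbitrary 0 < t_1 < ... < t_n
A = np.minimum.outer(t, t)                # covariance a_ij = min(t_i, t_j)

# det(A) = t_1 (t_2 - t_1) ... (t_n - t_{n-1})
det_formula = t[0] * np.prod(np.diff(t))
assert abs(np.linalg.det(A) - det_formula) < 1e-10

# <A^{-1} x, x> = x_1^2/t_1 + sum_k (x_k - x_{k-1})^2 / (t_k - t_{k-1})
x = np.array([0.5, -0.2, 0.9, 0.1, 1.3])  # arbitrary test vector
quad = x @ np.linalg.solve(A, x)
quad_formula = x[0]**2 / t[0] + np.sum(np.diff(x)**2 / np.diff(t))
assert abs(quad - quad_formula) < 1e-10
```

The `min(t_i, t_j)` structure of A is exactly the Wiener covariance φ(s, t) = s ∧ t evaluated on the time grid.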

2.3.2 Wiener processes are Markov processes

Consider a Wiener process (m(t) = 0 and φ(s, t) = min(s, t)).

Theorem 2.3.6. A Wiener process satisfies the Markov condition

P(W_t ∈ I | W_s = x, W_{s_1} = x_1, ..., W_{s_n} = x_n) = P(W_t ∈ I | W_s = x). (2.3.1)

Proof. Compute first the right-hand side of (2.3.1). By definition,

P(W_t ∈ I, W_s ∈ B) = ∫_B P(W_t ∈ I | W_s = x) P(W_s ∈ dx); (2.3.2)

on the other hand,

P(W_t ∈ I, W_s ∈ B) = ∫_B ∫_I g_s(x_1) g_{t−s}(x_2 − x_1) dx_2 dx_1 = ∫_B { ∫_I g_{t−s}(x_2 − x_1) dx_2 } g_s(x_1) dx_1. (2.3.3)

So, comparing (2.3.2) and (2.3.3) and noticing that P(W_s ∈ dx) = g_s(x) dx, we get

P(W_t ∈ I | W_s = x) = ∫_I g_{t−s}(y − x) dy a.e. (2.3.4)

Computing the left-hand side of (2.3.1), we actually obtain the same result as (2.3.4). Consider the random vector Y = (W_{s_1}, ..., W_{s_n}, W_s) and perform the computation for a Borel set B = J_1 × ... × J_n × J. As before,

∫_B P(W_t ∈ I | Y = y) P(Y ∈ dy) = P(W_t ∈ I, Y ∈ B) = P(W_t ∈ I, W_{s_1} ∈ J_1, ..., W_{s_n} ∈ J_n, W_s ∈ J),

where P(W_t ∈ I, W_{s_1} ∈ J_1, ..., W_{s_n} ∈ J_n, W_s ∈ J) can be expressed as

∫_B ∫_I g_{s_1}(y_1) g_{s_2−s_1}(y_2 − y_1) ... g_{s−s_n}(y_{n+1} − y_n) g_{t−s}(y − y_{n+1}) dy dy_1 ... dy_{n+1}.

Since P(Y ∈ dy_1 ... dy_{n+1}) = g_{s_1}(y_1) ... g_{s−s_n}(y_{n+1} − y_n) dy_1 ... dy_{n+1},

∫_B P(W_t ∈ I | Y = y) P(Y ∈ dy) = ∫_B { ∫_I g_{t−s}(y − y_{n+1}) dy } P(Y ∈ dy),

which implies

P(W_t ∈ I | Y = y) = ∫_I g_{t−s}(y − y_{n+1}) dy a.e.

and, since in our case y_{n+1} = x, the theorem is proved. □

2.3.3 Brownian bridge

As it has already been said, one can think of a stochastic process as a model built from a collection of finite dimensional measures. For example, suppose we have a family M of Gaussian measures. We know that the 1-dimensional measure μ_t(I) is given by the following integral:

μ_t(I) = ∫_I (2πσ²(t))^{-1/2} e^{-[x − m(t)]²/(2σ²(t))} dx. (2.3.5)

Now, for every choice of m(t) and σ²(t) we obtain an arbitrary 1-dimensional measure.⁷ We can also construct the 2-dimensional measure, but in that case, instead of σ²(t), we will have a 2 × 2 covariance matrix that must be symmetric and positive definite for the integral to be computed. In addition, the 2-dimensional measure obtained in this way must satisfy the compatibility property (Definition 2.1.4).

Example 2.3.7. Consider t ∈ [0, 1] and a Borel set I. Try to construct a process under the hypotheses m ≡ 0 and a(s, t) = s ∧ t − st (a_{11} = t_1 − t_1², a_{12} = t_1 − t_1 t_2 for t_1 ≤ t_2, ...). It can be proved that the covariance matrix A = (a_{ij}) is symmetric and positive definite, thus equation (2.3.5) makes sense.

In conclusion, we now have an idea of how to build a process given a family of finite dimensional measures M. On the other hand, it is interesting and fruitful to reverse the previous procedure, asking ourselves whether a process corresponding to a given M exists. We will introduce chapter 3 keeping this question in mind. Example 2.3.7 leads us to the following definition.

Definition 2.3.8 (of Brownian bridge). A Gaussian, continuous parameter process {X_t}_{t∈[0,1]} with continuous trajectories, mean value m(t) = 0 and covariance function a(s, t) = s ∧ t − st, 0 ≤ s ≤ t ≤ 1 (so that it is conditioned to have X(1) = 0), is called a Brownian bridge process.

⁷ For example, for suitable m(t) and a(s, t) we obtain just the Wiener process.
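The covariance a(s, t) = s ∧ t − st can be checked by simulation, building the bridge from a Wiener path via the standard construction X(t) = W(t) − tW(1) (the same construction discussed next). A sketch with arbitrary simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N, paths = 200, 10_000
t = np.linspace(0.0, 1.0, N + 1)

# Wiener paths from independent N(0, 1/N) increments
dW = rng.normal(0.0, np.sqrt(1.0 / N), size=(paths, N))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

X = W - t * W[:, -1:]               # X(t) = W(t) - t W(1): a Brownian bridge

assert np.allclose(X[:, -1], 0.0)   # pinned at t = 1
s_i, t_i = N // 4, N // 2           # s = 0.25, t = 0.5
cov = (X[:, s_i] * X[:, t_i]).mean()
assert abs(cov - (0.25 - 0.25 * 0.5)) < 0.02   # a(s,t) = s ∧ t - st = 0.125
```

Plotting rows of X produces pictures like Figure 2.3.2 below: all trajectories start and end at 0.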

There is an easy way to construct a Brownian bridge from a Wiener process:

X(t) = W(t) − tW(1), 0 ≤ t ≤ 1.

Observe also that, given a Wiener process, we do not have to worry about the existence of the Brownian bridge. It is interesting to check that the following process represents a Brownian bridge too:

Y(t) = (1 − t) W(t/(1 − t)), 0 ≤ t < 1, Y(1) = 0.

Figure 2.3.2: Simulation of the Brownian bridge.

2.3.4 Exercises

Example 2.3.9. Consider a Wiener process {W_t}, the vector space L²_loc(ℝ⁺) and a function f : ℝ⁺ → ℝ locally of bounded variation (BV from now on) and locally in L². For any trajectory define

X = ∫_0^T f(t) dW_t, X(ω) = ∫_0^T f(t) dW_t(ω). (2.3.6)

Observe that we are integrating over time, hence we can formally treat the ω that appears in the integrand as a parameter. Integration by parts yields

X(ω) = ∫_0^T f(t) dW_t = f(T) W_T − f(0) W_0 − ∫_0^T W_t df(t).

You may now wonder how X is distributed. To this purpose fix a T, choose any partition π = {0, t_1, t_2, ..., t_n} of [0, T] and search for the distribution of the sum

X_π := Σ_{i=0}^{n−1} f(t_i)(W_{t_{i+1}} − W_{t_i}), (2.3.7)

whose limit is just the first equation in (2.3.6). Look now at the properties of X_π. First of all, notice that it is a linear combination of Gaussian random variables. Then, because of the properties of a Wiener process,⁸ the variance of X_π (the sum of the variances of its terms) is Σ_{i=0}^{n−1} f(t_i)² (t_{i+1} − t_i), thus

X_π ∼ N(0, Σ_{i=0}^{n−1} f(t_i)² (t_{i+1} − t_i)) (2.3.8)

and, for n → ∞,

Σ_{i=0}^{n−1} f(t_i)² (t_{i+1} − t_i) → ∫_0^T f(t)² dt = ‖f‖²_{L²(0,T)}.

By construction we have, for |π| → 0, that X_π(ω) → X(ω). The same result can be achieved independently by calling into account the characteristic function of X_π:

φ_{X_π}(λ) = E(e^{iλX_π}) = e^{-½λ² Σ_{i=0}^{n−1} f(t_i)²(t_{i+1}−t_i)} → e^{-½λ² ∫_0^T f(t)² dt} as |π| → 0.

Indeed, e^{iλX_π(ω)} is a sequence of functions that converges pointwise to e^{iλX(ω)}. Observing that |e^{iλX_π}| = 1 and calling to action the Lebesgue dominated convergence theorem, as |π| → 0 one has

φ_{X_π}(λ) → φ_X(λ) = E(e^{iλX}) = e^{-½λ² ∫_0^T f(t)² dt}.

The integral X = ∫_0^T f(t) dW_t has been computed; for a more general expression, notice that if its upper limit is variable, one gets the process

X_t := ∫_0^t f(r) dW_r. (2.3.9)

Example 2.3.10. Given a Wiener process, a Borel set I and the rectangle G = (x − δ, x + δ) × I, compute the marginal μ_{s,t}(G) ∈ M.

⁸ Independent increments distributed as W_t − W_s ∼ N(0, t − s).

Due to the features of the Wiener process, one has

μ_{s,t}(G) = P((W_s, W_t) ∈ G) = ∫_G g(x) dx = ∫_G (2π)^{-1} (det A)^{-1/2} e^{-½⟨A^{-1}(x−0), (x−0)⟩} dx, (2.3.10)

where

A = | s s |     and     A^{-1} = (1/det A) |  t  −s |
    | s t |                                | −s   s |.

Set z = x − 0; then

⟨A^{-1}z, z⟩ = (t z_1² − 2s z_1 z_2 + s z_2²)/(st − s²),

so

∫_G (2π)^{-1} (det A)^{-1/2} e^{-½⟨A^{-1}z, z⟩} dz
= (1/(2π√(st − s²))) ∫_G e^{-(t z_1² − 2s z_1 z_2 + s z_2²)/(2(st − s²))} dz_1 dz_2
= ∫_G (2πs)^{-1/2} e^{-z_1²/(2s)} (2π(t − s))^{-1/2} e^{-(z_2 − z_1)²/(2(t − s))} dz_1 dz_2.

At this point, bringing to mind the transition function g_t(ξ) = (2πt)^{-1/2} e^{-ξ²/(2t)}, we have in this case:

P((W_s, W_t) ∈ G) = ∫_G g_s(z_1) g_{t−s}(z_2 − z_1) dz_1 dz_2, G = (x − δ, x + δ) × I. (2.3.11)
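Formula (2.3.11) can be compared against a direct Monte Carlo estimate: sample W_s, add an independent N(0, t − s) increment to get W_t, and count how often the pair falls in the rectangle. A sketch with arbitrary numbers (the double integral is approximated by a trapezoidal rule):

```python
import numpy as np

rng = np.random.default_rng(3)
s, t = 0.5, 1.2
a1, b1 = -0.3, 0.4                 # interval for W_s
a2, b2 = 0.0, 1.0                  # Borel set I for W_t
n = 1_000_000

Ws = rng.normal(0.0, np.sqrt(s), n)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), n)   # independent increment
mc = np.mean((Ws > a1) & (Ws < b1) & (Wt > a2) & (Wt < b2))

# double integral of g_s(z1) g_{t-s}(z2 - z1) over the rectangle
def g(tau, x):
    return np.exp(-x**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)

def trap(y, x):                    # trapezoidal rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

z1 = np.linspace(a1, b1, 801)
z2 = np.linspace(a2, b2, 801)
inner = trap(g(s, z1)[:, None] * g(t - s, z2[None, :] - z1[:, None]), z2)
exact = trap(inner, z1)
assert abs(mc - exact) < 0.005
```

The two-step sampling used for (Ws, Wt) is exactly the factorisation g_s(z_1) g_{t−s}(z_2 − z_1) of the joint density.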

Example 2.3.11. Let {W_t} be a Wiener process, I a Borel set and s, t ∈ [0, T] with s < t. Compute the probability P(W_t ∈ I | W_s = x). By the definition of conditional probability we know that

P(W_t ∈ I | W_s = x) = P(W_t ∈ I, W_s = x) / P(W_s = x). (2.3.12)

When the denominator of the right-hand side is zero (as in this case), the numerator is zero too⁹ and formula (2.3.12) is not useful (0/0). Try then to compute the probability

P(W_t ∈ I | W_s ∈ (x − δ, x + δ)) = P(W_t ∈ I, W_s ∈ (x − δ, x + δ)) / P(W_s ∈ (x − δ, x + δ)), (2.3.13)

where P(W_s ∈ (x − δ, x + δ)) > 0.

⁹ Consider two events A, B and the expression P(A ∩ B)/P(B). If P(B) = 0 then, since P(A ∩ B) ≤ P(B), the numerator is zero too.

Using the result of Example 2.3.10 we have

P(W_t ∈ I, W_s ∈ (x − δ, x + δ)) / P(W_s ∈ (x − δ, x + δ))
= [∫_{x−δ}^{x+δ} g_s(z_1) (∫_I g_{t−s}(z_2 − z_1) dz_2) dz_1] / [∫_{x−δ}^{x+δ} g_s(z_1) (∫_ℝ g_{t−s}(z_2 − z_1) dz_2) dz_1]
= [(1/2δ) ∫_{x−δ}^{x+δ} g_s(z_1) (∫_I g_{t−s}(z_2 − z_1) dz_2) dz_1] / [(1/2δ) ∫_{x−δ}^{x+δ} g_s(z_1) dz_1].

The latest expression has been achieved by observing that ∫_ℝ g_{t−s}(z_2 − z_1) dz_2 = 1 (g_{t−s} is a Gaussian density) and then dividing numerator and denominator by 2δ. If we computed the limit as δ → 0, we would obtain 0/0 again, but after applying De L'Hôpital's rule (with respect to δ) we have

[g_s(x) ∫_I g_{t−s}(z_2 − x) dz_2] / g_s(x) = ∫_I g_{t−s}(z_2 − x) dz_2

and, finally, we get the transition probability

P(W_t ∈ I | W_s = x) = P(W_t − W_s ∈ I − x). (2.3.14)

2.4 Poisson process

A random variable T : Ω → (0, ∞) has exponential distribution of parameter λ > 0 if, for any t ≥ 0, P(T > t) = e^{-λt}. In that case T has density function f_T(t) = λ e^{-λt} 1_{(0,∞)}(t), and the mean value and the variance of T are given by

E(T) = 1/λ, Var(T) = 1/λ².

Proposition 2.4.1. A random variable T : Ω → (0, ∞) has an exponential distribution

if and only if it has the following memoryless property:

P(T > s + t | T > s) = P(T > t) for any s, t ≥ 0.

Definition 2.4.2 (of Poisson process). A stochastic process {N_t, t ≥ 0} defined on a probability space (Ω, E, P) is said to be a Poisson process of rate λ if it verifies the following properties:

(i) N_0 = 0,

(ii) for any n ≥ 1 and for any 0 ≤ t_1 ≤ ... ≤ t_n, the increments N_{t_n} − N_{t_{n−1}}, ..., N_{t_2} − N_{t_1} are independent random variables,

(iii) for any 0 ≤ s < t, the increment N_t − N_s has a Poisson distribution with parameter λ(t − s), that is,

P(N_t − N_s = k) = e^{-λ(t−s)} [λ(t − s)]^k / k!,

where k = 0, 1, 2, ... and λ > 0 is a fixed constant.

Definition 2.4.3 (of compound Poisson process). Let N = {N_t, t ≥ 0} be a Poisson process with rate λ. Consider a sequence {Y_n, n ≥ 1} of independent and identically distributed random variables, which are also independent of the process N. Set S_n = Y_1 + ... + Y_n. Then the process

Z_t = Y_1 + ... + Y_{N_t} = S_{N_t}, with Z_t = 0 if N_t = 0,

is called a compound Poisson process.

Proposition 2.4.4. The compound Poisson process has independent increments, and the law of an increment Z_t − Z_s has characteristic function

e^{(φ_{Y_1}(x) − 1) λ (t − s)},

where φ_{Y_1}(x) denotes the characteristic function of Y_1.
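Both objects can be simulated from exponential inter-arrival times: with i.i.d. Exp(λ) waiting times, N_t counts the arrivals up to time t, and Z_t adds an i.i.d. jump Y_i at each arrival. The sketch below (the rate, horizon and the jump law Y ~ N(1, 1) are arbitrary illustration choices) checks E(N_t) = Var(N_t) = λt and E(Z_t) = λt E(Y_1):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, n = 2.0, 3.0, 100_000
k_max = 40                         # P(N_t > 40) is negligible for lam * t = 6

# arrival times = partial sums of i.i.d. Exp(lam) inter-arrival times
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(n, k_max)), axis=1)
N_t = (arrivals <= t).sum(axis=1)
assert abs(N_t.mean() - lam * t) < 0.05        # E(N_t) = lam * t = 6
assert abs(N_t.var() - lam * t) < 0.2          # Var(N_t) = lam * t

# compound Poisson: Z_t = Y_1 + ... + Y_{N_t} with jumps Y_i ~ N(1, 1)
Y = rng.normal(1.0, 1.0, size=(n, k_max))
Z_t = np.where(arrivals <= t, Y, 0.0).sum(axis=1)
assert abs(Z_t.mean() - lam * t * 1.0) < 0.06  # E(Z_t) = lam * t * E(Y_1)
```

The construction from exponential waiting times is exactly where the memoryless property enters: it is what makes the increment counts over disjoint intervals independent.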

Chapter 3

Kolmogorov's theorems

3.1 Cylinder set

Consider a probability space (Ω, E, P) and a family of random variables {X_t}_{t∈[0,T]}. Every ω ∈ Ω corresponds with a trajectory t ↦ X_t(ω), t ∈ [0, T]. We want to characterize the mapping X defined from Ω into some space of functions, where X(ω) is a function of t:

X(ω)(t) := X_t(ω), ω ∈ Ω.

If we have no information about the trajectories, we can assume some hypotheses in order to define the codomain of the mapping X. Suppose, for example, that we deal only with constant or continuous trajectories. This means immersing the set of trajectories into the larger set of constant or continuous functions, respectively. In general, if the regularity of the trajectories is unknown, we can simply consider the very wide space ℝ^{[0,T]} of real functions, so that X : Ω → ℝ^{[0,T]}. We now want to equip ℝ^{[0,T]} with a σ-algebra that makes X measurable with respect to it. To this end, let us introduce the cylinder set

C_{t_1,t_2,...,t_n,B} = {f ∈ ℝ^{[0,T]} : (f(t_1), f(t_2), ..., f(t_n)) ∈ B},

where B ∈ B^n (B is called the base of the cylinder) and C ⊆ ℝ^{[0,T]}. Then consider the set C of all the cylinders C:

C = {C_{t_1,t_2,...,t_n,B} : n ≥ 1, (t_1, t_2, ..., t_n) ∈ [0,T]^n, B ∈ B^n}.

Actually, C is an algebra but not a σ-algebra. Anyway, we know that there exists the smallest σ-algebra containing C. We are going to denote it by σ(C). Now we can show that X^{-1}(σ(C)) ⊆ E.

Theorem 3.1.1. The preimage of σ(C) lies in E.

Proof. Consider the cylinder set C_{t_1,t_2,...,t_n,B} ∈ C ⊆ σ(C). Then

X^{-1}(C_{t_1,t_2,...,t_n,B}) = {ω ∈ Ω : X(ω) ∈ C_{t_1,t_2,...,t_n,B}}
= {ω ∈ Ω : (X(ω)(t_1), X(ω)(t_2), ..., X(ω)(t_n)) ∈ B} (3.1.1)
= {ω ∈ Ω : (X_{t_1}(ω), X_{t_2}(ω), ..., X_{t_n}(ω)) ∈ B} ∈ E.

Now consider the σ-algebra F = {E ⊆ ℝ^{[0,T]} : X^{-1}(E) ∈ E} and, since C ⊆ F, the theorem is proved. □

To fulfill the relations between the abstract probability space and the objects which we actually deal with, notice that the map X is measurable as a function (Ω, E) → (ℝ^{[0,T]}, σ(C)), so we can consider the image measure of P defined as usual: μ(E) = P(X^{-1}(E)), E ∈ σ(C). If we now consider two processes X, X′ (possibly defined on different probability spaces) with the same family M, that is,

P((X_{t_1}, X_{t_2}, ..., X_{t_n}) ∈ E) = P′((X′_{t_1}, X′_{t_2}, ..., X′_{t_n}) ∈ E),

then also μ_X = μ_{X′}. In other words, μ_X is determined by M only.

3.2 Kolmogorov's theorem I

Theorem 3.2.1 (of Kolmogorov, I). Given a family of probability measures M = {{μ_t}, {μ_{t_1,t_2}}, ...} with the compatibility property of Definition 2.1.4, there exists a unique measure μ on (ℝ^{[0,T]}, σ(C)) such that

μ(C_{t_1,t_2,...,t_n,B}) = μ_{t_1,t_2,...,t_n}(B),

and we can define, on the space (ℝ^{[0,T]}, σ(C), μ), the stochastic process X_t(ω) = ω(t) (with ω ∈ ℝ^{[0,T]}) which has the family M as its finite dimensional distributions.

Although this theorem assures the existence of a process, it is not a constructive theorem, so it does not provide enough information to find a process given a family of probability measures. For our purposes it is more useful to take into account another theorem, which we will call the second theorem of Kolmogorov.¹ From now on, if not otherwise specified, we will always refer to the second one.

¹ Note that some authors call it Kolmogorov's criterion or the Kolmogorov continuity criterion.

3.3 Kolmogorov's theorem II

Before stating the theorem, let us recall some notions.

Definition 3.3.1 (equivalent processes). A stochastic process $\{X_t\}_{t \in [0,T]}$ is equivalent to another stochastic process $\{Y_t\}_{t \in [0,T]}$ if $\mathbb{P}(X_t \neq Y_t) = 0$ for all $t \in [0,T]$. It is also said that $\{X_t\}_{t \in [0,T]}$ is a version of $\{Y_t\}_{t \in [0,T]}$. Note, however, that equivalent processes may have different trajectories.

Definition 3.3.2 (continuity in probability). A real-valued stochastic process $\{X_t\}_{t \in [0,T]}$ is said to be continuous in probability if, for any $\varepsilon > 0$ and every $t \in [0,T]$,

$$\lim_{s \to t} \mathbb{P}(|X_t - X_s| > \varepsilon) = 0. \tag{3.3.1}$$

Definition 3.3.3. Given a sequence of events $A_1, A_2, \dots$, one defines $\limsup_k A_k$ as the event

$$\limsup_k A_k = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k.$$

Lemma 3.3.4 (Borel-Cantelli). Let $A_1, A_2, \dots$ be a sequence of events such that

$$\sum_{k=1}^{\infty} \mathbb{P}(A_k) < \infty;$$

then the event $A = \limsup_k A_k$ has null probability, $\mathbb{P}(A) = 0$. Of course $\mathbb{P}(A^c) = 1$, where $A^c = \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} A_k^c$.

Proof. We have $A \subset \bigcup_{k=n}^{\infty} A_k$ for every $n$. Then

$$\mathbb{P}(A) \leq \sum_{k=n}^{\infty} \mathbb{P}(A_k),$$

which, by hypothesis, tends to zero as $n$ tends to infinity.

Definition 3.3.5 (Hölder continuity).² A function $f \colon \mathbb{R}^n \to \mathbb{K}$ ($\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$) satisfies the Hölder condition, or is said to be Hölder continuous, if there exist non-negative real constants $C$ and $\alpha$ such that

$$|f(x) - f(y)| \leq C\,|x - y|^{\alpha}$$

² More generally, this condition can be formulated for functions between any two metric spaces.

for all $x, y$ in the domain of $f$.³

Definition 3.3.6 (dyadic rational). A dyadic rational (or dyadic number) is a rational number $\frac{a}{2^b}$ whose denominator is a power of $2$, with $a$ an integer and $b$ a natural number.

In what follows we will deal with the dyadic set $D = \bigcup_{m=0}^{\infty} D_m$, where $D_m := \{\frac{k}{2^m} : k = 0, 1, \dots, 2^m\}$.

Figure 3.3.1: Construction of dyadic rationals.

Theorem 3.3.7 (Kolmogorov, II). Let $(\Omega, \mathcal{E}, \mathbb{P})$ be a probability space and consider a stochastic process $\{X_t\}_{t \in [0,T]}$ with the property

$$\mathbb{E}\,|X_t - X_s|^{\alpha} \leq c\,|t - s|^{1+\beta}, \qquad s, t \in [0,T], \tag{3.3.2}$$

where $c, \alpha, \beta$ are non-negative real constants. Then there exists another stochastic process $\{X'_t\}_{t \in [0,T]}$ over the same space such that

(i) $\{X'_t\}_{t \in [0,T]}$ is locally Hölder continuous with exponent $\gamma$, for every $\gamma \in (0, \frac{\beta}{\alpha})$,⁴

(ii) $\mathbb{P}(X_t \neq X'_t) = 0$ for all $t \in [0,T]$.

Proof. Chebyshev's inequality implies

$$\mathbb{P}(|X_t - X_s| > \varepsilon) \leq \frac{1}{\varepsilon^{\alpha}}\,\mathbb{E}(|X_t - X_s|^{\alpha}) \leq \frac{c}{\varepsilon^{\alpha}}\,|t - s|^{1+\beta}. \tag{3.3.3}$$

³ If $\alpha = 1$ one gets just Lipschitz continuity; if $\alpha = 0$ the function is simply bounded.
⁴ It is not necessary that $\{X_t\}_{t \in [0,T]}$ itself have Hölder continuous trajectories.
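As an aside, the dyadic sets $D_m$ that drive the next step of the proof are easy to generate explicitly. A small sketch, added here for illustration (the helper `dyadics` is our own name):

```python
from fractions import Fraction

# D_m = { k / 2^m : k = 0, ..., 2^m }; the union D of all D_m is dense in [0,1].
def dyadics(m):
    return {Fraction(k, 2 ** m) for k in range(2 ** m + 1)}

# Each level refines the previous one: D_m is a subset of D_{m+1},
# and |D_m| = 2^m + 1, so consecutive points are 2^{-m} apart.
assert dyadics(2) <= dyadics(3)
assert len(dyadics(4)) == 2 ** 4 + 1
print(sorted(dyadics(2)))  # 0, 1/4, 1/2, 3/4, 1
```

Density of $D$ in $[0,T]$ is what lets the proof first control increments on each $D_m$ and then extend the estimate to all instants by continuity.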

Using the notion of dyadic rationals (Definition 3.3.6), we look for a useful estimate by studying the following inequality:

$$\mathbb{P}(|X_{t_{i+1}} - X_{t_i}| > \varepsilon_m) \leq \frac{c}{\varepsilon_m^{\alpha}} \cdot \frac{1}{2^{m(1+\beta)}}, \qquad t_i \in D_m.$$

Let us consider the event $A = (\max_i |X_{t_{i+1}} - X_{t_i}| > \varepsilon_m)$. It is easy to see that $A = \bigcup_{i=0}^{2^m - 1} A_i$, where $A_i = (|X_{t_{i+1}} - X_{t_i}| > \varepsilon_m)$. So, by the properties of the probability measure and by the estimate above, we get

$$\mathbb{P}\big(\max_i |X_{t_{i+1}} - X_{t_i}| > \varepsilon_m\big) \leq 2^m\,\frac{c}{\varepsilon_m^{\alpha}}\,\frac{1}{2^{m(1+\beta)}} = \frac{c}{\varepsilon_m^{\alpha}\,2^{m\beta}}.$$

Set now $A_m = (\max_i |X_{t_{i+1}} - X_{t_i}| > \varepsilon_m)$ and $\varepsilon_m = \frac{1}{2^{\gamma m}}$. These choices yield

$$\sum_{m=0}^{\infty} \mathbb{P}(A_m) \leq c \sum_{m=0}^{\infty} \frac{2^{m\gamma\alpha}}{2^{m\beta}} = c \sum_{m=0}^{\infty} \frac{1}{2^{m(\beta - \gamma\alpha)}} \tag{3.3.4}$$

and, choosing $\gamma < \frac{\beta}{\alpha}$, the sum converges.

Consider now the set $A = \limsup A_m$. By the Borel-Cantelli lemma we have $\mathbb{P}(A) = 0$. Take an element $\omega \notin A$; then $\omega \in \bigcup_{m=1}^{\infty} \bigcap_{k=m}^{\infty} A_k^c$. It follows that there exists $\bar{m}$ such that $\omega \in \bigcap_{k=\bar{m}}^{\infty} A_k^c$, hence $\omega \in A_k^c$ for all $k \geq \bar{m}$. It results that

$$\max_i |X_{t_{i+1}}(\omega) - X_{t_i}(\omega)| \leq \frac{1}{2^{m\gamma}} \qquad \forall m \geq \bar{m}.$$

Summarizing, we have constructed a null measure set $A$ such that, for every $\omega \notin A$, $\max_i |X_{t_{i+1}}(\omega) - X_{t_i}(\omega)| \leq \frac{1}{2^{m\gamma}}$ for all $m \geq \bar{m}$, and we want to prove that, for $s, t \in D$, the following estimate holds: $|X_t - X_s| \leq c'\,|t - s|^{\gamma}$. Thus, suppose that $|t - s| < \frac{1}{2^n} = \delta$. This implies $t, s \in D_m$ for some $m > n$. We now try to prove

the following inequality:

$$|X_t - X_s| \leq 2\sum_{k=n+1}^{m} \frac{1}{2^{k\gamma}} \leq \frac{2}{1 - 2^{-\gamma}} \cdot \frac{1}{2^{(n+1)\gamma}}. \tag{3.3.5}$$

We can prove it by induction. To this end, suppose the inequality holds true for numbers in $D_{m-1}$. Take $t, s \in D_m$ and $s', t' \in D_{m-1}$ such that $|s - s'| \leq \frac{1}{2^m}$, $|t - t'| \leq \frac{1}{2^m}$. Hence

$$|X_t - X_s| \leq |X_t - X_{t'}| + |X_{t'} - X_{s'}| + |X_{s'} - X_s| \leq \frac{2}{2^{m\gamma}} + 2\sum_{k=n+1}^{m-1} \frac{1}{2^{k\gamma}} = 2\sum_{k=n+1}^{m} \frac{1}{2^{k\gamma}}.$$

Observe now that the inequality holds for the first step ($m = n+1$) because $|X_t - X_s| \leq \frac{2}{2^{(n+1)\gamma}}$. Thus (3.3.5) is proved.

Set now $c' = \frac{2}{1 - 2^{-\gamma}}$ and consider two instants $t, s \in D$. Then there exists $n$ such that $\frac{1}{2^{n+1}} \leq |t - s| < \frac{1}{2^n} = \delta$. Finally,

$$|X_t(\omega) - X_s(\omega)| \leq c' \left(\frac{1}{2^{n+1}}\right)^{\gamma} \leq c'\,|t - s|^{\gamma}.$$

We have obtained the estimate we were looking for, but only for dyadic instants, with no information about the other instants. Define a new process $X'_t(\omega)$ as follows:

(i) $X'_t(\omega) = X_t(\omega)$, if $t \in D$,
(ii) $X'_t(\omega) = \lim_{n} X_{t_n}(\omega)$, if $t \notin D$, where $t_n \to t$, $t_n \in D$,
(iii) $X'_t(\omega) = z$, for some fixed $z \in \mathbb{R}$, when $\omega \in A$.

This is surely a process with continuous trajectories. All we have to prove now is that $X'_t(\omega)$ is a version of $X_t(\omega)$, that is, $\mathbb{P}(X_t = X'_t) = 1$. This is true by definition if $t \in D$. If $t \notin D$, consider a sequence $t_n \to t$ such that $X_{t_n} \to X'_t$ almost everywhere; this implies that $X_{t_n} \to X'_t$ in probability, and continuity in probability of $X$ gives $X_t = X'_t$ almost surely.

Finally, we have to show that the two processes have the same family of finite dimensional measures. Indeed, considering the set $\Omega_t = \{\omega : X_t(\omega) = X'_t(\omega)\}$, we get, obviously, $\mathbb{P}(\Omega_t) = 1$ and $\mathbb{P}(\bigcup_{t_i} \Omega_{t_i}^c) \leq \sum_{t_i} \mathbb{P}(\Omega_{t_i}^c) = 0$. Hence

$$\mathbb{P}((X_{t_1}, \dots, X_{t_n}) \in B) = \mathbb{P}\big((X_{t_1}, \dots, X_{t_n}) \in B,\ \textstyle\bigcap_{t_i} \Omega_{t_i}\big) = \mathbb{P}\big((X'_{t_1}, \dots, X'_{t_n}) \in B,\ \textstyle\bigcap_{t_i} \Omega_{t_i}\big) = \mathbb{P}((X'_{t_1}, \dots, X'_{t_n}) \in B).$$
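For the Wiener process, the hypothesis (3.3.2) holds with $\alpha = 4$, $\beta = 1$, $c = 3$, since $\mathbb{E}\,|W_t - W_s|^4 = 3\,|t-s|^2$; the theorem then gives local Hölder continuity for every exponent $\gamma < \beta/\alpha = 1/4$ (using higher moments pushes this towards $1/2$). A quick Monte Carlo check of that fourth-moment identity, added here as an illustration and not part of the original notes:

```python
import random

# For a Wiener process, E|W_t - W_s|^4 = 3 (t - s)^2, i.e. condition (3.3.2)
# holds with alpha = 4, beta = 1, c = 3. Sample the increment
# W_t - W_s ~ N(0, h) directly and estimate its fourth moment.
random.seed(0)
h = 0.1          # |t - s|
n = 200_000
m4 = sum(random.gauss(0.0, h ** 0.5) ** 4 for _ in range(n)) / n
print(m4, 3 * h ** 2)  # empirical fourth moment vs. the exact value 3 h^2
assert abs(m4 - 3 * h ** 2) < 0.01
```

The same experiment with the exponent $4$ replaced by $2$ gives $\mathbb{E}\,|W_t - W_s|^2 = |t-s|$, which only satisfies (3.3.2) with $\beta = 0$ and therefore yields no Hölder exponent; this is why the fourth moment (or higher) is the one used for Brownian motion.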


Chapter 4

Martingales

We first introduce the notion of conditional expectation of a random variable with respect to a $\sigma$-algebra.

4.1 Conditional expectation

Lemma 4.1.1. Given a random variable $X \in L^1(\Omega)$, there exists a unique $\mathcal{F}$-measurable map $Z$ such that, for every $F \in \mathcal{F}$,

$$\int_F X(\omega)\,\mathbb{P}(d\omega) = \int_F Z(\omega)\,\mathbb{P}(d\omega).$$

Definition 4.1.2 (conditional expectation). Let $X$ be a random variable in $L^1(\Omega)$ and $Z$ an $\mathcal{F}$-measurable random variable such that, for every $F \in \mathcal{F}$,

$$\int_F X(\omega)\,\mathbb{P}(d\omega) = \int_F Z(\omega)\,\mathbb{P}(d\omega),$$

or, equivalently,

$$\mathbb{E}(Z\,\mathbf{1}_F) = \mathbb{E}(X\,\mathbf{1}_F). \tag{4.1.1}$$

We will denote this function by $\mathbb{E}(X \mid \mathcal{F})$. Its existence is guaranteed by the Radon-Nikodym theorem (1.3.1) and it is unique up to null measure sets. Moreover, equation (4.1.1) implies that for any bounded $\mathcal{F}$-measurable random variable $Y$ we have $\mathbb{E}(\mathbb{E}(X \mid \mathcal{F})\,Y) = \mathbb{E}(XY)$.

Properties of conditional expectation

(i) If $X$ is an integrable and $\mathcal{F}$-measurable random variable, then

$$\mathbb{E}(X \mid \mathcal{F}) = X \quad \text{a.e.} \tag{4.1.2}$$

(ii) If $X_1, X_2$ are two integrable random variables, then

$$\mathbb{E}(\alpha X_1 + \beta X_2 \mid \mathcal{F}) = \alpha\,\mathbb{E}(X_1 \mid \mathcal{F}) + \beta\,\mathbb{E}(X_2 \mid \mathcal{F}) \quad \text{a.e.} \tag{4.1.3}$$

(iii) A random variable and its conditional expectation have the same expectation:

$$\mathbb{E}(\mathbb{E}(X \mid \mathcal{F})) = \mathbb{E}(X). \tag{4.1.4}$$

(iv) Given $X \in L^1(\Omega)$, consider the $\sigma$-algebra generated by $X$: $\mathcal{F}_X = \{X^{-1}(I),\ I \in \mathcal{B}\} \subset \mathcal{E}$. If $\mathcal{F}$ and $\mathcal{F}_X$ are independent,¹ then

$$\mathbb{E}(X \mid \mathcal{F}) = \mathbb{E}(X) \quad \text{a.e.} \tag{4.1.5}$$

In fact, the constant $\mathbb{E}(X)$ is clearly $\mathcal{F}$-measurable, and for all $A \in \mathcal{F}$ we have

$$\mathbb{E}(X\,\mathbf{1}_A) = \mathbb{E}(X)\,\mathbb{E}(\mathbf{1}_A) = \mathbb{E}(\mathbb{E}(X)\,\mathbf{1}_A). \tag{4.1.6}$$

(v) Given $X, Y$ with $X$ a general random variable and $Y$ a bounded and $\mathcal{F}$-measurable random variable, then

$$\mathbb{E}(XY \mid \mathcal{F}) = Y\,\mathbb{E}(X \mid \mathcal{F}) \quad \text{a.e.} \tag{4.1.7}$$

In fact, the random variable $Y\,\mathbb{E}(X \mid \mathcal{F})$ is integrable and $\mathcal{F}$-measurable, and for all $A \in \mathcal{F}$,

$$\mathbb{E}(\mathbb{E}(X \mid \mathcal{F})\,Y\,\mathbf{1}_A) = \mathbb{E}(\mathbb{E}(XY\,\mathbf{1}_A \mid \mathcal{F})) = \mathbb{E}(XY\,\mathbf{1}_A). \tag{4.1.8}$$

(vi) Given the random variable $\mathbb{E}(X \mid \mathcal{G})$ with $X \in L^1(\Omega)$ and $\mathcal{F} \subset \mathcal{G} \subset \mathcal{E}$, then

$$\mathbb{E}(\mathbb{E}(X \mid \mathcal{G}) \mid \mathcal{F}) = \mathbb{E}(\mathbb{E}(X \mid \mathcal{F}) \mid \mathcal{G}) = \mathbb{E}(X \mid \mathcal{F}) \quad \text{a.e.} \tag{4.1.9}$$

¹ For any $F \in \mathcal{F}$ and $G \in \mathcal{G}$, $\mathcal{F}$ and $\mathcal{G}$ are said to be independent if $F$ and $G$ are independent: $\mathbb{P}(F \cap G) = \mathbb{P}(F)\,\mathbb{P}(G)$.
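On a finite sample space the conditional expectation with respect to the $\sigma$-algebra generated by a partition is simply the average of $X$ over each cell, and the properties above can be verified by direct computation. A small sketch added for illustration (the uniform six-point space and the parity partition are our own example, not from the notes):

```python
# E(X | F), for F generated by a finite partition {A_i}, is constant on each
# cell A_i, equal to the cell average (1 / P(A_i)) * sum_{w in A_i} X(w) P(w).
omega = range(6)                      # Omega = {0, ..., 5}, uniform P
P = {w: 1 / 6 for w in omega}
X = {w: float(w) for w in omega}      # X(w) = w
partition = [[0, 2, 4], [1, 3, 5]]    # A_1 = evens, A_2 = odds

def cond_exp(w):
    A = next(a for a in partition if w in a)
    return sum(X[v] * P[v] for v in A) / sum(P[v] for v in A)

# Property (iii): E(E(X | F)) = E(X).
lhs = sum(cond_exp(w) * P[w] for w in omega)
rhs = sum(X[w] * P[w] for w in omega)
assert abs(lhs - rhs) < 1e-12
print(cond_exp(0), cond_exp(1))  # cell averages, roughly 2.0 and 3.0
```

The same two-line check works for property (i) (take $X$ itself constant on the cells) and for linearity (ii).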

4.2 Filtrations and Martingales

Definition 4.2.1 (filtration). Given a probability space $(\Omega, \mathcal{E}, \mathbb{P})$,² a filtration is defined as a family $\{\mathcal{F}_t\} \subset \mathcal{E}$ of $\sigma$-algebras increasing with time, that is, if $t < t'$ then $\mathcal{F}_t \subset \mathcal{F}_{t'}$.

Definition 4.2.2 (martingale). Given a stochastic process $\{X_t,\ t \geq 0\}$ and a filtration $\{\mathcal{F}_t\}_{t \geq 0}$, we say that $\{X_t,\ t \geq 0\}$ is a martingale (resp. a supermartingale, a submartingale) with respect to the given filtration if:

(i) $X_t \in L^1(\Omega)$ for any $t \in T$,
(ii) $X_t$ is $\mathcal{F}_t$-measurable (we say that the process is adapted to the filtration), that is, $\mathcal{F}^X_t \subset \mathcal{F}_t$, where $\mathcal{F}^X_t = \sigma\{X_u : u \leq t\}$,
(iii) $\mathbb{E}(X_t \mid \mathcal{F}_s) = X_s$ (resp. $\leq$, $\geq$) a.e. for $s \leq t$.

4.3 Examples

Example 4.3.1. Consider the smallest $\sigma$-algebra contained in $\mathcal{E}$: $\mathcal{F}_0 = \{\emptyset, \Omega\}$. To compute $Z := \mathbb{E}(X \mid \mathcal{F}_0)$, notice that by definition we have

$$\int_F X\,d\mathbb{P} = \int_F \mathbb{E}(X \mid \mathcal{F}_0)\,d\mathbb{P} \qquad \forall F \in \mathcal{F}_0. \tag{4.3.1}$$

So, since $\mathbb{E}(X \mid \mathcal{F}_0)$ is constant,

$$\int_F X\,d\mathbb{P} = \int_F \mathbb{E}(X \mid \mathcal{F}_0)\,d\mathbb{P} = \mathrm{const} \cdot \mathbb{P}(F). \tag{4.3.2}$$

Moreover, for a martingale, due to the inclusion $\mathcal{F}_0 \subset \mathcal{F}_s$, we also obtain the following relations:

$$\mathbb{E}(\mathbb{E}(X_t \mid \mathcal{F}_s)) = \mathbb{E}(\mathbb{E}(X_t \mid \mathcal{F}_s) \mid \mathcal{F}_0) = \mathbb{E}(X_t \mid \mathcal{F}_0) = \mathbb{E}(X_t) = \mathbb{E}(X_s) = \mathbb{E}(X_0).$$

Consider now the first order $\sigma$-algebra $\mathcal{F}_1 = \{\emptyset, A, A^c, \Omega\}$ for some subset $A \subset \Omega$ with $0 < \mathbb{P}(A) < 1$. What is the conditional expectation $\mathbb{E}(X \mid \mathcal{F}_1)$? It is constant on $A$ and on $A^c$:

$$\mathbb{E}(X \mid \mathcal{F}_1) = \begin{cases} c_1 & \omega \in A, \\ c_2 & \omega \in A^c. \end{cases}$$

² Note that $\mathbb{P}$ is not necessary in this definition.

Applying formula (4.1.1),

$$\int_A \mathbb{E}(X \mid \mathcal{F}_1)\,d\mathbb{P} = c_1\,\mathbb{P}(A) \;\Rightarrow\; c_1 = \frac{1}{\mathbb{P}(A)} \int_A X\,d\mathbb{P},$$

$$\int_{A^c} \mathbb{E}(X \mid \mathcal{F}_1)\,d\mathbb{P} = c_2\,\mathbb{P}(A^c) \;\Rightarrow\; c_2 = \frac{1}{\mathbb{P}(A^c)} \int_{A^c} X\,d\mathbb{P}.$$

We can extend this result to any partition of $\Omega$: for every subset $A_i$, the corresponding constant $c_i$ will be given by

$$c_i = \frac{1}{\mathbb{P}(A_i)} \int_{A_i} X\,d\mathbb{P}.$$

Example 4.3.2. Consider a Wiener process adapted to the filtration $\mathcal{F}_t$, so that $\mathcal{F}^W_t \subset \mathcal{F}_t$ and $\mathcal{F}_t$ and $W_{t+h} - W_t$ are independent. These hypotheses yield the equalities

$$\mathbb{E}(W_t \mid \mathcal{F}_s) = \mathbb{E}(W_t - W_s + W_s \mid \mathcal{F}_s) = \mathbb{E}(W_t - W_s \mid \mathcal{F}_s) + \mathbb{E}(W_s \mid \mathcal{F}_s) = \mathbb{E}(W_t - W_s) + W_s = 0 + W_s = W_s.$$

This reasoning can be applied to the process $W_t^n$ in the following way:

$$\begin{aligned}
\mathbb{E}(W_t^n \mid \mathcal{F}_s) &= \mathbb{E}\big((W_t - W_s + W_s)^n \mid \mathcal{F}_s\big) = \mathbb{E}\Big(\sum_{k=0}^{n} \binom{n}{k} (W_t - W_s)^k\,W_s^{\,n-k} \,\Big|\, \mathcal{F}_s\Big) \\
&= \sum_{k=0}^{n} \binom{n}{k}\,\mathbb{E}\big((W_t - W_s)^k\,W_s^{\,n-k} \mid \mathcal{F}_s\big) = \sum_{k=0}^{n} \binom{n}{k}\,W_s^{\,n-k}\,\mathbb{E}\big((W_t - W_s)^k \mid \mathcal{F}_s\big) \\
&= \sum_{k=0}^{n} \binom{n}{k}\,W_s^{\,n-k}\,\mathbb{E}\big((W_t - W_s)^k\big),
\end{aligned}$$

and for odd $k$ the general term vanishes. So, setting $b = \lfloor \frac{n}{2} \rfloor$,

$$\mathbb{E}(W_t^n \mid \mathcal{F}_s) = \sum_{k=0}^{b} \binom{n}{2k}\,W_s^{\,n-2k}\,\mathbb{E}\big((W_t - W_s)^{2k}\big) = \sum_{k=0}^{b} \binom{n}{2k}\,\frac{(2k)!}{2^k\,k!}\,W_s^{\,n-2k}\,(t-s)^k.$$

For example, for $n = 2$ one has

$$\mathbb{E}(W_t^2 \mid \mathcal{F}_s) = W_s^2 - s + t.$$

Thus $W_t^2$ is not a martingale, but since $t$ is a constant, we can easily recognize that the process $W_t^2 - t$ is actually a martingale:

$$\mathbb{E}(W_t^2 - t \mid \mathcal{F}^W_s) = W_s^2 - s.$$

We can try to perform the same computation for an exponential function and look for

a solution of $\mathbb{E}(e^{\vartheta W_t} \mid \mathcal{F}_s)$, with $\vartheta \in \mathbb{R}$. In order to compute this conditional expectation it is useful to write the exponential function in terms of its Taylor expansion, $e^{\vartheta W_t} = \sum_{n=0}^{\infty} \frac{\vartheta^n W_t^n}{n!}$:

$$\begin{aligned}
\mathbb{E}(e^{\vartheta W_t} \mid \mathcal{F}_s) &= \sum_{n=0}^{\infty} \frac{\vartheta^n}{n!} \sum_{k=0}^{b} \binom{n}{2k}\,\frac{(2k)!}{2^k\,k!}\,W_s^{\,n-2k}\,(t-s)^k \\
&= \sum_{n=0}^{\infty} \vartheta^n \sum_{k=0}^{b} \frac{1}{(n-2k)!\,2^k\,k!}\,W_s^{\,n-2k}\,(t-s)^k \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{b} \frac{1}{(n-2k)!\,k!}\,(\vartheta W_s)^{n-2k}\,\Big[\frac{\vartheta^2}{2}(t-s)\Big]^k \\
&= \sum_{k=0}^{\infty} \frac{\big[\frac{\vartheta^2}{2}(t-s)\big]^k}{k!} \sum_{n \geq 2k} \frac{(\vartheta W_s)^{n-2k}}{(n-2k)!} \\
&= e^{\frac{\vartheta^2}{2}(t-s)}\,e^{\vartheta W_s} = e^{\frac{\vartheta^2}{2}(t-s) + \vartheta W_s}.
\end{aligned}$$

Then the considered function is a martingale because $\mathbb{E}(e^{\vartheta W_t} \mid \mathcal{F}_s) = e^{\frac{\vartheta^2}{2}(t-s) + \vartheta W_s}$, that is,

$$\mathbb{E}\big(e^{\vartheta W_t - \frac{\vartheta^2 t}{2}} \mid \mathcal{F}_s\big) = e^{\vartheta W_s - \frac{\vartheta^2 s}{2}}.$$
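The two martingale identities just derived, $\mathbb{E}(W_t^2 - t \mid \mathcal{F}_s) = W_s^2 - s$ and $\mathbb{E}(e^{\vartheta W_t - \vartheta^2 t/2} \mid \mathcal{F}_s) = e^{\vartheta W_s - \vartheta^2 s/2}$, can be sanity-checked by simulation: condition on a fixed value $W_s = w$ and sample the independent increment $Z = W_t - W_s \sim N(0, t-s)$. The following Monte Carlo sketch is an illustration added here, not part of the original notes (parameter values are arbitrary):

```python
import math
import random

random.seed(3)
s, t, w, theta, n = 1.0, 2.0, 0.4, 0.5, 200_000

# Independent increments Z = W_t - W_s ~ N(0, t - s), conditioning on W_s = w.
zs = [random.gauss(0.0, (t - s) ** 0.5) for _ in range(n)]

# Identity 1: E(W_t^2 - t | F_s) = W_s^2 - s.
lhs1 = sum((w + z) ** 2 - t for z in zs) / n
assert abs(lhs1 - (w * w - s)) < 0.03

# Identity 2: E(exp(theta W_t - theta^2 t / 2) | F_s)
#             = exp(theta W_s - theta^2 s / 2).
lhs2 = sum(math.exp(theta * (w + z) - theta ** 2 * t / 2) for z in zs) / n
assert abs(lhs2 - math.exp(theta * w - theta ** 2 * s / 2)) < 0.01

print(lhs1, lhs2)
```

Both checks rely only on the independence of the increment from $\mathcal{F}_s$, which is exactly the ingredient the analytical computations above use.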

Chapter 5

Monotone classes

Talking about Markov processes we introduced the relation

$$\mathbb{P}(X_t \in I \mid X_s = x, X_{s_1} = x_1, \dots, X_{s_n} = x_n) = \mathbb{P}(X_t \in I \mid X_s = x) \tag{5.0.1}$$

and we know how it can be computed. However, equation (5.0.1) can be proved in the light of the tools we are about to define. In general, if one wants to prove a certain property of a $\sigma$-algebra, it may be helpful to prove it first for a particular subset of the $\sigma$-algebra itself (namely a $\pi$-system) and then extend the result to the whole family.

Definition 5.0.1 ($\pi$-system). A family $\mathcal{P}$, with $\mathcal{P} \subset \mathcal{E}$, where $\mathcal{E}$ is the $\sigma$-algebra of events of the sample space, is called a $\pi$-system if

$$A, B \in \mathcal{P} \;\Rightarrow\; A \cap B \in \mathcal{P}. \tag{5.0.2}$$

Suppose, for example, that we have two measures $\mu_1, \mu_2$. If we want to prove that $\mu_1 = \mu_2$ on some $\sigma$-algebra $\mathcal{F}$, that is, $\mu_1(E) = \mu_2(E)$ for all $E \in \mathcal{F}$, we can start by proving it on a particular subset $\mathcal{P} \subset \mathcal{F}$. This example shows not just the usefulness of $\pi$-systems, but also the need for a new tool able to collect all the subsets with the same property together. These are the $\lambda$-systems.

Definition 5.0.2 ($\lambda$-system). A family $\mathcal{L}$, with $\mathcal{L} \subset \mathcal{E}$, is called a $\lambda$-system if:

(i) if $A_1 \subset A_2 \subset A_3 \subset \dots$ and $A_i \in \mathcal{L}$, then $\bigcup_i A_i \in \mathcal{L}$,
(ii) if $A, B \in \mathcal{L}$ and $A \subset B$, then $B \setminus A \in \mathcal{L}$.

Proposition 5.0.3. If $\mathcal{L}$ is a $\lambda$-system such that $\Omega \in \mathcal{L}$ and $\mathcal{L}$ is also a $\pi$-system, then $\mathcal{L}$ is a $\sigma$-algebra.

Proof. By hypothesis $\Omega \in \mathcal{L}$; by the definition of $\lambda$-system, $A \in \mathcal{L} \Rightarrow A^c \in \mathcal{L}$; finally, using the hypothesis that $\mathcal{L}$ is also a $\pi$-system, we get $\bigcup_i A_i \in \mathcal{L}$ for any sequence $\{A_i\} \subset \mathcal{L}$.

A fundamental relation between $\pi$-systems and $\lambda$-systems is given by the Dynkin lemma. Before stating and proving it, let us introduce some observations.

Lemma 5.0.4. Given a $\pi$-system $\mathcal{P}$ and a $\lambda$-system $\mathcal{L}$ such that $\Omega \in \mathcal{L}$ and $\mathcal{P} \subset \mathcal{L}$, define:

$$\mathcal{L}_1 = \{E \in \mathcal{L} : \forall C \in \mathcal{P},\ E \cap C \in \mathcal{L}\},$$
$$\mathcal{L}_2 = \{E \in \mathcal{L}_1 : \forall A \in \mathcal{L}_1,\ E \cap A \in \mathcal{L}\}.$$

Then $\mathcal{L}_1$ and $\mathcal{L}_2$ are $\lambda$-systems and $\mathcal{P} \subset \mathcal{L}_2 \subset \mathcal{L}_1 \subset \mathcal{L}$.

Proof. Consider $\mathcal{L}_1$ first. We want to prove that $\mathcal{L}_1$ is a $\lambda$-system and that $\mathcal{P} \subset \mathcal{L}_1 \subset \mathcal{L}$; the same kind of reasoning then extends to $\mathcal{L}_2$. Observe first that $\mathcal{L}_1$ is non-empty: for example $\Omega \in \mathcal{L}_1$ and $\mathcal{P} \subset \mathcal{L}_1$. We know that if $A_1 \subset A_2 \subset A_3 \subset \dots$ with $A_k \in \mathcal{L}$, then $\bigcup_k A_k \in \mathcal{L}$, because $\mathcal{L}$ is a $\lambda$-system; but we want to prove this for elements of $\mathcal{L}_1$, so take $C \cap (\bigcup_k A_k)$ with $A_k \in \mathcal{L}_1$ and $C \in \mathcal{P}$. Does it belong to $\mathcal{L}$? Notice that we can write $C \cap (\bigcup_k A_k)$ as $\bigcup_k (C \cap A_k)$. Now define $\tilde{A}_k := C \cap A_k$, which belongs to $\mathcal{L}$ by hypothesis. So, since $\tilde{A}_1 \subset \tilde{A}_2 \subset \dots$, we obtain $\bigcup_k \tilde{A}_k \in \mathcal{L}$, which is the first property of a $\lambda$-system. To verify the other property that makes $\mathcal{L}_1$ a $\lambda$-system, we need to show that $B \setminus A$ belongs to $\mathcal{L}_1$ when $A$ is a subset of $B$ and $A, B \in \mathcal{L}_1$. This is equivalent to verifying that $C \cap (B \setminus A)$ belongs to $\mathcal{L}$ for any $C \in \mathcal{P}$. But $C \cap (B \setminus A) = (C \cap B) \setminus (C \cap A)$, and $C \cap A$, as well as $C \cap B$, belongs to $\mathcal{L}$. So we conclude that $\mathcal{L}_1$ (and for the same reasons $\mathcal{L}_2$) is a $\lambda$-system, so $\mathcal{P} \subset \mathcal{L}_2 \subset \mathcal{L}_1 \subset \mathcal{L}$.

5.1 The Dynkin lemma

We can now state and prove the following lemma.

Lemma 5.1.1 (Dynkin).¹ Given a $\pi$-system $\mathcal{P}$ and a $\lambda$-system $\mathcal{L}$ such that

¹ This is a probabilistic version of the monotone class method in measure theory (product spaces). See Rudin, Chp.

$\Omega \in \mathcal{L}$ and $\mathcal{P} \subset \mathcal{L}$, then

$$\mathcal{P} \subset \sigma(\mathcal{P}) \subset \mathcal{L}, \tag{5.1.1}$$

where $\sigma(\mathcal{P})$ is the $\sigma$-algebra generated by $\mathcal{P}$.

Proof. Consider the family of all $\lambda$-systems $\mathcal{L}^{(\alpha)}$ such that $\mathcal{P} \subset \mathcal{L}^{(\alpha)}$ and $\Omega \in \mathcal{L}^{(\alpha)}$. Set $\Lambda := \bigcap_\alpha \mathcal{L}^{(\alpha)}$. $\Lambda$ is a $\lambda$-system (an arbitrary intersection of $\lambda$-systems is again a $\lambda$-system). Since $\Lambda \subset \mathcal{L}$, $\Lambda$ has the same properties as $\mathcal{L}$, so $\mathcal{P} \subset \Lambda$ and $\Omega \in \Lambda$. Applying Lemma 5.0.4 to $\Lambda$, we can construct the following hierarchy of inclusions:

$$\mathcal{P} \subset \dots \subset \Lambda_2 \subset \Lambda_1 \subset \Lambda \subset \mathcal{L},$$

but $\Lambda_1, \Lambda_2, \dots$ all coincide with $\Lambda$, because $\Lambda = \bigcap_\alpha \mathcal{L}^{(\alpha)}$ is the smallest $\lambda$-system containing $\mathcal{P}$ and $\Omega$. In other words, $\mathcal{P} \subset \Lambda = \Lambda_1 = \Lambda_2 = \dots$ Moreover, it is now easy to see that $\Lambda$ is a $\pi$-system ($\Lambda = \Lambda_2$ means that the intersection of any two elements of $\Lambda$ lies in $\Lambda$) and, by Proposition 5.0.3, $\Lambda$ is a $\sigma$-algebra. This fact concludes the proof, because²

$$\mathcal{P} \subset \sigma(\mathcal{P}) \subset \Lambda \subset \mathcal{L}.$$

5.2 Some applications of the Dynkin lemma

Example 5.2.1. Consider a $\pi$-system $\mathcal{P}$ and suppose $\mathcal{E} = \sigma(\mathcal{P})$, $\Omega \in \mathcal{P}$ and $\mathbb{P}_1(E) = \mathbb{P}_2(E)$ for all $E \in \mathcal{P}$, where $\mathbb{P}_1$ and $\mathbb{P}_2$ are probability measures. Using the Dynkin lemma, show that $\mathbb{P}_1 \equiv \mathbb{P}_2$ on $\mathcal{E}$.

Proof. $\mathcal{P}$ is a $\pi$-system by hypothesis. Take $\mathcal{L} = \{F \in \mathcal{E} : \mathbb{P}_1(F) = \mathbb{P}_2(F)\}$. Since $\mathcal{P} \subset \mathcal{L}$, $\mathcal{L}$ is non-empty. Considering $F_1 \subset F_2 \subset \dots$ with $F_k \in \mathcal{L}$ and remembering the countable additivity of probability measures, it is easy to show that $\mathcal{L}$ is also a $\lambda$-system. Then, by the Dynkin lemma, $\mathcal{P} \subset \sigma(\mathcal{P}) \subset \mathcal{L}$ and, consequently, the thesis.

Example 5.2.2. Let us consider two definitions of the Markov property.

² Notice that $\sigma(\mathcal{P})$ is obviously contained in $\Lambda$.

(i) $\mathbb{P}(X_t \in I \mid X_s = x, X_{s_1} = x_1, \dots, X_{s_n} = x_n) = \mathbb{P}(X_t \in I \mid X_s = x)$ a.e.,

(ii) $\mathbb{P}(X_t \in I \mid \mathcal{F}_s) = \mathbb{P}(X_t \in I \mid X_s)$ a.e.

We want to show that these equations are equivalent.³ Take the left-hand side of (i). This is a number depending on $t, I, s, x, s_1, x_1, \dots$, so we can write the first equation as

$$p(t, I; s, x, s_1, x_1, \dots, s_n, x_n) = p(t, I; s, x) \quad \text{a.e.} \tag{5.2.1}$$

Now take the $\sigma$-algebras $\mathcal{F}_{X_s}, \mathcal{F}_{X_{s_1}}, \dots, \mathcal{F}_{X_{s_n}}$ generated by $X_s, X_{s_1}, \dots$ and set $\mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}} := \sigma(\mathcal{F}_{X_s}, \mathcal{F}_{X_{s_1}}, \dots, \mathcal{F}_{X_{s_n}})$. We will prove the equivalence by observing that the left-hand side of (i) is just

$$\mathbb{P}(X_t \in I \mid \mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}}), \tag{5.2.2}$$

and, as far as (ii) is concerned, we can write

$$\mathbb{P}(X_t \in I \mid \mathcal{F}_s) = \mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_s), \tag{5.2.3}$$

where

$$\int_F \mathbf{1}_I(X_t)\,d\mathbb{P} = \int_F \mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_s)\,d\mathbb{P} \qquad \forall F \in \mathcal{F}_s, \tag{5.2.4}$$

and

$$\mathbb{P}(X_t \in I \mid X_s) = \mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_{X_s}), \tag{5.2.5}$$

where

$$\int_F \mathbf{1}_I(X_t)\,d\mathbb{P} = \int_F \mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_{X_s})\,d\mathbb{P} \qquad \forall F \in \mathcal{F}_{X_s}. \tag{5.2.6}$$

Proof that (ii) implies (i). Notice, first of all, that $s_1, s_2, \dots, s_n$ represent instants before $s$, so $\mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}} \subset \mathcal{F}_s$. Moreover, remember the tower property $\mathbb{E}(\mathbb{E}(X \mid \mathcal{G}) \mid \mathcal{F}) = \mathbb{E}(X \mid \mathcal{F})$ for $\mathcal{F} \subset \mathcal{G}$. Now, recalling formulas (5.2.3)-(5.2.6), one gets

$$\mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}}) = \mathbb{E}\big(\mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_s) \mid \mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}}\big) = \mathbb{E}\big(\mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_{X_s}) \mid \mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}}\big) = \mathbb{E}(\mathbf{1}_I(X_t) \mid \mathcal{F}_{X_s}),$$

where the latter equality is justified by the inclusion $\mathcal{F}_{X_s} \subset \mathcal{F}_{X_s, X_{s_1}, \dots, X_{s_n}} \subset \mathcal{F}_s$.

³ To prove the converse implication we will need the Dynkin lemma.


Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Probability Theory. Richard F. Bass

Probability Theory. Richard F. Bass Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Notes on Stochastic Calculus

Notes on Stochastic Calculus Notes on Stochastic Calculus David Nualart Kansas University nualart@math.ku.edu 1 Stochastic Processes 1.1 Probability Spaces and Random Variables In this section we recall the basic vocabulary and results

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Probability Theory II. Spring 2014 Peter Orbanz

Probability Theory II. Spring 2014 Peter Orbanz Probability Theory II Spring 2014 Peter Orbanz Contents Chapter 1. Martingales, continued 1 1.1. Martingales indexed by partially ordered sets 1 1.2. Notions of convergence for martingales 3 1.3. Uniform

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

1 Probability space and random variables

1 Probability space and random variables 1 Probability space and random variables As graduate level, we inevitably need to study probability based on measure theory. It obscures some intuitions in probability, but it also supplements our intuition,

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

An Introduction to Stochastic Calculus

An Introduction to Stochastic Calculus An Introduction to Stochastic Calculus Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Weeks 3-4 Haijun Li An Introduction to Stochastic Calculus Weeks 3-4 1 / 22 Outline

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

1 Independent increments

1 Independent increments Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Probability Theory II. Spring 2016 Peter Orbanz

Probability Theory II. Spring 2016 Peter Orbanz Probability Theory II Spring 2016 Peter Orbanz Contents Chapter 1. Martingales 1 1.1. Martingales indexed by partially ordered sets 1 1.2. Martingales from adapted processes 4 1.3. Stopping times and

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

The Kolmogorov continuity theorem, Hölder continuity, and the Kolmogorov-Chentsov theorem

The Kolmogorov continuity theorem, Hölder continuity, and the Kolmogorov-Chentsov theorem The Kolmogorov continuity theorem, Hölder continuity, and the Kolmogorov-Chentsov theorem Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto June 11, 2015 1 Modifications

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

CHAPTER 1. Martingales

CHAPTER 1. Martingales CHAPTER 1 Martingales The basic limit theorems of probability, such as the elementary laws of large numbers and central limit theorems, establish that certain averages of independent variables converge

More information

Math 735: Stochastic Analysis

Math 735: Stochastic Analysis First Prev Next Go To Go Back Full Screen Close Quit 1 Math 735: Stochastic Analysis 1. Introduction and review 2. Notions of convergence 3. Continuous time stochastic processes 4. Information and conditional

More information

UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, Practice Final Examination (Winter 2017)

UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, Practice Final Examination (Winter 2017) UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, 208 Practice Final Examination (Winter 207) There are 6 problems, each problem with multiple parts. Your answer should be as clear and readable

More information

Measure and integration

Measure and integration Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Notes 1 : Measure-theoretic foundations I

Notes 1 : Measure-theoretic foundations I Notes 1 : Measure-theoretic foundations I Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Section 1.0-1.8, 2.1-2.3, 3.1-3.11], [Fel68, Sections 7.2, 8.1, 9.6], [Dur10,

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

18.175: Lecture 2 Extension theorems, random variables, distributions

18.175: Lecture 2 Extension theorems, random variables, distributions 18.175: Lecture 2 Extension theorems, random variables, distributions Scott Sheffield MIT Outline Extension theorems Characterizing measures on R d Random variables Outline Extension theorems Characterizing

More information

Measure-theoretic probability

Measure-theoretic probability Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is

More information

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt

More information

Lectures 22-23: Conditional Expectations

Lectures 22-23: Conditional Expectations Lectures 22-23: Conditional Expectations 1.) Definitions Let X be an integrable random variable defined on a probability space (Ω, F 0, P ) and let F be a sub-σ-algebra of F 0. Then the conditional expectation

More information

Lecture 11. Probability Theory: an Overveiw

Lecture 11. Probability Theory: an Overveiw Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Preliminaries to Stochastic Analysis Autumn 2011 version. Xue-Mei Li The University of Warwick

Preliminaries to Stochastic Analysis Autumn 2011 version. Xue-Mei Li The University of Warwick Preliminaries to Stochastic Analysis Autumn 2011 version Xue-Mei Li The University of Warwick Typset: October 21, 2011 Chapter 1 Preliminaries For completion and easy reference we include some basic concepts

More information

Part III Advanced Probability

Part III Advanced Probability Part III Advanced Probability Based on lectures by M. Lis Notes taken by Dexter Chua Michaelmas 2017 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1

ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 ADVANCED PROBABILITY: SOLUTIONS TO SHEET 1 Last compiled: November 6, 213 1. Conditional expectation Exercise 1.1. To start with, note that P(X Y = P( c R : X > c, Y c or X c, Y > c = P( c Q : X > c, Y

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

MAGIC010 Ergodic Theory Lecture Entropy

MAGIC010 Ergodic Theory Lecture Entropy 7. Entropy 7. Introduction A natural question in mathematics is the so-called isomorphism problem : when are two mathematical objects of the same class the same (in some appropriately defined sense of

More information

Probability: Handout

Probability: Handout Probability: Handout Klaus Pötzelberger Vienna University of Economics and Business Institute for Statistics and Mathematics E-mail: Klaus.Poetzelberger@wu.ac.at Contents 1 Axioms of Probability 3 1.1

More information

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,

More information