Chapter 1. Brownian Motion

1.1 Stochastic Process

A stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P), and a real-valued stochastic process can then be defined as a collection of random variables {x(t, ω)} indexed by a parameter set T. This means that for each t ∈ T, x(t, ω) is a measurable map of (Ω, Σ) into (R, B₀), where (R, B₀) is the real line with the usual Borel σ-field. The parameter set often represents time and could be either the integers, representing discrete time, or [0, T], [0, ∞) or (−∞, ∞) if we are studying processes in continuous time. For each fixed ω we can view x(t, ω) as a map of T into R, and we then get a random function of t ∈ T. If we denote by X the space of functions on T, then a stochastic process becomes a measurable map from a probability space into X. There is a natural σ-field B on X, and measurability is to be understood in terms of this σ-field. This natural σ-field, called the Kolmogorov σ-field, is defined as the smallest σ-field such that the projections {π_t(f) = f(t); t ∈ T} mapping X into R are measurable. The point of this definition is that a random function x(·, ω) : Ω → X is measurable if and only if the random variables x(t, ω) : Ω → R are measurable for each t ∈ T.

The mapping x(·, ·) induces a measure on (X, B) by the usual definition

    Q(A) = P[ω : x(·, ω) ∈ A]    (1.1)

for A ∈ B. Since the underlying probability model (Ω, Σ, P) is irrelevant, it can be replaced by the canonical model (X, B, Q) with the special choice of x(t, f) = π_t(f) = f(t). A stochastic process can then be defined simply as a probability measure Q on (X, B).

Another point of view is that the only relevant objects are the joint distributions of {x(t₁, ω), x(t₂, ω), …, x(t_k, ω)} for every k and every finite subset F = (t₁, t₂, …, t_k) of T. These can be specified as probability measures µ_F on R^k. These {µ_F} cannot be totally arbitrary. If we allow different permutations of the same set, so that F and F′ are permutations of each other, then µ_F and µ_{F′} should be related by the same permutation. If F ⊂ F′, then we can obtain the joint distribution of {x(t, ω); t ∈ F} by projecting the joint distribution of {x(t, ω); t ∈ F′} from R^{k′} to R^k, where k and k′ are the cardinalities of F and F′ respectively. A stochastic process can then be viewed as a family {µ_F} of distributions on various finite dimensional spaces that satisfy these consistency conditions. A theorem of Kolmogorov says that this point of view is not all that different: any such consistent family arises from a Q on (X, B) which is uniquely determined by the family {µ_F}.

If T is countable this is quite satisfactory. X is the space of sequences, and the σ-field B is quite adequate to answer all the questions we may want to ask. The set of bounded sequences, the set of convergent sequences and the set of summable sequences are all measurable subsets of X, and therefore we can answer questions like "does the sequence converge with probability 1?", etc. However, if T is uncountable, like [0, T], then the space of bounded functions, the space of continuous functions, etc., are not measurable sets. They do not belong to B. Basically, in probability theory the rules involve only a countable collection of sets at a time, and any information that involves the values of an uncountable number of measurable functions is out of reach. There is an intrinsic reason for this. In probability theory we can always change the values of a random variable on a set of measure 0 without changing anything of consequence. Since we are allowed to mess up each function on a set of measure 0, we have to assume that each function has indeed been messed up on a set of measure 0. If we are dealing with a countable number of functions, the mess-up has occurred only on the countable union of these individual sets of measure 0, which by the properties of a measure is again a set of measure 0.
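The consistency conditions above can be made concrete for Gaussian families. The sketch below is an illustration only, assuming NumPy; it uses the covariance Cov(x(s), x(t)) = min(s, t) of Brownian motion, which is introduced later in this chapter. It checks that projecting µ_{F′} onto a subset F, which for centered Gaussians amounts to deleting the rows and columns of the dropped times, reproduces µ_F.

```python
import numpy as np

# Kolmogorov consistency for a Gaussian family, illustrated with the
# covariance Cov(x(s), x(t)) = min(s, t) (that of Brownian motion,
# introduced later in the chapter).  mu_F is the centered normal law
# on R^k with covariance matrix (min(t_i, t_j)).
def cov_matrix(F):
    F = np.asarray(F, dtype=float)
    return np.minimum(F[:, None], F[None, :])

# Consistency under projection: if F is a subset of F', projecting
# mu_{F'} onto the coordinates in F must give mu_F.  For centered
# Gaussians this is just deleting the rows/columns of the dropped times.
F_big = [0.25, 0.5, 0.75]                  # F'
F_small = [0.25, 0.75]                     # F, a subset of F'
proj = cov_matrix(F_big)[np.ix_([0, 2], [0, 2])]
assert np.allclose(proj, cov_matrix(F_small))
```

For centered Gaussian laws the marginal of a subset of coordinates is again Gaussian with the corresponding covariance submatrix, which is why the check reduces to comparing matrices.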
On the other hand, if we are dealing with an uncountable family of functions, then these sets of measure 0 can possibly gang up on us to produce a set of positive or even full measure. We just cannot be sure. Of course it would be foolish of us to mess things up unnecessarily. If we can clean things up and choose a nice version of our random variables we should do so. But we cannot really do this sensibly unless we decide first what "nice" means. We do, however, face the risk of being too greedy, and it may not be possible to have a version as nice as we seek. But then we can always change our mind.

1.2 Regularity

Very often it is natural to try to find a version that has continuous trajectories. This is equivalent to restricting X to the space of continuous functions on [0, T]; we are then trying to construct a measure Q on X = C[0, T] with the natural σ-field B. This is not always possible. We want to find some sufficient conditions on the finite dimensional distributions {µ_F} that guarantee that such a choice of Q exists on (X, B).

Theorem 1.1 (Kolmogorov's Regularity Theorem). Assume that for every pair (s, t) ∈ [0, T]² the bivariate distribution µ_{s,t} satisfies

    ∫∫ |x − y|^β µ_{s,t}(dx, dy) ≤ C |t − s|^{1+α}    (1.2)

for some positive constants β, α and C. Then there is a unique Q on (X, B) such that it has {µ_F} for its finite dimensional distributions.

Proof. Since we can only deal effectively with a countable number of random variables, we restrict ourselves to values at dyadic times. Let us, for simplicity, take T = 1. Denote by T_n the set of time points t of the form t = j/2^n for 0 ≤ j ≤ 2^n. The countable union T_∞ = ∪_j T_j is a countable dense subset of [0, 1]. We will construct a probability measure Q on the space of sequences corresponding to the values of {x(t) : t ∈ T_∞}, show that Q is supported on sequences that produce uniformly continuous functions on T_∞, and then extend these functions automatically to [0, 1] by continuity; the extension will provide us the natural Q on C[0, 1].

If we start from the set of values on T_n, the n-th level of dyadics, by linear interpolation we can construct a version x_n(t) that agrees with the original variables at these dyadic points. This way we have a sequence x_n(t) such that x_n(·) = x_{n+1}(·) on T_n. If we can show that for some γ, δ > 0

    Q[x(·) : sup_{0≤t≤1} |x_n(t) − x_{n+1}(t)| ≥ 2^{−nγ}] ≤ C 2^{−nδ},    (1.3)

then we can conclude, by the Borel–Cantelli lemma, that

    Q[x(·) : lim_{n→∞} x_n(t) = x_∞(t) exists uniformly on [0, 1]] = 1.    (1.4)

The limit x_∞(·) will be continuous on [0, 1] and will coincide with x(·) on T_∞, thereby establishing our result.

The proof of (1.3) depends on a simple observation. The difference x_n(·) − x_{n+1}(·) achieves its maximum at the midpoint of one of the dyadic intervals determined by T_n, and hence

    sup_{0≤t≤1} |x_n(t) − x_{n+1}(t)|
        = sup_{1≤j≤2^n} |x_n((2j−1)/2^{n+1}) − x_{n+1}((2j−1)/2^{n+1})|
        ≤ sup_{1≤j≤2^n} max{ |x((2j−1)/2^{n+1}) − x((2j−2)/2^{n+1})|, |x((2j)/2^{n+1}) − x((2j−1)/2^{n+1})| },

and we can estimate the left hand side of (1.3) by

    Q[x(·) : sup_{0≤t≤1} |x_n(t) − x_{n+1}(t)| ≥ 2^{−nγ}]
        ≤ Q[ sup_{1≤i≤2^{n+1}} |x(i/2^{n+1}) − x((i−1)/2^{n+1})| ≥ 2^{−nγ} ]
        ≤ 2^{n+1} sup_{1≤i≤2^{n+1}} Q[ |x(i/2^{n+1}) − x((i−1)/2^{n+1})| ≥ 2^{−nγ} ]
        ≤ 2^{n+1} 2^{nβγ} sup_{1≤i≤2^{n+1}} E^Q[ |x(i/2^{n+1}) − x((i−1)/2^{n+1})|^β ]
        ≤ C 2^{n+1} 2^{nβγ} 2^{−(1+α)(n+1)}
        ≤ C 2^{−nδ}

provided δ ≤ α − βγ. For given α, β we can pick γ < α/β, and we are done.

An equivalent version of this theorem is the following.

Theorem 1.2. If x(t, ω) is a stochastic process on (Ω, Σ, P) satisfying

    E^P[ |x(t) − x(s)|^β ] ≤ C |t − s|^{1+α}

for some positive constants α, β and C, then, if necessary, x(t, ω) can be modified for each t on a set of measure zero to obtain an equivalent version that is almost surely continuous.

As an important application we consider Brownian motion, which is defined as a stochastic process that has multivariate normal distributions for its finite dimensional distributions. These normal distributions have mean zero, and the variance–covariance matrix is specified by Cov(x(s), x(t)) = min(s, t). An elementary calculation yields E[|x(s) − x(t)|⁴] = 3|t − s|², so that Theorem 1.1 is applicable with β = 4, α = 1 and C = 3.

To see that some restriction is needed, let us consider the Poisson process, defined as a process with independent increments, the distribution of x(t) − x(s) being Poisson with parameter t − s for t > s. In this case, since P[x(t) − x(s) ≥ 1] = 1 − exp[−(t − s)], we have, for every n,

    E[ |x(t) − x(s)|^n ] ≥ 1 − exp[−|t − s|] ≥ C |t − s|

and the conditions for Theorem 1.1 are never satisfied. Nor should they be, because after all a Poisson process is a counting process which jumps whenever the event that it is counting occurs, and it would indeed be greedy of us to try to put the measure on the space of continuous functions.

Remark 1.1. The fact that there cannot be a measure on the space of continuous functions whose finite dimensional distributions coincide with those of the Poisson process requires a proof. There is a whole class of nasty examples of measures {Q} on the space of continuous functions such that for every t ∈ [0, 1]

    Q[ω : x(t, ω) is a rational number] = 1.

The difference is that the rationals are dense, whereas the integers are not. The proof has to depend on the fact that a continuous function that is not identically equal to some fixed integer must spend a positive amount of time at nonintegral points. Try to make a rigorous proof using Fubini's theorem.

1.3 Garsia, Rodemich and Rumsey inequality

If we have a stochastic process x(t, ω) and we wish to show that it has a nice version, perhaps a continuous one, or even a Hölder continuous or differentiable version, there are things we have to estimate. Establishing Hölder continuity amounts to estimating

    ε(ℓ) = P[ sup_{s,t} |x(s) − x(t)| / |t − s|^α ≥ ℓ ]

and showing that ε(ℓ) → 0 as ℓ → ∞. Such quantities are often difficult to estimate and require special methods. A slight modification of the proof of Theorem 1.1 will establish that the nice, continuous version of Brownian motion actually satisfies a Hölder condition of exponent α so long as 0 < α < 1/2. On the other hand, if we want to show only that we have a version x(t, ω) that is square integrable, we have to estimate

    ε(ℓ) = P[ ∫ |x(t, ω)|² dt ≥ ℓ ]

and try to show that ε(ℓ) → 0 as ℓ → ∞. This task is somewhat easier, because we can control it by estimating E^P[∫ |x(t, ω)|² dt], and that can be done by the use of Fubini's theorem. After all,

    E^P[ ∫ |x(t, ω)|² dt ] = ∫ E^P[ |x(t, ω)|² ] dt.

Estimating integrals is easier than estimating suprema. The Sobolev inequality controls suprema in terms of integrals. The Garsia–Rodemich–Rumsey inequality is a generalization and can be used in a wide variety of contexts.

Theorem 1.3. Let Ψ(·) and p(·) be continuous, strictly increasing functions on [0, ∞) with p(0) = Ψ(0) = 0 and Ψ(x) → ∞ as x → ∞. Assume that a continuous function f(·) on [0, 1] satisfies

    ∫₀¹ ∫₀¹ Ψ( |f(t) − f(s)| / p(|t − s|) ) ds dt = B < ∞.    (1.5)

Then

    |f(0) − f(1)| ≤ 8 ∫₀¹ Ψ⁻¹(4B/u²) dp(u).    (1.6)

The double integral in (1.5) has a singularity on the diagonal, and its finiteness depends on f, p and Ψ. The integral in (1.6) has a singularity at u = 0, and its convergence requires a balancing act between Ψ(·) and p(·). The two conditions compete, and the existence of a pair Ψ(·), p(·) satisfying all the conditions will turn out to imply some regularity for f(·).

Let us first assume Theorem 1.3 and illustrate its uses with some examples. We will come back to its proof at the end of the section. First we remark that the following corollary is an immediate consequence of Theorem 1.3.

Corollary 1.4. If we replace the interval [0, 1] by the interval [T₁, T₂], so that

    B_{T₁,T₂} = ∫_{T₁}^{T₂} ∫_{T₁}^{T₂} Ψ( |f(t) − f(s)| / p(|t − s|) ) ds dt,

then

    |f(T₂) − f(T₁)| ≤ 8 ∫₀^{T₂−T₁} Ψ⁻¹( 4B_{T₁,T₂}/u² ) dp(u).

For 0 ≤ T₁ < T₂ ≤ 1, because B_{T₁,T₂} ≤ B_{0,1} = B, we can conclude from (1.5) that the modulus of continuity f(δ) satisfies

    f(δ) = sup_{0≤s,t≤1, |t−s|≤δ} |f(t) − f(s)| ≤ 8 ∫₀^δ Ψ⁻¹(4B/u²) dp(u).    (1.7)

Proof (of Corollary). If we map the interval [T₁, T₂] onto [0, 1] by t → (t − T₁)/(T₂ − T₁) and redefine f*(t) = f(T₁ + (T₂ − T₁)t) and p*(u) = p((T₂ − T₁)u), then

    ∫₀¹ ∫₀¹ Ψ( |f*(t) − f*(s)| / p*(|t − s|) ) ds dt
        = (T₂ − T₁)⁻² ∫_{T₁}^{T₂} ∫_{T₁}^{T₂} Ψ( |f(t) − f(s)| / p(|t − s|) ) ds dt
        = B_{T₁,T₂} / (T₂ − T₁)²

and

    |f(T₂) − f(T₁)| = |f*(1) − f*(0)| ≤ 8 ∫₀¹ Ψ⁻¹( 4B_{T₁,T₂} / ((T₂ − T₁)² u²) ) dp*(u)
        = 8 ∫₀^{T₂−T₁} Ψ⁻¹( 4B_{T₁,T₂}/u² ) dp(u).

In particular, (1.7) is now an immediate consequence.

Let us now turn to Brownian motion, or more generally to processes that satisfy

    E^P[ |x(t) − x(s)|^β ] ≤ C |t − s|^{1+α}

on [0, 1]. We know from Theorem 1.1 that the paths can be chosen to be continuous. We will now show that the continuous version enjoys some additional regularity. We apply Theorem 1.3 with Ψ(x) = x^β and p(u) = u^{γ/β}. Then

    E^P[ ∫₀¹ ∫₀¹ Ψ( |x(t) − x(s)| / p(|t − s|) ) ds dt ]
        = ∫₀¹ ∫₀¹ E^P[ |x(t) − x(s)|^β / |t − s|^γ ] ds dt
        ≤ C ∫₀¹ ∫₀¹ |t − s|^{1+α−γ} ds dt = C C_δ,

where C_δ is a constant depending only on δ = 2 + α − γ and is finite if δ > 0, i.e. γ < 2 + α. By Fubini's theorem, almost surely

    ∫₀¹ ∫₀¹ Ψ( |x(t) − x(s)| / p(|t − s|) ) ds dt = B(ω) < ∞,

and by Tchebychev's inequality

    P[ B(ω) ≥ B̄ ] ≤ C C_δ / B̄.

On the other hand,

    8 ∫₀^h Ψ⁻¹(4B/u²) dp(u) = 8 (γ/β) (4B)^{1/β} ∫₀^h u^{−2/β} u^{(γ/β)−1} du
        = (8γ/(γ − 2)) (4B)^{1/β} h^{(γ−2)/β}.

We obtain Hölder continuity with exponent (γ − 2)/β, which can be anything less than α/β. For Brownian motion α = (β/2) − 1, and since β can be made arbitrarily large, α/β can be made arbitrarily close to 1/2.
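As a quick sanity check of (1.5)–(1.6), one can evaluate both sides for an explicit function. The sketch below, an illustration assuming NumPy, takes f(t) = t with Ψ(x) = x⁴ and p(u) = u^{3/4}; then Ψ(|f(t) − f(s)|/p(|t − s|)) = |t − s|, so B = 1/3, and the right side of (1.6) equals 24(4B)^{1/4} ≈ 25.8, comfortably above |f(1) − f(0)| = 1 (the inequality is far from sharp here).

```python
import numpy as np

# Sanity check of (1.5)-(1.6) for the explicit function f(t) = t with
# Psi(x) = x^4 and p(u) = u^(3/4) (illustrative choices only).  Here
# Psi(|f(t) - f(s)| / p(|t - s|)) = |t - s|, so B = 1/3, and the right
# side of (1.6) is 8 * (3/4) * (4B)^(1/4) * int_0^1 u^(-3/4) du
# = 24 * (4B)^(1/4).
t = np.linspace(0.0, 1.0, 1001)
s_grid, t_grid = np.meshgrid(t, t)
B = np.abs(t_grid - s_grid).mean()      # grid approximation of (1.5)
rhs = 24.0 * (4.0 * B) ** 0.25          # closed form of the bound (1.6)
lhs = abs(1.0 - 0.0)                    # |f(1) - f(0)|
assert abs(B - 1.0 / 3.0) < 2e-3
assert lhs <= rhs
```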

Remark 1.2. With probability 1, Brownian paths satisfy a Hölder condition with any exponent less than 1/2. It is not hard to see that they do not satisfy a Hölder condition with exponent 1/2.

Exercise 1.1. Show that

    P[ sup_{0≤s,t≤1} |x(t) − x(s)| / |t − s|^{1/2} = ∞ ] = 1.

Hint: The random variables (x(t) − x(s))/|t − s|^{1/2} have standard normal distributions for any interval [s, t], and they are independent for disjoint intervals. We can find as many disjoint intervals as we wish, and can therefore dominate the Hölder constant from below by the supremum of the absolute values of an arbitrary number of independent Gaussians.

Exercise 1.2 (Precise modulus of continuity). The choice of Ψ(x) = exp[αx²] − 1 with α < 1/2 and p(u) = u^{1/2} produces a modulus of continuity of the form

    x(δ) ≤ 8 ∫₀^δ ( (1/α) log[1 + 4B/u²] )^{1/2} du/(2 u^{1/2})

that eventually produces a statement

    P[ limsup_{δ→0} x(δ) / (δ log(1/δ))^{1/2} ≤ 16 ] = 1.

Remark 1.3. This is almost the final word, because the argument of the previous exercise can be tightened slightly to yield

    P[ limsup_{δ→0} x(δ) / (δ log(1/δ))^{1/2} ≤ 2 ] = 1,

and according to a result of Paul Lévy

    P[ limsup_{δ→0} x(δ) / (δ log(1/δ))^{1/2} = √2 ] = 1.

Proof (of Theorem 1.3). Define

    I(t) = ∫₀¹ Ψ( |f(t) − f(s)| / p(|t − s|) ) ds  and  B = ∫₀¹ I(t) dt.

There exists t₀ ∈ (0, 1) such that I(t₀) ≤ B. We shall prove that

    |f(0) − f(t₀)| ≤ 4 ∫₀^{t₀} Ψ⁻¹(4B/u²) dp(u).    (1.8)

By a similar argument

    |f(1) − f(t₀)| ≤ 4 ∫₀^{1−t₀} Ψ⁻¹(4B/u²) dp(u),

and combining the two we will have (1.6). To prove (1.8) we shall pick recursively two sequences {u_n} and {t_n} satisfying

    t₀ > u₁ > t₁ > u₂ > t₂ > ⋯ > u_n > t_n > ⋯ → 0

in the following manner. By induction, if t_{n−1} has already been chosen, define d_n = p(t_{n−1}) and pick u_n so that p(u_n) = d_n/2. Now t_n ∈ (0, u_n) is chosen so that

    I(t_n) ≤ 2B/u_n

and

    Ψ( |f(t_n) − f(t_{n−1})| / p(|t_n − t_{n−1}|) ) ≤ 2 I(t_{n−1}) / u_n ≤ 4B/(u_{n−1} u_n) ≤ 4B/u_n²;

such a choice is possible because ∫₀^{u_n} I(t) dt ≤ B. We now have

    |f(t_n) − f(t_{n−1})| ≤ Ψ⁻¹(4B/u_n²) p(|t_n − t_{n−1}|) ≤ Ψ⁻¹(4B/u_n²) p(t_{n−1}).

Then, using p(u_{n+1}) = p(t_n)/2 ≤ p(u_n)/2,

    p(t_{n−1}) = 2 p(u_n) = 4[ p(u_n) − (1/2) p(u_n) ] ≤ 4[ p(u_n) − p(u_{n+1}) ],

so that

    |f(t_n) − f(t_{n−1})| ≤ 4 Ψ⁻¹(4B/u_n²) [ p(u_n) − p(u_{n+1}) ] ≤ 4 ∫_{u_{n+1}}^{u_n} Ψ⁻¹(4B/u²) dp(u).

Summing over n = 1, 2, …, we get

    |f(t₀) − f(0)| ≤ 4 ∫₀^{u₁} Ψ⁻¹(4B/u²) dp(u) ≤ 4 ∫₀^{t₀} Ψ⁻¹(4B/u²) dp(u)

and we are done.
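Exercise 1.1 can be explored numerically. Over the 2^n disjoint dyadic intervals of length h = 2^{−n}, the increments of Brownian motion are independent N(0, h), so the normalized quotients are i.i.d. |N(0, 1)| variables whose maximum grows like (2 n log 2)^{1/2} as the grid is refined. A minimal sketch, assuming NumPy; the seed and levels are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Over the 2^n disjoint dyadic intervals of length h = 2^(-n), the
# increments of Brownian motion are independent N(0, h), so the
# normalized quotients |x(t) - x(s)| / h^(1/2) are i.i.d. |N(0, 1)|'s
# and their maximum grows like sqrt(2 n log 2) as n increases.
def max_holder_quotient(n):
    h = 2.0 ** (-n)
    incr = rng.normal(0.0, np.sqrt(h), size=2**n)   # disjoint increments
    return np.max(np.abs(incr)) / np.sqrt(h)

for n in (4, 10, 16):
    print(n, max_holder_quotient(n))
```

With overwhelming probability the printed maxima increase with n, so no fixed Hölder-1/2 constant can dominate them all, which is the content of the exercise.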

Example 1.1. Let us consider a stationary Gaussian process with ρ(t) = E[X(s)X(s + t)], and denote σ²(t) = E[(X(t) − X(0))²] = 2(ρ(0) − ρ(t)). Let us suppose that σ²(t) ≤ C |log t|^{−a} for some a > 1 and C < ∞. Then we can apply Theorem 1.3 and establish the existence of an almost surely continuous version by a suitable choice of Ψ and p. On the other hand, we will show that if σ²(t) ≥ c |log t|^{−1} for some c > 0, then the paths are almost surely unbounded on every time interval.

It is generally hard to prove that something is unbounded. But there is a nice trick that we will use. One way to make sure that a function f(t) on t₁ ≤ t ≤ t₂ is unbounded is to make sure that the measure µ_f(A) = LebMes{t : f(t) ∈ A} is not supported on a compact interval. That can be assured if we show that µ_f has a density φ_f(x) with respect to the Lebesgue measure on R that is real analytic (a real analytic density that is not identically zero cannot vanish on an open set, so its support cannot be compact), which in turn will be assured if we show that

    ∫ |µ̂_f(ξ)| e^{α|ξ|} dξ < ∞

for some α > 0. By Schwarz's inequality it is sufficient to prove that

    ∫ |µ̂_f(ξ)|² e^{α|ξ|} dξ < ∞

for some α > 0. We will prove that

    ∫ E[ | ∫_{t₁}^{t₂} e^{iξX(t)} dt |² ] e^{αξ} dξ < ∞

for some α > 0. Since we can replace α by −α, this will control

    ∫ E[ | ∫_{t₁}^{t₂} e^{iξX(t)} dt |² ] e^{α|ξ|} dξ < ∞,

and we can apply Fubini's theorem to complete the proof. Indeed,

    ∫ E[ | ∫_{t₁}^{t₂} e^{iξX(t)} dt |² ] e^{αξ} dξ
        = ∫ ∫_{t₁}^{t₂} ∫_{t₁}^{t₂} E[ e^{iξ(X(t) − X(s))} ] ds dt e^{αξ} dξ
        = ∫ ∫_{t₁}^{t₂} ∫_{t₁}^{t₂} e^{−σ²(t−s)ξ²/2} ds dt e^{αξ} dξ
        = ∫_{t₁}^{t₂} ∫_{t₁}^{t₂} ( √(2π) / σ(t − s) ) e^{α²/(2σ²(t−s))} ds dt
        ≤ ∫_{t₁}^{t₂} ∫_{t₁}^{t₂} ( √(2π) / σ(t − s) ) e^{α² |log|t−s|| / (2c)} ds dt
        < ∞

provided α is small enough.
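The Gaussian integral used in the next-to-last step, ∫ exp(−σ²ξ²/2 + αξ) dξ = (√(2π)/σ) exp(α²/(2σ²)), is easy to confirm numerically. A minimal sketch, assuming NumPy; the values of σ and α are arbitrary test values:

```python
import numpy as np

# Numerical check of the Gaussian integral used above:
#   int exp(-sigma^2 xi^2 / 2 + alpha xi) d(xi)
#     = sqrt(2 pi) / sigma * exp(alpha^2 / (2 sigma^2)).
# sigma and alpha are arbitrary test values; the grid is wide and fine
# enough that truncation and discretization errors are negligible.
sigma, alpha = 0.8, 0.3
xi = np.linspace(-60.0, 60.0, 400_001)
lhs = np.sum(np.exp(-0.5 * sigma**2 * xi**2 + alpha * xi)) * (xi[1] - xi[0])
rhs = np.sqrt(2.0 * np.pi) / sigma * np.exp(alpha**2 / (2.0 * sigma**2))
assert abs(lhs - rhs) < 1e-6
```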