Lecture 6: Wiener Process


Eric Vanden-Eijnden

Chapters 6, 7 and 8 offer a (very) brief introduction to stochastic analysis. These lectures are based in part on a book project with Weinan E. A standard reference for the material presented hereafter is the book by R. Durrett, Stochastic Calculus: A Practical Introduction (CRC, 1998). For a discussion of the Wiener measure and its link with path integrals see e.g. the book by M. Kac, Probability and Related Topics in Physical Sciences (AMS, 1991).

1 The Wiener process as a scaled random walk

Consider a simple random walk {X_n}_{n ∈ N} on the lattice of integers Z:

    X_n = Σ_{k=1}^n ξ_k,    (1)

where {ξ_k}_{k ∈ N} is a collection of independent, identically distributed (i.i.d.) random variables with P(ξ_k = ±1) = 1/2. The Central Limit Theorem (see the Addendum at the end of this chapter) asserts that X_N/√N → N(0,1) (a Gaussian variable with mean 0 and variance 1) in distribution as N → ∞. This suggests to define the piecewise constant random function W^N_t on [0,∞) by letting

    W^N_t = X_⌊Nt⌋ / √N,    (2)

where ⌊Nt⌋ denotes the largest integer less than Nt and, in accordance with standard notations for stochastic processes, we have written t as a subscript, i.e. W^N_t = W^N(t). It can be shown that as N → ∞, W^N_t converges in distribution to a stochastic process W_t, termed the Wiener process or Brownian motion¹, with the following properties:

(a) Independence. W_t − W_s is independent of {W_τ}_{τ ≤ s} for any 0 ≤ s ≤ t.

¹ Brownian motion is named after the biologist Robert Brown who observed in 1827 the irregular motion of pollen particles floating in water. It should be noted, however, that a similar observation had been made earlier in 1765 by the physiologist Jan Ingenhousz about carbon dust in alcohol. Somehow Brown's name became associated to the phenomenon, probably because "Ingenhouszian motion" does not sound very good. Some of us with complicated names are moved by this story.

Figure 1: Realizations of W^N_t for three increasing values of N (blue, red, and green).

(b) Stationarity. The statistical distribution of W_{t+s} − W_s is independent of s (and so identical in distribution to W_t).

(c) Gaussianity. W_t is a Gaussian process with mean and covariance

    E W_t = 0,    E W_t W_s = min(t, s).

(d) Continuity. With probability 1, W_t viewed as a function of t is continuous.

To show independence and stationarity, notice that for 1 ≤ m ≤ n,

    X_n − X_m = Σ_{k=m+1}^n ξ_k

is independent of X_m and is distributed identically as X_{n−m}. It follows that for any 0 ≤ s ≤ t, W_t − W_s is independent of W_s and satisfies

    W_t − W_s =^d W_{t−s},    (3)

where =^d means that the random processes on both sides of the equality have the same distribution. To show Gaussianity, observe that at fixed time t, W^N_t converges as N → ∞ to a Gaussian variable with mean zero and variance t, since

    W^N_t = X_⌊Nt⌋/√N = ( X_⌊Nt⌋/√⌊Nt⌋ ) √(⌊Nt⌋/N) → N(0,1) √t =^d N(0,t).

In other words,

    P(W_t ∈ [x_1, x_2]) = ∫_{x_1}^{x_2} ρ(x,t) dx,    (4)

where

    ρ(x,t) = e^{−x²/(2t)} / √(2πt).    (5)

In fact, given any partition 0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_n, the vector (W^N_{t_1}, ..., W^N_{t_n}) converges in distribution to an n-dimensional Gaussian random variable. Indeed, using (3) recursively together with (4), (5) and the independence property (a), it is easy to see that the probability density that (W_{t_1}, ..., W_{t_n}) = (x_1, ..., x_n) is simply given by

    ρ(x_n − x_{n−1}, t_n − t_{n−1}) ··· ρ(x_2 − x_1, t_2 − t_1) ρ(x_1, t_1).    (6)

A simple calculation using

    E W_t = ∫_R x ρ(x,t) dx = 0,    E W_t W_s = ∫_R ∫_R y x ρ(y − x, t − s) ρ(x, s) dx dy = s

for t ≥ s (and similarly for t < s) gives the mean and covariance specified in (c). Notice that the covariance can also be specified via E(W_t − W_s)² = t − s, and this equation suggests that W_t is not a smooth function of t. In fact, it can be shown that even though W_t is continuous almost everywhere (in fact Hölder continuous with exponent γ < 1/2), it is differentiable almost nowhere. This is consistent with the following property of self-similarity: for λ > 0,

    W_t =^d λ^{−1/2} W_{λt},

which is easily established upon verifying that both W_t and λ^{−1/2} W_{λt} are Gaussian processes with the same (zero) mean and covariance.

More about the lack of regularity of the Wiener process can be understood from first passage times. For given a > 0 define the first passage time by

    T_a = inf{ t ≥ 0 : W_t ≥ a }.

Now, observe that

    P(W_t > a) = P(T_a < t & W_t > a) = (1/2) P(T_a < t).    (7)

The first equality is obvious by continuity; the second follows from the symmetry of the Wiener process: once the system has crossed a, it is equally likely to step upwards as downwards. Introducing the random variable M_t = sup_{0 ≤ s ≤ t} W_s, we can write this identity as

    P(M_t > a) = P(T_a < t) = 2 P(W_t > a) = 2 ∫_{a/√t}^∞ e^{−z²/2} / √(2π) dz,    (8)

where we have invoked the known form of the probability density function for W_t in the last equality. Similarly, if m_t = inf_{0 ≤ s ≤ t} W_s,

    P(m_t < −a) = P(M_t > a).    (9)

But this shows that the event "W_t crosses a" is not so tidy as it may at first appear, since it follows from (8) and (9) that for all ε > 0:

    P(M_ε > 0) > 0 and P(m_ε < 0) > 0.    (10)

In particular, t = 0 is an accumulation point of zeros: with probability 1 the first return time to 0 (and thus, in fact, to any point, once attained) is arbitrarily small.
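The identity (8) for the running maximum lends itself to a quick Monte Carlo check. The sketch below is illustrative (grid resolution, path count, and the threshold a = 0.5 are arbitrary choices), and it approximates the supremum M_t by the maximum over a discrete grid:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Monte Carlo check of (8): P(M_t > a) = 2 P(W_t > a).
t, a = 1.0, 0.5
n_paths, n_steps = 10000, 500
# Sample Brownian paths on a grid via independent Gaussian increments.
dW = rng.normal(0.0, sqrt(t / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
p_max = np.mean(W.max(axis=1) > a)         # empirical P(M_t > a)
p_exact = 1.0 - erf(a / sqrt(2.0 * t))     # 2 P(W_t > a) = erfc(a / sqrt(2 t))
print(p_max, p_exact)
```

The empirical value sits slightly below the exact one because the discrete maximum misses excursions of the path between grid points.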

2 Two alternative constructions of the Wiener process

Since W_t is a Gaussian process, it is completely specified by its mean and covariance,

    E W_t = 0,    E W_t W_s = min(t, s),    (11)

in the sense that any process with the same statistics is also a Wiener process. This observation can be used to make other constructions of the Wiener process. In this section, we recall two of them.

The first construction is useful in simulations. Define a set of independent Gaussian random variables {η_k}_{k ∈ N}, each with mean zero and variance unity, and let {φ_k(t)}_{k ∈ N} be any orthonormal basis of L²[0,1] (that is, the space of square integrable functions on the unit interval). Thus any function f(t) in this space can be decomposed as f(t) = Σ_{k ∈ N} α_k φ_k(t), where (assuming that the φ_k's are real) α_k = ∫_0^1 f(t) φ_k(t) dt. Then the stochastic process defined by

    W_t = Σ_{k ∈ N} η_k ∫_0^t φ_k(t') dt'    (12)

is a Wiener process on the interval [0,1]. To show this, it suffices to check that it has the correct pairwise covariance: since W_t is a linear combination of zero-mean Gaussian random variables, it must itself be a Gaussian random variable with zero mean. Now,

    E W_t W_s = Σ_{k,l ∈ N} E η_k η_l ∫_0^t φ_k(t') dt' ∫_0^s φ_l(s') ds' = Σ_{k ∈ N} ∫_0^t φ_k(t') dt' ∫_0^s φ_k(s') ds',

where we have invoked the independence of the random variables {η_k}. To interpret the summands, start by defining the indicator function of the interval [0,τ] with argument t:

    χ_τ(t) = 1 if t ∈ [0,τ], and 0 otherwise.    (13)

If τ ∈ [0,1], then this function further admits the series expansion

    χ_τ(t) = Σ_k φ_k(t) ∫_0^τ φ_k(t') dt'.    (14)

Using the orthogonality properties of the {φ_k(t)} together with (14), the covariance above may be recast as

    E W_t W_s = ∫_0^1 [ Σ_k ( ∫_0^t φ_k(t') dt' ) φ_k(u) ] [ Σ_l ( ∫_0^s φ_l(s') ds' ) φ_l(u) ] du
              = ∫_0^1 χ_t(u) χ_s(u) du = ∫_0^1 χ_{min(t,s)}(u) du = min(t, s),    (15)

as required. One standard choice for the set of functions {φ_k(t)} is the Haar basis. The first function in this basis is equal to 1 on the half interval 0 < t < 1/2 and to −1 on 1/2 < t < 1; the second function is equal to √2 on 0 < t < 1/4 and to −√2 on 1/4 < t < 1/2 (and to 0 elsewhere); and so on. The utility of these functions is that it is very easy to construct a Brownian bridge, that is, a Wiener process on [0,1] for which the initial and final values are specified: W_0 = W_1 = 0. This may be defined by

    Ŵ_t = W_t − t W_1;    (16)

if using the above construction, then it suffices to omit the constant function φ(t) ≡ 1 from the basis.

The second construction of the Wiener process (or, rather, of the Brownian bridge) is empirical. It comes under the name of Kolmogorov-Smirnov statistics. Given a random variable X uniformly distributed on the unit interval (i.e. P(0 ≤ X < x) = x) and data {X_1, X_2, ..., X_n}, define a sample estimate for the probability distribution of X:

    F_n(x) = (1/n) (number of X_k < x, k = 1, ..., n) = (1/n) Σ_{k=1}^n χ_(0,x)(X_k),    (17)

equal to the relative number of data points that lie in the interval 0 ≤ X_k < x. For fixed x, the Law of Large Numbers tells us that F_n(x) → x as n → ∞, whereas

    √n ( F_n(x) − x ) →^d N(0, x(1 − x))    (18)

by the Central Limit Theorem. This result can be generalized to the function F_n : [0,1] → [0,1] (i.e. when x is not fixed): as n → ∞,

    √n ( F_n(x) − x ) →^d W_x − x W_1 = Ŵ_x.    (19)

3 The Feynman-Kac formula

Given a function f(x), define

    u(x,t) = E f(x + W_t).    (20)

This is the Feynman-Kac formula for the solution of the diffusion equation:

    ∂u/∂t = (1/2) ∂²u/∂x²,    u(x,0) = f(x).    (21)

To show this, note first that

    u(x, t+s) = E f(x + W_{t+s}) = E f(x + (W_{t+s} − W_t) + W_t) = E u(x + W_{t+s} − W_t, t) = E u(x + W_s, t),

where we have used the independence of W_{t+s} − W_t and W_t. Now, observe that

    ∂u/∂t (x,t) = lim_{s→0+} (1/s) ( u(x, t+s) − u(x,t) )
                = lim_{s→0+} (1/s) E( u(x + W_s, t) − u(x,t) )
                = lim_{s→0+} (1/s) ( ∂u/∂x (x,t) E W_s + (1/2) ∂²u/∂x² (x,t) E W_s² + o(s) ),

where we have Taylor-series expanded to obtain the final equality. The result follows by noting that E W_s = 0 and E W_s² = s.

The formula admits many generalizations. For instance: if

    v(x,t) = E f(x + W_t) + E ∫_0^t g(x + W_s) ds,    (22)

then the function v(x,t) satisfies the diffusion equation with source term the arbitrary function g(x):

    ∂v/∂t = (1/2) ∂²v/∂x² + g(x),    v(x,0) = f(x).    (23)

Or: if

    w(x,t) = E( f(x + W_t) exp( ∫_0^t c(x + W_s) ds ) ),    (24)

then w(x,t) satisfies the diffusion equation with an exponential growth term:

    ∂w/∂t = (1/2) ∂²w/∂x² + c(x) w,    w(x,0) = f(x).    (25)

Addendum: The Law of Large Numbers and the Central Limit Theorem

Let {X_j}_{j ∈ N} be a sequence of i.i.d. (independent, identically distributed) random variables, let

    η = E X_1,    σ² = var(X_1) = E(X_1 − η)²,

and define S_n = Σ_{j=1}^n X_j. The (weak) Law of Large Numbers states that if E|X_j| < ∞, then

    S_n / n → η

in probability. The Central Limit Theorem states that if E X_j² < ∞, then

    (S_n − nη) / (√n σ) → N(0, 1)

in distribution.

We first give a proof of the Law of Large Numbers under the stronger assumption that E|X_j|² < ∞. Without loss of generality we can assume that η = 0. The proof is based on the Chebyshev inequality: Suppose X is a random variable with distribution function F(x) = P(X < x). Then, for any λ > 0,

    P(|X| ≥ λ) ≤ (1/λ^p) E|X|^p,    (26)

provided only that E|X|^p < ∞. Indeed:

    λ^p P(|X| ≥ λ) = λ^p ∫_{|x| ≥ λ} dF(x) ≤ ∫_{|x| ≥ λ} |x|^p dF(x) ≤ ∫_R |x|^p dF(x) = E|X|^p.
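Chebyshev's inequality (26) is easy to probe numerically before putting it to work. The sketch below is illustrative only (the sample size and the choice p = 2 with a standard normal X are arbitrary); it confirms that the empirical tail probability stays below the moment bound:

```python
import numpy as np

rng = np.random.default_rng(7)

# Empirical check of Chebyshev's inequality P(|X| >= lam) <= E|X|^p / lam^p,
# here with p = 2 and X standard normal (so E|X|^2 = 1).
X = rng.normal(size=100000)
for lam in [1.0, 2.0, 3.0]:
    p_emp = np.mean(np.abs(X) >= lam)   # empirical tail probability
    bound = np.mean(X**2) / lam**2      # empirical moment bound
    print(lam, p_emp, bound)
```

Note that the bound is loose for a Gaussian (e.g. at lam = 2 the true tail is about 0.046 against a bound of 0.25); its strength is that it needs nothing beyond a finite p-th moment.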

Using Chebyshev's inequality, we have, for any ε > 0,

    P{ |S_n/n| > ε } ≤ (1/ε²) E|S_n/n|² = (1/(n²ε²)) E|S_n|².

Using the i.i.d. property, this gives

    E|S_n|² = E|X_1 + X_2 + ... + X_n|² = n E|X_1|².

Hence

    P{ |S_n/n| > ε } ≤ (1/(nε²)) E|X_1|² → 0

as n → ∞, and this proves the law of large numbers.

Next we prove the Central Limit Theorem (assuming again, without loss of generality, that η = 0). Let f be the characteristic function of X_1, i.e.

    f(k) = E e^{ikX_1},    k ∈ R,    (27)

and similarly let g_n be the characteristic function of S_n/(√n σ). Then

    g_n(k) = E e^{ikS_n/(√n σ)} = Π_{j=1}^n E e^{ikX_j/(√n σ)} = ( E e^{ikX_1/(√n σ)} )^n
           = ( 1 + (ik/(√n σ)) E X_1 − (k²/(2nσ²)) E X_1² + o(n^{−1}) )^n = ( 1 − k²/(2n) + o(n^{−1}) )^n → e^{−k²/2}

as n → ∞. This shows that the characteristic function of S_n/(√n σ) converges to the characteristic function of N(0,1) as n → ∞ and terminates the proof.

It is instructive to note that the only property of X_1 that we have required in the central limit theorem is that E X_1² < ∞. In particular, the theorem holds even if the higher moments of X_1 are infinite! For one illustration of this, consider a random variable having probability density function

    ρ(x) = 2 / ( π (1 + x²)² ),    (28)

for which all moments of order higher than 2 are infinite. Nevertheless, we have:

    f(k) = ∫_R e^{ikx} ρ(x) dx = (1 + |k|) e^{−|k|} = 1 − k²/2 + o(k²),

and hence the Central Limit Theorem applies. Intuitively, the reason is that the fat tails of the density ρ(x) disappear in the limit owing to the rescaling of the partial sum by 1/√n.

Notes by Marcus Roper and Ravi Srinivasan.
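As a numerical footnote to the heavy-tailed example, a density proportional to 1/(1 + x²)² can be sampled by rejection from a Cauchy proposal, since 2/(π(1 + x²)²) ≤ 2 × the Cauchy density. The sampler and all parameters below are illustrative choices for this sketch, not part of the notes; the standardized partial sums should nonetheless look Gaussian:

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_heavy(n):
    """Rejection sampling of rho(x) = 2 / (pi (1 + x^2)^2).
    Proposal: standard Cauchy with envelope constant M = 2, so a draw x
    is accepted with probability rho(x) / (2 cauchy(x)) = 1 / (1 + x^2)."""
    out = np.empty(0)
    while out.size < n:
        x = rng.standard_cauchy(2 * n)   # acceptance rate is about 1/2
        out = np.concatenate((out, x[rng.uniform(size=x.size) < 1.0 / (1.0 + x**2)]))
    return out[:n]

# rho has mean 0 and variance 1 but infinite fourth moment; the standardized
# partial sums S_n / sqrt(n) should still look approximately N(0, 1).
n, n_rep = 500, 2000
sums = sample_heavy(n * n_rep).reshape(n_rep, n).sum(axis=1) / np.sqrt(n)
frac = np.mean(np.abs(sums) < 1.0)
print(sums.mean(), frac)   # for N(0,1), P(|Z| < 1) is about 0.683
```

The rejection step works because the target density is dominated by twice the proposal density everywhere, so the accepted draws have exactly the target law.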