Reflected Brownian Motion


Chapter 6

Reflected Brownian Motion

Often we encounter diffusions in regions with a boundary. If the process can reach the boundary from the interior in finite time with positive probability, we need to decide what to do afterwards. If allowed to continue, the process may not remain inside the domain, or even inside the closed domain including the boundary. There are some choices. We could freeze the process at the point where it exited, or prescribe some motion on the boundary for it to follow. Is there a way to force it to return to the interior while still remaining a Markov process with continuous trajectories?

Reflected Brownian motion on the half line [0, ∞) is one way of keeping Brownian motion in [0, ∞). It can be defined as the unique process P_x supported on Ω = C[[0, ∞); [0, ∞)] with the following properties.

1. It starts from x(0) = x, i.e.

   P_x[x(0) = x] = 1.                                             (6.1)

2. It behaves locally like Brownian motion on (0, ∞). This is formulated in the following manner. If τ is any stopping time and σ = inf{t : t ≥ τ, x(t) = 0} is the hitting time of 0 after τ, then the conditional probability distribution of P_x given F_τ agrees with Q_{x(τ)}, the distribution of Brownian motion starting from x(τ), on the σ-field F_σ of events observed before σ.

3. Finally, the time spent at the boundary is 0, i.e. for any t ≥ 0,

   E^{P_x}[∫_0^t 1_{{0}}(x(s)) ds] = 0.                           (6.2)

Lemma 6.1. Behaving locally like Brownian motion is equivalent to the following condition:

   f(x(t)) − f(x(0)) − (1/2)∫_0^t f_xx(x(s)) ds                   (6.3)

is a martingale for any bounded smooth function f that is constant (which can be taken to be 0 without loss of generality) in some neighborhood of 0.

Proof. The equivalence is easy to establish. Let us take ε > 0 and consider the exit time τ_ε from (ε, ∞). Any smooth function on [ε, ∞) can be extended to [0, ∞) in such a way that it has compact support in (0, ∞). Therefore the martingale property (6.3) holds for arbitrary smooth bounded f until τ_ε. Hence P_x agrees with Q_x on F_{τ_ε}. But ε > 0 is arbitrary and we can let it go to 0, with F_{τ_ε} ↑ F_{τ_0} as ε → 0.

In the other direction, if f is supported on [a, ∞) we can consider the sequence of stopping times defined successively, starting from τ_0 = 0, by, for j ≥ 0,

   τ_{2j+1} = inf{t : t ≥ τ_{2j}, x(t) = 0}
   τ_{2j+2} = inf{t : t ≥ τ_{2j+1}, x(t) = a}

Clearly

   Z(t) = f(x(t)) − f(x(0)) − (1/2)∫_0^t f_xx(x(s)) ds

is a martingale on each interval [τ_{2j}, τ_{2j+1}] and is constant on each [τ_{2j+1}, τ_{2j+2}], where x(t) ∈ [0, a]. It is now easy to verify that Z(t) is a martingale: just write, for s < t,

   Z(t) − Z(s) = Σ_j [Z((τ_{j+1} ∨ s) ∧ t) − Z((τ_j ∨ s) ∧ t)]

Theorem 6.2. There is a unique P_x satisfying (6.1), (6.2) and (6.3). It can also be characterized by replacing (6.2) and (6.3) by the following: if f is a smooth bounded function satisfying f′(0) ≥ 0, then

   Z_f(t) = f(x(t)) − f(x(0)) − (1/2)∫_0^t f_xx(x(s)) ds          (6.4)

is a sub-martingale with respect to (Ω, F_t, P_x). In particular, if f′(0) = 0, Z_f(t) is a martingale. If β(t) is the ordinary Brownian motion starting from x, then P_x is the distribution of x(t) = |β(t)| and is a Markov process with transition probability density, for t > 0 and x, y ≥ 0,

   p(t, x, y) = (1/√(2πt)) [ e^{−(y−x)²/2t} + e^{−(y+x)²/2t} ]

Remark 6.1. In both (6.3) and (6.4) the condition that f be bounded can be dropped and replaced by f having polynomial growth. If f equals x^{2n} for large x, then f_xx is c_n x^{2n−2}. If n = 1,

   E[[x(t)]²] = E[[x(0)]²] + ∫_0^t ds = E[[x(0)]²] + t

This is easily justified by standard methods, approximating x² by bounded functions below it. By induction on n, one can show in a similar fashion that

   E[[x(t)]^{2n}] = E[[x(0)]^{2n}] + n(2n−1) ∫_0^t E[[x(s)]^{2n−2}] ds

and deduce bounds of the form E[[x(t)]^{2n}] ≤ C_n [x^{2n} + t^n] for all n. Then the martingale property extends to smooth functions with polynomial growth.

Proof. First let us establish the equivalence between (6.2)-(6.3) and (6.4). Given a function f(x) with f′(0) = 0, we can approximate it with functions that are constant near 0:

   f^{(n)}(x) = f(0) for 0 ≤ x ≤ 1/n,   f^{(n)}(x) = f(x − 1/n) for x ≥ 1/n.

Each f^{(n)} is piecewise smooth, with matching first derivatives at 1/n. Therefore it is easy to check that

   f^{(n)}(x(t)) − f^{(n)}(x(0)) − (1/2)∫_0^t f^{(n)}_xx(x(s)) ds

is a martingale. Let n → ∞; then f^{(n)} → f and f^{(n)}_xx → f_xx except at 0. But f^{(n)}_xx is uniformly bounded, and because of (6.2) we obtain the martingale property in (6.4) for functions with f′(0) = 0.

Conversely, if Z_f(t) is a martingale whenever f′(0) = 0, we can show that (6.2) holds. Consider the function

   f_ε(x) = x²  if 0 ≤ x ≤ ε,   f_ε(x) = 2εx − ε²  if x ≥ ε.      (6.5)

It is piecewise smooth, with matching first derivatives. Therefore

   E[f_ε(x(t))] − f_ε(x) = (1/2) E[∫_0^t f_ε″(x(s)) ds] = E[∫_0^t 1_{[0,ε]}(x(s)) ds]

Since 0 ≤ f_ε(x) ≤ 2εx it follows that

   lim_{ε→0} E[∫_0^t 1_{[0,ε]}(x(s)) ds] = 0

proving (6.2).

To prove the sub-martingale property, let φ(x) be a smooth, bounded function that is identically equal to x in some interval [0, δ]. Then g(x) = f(x) − f′(0)φ(x) satisfies g′(0) = 0 and therefore Z_g is a martingale. Since f′(0) ≥ 0, it is then enough to prove that Z_φ(t) is a sub-martingale. We repeat an argument used earlier in the proof of Lemma 6.1, writing

   Z_φ(t) = Σ_j [Z_φ(τ_{j+1} ∧ t) − Z_φ(τ_j ∧ t)]

with a = δ. The terms with even j are martingales, and as for the terms with odd j, φ(x(τ_{j+1} ∧ t)) ≥ φ(x(τ_j ∧ t)). An easy calculation now shows that

   E^{P_x}[Z_φ(t) − Z_φ(s) | F_s] ≥ 0
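The representation x(t) = |β(t)| from Theorem 6.2 makes these identities easy to sanity-check numerically. Below is a small Monte Carlo sketch (variable names, test function, and tolerances are my own, not from the text) that checks the n = 1 moment identity E[x(t)²] = x(0)² + t and compares an expectation computed by simulation against the transition density p(t, x, y):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reflected Brownian motion at a fixed time, via x(t) = |beta(t)|
# with beta(t) ~ N(x0, t) (Theorem 6.2).
x0, t, n_paths = 1.0, 2.0, 400_000
xt = np.abs(rng.normal(x0, np.sqrt(t), size=n_paths))

# Moment identity from the Remark (case n = 1): E[x(t)^2] = x(0)^2 + t.
second_moment = (xt**2).mean()        # should be close to x0**2 + t = 3.0

# Check E[f(x(t))] against the density
#   p(t,x,y) = (exp(-(y-x)^2/2t) + exp(-(y+x)^2/2t)) / sqrt(2 pi t)
# for the test function f(y) = exp(-y).
y = np.linspace(0.0, 25.0, 5001)
p = (np.exp(-(y - x0)**2 / (2*t)) + np.exp(-(y + x0)**2 / (2*t))) / np.sqrt(2*np.pi*t)
mc_value = np.exp(-xt).mean()
quad_value = ((np.exp(-y) * p)[:-1] * np.diff(y)).sum()   # simple Riemann sum
```

Both comparisons should agree to within Monte Carlo error; the same check with f(y) = y^{2n} reproduces the moment recursion above.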

Finally, we identify any P_x satisfying (6.1), (6.2) and (6.3) or (6.4) as the distribution of x(t) = |β(t)|, with β(t) Brownian motion starting from β(0) = x. Clearly p(t, x, y) is the transition probability density of |β(t)|, which is Markov because Brownian motion has the x → −x symmetry. The function u(t, x) = ∫_0^∞ p(t, x, y) f(y) dy satisfies

   u_t = (1/2) u_xx;   u(0, x) = f(x),   u_x(t, 0) = 0

Just as in the case of diffusions without boundary, if f(x(t)) − f(x(0)) − (1/2)∫_0^t f_xx(x(s)) ds is a martingale for all smooth f satisfying f′(0) = 0, then it follows that for all smooth functions v = v(s, x) satisfying v_x(s, 0) = 0 for all s,

   v(t, x(t)) − v(0, x(0)) − ∫_0^t [v_s(s, x(s)) + (1/2) v_xx(s, x(s))] ds

is a martingale. Therefore u(T − t, x(t)) is a martingale under P_x and E^{P_x}[f(x(T))] = u(T, x). This identifies the marginal distributions of P_x as those of |β(·)|. A regular conditioning argument allows us to complete the identification.

The reflected Brownian motion can also be defined in terms of the ordinary Brownian motion β(t) as a solution of the equation

   x(t) = x + β(t) + A(t)

where A(t) is continuous and nondecreasing, but is allowed to increase only when x(s) = 0, i.e.

   ∫_{{s : x(s) > 0}} dA(s) = 0

Such a solution is unique. If x_i(t), i = 1, 2 are two solutions, then

   d[x_1(t) − x_2(t)]² = 2(x_1(t) − x_2(t)) d(A_1(t) − A_2(t)) = −2 x_1(t) dA_2(t) − 2 x_2(t) dA_1(t) ≤ 0

But x_1(0) = x_2(0), which implies x_1(t) ≡ x_2(t) for all t. Existence is easy:

   x(t) = x + β(t) − inf_{s ≤ t} [(x + β(s)) ∧ 0]

is clearly nonnegative, and A(t) = −inf_{s ≤ t} [(x + β(s)) ∧ 0] is nondecreasing and increases only when x + β(t) = −A(t), i.e. when x(t) = 0.

The reflected Brownian motion comes with a representation

   x(t) = β(t) + A(t)

where A(t) is a local time at 0: it is almost surely continuous in t, and it acts as an infinitely strong drift applied when x(t) = 0, which is a set of times of Lebesgue measure 0. This leads to an Itô formula of the form

   du(t, x(t)) = u_t(t, x(t)) dt + u_x(t, x(t)) dx(t) + (1/2) u_xx(t, x(t)) dt
               = u_t(t, x(t)) dt + u_x(t, x(t)) dβ(t) + u_x(t, 0) dA(t) + (1/2) u_xx(t, x(t)) dt

which is a way of seeing the effect of the boundary condition satisfied by u.

There is a way of slowing down the process at the boundary. Let

   σ(s) = ρ A(s) + s

and define an increasing family of stopping times {τ_t} by

   σ(τ_t) = t;   τ_{σ(s)} = s

which has a unique solution that is strictly increasing and continuous in t. Since ds and dA(s) are mutually singular (A(s) increases only when x(s) = 0, which is a set of times of Lebesgue measure 0), we have

   dA(s) = (1/ρ) 1_{{0}}(x(s)) dσ(s)

and

   ds = 1_{(0,∞)}(x(s)) dσ(s)

We can now define a new process

   y(t) = x(τ_t)

and identify the martingales that go with it. By Doob's optional stopping theorem,

   Z_f(t) = f(x(τ_t)) − f(x(0)) − (1/2)∫_0^{τ_t} f″(x(s)) ds − f′(0) A(τ_t)
          = f(x(τ_t)) − f(x(0)) − (1/2)∫_0^{τ_t} f″(x(s)) 1_{(0,∞)}(x(s)) dσ(s) − (1/ρ) f′(0) ∫_0^{τ_t} 1_{{0}}(x(s)) dσ(s)
          = f(y(t)) − f(y(0)) − (1/2)∫_0^t f″(y(s)) 1_{(0,∞)}(y(s)) ds − (1/ρ) f′(0) ∫_0^t 1_{{0}}(y(s)) ds

is a martingale. The new process is then a Markov process with generator

   (Lf)(x) = (1/2) f″(x)  if x > 0;   (Lf)(0) = (1/ρ) f′(0)
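The pathwise equation x(t) = x + β(t) + A(t) above is straightforward to discretize, since the existence proof gives A(t) explicitly as a running minimum. A minimal Python sketch (Euler grid; variable names are my own) that also verifies the two defining properties of A:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized Skorokhod construction of reflected Brownian motion:
#   x(t) = x + beta(t) + A(t),  A(t) = -inf_{s<=t} [(x + beta(s)) ^ 0],
# so that A is nondecreasing and increases only when x(t) = 0.
x0, T, n = 0.0, 1.0, 100_000
dt = T / n
beta = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
z = x0 + beta                                    # unconstrained path x + beta(t)
A = np.maximum.accumulate(-np.minimum(z, 0.0))   # A(t) = -inf_{s<=t} (z(s) ^ 0)
x = z + A                                        # reflected path

nonneg = bool((x >= 0.0).all())                  # x never leaves [0, infinity)
dA = np.diff(A)
# wherever A increased over a step, the path sits exactly at the boundary
at_boundary = bool(np.all(x[1:][dA > 0] == 0.0))
```

On the grid, A increases exactly at the steps where the running minimum of z is renewed, and there x lands at 0; as dt → 0 this recovers the statement that dA(s) is carried by {s : x(s) = 0}.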

We can calculate the Laplace transform in t of the probability P_x[x(t) = 0]. Denoting 1/ρ by γ, this requires us to solve the resolvent equation

   λψ(x) − (1/2) ψ″(x) = 0  for x > 0;   λψ(0) − γψ′(0) = 1

which can be done explicitly to give

   ∫_0^∞ e^{−λt} p(t, x, {0}) dt = e^{−√(2λ) x} / (λ + γ√(2λ))

In particular p(t, 0, {0}) → 1 as t → 0, but p(t, 0, {0}) does not go to 1 linearly in t: {0} is an instantaneous state. As γ → ∞ this becomes the reflected process, and as γ → 0 the absorbed process. For the characterization of these processes {P_{γ,x}} through martingales we have the following theorem.

Theorem 6.3. The process {P_{γ,x}} is characterized uniquely as the process with respect to which

   u(t, x(t)) − u(0, x(0)) − ∫_0^t [u_s(s, x(s)) + (1/2) u_xx(s, x(s))] 1_{(0,∞)}(x(s)) ds      (6.6)

is a sub-martingale for every smooth bounded u with (u_t + γ u_x)(t, 0) ≥ 0 for all t.

Proof. First let us prove that for any smooth bounded u(t, x),

   u(t, x(t)) − u(0, x(0)) − ∫_0^t u_s(s, x(s)) ds − γ ∫_0^t u_x(s, 0) 1_{{0}}(x(s)) ds − (1/2) ∫_0^t u_xx(s, x(s)) 1_{(0,∞)}(x(s)) ds

is a martingale. We saw earlier that the time-changed process P_x has the property that

   f(x(t)) − f(x(0)) − ∫_0^t [γ f′(0) 1_{{0}}(x(s)) + (1/2) f_xx(x(s)) 1_{(0,∞)}(x(s))] ds

is a martingale for smooth bounded functions f. It follows from this that for smooth functions u = u(t, x),

   u(t, x(t)) − u(0, x(0)) − ∫_0^t u_s(s, x(s)) ds − ∫_0^t [γ u_x(s, x(s)) 1_{{0}}(x(s)) + (1/2) u_xx(s, x(s)) 1_{(0,∞)}(x(s))] ds

is a martingale. If u_s(s, 0) + γ u_x(s, 0) = 0 for all s, we can replace ∫_0^t u_s(s, x(s)) ds by

   ∫_0^t u_s(s, 0) 1_{{0}}(x(s)) ds + ∫_0^t u_s(s, x(s)) 1_{(0,∞)}(x(s)) ds
   = −γ ∫_0^t u_x(s, 0) 1_{{0}}(x(s)) ds + ∫_0^t u_s(s, x(s)) 1_{(0,∞)}(x(s)) ds

proving that the expression (6.6) is a martingale. On the other hand, if u_s(s, 0) + γ u_x(s, 0) = h(s) ≥ 0, then v(s, x) = u(s, x) − H(s), with H′(s) = h(s), satisfies the boundary condition v_s(s, 0) + γ v_x(s, 0) = 0, and

   v(t, x(t)) − v(0, x(0)) − ∫_0^t [v_s(s, x(s)) + (1/2) v_xx(s, x(s))] 1_{(0,∞)}(x(s)) ds

is a martingale. Substituting v(s, x) = u(s, x) − H(s) with H(s) nondecreasing, we obtain the result.

Finally we will identify the processes P_{γ,x} through their associated ODEs. Consider the solution g(x) of

   λ g(x) − (1/2) g″(x) = f(x)  for x > 0
   λ g(0) − γ g′(0) = a

Then with u(t, x) = e^{−λt} g(x) we see that

   e^{−λt} g(x(t)) − g(x(0)) + ∫_0^t e^{−λs} [f(x(s)) 1_{(0,∞)}(x(s)) + a 1_{{0}}(x(s))] ds

is a martingale. Equating expectations at t = 0 and t = ∞ we identify

   g(x) = E_x [ ∫_0^∞ e^{−λs} [f(x(s)) 1_{(0,∞)}(x(s)) + a 1_{{0}}(x(s))] ds ]

The situation corresponding to the operator (1/2) a(x) D_x² + b(x) D_x is similar so long as a(x) ≥ c > 0: the Girsanov formula works, so b can be taken care of, and one can do a random time change as well, reducing the problem to a = 1, b = 0.

Markov chain approximations. We have a transition probability π_h(x, dy). We assume that for any δ > 0,

   sup_x π_h(x, {y : |y − x| ≥ δ}) = o(h)

uniformly in x. This implies immediately that the processes P_{h,x} of piecewise constant (on intervals of size h) versions of the Markov chain are compact in D[[0, T]; R]. We impose conditions on the behavior of

   a_h(x) = (1/h) ∫_{|y−x|≤δ} (y − x)² π_h(x, dy)

and

   b_h(x) = (1/h) ∫_{|y−x|≤δ} (y − x) π_h(x, dy)

to determine any limit. We assume that

   sup_{0<h, 0≤x≤l} a_h(x) ≤ C(l) < ∞
   inf_{0<h, 0≤x≤l} b_h(x) ≥ −C(l)

and that, uniformly on compact subsets of (0, ∞),

   lim_{h→0} a_h(x) = 1,   lim_{h→0} b_h(x) = 0

For smooth functions f with compact support in (0, ∞), by a Taylor expansion,

   lim_{h→0} (1/h) ∫ [f(y) − f(x)] π_h(x, dy) = (1/2) f″(x)

locally uniformly. Any limit is therefore Brownian motion away from 0, and we need only determine what happens at 0. The parameter γ is to be determined from the behavior of a_h(x), b_h(x) as x → 0 and h → 0.

Theorem 6.4.

1. If lim inf_{h→0, x→0} a_h(x) ≥ c > 0, then the limit exists and is the reflected Brownian motion, i.e. γ = ∞.

2. If b_h(x) → +∞ whenever x → 0, h → 0 and a_h(x) → 0, then again the limit is the reflected Brownian motion.

3. If b_h(x) remains uniformly bounded as h → 0, x → 0, and tends to γ whenever h → 0, x → 0 and a_h(x) → 0, then the limit exists and is the process with generator

   (L_γ f)(x) = (1/2) 1_{(0,∞)}(x) f″(x) + γ 1_{{0}}(x) f′(0)

Proof. For proving 1 or 2, we need to show that for any limit P of P_{h,x} as h → 0, x → 0, and for some l > 0,

   E^P [ ∫_0^{τ_l ∧ t} 1_{{0}}(x(s)) ds ] = 0

where τ_l is the exit time from [0, l]. Take l = 1. By a Taylor expansion we see that

   ∫ [f(y) − f(x)] π_h(x, dy) = h [ b_h(x) f′(x) + (1/2) a_h(x) f″(x) ] + o(h)

9 In particular if we consider f = f ǫ (x) defined in (6.5), it is uniformly bounded by 2ǫ on [, ]. and [f(y) f(x)]π h (x, dy) = h[b h (x)f (x)+ 2 a h(x)f (x)]+o(h) = (L j f)(x)+o(h) uniformly on [, ]. (L h f)(x) = [b h (x)f (x) + 2 a h(x)f (x)] c [,ǫ] (x) 2C()ǫ f(x(nh)) f(x()) (n ) j= is a martingale. Therefore we obtain in the limit f(x(t τ )) f(x()) [f(y) f(x(jh))]π(x(jh), dy) τ is a sub-martingale. This provides an estimate E P [ τ c [,ǫ] (x) 2C()ǫ [,ǫ] (x(s))ds] Kǫ(t + ) which proves. To prove 2 we need a different test function φ. Take φ(x) = f(x) + δx on [, ]. Then on [, ], φ and φ are bounded by C(ǫ + δ) and for some L θ as θ, lim inf h [b h(x)φ (x) + 2 a h(x)φ (x)] (L θ δ θ) [,ǫ] (x) C(ǫ + δ) This provides the estimate for the limiting P E P [ τ and this is enough. Let ǫ to get lim sup E P [ ǫ [L θ δ θ] [,ǫ] (x(s))ds] C(ǫ + δ) τ δ [,ǫ] (x(s))ds] C L θ δ θ and now pick θ = δ and let δ. Finally we turn to 3. We need to consider for smooth u(s, x), with u s (s, ) + γu x (s, ), [u(s + h, y) u(s, x)]π h (x, dy) and determine its sign when x is close to and h is small. We see that this is nearly h[u s (s, x) + b h (x)u x (s, x) + 2 a h(x)u xx (s, x)] From earlier estimate we know that the chain will not spend much time close to where a h (x) is not small and then b h (x) is close to γ. Since u s (s, )+γu x (s, ) this contribution is nonnegative.