Optimal Stopping and Applications


Alex Cox

March 16, 2009

Abstract: These notes are intended to accompany a graduate course on optimal stopping, and in places are a bit brief. They follow the book Optimal Stopping and Free-Boundary Problems by Peskir and Shiryaev, and more details can generally be found there.

1 Introduction

1.1 Motivating Examples

Given a stochastic process $X_t$, an optimal stopping problem is to compute

$$\sup_\tau \mathbb{E} F(X_\tau), \qquad (1)$$

where the supremum is taken over some set of stopping times. As well as computing the supremum, we will usually also be interested in finding a description of the optimal stopping time. For motivation, we consider three applications where optimal stopping problems arise.

(i) Stochastic analysis. Let $B_t$ be a Brownian motion. Then classical results tell us that for any (fixed) $T \geq 0$,

$$\mathbb{E}\Big[\sup_{0 \leq s \leq T} B_s\Big] = \sqrt{\frac{2T}{\pi}}.$$

What can we say about the left-hand side if we replace the constant $T$ by a stopping time $\tau$? To solve this problem, it turns out that we can consider the (class of) optimal stopping problems:

$$V(\lambda) = \sup_\tau \mathbb{E}\Big[\sup_{0 \leq s \leq \tau} B_s - \lambda\tau\Big], \qquad (2)$$

where we take the supremum over all stopping times which have $\mathbb{E}\tau < \infty$, and $\lambda > 0$. (Note that in order to fit into the exact form described in (1), we need

to take e.g. $X_t = (B_t, \sup_{s \leq t} B_s, t)$.) Now suppose we can solve this problem for all $\lambda > 0$; then we can use a Lagrangian-style approach to get back to the original question: for any suitable stopping time $\tau$ we have

$$\mathbb{E}\Big[\sup_{0 \leq s \leq \tau} B_s\Big] \leq V(\lambda) + \lambda\,\mathbb{E}\tau \quad\text{for every } \lambda > 0, \quad\text{and hence}\quad \mathbb{E}\Big[\sup_{0 \leq s \leq \tau} B_s\Big] \leq \inf_{\lambda > 0}\big(V(\lambda) + \lambda\,\mathbb{E}\tau\big).$$

Now, if we suppose that the infimum here is attained, and the supremum in (2) is also attained at the optimal $\lambda$, then we see that the right-hand side is a function of $\mathbb{E}\tau$ alone; further, we can attain equality in this bound, so that it is the smallest possible bound.

(ii) Sequential analysis. Suppose we observe (in continuous time) a statistical process $X_t$; perhaps $X_t$ is a measure of the health of patients in a clinical trial. We wish to know whether a particular drug has a positive impact on the patients. Suppose that the process $X_t$ has the form

$$X_t = \mu t + B_t,$$

where $\mu$ is the unknown effect of the treatment. To make inference, we might test the hypotheses

$$H_0 : \mu = \mu_0, \qquad H_1 : \mu = \mu_1.$$

Our goal might be to minimise the average time we take to make a decision, subject to some constraints on the probabilities of making the wrong decisions:

$$\mathbb{P}(\text{accept } H_0 \mid H_1 \text{ true}) \leq \alpha, \qquad \mathbb{P}(\text{accept } H_1 \mid H_0 \text{ true}) \leq \beta.$$

Again, using Lagrangian techniques, we are able to rewrite this as an optimal stopping problem, which we can solve to find the optimal stopping time (together with a rule to decide whether we accept or reject when we stop).

(iii) Mathematical finance. Let $S_t$ be the share price of a company. A common type of option is the American put option. This contract gives the holder of the option the right, but not the obligation, to sell the stock for a fixed price $K$ to the writer of the contract at any time before some fixed horizon $T$. In particular, suppose the holder exercises at a stopping time $\tau$; then the discounted¹ value of the payoff is

$$e^{-r\tau}(K - S_\tau)^+.$$
This is the difference between the amount she receives and the actual value of the asset, and we take the positive part, since she will only choose to exercise the option, and sell the share, if the price she would receive is larger than the price she would get from selling on the market. If the holder of the contract chooses the stopping time $\tau$, the fundamental theorem of asset pricing² says that the payoff has initial value

$$\mathbb{E}_{\mathbb{Q}}\, e^{-r\tau}(K - S_\tau)^+,$$

¹ We convert time-$t$ money into time-0 money by multiplying the time-$t$ amount by the discount factor $e^{-rt}$, where $r$ is the interest rate.
² If you don't know what this is, ignore the $\mathbb{Q}$ that crops up, and just think of this as pricing by taking an average of the final payoff.

where $\mathbb{Q}$ is the risk-neutral measure. Since the seller would not want to sell this too cheaply (and in fact there is a better argument, based on hedging, that we will see later), and she doesn't know what strategy the buyer will use, she should take the supremum over all stopping times $\tau \leq T$:

$$\text{Price} = \sup_{\tau \leq T} \mathbb{E}_{\mathbb{Q}}\, e^{-r\tau}(K - S_\tau)^+.$$

So pricing the American put option is equivalent to solving an optimal stopping problem.

1.2 Simple Example

Once a problem of interest has been set up as an optimal stopping problem, we then need to consider the method of solution. The exact techniques vary depending on the problem; however, there are two important approaches, the martingale approach and the Markov approach, and there are a number of common features that any optimal stopping problem is likely to exhibit. We will now consider a reasonably simple example where we will be able to observe many of these common features explicitly.

Let $X_n$ be a simple symmetric random walk on a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_n)_{n \in \mathbb{Z}_+}, (\mathbb{P}_x)_{x \in \mathbb{Z}})$, where $\mathcal{F}_n$ is the natural filtration of $X_n$, and $\mathbb{P}_x(X_0 = x) = 1$. Let $f(\cdot)$ be a function on $\mathbb{Z}$ with $f(n) = 0$ whenever $n \leq -N$ or $n \geq N$, for some $N$. Then the optimal stopping problem we will consider is to find

$$\sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_0 f(X_\tau),$$

where $\mathcal{M}^{H_{\pm N}}$ is the set of stopping times which are no later than the hitting time $H_{\pm N} = \inf\{n \geq 0 : |X_n| \geq N\}$. Note that some kind of constraint along these lines is necessary to make the problem non-trivial, since otherwise we could use the recurrence of the process to wait until the process hits the point at which the function $f(\cdot)$ is maximised; similar restrictions are common throughout optimal stopping, although they often arise more naturally from the setup. For two stopping times $\tau_1, \tau_2$, introduce the class of stopping times (generalising the above)

$$\mathcal{M}^{\tau_2}_{\tau_1} = \{\text{stopping times } \tau : \tau_1 \leq \tau \leq \tau_2\}, \qquad (3)$$

where we will often omit $\tau_1$ when $\tau_1 \equiv 0$.
Then we may use the Markov property of $X_n$ to deduce³ that

$$\sup_{\tau \in \mathcal{M}^{H_{\pm N}}_n} \mathbb{E}_0[f(X_\tau) \mid \mathcal{F}_n]$$

³ There is a technical issue here: namely that the set of stopping times over which we take the supremum may be uncountable, and therefore the resulting supremum, which is a function from $\Omega$ to $\mathbb{R}$, may not be measurable, and therefore may not be a random variable. We will introduce the correct notion to resolve this issue later, but for the moment we interpret this statement informally.

must be a function of $X_n$ only (i.e. independent of $\mathcal{F}_n$ except through $X_n$), so, on the set $\{n \leq H_{\pm N}\}$, we get

$$\sup_{\tau \in \mathcal{M}^{H_{\pm N}}_n} \mathbb{E}_0[f(X_\tau) \mid \mathcal{F}_n] = V(X_n, n)$$

for some function $V(\cdot, \cdot)$. In principle, we believe that this function should depend only on the spatial, and not the time, parameter, so we introduce as well

$$V(y) = V(y, 0) = \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_y f(X_\tau). \qquad (4)$$

Note that we must have⁴ $V(y) \leq V(y, n)$.

Our aim is now to find $V(0)$ (and the connected optimal strategy), but in fact it will be easiest to do this by characterising the whole function $V(\cdot)$. The function $V(\cdot)$ will be called the value function. In general, we can characterise $V(\cdot)$ as follows:

Theorem 1.1. The value function $V(\cdot)$ is the smallest concave function on $\{-N, \ldots, N\}$ which is larger than $f(\cdot)$. Moreover:

(i) $V(X_{H_{\pm N} \wedge n})$ is a supermartingale;
(ii) there exists a stopping time $\tau^*$ which attains the supremum in (4);
(iii) the stopped process $V(X_{H_{\pm N} \wedge n \wedge \tau^*})$ is a martingale;
(iv) $V$ satisfies the Bellman equation
$$V(x) = \max\{f(x), \mathbb{E}_x V(X_1)\}; \qquad (5)$$
(v) $V(x) = V(x, n)$ for all $n \geq 0$;
(vi) $V(X_n)$ is the smallest supermartingale dominating $f(X_n)$ (i.e. $V(X_n) \geq f(X_n)$).

Proof. We first note that the smallest concave function dominating $f$ does indeed exist: let $G$ be the set of concave functions which are larger than $f$, and set $g(x) = \inf_{h \in G} h(x)$. The set $G$ is non-empty, since the constant function $h(x) \equiv M$ is in $G$ for $M$ greater than $\max_{n \in \{-N, \ldots, N\}} f(n)$, so $g$ is finite. Suppose $g$ is not concave. Then we can find $x < y < z$ such that

$$g(y) < \frac{z - y}{z - x}\, g(x) + \frac{y - x}{z - x}\, g(z),$$

⁴ Any difference between $V(y)$ and $V(y, n)$ must come from the existence of a stopping time which uses the extra randomness in $\mathcal{F}_n$ (which contains no information about the future of the process); if we have a stopping time which is optimal without this randomness, we can always construct a stopping time at time $n$ which has the same expected payoff.

and therefore there is some $h \in G$ for which

$$h(y) < \frac{z - y}{z - x}\, g(x) + \frac{y - x}{z - x}\, g(z) \leq \frac{z - y}{z - x}\, h(x) + \frac{y - x}{z - x}\, h(z),$$

contradicting the concavity of $h$.

Now we consider the function $V(\cdot)$. Clearly, from the definition (4), we must have $V(x) \geq f(x)$ (just take $\tau \equiv 0$). Moreover, $V(\cdot)$ must be concave. We introduce the following notation: let $\mathbb{P}_x$ (and $\mathbb{E}_x$) denote the probability measure under which $X_0 = x$, and define the stopping time

$$H_{x,z} = \inf\{n \geq 0 : X_n = x \text{ or } X_n = z\};$$

then, for $x \leq y \leq z$, we get

$$V(y) = \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_y f(X_\tau) \geq \sup_{\tau \in \mathcal{M}^{H_{\pm N}}_{H_{x,z}}} \mathbb{E}_y f(X_\tau) \geq \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_y f(X_{H_{x,z} + \tau \circ \theta_{H_{x,z}}}),$$

where $\theta_T$ is the usual shift operator. So

$$V(y) \geq \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \Big[ \mathbb{E}_y\big[f(X_{H_{x,z} + \tau \circ \theta_{H_{x,z}}}) \mid X_{H_{x,z}} = x\big]\, \frac{z - y}{z - x} + \mathbb{E}_y\big[f(X_{H_{x,z} + \tau \circ \theta_{H_{x,z}}}) \mid X_{H_{x,z}} = z\big]\, \frac{y - x}{z - x} \Big]$$
$$\geq \frac{z - y}{z - x} \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_x f(X_\tau) + \frac{y - x}{z - x} \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_z f(X_\tau) = \frac{z - y}{z - x}\, V(x) + \frac{y - x}{z - x}\, V(z)$$

(using $\mathbb{P}_y(X_{H_{x,z}} = x) = \frac{z-y}{z-x}$ for the simple symmetric random walk), which gives the required concavity, and so we have $V \in G$.

We note first that $V(-N) = V(N) = 0$, from (4) and the fact that $f(N) = f(-N) = 0$; from this, we deduce that any $g \in G$ is non-negative. In fact, since for any $g \in G$ and $x \in \{-N+1, \ldots, N-1\}$ concavity implies $g(x) \geq \frac{1}{2}\big(g(x-1) + g(x+1)\big)$, we have

$$g(X_n) \geq \mathbb{E}[g(X_{n+1}) \mid \mathcal{F}_n]$$

and hence $g(X_{H_{\pm N} \wedge n})$ is a supermartingale. In particular, for all $\tau \in \mathcal{M}^{H_{\pm N}}$, we have

$$g(x) \geq \mathbb{E}_x g(X_\tau) \geq \mathbb{E}_x f(X_\tau),$$

and therefore

$$g(x) \geq \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_x f(X_\tau) = V(x),$$

and we conclude that $V(\cdot)$ is the smallest function in $G$. Note that the above argument also applies to $V(\cdot)$ itself, since $V \in G$, so we conclude that $V(X_{H_{\pm N} \wedge n})$ is also a supermartingale.

Now we need to construct the optimal stopping time. Define the sets

$$C = \{x : V(x) > f(x)\}, \qquad D = \{x : V(x) = f(x)\}.$$

We claim that the function $V(\cdot)$ is linear between points of $D$: recall that the minimum of any two concave functions is also concave, and suppose that there is a point $y$ of $C$ at which $V(\cdot)$ is larger than the line between $(x, V(x))$ and $(z, V(z))$, where $x$ and $z$ are the nearest points below and above $y$ in $D$. Then there exists a point $y' \in \{x+1, \ldots, z\}$ which maximises

$$\frac{f(y') - f(x)}{y' - x}.$$

Taking the minimum of $V$ and the line through $(x, V(x))$ and $(y', f(y'))$ then gives a strictly smaller concave function, which remains larger than $f(\cdot)$. Hence $V(\cdot)$ is indeed linear between points of $D$.

Now consider the stopping time $\tau^* = \inf\{n \geq 0 : X_n \in D\}$. Clearly, if $X_0 = y \in C$, and $x$ and $z$ are the nearest points below and above $y$ in $D$, we have

$$\mathbb{E}_y f(X_{\tau^*}) = \frac{z - y}{z - x}\, f(x) + \frac{y - x}{z - x}\, f(z) = V(y).$$

Alternatively, if $y \in D$, then $\tau^* = 0$, $\mathbb{P}_y$-a.s., and so $\mathbb{E}_y f(X_{\tau^*}) = f(y)$. Hence $\tau^*$ is optimal in the sense that

$$V(y) = \sup_{\tau \in \mathcal{M}^{H_{\pm N}}} \mathbb{E}_y f(X_\tau) = \mathbb{E}_y f(X_{\tau^*}).$$

Consequently, $V(y) = \mathbb{E}_y V(X_{H_{\pm N} \wedge \tau^*})$, and the process $V(X_{H_{\pm N} \wedge n \wedge \tau^*})$ (which we have already shown is a non-negative supermartingale) must be a (uniformly integrable) martingale. Property (iv) now follows: if $f(x) \geq \mathbb{E}_x V(X_1)$ then $\tau^* = 0$, and otherwise $\tau^* \geq 1$.

Now consider the function $V(x, n)$. Suppose we have $V(X_n, n) > V(X_n)$ for some $n$, $X_n$, and let $\tau_n \in \mathcal{M}^{H_{\pm N}}_n$ be a stopping time for which

$$\mathbb{E}_0[f(X_{\tau_n}) \mid \mathcal{F}_n] > V(X_n).$$

Using the supermartingale property of $V(X_n)$, we conclude that

$$V(X_n) \geq \mathbb{E}_0[V(X_{\tau_n}) \mid \mathcal{F}_n] \geq \mathbb{E}_0[f(X_{\tau_n}) \mid \mathcal{F}_n],$$

which contradicts the choice of $\tau_n$.

Finally, suppose that $Y_n$ is any supermartingale with $Y_n \geq f(X_n)$; we show that $Y_n \geq V(X_n)$. Write

$$\tau_n = \inf\{m \geq n : X_m \in D\}.$$
Then

$$Y_n \geq \mathbb{E}[Y_{\tau_n} \mid \mathcal{F}_n] \geq \mathbb{E}[f(X_{\tau_n}) \mid \mathcal{F}_n] = V(X_n).$$

Hence $V(X_n)$ is the smallest supermartingale dominating $f(X_n)$. ∎

Remark 1.2. (i) The sets $C$ and $D$ are called the continuation set and stopping set respectively. Note the connection between $C$ and $D$ and the concavity of the

value function:

$$V(x) - \tfrac{1}{2}\big(V(x+1) + V(x-1)\big) = 0, \quad x \in C, \qquad (6)$$
$$V(x) - \tfrac{1}{2}\big(V(x+1) + V(x-1)\big) \geq 0, \quad x \in D. \qquad (7)$$

In particular, $V(X_n)$ is a martingale in $C$, and a supermartingale on $D$ (strictly so, if the inequality is strict).

(ii) We have made strong use of the Markov property here: the ability to write the value function as a function of $X_n$ has made things considerably easier; in addition, the fact that there is no time horizon helps, since otherwise our martingale would have to be a function of both time and space, and correspondingly harder to characterise. In general, we would like to consider situations where the process is not Markov, and this means we cannot characterise the value function quite so simply. However, part (vi) suggests the way out: rather than look for functions on the state space of the Markov process, we will look for the smallest supermartingale which dominates the payoff. The corresponding supermartingale will be called the Snell envelope.

(iii) The other question is: how might things change in continuous time and space? A simple way of considering this is to see what happens if we rescale the current problem, letting the space and time steps get small appropriately, so that in the limit we get a Brownian motion. Thinking about (6)-(7), we might expect the state space $[-N, N]$ to break up into two sets, $C$ and $D$, such that

$$\lim_{n \to \infty} n^2\Big(V(x) - \tfrac{1}{2}\big(V(x + 1/n) + V(x - 1/n)\big)\Big) = -\tfrac{1}{2}V''(x) = 0, \quad x \in C,$$
$$\lim_{n \to \infty} n^2\Big(V(x) - \tfrac{1}{2}\big(V(x + 1/n) + V(x - 1/n)\big)\Big) = -\tfrac{1}{2}V''(x) \geq 0, \quad x \in D,$$

and such that $V(x) = f(x)$ in $D$. Thinking a bit more, suppose our process starts at $y \in [-N, N]$; a possible approach to finding $V(y)$ is to find the points $x \leq y \leq z$, and the function $V$, such that

$$V''(w) = 0, \quad w \in (x, z),$$
$$V(w) = f(w), \quad w \in \{x, z\},$$
$$V'(w) = f'(w), \quad w \in \{x, z\},$$
$$V''(w) \leq 0, \quad w \in [-N, N];$$

then this will be sufficient to characterise the value function at $y$.
Such a formulation is analogous to the free-boundary problems that we will see in later lectures (although those will normally be formulated as PDE problems, rather than the ordinary differential equation problem we have here). Conditions like those on the value and the gradient at the boundary are known as continuous fit and smooth fit conditions.
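The characterisation in Theorem 1.1 is easy to explore numerically. The sketch below is a hypothetical illustration, not part of the original notes (the payoff $f$ is invented): it computes the value function for the random walk example by iterating the Bellman equation (5), $V(x) = \max\{f(x), \tfrac12(V(x-1)+V(x+1))\}$, and reads off the stopping set $D = \{x : V(x) = f(x)\}$.

```python
import numpy as np

def value_function(f, n_iter=10_000):
    """Smallest concave majorant of f on {-N,...,N} via Bellman iteration.

    f is an array of payoffs indexed by x = -N,...,N, vanishing at x = -N
    and x = N.  Iterating V <- max(f, 0.5*(V(x-1) + V(x+1))) converges to
    the value function of the stopped random walk (Theorem 1.1).
    """
    V = f.astype(float)
    for _ in range(n_iter):
        inner = 0.5 * (V[:-2] + V[2:])        # E_x V(X_1) for interior x
        V[1:-1] = np.maximum(f[1:-1], inner)  # Bellman equation (5)
    return V

N = 5
x = np.arange(-N, N + 1)
f = np.maximum(0, 4 - np.abs(x - 1))  # an invented payoff, zero at x = +/-N
V = value_function(f)
D = x[V == f]                         # stopping set
print(V)
print(D)
```

For this payoff the iteration converges to the smallest concave majorant of $f$: linear on the continuation set, equal to $f$ on $D$, exactly as the theorem predicts.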

2 Discrete time

Our approach throughout this section will be to start in the more general case, where we simply assume that there is some gains process $G_n$, which is the amount we receive at time $n$ if we choose to stop, and then later to specialise to the Markov case, where we can say more about the value function and the optimal stopping time.

2.1 Martingale treatment

We begin by assuming simply that $(G_n)_{n \geq 0}$ is a sequence of adapted random variables on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_n)_{n \geq 0}, \mathbb{P})$, and interpret $G_n$ as the gain, or reward, we receive for stopping at time $n$. Recall the definition of $\mathcal{M}^{\tau_2}_{\tau_1}$ as the set of stopping times which are at most $\tau_2$ and at least $\tau_1$; we shall omit the lower limit when this is 0, and if we omit the upper limit, we admit any stopping time which is almost surely finite. Our optimisation will be taken over some set of stopping times $\mathcal{N} \subseteq \mathcal{M}$, so the value function is

$$V = \sup_{\tau \in \mathcal{N}} \mathbb{E} G_\tau.$$

To keep the problem technically simple, we assume that

$$\sup_{\tau \in \mathcal{N}} \mathbb{E}\Big[\sup_{n \leq \tau} |G_n|\Big] < \infty. \qquad (8)$$

Note that this can be rather restrictive, but if $\mathcal{N} = \mathcal{M}^\tau$ for some relatively nice $\tau$ (like $\tau = H_{\pm N}$, or some constant) we are usually OK.

We begin by considering the case where we have some finite horizon to the problem: that is, $\mathcal{N} = \mathcal{M}^N$, for a fixed constant $N$. Clearly, if we arrive at this horizon, we have no choice but to stop, so that the value function and the gain function are the same at time $N$. From here, we can work backwards: if, conditional on the information at time $N-1$, we are on average better off stopping, we should stop; otherwise we continue, and hence we can calculate the value function at $N-1$. This is exactly the idea expressed in the Bellman equation (5). Formally, we can define the candidate for our Snell envelope⁵ $S_n$ inductively as follows:

$$S_n = \begin{cases} G_N, & n = N, \\ \max\{G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\}, & n = N-1, N-2, \ldots, 0. \end{cases} \qquad (9)$$

Then the following result holds:

Theorem 2.1.
For each $n$, $V_n = \mathbb{E} S_n$ is the value of the optimal stopping problem

$$V_n = \sup_{\tau \in \mathcal{M}^N_n} \mathbb{E} G_\tau, \qquad (10)$$

⁵ The difference between the value function and the Snell envelope is that the value function is a numerical value, possibly depending on the choice of the initial state in the Markovian context, whereas the Snell envelope is a random variable. In the example of the previous section, $V(\cdot)$ was the value function, $V(X_n)$ the Snell envelope.

assuming (8) holds. Moreover:

(i) the stopping time
$$\tau_n = \inf\{n \leq k \leq N : S_k = G_k\}$$
is optimal in (10);

(ii) the process $(S_k)_{0 \leq k \leq N}$ is the smallest supermartingale which dominates $(G_k)_{0 \leq k \leq N}$;

(iii) the stopped process $(S_{k \wedge \tau_n})_{n \leq k \leq N}$ is a martingale.

Proof. Note that condition (8) ensures that all the stochastic processes mentioned are integrable. We begin by showing that

$$S_n \geq \mathbb{E}[G_\tau \mid \mathcal{F}_n], \quad \text{for } \tau \in \mathcal{M}^N_n, \qquad (11)$$
$$S_n = \mathbb{E}[G_{\tau_n} \mid \mathcal{F}_n]. \qquad (12)$$

We proceed backwards inductively from $n = N$. Clearly both statements are true for $n = N$. The inductive step follows from the definition of $S_n$: suppose the statements are true for $n$ and take $\tau \in \mathcal{M}^N_{n-1}$; then

$$\mathbb{E}[G_\tau \mid \mathcal{F}_{n-1}] = G_{n-1} 1_{\{\tau = n-1\}} + \mathbb{E}[G_{\tau \vee n} \mid \mathcal{F}_{n-1}] 1_{\{\tau \geq n\}} \leq S_{n-1} 1_{\{\tau = n-1\}} + \mathbb{E}\big[\mathbb{E}[G_{\tau \vee n} \mid \mathcal{F}_n] \mid \mathcal{F}_{n-1}\big] 1_{\{\tau \geq n\}}.$$

Now, $\tau \vee n \in \mathcal{M}^N_n$, so $\mathbb{E}[G_{\tau \vee n} \mid \mathcal{F}_n] \leq S_n$, and by the definition of $S_{n-1}$,

$$\mathbb{E}[G_\tau \mid \mathcal{F}_{n-1}] \leq S_{n-1} 1_{\{\tau = n-1\}} + \mathbb{E}[S_n \mid \mathcal{F}_{n-1}] 1_{\{\tau \geq n\}} \leq S_{n-1} 1_{\{\tau = n-1\}} + S_{n-1} 1_{\{\tau \geq n\}} = S_{n-1}.$$

If we now replace $\tau$ above with $\tau_{n-1}$, and note that $\tau_{n-1} = \tau_n$ and $S_{n-1} = \mathbb{E}[S_n \mid \mathcal{F}_{n-1}]$ on the set $\{\tau_{n-1} \geq n\}$, all the inequalities above can be seen to be equalities, and thus

$$S_{n-1} = \mathbb{E}[G_{\tau_{n-1}} \mid \mathcal{F}_{n-1}].$$

Taking expectations in (11) and (12), we conclude that $\mathbb{E} S_n \geq \mathbb{E} G_\tau$ for any $\tau \in \mathcal{M}^N_n$, and further that $\mathbb{E} S_n = \mathbb{E} G_{\tau_n}$. Hence (10) holds, and $\tau_n$ is optimal for this equation. The supermartingale and dominance properties follow from the definition (9). To show $S_n$ is the smallest supermartingale dominating $G_n$, consider an alternative supermartingale $U_n$ which also dominates $G_n$. Then clearly $S_N = G_N \leq U_N$, and it follows from (9) that if $S_n \leq U_n$, then also $S_{n-1} \leq U_{n-1}$. The martingale property finally follows from the fact that $(S_{k \wedge \tau_n})_{n \leq k \leq N}$ is a supermartingale with $\mathbb{E} S_{n \wedge \tau_n} = \mathbb{E} S_{N \wedge \tau_n}$. ∎
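The backward induction (9) is exactly how American options (example (iii) of the introduction) are priced in practice. The sketch below is a hypothetical illustration, not from the notes: it assumes a binomial model for the share price (the parameters u, d are invented for the example) and applies the recursion $S_n = \max\{G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\}$ under the risk-neutral measure.

```python
import numpy as np

def american_put_binomial(S0, K, r, u, d, N):
    """American put price by the backward induction (9).

    Binomial model (an assumption for illustration): each period the share
    price is multiplied by u or d; q is the risk-neutral up-probability.
    """
    q = (np.exp(r) - d) / (u - d)       # risk-neutral probability, in (0, 1)
    disc = np.exp(-r)                   # one-period discount factor
    # Work backwards from the horizon; node j at time n has j up-moves.
    j = np.arange(N + 1)
    S = np.maximum(K - S0 * u**j * d**(N - j), 0.0)   # S_N = G_N
    for n in range(N - 1, -1, -1):
        j = np.arange(n + 1)
        payoff = np.maximum(K - S0 * u**j * d**(n - j), 0.0)
        cont = disc * (q * S[1:] + (1 - q) * S[:-1])  # E[S_{n+1} | F_n]
        S = np.maximum(payoff, cont)                  # Bellman step (9)
    return S[0]

price = american_put_binomial(S0=100, K=100, r=0.02, u=1.1, d=1/1.1, N=50)
print(price)
```

Here S holds time-$n$ values in time-$n$ money; multiplying through by $e^{-rn}$ gives the Snell envelope of the discounted gains $G_n = e^{-rn}(K - S_n)^+$ literally, and the optimal stopping time of Theorem 2.1 is the first time the immediate payoff attains the maximum.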

We now want to try and move to the infinite-horizon case, but there is a technical issue: ideally, we would like to be able to write the value function as

$$S_n = \sup_{\tau \in \mathcal{M}^N_n} \mathbb{E}[G_\tau \mid \mathcal{F}_n];$$

however, the set of stopping times in $\mathcal{M}^N_n$ will typically be uncountable, so that the right-hand side may not be measurable! To get round this, we need to introduce the concept of an essential supremum.

Theorem 2.2. Let $\{Z_\alpha ; \alpha \in I\}$ be a collection of random variables on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with $I$ an arbitrary index set. Then there exists a unique (up to $\mathbb{P}$-null sets) random variable $Z^* : \Omega \to \bar{\mathbb{R}} = \mathbb{R} \cup \{-\infty, \infty\}$ such that:

(i) $\mathbb{P}(Z_\alpha \leq Z^*) = 1$ for all $\alpha \in I$;
(ii) if $Y : \Omega \to \bar{\mathbb{R}}$ is another random variable satisfying (i), then $\mathbb{P}(Z^* \leq Y) = 1$.

We call $Z^*$ the essential supremum of $\{Z_\alpha ; \alpha \in I\}$, and write $Z^* = \operatorname{esssup}_{\alpha \in I} Z_\alpha$. Moreover, there exists a countable subset $J$ of $I$ such that

$$Z^* = \sup_{\alpha \in J} Z_\alpha.$$

Sketch of proof. Mapping by a suitable increasing $f : \bar{\mathbb{R}} \to [-1, 1]$, we may assume all the $Z_\alpha$ are bounded. Let $\mathcal{C}$ be the set of countable subsets of $I$. Then

$$a = \sup_{C \in \mathcal{C}} \mathbb{E}\Big[\sup_{\alpha \in C} Z_\alpha\Big] = \mathbb{E}\Big[\sup_{\alpha \in \bigcup_{n \geq 1} C_n} Z_\alpha\Big]$$

for some suitable sequence $C_n \in \mathcal{C}$. However, $\bigcup_{n \geq 1} C_n$ is a countable set, so we may define the random variable

$$Z^* = \sup_{\alpha \in \bigcup_n C_n} Z_\alpha.$$

From here, it is relatively easy to check that $Z^*$ has the required properties. ∎

We are now able to define the Snell envelope for the infinite-horizon problem. Suppose we want to solve a problem of the form

$$\sup_{\tau \in \mathcal{M}} \mathbb{E} G_\tau;$$

then our candidate Snell envelope at time $n$ is

$$S_n = \operatorname{esssup}_{\tau \in \mathcal{M}_n} \mathbb{E}[G_\tau \mid \mathcal{F}_n], \qquad (13)$$

with the associated optimal strategy

$$\tau_n = \inf\{k \geq n : S_k = G_k\}. \qquad (14)$$

Theorem 2.3. Suppose (8) holds with $\mathcal{N} = \mathcal{M}$, and let $S_n$, $\tau_n$ be as defined in (13) and (14). Fix $n$, and suppose that $\mathbb{P}(\tau_n < \infty) = 1$. Then $V_n = \mathbb{E} S_n$ is the value of the optimal stopping problem

$$V_n = \sup_{\tau \in \mathcal{M}_n} \mathbb{E} G_\tau. \qquad (15)$$

Moreover:

(i) the stopping time $\tau_n$ is optimal in (15);
(ii) the process $(S_k)_{k \geq n}$ is the smallest supermartingale which dominates $(G_k)_{k \geq n}$;
(iii) the stopped process $(S_{k \wedge \tau_n})_{k \geq n}$ is a martingale.

Finally, if $\mathbb{P}(\tau_n < \infty) < 1$, there is no optimal stopping time in (15).

Proof. We begin by showing that $S_n \geq \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]$. Suppose that $\sigma_1, \sigma_2 \in \mathcal{M}_{n+1}$ and let

$$A = \big\{\mathbb{E}[G_{\sigma_1} \mid \mathcal{F}_{n+1}] \geq \mathbb{E}[G_{\sigma_2} \mid \mathcal{F}_{n+1}]\big\} \in \mathcal{F}_{n+1}.$$

Then we can define a stopping time

$$\sigma_3 = \sigma_1 1_A + \sigma_2 1_{A^c} \in \mathcal{M}_{n+1}.$$

Hence

$$\mathbb{E}[G_{\sigma_3} \mid \mathcal{F}_{n+1}] = 1_A\, \mathbb{E}[G_{\sigma_1} \mid \mathcal{F}_{n+1}] + 1_{A^c}\, \mathbb{E}[G_{\sigma_2} \mid \mathcal{F}_{n+1}] = \max\big\{\mathbb{E}[G_{\sigma_1} \mid \mathcal{F}_{n+1}], \mathbb{E}[G_{\sigma_2} \mid \mathcal{F}_{n+1}]\big\}.$$

Now, by Theorem 2.2, we know there exists a countable subset $J \subseteq \mathcal{M}_{n+1}$ such that

$$S_{n+1} = \sup_{\tau \in J} \mathbb{E}[G_\tau \mid \mathcal{F}_{n+1}].$$

Moreover, we can assume that $J$ is closed under the operation described above (i.e. if $\sigma_1, \sigma_2 \in J$ then $\sigma_3$ is also in $J$) without losing the countability of $J$. Hence there is a sequence of stopping times $\sigma_k$ such that

$$S_{n+1} = \lim_{k \to \infty} \mathbb{E}[G_{\sigma_k} \mid \mathcal{F}_{n+1}],$$

where $\mathbb{E}[G_{\sigma_k} \mid \mathcal{F}_{n+1}]$ is an increasing sequence of random variables. So, by conditional monotone convergence,

$$\mathbb{E}[S_{n+1} \mid \mathcal{F}_n] = \mathbb{E}\Big[\lim_k \mathbb{E}[G_{\sigma_k} \mid \mathcal{F}_{n+1}] \,\Big|\, \mathcal{F}_n\Big] = \lim_k \mathbb{E}\big[\mathbb{E}[G_{\sigma_k} \mid \mathcal{F}_{n+1}] \mid \mathcal{F}_n\big] = \lim_k \mathbb{E}[G_{\sigma_k} \mid \mathcal{F}_n] \leq S_n. \qquad (16)$$

In addition, for all $\tau \in \mathcal{M}_n$, we get (by conditioning on $\{\tau = n\}$, $\{\tau > n\}$)

$$\mathbb{E}[G_\tau \mid \mathcal{F}_n] \leq \max\{G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\},$$

so $S_n \leq \max\{G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\}$; however, clearly also $S_n \geq G_n$, and together with (16) this implies

$$S_n = \max\{G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\}. \qquad (17)$$

Then, for $n \leq k < \tau_n$, $S_k = \mathbb{E}[S_{k+1} \mid \mathcal{F}_k]$ a.s., that is,

$$S_{k \wedge \tau_n} = \mathbb{E}[S_{(k+1) \wedge \tau_n} \mid \mathcal{F}_k], \quad k \geq n, \qquad (18)$$

and so, for $N > n$,

$$S_n = \mathbb{E}\big[G_{\tau_n} 1_{\{\tau_n < N\}} + S_N 1_{\{\tau_n \geq N\}} \mid \mathcal{F}_n\big].$$

In particular, if $\mathbb{P}(\tau_n < \infty) = 1$, we can let $N \to \infty$, and use (8) and dominated convergence to conclude

$$S_n = \mathbb{E}[G_{\tau_n} \mid \mathcal{F}_n] = \operatorname{esssup}_{\tau \in \mathcal{M}_n} \mathbb{E}[G_\tau \mid \mathcal{F}_n].$$

So $V_n = \mathbb{E} S_n = \sup_{\tau \in \mathcal{M}_n} \mathbb{E} G_\tau$, and $\tau_n$ is optimal.

Suppose $(U_k)_{k \geq n}$ is also a supermartingale dominating $(G_k)_{k \geq n}$. Then

$$S_k = \mathbb{E}[G_{\tau_k} \mid \mathcal{F}_k] \leq \mathbb{E}[U_{\tau_k} \mid \mathcal{F}_k] \leq U_k,$$

where the final inequality follows from conditional Fatou, applied to $U_{m \wedge \tau_k} \geq -\sup_{n \leq r \leq \tau_k} |G_r|$, which (by (8)) is an integrable random variable, independent of $m$. We can conclude that the stopped process is a martingale from (18) and the integrability condition.

For the final statement in the theorem, suppose that $\tau \in \mathcal{M}_n$ is another stopping time; then if $\mathbb{P}(S_\tau > G_\tau) > 0$, we get

$$\mathbb{E} G_\tau < \mathbb{E} S_\tau \leq \mathbb{E} S_n = V_n,$$

so that any optimal $\tau$ has $\mathbb{P}(\tau \geq \tau_n) = 1$. In particular, if $\mathbb{P}(\tau_n < \infty) < 1$, there is no optimal $\tau$ with $\mathbb{P}(\tau < \infty) = 1$. ∎

2.2 Markovian setting

Now consider a time-homogeneous Markov chain $X = (X_n)_{n \geq 0}$, with $X_n \in E$ for some measurable space $(E, \mathcal{B})$, defined on a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_n)_{n \geq 0}, \mathbb{P}_x)$, with $\mathbb{P}_x(X_0 = x) = 1$. Further, for simplicity, we will assume that $(\Omega, \mathcal{F}) = (E^{\mathbb{Z}_+}, \mathcal{B}^{\mathbb{Z}_+})$, so that the shift operators $\theta_n : \Omega \to \Omega$ can be defined by $(\theta_n(\omega))_k = \omega_{n+k}$, and that $\mathcal{F}_0$ is trivial. Of course, this is just a special case of the setting considered in Section 2.1, so can we say anything extra? Consider an infinite-horizon problem:

$$V(x) = \sup_{\tau \in \mathcal{M}} \mathbb{E}_x G(X_\tau), \qquad (19)$$

where we note that we have replaced the general process $G_n$ by a process depending only on the current state, $G(X_n)$. Assume that there is an optimal strategy for the problem. Before, the solution was expressed in terms of the Snell envelope.
The next result shows that we can make an explicit connection between the Snell envelope and the value function:

Lemma 2.4. Let $S_n$ be the Snell envelope given by (13). Then, $\mathbb{P}_x$-a.s., for all $x \in E$, $n \geq 0$,

$$S_n = V(X_n). \qquad (20)$$

Proof. We first note that $V(X_n) \leq S_n$, since the latter is a supremum taken over all stopping times in $\mathcal{M}_n$, whereas the former is equal to the essential supremum taken over the stopping times in $\mathcal{M}_n$ of the form $\tau_n = n + \tau \circ \theta_n$. We also note that

$$V(x) \geq \sup_{\tau \in \mathcal{M}_1} \mathbb{E}_x[G(X_\tau)] = \sup_{\tau \in \mathcal{M}_0} \mathbb{E}_x[G(X_{1 + \tau \circ \theta_1})] = \sup_{\tau \in \mathcal{M}_0} \mathbb{E}_x\big[\mathbb{E}_{X_1} G(\tilde{X}_\tau)\big] = \mathbb{E}_x\Big[\operatorname{esssup}_{\tau \in \mathcal{M}_0} \mathbb{E}_{X_1}[G(\tilde{X}_\tau)]\Big] = \mathbb{E}_x[V(X_1)],$$

where $\tilde{X}$ and $X$ are equal in law (the penultimate equality uses the increasing sequence from Theorem 2.2 together with monotone convergence). In particular, we can conclude that $V(X_n)$ is a supermartingale, and dominates $G(X_n)$, but is smaller than $S_n$. By property (ii) of Theorem 2.3, we deduce that $S_n = V(X_n)$. ∎

So, whereas before we needed to work with the Snell envelope, now we need only consider $V(\cdot)$. To help matters, we note that the key equation (17) becomes

$$S_n = V(X_n) = \max\{G(X_n), \mathbb{E}[S_{n+1} \mid \mathcal{F}_n]\} = \max\{G(X_n), \mathbb{E}[V(X_{n+1}) \mid \mathcal{F}_n]\} = \max\{G(X_n), \mathbb{E}_{X_n}[V(X_1)]\}, \qquad (21)$$

and we introduce an operator $T$, which acts on functions $F : E \to \mathbb{R}$ by

$$(TF)(x) = \mathbb{E}_x F(X_1).$$

We also introduce an important concept from harmonic analysis:

Definition 2.5. We say that $F : E \to \mathbb{R}$ is superharmonic if $TF(x)$ is well defined, and $TF(x) \leq F(x)$ for all $x \in E$.

Then, provided $\mathbb{E}_x |F(X_n)| < \infty$ for all $x \in E$ and $n \geq 0$, we get: $F$ is superharmonic if and only if $(F(X_n))_{n \geq 0}$ is a supermartingale under $\mathbb{P}_x$, for all $x \in E$.

Note: for the example in Section 1.2, we had $TF(x) = \frac{1}{2}(F(x+1) + F(x-1))$, so $TF(x) \leq F(x)$ if and only if $F$ is concave. We are now in a position to state an analogue of Theorem 2.3. To update certain notions, we state the corresponding version of (8):

$$\sup_{\tau \in \mathcal{N}} \mathbb{E}\Big[\sup_{n \leq \tau} |G(X_n)|\Big] < \infty. \qquad (22)$$

Further, given the value function $V$, we define the continuation region

$$C = \{x \in E : V(x) > G(x)\}$$

and the stopping region

$$D = \{x \in E : V(x) = G(x)\}.$$

The candidate optimal stopping time is then

$$\tau_D = \inf\{n \geq 0 : X_n \in D\}. \qquad (23)$$

Theorem 2.6. Suppose (22) holds with $\mathcal{N} = \mathcal{M}$, let $V(\cdot)$ be defined by (19), and let $\tau_D$ be given by (23). Suppose that $\mathbb{P}_x(\tau_D < \infty) = 1$. Then:

(i) the stopping time $\tau_D$ is optimal in (19);
(ii) the value function is the smallest superharmonic function which dominates the gain function $G$ on $E$;
(iii) the stopped process $(V(X_{n \wedge \tau_D}))_{n \geq 0}$ is a $\mathbb{P}_x$-martingale for every $x \in E$.

Finally, if $\mathbb{P}_x(\tau_D < \infty) < 1$, there is no optimal stopping time in (19).

Proof. This follows directly from Theorem 2.3, having made the identification $S_n = V(X_n)$ in (20). ∎

Note that we may rephrase the Bellman equation (21) in terms of the operator $T$ as

$$V(x) = \max\{G(x), TV(x)\},$$

and the right-hand side is really just an operator, so if we define a new operator

$$QF(x) = \max\{G(x), TF(x)\},$$

we see that the Bellman equation becomes $V = QV$. This gives us a relatively easy starting point to look for candidate value functions, but note that solving $V = QV$ is not generally sufficient: consider the Markov process $X_n = n$, and let $G(x) = 1 - \frac{1}{x}$. Then $TF(x) = F(x+1)$, so the superharmonic functions are simply the decreasing functions, and $QV(x) = \max\{1 - \frac{1}{x}, V(x+1)\}$. Clearly $V \equiv 1$ satisfies $QV = V$ for this choice, but also $QF = F$ whenever $F \equiv c$ for any constant $c \geq 1$. We are, however, able to say some things about the value function in terms of the operator $Q$:

Lemma 2.7. Under the hypotheses of Theorem 2.6,

$$V(x) = \lim_{n \to \infty} Q^n G(x), \quad x \in E.$$

Moreover, suppose that $F$ satisfies $F = QF$ and

$$\mathbb{E}\Big[\sup_{n \geq 0} |F(X_n)|\Big] < \infty.$$

Then $F = V$ if and only if

$$\limsup_{n \to \infty} F(X_n) = \limsup_{n \to \infty} G(X_n), \quad \mathbb{P}_x\text{-a.s., } x \in E,$$

in which case $F(X_n)$ converges $\mathbb{P}_x$-a.s. for all $x \in E$.

We won't prove this, but see Peskir and Shiryaev, Corollary 1.12 and Theorem 1.13, and the comments at the end of this chapter. One of the important features of this result, and one that proves to be important in practice, is that we now have a set of conditions that we can check to confirm that a candidate for the value function is indeed the value function: a common feature of many practical problems is that the solution is obtained by guessing a plausible $V$, and checking that the function solves an appropriate version of $V = QV$, along with any other suitable technical conditions.

We end the discrete-time theory with a quick discussion of the time-inhomogeneous case: we can move from a time-inhomogeneous Markov process $Z_n$ to the time-homogeneous case by considering the process $X_n = (Z_n, n)$, with e.g. $TF(z, n) = \mathbb{E}_{z,n} F(Z_{n+1}, n+1)$, and the same results typically hold. Also, we can think of the finite-horizon, time-homogeneous case as a special case of the time-inhomogeneous case. In particular, if $T$ is the operator associated with a time-homogeneous process $X_n$, $TF(x) = \mathbb{E}_x[F(X_1)]$, and $N$ is the time horizon, and if we write $V_n(x)$ for the value

$$V_n(x) = \sup_{\tau \in \mathcal{M}^N_n} \mathbb{E}[G(X_\tau) \mid \mathcal{F}_n] \quad\text{on } \{X_n = x\},$$

then

$$V_n(x) = \max\{G(x), TV_{n+1}(x)\} = QV_{n+1}(x).$$

Using $V_N(x) = G(x)$, we then get

$$V_n(x) = Q^{N-n} V_N(x) = Q^{N-n} G(x).$$
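In a finite-state setting the operators $T$ and $Q$ are plain matrix operations, so both the finite-horizon value $Q^N G$ and the infinite-horizon limit $V = \lim_n Q^n G$ of Lemma 2.7 can be computed directly. A hypothetical sketch (the chain and the gain function are invented for illustration):

```python
import numpy as np

def Q(F, P, G):
    """Bellman operator: (QF)(x) = max{G(x), (TF)(x)}, where
    (TF)(x) = E_x F(X_1) = (P @ F)(x) for a transition matrix P."""
    return np.maximum(G, P @ F)

# A birth-death chain on {0,...,4}, absorbed at the endpoints.
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5
G = np.array([0.0, 1.0, 3.0, 1.0, 0.0])   # gain function

V = G.copy()                 # V_N = G at the horizon
for _ in range(50):          # V <- QV; after N steps this is Q^N G
    V = Q(V, P, G)
print(V)                     # converges to the limit [0, 1.5, 3, 1.5, 0]
```

Here the stopping region is $D = \{0, 2, 4\}$ (where $V = G$), and the limit is the smallest superharmonic majorant of $G$, as in Theorem 2.6(ii); note that, per the earlier caveat, $V = QV$ alone does not pin $V$ down, whereas the limit of the iteration started from $G$ does.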

From this we get $V_0(x) = Q^N G(x)$. Note that there is no issue with multiple solutions when we have a finite horizon.

Finally, we comment on the proof of Lemma 2.7: the term $\lim_n Q^n G(x)$ can now be interpreted as the limit of the finite-horizon problems as we let the horizon go to infinity. If the stopped gain function is well behaved (in the sense of (22)), and the stopping time $\tau_D$ is almost surely finite, we can conclude the convergence. Now, if $F = QF$ and the integrability condition holds, we see that $F(X_n)$ is a supermartingale dominating the gain function, and hence $F \geq V$. The final condition can then be used to show that the candidate value function is indeed the smallest such supermartingale.

3 Continuous Time

3.1 Martingale treatment

Mostly, the ideas in continuous time remain identical to the ideas in the discrete-time setting. To fill in the details, we need to address a few technical issues, but mostly the proofs will be similar to the discrete case.

Let our gains process $(G_t)_{t \geq 0}$ be an adapted process, defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \mathbb{P})$ satisfying the usual conditions⁶. Moreover, we will suppose that the process is right-continuous, and left-continuous over stopping times: if $\tau_n$ is a sequence of stopping times increasing to a stopping time $\tau$, then $G_{\tau_n} \to G_\tau$, $\mathbb{P}$-a.s. Note that this is a weaker condition than left-continuity. Our new integrability condition will be

$$\sup_{\tau \in \mathcal{N}} \mathbb{E}\Big[\sup_{0 \leq t \leq \tau} |G_t|\Big] < \infty. \qquad (24)$$

Note in particular that with such an integrability condition, we may apply the optional sampling theorem to any supermartingale which dominates the gains process: let $Y_t \geq G_t$ be a supermartingale, and $\tau \in \mathcal{N}$, $\tau \geq t$, an almost surely finite stopping time. Then $Y_t \geq \mathbb{E}[Y_{\tau \wedge N} \mid \mathcal{F}_t]$, and $\inf_{N \in \mathbb{N}} Y_{\tau \wedge N} \geq -\sup_{0 \leq t \leq \tau} |G_t|$, so dominated convergence allows us to deduce that $Y_t \geq \mathbb{E}[Y_\tau \mid \mathcal{F}_t]$.

We suppose $\mathcal{N} = \mathcal{M}^T_t$, for some $t < T$ (including the case $T = \infty$, which we interpret as the a.s.
finite stopping times), and consider the problem

$$V_t = \sup_{\tau \in \mathcal{M}^T_t} \mathbb{E} G_\tau. \qquad (25)$$

Now, introduce the process

$$S^*_t = \operatorname{esssup}_{\tau \in \mathcal{M}^T_t} \mathbb{E}[G_\tau \mid \mathcal{F}_t], \qquad (26)$$

⁶ That is, $(\mathcal{F}_t)_{t \geq 0}$ is complete ($\mathcal{F}_0$ contains all $\mathbb{P}$-negligible sets) and right-continuous ($\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$).

and we would hope that such a process could be our Snell envelope, with the stopping time

$$\tau_t = \inf\{t \leq r \leq T : S^*_r = G_r\}. \qquad (27)$$

There is, however, a technical issue here: although our gains process and the filtration are both right-continuous, the process $S^*_t$ is not a priori right-continuous; notably, we could, for fixed $t$, redefine $S^*_t$ on a set of probability zero without contradicting the definition (26). So pick any such $S^*_t$, and suppose further that the probability space admits an independent $U(0,1)$ random variable $U$. Then choose a version of $(S^*_t)$, say $(\tilde{S}_t)$, which has $\tilde{S}_U = G_U$, but otherwise has $\tilde{S}_t = S^*_t$. This is a possible alternative choice according to (26), but note that, if we stop each process according to its respective version of (27), we get $\tilde{\tau}_t = \tau_t \wedge U$, which will in general be substantially different! The solution to this comes from the following:

Lemma 3.1. Let $S^*_t$ be as given in (26). Then $S^*_t$ is a supermartingale and $\mathbb{E} S^*_t = V_t$. In addition, $S^*_t$ admits a right-continuous version, $S_t$, which is also a supermartingale, and for which $\mathbb{E} S_t = V_t$.

Proof. The supermartingale property follows by an identical proof to that used in Theorem 2.3 to derive (16). The equality $\mathbb{E} S^*_t = V_t$ then follows from the definition of the essential supremum. For the final statement, we note that the existence of $S_t$ will follow if we can show that $t \mapsto \mathbb{E} S^*_t$ is right-continuous (e.g. Revuz & Yor, Theorem II.2.9). From the supermartingale property, we get $\mathbb{E} S^*_t \geq \mathbb{E} S^*_{t+\delta}$, so that $\lim_{t_n \downarrow t} \mathbb{E} S^*_{t_n} \leq \mathbb{E} S^*_t$. Now choose $\tau \in \mathcal{M}^T_t$, and consider $\tau_n = \tau \vee (t + \frac{1}{n}) \in \mathcal{M}^T_{t + \frac{1}{n}}$. Then right-continuity of $G_t$ implies $\lim_n G_{\tau_n} = G_\tau$, and so we get

$$\mathbb{E} G_\tau \leq \liminf_n \mathbb{E} G_{\tau_n} \leq \liminf_n \mathbb{E} S^*_{t + \frac{1}{n}};$$

taking the supremum over $\tau \in \mathcal{M}^T_t$ gives the reverse inequality, as required. ∎

Now that we have a suitably defined Snell envelope, we are in a position to state the main theorem:

Theorem 3.2. Let $S_t$ be the process defined in Lemma 3.1, and define the stopping time

$$\tau_t = \inf\{t \leq r \leq T : S_r = G_r\}. \qquad (28)$$

Suppose (24) holds with $\mathcal{N} = \mathcal{M}$.
Fix $t$, and suppose that $\mathbb{P}(\tau_t < \infty) = 1$. Then $V_t = \mathbb{E}S_t$ is the value of the optimal stopping problem
$$V_t = \sup_{\tau \in \mathcal{M}^T_t} \mathbb{E}G_\tau. \qquad (29)$$
Moreover:
(i) the stopping time $\tau_t$ is optimal in (29);
(ii) the process $(S_r)_{t \le r \le T}$ is the smallest right-continuous supermartingale which dominates $(G_r)_{t \le r \le T}$;
(iii) the stopped process $(S_{r \wedge \tau_t})_{t \le r \le T}$ is a right-continuous martingale.

Finally, if $\mathbb{P}(\tau_t < \infty) < 1$, there is no optimal stopping time in (29).

Proof. That (29) holds, and that $S_t$ is the smallest right-continuous supermartingale dominating $G_t$, follow from the previous lemma and the usual argument: if $Y_t \ge G_t$ is another right-continuous supermartingale, then $Y_t \ge \mathbb{E}[Y_\tau \mid \mathcal{F}_t] \ge \mathbb{E}[G_\tau \mid \mathcal{F}_t]$ for every $\tau \in \mathcal{M}^T_t$, so $Y_t \ge S_t$. Note that right-continuity in fact implies $\mathbb{P}(Y_r \ge S_r \text{ for all } r \ge t) = 1$.

The major new issue concerns the optimality of $\tau_t$. Previously, we were able to prove optimality using the Bellman equation, but this no longer makes sense when we move to the continuous-time setting. Instead, we need to find a new argument. Suppose initially that the gains process satisfies the additional condition $G_t \ge 0$. Then, for $\lambda \in (0,1]$, introduce
$$\tau^\lambda_t = \inf\{r \ge t : \lambda S_r \le G_r\},$$
so $\tau^1_t \le \tau_t$. Right-continuity of $S_t, G_t$ implies $\lambda S_{\tau^\lambda_t} \le G_{\tau^\lambda_t}$ and $\tau^\lambda_{t+} = \tau^\lambda_t$. Moreover, $S_t \ge \mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_t]$, since $S_t$ is a supermartingale. We want to show that $S_t = \mathbb{E}[S_{\tau^1_t} \mid \mathcal{F}_t]$. Suppose $\lambda \in (0,1)$, and define
$$R_t = \mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_t];$$
then, for $s < t$,
$$\mathbb{E}[R_t \mid \mathcal{F}_s] = \mathbb{E}\big[\mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_t] \mid \mathcal{F}_s\big] = \mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_s] \le \mathbb{E}[S_{\tau^\lambda_s} \mid \mathcal{F}_s] = R_s,$$
using optional sampling and $\tau^\lambda_s \le \tau^\lambda_t$. Then $R_t$ is a supermartingale, and an argument identical to the proof of Lemma 3.1 allows us to deduce that $t \mapsto \mathbb{E}S_{\tau^\lambda_t}$ is decreasing and right-continuous, and therefore we can find a right-continuous version of $R_t$. In particular, since $G_t \ge 0$, we also have $R_t \ge 0$. Now consider the (right-continuous) process $L_t = \lambda S_t + (1-\lambda)R_t$. Then, for $\lambda \in (0,1)$:
$$\begin{aligned}
L_t &= \lambda S_t + (1-\lambda)R_t 1_{\{\tau^\lambda_t = t\}} + (1-\lambda)R_t 1_{\{\tau^\lambda_t > t\}} \\
&\ge \lambda S_t + (1-\lambda)\mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_t] 1_{\{\tau^\lambda_t = t\}} \\
&= \lambda S_t + (1-\lambda)S_t 1_{\{\tau^\lambda_t = t\}} \\
&= S_t 1_{\{\tau^\lambda_t = t\}} + \lambda S_t 1_{\{\tau^\lambda_t > t\}} \\
&\ge G_t,
\end{aligned}$$

where we have used that $\tau^\lambda_t > t$ implies $\lambda S_t > G_t$, and that $S_t \ge G_t$ on $\{\tau^\lambda_t = t\}$. Since $L_t$ is now a supermartingale dominating $G_t$, we must also have $L_t \ge S_t$, and rearranging $\lambda S_t + (1-\lambda)R_t \ge S_t$ gives $R_t \ge S_t$. So in fact $S_t = R_t$, and
$$S_t = \mathbb{E}[S_{\tau^\lambda_t} \mid \mathcal{F}_t] \le \frac{1}{\lambda}\mathbb{E}[G_{\tau^\lambda_t} \mid \mathcal{F}_t]. \qquad (30)$$
But as $\lambda \uparrow 1$, $\tau^\lambda_t$ increases to some limiting (possibly infinite) stopping time $\sigma$, and by the left-continuity over stopping times of $G_t$, we get $S_t \le \mathbb{E}[G_\sigma \mid \mathcal{F}_t]$. So it just remains to show that $\sigma = \tau^1_t$ (since we already have $S_t \ge \mathbb{E}[G_\sigma \mid \mathcal{F}_t]$ from the supermartingale property). Since $\tau^1_t \ge \tau^\lambda_t$ for $\lambda \in (0,1)$, we have $\tau^1_t \ge \sigma$; but if $\{\sigma < \tau^1_t\}$ has positive probability, then also with positive probability we have $S_\sigma > G_\sigma$, and hence $S_t = \mathbb{E}[G_\sigma \mid \mathcal{F}_t] < \mathbb{E}[S_\sigma \mid \mathcal{F}_t]$ with positive probability, but this contradicts the supermartingale property of $S_t$. Hence we can conclude that
$$S_t = \mathbb{E}[G_{\tau^1_t} \mid \mathcal{F}_t] = \mathbb{E}[S_{\tau^1_t} \mid \mathcal{F}_t].$$

Now we note that we may remove the assumption that $G_t \ge 0$. Since we are assuming (24),
$$M_t = \mathbb{E}\Big[\inf_{0 \le s \le T} G_s \,\Big|\, \mathcal{F}_t\Big]$$
is a right-continuous, uniformly integrable martingale. Now define a new problem with gains process $\tilde G_t = G_t - M_t$, and note that this is now non-negative, and right-continuous (although not necessarily left-continuous over stopping times). Then if we solve the problem for $\tilde G_t$, we get, using optional sampling for the uniformly integrable martingale $M$:
$$\tilde S_t = \operatorname*{ess\,sup}_{\tau \in \mathcal{N}} \mathbb{E}[\tilde G_\tau \mid \mathcal{F}_t] = \operatorname*{ess\,sup}_{\tau \in \mathcal{N}} \mathbb{E}[G_\tau - M_\tau \mid \mathcal{F}_t] = S_t - M_t.$$
We can now apply the above argument to the processes $(\tilde G_t, \tilde S_t)$ up to the step at (30), and from there, reverting to the processes $(G_t, S_t)$, we get the desired conclusion. The remaining claims in the theorem follow with the usual arguments.

3.2 Markovian setting

Take $(X_t)_{t \ge 0}$ a Markov process on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P}_x)$, taking values in a measurable space $(E, \mathcal{B})$ where, for simplicity, we suppose $E = \mathbb{R}^d$, $\mathcal{B} = \text{Borel}(\mathbb{R}^d)$, and $(\Omega, \mathcal{F}) = (E^{[0,\infty)}, \mathcal{B}^{[0,\infty)})$, so that the shift operators $\theta_t$ are measurable. In addition, we again suppose the sample paths of $(X_t)_{t \ge 0}$ are right-continuous, and left-continuous over stopping times.
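Before adding integrability conditions, it may help to see the objects of the previous subsection concretely. In discrete time the Snell envelope is produced by backward induction, $S_n = \max(G_n, \mathbb{E}[S_{n+1} \mid \mathcal{F}_n])$. The sketch below (not from the notes) does this for a symmetric random walk with the hypothetical payoff $G(n,x) = (K-x)^+$, an undiscounted put-style gain chosen purely for illustration.

```python
# Discrete-time sketch of the Snell envelope: backward induction
# S_n = max(G_n, E[S_{n+1} | F_n]) for a symmetric +/-1 random walk.
# The payoff G(n, x) = (K - x)^+ is a hypothetical choice.

def snell_envelope(G, T):
    """Return S[(n, x)] for walk levels x at times n = 0, ..., T."""
    S = {(T, x): G(T, x) for x in range(-T, T + 1)}
    for n in range(T - 1, -1, -1):
        for x in range(-n, n + 1):
            cont = 0.5 * S[(n + 1, x + 1)] + 0.5 * S[(n + 1, x - 1)]
            S[(n, x)] = max(G(n, x), cont)
    return S

K = 2
G = lambda n, x: max(K - x, 0.0)
S = snell_envelope(G, T=4)

# S dominates G and is a supermartingale along the walk, and the first
# time S = G (e.g. state (1, -1) here) is the candidate optimal rule (28).
assert all(S[(n, x)] >= G(n, x) for (n, x) in S)
assert all(S[(n, x)] >= 0.5 * (S[(n + 1, x + 1)] + S[(n + 1, x - 1)])
           for (n, x) in S if n < 4)
assert S[(0, 0)] == 2.125   # strictly above G(0, 0) = 2: (0, 0) lies in C
```

By construction both properties of Theorem 3.2 hold exactly here; the state $(0,0)$ lies in the continuation region since $S > G$ there.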
As usual, we also introduce an integrability condition:
$$\sup_{\tau \in \mathcal{N}} \mathbb{E}_x\Big[\sup_{t \le \tau} G(X_t)\Big] < \infty. \qquad (31)$$
Our problem of interest is:
$$V(x) = \sup_{\tau \in \mathcal{N}} \mathbb{E}_x G(X_\tau). \qquad (32)$$
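For intuition about (32), here is a crude numerical sketch (an illustration, not part of the notes): replace $X$ by a symmetric random walk on a grid over $[0,1]$, absorbed at the endpoints, and iterate the Bellman-style update $V \leftarrow \max(G, \text{mean of neighbours})$ to a fixed point. The payoff $G$ below is a hypothetical choice; the limit is the smallest function dominating $G$ and satisfying the mean-value (supermartingale) inequality, anticipating the characterisation of $V$ proved later in this section.

```python
import numpy as np

# Value iteration for a discrete stand-in for (32): X is a symmetric
# random walk on a grid over [0, 1], absorbed at the endpoints.
# The payoff G below is a hypothetical choice made for illustration.
N = 50
x = np.linspace(0.0, 1.0, N + 1)
G = np.maximum(0.4 - x, 0.0) + np.maximum(x - 0.8, 0.0)

F = G.copy()                       # start from the payoff itself
for _ in range(20000):             # Jacobi-style sweeps to convergence
    avg = 0.5 * (F[:-2] + F[2:])   # E_x F(X_1) at the interior points
    F[1:-1] = np.maximum(G[1:-1], avg)

# The limit dominates G and satisfies F(x) >= (F(x-1) + F(x+1))/2;
# here it is the chord from (0, G(0)) to (1, G(1)), so the walk should
# only stop at the absorbing endpoints.
assert np.all(F >= G - 1e-9)
assert np.all(F[1:-1] + 1e-6 >= 0.5 * (F[:-2] + F[2:]))
```

In this example the continuation region is the whole interior: $F > G$ strictly for $0 < x < 1$.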

In this section, we will in general consider the case $\mathcal{N} = \mathcal{M}$, although we will also at times consider the case $\mathcal{N} = \mathcal{M}^T$, for which all the results we state will also hold (and in places will be simpler). The argument we used in Lemma 2.4 can be used essentially unmodified to see that $V(x) \ge \mathbb{E}_x V(X_s)$ for all $x \in E$, $s \ge 0$, and therefore $V(X_t) = S_t$, $\mathbb{P}_x$-a.s., using the same reasoning. Again, we define the continuation and stopping regions by:
$$C = \{x \in E : V(x) > G(x)\}, \qquad D = \{x \in E : V(x) = G(x)\}.$$
We want $\tau_D = \inf\{t \ge 0 : X_t \in D\}$ to be a stopping time, with $X_{\tau_D} \in D$. For a right-continuous process in a right-continuous filtration, this is true if $D$ is closed.

Recall also that a function $f$ is lower semi-continuous (lsc) if $f(x) \le \liminf_{x_n \to x} f(x_n)$ for all $x \in E$, and upper semi-continuous (usc) if $f(x) \ge \limsup_{x_n \to x} f(x_n)$. Equivalently, $f$ is lsc if $\{x \in E : f(x) \le A\}$ is closed for every $A \in \mathbb{R}$, and usc if $\{x \in E : f(x) \ge A\}$ is closed for every $A \in \mathbb{R}$. Note that if $f$ is usc then $-f$ is lsc, and if $f, g$ are both usc (resp. both lsc), then $f + g$ is usc (resp. lsc). Then $D = \{V - G \le 0\}$ is closed if $V - G$ is lsc, and therefore if $V$ is lsc and $G$ is usc. It is under these conditions that we will generally work.

The restriction to lsc $V$ allows us to rule out certain pathological examples (e.g. $V(x) = 1_{\{x = 0\}} = G(x)$, when $X_t$ is Brownian motion in $\mathbb{R}^2$), but is also not a strong constraint: if $x \mapsto \mathbb{E}_x G(X_\tau)$ is continuous (or lsc) for each $\tau$, then $V$, as a supremum of lsc functions, is also lsc. The usc of $G$ ensures that we do not stop outside the stopping region: e.g. if $G(x) = 1_{\{x \ne 0\}}$, then $\mathbb{P}_0(\tau_D = 0) = 1$, but $G(X_{\tau_D}) = 0$.

We can also extend Definition 2.5 to the current context:

Definition 3.3. A measurable function $F : E \to \mathbb{R}$ is superharmonic if $\mathbb{E}_x F(X_\sigma) \le F(x)$ for all stopping times $\sigma$ and all $x \in E$. (Note that this requires $F(X_\sigma) \in L^1(\mathbb{P}_x)$ for all $x \in E$.)

Lemma 3.4. Suppose $F$ is lsc and $(F(X_t))_{t \ge 0}$ is uniformly integrable.
Then $F$ is superharmonic if and only if $(F(X_t))_{t \ge 0}$ is a right-continuous supermartingale under $\mathbb{P}_x$, for each $x \in E$.

Proof. The 'if' direction is clear. For the 'only if' direction: if $F$ is superharmonic, then by the Markov property
$$F(X_t) \ge \mathbb{E}_{X_t} F(X_s) = \mathbb{E}_x[F(X_{t+s}) \mid \mathcal{F}_t],$$
so $(F(X_t))$ is a supermartingale. The hard part is checking right-continuity; we leave the proof of this fact to Peskir & Shiryaev, Proposition I.2.5.

From this, we get the following theorem:

Theorem 3.5. Suppose there exists an optimal stopping time $\tau$ with $\mathbb{P}(\tau < \infty) = 1$, so that $V(x) = \mathbb{E}_x G(X_\tau)$ for all $x \in E$. Then
(i) $V$ is the smallest superharmonic function dominating $G$ on $E$.
Moreover, if $V$ is lsc, and $G$ is usc, then also:
(ii) $\tau_D \le \tau$ for all $x \in E$, and $\tau_D$ is optimal;
(iii) the stopped process $(V(X_{t \wedge \tau_D}))_{t \ge 0}$ is a right-continuous $\mathbb{P}_x$-martingale.

Proof. For any stopping time $\sigma$, by the strong Markov property,
$$\mathbb{E}_x V(X_\sigma) = \mathbb{E}_x \mathbb{E}_{X_\sigma} G(X_\tau) = \mathbb{E}_x \mathbb{E}_x[G(X_\tau) \circ \theta_\sigma \mid \mathcal{F}_\sigma] = \mathbb{E}_x G(X_{\sigma + \tau \circ \theta_\sigma}) \le \sup_{\tau'} \mathbb{E}_x G(X_{\tau'}) = V(x).$$
So $V$ is a superharmonic function, dominating $G$. For any other superharmonic $F$ dominating $G$ we get $\mathbb{E}_x G(X_{\tau'}) \le \mathbb{E}_x F(X_{\tau'}) \le F(x)$, and taking the supremum over $\tau'$, we get $V(x) \le F(x)$.

For any optimal stopping time $\tau$, if $\mathbb{P}_x(V(X_\tau) > G(X_\tau)) > 0$, we get $\mathbb{E}_x G(X_\tau) < \mathbb{E}_x V(X_\tau) \le V(x)$, contradicting optimality; so $\tau$ optimal implies $\tau_D \le \tau$. By Lemma 3.4, $V(X_t)$ is a right-continuous supermartingale, and further $V(X_{\tau_D}) = G(X_{\tau_D})$ by the lsc/usc assumptions. Then
$$V(x) = \mathbb{E}_x G(X_\tau) = \mathbb{E}_x V(X_\tau) \le \mathbb{E}_x V(X_{\tau_D}) = \mathbb{E}_x G(X_{\tau_D}) \le V(x),$$
where the first inequality follows from the supermartingale property (optional sampling, using $\tau_D \le \tau$); hence $\tau_D$ is optimal. For (iii):
$$\mathbb{E}_x[V(X_{\tau_D}) \mid \mathcal{F}_t] = \mathbb{E}_x[G(X_{\tau_D}) \mid \mathcal{F}_t] = \mathbb{E}_{X_t}[G(X_{\tau_D})]\,1_{\{t < \tau_D\}} + V(X_{\tau_D})\,1_{\{t \ge \tau_D\}} = V(X_{t \wedge \tau_D}),$$
using the strong Markov property on the event $\{t < \tau_D\}$.

This theorem constitutes necessary conditions for the existence of a solution. Now we consider sufficient conditions:

Theorem 3.6. Suppose there exists a function $\hat V$, which is the smallest superharmonic function dominating $G$ on $E$. Suppose also $\hat V$ is lsc and $G$ is usc. Let $D = \{x \in E : \hat V(x) = G(x)\}$, and suppose $\tau_D$ is as above. Then:
(i) if $\mathbb{P}_x(\tau_D < \infty) = 1$ for all $x \in E$, then $\hat V = V$ and $\tau_D$ is optimal;
(ii) if $\mathbb{P}_x(\tau_D < \infty) < 1$ for some $x \in E$, then there is no optimal stopping time $\tau$.

Proof. For all $\tau, x$: $\mathbb{E}_x G(X_\tau) \le \mathbb{E}_x \hat V(X_\tau) \le \hat V(x)$. Taking the supremum over all $\tau$, we get $G(x) \le V(x) \le \hat V(x)$.

Suppose in addition $G(\cdot) \ge 0$, and introduce, for $\lambda \in (0,1)$,
$$C_\lambda = \{x \in E : \lambda \hat V(x) > G(x)\}, \qquad D_\lambda = \{x \in E : \lambda \hat V(x) \le G(x)\}.$$
Then the lsc of $\hat V$ and the usc of $G$ imply that $C_\lambda$ is open and $D_\lambda$ is closed. Also, $C_\lambda \uparrow C$ and $D_\lambda \downarrow D$ as $\lambda \uparrow 1$. Taking $\tau_{D_\lambda} = \inf\{t \ge 0 : X_t \in D_\lambda\}$ we get $\tau_{D_\lambda} \le \tau_D$ a.s., and so $\mathbb{P}_x(\tau_{D_\lambda} < \infty) = 1$ for all $x \in E$.

Note that $x \mapsto \mathbb{E}_x \hat V(X_{\tau_{D_\lambda}})$ is superharmonic:
$$\mathbb{E}_x\big[\mathbb{E}_{X_\sigma}[\hat V(X_{\tau_{D_\lambda}})]\big] = \mathbb{E}_x\big[\mathbb{E}_x[\hat V(X_{\sigma + \tau_{D_\lambda} \circ \theta_\sigma}) \mid \mathcal{F}_\sigma]\big] = \mathbb{E}_x \hat V(X_{\sigma + \tau_{D_\lambda} \circ \theta_\sigma}) \le \mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}),$$
since $\hat V$ is superharmonic, and $\sigma + \tau_{D_\lambda} \circ \theta_\sigma \ge \tau_{D_\lambda}$.

Now, if $x \in C_\lambda$, then
$$G(x) < \lambda \hat V(x) \le \lambda \hat V(x) + (1-\lambda)\mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}),$$
since the last term is non-negative. If $x \in D_\lambda$, then $\tau_{D_\lambda} = 0$ under $\mathbb{P}_x$, and so
$$G(x) \le \hat V(x) = \lambda \hat V(x) + (1-\lambda)\mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}).$$
So for all $x \in E$, we get
$$G(x) \le \lambda \hat V(x) + (1-\lambda)\mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}).$$
But since both terms on the right are superharmonic, the whole expression is, and therefore, as $\hat V$ is the smallest superharmonic function dominating $G$:
$$\hat V(x) \le \lambda \hat V(x) + (1-\lambda)\mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}),$$
which implies $\hat V(x) \le \mathbb{E}_x \hat V(X_{\tau_{D_\lambda}})$. Again using superharmonicity, we conclude: $\hat V(x) = \mathbb{E}_x \hat V(X_{\tau_{D_\lambda}})$. Since $\hat V$ is lsc and $G$ is usc, $D_\lambda$ is closed, so $X_{\tau_{D_\lambda}} \in D_\lambda$, and we deduce
$$\hat V(X_{\tau_{D_\lambda}}) \le \frac{1}{\lambda} G(X_{\tau_{D_\lambda}}), \qquad (33)$$

and we get
$$\hat V(x) = \mathbb{E}_x \hat V(X_{\tau_{D_\lambda}}) \le \frac{1}{\lambda}\mathbb{E}_x G(X_{\tau_{D_\lambda}}) \le \frac{1}{\lambda} V(x).$$
Letting $\lambda \uparrow 1$, we conclude that $\hat V(x) \le V(x)$, and therefore $\hat V = V$.

Now consider the sequence $\tau_{D_\lambda}$ as $\lambda \uparrow 1$. Then $\tau_{D_\lambda} \uparrow \sigma$ for some stopping time $\sigma$. We need to show $\sigma = \tau_D$. Since $\tau_{D_\lambda} \le \tau_D$ for $\lambda < 1$, we must have $\sigma \le \tau_D$. Using (33), the left-continuity of $X_t$ over stopping times, the lsc of $\hat V$ and the usc of $G$, we must have $V(X_\sigma) \le G(X_\sigma)$, and hence $V(X_\sigma) = G(X_\sigma)$, so that $\sigma \ge \tau_D$ (recall the definition of $\tau_D$).

Finally, to see that $\tau_D$ is indeed optimal, using (33) and (reverse) Fatou:
$$V(x) \le \limsup_{\lambda \uparrow 1} \mathbb{E}_x G(X_{\tau_{D_\lambda}}) \le \mathbb{E}_x \limsup_{\lambda \uparrow 1} G(X_{\tau_{D_\lambda}}) \le \mathbb{E}_x G\Big(\lim_{\lambda \uparrow 1} X_{\tau_{D_\lambda}}\Big) = \mathbb{E}_x G(X_{\tau_D}),$$
where in the last two inequalities we have used the usc of $G$, and the left-continuity over stopping times of $X_t$. The final statement in the theorem follows from Theorem 3.5. We will not prove the remaining step (that we can drop the condition $G \ge 0$), but refer instead to Peskir & Shiryaev.

This result allows us to find the value function by looking for the smallest superharmonic function dominating $G$. In fact, if we can directly (via other methods) prove that the value function must be lsc, we can get the following result:

Theorem 3.7. If $V$ is lsc, $G$ is usc, and $\mathbb{P}_x(\tau_D < \infty) = 1$, then $\tau_D$ is optimal.

Of course, in the finite-horizon setting, we do not need to check the final condition! As mentioned above, $V$ lsc can also be relatively easy to check if e.g. $x \mapsto \mathbb{E}_x G(X_\tau)$ is continuous for each $\tau$.

Sketch proof. To deduce this from the previous result, we need to show that $V$ is superharmonic. Since $V$ is lsc, it is measurable, and thus, by the strong Markov property, we have:
$$V(X_\sigma) = \operatorname*{ess\,sup}_\tau \mathbb{E}_x[G(X_{\sigma + \tau \circ \theta_\sigma}) \mid \mathcal{F}_\sigma].$$
Using similar arguments to e.g. the proof of Theorem 2.3, we can find a sequence of stopping times $\tau_n$ such that
$$V(X_\sigma) = \lim_n \mathbb{E}_x[G(X_{\sigma + \tau_n \circ \theta_\sigma}) \mid \mathcal{F}_\sigma],$$

where the right-hand side is an increasing sequence. Then, by monotone convergence,
$$\mathbb{E}_x V(X_\sigma) = \lim_n \mathbb{E}_x[G(X_{\sigma + \tau_n \circ \theta_\sigma})] \le V(x).$$
Since any other superharmonic function $F$ dominating $G$ has $\mathbb{E}_x G(X_\sigma) \le \mathbb{E}_x F(X_\sigma) \le F(x)$, $V$ must be the smallest superharmonic function dominating $G$, and now everything follows from Theorem 3.6.

4 Free Boundary Problems

4.1 Infinitesimal Generators

Let $X_t$ be a suitably nice$^7$ time-homogeneous Markov process, and write $P_t$ for the transition semigroup of the process, that is, $P_t f(x) = \mathbb{E}_x f(X_t)$. Suppose $f \in C_0$ (the set of continuous functions with limit $0$ at infinity) is sufficiently smooth that
$$L_X f = \lim_{t \downarrow 0} \frac{1}{t}(P_t f - f)$$
exists in $C_0$. Then $L_X : \mathcal{D}_X \to C_0$ is the infinitesimal generator of $X$, where we write $\mathcal{D}_X \subseteq C_0$ for the set of functions on which $L_X$ is defined.

Definition 4.1. If $X$ is a Markov process, a Borel function $f$ belongs to the domain $\bar{\mathcal{D}}_X$ of the extended infinitesimal generator if there exists a Borel function $g$ such that
$$f(X_t) - f(X_0) - \int_0^t g(X_s)\,ds$$
is a right-continuous martingale for every $x \in E$; in which case, we write $g = \bar L_X f$. It can then be shown that $\mathcal{D}_X \subseteq \bar{\mathcal{D}}_X$.

Theorem 4.2 (Revuz & Yor, VII.1.13). If $P_t$ is a nice semigroup on $\mathbb{R}^d$, and $C_K^\infty \subseteq \mathcal{D}_X$, then:
(i) $C_K^2 \subseteq \mathcal{D}_X$;
(ii) for every relatively compact open set $U$, there exist functions $a_{ij}, b_i, c$ on $U$ and a kernel $N$ such that for all $f \in C_K^2$ and $x \in U$:
$$L_X f(x) = -c(x)f(x) + \sum_i b_i(x)\frac{\partial f}{\partial x_i}(x) + \sum_{i,j} a_{ij}(x)\frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \int_{\mathbb{R}^d \setminus \{x\}} \Big[f(y) - f(x) - 1_U(y)\sum_i (y_i - x_i)\frac{\partial f}{\partial x_i}(x)\Big] N(x, dy). \qquad (34)$$
Here $N(x, \cdot)$ is a Radon measure on $\mathbb{R}^d \setminus \{x\}$, $a$ is a symmetric, non-negative matrix, $c \ge 0$, and $a, c$ do not depend on $U$.

(Footnote 7: Feller, i.e. if $P_t$ is the transition semigroup, and $P_t f(x) = \mathbb{E}_x f(X_t)$, then $P_t$ maps $C_0$ to $C_0$ with $\|P_t f\| \le \|f\|$, where $\|\cdot\|$ is the uniform norm, for all $f \in C_0$; $P_t P_s = P_{t+s}$; and $\lim_{t \downarrow 0}\|P_t f - f\| = 0$ for every $f \in C_0$.)
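As a quick numerical illustration of the definition of $L_X$ (an aside, not from the notes): for Brownian motion, $P_t f(x) = \mathbb{E} f(x + \sqrt{t}\,Z)$ with $Z \sim N(0,1)$, and the limit $(P_t f - f)/t$ should approach the known generator $\tfrac12 f''$. We can evaluate $P_t f$ by Gauss-Hermite quadrature:

```python
import numpy as np

# Probabilists' Gauss-Hermite nodes/weights: integrates against exp(-z^2/2),
# so dividing by sqrt(2*pi) gives expectations over Z ~ N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(40)

def P(t, f, x):
    """Transition semigroup of Brownian motion applied to f at x."""
    return np.sum(weights * f(x + np.sqrt(t) * nodes)) / np.sqrt(2.0 * np.pi)

x, t = 0.3, 1e-4
approx = (P(t, np.cos, x) - np.cos(x)) / t
exact = -0.5 * np.cos(x)            # (1/2) f''(x) for f = cos
assert abs(approx - exact) < 1e-3
```

For $f(x) = x^2$ the same limit gives exactly $1 = \tfrac12 f''$, since $P_t f(x) = x^2 + t$.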

We can interpret some of the terms in the operator as follows: the $c(x)$ term encodes the rate at which the process is killed at $x$, $b(x)$ is the drift, and $(a_{ij}(x))$ the diffusion matrix. $N(x, \cdot)$ is the jump measure; in the case where the measure $N$ is identically zero, the process $X_t$ is continuous.

4.2 Boundary problems

Consider now our value function. If $D$ exists, with $\tau_D$ almost surely finite, then $V(X_{t \wedge \tau_D})$ is a right-continuous martingale, and $V(X_t)$ is a supermartingale. Provided $V \in \bar{\mathcal{D}}_X$, in terms of the operator $L_X$, this implies we have
$$L_X V \le 0, \quad x \in E \qquad (35)$$
$$V \ge G, \quad x \in E \qquad (36)$$
$$L_X V = 0, \quad x \in C \qquad (37)$$
$$V = G, \quad x \in D \qquad (38)$$
Reversing these arguments, we can try to exploit this as a solution method: suppose that we can find a closed set $\hat D$ such that $\mathbb{P}(\tau_{\hat D} < \infty) = 1$, and a function $\hat V$ such that (35)-(38) hold for this set. Then
$$\hat V(X_t) - \hat V(X_0) - \int_0^t L_X \hat V(X_s)\,ds$$
is a martingale, and hence $\hat V(X_{t \wedge \tau_{\hat D}})$ is a martingale, and $\hat V(X_t)$ is a supermartingale. Moreover, since $\hat D$ is closed, $X_{\tau_{\hat D}} \in \hat D$, so
$$\hat V(x) = \mathbb{E}_x \hat V(X_{\tau_{\hat D}}) = \mathbb{E}_x G(X_{\tau_{\hat D}}),$$
and for any stopping time $\tau \in \mathcal{M}$,
$$\hat V(x) \ge \mathbb{E}_x \hat V(X_\tau) \ge \mathbb{E}_x G(X_\tau),$$
and therefore $\tau_{\hat D}$ must be optimal.

So a common approach will be to guess what $D$ might be, and then define
$$V(x) = \begin{cases} \mathbb{E}_x G(X_{\tau_D}) & : x \in C \\ G(x) & : x \in D \end{cases}$$
and then verify that
$$L_X V \le 0, \quad x \in E, \qquad V \ge G, \quad x \in E.$$
In general, we may want to consider slightly more complicated payoffs than a simple function of $X_t$, where $X_t$ has the generator given by (34). To consider how we might do

this, consider first the simple example where $X_t$ is continuous, and so the solution is given by $V(x) = \mathbb{E}_x G(X_{\tau_D})$, so that $V$ is a solution to the Dirichlet problem
$$L_X V(x) = 0, \quad x \in C; \qquad V(x) = G(x), \quad x \in \partial C,$$
for the correct choice of $D$.

Now, consider some other problems; our aim is to derive the correct analogue of the Dirichlet problem for more complicated payoff functions. Again, for all these examples, we shall assume that the process $X_t$ is continuous, and that there exists an optimal stopping time for the respective problem which is a.s. finite. Where necessary, we will assume any needed finiteness conditions; the arguments here should be seen as heuristic, rather than exact.

(i) Consider the problem
$$V(x) = \sup_\tau \mathbb{E}_x\big[e^{-\lambda_\tau} H(X_\tau)\big],$$
where $\lambda_t = \int_0^t \lambda(X_s)\,ds$, for a measurable function $\lambda(\cdot)$. We can account for this by considering instead the killed process $\tilde X_t$, killed at rate $\lambda(\tilde X_t)$, which in turn has generator $L_{\tilde X} = L_X - \lambda I$, to conclude that
$$V(x) = \sup_\tau \mathbb{E}_x\big[H(\tilde X_\tau)\big],$$
and so $V$ satisfies the PDE
$$L_X V(x) = \lambda(x) V(x), \quad x \in C; \qquad V(x) = H(x), \quad x \in \partial C.$$

(ii) Consider the finite-horizon problem
$$V(x, t) = \sup_{\tau \in \mathcal{M}^T_t} \mathbb{E}_{(x,t)} M(X_\tau),$$
where the appropriate Markov process is $Z_t = (X_t, t)$. Then we know $V(X_t, t)$ is a martingale in the continuation region $C \subseteq \mathbb{R}^d \times \mathbb{R}_+$. Assuming suitable smoothness on $f$, we can apply Itô to derive the generator of $Z_t$:
$$L_Z f(x, s) = \lim_{t \downarrow 0} \frac{\mathbb{E}_{(x,s)} f(X_t, t+s) - f(x, s)}{t} = \lim_{t \downarrow 0} \frac{1}{t}\,\mathbb{E}_{(x,s)}\Big[\int_0^t L_X f(X_r, s+r)\,dr + \int_0^t \frac{\partial f}{\partial t}(X_r, s+r)\,dr\Big] = L_X f(x, s) + \frac{\partial f}{\partial t}(x, s).$$

So the condition that $V$ is a martingale on the continuation region now implies we get the PDE:
$$L_X V(x, t) = -\frac{\partial V}{\partial t}(x, t), \quad (x, t) \in C; \qquad (39)$$
$$V(x, t) = M(x), \quad (x, t) \in \partial C.$$
Note that this is the Cauchy problem in PDE theory. There is also a killed version of this problem, where the right-hand side of (39) becomes $-\frac{\partial V}{\partial t} + \lambda V$.

(iii) Let $L$ be a continuous function and consider
$$V(x) = \sup_\tau \mathbb{E}_x\Big[\int_0^\tau L(X_s)\,ds\Big].$$
Note that this problem does not immediately lend itself to a Markov setting, so we need to expand the state space: consider the process $Z_t = (X_t, L_t)$, where $L_t = l + \int_0^t L(X_s)\,ds$, and $l$ is the starting point of the process $L_t$. Then for a function $h$, applying Itô as above, we see that the generator of the new process $Z$ is given by
$$L_Z h(x, l) = L_X h(x, l) + \frac{\partial h}{\partial l}(x, l)\,L(x),$$
where the first term is the operator $L_X$ applied only to the first co-ordinate of $h$. Now consider the more general problem
$$\tilde V(x, l) = \sup_\tau \mathbb{E}_x\Big[l + \int_0^\tau L(X_s)\,ds\Big].$$
Then $\frac{\partial \tilde V}{\partial l} = 1$, and $\tilde V(x, l) = V(x) + l$. So the condition $L_Z \tilde V(x, l) = 0$ implies
$$L_X V + \frac{\partial \tilde V}{\partial l}\,L(x) = 0,$$
and therefore we deduce that $V$ solves the PDE
$$L_X V(x) = -L(x), \quad x \in C; \qquad (40)$$
$$V(x) = 0, \quad x \in \partial C.$$
Once again, there is a killed version, where the right-hand side of (40) becomes $-L(x) + \lambda V$.

(iv) Consider $S_t = \big(\max_{0 \le r \le t} X_r\big) \vee S_0$, and let our underlying Markov process be $Z_t = (X_t, S_t)$ on $\tilde E = \{(x, s) : x \le s\}$, where $(X_0, S_0) = (x, s)$ under $\mathbb{P}_{x,s}$ for $(x, s) \in \tilde E$. We wish to consider a problem of the form
$$V(x, s) = \sup_\tau \mathbb{E}_{x,s}[M(X_\tau, S_\tau)],$$
where $M(x, s)$ is a continuous function on $\tilde E$.
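The guess-and-verify recipe of this section can be illustrated on the perpetual American put of the introduction, $\sup_\tau \mathbb{E}_x[e^{-r\tau}(K - X_\tau)^+]$ for geometric Brownian motion $dX_t = rX_t\,dt + \sigma X_t\,dB_t$, a killed problem of type (i) with constant rate $\lambda = r$. This is a standard example rather than one worked in the notes: the candidate boundary is $b = \gamma K/(1+\gamma)$ with $\gamma = 2r/\sigma^2$, and on the continuation region $V(x) = (K-b)(x/b)^{-\gamma}$, pasted smoothly onto $K - x$. The sketch below checks (35)-(38) numerically.

```python
import numpy as np

# Guess-and-verify for the perpetual American put (standard example):
# guess D = (0, b], C = (b, infinity), with smooth fit determining b.
r, sigma, K = 0.05, 0.4, 1.0
gamma = 2.0 * r / sigma**2
b = gamma * K / (1.0 + gamma)

def V(x):
    """Candidate value: payoff on D, power solution of the ODE on C."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= b, K - x, (K - b) * (x / b) ** (-gamma))

G = lambda x: np.maximum(K - x, 0.0)

# Finite-difference check of the killed ODE L_X V = r V on C, i.e.
# (1/2) sigma^2 x^2 V'' + r x V' - r V = 0, plus domination and matching.
x = np.linspace(b * 1.01, 4.0, 200)
h = 1e-4
Vp = (V(x + h) - V(x - h)) / (2 * h)
Vpp = (V(x + h) - 2 * V(x) + V(x - h)) / h**2
residual = 0.5 * sigma**2 * x**2 * Vpp + r * x * Vp - r * V(x)

assert np.max(np.abs(residual)) < 1e-4   # L_X V = r V on C       (37)
assert np.all(V(x) >= G(x))              # V >= G                 (36)
assert abs(V(b) - (K - b)) < 1e-12       # value matching at b    (38)
```

Smooth fit also holds: the one-sided derivative of $V$ at $b$ from the right is $-1$, matching the slope of $K - x$, which is exactly the condition that pins down the free boundary $b$.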


More information

T 1. The value function v(x) is the expected net gain when using the optimal stopping time starting at state x:

T 1. The value function v(x) is the expected net gain when using the optimal stopping time starting at state x: 108 OPTIMAL STOPPING TIME 4.4. Cost functions. The cost function g(x) gives the price you must pay to continue from state x. If T is your stopping time then X T is your stopping state and f(x T ) is your

More information

Information and Credit Risk

Information and Credit Risk Information and Credit Risk M. L. Bedini Université de Bretagne Occidentale, Brest - Friedrich Schiller Universität, Jena Jena, March 2011 M. L. Bedini (Université de Bretagne Occidentale, Brest Information

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

CLASSICAL PROBABILITY MODES OF CONVERGENCE AND INEQUALITIES

CLASSICAL PROBABILITY MODES OF CONVERGENCE AND INEQUALITIES CLASSICAL PROBABILITY 2008 2. MODES OF CONVERGENCE AND INEQUALITIES JOHN MORIARTY In many interesting and important situations, the object of interest is influenced by many random factors. If we can construct

More information

Lecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011

Lecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011 Random Walks and Brownian Motion Tel Aviv University Spring 20 Instructor: Ron Peled Lecture 5 Lecture date: Feb 28, 20 Scribe: Yishai Kohn In today's lecture we return to the Chung-Fuchs theorem regarding

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

arxiv: v1 [math.pr] 11 Jan 2013

arxiv: v1 [math.pr] 11 Jan 2013 Last-Hitting Times and Williams Decomposition of the Bessel Process of Dimension 3 at its Ultimate Minimum arxiv:131.2527v1 [math.pr] 11 Jan 213 F. Thomas Bruss and Marc Yor Université Libre de Bruxelles

More information

Useful Probability Theorems

Useful Probability Theorems Useful Probability Theorems Shiu-Tang Li Finished: March 23, 2013 Last updated: November 2, 2013 1 Convergence in distribution Theorem 1.1. TFAE: (i) µ n µ, µ n, µ are probability measures. (ii) F n (x)

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

Optimal Stopping under Adverse Nonlinear Expectation and Related Games

Optimal Stopping under Adverse Nonlinear Expectation and Related Games Optimal Stopping under Adverse Nonlinear Expectation and Related Games Marcel Nutz Jianfeng Zhang First version: December 7, 2012. This version: July 21, 2014 Abstract We study the existence of optimal

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

Online Appendix Durable Goods Monopoly with Stochastic Costs

Online Appendix Durable Goods Monopoly with Stochastic Costs Online Appendix Durable Goods Monopoly with Stochastic Costs Juan Ortner Boston University March 2, 2016 OA1 Online Appendix OA1.1 Proof of Theorem 2 The proof of Theorem 2 is organized as follows. First,

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f),

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f), 1 Part I Exercise 1.1. Let C n denote the number of self-avoiding random walks starting at the origin in Z of length n. 1. Show that (Hint: Use C n+m C n C m.) lim n C1/n n = inf n C1/n n := ρ.. Show that

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010 Optimal stopping for Hunt and Lévy processes Ernesto Mordecki 1 Lecture III. PASI - Guanajuato - June 2010 1Joint work with Paavo Salminen (Åbo, Finland) 1 Plan of the talk 1. Motivation: from Finance

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

02. Measure and integral. 1. Borel-measurable functions and pointwise limits

02. Measure and integral. 1. Borel-measurable functions and pointwise limits (October 3, 2017) 02. Measure and integral Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2017-18/02 measure and integral.pdf]

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference Optimal exit strategies for investment projects Simone Scotti Université Paris Diderot Laboratoire de Probabilité et Modèles Aléatories Joint work with : Etienne Chevalier, Université d Evry Vathana Ly

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0.

Time is discrete and indexed by t =0; 1;:::;T,whereT<1. An individual is interested in maximizing an objective function given by. tu(x t ;a t ); (0. Chapter 0 Discrete Time Dynamic Programming 0.1 The Finite Horizon Case Time is discrete and indexed by t =0; 1;:::;T,whereT

More information

Lecture 7. 1 Notations. Tel Aviv University Spring 2011

Lecture 7. 1 Notations. Tel Aviv University Spring 2011 Random Walks and Brownian Motion Tel Aviv University Spring 2011 Lecture date: Apr 11, 2011 Lecture 7 Instructor: Ron Peled Scribe: Yoav Ram The following lecture (and the next one) will be an introduction

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

HOMEWORK ASSIGNMENT 6

HOMEWORK ASSIGNMENT 6 HOMEWORK ASSIGNMENT 6 DUE 15 MARCH, 2016 1) Suppose f, g : A R are uniformly continuous on A. Show that f + g is uniformly continuous on A. Solution First we note: In order to show that f + g is uniformly

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES

MATH 56A SPRING 2008 STOCHASTIC PROCESSES MATH 56A SPRING 008 STOCHASTIC PROCESSES KIYOSHI IGUSA Contents 4. Optimal Stopping Time 95 4.1. Definitions 95 4.. The basic problem 95 4.3. Solutions to basic problem 97 4.4. Cost functions 101 4.5.

More information

DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION

DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION DYNAMIC LECTURE 5: DISCRETE TIME INTERTEMPORAL OPTIMIZATION UNIVERSITY OF MARYLAND: ECON 600. Alternative Methods of Discrete Time Intertemporal Optimization We will start by solving a discrete time intertemporal

More information

Mathematical Finance

Mathematical Finance ETH Zürich, HS 2017 Prof. Josef Teichmann Matti Kiiski Mathematical Finance Solution sheet 14 Solution 14.1 Denote by Z pz t q tpr0,t s the density process process of Q with respect to P. (a) The second

More information

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce

More information

DR.RUPNATHJI( DR.RUPAK NATH )

DR.RUPNATHJI( DR.RUPAK NATH ) Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Liquidation in Limit Order Books. LOBs with Controlled Intensity

Liquidation in Limit Order Books. LOBs with Controlled Intensity Limit Order Book Model Power-Law Intensity Law Exponential Decay Order Books Extensions Liquidation in Limit Order Books with Controlled Intensity Erhan and Mike Ludkovski University of Michigan and UCSB

More information

Alternate Characterizations of Markov Processes

Alternate Characterizations of Markov Processes Chapter 10 Alternate Characterizations of Markov Processes This lecture introduces two ways of characterizing Markov processes other than through their transition probabilities. Section 10.1 addresses

More information

On the principle of smooth fit in optimal stopping problems

On the principle of smooth fit in optimal stopping problems 1 On the principle of smooth fit in optimal stopping problems Amir Aliev Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory, 119992, Moscow, Russia. Keywords:

More information

1 Basics of probability theory

1 Basics of probability theory Examples of Stochastic Optimization Problems In this chapter, we will give examples of three types of stochastic optimization problems, that is, optimal stopping, total expected (discounted) cost problem,

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING J. TEICHMANN Abstract. We introduce the main concepts of duality theory for utility optimization in a setting of finitely many economic scenarios. 1. Utility

More information

A VERY BRIEF REVIEW OF MEASURE THEORY

A VERY BRIEF REVIEW OF MEASURE THEORY A VERY BRIEF REVIEW OF MEASURE THEORY A brief philosophical discussion. Measure theory, as much as any branch of mathematics, is an area where it is important to be acquainted with the basic notions and

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30) Stochastic Process (NPC) Monday, 22nd of January 208 (2h30) Vocabulary (english/français) : distribution distribution, loi ; positive strictement positif ; 0,) 0,. We write N Z,+ and N N {0}. We use the

More information