Nonzero-Sum Games of Optimal Stopping for Markov Processes

Size: px
Start display at page:

Download "Nonzero-Sum Games of Optimal Stopping for Markov Processes"

Transcription

1 Appl Math Optim : Nonzero-Sum Games of Optimal Stopping for Markov Processes Natalie Attard Published online: 7 November 206 The Authors 206. This article is published with open access at Springerlink.com Abstract Two players are observing a right-continuous and quasi-left-continuous strong Markov process X. We study the optimal stopping problem Vσ x = sup τ M x τ, σ for a given stopping time σ resp. V 2 τ x = sup σ M2 x τ, σ for given τ where M x τ, σ = E xg X τ I τ σ + H X σ I σ < τ] with G, H being continuous functions satisfying some mild integrability conditions resp. M 2 x τ, σ = E xg 2 X σ I σ < τ + H 2 X τ I τ σ] with G 2, H 2 being continuous functions satisfying some mild integrability conditions. We show that if σ = σ D2 = inf{t 0 : X t D 2 } resp. τ = τ = inf{t 0 : X t D } where D 2 resp. D has a regular boundary, then Vσ D2 resp. Vτ 2 is finely continuous. If D 2 resp. D is also finely closed then τ σ D 2 = inf{t 0 : X t D σ D 2 } resp. σ τ D = inf{t 0 : X t D τ D 2 } where D σ D 2 = {Vσ D2 = G } resp. D τ D 2 ={Vτ 2 = G 2 } is optimal for player one resp. player two. We then derive a partial superharmonic characterisation for Vσ D2 resp. Vτ 2 which can be exploited in examples to construct a pair of first entry times that is a Nash equilibrium. Keywords Nonzero-sum optimal stopping game Nash equilibrium Markov process Double partial superharmonic characterisation Principle of double smooth fit Principle of double continuous fit Mathematics Subject Classification Primary 9A5 60G40; Secondary 60J25 60G44 B Natalie Attard natalie.attard@postgrad.manchester.ac.uk School of Mathematics, The University of Manchester, Oxford Road, Manchester M3 9PL, UK 23

2 568 Appl Math Optim : Introduction Optimal stopping games, often referred to as Dynkin games, are extensions of optimal stopping problems. Since the seminal paper of Dynkin 4], optimal stopping games have been studied extensively. Martingale methods for zero-sum games were studied by Kifer 32], Neveu 44], Stettner 55], Lepeltier and Maingueneau 4] and Ohtsubo 45]. The Markovian framework was initially studied by Frid 22], Gusein-Zade 26], Elbakidze 8] and Bismut 5]. Bensoussan and Friedman 2] and Friedman 20,2] considered zero-sum optimal stopping games for diffusions and developed an analytic approach by relying on variational and quasi-variational inequalities. Ekström and Peskir 6] proved the existence of a value in two-player zero-sum optimal stopping games for right-continuous strong Markov processes and construct a Nash equilibrium point under the additional assumption that the underlying process is quasi-left continuous. Peskir in 5] and 52] extended these results further by deriving a semiharmonic characterisation of the value of the game without assuming that a Nash equilibrium exists a priori. In particular, a necessary and sufficient condition for the existence of a Nash equilibrium is that the value function coincides with the smallest superharmonic and the largest subharmonic function lying between the gain and the loss function which, in the case of absorbed Brownian motion in 0,], is equivalent to pulling a rope between two obstacles that is finding the shortest path between the graphs of two functions. Connections between zero-sum optimal stopping games and singular stochastic control problems were studied in 23,3] and4]. Cvitanic and Karatzas ] showed that backward stochastic differential equations are connected with the value function of a zero-sum Dynkin game. Advances in this direction can be found in 27]. Various authors have also studied zero-sum optimal stopping games with randomised strategies. For further details one can refer to 39] and the references therein. Zero-sum optimal stopping games have been used extensively in the pricing of game contingent claims both in complete and incomplete markets see for example 5,7,24,25,30,33,35,36,38] and 9]. Literature on nonzero-sum optimal stopping games is mainly concerned with the existence of a Nash equilibrium. Initial studies in discrete time date back to Morimoto 42] wherein a fixed point theorem for monotone mappings is used to derive sufficient conditions for the existence of a Nash equilibrium point. Ohtsubo 46] derived equilibrium values via backward induction and in 47] the same author considers nonzero-sum games in which the lower gain process has a monotone structure, and gives sufficient conditions for a Nash equilibrium point to exist. Shmaya and Solan in 54] proved that every two player nonzero-sum game in discrete time admits an ε-equilibrium in randomised stopping times. In continuous time Bensoussan and Friedman 3] showed that, for diffusions, a Nash equilibrium exists if there exists a solution to a system of quasi-variational inequalities. However, the regularity and uniqueness of the solution remain open problems. Nagai 43] studies a nonzero-sum stopping game of symmetric Markov processes. A system of quasi-variational inequalities is introduced in terms of Dirichlet forms and the existence of extremal solutions of a system of quasivariational inequalities is discussed. Nash equilibrium points of the stopping game are then obtained from these extremal solutions. 
Cattiaux and Lepeltier 8] study special right-processes, namely Hunt processes in the Ray topology, and they prove existence 23

3 Appl Math Optim : of a quasi-markov Nash Equilbrium. The authors follow Nagai s idea but use probabilistic tools rather than the theory of Dirichlet forms. Huang and Li in 29] prove the existence of a Nash equilibrium point for a class of nonzero-sum noncyclic stopping games using the martingale approach. Laraki and Solan 40] proved that every two-player nonzero-sum Dynkin game in continuous time admits an ε equilibrium in randomised stopping times. Hamadène and Zhang in 28] prove existence of a Nash equilibrium using the martingale approach, for processes with positive jumps. One application of nonzero-sum optimal stopping games is seen in the study of game options in incomplete markets, via the consideration of utility-based arguments see 34]. Nonzero-sum optimal stopping games have also been used to model the interaction between bondholders and shareholders in the study of convertible bonds, when corporate taxes are included and when the company is allowed to claim default see 9]. In this work we consider two player nonzero-sum games of optimal stopping for a general strong Markov process. The aim is to use probabilistic tools to study the optimal stopping problem of player one resp. player two when the stopping time of player two resp. player one is externally given. Although this work does not deal with the question of existence of mutually best responses that is the existence of a Nash equilibrium the results obtained can be exploited further in various examples, to show the existence of a pair of first entry times that will be a Nash equilibrium. Indeed, the results derived here will be used in a separate work see ] to construct Nash equilibrium points for one dimensional regular diffusions and for a certain class of payoff functions. This paper is organised as follows: In Sect. 2 we introduce the underlying setup and formulate the nonzero-sum optimal stopping game. In Sect. 3 we show that if the strategy chosen by player two resp. player one is σ D2 resp. τ, the first entry time into a regular Borel subset D 2 resp. D of the state space, then the value function of player one associated with σ D2 resp. the value function of player two associated with τ, which we shall denote by Vσ D2 resp. Vτ 2, is finely continuous. In Section 4 we shall use this regularity property of Vσ D2 resp. Vτ 2 to construct an optimal stopping time for player one resp. player two. In Sect. 5 we shall use the results obtained in Sects. 3 and 4 to provide a partial superharmonic characterisation for Vσ D2 resp. Vτ 2. More precisely if D 2 resp. D is also a closed or finely closed subset of the state space then Vσ D2 resp. Vτ 2 can be identified with the smallest finely continuous function that is superharmonic in D2 c resp. in Dc and that majorises the lower payoff function. In Section 6 we shall consider stationary one-dimensional Markov processes and we shall assume that there exists a pair of stopping times τ,σ B of the form τ = inf{t 0 : X t } and σ B = inf{t 0 : X t } where <, that is a Nash equilibrium point. We first show that V σ B resp. V τ is continuous at resp. at. Then for the special case of one dimensional regular diffusions we shall use the results obtained in Sect. 5 to show that V σ B resp. V 2 τ is also smooth at resp. provided that the payoff functions are smooth. This is in line with the principle of smooth fit observed in standard optimal stopping problems see for example 49] for further details. 23

4 570 Appl Math Optim : Formulation of the Problem In this section we shall formulate rigorously the nonzero-sum optimal stopping game. For this we shall first set up the underlying framework. This will be similar to the one presented by Ekström and Peskir cf. 6, p. 3]. On a given filtered probability space,f,ft t 0, P x we define a strong Markov process X = Xt t 0 with values in a measurable space E, B, with E being a locally compact Hausdorff space with a countable base note that since E has a countable base then it is a Polish space and B the Borel σ -algebra on E. We shall assume that P x X 0 = x =, that the sample paths of X are right-continuous and that X is quasi-left-continuous that is X ρn X ρ P x -a.s. whenever ρ n and ρ are stopping times such that ρ n ρ P x -a.s.. All stopping times mentioned throughout this text are relative to the filtration F t t 0 introduced above, which is also assumed to be right-continuous. This means that entry times in open and closed sets are stopping times. Moreover F 0 is assumed to contain all P x - null sets from F X = σ X t : t 0, which further implies that the first entry times to Borel sets are stopping times. We shall also assume that the mapping x P x F is universally measurable for each F F so that the mapping x E x Z] is universally measurable for each integrable random variable Z. Remark 2. Note that a subset F of a Polish space E is said to be universally measurable if it is μ-measurable for every finite measure μ on E, B, where B is the Borel-σ algebra on E. By μ-measurable we mean that F is measurable with respect to the completion of B under μ.if is the collection of universally measurable subsets of E then a function f : E R is said to be universally measurable if f A for all A BR, where BR is the Borel-σ algebra on R. Finally we shall assume that is the canonical space E 0, with X t ω = ωt for ω. In this case the shift operator θ t : is well defined by θ t ωs = ωt+s for ω and t, s 0. The Markovian version of a nonzero-sum optimal stopping game may now be formally described as follows. Let G, G 2, H, H 2 : E R be continuous functions satisfying G i H i and the following integrability conditions; E x sup t G i X t ] <, and E x sup H i X t ] < 2. t for i =, 2. Suppose that two players are observing X. Player one wants to choose a stopping time τ and player two a stopping time σ in such a way as to maximise their total average gains, which are respectively given by M x τ,σ = E x G X τ I τ σ + H X σ I σ <τ] 2.2 M 2 x τ,σ = E x G 2 X σ I σ <τ + H 2 X τ I τ σ ]. 2.3 For a given strategy σ chosen by player two, we let 23 V σ x = sup M x τ, σ 2.4 τ

5 Appl Math Optim : and for a given strategy τ chosen by player one, we let V 2 τ x = sup M 2 x τ, σ. 2.5 σ We shall refer to Vσ resp. V τ 2 as the value function of player one resp. player two associated with the given stopping time σ resp. τ of player two resp. player one. We shall assume that the stopping times in 2.4 and 2.5 are finite valued and if the terminal time T is finite we shall further assume that G i X T = H i X T P x -a.s. 2.6 for i =, 2. In this case one can think of X as being a two dimensional process t, Y t t 0 so that G i and H i will be functions on 0, T ] E cf. 49, p.36]. We note that if the terminal time T is infinite and the stopping times τ and σ are allowed to be infinite our results will still be valid provided that lim sup t G i X t = lim sup t H i X t P x -a.s. The game is said to have a solution if there exists a pair of stopping times τ,σ which is a Nash equilibrium point, that is M x τ, σ M x τ,σ and M 2 x τ,σ M 2 x τ,σ for all stopping times τ,σ. This means that none of the players will perform better if they change their strategy independent of each other. In this case Vσ x = M x τ,σ is the payoff function of player one and Vτ 2 x = M 2 x τ,σ the payoff function of player two in this equilibrium. So Vσ and Vτ 2 can be called the value functions of the game corresponding to τ,σ. In general, as we shall see in Sect. 6, there might be other pairs of stopping times that form a Nash equilibrium point, which can lead to different value functions. 3 Fine continuity property In this section we show that if the strategy chosen by player two resp. player one corresponds to the first entry time into a subset D 2 resp. D ofe, whose boundary D 2 resp. D is regular, then Vσ D2 resp. Vτ 2 is continuous in the fine topology i.e. finely continuous. For literature on the fine topology one can refer to 6,0] and 3]. We first define the concept of a finely open set and a regular boundary of a Borel subset of E. Definition 3. An arbitrary set B E is said to be finely open if there exists a Borel set A B such that P x ρ A c > 0 = for every x A, where ρ A c = inf{t > 0 : X t A c } is the first hitting time in A c. Definition 3.2 The boundary D of a Borel set D E is said to be regular for D if P x ρ D = 0 = for every point x D, where ρ D = inf{t > 0 : X t D}. We now introduce preliminary results which are needed to prove the main theorem of this section. 23

6 572 Appl Math Optim : Lemma 3.3 For any given stopping time σ resp. τ, the mapping x Vσ x,resp. x Vτ 2 x is measurable. Proof To prove measurability of the mapping x Vτ 2 x one can follow the proof in 6, p. 5, pt. 3] by replacing G and G 2 with G 2 and H 2 respectively note that our payoff functions are assumed to be continuous, hence finely-continuous. So we shall only prove the result for Vσ. Ekström and Peskir 6, p.5,pt.3] proved that for any given stopping time σ, the function Ṽσ of the optimal stopping problem sup τ M x τ, σ = sup τ E x G X τ I τ < σ + H X σ I σ τ] is measurable. The same method of proof can be applied in this setting with the following slight modification: Let G σ, t = G X τ I t σ + H X σ I σ < t and G t σ, = G X t I t <σ+ H X σ I σ t. Note that the mapping t G σ, t is not right-continuous. Now for any stopping time τ in the optimal stopping problem 2.4 weletτ n = 2 k n on { k 2 n <τ 2 k n } for each n. It is well known that τ n,for each n is a stopping time on the dyadic rationals Q n of the form 2 k n and that τ n τ as n. Since G X is right-continuous we have that lim n Gσ, τ n = G σ, τ P x -a.s. 3. Since G H we get, upon using 3. and Fatou s lemma the required integrability condition for using Fatou s lemma can be derived from the integrability assumption 2., that E x G σ, τ ] Ex G σ, ] τ = Ex lim ] =: sup sup sup E x G σ, τ n τ Q n n n Gσ, τ n ] lim inf n E x ] G σ, τ n V σ, n x. 3.2 Taking the supremum over all τ it follows that Vσ x sup n V n σ, x. Onthe other hand, Vn σ, x Vσ x for all n so we get that V σ x = sup n V n σ, x for all x E. Measurability of Vσ x follows from the measurability property of sup n Vn σ, x as in 6]. Lemma 3.4 Let D be a Borel subset of E and let x D, where D is a regular boundary for D. Suppose that ρ n n= is a sequence of stopping times such that ρ n 0 P x -a.s. as n. Set σ ρn = inf{t ρ n : X t D}. Then σ ρn 0 P x -a.s. as n. Proof Let x D. By regularity of D for any ε>0there exists t 0,ε such that X t D P x -a.s. Since σ ρn is a sequence of decreasing stopping times then σ ρn β P x -a.s. for some stopping time β. So suppose for contradiction that β>0. Now ρ n 0 P x -a.s. and for each n we have σ ρn β P x -a.s. So for any given ω \N where P x N = 0 we have that X t ω / D for all t 0, βω and this contradicts the fact that D is regular for D. The next lemma and theorem, which we shall exploit in this study, provide conditions for fine continuity. The proofs of these results can be found in 3]. 23

7 Appl Math Optim : Lemma 3.5 A measurable function F : E R is finely continuous if and only if lim t 0 FX t = Fx P x -a.s. 3.3 for every x E. This is further equivalent to the fact that the mapping t FX t ω is right-continuous on R for every ω \N where P x N = 0 for all x E. Theorem 3.6 Let F : E R be a measurable function and suppose that K K 2 K 3... is a nested sequence of compact sets in E. Suppose also that ρ Kn 0 P x -a.s. as n, where ρ Kn = inf{t 0 : X t K n }.Iflim n E x FX ρkn ]=Fx then F is finely continuous. We next state and prove the main result of this section, that is the fine continuity property of V σ D2 resp. V 2 τ. Theorem 3.7 Let ρ n n= be any sequence of stopping times such that ρ n 0 P x -a.s. as n. Suppose that D, D 2 are Borel subsets of E having regular boundaries D and D 2 respectively. Then lim E x V X n σd2 ρ n ] = Vσ D2 x 3.5 lim E x V 2 n τ X ρn ] = Vτ 2 x 3.6 where σ D2 = inf{t 0 : X t D 2 } and τ = inf{t 0 : X t D }. Proof We will only prove 3.5 as3.6 follows by symmetry. From Lemma 3.3 we know that V σ D2 is measurable. This implies that Vσ D2 X ρn = sup M X ρn τ, σ D2 3.7 τ is a random variable. By the strong Markov property of X we have that M ] X ρn τ, σ D2 = E x G X τρn I τ ρn σ ρn + H X σρn I σ ρn <τ ρn F ρn 3.8 where we set τ ρn = ρ n + τ θ ρn and σ ρn = ρ n + σ D θ ρn. It is well known that τ ρn and σ ρn are stopping times see for example 0, Section.3, Thoerem ]. Let us set M x τ, σ F ] ρ = E x G X τ I τ σ+ H X σ I σ < τ F ρ 3.9 for given stopping times τ,σ and ρ. Then from 3.7 and 3.8 we get that V σ D X ρn = ess sup τ M x τ ρ n,σ ρn F ρn = ess sup τ ρn M x τ, σ ρ n F ρn

8 574 Appl Math Optim : The last equality follows from the fact that for every stopping time τ ρ n, there exists a function τ ρ n : 0, ] such that τ ρ n is F ρn F measurable 3. ϑ τ ρ n ω, ϑ is a stopping time 3.2 τω = ρ n + τ ρ n ω, θ ρn ω 3.3 for all ω. Note that the latter assertion can be derived from Galmarino s test. In particular if τ is the first entry time of X into a set, then τ = σ + τ θ ρn and τ ρ n can be identified with τ in the sense that τ ρn ω, ϑ = τϑ for all ω and ϑ. Taking expectations on both sides in 3.0 we get E x V σd X ρn ] = E x ess supτ ρn M x τ, σ ρ n F ρn ] We next show that the family {M x τ, σ ρ n F ρn : τ ρ n } is upwards directed. For this we show that for any two stopping times τ,τ 2 ρ n there exists τ 3 ρ n such that M x τ 3,σ ρn F ρn M x τ,σ ρn F ρn M x τ 2,σ ρn F ρn.soletτ,τ 2 ρ n be any two stopping times given and fixed and define the set A ={ω : M x τ,σ ρn F ρn ω M x τ 2,σ ρn F ρn ω}. NowA F ρn because M x τ i,σ ρn F ρn for i =, 2are F ρn measurable. Let τ 3 = τ I A + τ 2 I A c. Since τ,τ 2 ρ n it follows that τ 3 ρ n. Also, {τ 3 t} = {{τ t} A} {{τ 2 t} A c } = {{τ t} A {ρ n t}} {{τ 2 t} A c {ρ n t}} F t. This follows from the fact that the sets A and A c belong to F ρn and that {τ i t} {ρ n t} for i =, 2. So τ 3 is a stopping time and hence, M x τ3,σ ρn F ρn = M x τ I A + τ 2 I A c,σ ρn F ρn 3.5 = M x τ,σ ρn F ρn IA + M x τ2,σ ρn F ρn IA c = M x τ,σ ρn F ρn M x τ2,σ ρn F ρn 3 We next prove that if D 2 is a regular boundary for D 2 then Vσ D2 x lim inf E x V X n σd2 ρ n ]. 3.6 For this we first show that 23 M x τ, σ ρ n M x τ ρ n,σ ρn = E x G X τ ρn G X ρn ] 3.7

9 Appl Math Optim : Since σ ρn ρ n we have that M x τ, σ ρ n = E x G X τ I τ σ ρn + H X σρn I σ ρn <τi τ < ρ n ] + E x G X τ ρn I τ ρ n σ ρn + H X σρn I σ ρn <τ ρ n I τ ρ n ] = E x G X τ I τ < ρ n ] + M x τ ρ n,σ ρn E x G X ρn I ρ n σ ρn + H X σρn I σ ρn <ρ n I τ < ρ n ] = E x G X τ G X ρn I τ < ρ n ] + M x τ ρ n,σ ρn = E x G X τ ρn G X ρn ] + M x τ ρ n,σ ρn 3.8 from which 3.7 follows. By considering separately the sets {σ D2 <ρ n }, {σ D2 ρ n } note that σ ρn = σ D2 on the set {σ D2 ρ n }, {σ D2 > 0}, {σ D2 = 0}, {τ = 0} and {τ >0}, and by using Lemma 3.4 we get M x τ, σ D 2 M x τ, σ ρ n = E x G X τ I τ σ ρn I 0 <ρ n I σ D2 = 0I τ > 0 + H X 0 H X σρn I σ ρn <τi 0 <ρ n I σ D2 = 0I τ > 0] = E x H X 0 H X σρn I 0 <ρ n I σ D2 = 0I τ > 0] 3.9 for n sufficiently large. The last equality in 3.9 can be seen as follows: If x intd 2 the interior of D 2 then by the right-continuity property of the sample paths it follows that σ ρn = 0 P x -a.s. for n sufficiently large. If on the other hand x D 2 then by Lemma 3.4, we have that σ ρn 0 P x -a.s. Note that in the case x / D 2 D 2 then σ D2 > 0 P x -a.s. and so the terms in the right-hand side of 3.9 vanish. Combining 3.7 and 3.9 we get M x τ, σ D 2 M x τ ρ n,σ ρn = M x τ, σ D 2 M x τ, σ ρ n + M x τ, σ ρ n M x τ ρ n,σ ρn E x sup G X t ρn G X ρn ] t ρ n + E x H X 0 H X σρn I 0 <ρ n I σ D2 = 0I τ > 0] for n sufficiently large. So V σ D2 x lim inf sup n τ = lim inf n M x τ ρ n,σ ρn sup M x τ, σ ρ n = lim inf τ ρ n E x V X n σρn ρ n ]. 3.2 The first inequality follows from Indeed, since G and H are continuous, the composed processes G X and H X are right-continuous and so both terms 23

10 576 Appl Math Optim : on the right hand side of 3.20 tend to zero as n note that if x D 2 this follows from Lemma 3.4 whereas if x intd 2 or x / D D the result follows as explained in the text before The last equality in 3.2 follows from the fact that the family {M x τ, σ ρn F ρn : τ ρ n } is upwards directed see step 2, and so we can interchange the expectation and the essential supremum in We show that V σ D2 x lim sup n E x V σ ρn X ρn ]. From3.7 and 3.9 we get M x τ, σ D 2 M x τ ρ n,σ ρn E x sup G X t G X ρn ] 3.22 t ρ n E x H X 0 H X σρn ] for any stopping time τ. From this, together with Lebesgue dominated convergence theorem upon recalling assumption 2. and the continuity property of G and H we conclude that V σ D2 x lim sup n sup M x τ ρ n,σ ρn = lim sup E x V σρn X ρn ]. τ ρ n n We next present an example to show that if D is not regular for D then V σ D may not be finely continuous. Example 3.8 Let E = R and B the Borel σ -algebra on R. Suppose that X is the deterministic motion to the right, that is the process starts at x R and X t = x +t P x - a.s. for each t 0. In this case the fine topology coincides with the right-topology cf. 53], so a function is finely continuous if it is right-continuous. Define the functions G x = ex 2 I x < + I x and H 2x 2 x = I x < + I x. x Let D = {} and σ 2 D = inf{t 0 : X t D}. Then D is not regular for D because if ρ D = inf{t > 0 : X t D} then ρ D = P -a.s. We show that for any given ε>0the stopping time τ ε = τ, + ε A + τ, A, where τ, is the first entry time of X in, and A ={τ, = σ D }, is optimal for player one given the strategy σ D chosen by player two. For each ε>0wehave that τ ε = τ, I x > + τ, + εi x = 0I x > + x + εi x. So M x τ ε,σ D = H I x + G xi x >. On the other hand, for any stopping time τ one can see that M x τ, σ D E x G X τ I τ < σ D + H X σd I σ D τ] <H I τ < x + H I x τi x + G x + τi x > M x τ ε,σ D where the last inequality follows from the fact that G is decreasing in,. Taking the supremum over all τ we get that V σ D x M x τ ε,σ D for all x. Since on the other hand V σ D x M x τ ε,σ D, it follows that V σ D x = M x τ ε,σ D for all x and so we must have that τ ε is optimal for player one, provided that player two selects strategy 23

11 Appl Math Optim : σ D. The value function is thus given by V σ D x = H xi x + G xi x > which is not right-continuous. 4 Towards a Nash equilibrium The main result of this section is to show that if σ D2 resp. τ is externally given as the first entry time in D 2 resp. D, a set that is either closed or finely closed, and has a regular boundary, then the first entry time τ σ D 2 = inf{t 0 : X t D σ D 2 } resp. σ τ D = inf{t 0 : X t D τ D } where D σ D 2 ={Vσ D2 = G } resp. D τ D 2 = {Vτ 2 = G 2 } solves the optimal stopping problem Vσ D2 x = sup τ M x τ, σ D 2 resp. Vτ 2 x = sup σ M 2 x τ D,σ. The proof of this result will be divided into several lemmas and propositions. Proposition 4. Let D, D 2 be Borel subsets of E having regularies boundaries D and D 2 respectively. Set τ = inf{t 0 : X t D } and σ D2 = inf{t 0 : X t D 2 }. Then, Vσ D2 x M σ D2 x τ ε,σ D2 + ε 4. Vτ 2 x M 2 x τ,σ τ D ε + ε 4.2 for any ε>0, where τ σ D 2 ε = inf{t 0 : X t D σ D 2,ε } and σ τ D ε = inf{t 0 : X t D τ D,ε 2 } with D σ D 2,ε ={Vσ D2 G + ε} and D τ D,ε 2 ={Vτ 2 G 2 + ε}. Proof We shall only prove 4. as for4.2 the result follows by symmetry. The proof will be carried out in several steps. Consider the optimal stopping problem Ṽ σ D2 x = sup τ M x τ, σ D where M x τ, σ D 2 = E x G X τ I τ < σ D2 + H X σd2 I σ D2 τ ]. 4.4 Recall that the mapping x Ṽσ D2 x is measurable cf. 6,p.5]andsoṼσ D2 X ρ = sup τ M Xρ τ, σ D2 is a random variable for any stopping time ρ. By the strong Markov property of X it follows that for any stopping time ρ given and fixed M X ρ τ, σ D2 = E Xρ G X τ I τ < σ D2 + H X σd2 I σ D2 τ ] 4.5 = E x G X ρ+τ θρ I ρ + τ θ ρ <ρ+ σ D2 θ ρ ] + H X ρ+σd2 θ ρ I ρ + σ D2 θ ρ ρ + τ θ ρ F ρ 23

12 578 Appl Math Optim : and so we have that Ṽσ D2 X ρ = ess sup E x G X ρ+τ θρ I ρ + τ θ ρ <ρ+ σ D2 θ ρ τ ] + H X ρ+σd2 θ ρ I ρ + σ D2 θ ρ ρ + τ θ ρ F ρ =: ess sup M x τ ρ,σ ρ D τ 2 F ρ 4.6 where σ ρ D 2 = inf{t ρ : X t D 2 } and τ ρ = ρ + τ θ ρ. The gain process G σ D 2, t = G X t I t <σ D2 + H X σd2 I σ D2 t is right-continuous, adapted and satisfies the integrability condition E x sup t 0 G σ D 2, t ] = E x sup G X t I t <σ D2 + H X σd2 I σ D2 t ] t 0 E x sup t 0 G X t I t <σ D2 +sup H X σd2 I σ D2 t ] < t 0 where the last inequality follows from assumption 2.. So the martingale approach in the theory of optimal stopping cf. 49, Theorem 2.2] can be applied in this setting to deduce that there exists a right-continuous modification of the supermartingale S σ D 2 t = ess sup τ t M x τ, σ D 2 F t, 4.7 known as the Snell envelope for simplicity of exposition we shall still denote the right-continuous modification by S σ D 2 t, such that the stopping time ˆτ t := ˆτ σ D 2 t = inf{s t : S σ D 2 s = G σ D 2, s } is optimal. It is known cf. 49, Theorem 2 p. 29] that the stopped process S σ D 2 s ˆτ t s t is a right-continuous martingale and so E x S σ D 2 ] ρ = Ex S σ D 2 ] ρ ˆτ ε = Ex S σ D 2 ] ρ ˆτ ε ˆτ = S σ D = Ṽσ D2 x 4.8 for every stopping time ρ ˆτ ε where ˆτ ε := τ σ D 2 ε = inf{t 0 : S σ D 2 t G σ D 2, t + ε}. Using the fact that σ ρ D 2 = σ D2 for any stopping time ρ σ D2, that P x -a.s., the essential supremum and its right-continuous modification are equivalent at stopping times and that the essential supremum is attained at hitting times cf. 49] it follows, from 4.6, that Ṽσ D2 X ρ = S σ D 2 ρ P x -a.s. 4.9 for every stopping time ρ σ D2. 2. We next show that Vσ D2 x = Ṽσ D2 x. Since G H we have that Ṽ σ D2 x M x τ, σ D 2 M x τ, σ D for all stopping times τ and for all x E. Taking the supremum over all τ we get that 23 Ṽ σ D2 x V σ D2 x. 4.

13 Appl Math Optim : To prove the reverse inequality we will show that M x τ, σ D2 Vσ D2 x for all stopping times τ so that sup τ M x τ, σ D2 = Ṽσ D2 x Vσ D2 x. By definition of V σ D2 we have that Vσ D2 x M x τ, σ D 2 for all stopping times τ. Nowtakeany stopping time τ and set τ ε = τ + ε A + τ A c where A ={τ = σ D2 }. Note that τ ε is a stopping time since A F τ σd2 F τ. If the time horizon T is finite, then we shall replace τ + ε in the definition of τ ε with τ + ε T. Then we have that M x τ ε,σ D2 = E x G X τ εi τ ε σ D2 + H X σd2 I σ D2 <τ ε I τ = T I σ D2 = T + G X τ εi τ ε σ D2 + H X σd2 I σ D2 <τ ε I τ < T I σ D2 = T + G X τ εi τ ε σ D2 + H X σd2 I σ D2 <τ ε I τ = T I σ D2 < T + G X τ εi τ ε σ D2 + H X σd2 I σ D2 <τ ε I τ < T I σ D2 < T ] = E x G X τ I τ < σ D2 + H X σd2 I σ D2 τi τ = T I σ D2 = T + G X τ I τ < σ D2 + H X σd2 I σ D2 τi τ < T I σ D2 = T + G X τ I τ < σ D2 + H X σd2 I σ D2 τi τ = T I σ D2 < T + G X τ I τ < σ D2 + H X σd2 A + I σ D2 <τi τ < T I σ D2 < T ] = M x τ, σ D 2 The first and third expressions in the second equality follow from assumption 2.6. The second expression also follows from assumption 2.6 together with the fact that I τ ε <σ D2 = I τ < σ D2 on the set {τ <T }. The last expression follows from the fact that I τ ε = σ D2 = 0 and I σ D2 < τ ε = A + I σ D2 < τ on the set {τ < T } {σ D2 < T }. So for any given stopping time τ we have Vσ D2 x M x τ ε,σ D2 = M x τ, σ D We show that Vσ D2 x = E x Vσ D2 X σd2 τ ε ] where τ ε := τ σ D 2 ε = inf{t 0 : X t D σ D 2,ε } with D σ D 2,ε ={Vσ D2 G + ε}. Fromstep2 it is sufficient to prove that Ṽσ D2 x = E x Ṽσ D2 X σd2 τ ε ] where τ ε := τ σ D 2 ε = inf{t 0 : X t D σ D 2,ε } with D σ D 2,ε ={Ṽσ D2 G + ε}. By definition of τ ε, for any given t <σ D2 τ ε,we have Ṽσ D2 X t >G X t + ε = G X t I t <σ D2 + H X σd2 I σ D2 t + ε = G σ D 2, t + ε 4.2 where the first equality follows from the fact that I σ D2 t = 0. Since σ D2 τ ε σ D2,by4.9 it follows that Ṽ σ D2 X σd2 τ ε = S σ D 2 σ D2 τ ε P x -a.s

14 580 Appl Math Optim : Using 4.9 again we get Ṽσ D2 X t = S σ D 2 t P x -a.s. 4.4 So by 4.2 we can conclude that S σ D 2 t > G σ D 2, t + ε P x -a.s. 4.5 Recall, from step 2, that ˆτ ε = inf{t 0 : S σ D 2 t G σ D 2, t }. By using the definition of ˆτ ε together with 4.5 one can see that σ D2 τ ε ˆτ ε.byusing4.8, 4.9 and that S σ D 2 s ˆτ 0 s 0 is a martingale we further get that S σ D 2 0 = Ṽ σ D2 x = E x S σ D 2 σ D2 τ ε ˆτ ε ] = Ex S σ D 2 σ D2 τ ε ] = Ex Ṽ σd2 X σd2 τ ε ]. 4.6 From 4.6 together with the fine continuity property of Vσ D2 upon using Lemma 3.5, the right-continuity property of the composite process GX and the fact that Ṽσ D2 = Vσ D2,wehave Vσ D2 x = E x S σ D 2 ] σ D2 τ ε = Ex S σ D 2 τ ε I τ ε σ D2 + S σ D 2 σ D2 I σ D2 <τ ε ] Ṽ = E x σd2 X τε I τ ε σ D2 + H X σd2 I σ D2 <τ ε ] E x G X τε + εi τ ε σ D2 + H X σd2 I σ D2 <τ ε ] M x τ ε,σ D2 + ε 4.7 for any ε>0, where the third equality follows from the fact that S σ D 2 σ D2 = ess sup E x G X τ I τ < σ D2 + H X σd2 I σ D2 τ F σd2 ] τ σ D2 = ess sup E x H X σd2 F σd2 ]=H X σd2 4.8 τ σ D2 Lemma 4.2 Let {ρ n } n= be a sequence of stopping times such that ρ n ρ P x -a.s. For a given Borel set D E define the entry times σ ρn = inf{t ρ n : X t D} and σ ρ = inf{t ρ : X t D}.Ifeither then σ ρn σ ρ P x -a.s. Disclosed, or 4.9 D is finely closed with regular boundary D, 4.20 Proof Since ρ n is an increasing sequence of stopping times, σ ρn is increasing and thus σ ρn β P x -a.s. for some stopping time β. We need to prove that β = σ ρ P x a.s. 23

15 Appl Math Optim : We first show that β ρ P x -a.s. Suppose, for contradiction, that the set ˆ := {ω : βω < ρω} is of positive measure. Since ρ n ρ P x -a.s. then for every ω ˆ \N, where P x N = 0-a.s., there exists n 0 ω N such that ρ n ω > βω 4.2 for all n n 0 ω. But for each n N we have that σ ρn β P x -a.s., so for every ω ˆ \N 2 where P x N 2 = 0 there exists n ω N such that σ ρn ω < ρ n ω 4.22 for all n n ω. Combining 4.2 and 4.22 it follows that for every ω ˆ \N N 2 there exists ˆnω N such that σ ρn ω < ρ n ω for all n ˆnω. But this contradicts the fact that σ ρn ρ n P x -a.s. So we must have that β ρ P x -a.s. 2 Let ={ω : βω > ρω} and 2 ={ω : βω = ρω}. We prove that there exists a set N with P x N = 0 such that βω = σ ρ ω for every ω 2 \N. i. Suppose first that P x >0. Since σ ρn β P x -a.s. then for every ω \N 3, where P x N 3 = 0, there exists n 2 ω N such that σ ρn ω > ρω for all n n 2 ω. Moreover, since σ ρ ω ρω for every ω \N 4 where P x N 4 = 0 it follows that for every ω \N, where N = N 3 N 4, there exists n 3 ω N such that σ ρn ω = σ ρ ω for all n n 3 ω. From this it follows that β = σ ρ P x -a.s. on. ii. Now suppose that P x 2 >0. Let us consider first the case when 4.9holds. So we have that σ ρn ω ρω for every ω 2 \N 5 where P x N 5 = 0. The fact that D is closed implies that X σρn D for all n N. Moreover, by the quasi-leftcontinuity property of X it follows that X σρn ω ω X ρω ω for every ω 2 \N 6 where P x N 6 = 0. Again using the fact that D is closed we have that X ρ ω D for every ω 2 \N 7 where P x N 7 = 0. From this it follows that σ ρ ω = ρω for every ω 2 \N with N = N 6 N 7. By definition of 2, this implies that σ ρ = β P x -a.s. on 2. Now let us consider the case when 4.20 holds. Again by quasileft-continuous of X it follows that X σρn ωω X ρω ω for each ω 2 \N 8 with P x N 8 = 0. Since D is not necessarily closed we have that X ρω ω D D. Suppose first that X ρω ω D. This means that σ ρω ω = ρω and so we have that σ ρn ωω σ ρω ω. From this we can conclude that βω = σ ρω ω. To prove that σ ρ = ρ P x -a.s. on the set := {ω : X ρ ω D} it is sufficient to show that P x {σ ρ >ρ}.letη D = inf{t > 0 : X t D}. By the strong Markov property of X we have that E Xρ I η D > 0] =E x I ρ + η D θ ρ >ρ F ρ ] 4.23 Multiplying both sides in 4.23byI and taking E x expectation on both sides note that X ρ is F ρ measurable we get that P x {σ ρ >ρ} = E x I E Xρ I η D > 0]] = 0 by the regularity property of X. 23

16 582 Appl Math Optim : Proposition 4.3 Let D, D 2 be either closed or finely closed subsets of E. Suppose also that their respective boundaries D and D 2 are regular. Let τ and σ D2 be the first entry times into D and D 2 respectively. Set τ σ D 2 ε = inf{t 0 : X t D σ D 2,ε } and σ τ D ε = inf{t 0 : X t D τ D,ε 2 } where D σ D 2,ε ={Vσ D2 G + ε} and D τ D,ε 2 = {Vτ 2 G 2 + ε}. Then τ σ D 2 ε τ σ D 2 P x -a.s. and σ τ D ε σ τ D P x - a.s., where τ σ D 2 = inf{t 0 : X t D σ D 2 } with D σ D 2 = {Vσ D2 = G } and σ τ D = inf{t 0 : X t D τ D 2 } with D τ D 2 ={Vτ 2 = G 2 }. Proof We shall only prove that τ σ D 2 ε τ σ D 2 P x -a.s. as the other assertion follows by symmetry. Recall the definition of Ṽσ D2 from From step 2 in the proof of Proposition 4. it is sufficient to prove that τ ε τ, where we recall that τ ε = inf{t 0 : X t D σ D 2,ε } with D σ D 2,ε ={Ṽσ D2 G + ε} and τ σ D 2 = inf{t 0 : X t D σ D 2 } with D σ D 2 ={Ṽσ D2 = G }. For each ε>0wehave that τ ε τ σ D 2. Since τ ε increases as ε decreases, τ ε β as ε 0 where β τ σ D 2 P x -a.s. To prove that β = τ σ D 2 we first show that Ṽ E x σd2 X β ] Ṽ lim inf E x σd2 X τε ] 4.24 ε 0 For stopping times σ β = inf{t β : X t D 2 } and σ τε = inf{t τ ε : X t D 2 } we have M x τ, σ β M x τ, σ τ ε = E x G X τ I τ < σ β + H X σβ I σ β <τ +H X σβ I σ β = τ,σ β = σ τε G X τ I τ < σ τε H X σ τε I σ τε <τ H X σ τε I σ τε = τ,σ τε = σ β +H X σβ I σ τε <τ H X σβ I σ τε <τ ] E x G X τ I τ < σ β I τ < σ τε I σ τε = τ,σ τε = σ β ] + E x H X σβ I σ β <τ I σ τε <τ+ I σ β = τ,σ β = σ τε ] + E x H X σβ H X σ τε I σ τε <τ ] = E x G X τ H X σβ I σ τε <τ<σ β ] + E x H X σβ H X σ τε I σ τε <τ ] E x sup t G X t +sup H X t I σ τε <τ<σ β ] t + E x H X σβ H X σ τε ] 4.25 where the first inequality follows from the fact that G H. By Lemma 4.2 we have that σ τε σ β as ε 0 and so the first term on the right hand side of the above expression tends to zero uniformly over all τ. Since H X is quasi-left-continuous, the second expression also tends to zero. By the strong Markov property of X recall 23

17 Appl Math Optim : that the expectation and the essential supremum in 3.4 can be interchanged it follows that E x Ṽ σ D2 X β ]=sup τ β M x τ, σ β.by4.25 wehave sup τ β M x τ, σ β lim inf ε 0 sup τ β M x τ, σ τ ε lim inf ε 0 sup M x τ, σ τε τ τ ε Ṽ = lim inf E x σd2 X τε ] 4.26 ε 0 Using again the fact that Ṽσ D2 = Vσ D2 so that Ṽσ D2 is finely continuous, together with the fact that G X is left-continuous over stopping times, we get, from Lemma 3.5, that Ṽσ D2 X τε G X τε + ε P x -a.s. Hence it follows that E x Ṽ σd2 X β ] lim inf ε 0 lim inf ε 0 Ṽ E x σd2 X σ D2 ] τ ε E x G X σ D2 + ε ] = E x G X β ] τ ε Combining 4.27 with the fact that Ṽ σ D2 X β G X β P x -a.s. we conclude that Ṽ σ D2 X β = G X β P x -a.s But τ σ D 2 = inf{t 0 : X t D σ D 2 } and so we must have that β τ σ D 2 0. This fact together with β τ σ D 2 proves the required result. We now state and prove the main result of this section. Theorem 4.4 Given the setting in Proposition 4.3, we have Vσ D2 x = M σ D2 x τ,σ D2, 4.29 Vτ 2 x = M 2 x τ,σ τ D Proof We shall only prove 4.29 as the proof of 4.30 follows by symmetry. Recall, from Proposition 4., that Vσ D2 x M x τ σ D 2 ε,σ D2 + ε. We show that lim sup ε 0 M x τ σ D 2 ε,σ D2 M x τ σ D 2,σ D2 so that Vσ D2 x lim sup ε 0 M x τ σ D 2 ε, σ D2 + ε M x τ σ D 2,σ D2. For simplicity of exposition let us set τ ε := τ σ D 2 ε and 23

18 584 Appl Math Optim : τ := τ σ D 2.Now M x τ,σ D2 M x τ ε,σ D2 = E x G X τ I τ <σ D2 + H X σd2 I σ D2 <τ + G X σd2 I σ D2 = τ,τ = τ ε G X τε I τ ε <σ D2 H X σd2 I σ D2 <τ ε G X σd2 I σ D2 = τ ε,τ ε = τ + G X τε I τ <σ D2 G X τε I τ <σ D2 + G X τε I τ = σ D2,τ = τ ε G X τε I τ = σ D2,τ = τ ε ] E x G X τ G X τε I τ <σ D2 ] + G X τε I τ <σ D2 I τ ε <σ D2 I τ = σ D2,τ = τ ε + G X τ I τ = σ D2,τ = τ ε G X τε I τ = σ D2,τ = τ ε + H X σd2 I σ D2 <τ I σ D2 <τ ε I τ ε = σ D2,τ ε = τ ] = E x G X τ G X τε I τ <σ D2 + H X σd2 G X τε I τ ε <σ D2 <τ + G X τ I τ = σ D2,τ = τ ε G X τε I τ = σ D2,τ = τ ε ] 2E x G X τ G X τε ] + E x H X σd2 G X τε I τ ε <σ D2 <τ ] 2E x G X τ G X τε ] E x sup H X t t + sup G X t I τ ε <σ D2 <τ ] 4.3 t The first inequality follows from the assumption G H whereas the second equality follows from the fact that I σ D2 <τ I σ D2 <τ ε I τ ε = σ D2,τ ε = τ = I τ <σ D2 I τ ε <σ D2 + I τ = σ D2,τ = τ ε = I τ ε <σ D2 <τ 4.32 The penultimate inequality follows from the fact that G X τ G X τε I τ <σ D2 + G X τ G X τε I τ = σ D2,τ = τ ε 2 G X τ G X τε 4.33 whereas the last inequality follows from the fact that H X σd2 G X τε I τ ε <σ D2 <τ sup H X t t + sup G X t I τ ε <σ D2 <τ 4.34 t Letting ε 0in4.3 we get, from Proposition 4.3, that I τ ε <σ D2 <τ converges to zero uniformly over σ D2. Moreover, by using the quasi-left-continuity property of 23

19 Appl Math Optim : G X we conclude that M x τ,σ D2 lim sup ε 0 M x τ ε,σ D2 and this completes the proof. 5 Partial superharmonic characterisation The purpose of the current section is to utilise the results derived in Sects. 3 and 4 to provide a partial superharmonic characterisation of Vσ D2 resp. Vτ 2 when the stopping time σ D2 resp. τ of player two resp. player one is externally given. This characterisation attempts to extend the semiharmonic characterisation of the value function in zero-sum games see 5] and 52] and can informally be described as follows: Suppose that G 2 in 2.3. Then the second player has no incentive of stopping the process and so 2.4 reduces to the optimal stopping problem V x = sup τ E x G X τ ]. By results in optimal stopping theory V admits a superharmonic characterisation. More precisely V can be identified with the smallest superharmonic function that dominates G see 49, p. 37 Theorem 2.4]. However, if G 2 is finite valued then there might be an incentive for the second player to stop the process. This raises two questions: i is the superharmonic characterisation of Vσ still valid before the second player stops the process, ii does Vσ coincide with H at the time the second player stops the process? If the second player selects the stopping time σ := σ D2 = inf{t 0 : X t D 2 } where D 2 is a closed or finely closed subset of the state space E having a regular boundary D 2 then the above questions can be answered affirmatively and we will say that the value function of player one associated with the stopping time σ D2 admits a partial superharmonic characterisation. To be more precise let us consider the set Sup D 2 G, K ={F : E G, K ]:F is finely continuous, F = H in D 2, F is superharmonic in D2 c } 5. where K is the smallest superharmonic function that dominates H and G, K ] means that G x Fx K x for all x E. Then the value function of player one can be identified with the smallest finely continuous function from Sup D 2 G, K. Likewise, suppose that player one selects the stopping time τ = inf{t 0 : X t D } where D is a closed or finely closed set having a regulary boundary D, and consider the set Sup 2 D G 2, K 2 ={F : E G 2, K 2 ]:F is finely continuous, F = H 2 in D, F is superharmonic in c } 5.2 where K 2 is the smallest superharmonic function that dominates H 2 and G 2, K 2 ] means that G 2 x Fx K 2 x for all x E. Then the value function of player two associated to τ can be identified with the smallest finely continuous function from Sup 2 D G 2, K 2. 23

20 586 Appl Math Optim : H = H 2 v = V 2 τd u = V σd 2 G G 2 D D 2 Fig. The double partial superharmonic characterisation of the value functions in a nonzero-sum optimal stopping game for absorbed Brownian motion The above characterisation of V σ D2 and V 2 τ can be used to study the existence of a Nash equilibrium. Indeed suppose that one can show the existence of finely continuous functions u and v such that: i. u lies between G and K, u is identified with H in the region D 2 ={v = G 2 } and u is the smallest superharmonic function that dominates G in the region {v >G 2 } ii. v lies between G 2 and K 2, v is identified with H 2 in the region D ={u = G } and v is the smallest superharmonic function that dominates G 2 in the region {u > G }. Then under the assumption that D and D 2 have regular boundaries and are either closed or finely closed, u and v coincide with Vσ D2 and Vτ 2 respectively. In this case we shall say that together, Vσ D2 and Vτ 2 admit a double partial superharmonic characterisation see Fig. and can be called the value functions of the game Moreover, the pair τ,σ D2 will form a Nash equilibrium point. To prove the partial superharmonic characterisation of Vσ D2 resp. Vτ 2 we first show that for any stopping time σ resp. τ, Vσ resp. V τ 2 is bounded above by K resp. K 2. For this we define the concept of superharmonic functions. Definition 5. Let C be a measurable subset of E and D = E\C. A measurable function F : E R is said to superharmonic in C if E x FX ρ σd ] Fx for every stopping time ρ and for all x E, where σ D = inf{t 0 : X t D}. F is said to be superharmonic if E x FX ρ ] Fx for every stopping time ρ and for all x E. Lemma 5.2 Let SupH ={F : E R : F H, F is superharmonic} be the collection of superharmonic functions that majorise H. Then for any given stopping time σ we have 23

21 Appl Math Optim : V σ inf F. 5.3 F SupH Similarly let SupH 2 ={F : E R : F H 2, F is superharmonic}. Then for any given stopping time τ we have Vτ 2 inf F. 5.4 F SupH 2 Proof We shall only prove 5.3as5.4 can be proved in exactly the same way. Take any stopping time σ and any F SupH. Then M x τ,σ E x F X τ I τ σ+ F X σ I σ < τ] = E x F X τ σ ] 5.5 for all stopping times τ. The first two inequalities follow from the fact that G H F. F is superharmonic, so E x F Xρ ] F x for any stopping time ρ and for all x E. In particular E x F X τ σ ] F x for every stopping time τ. Thus M x τ,σ F x 5.6 for all stopping times τ and for all x E. Taking the infimum over all F in SupH on the right hand side and the supremum over all τ on the left hand side of 5.6 we get the required result. Theorem 5.3 i. Suppose that K is the smallest superharmonic function that dominates H. Let v G 2 be a finely continuous function, with D 2 ={v = G 2 }, such that u := inf F Sup D2 G,K F where Sup D 2 G, K is the collection of functions given in 5., exists. If the boundary D 2 of D 2 is regular for D 2, then ux = V σ D2 x 5.7 for all x E where σ D2 ={t 0 : X t D 2 }. ii. Similarly suppose that K 2 be the smallest superharmonic function that dominates H 2. Let u G be a finely continuous function, with D ={u = G }, such that v := inf F Sup 2 G 2,K 2 F where Sup2 D G 2, K 2 is the collection of functions defined in 5.2, exists. If the boundary D of D is regular for D, then vx = V 2 τ x 5.8 for all x E where τ ={t 0 : X t D }. Proof We shall only prove i. as ii. follows by symmetry. We first show that u Vσ D2.TakeanyF Sup D 2 G, K. We know that F is superharmonic in D2 c so ] F x E x F X τ σd2 = E x FX τ I τ σ D2 + FX σd2 I σ D2 <τ] ] E x G X τ I τ σ D2 + F X σd2 I σ D2 <τ

22 588 Appl Math Optim : for every stopping time τ and for all x E, where the last inequality follows from the fact that F G. Since v is finely continuous and G 2 is continuous hence finely continuous, D 2 is finely closed and thus by the definition of Sup D 2 G, K, upon using 3.4 since F is finely continuous and H is continuous hence finely continuous it can be seen that FX σd2 = H X σd2. Thus for any F Sup D 2 G, K we have ] F x E x G X τ I τ σ D2 + H X σd2 I σ D2 <τ = M x τ,σd2 5.0 for every stopping time τ and for all x E. Taking the infimum over all F and the supremum over all τ we get that ux Vσ D2 x for all x E. We next show that u Vσ D2. For this it is sufficient to prove that Vσ D2 Sup D 2 G, K as the result will follow by definition of u. Recall, from Theorem 3.7, that Vσ D2 is finely continuous since the boundary D 2 is assumed to be regular for D 2. The fact that Vσ D2 K follows from Lemma 5.2. To show that Vσ D2 is bounded below by G we note that Vσ D2 M x τ, σ D 2 for any τ in particular for τ = 0. Since M x 0,σ D 2 = G x the result follows. To prove that Vσ D2 = H in D 2 we take any x D 2 so that σ D2 = 0. Then by selecting any stopping time τ>0weget that V0 x M x τ, 0 = H x. On the other hand V 0 x suph xp x τ = 0 + H xp x τ > 0 = H x. 5. τ From this we conclude that V0 x = H x. It remains to prove that Vσ D2 is superharmonic in D2 c By the strong Markov property of X we have E x V σd2 X ρ σd2 ] = E x sup M X ρ σd2 τ, σ D2 ] τ = E x ess sup M x ρ σ D 2 + τ θ ρ σd2,ρ σ D2 + σ D2 θ ρ σd2 F ρ σd2 ] τ = E x ess sup τ ρ σ D2 M x τ, σ D 2 F ρ σd2 ] = sup M x τ, σ D 2 τ ρ σ D2 sup M x τ, σ D 2 = Vσ D2 x 5.2 τ for any stopping time ρ where we recall that M x τ, σ F ρ = E x G X τ I τ σ+ H X σ I σ < τ F ρ ] for stopping times τ,σ and ρ. 6 The case of stationary one-dimensional Markov processes In this section we shall assume that the Markov process X takes values in R and is such that LawX P x = LawX x P. We shall also assume that there exist points 23

23 Appl Math Optim : and satisfying < < < such that i. for given D 2 of the form,, the first entry time τ = inf{t 0 : X t } as obtained from Theorem 4.4 is optimal for player one and ii. for given D of the form, ], the first entry time σ B = inf{t 0 : X t } as obtained from Theorem 4.4 is optimal for player two. So in this section we will assume the existence of a pair τ,σ B that is a Nash equilibrium. 6. The principle of double continuous fit We prove that V σ B resp. V 2 τ is continuous at resp.. We shall refer to this result as the principle of double continuous fit. For this we shall further assume that the following time-space conditions hold: as ε, h 0 and X +ε t+h X ε t+h X +ε ρ n X ε ρ n X t P a.s. 6. X t P a.s. 6.2 X ρ P a.s. 6.3 X ρ P a.s. 6.4 whenever ρ n is a sequence of stopping times such that ρ n ρ. Conditions imply that the mapping x X x stochastic flow is continuous at and. Stochastic differential equations driven by Lévy processes, for example, satisfy this property under regularity assumptions on the drift and diffusion coefficients see for example 37, p. 340]. We shall first state and prove the following lemma. Lemma 6. Let σ = inf{t 0 : X t } be the optimal stopping time for player two under P and let σ +ε = inf{t 0 : X t } be the optimal stopping time of player two under P +ε, for given ε>0. Then, if condition 6.3 is satisfied we have that σ +ε σ as ε 0. Similarly, if τ = inf{t 0 : X t } is the optimal stopping time for player two under P B and τ ε = inf{t 0 : X t } is the optimal stopping time of player one under P B ε, for given ε>0, then if condition 6.4 is satisfied we have that τ ε τ as ε 0. Proof We shall only prove that σ +ε σ D 2 as ε 0. The fact that τ ε τ as ε 0 can be proved in the same way. Since LawX P x = LawX x P we have that σ +ε is equally distributed as ˆσ +ε := inf{t 0 : X +ε t } under P whereas σ is equally distributed as ˆσ := inf{t 0 : X t } under P. Now ˆσ +ε γ as ε 0 for some stopping time γ ˆσ. So to prove the result it remains to show that γ ˆσ. By the time-space condition 6.3 we have that X +ε ˆσ +ε B X γ 23

24 590 Appl Math Optim : P-a.s. as ε 0. Now since X +ε ˆσ +ε B for each ε>0 it follows that X γ. But this implies that γ ˆσ and this proves the required result. Proposition 6.2 Suppose that the payoff functions G i, H i for i =, 2 are also assumed to be bounded. Then the value functions Vσ B and Vτ 2 are continuous at and respectively. Proof We shall only prove the result for Vσ B as for V 2 symmetry. To prove this it is sufficient to show that τ the result will follow by lim ε 0 V σb + ε V σ B = because lim ε 0 V σ B ε V σ B = lim ε 0 G ε G = 0 by continuity of G. Since V σ B x G x for all x R and V σ B = G we get that V σ B + ε V σ B G + ε G for every ε>0. So by continuity of G we have that lim inf ε 0 V σ B + ε V σ B 0. We next show that lim sup ε 0 V σ B + ε V σ B 0 so that we get the required result. Given ε>0, let τ +ε time for player one under P +ε. Then we have that τ +ε ˆτ +ε = inf{t 0 : X +ε P +ε we have that ˆτ +ε t = inf{t 0 : X t } be the optimal stopping is equally distributed as } under P. Now from the optimality of τ +ε under Vσ B + ε Vσ B M +ε τ +ε,σ A +ε M τ +ε,σ A = E +ε G X τ +εi τ +ε σ A ] +ε E G X τ +ε + E +ε H X σ +εi σ +ε <τ A ] +ε E H X B σ B = E G X +ε I ˆτ +ε ˆσ A +ε G X I ˆτ +ε + E H X +ε ˆσ +ε B I ˆσ +ε < ˆτ +ε ˆτ +ε H X ˆσ B I ˆσ I τ +ε I σ ˆσ ] < ˆτ +ε ] σ ] <τ +ε ] 6.6 The first expectation in the last expression on the right hand side of 6.6 can be written as E G X +ε I ˆτ +ε ˆτ +ε + G X +ε ˆτ +ε + G X +ε ˆτ +ε = E G X +ε ˆτ +ε 23 I ˆτ +ε I ˆτ +ε ˆσ A +ε B G X ˆσ +ε ˆσ +ε ˆτ +ε G X G X I ˆτ +ε ˆτ +ε ˆτ +ε G X I ˆτ +ε ˆτ +ε A < ˆσ A ] I ˆτ +ε I ˆτ +ε ˆσ A B I ˆτ +ε A < ˆσ A ˆσ A B I ˆτ +ε A =ˆσ A ˆσ A B I ˆτ +ε A > ˆσ A ] 6.7

Optimal Stopping Games for Markov Processes

Optimal Stopping Games for Markov Processes SIAM J. Control Optim. Vol. 47, No. 2, 2008, (684-702) Research Report No. 15, 2006, Probab. Statist. Group Manchester (21 pp) Optimal Stopping Games for Markov Processes E. Ekström & G. Peskir Let X =

More information

Nash Equilibrium in Nonzero-Sum Games of Optimal Stopping for Brownian Motion

Nash Equilibrium in Nonzero-Sum Games of Optimal Stopping for Brownian Motion Nash Equilibrium in Nonzero-Sum Games of Optimal Stopping for Brownian Motion N. Attard This version: 24 March 2016 First version: 30 January 2015 Research Report No. 2, 2015, Probability and Statistics

More information

CONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES

CONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES CONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES IOANNIS KARATZAS Departments of Mathematics and Statistics, 619 Mathematics Building Columbia University, New York, NY 127 ik@math.columbia.edu

More information

Optimal Stopping under Adverse Nonlinear Expectation and Related Games

Optimal Stopping under Adverse Nonlinear Expectation and Related Games Optimal Stopping under Adverse Nonlinear Expectation and Related Games Marcel Nutz Jianfeng Zhang First version: December 7, 2012. This version: July 21, 2014 Abstract We study the existence of optimal

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Optimal Stopping Problems and American Options

Optimal Stopping Problems and American Options Optimal Stopping Problems and American Options Nadia Uys A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Stopping Games in Continuous Time

Stopping Games in Continuous Time Stopping Games in Continuous Time Rida Laraki and Eilon Solan August 11, 2002 Abstract We study two-player zero-sum stopping games in continuous time and infinite horizon. We prove that the value in randomized

More information

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Yipeng Yang * Under the supervision of Dr. Michael Taksar Department of Mathematics University of Missouri-Columbia Oct

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen
