CONNECTIONS BETWEEN BOUNDED-VARIATION CONTROL AND DYNKIN GAMES


IOANNIS KARATZAS
Departments of Mathematics and Statistics, 619 Mathematics Building
Columbia University, New York, NY 10027
ik@math.columbia.edu

HUI WANG
Division of Applied Mathematics
Brown University, Box F
Providence, RI 02912
huiwang@cfm.brown.edu

October 15, 2003

Dedicated to Professor Alain Bensoussan on the occasion of his 60th birthday

Abstract. A general result is obtained for the existence of a saddle-point in a stochastic game of timing, by exploiting its connection with a bounded-variation control problem. Weak compactness arguments prove the existence of an optimal process for the control problem. It is shown that this optimal process generates a pair of stopping times that constitute a saddle-point for the game, using the method of comparing costs at nearby points by switching paths at appropriate random times.

Key Words: Stochastic control, games of stopping, saddle-point, Komlós lemma.

Research supported in part by the National Science Foundation under Grant NSF-DMS-97-32810.

1 Introduction and Summary

In three very important and influential papers, Bensoussan & Friedman [4], [5] and Friedman [11] developed the analytical theory of stochastic differential games with stopping times. Their setting is that of a Markov diffusion process, and of a pair of players, each of whom can choose when to terminate the process. At that time, and depending on who made the decision to stop, one of the players (the "minimizer") pays the other (the "maximizer") a certain random amount. The problem then is for one player to minimize, and for the other to maximize, the expected value of this payoff. Bensoussan and Friedman studied the value and saddle-points of such a game using appropriate partial differential equations, variational inequalities, and free-boundary problems.

Detailed expositions of their approach can be found in their monographs: Bensoussan & Lions [6], pp. 462-493, and Friedman [12]. Along a parallel track, the purely probabilistic approach to such games of timing ("stopping games") was being developed in progressively greater generality, by Dynkin [9], Krylov [19], Neveu [22], Bismut [7], Alario-Nazaret [1], Stettner [24], Alario-Nazaret, Lepeltier & Marchal [2], Lepeltier & Maingueneau [20], Morimoto [21], among others. This theory relied on the martingale approach to the problem of optimal stopping introduced by Snell [23], but now applied in the more challenging setup of two coupled such stopping problems. More recently, this problem has been connected by Cvitanić & Karatzas [8] to the solution of backward stochastic differential equations with upper- and lower-constraints on the state-process (see also Hamadène & Lepeltier [13], [14]), and has received pathwise (Karatzas [15]) and mixed-strategy (Touzi & Vieille [26]) treatments.

This paper presents another approach to the general, non-Markovian stochastic game of timing (Dynkin game), and brings it into contact with a suitable bounded-variation (or "singular") control problem. We show that the game has a value, which coincides with the derivative of the value-function of the auxiliary control problem. An optimal process for this latter problem generates in a simple way a pair of stopping times that constitute a saddle-point for the game of timing; see Theorems 3.1, 3.2. Then a result of Komlós [18] is coupled with weak compactness arguments, to show that such an optimal process indeed exists; cf. Theorem 3.3. The approach is very direct and pathwise. It uses the method of comparing costs at nearby points by switching paths at appropriate random times, initiated by Karatzas & Shreve [16], [17] in the context of optimal-stopping/reflected-follower problems for Brownian motion, and further developed in Baldursson & Karatzas [3].
The connections of singular stochastic control with games of stopping were first noticed by Taksar [25], in a Markovian context and using very different analytical methods.

2 The Two Stochastic Optimization Problems

Consider a complete probability space (Ω, F, P) equipped with a filtration 𝔽 = {F(t)}_{0≤t≤T} which satisfies the usual conditions of right-continuity and augmentation by P-null sets. We denote by A the class of increasing, left-continuous and 𝔽-adapted processes ζ : [0, T] × Ω → [0, ∞) with ζ(0) = 0, and by B the class of processes ξ : [0, T] × Ω → ℝ that can be written in the form

(2.1)  ξ(t) = ξ⁺(t) − ξ⁻(t),  0 ≤ t ≤ T,

for two processes ξ± ∈ A. We assume in fact that the decomposition in (2.1) is minimal, so that the total variation of the function s ↦ ξ(s) on any interval [0, t] is given by

(2.2)  ξ̌(t) = ξ⁺(t) + ξ⁻(t),  0 ≤ t ≤ T,

almost surely. In the control problem of (2.5), (2.6) below, ξ⁺(t) (resp., ξ⁻(t)) represents the total cumulative "push" in the positive (resp., negative) direction exerted up to time t, so that

(2.3)  X(t) := x + ξ(t),  0 ≤ t ≤ T,

represents the state (or "position") at time t, when starting at X(0) = x ∈ ℝ and employing the strategy ξ ∈ B. We shall denote by S the set of all 𝔽-stopping times ρ : Ω → [0, T]. In order to formulate the two optimization problems that will occupy us in this paper, let us introduce:

• A random field H : [0, T] × Ω × ℝ → ℝ, such that (t, ω) ↦ H(t, ω, x) is 𝔽-progressively measurable for every x ∈ ℝ, and x ↦ H(t, ω, x) is convex and of class C¹ for every (t, ω) ∈ [0, T] × Ω.

• Two continuous, 𝔽-adapted processes γ : [0, T] × Ω → (0, ∞) and ν : [0, T] × Ω → (0, ∞).

• A random field G : Ω × ℝ → ℝ, such that G(·, x) is F(T)-measurable for every x ∈ ℝ, and G(ω, ·) is convex and of class C¹ for every ω ∈ Ω.

On these random functions we shall impose throughout the regularity requirements

(2.4)  E[∫₀ᵀ |H_x(t, x)| dt + |G′(x)|] < ∞ for every x ∈ ℝ,  and  E[sup_{0≤t≤T} γ(t) + sup_{0≤t≤T} ν(t)] < ∞.

2.1 A Bounded-Variation Control Problem

Suppose that, when in state X(t) = y ∈ ℝ at time t ∈ [0, T], we incur a running cost H(t, y) per unit of time, as well as a running cost γ(t) (resp., ν(t)) per unit of "fuel" spent to push in the positive (resp., negative) direction. Suppose also that we incur a terminal cost G(y) for being in position X(T) = y at the terminal time t = T. Then we should try to find a strategy ξ ∈ B that attains

(2.5)  V(x) = inf_{ξ∈B} E[J(ξ; x)],

the minimal expected cost of our stochastic control problem, with

(2.6)  J(ξ; x) = ∫₀ᵀ H(t, X(t)) dt + ∫_{[0,T]} γ(t) dξ⁺(t) + ∫_{[0,T]} ν(t) dξ⁻(t) + G(X(T)),

in the notation of (2.1)-(2.3). If there exists a strategy ξ* ∈ B that attains the infimum in (2.5), it is called optimal for the initial condition x.

2.2 A Game of Timing (Dynkin Game)

Consider two players, N and G, each of whom chooses a stopping time (σ and τ, respectively) in S. The game terminates at σ∧τ, that is, as soon as one of the players decides to stop. If player G stops first, he pays N the amount γ(τ). If player N stops first, he pays G the amount ν(σ) (resp., G′(x)) when the game terminates before (resp., at) the end of the time-horizon T; and as long as the game is in progress, N keeps paying G at the rate H_x(t, x) per unit of time. In other words, the total payment from N to G is given by the expression

(2.7)  I(σ, τ; x) = ∫₀^{σ∧τ} H_x(t, x) dt + ν(σ) 1_{{σ<τ}} − γ(τ) 1_{{τ<σ}} + G′(x) 1_{{σ=τ=T}},

a random variable whose expectation N tries to minimize and G to maximize.
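The total payment above can be evaluated pathwise. The following Python sketch is purely illustrative and not part of the paper: it is a discrete-grid reading of the payoff, with sign conventions as we read them off (2.7) — the premium ν enters positively when N stops first, γ negatively when G stops first — and with hypothetical constant data.

```python
def dynkin_payoff(Hx, nu, gamma, Gprime, sigma, tau, T):
    """Discrete-grid analogue of the total payment I(sigma, tau; x) from N to G:
    running payments Hx[t] accrue one per period until min(sigma, tau); then
    N pays nu[sigma] if he stops first, receives gamma[tau] if G stops first,
    and pays Gprime if both wait until the horizon T."""
    stop = min(sigma, tau)
    total = float(sum(Hx[:stop]))
    if sigma < tau:
        total += nu[sigma]       # N stopped first: he pays the premium nu
    elif tau < sigma:
        total -= gamma[tau]      # G stopped first: he pays gamma to N
    elif sigma == tau == T:
        total += Gprime          # nobody stopped before the horizon
    return total

# With constant data Hx = 1, nu = 5, gamma = 2, Gprime = 0.5 on a 4-period grid,
# N stopping at 2 while G waits yields two running payments plus the premium:
payoff = dynkin_payoff([1.0] * 4, [5.0] * 5, [2.0] * 5, 0.5, sigma=2, tau=4, T=4)
```

The convention that a simultaneous stop before T produces only the running term is one of several used in the literature on Dynkin games; the paper's formula (2.7) leaves that case to the indicators shown above.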
We obtain a stochastic game of timing (or "Dynkin game", after Dynkin [9]) with lower- and upper-values

(2.8)  u̲(x) = sup_{τ∈S} inf_{σ∈S} E[I(σ, τ; x)],  ū(x) = inf_{σ∈S} sup_{τ∈S} E[I(σ, τ; x)],

respectively. We say that the game has value u(x), if u̲(x) = ū(x) =: u(x). A pair (σ*, τ*) ∈ S² is called a saddle-point of the game, if

(2.9)  E[I(σ*, τ; x)] ≤ E[I(σ*, τ*; x)] ≤ E[I(σ, τ*; x)]

holds for all σ ∈ S, τ ∈ S.
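In discrete time, the coincidence of the lower and upper values in (2.8) under an ordering condition between the two players' payoffs can be seen by backward induction. The sketch below is our illustration with hypothetical data, not the paper's construction: it computes the value of a deterministic toy Dynkin game in which the continuation value is truncated between the maximizer's lower obstacle and the minimizer's upper obstacle.

```python
def dynkin_value(lower, upper, running, terminal):
    """Backward induction for a deterministic, discrete-time Dynkin game with
    lower[t] <= upper[t]: the maximizer may stop and collect lower[t], the
    minimizer may stop and pay upper[t], and running[t] accrues if both
    continue.  The date-t value is the continuation value truncated between
    the two obstacles; the list of values is returned from date 0 to the end."""
    v = terminal
    out = [v]
    for t in reversed(range(len(running))):
        v = min(upper[t], max(lower[t], running[t] + v))
        out.append(v)
    out.reverse()
    return out

# Constant obstacles -2 <= 3, running payment 1 per period, zero terminal payoff:
values = dynkin_value(lower=[-2.0] * 3, upper=[3.0] * 3, running=[1.0] * 3, terminal=0.0)
```

Because the lower obstacle never exceeds the upper one, the truncation is the same whether the minimizer or the maximizer is imagined to move first; this ordering is what makes sup-inf and inf-sup coincide here, and it is the discrete shadow of condition (3.5) below.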

It is easy to see that the existence of a saddle-point implies that the game has value

(2.10)  u(x) = E[I(σ*, τ*; x)] = u̲(x) = ū(x).

3 Results

The stochastic control problem of subsection 2.1 and the stochastic game of subsection 2.2 turn out to be very closely connected. Under the conditions of Section 2, it can be proved that every optimal strategy for the control problem induces a saddle-point for the game (Theorem 3.1), whose value u is then shown to coincide with the derivative V′ of the value of the control problem (Theorem 3.2). And under certain additional conditions on the random quantities H, G, γ and ν, it can be shown that such an optimal strategy for the control problem indeed exists (Theorem 3.3). Consequently, under the conditions of Theorem 3.3, the Dynkin game of subsection 2.2 has a saddle-point, and this is of the type (3.1) below.

Theorem 3.1. Suppose that the process ξ* ∈ B is optimal for the control problem of subsection 2.1, i.e., attains the infimum in (2.5). Write this process in its minimal decomposition ξ* = ξ⁺* − ξ⁻* as in (2.1), and define the stopping times

(3.1)  σ* = inf{t ∈ [0, T) : ξ⁻*(t) > 0} ∧ T,  τ* = inf{t ∈ [0, T) : ξ⁺*(t) > 0} ∧ T.

Then the pair (σ*, τ*) ∈ S² is a saddle-point for the game of subsection 2.2, whose value is given by

(3.2)  u̲(x) = ū(x) = u(x) = E[I(σ*, τ*; x)].

Theorem 3.2. Under the conditions of Theorem 3.1, the value function V of the control problem (2.5), (2.6) is differentiable, and we have

(3.3)  V′(x) = u(x),  x ∈ ℝ.

Suppose now that, in addition to the conditions in Section 2, the random functions H, G, γ and ν satisfy

(3.4)  E[∫₀ᵀ sup_{x∈ℝ} H⁻(t, x) dt + sup_{x∈ℝ} G⁻(x)] < ∞

(with H⁻, G⁻ the negative parts of H, G), as well as the a.s. conditions

(3.5)  −γ(t) ≤ G′(x) ≤ ν(t),  x ∈ ℝ, 0 ≤ t ≤ T,
(3.6)  γ(t) ≥ κ and ν(t) ≥ κ,  0 ≤ t ≤ T,

for some κ > 0.

Theorem 3.3. Under the conditions (3.4)-(3.6), in addition to those of Section 2, the control problem of (2.5), (2.6) admits an optimal process ξ* ∈ B.

Corollary 3.1. With the assumptions of Theorem 3.3, the Dynkin game of subsection 2.2 has a saddle-point and a value given by (3.1)-(3.2), and the relation (3.3) holds.
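As a discrete illustration of Theorem 3.1 (our sketch, not part of the paper): given a sampled bounded-variation path, one can form its minimal decomposition as in (2.1)-(2.2) increment by increment, and then read off the candidate saddle-point times of (3.1) as the first grid instants at which each one-sided component becomes positive.

```python
def jordan_decomposition(path):
    """Minimal decomposition of a sampled path, as in (2.1)-(2.2): cumulative
    upward moves xi_plus and downward moves xi_minus, whose difference recovers
    path[t] - path[0] and whose sum is the total variation of the path."""
    xi_plus, xi_minus = [0.0], [0.0]
    for prev, curr in zip(path, path[1:]):
        step = curr - prev
        xi_plus.append(xi_plus[-1] + max(step, 0.0))
        xi_minus.append(xi_minus[-1] + max(-step, 0.0))
    return xi_plus, xi_minus

def saddle_times(path, T):
    """Grid analogue of (3.1): sigma* is the first time the downward component
    becomes positive, tau* the first time the upward one does, capped at T."""
    xi_plus, xi_minus = jordan_decomposition(path)
    sigma = next((t for t, v in enumerate(xi_minus) if v > 0), T)
    tau = next((t for t, v in enumerate(xi_plus) if v > 0), T)
    return min(sigma, T), min(tau, T)

# A path that moves up at step 1 and down only at step 3:
sigma_star, tau_star = saddle_times([0.0, 1.0, 1.0, 0.5, 0.5], T=4)
```

On this sample path the maximizer's time τ* = 1 precedes the minimizer's σ* = 3; for a path that never moves down, σ* would equal the horizon T, mirroring the cap "∧ T" in (3.1).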

4 Proofs of Theorems 3.1, 3.2

We begin with a simple observation.

Lemma 4.1. The function V of (2.5) is proper and convex.

Proof: Convexity of V is a consequence of the convexity of the functions H(t, ω, ·) and G(ω, ·), and follows by taking infima over ζ ∈ B, η ∈ B on the right-hand side of the inequality

(4.1)  V(λx₁ + (1−λ)x₂) ≤ E[J(λζ + (1−λ)η; λx₁ + (1−λ)x₂)] ≤ λ E[J(ζ; x₁)] + (1−λ) E[J(η; x₂)],

valid for x₁ ∈ ℝ, x₂ ∈ ℝ, 0 ≤ λ ≤ 1; and (2.4) implies V(x) ≤ E[∫₀ᵀ H(t, x) dt + G(x)] < ∞ for every x ∈ ℝ (the expected cost of the inactive strategy ξ ≡ 0), which gives properness. ∎

As a consequence of convexity, the right- and left-derivatives

(4.2)  D±V(x) = lim_{h→0±} [V(x+h) − V(x)] / h

exist, and we have D⁻V(x) ≤ D⁺V(x), x ∈ ℝ.

Lemma 4.2. Under the assumptions of Theorem 3.1 we have, in the notation of (2.7), (3.1) and (4.2),

(4.3)  D⁺V(x) ≤ E[I(σ, τ*; x)]  for every x ∈ ℝ, σ ∈ S.

Proof: We compare costs at nearby points, using the technique of switching paths at appropriate random times introduced in Karatzas & Shreve [16]. Let us consider the stopping times

(4.4)  τ_ε = inf{t ∈ [0, T) : ξ⁺*(t) ≥ ε} ∧ T,  0 < ε < 1,

and notice that τ_ε ↓ τ* as ε ↓ 0, a.s. Introduce also, for each given σ ∈ S and 0 < ε < 1, the auxiliary process

(4.5)  ξ_ε(t) = −ξ⁻*(t) for 0 ≤ t ≤ σ∧τ_ε,  ξ_ε(t) = ξ*(t) − ε for σ∧τ_ε < t ≤ T,

an element of B. The state-process X_ε = x + ε + ξ_ε corresponds to the strategy of starting at x + ε and following a modification of the optimal strategy ξ* for x, whereby we suppress any movement to the right up to time σ∧τ_ε; after which we jump onto the optimal path X* = x + ξ* for x, and follow it up to time T. The cost associated with this strategy is

(4.6)  J(ξ_ε; x+ε) = ∫₀^{σ∧τ*} H(t, x+ε+ξ*(t)) dt + ∫_{σ∧τ*}^{σ∧τ_ε} H(t, x+ε−ξ⁻*(t)) dt + ∫_{σ∧τ_ε}^{T} H(t, x+ξ⁺*(t)−ξ⁻*(t)) dt
       + G(x+ξ⁺*(T)−ξ⁻*(T)) 1_{{σ∧τ_ε<T}} + G(x+ε+ξ*(T)) 1_{{σ=τ*=T}} + G(x+ε−ξ⁻*(T)) 1_{{τ*<σ=τ_ε=T}}
       + 1_{{τ_ε≤σ}} [γ(τ_ε)(ξ⁺*(τ_ε+)−ε) + ∫_{(τ_ε,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t)]
       + 1_{{τ*≤σ<τ_ε}} [∫_{(σ,T]} γ(t) dξ⁺*(t) + ν(σ)(ε−ξ⁺*(σ+)) + ∫_{[0,T]} ν(t) dξ⁻*(t)]
       + 1_{{σ<τ*}} [∫_{[τ*,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t) + εν(σ)].

In a similar vein, we have the decomposition

(4.7)  J(ξ*; x) = ∫₀^{σ∧τ*} H(t, x+ξ*(t)) dt + ∫_{σ∧τ*}^{σ∧τ_ε} H(t, x+ξ⁺*(t)−ξ⁻*(t)) dt + ∫_{σ∧τ_ε}^{T} H(t, x+ξ⁺*(t)−ξ⁻*(t)) dt
       + G(x+ξ⁺*(T)−ξ⁻*(T)) 1_{{σ∧τ_ε<T}} + G(x+ξ*(T)) 1_{{σ=τ*=T}} + G(x+ξ⁺*(T)−ξ⁻*(T)) 1_{{τ*<σ=τ_ε=T}}
       + 1_{{τ_ε≤σ}} [∫_{[τ*,τ_ε)} γ(t) dξ⁺*(t) + γ(τ_ε)(ξ⁺*(τ_ε+)−ξ⁺*(τ_ε)) + ∫_{(τ_ε,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t)]
       + 1_{{τ*≤σ<τ_ε}} [∫_{[τ*,σ)} γ(t) dξ⁺*(t) + γ(σ)(ξ⁺*(σ+)−ξ⁺*(σ)) + ∫_{(σ,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t)]
       + 1_{{σ<τ*}} [∫_{[τ*,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t)]

for the cost corresponding to the optimal process ξ* ∈ B at position x ∈ ℝ. Comparing (4.7) with (4.6), we see that V(x+ε) − V(x) is dominated by E[J(ξ_ε; x+ε)] − E[J(ξ*; x)], namely

(4.8)  V(x+ε) − V(x) ≤ E[ ∫₀^{σ∧τ*} {H(t, x+ε+ξ*(t)) − H(t, x+ξ*(t))} dt
       + ∫_{σ∧τ*}^{σ∧τ_ε} {H(t, x+ε−ξ⁻*(t)) − H(t, x+ξ⁺*(t)−ξ⁻*(t))} dt
       + {G(x+ε+ξ*(T)) − G(x+ξ*(T))} 1_{{σ=τ*=T}} + {G(x+ε−ξ⁻*(T)) − G(x+ξ⁺*(T)−ξ⁻*(T))} 1_{{τ*<σ=τ_ε=T}}
       − 1_{{τ_ε≤σ}} {γ(τ_ε)(ε−ξ⁺*(τ_ε)) + ∫_{[τ*,τ_ε)} γ(t) dξ⁺*(t)} + εν(σ) 1_{{σ<τ*}}
       + 1_{{τ*≤σ<τ_ε}} {ν(σ)(ε−ξ⁺*(σ+)) − γ(σ)(ξ⁺*(σ+)−ξ⁺*(σ)) − ∫_{[τ*,σ)} γ(t) dξ⁺*(t)} ].

Let us look at all the terms under the expectation on the right-hand side of (4.8), one by one. From the convexity of the functions H(t, ω, ·) and G(ω, ·), we deduce that the first and second terms are dominated by

ε ∫₀^{σ∧τ*} H_x(t, x+ε) dt  and  ∫_{σ∧τ*}^{σ∧τ_ε} (ε − ξ⁺*(t)) H_x(t, x+ε) dt,

while the third and fourth are dominated by

ε G′(x+ε) 1_{{σ=τ*=T}}  and  (ε − ξ⁺*(T)) G′(x+ε) 1_{{τ*<σ=τ_ε=T}},

respectively. We also observe that the fifth term can be written in the form

−ε γ(τ*) 1_{{τ*<σ}} + ε {γ(τ*) 1_{{τ*<σ}} − γ(τ_ε) 1_{{τ_ε≤σ}}} + 1_{{τ_ε≤σ}} {γ(τ_ε) ξ⁺*(τ_ε) − ∫_{[τ*,τ_ε)} γ(t) dξ⁺*(t)}.

We can now put these observations together, and deduce from (4.8) the inequality

(4.9)  V(x+ε) − V(x) ≤ ε [E[I(σ, τ*; x)] + Σ_{j=1}^{7} L_j(ε)],

where

L₁(ε) = E ∫₀^{σ∧τ*} [H_x(t, x+ε) − H_x(t, x)] dt,   L₂(ε) = E ∫_{σ∧τ*}^{σ∧τ_ε} H_x(t, x+ε) dt,
L₃(ε) = E [γ(τ*) 1_{{τ*<σ}} − γ(τ_ε) 1_{{τ_ε≤σ}}],
L₄(ε) = E [(1/ε) {γ(τ_ε) ξ⁺*(τ_ε) − ∫_{[τ*,τ_ε)} γ(t) dξ⁺*(t)} 1_{{τ_ε≤σ}}],
L₅(ε) = E [(1/ε) {ν(σ)(ε−ξ⁺*(σ)) − ∫_{[τ*,σ)} γ(t) dξ⁺*(t)} 1_{{τ*≤σ<τ_ε}}],
L₆(ε) = E [G′(x+ε) 1_{{τ*<σ=τ_ε=T}}],   L₇(ε) = E [G′(x+ε) − G′(x)].

It is not hard to see that lim_{ε↓0} L_j(ε) = 0 for all j = 1, …, 7, thanks to the dominated convergence theorem and the conditions of (2.4). For instance,

|L₃(ε)| ≤ E [|γ(τ_ε) − γ(τ*)| + γ(τ*) 1_{{τ*≤σ<τ_ε}}],
|L₄(ε)| ≤ E [(ξ⁺*(τ_ε)/ε) sup_{τ*≤t<τ_ε} |γ(τ_ε) − γ(t)|] ≤ E [sup_{τ*≤t<τ_ε} |γ(τ_ε) − γ(t)|],
|L₅(ε)| ≤ E [(ν(σ) + max_{τ*≤t≤σ} γ(t)) 1_{{τ*≤σ<τ_ε}}]

all tend to zero as ε ↓ 0; and (4.3) follows. ∎

Lemma 4.3. With the same assumptions and notation as in Lemma 4.2, we have

(4.10)  D⁻V(x) ≥ E[I(σ*, τ; x)]  for every x ∈ ℝ, τ ∈ S.

Sketch of Proof: The situation is completely symmetric to that of Lemma 4.2, so we just sketch the broad outline of the argument. For each given τ ∈ S and 0 < ε < 1, we introduce the analogue

σ_ε = inf{t ∈ [0, T) : ξ⁻*(t) ≥ ε} ∧ T,  0 < ε < 1,

of the stopping time in (4.4), as well as the analogue

ϑ_ε(t) = ξ⁺*(t) for 0 ≤ t ≤ τ∧σ_ε,  ϑ_ε(t) = ξ*(t) + ε for τ∧σ_ε < t ≤ T

of the auxiliary process in (4.5), and note that P(lim_{ε↓0} σ_ε = σ*) = 1. The state-process X_ε = x − ε + ϑ_ε corresponds then to the strategy of starting at x − ε and following a

modification of ξ*, whereby we suppress any movement to the left, up to the time τ∧σ_ε; after which we follow the optimal state-process X* = x + ξ* for x, up to time T. Comparing the cost J(ϑ_ε; x−ε) of this strategy to the cost J(ξ*; x) of the optimal strategy at x, taking expectations, dividing by ε > 0, and then letting ε ↓ 0, we arrive at (4.10) much as we derived (4.3) through the comparisons (4.6)-(4.9). ∎

Proof of Theorems 3.1, 3.2: From (4.2), (4.3) and (4.10), we obtain

D⁻V(x) ≤ D⁺V(x) ≤ E[I(σ*, τ*; x)] ≤ D⁻V(x),  x ∈ ℝ.

This shows that V is differentiable, with V′(x) = E[I(σ*, τ*; x)]; this proves (3.3). Again from (4.3) and (4.10), we conclude now that

E[I(σ*, τ; x)] ≤ V′(x) = E[I(σ*, τ*; x)] ≤ E[I(σ, τ*; x)]  for every σ ∈ S, τ ∈ S,

proving (3.2). ∎

5 Proof of Theorem 3.3

Let {η_n}_{n∈ℕ} ⊆ B be a minimizing sequence of processes for the control problem of (2.5), (2.6), that is, lim_{n→∞} E[J(η_n; x)] = V(x) < ∞. From the conditions (3.4) and (3.6), we have

(5.1)  sup_{n∈ℕ} E[η̌_n(T)] ≤ (1/κ) [sup_{n∈ℕ} E[J(η_n; x)] + E ∫₀ᵀ sup_{x∈ℝ} H⁻(t, x) dt + E sup_{x∈ℝ} G⁻(x)] =: M < ∞.

It follows that sup_{n∈ℕ} E ∫₀ᵀ |η_n(t)| dt ≤ sup_{n∈ℕ} E ∫₀ᵀ η̌_n(t) dt ≤ MT < ∞, so that the sequence of processes {η_n}_{n∈ℕ} is bounded in L¹(λ⊗P), where λ denotes Lebesgue measure on [0, T]. Thus, thanks to a theorem of Komlós [18], there exist a subsequence (relabelled, and still denoted by {η_n}_{n∈ℕ}) and a pair of B([0, T]) ⊗ F(T)-measurable processes ϑ± : [0, T] × Ω → [0, ∞), such that the Cesàro sequences of processes

(5.2)  ξ_n± := (1/n) Σ_{j=1}^{n} η_j± ∈ A,  n ∈ ℕ,  converge λ⊗P-a.e. to ϑ±.

Define ξ_n := ξ_n⁺ − ξ_n⁻ ∈ B, for every n ∈ ℕ. From the convexity of J(·; x) in (2.6), it is clear that {ξ_n}_{n∈ℕ} ⊆ B is also a minimizing sequence, namely

(5.3)  lim_{n→∞} E[J(ξ_n; x)] = V(x) < ∞,

and from (5.2) that the sequence of processes

(5.4)  ξ_n = (1/n) Σ_{j=1}^{n} η_j,  n ∈ ℕ,  converges λ⊗P-a.e. to ϑ := ϑ⁺ − ϑ⁻.

Using Komlós's theorem one last time, we deduce from (5.1) the existence of two F(T)-measurable random variables ζ± : Ω → [0, ∞), such that the sequences of random variables

(5.5)  ξ_n±(T) = (1/n) Σ_{j=1}^{n} η_j±(T),  n ∈ ℕ,  converge P-a.s. to ζ±,

and thus lim_{n→∞} ξ_n(T) = ζ := ζ⁺ − ζ⁻, P-a.s. (possibly after passing to a further, relabelled, subsequence, for which each of (5.2), (5.4) and (5.5) holds).

Proposition 5.1. The processes ϑ± of (5.2) have modifications ξ⁺*, ξ⁻* ∈ A (left-continuous, increasing, 𝔽-adapted, with ξ±*(0) = 0) that satisfy

(5.6)  lim_{n→∞} ξ_n±(t) = ξ±*(t), P-a.s., for λ-a.e. t ∈ [0, T].

In particular, we have the analogue

(5.7)  lim_{n→∞} ξ_n(t) = ξ*(t) = ξ⁺*(t) − ξ⁻*(t), P-a.s.,

of (5.6), for λ-a.e. t ∈ [0, T], where the process ξ* = ξ⁺* − ξ⁻* belongs to B, as well as

(5.8)  ξ±*(T) ≤ ζ±, P-a.s.

This result can be proved in a manner completely analogous to Lemmata 4.5-4.7, pp. 867-869, in Karatzas & Shreve [16].

Proposition 5.2. For the processes ξ±* ∈ A, ξ* ∈ B of Proposition 5.1, we have

(5.9)  V(x) ≥ E [∫₀ᵀ H(t, x+ξ*(t)) dt + ∫_{[0,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t) + G(x+ξ*(T))].

Corollary 5.3. The process ξ* of (5.7) is optimal for the control problem of (2.5), (2.6).

Proof: This is clear if the decomposition ξ* = ξ⁺* − ξ⁻* is minimal, because then (5.9) reads V(x) ≥ E[J(ξ*; x)]. But even if this decomposition is not minimal, the right-hand side of (5.9) dominates E[J(ξ*; x)], and the conclusion follows as before. ∎

Proof of Proposition 5.2: From (5.5), (5.7), the assumption (3.4), and Fatou's Lemma, we obtain

(5.10)  E ∫₀ᵀ H(t, x+ξ*(t)) dt ≤ liminf_{n→∞} E ∫₀ᵀ H(t, x+ξ_n(t)) dt,
(5.11)  E[G(x+ζ)] ≤ liminf_{n→∞} E[G(x+ξ_n(T))].

It can also be shown that we have

(5.12)  E [∫_{[0,T]} γ(t) dξ⁺*(t) + γ(T)(ζ⁺ − ξ⁺*(T))] ≤ liminf_{n→∞} E ∫_{[0,T]} γ(t) dξ_n⁺(t),
(5.13)  E [∫_{[0,T]} ν(t) dξ⁻*(t) + ν(T)(ζ⁻ − ξ⁻*(T))] ≤ liminf_{n→∞} E ∫_{[0,T]} ν(t) dξ_n⁻(t).

Indeed, denote by T̃ the set of full Lebesgue measure in [0, T] on which (5.6) and (5.7) hold. For any partition Π = {t_i}_{i=1}^{m} ⊆ T̃ of [0, T] with 0 = t₀ < t₁ < … < t_m < t_{m+1} = T, we have

Σ_{i=1}^{m} (inf_{t_{i−1}≤t≤t_i} γ(t)) [ξ_n⁺(t_i) − ξ_n⁺(t_{i−1})] + (inf_{t_m≤t≤T} γ(t)) [ξ_n⁺(T) − ξ_n⁺(t_m)] ≤ ∫_{[0,T]} γ(t) dξ_n⁺(t),

P-a.s.; and, by (5.5) and Fatou's lemma, liminf_{n→∞} E ∫_{[0,T]} γ(t) dξ_n⁺(t) dominates the expression

E [Σ_{i=1}^{m} (inf_{t_{i−1}≤t≤t_i} γ(t)) (ξ⁺*(t_i) − ξ⁺*(t_{i−1})) + (inf_{t_m≤t≤T} γ(t)) (ζ⁺ − ξ⁺*(t_m))].

As we let ‖Π‖ := max_{0≤i≤m} (t_{i+1} − t_i) → 0, this expression approaches the left-hand side of (5.12), which is then established; (5.13) is proved similarly.

Now let us put (5.10)-(5.13) together; in conjunction with (5.3), and using repeatedly the inequality liminf_n x_n + limsup_n y_n ≤ limsup_n (x_n + y_n), we obtain

(5.14)  E [∫₀ᵀ H(t, x+ξ*(t)) dt + ∫_{[0,T]} γ(t) dξ⁺*(t) + ∫_{[0,T]} ν(t) dξ⁻*(t) + G(x+ξ*(T))] + E[Θ] ≤ limsup_{n→∞} E[J(ξ_n; x)] = V(x),

where

(5.15)  Θ := G(x+ζ) − G(x+ξ*(T)) + γ(T)(ζ⁺ − ξ⁺*(T)) + ν(T)(ζ⁻ − ξ⁻*(T)).

In order to deduce (5.9) from (5.14), it suffices to show that the random variable of (5.15) satisfies P(Θ ≥ 0) = 1; indeed, (3.5), (5.5), (5.8) and the convexity of G(ω, ·) imply

G(x+ζ) − G(x+ξ*(T)) = G(x+ζ⁺−ζ⁻) − G(x+ξ⁺*(T)−ξ⁻*(T)) ≥ G′(x+ξ*(T)) [(ζ⁺−ξ⁺*(T)) − (ζ⁻−ξ⁻*(T))] ≥ −γ(T)(ζ⁺−ξ⁺*(T)) − ν(T)(ζ⁻−ξ⁻*(T)), a.s.,

whence Θ ≥ 0, a.s. ∎

References

[1] ALARIO-NAZARET, M. (1982) Jeux de Dynkin. Doctoral Thesis, Besançon, France.

[2] ALARIO-NAZARET, M., LEPELTIER, J.P. & MARCHAL, B. (1982) Dynkin games. Lecture Notes in Control & Information Sciences 43, 23-32. Springer-Verlag, Berlin.

[3] BALDURSSON, F. & KARATZAS, I. (1997) Irreversible investment and industry equilibrium. Finance & Stochastics 1, 69-89.

[4] BENSOUSSAN, A. & FRIEDMAN, A. (1974) Nonlinear variational inequalities and differential games with stopping times. J. Functional Analysis 16, 305-352.

[5] BENSOUSSAN, A. & FRIEDMAN, A. (1977) Nonzero-sum stochastic differential games with stopping times and free-boundary problems. Trans. Amer. Math. Society 231, 275-327.

[6] BENSOUSSAN, A. & LIONS, J.L. (1982) Applications of Variational Inequalities in Stochastic Control. North-Holland, Amsterdam and New York.

[7] BISMUT, J.M. (1977) Sur un problème de Dynkin. Z. Wahrsch. verw. Gebiete 39, 31-53.

[8] CVITANIĆ, J. & KARATZAS, I. (1996) Backward stochastic differential equations with reflection and Dynkin games. Annals of Probability 24, 2024-2056.

[9] DYNKIN, E.B. (1969) Game variant of a problem on optimal stopping. Soviet Mathematics Doklady 10, 270-274.

[10] FRIEDMAN, A. (1972) Stochastic differential games. J. Differential Equations 11, 79-108.

[11] FRIEDMAN, A. (1973) Stochastic games and variational inequalities. Arch. Rational Mech. Anal. 51, 321-346.

[12] FRIEDMAN, A. (1976) Stochastic Differential Equations & Applications, Volume 2. Academic Press, New York.

[13] HAMADÈNE, S. & LEPELTIER, J.P. (1995) Zero-sum stochastic differential games and backward equations. Systems & Control Letters 24, 259-263.

[14] HAMADÈNE, S. & LEPELTIER, J.P. (2000) Reflected BSDEs and mixed game problem. Stochastic Processes & their Applications 85, 177-188.

[15] KARATZAS, I. (1996) A pathwise approach to Dynkin games. In Statistics, Probability and Game Theory: Papers in Honor of David Blackwell. IMS Lecture Notes-Monograph Series 30, 115-125.

[16] KARATZAS, I. & SHREVE, S.E. (1984) Connections between optimal stopping and singular stochastic control I. Monotone follower problems. SIAM J. Control & Optim. 22, 856-877.

[17] KARATZAS, I. & SHREVE, S.E. (1985) Connections between optimal stopping and singular stochastic control II. Reflected follower problems. SIAM J. Control & Optim. 23, 433-451.

[18] KOMLÓS, J. (1967) A generalization of a problem of Steinhaus. Acta Math. Acad. Sci. Hungar. 18, 217-229.

[19] KRYLOV, N.V. (1971) Control of Markov processes and W-spaces. Izvestija 5, 233-266.

[20] LEPELTIER, J.P. & MAINGUENEAU, M.A. (1984) Le jeu de Dynkin en théorie générale sans l'hypothèse de Mokobodski. Stochastics 13, 25-44.

[21] MORIMOTO, H. (1984) Dynkin games and martingale methods. Stochastics 13, 213-228.

[22] NEVEU, J. (1975) Discrete-Parameter Martingales. North-Holland, Amsterdam.

[23] SNELL, J.L. (1952) Applications of martingale systems theorems. Trans. Amer. Math. Society 73, 293-312.

[24] STETTNER, L. (1982) Zero-sum Markov games with stopping and impulsive strategies. Applied Mathematics & Optimization 9, 1-24.

[25] TAKSAR, M. (1985) Average optimal singular control and a related stopping problem. Mathematics of Operations Research 10, 63-81.

[26] TOUZI, N. & VIEILLE, N. (2000) Continuous-time Dynkin games with mixed strategies. Preprint.