Nash Equilibrium in Nonzero-Sum Games of Optimal Stopping for Brownian Motion


Nash Equilibrium in Nonzero-Sum Games of Optimal Stopping for Brownian Motion

N. Attard

This version: 24 March 2016
First version: 30 January 2015

Research Report No. 2, 2015, Probability and Statistics Group
School of Mathematics, The University of Manchester

We present solutions to nonzero-sum optimal stopping games for Brownian motion in [0,1] absorbed at either 0 or 1. The approach is based on the double partial superharmonic characterisation of the value functions derived in [1]. In this setting the characterisation of the value functions has a transparent geometrical interpretation: pulling two ropes above two obstacles, where each rope is constrained to pass through certain regions. This extends the analogous result derived by Peskir in [16] and [17] (the semiharmonic characterisation) for the value function in zero-sum games of optimal stopping. To derive the value functions we transform the game into a free-boundary problem. The latter is then solved by making use of the double smooth-fit principle, which was also observed in [1]. Martingale arguments based on the Itô-Tanaka formula are then used to verify that the solution to the free-boundary problem coincides with the value functions of the game, and this establishes Nash equilibrium.

1. Introduction

The purpose of this work is to derive Nash equilibrium in two-player nonzero-sum games of optimal stopping for Brownian motion in [0,1], absorbed at either 0 or 1. For this we shall use the results obtained in [1], in particular the double partial superharmonic characterisation of the value functions of the two players and the principle of double smooth fit. This probabilistic approach for studying the value functions and the corresponding Nash equilibrium is in line with the results derived by Peskir in [16] and [17] for zero-sum games.
In the case of absorbed Brownian motion in [0,1] the results of Peskir show that deriving the value function in zero-sum games is equivalent to pulling a rope between two obstacles (the semiharmonic characterisation), which in turn establishes Nash equilibrium (by pulling a rope between two obstacles we mean finding the shortest path between the graphs of two functions). In nonzero-sum games, under certain assumptions on the payoff functions, we will show that deriving the value functions is equivalent to pulling two ropes above two obstacles, where each rope is constrained to pass through

Mathematics Subject Classification: Primary 91A15, 60G40. Secondary 60J65, 60G44, 60J60, 46N10.
Key words and phrases: nonzero-sum optimal stopping game, Nash equilibrium, Brownian motion, double partial superharmonic characterisation, principle of double smooth fit, Itô-Tanaka formula, optimal stopping, regular diffusions.

certain regions. As in the case of zero-sum games, this geometric description of the value functions establishes Nash equilibrium.

The literature on nonzero-sum games of optimal stopping is mainly concerned with the existence of Nash equilibrium. Initial studies in discrete time date back to Morimoto [12], wherein a fixed point theorem for monotone mappings is used to derive sufficient conditions for the existence of a Nash equilibrium point. Ohtsubo [14] derived equilibrium values via backward induction, and in [15] the same author considers nonzero-sum games in which the smaller gain processes have a monotone structure and gives sufficient conditions for a Nash equilibrium point to exist. Shmaya and Solan [20] proved that every two-player nonzero-sum game in discrete time admits an ε-equilibrium in randomised stopping times. In continuous time, Bensoussan and Friedman [3] showed that, for diffusions, Nash equilibrium exists if there exists a solution to a system of quasi-variational inequalities. However, the regularity and uniqueness of the solution remain open problems. Nagai [13] studies a nonzero-sum stopping game for symmetric Markov processes. A system of quasi-variational inequalities is introduced in terms of Dirichlet forms, and the existence of extremal solutions of this system is discussed. Nash equilibrium is then established from these extremal solutions. Cattiaux and Lepeltier [4] study right processes and prove existence of a quasi-Markov Nash equilibrium point. They follow Nagai's idea but use probabilistic tools rather than the theory of Dirichlet forms. Moreover, they complete Nagai's result (whose construction of the extremal solutions of the quasi-variational inequalities is not complete) and extend it to non-symmetric processes. Huang and Li [10] prove the existence of a Nash equilibrium point for a class of nonzero-sum noncyclic stopping games using the martingale approach.
Laraki and Solan [11] proved that every two-player nonzero-sum Dynkin game in continuous time admits an ε-equilibrium in randomised stopping times, whereas Hamadène and Zhang [9] prove existence of Nash equilibrium points, using the martingale approach, for processes with positive jumps.

The structure of this paper is as follows. In Section 2 we introduce the game and review the double partial superharmonic characterisation of the value functions and the double smooth-fit principle (cf. [1]) when the underlying process is absorbed Brownian motion in [0,1]. In Section 3 we formulate and solve an equivalent free-boundary problem for a certain class of payoff functions. Under additional assumptions on the payoff functions we then show that the solution is also unique. In Section 4 we use martingale arguments based on the Itô-Tanaka formula to verify that the solution to the free-boundary problem coincides with the value functions of the game. In Section 5 we provide a counterexample showing that if the original assumptions imposed on the payoff functions are relaxed, then Nash equilibrium may not be established via the double partial superharmonic characterisation of the value functions. In Section 6 we explain how these results can be extended to one-dimensional regular diffusions, and in Section 7 we conclude by giving some remarks for future research.

2. Double partial superharmonic characterisation of the value functions

Let X be Brownian motion in [0,1], started at x ∈ [0,1] and absorbed at either 0 or 1, and let G_i, H_i : [0,1] → ℝ for i = 1,2 be C² functions such that G_i ≤ H_i. Assume also that G_i(0) = H_i(0) and G_i(1) = H_i(1). Consider the nonzero-sum game of optimal stopping in which player one wants to choose a stopping time τ and player two a stopping time σ such that their total average gains, which are respectively given by

  M^1_x(τ,σ) = E_x[G_1(X_τ) I(τ ≤ σ) + H_1(X_σ) I(σ < τ)]
  M^2_x(τ,σ) = E_x[G_2(X_σ) I(σ < τ) + H_2(X_τ) I(τ ≤ σ)]

are maximised. For a given strategy σ chosen by player two, we define the value function of player one by

(2.1)  V^1_σ(x) = sup_τ M^1_x(τ,σ).

Similarly, for a given strategy τ chosen by player one, we define the value function of player two by

(2.2)  V^2_τ(x) = sup_σ M^2_x(τ,σ).

In this context a saddle point of stopping times is characterised via Nash equilibrium. Formally, a pair of stopping times (τ*, σ*) is a saddle point if

  M^1_x(τ,σ*) ≤ M^1_x(τ*,σ*)  and  M^2_x(τ*,σ) ≤ M^2_x(τ*,σ*)

for all stopping times τ and σ and for all x ∈ [0,1]. Under the above assumptions on G_i and H_i, for i = 1,2, the double partial superharmonic characterisation of the value functions of player one and player two, with the underlying process X introduced above, becomes applicable (cf. [1]). It is well known that superharmonic/subharmonic functions of X are equivalent to concave/convex functions, and that continuity in the fine topology is equivalent to continuity in the familiar Euclidean topology.
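The expected gains M^1_x(τ,σ) above can be illustrated numerically. The following is a minimal Monte Carlo sketch, assuming (as an illustration, not taken from the paper) that both players use simple threshold stopping rules and that the absorbed Brownian motion is approximated by a symmetric random walk; the payoff functions passed in are likewise hypothetical.

```python
import random

def expected_gain_player_one(x, A, B, G1, H1, n_paths=2000, dt=1e-3, seed=7):
    """Monte Carlo estimate of M^1_x(tau, sigma) when player one stops
    on [0, A], player two stops on [B, 1], and X is approximated by a
    symmetric random walk with steps of size sqrt(dt).  Starting inside
    (A, B), a threshold is always reached before absorption at 0 or 1."""
    rng = random.Random(seed)
    step = dt ** 0.5
    total = 0.0
    for _ in range(n_paths):
        X = x
        while True:
            if X <= A:            # tau <= sigma: player one collects G1
                total += G1(X)
                break
            if X >= B:            # sigma < tau: player one collects H1
                total += H1(X)
                break
            X += step if rng.random() < 0.5 else -step
    return total / n_paths

# Sanity check with G1 = H1 = identity: the stopped walk is a martingale,
# so the estimate should be close to the starting point x = 0.5.
est = expected_gain_player_one(0.5, 0.2, 0.8, lambda s: s, lambda s: s)
```

The identity-payoff check exploits optional sampling and is only a consistency test of the simulator, not a statement about the game itself.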
Thus in this setting the double partial superharmonic characterisation of the value functions can be stated rigorously as finding two continuous functions u and v such that

(2.3)  u = inf_{F ∈ Sup^1_v(G_1,K_1)} F  and  v = inf_{F ∈ Sup^2_u(G_2,K_2)} F

where

  Sup^1_v(G_1,K_1) = {F : [0,1] → [G_1,K_1] : F is continuous, F = H_1 in D_2, F is concave in D_2^c}

and

  Sup^2_u(G_2,K_2) = {F : [0,1] → [G_2,K_2] : F is continuous, F = H_2 in D_1, F is concave in D_1^c}

with D_1 = {u = G_1}, D_2 = {v = G_2}, and K_i, for i = 1,2, the smallest concave function majorising H_i. Indeed, if the boundaries ∂D_1 and ∂D_2 of D_1 and D_2 are regular for their respective sets, then the functions u and v solve the optimal stopping game, that is,

(2.4)  u(x) = V^1_{σ*}(x) = sup_τ M^1_x(τ,σ*)  and  v(x) = V^2_{τ*}(x) = sup_σ M^2_x(τ*,σ)

where τ* = inf{t ≥ 0 : X_t ∈ D_1} and σ* = inf{t ≥ 0 : X_t ∈ D_2}.

We initiate this study by showing that if D_1 is of the form [m,n] ∪ {0,1} and D_2 is of the form [r,l] ∪ {0,1}, for some points 0 ≤ m ≤ n ≤ 1 and 0 ≤ r ≤ l ≤ 1, then the functions u and v are contained in the sets Sup^1_v(G_1,K_1) and Sup^2_u(G_2,K_2) respectively. We prove this claim for u, as the result for v follows by symmetry. Clearly u(x) = H_1(x) for all x ∈ D_2, and u is bounded above by K_1. By definition of the infimum we also have u(x) ≥ G_1(x) for all x ∈ [0,1]. Since the infimum of concave functions is concave, it follows that u is concave in D_2^c, and so u is continuous in int(D_2^c), the interior of D_2^c (recall that concave functions defined on open sets are continuous). Continuity of u in D_2 follows from the continuity of H_1. So it remains to show that u is continuous at the boundary of D_2. To prove this we shall follow the line of thought of Ekström and Villeneuve in [7]. Without loss of generality we prove that u is lower semicontinuous at l. Upper semicontinuity of u follows from the fact that u is the infimum of continuous functions. So suppose, for contradiction, that u is not right-lower-semicontinuous at l (note that u is left continuous at l by continuity of H_1). This means that there exists ε̂ > 0 such that lim_{x↓l} u(x) < u(l) − ε̂. For given δ > 0, let L be the line segment joining the points (l, u(l) − ε̂) and (l+δ, u(l+δ)). By continuity of L it follows that there exists y ∈ (l, l+δ) such that L(y) > u(y). By definition of u this means that there exists F ∈ Sup^1_v(G_1,K_1) such that F(y) < L(y). Since F is continuous in [0,1] and concave in (l, l+δ), we have that

  F(l)((l+δ−y)/δ) + L(l+δ)((y−l)/δ) = F(l)((l+δ−y)/δ) + u(l+δ)((y−l)/δ)
    ≤ F(l)((l+δ−y)/δ) + F(l+δ)((y−l)/δ)
    ≤ F(y) < L(y) = (u(l) − ε̂)((l+δ−y)/δ) + L(l+δ)((y−l)/δ).

This implies that F(l) < u(l) − ε̂, which contradicts the fact that F ≥ u. Thus u is right-lower-semicontinuous at l.
3. Free-boundary problem

In this section we formulate a free-boundary problem by making use of the double partial superharmonic characterisation of the value functions. For this we will assume that there exist thresholds a, b with 0 ≤ a < b ≤ 1 such that

(3.1)  G_1''(x) < 0 for x ∈ [0,a)
(3.2)  G_1''(x) = 0 for x = a
(3.3)  G_1''(x) > 0 for x ∈ (a,1]

and

(3.4)  G_2''(x) > 0 for x ∈ [0,b)
(3.5)  G_2''(x) = 0 for x = b

(3.6)  G_2''(x) < 0 for x ∈ (b,1].

In this setting the double partial superharmonic characterisation of the value functions can be explained geometrically as follows. Suppose that two ropes are pulled above two obstacles G_1 and G_2, with their endpoints pulled to the ground. Let D_1 be the region where the first rope touches the first obstacle and let D_2 be the region where the second rope touches the second obstacle. Then on D_2 the first rope is constrained to pass through a certain region (this region corresponds to the value of H_1 on D_2) and so creates a (new) contact region with its obstacle G_1, say D̃_1. Similarly, on D_1 the second rope is also constrained to pass through a certain region (as specified by the value of H_2 on D_1) and thus creates a (new) contact region with its obstacle G_2, say D̃_2. All points of contact are then altered until both ropes touch their respective obstacles smoothly (it may also happen that the new regions coincide with the boundary points 0 and 1, in which case smoothness will break down). However, this must be done in such a way that the point of contact of the first rope with its obstacle G_1 coincides with the point of contact of the second rope with H_2, and vice versa. With this intuitive explanation in hand, we will search for a saddle point (τ*, σ*) of optimal stopping times of the form

(3.7)  τ* = inf{t ≥ 0 : X_t ≤ A*} ∧ ρ_{0,1}
(3.8)  σ* = inf{t ≥ 0 : X_t ≥ B*} ∧ ρ_{0,1}

where 0 ≤ A* < B* ≤ 1 are optimal stopping boundaries that need to be determined and ρ_{0,1} = inf{t ≥ 0 : X_t ∈ {0,1}}. Prior to formulating the free-boundary problem we note that if such optimal stopping boundaries exist then we must have A* ≤ a and B* ≥ b. This is a consequence of the double partial superharmonic characterisation of the value functions (which requires the value function of player one to be concave in (0,B*) and that of player two to be concave in (A*,1)).
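The sign conditions (3.1)-(3.6) can be checked numerically for a concrete pair of obstacles. The following sketch assumes, purely for illustration, cubic obstacles with inflection points a = 0.45 and b = 0.55 (these functions and constants are not taken from the paper).

```python
# Illustrative obstacles (an assumption, not from the paper): cubics
# whose inflection points sit at a = 0.45 and b = 0.55, so that G1''
# changes sign from negative to positive at a, matching (3.1)-(3.3),
# and G2'' from positive to negative at b, matching (3.4)-(3.6).
a, b = 0.45, 0.55

def G1(x): return (x - a) ** 3          # G1''(x) = 6 (x - a)
def G2(x): return -((x - b) ** 3)       # G2''(x) = -6 (x - b)

def d2(f, x, h=1e-4):
    """Central second difference (exact for cubics up to rounding)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

checks = (
    d2(G1, 0.2) < 0 < d2(G1, 0.8),      # (3.1) and (3.3)
    abs(d2(G1, a)) < 1e-6,              # (3.2)
    d2(G2, 0.2) > 0 > d2(G2, 0.8),      # (3.4) and (3.6)
    abs(d2(G2, b)) < 1e-6,              # (3.5)
)
```

The same cubic pair is reused in the later sketches, so that the thresholds a and b keep the same meaning throughout.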
We are now in a position to formulate the free-boundary problem for unknown points 0 ≤ A* ≤ a < b ≤ B* ≤ 1 and unknown functions u, v : [0,1] → ℝ:

(3.9)   u''(x) = 0 and v''(x) = 0 for x ∈ (A*,B*)
(3.10)  u(A*) = G_1(A*) and v(B*) = G_2(B*)
(3.11)  u(B*) = H_1(B*) and v(A*) = H_2(A*)
(3.12)  u(x) = G_1(x) for x ∈ [0,A*) and v(x) = G_2(x) for x ∈ (B*,1]
(3.13)  u(x) > G_1(x) and v(x) > G_2(x) for x ∈ (A*,B*)
(3.14)  u(x) = H_1(x) for x ∈ (B*,1]
(3.15)  v(x) = H_2(x) for x ∈ [0,A*)

By means of straightforward calculations one can show that the solution of the system (3.9)-(3.11) takes the form

(3.16)  u(x) = G_1(A*) + ((H_1(B*) − G_1(A*))/(B* − A*)) (x − A*)
(3.17)  v(x) = G_2(B*) + ((H_2(A*) − G_2(B*))/(A* − B*)) (x − B*)

for all x ∈ (A*,B*). In certain cases (which shall be specified below) the double smooth-fit principle (cf. [1]) will also be satisfied, that is,

(3.18)  u'(A*) = G_1'(A*) and v'(B*) = G_2'(B*).

If (3.16)-(3.17) and (3.18) hold, we get that the optimal boundary points A* and B* must solve the system of nonlinear equations

(3.19)  G_1'(A*)(B* − A*) − H_1(B*) + G_1(A*) = 0
(3.20)  G_2'(B*)(A* − B*) − H_2(A*) + G_2(B*) = 0.

For given A, B ∈ [0,1] we denote the left-hand side expressions in (3.19) and (3.20) by Θ(A,B) and Γ(A,B) respectively. Note that since G_i and H_i, for i = 1,2, are C² functions, Θ and Γ are continuous functions on [0,1] × [0,1]. We next study the existence of points 0 ≤ A* < a < b < B* ≤ 1, and corresponding functions u and v, which solve the free-boundary problem (3.9)-(3.15). For the functions u and v to coincide with the value functions of the game (as shall be seen in the verification theorem in Section 4) we further need to study solutions to the system of equations (3.19)-(3.20). For the latter we shall make use of the following result from convex analysis, whose proof can be found in [2].

Proposition 3.1. Let U be a non-empty convex subset of ℝⁿ and f : U → ℝ a differentiable strictly convex (resp. strictly concave) function. Then

(3.21)  f(x) > (resp. <) f(x̄) + ∇f(x̄)ᵀ(x − x̄)

for each x̄ ∈ int(U) and for all x ∈ U with x ≠ x̄, where ∇f(x̄) is the vector of partial derivatives of f at x̄.

We will also need the following preliminary result.

Lemma 3.2.
i. Let B ∈ [b,1] be given and fixed. Then Θ(A,B) < 0 for all A ∈ [a,1] such that A ≠ B. Similarly, if A ∈ [0,a] is given and fixed, then Γ(A,B) < 0 for all B ∈ [0,b] such that B ≠ A.
ii. Let B ∈ [b,1] be given and fixed. If there exists A^{Θ,B} ∈ [0,a) such that Θ(A^{Θ,B},B) = 0, then A^{Θ,B} is unique. Similarly, suppose that A ∈ [0,a] is given and fixed. If there exists B^{Γ,A} ∈ (b,1] such that Γ(A,B^{Γ,A}) = 0, then B^{Γ,A} is unique.
iii. Suppose that for each B ∈ [b_1,b_2], where b ≤ b_1 < b_2 ≤ 1, there exists a unique A^{Θ,B} ∈ [0,a) such that Θ(A^{Θ,B},B) = 0.
Then there exists a unique continuously differentiable function φ : [b_1,b_2] → [0,a) such that Θ(φ(B),B) = 0 for all B ∈ [b_1,b_2]. Similarly, suppose that for each A ∈ [a_1,a_2], where 0 ≤ a_1 < a_2 ≤ a, there exists a unique B^{Γ,A} ∈ (b,1] such that Γ(A,B^{Γ,A}) = 0. Then there exists a unique continuously differentiable function ψ : [a_1,a_2] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [a_1,a_2].
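The sign and monotonicity claims of Lemma 3.2 can be probed numerically. The sketch below assumes the same illustrative cubic obstacles as before and a hypothetical gap term k·x(1−x) added to obtain H_i ≥ G_i with H_i(0) = G_i(0) and H_i(1) = G_i(1); none of these particular functions appear in the paper.

```python
a, b = 0.45, 0.55
k = 16.0 / 9.0   # size of the gap between G_i and H_i (assumed)

def G1(x): return (x - a) ** 3
def G2(x): return -((x - b) ** 3)
def H1(x): return G1(x) + k * x * (1.0 - x)
def H2(x): return G2(x) + k * x * (1.0 - x)
def dG1(x): return 3.0 * (x - a) ** 2
def dG2(x): return -3.0 * (x - b) ** 2

# Left-hand sides of (3.19) and (3.20).
def Theta(A, B): return dG1(A) * (B - A) - H1(B) + G1(A)
def Gamma(A, B): return dG2(B) * (A - B) - H2(A) + G2(B)

# Lemma 3.2 (i): Theta(A, B) < 0 once A lies in [a, 1].
signs = [Theta(A, 0.9) < 0 for A in (0.45, 0.6, 0.8, 1.0)]

# Monotonicity behind Lemma 3.2 (ii): A -> Theta(A, B) is strictly
# decreasing on [0, a), and B -> Gamma(A, B) strictly increasing on (b, 1].
dec = Theta(0.0, 0.9) > Theta(0.2, 0.9) > Theta(0.4, 0.9)
inc = Gamma(0.1, 0.6) < Gamma(0.1, 0.8) < Gamma(0.1, 1.0)
```

The monotonicity follows from Θ_A(A,B) = G_1''(A)(B − A) and Γ_B(A,B) = G_2''(B)(A − B), whose signs are fixed by (3.1)-(3.6); the sampled values simply confirm this for the assumed example.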

Proof. i. We only prove that Θ(A,B) < 0 for all A ∈ [a,1], as for Γ the result follows by symmetry. Consider first the case A ∈ (a,1). Since G_1 is strictly convex in (a,1], we see from Proposition 3.1 that G_1(B) > G_1(A) + G_1'(A)(B − A). From this we get that

(3.22)  Θ(A,B) = G_1'(A)(B − A) − H_1(B) + G_1(A)
          ≤ G_1'(A)(B − A) − G_1(B) + G_1(A)
          < G_1'(A)(B − A) − G_1(A) − G_1'(A)(B − A) + G_1(A) = 0

for A ≠ B, where the first inequality follows from the fact that G_1 ≤ H_1. Now suppose that A = a. From (3.22) we have that Θ(a+ε,B) < 0 for any ε > 0 sufficiently small. Since Θ is continuous on [0,1] × [0,1] and the mapping A ↦ Θ(A,B) is increasing in (a,a+ε), it follows that Θ(a,B) < 0. When A = 1 we get, from Taylor's theorem with the mean value form of the remainder, that

  G_1(B) = G_1(1) + G_1'(1)(B − 1) + (G_1''(z)/2)(B − 1)²

where z ∈ (B,1). Since G_1''(z) > 0 it follows that G_1(B) > G_1(1) + G_1'(1)(B − 1). From this we can deduce, in a similar way to (3.22), that Θ(1,B) < 0, and this proves the required result.

ii. This follows from the fact that for each B ∈ [b,1] the mapping A ↦ Θ(A,B) is strictly decreasing in [0,a), whereas for each A ∈ [0,a] the mapping B ↦ Γ(A,B) is strictly increasing in (b,1].

iii. The existence and uniqueness of φ and ψ follow from the implicit function theorem (note that for a given B ∈ [b_1,b_2] we have Θ_A(A,B) ≠ 0 for all A ∈ [0,a), and similarly for a given A ∈ [a_1,a_2] we have Γ_B(A,B) ≠ 0 for all B ∈ (b,1]).

We are now in a position to determine the points A* and B* in the free-boundary problem (3.9)-(3.15) by studying the system of equations (3.19)-(3.20). For this we shall consider only the case in which there exists a unique (cf. Lemma 3.2 (ii)) A^{Θ,1} ∈ [0,a) such that Θ(A^{Θ,1},1) = 0 and a B^{Θ,0} ∈ [b,1] such that Θ(0,B^{Θ,0}) = 0. All other cases can be dealt with in the same way. In particular, one of the following cases will occur: (i.) either (3.19)-(3.20) has a solution in [0,a) × (b,1], or (ii.) Θ(A*,B*) = 0 and Γ(A*,B*) < 0, where A* ∈ [0,a) and B* = 1, or (iii.)
Θ(A,B ) < 0 and Γ(A,B ) = 1 with A = 0 and B (b,1], or (iv.) Θ(A,B ) < 0 and Γ(A,B ) < 0 with A = 0 and B = 1. In Section 4. we shall see that whenever Θ(A,B ) < 0 and/or Γ(A,B ) < 0, condition (3.13) in the free-boundary problem will always be satisfied. To this end let us introduce the following notation: (I.) If there exists at least one A Γ,1 [0,a] such that Γ(A Γ,1,1) = 0 we will set (3.23) a Γ,1 min = min{aγ,1 : Γ(A Γ,1,1) = 0}. 7

Moreover, we will assign

(3.24)  ã^{Γ,1} = max{A^{Γ,1} : A^{Γ,1} ≤ A^{Θ,1}}  and  â^{Γ,1} = min{A^{Γ,1} : A^{Γ,1} ≥ A^{Θ,1}}

whenever the sets {A^{Γ,1} : A^{Γ,1} ≤ A^{Θ,1}} and {A^{Γ,1} : A^{Γ,1} ≥ A^{Θ,1}} are nonempty. If on the other hand {A^{Γ,1} : A^{Γ,1} ≤ A^{Θ,1}} = ∅ we will assign ã^{Γ,1} = 0, whereas if {A^{Γ,1} : A^{Γ,1} ≥ A^{Θ,1}} = ∅ we shall set â^{Γ,1} = a.

(II.) We shall assign

(3.25)  b^{Θ,0}_max = max{B^{Θ,0} : Θ(0,B^{Θ,0}) = 0}.

If, in addition, there exists a (unique) B^{Γ,0} ∈ (b,1] such that Γ(0,B^{Γ,0}) = 0, we set

(3.26)  b̃^{Θ,0} = max{B^{Θ,0} : B^{Θ,0} ≤ B^{Γ,0}}  and  b̂^{Θ,0} = min{B^{Θ,0} : B^{Θ,0} ≥ B^{Γ,0}}

whenever the sets {B^{Θ,0} : B^{Θ,0} ≤ B^{Γ,0}} and {B^{Θ,0} : B^{Θ,0} ≥ B^{Γ,0}} are nonempty. In the case {B^{Θ,0} : B^{Θ,0} ≤ B^{Γ,0}} = ∅ we assign b̃^{Θ,0} = b, whereas if {B^{Θ,0} : B^{Θ,0} ≥ B^{Γ,0}} = ∅ we set b̂^{Θ,0} = 1.

We now explain how the points A* and B* are constructed, by first considering the case when Γ(0,B^{Γ,0}) = 0 for some unique B^{Γ,0} ∈ (b,1] and then considering the case when such a B^{Γ,0} does not exist.

(1°). Assume that there exists a unique (cf. Lemma 3.2 (ii)) B^{Γ,0} ∈ (b,1] satisfying Γ(0,B^{Γ,0}) = 0, so that Γ(0,B) ≠ 0 for all B ∈ (B^{Γ,0},1] (cf. Lemma 3.2 (i)). Let b̃^{Θ,0} and b̂^{Θ,0} be defined as in point (II.) above, and consider first the case B^{Γ,0} = b̃^{Θ,0}. It is clear that A* = 0 and B* = B^{Γ,0} solve the system of equations (3.19)-(3.20). If Θ(0,B^{Γ,0}) < 0 we shall set A* = 0 and B* = B^{Γ,0} in the free-boundary problem (3.9)-(3.15). Now suppose that Θ(0,B^{Γ,0}) > 0. Since B^{Γ,0} ∈ (b̃^{Θ,0}, b̂^{Θ,0}) we have, by definition of b̃^{Θ,0} and b̂^{Θ,0}, that Θ(0,B) > 0 for all B ∈ (b̃^{Θ,0}, b̂^{Θ,0}). Moreover, from Lemma 3.2 (iii) we see that there exists a unique continuously differentiable function φ : [b̃^{Θ,0}, b̂^{Θ,0}] → [0,a) such that Θ(φ(B),B) = 0 for all B ∈ [b̃^{Θ,0}, b̂^{Θ,0}].

i. Suppose that Γ(A,1) > 0 for all A ∈ [0,a]. Again from Lemma 3.2 we have the existence of a unique continuously differentiable function ψ : [0,a] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [0,a].
Since B^{Γ,0} ∈ (b̃^{Θ,0}, b̂^{Θ,0}), it follows that the curves B ↦ φ(B) and A ↦ ψ(A) must intersect in [0,a) × [b̃^{Θ,0}, b̂^{Θ,0}]. From this we conclude that there exists (A*,B*) ∈ [0,a) × [b̃^{Θ,0}, b̂^{Θ,0}] solving (3.19)-(3.20).

ii. Suppose that there exists at least one A^{Γ,1} ∈ [0,a] such that Γ(A^{Γ,1},1) = 0. Let a^{Γ,1}_min be defined as in (3.23). Since B^{Γ,0} ∈ (b̃^{Θ,0}, b̂^{Θ,0}), we have that a^{Γ,1}_min > 0, and so Γ(A,1) > 0 for all A ∈ [0, a^{Γ,1}_min). Again by using Lemma 3.2 we see that there exists a unique continuously differentiable function ψ : [0, a^{Γ,1}_min] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [0, a^{Γ,1}_min]. If either b̂^{Θ,0} < 1, or b̂^{Θ,0} = 1 and Θ(0, b̂^{Θ,0}) = 0, the curves A ↦ ψ(A) and B ↦ φ(B) intersect in [0, a^{Γ,1}_min] × [b̃^{Θ,0}, b̂^{Θ,0}]

and hence we conclude that there exists (A*,B*) ∈ [0, a^{Γ,1}_min] × [b̃^{Θ,0}, b̂^{Θ,0}) such that Θ(A*,B*) = Γ(A*,B*) = 0. Now suppose that b̂^{Θ,0} = 1 and Θ(0, b̂^{Θ,0}) > 0 (note that Θ(0, b̂^{Θ,0}) cannot be negative under the assumption that Θ(A^{Θ,1},1) = 0). If a^{Γ,1}_min ≥ A^{Θ,1} then the curves A ↦ ψ(A) and B ↦ φ(B) intersect in [0, a^{Γ,1}_min] × [b̃^{Θ,0}, b̂^{Θ,0}), and so we conclude that there exists (A*,B*) ∈ [0, a^{Γ,1}_min] × [b̃^{Θ,0}, b̂^{Θ,0}) solving (3.19)-(3.20). If a^{Γ,1}_min < A^{Θ,1} we shall consider three cases. In the case Γ(A^{Θ,1},1) < 0 we shall set A* = A^{Θ,1} and B* = 1 in the free-boundary problem (3.9)-(3.15). If Γ(A^{Θ,1},1) = 0 then the pair (A^{Θ,1},B*) with B* = 1 solves (3.19)-(3.20). Finally, suppose that Γ(A^{Θ,1},1) > 0, and let ã^{Γ,1} and â^{Γ,1} be defined as in point (I.) above. Then we see (cf. Lemma 3.2) that there exists a unique continuously differentiable function ψ : [ã^{Γ,1}, â^{Γ,1}] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [ã^{Γ,1}, â^{Γ,1}]. Since A^{Θ,1} ∈ (ã^{Γ,1}, â^{Γ,1}), it follows that the curves B ↦ φ(B) and A ↦ ψ(A) intersect in [ã^{Γ,1}, â^{Γ,1}] × [b̃^{Θ,0}, b̂^{Θ,0}), and hence we conclude that there exists (A*,B*) ∈ [ã^{Γ,1}, â^{Γ,1}] × [b̃^{Θ,0}, b̂^{Θ,0}) such that Θ(A*,B*) = Γ(A*,B*) = 0.

It remains to consider the case B^{Γ,0} = b̂^{Θ,0}. If b̂^{Θ,0} < 1, or b̂^{Θ,0} = 1 and Θ(0, b̂^{Θ,0}) = 0, we can set A* = 0 and B* = B^{Γ,0} in the free-boundary problem (3.9)-(3.15), and again the system of equations (3.19)-(3.20) is satisfied at A* and B*. If b̂^{Θ,0} = 1 and Θ(0, b̂^{Θ,0}) > 0, then upon using Lemma 3.2 again we get the existence and uniqueness of a continuously differentiable function φ : [b̃^{Θ,0},1] → [0,a) such that Θ(φ(B),B) = 0 for all B ∈ [b̃^{Θ,0},1]. Similarly to part 1 (ii) above, we shall consider three different cases. If Γ(A^{Θ,1},1) < 0 we set A* = A^{Θ,1} and B* = 1. If on the other hand Γ(A^{Θ,1},1) = 0, we have that (3.19)-(3.20) are satisfied at A* = A^{Θ,1} and B* = 1. Finally, suppose that Γ(A^{Θ,1},1) > 0.
Again we see that there exists a unique continuously differentiable function ψ : [ã^{Γ,1}, â^{Γ,1}] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [ã^{Γ,1}, â^{Γ,1}]. Moreover, it is easily seen that the curves B ↦ φ(B) and A ↦ ψ(A) intersect in [ã^{Γ,1}, â^{Γ,1}] × [b̃^{Θ,0},1], and so it follows that there exists (A*,B*) ∈ [ã^{Γ,1}, â^{Γ,1}] × [b̃^{Θ,0},1] satisfying (3.19)-(3.20).

(2°). Let us now assume that Γ(0,B) < 0 for all B ∈ [b,1]. If there exists no A^{Γ,1} such that Γ(A^{Γ,1},1) = 0, then Γ(A,B) < 0 in [0,a] × [b,1] (cf. Lemma 3.2 (i)). In this case we set A* = A^{Θ,1} and B* = 1 in the free-boundary problem. Suppose on the other hand that there exists such an A^{Γ,1}. If Γ(A^{Θ,1},1) ≤ 0 then we shall set A* = A^{Θ,1} and B* = 1. If on the other hand Γ(A^{Θ,1},1) > 0, then there exists a unique continuously differentiable function ψ : [ã^{Γ,1}, â^{Γ,1}] → (b,1] such that Γ(A,ψ(A)) = 0 for all A ∈ [ã^{Γ,1}, â^{Γ,1}]. Let b^{Θ,0}_max be defined as in (3.25). Again by using Lemma 3.2 we see that there exists a unique continuously differentiable function φ : [b^{Θ,0}_max,1] → [0,a) such that Θ(φ(B),B) = 0 for all B ∈ [b^{Θ,0}_max,1]. The fact that A^{Θ,1} ∈ (ã^{Γ,1}, â^{Γ,1}) implies that the curves B ↦ φ(B) and A ↦ ψ(A) intersect in [ã^{Γ,1}, â^{Γ,1}] × [b^{Θ,0}_max,1], and hence we conclude that there exists (A*,B*) ∈ [ã^{Γ,1}, â^{Γ,1}] × [b^{Θ,0}_max,1] solving (3.19)-(3.20).

Uniqueness of the solution to the free-boundary problem

Having proved that there exists a solution to the free-boundary problem, we now consider some special cases in which the solution to the system of equations (3.19)-(3.20) is unique. For

simplicity we shall only consider the case when there exist unique continuously differentiable functions φ : [b,1] → [0,a) and ψ : [0,a] → (b,1] satisfying Θ(φ(B),B) = 0 for all B ∈ [b,1] and Γ(A,ψ(A)) = 0 for all A ∈ [0,a], and such that the curves B ↦ φ(B) and A ↦ ψ(A) intersect in [0,a) × (b,1]. Let A_Θ denote the range of the function φ. By continuity of φ it follows that A_Θ is a closed interval in [0,a). Similarly, if we set A_Γ to be the range of the function ψ, then by continuity of ψ it follows that A_Γ is a closed interval in (b,1].

Proposition 3.3. Suppose that either
i. H_1'(B) > G_1'(A) and H_2'(A) > G_2'(B) for all (A,B) ∈ A_Θ × A_Γ, or
ii. H_1'(B) < G_1'(A) and H_2'(A) < G_2'(B) for all (A,B) ∈ A_Θ × A_Γ.
Then the solution to (3.19)-(3.20) is unique.

Proof. We shall only prove (i.), as for (ii.) the result follows analogously. For this we note that for any A ∈ A_Θ given and fixed, Θ_B(A,B) < 0 for all B ∈ A_Γ, and so the mapping B ↦ Θ(A,B) is decreasing in A_Γ. Similarly, for B ∈ A_Γ given and fixed, Γ_A(A,B) < 0 for all A ∈ A_Θ, and so the mapping A ↦ Γ(A,B) is decreasing in A_Θ. Suppose, for contradiction, that there exist two pairs (A_1,B_1) and (A_2,B_2) in A_Θ × A_Γ, with (A_1,B_1) ≠ (A_2,B_2), which solve (3.19)-(3.20). Suppose first that A_1 < A_2. If B_1 ≤ B_2 we have that

  0 = Θ(A_1,B_1) > Θ(A_2,B_1) ≥ Θ(A_2,B_2) = 0

where the first inequality follows from the fact that for B ∈ [b,1] the mapping A ↦ Θ(A,B) is decreasing in [0,a) (cf. Lemma 3.2 (i)). So we must have B_1 > B_2 whenever A_1 < A_2. But in this case we get that

  0 = Γ(A_1,B_1) > Γ(A_2,B_1) > Γ(A_2,B_2) = 0.

The second inequality follows from the fact that the mapping B ↦ Γ(A,B) is increasing for any given A ∈ [0,a] (cf. Lemma 3.2 (i)). From this it follows that A_1 ≥ A_2. By symmetry one can see that A_1 > A_2 is not possible either, and so uniqueness of A* and B* follows.

Proposition 3.4. Suppose that H_1'(B) > G_1'(A) and H_2'(A) < G_2'(B) for all (A,B) ∈ A_Θ × A_Γ.
Then, if G_1'' is increasing in A_Θ, G_2'' is decreasing in A_Γ, H_1 is concave in A_Γ and H_2 is concave in A_Θ, the solution to the system of equations (3.19)-(3.20) is unique.

Prior to proving Proposition 3.4 we need the following simple fact from convex analysis.

Lemma 3.5. Let f, g be differentiable functions on some closed interval [l,m]. Suppose that there exists A ∈ [l,m) such that f(A) = g(A). If f is convex, g is concave and f(m) < g(m), then there exists no other point B ∈ [l,m) such that f(B) = g(B).

Proof. We first show that f(B) < g(B) for any B ∈ (A,m). For this, consider the line L_1(x) joining the points (A,g(A)) and (m,g(m)), and the line L_2(x) joining the points (A,f(A)) and (m,f(m)). By concavity of g and convexity of f we get that g(B) ≥ L_1(B) > L_2(B) ≥ f(B). We next show that f(B) > g(B) for any B ∈ [l,A). For this we note, by convexity of f and concavity of g (recall Proposition 3.1), that

(3.27)  f'(A) ≤ (f(m) − f(A))/(m − A) < (g(m) − g(A))/(m − A) ≤ g'(A).

Again by convexity of f, concavity of g and Proposition 3.1, we have that

  f(B) ≥ f(A) + f'(A)(B − A) = g(A) + f'(A)(B − A)
       ≥ g(B) − g'(A)(B − A) + f'(A)(B − A) = g(B) + (f'(A) − g'(A))(B − A) > g(B)

for all B ∈ [l,A), where the last inequality follows from (3.27).

Proof of Proposition 3.4. Since the functions φ and ψ are continuously differentiable, we can differentiate both sides of the equations Θ(φ(B),B) = 0 and Γ(A,ψ(A)) = 0 and rearrange terms to get

(3.28)  φ'(B) = −Θ_B(φ(B),B)/Θ_A(φ(B),B) = −(G_1'(φ(B)) − H_1'(B))/(G_1''(φ(B))(B − φ(B))) < 0
(3.29)  ψ'(A) = −Γ_A(A,ψ(A))/Γ_B(A,ψ(A)) = −(G_2'(ψ(A)) − H_2'(A))/(G_2''(ψ(A))(A − ψ(A))) < 0.

The inequalities follow from the concavity/convexity properties of G_1 and G_2 and from the facts that G_1'(φ(B)) < H_1'(B) and G_2'(ψ(A)) > H_2'(A). From this we conclude that φ and ψ are decreasing on A_Γ and A_Θ respectively. Take any B_1, B_2 ∈ A_Γ such that B_1 < B_2. From the monotonicity of φ, together with the facts that G_1'' < 0 and G_1'' is monotonically increasing on A_Θ, and that B_1 − φ(B_1) < B_2 − φ(B_2), we have that

(3.30)  −1/(G_1''(φ(B_1))(B_1 − φ(B_1))) > −1/(G_1''(φ(B_2))(B_2 − φ(B_2))) > 0.

Using again the concavity of G_1 on A_Θ and that of H_1 on A_Γ, we get that

(3.31)  G_1'(φ(B_1)) − H_1'(B_1) < G_1'(φ(B_2)) − H_1'(B_2) < 0

where the last inequality follows by recalling that G_1'(A) < H_1'(B) for all (A,B) ∈ A_Θ × A_Γ. Combining (3.30) and (3.31) we see that

(3.32)  φ'(B_1) = −(G_1'(φ(B_1)) − H_1'(B_1))/(G_1''(φ(B_1))(B_1 − φ(B_1)))
              < −(G_1'(φ(B_2)) − H_1'(B_2))/(G_1''(φ(B_2))(B_2 − φ(B_2))) = φ'(B_2)

from which the strict convexity of φ on A_Γ follows. Analogously, one can show that ψ is strictly concave on A_Θ. Since φ is continuously differentiable and φ' < 0 in [b,1], it follows that the inverse function φ⁻¹ : A_Θ → [b,1] is a decreasing continuously differentiable function. Moreover, using the fact that the inverse of a convex decreasing function is also convex, we deduce that φ⁻¹ is convex on A_Θ.
Since A_Θ is a closed interval, we can use Lemma 3.5 to deduce that the functions φ⁻¹ and ψ intersect only once on A_Θ, and so we conclude that there exists only one point (A*,B*) ∈ A_Θ × A_Γ which solves the system of equations (3.19)-(3.20).
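The monotonicity established in Lemma 3.2 also suggests a simple numerical scheme for (3.19)-(3.20): alternate scalar bisections in A and B. The sketch below is a minimal illustration under assumed payoffs; the cubic obstacles and the gap coefficient k = 16/9 are reverse-engineered (an assumption, not from the paper) so that (A*, B*) = (0.1, 0.9) solves the system exactly.

```python
a, b = 0.45, 0.55
k = 16.0 / 9.0   # chosen so that (A*, B*) = (0.1, 0.9) solves (3.19)-(3.20)

def G1(x): return (x - a) ** 3
def G2(x): return -((x - b) ** 3)
def H1(x): return G1(x) + k * x * (1.0 - x)
def H2(x): return G2(x) + k * x * (1.0 - x)

def Theta(A, B): return 3.0 * (A - a) ** 2 * (B - A) - H1(B) + G1(A)
def Gamma(A, B): return -3.0 * (B - b) ** 2 * (A - B) - H2(A) + G2(B)

def bisect(f, lo, hi, n=60):
    """Root of f on [lo, hi]; f(lo) and f(hi) must have opposite signs."""
    flo = f(lo)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A -> Theta(., B) is monotone on [0, a) and B -> Gamma(A, .) is monotone
# on (b, 1] (Lemma 3.2), so each half-step reduces to a bisection.
A, B = 0.0, 0.95
for _ in range(40):
    A = bisect(lambda s: Theta(s, B), 0.0, a)
    B = bisect(lambda s: Gamma(A, s), b, 1.0)
```

For this example the alternating scheme contracts towards the built-in solution; in general, existence of sign changes for the two bisections has to be checked case by case, exactly as in the construction of A* and B* above.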

4. Verification theorem

We initiate this section by showing that if there exist 0 ≤ A* < a < b < B* ≤ 1 solving (3.19)-(3.20), then the functions u and v in the free-boundary problem (3.9)-(3.15) coincide with the value functions of the nonzero-sum game (2.1)-(2.2).

Theorem 4.1. Let X be Brownian motion in [0,1], started at x ∈ [0,1] and absorbed at either 0 or 1. Suppose that G_i, H_i, for i = 1,2, are C² functions on [0,1] such that G_i ≤ H_i. Assume also that G_i(0) = H_i(0) and G_i(1) = H_i(1). If G_1, G_2 satisfy assumptions (3.1)-(3.6), then the functions

(4.1)  u(x) = { G_1(x) if 0 ≤ x ≤ A*;  u*(x;A*,B*) if A* < x < B*;  H_1(x) if B* ≤ x ≤ 1 }

and

(4.2)  v(x) = { H_2(x) if 0 ≤ x ≤ A*;  v*(x;A*,B*) if A* < x < B*;  G_2(x) if B* ≤ x ≤ 1 }

where u*(x;A*,B*) takes the form (3.16) and v*(x;A*,B*) is given by (3.17), coincide with the value functions V^1_{σ*}(x) = sup_τ M^1_x(τ,σ*) and V^2_{τ*}(x) = sup_σ M^2_x(τ*,σ) respectively, where τ* = inf{t ≥ 0 : X_t ∈ [0,A*]} ∧ ρ_{0,1} and σ* = inf{t ≥ 0 : X_t ∈ [B*,1]} ∧ ρ_{0,1}, with the optimal stopping boundaries A* and B* solving the system of nonlinear equations (3.19)-(3.20).

Proof. We first show that V^1_{σ*}(x) ≤ u(x) for all x ∈ [0,1]. Since G_1, H_1 and u are C¹ functions on [0,1], it follows that u is absolutely continuous on [0,1] and that u' (which exists a.e.) is of bounded variation. But this implies that u can be written as the difference of two convex functions. So we can apply the Itô-Tanaka formula (cf. [21]) to u(X_t) to get that

  u(X_t) = u(x) + ∫_0^t u'(X_s) dX_s + (1/2) ∫_0^1 l^x_t du'(x)
         = u(x) + ∫_0^t u'(X_s) dX_s + (1/2) ∫_0^1 l^x_t u''(x) I(x ≠ A*, x ≠ B*) dx + (1/2) l^{B*}_t (u'_+(B*) − u'_−(B*))
         = u(x) + M_t + (1/2) ∫_0^t [ G_1''(X_s) I(0 ≤ X_s < A*) + 0 · I(A* < X_s < B*) + H_1''(X_s) I(B* < X_s ≤ 1) ] ds

(4.3)      + (1/2) l^{B*}_t (H_1'(B*) − (H_1(B*) − G_1(A*))/(B* − A*))

where (l^{B*}_t)_{t≥0} is the local time of X at the point B*, defined by

  l^{B*}_t = P_x-lim_{ε↓0} (1/(2ε)) ∫_0^t I(B* − ε < X_s < B* + ε) ds,

and (M_t)_{t≥0} is a local martingale, given by M_t = ∫_0^t u'(X_s) dX_s. The third equality follows from the occupation time formula (cf. [8]) together with the definition of u and the fact that u is smooth at A*. The last equality follows again from the definition of u. Since σ* = inf{t ≥ 0 : X_t ≥ B*} ∧ ρ_{0,1}, we have that

(4.4)  G_1(X_t) I(t ≤ σ*) + H_1(X_{σ*}) I(σ* < t) ≤ u(X_t) I(t ≤ σ*) + H_1(X_{σ*}) I(σ* < t)
         = u(X_t) I(t ≤ σ*) + u(X_{σ*}) I(σ* < t) = u(X_{t∧σ*}) ≤ u(x) + M_{t∧σ*}

for any given t ≥ 0. The first inequality can be seen by noting that, since G_1 is concave in [0,a], the line u*(x;A*,B*) supports the hypograph of G_1 in [A*,a], and so u ≥ G_1 in [A*,a]. On the other hand, since u(B*) ≥ G_1(B*), it follows that u majorises the line joining the points (a,G_1(a)) and (B*,G_1(B*)) in the interval [a,B*], which in turn, by convexity of G_1 in (a,B*], majorises G_1 in [a,B*]. The first equality follows from the fact that X_{σ*} ∈ {0} ∪ [B*,1] and that, by definition, u = H_1 in {0} ∪ [B*,1]. The second inequality follows from (4.3) upon noting that G_1'' < 0 in [0,A*) and that l^{B*}_t increases only when the process is at B*. Now suppose that (τ_n)_{n≥1} is a localising sequence of stopping times for M. Then from (4.4) we get that

(4.5)  G_1(X_{τ∧τ_n}) I(τ∧τ_n ≤ σ*) + H_1(X_{σ*}) I(σ* < τ∧τ_n) ≤ u(x) + M_{τ∧τ_n∧σ*}

for every stopping time τ of X. Taking the P_x-expectation we conclude, by the optional sampling theorem, that

(4.6)  E_x[G_1(X_{τ∧τ_n}) I(τ∧τ_n ≤ σ*) + H_1(X_{σ*}) I(σ* < τ∧τ_n)] ≤ u(x)

for all stopping times τ. Letting n → ∞, we conclude by Fatou's lemma that

(4.7)  M^1_x(τ,σ*) ≤ u(x)

for all τ. Taking the supremum over all τ it follows that V^1_{σ*}(x) ≤ u(x). It remains to prove that (4.7) holds with equality when τ is replaced by τ*. Indeed, from (4.3) and the structure of the stopping times τ* and σ* we get that

(4.8)  u(X_{τ*∧τ_n∧σ*}) = u(x) + M_{τ*∧τ_n∧σ*}.
Taking the P x -expectation on both sides of (4.8) and the limit as n we have, by Lebesgue s dominated convergence theorem, that (4.9) lim n E x u(x τ σ τ n ) = E x [lim n u(x τ σ τ n )] = E x u(x τ σ ) = u(x) Since u(x τ σ ) = G 1 (X τ )I(τ σ )+H 1 (X σ )I(σ < τ ) weconcludethat M 1 x(τ,σ ) = u(x) and so Vσ 1 (x) M 1 x(τ,σ ). By definition Vσ 1 (x) M 1 x(τ,σ ) and so equality of Vσ 1 and u 13

15 follows. By symmetry one can also show that V 2 τ (x) = v(x). We next provide three results to link the solution of the free-boundary problem with the value functions of the game in the case when A and B do not solve the system of equations (3.19)-(3.20). The proofs can be carried out using similar arguments to the proof of Theorem 4.1 and therefore shall be ommitted. Proposition 4.2 Consider the assumptions given in Theorem 4.1. Let { G1 (x) if 0 x A (4.10) u(x) = u (x;a,1) if A < x 1 and (4.11) v(x) = { H2 (x) if 0 x A v (x;a,1) if A < x 1 where u (x;a,1) takes the form (3.16) and v (x;a,1) is given by (3.17), with B = 1 and A being the solution of the nonlinear equation (3.19). If Γ(A,1) < 0 then u and v coincide with the value functions Vτ 1 (x) = sup τ M 1 x(τ,σ ) and Vσ 2 (x) = sup σ M 2 x(τ,σ) respectively, where τ = inf{t 0 : X t A } ρ 0,1 and σ = ρ 0,1. Proposition 4.3 Consider the assumptions given in Theorem 4.1. Let { u (x;0,b (4.12) u(x) = ) if 0 x < B H 1 (x) if B x 1 and (4.13) v(x) = { v (x;0,b ) if 0 x < B G 2 (x) if B x 1 where u (x;0,b ) takes the form (3.16) and v (x;0,b ) is given by (3.17) with A = 0 and B being the solution of the nonlinear equation (3.20). If Θ(0,B ) < 0 then u and v coincide with the value functions Vτ 1 (x) = sup τ M 1 x(τ,σ ) and Vσ 2 (x) = sup σ M 2 x(τ,σ) respectively, where τ = ρ 0,1 and σ = inf{t 0 : X t B} ρ 0,1. Proposition 4.4 Consider the assumptions given in Theorem 4.1. Let (4.14) u(x) = u (x;0,1) and (4.15) v(x) = v (x;0,1) where u (x;0,1) takes the form (3.16) and v (x;0,1) is given by (3.17) with A = 0 and B = 1. If Θ(0,1) < 0 and Γ(0,1) < 0 then the functions u and v coincide with the value functions Vτ 1 (x) = sup τ M 1 x(τ,σ ) and Vσ 2 (x) = sup σ M 2 x(τ,σ) respectively, where τ = σ = ρ 0,1. 14
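The equilibrium of Theorem 4.1 consists of two threshold rules: player one stops on entering [0, A*], player two on entering [B*, 1]. As an informal numerical illustration (ours, not part of the paper), the expected payoff M^1_x(τ*, σ*) of such a pair can be estimated by simulating the absorbed Brownian motion directly; the function name, the Euler time-stepping, and the payoffs passed in are hypothetical placeholders, and boundary-overshoot bias of the scheme is ignored.

```python
import numpy as np

def payoff_player1(x0, A, B, G1, H1, dt=1e-3, n_paths=20_000, seed=0):
    """Monte Carlo estimate of M^1_x(tau, sigma) when player one stops on
    entering [0, A] and player two on entering [B, 1] (with 0 < A < B < 1).
    Brownian paths are simulated by an Euler scheme and absorbed at the
    endpoints 0 and 1, where G1 = H1 by assumption."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    out = np.zeros(n_paths)
    sqdt = np.sqrt(dt)
    while alive.any():
        hit_low = alive & (x <= A)     # tau <= sigma: player one stops, pays G1
        hit_high = alive & (x >= B)    # sigma < tau: player two stops, pays H1
        out[hit_low] = G1(x[hit_low])
        out[hit_high] = H1(x[hit_high])
        alive &= ~(hit_low | hit_high)
        x[alive] += sqdt * rng.standard_normal(alive.sum())
        np.clip(x, 0.0, 1.0, out=x)    # absorption at 0 and 1
    return out.mean()
```

As a sanity check, with the (hypothetical) choice G1(x) = H1(x) = x the stopped process is a bounded martingale, so by optional sampling the estimate should be close to x0 for any 0 < A < x0 < B < 1.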

5. Counterexample

Figure 1: Counterexample: payoff functions.

In this section we shall give a counterexample showing that if assumptions (3.1)-(3.6) are relaxed then the functions u and v may fail to exist. For this, consider the payoff functions G_i and H_i given in Figure 1. Suppose that the pair (τ*, σ*) is a Nash equilibrium point, so that

(5.1)  V^1_{σ*}(x) = sup_τ M^1_x(τ, σ*) = M^1_x(τ*, σ*)
(5.2)  V^2_{τ*}(x) = sup_σ M^2_x(τ*, σ) = M^2_x(τ*, σ*).

Suppose also that V^1_{σ*} and V^2_{τ*} are continuous in x.

(1°). Let V_1(x) = sup_τ E_x[G_1(X_τ)]. From the theory of optimal stopping (cf. [18]) it is known that V_1(x) = E_x[G_1(X_{τ_{D_1}})], where τ_{D_1} = inf{t ≥ 0 : X_t ∈ D_1} with D_1 = {V_1 = G_1}. We show that τ* ≥ τ_{D_1} P_x-a.s. for each x ∈ [0,1]. For this we first prove that V^1_{σ*}(x) ≥ V_1(x). Again from the theory of optimal stopping we know that V_1 coincides with the smallest concave function majorizing G_1 (and is continuous by the continuity of G_1). Since H_1 is concave and H_1 ≥ G_1, we have H_1 ≥ V_1, and therefore

M^1_x(τ_{D_1}, σ*) = E_x[G_1(X_{τ_{D_1}}) I(τ_{D_1} ≤ σ*) + H_1(X_{σ*}) I(σ* < τ_{D_1})]
 ≥ E_x[G_1(X_{τ_{D_1}}) I(τ_{D_1} ≤ σ*) + V_1(X_{σ*}) I(σ* < τ_{D_1})]
 = E_x[V_1(X_{τ_{D_1}}) I(τ_{D_1} ≤ σ*) + V_1(X_{σ*}) I(σ* < τ_{D_1})]
 = E_x[V_1(X_{τ_{D_1}∧σ*})]
 = E_x[E_{X_{τ_{D_1}∧σ*}}[V_1(X_{τ_{D_1}})]]
 = E_x[E_x(V_1(X_{τ_{D_1}}) ∘ θ_{τ_{D_1}∧σ*} | F_{τ_{D_1}∧σ*})]
 = E_x[E_x(V_1(X_{τ_{D_1}∧σ* + τ_{D_1}∘θ_{τ_{D_1}∧σ*}}) | F_{τ_{D_1}∧σ*})]
 = E_x[E_x(V_1(X_{τ_{D_1}}) | F_{τ_{D_1}∧σ*})]
 = E_x[V_1(X_{τ_{D_1}})] = V_1(x),

where θ is the shift operator (cf. [18, Chapter II]). The second equality follows from the fact that V_1(X_{τ_{D_1}}) = G_1(X_{τ_{D_1}}) (since both V_1 and G_1 are continuous). The fourth and last equalities follow from the fact that V_1(X_{τ_{D_1}}) = G_1(X_{τ_{D_1}}) and that V_1(x) = E_x[G_1(X_{τ_{D_1}})]. The fifth equality follows from the strong Markov property of X, whereas the seventh equality follows from the fact that τ_{D_1} = τ_{D_1}∧σ* + τ_{D_1}∘θ_{τ_{D_1}∧σ*}, upon noting that τ_{D_1}∧σ* ≤ τ_{D_1} P_x-a.s. Thus we have that

(5.3)  V^1_{σ*}(x) ≥ M^1_x(τ_{D_1}, σ*) ≥ V_1(x).

To prove the required result we let τ_{D*_1} = inf{t ≥ 0 : X_t ∈ D*_1}, where D*_1 = {V^1_{σ*} = G_1} (since we are assuming that V^1_{σ*} is continuous, D*_1 is closed). From (5.1) and the Markov property of X we have

(5.4)  V^1_{σ*}(X_{τ*}) = M^1_{X_{τ*}}(τ*, σ*)
 = E_x[G_1(X_{τ*})∘θ_{τ*} I(τ*∘θ_{τ*} ≤ σ*∘θ_{τ*}) + H_1(X_{σ*})∘θ_{τ*} I(σ*∘θ_{τ*} < τ*∘θ_{τ*}) | F_{τ*}]
 = E_x[G_1(X_{τ*}) | F_{τ*}] = G_1(X_{τ*}),

where we used the fact that τ*∘θ_{τ*} = 0. From (5.4) and by definition of τ_{D*_1} (upon recalling that D*_1 is closed) it follows that τ* ≥ τ_{D*_1} P_x-a.s. Finally, since V^1_{σ*}(x) ≥ V_1(x) (cf. (5.3)), we see that D*_1 ⊆ D_1 and so τ_{D_1} ≤ τ_{D*_1} P_x-a.s., from which the result follows.

(2°). We show that the pair (ρ_{0,1}, σ*), with ρ_{0,1} = inf{t ≥ 0 : X_t ∈ {0,1}}, cannot be a Nash equilibrium point. So suppose, for contradiction, that (ρ_{0,1}, σ*) is a Nash equilibrium point. Consider first the optimal stopping problem V_2(x) = sup_σ E_x[G_2(X_σ)] and let σ_{D_2} = inf{t ≥ 0 : X_t ∈ D_2}, where D_2 = {V_2 = G_2}. From the theory of optimal stopping we know that V_2 is the smallest concave function majorizing G_2 and that σ_{D_2} is an optimal stopping time (as for V_1, one can show that V_2 is continuous by the continuity of G_2). We first show that if τ* = ρ_{0,1} then σ*∧ρ_{0,1} ≥ σ_{D_2}. Since M^2_x(ρ_{0,1}, σ) = E_x[G_2(X_{ρ_{0,1}∧σ})] for any stopping time σ, in particular for σ*, we have that

(5.5)  V^2_{ρ_{0,1}}(x) = E_x[G_2(X_{σ*∧ρ_{0,1}})] ≥ E_x[G_2(X_{σ∧ρ_{0,1}})].

Taking the supremum over all σ we get that

(5.6)  E_x[G_2(X_{σ*∧ρ_{0,1}})] ≥ sup_σ E_x[G_2(X_{σ∧ρ_{0,1}})] ≥ E_x[G_2(X_{σ_{D_2}∧ρ_{0,1}})] = E_x[G_2(X_{σ_{D_2}})] = V_2(x).

Note that the penultimate equality follows from the fact that V_2(0) = G_2(0) and V_2(1) = G_2(1), so that {0,1} ⊆ D_2 and hence σ_{D_2} ≤ ρ_{0,1}. On the other hand, V_2(x) ≥ E_x[G_2(X_{σ*∧ρ_{0,1}})] = V^2_{ρ_{0,1}}(x) by definition of V_2, and so we conclude that E_x[G_2(X_{σ*∧ρ_{0,1}})] = V_2(x). From this it follows that σ*∧ρ_{0,1} is an optimal stopping time for the problem V_2(x) = sup_σ E_x[G_2(X_σ)]. Moreover, by the Markov property of X we have that

(5.7)  V_2(X_{σ*∧ρ_{0,1}}) = E_{X_{σ*∧ρ_{0,1}}}[G_2(X_{σ*∧ρ_{0,1}})] = E_x[G_2(X_{σ*∧ρ_{0,1}})∘θ_{σ*∧ρ_{0,1}} | F_{σ*∧ρ_{0,1}}] = G_2(X_{σ*∧ρ_{0,1}})

and so, by continuity of V_2 and G_2, we get that σ*∧ρ_{0,1} ≥ σ_{D_2} P_x-a.s. for each x ∈ [0,1]. From this, together with the fact that H_1 is concave (that is, superharmonic relative to X), it follows that

(5.8)  M^1_x(ρ_{0,1}, σ*) = E_x[H_1(X_{σ*∧ρ_{0,1}})] ≤ E_x[H_1(X_{σ_{D_2}})].

Now if (ρ_{0,1}, σ*) were to be a Nash equilibrium point then we must have that M^1_x(ρ_{0,1}, σ*) ≥ G_1(x) for all x ∈ [0,1]. But it is clear (cf. Figure 2 below) that there exists x ∈ [0,1] for which M^1_x(ρ_{0,1}, σ*) ≤ E_x[H_1(X_{σ_{D_2}})] < G_1(x), and thus we get a contradiction.

Figure 2: A drawing of the functions x ↦ V_2(x) and x ↦ E_x[H_1(X_{σ_{D_2}})].

(3°). We now use observations (1°) and (2°) above to show that the functions u and v do not exist. Suppose, for contradiction, that u and v exist, so that u = V^1_{σ_{D*_2}} = inf_{F ∈ Sup¹_v(G_1, K_1)} F and v = V^2_{τ_{D*_1}} = inf_{F ∈ Sup²_u(G_2, K_2)} F, where τ_{D*_1} = inf{t ≥ 0 : X_t ∈ D*_1} and σ_{D*_2} = inf{t ≥ 0 : X_t ∈ D*_2}, with D*_1 = {u = G_1} and D*_2 = {v = G_2} being closed sets. From (1°) we must have that τ_{D*_1} ≥ τ_{D_1} P_x-a.s. Since D_1, D*_1 are closed subsets of [0,1] (which must contain 0 and 1), it follows that D*_1 ⊆ D_1. Moreover, from (2°) it follows that D*_1 ≠ {0,1}. Now let A_2 ∈ [0,1] be the point given in Figure 3 and consider the optimal stopping problem V^2_{τ_{A_2}}(x) = sup_σ M^2_x(τ_{A_2}, σ), where τ_{A_2} = inf{t ≥ 0 : X_t = A_2} ∧ ρ_{0,1} = inf{t ≥ 0 : X_t ∈ D'_1} with D'_1 = {0, A_2, 1}. From the double partial superharmonic characterisation (note that each x ∈ [0,1] is regular for {x}) it can be shown that V^2_{τ_{A_2}} = inf_{F ∈ Sup²_{D'_1}(G_2, K_2)} F, where Sup²_{D'_1}(G_2, K_2) = {F : [0,1] → [G_2, K_2] : F is continuous, F = H_2 in D'_1, F is concave in (D'_1)ᶜ}. Moreover, the first entry time σ_{D'_2} = inf{t ≥ 0 : X_t ∈ D'_2}, where D'_2 = {V^2_{τ_{A_2}} = G_2}, is an optimal stopping time. Now consider the optimal stopping problem V^2_{τ_{D_1}} = sup_σ M^2_x(τ_{D_1}, σ). Then we can similarly show that V^2_{τ_{D_1}} = inf_{F ∈ Sup²_{D_1}(G_2, K_2)} F, where Sup²_{D_1}(G_2, K_2) = {F : [0,1] → [G_2, K_2] : F is continuous, F = H_2 in D_1, F is concave in D_1ᶜ} (recall from (1°) that τ_{D_1} = inf{t ≥ 0 : X_t ∈ D_1} with D_1 = {V_1 = G_1}). Moreover, the first entry time σ_{D̂_2} = inf{t ≥ 0 : X_t ∈ D̂_2}, where D̂_2 = {V^2_{τ_{D_1}} = G_2}, is an optimal stopping time. It is easy to see from Figure 3 that D'_2 = D̂_2 (note that A_2 is a boundary point of D_1). On the other hand, since V^2_{τ_{A_2}} ≥ V^2_{τ_{D*_1}}, we can conclude that D'_2 ⊆ D*_2 and so σ_{D*_2} ≤ σ_{D'_2} P_x-a.s. From this one can see that D*_1 \ {0,1} = ∅, which contradicts the fact that D*_1 ≠ {0,1}.

Figure 3: A typical drawing of the functions V^2_{τ*} and V^2_{τ_{D_1}} together with the mappings x ↦ E_x[H_1(X_{σ*})] and x ↦ E_x[H_1(X_{σ_{D̂_2}})], where τ* = τ_{D*_1} and σ* = σ_{D*_2}. The sets D_1, D*_1, D̂_2 (= D'_2) and D*_2 are given by {0,1} ∪ [A_1, A_2], {0,1} ∪ [A*_1, A*_2], {0} ∪ [B̂, 1] and {0} ∪ [B*, 1] respectively.

6. Regular Diffusions

We shall now link nonzero-sum games of optimal stopping for one-dimensional regular diffusions with nonzero-sum games of optimal stopping for Brownian motion. In doing so one can then use the results of the previous sections to show that, for a certain class of payoff functions, nonzero-sum optimal stopping games for one-dimensional regular diffusions admit a Nash equilibrium point. So let X be a one-dimensional regular diffusion in [0,1], absorbed at either 0 or 1, and suppose that α ≥ 0 is a given constant. Let us assume that the fine topology coincides with the Euclidean topology, and let L_X be the infinitesimal generator of X. It is well known that, under regularity conditions (cf. for example [18]),

L_X F = (σ²(x)/2) F_xx + μ(x) F_x   for x ∈ (0,1),

where μ(x) ∈ ℝ is the drift and σ²(x) is the diffusion coefficient of X. Moreover, the second-order ordinary differential equation L_X F = αF admits two linearly independent solutions ψ and φ such that ψ(0), φ(1) > 0, with ψ increasing and φ decreasing. These solutions are uniquely determined up to a multiplicative constant. In the case α = 0 we can take ψ = S and φ ≡ 1, where S is the scale function of X.

(1°). Consider the nonzero-sum game of optimal stopping in which player one chooses a stopping time τ and player two a stopping time σ in order to maximize their expected payoffs, which are respectively given by

(6.1)  E_x[e^{−α(τ∧σ)} (G_1(X_τ) I(τ ≤ σ) + H_1(X_σ) I(σ < τ))]
       E_x[e^{−α(τ∧σ)} (G_2(X_σ) I(σ < τ) + H_2(X_τ) I(τ ≤ σ))]

20 where G i,h i : [0,1] R, for i = 1,2, are continuous functions such that G i H i with G i (0) = H i (0) and G i (1) = H i (1). For a given stopping time σ chosen by player two, let [ (6.2) Vσ 1,α (x) = supe x e α(τ σ) (G 1 (X τ )I(τ σ)+h 1 (X σ )I(σ < τ)) ] τ be the value function of player one and for a given stopping time τ chosen by player one let [ (6.3) Vτ 2,α (x) = supe x e α(τ σ) (G 2 (X σ )I(σ < τ)+h 2 (X τ )I(τ σ)) ] σ bethevaluefunctionofplayertwo. Supposethatthereexistcontinuousfunctions u,v : [0,1] R such that (6.4) u = inf F F Sup 1 v (G 1,K 1 ) and (6.5) v = inf F F Sup 2 u (G 2,K 2 ) where (6.6) Sup 1 v(g 1,K 1 ) = {F : [0,1] [G 1,K 1 ] : F is continuous, F = H 1 in D 2,F is α-superharmonic in D c 2} and (6.7) Sup 2 u(g 2,K 2 ) = {F : [0,1] [G 2,K 2 ] : F is continuous, F = H 2 in D 1,F is α-superharmonic in D c 1} with K i, for i = 1,2, being the smallest α superharmonic function (relative to X ) majorizing H i, D 1 = {u = G 1 } and D 2 = {v = G 2 } (recall that a measurable function F : R R is α superharmonic if E x [e ατ F(X τ )] F(x) for all stopping times τ of X and all x [0,1] ). From the double partial superharmonic characterisation of the value functions we have that u(x) = V 1,α σ D2 (x) and v(x) = V 2,α τ D1 (x) for all x [0,1], where τ D1 = inf{t 0 : X t D 1 } and σ D2 = inf{t 0 : X t D 2 }. (2 ). Let I : [0,1] R be a strictly increasing continuous function and J : [0,1] R a Borel measurable function. J is said to be I concave if ( ) ( ) I(d) I(x) I(x) I(c) (6.8) J(x) J(c) +J(d) I(d) I(c) I(d) I(c) for 0 c < x < d 1. It is known (cf. for example [6, Chapter 16] or [17, proof of Theorem J 3.2]) that a Borel measurable function J is α superharmonic if and only if is I concave ϕ J orequivalentlyifandonlyif is Î -concavewhere I and Î arestrictlyincreasingcontinuous ψ 19

21 functions given by I = ψ and Î = ϕ 1 = I ϕ. From this it follows that the collections of ψ functions in (6.6)-(6.7) are equivalent to Sup 1 v(g 1,K 1 ) = {F : [0,1] [G 1,K 1 ] : F is continuous, (6.9) F = H 1 in D 2, F ϕ is I-concave in Dc 2} and Sup 2 u(g 2,K 2 ) = {F : [0,1] [G 2,K 2 ] : F is continuous, (6.10) F = H 2 in D 1, F ϕ is I-concave in Dc 1} where K i, for i = 1,2, is the smallest function majorizing H i such that K i ϕ is I concave. (3 ). We show that the sets in (6.9)-(6.10) are equivalent to collections involving ordinary concave functions. For this let B be a Brownian motion in [I(0),I(1)], absorbed at either I(0) or I(1) and consider the nonzero-sum game of optimal stopping in which player one chooses a stopping time γ and player two a stopping time β in order to maximize their expected payoffs, which are respectively given by E y [ G1 (B γ )I(γ β)+ H ] 1 (B β )I(β < γ) E y [ G2 (B β )I(β < γ)+ H ] 2 (B γ )I(γ β) for y [I(0),I(1)], where G i := G i ϕ I 1 and H i := H i ϕ I 1, for i = 1,2. Given stopping time β chosen by player two let (6.11) W 1,α β (y) = supe y [ G 1 (B γ )I(γ β)+ H 1 (B β )I(β < γ)] γ be the value function of player one and similarly, given stopping time γ chosen by player one let (6.12) Wγ 2,α (y) = supe y [ G 2 (B β )I(β < γ)+ H 2 (B γ )I(γ β)] β be the value function of player two. Suppose that there exist continuous functions ũ, ṽ : [I(0),I(1)] R such that (6.13) ũ = inf F F Sup 1 ṽ ( G 1, K 1 ) and (6.14) ṽ = inf F F Sup 2 ũ ( G 2, K 2 ) where Sup 1 ṽ( G 1, K 1 ) = {F : [I(0),I(1)] [ G 1, K 1 ] : F is continuous, 20

22 (6.15) F = H 1 in D 2,F is concave in D c 2} and Sup 2 ũ( G 2, K 2 ) = {F : [I(0),I(1)] [ G 2, K 2 ] : F is continuous, (6.16) F = H 2 in D 1,F is concave in D c 1} with K i, for i = 1,2, being the smallest concave function majorizing H i, D1 = {ũ = G 1 } and D 2 = {ṽ = G 2 }. Again from the double partial superharmonic characterisation of the value functions (note that G i (I(0)) = H i (I(0)) and G i (I(1)) = H i (I(1)) since G i (0) = H i (0) and G i (1) = H i (1) ), we have that (6.17) ũ(y) = W 1,α β D2 (y) and ṽ(y) = W 2,α γ D1 (y) for all y [I(0),I(1)], where γ D1 = inf{t 0 : B t D 1 } and β D2 = inf{t 0 : B t D 2 }. (4 ). We now link the value functions in (6.2) -(6.3) with those in (6.11) - (6.12) via the collectionsoffunctionsin (6.9) - (6.10) and (6.15) -(6.16). Itiseasytoseethat ϕ(x)ũ(i(x)) G 1 (x) for all x [0,1] and that ϕ(x)ũ(i(x)) = H 1 (x) for all x {x [0,1] : ϕ(x)(ṽ I)(x) = G 2 (x)}. Since we know that ũ is concave in {y [I(0),I(1)] : ṽ(y) > G 2 (y)} then by writing ũ = (ũ I) I 1 and by making use of the fact that a Borel measurable function F on D [0,1] is I -concave if and only if F I 1 is concave on I(D) = {I(x) : x [0,1]} we get that ũ I is I -concave in {x [0,1] : ϕ(x)(ṽ I)(x) > G 2 (x)}. This fact can also be used to show that ϕ(x) K 1 (I(x)) = K 1 (x) for all x [0,1]. Repeating the above arguments for ṽ and comparing the functions ϕ(ũ I) and ϕ(ṽ I) with the functions u and v in (6.4) (6.5) it follows that (6.18) (6.19) ϕ(x)ũ(i(x)) = V 1,α σ D2 (x) ϕ(x)ṽ(i(x)) = V 2,α τ D1 (x) for x [0,1], where D 1 = {x [0,1] : ϕ(x)ũ(i(x)) = G 1 (x)} and D 2 = {x [0,1] : ϕ(x)ṽ(i(x)) = G 2 (x)}. From (6.17) we can deduce that (6.20) (6.21) V 1,α σ D2 (x) = ϕ(x)w 1,α β D2 (I(x)) V 2,α τ D1 (x) = ϕ(x)w 2,α γ D1 (I(x)) for x [0,1]. 7. Concluding Remarks We conclude this study by pointing out some remarks and directions for future research. 1. 
In general, given only assumptions (3.1) - (3.6), there may exist stopping boundaries 0 B A 1 such that the first entry times τ = inf{t 0 : X t A } ρ 0,1 and σ = inf{t 0 : X t B } ρ 0,1 form a Nash equilibrium point. This may happen whenever there exists y (0,1) such that G i (y) = H i (y), where i = 1,2. Consider for example the 21

Figure 4: The case when A* > B*: payoff functions.

payoff functions in Figure 4, and let τ* = inf{t ≥ 0 : X_t ≤ A*} ∧ ρ_{0,1} and σ* = inf{t ≥ 0 : X_t ≥ B*} ∧ ρ_{0,1}, where A* = 0.6 and B* = 0.4. It is easy to see that M^1_x(τ*, σ*) = H_1(x). On the other hand,

(7.1)  M^1_x(τ, σ*) = E_x[G_1(X_τ) I(τ ≤ σ*) + H_1(X_{σ*}) I(σ* < τ)]
 = E_x[G_1(X_{τ∧σ*}) I(τ ≤ σ*) + H_1(X_{σ*∧τ}) I(σ* < τ)]
 ≤ E_x[H_1(X_{τ∧σ*}) I(τ ≤ σ*) + H_1(X_{σ*∧τ}) I(σ* < τ)]
 = E_x[H_1(X_{τ∧σ*})] ≤ H_1(x) = M^1_x(τ*, σ*)

for all stopping times τ and for all x ∈ [0,1], where the second inequality follows from the fact that H_1 is concave in [0, B*). By symmetry we can show that M^2_x(τ*, σ*) ≥ M^2_x(τ*, σ) for all stopping times σ and for all x ∈ [0,1], so that (τ*, σ*) is a Nash equilibrium point.

In general, if B* < A*, then we must have A* ≥ b and B* ≤ a. This assertion can easily be proved by contradiction; we shall only prove that B* ≤ a, the fact that A* ≥ b following by symmetry. So suppose, for contradiction, that B* > a and let τ^ε_{B*} = inf{t ≥ 0 : X_t ≤ B* − ε} ∧ ρ_{0,1} be the first entry time into [0, B* − ε] ∪ {1}, for ε > 0 sufficiently small. It is easy to see that

(7.2)  M^1_x(τ^ε_{B*}, σ*) ≥ M^1_x(τ*, σ*).

Moreover, if x ∈ (B* − ε, B*) then (7.2) holds with strict inequality, and this shows that (τ*, σ*) cannot be optimal for player one in the case B* > a.

We next show that if A*, B* ∈ (0,1), then the case A* ≥ B* cannot occur if G_1(x) < H_1(x) for all x ∈ (0,1). So suppose, for contradiction, that A* ≥ B*. Consider first the case A* > B* and let τ_{B*} = inf{t ≥ 0 : X_t ≤ B*} ∧ ρ_{0,1}. Then it is easy to see that M^1_x(τ_{B*}, σ*) = G_1(x) I(x ≤ B*) + H_1(x) I(x > B*) > M^1_x(τ*, σ*) for all x ∈ (0,1), and this contradicts the optimality of τ* for player one. In the case A* = B* one can show that if τ^ε_{B*} = inf{t ≥ 0 : X_t ≤ B* − ε}, for ε > 0 sufficiently small, then M^1_x(τ*, σ*) ≤ M^1_x(τ^ε_{B*}, σ*) for all x ∈ (B* − ε, B*). In particular, one can see that M^1_x(τ*, σ*) < M^1_x(τ^ε_{B*}, σ*), again contradicting the optimality of τ* for player one.
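Several objects appearing above, V_1, V_2, the majorants K_i, and the functions K̃_i of Section 6, are smallest concave majorants. On a grid, the smallest concave majorant of a payoff is simply the upper boundary of the convex hull of its graph points, which the following sketch (ours, offered only as a numerical illustration; the function name and the monotone-chain implementation are not from the paper) computes.

```python
import numpy as np

def concave_majorant(xs, gs):
    """Smallest concave function on the increasing grid xs that majorises gs:
    the upper boundary of the convex hull of the points (xs[i], gs[i]).
    For Brownian motion absorbed at xs[0] and xs[-1] this is a grid
    approximation of the optimal stopping value sup_tau E_x[G(X_tau)]."""
    hull = []  # indices of the grid points forming the upper hull
    for i in range(len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 whenever it lies on or below the chord from i0 to i
            cross = (xs[i1] - xs[i0]) * (gs[i] - gs[i0]) \
                  - (gs[i1] - gs[i0]) * (xs[i] - xs[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # linear interpolation between consecutive hull vertices
    return np.interp(xs, xs[hull], gs[hull])
```

For a concave payoff the majorant coincides with the payoff itself, while for a convex payoff it is the chord joining the two endpoint values, consistent with the role these majorants play in Sections 5 and 6.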


More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA

MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA MINIMAL GRAPHS PART I: EXISTENCE OF LIPSCHITZ WEAK SOLUTIONS TO THE DIRICHLET PROBLEM WITH C 2 BOUNDARY DATA SPENCER HUGHES In these notes we prove that for any given smooth function on the boundary of

More information

Lecture 8: Basic convex analysis

Lecture 8: Basic convex analysis Lecture 8: Basic convex analysis 1 Convex sets Both convex sets and functions have general importance in economic theory, not only in optimization. Given two points x; y 2 R n and 2 [0; 1]; the weighted

More information

Quitting games - An Example

Quitting games - An Example Quitting games - An Example E. Solan 1 and N. Vieille 2 January 22, 2001 Abstract Quitting games are n-player sequential games in which, at any stage, each player has the choice between continuing and

More information

EC 521 MATHEMATICAL METHODS FOR ECONOMICS. Lecture 1: Preliminaries

EC 521 MATHEMATICAL METHODS FOR ECONOMICS. Lecture 1: Preliminaries EC 521 MATHEMATICAL METHODS FOR ECONOMICS Lecture 1: Preliminaries Murat YILMAZ Boğaziçi University In this lecture we provide some basic facts from both Linear Algebra and Real Analysis, which are going

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Optimal Stopping and Applications

Optimal Stopping and Applications Optimal Stopping and Applications Alex Cox March 16, 2009 Abstract These notes are intended to accompany a Graduate course on Optimal stopping, and in places are a bit brief. They follow the book Optimal

More information

Introduction to Convex Analysis Microeconomics II - Tutoring Class

Introduction to Convex Analysis Microeconomics II - Tutoring Class Introduction to Convex Analysis Microeconomics II - Tutoring Class Professor: V. Filipe Martins-da-Rocha TA: Cinthia Konichi April 2010 1 Basic Concepts and Results This is a first glance on basic convex

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Equilibria in Games with Weak Payoff Externalities

Equilibria in Games with Weak Payoff Externalities NUPRI Working Paper 2016-03 Equilibria in Games with Weak Payoff Externalities Takuya Iimura, Toshimasa Maruta, and Takahiro Watanabe October, 2016 Nihon University Population Research Institute http://www.nihon-u.ac.jp/research/institute/population/nupri/en/publications.html

More information

v( x) u( y) dy for any r > 0, B r ( x) Ω, or equivalently u( w) ds for any r > 0, B r ( x) Ω, or ( not really) equivalently if v exists, v 0.

v( x) u( y) dy for any r > 0, B r ( x) Ω, or equivalently u( w) ds for any r > 0, B r ( x) Ω, or ( not really) equivalently if v exists, v 0. Sep. 26 The Perron Method In this lecture we show that one can show existence of solutions using maximum principle alone.. The Perron method. Recall in the last lecture we have shown the existence of solutions

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Walker Ray Econ 204 Problem Set 3 Suggested Solutions August 6, 2015

Walker Ray Econ 204 Problem Set 3 Suggested Solutions August 6, 2015 Problem 1. Take any mapping f from a metric space X into a metric space Y. Prove that f is continuous if and only if f(a) f(a). (Hint: use the closed set characterization of continuity). I make use of

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Erhan Bayraktar University of Michigan joint work with Virginia R. Young, University of Michigan K αρλoβασi,

More information

MATH 131A: REAL ANALYSIS (BIG IDEAS)

MATH 131A: REAL ANALYSIS (BIG IDEAS) MATH 131A: REAL ANALYSIS (BIG IDEAS) Theorem 1 (The Triangle Inequality). For all x, y R we have x + y x + y. Proposition 2 (The Archimedean property). For each x R there exists an n N such that n > x.

More information

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015) NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if

More information

On a class of optimal stopping problems for diffusions with discontinuous coefficients

On a class of optimal stopping problems for diffusions with discontinuous coefficients On a class of optimal stopping problems for diffusions with discontinuous coefficients Ludger Rüschendorf and Mikhail A. Urusov Abstract In this paper we introduce a modification of the free boundary problem

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

van Rooij, Schikhof: A Second Course on Real Functions

van Rooij, Schikhof: A Second Course on Real Functions vanrooijschikhof.tex April 25, 2018 van Rooij, Schikhof: A Second Course on Real Functions Notes from [vrs]. Introduction A monotone function is Riemann integrable. A continuous function is Riemann integrable.

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

convergence theorem in abstract set up. Our proof produces a positive integrable function required unlike other known

convergence theorem in abstract set up. Our proof produces a positive integrable function required unlike other known https://sites.google.com/site/anilpedgaonkar/ profanilp@gmail.com 218 Chapter 5 Convergence and Integration In this chapter we obtain convergence theorems. Convergence theorems will apply to various types

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Chapter 3 Continuous Functions

Chapter 3 Continuous Functions Continuity is a very important concept in analysis. The tool that we shall use to study continuity will be sequences. There are important results concerning the subsets of the real numbers and the continuity

More information

2 Lebesgue integration

2 Lebesgue integration 2 Lebesgue integration 1. Let (, A, µ) be a measure space. We will always assume that µ is complete, otherwise we first take its completion. The example to have in mind is the Lebesgue measure on R n,

More information

Centre for Mathematics and Its Applications The Australian National University Canberra, ACT 0200 Australia. 1. Introduction

Centre for Mathematics and Its Applications The Australian National University Canberra, ACT 0200 Australia. 1. Introduction ON LOCALLY CONVEX HYPERSURFACES WITH BOUNDARY Neil S. Trudinger Xu-Jia Wang Centre for Mathematics and Its Applications The Australian National University Canberra, ACT 0200 Australia Abstract. In this

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

4. Convex Sets and (Quasi-)Concave Functions

4. Convex Sets and (Quasi-)Concave Functions 4. Convex Sets and (Quasi-)Concave Functions Daisuke Oyama Mathematics II April 17, 2017 Convex Sets Definition 4.1 A R N is convex if (1 α)x + αx A whenever x, x A and α [0, 1]. A R N is strictly convex

More information

Bayesian Persuasion Online Appendix

Bayesian Persuasion Online Appendix Bayesian Persuasion Online Appendix Emir Kamenica and Matthew Gentzkow University of Chicago June 2010 1 Persuasion mechanisms In this paper we study a particular game where Sender chooses a signal π whose

More information

Continuity. Chapter 4

Continuity. Chapter 4 Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of

More information

Notes on Complex Analysis

Notes on Complex Analysis Michael Papadimitrakis Notes on Complex Analysis Department of Mathematics University of Crete Contents The complex plane.. The complex plane...................................2 Argument and polar representation.........................

More information

Mathematics for Economists

Mathematics for Economists Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples

More information

Geometric intuition: from Hölder spaces to the Calderón-Zygmund estimate

Geometric intuition: from Hölder spaces to the Calderón-Zygmund estimate Geometric intuition: from Hölder spaces to the Calderón-Zygmund estimate A survey of Lihe Wang s paper Michael Snarski December 5, 22 Contents Hölder spaces. Control on functions......................................2

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

Optimal Stopping and Maximal Inequalities for Poisson Processes

Optimal Stopping and Maximal Inequalities for Poisson Processes Optimal Stopping and Maximal Inequalities for Poisson Processes D.O. Kramkov 1 E. Mordecki 2 September 10, 2002 1 Steklov Mathematical Institute, Moscow, Russia 2 Universidad de la República, Montevideo,

More information

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory

More information

The Skorokhod problem in a time-dependent interval

The Skorokhod problem in a time-dependent interval The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Lemma 15.1 (Sign preservation Lemma). Suppose that f : E R is continuous at some a R.

Lemma 15.1 (Sign preservation Lemma). Suppose that f : E R is continuous at some a R. 15. Intermediate Value Theorem and Classification of discontinuities 15.1. Intermediate Value Theorem. Let us begin by recalling the definition of a function continuous at a point of its domain. Definition.

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

COMPLETION AND DIFFERENTIABILITY IN WEAKLY O-MINIMAL STRUCTURES

COMPLETION AND DIFFERENTIABILITY IN WEAKLY O-MINIMAL STRUCTURES COMPLETION AND DIFFERENTIABILITY IN WEAKLY O-MINIMAL STRUCTURES HIROSHI TANAKA AND TOMOHIRO KAWAKAMI Abstract. Let R = (R,

More information

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS

PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 28, 2003, 207 222 PERTURBATION THEORY FOR NONLINEAR DIRICHLET PROBLEMS Fumi-Yuki Maeda and Takayori Ono Hiroshima Institute of Technology, Miyake,

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.

The general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints. 1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,

More information