Differential Games II. Marc Quincampoix, Université de Bretagne Occidentale (Brest, France). SADCO, London, September 2011


Contents
I. Introduction: A Pursuit Game and Isaacs' Theory
II. Strategies
III. Dynamic Programming Principle
IV. Existence of Value, Viscosity Solutions
V. Games on the space of measures, Incomplete information

Recall: the dynamics are
$$x' = f(x, u, v), \qquad u(t) \in U, \; v(t) \in V,$$
with terminal cost
$$C(t_0, x_0, u(\cdot), v(\cdot)) = g\big(x[t_0, x_0, u(\cdot), v(\cdot)](T)\big).$$
A map $\alpha : \mathcal{V}(t_0) \to \mathcal{U}(t_0)$ is a nonanticipative (NA) strategy iff, for all $v_1, v_2 \in \mathcal{V}(t_0)$ and all $t$: $v_1 = v_2$ on $[t_0, t]$ implies $\alpha(v_1) = \alpha(v_2)$ on $[t_0, t]$.

Value functions:
$$V^+(t_0, x_0) = \inf_{\alpha \in \mathcal{A}(t_0)} \sup_{v \in \mathcal{V}(t_0)} C(t_0, x_0, \alpha(v), v),$$
$$V^-(t_0, x_0) = \sup_{\beta \in \mathcal{B}(t_0)} \inf_{u \in \mathcal{U}(t_0)} C(t_0, x_0, u, \beta(u)).$$
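The inequality $V^- \le V^+$ between lower and upper values is already visible in a one-shot finite game, where the inf and sup reduce to min and max over finitely many actions. A minimal numerical sketch (the payoff matrix below is a hypothetical choice, not from the lecture):

```python
import numpy as np

# Payoff theta[i, j]: player u (rows) minimizes, player v (columns) maximizes.
# Hypothetical 3x3 payoff, chosen only for illustration.
theta = np.array([[3.0, 1.0, 4.0],
                  [1.0, 5.0, 2.0],
                  [2.0, 0.0, 6.0]])

upper_value = theta.max(axis=1).min()  # inf_u sup_v theta(u, v)
lower_value = theta.min(axis=0).max()  # sup_v inf_u theta(u, v)

print(upper_value, lower_value)        # upper >= lower always holds
```

Here the gap is strict (4 versus 2), which is why information patterns (strategies) matter: the player moving "second" is better off.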

Dynamic Programming Principle

Theorem 1. Let $t_0 < t_0 + h < T$. Then
$$V^-(t_0, x_0) = \sup_{\beta \in \mathcal{B}(t_0)} \inf_{u \in \mathcal{U}(t_0)} V^-\big(t_0 + h, x[t_0, x_0, u, \beta(u)](t_0 + h)\big),$$
$$V^+(t_0, x_0) = \inf_{\alpha \in \mathcal{A}(t_0)} \sup_{v \in \mathcal{V}(t_0)} V^+\big(t_0 + h, x[t_0, x_0, \alpha(v), v](t_0 + h)\big).$$
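The DPP for $V^-$ can be mimicked in discrete time by backward induction. Below is an illustrative sketch (the dynamics, horizon, and terminal cost are hypothetical, not the lecture's data): at each step the minimizing player picks $u$ first and the maximizing player answers with $v$, a one-step stand-in for $\sup_\beta \inf_u$ over nonanticipative strategies (Lemma 6 makes the correspondence $\sup_\beta \inf_u = \min_u \max_v$ precise at a single step).

```python
import numpy as np

# Toy discrete-time analogue of the DPP: x_{k+1} = x_k + u + v with
# u, v in {-1, 0, 1}, horizon K, terminal cost g(x) = x**2.
# Player u minimizes, player v maximizes and sees u's current move.
K, R = 4, 10                      # horizon and half-width of the state grid
xs = np.arange(-R, R + 1)
g = xs.astype(float) ** 2
moves = (-1, 0, 1)

V = g.copy()                       # V_K = g
for _ in range(K):
    newV = np.empty_like(V)
    for i, x in enumerate(xs):
        # min over u of max over v of V(x + u + v), clipped to the grid
        newV[i] = min(
            max(V[np.clip(x + u + v + R, 0, 2 * R)] for v in moves)
            for u in moves
        )
    V = newV

print(V[R])   # value at x = 0
```

From the center of the grid the minimizer can always return the state to within distance 1 of the origin before the maximizer's reply, so the value at $x = 0$ stays bounded as the horizon grows.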

Proof. We will prove that $V^-(t_0, x_0) = W(t_0, t_0 + h, x_0)$, where
$$W(t_0, t_0 + h, x_0) := \sup_{\beta \in \mathcal{B}(t_0)} \inf_{u \in \mathcal{U}(t_0)} V^-\big(t_0 + h, x[t_0, x_0, u, \beta(u)](t_0 + h)\big).$$

Step 1: show that $V^-(t_0, x_0) \le W(t_0, t_0 + h, x_0)$. Fix $\beta_0 \in \mathcal{B}(t_0)$ and $u_0 \in \mathcal{U}(t_0)$, and denote $x(t) := x[t_0, x_0, u, \beta_0(u)](t)$; by nonanticipativity of $\beta_0$, $x(t_0 + h)$ depends only on $u_0$ restricted to $[t_0, t_0 + h]$. We define $\beta \in \mathcal{B}(t_0 + h)$ by: for $u \in \mathcal{U}(t_0 + h)$, $\beta(u) := \beta_0(\tilde u)$ with
$$\tilde u := \begin{cases} u_0 & \text{on } [t_0, t_0 + h], \\ u & \text{on } [t_0 + h, T]. \end{cases}$$
Observe that
$$x[t_0 + h, x(t_0 + h), u, \beta(u)](t) = x[t_0, x_0, \tilde u, \beta_0(\tilde u)](t), \qquad t \in [t_0 + h, T].$$

Then
$$\inf_{u \in \mathcal{U}(t_0 + h)} C(t_0 + h, x(t_0 + h), u, \beta(u)) = \inf_{\substack{u \in \mathcal{U}(t_0) \\ u = u_0 \text{ on } [t_0, t_0 + h]}} C(t_0, x_0, u, \beta_0(u)),$$
and, by definition of $V^-$,
$$V^-(t_0 + h, x(t_0 + h)) \ge \inf_{u \in \mathcal{U}(t_0 + h)} C(t_0 + h, x(t_0 + h), u, \beta(u)) = \inf_{\substack{u \in \mathcal{U}(t_0) \\ u = u_0 \text{ on } [t_0, t_0 + h]}} C(t_0, x_0, u, \beta_0(u)).$$
Passing to the inf over $u_0$ and the sup over $\beta_0$:

$$\sup_{\beta_0 \in \mathcal{B}(t_0)} \inf_{u_0 \in \mathcal{U}(t_0)} V^-(t_0 + h, x(t_0 + h)) \ge \sup_{\beta_0 \in \mathcal{B}(t_0)} \inf_{u \in \mathcal{U}(t_0)} C(t_0, x_0, u, \beta_0(u)) = V^-(t_0, x_0).$$
We have $W(t_0, t_0 + h, x_0) \ge V^-(t_0, x_0)$.

Step 2: show that $V^-(t_0, x_0) \ge W(t_0, t_0 + h, x_0)$. Fix $\varepsilon > 0$. For every $x$ there exists $\beta_x \in \mathcal{B}(t_0 + h)$ such that
$$\inf_{u \in \mathcal{U}(t_0 + h)} C(t_0 + h, x, u, \beta_x(u)) \ge V^-(t_0 + h, x) - \varepsilon.$$
Fix $\beta_0$ and define

for $u \in \mathcal{U}(t_0)$,
$$\beta(u) := \begin{cases} \beta_0(u) & \text{on } [t_0, t_0 + h], \\ \beta_{x(t_0 + h)}\big(u|_{[t_0 + h, T]}\big) & \text{on } [t_0 + h, T], \end{cases}$$
where $x(t) = x[t_0, x_0, u, \beta_0(u)](t)$. Since the cost is terminal,
$$C(t_0, x_0, u, \beta(u)) = C\big(t_0 + h, x(t_0 + h), u|_{[t_0 + h, T]}, \beta_{x(t_0 + h)}(u|_{[t_0 + h, T]})\big) \ge V^-(t_0 + h, x(t_0 + h)) - \varepsilon.$$
So, passing to the inf over $u$,
$$V^-(t_0, x_0) \ge \inf_{u \in \mathcal{U}(t_0)} C(t_0, x_0, u, \beta(u)) \ge \inf_{u \in \mathcal{U}(t_0)} V^-(t_0 + h, x(t_0 + h)) - \varepsilon.$$

Taking the sup over $\beta_0$, we get $V^-(t_0, x_0) \ge W(t_0, t_0 + h, x_0) - \varepsilon$. Since $\varepsilon$ is arbitrary, the proof is complete. q.e.d.

Contents
I. Introduction: A Pursuit Game and Isaacs' Theory
II. Strategies
III. Dynamic Programming Principle
IV. Existence of Value, Viscosity Solutions
V. Games on the space of measures, Incomplete information

Viscosity Solutions

As we have seen before, value functions of differential games are closely related to partial differential equations. These equations can be of evolution type, as for the Bolza problem:
$$V_t(t, x) + H(t, x, V_x(t, x)) = 0 \quad \text{for all } (t, x) \in [0, T) \times \mathbb{R}^N,$$
or of stationary type, as for the minimal time problem or the infinite horizon problem:
$$H(x, V(x), \nabla V(x)) = 0 \quad \text{for all } x \in \Omega.$$

These first-order equations are usually called Hamilton-Jacobi-Isaacs equations. In order to give a suitable definition of solution without having to distinguish the different forms of these equations, we write them in a generic way:
$$H(t, x, V(t, x), V_t(t, x), V_x(t, x)) = 0 \quad \text{for all } (t, x) \in \Omega, \qquad (1)$$
where now $H = H(t, x, s, p_t, p_x)$ is the so-called Hamiltonian of the problem and $\Omega$ is some open subset of $[0, +\infty) \times \mathbb{R}^N$.

Definition 2 (Viscosity solutions). Let $V : \Omega \to \mathbb{R}$. We say that:

$V$ is a supersolution of equation (1) if $V$ is lower semicontinuous and if, for any $(t, x) \in \Omega$ and any test function $\varphi \in C^1(\Omega)$ such that $V - \varphi$ has a local minimum at $(t, x)$, we have
$$H(t, x, V(t, x), \varphi_t(t, x), \varphi_x(t, x)) \ge 0.$$

$V$ is a subsolution of equation (1) if $V$ is upper semicontinuous and if, for any $(t, x) \in \Omega$ and any test function $\varphi \in C^1(\Omega)$ such that $V - \varphi$ has a local maximum at $(t, x)$, we have
$$H(t, x, V(t, x), \varphi_t(t, x), \varphi_x(t, x)) \le 0.$$

$V$ is a solution of equation (1) if $V$ is both a super- and a subsolution of equation (1).

Let us first point out that a classical solution, when it exists, is always a viscosity solution:

Proposition 3. Assume that $V$ is a $C^1$ solution of (1). Then $V$ is a viscosity solution of (1).

Proof of Proposition 3. We only prove that $V$ is a supersolution, the proof for the subsolution being symmetric. Suppose that a $C^1$ test function $\varphi$ is such that $V - \varphi$ has a local minimum at some point $(t, x) \in \Omega$. The functions $V$ and $\varphi$ being $C^1$, the first-order necessary condition of optimality implies that $V_t(t, x) = \varphi_t(t, x)$ and $V_x(t, x) = \varphi_x(t, x)$. Since $V$ is a classical solution of (1), this gives
$$0 = H(t, x, V(t, x), V_t(t, x), V_x(t, x)) = H(t, x, V(t, x), \varphi_t(t, x), \varphi_x(t, x)),$$
which is the desired result, with an equality instead of an inequality. q.e.d.
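A standard one-dimensional example (classical in the viscosity-solution literature, not taken from these slides) shows how Definition 2 selects the "right" nonsmooth solution of the eikonal equation $|V'(x)| - 1 = 0$ on $\Omega = (-1, 1)$:

```latex
% H(x, s, p) = |p| - 1 on \Omega = (-1, 1); candidate V(x) = 1 - |x|.
%
% Subsolution test: at x_0 \neq 0, any \varphi such that V - \varphi has a
% local maximum at x_0 satisfies \varphi'(x_0) = V'(x_0) = \pm 1, hence
% |\varphi'(x_0)| - 1 \le 0. At x_0 = 0, \varphi \ge V near 0 with
% \varphi(0) = V(0) = 1 forces \varphi'(0) \in [-1, 1], so again
% |\varphi'(0)| - 1 \le 0.
%
% Supersolution test: V - \varphi cannot have a local minimum at 0, since a
% C^1 function cannot touch the concave kink of 1 - |x| from below; so the
% test only occurs at x_0 \neq 0, where |\varphi'(x_0)| - 1 = 0 \ge 0.
%
% Hence V(x) = 1 - |x| is a viscosity solution. By contrast, W(x) = |x| - 1
% solves the equation a.e. but is NOT a supersolution: with
% \varphi(x) = a x - 1 and |a| < 1, W - \varphi has a local minimum at 0
% while |\varphi'(0)| - 1 = |a| - 1 < 0.
```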

From now on, we state several standard properties of viscosity solutions, starting with some purely algebraic ones.

Lemma 4. Assume that $V$ is a supersolution of (1). Then $-V$ is a subsolution of the equation
$$\tilde H(t, x, w(t, x), w_t(t, x), w_x(t, x)) = 0 \quad \text{in } \Omega, \qquad (2)$$
where $\tilde H(t, x, s, p_t, p_x) = -H(t, x, -s, -p_t, -p_x)$. Conversely, if $V$ is a subsolution of (1), then $-V$ is a supersolution of (2).

Proof of Lemma 4. Assume that $V$ is a supersolution of (1). Let $\varphi$ be a $C^1$ test function such that $-V - \varphi$ has a local maximum at some point $(t_0, x_0)$. This means that $V - (-\varphi)$ has a local minimum at $(t_0, x_0)$, and thus, since $V$ is a supersolution of (1),
$$H(t_0, x_0, V(t_0, x_0), -\varphi_t(t_0, x_0), -\varphi_x(t_0, x_0)) \ge 0.$$
This inequality can be rewritten as
$$-H(t_0, x_0, -(-V(t_0, x_0)), -\varphi_t(t_0, x_0), -\varphi_x(t_0, x_0)) \le 0,$$
whence
$$\tilde H(t_0, x_0, -V(t_0, x_0), \varphi_t(t_0, x_0), \varphi_x(t_0, x_0)) \le 0.$$
This proves that $-V$ is a subsolution of (2). q.e.d.

Existence Result

Bolza cost with discount:
$$C(t_0, x_0, u(\cdot), v(\cdot)) = \int_{t_0}^{T} L(t, x(t), u(t), v(t))\, e^{-\lambda t}\, dt + g(x(T)).$$

Lemma 5. Assume that the following superdynamic programming inequality holds for any $h \in (0, h_0)$, for some $h_0 > 0$, at some point $(t_0, x_0)$:
$$V(t_0, x_0) \ge \sup_{\beta \in \mathcal{B}(t_0)} \inf_{u(\cdot) \in \mathcal{U}(t_0)} \left\{ \int_{t_0}^{t_0 + h} L(s, x(s), u(s), \beta(u)(s))\, ds + e^{-\lambda h}\, V(t_0 + h, x(t_0 + h)) \right\}. \qquad (3)$$
Let $\varphi$ be a $C^1$ function such that $V - \varphi$ has a local minimum at $(t_0, x_0)$. Then
$$\inf_{u \in U} \sup_{v \in V} \left\{ \varphi_t(t_0, x_0) + \langle f(t_0, x_0, u, v), \varphi_x(t_0, x_0) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v) \right\} \le 0. \qquad (4)$$

If, on the contrary, the subdynamic programming inequality holds:
$$V(t_0, x_0) \le \sup_{\beta \in \mathcal{B}(t_0)} \inf_{u(\cdot) \in \mathcal{U}(t_0)} \left\{ \int_{t_0}^{t_0 + h} L(s, x(s), u(s), \beta(u)(s))\, ds + e^{-\lambda h}\, V(t_0 + h, x(t_0 + h)) \right\} \qquad (5)$$
for any $h \in (0, h_0)$ (for some $h_0 > 0$), and if $V - \varphi$ has a local maximum at $(t_0, x_0)$, then
$$\inf_{u \in U} \sup_{v \in V} \left\{ \varphi_t(t_0, x_0) + \langle f(t_0, x_0, u, v), \varphi_x(t_0, x_0) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v) \right\} \ge 0. \qquad (6)$$

Remark. In other words, $V$ is a viscosity solution of
$$D_t V(t, x) + H(t, x, D_x V(t, x)) - \lambda V(t, x) = 0,$$
where $H(t, x, p) = \inf_{u \in U} \sup_{v \in V} \{ \langle f(t, x, u, v), p \rangle + L(t, x, u, v) \}$.

Technical Lemma

This technical lemma explains why the expression $\sup_{\beta \in \mathcal{B}(t_0)} \inf_{u(\cdot) \in \mathcal{U}(t_0)}$ in the DPP becomes $\inf_{u \in U} \sup_{v \in V}$ in the HJI equations.

Lemma 6. Let $\theta : U \times V \to \mathbb{R}$ be a continuous function and let $\tilde{\mathcal{V}}$ be the set of Borel measurable maps $v : U \to V$:
$$\tilde{\mathcal{V}} = \{ v : U \to V,\ v \text{ Borel measurable} \}.$$
Then
$$\min_{u \in U} \max_{v \in V} \theta(u, v) = \sup_{v \in \tilde{\mathcal{V}}} \inf_{u \in U} \theta(u, v(u)).$$
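In the special case where $U$ and $V$ are finite, every map $v : U \to V$ is Borel and Lemma 6 can be checked by brute force, the optimal selector being $v(u) \in \arg\max_v \theta(u, v)$. A small sketch (the payoff $\theta$ is an arbitrary illustrative choice):

```python
import itertools
import numpy as np

# Finite check of Lemma 6:
#   min_u max_v theta(u, v) = max over all maps v: U -> V of min_u theta(u, v(u)).
rng = np.random.default_rng(0)
nU, nV = 4, 3
theta = rng.integers(0, 10, size=(nU, nV)).astype(float)

minmax = theta.max(axis=1).min()          # min_u max_v theta(u, v)

best = -np.inf                            # sup over selectors v(.)
for assignment in itertools.product(range(nV), repeat=nU):
    best = max(best, min(theta[u, assignment[u]] for u in range(nU)))

print(minmax, best)  # the two quantities coincide
```

The enumeration over all $n_V^{n_U}$ selectors is only feasible for tiny control sets; its purpose is to make the identity concrete, not to be efficient.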

Proof of Lemma 6. Let $v \in \tilde{\mathcal{V}}$ be fixed. Then, for any $u \in U$, we have
$$\theta(u, v(u)) \le \sup_{v' \in V} \theta(u, v'),$$
which implies that
$$\inf_{u \in U} \theta(u, v(u)) \le \inf_{u \in U} \sup_{v' \in V} \theta(u, v').$$
Whence the inequality
$$\sup_{v \in \tilde{\mathcal{V}}} \inf_{u \in U} \theta(u, v(u)) \le \inf_{u \in U} \sup_{v \in V} \theta(u, v).$$
In order to prove the reverse inequality, let us fix $\varepsilon > 0$ and, for any $u \in U$, let $v_u \in V$ be such that
$$\theta(u, v_u) \ge \sup_{v \in V} \theta(u, v) - \frac{\varepsilon}{2}.$$

Since $\theta$ is continuous and $U$ and $V$ are compact, the map $u \mapsto \sup_{v \in V} \theta(u, v)$ is continuous. Therefore we can find a covering of $U$ by a finite family of open sets $O_i \subset U$ (for $i = 1, \dots, k$) and a finite family of controls $v_i \in V$ (for $i = 1, \dots, k$) such that
$$\theta(u, v_i) \ge \sup_{v \in V} \theta(u, v) - \varepsilon \qquad \forall u \in O_i.$$
Then the Borel map $\tilde v$ defined by $\tilde v(u) = v_i$ if $u \in O_i \setminus \bigcup_{j < i} O_j$ satisfies
$$\theta(u, \tilde v(u)) \ge \sup_{v \in V} \theta(u, v) - \varepsilon \qquad \forall u \in U.$$

Therefore
$$\inf_{u \in U} \theta(u, \tilde v(u)) \ge \inf_{u \in U} \sup_{v \in V} \theta(u, v) - \varepsilon,$$
which finally implies
$$\sup_{v \in \tilde{\mathcal{V}}} \inf_{u \in U} \theta(u, v(u)) \ge \inf_{u \in U} \sup_{v \in V} \theta(u, v) - \varepsilon.$$
Hence the result is proved, since $\varepsilon$ is arbitrary. q.e.d.

We are now ready to start the proof of Lemma 5.

Proof of Lemma 5. We first prove that (3) entails (4). Let $(t_0, x_0) \in (0, T) \times \mathbb{R}^N$ and $\varphi \in C^1$ be a test function such that $V - \varphi$ has a local minimum at $(t_0, x_0)$. This means

that there is a neighbourhood $O$ of $(t_0, x_0)$ such that
$$V(t, x) - V(t_0, x_0) \ge \varphi(t, x) - \varphi(t_0, x_0) \qquad \forall (t, x) \in O. \qquad (7)$$
We can rewrite the superdynamic programming inequality as
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \left[ e^{-\lambda h}\, V(t_0 + h, x(t_0 + h)) - V(t_0, x_0) + \int_{t_0}^{t_0 + h} L(s)\, ds \right] \le 0,$$
where we have set $x(\cdot) = x[t_0, x_0, u, \beta(u)]$ and $L(s) = L(s, x(s), u(s), \beta(u)(s))$. Owing to inequality (7), this implies that
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \left[ e^{-\lambda h} \big( \varphi(t_0 + h, x(t_0 + h)) - \varphi(t_0, x_0) \big) + (e^{-\lambda h} - 1)\, V(t_0, x_0) + \int_{t_0}^{t_0 + h} L(s)\, ds \right] \le 0 \qquad (8)$$

for any $h > 0$ sufficiently small. Since $\varphi$ is $C^1$, we have
$$\varphi(t_0 + h, x(t_0 + h)) - \varphi(t_0, x_0) = \int_{t_0}^{t_0 + h} \big[ \varphi_t(s) + \langle \varphi_x(s), f(s) \rangle \big]\, ds,$$
where we have set, for simplicity, $\varphi_t(s) = \varphi_t(s, x(s))$, $\varphi_x(s) = \varphi_x(s, x(s))$ and $f(s) = f(s, x(s), u(s), \beta(u)(s))$. Moreover, the term $(e^{-\lambda h} - 1)$ can be written as
$$e^{-\lambda h} - 1 = -\int_{t_0}^{t_0 + h} \lambda e^{-\lambda (s - t_0)}\, ds.$$
Thus inequality (8) can be put in the form
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ e^{-\lambda h} \big( \varphi_t(s) + \langle \varphi_x(s), f(s) \rangle \big) - \lambda e^{-\lambda (s - t_0)}\, V(t_0, x_0) + L(s) \right] ds \le 0.$$
Since $f$, $L$, $\varphi_t$ and $\varphi_x$ are continuous with respect to $(s, x)$, for any fixed $\varepsilon > 0$ there is some $h_0 > 0$ (which does not

depend on $\beta$ and on $u$), such that, for all $h \in (0, h_0)$, we have
$$\frac{1}{h} \int_{t_0}^{t_0 + h} \left[ e^{-\lambda h} \big( \varphi_t(s) + \langle \varphi_x(s), f(s) \rangle \big) - \lambda e^{-\lambda (s - t_0)}\, V(t_0, x_0) + L(s) \right] ds$$
$$\ge \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u(s), \beta(u)(s)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u(s), \beta(u)(s)) \right] ds - \varepsilon.$$
With this inequality, we have
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u(s), \beta(u)(s)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u(s), \beta(u)(s)) \right] ds \le \varepsilon.$$
Let us now notice that, for every Borel measurable map $v : U \to V$, the map $\beta : \mathcal{U}(t_0) \to \mathcal{V}(t_0)$ defined by
$$\beta(u)(s) = v(u(s)) \qquad \forall u \in \mathcal{U}(t_0)$$

is a nonanticipative strategy. Therefore, when the second player plays only such strategies, we get
$$\sup_{v \in \tilde{\mathcal{V}}} \inf_{u(\cdot)} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u(s), v(u(s))) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u(s), v(u(s))) \right] ds \le \varepsilon,$$
where
$$\tilde{\mathcal{V}} = \{ v : U \to V,\ v \text{ Borel measurable} \}.$$

But clearly
$$\sup_{v \in \tilde{\mathcal{V}}} \inf_{u(\cdot)} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u(s), v(u(s))) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u(s), v(u(s))) \right] ds$$
$$= \sup_{v \in \tilde{\mathcal{V}}} \inf_{u \in U} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, v(u)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v(u)) \right],$$
because the integrand does not depend on the time. Then we deduce from Lemma 6 that
$$\inf_{u \in U} \sup_{v \in V} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, v) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v) \right] \le \varepsilon.$$
Since $\varepsilon$ can be chosen arbitrarily small,

we get the desired result:
$$\inf_{u \in U} \sup_{v \in V} \left\{ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, v) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v) \right\} \le 0.$$
Let us now prove that the subdynamic programming principle (5) implies (6). Let $\varphi$ be a $C^1$ test function such that $V - \varphi$ has a local maximum at $(t_0, x_0)$. This means that there is a neighbourhood $O$ of $(t_0, x_0)$ such that
$$V(t, x) - V(t_0, x_0) \le \varphi(t, x) - \varphi(t_0, x_0) \qquad \forall (t, x) \in O.$$
From the subdynamic programming principle (5) we get, as previously, for any $h > 0$ sufficiently small:
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \left[ e^{-\lambda h}\, V(t_0 + h, x(t_0 + h)) - V(t_0, x_0) + \int_{t_0}^{t_0 + h} L(s)\, ds \right] \ge 0$$

(where we have set $L(s) = L(s, x(s), u(s), \beta(u)(s))$), which now implies that
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \left[ e^{-\lambda h} \big( \varphi(t_0 + h, x(t_0 + h)) - \varphi(t_0, x_0) \big) + (e^{-\lambda h} - 1)\, V(t_0, x_0) + \int_{t_0}^{t_0 + h} L(s)\, ds \right] \ge 0$$
for any $h > 0$ sufficiently small. Arguing as previously, this inequality implies that, for any $\varepsilon > 0$, there is some $h_0 > 0$ such that, for all $h \in (0, h_0)$,
$$\sup_{\beta} \inf_{u(\cdot)} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u(s), \beta(u)(s)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u(s), \beta(u)(s)) \right] ds \ge -\varepsilon.$$
In particular, if the first player only plays constant controls,

this gives:
$$\sup_{\beta} \inf_{u \in U} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, \beta(u)(s)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, \beta(u)(s)) \right] ds \ge -\varepsilon.$$
But now note that the integrand does not depend on the time, and thus the second player can as well play strategies of the form $v \in \tilde{\mathcal{V}}$:
$$\sup_{\beta} \inf_{u \in U} \frac{1}{h} \int_{t_0}^{t_0 + h} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, \beta(u)(s)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, \beta(u)(s)) \right] ds$$
$$= \sup_{v \in \tilde{\mathcal{V}}} \inf_{u \in U} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, v(u)) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v(u)) \right].$$

We can then deduce from Lemma 6 that
$$\inf_{u \in U} \sup_{v \in V} \left[ \varphi_t(t_0, x_0) + \langle \varphi_x(t_0, x_0), f(t_0, x_0, u, v) \rangle - \lambda V(t_0, x_0) + L(t_0, x_0, u, v) \right] \ge -\varepsilon.$$
This completes the proof, since $\varepsilon$ is arbitrary. q.e.d.

Continuity of the Values

Infinite horizon cost:
$$C(x_0, u(\cdot), v(\cdot)) = \int_0^{+\infty} e^{-\lambda t} L(x(t), u(t), v(t))\, dt.$$
Assumptions:
(i) $U$ and $V$ are compact subsets of some finite dimensional spaces;
(ii) $f : \mathbb{R}^N \times U \times V \to \mathbb{R}^N$ is continuous and bounded, and Lipschitz continuous with respect to the $x$ variable;
(iii) $L : \mathbb{R}^N \times U \times V \to \mathbb{R}$ is continuous and bounded, and Lipschitz continuous with respect to the $x$ variable. (9)

Proposition 7. Under assumption (9), the value functions $V^+$ and $V^-$ are Hölder continuous and bounded in $\mathbb{R}^N$.

Hamilton-Jacobi-Isaacs Equations
$$H^+(x, p) = \inf_{u \in U} \sup_{v \in V} \{ L(x, u, v) + \langle f(x, u, v), p \rangle \},$$
$$H^-(x, p) = \sup_{v \in V} \inf_{u \in U} \{ L(x, u, v) + \langle f(x, u, v), p \rangle \},$$
for $(x, p) \in \mathbb{R}^N \times \mathbb{R}^N$.

Proposition 8. Under assumption (9), the functions $V^+$ and $V^-$ are respectively viscosity solutions of
$$-\lambda V(x) + H^+(x, \nabla V(x)) = 0 \quad \text{in } \mathbb{R}^N \qquad (10)$$
and of
$$-\lambda V(x) + H^-(x, \nabla V(x)) = 0 \quad \text{in } \mathbb{R}^N. \qquad (11)$$
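On finite control sets the two Hamiltonians are easy to evaluate, and $H^-(x, p) \le H^+(x, p)$ always holds, possibly strictly. A sketch with hypothetical data $f(x, u, v) = u + v$, $L(x, u, v) = uv$, $U = V = \{-1, 1\}$ (the coupling term $uv$ is what creates the gap):

```python
# Illustrative evaluation of H+ and H- on two-point control sets.
U = (-1.0, 1.0)
V = (-1.0, 1.0)

def L(x, u, v):
    return u * v          # running cost (hypothetical choice)

def f(x, u, v):
    return u + v          # dynamics (hypothetical choice)

def H_plus(x, p):         # inf_u sup_v { L + <f, p> }
    return min(max(L(x, u, v) + f(x, u, v) * p for v in V) for u in U)

def H_minus(x, p):        # sup_v inf_u { L + <f, p> }
    return max(min(L(x, u, v) + f(x, u, v) * p for u in U) for v in V)

print(H_minus(0.0, 0.0), H_plus(0.0, 0.0))  # -1.0 1.0: a strict gap
```

When $H^- < H^+$, as here, equations (10) and (11) genuinely differ, which is why the existence of a value requires Isaacs' condition below.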

Comparison Principle

Consider a stationary Hamilton-Jacobi-Isaacs equation of the form
$$-\lambda V(x) + H(x, \nabla V(x)) = 0 \quad \text{in } \mathbb{R}^N, \qquad (12)$$
where $H : \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ satisfies the structural condition: for any $R > 0$ there is $C_R > 0$ such that
$$|H(y, p) - H(x, p)| \le C_R\, |y - x|\, (1 + |p|) \quad \text{for all } x, y \text{ with } |x|, |y| \le R,\ p \in \mathbb{R}^N. \qquad (13)$$

Theorem 9. Assume (13). If $V_1$ is a bounded viscosity subsolution of (12) and $V_2$ is a bounded viscosity supersolution of (12), then $V_1(x) \le V_2(x)$ for all $x$.

We immediately deduce from this result a uniqueness statement:

Corollary 10. Assume that $H$ satisfies (13). Then equation (12) has at most one bounded continuous viscosity solution.

Existence of the Value

Theorem 11. If moreover Isaacs' condition holds:
$$H^+(x, p) = H^-(x, p) \qquad \forall (x, p), \qquad (14)$$
then the game has a value, namely $V^+(x) = V^-(x)$ for all $x$.
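Isaacs' condition holds in particular when the dynamics and the running cost separate in $u$ and $v$, since then the inf and sup decouple. A quick grid check with the hypothetical data $f(x, u, v) = u + v$, $L(x, u, v) = u^2 - v^2$:

```python
import numpy as np

# Separable data: L + <f, p> = (u**2 + u*p) + (-v**2 + v*p),
# so inf_u sup_v and sup_v inf_u give the same value.
us = np.linspace(-1.0, 1.0, 101)
vs = np.linspace(-1.0, 1.0, 101)

def H_plus(x, p):
    return min(max(u**2 - v**2 + (u + v) * p for v in vs) for u in us)

def H_minus(x, p):
    return max(min(u**2 - v**2 + (u + v) * p for u in us) for v in vs)

for p in (0.0, 0.7, -1.3):
    assert abs(H_plus(0.0, p) - H_minus(0.0, p)) < 1e-9

print("H+ == H- on the grid: Isaacs' condition holds")
```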

Thank You for your Attention