Viscosity Solutions of Hamilton-Jacobi Equations and Optimal Control Problems

Alberto Bressan
Penn State University

Contents
1 - Preliminaries: the method of characteristics
2 - One-sided differentials
3 - Viscosity solutions
4 - Stability properties
5 - Comparison theorems
6 - Control systems
7 - The Pontryagin Maximum Principle
8 - Extensions of the PMP
9 - Dynamic programming
10 - The Hamilton-Jacobi-Bellman equation
References

A first version of these lecture notes was written at NTNU, Trondheim, 2001. This revised version was completed at Shanghai Jiao Tong University, Spring.

1 - Preliminaries: the method of characteristics

A first order, scalar PDE has the form
$$F(x,\,u,\,\nabla u) = 0 \qquad x\in\Omega\subseteq\mathbb{R}^n. \tag{1.1}$$
It is convenient to introduce the variable $p = \nabla u$, so that $(p_1,\dots,p_n) = (u_{x_1},\dots,u_{x_n})$. We assume that $F = F(x,u,p)$ is a continuous function, mapping $\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n$ into $\mathbb{R}$. Given the boundary data
$$u(x) = \bar u(x) \qquad x\in\partial\Omega, \tag{1.2}$$
a solution can be constructed (at least locally, in a neighborhood of the boundary) by the classical method of characteristics. The idea is to obtain the values $u(x)$ along a curve $s\mapsto x(s)$ starting from the boundary of $\Omega$, solving a suitable ODE (Figure 1.1).

[Figure 1.1: characteristic curves $x(s)$ starting from boundary points $y\in\partial\Omega$.]

Fix a boundary point $y\in\partial\Omega$ and consider a curve $s\mapsto x(s)$ with $x(0) = y$. Call $u(s) = u\big(x(s)\big)$, $p(s) = p\big(x(s)\big) = \nabla u\big(x(s)\big)$. We seek an ODE describing the evolution of $u$ and $p = \nabla u$ along the curve. Denoting by an upper dot the derivative w.r.t. the parameter $s$, one has
$$\dot u = \sum_i u_{x_i}\,\dot x_i = \sum_i p_i\,\dot x_i, \tag{1.3}$$
$$\dot p_j = \frac{d}{ds}\,u_{x_j} = \sum_i u_{x_j x_i}\,\dot x_i. \tag{1.4}$$
In general, $\dot p_j$ thus depends on the second derivatives of $u$. The key idea in the method of characteristics is that, by a careful choice of the curve $s\mapsto x(s)$, the terms involving second derivatives disappear from the equations. Differentiating (1.1) w.r.t. $x_j$ we obtain
$$\frac{\partial F}{\partial x_j} + \frac{\partial F}{\partial u}\,u_{x_j} + \sum_i \frac{\partial F}{\partial p_i}\,u_{x_i x_j} = 0. \tag{1.5}$$

Hence
$$\sum_i \frac{\partial F}{\partial p_i}\,u_{x_j x_i} = -\frac{\partial F}{\partial x_j} - \frac{\partial F}{\partial u}\,p_j. \tag{1.6}$$
If we now make the choice $\dot x_i = \partial F/\partial p_i$, the right hand side of (1.4) is computed by (1.6). We thus obtain a closed system of equations, which do not involve second order derivatives:
$$\left\{\begin{array}{ll} \dot x_i = \dfrac{\partial F}{\partial p_i} & i = 1,\dots,n,\\[2mm] \dot u = \displaystyle\sum_i p_i\,\dfrac{\partial F}{\partial p_i},\\[2mm] \dot p_j = -\dfrac{\partial F}{\partial x_j} - \dfrac{\partial F}{\partial u}\,p_j & j = 1,\dots,n. \end{array}\right. \tag{1.7}$$
This leads to a family of Cauchy problems, which in vector notation take the form
$$\left\{\begin{array}{ll} \dot x = \dfrac{\partial F}{\partial p} & x(0) = y,\\[2mm] \dot u = p\cdot\dfrac{\partial F}{\partial p} & u(0) = \bar u(y),\\[2mm] \dot p = -\dfrac{\partial F}{\partial x} - \dfrac{\partial F}{\partial u}\,p & p(0) = \nabla\bar u(y), \end{array}\right. \qquad y\in\partial\Omega. \tag{1.8}$$
The resolution of the first order boundary value problem (1.1)-(1.2) is thus reduced to the solution of a family of ODEs, depending on the initial point $y$. As $y$ varies along the boundary of $\Omega$, we expect that the union of the above curves $x(\cdot)$ will cover a neighborhood of $\Omega$, where our solution $u$ will be defined.

Remark 1.1. If $F$ is affine w.r.t. $p$, the equation (1.1) takes the form
$$F(x,u,p) = p\cdot\alpha(x,u) + \beta(x,u) = 0.$$
In this case, the partial derivatives $\partial F/\partial p_i$ do not depend on $p$. The first two equations in (1.7) can thus be solved independently, without computing $p$ from the third equation:
$$\dot x = \alpha(x,u), \qquad \dot u = p\cdot\dot x = -\beta(x,u).$$

Example 1.2. The equation
$$|\nabla u|^2 - 1 = 0 \qquad x\in\Omega \tag{1.9}$$
on $\mathbb{R}^2$ corresponds to (1.1) with $F(x,u,p) = p_1^2 + p_2^2 - 1$. Assigning the boundary data
$$u = 0 \qquad x\in\partial\Omega,$$
a solution is clearly given by the distance function
$$u(x) = \text{dist}(x,\,\partial\Omega).$$
The corresponding equations (1.8) are
$$\dot x = 2p, \qquad \dot u = p\cdot\dot x = 2, \qquad \dot p = 0.$$
Choosing the initial data at a point $y\in\partial\Omega$, we have
$$x(0) = y, \qquad u(0) = 0, \qquad p(0) = \mathbf{n},$$
where $\mathbf{n}$ is the interior unit normal to the set $\Omega$ at the point $y$. In this case, the solution is constructed along the ray $x(s) = y + 2s\,\mathbf{n}$, and along this ray one has $u(x) = |x - y|$.
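The Cauchy problems (1.8) are straightforward to integrate numerically. The following sketch (Python with NumPy/SciPy; the script and all names in it are our own, not part of the notes) traces a few characteristics for the eikonal equation of Example 1.2 on the unit disk, checking that along each ray the computed value agrees with the distance function.

```python
# Minimal sketch: characteristics (1.8) for F(x,u,p) = |p|^2 - 1
# (Example 1.2) on the unit disk, with u = 0 on the boundary.
# Here dF/dp = 2p, so:  x' = 2p,  u' = p . x' = 2|p|^2,  p' = 0.
import numpy as np
from scipy.integrate import solve_ivp

def char_ode(s, z):
    # z = (x1, x2, u, p1, p2)
    p = z[3:5]
    return np.concatenate([2.0 * p, [2.0 * (p @ p)], [0.0, 0.0]])

for theta in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    y = np.array([np.cos(theta), np.sin(theta)])   # boundary point y
    n = -y                                         # interior unit normal
    z0 = np.concatenate([y, [0.0], n])             # x(0)=y, u(0)=0, p(0)=n
    sol = solve_ivp(char_ode, (0.0, 0.4), z0)      # stop before rays cross
    x_end, u_end = sol.y[0:2, -1], sol.y[2, -1]
    # along the ray x(s) = y + 2sn one has u = dist(x, boundary) = 1 - |x|
    assert abs(u_end - (1.0 - np.linalg.norm(x_end))) < 1e-6
```

The integration is deliberately stopped at $s = 0.4$: at $s = 1/2$ all rays meet at the center of the disk, and past that point the characteristics no longer determine $u$ unambiguously, as discussed next.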

Even if the boundary $\partial\Omega$ is smooth, in general the distance function will be smooth only on a neighborhood of this boundary. If $\Omega$ is bounded, there will be a set $\gamma$ of interior points $x$ where the distance function is not differentiable (Figure 1.2). These are indeed the points such that
$$\text{dist}(x,\,\partial\Omega) = |x - y_1| = |x - y_2|$$
for two distinct points $y_1, y_2\in\partial\Omega$.

[Figure 1.2: a ray $x(s)$ from a boundary point $y$, and the set $\gamma$ of interior points where the distance function is not differentiable.]

The previous example shows that, in general, the boundary value problem for a first order PDE does not admit a global $C^1$ solution. This suggests that we should relax our requirements, and consider solutions in a generalized sense. We recall that, by Rademacher's theorem, every Lipschitz continuous function $u:\Omega\mapsto\mathbb{R}$ is differentiable almost everywhere. It thus seems natural to introduce

Definition 1.3. A function $u$ is a generalized solution of (1.1)-(1.2) if $u$ is Lipschitz continuous on the closure $\bar\Omega$, takes the prescribed boundary values, and satisfies the first order equation (1.1) at almost every point $x\in\Omega$.

Unfortunately, this concept of solution is far too weak, and does not lead to a useful uniqueness result.

Example 1.4. The boundary value problem
$$|u_x| - 1 = 0 \qquad x\in[-1,1], \qquad u(-1) = u(1) = 0, \tag{1.10}$$
admits infinitely many piecewise affine generalized solutions, as shown in Figure 1.3a.

[Figure 1.3a: several piecewise affine generalized solutions $u_0, u_1, u_2$ of (1.10). Figure 1.3b: a viscous approximation $u^\varepsilon$ near a local minimum $x^\varepsilon$.]

Observe that, among all these solutions, the distance function
$$u_0(x) = 1 - |x| \qquad x\in[-1,\,1]$$
is the only one that can be obtained as a vanishing viscosity limit. Indeed, any other generalized solution $u$ with polygonal graph has at least one strict local minimum in the interior of the interval $[-1,1]$, say at a point $x$. Assume that $\lim_{\varepsilon\to 0+} u^\varepsilon = u$ uniformly on $[-1,1]$, for some family of smooth solutions to
$$|u^\varepsilon_x| - 1 = \varepsilon\,u^\varepsilon_{xx}.$$
Then for every $\varepsilon > 0$ sufficiently small the function $u^\varepsilon$ will have a local minimum at a nearby point $x^\varepsilon$ (Figure 1.3b). But this is impossible: at such a point $u^\varepsilon_x(x^\varepsilon) = 0$, hence
$$|u^\varepsilon_x(x^\varepsilon)| - 1 = -1, \qquad \text{while} \qquad \varepsilon\,u^\varepsilon_{xx}(x^\varepsilon) \geq 0.$$
On the other hand, notice that if $u^\varepsilon$ attains a local maximum at some interior point $x\in\,]-1,1[\,$, this does not lead to any contradiction.

In view of the previous example, one seeks a new concept of solution for the first order PDE (1.1), having the following properties:

1. For every boundary data (1.2), a unique solution exists, depending continuously on the boundary values and on the function $F$.

2. This solution $u$ coincides with the limit of vanishing viscosity approximations. Namely, $u = \lim_{\varepsilon\to 0+} u^\varepsilon$, where the $u^\varepsilon$ are solutions of
$$F(x,\,u^\varepsilon,\,\nabla u^\varepsilon) = \varepsilon\,\Delta u^\varepsilon.$$

3. In the case where (1.1) is the Hamilton-Jacobi equation describing the value function for some optimization problem, this new concept of solution should single out precisely the value function.

In the following sections we shall introduce the definition of viscosity solution and see how it fulfills the above requirements.
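Requirement 2 can be tested numerically on Example 1.4. A minimal finite-difference sketch (ours; the explicit marching scheme and centered differences are our choices, not from the notes): the viscous equation $|u^\varepsilon_x| - 1 = \varepsilon u^\varepsilon_{xx}$ on $[-1,1]$ is obtained as the steady state of the parabolic equation $u_t = \varepsilon u_{xx} - (|u_x| - 1)$, and compared against the distance function $u_0(x) = 1 - |x|$.

```python
# Minimal sketch: vanishing viscosity approximations of |u_x| - 1 = 0
# on [-1,1] with u(-1) = u(1) = 0 (Example 1.4). We march
#   u_t = eps * u_xx - (|u_x| - 1)
# to steady state with explicit Euler and centered differences.
import numpy as np

def viscous_profile(eps, m=201, t_final=10.0):
    x = np.linspace(-1.0, 1.0, m)
    h = x[1] - x[0]
    dt = 0.2 * h**2 / eps                 # explicit stability restriction
    u = np.zeros(m)
    for _ in range(int(t_final / dt)):
        ux  = (u[2:] - u[:-2]) / (2 * h)
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        u[1:-1] += dt * (eps * uxx - (np.abs(ux) - 1.0))
    return x, u

for eps in (0.2, 0.1, 0.05):
    x, u = viscous_profile(eps)
    print(f"eps = {eps:4.2f}:  max |u_eps - (1 - |x|)| = "
          f"{np.max(np.abs(u - (1.0 - np.abs(x)))):.4f}")
```

As $\varepsilon$ decreases, the computed profiles approach $1 - |x|$, with the smoothing concentrated near the kink at $x = 0$; none of the other generalized solutions of (1.10) is produced in this limit.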

2 - One-sided differentials

Let $u:\Omega\mapsto\mathbb{R}$ be a scalar function, defined on an open set $\Omega\subseteq\mathbb{R}^n$. The set of super-differentials of $u$ at a point $x$ is defined as
$$D^+u(x) = \Big\{p\in\mathbb{R}^n;\ \limsup_{y\to x}\ \frac{u(y) - u(x) - p\cdot(y-x)}{|y-x|} \leq 0\Big\}. \tag{2.1}$$
In other words, a vector $p\in\mathbb{R}^n$ is a super-differential iff the hyperplane $y\mapsto u(x) + p\cdot(y-x)$ is tangent from above to the graph of $u$ at the point $x$ (Figure 2.1a). Similarly, the set of sub-differentials of $u$ at a point $x$ is defined as
$$D^-u(x) = \Big\{p\in\mathbb{R}^n;\ \liminf_{y\to x}\ \frac{u(y) - u(x) - p\cdot(y-x)}{|y-x|} \geq 0\Big\}, \tag{2.2}$$
so that a vector $p\in\mathbb{R}^n$ is a sub-differential iff the hyperplane $y\mapsto u(x) + p\cdot(y-x)$ is tangent from below to the graph of $u$ at the point $x$ (Figure 2.1b).

[Figures 2.1a, 2.1b: hyperplanes tangent from above, resp. from below, to the graph of $u$ at $x$.]

Example 2.1. Consider the function (Figure 2.2)
$$u(x) = \begin{cases} 0 & \text{if } x < 0,\\ \sqrt{x} & \text{if } x\in[0,1],\\ 1 & \text{if } x > 1. \end{cases}$$
In this case we have
$$D^+u(0) = \emptyset, \qquad D^-u(0) = [0,\,+\infty[\,,$$
$$D^+u(x) = D^-u(x) = \Big\{\frac{1}{2\sqrt{x}}\Big\} \qquad x\in\,]0,1[\,,$$
$$D^+u(1) = [0,\,1/2], \qquad D^-u(1) = \emptyset.$$

[Figure 2.2: the function $u$ of Example 2.1.]

If $\varphi\in C^1$, its differential at a point $x$ is written as $\nabla\varphi(x)$. The following characterization of super- and sub-differentials is very useful.

Lemma 2.2. Let $u\in C(\Omega)$. Then
(i) $p\in D^+u(x)$ if and only if there exists a function $\varphi\in C^1(\Omega)$ such that $\nabla\varphi(x) = p$ and $u - \varphi$ has a local maximum at $x$;
(ii) $p\in D^-u(x)$ if and only if there exists a function $\varphi\in C^1(\Omega)$ such that $\nabla\varphi(x) = p$ and $u - \varphi$ has a local minimum at $x$.

By adding a constant, it is not restrictive to assume that $\varphi(x) = u(x)$. In this case, we are saying that $p\in D^+u(x)$ iff there exists a smooth function $\varphi\geq u$ with $\nabla\varphi(x) = p$, $\varphi(x) = u(x)$. In other words, the graph of $\varphi$ touches the graph of $u$ from above at the point $x$ (Figure 2.3a). A similar property holds for sub-differentials: $p\in D^-u(x)$ iff there exists a smooth function $\varphi\leq u$, with $\nabla\varphi(x) = p$, whose graph touches from below the graph of $u$ at the point $x$ (Figure 2.3b).

[Figures 2.3a, 2.3b: a smooth function $\varphi$ touching the graph of $u$ from above, resp. from below, at $x$.]

Proof of Lemma 2.2. Assume that $p\in D^+u(x)$. Then we can find $\delta > 0$ and a continuous, increasing function $\sigma:[0,+\infty[\,\mapsto\mathbb{R}$, with $\sigma(0) = 0$, such that
$$u(y) \leq u(x) + p\cdot(y-x) + \sigma\big(|y-x|\big)\,|y-x|$$
for $|y-x| < \delta$. Define
$$\rho(r) = \int_0^r \sigma(t)\,dt$$

and observe that
$$\rho(0) = \rho'(0) = 0, \qquad \rho(2r) \geq \sigma(r)\,r.$$
By the above properties, the function
$$\varphi(y) = u(x) + p\cdot(y-x) + \rho\big(2|y-x|\big)$$
is in $C^1(\Omega)$ and satisfies $\varphi(x) = u(x)$, $\nabla\varphi(x) = p$. Moreover, for $|y-x| < \delta$ we have
$$u(y) - \varphi(y) \leq \sigma\big(|y-x|\big)\,|y-x| - \rho\big(2|y-x|\big) \leq 0.$$
Hence, the difference $u - \varphi$ attains a local maximum at the point $x$.

To prove the opposite implication, assume that $\nabla\varphi(x) = p$ and $u - \varphi$ has a local maximum at $x$. Then
$$\limsup_{y\to x}\ \frac{u(y) - u(x) - p\cdot(y-x)}{|y-x|} \leq \limsup_{y\to x}\ \frac{\varphi(y) - \varphi(x) - p\cdot(y-x)}{|y-x|} = 0. \tag{2.3}$$
This completes the proof of (i). The proof of (ii) is entirely similar.

Remark 2.3. By possibly replacing the function $\varphi$ with $\tilde\varphi(y) = \varphi(y)\pm|y-x|^2$, it is clear that in the above lemma we can require that $u - \varphi$ attains a strict local minimum or a strict local maximum at the point $x$. This is important in view of the following stability result.

Lemma 2.4. Let $u:\Omega\mapsto\mathbb{R}$ be continuous. Assume that, for some $\phi\in C^1$, the function $u - \phi$ has a strict local minimum (a strict local maximum) at a point $x\in\Omega$. If $u_m\to u$ uniformly, then there exists a sequence of points $x_m\to x$ with $u_m(x_m)\to u(x)$ and such that $u_m - \phi$ has a local minimum (a local maximum) at $x_m$.

Proof. Assume that $u - \phi$ has a strict local minimum at $x$. For every $\rho > 0$ sufficiently small, there exists $\varepsilon_\rho > 0$ such that
$$u(y) - \phi(y) > u(x) - \phi(x) + \varepsilon_\rho \qquad \text{whenever } |y-x| = \rho.$$
By the uniform convergence $u_m\to u$, for all $m\geq N_\rho$ sufficiently large one has $|u_m(y) - u(y)| < \varepsilon_\rho/4$ for $|y-x|\leq\rho$. Hence
$$u_m(y) - \phi(y) > u_m(x) - \phi(x) + \frac{\varepsilon_\rho}{2} \qquad |y-x| = \rho.$$
This shows that $u_m - \phi$ has a local minimum at some point $x_m$, with $|x_m - x| < \rho$. Letting $\rho,\,\varepsilon_\rho\to 0$, we construct the desired sequence $\{x_m\}$.

[Figure 2.4a: $u_m - \phi$ has a local minimum near the strict minimum of $u - \phi$. Figure 2.4b: at a non-strict minimum, the perturbed function may have no nearby local minimum.]

This situation is illustrated in Figure 2.4a. On the other hand, if $x$ is a point of non-strict local minimum for $u - \phi$, the slightly perturbed function $u_m - \phi$ may not have any local minimum $x_m$ close to $x$ (Figure 2.4b).

Some simple properties of super- and sub-differentials are collected in the next lemma.

Lemma 2.5. Let $u\in C(\Omega)$. Then
(i) If $u$ is differentiable at $x$, then
$$D^+u(x) = D^-u(x) = \big\{\nabla u(x)\big\}. \tag{2.4}$$
(ii) If the sets $D^+u(x)$ and $D^-u(x)$ are both non-empty, then $u$ is differentiable at $x$, hence (2.4) holds.
(iii) The sets of points where a one-sided differential exists:
$$\Omega^+ = \big\{x\in\Omega;\ D^+u(x)\neq\emptyset\big\}, \qquad \Omega^- = \big\{x\in\Omega;\ D^-u(x)\neq\emptyset\big\} \tag{2.5}$$
are both non-empty. Indeed, they are dense in $\Omega$.

Proof. To prove (i), assume that $u$ is differentiable at $x$. Trivially, $\nabla u(x)\in D^\pm u(x)$. On the other hand, if $\varphi\in C^1(\Omega)$ is such that $u - \varphi$ has a local maximum at $x$, then $\nabla\varphi(x) = \nabla u(x)$. Hence $D^+u(x)$ cannot contain any vector other than $\nabla u(x)$.

To prove (ii), assume that the sets $D^+u(x)$ and $D^-u(x)$ are both non-empty. Then we can find $\delta > 0$ and $\varphi_1,\varphi_2\in C^1(\Omega)$ such that (Figure 2.5)
$$\varphi_1(x) = u(x) = \varphi_2(x), \qquad \varphi_1(y) \leq u(y) \leq \varphi_2(y) \qquad \text{whenever } |y-x| < \delta.$$

By a standard comparison argument, this implies that $u$ is differentiable at $x$ and $\nabla u(x) = \nabla\varphi_1(x) = \nabla\varphi_2(x)$.

Concerning (iii), let $x_0\in\Omega$ and $\varepsilon > 0$ be given. On the open ball $B(x_0,\varepsilon) = \big\{x;\ |x-x_0| < \varepsilon\big\}$ centered at $x_0$ with radius $\varepsilon$, consider the smooth function (Figure 2.6)
$$\varphi(x) = \frac{1}{\varepsilon^2 - |x-x_0|^2}.$$
Notice that $\varphi(x)\to+\infty$ as $|x-x_0|\to\varepsilon$. Therefore, the function $u - \varphi$ attains a global maximum at some interior point $y\in B(x_0,\varepsilon)$. By Lemma 2.2, the super-differential of $u$ at $y$ is non-empty. Indeed,
$$\nabla\varphi(y) = \frac{2(y-x_0)}{\big(\varepsilon^2 - |y-x_0|^2\big)^2}\ \in\ D^+u(y).$$
The previous argument shows that, for every $x_0\in\Omega$ and $\varepsilon > 0$, the set $\Omega^+$ contains a point $y$ such that $|y-x_0| < \varepsilon$. Therefore $\Omega^+$ is dense in $\Omega$. The case of sub-differentials is entirely similar.

[Figure 2.5: test functions $\varphi_1\leq u\leq\varphi_2$ touching the graph of $u$ at $x$. Figure 2.6: the barrier $\varphi$ blowing up at the boundary of $B(x_0,\varepsilon)$.]

3 - Viscosity solutions

In the following, we consider the first order, partial differential equation
$$F\big(x,\,u(x),\,\nabla u(x)\big) = 0 \tag{3.1}$$
defined on an open set $\Omega\subseteq\mathbb{R}^n$. Here $F:\Omega\times\mathbb{R}\times\mathbb{R}^n\mapsto\mathbb{R}$ is a continuous (nonlinear) function.

Definition 3.1. A function $u\in C(\Omega)$ is a viscosity subsolution of (3.1) if
$$F\big(x,\,u(x),\,p\big) \leq 0 \qquad \text{for every } x\in\Omega,\ p\in D^+u(x). \tag{3.2}$$
Similarly, $u\in C(\Omega)$ is a viscosity supersolution of (3.1) if
$$F\big(x,\,u(x),\,p\big) \geq 0 \qquad \text{for every } x\in\Omega,\ p\in D^-u(x). \tag{3.3}$$

We say that $u$ is a viscosity solution of (3.1) if it is both a supersolution and a subsolution in the viscosity sense.

Similar definitions also apply to evolution equations of the form
$$u_t + H\big(t,x,u,\nabla u\big) = 0, \tag{3.4}$$
where $\nabla u$ denotes the gradient of $u$ w.r.t. $x$. Recalling Lemma 2.2, we can reformulate these definitions in an equivalent form:

Definition 3.2. A function $u\in C(\Omega)$ is a viscosity subsolution of (3.4) if, for every $C^1$ function $\varphi = \varphi(t,x)$ such that $u - \varphi$ has a local maximum at $(t,x)$, there holds
$$\varphi_t(t,x) + H\big(t,x,u,\nabla\varphi\big) \leq 0. \tag{3.5}$$
Similarly, $u\in C(\Omega)$ is a viscosity supersolution of (3.4) if, for every $C^1$ function $\varphi = \varphi(t,x)$ such that $u - \varphi$ has a local minimum at $(t,x)$, there holds
$$\varphi_t(t,x) + H\big(t,x,u,\nabla\varphi\big) \geq 0. \tag{3.6}$$

Remark 3.3. In the definition of subsolution, we are imposing conditions on $u$ only at points $x$ where the super-differential is non-empty. Even if $u$ is merely continuous, possibly nowhere differentiable, there are a lot of these points. Indeed, by Lemma 2.5, the set of points $x$ where $D^+u(x)\neq\emptyset$ is dense in $\Omega$. Similarly, for supersolutions we only impose conditions at points where $D^-u(x)\neq\emptyset$.

Remark 3.4. If $u$ is a $C^1$ function that satisfies (3.1) at every $x\in\Omega$, then $u$ is also a solution in the viscosity sense. Vice versa, if $u$ is a viscosity solution, then the equality (3.1) must hold at every point $x$ where $u$ is differentiable. In particular, if $u$ is Lipschitz continuous, then by Rademacher's theorem it is a.e. differentiable. Hence (3.1) holds a.e. in $\Omega$.

Remark 3.5. According to Definition 1.3, a Lipschitz continuous function is a generalized solution of (1.1) if the following implication holds true:
$$p = \nabla u(x) \implies F\big(x,u(x),p\big) = 0. \tag{3.7}$$
This can be written in an equivalent way, splitting each equality into two inequalities:
$$\big[\,p\in D^+u(x) \text{ and } p\in D^-u(x)\,\big] \implies \big[\,F\big(x,u(x),p\big)\leq 0 \text{ and } F\big(x,u(x),p\big)\geq 0\,\big]. \tag{3.8}$$
The definition of viscosity solution, on the other hand, requires two separate implications:
$$\begin{array}{ll} p\in D^+u(x) \implies F\big(x,u(x),p\big)\leq 0 & (u \text{ is a viscosity subsolution}),\\[1mm] p\in D^-u(x) \implies F\big(x,u(x),p\big)\geq 0 & (u \text{ is a viscosity supersolution}). \end{array} \tag{3.9}$$
Observe that, if $u(\cdot)$ satisfies the two implications in (3.9), then it also satisfies (3.8). In other words, if $u$ is a viscosity solution, then $u$ is also a generalized solution. However, the converse does not hold.

Example 3.6. Let $F(x,u,p) = |p| - 1$. Observe that the function $u(x) = 1 - |x|$ is a viscosity solution of
$$|u_x| - 1 = 0 \tag{3.10}$$
on the open interval $]-1,1[$. Indeed, $u$ is differentiable and satisfies the equation (3.10) at all points $x\neq 0$. Moreover, we have
$$D^+u(0) = [-1,\,1], \qquad D^-u(0) = \emptyset. \tag{3.11}$$
To show that $u$ is a supersolution, at the point $x = 0$ there is nothing else to check. To show that $u$ is a subsolution, take any $p\in[-1,1]$. Then $|p| - 1\leq 0$, as required.

It is interesting to observe that the same function $u(x) = 1 - |x|$ is NOT a viscosity solution of the equation
$$1 - |u_x| = 0. \tag{3.12}$$
Indeed, at $x = 0$, taking $p = 0\in D^+u(0)$ we find $1 - 0 = 1 > 0$. Since $D^-u(0) = \emptyset$, we conclude that the function $u(x) = 1 - |x|$ is a viscosity supersolution of (3.12), but not a subsolution.

4 - Stability properties

For nonlinear PDEs, the set of solutions may not be closed w.r.t. the topology of uniform convergence. In general, if $u_m\to u$ uniformly on a domain $\Omega$, to conclude that $u$ is itself a solution of the PDE one should know, in addition, that all the derivatives $D^\alpha u_m$ that appear in the equation converge to the corresponding derivatives of $u$. This may not be the case in general.

Example 4.1. A sequence of generalized solutions to the equation
$$|u_x| - 1 = 0, \qquad u(0) = u(1) = 0, \tag{4.1}$$
is provided by the saw-tooth functions (Figure 4.1)
$$u_m(x) = \begin{cases} x - \dfrac{k-1}{m} & \text{if } x\in\Big[\dfrac{k-1}{m},\ \dfrac{k-1}{m} + \dfrac{1}{2m}\Big],\\[2mm] \dfrac{k}{m} - x & \text{if } x\in\Big[\dfrac{k}{m} - \dfrac{1}{2m},\ \dfrac{k}{m}\Big], \end{cases} \qquad k = 1,\dots,m. \tag{4.2}$$
Clearly $u_m\to 0$ uniformly on $[0,1]$, but the zero function is not a solution of (4.1). In this case, the convergence of the functions $u_m$ is not accompanied by the convergence of their derivatives.

[Figure 4.1: the saw-tooth function $u_4$.]
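A two-line numerical check (our own script) of Example 4.1: the saw-tooth functions (4.2) are exactly the distances from $x$ to the nearest grid point $k/m$, so they converge uniformly to zero while $|u_m'| = 1$ at almost every point.

```python
# The saw-tooth functions (4.2): u_m(x) = distance from x to the grid {k/m}.
# sup |u_m| = 1/(2m) -> 0, yet |u_m'| = 1 a.e. for every m.
import numpy as np

u_m = lambda x, m: np.abs(x - np.round(x * m) / m)

x = np.linspace(0.0, 1.0, 10001)
for m in (1, 4, 16, 64):
    print(f"m = {m:2d}:  sup |u_m| = {u_m(x, m).max():.5f}")   # = 1/(2m)
```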

The next lemma shows that, in the case of viscosity solutions, a general stability theorem holds, without any requirement about the convergence of derivatives.

Lemma 4.2. Consider a sequence of continuous functions $u_m$, which provide viscosity subsolutions (supersolutions) to
$$F_m\big(x,\,u_m,\,\nabla u_m\big) = 0 \qquad x\in\Omega. \tag{4.3}$$
As $m\to\infty$, assume that $F_m\to F$ uniformly on compact subsets of $\Omega\times\mathbb{R}\times\mathbb{R}^n$ and $u_m\to u$ in $C(\Omega)$. Then $u$ is a subsolution (a supersolution) of (3.1).

Proof. To prove that $u$ is a subsolution, let $\phi\in C^1$ be such that $u - \phi$ has a strict local maximum at a point $x$. We need to show that
$$F\big(x,\,u(x),\,\nabla\phi(x)\big) \leq 0. \tag{4.4}$$
By Lemma 2.4, there exists a sequence $x_m\to x$ such that $u_m - \phi$ has a local maximum at $x_m$, and $u_m(x_m)\to u(x)$ as $m\to\infty$. Since $u_m$ is a subsolution,
$$F_m\big(x_m,\,u_m(x_m),\,\nabla\phi(x_m)\big) \leq 0. \tag{4.5}$$
Letting $m\to\infty$ in (4.5), by continuity we obtain (4.4).

The above result should be compared with Example 4.1. Clearly, the functions $u_m$ in (4.2) are not viscosity solutions.

The definition of viscosity solution is naturally motivated by the properties of vanishing viscosity limits.

Theorem 4.3. Let $u^\varepsilon$ be a family of smooth solutions to the viscous equation
$$F\big(x,\,u^\varepsilon(x),\,\nabla u^\varepsilon(x)\big) = \varepsilon\,\Delta u^\varepsilon. \tag{4.6}$$
Assume that, as $\varepsilon\to 0+$, we have the convergence $u^\varepsilon\to u$ uniformly on an open set $\Omega\subseteq\mathbb{R}^n$. Then $u$ is a viscosity solution of (3.1).

Proof. Fix $x\in\Omega$ and assume $p\in D^+u(x)$. To prove that $u$ is a subsolution we need to show that $F(x,u(x),p)\leq 0$.

1. By Lemma 2.2 and Remark 2.3, there exists $\varphi\in C^1$ with $\nabla\varphi(x) = p$, such that $u - \varphi$ has a strict local maximum at $x$. For any $\delta > 0$ we can then find $0 < \rho\leq\delta$ and a function $\psi\in C^2$ such that
$$\big|\nabla\varphi(y) - \nabla\varphi(x)\big| \leq \delta \qquad \text{if } |y-x|\leq\rho, \tag{4.7}$$
$$\|\psi - \varphi\|_{C^1} \leq \delta, \tag{4.8}$$
and such that each function $u^\varepsilon - \psi$ has a local maximum inside the ball $B(x;\rho)$, for $\varepsilon > 0$ small enough.

2. Let $x_\varepsilon$ be the location of this local maximum of $u^\varepsilon - \psi$. Since $u^\varepsilon$ is smooth, this implies
$$\nabla\psi(x_\varepsilon) = \nabla u^\varepsilon(x_\varepsilon), \qquad \Delta u^\varepsilon(x_\varepsilon) \leq \Delta\psi(x_\varepsilon). \tag{4.9}$$

Hence from (4.6) it follows
$$F\big(x_\varepsilon,\,u^\varepsilon(x_\varepsilon),\,\nabla\psi(x_\varepsilon)\big) \leq \varepsilon\,\Delta\psi(x_\varepsilon). \tag{4.10}$$

3. We can now select a sequence $\varepsilon_\nu\to 0+$ such that $\lim_\nu x_{\varepsilon_\nu} = \bar x$ for some limit point $\bar x$. By the construction performed in step 1, one has $|\bar x - x|\leq\rho$. Since $\psi\in C^2$, we can pass to the limit in (4.10) and conclude
$$F\big(\bar x,\,u(\bar x),\,\nabla\psi(\bar x)\big) \leq 0. \tag{4.11}$$
By (4.7)-(4.8) we have
$$\big|\nabla\psi(\bar x) - p\big| \leq \big|\nabla\psi(\bar x) - \nabla\varphi(\bar x)\big| + \big|\nabla\varphi(\bar x) - \nabla\varphi(x)\big| \leq \delta + \delta. \tag{4.12}$$
Since $\delta > 0$ can be taken arbitrarily small, (4.11) and the continuity of $F$ imply $F(x,u(x),p)\leq 0$, showing that $u$ is a subsolution. The fact that $u$ is a supersolution is proved in an entirely similar way.

Remark 4.4. In the light of the above result, it should not come as a surprise that the two equations $F(x,u,\nabla u) = 0$ and $-F(x,u,\nabla u) = 0$ may have different viscosity solutions. Indeed, solutions to the first equation are obtained as limits of (4.6) as $\varepsilon\to 0+$, while solutions to the second equation are obtained as limits of (4.6) as $\varepsilon\to 0-$. These two limits can be substantially different.

5 - Comparison theorems

A remarkable feature of the notion of viscosity solutions is that on one hand it requires a minimum amount of regularity (just continuity), and on the other hand it is stringent enough to yield general comparison and uniqueness theorems. The uniqueness proofs are based on a technique of doubling of variables, which is reminiscent of Kruzhkov's uniqueness theorem for conservation laws [K]. We now illustrate this basic technique in a simple setting.

Theorem 5.1 (Comparison). Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. Let $u_1, u_2\in C(\bar\Omega)$ be, respectively, viscosity sub- and supersolutions of
$$u + H(x,\nabla u) = 0 \qquad x\in\Omega. \tag{5.1}$$
Assume that
$$u_1(x) \leq u_2(x) \qquad \text{for all } x\in\partial\Omega. \tag{5.2}$$
Moreover, assume that $H:\bar\Omega\times\mathbb{R}^n\mapsto\mathbb{R}$ is uniformly continuous in the $x$-variable:
$$\big|H(x,p) - H(y,p)\big| \leq \omega\Big(|x-y|\,\big(1 + |p|\big)\Big), \tag{5.3}$$

for some continuous and non-decreasing function $\omega:[0,+\infty[\,\mapsto[0,+\infty[$ with $\omega(0) = 0$. Then
$$u_1(x) \leq u_2(x) \qquad \text{for all } x\in\Omega. \tag{5.4}$$

Proof. To appreciate the main idea of the proof, consider first the case where $u_1, u_2$ are smooth. If the conclusion (5.4) fails, then the difference $u_1 - u_2$ attains a positive maximum at a point $x_0\in\Omega$. This implies $p = \nabla u_1(x_0) = \nabla u_2(x_0)$. By definition of sub- and supersolution, we now have
$$u_1(x_0) + H(x_0,p) \leq 0, \qquad u_2(x_0) + H(x_0,p) \geq 0. \tag{5.5}$$
Subtracting the second from the first inequality in (5.5) we conclude $u_1(x_0) - u_2(x_0)\leq 0$, reaching a contradiction.

[Figure 5.1a: a common gradient $p$ at the maximum of $u_1 - u_2$. Figure 5.1b: at the maximum, one of the one-sided differentials may be empty.]

Next, consider the non-smooth case. We can repeat the above argument and reach again a contradiction provided that we can find a point $x_0$ such that (Figure 5.1a)
(i) $u_1(x_0) > u_2(x_0)$,
(ii) some vector $p$ lies at the same time in the upper differential $D^+u_1(x_0)$ and in the lower differential $D^-u_2(x_0)$.

A natural candidate for $x_0$ is a point where $u_1 - u_2$ attains a global maximum. Unfortunately, at such a point one of the sets $D^+u_1(x_0)$ or $D^-u_2(x_0)$ may be empty, and the argument breaks down (Figure 5.1b). To proceed further, the key observation is that we do not need to compare values of $u_1$ and $u_2$ at exactly the same point. Indeed, to reach a contradiction, it suffices to find nearby points $x_\varepsilon$ and $y_\varepsilon$ such that (Figure 5.2)
(i′) $u_1(x_\varepsilon) > u_2(y_\varepsilon)$,
(ii′) some vector $p$ lies at the same time in the upper differential $D^+u_1(x_\varepsilon)$ and in the lower differential $D^-u_2(y_\varepsilon)$.

[Figure 5.2: nearby points $x_\varepsilon, y_\varepsilon$ carrying a common vector $p\in D^+u_1(x_\varepsilon)\cap D^-u_2(y_\varepsilon)$.]

Can we always find such points? It is here that the variable-doubling technique comes in. The key idea is to look at the function of two variables
$$\Phi_\varepsilon(x,y) = u_1(x) - u_2(y) - \frac{|x-y|^2}{2\varepsilon}. \tag{5.6}$$
This clearly admits a global maximum over the compact set $\bar\Omega\times\bar\Omega$. If $u_1 > u_2$ at some point $x_0$, this maximum will be strictly positive. Moreover, taking $\varepsilon > 0$ sufficiently small, the boundary conditions imply that the maximum is attained at some interior point $(x_\varepsilon,y_\varepsilon)\in\Omega\times\Omega$. Notice that the points $x_\varepsilon, y_\varepsilon$ must be close to each other, otherwise the penalization term in (5.6) will be very large and negative.

We now observe that the function of a single variable
$$x\mapsto u_1(x) - \Big(u_2(y_\varepsilon) + \frac{|x-y_\varepsilon|^2}{2\varepsilon}\Big) = u_1(x) - \varphi_1(x) \tag{5.7}$$
attains its maximum at the point $x_\varepsilon$. Hence by Lemma 2.2
$$\frac{x_\varepsilon - y_\varepsilon}{\varepsilon} = \nabla\varphi_1(x_\varepsilon)\ \in\ D^+u_1(x_\varepsilon).$$
Moreover, the function of a single variable
$$y\mapsto u_2(y) - \Big(u_1(x_\varepsilon) - \frac{|x_\varepsilon - y|^2}{2\varepsilon}\Big) = u_2(y) - \varphi_2(y) \tag{5.8}$$
attains its minimum at the point $y_\varepsilon$. Hence
$$\frac{x_\varepsilon - y_\varepsilon}{\varepsilon} = \nabla\varphi_2(y_\varepsilon)\ \in\ D^-u_2(y_\varepsilon).$$
We have thus discovered two points $x_\varepsilon, y_\varepsilon$ and a vector $p = (x_\varepsilon - y_\varepsilon)/\varepsilon$ which satisfy the conditions (i′)-(ii′).
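Before the formal proof, a small numerical illustration (our own toy choice of $u_1, u_2$, not part of the argument) of the penalization (5.6): maximizing $\Phi_\varepsilon$ over a grid, the maximizing pair $(x_\varepsilon, y_\varepsilon)$ is forced together as $\varepsilon\to 0$, and $p = (x_\varepsilon - y_\varepsilon)/\varepsilon$ supplies the common one-sided gradient.

```python
# Toy illustration of variable doubling: maximize (5.6) on a grid for
#   u1(x) = 1 - |x|   (kink at 0: D+ u1(0) = [-1, 1]),
#   u2(y) = |y| / 2   (kink at 0: D- u2(0) = [-1/2, 1/2]).
import numpy as np

u1 = lambda x: 1.0 - np.abs(x)
u2 = lambda y: 0.5 * np.abs(y)

x = np.linspace(-1.0, 1.0, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
for eps in (0.5, 0.1, 0.02):
    Phi = u1(X) - u2(Y) - (X - Y) ** 2 / (2 * eps)
    i, j = np.unravel_index(np.argmax(Phi), Phi.shape)
    print(f"eps = {eps:4.2f}:  x_eps = {x[i]:+.3f},  y_eps = {x[j]:+.3f},  "
          f"p = {(x[i] - x[j]) / eps:+.3f}")
```

In this toy case the maximum sits at the common kink $x_\varepsilon = y_\varepsilon = 0$ with $p = 0$, which indeed belongs to both $D^+u_1(0)$ and $D^-u_2(0)$.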

We now work out the details of the proof, in several steps.

1. If the conclusion fails, then there exists $x_0\in\Omega$ such that
$$u_1(x_0) - u_2(x_0) = \max_{x\in\bar\Omega}\,\big\{u_1(x) - u_2(x)\big\} = \delta > 0. \tag{5.9}$$
For $\varepsilon > 0$, call $(x_\varepsilon,y_\varepsilon)$ a point where the function $\Phi_\varepsilon$ in (5.6) attains its global maximum on the compact set $\bar\Omega\times\bar\Omega$. By (5.9) one has
$$\Phi_\varepsilon(x_\varepsilon,y_\varepsilon) \geq \delta > 0. \tag{5.10}$$

2. Call $M$ an upper bound for all values $|u_1(x)|, |u_2(x)|$, as $x\in\bar\Omega$. Then
$$\Phi_\varepsilon(x,y) \leq 2M - \frac{|x-y|^2}{2\varepsilon}, \qquad \Phi_\varepsilon(x,y)\leq 0 \quad \text{if } |x-y|^2\geq 4M\varepsilon.$$
Hence (5.10) implies
$$|x_\varepsilon - y_\varepsilon| \leq 2\sqrt{M\varepsilon}. \tag{5.11}$$

3. By the uniform continuity of the function $u_2$ on the compact set $\bar\Omega$, for $\varepsilon > 0$ sufficiently small we have
$$\big|u_2(x) - u_2(y)\big| < \frac{\delta}{2} \qquad \text{whenever } |x-y|\leq 2\sqrt{M\varepsilon}. \tag{5.12}$$
We now show that, for $\varepsilon$ small enough, the points $x_\varepsilon, y_\varepsilon$ cannot lie on the boundary of $\Omega$. For example, if $x_\varepsilon\in\partial\Omega$, then by (5.2) and (5.11)-(5.12) it follows
$$\Phi_\varepsilon(x_\varepsilon,y_\varepsilon) \leq \big(u_1(x_\varepsilon) - u_2(x_\varepsilon)\big) + \big|u_2(x_\varepsilon) - u_2(y_\varepsilon)\big| - \frac{|x_\varepsilon - y_\varepsilon|^2}{2\varepsilon} \leq 0 + \delta/2 + 0,$$
in contradiction with (5.10).

4. Having shown that $x_\varepsilon, y_\varepsilon$ are interior points, we consider the functions of one single variable $\varphi_1, \varphi_2$ defined at (5.7)-(5.8). Since $x_\varepsilon$ provides a local maximum for $u_1 - \varphi_1$ and $y_\varepsilon$ provides a local minimum for $u_2 - \varphi_2$, we conclude that
$$p_\varepsilon = \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}\ \in\ D^+u_1(x_\varepsilon)\cap D^-u_2(y_\varepsilon). \tag{5.13}$$
From the definition of viscosity sub- and supersolution we now obtain
$$u_1(x_\varepsilon) + H(x_\varepsilon,p_\varepsilon) \leq 0, \qquad u_2(y_\varepsilon) + H(y_\varepsilon,p_\varepsilon) \geq 0. \tag{5.14}$$

5. From
$$u_1(x_\varepsilon) - u_2(x_\varepsilon) \leq \Phi_\varepsilon(x_\varepsilon,y_\varepsilon) \leq u_1(x_\varepsilon) - u_2(x_\varepsilon) + \big|u_2(x_\varepsilon) - u_2(y_\varepsilon)\big| - \frac{|x_\varepsilon - y_\varepsilon|^2}{2\varepsilon}$$

it follows
$$\frac{|x_\varepsilon - y_\varepsilon|^2}{2\varepsilon} \leq \big|u_2(x_\varepsilon) - u_2(y_\varepsilon)\big|.$$
Using (5.11) and the uniform continuity of $u_2$, we thus obtain
$$0 \leq \limsup_{\varepsilon\to 0+}\ \frac{|x_\varepsilon - y_\varepsilon|^2}{2\varepsilon} \leq \limsup_{\varepsilon\to 0+}\ \big|u_2(x_\varepsilon) - u_2(y_\varepsilon)\big| = 0. \tag{5.15}$$

6. By (5.10), subtracting the second from the first inequality in (5.14) and using (5.3), we obtain
$$\delta \leq \Phi_\varepsilon(x_\varepsilon,y_\varepsilon) \leq u_1(x_\varepsilon) - u_2(y_\varepsilon) \leq \big|H(x_\varepsilon,p_\varepsilon) - H(y_\varepsilon,p_\varepsilon)\big| \leq \omega\Big(|x_\varepsilon - y_\varepsilon|\big(1 + \varepsilon^{-1}|x_\varepsilon - y_\varepsilon|\big)\Big). \tag{5.16}$$
This yields a contradiction. Indeed, by (5.15) the right hand side of (5.16) becomes arbitrarily small as $\varepsilon\to 0$.

An easy consequence of the above result is the uniqueness of solutions to the boundary value problem
$$u + H(x,\nabla u) = 0 \qquad x\in\Omega, \tag{5.17}$$
$$u = \psi \qquad x\in\partial\Omega. \tag{5.18}$$

Corollary 5.2 (Uniqueness). Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. Let the Hamiltonian function $H$ satisfy the equicontinuity assumption (5.3). Then the boundary value problem (5.17)-(5.18) admits at most one viscosity solution.

Proof. Let $u_1, u_2$ be viscosity solutions. Since $u_1$ is a subsolution and $u_2$ is a supersolution, and $u_1 = u_2$ on $\partial\Omega$, by Theorem 5.1 we conclude $u_1\leq u_2$ on $\Omega$. Interchanging the roles of $u_1$ and $u_2$ one obtains $u_2\leq u_1$, completing the proof.

By similar techniques, comparison and uniqueness results can be proved also for Hamilton-Jacobi equations of evolutionary type. Consider the Cauchy problem
$$u_t + H(t,x,\nabla u) = 0 \qquad (t,x)\in\,]0,T[\,\times\mathbb{R}^n, \tag{5.19}$$
$$u(0,x) = \bar u(x) \qquad x\in\mathbb{R}^n. \tag{5.20}$$
Here and in the sequel, it is understood that $\nabla u = (u_{x_1},\dots,u_{x_n})$ always refers to the gradient of $u$ w.r.t. the space variables.

Theorem 5.3 (Comparison). Let the function $H:[0,T]\times\mathbb{R}^n\times\mathbb{R}^n\mapsto\mathbb{R}$ satisfy the Lipschitz continuity assumptions
$$\big|H(t,x,p) - H(s,y,p)\big| \leq C\big(|t-s| + |x-y|\big)\big(1 + |p|\big), \tag{5.21}$$

$$\big|H(t,x,p) - H(t,x,q)\big| \leq C\,|p-q|. \tag{5.22}$$
Let $u, v$ be bounded, uniformly continuous sub- and supersolutions of (5.19) respectively. If $u(0,x)\leq v(0,x)$ for all $x\in\mathbb{R}^n$, then
$$u(t,x) \leq v(t,x) \qquad \text{for all } (t,x)\in[0,T]\times\mathbb{R}^n. \tag{5.23}$$

Toward this result, as a preliminary we prove

Lemma 5.4. Let $u$ be a continuous function on $[0,T]\times\mathbb{R}^n$, which provides a subsolution of (5.19) for $t\in\,]0,T[$. If $\phi\in C^1$ is such that $u - \phi$ attains a local maximum at a point $(T,x_0)$, then
$$\phi_t(T,x_0) + H\big(T,\,x_0,\,\nabla\phi(T,x_0)\big) \leq 0. \tag{5.24}$$

Proof. We can assume that $(T,x_0)$ is a point of strict local maximum for $u - \phi$. For each $\varepsilon > 0$ consider the function
$$\phi^\varepsilon(t,x) = \phi(t,x) + \frac{\varepsilon}{T-t}.$$
Each function $u - \phi^\varepsilon$ will then have a local maximum at a point $(t_\varepsilon,x_\varepsilon)$, with
$$t_\varepsilon < T, \qquad (t_\varepsilon,x_\varepsilon)\to(T,x_0) \quad \text{as } \varepsilon\to 0+.$$
Since $u$ is a subsolution, one has
$$\phi^\varepsilon_t(t_\varepsilon,x_\varepsilon) + H\big(t_\varepsilon,\,x_\varepsilon,\,\nabla\phi^\varepsilon(t_\varepsilon,x_\varepsilon)\big) \leq 0,$$
hence
$$\phi_t(t_\varepsilon,x_\varepsilon) + H\big(t_\varepsilon,\,x_\varepsilon,\,\nabla\phi(t_\varepsilon,x_\varepsilon)\big) \leq -\frac{\varepsilon}{(T-t_\varepsilon)^2} \leq 0. \tag{5.25}$$
Letting $\varepsilon\to 0+$, from (5.25) we obtain (5.24).

Proof of Theorem 5.3.

1. If (5.23) fails, then we can find $\lambda > 0$ such that
$$\sup_{t,x}\,\big\{u(t,x) - v(t,x) - 2\lambda t\big\} = \sigma > 0. \tag{5.26}$$
Assume first that the supremum in (5.26) is actually attained at a point $(t_0,x_0)$, possibly with $t_0 = T$. If both $u$ and $v$ are differentiable at such a point, we easily obtain a contradiction, because
$$u_t(t_0,x_0) + H\big(t_0,x_0,\nabla u\big) \leq 0, \qquad v_t(t_0,x_0) + H\big(t_0,x_0,\nabla v\big) \geq 0,$$
$$\nabla u(t_0,x_0) = \nabla v(t_0,x_0), \qquad u_t(t_0,x_0) - v_t(t_0,x_0) - 2\lambda \geq 0.$$

2. To extend the above argument to the general case, we face two technical difficulties. First, the function in (5.26) may not attain its global maximum over the unbounded set $[0,T]\times\mathbb{R}^n$.

Moreover, at this point of maximum the functions $u, v$ may not be differentiable. These problems are overcome by inserting a penalization term, and doubling the variables. As in the proof of Theorem 5.1, we introduce the function
$$\Phi_\varepsilon(t,x,s,y) = u(t,x) - v(s,y) - \lambda(t+s) - \varepsilon\big(|x|^2 + |y|^2\big) - \frac{1}{\varepsilon^2}\big(|t-s|^2 + |x-y|^2\big).$$
Thanks to the penalization terms, the function $\Phi_\varepsilon$ clearly admits a global maximum at a point $(t_\varepsilon,x_\varepsilon,s_\varepsilon,y_\varepsilon)\in\big(\,]0,T]\times\mathbb{R}^n\big)^2$. Choosing $\varepsilon > 0$ sufficiently small, one has
$$\Phi_\varepsilon(t_\varepsilon,x_\varepsilon,s_\varepsilon,y_\varepsilon) \geq \max_{t,x}\,\Phi_\varepsilon(t,x,t,x) \geq \sigma/2.$$

3. We now observe that the function
$$(t,x)\mapsto u(t,x) - \Big[v(s_\varepsilon,y_\varepsilon) + \lambda(t+s_\varepsilon) + \varepsilon\big(|x|^2 + |y_\varepsilon|^2\big) + \frac{1}{\varepsilon^2}\big(|t-s_\varepsilon|^2 + |x-y_\varepsilon|^2\big)\Big] = u(t,x) - \phi(t,x)$$
attains a maximum at the point $(t_\varepsilon,x_\varepsilon)$. Since $u$ is a subsolution and $\phi$ is smooth, this implies
$$\lambda + \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} + H\Big(t_\varepsilon,\,x_\varepsilon,\,\frac{2(x_\varepsilon - y_\varepsilon)}{\varepsilon^2} + 2\varepsilon x_\varepsilon\Big) \leq 0. \tag{5.27}$$
Notice that, in the case where $t_\varepsilon = T$, (5.27) follows from Lemma 5.4. Similarly, the function
$$(s,y)\mapsto v(s,y) - \Big[u(t_\varepsilon,x_\varepsilon) - \lambda(t_\varepsilon+s) - \varepsilon\big(|x_\varepsilon|^2 + |y|^2\big) - \frac{1}{\varepsilon^2}\big(|t_\varepsilon-s|^2 + |x_\varepsilon-y|^2\big)\Big] = v(s,y) - \psi(s,y)$$
attains a minimum at the point $(s_\varepsilon,y_\varepsilon)$. Since $v$ is a supersolution and $\psi$ is smooth, this implies
$$-\lambda + \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} + H\Big(s_\varepsilon,\,y_\varepsilon,\,\frac{2(x_\varepsilon - y_\varepsilon)}{\varepsilon^2} - 2\varepsilon y_\varepsilon\Big) \geq 0. \tag{5.28}$$

4. Subtracting (5.28) from (5.27) and using (5.21)-(5.22) we obtain
$$2\lambda \leq H\Big(s_\varepsilon,\,y_\varepsilon,\,\frac{2(x_\varepsilon - y_\varepsilon)}{\varepsilon^2} - 2\varepsilon y_\varepsilon\Big) - H\Big(t_\varepsilon,\,x_\varepsilon,\,\frac{2(x_\varepsilon - y_\varepsilon)}{\varepsilon^2} + 2\varepsilon x_\varepsilon\Big)$$
$$\leq C\varepsilon\big(|x_\varepsilon| + |y_\varepsilon|\big) + C\big(|t_\varepsilon - s_\varepsilon| + |x_\varepsilon - y_\varepsilon|\big)\Big(1 + \frac{|x_\varepsilon - y_\varepsilon|}{\varepsilon^2} + \varepsilon\big(|x_\varepsilon| + |y_\varepsilon|\big)\Big). \tag{5.29}$$
To reach a contradiction we need to show that the right hand side of (5.29) approaches zero as $\varepsilon\to 0$.

5. Since $u, v$ are globally bounded, the penalization terms must satisfy uniform bounds, independent of $\varepsilon$. Hence
$$|x_\varepsilon|,\ |y_\varepsilon| \leq \frac{C}{\sqrt{\varepsilon}}, \qquad |t_\varepsilon - s_\varepsilon|,\ |x_\varepsilon - y_\varepsilon| \leq C\varepsilon \tag{5.30}$$
for some constant $C$. This implies
$$\varepsilon\big(|x_\varepsilon| + |y_\varepsilon|\big) \leq 2C\sqrt{\varepsilon}. \tag{5.31}$$

To obtain a sharper estimate, we now observe that $\Phi_\varepsilon(t_\varepsilon,x_\varepsilon,s_\varepsilon,y_\varepsilon)\geq\Phi_\varepsilon(t_\varepsilon,x_\varepsilon,t_\varepsilon,x_\varepsilon)$, hence
$$u(t_\varepsilon,x_\varepsilon) - v(s_\varepsilon,y_\varepsilon) - \lambda(t_\varepsilon+s_\varepsilon) - \varepsilon\big(|x_\varepsilon|^2 + |y_\varepsilon|^2\big) - \frac{1}{\varepsilon^2}\big(|t_\varepsilon-s_\varepsilon|^2 + |x_\varepsilon-y_\varepsilon|^2\big)$$
$$\geq u(t_\varepsilon,x_\varepsilon) - v(t_\varepsilon,x_\varepsilon) - 2\lambda t_\varepsilon - 2\varepsilon|x_\varepsilon|^2,$$
and therefore
$$\frac{1}{\varepsilon^2}\big(|t_\varepsilon-s_\varepsilon|^2 + |x_\varepsilon-y_\varepsilon|^2\big) \leq v(t_\varepsilon,x_\varepsilon) - v(s_\varepsilon,y_\varepsilon) + \lambda(t_\varepsilon-s_\varepsilon) + \varepsilon\big(|x_\varepsilon|^2 - |y_\varepsilon|^2\big). \tag{5.32}$$
By the uniform continuity of $v$, the right hand side of (5.32) tends to zero as $\varepsilon\to 0$. Therefore
$$\frac{|t_\varepsilon-s_\varepsilon|^2 + |x_\varepsilon-y_\varepsilon|^2}{\varepsilon^2}\to 0 \qquad \text{as } \varepsilon\to 0. \tag{5.33}$$
By (5.30), (5.31) and (5.33), the right hand side of (5.29) also approaches zero. This yields the desired contradiction.

Corollary 5.5 (Uniqueness). Let the function $H$ satisfy the assumptions (5.21)-(5.22). Then the Cauchy problem (5.19)-(5.20) admits at most one bounded, uniformly continuous viscosity solution $u:[0,T]\times\mathbb{R}^n\mapsto\mathbb{R}$.

6 - Control systems

The time evolution of a system, whose state is described by a finite number of parameters, can usually be modelled by an ODE
$$\dot x = f(x) \qquad x\in\mathbb{R}^n.$$
Here and in the sequel the upper dot denotes a derivative w.r.t. time. In some cases, the system can be influenced also by the external input of a controller. An appropriate model is then provided by a control system, having the form
$$\dot x = f(x,u). \tag{6.1}$$
Here $x\in\mathbb{R}^n$, while the control $u:[0,T]\mapsto U$ is required to take values inside a given set $U\subseteq\mathbb{R}^m$. We denote by
$$\mathcal{U} = \big\{u:\mathbb{R}\mapsto\mathbb{R}^m \text{ measurable},\ u(t)\in U \text{ for a.e. } t\big\}$$
the set of admissible control functions. To guarantee local existence and uniqueness of solutions, it is natural to assume that the map $f:\mathbb{R}^n\times\mathbb{R}^m\mapsto\mathbb{R}^n$ is Lipschitz continuous w.r.t. $x$ and continuous w.r.t. $u$. The solution of the Cauchy problem (6.1) with initial condition
$$x(t_0) = x_0 \tag{6.2}$$
will be denoted as $t\mapsto x(t;\,t_0,x_0,u)$. It is clear that, as $u$ ranges over the whole set of control functions, one obtains a family of possible trajectories for the system. Equivalently, these trajectories can be characterized as the solutions to the differential inclusion
$$\dot x\in F(x), \qquad F(x) = \big\{f(x,\omega);\ \omega\in U\big\}. \tag{6.3}$$

Example 6.1. Call $x(t)\in\mathbb{R}^2$ the position of a boat on a river, and let $v(x)$ be the velocity of the water at the point $x$. If the boat simply drifts along with the current, its position is described by the differential equation
$$\dot x = v(x).$$
If we assume that the boat is powered by an engine, and can move in any direction with speed $\leq\rho$ (relative to the water), the evolution can be modelled by the control system
$$\dot x = f(x,u) = v(x) + u, \qquad |u|\leq\rho.$$
This is equivalent to a differential inclusion where the sets of velocities are balls with radius $\rho$ (Figure 6.1):
$$\dot x\in F(x) = B\big(v(x);\,\rho\big).$$

[Figure 6.1: the possible velocities of a boat on a river.]

Example 6.2 (cart on a rail). Consider a cart which can move without friction along a straight rail (Figure 6.2). For simplicity, assume that it has unit mass. Let $y(0) = \bar y$ be its initial position and $\dot y(0) = \bar v$ be its initial velocity. If no forces are present, its future position is simply given by
$$y(t) = \bar y + t\bar v.$$
Next, assume that a controller is able to push the cart, with an external force $u = u(t)$. The evolution of the system is then determined by the second order equation
$$\ddot y(t) = u(t). \tag{6.4}$$
Calling $x_1(t) = y(t)$ and $x_2(t) = \dot y(t)$ respectively the position and the velocity of the cart at time $t$, the equation (6.4) can be written as a first order control system:
$$(\dot x_1,\,\dot x_2) = (x_2,\,u). \tag{6.5}$$
Given the initial condition $x_1(0) = \bar y$, $x_2(0) = \bar v$, the solution of (6.5) is provided by
$$x_1(t) = \bar y + \bar v\,t + \int_0^t (t-s)\,u(s)\,ds, \qquad x_2(t) = \bar v + \int_0^t u(s)\,ds.$$

[Figure 6.2: a cart moving along a straight, frictionless rail, pushed with an external force $u(t)$.]

Assuming that the force satisfies the constraint $|u(t)|\leq 1$, the control system (6.5) is equivalent to the differential inclusion
$$(\dot x_1,\,\dot x_2)\in F(x_1,x_2) = \big\{(x_2,\,\omega);\ -1\leq\omega\leq 1\big\}.$$
We now consider the problem of steering the system to the origin $0\in\mathbb{R}^2$. In other words, we want the cart to be at the origin with zero speed. For example, if the initial condition is $(\bar y,\bar v) = (2,2)$, this goal is achieved by the open-loop control
$$\tilde u(t) = \begin{cases} -1 & \text{if } 0\leq t < 4,\\ 1 & \text{if } 4\leq t < 6,\\ 0 & \text{if } t\geq 6. \end{cases}$$
A direct computation shows that $\big(x_1(t),\,x_2(t)\big) = (0,0)$ for $t\geq 6$; a numerical check is sketched below. Notice, however, that the above control would not accomplish the same task in connection with any other initial data $(\bar y,\bar v)$ different from $(2,2)$. This is a consequence of the backward uniqueness of solutions to the differential equation (6.5).

A related problem is that of asymptotic stabilization. In this case, we seek a feedback control function $u = u(x_1,x_2)$ such that, for every initial data $(\bar y,\bar v)$, the corresponding solution of the Cauchy problem
$$(\dot x_1,\,\dot x_2) = \big(x_2,\ u(x_1,x_2)\big), \qquad (x_1,x_2)(0) = (\bar y,\bar v)$$
approaches the origin as $t\to\infty$, i.e.
$$\lim_{t\to\infty}\,(x_1,x_2)(t) = (0,0).$$
There are several feedback controls which accomplish this task. For example, one can take $u(x_1,x_2) = -x_1 - x_2$.
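Here is the promised check (our own script) that the open-loop control $\tilde u$ steers $(\bar y,\bar v) = (2,2)$ to the origin at time $t = 6$:

```python
# Check that u(t) = -1 on [0,4), +1 on [4,6) steers the cart (6.5)
# from (2, 2) to rest at the origin at t = 6.
import numpy as np
from scipy.integrate import solve_ivp

u_tilde = lambda t: -1.0 if t < 4.0 else (1.0 if t < 6.0 else 0.0)

cart = lambda t, x: [x[1], u_tilde(t)]          # (x1', x2') = (x2, u)
sol = solve_ivp(cart, (0.0, 6.0), [2.0, 2.0], max_step=0.01)
print(sol.y[:, -1])                              # approximately [0, 0]
```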

The above example is a special case of a control system having the form
$$\dot x = f(x) + g(x)\,u, \qquad u\in[-1,1],$$
where $f,g$ are vector fields on $\mathbb{R}^n$. This is equivalent to a differential inclusion
$$\dot x\in F(x) = \big\{f(x) + g(x)\,u;\ u\in[-1,1]\big\},$$
where each set $F(x)$ of possible velocities is a segment (Figure 6.3). Systems of this form, linear w.r.t. the control variable $u$, have been extensively studied using techniques from differential geometry.

[Figure 6.3: velocity sets for a planar system which is linear w.r.t. the control. Figure 6.4: the reachable set $R(T)$ from the point $x_0$.]

Given a control system in the general form (6.1), the reachable set at time $T$, starting from $x_0$ at time $t_0$ (Figure 6.4), will be denoted by
$$R(T) = \big\{x(T;\,t_0,x_0,u);\ u\in\mathcal{U}\big\}. \tag{6.6}$$
The control $u$ can be assigned as an open loop control, as a function of time: $t\mapsto u(t)$, or as a feedback control, as a function of the state: $x\mapsto u(x)$.

Among the major issues that one can study in connection with the control system (6.1) are the following.

1 - Dynamics. Starting from a point $x_0$, describe the set of all possible trajectories. Study the properties of the reachable set $R(T)$. In particular, one may determine whether $R(T)$ is closed, bounded, convex, with non-empty interior, etc.

2 - Stabilization. For each initial state $x_0$, find a control $u(\cdot)$ that steers the system toward the origin, so that
$$x(t;\,0,x_0,u)\to 0 \qquad \text{as } t\to+\infty.$$
Preferably, the stabilizing control should be found in feedback form. One thus looks for a function $u = u(x)$ such that all trajectories of the system $\dot x = f\big(x,\,u(x)\big)$ approach the origin asymptotically as $t\to\infty$.

3 - Optimal Control. Find a control $u(\cdot)\in\mathcal{U}$ which is optimal w.r.t. a given cost criterion. For example, given the initial condition (6.2), one may seek to minimize the cost
$$J(u) = \int_{t_0}^T L\big(x(t),\,u(t)\big)\,dt + \psi\big(x(T)\big)$$
over all control functions $u\in\mathcal{U}$. Here it is understood that $x(t) = x(t;\,t_0,x_0,u)$, while $L:\mathbb{R}^n\times U\mapsto\mathbb{R}$, $\psi:\mathbb{R}^n\mapsto\mathbb{R}$ are continuous functions. We call $L$ the running cost and $\psi$ the terminal cost.

7 - The Pontryagin Maximum Principle

In connection with the system
$$\dot x = f(x,u), \qquad u(t)\in U, \quad t\in[0,T], \qquad x(0) = x_0, \tag{7.1}$$
we consider the Mayer problem:
$$\max_{u\in\mathcal{U}}\ \psi\big(x(T,u)\big). \tag{7.2}$$
Here there is no running cost, but we have a terminal payoff to be maximized over all admissible controls.

Let $t\mapsto u^*(t)$ be an optimal control function, and let $t\mapsto x^*(t) = x(t;\,0,x_0,u^*)$ be the corresponding optimal trajectory (Figure 7.1). We seek necessary conditions that will be satisfied by the control $u^*(\cdot)$.

As a preliminary, we recall some basic facts from ODE theory. Let $t\mapsto x(t)$ be a solution of the ODE
$$\dot x = g(t,x). \tag{7.3}$$
Assume that $g:[0,T]\times\mathbb{R}^n\mapsto\mathbb{R}^n$ is measurable w.r.t. $t$ and continuously differentiable w.r.t. $x$. Consider a family of nearby solutions (Figure 7.2), say $t\mapsto x^\varepsilon(t)$. Assume that at a given time $s$ one has
$$\lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(s) - x(s)}{\varepsilon} = v(s).$$
Then the first order tangent vector
$$v(t) = \lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(t) - x(t)}{\varepsilon}$$

is well defined for every $t\in[0,T]$, and satisfies the linearized evolution equation
$$\dot v(t) = A(t)\,v(t), \tag{7.4}$$
where
$$A(t) = D_x g\big(t,\,x(t)\big) \tag{7.5}$$
is the $n\times n$ Jacobian matrix of first order partial derivatives of $g$ w.r.t. $x$. Its entries are $A_{ij} = \partial g_i/\partial x_j$. Using the Landau notation, we can write
$$x^\varepsilon(t) = x(t) + \varepsilon\,v(t) + o(\varepsilon),$$
where $o(\varepsilon)$ denotes an infinitesimal of higher order w.r.t. $\varepsilon$. The relations (7.4)-(7.5) are formally derived by equating terms of order $\varepsilon$ in the first order approximation
$$\dot x^\varepsilon(t) = \dot x(t) + \varepsilon\,\dot v(t) + o(\varepsilon) = g\big(t,\,x^\varepsilon(t)\big) = \dot x(t) + D_x g\big(t,\,x(t)\big)\,\varepsilon\,v(t) + o(\varepsilon).$$

[Figure 7.1: the optimal trajectory $x^*(\cdot)$ of the Mayer problem (7.2), touching the level set $\psi = \psi_{\max}$ on the reachable set $R(T)$. Figure 7.2: the evolution of a first order perturbation $v(t)$.]

Together with (7.4), it is useful to consider the adjoint system
$$\dot p(t) = -p(t)\,A(t). \tag{7.6}$$
We regard $p\in\mathbb{R}^n$ as a row vector, while $v\in\mathbb{R}^n$ is a column vector. Notice that, if $t\mapsto p(t)$ and $t\mapsto v(t)$ are any solutions of (7.6) and of (7.4) respectively, then the product $p(t)\,v(t)$ is constant in time. Indeed,
$$\frac{d}{dt}\big(p(t)\,v(t)\big) = \dot p(t)\,v(t) + p(t)\,\dot v(t) = \big[-p(t)A(t)\big]v(t) + p(t)\big[A(t)v(t)\big] = 0. \tag{7.7}$$

After these preliminaries, we can now derive some necessary conditions for optimality. Since $u^*$ is optimal, the payoff $\psi\big(x(T,u^*)\big)$ cannot be further increased by any perturbation of the control $u^*(\cdot)$. Fix a time $\tau\in\,]0,T]$ and a control value $\omega\in U$. For $\varepsilon > 0$ small, consider the needle variation $u_\varepsilon\in\mathcal{U}$ (Figure 7.3):
$$u_\varepsilon(t) = \begin{cases} \omega & \text{if } t\in[\tau-\varepsilon,\ \tau],\\ u^*(t) & \text{if } t\notin[\tau-\varepsilon,\ \tau]. \end{cases} \tag{7.8}$$

[Figure 7.3: a needle variation of the control $u^*(\cdot)$.]

Call $t\mapsto x^\varepsilon(t) = x(t,u_\varepsilon)$ the perturbed trajectory. We shall compute the terminal point $x^\varepsilon(T) = x(T,u_\varepsilon)$ and check that the value of $\psi$ is not increased by this perturbation. Assuming that the optimal control $u^*$ is continuous at time $t = \tau$, we have
$$v(\tau) = \lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(\tau) - x^*(\tau)}{\varepsilon} = f\big(x^*(\tau),\,\omega\big) - f\big(x^*(\tau),\,u^*(\tau)\big). \tag{7.9}$$
Indeed, $x^\varepsilon(\tau-\varepsilon) = x^*(\tau-\varepsilon)$ and on the small interval $[\tau-\varepsilon,\,\tau]$ we have
$$\dot x^\varepsilon\approx f\big(x^*(\tau),\,\omega\big), \qquad \dot x^*\approx f\big(x^*(\tau),\,u^*(\tau)\big).$$

Since $u_\varepsilon = u^*$ on the remaining interval $t\in[\tau,\,T]$, as in (7.4) the evolution of the tangent vector
$$v(t) = \lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(t) - x^*(t)}{\varepsilon} \qquad t\in[\tau,\,T]$$
is governed by the linear equation
$$\dot v(t) = A(t)\,v(t), \tag{7.10}$$
with $A(t) = D_x f\big(x^*(t),\,u^*(t)\big)$. By maximality, $\psi\big(x^\varepsilon(T)\big)\leq\psi\big(x^*(T)\big)$, therefore (Figure 7.4)
$$\nabla\psi\big(x^*(T)\big)\cdot v(T) \leq 0. \tag{7.11}$$

[Figure 7.4: transporting the vector $p(T) = \nabla\psi\big(x^*(T)\big)$ backward in time, along the optimal trajectory.]

Summing up, the previous analysis has established the following: for every time $\tau\in\,]0,T]$ where $u^*$ is continuous and every admissible control value $\omega\in U$, we can generate the vector
$$v(\tau) = f\big(x^*(\tau),\,\omega\big) - f\big(x^*(\tau),\,u^*(\tau)\big)$$
and propagate it forward in time, by solving the linearized equation (7.10). The inequality (7.11) is then a necessary condition for optimality.

Instead of propagating the (infinitely many) vectors $v(\tau)$ forward in time, it is more convenient to propagate the single vector $\nabla\psi$ backward. We thus define the row vector $t\mapsto p(t)$ as the solution of the terminal value problem
$$\dot p(t) = -p(t)\,A(t), \qquad p(T) = \nabla\psi\big(x^*(T)\big). \tag{7.12}$$
By (7.7) one has $p(t)\,v(t) = p(T)\,v(T)$ for every $t$. In particular, (7.11) implies that
$$p(\tau)\cdot\Big[f\big(x^*(\tau),\,\omega\big) - f\big(x^*(\tau),\,u^*(\tau)\big)\Big] = \nabla\psi\big(x^*(T)\big)\cdot v(T) \leq 0$$

for every $\omega\in U$. Therefore (see Figure 7.5)
$$p(\tau)\cdot\dot x^*(\tau) = p(\tau)\cdot f\big(x^*(\tau),\,u^*(\tau)\big) = \max_{\omega\in U}\ \big\{p(\tau)\cdot f\big(x^*(\tau),\,\omega\big)\big\}. \tag{7.13}$$

[Figure 7.5: for a.e. time $\tau\in\,]0,T]$, the speed $\dot x^*(\tau)$ corresponding to the optimal control $u^*(\tau)$ is the one having inner product with $p(\tau)$ as large as possible.]

With some additional care, one can show that the maximality condition (7.13) holds at every time $\tau$ which is a Lebesgue point of $u^*(\cdot)$, hence almost everywhere. The above result can be restated as

Theorem 7.1 (Pontryagin Maximum Principle; Mayer problem, free terminal point). Consider the control system
$$\dot x = f(x,u), \qquad u(t)\in U, \quad t\in[0,T], \qquad x(0) = x_0.$$
Let $t\mapsto u^*(t)$ be an optimal control and $t\mapsto x^*(t) = x(t,u^*)$ be the corresponding optimal trajectory for the maximization problem
$$\max_{u\in\mathcal{U}}\ \psi\big(x(T,u)\big).$$
Define the vector $t\mapsto p(t)$ as the solution to the linear adjoint system
$$\dot p(t) = -p(t)\,A(t), \qquad A(t) = D_x f\big(x^*(t),\,u^*(t)\big), \tag{7.14}$$
with terminal condition
$$p(T) = \nabla\psi\big(x^*(T)\big). \tag{7.15}$$
Then, for almost every $\tau\in[0,T]$ the following maximality condition holds:
$$p(\tau)\cdot f\big(x^*(\tau),\,u^*(\tau)\big) = \max_{\omega\in U}\ \big\{p(\tau)\cdot f\big(x^*(\tau),\,\omega\big)\big\}. \tag{7.16}$$

In the above theorem, $x, f, v$ represent column vectors, $D_x f$ is the $n\times n$ Jacobian matrix of first order partial derivatives of $f$ w.r.t. $x$, while $p$ is a row vector. In coordinates, the equations (7.14)-(7.15) can be rewritten as
$$\dot p_i(t) = -\sum_{j=1}^n p_j(t)\,\frac{\partial f_j}{\partial x_i}\big(x^*(t),\,u^*(t)\big), \qquad p_i(T) = \frac{\partial\psi}{\partial x_i}\big(x^*(T)\big),$$

while (7.16) takes the form
$$\sum_{i=1}^n p_i(t)\,f_i\big(x^*(t),\,u^*(t)\big) = \max_{\omega\in U}\ \Big\{\sum_{i=1}^n p_i(t)\,f_i\big(x^*(t),\,\omega\big)\Big\}.$$

Relying on the Maximum Principle, the computation of the optimal control requires two steps:

STEP 1: solve the pointwise maximization problem (7.16), obtaining the optimal control as a function of $x, p$, i.e.
$$u^*(x,p) = \underset{\omega\in U}{\operatorname{argmax}}\ \big\{p\cdot f(x,\omega)\big\}. \tag{7.17}$$

STEP 2: solve the two-point boundary value problem
$$\begin{cases} \dot x = f\big(x,\,u^*(x,p)\big), & x(0) = x_0,\\[1mm] \dot p = -p\,D_x f\big(x,\,u^*(x,p)\big), & p(T) = \nabla\psi\big(x(T)\big). \end{cases} \tag{7.18}$$

In general, the function $u^* = u^*(x,p)$ in (7.17) is highly nonlinear. It may be multivalued or discontinuous. The two-point boundary value problem (7.18) can be solved by a shooting method: guess an initial value $p(0) = p_0$ and solve the corresponding Cauchy problem. Try to adjust the value of $p_0$ so that the terminal values $x(T), p(T)$ satisfy the given conditions; a minimal sketch is given below.
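A minimal shooting sketch (our own toy problem, not from the notes): for the double integrator $\dot x_1 = x_2$, $\dot x_2 = u$, $u\in[-1,1]$, with payoff $\psi(x) = x_1(T)$, Step 1 gives $u^*(x,p) = \operatorname{sign}(p_2)$, and Step 2 reduces to adjusting $p(0)$ until $p(T) = \nabla\psi\big(x(T)\big) = (1,0)$.

```python
# Shooting method for the two-point problem (7.18), on a toy Mayer problem:
# maximize x1(T) for x1' = x2, x2' = u, |u| <= 1, x(0) = (0,0), T = 2.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

T = 2.0

def ode(t, z):
    x1, x2, p1, p2 = z
    u = np.sign(p2)                    # Step 1: pointwise maximization (7.17)
    return [x2, u, 0.0, -p1]           # state, plus adjoint p' = -p Dxf

def residual(p0):
    z0 = [0.0, 0.0, p0[0], p0[1]]      # guessed initial adjoint p(0)
    zT = solve_ivp(ode, (0.0, T), z0, rtol=1e-8).y[:, -1]
    return [zT[2] - 1.0, zT[3] - 0.0]  # enforce p(T) = grad psi = (1, 0)

p0 = fsolve(residual, [0.5, 0.5])
print("p(0) =", p0)                    # expect (1, T): then u* = +1 throughout
```

For this linear-adjoint example the residual is linear in $p_0$, so the shooting iteration converges at once; in genuinely nonlinear problems the choice of the initial guess can be delicate.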

Example 7.2 (linear pendulum). Let $q(t)$ be the position of a linearized pendulum with unit mass, controlled by an external force with magnitude $|u(t)|\leq 1$. Then $q(\cdot)$ satisfies the second order ODE
$$\ddot q(t) + q(t) = u(t), \qquad q(0) = \dot q(0) = 0, \qquad u(t)\in[-1,\,1].$$
We wish to maximize the terminal displacement $q(T)$. Introducing the variables $x_1 = q$, $x_2 = \dot q$, we thus seek
$$\max_{u\in\mathcal{U}}\ x_1(T,u),$$
among all trajectories of the system
$$\begin{cases} \dot x_1 = x_2, & x_1(0) = 0,\\ \dot x_2 = u - x_1, & x_2(0) = 0. \end{cases}$$
Let $t\mapsto x^*(t) = x(t,u^*)$ be an optimal trajectory. The linearized equation for a tangent vector is
$$\begin{pmatrix} \dot v_1\\ \dot v_2 \end{pmatrix} = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\begin{pmatrix} v_1\\ v_2 \end{pmatrix}.$$
The corresponding adjoint vector $p = (p_1,p_2)$ satisfies
$$(\dot p_1,\,\dot p_2) = -(p_1,\,p_2)\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \qquad (p_1,p_2)(T) = \nabla\psi\big(x^*(T)\big) = (1,\,0), \tag{7.19}$$
because $\psi(x) = x_1$. In this special linear case, we can explicitly solve (7.19) without needing to know $x^*, u^*$. An easy computation yields
$$(p_1,\,p_2)(t) = \big(\cos(T-t),\ \sin(T-t)\big). \tag{7.20}$$
For each $t$, we must now choose the value $u^*(t)\in[-1,1]$ so that
$$p_1\,x_2 + p_2\big(-x_1 + u^*\big) = \max_{\omega\in[-1,1]}\ \big\{p_1\,x_2 + p_2(-x_1 + \omega)\big\}.$$
By (7.20), the optimal control is
$$u^*(t) = \operatorname{sign}\,p_2(t) = \operatorname{sign}\big(\sin(T-t)\big).$$

[Figure 7.6: the bang-bang control $u = \pm 1$ and the corresponding trajectory in the $(x_1,x_2) = (q,\dot q)$ plane, together with the adjoint vector $p$.]
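A quick simulation (ours) comparing the bang-bang control found above with a constant force, for $T = 3\pi$; by resonance, the PMP control accumulates displacement on every swing.

```python
# Example 7.2 with T = 3*pi: the PMP control u*(t) = sign(sin(T - t))
# versus the naive constant control u = 1.
import numpy as np
from scipy.integrate import solve_ivp

T = 3 * np.pi
pendulum = lambda t, x, u: [x[1], u(t) - x[0]]     # x1' = x2, x2' = u - x1

for name, u in [("u* (bang-bang)", lambda t: np.sign(np.sin(T - t))),
                ("u = 1 constant", lambda t: 1.0)]:
    sol = solve_ivp(pendulum, (0.0, T), [0.0, 0.0], args=(u,), max_step=0.01)
    print(f"{name}:  x1(T) = {sol.y[0, -1]:.3f}")   # ~6.0 versus ~2.0
```

By Duhamel's formula $x_1(T) = \int_0^T \sin(T-s)\,u(s)\,ds$, the bang-bang control gives $\int_0^{3\pi}|\sin s|\,ds = 6$, while $u\equiv 1$ gives only $2$.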

Example 7.3. Consider the problem of maximizing $x_3(T)$, over all controls $u:[0,T]\mapsto[-1,1]$, for the system on $\mathbb{R}^3$
$$\begin{cases} \dot x_1 = u, & x_1(0) = 0,\\ \dot x_2 = -x_1, & x_2(0) = 0,\\ \dot x_3 = x_2 - x_1^2, & x_3(0) = 0. \end{cases} \tag{7.21}$$
The adjoint equations take the form
$$(\dot p_1,\,\dot p_2,\,\dot p_3) = \big(p_2 + 2x_1 p_3,\ -p_3,\ 0\big), \qquad (p_1,p_2,p_3)(T) = (0,\,0,\,1). \tag{7.22}$$
Maximizing the inner product $p\cdot\dot x$, we obtain the optimality conditions for the control $u^*$:
$$p_1\,u^* + p_2(-x_1) + p_3(x_2 - x_1^2) = \max_{\omega\in[-1,1]}\ \big\{p_1\,\omega + p_2(-x_1) + p_3(x_2 - x_1^2)\big\}, \tag{7.23}$$
hence
$$u^* = \begin{cases} 1 & \text{if } p_1 > 0,\\ \in[-1,1] & \text{if } p_1 = 0,\\ -1 & \text{if } p_1 < 0. \end{cases}$$
Solving the terminal value problem (7.22) for $p_2, p_3$ we find
$$p_3(t)\equiv 1, \qquad p_2(t) = T - t.$$
The function $p_1$ can now be found from the equations
$$\ddot p_1 = -1 + 2u^* = -1 + 2\,\operatorname{sign}(p_1), \qquad p_1(T) = 0, \qquad \dot p_1(0) = p_2(0) = T,$$
with the convention $\operatorname{sign}(0) = [-1,1]$. The only solution is found to be
$$p_1(t) = \begin{cases} -\dfrac{3}{2}\Big(\dfrac{T}{3} - t\Big)^2 & \text{if } 0\leq t\leq T/3,\\[2mm] 0 & \text{if } T/3\leq t\leq T. \end{cases}$$
The optimal control is
$$u^*(t) = \begin{cases} -1 & \text{if } 0\leq t\leq T/3,\\ 1/2 & \text{if } T/3 < t\leq T. \end{cases}$$
Observe that on the interval $[T/3,\,T]$ the optimal control is derived not from the maximality condition (7.23) but from the equation $\ddot p_1 = (-1 + 2u^*)\equiv 0$. An optimal control with this property is called singular.

One should be aware that the Pontryagin Maximum Principle is a necessary condition, not sufficient for optimality.

Example 7.4. Consider the problem of maximizing $x_2(T)$, for the system with dynamics
$$\begin{cases} \dot x_1 = u, & x_1(0) = 0,\\ \dot x_2 = x_1^2, & x_2(0) = 0, \end{cases} \qquad u(t)\in[-1,1].$$

[Figure 7.7: the graph of $p_1(t)$, with $\ddot p_1 = -3$ on $[0,\,T/3]$ and $p_1\equiv 0$ on $[T/3,\,T]$.]

The control $u^*(t)\equiv 0$ yields the trajectory $\big(x_1(t),\,x_2(t)\big)\equiv(0,0)$. This solution satisfies the PMP. Indeed, the adjoint vector $\big(p_1(t),\,p_2(t)\big)\equiv(0,\,1)$ satisfies
$$\begin{cases} \dot p_1 = -2\,p_2\,x_1\equiv 0, & p_1(T) = 0,\\ \dot p_2 = 0, & p_2(T) = 1, \end{cases}$$
$$p_1(t)\,u^*(t) + p_2(t)\,x_1^2(t) = 0 = \max_{\omega\in[-1,1]}\ \big\{p_1(t)\,\omega + p_2(t)\,x_1^2(t)\big\}.$$
However, in this solution the terminal value $x_2(T) = 0$ provides the global minimum, not the global maximum! Any control $t\mapsto u(t)\in[-1,1]$ which is not identically zero yields a terminal value $x_2(T) > 0$. In this example, there are two optimal controls: $u_1^*(t)\equiv 1$ and $u_2^*(t)\equiv -1$.
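A direct check (our own script) of Example 7.4 with $T = 1$: the PMP-compatible control $u\equiv 0$ gives the worst possible payoff, while $u\equiv\pm 1$ both give $x_2(T) = 1/3$.

```python
# Example 7.4, T = 1: u = 0 satisfies the PMP but minimizes x2(T).
import numpy as np
from scipy.integrate import solve_ivp

dyn = lambda t, x, u: [u(t), x[0] ** 2]            # x1' = u, x2' = x1^2

for name, u in [("u =  0", lambda t: 0.0), ("u = +1", lambda t: 1.0),
                ("u = -1", lambda t: -1.0)]:
    sol = solve_ivp(dyn, (0.0, 1.0), [0.0, 0.0], args=(u,), rtol=1e-8)
    print(f"{name}:  x2(T) = {sol.y[1, -1]:.4f}")  # 0, 1/3, 1/3
```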

8 - Extensions of the PMP

In connection with the control system
$$\dot x = f(t,x,u), \qquad u(t)\in U, \qquad x(0) = x_0, \tag{8.1}$$
the more general optimization problem with terminal payoff and running cost
$$\max_{u\in\mathcal{U}}\ \Big\{\psi\big(x(T,u)\big) - \int_0^T L\big(t,\,x(t),\,u(t)\big)\,dt\Big\}$$
can be easily reduced to a Mayer problem with only terminal payoff. Indeed, it suffices to introduce an additional variable $x_{n+1}$ which evolves according to
$$\dot x_{n+1} = L\big(t,\,x(t),\,u(t)\big), \qquad x_{n+1}(0) = 0,$$
and consider the maximization problem
$$\max_{u\in\mathcal{U}}\ \Big\{\psi\big(x(T,u)\big) - x_{n+1}(T,u)\Big\}.$$

Another important extension deals with the case where terminal constraints are given, say $x(T)\in S$, where the set $S$ is defined as
$$S = \big\{x\in\mathbb{R}^n;\ \varphi_i(x) = 0,\ i = 1,\dots,m\big\}.$$
Assume that, at a given point $x^*\in S$, the $m+1$ gradients $\nabla\psi,\,\nabla\varphi_1,\dots,\nabla\varphi_m$ are linearly independent. Then the tangent space to $S$ at $x^*$ is
$$T_S = \big\{v\in\mathbb{R}^n;\ \nabla\varphi_i(x^*)\cdot v = 0 \quad i = 1,\dots,m\big\}, \tag{8.2}$$
while the tangent cone to the set
$$S^+ = \big\{x\in S;\ \psi(x)\geq\psi(x^*)\big\}$$
is
$$T_{S^+} = \big\{v\in\mathbb{R}^n;\ \nabla\psi(x^*)\cdot v\geq 0,\ \nabla\varphi_i(x^*)\cdot v = 0 \quad i = 1,\dots,m\big\}. \tag{8.3}$$
When $x^* = x^*(T)$ is the terminal point of an admissible trajectory, we think of $T_{S^+}$ as the cone of profitable directions, i.e. those directions in which we should like to move the terminal point, in order to increase the value of $\psi$ and still satisfy the constraint $x(T)\in S$ (Figure 8.1).

[Figure 8.1: the tangent cone $T_S$ and the cone of profitable directions $T_{S^+}$.]

Lemma 8.1. A vector $p\in\mathbb{R}^n$ satisfies
$$p\cdot v \geq 0 \qquad \text{for all } v\in T_{S^+} \tag{8.4}$$
if and only if it can be written as a linear combination
$$p = \lambda_0\,\nabla\psi(x^*) + \sum_{i=1}^m \lambda_i\,\nabla\varphi_i(x^*) \tag{8.5}$$
with $\lambda_0\geq 0$.

Proof. Define the vectors
$$w_0 = \nabla\psi(x^*), \qquad w_i = \nabla\varphi_i(x^*) \quad i = 1,\dots,m.$$
By our previous assumption, these vectors are linearly independent. We can thus find additional vectors $w_j$, $j = m+1,\dots,n-1$, so that $\big\{w_0,\,w_1,\dots,w_m,\,w_{m+1},\dots,w_{n-1}\big\}$ is a basis of $\mathbb{R}^n$. Let $\big\{v_0,\,v_1,\dots,v_m,\,v_{m+1},\dots,v_{n-1}\big\}$ be the dual basis, so that
$$v_i\cdot w_j = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i\neq j. \end{cases}$$
We observe that $v\in T_{S^+}$ if and only if
$$v = c_0\,v_0 + \sum_{i=m+1}^{n-1} c_i\,v_i \tag{8.6}$$
for some $c_0\geq 0$, $c_i\in\mathbb{R}$. An arbitrary vector $p\in\mathbb{R}^n$ can now be written as
$$p = \lambda_0\,w_0 + \sum_{i=1}^m \lambda_i\,w_i + \sum_{i=m+1}^{n-1} \lambda_i\,w_i.$$
Moreover, every vector $v\in T_{S^+}$ can be decomposed as in (8.6). Therefore
$$p\cdot v = \lambda_0\,c_0 + \sum_{i=m+1}^{n-1} \lambda_i\,c_i.$$
It is now clear that (8.4) holds if and only if $\lambda_0\geq 0$ and $\lambda_i = 0$ for all $i = m+1,\dots,n-1$.
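The proof is constructive and easy to test numerically. A small sketch (our own random data) in $\mathbb{R}^3$ with $m = 1$: build the dual basis, sample vectors of $T_{S^+}$, and check that (8.4) holds precisely for combinations (8.5) with $\lambda_0\geq 0$.

```python
# Lemma 8.1 in R^3 with m = 1, on random data.
import numpy as np

rng = np.random.default_rng(0)
w0 = rng.normal(size=3)                  # stands for grad psi(x*)
w1 = rng.normal(size=3)                  # stands for grad phi_1(x*)
w2 = np.cross(w0, w1)                    # completes {w0, w1} to a basis
V = np.linalg.inv(np.column_stack([w0, w1, w2])).T
v0, v2 = V[:, 0], V[:, 2]                # dual basis: T_S+ = {c0 v0 + c2 v2, c0 >= 0}

def satisfies_84(p, trials=2000):
    c0 = np.abs(rng.normal(size=trials))            # c0 >= 0
    c2 = rng.normal(size=trials)                    # c2 arbitrary
    v = np.outer(c0, v0) + np.outer(c2, v2)         # sampled v in T_S+
    return bool(np.all(v @ p >= -1e-9))

print(satisfies_84( 2.0 * w0 + 0.7 * w1))   # lambda_0 = +2  ->  True
print(satisfies_84(-2.0 * w0 + 0.7 * w1))   # lambda_0 = -2  ->  False
print(satisfies_84(w0 + w2))                # extra w2 component  ->  False
```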

Next, consider a trajectory $t\mapsto x^*(t) = x(t,u^*)$, generated by the control $u^*(\cdot)$. To test its optimality, we need to perturb $u^*$ in such a way that the terminal point $x(T)$ still lies in the admissible set $S$.

Example 8.2. Consider the problem of maximizing $x_3(T)$, for a control system of the general form (8.1), with terminal constraint
$$x(T)\in S = \big\{x = (x_1,x_2,x_3);\ x_1^2 + x_2^2 = 1\big\}.$$
Let $u^*(\cdot)$ be an admissible control, such that the trajectory $t\mapsto x^*(t) = x(t,u^*)$ reaches the terminal point $x^*(T) = (1,0,0)$.

Assume that we can construct a family of needle variations $u_\varepsilon(\cdot)$ generating the vector
$$v_1(T) = \lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(T) - x^*(T)}{\varepsilon} = (1,\,0,\,1).$$
This does not rule out the optimality of $u^*$, because none of the points $x^\varepsilon(T)$ for $\varepsilon > 0$ lies on the target set $S$. Next, assume that we can construct a family of needle variations $u_\varepsilon(\cdot)$ generating the vector
$$v_2(T) = \lim_{\varepsilon\to 0}\ \frac{x^\varepsilon(T) - x^*(T)}{\varepsilon} = (0,\,0,\,1).$$
Notice that the vector $v_2$ is now tangent to the target set $S$ at the point $x^*(T)$. In fact, $v_2\in T_{S^+}$. Yet, this does not guarantee that any of the points $x^\varepsilon(T)$ should lie on $S$ (see Figure 8.2, left). Again, we cannot rule out the optimality of $u^*(\cdot)$.

In order to obtain perturbations with terminal point $x^\varepsilon(T)\in S$, the key idea is to construct combinations of the needle variations (Figure 8.3, left). If a needle variation $(\tau_1,\omega_1)$ produces a tangent vector $v_1$ and the needle variation $(\tau_2,\omega_2)$ produces the tangent vector $v_2$, then by combining them we can produce any linear combination with positive coefficients $\theta_1 v_1 + \theta_2 v_2$, continuously depending on $\theta = (\theta_1,\theta_2)\in\mathbb{R}^2_+$ (Figure 8.3, right). This guarantees that some intermediate value $x^\theta_\varepsilon(T)$ actually lies on the target set $S$ (Figure 8.2, right).

[Figure 8.2: the tangent vectors $v_1, v_2$ at $x^*(T)$ and the target set $S = \{\varphi(x) = 0\}$.]

[Figure 8.3: combination of two needle variations, and the terminal point $x^\theta_\varepsilon(T)$ depending continuously on $\theta$.]

As in the previous section, given $\tau\in\,]0,T]$ and $\omega\in U$, consider the family of needle variations
$$u_\varepsilon(t) = \begin{cases} \omega & \text{if } t\in[\tau-\varepsilon,\ \tau],\\ u^*(t) & \text{if } t\notin[\tau-\varepsilon,\ \tau]. \end{cases}$$
Call
$$v^{\tau,\omega}(T) = \lim_{\varepsilon\to 0}\ \frac{x(T,u_\varepsilon) - x(T,u^*)}{\varepsilon}$$
the first order variation of the terminal point of the corresponding trajectory. Define $\Gamma$ to be the smallest convex cone containing all vectors $v^{\tau,\omega}$. In other words, $\Gamma$ is the set of all finite linear combinations of the vectors $v^{\tau,\omega}$ with positive coefficients. We think of $\Gamma$ as a cone of feasible directions, i.e. directions in which we can move the terminal point $x(T,u^*)$ by suitably perturbing the control $u^*$ (Figure 8.4).

[Figure 8.4: the cone $\Gamma$ of feasible directions at $x^*(T)$, inside the reachable set $R(T)$.]

We can now state necessary conditions for optimality for the Mayer problem with terminal constraints:
$$\max_{u\in\mathcal{U}}\ \psi\big(x(T,u)\big), \tag{8.7}$$

for the control system with initial and terminal constraints
$$\dot x = f(t,x,u), \qquad u(t)\in U, \quad t\in[0,T], \tag{8.8}$$
$$x(0) = x_0, \qquad \varphi_i\big(x(T)\big) = 0, \quad i = 1,\dots,m. \tag{8.9}$$

Theorem 8.3 (PMP, geometric version). Let $t\mapsto x^*(t) = x(t,u^*)$ be an optimal trajectory for the problem (8.7)-(8.9), corresponding to the control $u^*(\cdot)$. Then the cones $\Gamma$ and $T_{S^+}$ are weakly separated, i.e. there exists a non-zero vector $p(T)$ such that
$$p(T)\cdot v \geq 0 \qquad \text{for all } v\in T_{S^+}, \tag{8.10}$$
$$p(T)\cdot v \leq 0 \qquad \text{for all } v\in\Gamma. \tag{8.11}$$

This separation property is illustrated in Figure 8.5. An equivalent statement is:

Theorem 8.4 (PMP, analytic version). Let $t\mapsto x^*(t) = x(t,u^*)$ be an optimal trajectory, corresponding to the control $u^*(\cdot)$. Then there exists a non-zero vector function $t\mapsto p(t)$ such that
$$p(T) = \lambda_0\,\nabla\psi\big(x^*(T)\big) + \sum_{i=1}^m \lambda_i\,\nabla\varphi_i\big(x^*(T)\big) \qquad \text{with } \lambda_0\geq 0, \tag{8.12}$$
$$\dot p(t) = -p(t)\,D_x f\big(t,\,x^*(t),\,u^*(t)\big) \qquad t\in[0,T], \tag{8.13}$$
$$p(\tau)\cdot f\big(\tau,\,x^*(\tau),\,u^*(\tau)\big) = \max_{\omega\in U}\ \big\{p(\tau)\cdot f\big(\tau,\,x^*(\tau),\,\omega\big)\big\} \qquad \text{for a.e. } \tau\in[0,T]. \tag{8.14}$$

We show here the equivalence of the two formulations. By Lemma 8.1, (8.10) is equivalent to (8.12). Since every tangent vector $v^{\tau,\omega}$ satisfies the linear evolution equation
$$\dot v^{\tau,\omega}(t) = D_x f\big(t,\,x^*(t),\,u^*(t)\big)\,v^{\tau,\omega}(t),$$
if $t\mapsto p(t)$ satisfies (8.13) then the product $p(t)\,v^{\tau,\omega}(t)$ is constant. Therefore $p(T)\cdot v^{\tau,\omega}(T)\leq 0$ if and only if
$$p(\tau)\cdot v^{\tau,\omega}(\tau) = p(\tau)\cdot\Big[f\big(\tau,\,x^*(\tau),\,\omega\big) - f\big(\tau,\,x^*(\tau),\,u^*(\tau)\big)\Big] \leq 0.$$


More information

Recent Trends in Differential Inclusions

Recent Trends in Differential Inclusions Recent Trends in Alberto Bressan Department of Mathematics, Penn State University (Aveiro, June 2016) (Aveiro, June 2016) 1 / Two main topics ẋ F (x) differential inclusions with upper semicontinuous,

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

On the Stability of the Best Reply Map for Noncooperative Differential Games

On the Stability of the Best Reply Map for Noncooperative Differential Games On the Stability of the Best Reply Map for Noncooperative Differential Games Alberto Bressan and Zipeng Wang Department of Mathematics, Penn State University, University Park, PA, 68, USA DPMMS, University

More information

Piecewise Smooth Solutions to the Burgers-Hilbert Equation

Piecewise Smooth Solutions to the Burgers-Hilbert Equation Piecewise Smooth Solutions to the Burgers-Hilbert Equation Alberto Bressan and Tianyou Zhang Department of Mathematics, Penn State University, University Park, Pa 68, USA e-mails: bressan@mathpsuedu, zhang

More information

The first order quasi-linear PDEs

The first order quasi-linear PDEs Chapter 2 The first order quasi-linear PDEs The first order quasi-linear PDEs have the following general form: F (x, u, Du) = 0, (2.1) where x = (x 1, x 2,, x 3 ) R n, u = u(x), Du is the gradient of u.

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 3. Fundamental properties IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Example Consider the system ẋ = f

More information

Dynamic Blocking Problems for Models of Fire Propagation

Dynamic Blocking Problems for Models of Fire Propagation Dynamic Blocking Problems for Models of Fire Propagation Alberto Bressan Department of Mathematics, Penn State University bressan@math.psu.edu Alberto Bressan (Penn State) Dynamic Blocking Problems 1 /

More information

HJ equations. Reachability analysis. Optimal control problems

HJ equations. Reachability analysis. Optimal control problems HJ equations. Reachability analysis. Optimal control problems Hasnaa Zidani 1 1 ENSTA Paris-Tech & INRIA-Saclay Graz, 8-11 September 2014 H. Zidani (ENSTA & Inria) HJ equations. Reachability analysis -

More information

Lyapunov Stability Theory

Lyapunov Stability Theory Lyapunov Stability Theory Peter Al Hokayem and Eduardo Gallestey March 16, 2015 1 Introduction In this lecture we consider the stability of equilibrium points of autonomous nonlinear systems, both in continuous

More information

On finite time BV blow-up for the p-system

On finite time BV blow-up for the p-system On finite time BV blow-up for the p-system Alberto Bressan ( ), Geng Chen ( ), and Qingtian Zhang ( ) (*) Department of Mathematics, Penn State University, (**) Department of Mathematics, University of

More information

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer

More information

On Discontinuous Differential Equations

On Discontinuous Differential Equations On Discontinuous Differential Equations Alberto Bressan and Wen Shen S.I.S.S.A., Via Beirut 4, Trieste 3414 Italy. Department of Informatics, University of Oslo, P.O. Box 18 Blindern, N-316 Oslo, Norway.

More information

Boundary value problems for the infinity Laplacian. regularity and geometric results

Boundary value problems for the infinity Laplacian. regularity and geometric results : regularity and geometric results based on joint works with Graziano Crasta, Roma La Sapienza Calculus of Variations and Its Applications - Lisboa, December 2015 on the occasion of Luísa Mascarenhas 65th

More information

b) The system of ODE s d x = v(x) in U. (2) dt

b) The system of ODE s d x = v(x) in U. (2) dt How to solve linear and quasilinear first order partial differential equations This text contains a sketch about how to solve linear and quasilinear first order PDEs and should prepare you for the general

More information

Neighboring feasible trajectories in infinite dimension

Neighboring feasible trajectories in infinite dimension Neighboring feasible trajectories in infinite dimension Marco Mazzola Université Pierre et Marie Curie (Paris 6) H. Frankowska and E. M. Marchini Control of State Constrained Dynamical Systems Padova,

More information

Differential Inclusions and the Control of Forest Fires

Differential Inclusions and the Control of Forest Fires Differential Inclusions and the Control of Forest Fires Alberto Bressan November 006) Department of Mathematics, Penn State University University Park, Pa. 1680 U.S.A. e-mail: bressan@math.psu.edu Dedicated

More information

OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL

OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL OPTIMALITY CONDITIONS AND ERROR ANALYSIS OF SEMILINEAR ELLIPTIC CONTROL PROBLEMS WITH L 1 COST FUNCTIONAL EDUARDO CASAS, ROLAND HERZOG, AND GERD WACHSMUTH Abstract. Semilinear elliptic optimal control

More information

i=1 α i. Given an m-times continuously

i=1 α i. Given an m-times continuously 1 Fundamentals 1.1 Classification and characteristics Let Ω R d, d N, d 2, be an open set and α = (α 1,, α d ) T N d 0, N 0 := N {0}, a multiindex with α := d i=1 α i. Given an m-times continuously differentiable

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

Some notes on viscosity solutions

Some notes on viscosity solutions Some notes on viscosity solutions Jeff Calder October 11, 2018 1 2 Contents 1 Introduction 5 1.1 An example............................ 6 1.2 Motivation via dynamic programming............. 8 1.3 Motivation

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Sobolev spaces. May 18

Sobolev spaces. May 18 Sobolev spaces May 18 2015 1 Weak derivatives The purpose of these notes is to give a very basic introduction to Sobolev spaces. More extensive treatments can e.g. be found in the classical references

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Calculus of Variations. Final Examination

Calculus of Variations. Final Examination Université Paris-Saclay M AMS and Optimization January 18th, 018 Calculus of Variations Final Examination Duration : 3h ; all kind of paper documents (notes, books...) are authorized. The total score of

More information

AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS. 1. Introduction

AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS. 1. Introduction AN OVERVIEW OF STATIC HAMILTON-JACOBI EQUATIONS JAMES C HATELEY Abstract. There is a voluminous amount of literature on Hamilton-Jacobi equations. This paper reviews some of the existence and uniqueness

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Ph.D. course in OPTIMAL CONTROL Emiliano Cristiani (IAC CNR) e.cristiani@iac.cnr.it

More information

Differential Games II. Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011

Differential Games II. Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011 Differential Games II Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011 Contents 1. I Introduction: A Pursuit Game and Isaacs Theory 2. II Strategies 3.

More information

Introduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University

Introduction to Optimal Control Theory and Hamilton-Jacobi equations. Seung Yeal Ha Department of Mathematical Sciences Seoul National University Introduction to Optimal Control Theory and Hamilton-Jacobi equations Seung Yeal Ha Department of Mathematical Sciences Seoul National University 1 A priori message from SYHA The main purpose of these series

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints An Li and Jane J. Ye Abstract. In this paper we study an optimal control problem with nonsmooth

More information

INVERSE FUNCTION THEOREM and SURFACES IN R n

INVERSE FUNCTION THEOREM and SURFACES IN R n INVERSE FUNCTION THEOREM and SURFACES IN R n Let f C k (U; R n ), with U R n open. Assume df(a) GL(R n ), where a U. The Inverse Function Theorem says there is an open neighborhood V U of a in R n so that

More information

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION ALESSIO FIGALLI AND YOUNG-HEON KIM Abstract. Given Ω, Λ R n two bounded open sets, and f and g two probability densities concentrated

More information

Chap. 1. Some Differential Geometric Tools

Chap. 1. Some Differential Geometric Tools Chap. 1. Some Differential Geometric Tools 1. Manifold, Diffeomorphism 1.1. The Implicit Function Theorem ϕ : U R n R n p (0 p < n), of class C k (k 1) x 0 U such that ϕ(x 0 ) = 0 rank Dϕ(x) = n p x U

More information

z x = f x (x, y, a, b), z y = f y (x, y, a, b). F(x, y, z, z x, z y ) = 0. This is a PDE for the unknown function of two independent variables.

z x = f x (x, y, a, b), z y = f y (x, y, a, b). F(x, y, z, z x, z y ) = 0. This is a PDE for the unknown function of two independent variables. Chapter 2 First order PDE 2.1 How and Why First order PDE appear? 2.1.1 Physical origins Conservation laws form one of the two fundamental parts of any mathematical model of Continuum Mechanics. These

More information

REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS

REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS DARIO CORDERO-ERAUSQUIN AND ALESSIO FIGALLI A Luis A. Caffarelli en su 70 años, con amistad y admiración Abstract. The regularity of monotone

More information

Lecture 2: Controllability of nonlinear systems

Lecture 2: Controllability of nonlinear systems DISC Systems and Control Theory of Nonlinear Systems 1 Lecture 2: Controllability of nonlinear systems Nonlinear Dynamical Control Systems, Chapter 3 See www.math.rug.nl/ arjan (under teaching) for info

More information

Hyperbolic Systems of Conservation Laws

Hyperbolic Systems of Conservation Laws Hyperbolic Systems of Conservation Laws III - Uniqueness and continuous dependence and viscous approximations Alberto Bressan Mathematics Department, Penn State University http://www.math.psu.edu/bressan/

More information

Asymptotic behavior of infinity harmonic functions near an isolated singularity

Asymptotic behavior of infinity harmonic functions near an isolated singularity Asymptotic behavior of infinity harmonic functions near an isolated singularity Ovidiu Savin, Changyou Wang, Yifeng Yu Abstract In this paper, we prove if n 2 x 0 is an isolated singularity of a nonegative

More information

and BV loc R N ; R d)

and BV loc R N ; R d) Necessary and sufficient conditions for the chain rule in W 1,1 loc R N ; R d) and BV loc R N ; R d) Giovanni Leoni Massimiliano Morini July 25, 2005 Abstract In this paper we prove necessary and sufficient

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points

Nonlinear Control. Nonlinear Control Lecture # 2 Stability of Equilibrium Points Nonlinear Control Lecture # 2 Stability of Equilibrium Points Basic Concepts ẋ = f(x) f is locally Lipschitz over a domain D R n Suppose x D is an equilibrium point; that is, f( x) = 0 Characterize and

More information

Regularity for Poisson Equation

Regularity for Poisson Equation Regularity for Poisson Equation OcMountain Daylight Time. 4, 20 Intuitively, the solution u to the Poisson equation u= f () should have better regularity than the right hand side f. In particular one expects

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

arxiv: v2 [math.ap] 28 Nov 2016

arxiv: v2 [math.ap] 28 Nov 2016 ONE-DIMENSIONAL SAIONARY MEAN-FIELD GAMES WIH LOCAL COUPLING DIOGO A. GOMES, LEVON NURBEKYAN, AND MARIANA PRAZERES arxiv:1611.8161v [math.ap] 8 Nov 16 Abstract. A standard assumption in mean-field game

More information

LMI Methods in Optimal and Robust Control

LMI Methods in Optimal and Robust Control LMI Methods in Optimal and Robust Control Matthew M. Peet Arizona State University Lecture 15: Nonlinear Systems and Lyapunov Functions Overview Our next goal is to extend LMI s and optimization to nonlinear

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

1 The Observability Canonical Form

1 The Observability Canonical Form NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)

More information

L 2 -induced Gains of Switched Systems and Classes of Switching Signals

L 2 -induced Gains of Switched Systems and Classes of Switching Signals L 2 -induced Gains of Switched Systems and Classes of Switching Signals Kenji Hirata and João P. Hespanha Abstract This paper addresses the L 2-induced gain analysis for switched linear systems. We exploit

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32 Functional Analysis Martin Brokate Contents 1 Normed Spaces 2 2 Hilbert Spaces 2 3 The Principle of Uniform Boundedness 32 4 Extension, Reflexivity, Separation 37 5 Compact subsets of C and L p 46 6 Weak

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Prague, II.2. Integrability (existence of the Riemann integral) sufficient conditions... 37

Prague, II.2. Integrability (existence of the Riemann integral) sufficient conditions... 37 Mathematics II Prague, 1998 ontents Introduction.................................................................... 3 I. Functions of Several Real Variables (Stanislav Kračmar) II. I.1. Euclidean space

More information

Implicit Functions, Curves and Surfaces

Implicit Functions, Curves and Surfaces Chapter 11 Implicit Functions, Curves and Surfaces 11.1 Implicit Function Theorem Motivation. In many problems, objects or quantities of interest can only be described indirectly or implicitly. It is then

More information

(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively);

(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively); STABILITY OF EQUILIBRIA AND LIAPUNOV FUNCTIONS. By topological properties in general we mean qualitative geometric properties (of subsets of R n or of functions in R n ), that is, those that don t depend

More information

Math 118B Solutions. Charles Martin. March 6, d i (x i, y i ) + d i (y i, z i ) = d(x, y) + d(y, z). i=1

Math 118B Solutions. Charles Martin. March 6, d i (x i, y i ) + d i (y i, z i ) = d(x, y) + d(y, z). i=1 Math 8B Solutions Charles Martin March 6, Homework Problems. Let (X i, d i ), i n, be finitely many metric spaces. Construct a metric on the product space X = X X n. Proof. Denote points in X as x = (x,

More information

Control, Stabilization and Numerics for Partial Differential Equations

Control, Stabilization and Numerics for Partial Differential Equations Paris-Sud, Orsay, December 06 Control, Stabilization and Numerics for Partial Differential Equations Enrique Zuazua Universidad Autónoma 28049 Madrid, Spain enrique.zuazua@uam.es http://www.uam.es/enrique.zuazua

More information

arxiv: v2 [math.ag] 24 Jun 2015

arxiv: v2 [math.ag] 24 Jun 2015 TRIANGULATIONS OF MONOTONE FAMILIES I: TWO-DIMENSIONAL FAMILIES arxiv:1402.0460v2 [math.ag] 24 Jun 2015 SAUGATA BASU, ANDREI GABRIELOV, AND NICOLAI VOROBJOV Abstract. Let K R n be a compact definable set

More information

2 Sequences, Continuity, and Limits

2 Sequences, Continuity, and Limits 2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs

More information

Contents Ordered Fields... 2 Ordered sets and fields... 2 Construction of the Reals 1: Dedekind Cuts... 2 Metric Spaces... 3

Contents Ordered Fields... 2 Ordered sets and fields... 2 Construction of the Reals 1: Dedekind Cuts... 2 Metric Spaces... 3 Analysis Math Notes Study Guide Real Analysis Contents Ordered Fields 2 Ordered sets and fields 2 Construction of the Reals 1: Dedekind Cuts 2 Metric Spaces 3 Metric Spaces 3 Definitions 4 Separability

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

The optimal partial transport problem

The optimal partial transport problem The optimal partial transport problem Alessio Figalli Abstract Given two densities f and g, we consider the problem of transporting a fraction m [0, min{ f L 1, g L 1}] of the mass of f onto g minimizing

More information

APPLICATIONS OF DIFFERENTIABILITY IN R n.

APPLICATIONS OF DIFFERENTIABILITY IN R n. APPLICATIONS OF DIFFERENTIABILITY IN R n. MATANIA BEN-ARTZI April 2015 Functions here are defined on a subset T R n and take values in R m, where m can be smaller, equal or greater than n. The (open) ball

More information

Semiconcavity and optimal control: an intrinsic approach

Semiconcavity and optimal control: an intrinsic approach Semiconcavity and optimal control: an intrinsic approach Peter R. Wolenski joint work with Piermarco Cannarsa and Francesco Marino Louisiana State University SADCO summer school, London September 5-9,

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Derivatives. Differentiability problems in Banach spaces. Existence of derivatives. Sharpness of Lebesgue s result

Derivatives. Differentiability problems in Banach spaces. Existence of derivatives. Sharpness of Lebesgue s result Differentiability problems in Banach spaces David Preiss 1 Expanded notes of a talk based on a nearly finished research monograph Fréchet differentiability of Lipschitz functions and porous sets in Banach

More information

The Bang-Bang theorem via Baire category. A Dual Approach

The Bang-Bang theorem via Baire category. A Dual Approach The Bang-Bang theorem via Baire category A Dual Approach Alberto Bressan Marco Mazzola, and Khai T Nguyen (*) Department of Mathematics, Penn State University (**) Université Pierre et Marie Curie, Paris

More information

Lecture notes: Introduction to Partial Differential Equations

Lecture notes: Introduction to Partial Differential Equations Lecture notes: Introduction to Partial Differential Equations Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 Classification of Partial Differential

More information

SOLVING NONLINEAR OPTIMAL CONTROL PROBLEMS WITH STATE AND CONTROL DELAYS BY SHOOTING METHODS COMBINED WITH NUMERICAL CONTINUATION ON THE DELAYS

SOLVING NONLINEAR OPTIMAL CONTROL PROBLEMS WITH STATE AND CONTROL DELAYS BY SHOOTING METHODS COMBINED WITH NUMERICAL CONTINUATION ON THE DELAYS 1 2 3 4 5 6 7 8 9 1 11 12 13 14 15 16 17 18 19 2 21 22 23 24 25 26 27 28 29 3 31 32 33 34 35 36 SOLVING NONLINEAR OPTIMAL CONTROL PROBLEMS WITH STATE AND CONTROL DELAYS BY SHOOTING METHODS COMBINED WITH

More information

Alberto Bressan. Department of Mathematics, Penn State University

Alberto Bressan. Department of Mathematics, Penn State University Non-cooperative Differential Games A Homotopy Approach Alberto Bressan Department of Mathematics, Penn State University 1 Differential Games d dt x(t) = G(x(t), u 1(t), u 2 (t)), x(0) = y, u i (t) U i

More information

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018

Joint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018 EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,

More information

The local equicontinuity of a maximal monotone operator

The local equicontinuity of a maximal monotone operator arxiv:1410.3328v2 [math.fa] 3 Nov 2014 The local equicontinuity of a maximal monotone operator M.D. Voisei Abstract The local equicontinuity of an operator T : X X with proper Fitzpatrick function ϕ T

More information