An Introduction to the Mathematical Theory of Nonlinear Control Systems

An Introduction to the Mathematical Theory of Nonlinear Control Systems

Alberto Bressan
S.I.S.S.A., Via Beirut 4, Trieste, Italy
and Department of Mathematical Sciences, NTNU, N-7491 Trondheim, Norway

1. Definitions and examples of nonlinear control systems
2. Relations with differential inclusions
3. Properties of the set of trajectories
4. Optimal control problems
5. Existence of optimal controls
6. Necessary conditions for optimality: the Pontryagin Maximum Principle
7. Viscosity solutions of Hamilton-Jacobi equations
8. Bellman's Dynamic Programming Principle and sufficient conditions for optimality

Control Systems

ẋ = f(x,u),  x ∈ IR^n,  u ∈ U ⊂ IR^m   (1)
x(0) = x₀   (2)

A trajectory of the system is an absolutely continuous map t ↦ x(t) for which there exists a measurable control function t ↦ u(t) ∈ U such that ẋ(t) = f(x(t), u(t)) for a.e. t ∈ [0,T].

Basic assumptions

1 - Sublinear growth: |f(x,u)| ≤ C(1 + |x|) for all u ∈ U, x ∈ IR^n. This guarantees that solutions remain uniformly bounded for t ∈ [0,T].

2 - Lipschitz continuity: |f(x,u) − f(y,u)| ≤ L|x − y| for all u ∈ U, x, y ∈ IR^n. If x(·), y(·) are trajectories corresponding to the same control function u(·), this implies

|x(t) − y(t)| ≤ e^{Lt} |x(0) − y(0)|,

hence the Cauchy problem (1)-(2) has a unique solution.

Equivalent Differential Inclusion

ẋ ∈ G(x) := { f(x,u) ; u ∈ U }   (3)

A trajectory of (3) is an absolutely continuous map t ↦ x(t) such that ẋ(t) ∈ G(x(t)) for almost every t ∈ [0,T].
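The Gronwall-type estimate above can be checked numerically. The sketch below is illustrative only: the dynamics f(x,u) = sin x + u (Lipschitz in x with constant L = 1) and the control u(t) = cos t are hypothetical choices, made just to exercise the bound |x(t) − y(t)| ≤ e^{Lt}|x(0) − y(0)| for two trajectories driven by the same control.

```python
import math

def f(x, u):
    # hypothetical dynamics f(x,u) = sin(x) + u; |sin x - sin y| <= |x - y|,
    # so f is Lipschitz in x with constant L = 1
    return math.sin(x) + u

def rk4(x0, u, T, n):
    # classical 4th-order Runge-Kutta, control u(t) evaluated at stage times
    dt, x, t = T / n, x0, 0.0
    xs = [x]
    for _ in range(n):
        k1 = f(x, u(t)); k2 = f(x + dt/2*k1, u(t + dt/2))
        k3 = f(x + dt/2*k2, u(t + dt/2)); k4 = f(x + dt*k3, u(t + dt))
        x += dt/6 * (k1 + 2*k2 + 2*k3 + k4); t += dt
        xs.append(x)
    return xs

T, n, L = 2.0, 2000, 1.0
u = math.cos
xs = rk4(0.0, u, T, n)   # trajectory from x(0) = 0
ys = rk4(0.1, u, T, n)   # trajectory from y(0) = 0.1, same control
gaps = [abs(a - b) for a, b in zip(xs, ys)]
bounds = [math.exp(L * i * T / n) * 0.1 for i in range(n + 1)]
# check |x(t) - y(t)| <= e^{Lt} |x(0) - y(0)| at every grid point
ok = all(g <= b + 1e-9 for g, b in zip(gaps, bounds))
```

On this example the gap between the two trajectories indeed stays below the exponential envelope at every step.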

Optimization problems

Select a control function u(·) that performs an assigned task optimally, w.r.t. a given cost criterion.

Open-loop control: u = u(t) is a (possibly discontinuous) function of time. It yields an O.D.E. of the form ẋ(t) = g(x,t) := f(x, u(t)), where g is Lipschitz continuous in x and measurable in t. For this O.D.E., the standard existence and uniqueness results for Caratheodory solutions hold.

Closed-loop (feedback) control: u = u(x) is a (possibly discontinuous) function of space. It yields an O.D.E. of the form ẋ(t) = g(x) := f(x, u(x)), where g is measurable and possibly discontinuous. For this O.D.E., no general existence and uniqueness result is available.

1 - Navigation Problem

x(t) = position of a boat on a river
v(x) = velocity of the water

Control system: ẋ = f(x,u) = v(x) + ρu,  |u| ≤ 1
Differential inclusion: ẋ ∈ F(x) = { v(x) + ρu ; |u| ≤ 1 }

2 - Systems with scalar control entering linearly

ẋ = f(x) + g(x)u,  u ∈ [−1,1]
ẋ ∈ F(x) = { f(x) + g(x)u ; u ∈ [−1,1] }

Stabilization in minimum time: find a control u(·) that steers the system to the origin in minimum time.

Example 2

ẍ = u,  u ∈ [−1,1]

ẋ = v,  v̇ = u,  x(0) = x₀,  v(0) = v₀

Find a feedback control u = U(x,v) steering the system to the origin in minimum time.

Regularity of Multifunctions

The Hausdorff distance between two compact sets Ω₁, Ω₂ ⊂ IR^n is

d_H(Ω₁,Ω₂) := min { ρ ≥ 0 : Ω₁ ⊆ B(Ω₂,ρ) and Ω₂ ⊆ B(Ω₁,ρ) },

where B(Ω,ρ) := neighborhood of radius ρ around the set Ω. Equivalently, d_H(Ω₁,Ω₂) = max{ρ₁, ρ₂}, where

ρ₁ = maximum distance of points p ∈ Ω₁ from Ω₂,
ρ₂ = maximum distance of points p ∈ Ω₂ from Ω₁.

A multifunction x ↦ G(x) ⊂ IR^n is Lipschitz continuous if

d_H(G(x), G(y)) ≤ L|x − y|  for all x, y ∈ IR^n.

If U ⊂ IR^m is compact and f is Lipschitz continuous, then the multifunction x ↦ G(x) := { f(x,u) ; u ∈ U } is Lipschitz continuous.
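For finite point sets the two one-sided quantities ρ₁, ρ₂ can be computed directly; the following minimal sketch (with two illustrative planar sets of my own choosing) implements the characterization d_H = max{ρ₁, ρ₂}.

```python
def hausdorff(A, B):
    # Hausdorff distance between two finite point sets in the plane:
    # the maximum of the two one-sided excess distances
    def d(p, q):
        return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5
    rho1 = max(min(d(p, q) for q in B) for p in A)  # max distance of A-points from B
    rho2 = max(min(d(p, q) for q in A) for p in B)  # max distance of B-points from A
    return max(rho1, rho2)

# illustrative sets: rho1 = 1 (point (1,0) from Omega2), rho2 = 3 (point (0,3) from Omega1)
Omega1 = [(0.0, 0.0), (1.0, 0.0)]
Omega2 = [(0.0, 0.0), (0.0, 3.0)]
dH = hausdorff(Omega1, Omega2)
```

Note the asymmetry of the one-sided distances: here ρ₁ = 1 while ρ₂ = 3, so d_H = 3.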

Equivalence with Differential Inclusions

Control System: ẋ = f(x,u),  u ∈ U   (1)
Differential Inclusion: ẋ ∈ G(x)   (2)

Theorem 1 (A. Filippov). If f is continuous and U is compact, the set of (Caratheodory) trajectories of (1) coincides with the set of trajectories of (2), with G(x) := { f(x,u) ; u ∈ U }.

Indeed, if ẋ(t) ∈ G(x(t)) a.e., then for each fixed t there exists u(t) ∈ U such that ẋ(t) = f(x(t), u(t)). The map t ↦ u(t) can be chosen to be measurable.

Theorem 2 (A. Ornelas). Let x ↦ G(x) be a bounded, Lipschitz continuous multifunction with convex, compact values. Then there exists a Lipschitz continuous function f : IR^n × U → IR^n such that G(x) = { f(x,u) ; u ∈ U } for all x. Here U (the set of control values) is the closed unit ball in IR^n.

Dynamics of a Control System

Describe the set of all trajectories of the control system

ẋ = f(x,u),  u ∈ U,  x(0) = x₀.   (1)

Describe the reachable set at time T:

R(T) := { x(T) ; x(·) is a trajectory of (1) }.

Ideal case: an explicit formula for R(T). More realistic goal: derive properties of the reachable set.

- A priori bounds: R(T) ⊆ B(x₀, ρ)?
- Is R(T) closed, connected, with non-empty interior?
- (Local controllability) Is R(T) a neighborhood of x₀, for all T > 0?
- (Global controllability) Does every point x ∈ IR^n lie in the reachable set R(T), for T suitably large?

Linear Systems

ẋ = Ax + Bu,  x(0) = 0   (4)

A is an n × n matrix, B is n × m, u ∈ IR^m.

x(T) = ∫₀^T e^{(T−s)A} B u(s) ds   (5)

e^{tA} = Σ_{k≥0} t^k A^k / k!

Theorem 3. For every T > 0, the reachable set is

R(T) = span{ B, AB, A²B, ..., A^{n−1}B }.

Proof. R(T) is clearly a vector space. By the Cayley-Hamilton theorem,

range(e^{tA}) ⊆ span{I, A, A², ...} = span{I, A, A², ..., A^{n−1}}.

Hence, by (5), R(T) ⊆ span{B, AB, A²B, ..., A^{n−1}B}.

Viceversa, if p ⊥ R(T), then ⟨p, d^k x(t)/dt^k⟩ = 0 for all k and all t ≥ 0. At time t = 0, taking a constant control u(t) ≡ ω, one finds

⟨p, A^k Bω⟩ = 0 for all k ≥ 0, ω ∈ IR^m,

hence p ⊥ span{B, AB, A²B, ..., A^{n−1}B}.
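Theorem 3 reduces the computation of R(T) to a rank computation on the Kalman controllability matrix [B, AB, ..., A^{n−1}B]. Below is a minimal numerical sketch; the two example pairs (A, B) are my own illustrative choices, not taken from the notes.

```python
import numpy as np

def reachable_space_dim(A, B):
    # dimension of span{B, AB, ..., A^{n-1}B} = rank of the Kalman matrix
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.linalg.matrix_rank(np.hstack(blocks))

# double integrator x'' = u: fully controllable, so the rank is 2
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
dim_full = reachable_space_dim(A, B)

# decoupled system with an uncontrolled second mode: rank 1
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
B2 = np.array([[1.0], [0.0]])
dim_partial = reachable_space_dim(A2, B2)
```

In the second example the reachable set is a proper subspace (the x₁-axis), matching the rank deficiency.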

Connectedness

The reachable set R(T) is always connected. Given x₁ = x(T,u₁) and x₂ = x(T,u₂), define

u_θ(t) := u₁(t) if t ∈ [0, θT],  u₂(t) if t ∈ ]θT, T].

Then θ ↦ x(T, u_θ) is a continuous path inside R(T), connecting x₁ with x₂.

Closure

Example 3. The set of trajectories may not be closed.

ẋ = u,  u ∈ {−1, 1},  x(0) = 0

For each n ≥ 1, define the control u_n : [0,T] → {−1,1} by

u_n(t) = 1 if t ∈ [2jT/2n, (2j+1)T/2n],  u_n(t) = −1 if t ∈ [(2j−1)T/2n, 2jT/2n].   (6)

The trajectories t ↦ x_n(t) satisfy x_n(t) → x*(t) ≡ 0 uniformly on [0,T], but x*(t) ≡ 0 is not a trajectory of the system.

Example 4. The reachable set R(T) ⊂ IR² may not be closed.

(ẋ₁, ẋ₂) = (u, x₁²),  u ∈ {−1,1},  (x₁,x₂)(0) = (0,0)   (7)

Choosing the controls u_n(·) in (6), at time T we reach the points P_n = (0, T³/12n²). Then P_n → (0,0) as n → ∞, but (0,0) ∉ R(T). Indeed, for any trajectory of the system,

x₂(T) = ∫₀^T x₁²(s) ds = 0

only if x₁(t) ≡ 0 for all t, hence only if ẋ₁(t) = u(t) = 0 for almost every t, against the assumption u(t) ∈ {−1, 1}.
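The value T³/(12n²) can be verified by direct simulation: x₁ traces a triangular wave of amplitude T/(2n), and the mean of x₁² over one period is T²/(12n²). A simple Euler sketch (step sizes and tolerances are my own choices):

```python
# simulate (x1', x2') = (u_n, x1^2) with the switching control u_n of (6)
T, n = 1.0, 8
N = 200000            # fine Euler grid
dt = T / N
half = T / (2 * n)    # length of each constant-control interval
x1 = x2 = 0.0
for i in range(N):
    t = i * dt
    u = 1.0 if int(t // half) % 2 == 0 else -1.0  # switch every T/(2n)
    x2 += x1 * x1 * dt   # x2' = x1^2, left-endpoint rule
    x1 += u * dt          # x1' = u
limit_pred = T**3 / (12 * n**2)  # predicted terminal value of x2
```

The simulated endpoint is very close to P_n = (0, T³/12n²), and shrinks toward the excluded limit (0,0) as n grows.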

Theorem 4 (A. Filippov). Let x ↦ G(x) be a Lipschitz continuous multifunction with compact, convex values. Then, for every T ≥ 0, the set of solutions of

ẋ ∈ G(x),  x(0) = x₀,  t ∈ [0,T],

is closed w.r.t. uniform convergence.

Corollary. Under the previous assumptions, the set of trajectories is a compact subset of C([0,T]). Moreover, the reachable set R(T) is compact.

Proof. Consider a sequence of trajectories with

ẋ_n(t) ∈ G(x_n(t)),  t ∈ [0,T],  x_n(t) → x(t) uniformly on [0,T].

Clearly x(·) is Lipschitz continuous, hence differentiable a.e.

Assume ẋ(t) ∉ G(x(t)) at some time t where x is differentiable. Then ẋ(t) can be strictly separated by a hyperplane from the convex set G(x(t)): for some vector p and some δ > 0,

⟨p, ẋ(t)⟩ ≥ 3δ + max_{ω ∈ G(x(t))} ⟨p, ω⟩.

Hence, for ε > 0 small,

⟨p, (x(t+ε) − x(t))/ε⟩ ≥ 2δ + max_{ω ∈ G(x(t))} ⟨p, ω⟩.   (8)

On the other hand, by the continuity of G,

max_{ω ∈ G(y)} ⟨p, ω⟩ ≤ δ + max_{ω ∈ G(x(t))} ⟨p, ω⟩  for |y − x(t)| ≤ ρ small.

Hence, for all n large,

⟨p, (x_n(t+ε) − x_n(t))/ε⟩ ≤ δ + max_{ω ∈ G(x(t))} ⟨p, ω⟩.

Letting n → ∞, we obtain

⟨p, (x(t+ε) − x(t))/ε⟩ ≤ δ + max_{ω ∈ G(x(t))} ⟨p, ω⟩,   (9)

in contradiction with (8).

Convex Closure and Extreme Points

Let Ω ⊂ IR^n be any compact set. The convex closure of Ω, written co Ω, is the smallest closed convex set which contains Ω. It admits the representation

co Ω = { Σ_{i=0}^n θᵢ ωᵢ ; ωᵢ ∈ Ω, θᵢ ≥ 0, Σ θᵢ = 1 }.

Hence, if Ω ⊂ IR^n, the convex closure co Ω coincides with the set of all convex combinations of n+1 elements of Ω.

A point ω ∈ Ω is an extreme point if it CANNOT be written as a convex combination

ω = Σ_{i=1}^k θᵢ ωᵢ,  θᵢ ∈ [0,1],  Σ θᵢ = 1,  ωᵢ ∈ Ω,  ωᵢ ≠ ω.

The set of extreme points is written ext Ω.
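For a finite set in the plane, the extreme points are exactly the vertices of its convex hull (points lying inside the hull or on an edge are convex combinations of others). A short sketch using Andrew's monotone-chain algorithm, on a small illustrative point set of my own:

```python
def convex_hull(pts):
    # Andrew's monotone chain; returns the hull vertices of a finite planar set,
    # i.e. its extreme points, in counter-clockwise order (collinear points dropped)
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# (1,1) is interior and (1,0) lies on an edge: neither is extreme
Omega = [(0,0), (2,0), (2,2), (0,2), (1,1), (1,0)]
ext = set(convex_hull(Omega))
```

Here ext Ω consists of the four corners of the square, and co Ω = co(ext Ω) as stated above.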

For Ω ⊂ IR^n compact: ext Ω ≠ ∅ and co Ω = co(ext Ω).

Density Theorems

Let x ↦ G(x) be a multifunction with compact values. We compare the solutions of the differential inclusions

ẋ ∈ ext G(x)   (10)
ẋ ∈ G(x)   (11)
ẋ ∈ co G(x)   (12)

Theorem 5 (Relaxation). Let the multifunction x ↦ G(x) be Lipschitz continuous with compact values. Then the set of trajectories of (10) is dense in the set of trajectories of (12).

Bang-bang Property

Assume that, for every solution of

ẋ ∈ co G(x),  t ∈ [0,T],  x(0) = x₀,

one can find a solution of

ẏ ∈ ext G(y),  t ∈ [0,T],  y(0) = x₀,

such that y(T) = x(T). Then we say that the multifunction G has the bang-bang property.

Theorem 6 (Bang-Bang for Linear Systems). Let U ⊂ IR^m be compact. Then, for every solution of

ẋ(t) = Ax(t) + Bu(t),  x(0) = x₀,  u(t) ∈ U,  t ∈ [0,T],

there exists a solution of

ẏ(t) = Ay(t) + Bu(t),  y(0) = x₀,  u(t) ∈ ext U,  t ∈ [0,T],

such that y(T) = x(T).

In general, the bang-bang property does not hold for nonlinear systems (Example 4).

A Lyapunov-type Convexity Theorem

Let f₁,...,f_N ∈ L¹([0,T]; IR^n) and consider a pointwise convex combination of the fᵢ:

f̃(t) := Σ_{i=1}^N θᵢ(t) fᵢ(t),  θ₁,...,θ_N : [0,T] → [0,1],  Σᵢ θᵢ(t) = 1 for all t.

Theorem 7. There exists a partition

[0,T] = ∪_{i=1}^N Jᵢ,  Jᵢ ∩ J_k = ∅ if i ≠ k,

such that

∫₀^T f̃(t) dt = Σ_{i=1}^N ∫_{Jᵢ} fᵢ(t) dt.

Proof of Theorem 7. Consider the set of coefficients of convex combinations

Γ := { (w₁,...,w_N) ; wᵢ(t) ∈ [0,1], Σ_{i=1}^N wᵢ(t) = 1, Σᵢ ∫₀^T wᵢ(t) fᵢ(t) dt = ∫₀^T f̃(t) dt }.

1. Γ is non-empty, because (θ₁,...,θ_N) ∈ Γ.
2. Γ ⊂ L^∞ is closed and convex, hence compact in a weak topology.
3. By the Krein-Milman Theorem, Γ has an extreme point (w₁*,...,w_N*).
4. By extremality, the functions wᵢ* : [0,T] → [0,1] actually take values in {0,1}.
5. Take Jᵢ = { t ∈ [0,T] ; wᵢ*(t) = 1 }.

Proof of Theorem 6. Consider any control u : [0,T] → U ⊂ IR^m. Then

u(t) = Σ_{i=0}^m θᵢ(t) uᵢ(t),  uᵢ(t) ∈ ext U.

Using the control u(·), the terminal point is

x(T) = ∫₀^T e^{(T−s)A} Bu(s) ds = ∫₀^T Σ_{i=0}^m θᵢ(s) e^{(T−s)A} Buᵢ(s) ds.

Apply the previous theorem with fᵢ(s) = e^{(T−s)A} Buᵢ(s). For a suitable partition [0,T] = J₀ ∪ J₁ ∪ ... ∪ J_m, one has

x(T) = Σ_{i=0}^m ∫_{Jᵢ} e^{(T−s)A} Buᵢ(s) ds.

Hence the control

u*(t) = uᵢ(t)  if t ∈ Jᵢ

takes values in ext U and reaches exactly the same terminal point x(T).

Baire Category Approach

Baire Category Theorem. Let K be a complete metric space, (K_n)_{n≥1} a sequence of open, dense subsets. Then ∩_{n≥1} K_n is non-empty and dense in K.

Alternative proof of the Bang-Bang Theorem. We can assume that all the sets G(x) are convex. Assume there exists at least one trajectory of

ẋ(t) ∈ G(x(t)),  x(0) = x₀,  x(T) = x₁.

The set K of all such trajectories is then a non-empty, compact subset of C([0,T]). In particular, K is a complete metric space.

Under suitable assumptions on the multifunction G, the set K_ext of solutions of

ẋ ∈ ext G(x),  x(0) = x₀,  x(T) = x₁

can be written as the intersection of countably many open dense subsets K_n ⊆ K. Hence, by the Baire Category Theorem, K_ext ≠ ∅.

This technique applies to a class of concave multifunctions G, introduced in (Bressan-Piccoli, J. Diff. Equat. 1995).

1. For each compact, convex set Ω ⊂ IR^n, define the function ϕ_Ω : Ω → [0,∞[ as

ϕ_Ω(p) := sup { ( ∫₀¹ |X(s) − p|² ds )^{1/2} ; X : [0,1] → Ω, ∫₀¹ X(s) ds = p }.

Observe that ϕ_Ω²(p) is the maximum variance among all random variables X taking values inside Ω, whose expected value is E[X] = p.

2. Each function ϕ_Ω is ≥ 0, concave, continuous, and ϕ_Ω(p) = 0 iff p ∈ ext Ω.

3. The functional

Φ(x(·)) := ∫₀^T ϕ_{G(x(t))}(ẋ(t)) dt

is well defined for all trajectories of ẋ ∈ G(x). Moreover, Φ(x(·)) = 0 iff ẋ(t) ∈ ext G(x(t)) for a.e. t ∈ [0,T].

4. The sets K_n := { x(·) ∈ K ; Φ(x) < 1/n } are open and dense in K.

5. By the Baire Category Theorem, K_ext = ∩_{n≥1} K_n ≠ ∅.

This shows that (in a topological sense) almost all solutions of ẋ ∈ G(x) are actually solutions of ẋ ∈ ext G(x).

Chattering Controls

Given a control system on IR^n,

ẋ = f(x,u),  u ∈ U,   (1)

for each x the set of velocities G(x) := { f(x,u) ; u ∈ U } may not be convex. We seek a new control system

ẋ = f̃(x,ũ),  ũ ∈ Ũ,   (13)

such that

G̃(x) := { f̃(x,ũ) ; ũ ∈ Ũ } = co G(x).

Since every ω ∈ co G(x) is a convex combination of n+1 vectors in G(x), the system (13) can be defined as follows. Let Δ denote the simplex of coefficients

Δ := { (θ₀,θ₁,...,θ_n) ; θᵢ ∈ [0,1], Σ_{i=0}^n θᵢ = 1 },

and set

Ũ := U × ··· × U (n+1 times) × Δ,
f̃(x,ũ) = f̃(x, u₀,...,u_n, θ₀,...,θ_n) := Σ_{i=0}^n θᵢ f(x,uᵢ).   (14)

The system (14) is called a chattering system. A control ũ = (u₀,...,u_n,θ₀,...,θ_n) ∈ Ũ is called a chattering control.

- The set of trajectories of a chattering system is always closed in C([0,T]); the reachable set R(T) is compact.
- Every trajectory of (13) can be uniformly approximated by trajectories of the original system (1).

Lie Brackets

Given two smooth vector fields f, g, their Lie bracket is defined as

[f,g] := (Dg) f − (Df) g.

In other words, [f,g] is the directional derivative of g in the direction of f, minus the directional derivative of f in the direction of g.

Exponential notation for the flow map: τ ↦ (exp τf)(x) denotes the solution of the Cauchy problem

dw/dτ = f(w),  w(0) = x.

The Lie bracket can be equivalently characterized as

[f,g](x) = lim_{ε→0} (1/ε²) [ (exp(−εg))(exp(−εf))(exp εg)(exp εf)(x) − x ].

The Lie algebra generated by the vector fields f₁,...,f_n is the space of vector fields generated by the fᵢ and by all their iterated Lie brackets

[fᵢ, f_j],  [fᵢ, [f_j, f_k]],  [[fᵢ, [f_j, f_k]], f_l],  ...

Local Controllability

ẋ = Σ_{i=1}^m fᵢ(x)uᵢ,  uᵢ ∈ [−1,1],   (15)
x(0) = x₀ ∈ IR^n.

Theorem 8. If the linear span of all iterated Lie brackets of f₁,...,f_m at x₀ is the whole space IR^n, then the system (15) is locally controllable. That means: for every T > 0 the reachable set R(T) is a neighborhood of x₀.

More difficult case: a drift f₀ is present.

ẋ = f₀(x) + Σ_{i=1}^m fᵢ(x)uᵢ,  uᵢ ∈ [−1,1].   (16)

Theorem 9 (Sussmann-Jurdjevic). If the linear span of all iterated Lie brackets of f₀, f₁,...,f_m at x₀ is the whole space IR^n, then for every T > 0 the set of points reachable within time T has non-empty interior.

Local controllability for (16) is a hard problem!

Example 5. The position of a car is described by P = (x,y,θ). The first two variables x, y determine the location of the barycenter, while θ determines the orientation. If the car advances with unit speed steering to the right, its motion satisfies

dP/dt = f(P) := (cos θ, sin θ, 1).

On the other hand, if the car steers to the left, its motion satisfies

dP/dt = g(P) := (cos θ, sin θ, −1).

In a typical parking problem, one needs to shift the car toward one side, without changing its orientation. This is obtained by the maneuver: 1. right-forward, 2. left-forward, 3. right-backward, 4. left-backward. Indeed, the sequence of these four actions generates the Lie bracket

[f,g] = (Dg) f − (Df) g = 2(−sin θ, cos θ, 0),

which is precisely the desired motion.
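The four-step parking maneuver can be checked numerically against the commutator formula for the Lie bracket. The sketch below uses the exact flows of the two unit-speed arcs (turn rate w = +1 for one steering direction, w = −1 for the other, the sign convention being an assumption consistent with the example); composing exp(εf), exp(εg), exp(−εf), exp(−εg) from θ = 0 should produce, to leading order ε², a pure sideways displacement (0, 2, 0) with no change of orientation.

```python
import math

def flow(P, w, s):
    # exact flow of dP/dt = (cos th, sin th, w) for time s (s may be negative);
    # w = +1 and w = -1 are the two steering directions of the example
    x, y, th = P
    return (x + (math.sin(th + w*s) - math.sin(th)) / w,
            y + (-math.cos(th + w*s) + math.cos(th)) / w,
            th + w*s)

eps = 1e-3
P0 = (0.0, 0.0, 0.0)
# maneuver: forward with w=+1, forward with w=-1, backward with w=+1, backward with w=-1
P = flow(flow(flow(flow(P0, 1, eps), -1, eps), 1, -eps), -1, -eps)
# commutator estimate: (P - P0)/eps^2 should approach [f,g](P0) = (0, 2, 0)
bracket_est = tuple((a - b) / eps**2 for a, b in zip(P, P0))
```

The orientation returns exactly to θ = 0, while the car has moved sideways by about 2ε²: the bracket direction.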

The system on IR³

Ṗ = f(P)u₁ + g(P)u₂,  u₁, u₂ ∈ [−1,1],

is locally controllable. Indeed,

span{ f(P), g(P), [f,g](P) } = IR³,

because it contains the three vectors (cos θ, sin θ, 1), (cos θ, sin θ, −1) and (−sin θ, cos θ, 0).

Example 6. The system on IR²

ẋ₁ = u,  ẋ₂ = x₁²,  u ∈ [−1,1],   (17)
(x₁,x₂)(0) = (0,0),

is not locally controllable. Indeed, for every trajectory,

x₂(T) = ∫₀^T x₁²(s) ds ≥ 0.

We can write (17) in the form ẋ = f₀(x) + f₁(x)u with f₀ = (0, x₁²), f₁ = (1,0). In this case

[f₁,f₀] = (0, 2x₁),  [f₁,[f₁,f₀]] = (0,2),

hence the vectors f₁ and [f₁,[f₁,f₀]] span IR². By Theorem 9, for every T > 0 the reachable set R(T) has non-empty interior.

Optimal Control Problems

ẋ = f(x,u),  u ∈ U,  t ∈ [0,T]   (1)
x(0) = x₀ ∈ IR^n   (2)

Among all trajectories of (1)-(2), we seek one which is optimal w.r.t. some cost criterion:

Mayer Problem: minimize J = ψ(x(T))   (18)

Lagrange Problem: minimize J = ∫₀^T ϕ(t, x(t), u(t)) dt   (19)

Bolza Problem: minimize J = ∫₀^T ϕ(t, x(t), u(t)) dt + ψ(x(T))   (20)

ϕ = running cost per unit time, ψ = terminal cost.

The minimization is always performed among all control functions u : [0,T] → U. One may also add a terminal constraint, requiring

x(T) ∈ S   (21)

for some set S ⊆ IR^n.

Basic assumptions: the cost functions ϕ, ψ are continuous; the set S is closed.

Equivalence of the various formulations

1. A Mayer Problem can be written as a Lagrange Problem, taking

ϕ(x,u) := ∇ψ(x) · f(x,u).

We are then minimizing

∫₀^T ∇ψ(x(t)) · f(x(t), u(t)) dt = ∫₀^T ∇ψ(x(t)) · ẋ(t) dt = ∫₀^T (d/dt) ψ(x(t)) dt = ψ(x(T)) − ψ(x₀).

2. A Lagrange Problem can be written as a Mayer Problem, introducing a new variable x_{n+1} with

x_{n+1}(0) = 0,  ẋ_{n+1} = ϕ(t, x(t), u(t)),

and defining ψ(x) := x_{n+1}. This yields the terminal cost

ψ(x(T)) = x_{n+1}(T) = ∫₀^T ϕ(t, x(t), u(t)) dt.
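The Lagrange-to-Mayer reduction in point 2 is easy to exercise numerically: integrate the augmented state x_{n+1} along with x and read the cost off the terminal value. The scalar dynamics ẋ = u and running cost ϕ = x² + u² below are illustrative choices of mine, not from the notes.

```python
import math

def simulate(u, T, n):
    # Euler integration of the augmented system:
    #   x'       = u(t)                 (original dynamics, hypothetical)
    #   x_{n+1}' = x^2 + u^2            (running cost, hypothetical)
    dt, x, c, t = T / n, 0.0, 0.0, 0.0
    for _ in range(n):
        ut = u(t)
        c += (x*x + ut*ut) * dt  # augmented coordinate accumulates the cost
        x += ut * dt
        t += dt
    return x, c

T, n = 1.0, 100000
x_T, cost_T = simulate(math.sin, T, n)   # control u(t) = sin t, so x(t) = 1 - cos t
# the Lagrange cost computed directly as an integral (midpoint rule, exact x)
direct = sum(((1 - math.cos((i + 0.5)*T/n))**2 + math.sin((i + 0.5)*T/n)**2) * (T/n)
             for i in range(n))
```

The terminal value of the augmented coordinate matches the directly computed integral, as the reduction promises.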

Relations with the Calculus of Variations

Standard Problem in the Calculus of Variations:

minimize ∫₀^T L(t, x(t), ẋ(t)) dt   (22)

among all absolutely continuous functions x : [0,T] → IR^n with x(0) = x₀, x(T) = x₁.

Optimal Control Problem:

minimize ∫₀^T ϕ(t, x(t), u(t)) dt   (19)

ẋ = f(x,u),  u ∈ U,  t ∈ [0,T],   (1)
x(0) = x₀,  x(T) = x₁.   (2)

1. We can write (22) as an optimal control problem by setting

ẋ = u,  u ∈ U := IR^n,  ϕ(t,x,u) := L(t,x,u).

2. We can write (19) in the form (22) by setting

L(t,x,p) := min { ϕ(t,x,u) ; u ∈ U, f(x,u) = p },
L(t,x,p) := +∞ if p ∉ { f(x,u) ; u ∈ U }.

L(t,x,p) is the minimum running cost among all controls that yield the speed ẋ(t) = p.

Existence of Optimal Controls

Mayer problem: minimize J = ψ(x(T))   (18)

for the system with constraints

ẋ = f(x,u),  u ∈ U,   (1)
x(0) = x₀,  x(T) ∈ S.   (2)

Theorem 10 (Existence for the Mayer Problem). Assume that, for each x, the set of velocities G(x) := { f(x,u) ; u ∈ U } is convex. If there exists at least one trajectory x(·) that satisfies (1)-(2), then the Mayer problem (18) admits an optimal solution.

Proof. By Theorem 4, the reachable set R(T) is compact. By the assumptions, R(T) ∩ S is non-empty and compact. The continuous function ψ admits a global minimum on the compact set R(T) ∩ S. This yields the optimal solution.

Bolza Problem: minimize

J = ∫₀^T ϕ(t, x(t), u(t)) dt + ψ(x(T))   (20)

for the system (1) with initial and terminal conditions (2).

Theorem 11 (Existence for the Bolza Problem). Assume that, for every t, x, the set

G⁺(t,x) := { (y, y_{n+1}) ∈ IR^{n+1} ; y = f(x,u), y_{n+1} ≥ ϕ(t,x,u) for some u ∈ U }

is convex. If there exists at least one trajectory x(·) that satisfies (1)-(2), then the Bolza problem (20) admits an optimal solution.

Remark: For the problem

minimize ∫₀^T L(t, x(t), u(t)) dt   (22)

subject to ẋ = u, x(0) = x₀, x(T) = x₁, corresponding to the standard problem in the Calculus of Variations, the set

G⁺(t,x) = { (u, y_{n+1}) ∈ IR^{n+1} ; y_{n+1} ≥ L(t,x,u) }

is precisely the epigraph of the function u ↦ L(t,x,u). This is a convex set iff L(t,x,ẋ) is convex as a function of ẋ.

First Order Variations

Let t ↦ x(t) be a solution of the O.D.E.

ẋ = g(t,x),  x ∈ IR^n.   (24)

Assume g measurable w.r.t. t and continuously differentiable w.r.t. x.

First order perturbation (o(ε) = infinitesimal of higher order w.r.t. ε):

x_ε(t) = x(t) + εv(t) + o(ε).   (25)

If t ↦ x_ε(t) is another solution of (24), letting ε → 0 we find a linearized evolution equation for the first order tangent vector v:

v̇(t) = A(t)v(t),  A(t) := D_x g(t, x(t)).   (26)

Adjoint system:

ṗ(t) = −p(t)A(t).   (27)

Here A is an n × n matrix with entries A_ij = ∂gᵢ/∂x_j; p ∈ IR^n is a row vector and v ∈ IR^n is a column vector. If t ↦ p(t) and t ↦ v(t) satisfy (27) and (26), then the product t ↦ p(t)v(t) is constant in time:

(d/dt)(p(t)v(t)) = ṗ(t)v(t) + p(t)v̇(t) = −p(t)A(t)v(t) + p(t)A(t)v(t) = 0.
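The conservation of p(t)v(t) along the paired linearized/adjoint flows is easy to observe numerically. The time-dependent matrix A(t) and the initial data below are illustrative choices of mine; integrating v forward by (26) and p by (27), the product should stay at its initial value up to integration error.

```python
import numpy as np

def A(t):
    # an arbitrary time-dependent linearization matrix (illustrative choice)
    return np.array([[0.0, 1.0], [-1.0 - 0.5*np.sin(t), 0.0]])

def rk4_step(f, y, t, dt):
    # one classical RK4 step for y' = f(t, y)
    k1 = f(t, y); k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2); k4 = f(t + dt, y + dt*k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

v = np.array([1.0, 0.0])    # column vector: v' = A(t) v
p = np.array([0.3, -2.0])   # row vector:    p' = -p A(t)
dt, T, t = 1e-3, 5.0, 0.0
products = []
while t < T - 1e-12:
    products.append(p @ v)
    v = rk4_step(lambda s, y: A(s) @ y, v, t, dt)
    p = rk4_step(lambda s, y: -(y @ A(s)), p, t, dt)
    t += dt
drift = max(abs(q - products[0]) for q in products)
```

The recorded products never drift from the initial value p(0)v(0), illustrating why the single backward adjoint vector carries the same information as all forward tangent vectors.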

Necessary Conditions for Optimality

ẋ = f(x,u),  t ∈ [0,T],  x(0) = x₀.   (28)

Family of admissible control functions: 𝒰 := { u : [0,T] → U }. For u(·) ∈ 𝒰, the trajectory of (28) is t ↦ x(t,u).

Mayer problem, free terminal point:

max_{u ∈ 𝒰} ψ(x(T,u)).   (29)

Assume: t ↦ u*(t) is an optimal control and t ↦ x*(t) = x(t,u*) is the corresponding optimal trajectory. The value ψ(x(T,u*)) cannot be increased by any perturbation of the control u*(·).

Fix a time τ ∈ ]0,T] and ω ∈ U, and consider the needle variation u_ε ∈ 𝒰:

u_ε(t) = ω if t ∈ [τ−ε, τ],  u_ε(t) = u*(t) if t ∉ [τ−ε, τ].   (30)

Call t ↦ x_ε(t) = x(t,u_ε) the perturbed trajectory. We shall compute the terminal point x_ε(T) = x(T,u_ε) and check that the value of ψ is not increased by the perturbation.

Assuming that the optimal control u* is continuous at time t = τ, we have

v(τ) := lim_{ε→0} (x_ε(τ) − x*(τ))/ε = f(x*(τ), ω) − f(x*(τ), u*(τ)).   (31)

Indeed, x_ε(τ−ε) = x*(τ−ε), and on the small interval [τ−ε, τ] we have

ẋ_ε ≈ f(x*(τ), ω),  ẋ* ≈ f(x*(τ), u*(τ)).

Since u_ε = u* on the remaining interval [τ, T], the evolution of the tangent vector

v(t) := lim_{ε→0} (x_ε(t) − x*(t))/ε,  t ∈ [τ, T],

is governed by the linear equation

v̇(t) = A(t)v(t),  A(t) := D_x f(x*(t), u*(t)).   (32)

By maximality, ψ(x_ε(T)) ≤ ψ(x*(T)), therefore

∇ψ(x*(T)) · v(T) ≤ 0.   (33)

Summing up: for every time τ and every control value ω ∈ U, we can generate the vector

v(τ) := f(x*(τ), ω) − f(x*(τ), u*(τ))

and propagate it forward in time, by solving the linearized equation (32). The inequality (33) is then a necessary condition for optimality.

Instead of propagating the (infinitely many) vectors v(τ) forward in time, it is more convenient to propagate the single vector ∇ψ backward. We thus define the row vector t ↦ p(t) as the solution of

ṗ(t) = −p(t)A(t),  p(T) = ∇ψ(x*(T)).   (34)

This yields p(t)v(t) = p(T)v(T) for every t. In particular, (33) implies

p(τ) · [ f(x*(τ), ω) − f(x*(τ), u*(τ)) ] = ∇ψ(x*(T)) · v(T) ≤ 0,

hence

p(τ) · ẋ*(τ) = p(τ) · f(x*(τ), u*(τ)) = max_{ω ∈ U} { p(τ) · f(x*(τ), ω) }.

For every time τ ∈ ]0,T], the speed ẋ*(τ) corresponding to the optimal control u*(τ) is the one having inner product with p(τ) as large as possible.

Pontryagin Maximum Principle (Mayer Problem, free terminal point)

Consider the control system

ẋ = f(x,u),  u(t) ∈ U,  t ∈ [0,T],   (35)

with initial data

x(0) = x₀.   (36)

Let t ↦ u*(t) be an optimal control and t ↦ x*(t) = x(t,u*) be the optimal trajectory for the maximization problem

max_{u ∈ 𝒰} ψ(x(T,u)).   (37)

Define the vector t ↦ p(t) as the solution to the linear adjoint system

ṗ(t) = −p(t)A(t),  A(t) := D_x f(x*(t), u*(t)),   (38)

with terminal condition p(T) = ∇ψ(x*(T)). Then, for almost every τ ∈ [0,T] the following maximality condition holds:

p(τ) · f(x*(τ), u*(τ)) = max_{ω ∈ U} { p(τ) · f(x*(τ), ω) }.   (39)

Computing the Optimal Control

STEP 1: solve the pointwise maximization problem (39), obtaining the optimal control u* as a function of p, x:

u*(x,p) = argmax_{ω ∈ U} { p · f(x,ω) }.   (40)

STEP 2: solve the two-point boundary value problem

ẋ = f(x, u*(x,p)),  ṗ = −p · D_x f(x, u*(x,p)),
x(0) = x₀,  p(T) = ∇ψ(x(T)).   (41)

- In general, the function u* = u*(p,x) in (40) is highly nonlinear. It may be multivalued or discontinuous.
- The two-point boundary value problem (41) can be solved by a shooting method: guess an initial value p(0) = p₀ and solve the corresponding Cauchy problem. Then adjust the value of p₀ so that the terminal values x(T), p(T) satisfy the given conditions.

Example 1 (Linear pendulum). q(t) = position of a linearized pendulum, controlled by an external force with magnitude u(t) ∈ [−1,1]:

q̈(t) + q(t) = u(t),  q(0) = q̇(0) = 0.

We wish to maximize the terminal displacement q(T). Equivalent control system: x₁ = q, x₂ = q̇,

ẋ₁ = x₂,  ẋ₂ = u − x₁,  x₁(0) = x₂(0) = 0;

maximize x₁(T) over all controls u : [0,T] → [−1,1].

Let t ↦ x*(t) = x(t,u*) be an optimal trajectory. The linearized equation for a tangent vector is

(v̇₁, v̇₂) = (v₂, −v₁).

Since ψ(x) := x₁, the corresponding adjoint vector p = (p₁,p₂) satisfies

(ṗ₁, ṗ₂) = (p₂, −p₁),  (p₁,p₂)(T) = ∇ψ(x*(T)) = (1,0).   (42)

In this special linear case, we can explicitly solve (42) without needing to know x*, u*. An easy computation yields

(p₁,p₂)(t) = ( cos(T−t), sin(T−t) ).   (43)

For each t, we must now choose the value u*(t) ∈ [−1,1] so that

p₁x₂ + p₂(−x₁ + u*) = max_{ω ∈ [−1,1]} { p₁x₂ + p₂(−x₁ + ω) }.

By (43), the optimal control is

u*(t) = sign(p₂(t)) = sign( sin(T−t) ).
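The bang-bang law above can be tested by simulation: integrate q̈ + q = u with the switching control sign(sin(T−t)) and compare the terminal displacement with, say, the constant control u ≡ 1. By variation of constants, q(T) = ∫₀^T sin(T−s)u(s)ds, so on T = 3π the optimal value is ∫|sin| = 6 while u ≡ 1 gives only 2. The integrator parameters below are my own choices.

```python
import math

def terminal_q(u, T, n):
    # RK4 for q'' + q = u(t), q(0) = qdot(0) = 0; returns q(T)
    dt, q, v, t = T / n, 0.0, 0.0, 0.0
    def rhs(t, q, v):
        return v, u(t) - q
    for _ in range(n):
        k1 = rhs(t, q, v)
        k2 = rhs(t + dt/2, q + dt/2*k1[0], v + dt/2*k1[1])
        k3 = rhs(t + dt/2, q + dt/2*k2[0], v + dt/2*k2[1])
        k4 = rhs(t + dt, q + dt*k3[0], v + dt*k3[1])
        q += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return q

T, n = 3 * math.pi, 60000
q_opt = terminal_q(lambda t: math.copysign(1.0, math.sin(T - t)), T, n)  # bang-bang
q_const = terminal_q(lambda t: 1.0, T, n)                                 # constant push
```

The switching control, which pushes in phase with the pendulum, reaches three times the displacement of the constant one.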

Example 2. Consider the problem on IR³:

maximize x₃(T) over all controls u : [0,T] → [−1,1]

for the system

ẋ₁ = u,  ẋ₂ = x₁,  ẋ₃ = x₂ − x₁²,  x₁(0) = x₂(0) = x₃(0) = 0.

The adjoint equations take the form

(ṗ₁, ṗ₂, ṗ₃) = ( −p₂ + 2x₁p₃, −p₃, 0 ),  (p₁,p₂,p₃)(T) = (0,0,1).   (44)

Maximizing the inner product p · ẋ we obtain the optimality conditions for the control u*:

p₁u* + p₂x₁ + p₃(x₂ − x₁²) = max_{ω ∈ [−1,1]} { p₁ω + p₂x₁ + p₃(x₂ − x₁²) },   (45)

hence

u* = 1 if p₁ > 0,  u* ∈ [−1,1] if p₁ = 0,  u* = −1 if p₁ < 0.

Solving the terminal value problem (44) for p₂, p₃ we find

p₃(t) ≡ 1,  p₂(t) = T − t.

The function p₁ can now be found from the equations

p̈₁ = 1 + 2u* = 1 + 2 sign(p₁),  p₁(T) = 0,  ṗ₁(0) = −p₂(0) = −T,

with the convention sign(0) = [−1,1]. The only solution is found to be

p₁(t) = (3/2)(T/3 − t)²  if 0 ≤ t ≤ T/3,
p₁(t) = 0  if T/3 ≤ t ≤ T.

The optimal control is

u*(t) = 1 if 0 ≤ t ≤ T/3,  u*(t) = −1/2 if T/3 ≤ t ≤ T.

Observe that on the interval [T/3, T] the optimal control is derived not from the maximality condition (45) but from the equation p̈₁ = (1 + 2u*) ≡ 0. An optimal control with this property is called singular.
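The claimed adjoint profile can be checked by integrating p₁ backward from p₁(T) = 0 along the trajectory generated by the stated two-phase control: p₁ should vanish identically on the singular arc and rise quadratically to T²/6 at t = 0. The Euler scheme below (step size my own choice) does exactly that.

```python
T, N = 3.0, 30000
dt = T / N

def x1(t):
    # trajectory driven by the claimed optimal control:
    # u* = 1 on [0, T/3], u* = -1/2 on [T/3, T]
    return t if t <= T/3 else T/3 - (t - T/3)/2

def p1dot(t):
    # adjoint equation p1' = -p2 + 2 x1 p3 with p3 = 1 and p2(t) = T - t
    return -(T - t) + 2 * x1(t)

p1 = 0.0                           # terminal condition p1(T) = 0
p1_grid = [0.0] * (N + 1)
for k in range(N, 0, -1):          # Euler, stepping backward in time
    p1 -= p1dot(k * dt) * dt
    p1_grid[k - 1] = p1

p1_mid = p1_grid[2 * N // 3]       # value at t = 2T/3, inside the singular arc
p1_zero = p1_grid[0]               # value at t = 0
expected0 = 1.5 * (T/3)**2         # formula p1(t) = (3/2)(T/3 - t)^2 at t = 0
```

On [T/3, T] the integrand −(T−t) + 2x₁(t) cancels identically, so p₁ stays at 0; on [0, T/3] the quadratic profile emerges.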

Tangent Cones

ẋ = f(x,u),  u ∈ U,  x(0) = x₀.

Let t ↦ x*(t) = x(t,u*) be a reference trajectory. Given τ ∈ ]0,T], ω ∈ U, consider the family of needle variations

u_ε(t) = ω if t ∈ [τ−ε, τ],  u_ε(t) = u*(t) if t ∉ [τ−ε, τ].   (30)

Call

v_{τ,ω}(T) := lim_{ε→0} ( x(T,u_ε) − x(T,u*) ) / ε

the first order variation of the terminal point of the corresponding trajectory. Define Γ as the smallest convex cone containing all vectors v_{τ,ω}. This is a cone of feasible directions, i.e. directions in which we can move the terminal point x(T,u*) by suitably perturbing the control u*.

Consider a terminal constraint x(T) ∈ S, where

S := { x ∈ IR^n ; φᵢ(x) = 0, i = 1,...,N }.

Assume that the N+1 gradients ∇ψ, ∇φ₁, ..., ∇φ_N are linearly independent at the point x*. Then the tangent space to S at x* is

T_S = { v ∈ IR^n ; ∇φᵢ(x*) · v = 0, i = 1,...,N }.

The tangent cone to the set

S⁺ = { x ∈ S ; ψ(x) ≥ ψ(x*) }

is

T_{S⁺} = { v ∈ IR^n ; ∇ψ(x*) · v ≥ 0, ∇φᵢ(x*) · v = 0, i = 1,...,N }.

When x* = x*(T), we think of T_{S⁺} as the cone of profitable directions, i.e. those directions in which we would like to move the terminal point, in order to increase the value of ψ and still satisfy the constraint x(T) ∈ S.

Lemma 1. A vector p ∈ IR^n satisfies

p · v ≥ 0 for all v ∈ T_{S⁺}   (46)

if and only if it can be written as a linear combination

p = λ₀∇ψ(x*) + Σ_{i=1}^N λᵢ∇φᵢ(x*),  with λ₀ ≥ 0.   (47)

Mayer Problem with terminal constraints

Maximize ψ(x(T)) for the control system

ẋ = f(x,u),  u(t) ∈ U,  t ∈ [0,T],

with initial and terminal constraints

x(0) = x₀,  φᵢ(x(T)) = 0,  i = 1,...,N.

PMP, geometric version. Let t ↦ x*(t) = x(t,u*) be an optimal trajectory, corresponding to the control u*(·). Then the cones Γ and T_{S⁺} are weakly separated, i.e. there exists a non-zero vector p(T) such that

p(T) · v ≥ 0 for all v ∈ T_{S⁺},   (48)
p(T) · v ≤ 0 for all v ∈ Γ.   (49)

PMP, analytic version. Let t ↦ x*(t) = x(t,u*) be an optimal trajectory, corresponding to the control u*(·). Then there exists a non-zero vector function t ↦ p(t) such that

p(T) = λ₀∇ψ(x*(T)) + Σ_{i=1}^N λᵢ∇φᵢ(x*(T)),  with λ₀ ≥ 0,   (50)

ṗ(t) = −p(t) D_x f(x*(t), u*(t)),  t ∈ [0,T],   (51)

p(τ) · f(x*(τ), u*(τ)) = max_{ω ∈ U} { p(τ) · f(x*(τ), ω) }  for a.e. τ ∈ [0,T].   (52)

Indeed, Lemma 1 states that (48) ⟺ (50). We now show that (49) ⟺ (51)+(52). Recall that every tangent vector v_{τ,ω} satisfies the linear evolution equation

v̇_{τ,ω}(t) = D_x f(x*(t), u*(t)) v_{τ,ω}(t).

If t ↦ p(t) satisfies (51), then the product p(t) · v_{τ,ω}(t) is constant. Hence

p(T) · v_{τ,ω}(T) ≤ 0

if and only if

p(τ) · v_{τ,ω}(τ) ≤ 0,

if and only if

p(τ) · [ f(x*(τ), ω) − f(x*(τ), u*(τ)) ] ≤ 0,

if and only if (52) holds.

High Order Conditions?

The Pontryagin Maximum Principle is a first order necessary condition. For the Mayer problem, maximize ψ(x(T,u)): if u*(·) is a given control, and if we can find a sequence of perturbed controls u_ε such that

lim_{ε→0} [ ψ(x(T,u_ε)) − ψ(x(T,u*)) ] / ‖u_ε − u*‖_{L¹} > 0,

then the PMP will detect the non-optimality of u*. However, if the increment in the value of ψ is of second or higher order w.r.t. the variation ‖u_ε − u*‖_{L¹}, then the conclusion of the PMP may still hold, even if u* is not optimal.

Example 3. Maximize ψ(x(T)) := x₂(T) for the system

(ẋ₁, ẋ₂) = (u, x₁²),  (x₁,x₂)(0) = (0,0),  u(t) ∈ [−1,1].

The constant control u*(t) ≡ 0 yields the constant trajectory (x₁,x₂)(t) ≡ (0,0). The corresponding adjoint vector, satisfying

(ṗ₁, ṗ₂) = (−2x₁p₂, 0) = (0,0),  (p₁,p₂)(T) = (0,1),

is trivially found to be (p₁,p₂)(t) ≡ (0,1). Hence the maximality condition

p₁u* + p₂(x₁*)² = 0 = max_{ω ∈ [−1,1]} { 0·ω + 1·0 }

is satisfied for every t ∈ [0,T]. However, the control u* ≡ 0 produces the worst possible outcome. Any other control u ≢ u* would yield

x₂(T,u) = ∫₀^T ( ∫₀^t u(s) ds )² dt > x₂(T,u*) = 0.

Notice that in this case the increase in ψ is only of second order w.r.t. the perturbation u − u*:

x₂(T,u) ≤ ∫₀^T ( ∫₀^T |u(s)| ds )² dt = T ‖u − u*‖²_{L¹}.

Lagrange Minimization Problem, fixed terminal point

For the control system on IR^n:

minimize ∫₀^T L(t,x,u) dt   (53)

with dynamics and constraints

ẋ = f(t,x,u),  u(t) ∈ U,   (54)
x(0) = x₀,  x(T) = x₁.   (55)

PMP, Lagrange problem. Let t ↦ x*(t) = x(t,u*) be an optimal trajectory, corresponding to the optimal control u*(·). Then there exist a constant λ₀ ≥ 0 and a row vector t ↦ p(t) (not both ≡ 0) such that

ṗ(t) = −p(t) D_x f(t, x*(t), u*(t)) − λ₀ D_x L(t, x*(t), u*(t)),   (56)

p(t) · f(t, x*(t), u*(t)) + λ₀ L(t, x*(t), u*(t)) = min_{ω ∈ U} { p(t) · f(t, x*(t), ω) + λ₀ L(t, x*(t), ω) }.   (57)

This follows by applying the previous results to the Mayer problem

minimize x_{n+1}(T),  with ẋ_{n+1} = L(t,x,u), x_{n+1}(0) = 0.

Because the terminal values (x₁,...,x_n)(T) are prescribed, the only requirement on the terminal value (p₁,...,p_n,p_{n+1})(T) is p_{n+1}(T) ≥ 0. Observe that ṗ_{n+1} = 0, hence p_{n+1}(t) ≡ λ₀ for some constant λ₀ ≥ 0.

Applications to the Calculus of Variations

Standard problem of the Calculus of Variations:

minimize ∫₀^T L(t, x(t), ẋ(t)) dt   (58)

over all absolutely continuous functions x : [0,T] → IR^n such that

x(0) = x₀,  x(T) = x₁.   (55)

This corresponds to the optimal control problem (53), for the control system

ẋ = u,  u ∈ IR^n.   (59)

We assume that L is smooth, and that x*(·) is an optimal solution. By the Pontryagin Maximum Principle (56)-(57), there exist a constant λ₀ ≥ 0 and a row vector t ↦ p(t) (not both ≡ 0) such that

ṗ(t) = −λ₀ ∇_x L(t, x*(t), ẋ*(t)),   (60)

p(t) · ẋ*(t) + λ₀ L(t, x*(t), ẋ*(t)) = min_{ω ∈ IR^n} { p(t) · ω + λ₀ L(t, x*(t), ω) }.   (61)

If λ₀ = 0, then p(t) ≢ 0; but in this case the linear map ω ↦ p(t) · ω cannot attain a minimum over the whole space IR^n. This contradiction shows that we must have λ₀ > 0. Since λ₀, p are determined up to a positive scalar multiple, we can assume λ₀ = 1. With this choice, (61) implies

p(t) = −∇_ẋ L(t, x*(t), ẋ*(t)).   (62)

The evolution equation

ṗ(t) = −∇_x L(t, x*(t), ẋ*(t))   (60)

now yields the famous Euler-Lagrange equations

(d/dt) [ ∇_ẋ L(t, x*(t), ẋ*(t)) ] = ∇_x L(t, x*(t), ẋ*(t)).   (63)

Moreover, the minimality condition

p(t) · ẋ*(t) + L(t, x*(t), ẋ*(t)) = min_{ω ∈ IR^n} { p(t) · ω + L(t, x*(t), ω) }   (61)

yields the Weierstrass necessary condition

L(t, x*(t), ω) ≥ L(t, x*(t), ẋ*(t)) + ∇_ẋ L(t, x*(t), ẋ*(t)) · (ω − ẋ*(t))   (64)

for every ω ∈ IR^n.

Viscosity Solutions of Hamilton-Jacobi Equations

One-sided Differentials

The set of super-differentials of u at a point x is

D⁺u(x) := { p ∈ IR^n ; limsup_{y→x} [ u(y) − u(x) − p·(y−x) ] / |y−x| ≤ 0 }.

The set of sub-differentials of u at x is

D⁻u(x) := { p ∈ IR^n ; liminf_{y→x} [ u(y) − u(x) − p·(y−x) ] / |y−x| ≥ 0 }.

Example 1. Consider the function

    u(x) ≐ { 0 if x < 0;  √x if x ∈ [0,1];  1 if x > 1 }.

[Figure: graph of u, rising from 0 at x = 0 to 1 at x = 1, constant outside.]

In this case we have

    D⁺u(0) = ∅,  D⁻u(0) = [0, +∞[,

    D⁺u(x) = D⁻u(x) = { 1/(2√x) }  for x ∈ ]0,1[,

    D⁺u(1) = [0, 1/2],  D⁻u(1) = ∅.
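The claim D⁺u(1) = [0, 1/2] can be tested numerically by sampling the difference quotient in the definition of the super-differential. A small script, my own check; the fixed sampling radius only approximates the limsup:

```python
import math

# Numerical illustration of Example 1 at the point x = 1:
# p ∈ D+u(1)  iff  limsup_{y->1} (u(y) - u(1) - p (y-1)) / |y-1| <= 0,
# and the claim is D+u(1) = [0, 1/2].

def u(x):
    if x < 0:
        return 0.0
    if x <= 1.0:
        return math.sqrt(x)
    return 1.0

def worst_ratio(p, radius=1e-3, n=2000):
    """Max of the difference quotient over sample points y near 1 (y != 1);
    a proxy for the limsup, accurate only for small radius."""
    ys = [1.0 + radius * (k / n) for k in range(-n, n + 1) if k != 0]
    return max((u(y) - u(1.0) - p * (y - 1.0)) / abs(y - 1.0) for y in ys)

inside = [worst_ratio(p) for p in (0.0, 0.25, 0.5)]   # p in [0, 1/2]
outside = [worst_ratio(p) for p in (-0.2, 0.7)]       # p outside [0, 1/2]
print(inside, outside)
```

For p inside [0, 1/2] the sampled quotient stays at or below 0; for p outside it exceeds a fixed positive amount, witnessing that the limsup condition fails.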

Characterization of super- and sub-differentials

Lemma 1. Let u ∈ C(Ω). Then

(i) p ∈ D⁺u(x) if and only if there exists a function ϕ ∈ C¹(Ω) such that Dϕ(x) = p and u − ϕ has a local maximum at x.

(ii) p ∈ D⁻u(x) if and only if there exists a function ϕ ∈ C¹(Ω) such that Dϕ(x) = p and u − ϕ has a local minimum at x.

By adding a constant, it is not restrictive to assume that ϕ(x) = u(x). In this case, we are saying that p ∈ D⁺u(x) iff there exists a smooth function ϕ ≥ u with Dϕ(x) = p, ϕ(x) = u(x).

[Figure: a C¹ function ϕ touching the graph of u from above at x, and one touching it from below.]

Proof. Assume that p ∈ D⁺u(x). Then we can find a continuous, increasing function σ : [0,∞[ → IR, with σ(0) = 0, such that

    u(y) − u(x) − p·(y−x) ≤ σ(|y−x|) |y−x| = o(1)·|y−x|  as y → x.

We then define

    ρ(r) ≐ ∫_0^r σ(t) dt,  ϕ(y) ≐ u(x) + p·(y−x) + ρ(2|y−x|),

and check that ϕ ∈ C¹ and u − ϕ has a local maximum at x. (Indeed ρ(2r) ≥ ∫_r^{2r} σ(t) dt ≥ σ(r)·r, hence ϕ(y) ≥ u(y) for y near x, while ϕ(x) = u(x).)

Conversely, if ϕ ∈ C¹ and u − ϕ has a local maximum at x, then u(y) − u(x) ≤ ϕ(y) − ϕ(x) for every y in a neighborhood of x, and hence, with p = Dϕ(x),

    limsup_{y→x} [u(y) − u(x) − p·(y−x)] / |y−x| ≤ limsup_{y→x} [ϕ(y) − ϕ(x) − p·(y−x)] / |y−x| = 0.

[Figure: ϕ touching the graph of u from above at the point x.]

Remark 1: By possibly replacing ϕ with ϕ̃(y) = ϕ(y) ± |y−x|², a strict local maximum or local minimum at the point x can be achieved.

Properties of super- and sub-differentials

Lemma 2. Let u ∈ C(Ω). Then

(i) If u is differentiable at x, then D⁺u(x) = D⁻u(x) = { ∇u(x) }.

(ii) If the sets D⁺u(x) and D⁻u(x) are both non-empty, then u is differentiable at x.

(iii) The sets of points where a one-sided differential exists,

    Ω⁺ ≐ { x ∈ Ω ; D⁺u(x) ≠ ∅ },  Ω⁻ ≐ { x ∈ Ω ; D⁻u(x) ≠ ∅ },

are both non-empty. Indeed, they are dense in Ω.

Proof of (i). Assume u is differentiable at x. Trivially, ∇u(x) ∈ D±u(x). On the other hand, if ϕ ∈ C¹(Ω) is such that u − ϕ has a local maximum at x, then ∇ϕ(x) = ∇u(x). Hence D⁺u(x) cannot contain any vector other than ∇u(x).

[Figure: left, functions ϕ_1 ≤ u ≤ ϕ_2 touching at x; right, the test function of part (iii) on the ball B(x_0,ρ).]

Proof of (ii). Assume that the sets D⁺u(x) and D⁻u(x) are both non-empty. Then we can find ϕ_1, ϕ_2 ∈ C¹(Ω) such that, for y near x,

    ϕ_1(x) = u(x) = ϕ_2(x),  ϕ_1(y) ≤ u(y) ≤ ϕ_2(y).

By comparison, this implies that u is differentiable at x and ∇u(x) = ∇ϕ_1(x) = ∇ϕ_2(x).

Proof of (iii). Fix any ball B(x_0,ρ) ⊂ Ω and consider the smooth function

    ϕ(x) ≐ u(x_0) − |x − x_0|²/(2ε).

By choosing ε > 0 sufficiently small, ϕ − u is strictly negative on the boundary of the ball, where |x − x_0| = ρ. Since u(x_0) = ϕ(x_0), the function u − ϕ attains a local minimum at an interior point x̄ ∈ B(x_0,ρ). By Lemma 1, the sub-differential of u at x̄ is non-empty:

    ∇ϕ(x̄) = −(x̄ − x_0)/ε ∈ D⁻u(x̄).

Viscosity Solutions

    F(x, u(x), ∇u(x)) = 0    (HJ)

Definition 1. Let F : Ω × IR × IR^n → IR be continuous.

u ∈ C(Ω) is a viscosity subsolution of (HJ) if

    F(x, u(x), p) ≤ 0  for every x ∈ Ω, p ∈ D⁺u(x).

u ∈ C(Ω) is a viscosity supersolution of (HJ) if

    F(x, u(x), p) ≥ 0  for every x ∈ Ω, p ∈ D⁻u(x).

We say that u is a viscosity solution of (HJ) if it is both a supersolution and a subsolution in the viscosity sense.

Evolution equation:

    u_t + H(t, x, u, ∇u) = 0.    (E)

Definition 2. u ∈ C(Ω) is a viscosity subsolution of (E) if, for every C¹ function ϕ = ϕ(t,x) such that u − ϕ has a local maximum at (t,x), there holds

    ϕ_t(t,x) + H(t, x, u, ∇ϕ) ≤ 0.

u ∈ C(Ω) is a viscosity supersolution of (E) if, for every C¹ function ϕ = ϕ(t,x) such that u − ϕ has a local minimum at (t,x), there holds

    ϕ_t(t,x) + H(t, x, u, ∇ϕ) ≥ 0.

The definition of subsolution imposes conditions on u only on the set Ω⁺ of points x where the super-differential D⁺u(x) is non-empty. Even if u is merely continuous, say nowhere differentiable, this set is non-empty and dense in Ω.

If u is a C¹ function that satisfies

    F(x, u(x), ∇u(x)) = 0    (HJ)

at every x ∈ Ω, then u is also a solution in the viscosity sense. Vice versa, if u is a viscosity solution, then the equality (HJ) must hold at every point x where u is differentiable. If u is Lipschitz continuous, then by Rademacher's theorem it is a.e. differentiable; hence (HJ) holds a.e. in Ω.

Example 2. The function u(x) = |x| is a viscosity solution of

    F(x, u, u_x) ≐ 1 − |u_x| = 0    (1)

defined on the whole real line. Indeed, u is differentiable and satisfies the equation at all points x ≠ 0. Moreover, we have

    D⁺u(0) = ∅,  D⁻u(0) = [−1, 1].

To show that u is a subsolution, there is nothing to check at x = 0. To show that u is a supersolution, take any p ∈ [−1,1]. Then 1 − |p| ≥ 0, as required.

Notice that u(x) = |x| is NOT a viscosity solution of the equation

    |u_x| − 1 = 0.    (2)

Indeed, at x = 0, taking p = 0 ∈ D⁻u(0) we find |0| − 1 = −1 < 0. Therefore, u(x) = |x| is a viscosity subsolution of (2), but not a supersolution.
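Example 2 reduces to finitely checkable inequalities once the one-sided differentials at x = 0 are known. A short script spelling this out (the grid over D⁻u(0) = [−1,1] is my own discretization):

```python
# Direct check of Example 2: u(x) = |x| solves 1 - |u_x| = 0 in the
# viscosity sense, but not |u_x| - 1 = 0.  Away from 0, u_x = ±1 and both
# equations hold classically; at x = 0 we use D+u(0) = ∅, D-u(0) = [-1,1].

F1 = lambda p: 1.0 - abs(p)   # equation (1):  1 - |u_x| = 0
F2 = lambda p: abs(p) - 1.0   # equation (2):  |u_x| - 1 = 0

sub_diff_at_0 = [k / 10.0 for k in range(-10, 11)]    # sample of D-u(0)
super_diff_at_0 = []                                   # D+u(0) = ∅

# subsolution:  F <= 0 on D+u ;   supersolution:  F >= 0 on D-u
is_super_1 = all(F1(p) >= 0 for p in sub_diff_at_0)
is_sub_1   = all(F1(p) <= 0 for p in super_diff_at_0)  # vacuously True
is_super_2 = all(F2(p) >= 0 for p in sub_diff_at_0)    # fails at p = 0
print(is_super_1, is_sub_1, is_super_2)
```

The asymmetry between (1) and (2) is exactly the sign convention in Definition 1: an empty super-differential makes the subsolution condition vacuous, while the large sub-differential at the kink makes the supersolution condition demanding.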

Vanishing Viscosity Limits

Lemma 3. Let (u_ε) be a family of smooth solutions to the viscous equation

    F(x, u_ε(x), ∇u_ε(x)) = ε Δu_ε.    (3)

Assume that, as ε → 0+, we have the convergence u_ε → u uniformly on an open set Ω ⊆ IR^n. Then u is a viscosity solution of

    F(x, u(x), ∇u(x)) = 0.    (HJ)

[Figure: u_ε converging to u, with a test function ϕ touching u from below at x and the nearby minimum points x_ε.]

Proof. 1. Fix x ∈ Ω and assume p ∈ D⁻u(x). We need to show that F(x, u(x), p) ≥ 0. By Lemma 1 and Remark 1, there exists ϕ ∈ C¹ with ∇ϕ(x) = p, ϕ(x) = u(x) and ϕ(y) < u(y) for all y ≠ x near x. For any δ > 0 we can then find 0 < ρ ≤ δ and a function ψ ∈ C² such that

    ‖ψ − ϕ‖_{C¹} ≤ δ  on the ball |y − x| ≤ ρ,

and such that each function u_ε − ψ has a local minimum inside the ball B(x; ρ).

2. Let u_ε − ψ have a local minimum at x_ε ∈ B(x,ρ). Then

    ∇ψ(x_ε) = ∇u_ε(x_ε),  Δu_ε(x_ε) ≥ Δψ(x_ε),

hence by (3)

    F(x_ε, u_ε(x_ε), ∇ψ(x_ε)) ≥ ε Δψ(x_ε).    (4)

3. Extract a convergent subsequence x_ε → x̄ ∈ B(x,ρ). Letting ε → 0+ in (4) we have

    F(x̄, u(x̄), ∇ψ(x̄)) ≥ 0.    (5)

Choosing δ > 0 small we can make |u(x̄) − u(x)| and |∇ψ(x̄) − p| as small as we like. By continuity, (5) yields F(x, u(x), p) ≥ 0. Hence u is a supersolution. The other half of the proof is similar.

Comparison Theorems

Theorem 1. Let Ω ⊂ IR^n be a bounded open set. Assume that u_1, u_2 ∈ C(Ω̄) are, respectively, viscosity sub- and supersolutions of

    u + H(x, Du) = 0,  x ∈ Ω,    (HJ)

and

    u_1 ≤ u_2 on ∂Ω.    (6)

Moreover, assume that the function H : Ω̄ × IR^n → IR satisfies

    |H(x,p) − H(y,p)| ≤ ω( |x−y| (1 + |p|) ),    (7)

where ω : [0,∞[ → [0,∞[ is continuous and non-decreasing, with ω(0) = 0. Then

    u_1 ≤ u_2 on Ω̄.    (8)

Proof. Easy case: u_1, u_2 smooth. If (8) fails, then u_1 − u_2 attains a positive maximum at an interior point x_0 ∈ Ω. Therefore

    p ≐ ∇u_1(x_0) = ∇u_2(x_0).    (9)

By definition of sub- and supersolution:

    u_1(x_0) + H(x_0,p) ≤ 0,  u_2(x_0) + H(x_0,p) ≥ 0.

Hence u_1(x_0) − u_2(x_0) ≤ 0, reaching a contradiction.

[Figure: graphs of u_1, u_2 over Ω, with the positive maximum of u_1 − u_2 attained at the interior point x_0.]

In the non-smooth case, we can again reach a contradiction provided that we can find a point x_0 such that

(i) u_1(x_0) > u_2(x_0),

(ii) some vector p lies at the same time in the super-differential D⁺u_1(x_0) and in the sub-differential D⁻u_2(x_0).

A natural candidate for x_0 is a point where u_1 − u_2 attains a global maximum. However, one of the sets D⁺u_1(x_0) or D⁻u_2(x_0) may be empty!

To proceed further, the key observation is that we do not need to compare values of u_1 and u_2 at exactly the same point. Indeed, to reach a contradiction, it suffices to find nearby points x_ε and y_ε such that

(i′) u_1(x_ε) > u_2(y_ε),

(ii′) some vector p lies at the same time in the super-differential D⁺u_1(x_ε) and in the sub-differential D⁻u_2(y_ε).

[Figure: graphs of u_1 and u_2, with the nearby points x_ε, y_ε marked.]

To find suitable points x_ε, y_ε, look at the function of two variables

    Φ_ε(x,y) ≐ u_1(x) − u_2(y) − |x−y|²/(2ε).    (10)

This clearly admits a global maximum over the compact set Ω̄ × Ω̄. If u_1 > u_2 at some point x_0, this maximum will be strictly positive. Taking ε > 0 sufficiently small, the boundary conditions imply that the maximum is attained at some interior point (x_ε, y_ε) ∈ Ω × Ω. The points x_ε, y_ε must be close to each other, otherwise the penalization term in (10) would be very large and negative.

The function of the single variable x

    x ↦ u_1(x) − ( u_2(y_ε) + |x − y_ε|²/(2ε) ) = u_1(x) − ϕ_1(x)    (11)

attains its maximum at the point x_ε. Hence by Lemma 1,

    (x_ε − y_ε)/ε = ∇ϕ_1(x_ε) ∈ D⁺u_1(x_ε).

The function of the single variable y

    y ↦ u_2(y) − ( u_1(x_ε) − |x_ε − y|²/(2ε) ) = u_2(y) − ϕ_2(y)    (12)

attains its minimum at the point y_ε. Hence

    (x_ε − y_ε)/ε = ∇ϕ_2(y_ε) ∈ D⁻u_2(y_ε).

We have thus discovered two points x_ε, y_ε and a vector p = (x_ε − y_ε)/ε which satisfy the conditions (i′)-(ii′).
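The splitting of the penalized maximizers can be watched on a grid. In the sketch below (my own example) u_1(x) = |x| and u_2(y) = 2|y| on [−1,1]: the maximum of u_1 − u_2 sits at the kink x_0 = 0, where D⁺u_1(0) = ∅, and maximizing Φ_ε instead produces x_ε = ±ε, y_ε = 0, with p = (x_ε − y_ε)/ε = ±1 lying in both D⁺u_1(x_ε) and D⁻u_2(y_ε).

```python
# Grid illustration of the doubling-of-variables trick: maximize
#   Φ_ε(x,y) = u1(x) - u2(y) - (x-y)^2/(2ε)
# for the non-smooth pair u1(x) = |x|, u2(y) = 2|y| on [-1,1].
# As ε shrinks, x_ε and y_ε approach each other and the penalty vanishes.

u1 = lambda x: abs(x)
u2 = lambda y: 2.0 * abs(y)

def argmax_phi(eps, n=400):
    """Brute-force maximizer of Φ_ε over a uniform grid on [-1,1]^2."""
    grid = [-1.0 + 2.0 * k / n for k in range(n + 1)]
    val, xe, ye = max(((u1(x) - u2(y) - (x - y) ** 2 / (2.0 * eps), x, y)
                       for x in grid for y in grid), key=lambda t: t[0])
    return val, xe, ye

val1, xe1, ye1 = argmax_phi(0.1)     # expect |x_ε - y_ε| ≈ 0.1
val2, xe2, ye2 = argmax_phi(0.01)    # expect |x_ε - y_ε| ≈ 0.01
print((xe1, ye1), (xe2, ye2))
```

The penalty value (x_ε − y_ε)²/(2ε) equals ε/2 here, matching the general fact (proved on the next slides) that it tends to 0 as ε → 0.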

Proof of Theorem 1.

1. If the conclusion fails, then there exists x_0 ∈ Ω such that

    u_1(x_0) − u_2(x_0) = max_{x∈Ω̄} { u_1(x) − u_2(x) } ≐ δ > 0.    (11)

For ε > 0, call (x_ε, y_ε) a point where the function

    Φ_ε(x,y) ≐ u_1(x) − u_2(y) − |x−y|²/(2ε)    (10)

attains its global maximum on the compact set Ω̄ × Ω̄. Clearly,

    Φ_ε(x_ε, y_ε) ≥ δ > 0.    (12)

2. Call M an upper bound for all values |u_1(x)|, |u_2(x)|, as x ∈ Ω̄. Then

    Φ_ε(x,y) ≤ 2M − |x−y|²/(2ε),

hence Φ_ε(x,y) ≤ 0 whenever |x−y|² ≥ 4Mε. By (12), this implies

    |x_ε − y_ε| ≤ 2√(Mε).    (13)

3. Since u_1 ≤ u_2 on the boundary ∂Ω, for ε > 0 small the maximum in (10) must be attained at some interior point (x_ε, y_ε) ∈ Ω × Ω.

4. Consider the smooth functions of one single variable

    ϕ_1(x) ≐ u_2(y_ε) + |x − y_ε|²/(2ε),  ϕ_2(y) ≐ u_1(x_ε) − |x_ε − y|²/(2ε).

Since x_ε provides a local maximum for u_1 − ϕ_1 and y_ε provides a local minimum for u_2 − ϕ_2, we have

    p ≐ (x_ε − y_ε)/ε ∈ D⁺u_1(x_ε) ∩ D⁻u_2(y_ε).    (14)

From the definition of viscosity sub- and supersolution we now obtain

    u_1(x_ε) + H(x_ε, p) ≤ 0,  u_2(y_ε) + H(y_ε, p) ≥ 0.    (15)

5. Observing that

    δ ≤ Φ_ε(x_ε, y_ε) ≤ [u_1(x_ε) − u_2(x_ε)] + [u_2(x_ε) − u_2(y_ε)] − |x_ε − y_ε|²/(2ε) ≤ δ + [u_2(x_ε) − u_2(y_ε)] − |x_ε − y_ε|²/(2ε),

we see that

    u_2(x_ε) − u_2(y_ε) ≥ |x_ε − y_ε|²/(2ε) ≥ 0,

and hence, by (13) and the uniform continuity of u_2,

    0 ≤ |x_ε − y_ε|²/(2ε) → 0  as ε → 0.    (16)

6. Subtracting the second from the first inequality in (15), and using (7), we obtain

    δ ≤ Φ_ε(x_ε, y_ε) ≤ u_1(x_ε) − u_2(y_ε) ≤ H(y_ε, p) − H(x_ε, p) ≤ ω( |x_ε − y_ε| (1 + |x_ε − y_ε| ε⁻¹) ).    (17)

This yields a contradiction. Indeed, by (13) and (16) the right-hand side of (17) can be made arbitrarily small by letting ε → 0.

Corollary 1. Let Ω ⊂ IR^n be a bounded open set. Let the Hamiltonian function H satisfy the equicontinuity assumption (7). Then the boundary value problem

    u + H(x, Du) = 0,  x ∈ Ω,
    u = ψ,  x ∈ ∂Ω,

admits at most one viscosity solution.

Uniqueness for the Cauchy Problem

    u_t + H(t, x, Du) = 0,  (t,x) ∈ ]0,T[ × IR^n,    (18)

    u(0,x) = g(x),  x ∈ IR^n.    (19)

(H1) H is uniformly continuous on [0,T] × IR^n × K, for every compact set K ⊂ IR^n.

(H2) There exists a continuous, non-decreasing function ω : [0,∞[ → [0,∞[ with ω(0) = 0 such that

    |H(t,x,p) − H(t,y,p)| ≤ ω( |x−y| (1 + |p|) ).

Theorem 2. Let the function H : [0,T] × IR^n × IR^n → IR satisfy the equicontinuity assumptions (H1)-(H2). Let u_1, u_2 be uniformly continuous sub- and supersolutions of (18), respectively. If u_1(0,x) ≤ u_2(0,x) for all x ∈ IR^n, then

    u_1(t,x) ≤ u_2(t,x)  for all (t,x) ∈ [0,T] × IR^n.

Corollary 2. Let the function H satisfy the assumptions (H1)-(H2). Then the Cauchy problem

    u_t + H(t, x, Du) = 0,  (t,x) ∈ ]0,T[ × IR^n,
    u(0,x) = g(x),  x ∈ IR^n,

admits at most one uniformly continuous viscosity solution u : [0,T] × IR^n → IR.

Sketch of the proof of Theorem 2. Assume, on the contrary, that u_1 > u_2 at some point (t,x). Then for some λ > 0 small we still have

    u_1(t,x) − λt > u_2(t,x).

Assume first that we can find a global maximum:

    u_1(t_0, x_0) − λt_0 − u_2(t_0, x_0) = sup_{t∈[0,T], x∈IR^n} { u_1(t,x) − λt − u_2(t,x) }.

Clearly, t_0 > 0. If u_1, u_2 are smooth, then

    ∇u_1(t_0, x_0) = ∇u_2(t_0, x_0),  ∂_t u_1(t_0, x_0) − λ ≥ ∂_t u_2(t_0, x_0).    (20)

On the other hand, by the definitions of sub- and supersolution we have

    ∂_t u_1(t_0, x_0) + H(t_0, x_0, ∇u_1) ≤ 0,  ∂_t u_2(t_0, x_0) + H(t_0, x_0, ∇u_2) ≥ 0.    (21)

Since ∇u_1 = ∇u_2 at (t_0, x_0), together (20) and (21) yield a contradiction.

Toward the general case, two technical difficulties arise:

1. The supremum of u_1 − u_2 − λt may only be approached as |x| → ∞.

2. The functions u_1, u_2 may not admit super- or sub-differentials at the point where the maximum is attained.

Key idea: look at the function of doubled variables

    Φ_ε(t,x,s,y) ≐ u_1(t,x) − u_2(s,y) − λ(t+s) − ε(|x|² + |y|²) − (1/ε)( |x−y|² + |t−s|² ).

The first penalization term guarantees that a global maximum is attained. The second penalization term guarantees that, at the point (t_ε, x_ε, s_ε, y_ε) where the maximum is attained, one has t_ε ≈ s_ε and x_ε ≈ y_ε.

Extensions

Discontinuous solutions. Second order equations.

u : Ω → IR is upper semicontinuous at x_0 if limsup_{x→x_0} u(x) ≤ u(x_0).

u : Ω → IR is lower semicontinuous at x_0 if liminf_{x→x_0} u(x) ≥ u(x_0).

[Figure: graphs of an upper semicontinuous and a lower semicontinuous function, each with a jump at x_0.]

A nonlinear operator F = F(x, u, Du, D²u) is (possibly degenerate) elliptic if

    F(x, u, p, X) ≥ F(x, u, p, X + Y)  whenever Y ≥ 0

(i.e. whenever the symmetric matrix Y is non-negative definite).

General nonlinear elliptic equation:

    F(x, u, Du, D²u) = 0    (22)

An upper semicontinuous function u : Ω → IR is a viscosity subsolution of the degenerate elliptic equation (22) if, for every ϕ ∈ C²(Ω), at each point x_0 where u − ϕ has a local maximum there holds

    F( x_0, u(x_0), Dϕ(x_0), D²ϕ(x_0) ) ≤ 0.

A lower semicontinuous function u : Ω → IR is a viscosity supersolution of the degenerate elliptic equation (22) if, for every ϕ ∈ C²(Ω), at each point x_0 where u − ϕ has a local minimum there holds

    F( x_0, u(x_0), Dϕ(x_0), D²ϕ(x_0) ) ≥ 0.

Comparison and uniqueness results: R. Jensen, Arch. Rat. Mech. Anal. (1988); H. Ishii, Comm. Pure Appl. Math. (1989).

Systems of Hamilton-Jacobi Equations

    w_t + H(x, Dw) = 0,  w : Ω → IR^n

1. Systems of conservation laws in one space dimension

    u_t + f(u)_x = 0    (23)

where u = (u_1,…,u_n) are the conserved quantities and f = (f_1,…,f_n) : IR^n → IR^n are the fluxes.

Since (23) is in divergence form, we can take w such that

    w_x = u,  w_t = −f(u).

This yields the H-J system of equations

    w_t + f(w_x) = 0.

2. Non-cooperative m-person differential games

    ẋ = Σ_{i=1}^m f_i(x, u_i).    (23)

Here t ↦ u_i(t) ∈ U_i is the control chosen by the i-th player, whose aim is to minimize his cost functional

    J_i ≐ ∫_t^T h_i( x(s), u_i(s) ) ds + g_i( x(T) ).    (24)

Let V_i(t,x) be the expected cost for the i-th player (if he behaves optimally), if the system starts at state x at time t. If such a value function V = (V_1,…,V_m) exists, it should provide a solution to the system of Hamilton-Jacobi equations

    ∂_t V_i + H_i(x, ∇V_1, …, ∇V_m) = 0,    (25)

with terminal data

    V_i(T,x) = g_i(x).    (26)

The Hamiltonian functions H_i are defined as follows. Assume that, for every given x ∈ IR^n and p_j = ∇V_j ∈ IR^n, there exists an optimal choice u_j*(x, p_j) ∈ U_j for the j-th player, such that

    f_j( x, u_j*(x,p_j) )·p_j + h_j( x, u_j*(x,p_j) ) = min_{ω∈U_j} { f_j(x,ω)·p_j + h_j(x,ω) }.

Then the Hamiltonian functions in (25) are

    H_i(x, p_1,…,p_m) ≐ Σ_{j=1}^m f_j( x, u_j*(x,p_j) )·p_i + h_i( x, u_i*(x,p_i) ).

(No general existence, uniqueness theory is yet available for these systems.)

Sufficient Conditions for Optimality

    ẋ = f(x,u),  t ∈ [0,T],  x(0) = x_0    (1)

Family of admissible control functions:

    U ≐ { u : [0,T] → U measurable }.

For u(·) ∈ U, the trajectory of (1) is t ↦ x(t,u).

Terminal constraint:

    x(T) ∈ S    (2)

Mayer problem:

    max_{u∈U} ψ( x(T,u) )    (3)

Sufficient Condition: Existence + PMP. Assume that the optimization problem (1)-(3) admits an optimal solution, and let {u_1, u_2, …, u_N} be the set of all control functions that satisfy the Pontryagin Maximum Principle. If

    ψ( x(T, u_k) ) = max_{1≤j≤N} ψ( x(T, u_j) ),

then the control u_k is optimal.

Example 1.

    (ẋ_1, ẋ_2) = (u, x_1²),  u ∈ [−1,1],  (x_1,x_2)(0) = (0,0),

    max_{u∈U} x_2(T)  with the terminal constraint x_1(T) = 0.

[Figure: trajectories in the (x_1, x_2)-plane.]

If t ↦ x*(t) = x(t,u*) is an optimal trajectory, the PMP yields

    (ṗ_1, ṗ_2) = (−2x_1* p_2, 0),  p_2(T) ≥ 0,

    p_1 u* + p_2 (x_1*)² = max_{ω∈[−1,1]} { p_1 ω + p_2 (x_1*)² }  ⟹  u* = sign p_1.

In particular p_2(t) ≡ p_2 ≥ 0. If p_1 ≡ 0, then p_2 > 0 and ṗ_1 = −2x_1* p_2 ≡ 0 forces x_1* ≡ 0, hence u*(t) ≡ 0, x*(t) ≡ (0,0).

Normalizing p_2 ≡ 1, the PMP system becomes

    ṗ_1 = −2x_1,  ṗ_1(0) = 0,  ẋ_1 = sign p_1,  ẋ_2 = x_1²,

so that p̈_1 = −2 sign p_1 along the trajectory. Countably many controls u_j satisfy the PMP (the figure sketches the adjoint p_1, made of parabolic arcs with p̈_1 = ∓2, vanishing at switching times spaced by T/j). Among them, the optimal controls are

    u*(t) = { 1 if 0 < t < T/2;  −1 if T/2 < t < T }  and  u*(t) = { −1 if 0 < t < T/2;  1 if T/2 < t < T }.

[Figure: left, the adjoint variable p_1 with its zeros at the switching times; right, the optimal trajectory x* in the (x_1, x_2)-plane.]
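A direct simulation confirms the optimal value. For the control switching at T/2 one has x_1*(t) = min(t, T−t), hence x_1*(T) = 0 and x_2*(T) = ∫_0^T x_1*(t)² dt = T³/12. A forward-Euler check (my own script):

```python
# Numerical check of Example 1 with the control u = +1 on [0, T/2],
# u = -1 on [T/2, T]:  x1 rises to T/2 and returns to 0, and the payoff is
#   x2(T) = ∫_0^T x1(t)^2 dt = T^3 / 12.

def simulate(T=2.0, n=200000):
    """Forward Euler for x1' = u, x2' = x1^2."""
    dt = T / n
    x1 = x2 = 0.0
    for i in range(n):
        u = 1.0 if i < n // 2 else -1.0   # switch exactly at the midpoint index
        x2 += x1 * x1 * dt                # left Riemann sum for x2' = x1^2
        x1 += u * dt                      # x1' = u
    return x1, x2

x1T, x2T = simulate()
print(x1T, x2T, 2.0 ** 3 / 12.0)
```

The mirrored control gives the same payoff by symmetry, which is why the problem has two distinct optimal solutions.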

The Value Function

    ẋ(t) = f( x(t), α(t) ),  s < t < T,    (4)

    x(s) = y.    (5)

Here x ∈ IR^n, while the control function α : [s,T] → A is required to take values inside a compact set A ⊂ IR^m.

Optimization Problem: select a control α*(·) which minimizes the cost

    J(s,y,α) ≐ ∫_s^T h( x(t), α(t) ) dt + g( x(T) ).    (6)

Assumption: the functions f, g, h are Lipschitz continuous and uniformly bounded.

Dynamic Programming Approach: study the value function

    u(s,y) ≐ inf_{α(·)} J(s,y,α).    (7)

Bellman's Dynamic Programming Principle

Theorem. For every s < τ < T, y ∈ IR^n, one has

    u(s,y) = inf_{α(·)} { ∫_s^τ h( x(t;s,y,α), α(t) ) dt + u( τ, x(τ;s,y,α) ) }.    (8)

The optimization problem on [s,T] can be split into two separate problems:

As a first step, we solve the optimization problem on the sub-interval [τ,T], with running cost h and terminal cost g. In this way, we determine the value function u(τ,·) at time τ.

As a second step, we solve the optimization problem on the sub-interval [s,τ], with running cost h and terminal cost u(τ,·), determined by the first step.

By (8), the value function u(s,·) obtained in the second step is the same as the value function corresponding to the global optimization problem over the whole interval [s,T].

[Figure: a trajectory starting at y at time s, reaching z = x(τ;s,y,α) at time τ, then continued as x(t;τ,z,α) up to time T.]

Mayer problem: h ≡ 0 (no running cost). In this case one has:

For every admissible trajectory t ↦ x(t), the map t ↦ u( t, x(t) ) is non-decreasing.

A trajectory t ↦ x*(t) is optimal if and only if t ↦ u( t, x*(t) ) is constant.
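For piecewise-constant controls on a uniform time grid, the principle (8) can be verified by brute-force enumeration. The sketch below uses my own discretization (dynamics ẋ = a with a three-point control set, no running cost, terminal cost g(x) = x²) and compares the value over all N-step control sequences with the two-stage computation split at step k:

```python
from itertools import product

# Discrete sanity check of Bellman's principle (8), Mayer form (h = 0):
#   x' = a,  a ∈ {-1, 0, 1},  cost g(x(T)) = x(T)^2,
# over piecewise-constant controls on N equal steps of length dt.

N, dt, A = 6, 0.25, (-1.0, 0.0, 1.0)

def flow(y, controls):
    """State reached from y after applying the given control sequence."""
    for a in controls:
        y = y + a * dt
    return y

def value(y, steps):
    """inf of g(x(T)) over all control sequences of the given length."""
    return min(flow(y, seq) ** 2 for seq in product(A, repeat=steps))

y0, k = 0.9, 3            # split the horizon after k of the N steps (time τ)
lhs = value(y0, N)        # global problem on all N steps
rhs = min(value(flow(y0, seq), N - k)      # first k steps, then the value
          for seq in product(A, repeat=k)) # function of the tail problem
print(lhs, rhs)
```

Here the best reachable endpoint from y0 = 0.9 in 6 steps is 0.9 − 4·0.25 = −0.1, so both computations return (−0.1)² = 0.01.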

Proof. Call J_τ the right-hand side of (8).

1. To prove that J_τ ≤ u(s,y), fix ε > 0 and choose a control α : [s,T] → A such that

    J(s,y,α) ≤ u(s,y) + ε.

Observing that

    u( τ, x(τ;s,y,α) ) ≤ ∫_τ^T h( x(t;s,y,α), α(t) ) dt + g( x(T;s,y,α) ),

we conclude

    J_τ ≤ ∫_s^τ h( x(t;s,y,α), α(t) ) dt + u( τ, x(τ;s,y,α) ) ≤ J(s,y,α) ≤ u(s,y) + ε.

Since ε > 0 is arbitrary, this first inequality is proved.

2. To prove that u(s,y) ≤ J_τ, fix ε > 0. Then there exists a control α′ : [s,τ] → A such that

    ∫_s^τ h( x(t;s,y,α′), α′(t) ) dt + u( τ, x(τ;s,y,α′) ) ≤ J_τ + ε.

Moreover, there exists a control α″ : [τ,T] → A such that

    J( τ, x(τ;s,y,α′), α″ ) ≤ u( τ, x(τ;s,y,α′) ) + ε.

One can now define a new control α : [s,T] → A as the concatenation of α′, α″:

    α(t) ≐ { α′(t) if t ∈ [s,τ];  α″(t) if t ∈ ]τ,T] }.

This yields

    u(s,y) ≤ J(s,y,α) ≤ J_τ + 2ε,

with ε > 0 arbitrarily small.

The Hamilton-Jacobi-Bellman Equation

    ẋ(t) = f( x(t), α(t) ),  s < t < T,    (4)

    x(s) = y,  α : [s,T] → A,    (5)

    J(s,y,α) ≐ ∫_s^T h( x(t), α(t) ) dt + g( x(T) ),    (6)

    u(s,y) ≐ inf_{α(·)} J(s,y,α).    (7)

Lemma (Regularity of the Value Function). Let f, g, h be uniformly bounded and Lipschitz continuous. Then the value function u is uniformly bounded and Lipschitz continuous on [0,T] × IR^n.

Theorem (Characterization of the Value Function). The value function u = u(s,y) is the unique viscosity solution of the Hamilton-Jacobi-Bellman equation

    u_t + H(x, ∇u) = 0,  (t,x) ∈ ]0,T[ × IR^n,    (9)

with terminal condition

    u(T,x) = g(x),  x ∈ IR^n,    (10)

and Hamiltonian function

    H(x,p) ≐ min_{a∈A} { f(x,a)·p + h(x,a) }.    (11)
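The characterization can be illustrated on a case where the value function is explicit. Take f(x,a) = a with A = [−1,1], h ≡ 0 and g(x) = |x| (my own toy example): then u(t,x) = max(|x| − (T−t), 0), and H(x,p) = min_{|a|≤1} a·p = −|p|, so (9) becomes u_t − |u_x| = 0. The script checks the equation by finite differences at points where u is smooth:

```python
# Finite-difference check that the explicit value function
#   u(t,x) = max(|x| - (T - t), 0)
# satisfies the HJB equation  u_t - |u_x| = 0  away from its kinks
# (the lines |x| = T - t and x = 0), for the toy problem
#   x' = a, a ∈ [-1,1], h = 0, g(x) = |x|.

T = 1.0
u = lambda t, x: max(abs(x) - (T - t), 0.0)

def hjb_residual(t, x, d=1e-6):
    """Central-difference residual of u_t - |u_x| at (t, x)."""
    u_t = (u(t + d, x) - u(t - d, x)) / (2 * d)
    u_x = (u(t, x + d) - u(t, x - d)) / (2 * d)
    return u_t - abs(u_x)

# Sample points away from the kinks:
pts = [(0.2, 1.5), (0.5, -2.0), (0.9, 0.7), (0.3, 0.1), (0.5, -0.3)]
residuals = [abs(hjb_residual(t, x)) for (t, x) in pts]
print(residuals)
```

Where |x| > T − t one has u_t = 1 and |u_x| = 1; where |x| < T − t the target is reachable exactly and u ≡ 0. In both regions the residual vanishes, and the terminal condition u(T,x) = |x| holds by construction.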


More information

Solution of Stochastic Optimal Control Problems and Financial Applications

Solution of Stochastic Optimal Control Problems and Financial Applications Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Example 1. Hamilton-Jacobi equation. In particular, the eikonal equation. for some n( x) > 0 in Ω. Here 1 / 2

Example 1. Hamilton-Jacobi equation. In particular, the eikonal equation. for some n( x) > 0 in Ω. Here 1 / 2 Oct. 1 0 Viscosity S olutions In this lecture we take a glimpse of the viscosity solution theory for linear and nonlinear PDEs. From our experience we know that even for linear equations, the existence

More information

2 Sequences, Continuity, and Limits

2 Sequences, Continuity, and Limits 2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs

More information

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle

Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Numerical Methods for Optimal Control Problems. Part I: Hamilton-Jacobi-Bellman Equations and Pontryagin Minimum Principle Ph.D. course in OPTIMAL CONTROL Emiliano Cristiani (IAC CNR) e.cristiani@iac.cnr.it

More information

Differential Inclusions and the Control of Forest Fires

Differential Inclusions and the Control of Forest Fires Differential Inclusions and the Control of Forest Fires Alberto Bressan November 006) Department of Mathematics, Penn State University University Park, Pa. 1680 U.S.A. e-mail: bressan@math.psu.edu Dedicated

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Semiconcavity and optimal control: an intrinsic approach

Semiconcavity and optimal control: an intrinsic approach Semiconcavity and optimal control: an intrinsic approach Peter R. Wolenski joint work with Piermarco Cannarsa and Francesco Marino Louisiana State University SADCO summer school, London September 5-9,

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

REVIEW OF ESSENTIAL MATH 346 TOPICS

REVIEW OF ESSENTIAL MATH 346 TOPICS REVIEW OF ESSENTIAL MATH 346 TOPICS 1. AXIOMATIC STRUCTURE OF R Doğan Çömez The real number system is a complete ordered field, i.e., it is a set R which is endowed with addition and multiplication operations

More information

On the Stability of the Best Reply Map for Noncooperative Differential Games

On the Stability of the Best Reply Map for Noncooperative Differential Games On the Stability of the Best Reply Map for Noncooperative Differential Games Alberto Bressan and Zipeng Wang Department of Mathematics, Penn State University, University Park, PA, 68, USA DPMMS, University

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Convexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls

Convexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls 1 1 Convexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls B.T.Polyak Institute for Control Science, Moscow, Russia e-mail boris@ipu.rssi.ru Abstract Recently [1, 2] the new convexity

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM*

A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* PORTUGALIAE MATHEMATICA Vol. 53 Fasc. 3 1996 A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* C. De Coster** and P. Habets 1 Introduction The study of the Ambrosetti Prodi problem has started with the paper

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

Maths 212: Homework Solutions

Maths 212: Homework Solutions Maths 212: Homework Solutions 1. The definition of A ensures that x π for all x A, so π is an upper bound of A. To show it is the least upper bound, suppose x < π and consider two cases. If x < 1, then

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

The first order quasi-linear PDEs

The first order quasi-linear PDEs Chapter 2 The first order quasi-linear PDEs The first order quasi-linear PDEs have the following general form: F (x, u, Du) = 0, (2.1) where x = (x 1, x 2,, x 3 ) R n, u = u(x), Du is the gradient of u.

More information

Periodic solutions of weakly coupled superlinear systems

Periodic solutions of weakly coupled superlinear systems Periodic solutions of weakly coupled superlinear systems Alessandro Fonda and Andrea Sfecci Abstract By the use of a higher dimensional version of the Poincaré Birkhoff theorem, we are able to generalize

More information

Neighboring feasible trajectories in infinite dimension

Neighboring feasible trajectories in infinite dimension Neighboring feasible trajectories in infinite dimension Marco Mazzola Université Pierre et Marie Curie (Paris 6) H. Frankowska and E. M. Marchini Control of State Constrained Dynamical Systems Padova,

More information

On the infinity Laplace operator

On the infinity Laplace operator On the infinity Laplace operator Petri Juutinen Köln, July 2008 The infinity Laplace equation Gunnar Aronsson (1960 s): variational problems of the form S(u, Ω) = ess sup H (x, u(x), Du(x)). (1) x Ω The

More information

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32 Functional Analysis Martin Brokate Contents 1 Normed Spaces 2 2 Hilbert Spaces 2 3 The Principle of Uniform Boundedness 32 4 Extension, Reflexivity, Separation 37 5 Compact subsets of C and L p 46 6 Weak

More information

Boundary value problems for the infinity Laplacian. regularity and geometric results

Boundary value problems for the infinity Laplacian. regularity and geometric results : regularity and geometric results based on joint works with Graziano Crasta, Roma La Sapienza Calculus of Variations and Its Applications - Lisboa, December 2015 on the occasion of Luísa Mascarenhas 65th

More information

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints

Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints An Li and Jane J. Ye Abstract. In this paper we study an optimal control problem with nonsmooth

More information

Hyperbolicity of Systems Describing Value Functions in Differential Games which Model Duopoly Problems. Joanna Zwierzchowska

Hyperbolicity of Systems Describing Value Functions in Differential Games which Model Duopoly Problems. Joanna Zwierzchowska Decision Making in Manufacturing and Services Vol. 9 05 No. pp. 89 00 Hyperbolicity of Systems Describing Value Functions in Differential Games which Model Duopoly Problems Joanna Zwierzchowska Abstract.

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1 NONLINEAR EVOLTION EQATIONS FOR MEASRES ON INFINITE DIMENSIONAL SPACES V.I. Bogachev 1, G. Da Prato 2, M. Röckner 3, S.V. Shaposhnikov 1 The goal of this work is to prove the existence of a solution to

More information

On the Bellman equation for control problems with exit times and unbounded cost functionals 1

On the Bellman equation for control problems with exit times and unbounded cost functionals 1 On the Bellman equation for control problems with exit times and unbounded cost functionals 1 Michael Malisoff Department of Mathematics, Hill Center-Busch Campus Rutgers University, 11 Frelinghuysen Road

More information

P-adic Functions - Part 1

P-adic Functions - Part 1 P-adic Functions - Part 1 Nicolae Ciocan 22.11.2011 1 Locally constant functions Motivation: Another big difference between p-adic analysis and real analysis is the existence of nontrivial locally constant

More information

Nonsmooth Analysis in Systems and Control Theory

Nonsmooth Analysis in Systems and Control Theory Nonsmooth Analysis in Systems and Control Theory Francis Clarke Institut universitaire de France et Université de Lyon [January 2008. To appear in the Encyclopedia of Complexity and System Science, Springer.]

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 3. Fundamental properties IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Example Consider the system ẋ = f

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations

A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations A Narrow-Stencil Finite Difference Method for Hamilton-Jacobi-Bellman Equations Xiaobing Feng Department of Mathematics The University of Tennessee, Knoxville, U.S.A. Linz, November 23, 2016 Collaborators

More information

Problem List MATH 5173 Spring, 2014

Problem List MATH 5173 Spring, 2014 Problem List MATH 5173 Spring, 2014 The notation p/n means the problem with number n on page p of Perko. 1. 5/3 [Due Wednesday, January 15] 2. 6/5 and describe the relationship of the phase portraits [Due

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

Hamilton-Jacobi theory for optimal control problems on stratified domains

Hamilton-Jacobi theory for optimal control problems on stratified domains Louisiana State University LSU Digital Commons LSU Doctoral Dissertations Graduate School 2010 Hamilton-Jacobi theory for optimal control problems on stratified domains Richard Charles Barnard Louisiana

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Stochastic optimal control with rough paths

Stochastic optimal control with rough paths Stochastic optimal control with rough paths Paul Gassiat TU Berlin Stochastic processes and their statistics in Finance, Okinawa, October 28, 2013 Joint work with Joscha Diehl and Peter Friz Introduction

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

On intermediate value theorem in ordered Banach spaces for noncompact and discontinuous mappings

On intermediate value theorem in ordered Banach spaces for noncompact and discontinuous mappings Int. J. Nonlinear Anal. Appl. 7 (2016) No. 1, 295-300 ISSN: 2008-6822 (electronic) http://dx.doi.org/10.22075/ijnaa.2015.341 On intermediate value theorem in ordered Banach spaces for noncompact and discontinuous

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

VISCOSITY SOLUTIONS OF ELLIPTIC EQUATIONS

VISCOSITY SOLUTIONS OF ELLIPTIC EQUATIONS VISCOSITY SOLUTIONS OF ELLIPTIC EQUATIONS LUIS SILVESTRE These are the notes from the summer course given in the Second Chicago Summer School In Analysis, in June 2015. We introduce the notion of viscosity

More information

Asymptotic behavior of infinity harmonic functions near an isolated singularity

Asymptotic behavior of infinity harmonic functions near an isolated singularity Asymptotic behavior of infinity harmonic functions near an isolated singularity Ovidiu Savin, Changyou Wang, Yifeng Yu Abstract In this paper, we prove if n 2 x 0 is an isolated singularity of a nonegative

More information

On the minimum of certain functional related to the Schrödinger equation

On the minimum of certain functional related to the Schrödinger equation Electronic Journal of Qualitative Theory of Differential Equations 2013, No. 8, 1-21; http://www.math.u-szeged.hu/ejqtde/ On the minimum of certain functional related to the Schrödinger equation Artūras

More information

MORE ON CONTINUOUS FUNCTIONS AND SETS

MORE ON CONTINUOUS FUNCTIONS AND SETS Chapter 6 MORE ON CONTINUOUS FUNCTIONS AND SETS This chapter can be considered enrichment material containing also several more advanced topics and may be skipped in its entirety. You can proceed directly

More information

Math Tune-Up Louisiana State University August, Lectures on Partial Differential Equations and Hilbert Space

Math Tune-Up Louisiana State University August, Lectures on Partial Differential Equations and Hilbert Space Math Tune-Up Louisiana State University August, 2008 Lectures on Partial Differential Equations and Hilbert Space 1. A linear partial differential equation of physics We begin by considering the simplest

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems

Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 1/5 Introduction to Nonlinear Control Lecture # 3 Time-Varying and Perturbed Systems p. 2/5 Time-varying Systems ẋ = f(t, x) f(t, x) is piecewise continuous in t and locally Lipschitz in x for all t

More information

Optimal Control Theory - Module 3 - Maximum Principle

Optimal Control Theory - Module 3 - Maximum Principle Optimal Control Theory - Module 3 - Maximum Principle Fall, 215 - University of Notre Dame 7.1 - Statement of Maximum Principle Consider the problem of minimizing J(u, t f ) = f t L(x, u)dt subject to

More information

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem 56 Chapter 7 Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem Recall that C(X) is not a normed linear space when X is not compact. On the other hand we could use semi

More information

Mean-field SDE driven by a fractional BM. A related stochastic control problem

Mean-field SDE driven by a fractional BM. A related stochastic control problem Mean-field SDE driven by a fractional BM. A related stochastic control problem Rainer Buckdahn, Université de Bretagne Occidentale, Brest Durham Symposium on Stochastic Analysis, July 1th to July 2th,

More information

Hyperbolic Systems of Conservation Laws

Hyperbolic Systems of Conservation Laws Hyperbolic Systems of Conservation Laws III - Uniqueness and continuous dependence and viscous approximations Alberto Bressan Mathematics Department, Penn State University http://www.math.psu.edu/bressan/

More information

Behaviour of Lipschitz functions on negligible sets. Non-differentiability in R. Outline

Behaviour of Lipschitz functions on negligible sets. Non-differentiability in R. Outline Behaviour of Lipschitz functions on negligible sets G. Alberti 1 M. Csörnyei 2 D. Preiss 3 1 Università di Pisa 2 University College London 3 University of Warwick Lars Ahlfors Centennial Celebration Helsinki,

More information