Geometric Optimal Control with Applications
Geometric Optimal Control with Applications

Accelerated Graduate Course
Institute of Mathematics for Industry, Kyushu University

Bernard Bonnard
Inria Sophia Antipolis et Institut de Mathématiques de Bourgogne
9 avenue Savary, 21078 Dijon, France

Monique Chyba
Department of Mathematics, University of Hawaii
2565 McCarthy the Mall, Honolulu, HI 96822, USA

with the help of Gautier Picot, Aaron Tamura-Sato, Steven Brelsford

June-July 2015
Contents

4 Optimal Control
  4.1 Problem Statement
  4.2 The Augmented System
      Related Problems
  4.3 Optimal Control and the Classical Calculus of Variations
  4.4 Singular Trajectories and the Weak Maximum Principle
      First and Second Variations of E_{x₀,T}
  4.5 Geometric Interpretation of the Adjoint Vector
  4.6 The Weak Maximum Principle
  4.7 Abnormality
  4.8 The Weak Maximization Principle and the Euler-Lagrange Equation
      Comparison with the Calculus of Variations
  4.9 LQ-Control and the Weak Maximum Principle
  4.10 Pontryagin's Maximum Principle
  4.11 Filippov Existence Theorem
      Comments about the Existence Theorem
Acknowledgments
Bibliography
Chapter 4

Optimal Control

4.1 Problem Statement

We consider the autonomous control system

  ẋ(t) = f(x(t), u(t)), x(t) ∈ ℝⁿ, u(t) ∈ Ω, (4.1)

where f is a C¹-mapping. Let the initial and target sets M₀, M₁ be given. We assume M₀, M₁ to be C¹-submanifolds of ℝⁿ. The control domain is a given subset Ω ⊂ ℝᵐ. The class of admissible controls U is the set of bounded measurable mappings u : [0, T(u)] → Ω. Let u(·) ∈ U and x₀ ∈ ℝⁿ be fixed. Then, by the Carathéodory theorem [4], there exists a unique trajectory of (4.1), denoted x(·, x₀, u), such that x(0) = x₀. This trajectory is defined on a nonempty subinterval J of [0, T(u)] on which t ↦ x(t, x₀, u) is an absolutely continuous function and is a solution of (4.1) almost everywhere. To each u(·) ∈ U defined on [0, T] with response x(·, x₀, u) issued from x(0) = x₀ ∈ M₀ and defined on [0, T], we assign a cost

  C(u) = ∫₀ᵀ f⁰(x(t), u(t)) dt, (4.2)

where f⁰ is a C¹-mapping. An admissible control u*(·) with corresponding trajectory x*(·, x₀, u*), defined on [0, T*] and such that x*(0) ∈ M₀ and x*(T*) ∈ M₁, is optimal if for each admissible control u(·) with response x(·, x₀, u) on [0, T] satisfying x(0) ∈ M₀ and x(T) ∈ M₁ we have C(u*) ≤ C(u).

4.2 The Augmented System

The following remark is straightforward but geometrically very important for understanding the maximum principle. Let us consider f̂ = (f, f⁰) and the corresponding system on ℝⁿ⁺¹ defined by the equations dx̂/dt = f̂(x̂(t), u(t)), i.e.

  ẋ(t) = f(x(t), u(t)), (4.3)
  ẋ⁰(t) = f⁰(x(t), u(t)). (4.4)

This system is called the augmented system. Since f̂ is C¹, according to the Carathéodory theorem, to each admissible control u(·) ∈ U there corresponds an admissible trajectory x̂(t, x̂₀, u) with x̂₀ = (x₀, x⁰(0)), x⁰(0) = 0, where the added coordinate x⁰(·) satisfies

  x⁰(T) = ∫₀ᵀ f⁰(x(t), u(t)) dt.

Let us denote by Â(M̂₀, T) the accessibility set ∪_{u(·)∈U} x̂(T, x̂₀, u) from M̂₀ = M₀ × {0}, and let M̂₁ = M₁ × ℝ. Then we observe that an optimal control u*(·) corresponds to a trajectory x̂*(·) such that x̂*(0) ∈ M̂₀ and intersecting M̂₁ at a point x̂*(T) where x⁰ is minimal. In particular, x̂*(T) belongs to the boundary of the accessibility set Â(M̂₀, T).
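The augmentation trick is easy to exercise numerically: the running cost f⁰ is adjoined as an extra state coordinate x⁰ with x⁰(0) = 0, so that x⁰(T) returns the cost C(u). The sketch below (a toy computation, not from the text: the system ẋ = −x + u, cost integrand f⁰ = x² + u², and the constant control u = 1 are our own choices) integrates the augmented system with a Runge-Kutta scheme:

```python
# Integrating the augmented system (4.3)-(4.4): the running cost f0
# becomes an extra state coordinate x0 with x0(0) = 0, so x0(T) = C(u).
# Toy data (our choice): xdot = -x + u, f0(x, u) = x^2 + u^2, u(t) = 1.

import math

def rk4_step(g, y, t, h):
    """One Runge-Kutta-4 step for ydot = g(t, y), y a list."""
    k1 = g(t, y)
    k2 = g(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = g(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = g(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def augmented(t, y):
    x, x0 = y                # state and cost coordinate
    u = 1.0                  # chosen control
    return [-x + u,          # xdot  = f(x, u)
            x**2 + u**2]     # x0dot = f0(x, u)

def integrate(T=1.0, n=1000):
    y, h = [0.0, 0.0], T / n          # x(0) = 0, x0(0) = 0
    for i in range(n):
        y = rk4_step(augmented, y, i*h, h)
    return y

def exact_cost(T):
    # Closed form of the cost for u = 1, x(0) = 0: x(t) = 1 - exp(-t).
    return 2*T - 2*(1 - math.exp(-T)) + (1 - math.exp(-2*T))/2

x_T, cost = integrate()
```

For the constant control u = 1 the trajectory and cost have closed forms, which the integration reproduces; any other f, f⁰ and control can be substituted in `augmented` without changing the scheme.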
Optimal Control Theory, Summer 2015

Related Problems

Our framework is a general setting to deal with a large class of problems. Examples are the following.

1. Nonautonomous systems: ẋ(t) = f(t, x(t), u(t)). We add the variable t to the state space by setting dt/ds = 1, t(s₀) = s₀.
2. Fixed-time problems. If the time domain [0, T(u)] is fixed (T(u) = T for all u(·)), we add the variable t to the state space by setting dt/ds = 1, t(s₀) = s₀, and we impose the following state constraints on t: t = 0 at s = 0 and t = T at the free terminal time s.

Some specific problems important for applications are the following.

1. If f⁰ ≡ 1, then min ∫₀ᵀ f⁰(x(t), u(t)) dt = min T and we minimize the global transfer time.
2. If the system is of the form ẋ(t) = f(t, x(t), u(t)) with f(t, x, u) = A(t)x(t) + B(t)u(t), where A(t), B(t) are matrices, and C(u) = ∫₀ᵀ L(t, x(t), u(t)) dt where L(t, ·, ·) is a quadratic form for each t, T being fixed, the problem is called a linear-quadratic problem (LQ-problem).

4.3 Optimal Control and the Classical Calculus of Variations

Classical problems of the calculus of variations can easily be stated as optimal control problems, as follows.

1. Holonomic problems: min ∫₀ᵀ L(t, x(t), ẋ(t)) dt. We introduce the control system by setting ẋ(t) = u(t), and we must minimize a cost C(u) = ∫₀ᵀ L(t, x(t), u(t)) dt. In particular the accessory problem

  min_{h(·)} ∫₀ᵀ (P(t)h²(t) + Q(t)ḣ²(t)) dt

is transformed into the LQ-problem

  ḣ(t) = u(t), min_{u(·)} ∫₀ᵀ (P(t)h²(t) + Q(t)u²(t)) dt.

2. Nonholonomic problems: more generally, the problem

  min_{x(·)} ∫₀ᵀ L(t, x(t), ẋ(t)) dt

among a set of curves satisfying the constraint ẋ(t) ∈ D(x(t)) can be reformulated as an optimal control problem when the differential inclusion can be restated as a system ẋ(t) = f(t, x(t), u(t)).
An important example in our study is the sub-Riemannian problem:

  min ∫₀ᵀ (ẋ(t), ẋ(t))_g^{1/2} dt with ẋ(t) ∈ D(x(t))

and

  D(x) = Span{F₁(x), …, F_p(x)},

where the distribution D generated by the vector fields F_i is of constant rank and (·, ·)_g is the scalar product associated with a Riemannian metric g.

4.4 Singular Trajectories and the Weak Maximum Principle

Definition 1. Consider a system on ℝⁿ: ẋ(t) = f(x(t), u(t)), where f is a C^∞-mapping from ℝⁿ × ℝᵐ into ℝⁿ. Fix x₀ ∈ ℝⁿ and T > 0. The end-point mapping (for fixed x₀, T) is the mapping

  E_{x₀,T} : u(·) ∈ U ↦ x(T, x₀, u).

If u(·) is a control defined on [0, T] such that the corresponding trajectory x(·, x₀, u) is defined on [0, T], then E_{x₀,T} is defined on a neighborhood V of u(·) for the L^∞([0, T]) norm.
First and Second Variations of E_{x₀,T}

It is a standard result, see for instance [6], that the end-point mapping is a C^∞-mapping defined on a domain of the Banach space L^∞([0, T]). The formal computation of the successive derivatives uses the concept of Gâteaux derivative. Let us explain in detail the process to compute the first and second variations. Let v(·) ∈ L^∞([0, T]) be a variation of the reference control u(·), and let us denote by x(·) + ξ(·) the response corresponding to u(·) + v(·) issued from x₀. Since f is C^∞, it admits a Taylor expansion for each fixed t:

  f(x + ξ, u + v) = f(x, u) + ∂f/∂x (x, u)ξ + ∂f/∂u (x, u)v + ∂²f/∂x∂u (x, u)(ξ, v) + ½ ∂²f/∂x² (x, u)(ξ, ξ) + ½ ∂²f/∂u² (x, u)(v, v) + ⋯

Using the differential equation we get

  ẋ(t) + ξ̇(t) = f(x(t) + ξ(t), u(t) + v(t)).

Hence we can write ξ in the form δ¹x + δ²x + ⋯, where δ¹x is linear in v, δ²x is quadratic, etc., and these terms are solutions of the following differential equations:

  δ̇¹x = ∂f/∂x (x, u)δ¹x + ∂f/∂u (x, u)v, (4.5)

  δ̇²x = ∂f/∂x (x, u)δ²x + ∂²f/∂x∂u (x, u)(δ¹x, v) + ½ ∂²f/∂x² (x, u)(δ¹x, δ¹x) + ½ ∂²f/∂u² (x, u)(v, v). (4.6)

Using ξ(0) = 0, these differential equations have to be integrated with the initial conditions

  δ¹x(0) = δ²x(0) = 0. (4.7)

Let us introduce the following notation.

Definition 2. The system

  δẋ(t) = A(t)δx(t) + B(t)δu(t), A(t) = ∂f/∂x (x(t), u(t)), B(t) = ∂f/∂u (x(t), u(t)),

is called the linearized system along (x(·), u(·)).

Let M(t) be the fundamental matrix, solution almost everywhere on [0, T] of

  Ṁ(t) = A(t)M(t), M(0) = identity.

Integrating (4.5) with δ¹x(0) = 0, we get the following expression for δ¹x:

  δ¹x(T) = M(T) ∫₀ᵀ M⁻¹(t)B(t)v(t) dt. (4.8)

This implies the following lemma.

Lemma 1. The Fréchet derivative of E_{x₀,T} at u(·) is given by

  E'_{x₀,T}(v) = δ¹x(T) = M(T) ∫₀ᵀ M⁻¹(t)B(t)v(t) dt.
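Formula (4.8) can be checked by quadrature. The sketch below is a toy computation of our own: we assume the double integrator A = [[0, 1], [0, 0]], B = (0, 1)ᵗ, whose fundamental matrix is M(t) = [[1, t], [0, 1]] in closed form, and the variation v(t) = sin t, for which δ¹x(T) can also be computed by hand:

```python
# Numerical check of formula (4.8): delta1_x(T) = M(T) * int_0^T M(t)^{-1} B v(t) dt,
# for the double integrator (toy choice): A = [[0,1],[0,0]], B = (0,1)^t,
# variation v(t) = sin t, so that M(t) = [[1, t], [0, 1]] exactly.

import math

B = [0.0, 1.0]

def fundamental(t):
    # Closed-form fundamental matrix exp(At) of the double integrator.
    return [[1.0, t], [0.0, 1.0]]

def inv2(M):
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def first_variation(T=2.0, n=2000):
    """Evaluate (4.8) with a composite midpoint rule."""
    h = T / n
    acc = [0.0, 0.0]
    for i in range(n):
        t = (i + 0.5) * h
        w = matvec(inv2(fundamental(t)),
                   [B[0]*math.sin(t), B[1]*math.sin(t)])
        acc = [acc[0] + h*w[0], acc[1] + h*w[1]]
    return matvec(fundamental(T), acc)

d = first_variation()
# Hand computation for this A, B, v gives (T - sin T, 1 - cos T).
expected = (2.0 - math.sin(2.0), 1.0 - math.cos(2.0))
```

Because the system here is already linear, δ¹x(T) coincides exactly with the endpoint shift produced by the variation v; for a nonlinear f it is only the first-order term.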
Definition 3. The admissible control u(·) and its corresponding trajectory x(·, x₀, u), both defined on [0, T], are said to be regular if the Fréchet derivative E'_{x₀,T} is surjective. Otherwise they are called singular.

Proposition 1. Let A(x₀, T) = ∪_{u(·)∈U} x(T, x₀, u) be the accessibility set at time T from x₀. If u(·) is a regular control on [0, T], then there exists a neighborhood U of the end-point x(T, x₀, u) contained in A(x₀, T).

Proof. Since E'_{x₀,T} is surjective at u(·), we have, using the open mapping theorem [3], that E_{x₀,T} is an open map.

Theorem 1. Assume that the admissible control u(·) and its corresponding trajectory x(·) are singular on [0, T]. Then there exists a vector p(·) ∈ ℝⁿ \ {0}, absolutely continuous on [0, T], such that (x, p, u) is a solution almost everywhere on [0, T] of the following equations:

  dx/dt = ∂H/∂p (x(t), p(t), u(t)), dp/dt = −∂H/∂x (x(t), p(t), u(t)), (4.9)

  ∂H/∂u (x(t), p(t), u(t)) = 0, (4.10)

where H(x, p, u) = ⟨p, f(x, u)⟩ is the pseudo-Hamiltonian, ⟨·, ·⟩ being the standard inner product.

Proof. We observe that the Fréchet derivative is given by solutions of the linear system δẋ(t) = A(t)δx(t) + B(t)v(t). Hence, if the pair (x(·), u(·)) is singular, this system is not controllable on [0, T]. We use an earlier proof on controllability to get a geometric characterization of this property. The proof, which is the heuristic basis of the maximum principle, is given in detail. By definition, since u(·) is a singular control on [0, T], the dimension of the linear space

  { M(T) ∫₀ᵀ M⁻¹(t)B(t)v(t) dt ; v(·) ∈ L^∞([0, T]) }

is less than n. Therefore there exists a row vector p̄ ∈ ℝⁿ \ {0} such that

  p̄ M(T) M⁻¹(t) B(t) = 0

for almost every t ∈ [0, T]. We set

  p(t) = p̄ M(T) M⁻¹(t).

By construction p(·) is a solution of the adjoint system

  ṗ(t) = −p(t) ∂f/∂x (x(t), u(t)).

Moreover, it satisfies almost everywhere the equality

  p(t) ∂f/∂u (x(t), u(t)) = 0.

Hence we get the equations (4.9) and (4.10) if H(x, p, u) denotes the scalar product ⟨p, f(x, u)⟩.
4.5 Geometric Interpretation of the Adjoint Vector

In the proof of Theorem 1 we introduced a vector p(·), called an adjoint vector. We observe that if u(·) is singular on [0, T], then for each 0 < t ≤ T the restriction u|_{[0,t]} is singular, and p(t) is orthogonal to the image, denoted K(t), of the Fréchet derivative E'_{x₀,t} evaluated at u|_{[0,t]}. If for each t the space K(t) is a linear space of codimension one, then p(t) is unique up to a scalar factor.
4.6 The Weak Maximum Principle

Theorem 2. Let u(·) be a control and x(·, x₀, u) the corresponding trajectory, both defined on [0, T]. If x(T, x₀, u) belongs to the boundary of the accessibility set A(x₀, T), then the control u(·) and the trajectory x(·, x₀, u) are singular.

Proof. According to Proposition 1, if u(·) is a regular control on [0, T], then x(T) belongs to the interior of the accessibility set.

Corollary 1. Consider the problem of maximizing the transfer time for the system ẋ(t) = f(x(t), u(t)), u(·) ∈ U, with fixed extremities x₀, x₁. If u*(·) and the corresponding trajectory are optimal on [0, T], then u*(·) is singular.

Proof. If u*(·) is maximizing, then x*(T) must belong to the boundary of the accessibility set A(x₀, T); otherwise, x*(T) being interior, by continuity of t ↦ x*(t) there exists ε > 0 such that x*(T − ε) ∈ A(x₀, T), and hence x*(T − ε) can be reached by a solution x(·) in time T: x*(T − ε) = x(T). It follows that the point x*(T) can be joined in a time T̂ > T, by following x(·) and then the reference trajectory from T − ε to T. This contradicts the maximality assumption.

Corollary 2. Consider the system ẋ(t) = f(x(t), u(t)), where u(·) ∈ U = L^∞([0, T]), and the minimization problem min_{u(·)∈U} ∫₀ᵀ L(x(t), u(t)) dt, where the extremities x₀, x₁ are fixed as well as the transfer time T. If u*(·) and its corresponding trajectory are optimal on [0, T], then u*(·) is singular on [0, T] for the augmented system:

  ẋ(t) = f(x(t), u(t)), ẋ⁰(t) = L(x(t), u(t)).

Therefore there exists p̂(t) = (p(t), p⁰) ∈ ℝⁿ⁺¹ \ {0} such that (x̂, p̂, u*) satisfies

  dx̂/dt = ∂Ĥ/∂p̂ (x̂(t), p̂(t), u(t)), dp̂/dt = −∂Ĥ/∂x̂ (x̂(t), p̂(t), u(t)), ∂Ĥ/∂u (x̂(t), p̂(t), u(t)) = 0, (4.11)

where x̂ = (x, x⁰) and Ĥ(x̂, p̂, u) = ⟨p, f(x, u)⟩ + p⁰ L(x, u).

Proof. We have that x̂*(T) belongs to the boundary of the accessibility set Â(x̂₀, T). Applying (4.9), (4.10) we get the equations (4.11), where ṗ⁰ = −∂Ĥ/∂x⁰ = 0 since Ĥ is independent of x⁰. Hence p⁰ is a constant.

4.7 Abnormality

In the previous corollary, p̂(·) is defined up to a scalar factor. Hence we can normalize p⁰ to 0 or −1, and we have two cases:
Case 1: u(·) is regular for the system ẋ(t) = f(x(t), u(t)). Then p⁰ ≠ 0 and can be normalized to −1. This is called the normal case (in the calculus of variations), see [2].

Case 2: u(·) is singular for the system ẋ(t) = f(x(t), u(t)). Then we can choose p⁰ = 0, and the Hamiltonian Ĥ evaluated along (x(·), p(·), u(·)) does not depend on the cost L(x, u). This case is called the abnormal case.

4.8 The Weak Maximization Principle and the Euler-Lagrange Equation

We can deduce the standard Euler-Lagrange equation from Corollary 2. Indeed, consider the problem of minimizing ∫₀ᵀ L(t, x(t), ẋ(t)) dt, where L : ℝ^{2n+1} → ℝ is a smooth map, among the set of absolutely continuous curves t ↦ x(t) in ℝⁿ with bounded derivative and satisfying the boundary conditions x(0) = x₀, x(T) = x₁. We introduce, for almost every t, the linear system ẋ(t) = u(t), u(t) ∈ ℝⁿ, with u(·) a bounded measurable function. We have

  Ĥ(x̂, p̂, u) = p·u + p⁰ L(x, u)
where p is a row vector in ℝⁿ. Since the linear system is controllable, any optimal control is normal and we can set p⁰ = −1. Using ∂Ĥ/∂u = 0, we get

  p_i(t) = ∂L/∂u_i (x(t), u(t)), i = 1, …, n.

Moreover,

  ṗ_i(t) = −∂Ĥ/∂x_i (x̂(t), p̂(t), u(t)) = ∂L/∂x_i (x(t), u(t)).

Integrating this last equation with respect to t, we get

  p_i(t) = p_i(t₀) + ∫_{t₀}^{t} ∂L/∂x_i (x(s), u(s)) ds,

and we write the Euler-Lagrange equation in the integral form (satisfied almost everywhere):

  ∂L/∂ẋ_i (x(t), u(t)) = p_i(t₀) + ∫_{t₀}^{t} ∂L/∂x_i (x(s), u(s)) ds. (4.12)

Moreover, if the curve t ↦ x(t) is C², we obtain by differentiating

  d/dt ∂L/∂ẋ (x(t), ẋ(t)) = ∂L/∂x (x(t), ẋ(t))

everywhere on [0, T].

Comparison with the Calculus of Variations

Although we can recover the Euler-Lagrange equation from the weak maximum principle, the two viewpoints are radically different, and the point of view of the maximum principle is superior to that of the calculus of variations in several ways:

1. we impose minimal regularity assumptions on the set of curves;
2. we use the concept of the augmented system, where the cost is adjoined as a state variable, and the adjoint vector has a clear geometric interpretation;
3. we obtain a set of equations in Hamiltonian form without using the Legendre transformation, which is not in general well-defined.

4.9 LQ-Control and the Weak Maximum Principle

We can apply the maximum principle to derive necessary optimality conditions for the LQ-problem. For the sake of simplicity we analyze only the autonomous case. We consider the problem of minimizing the cost

  C(u) = ∫₀ᵀ (ᵗx(t)Rx(t) + ᵗu(t)Uu(t)) dt

among the set of curves satisfying

  ẋ(t) = Ax(t) + Bu(t), x(t) ∈ ℝⁿ, u(t) ∈ ℝᵐ,

where A, B, R, U are constant matrices, and R, U are symmetric. We assume fixed boundary conditions: x(0) = x₀, x(T) = x₁.
Moreover we impose the (strong Legendre) regularity condition U > 0, and we assume that the linear system ẋ(t) = Ax(t) + Bu(t) is controllable, that is, rank R = n where R = [B, AB, …, A^{n−1}B]. From this last assumption, any minimizer is normal, and according to Corollary 2 a minimizer is a solution of the following constrained Hamiltonian system:

  dx̂/dt = ∂Ĥ/∂p̂ (x(t), p(t), u(t)), dp̂/dt = −∂Ĥ/∂x̂ (x(t), p(t), u(t)), ∂Ĥ/∂u (x(t), p(t), u(t)) = 0,

where Ĥ(x, p, u) = ⟨p, Ax + Bu⟩ − ½(ᵗxRx + ᵗuUu). Solving the linear equation ∂Ĥ/∂u = 0, we get, with U > 0, that an optimal control is defined by the (dynamic) feedback

  u(p) = U⁻¹ ᵗB ᵗp,

where p is written as a row vector. Introducing the Hamiltonian function H(x, p) = Ĥ(x, p, u(p)) and using the constraint ∂Ĥ/∂u = 0, we get ∂H/∂x = ∂Ĥ/∂x and ∂H/∂p = ∂Ĥ/∂p. Hence an optimal trajectory is the projection on the x-space of a solution of the following Hamiltonian system:

  ẋ(t) = ∂H/∂p (x(t), p(t)), ṗ(t) = −∂H/∂x (x(t), p(t)).

4.10 Pontryagin's Maximum Principle

In this section we state the Pontryagin maximum principle and outline its proof. We adopt the presentation of Lee and Markus [4], where the result is split into two theorems. The complete proof is complicated but rather standard; see the original book [5].

Theorem 3. Consider a system on ℝⁿ: ẋ(t) = f(x(t), u(t)), where f : ℝⁿ⁺ᵐ → ℝⁿ is a C¹-mapping. The family U of admissible controls is the set of bounded measurable mappings u(·), defined on [0, T], with values in a control domain Ω ⊂ ℝᵐ, such that the response x(·, x₀, u) is defined on [0, T]. Let ū(·) ∈ U be a control and let x̄(·) be the associated trajectory such that x̄(T) belongs to the boundary of the accessibility set A(x₀, T). Then there exists p(·) ∈ ℝⁿ \ {0}, an absolutely continuous function defined on [0, T], solution almost everywhere of the adjoint system

  ṗ(t) = −p(t) ∂f/∂x (x̄(t), ū(t)), (4.13)

such that for almost every t ∈ [0, T] we have

  H(x̄(t), p(t), ū(t)) = M(x̄(t), p(t)), (4.14)

where

  H(x, p, u) = ⟨p, f(x, u)⟩ and M(x, p) = max_{u∈Ω} H(x, p, u).

Moreover, t ↦ M(x̄(t), p(t)) is constant on [0, T].

Proof.
The accessibility set is not in general convex, and it must be approximated along the reference trajectory x̄(·) by a convex cone. The approximation is obtained by using needle-type variations of the control ū(·), which are close to ū(·) in the L¹-topology. (We do not use L^∞ perturbations and the Fréchet derivative of the end-point mapping computed in this Banach space.)
Needle-type approximation. We say that 0 ≤ t₁ ≤ T is a regular time for the reference trajectory if

  d/dt|_{t=t₁} ∫₀ᵗ f(x̄(τ), ū(τ)) dτ = f(x̄(t₁), ū(t₁)),

and from measure theory we have that almost every point of [0, T] is regular. At a regular time t₁, we define the following L¹-perturbation ū_ε(·) of the reference control: we fix l ≥ 0 and ε small enough, and we set

  ū_ε(t) = u₁ ∈ Ω constant on [t₁ − lε, t₁],
  ū_ε(t) = ū(t) otherwise on [0, T].

We denote by x̄_ε(·) the associated trajectory starting at x̄_ε(0) = x₀, and by ε ↦ α_t(ε) the curve defined by α_t(ε) = x̄_ε(t) for t ≥ t₁. We have

  x̄_ε(t₁) = x̄(t₁ − lε) + ∫_{t₁−lε}^{t₁} f(x̄_ε(t), ū_ε(t)) dt,

where ū_ε = u₁ on [t₁ − lε, t₁]. Moreover,

  x̄(t₁) = x̄(t₁ − lε) + ∫_{t₁−lε}^{t₁} f(x̄(t), ū(t)) dt,

and since t₁ is a regular time for x̄(·), we have

  x̄_ε(t₁) − x̄(t₁) = lε (f(x̄(t₁), u₁) − f(x̄(t₁), ū(t₁))) + o(ε).

In particular, if we consider the curve ε ↦ α_{t₁}(ε), it is a curve with origin x̄(t₁) whose tangent vector is given by

  v = l (f(x̄(t₁), u₁) − f(x̄(t₁), ū(t₁))). (4.15)

For t ≥ t₁, consider the local diffeomorphism ϕ_t(y) = x(t, t₁, y, ū), where x(·, t₁, y, ū) is the solution corresponding to ū(·) and starting at t = t₁ from y. By construction we have α_t(ε) = ϕ_t(α_{t₁}(ε)) for ε small enough, and moreover, for t ≥ t₁,

  v_t = d/dε|_{ε=0} α_t(ε)

is the image of v by the Jacobian of ϕ_t. In other words, v_t is the solution at time t of the variational equation

  dv/dt = ∂f/∂x (x̄(t), ū(t)) v (4.16)

with the condition v_t = v for t = t₁. We can extend v_t to the whole interval [0, T]. The construction can be done for an arbitrary choice of t₁, l and u₁. Let Π = {t₁, l, u₁} be fixed; we denote by v_Π(t) the corresponding vector v_t.

Additivity property. Let t₁, t₂ be two regular points of ū(·) with t₁ < t₂, and let l₁, l₂ and ε be small enough.
We define the following perturbation:

  ū_ε(t) = u₁ on [t₁ − l₁ε, t₁],
  ū_ε(t) = u₂ on [t₂ − l₂ε, t₂],
  ū_ε(t) = ū(t) otherwise on [0, T],

where u₁, u₂ are constant values in Ω, and we let x̄_ε(·) be the corresponding trajectory. Using the composition of the two elementary perturbations Π₁ = {t₁, l₁, u₁} and Π₂ = {t₂, l₂, u₂}, we define a new perturbation Π = {t₁, t₂, l₁, l₂, u₁, u₂}. If we denote by v_{Π₁}(t), v_{Π₂}(t) and v_Π(t) the respective tangent vectors, a computation similar to the previous one gives us:

  v_Π(t) = v_{Π₁}(t) + v_{Π₂}(t), for t ≥ t₂.

We can deduce the following lemma.
Lemma 2. Let Π = {t₁, …, t_s, λ₁l₁, …, λ_s l_s, u₁, …, u_s} be a perturbation at regular times t₁ < ⋯ < t_s, with l_i ≥ 0, λ_i ≥ 0, Σᵢ λ_i = 1, corresponding to the elementary perturbations Π_i = {t_i, l_i, u_i} with tangent vectors v_{Π_i}(t). Let x̄_ε(·) be the associated response to the perturbation Π. Then we have

  x̄_ε(t) = x̄(t) + ε Σ_{i=1}^{s} λ_i v_{Π_i}(t) + o(ε), (4.17)

where o(ε)/ε → 0 uniformly for 0 ≤ t ≤ T and 0 ≤ λ_i ≤ 1.

Definition 4. Let ū(·) be an admissible control and x̄(·) its associated trajectory defined for 0 ≤ t ≤ T. The first Pontryagin cone K(t), 0 < t ≤ T, is the smallest convex cone at x̄(t) containing all elementary perturbation vectors for all regular times t_i.

Definition 5. Let v₁, …, vₙ be linearly independent vectors of K(t), each v_i being formed as a convex combination of elementary perturbation vectors at distinct times. An elementary simplex cone C is the convex hull of the vectors v_i.

Lemma 3. Let v be a vector interior to K(t). Then there exists an elementary simplex cone C containing v in its interior.

Proof. In the construction of the interior of K(t), we use convex combinations of elementary perturbation vectors at regular times, not necessarily distinct. Clearly, by continuity, we can replace such a combination by a cone C in the interior with n distinct times.

Approximation lemma. An important technical lemma is the following topological result, whose proof uses the Brouwer fixed point theorem.

Lemma 4. Let v be a nonzero vector interior to K(T). Then there exist λ > 0 and a conic neighborhood N of λv such that N is contained in the accessibility set A(x₀, T).

Proof. See [4].

The meaning of the lemma is the following. Since v is interior to K(T), there exists an elementary simplex cone C such that v is interior to C. Hence for each w ∈ C there exists a perturbation ū_ε(·) of ū(·) such that its corresponding trajectory satisfies

  x̄_ε(T) = x̄(T) + εw + o(ε).
In particular there exists a control ū_ε(·) such that x̄_ε(T) = x̄(T) + εv + o(ε), and by construction x̄_ε(T) ∈ A(x₀, T). In other words, K(T) is a closed convex approximation of A(x₀, T).

Separation step. To finish the proof, we use the geometric Hahn-Banach theorem. Indeed, since x̄(T) ∈ ∂A(x₀, T), there exists a sequence xₙ ∉ A(x₀, T) such that xₙ → x̄(T) as n → +∞ and such that the unit vectors

  (xₙ − x̄(T)) / |xₙ − x̄(T)|

have a limit ω as n → ∞. The vector ω is not interior to K(T); otherwise, from Lemma 4 there would exist λ > 0 and a conic neighborhood of λω contained in A(x₀, T), and this contradicts the fact that xₙ ∉ A(x₀, T) for any n. Let π be any hyperplane at x̄(T) separating K(T) from ω, and let p̄ be the exterior unit normal to π at x̄(T). Let us define p(·) as the solution of the adjoint equation

  ṗ(t) = −p(t) ∂f/∂x (x̄(t), ū(t))

satisfying p(T) = p̄. By construction we have

  p(T)v(T) ≤ 0

for each elementary perturbation vector v(T) ∈ K(T), and since for t ∈ [0, T] the following equations hold:

  ṗ(t) = −p(t) ∂f/∂x (x̄, ū), v̇(t) = ∂f/∂x (x̄, ū) v,
we have

  d/dt (p(t)v(t)) = 0.

Hence p(t)v(t) = p(T)v(T) ≤ 0 for all t. Assume that the maximization condition (4.14) is not satisfied on some subset S of [0, T] of positive measure. Let t₁ ∈ S be a regular time; then there exists u₁ ∈ Ω such that

  p(t₁) f(x̄(t₁), ū(t₁)) < p(t₁) f(x̄(t₁), u₁).

Let us consider the elementary perturbation Π₁ = {t₁, l, u₁} and its tangent vector

  v_{Π₁}(t₁) = l [f(x̄(t₁), u₁) − f(x̄(t₁), ū(t₁))].

Then, using the above inequality, we have p(t₁) v_{Π₁}(t₁) > 0, which contradicts p(t) v_{Π₁}(t) ≤ 0 for all t. Therefore the equality

  H(x̄(t), p(t), ū(t)) = M(x̄(t), p(t))

is satisfied almost everywhere on [0, T]. Using a standard reasoning we can prove that t ↦ M(x̄(t), p(t)) is absolutely continuous and has zero derivative almost everywhere on [0, T]; see [4].

Theorem 4. Let us consider a general control system ẋ(t) = f(x(t), u(t)), where f is a continuously differentiable function, and let M₀, M₁ be two C¹-submanifolds of ℝⁿ. We assume the set U of admissible controls to be the set of bounded measurable mappings u : [0, T(u)] → Ω ⊂ ℝᵐ, where Ω is a given subset of ℝᵐ. Consider the following minimization problem:

  min_{u∈U} C(u), C(u) = ∫₀ᵀ f⁰(x(t), u(t)) dt,

where f⁰ is C¹, x(0) ∈ M₀, x(T) ∈ M₁, and T is not fixed. We introduce the augmented system:

  ẋ⁰(t) = f⁰(x(t), u(t)), x⁰(0) = 0, (4.18)
  ẋ(t) = f(x(t), u(t)), (4.19)

with x̂(t) = (x⁰(t), x(t)) ∈ ℝⁿ⁺¹ and f̂ = (f⁰, f). If (x*(·), u*(·)) is optimal on [0, T*], then there exists p̂*(·) = (p⁰, p*(·)) : [0, T*] → ℝⁿ⁺¹ \ {0}, absolutely continuous, such that (x̂*(·), p̂*(·), u*(·)) satisfies the following equations almost everywhere on [0, T*]:

  dx̂/dt = ∂Ĥ/∂p̂ (x̂(t), p̂(t), u(t)), dp̂/dt = −∂Ĥ/∂x̂ (x̂(t), p̂(t), u(t)), (4.20)

  Ĥ(x̂(t), p̂(t), u(t)) = M̂(x̂(t), p̂(t)), (4.21)

where

  Ĥ(x̂, p̂, u) = ⟨p̂, f̂(x, u)⟩, M̂(x̂, p̂) = max_{u∈Ω} Ĥ(x̂, p̂, u).

Moreover, we have

  M̂(x̂(t), p̂(t)) = 0 for all t, p⁰ ≤ 0, (4.22)

and the boundary conditions (transversality conditions):

  x*(0) ∈ M₀, x*(T*) ∈ M₁, (4.23)
  p*(0) ⊥ T_{x*(0)}M₀, p*(T*) ⊥ T_{x*(T*)}M₁. (4.24)

Proof. (For the complete proof, see [4] or [5].) Since (x*(·), u*(·)) is optimal on [0, T*], the augmented trajectory t ↦ x̂*(t) is such that x̂*(T*) belongs to the boundary of the accessibility set Â(x̂*(0), T*). Hence, by applying Theorem 3 to the augmented system, one gets the conditions (4.20), (4.21) and M̂ constant. To show that M̂ ≡ 0, we construct an enlarged approximating cone K'(T) containing K(T) and also the two vectors ±f̂(x*(T), u*(T)), using time variations (the transfer time is not fixed). To prove the transversality conditions, we use a standard separation lemma as in the proof of Theorem 3.

Definition 6. A triple (x(·), p(·), u(·)) solution of the maximum principle is called an extremal.
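The constancy of t ↦ M(x̄(t), p(t)) along an extremal can be observed numerically. The sketch below uses the time-optimal double integrator as a toy example of our own choosing (it is not worked out in the text): ẋ₁ = x₂, ẋ₂ = u, |u| ≤ 1, so H = p₁x₂ + p₂u, the maximizing control is u = sign(p₂), and the adjoint system gives p₁ constant, ṗ₂ = −p₁:

```python
# Along an extremal, t -> M(x(t), p(t)) = max_u H is constant (Theorem 3).
# Toy check on the time-optimal double integrator (our choice):
# x1dot = x2, x2dot = u, |u| <= 1, H = p1*x2 + p2*u, u = sign(p2),
# adjoint: p1 = const, p2dot = -p1.

def extremal(p1=1.0, p2_0=0.5, T=1.0, n=1000):
    """Integrate the extremal with Euler steps; return sampled values of M."""
    h = T / n
    x1, x2 = 0.0, 0.0
    Ms = []
    for i in range(n + 1):
        t = i * h
        p2 = p2_0 - p1 * t               # closed-form adjoint: p2dot = -p1
        u = 1.0 if p2 >= 0 else -1.0     # maximization of H over |u| <= 1
        Ms.append(p1 * x2 + abs(p2))     # M = max_u H = p1*x2 + |p2|
        x1 += h * x2                     # forward Euler on the state
        x2 += h * u
    return Ms

Ms = extremal()
spread = max(Ms) - min(Ms)
# spread is O(1/n): M stays constant even across the bang-bang switch
# at t = 0.5, where the control jumps from +1 to -1.
```

The control here is bang-bang (a discontinuous maximizer), which is exactly the situation the needle variations of the proof are designed to handle and where the weak (L^∞-differentiability) approach breaks down.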
4.11 Filippov Existence Theorem

In order to solve optimal control problems, we need an existence theorem for optimal trajectories. The following existence theorem can be found in [4] with a complete proof (in a more general setting than the one stated here).

Theorem 5. Consider a control system in ℝⁿ: ẋ(t) = f(x(t), u(t)), where f is C¹, with the following data:

- The initial and target sets M₀, M₁ are nonempty compact sets of ℝⁿ.
- The control domain Ω is a nonempty compact set in ℝᵐ.
- The state constraints are of the form h_i(x) ≤ 0, i = 1, …, p, where the h_i are continuous functions on ℝⁿ.
- The set U of admissible controls is the set of measurable mappings u(·) : [0, T] → Ω such that each u(·) has a response x(·), 0 ≤ t ≤ T, steering x₀ ∈ M₀ to x(T) = x₁ ∈ M₁, and such that t ↦ x(t) is entirely contained in the restraint set {x : h_i(x) ≤ 0}.
- The cost to be minimized is, for each u(·) ∈ U, of the form

  C(u) = ∫₀ᵀ f⁰(x(t), u(t)) dt,

where f⁰ is a C¹ function.

We assume the following.

1. The family U is not empty; that is, there exists u(·) steering x₀ ∈ M₀ to x₁ ∈ M₁.
2. For each response x(·) defined on [0, T] corresponding to u(·) ∈ U, there exists a uniform bound: |x(t)| ≤ b, 0 ≤ t ≤ T.
3. The extended velocity set

  V̂(x) = {(f⁰(x, u), f(x, u)) ; u ∈ Ω}

is a convex subset of ℝⁿ⁺¹ for each x in the state space.

Then there exists an optimal control u*(·) ∈ U minimizing C(u).

Comments about the Existence Theorem

The convexity assumption is necessary, as shown by the following example:

  C(x) = min_{u(·)} ∫₀¹ (1 − u²(t) + x²(t)) dt, ẋ = u(t), |u(t)| ≤ 1, x(0) = 0, x(1) = 0.

We can construct a sequence {xₙ(·)} converging uniformly to 0 such that ẋₙ² = 1 almost everywhere and C(xₙ) → 0 as n → +∞. But if ∫₀¹ (1 − u²(t) + x²(t)) dt = 0, then 1 − u²(t) + x²(t) = 0 almost everywhere and x(t) = 0 on [0, 1]. This is a contradiction, because the cost corresponding to the trajectory identically zero is 1, and hence not minimal.
If the convexity assumption is not satisfied, we can convexify the problem in order to get a well-posed optimal control problem; see [1]. For some classical optimal control problems, such as the time-minimal problem for affine systems

  ẋ(t) = F₀(x(t)) + F₁(x(t))u(t), u(t) ∈ Ω,

with Ω a convex set, the extended velocity set is convex. Hence it is only required to check the uniform bound of assumption 2.
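The nonconvexity counterexample above can be reproduced numerically. The sketch below (our own discretization of the example) evaluates the cost of the sawtooth controls uₙ = ±1 whose trajectories xₙ converge uniformly to 0:

```python
# The nonconvexity counterexample: sawtooth controls u_n = +/-1 give
# trajectories x_n -> 0 uniformly with cost C(x_n) -> 0, while the
# limit trajectory x = 0 (u = 0) has cost 1, so no minimizer exists.

def sawtooth_cost(n, steps=20000):
    """Cost int_0^1 (1 - u^2 + x^2) dt for the n-tooth sawtooth u = +/-1."""
    h = 1.0 / steps
    x, cost = 0.0, 0.0
    for i in range(steps):
        t = i * h
        # Slope +1 on the first half of each period of length 1/n.
        u = 1.0 if (t * n) % 1.0 < 0.5 else -1.0
        cost += (1.0 - u*u + x*x) * h   # integrand reduces to x^2, as u^2 = 1
        x += u * h                      # xdot = u
    return cost

costs = [sawtooth_cost(n) for n in (1, 4, 16)]
# The exact cost of the n-tooth sawtooth is 1/(12 n^2), so the infimum
# of C is 0, while the only candidate limit (x = 0, u = 0) costs 1.
```

Each refinement of the sawtooth divides the amplitude of xₙ, so the cost decreases like 1/n² toward the unattained infimum 0, which is exactly the failure of existence that the convexity assumption rules out.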
Acknowledgments

M. Chyba is partially supported by the National Science Foundation (NSF) Division of Mathematical Sciences, award #
Bibliography

[1] V. Alexéev, V. Tikhomirov, S. Fomine, Commande Optimale, translated from the Russian by A. Sossinski, Mir, Moscow.
[2] G.A. Bliss, Lectures on the Calculus of Variations, Univ. of Chicago Press, Chicago.
[3] H. Brézis, Analyse fonctionnelle : théorie et applications, Collection Mathématiques Appliquées pour la Maîtrise, Masson, Paris, 1983.
[4] E.B. Lee, L. Markus, Foundations of Optimal Control Theory, second edition, Robert E. Krieger Publishing Co., Inc., Melbourne, FL.
[5] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, et al., The Mathematical Theory of Optimal Processes, John Wiley & Sons, New York.
[6] E.D. Sontag, Mathematical Control Theory: Deterministic Finite-Dimensional Systems, second edition, Texts in Applied Mathematics 6, Springer-Verlag, New York, 1998.
Chapter 1 Introduction Contents Motivation........................................................ 1.2 Applications (of optimization).............................................. 1.2 Main principles.....................................................
More informationRecent Trends in Differential Inclusions
Recent Trends in Alberto Bressan Department of Mathematics, Penn State University (Aveiro, June 2016) (Aveiro, June 2016) 1 / Two main topics ẋ F (x) differential inclusions with upper semicontinuous,
More informationNonlinear Control Systems
Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 3. Fundamental properties IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Example Consider the system ẋ = f
More informationBernard Bonnard 1, Jean-Baptiste Caillau 2 and Emmanuel Trélat 3
ESAIM: COCV Vol. 13, N o 2, 27, pp. 27 236 DOI: 1.151/cocv:2712 ESAIM: Control, Optimisation and Calculus of Variations www.edpsciences.org/cocv SECOND ORDER OPTIMALITY CONDITIONS IN THE SMOOTH CASE AND
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationMathematical Economics. Lecture Notes (in extracts)
Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter
More informationAsteroid Rendezvous Missions
Asteroid Rendezvous Missions An Application of Optimal Control G. Patterson, G. Picot, and S. Brelsford University of Hawai`i at Manoa 25 June, 215 1/5 G. Patterson, G. Picot, and S. Brelsford Asteroid
More informationOn Controllability of Linear Systems 1
On Controllability of Linear Systems 1 M.T.Nair Department of Mathematics, IIT Madras Abstract In this article we discuss some issues related to the observability and controllability of linear systems.
More informationChain differentials with an application to the mathematical fear operator
Chain differentials with an application to the mathematical fear operator Pierre Bernhard I3S, University of Nice Sophia Antipolis and CNRS, ESSI, B.P. 145, 06903 Sophia Antipolis cedex, France January
More informationCourse Summary Math 211
Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.
More informationLinear conic optimization for nonlinear optimal control
Linear conic optimization for nonlinear optimal control Didier Henrion 1,2,3, Edouard Pauwels 1,2 Draft of July 15, 2014 Abstract Infinite-dimensional linear conic formulations are described for nonlinear
More informationAbstract. Jacobi curves are far going generalizations of the spaces of \Jacobi
Principal Invariants of Jacobi Curves Andrei Agrachev 1 and Igor Zelenko 2 1 S.I.S.S.A., Via Beirut 2-4, 34013 Trieste, Italy and Steklov Mathematical Institute, ul. Gubkina 8, 117966 Moscow, Russia; email:
More informationarxiv: v1 [math.oc] 22 Sep 2016
EUIVALENCE BETWEEN MINIMAL TIME AND MINIMAL NORM CONTROL PROBLEMS FOR THE HEAT EUATION SHULIN IN AND GENGSHENG WANG arxiv:1609.06860v1 [math.oc] 22 Sep 2016 Abstract. This paper presents the equivalence
More informationAn Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game
An Application of Pontryagin s Maximum Principle in a Linear Quadratic Differential Game Marzieh Khakestari (Corresponding author) Institute For Mathematical Research, Universiti Putra Malaysia, 43400
More informationExercises: Brunn, Minkowski and convex pie
Lecture 1 Exercises: Brunn, Minkowski and convex pie Consider the following problem: 1.1 Playing a convex pie Consider the following game with two players - you and me. I am cooking a pie, which should
More informationIntroduction. 1 Minimize g(x(0), x(1)) +
ESAIM: Control, Optimisation and Calculus of Variations URL: http://www.emath.fr/cocv/ Will be set by the publisher, UNMAXIMIZED INCLUSION TYPE CONDITIONS FOR NONCONVEX CONSTRAINED OPTIMAL CONTROL PROBLEMS
More informationMañé s Conjecture from the control viewpoint
Mañé s Conjecture from the control viewpoint Université de Nice - Sophia Antipolis Setting Let M be a smooth compact manifold of dimension n 2 be fixed. Let H : T M R be a Hamiltonian of class C k, with
More informationA generic property of families of Lagrangian systems
Annals of Mathematics, 167 (2008), 1099 1108 A generic property of families of Lagrangian systems By Patrick Bernard and Gonzalo Contreras * Abstract We prove that a generic Lagrangian has finitely many
More informationA Concise Course on Stochastic Partial Differential Equations
A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original
More informationEE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games
EE291E/ME 290Q Lecture Notes 8. Optimal Control and Dynamic Games S. S. Sastry REVISED March 29th There exist two main approaches to optimal control and dynamic games: 1. via the Calculus of Variations
More informationCritical Cones for Regular Controls with Inequality Constraints
International Journal of Mathematical Analysis Vol. 12, 2018, no. 10, 439-468 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ijma.2018.8856 Critical Cones for Regular Controls with Inequality Constraints
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationThe Euler Method for Linear Control Systems Revisited
The Euler Method for Linear Control Systems Revisited Josef L. Haunschmied, Alain Pietrus, and Vladimir M. Veliov Research Report 2013-02 March 2013 Operations Research and Control Systems Institute of
More informationIntroduction to Functional Analysis
Introduction to Functional Analysis Carnegie Mellon University, 21-640, Spring 2014 Acknowledgements These notes are based on the lecture course given by Irene Fonseca but may differ from the exact lecture
More informationChapter 2 Optimal Control Problem
Chapter 2 Optimal Control Problem Optimal control of any process can be achieved either in open or closed loop. In the following two chapters we concentrate mainly on the first class. The first chapter
More informationOptimality Conditions for Nonsmooth Convex Optimization
Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never
More informationDifferential Games II. Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011
Differential Games II Marc Quincampoix Université de Bretagne Occidentale ( Brest-France) SADCO, London, September 2011 Contents 1. I Introduction: A Pursuit Game and Isaacs Theory 2. II Strategies 3.
More informationOptimal Control Theory - Module 3 - Maximum Principle
Optimal Control Theory - Module 3 - Maximum Principle Fall, 215 - University of Notre Dame 7.1 - Statement of Maximum Principle Consider the problem of minimizing J(u, t f ) = f t L(x, u)dt subject to
More informationObstacle problems and isotonicity
Obstacle problems and isotonicity Thomas I. Seidman Revised version for NA-TMA: NA-D-06-00007R1+ [June 6, 2006] Abstract For variational inequalities of an abstract obstacle type, a comparison principle
More informationAN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)
NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with
More informationLecture 7 Monotonicity. September 21, 2008
Lecture 7 Monotonicity September 21, 2008 Outline Introduce several monotonicity properties of vector functions Are satisfied immediately by gradient maps of convex functions In a sense, role of monotonicity
More informationConvexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.
Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed
More informationPOLARS AND DUAL CONES
POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.
More information2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.
Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following
More informationConvexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls
1 1 Convexity of the Reachable Set of Nonlinear Systems under L 2 Bounded Controls B.T.Polyak Institute for Control Science, Moscow, Russia e-mail boris@ipu.rssi.ru Abstract Recently [1, 2] the new convexity
More informationSome topics in sub-riemannian geometry
Some topics in sub-riemannian geometry Luca Rizzi CNRS, Institut Fourier Mathematical Colloquium Universität Bern - December 19 2016 Sub-Riemannian geometry Known under many names: Carnot-Carathéodory
More informationControl, Stabilization and Numerics for Partial Differential Equations
Paris-Sud, Orsay, December 06 Control, Stabilization and Numerics for Partial Differential Equations Enrique Zuazua Universidad Autónoma 28049 Madrid, Spain enrique.zuazua@uam.es http://www.uam.es/enrique.zuazua
More informationCourse 212: Academic Year Section 1: Metric Spaces
Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........
More informationZeros and zero dynamics
CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)
More informationOn duality theory of conic linear problems
On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationImplications of the Constant Rank Constraint Qualification
Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2009 Copyright 2009 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationNONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction
Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 34, 29, 327 338 NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS Shouchuan Hu Nikolas S. Papageorgiou
More informationAbout Moreau-Yosida regularization of the minimal time crisis problem
About Moreau-Yosida regularization of the minimal time crisis problem Terence Bayen and Alain Rapaport, April 6, 2015 Abstract We study an optimal control problem where the cost functional to be minimized
More informationON THE STRONG CONVERGENCE OF DERIVATIVES IN A TIME OPTIMAL PROBLEM.
ON THE STRONG CONVERGENCE OF DERIVATIVES IN A TIME OPTIMAL PROBLEM. A. CELLINA, F. MONTI, AND M. SPADONI Abstract. We consider a time optimal problem for a system described by a Differential Inclusion,
More informationNotes for Functional Analysis
Notes for Functional Analysis Wang Zuoqin (typed by Xiyu Zhai) November 6, 2015 1 Lecture 18 1.1 The convex hull Let X be any vector space, and E X a subset. Definition 1.1. The convex hull of E is the
More informationChapter 2 Convex Analysis
Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,
More informationThe Bang-Bang theorem via Baire category. A Dual Approach
The Bang-Bang theorem via Baire category A Dual Approach Alberto Bressan Marco Mazzola, and Khai T Nguyen (*) Department of Mathematics, Penn State University (**) Université Pierre et Marie Curie, Paris
More informationSYMPLECTIC GEOMETRY: LECTURE 5
SYMPLECTIC GEOMETRY: LECTURE 5 LIAT KESSLER Let (M, ω) be a connected compact symplectic manifold, T a torus, T M M a Hamiltonian action of T on M, and Φ: M t the assoaciated moment map. Theorem 0.1 (The
More informationON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM
ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,
More informationJoint work with Nguyen Hoang (Univ. Concepción, Chile) Padova, Italy, May 2018
EXTENDED EULER-LAGRANGE AND HAMILTONIAN CONDITIONS IN OPTIMAL CONTROL OF SWEEPING PROCESSES WITH CONTROLLED MOVING SETS BORIS MORDUKHOVICH Wayne State University Talk given at the conference Optimization,
More informationLECTURE 15: COMPLETENESS AND CONVEXITY
LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other
More informationUnderwater vehicles: a surprising non time-optimal path
Underwater vehicles: a surprising non time-optimal path M. Chyba Abstract his paper deals with the time-optimal problem for a class of underwater vehicles. We prove that if two configurations at rest can
More informationExtremal Trajectories for Bounded Velocity Differential Drive Robots
Extremal Trajectories for Bounded Velocity Differential Drive Robots Devin J. Balkcom Matthew T. Mason Robotics Institute and Computer Science Department Carnegie Mellon University Pittsburgh PA 523 Abstract
More informationCalculus of Variations. Final Examination
Université Paris-Saclay M AMS and Optimization January 18th, 018 Calculus of Variations Final Examination Duration : 3h ; all kind of paper documents (notes, books...) are authorized. The total score of
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationStrong and Weak Augmentability in Calculus of Variations
Strong and Weak Augmentability in Calculus of Variations JAVIER F ROSENBLUETH National Autonomous University of Mexico Applied Mathematics and Systems Research Institute Apartado Postal 20-126, Mexico
More informationNewtonian Mechanics. Chapter Classical space-time
Chapter 1 Newtonian Mechanics In these notes classical mechanics will be viewed as a mathematical model for the description of physical systems consisting of a certain (generally finite) number of particles
More informationLECTURE 10: THE ATIYAH-GUILLEMIN-STERNBERG CONVEXITY THEOREM
LECTURE 10: THE ATIYAH-GUILLEMIN-STERNBERG CONVEXITY THEOREM Contents 1. The Atiyah-Guillemin-Sternberg Convexity Theorem 1 2. Proof of the Atiyah-Guillemin-Sternberg Convexity theorem 3 3. Morse theory
More informationDivision of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45
Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.
More informationControl in finite and infinite dimension. Emmanuel Trélat, notes de cours de M2
Control in finite and infinite dimension Emmanuel Trélat, notes de cours de M2 December 19, 218 2 Contents I Control in finite dimension 5 1 Controllability 7 1.1 Controllability of linear systems...................
More informationEXISTENCE AND UNIQUENESS THEOREMS FOR ORDINARY DIFFERENTIAL EQUATIONS WITH LINEAR PROGRAMS EMBEDDED
EXISTENCE AND UNIQUENESS THEOREMS FOR ORDINARY DIFFERENTIAL EQUATIONS WITH LINEAR PROGRAMS EMBEDDED STUART M. HARWOOD AND PAUL I. BARTON Key words. linear programs, ordinary differential equations, embedded
More information10. Smooth Varieties. 82 Andreas Gathmann
82 Andreas Gathmann 10. Smooth Varieties Let a be a point on a variety X. In the last chapter we have introduced the tangent cone C a X as a way to study X locally around a (see Construction 9.20). It
More informationA TWO PARAMETERS AMBROSETTI PRODI PROBLEM*
PORTUGALIAE MATHEMATICA Vol. 53 Fasc. 3 1996 A TWO PARAMETERS AMBROSETTI PRODI PROBLEM* C. De Coster** and P. Habets 1 Introduction The study of the Ambrosetti Prodi problem has started with the paper
More informationFINITE-DIFFERENCE APPROXIMATIONS AND OPTIMAL CONTROL OF THE SWEEPING PROCESS. BORIS MORDUKHOVICH Wayne State University, USA
FINITE-DIFFERENCE APPROXIMATIONS AND OPTIMAL CONTROL OF THE SWEEPING PROCESS BORIS MORDUKHOVICH Wayne State University, USA International Workshop Optimization without Borders Tribute to Yurii Nesterov
More informationAdvanced Mechatronics Engineering
Advanced Mechatronics Engineering German University in Cairo 21 December, 2013 Outline Necessary conditions for optimal input Example Linear regulator problem Example Necessary conditions for optimal input
More informationChap. 1. Some Differential Geometric Tools
Chap. 1. Some Differential Geometric Tools 1. Manifold, Diffeomorphism 1.1. The Implicit Function Theorem ϕ : U R n R n p (0 p < n), of class C k (k 1) x 0 U such that ϕ(x 0 ) = 0 rank Dϕ(x) = n p x U
More informationSolution of Stochastic Optimal Control Problems and Financial Applications
Journal of Mathematical Extension Vol. 11, No. 4, (2017), 27-44 ISSN: 1735-8299 URL: http://www.ijmex.com Solution of Stochastic Optimal Control Problems and Financial Applications 2 Mat B. Kafash 1 Faculty
More informationDichotomy, the Closed Range Theorem and Optimal Control
Dichotomy, the Closed Range Theorem and Optimal Control Pavel Brunovský (joint work with Mária Holecyová) Comenius University Bratislava, Slovakia Praha 13. 5. 2016 Brunovsky Praha 13. 5. 2016 Closed Range
More informationDuality in Linear Programming
Duality in Linear Programming Gary D. Knott Civilized Software Inc. 1219 Heritage Park Circle Silver Spring MD 296 phone:31-962-3711 email:knott@civilized.com URL:www.civilized.com May 1, 213.1 Duality
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationSummary Notes on Maximization
Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject
More informationA Sufficient Condition for Local Controllability
A Sufficient Condition for Local Controllability of Nonlinear Systems Along Closed Orbits Kwanghee Nam and Aristotle Arapostathis Abstract We present a computable sufficient condition to determine local
More informationMetric regularity properties in bang-bang type linear-quadratic optimal control problems
SWM ORCOS Metric regularity properties in bang-bang type linear-quadratic optimal control problems Jakob Preininger, Teresa Scarinci and Vladimir M. Veliov Research Report 217-7 October 217 ISSN 2521-313X
More informationON COLISSIONS IN NONHOLONOMIC SYSTEMS
ON COLISSIONS IN NONHOLONOMIC SYSTEMS DMITRY TRESCHEV AND OLEG ZUBELEVICH DEPT. OF THEORETICAL MECHANICS, MECHANICS AND MATHEMATICS FACULTY, M. V. LOMONOSOV MOSCOW STATE UNIVERSITY RUSSIA, 119899, MOSCOW,
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationCopyrighted Material. L(t, x(t), u(t))dt + K(t f, x f ) (1.2) where L and K are given functions (running cost and terminal cost, respectively),
Chapter One Introduction 1.1 OPTIMAL CONTROL PROBLEM We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The
More informationIntroduction to control theory and applications
Introduction to control theory and applications Monique CHYBA, Gautier PICOT Department of Mathematics, University of Hawai'i at Manoa Graduate course on Optimal control University of Fukuoka 18/06/2015
More information