The Hybrid Maximum Principle is a consequence of Pontryagin Maximum Principle
A.V. Dmitruk, A.M. Kaganovich
Lomonosov Moscow State University, Russia, Moscow, Leninskie Gory, VMK MGU
dmitruk@member.ams.org, sm99@yandex.ru

Abstract. We give a simple proof of the Maximum Principle for smooth hybrid control systems by reducing the hybrid problem to an optimal control problem of Pontryagin type and then using the classical Pontryagin Maximum Principle.

1 Introduction

In a broad sense, hybrid control systems are control systems involving both continuous and discrete variables. In recent years, optimization problems for hybrid systems have attracted significant attention from specialists in control. One of the most important results in the study of such problems is the Hybrid Maximum Principle proved in [4] and [5]. That proof is rather difficult, since it follows the standard line of a full direct proof of the Maximum Principle (MP), based on the introduction of a special class of control variations (e.g., needle-like ones), the calculation of the increments of the cost and of all constraints, etc. As is well known, this procedure is heavy and cumbersome even in the case of the classical optimal control problem without discrete variables. However, as will be shown below, one does not need to perform this heavy procedure to obtain the hybrid MP if the Pontryagin MP for the standard optimal control problem with nonseparated terminal constraints is taken as known. After some transformation of the hybrid problem, the MP for it becomes an easy consequence of the classical Pontryagin MP.

The statement of the hybrid optimal control problem presumes the presence of a finite number of control systems, each of which is defined on its own space of variables (possibly of different dimensions). A trajectory moving under one of these systems can, at some moment of time, switch to any other system, and it can do so a finite number of times.
The hybridity of such systems just means the presence of both continuous and discontinuous dynamics of the state variables. One needs to choose a sequence of control systems, the durations of motion under each system, and a control function for each system that together minimize a given cost functional. The sequence of control systems under which a trajectory moves is not defined a priori. However, when investigating a given trajectory for optimality, we suppose this sequence to be defined and fixed, as in the other papers known to us. (The point is that variations of this sequence generate trajectories which are far from the given one, and hence cannot be compared with it by methods of analysis.)
2 Statement of the problem

Let $t_0 < t_1 < \dots < t_\nu$ be real numbers. Denote by $\Delta_k$ the time interval $[t_{k-1}, t_k]$. For any collection of continuous functions $x_k : \Delta_k \to \mathbb{R}^{n_k}$, $k = 1, \dots, \nu$, define a vector
$$p = \bigl(t_0,\ (t_1, x_1(t_0), x_1(t_1)),\ (t_2, x_2(t_1), x_2(t_2)),\ \dots,\ (t_\nu, x_\nu(t_{\nu-1}), x_\nu(t_\nu))\bigr)$$
of dimension $d = 1 + \nu + 2\sum_{k=1}^{\nu} n_k$. On the time interval $\Delta = [t_0, t_\nu]$ consider the optimal control problem

Problem A:
$$\dot x_k = f_k(t, x_k, u_k), \quad u_k \in U_k, \quad \text{for } t \in \Delta_k,\ k = 1, \dots, \nu,$$
$$\eta_j(p) = 0,\ j = 1, \dots, q, \qquad \varphi_i(p) \le 0,\ i = 1, \dots, m,$$
$$J = \varphi_0(p) \to \min,$$
where $x_k \in \mathbb{R}^{n_k}$, $u_k \in \mathbb{R}^{r_k}$; the functions $x_k(t)$ are absolutely continuous and $u_k(t)$ are measurable and essentially bounded on the corresponding $\Delta_k$. The time instants $t_0, t_1, \dots, t_\nu$ are not fixed; a priori they just satisfy the above equality and inequality constraints on the vector $p$.

Suppose the following assumptions hold:

A1) every function $f_k$ is defined and continuous on an open set $Q_k \subset \mathbb{R}^{1+n_k+r_k}$ and takes values in $\mathbb{R}^{n_k}$; moreover, it has partial derivatives $f_{kt}, f_{kx}$, which are continuous on $Q_k$ with respect to the triple of their arguments;

A2) the functions $\varphi_i(p)$ and $\eta_j(p)$ are defined on an open set $P \subset \mathbb{R}^d$ and are continuously differentiable there;

A3) the $U_k$ are arbitrary sets in $\mathbb{R}^{r_k}$.

Problem A satisfying assumptions A1–A3 will be called smooth.

Definition 1. The tuple $w = (t_0;\ t_k, x_k(t), u_k(t),\ k = 1, \dots, \nu)$ is called an admissible process in Problem A if it satisfies all the constraints and, for every $k = 1, \dots, \nu$, there exists a compact set $\Omega_k \subset Q_k$ such that $(t, x_k(t), u_k(t)) \in \Omega_k$ a.e. on $\Delta_k = [t_{k-1}, t_k]$.

The existence of the compact sets $\Omega_k$ means that an admissible process is not allowed to come arbitrarily close to the boundary of the domain $Q_k$ (otherwise, even uniformly small variations of the process could bring it out of the domain $Q_k$, and therefore one could not actually vary it).

Let $w$ be an admissible process in Problem A.
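For concreteness, the vector $p$ of terminal values can be assembled programmatically. The following is a minimal sketch (not from the paper) with $\nu = 2$ hypothetical systems of state dimensions $n_1 = 2$, $n_2 = 3$, checking the dimension count $d = 1 + \nu + 2\sum n_k$:

```python
def terminal_vector(t_breaks, x_endpoints):
    """Assemble the flat vector
    p = (t0, (t1, x1(t0), x1(t1)), ..., (t_nu, x_nu(t_{nu-1}), x_nu(t_nu))).
    x_endpoints[k] = (value of x_{k+1} at its left end, value at its right end)."""
    p = [t_breaks[0]]
    for k, (x_left, x_right) in enumerate(x_endpoints):
        p.append(t_breaks[k + 1])  # the switching/terminal time t_{k+1}
        p.extend(x_left)
        p.extend(x_right)
    return p

# nu = 2 systems with state dimensions n_1 = 2, n_2 = 3 (hypothetical data)
t_breaks = [0.0, 1.0, 2.5]
x_endpoints = [([0.0, 0.0], [1.0, -1.0]),             # x_1(t_0), x_1(t_1)
               ([1.0, -1.0, 5.0], [2.0, 0.5, 4.0])]   # x_2(t_1), x_2(t_2)
p = terminal_vector(t_breaks, x_endpoints)

nu, n = 2, [2, 3]
d = 1 + nu + 2 * sum(n)   # d = 1 + 2 + 10 = 13
print(len(p) == d)        # True
```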
It is convenient to introduce the functions $x(t), u(t)$ that take the values $x_k(t), u_k(t)$ respectively for $t \in (t_{k-1}, t_k)$, $k = 1, \dots, \nu$. At the internal points $t_k$, $k = 1, \dots, \nu-1$, we allow the state variable to have two values: $x(t_k - 0) = x_k(t_k)$ and $x(t_k + 0) = x_{k+1}(t_k)$. For the measurable
function $u(t)$, this ambiguity is inessential. Note that for time moments $t'$ and $t''$ belonging to different $\Delta_k$, the values of both $x(t)$ and $u(t)$ may be vectors of different dimensions. An admissible process $w$ can now be written as $w = (\theta, x(t), u(t))$, where $\theta = \{t_0, t_1, \dots, t_\nu\}$.

Definition 2. An admissible process $w^0 = (\theta^0, x^0(t), u^0(t))$ is called optimal (globally minimal) in Problem A if $J(w^0) \le J(w)$ for any admissible process $w$.

Definition 3. We say that a process $w^0 = (\theta^0, x^0(t), u^0(t))$ defined on a time interval $\Delta^0 = [t^0_0, t^0_\nu]$ gives a strong minimum in Problem A if there exists an $\varepsilon > 0$ such that $J(w^0) \le J(w)$ for any admissible process $w = (\theta, x(t), u(t))$ defined on a time interval $\Delta = [t_0, t_\nu]$ and satisfying the conditions
$$\|x^0_k - x_k\|_C < \varepsilon \ \text{ for } k = 1, \dots, \nu, \qquad |t^0_k - t_k| < \varepsilon \ \text{ for } k = 0, \dots, \nu.$$

Definition 4. We say that a process $w^0 = (\theta^0, x^0(t), u^0(t))$ gives a Pontryagin minimum in Problem A if for any constant $N$ there exists an $\varepsilon = \varepsilon(N) > 0$ such that $J(w^0) \le J(w)$ for any admissible process $w = (\theta, x(t), u(t))$ satisfying the conditions
$$\|x^0_k - x_k\|_C < \varepsilon, \quad \|u^0_k - u_k\|_1 < \varepsilon, \quad \|u^0_k - u_k\|_\infty \le N \ \text{ for } k = 1, \dots, \nu, \qquad |t^0_k - t_k| < \varepsilon \ \text{ for } k = 0, \dots, \nu.$$

The functions $x_k(t)$ and $u_k(t)$ in Definitions 3 and 4 are defined on time intervals $\Delta_k$ that differ from the intervals $\Delta^0_k$, so all the norms should be taken on the common time interval, i.e., on $\Delta_k \cap \Delta^0_k$.

Note the following obvious relations between the introduced types of optimality: if a process $w^0$ gives a global minimum, then it gives a strong minimum, and if $w^0$ gives a strong minimum, then it gives a Pontryagin minimum.

To obtain optimality conditions in Problem A, we will reduce it to the following canonical autonomous optimal control problem of Pontryagin type on a fixed time interval $[0, T]$.

Problem K:
$$\dot x = f(x, u), \quad u \in U, \quad (x, u) \in Q,$$
$$\eta_j(p) = 0,\ j = 1, \dots, q, \qquad \varphi_i(p) \le 0,\ i = 1, \dots, m, \qquad J = \varphi_0(p) \to \min.$$
Here $p = (x(0), x(T)) \in \mathbb{R}^{2n}$ is the vector of terminal values of the trajectory $x(t)$, and $Q$ is an open set in $\mathbb{R}^{n+r}$. The condition $(x, u) \in Q$ should be regarded not as a constraint, but as the definition of an open domain in the space $(x, u)$ where the problem
is considered (see, e.g., [7]). Note that Problem K is a special case of Problem A with $\nu = 1$, i.e., when intermediate points are absent. Suppose that Problem K satisfies assumptions similar to A1–A3. Then the following theorem holds (see, e.g., [1], [2], [7]).

Theorem 1 (Pontryagin Maximum Principle for Problem K). Let a process $w^0 = (x^0(t), u^0(t))$ give a Pontryagin minimum in Problem K. Then there exists a collection $\lambda = (\alpha, \beta, c, \psi(\cdot))$, where $\alpha = (\alpha_0, \alpha_1, \dots, \alpha_m) \ge 0$, $\beta = (\beta_1, \dots, \beta_q) \in \mathbb{R}^q$, $c \in \mathbb{R}^1$, and $\psi(\cdot)$ is an $n$-dimensional Lipschitz function on $[0, T]$, which generates the Pontryagin function $H(\psi, x, u) = \langle \psi, f(x, u) \rangle$ and the terminal Lagrange function $l(p) = \sum_{i=0}^{m} \alpha_i \varphi_i(p) + \sum_{j=1}^{q} \beta_j \eta_j(p)$, and satisfies the following conditions:

a) nontriviality condition: $(\alpha, \beta) \ne (0, 0)$;

b) complementary slackness conditions: $\alpha_i \varphi_i(p^0) = 0$, $i = 1, \dots, m$;

c) adjoint equation: $-\dot\psi(t) = H^0_x = \psi(t)\, f_x(x^0(t), u^0(t))$ a.e. on $[0, T]$;

d) transversality conditions: $\psi(0) = l_{x(0)}(p^0)$, $\psi(T) = -l_{x(T)}(p^0)$;

e) constancy of the function $H$: for a.e. $t \in [0, T]$, $H(\psi(t), x^0(t), u^0(t)) = c$;

f) maximality condition: for all $t \in [0, T]$
$$\max_{u \in U^0(t)} H(\psi(t), x^0(t), u) = c, \qquad \text{where } U^0(t) = \{u \in U \mid (x^0(t), u) \in Q\}.$$

3 Obtaining the Hybrid Maximum Principle

We will pass from Problem A to a problem of canonical type K and establish a correspondence between the admissible and optimal processes of these problems. The idea of this passage is quite natural: one should reduce all the state and control variables to a common fixed time interval, for example $[0, 1]$.

Let $(\theta, x(t), u(t))$ be an arbitrary admissible process in Problem A. Introduce a new time $\tau \in [0, 1]$ and define functions $\rho_k : [0, 1] \to \Delta_k$, $k = 1, \dots, \nu$, from the equations
$$\frac{d\rho_k}{d\tau} = z_k(\tau), \qquad \rho_k(0) = t_{k-1},$$
where the $z_k(\tau) > 0$ are arbitrary measurable essentially bounded functions on $[0, 1]$ such that $\rho_k(1) = t_k$, i.e. $\int_0^1 z_k(\tau)\, d\tau = |\Delta_k|$. The functions $\rho_k$ play the role of
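The time change just introduced is easy to visualize numerically. A minimal sketch (not from the paper), using the constant choice $z_k(\tau) \equiv |\Delta_k|$ that will be made below for the mapping $F$:

```python
import numpy as np

def reparametrize(t_breaks, n_tau=101):
    """Map each interval [t_{k-1}, t_k] onto the common interval tau in [0, 1]
    via rho_k(tau) = t_{k-1} + z_k * tau with constant speed z_k = t_k - t_{k-1},
    so that rho_k(0) = t_{k-1} and rho_k(1) = t_k."""
    tau = np.linspace(0.0, 1.0, n_tau)
    rhos, zs = [], []
    for t_prev, t_next in zip(t_breaks[:-1], t_breaks[1:]):
        z_k = t_next - t_prev              # z_k(tau) = |Delta_k|, constant
        rhos.append(t_prev + z_k * tau)    # linear time change on stage k
        zs.append(z_k)
    return tau, rhos, zs

# hypothetical switching times t_0 < t_1 < t_2 for a two-stage process
tau, rhos, zs = reparametrize([0.0, 1.5, 4.0])
print(rhos[0][0], rhos[0][-1])  # rho_1 runs over Delta_1 = [0.0, 1.5]
print(rhos[1][0], rhos[1][-1])  # rho_2 runs over Delta_2 = [1.5, 4.0]
```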
time $t$ on the intervals $\Delta_k$. Define also the functions $y_k(\tau) = x_k(\rho_k(\tau))$ and $v_k(\tau) = u_k(\rho_k(\tau))$, $k = 1, \dots, \nu$, $\tau \in [0, 1]$. They obviously satisfy the relations
$$\frac{dy_k}{d\tau} = z_k\, f_k(\rho_k, y_k, v_k), \quad v_k \in U_k, \quad \frac{d\rho_k}{d\tau} = z_k, \quad k = 1, \dots, \nu, \tag{1}$$
$$(\rho_k, y_k, v_k) \in Q_k, \quad z_k > 0, \tag{2}$$
$$\rho_2(0) - \rho_1(1) = 0, \quad \rho_3(0) - \rho_2(1) = 0, \quad \dots, \quad \rho_\nu(0) - \rho_{\nu-1}(1) = 0, \tag{3}$$
$$\eta_j(\hat p) = 0,\ j = 1, \dots, q, \qquad \varphi_i(\hat p) \le 0,\ i = 1, \dots, m, \tag{4}$$
where, for simplicity, we use the notation
$$\hat p = \hat p(\rho, y) = \bigl(\rho_1(0),\ (\rho_1(1), y_1(0), y_1(1)),\ (\rho_2(1), y_2(0), y_2(1)),\ \dots,\ (\rho_\nu(1), y_\nu(0), y_\nu(1))\bigr).$$
Obviously, this vector coincides with the initial vector $p(t, x)$ and is a part of the full vector $p(\rho, y)$ of terminal values. For brevity, define the vectors $\rho = (\rho_1, \dots, \rho_\nu)$, $y = (y_1, \dots, y_\nu)$, $v = (v_1, \dots, v_\nu)$, and $z = (z_1, \dots, z_\nu)$.

On the set of admissible processes $\tilde w = (\rho(\tau), y(\tau), v(\tau), z(\tau))$ satisfying constraints (1)–(4), we will minimize the functional
$$J(\tilde w) = \varphi_0(\hat p) \to \min. \tag{5}$$
The obtained optimal control problem will be called Problem Ã. Here the state variables are $\rho_k$ and $y_k$, the controls are $v_k$ and $z_k$, $k = 1, \dots, \nu$, and the time interval $[0, 1]$ is fixed. The open set $Q$ consists of all vectors $(\rho_k, y_k, v_k, z_k)$ satisfying (2). The open set of terminal vectors consists of all vectors $p$ for which the truncated vector $\hat p \in P$. It is easy to see that Problem Ã is a problem of type K.

The following two correspondences can be established between the admissible processes of Problems A and Ã. As was shown above, any admissible process $w = (\theta, x(t), u(t))$ of Problem A can be transformed into an admissible process $\tilde w = (\rho(\tau), y(\tau), v(\tau), z(\tau))$ of Problem Ã. This transformation is not uniquely defined, since it depends on the choice of the functions $z_k(\tau)$. To make it unique, let us set, for example, $z_k(\tau) \equiv |\Delta_k|$. Denote the obtained mapping by $F$. Construct also the mapping $G$ that transforms a process $\tilde w$ into $w$.
To do this, we first define the time moments $t_0 = \rho_1(0)$, $t_1 = \rho_2(0)$, ..., $t_{\nu-1} = \rho_\nu(0)$, $t_\nu = \rho_\nu(1)$ (and thus the vector $\theta$), and also the time intervals $\Delta_k = [t_{k-1}, t_k]$, $k = 1, \dots, \nu$.
Now introduce the functions $x_k(t) = y_k(\rho_k^{-1}(t))$ and $u_k(t) = v_k(\rho_k^{-1}(t))$ defined on the corresponding intervals $\Delta_k$, and the functions $x(t) = x_k(t)$, $u(t) = u_k(t)$ for $t \in \Delta_k$, defined on the whole $\Delta$. One can easily show that the process $w = G(\tilde w) = (\theta, x(t), u(t))$ is admissible in Problem A.

An important property of both these mappings is that they preserve the value of the cost functional. Note that the constructed mappings $F$ and $G$ are not inverse to each other ($GF$ is the identity, but $FG$ is not). Nevertheless, the very fact that there exist two mappings that put into correspondence to any admissible process of one problem an admissible process of the other problem with the same value of the cost functional readily implies the next statement.

Theorem 2. If a process $w^0$ is optimal (i.e., globally minimal) in Problem A, then the process $\tilde w^0 = F(w^0)$ is optimal in Problem Ã; and vice versa, if a process $\tilde w^0$ is optimal in Problem Ã, then the process $w^0 = G(\tilde w^0)$ is optimal in Problem A.

Indeed, let us prove the first implication. Suppose a process $w^0$ is optimal in Problem A. If the process $\tilde w^0 = F(w^0)$ is not optimal in Problem Ã, then there exists another admissible process $\tilde w'$ in this problem such that $J(\tilde w') < J(\tilde w^0)$. Then the corresponding process $w' = G(\tilde w')$ is admissible in Problem A and satisfies the relations $J(w') = J(\tilde w') < J(\tilde w^0) = J(w^0)$, which contradicts the optimality of the process $w^0$. The converse implication is proved in the same way.

This theorem asserts the invariance of a rather rough property (global minimality); it does not take into account the specific features of the problems and of the mappings $F$, $G$. For our Problems A, Ã and the above mappings $F$, $G$, the following refined statement holds [8].

Theorem 3.
If a process $w^0$ gives a strong (Pontryagin) minimum in Problem A, then the process $\tilde w^0 = F(w^0)$ gives a strong (respectively, Pontryagin) minimum in Problem Ã; and vice versa, if a process $\tilde w^0$ gives a strong (Pontryagin) minimum in Problem Ã, then the process $w^0 = G(\tilde w^0)$ gives a strong (respectively, Pontryagin) minimum in Problem A.

Thus, the study of optimality (in any of the three senses above) of a process $w^0$ in Problem A reduces to the study of optimality of the corresponding process $\tilde w^0$ in Problem Ã.

Now let a process $w^0 = (\theta^0, x^0(t), u^0(t))$ give a Pontryagin minimum in Problem A. Then by Theorem 3 the corresponding process $\tilde w^0 = (\rho^0(\tau), y^0(\tau), v^0(\tau), z^0(\tau))$ gives a Pontryagin minimum in Problem Ã, and therefore, according to Theorem 1, it satisfies the Pontryagin MP: there exists a collection $\lambda = (\alpha, \beta, \delta, c, \psi_y(\cdot), \psi_\rho(\cdot))$, where $\alpha = (\alpha_0, \alpha_1, \dots, \alpha_m) \ge 0$, $\beta = (\beta_1, \dots, \beta_q) \in \mathbb{R}^q$, $\delta = (\delta_1, \dots, \delta_{\nu-1}) \in \mathbb{R}^{\nu-1}$, $c \in \mathbb{R}^1$, $\psi_y = (\psi_{y_1}, \dots, \psi_{y_\nu})$, $\psi_\rho = (\psi_{\rho_1}, \dots, \psi_{\rho_\nu})$, and the $\psi_{y_k}$, $\psi_{\rho_k}$ are Lipschitz functions on $[0, 1]$, which generates the Pontryagin function
$$H(\psi_\rho, \psi_y, \rho, y, v, z) = \sum_{k=1}^{\nu} z_k \bigl( \psi_{y_k} f_k(\rho_k, y_k, v_k) + \psi_{\rho_k} \bigr) = \sum_{k=1}^{\nu} z_k\, \Pi_k(\psi_{\rho_k}, \psi_{y_k}, \rho_k, y_k, v_k),$$
where $\Pi_k(\psi_{\rho_k}, \psi_{y_k}, \rho_k, y_k, v_k) = \psi_{y_k} f_k(\rho_k, y_k, v_k) + \psi_{\rho_k}$,
the terminal Lagrange function
$$l(\tilde p) = l(\hat p) + \sum_{k=1}^{\nu-1} \delta_k \bigl( \rho_{k+1}(0) - \rho_k(1) \bigr), \qquad \text{where } l(\hat p) = \sum_{i=0}^{m} \alpha_i \varphi_i(\hat p) + \sum_{j=1}^{q} \beta_j \eta_j(\hat p),$$
and satisfies the nontriviality condition $(\alpha, \beta, \delta) \ne (0, 0, 0)$, the complementary slackness conditions $\alpha_i \varphi_i(\hat p^0) = 0$, $i = 1, \dots, m$, the adjoint equations
$$-\dot\psi_{y_k}(\tau) = H^0_{y_k} = z^0_k(\tau)\, \psi_{y_k}(\tau)\, f_{kx}(\rho^0_k(\tau), y^0_k(\tau), v^0_k(\tau)),$$
$$-\dot\psi_{\rho_k}(\tau) = H^0_{\rho_k} = z^0_k(\tau)\, \psi_{y_k}(\tau)\, f_{kt}(\rho^0_k(\tau), y^0_k(\tau), v^0_k(\tau)), \qquad k = 1, \dots, \nu; \tag{6}$$
the transversality conditions
$$\psi_{y_k}(0) = l_{y_k(0)}, \qquad \psi_{y_k}(1) = -l_{y_k(1)}, \qquad k = 1, \dots, \nu;$$
$$\psi_{\rho_1}(0) = l_{\rho_1(0)}, \qquad \psi_{\rho_1}(1) = -l_{\rho_1(1)} + \delta_1,$$
$$\psi_{\rho_2}(0) = \delta_1, \qquad \psi_{\rho_2}(1) = -l_{\rho_2(1)} + \delta_2, \qquad \dots,$$
$$\psi_{\rho_{\nu-1}}(0) = \delta_{\nu-2}, \qquad \psi_{\rho_{\nu-1}}(1) = -l_{\rho_{\nu-1}(1)} + \delta_{\nu-1},$$
$$\psi_{\rho_\nu}(0) = \delta_{\nu-1}, \qquad \psi_{\rho_\nu}(1) = -l_{\rho_\nu(1)}$$
(all the derivatives of the function $l(\hat p)$ are taken at the point $\hat p^0$); the constancy of the function $H$: for a.e. $\tau \in [0, 1]$
$$H(\psi_\rho(\tau), \psi_y(\tau), \rho^0(\tau), y^0(\tau), v^0(\tau), z^0(\tau)) = c;$$
and the maximality condition: for all $\tau \in [0, 1]$
$$\max_{(v, z) \in \tilde U^0(\tau)} H(\psi_\rho(\tau), \psi_y(\tau), \rho^0(\tau), y^0(\tau), v, z) = c,$$
where $\tilde U^0(\tau) = \tilde U^0_1(\tau) \times \dots \times \tilde U^0_\nu(\tau)$, $\tilde U^0_k(\tau) = V^0_k(\tau) \times Z_k$, $V^0_k(\tau) = \{v_k \in U_k \mid (\rho^0_k(\tau), y^0_k(\tau), v_k) \in Q_k\}$, and $Z_k = \{z \in \mathbb{R} \mid z > 0\}$.

Let us analyze these conditions.

1) From the constancy of $H$ and the maximality condition we conclude, fixing the control $v^0_k(\tau)$, that for a.e. $\tau$ the function $H$ attains its maximum over $z_k \in Z_k$ at the point $z^0_k(\tau) > 0$. Therefore, since $H$ is linear in each $z_k$ and the set $Z_k$ is open, we have
$$H_{z_k}(\psi_\rho(\tau), \psi_y(\tau), \rho^0(\tau), y^0(\tau), v^0(\tau), z^0(\tau)) = \Pi^0_k(\tau) = 0 \tag{7}$$
for a.e. $\tau \in [0, 1]$, from which we get $c = 0$; then, taking into account that the control pairs $(z_k, v_k)$ enter $H$ separately, we obtain from the maximality condition that for all $\tau \in [0, 1]$
$$\max_{v_k \in V^0_k(\tau)} \Pi_k(\psi_{\rho_k}, \psi_{y_k}, \rho^0_k(\tau), y^0_k(\tau), v_k) = 0. \tag{8}$$
2) Problem Ã, as a result of the transformation of Problem A, involves the additional equality constraints (3), and therefore the MP for Problem Ã involves the corresponding Lagrange multipliers $\delta_k$. Let us show that these additional multipliers can be excluded from the formulation of the MP without loss of information. To this end, note that they enter only the nontriviality condition and the transversality conditions for $\psi_\rho$. The nontriviality condition is not affected by these multipliers, as the following simple lemma shows.

Lemma 1. $(\alpha, \beta, \delta) \ne (0, 0, 0)$ if and only if $(\alpha, \beta) \ne (0, 0)$.

Indeed, if $\alpha = \beta = 0$, then $l(\hat p) \equiv 0$; hence the adjoint equations and transversality conditions imply that all $\psi_{y_k} \equiv 0$ and all $\psi_{\rho_k}$ are constants. Further, since $\psi_{\rho_1}(0) = 0$, we get $\psi_{\rho_1}(1) = 0$, whence $\delta_1 = 0$; therefore $\psi_{\rho_2}(0) = 0$, and so on; hence all $\delta_k = 0$.

Moreover, all $\delta_k$ can also be excluded from the transversality conditions for $\psi_\rho$ by rewriting those conditions in the form
$$\psi_{\rho_1}(0) = l_{\rho_1(0)}(\hat p^0), \qquad \psi_{\rho_{k+1}}(0) - \psi_{\rho_k}(1) = l_{\rho_k(1)}(\hat p^0), \quad k = 1, \dots, \nu-1, \qquad \psi_{\rho_\nu}(1) = -l_{\rho_\nu(1)}(\hat p^0). \tag{9}$$
It can easily be shown that the obtained conditions (9) are equivalent to the initial ones. Thus, all the multipliers $\delta_k$ can be totally excluded from the MP for Problem Ã.

3) Define the functions $\pi^0_k(t) = (\rho^0_k)^{-1}(t)$, $x^0_k(t) = y^0_k(\pi^0_k(t))$, $u^0_k(t) = v^0_k(\pi^0_k(t))$, $\psi_t(t) = \psi_{\rho_k}(\pi^0_k(t))$, $\psi_{x_k}(t) = \psi_{y_k}(\pi^0_k(t))$ for $t \in \Delta^0_k$, $k = 1, \dots, \nu$. Since all $\psi_{y_k}(\tau)$ are Lipschitz continuous on $[0, 1]$, all $\psi_{x_k}(t)$ are Lipschitz continuous on their $\Delta^0_k$. Since all $\psi_{\rho_k}(\tau)$ are Lipschitz continuous on $[0, 1]$, the function $\psi_t(t)$ is Lipschitz continuous on every $\Delta^0_k$ (such functions will be called piecewise Lipschitz), and at the points $t^0_k$ it may have jumps determined by conditions (9). In view of the relations $dt = z^0_k\, d\tau$ on $\Delta^0_k$, the adjoint equations (6) can be rewritten in the form
$$-\frac{d\psi_t}{dt} = \psi_{x_k}\, f_{kt}(t, x_k, u_k), \qquad -\frac{d\psi_{x_k}}{dt} = \psi_{x_k}\, f_{kx}(t, x_k, u_k), \qquad t \in \Delta^0_k.$$
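The change of variables behind this rewriting can be checked directly; a short sketch of the chain-rule computation (implicit in the argument above, not spelled out in the original):
$$\frac{d\psi_{x_k}}{dt} = \frac{d}{dt}\,\psi_{y_k}\bigl(\pi^0_k(t)\bigr) = \dot\psi_{y_k}\bigl(\pi^0_k(t)\bigr)\,\frac{d\pi^0_k}{dt} = \bigl(-z^0_k\,\psi_{y_k} f_{kx}\bigr)\cdot\frac{1}{z^0_k} = -\psi_{x_k}(t)\, f_{kx}\bigl(t, x^0_k(t), u^0_k(t)\bigr),$$
since $\pi^0_k$ is the inverse of $\rho^0_k$ and $d\rho^0_k/d\tau = z^0_k > 0$ gives $d\pi^0_k/dt = 1/z^0_k$. The computation for $\psi_t$ is identical, with $f_{kt}$ in place of $f_{kx}$.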
On every $\Delta^0_k$ we construct the function $H_k(\psi_t, \psi_{x_k}, t, x_k, u_k) = \psi_{x_k} f_k(t, x_k, u_k) + \psi_t$. Along the optimal trajectory $x^0_k(t), u^0_k(t)$ with the corresponding $\psi_{x_k}(t), \psi_t(t)$, it obviously coincides with $\Pi_k(\psi_{\rho_k}(\tau), \psi_{y_k}(\tau), \rho^0_k(\tau), y^0_k(\tau), v^0_k(\tau))$ for $\tau = \pi^0_k(t)$; therefore we conclude from (7) that
$$H_k(\psi_t(t), \psi_{x_k}(t), t, x^0_k(t), u^0_k(t)) = 0 \quad \text{a.e. on } \Delta^0_k.$$
In view of (8), this implies the maximality of $H_k$ over all $u_k \in U_k$ such that $(t, x^0_k(t), u_k) \in Q_k$. Since $\hat p(\rho, y) = p(t, x)$, the complementary slackness conditions are unchanged. Thus, the analysis of the Pontryagin MP for Problem Ã yields the following result.
Theorem 4 (The Maximum Principle for hybrid problems). Let a process $w^0 = (\theta^0, x^0(t), u^0(t))$ give a Pontryagin minimum in Problem A. Then there exists a collection $\lambda = (\alpha, \beta, \psi_x(\cdot), \psi_t(\cdot))$, where $\alpha = (\alpha_0, \alpha_1, \dots, \alpha_m) \ge 0$, $\beta = (\beta_1, \dots, \beta_q) \in \mathbb{R}^q$, $\psi_x = (\psi_{x_1}, \dots, \psi_{x_\nu})$, the $\psi_{x_k} : \Delta^0_k \to \mathbb{R}^{n_k}$ are Lipschitz functions, and $\psi_t$ is a piecewise Lipschitz function on $\Delta^0$, which generates the Pontryagin functions
$$H_k(\psi_t, \psi_{x_k}, t, x_k, u_k) = \langle \psi_{x_k}, f_k(t, x_k, u_k) \rangle + \psi_t, \qquad t \in \Delta^0_k,$$
and the terminal Lagrange function
$$l(p) = \sum_{i=0}^{m} \alpha_i \varphi_i(p) + \sum_{j=1}^{q} \beta_j \eta_j(p),$$
and satisfies the following conditions:

a) nontriviality condition: $(\alpha, \beta) \ne (0, 0)$;

b) complementary slackness conditions: $\alpha_i \varphi_i(p^0) = 0$, $i = 1, \dots, m$;

c) adjoint equations: a.e. on $\Delta^0_k$,
$$-\dot\psi_{x_k}(t) = H^0_{k\,x_k} = \psi_{x_k}(t)\, f_{kx}(t, x^0_k(t), u^0_k(t)), \qquad -\dot\psi_t(t) = H^0_{k\,t} = \psi_{x_k}(t)\, f_{kt}(t, x^0_k(t), u^0_k(t));$$

d) transversality conditions for $\psi_x$ and $\psi_t$:

d1: $\psi_{x_k}(t_{k-1}) = l_{x_k(t_{k-1})}(p^0)$, $\psi_{x_k}(t_k) = -l_{x_k(t_k)}(p^0)$, $k = 1, \dots, \nu$;

d2: $\psi_t(t_0) = l_{t_0}(p^0)$, $\psi_t(t_\nu) = -l_{t_\nu}(p^0)$, and the jumps $[\Delta\psi_t](t_k) = \psi_t(t_k + 0) - \psi_t(t_k - 0) = l_{t_k}(p^0)$, $k = 1, \dots, \nu-1$;

e) for a.e. $t \in \Delta^0_k$, $k = 1, \dots, \nu$: $H_k(\psi_t(t), \psi_{x_k}(t), t, x^0_k(t), u^0_k(t)) = 0$;

f) maximality condition: for all $t \in \Delta^0_k$, $k = 1, \dots, \nu$,
$$\max_{u_k \in U^0_k(t)} H_k(\psi_t(t), \psi_{x_k}(t), t, x^0_k(t), u_k) = 0, \qquad \text{where } U^0_k(t) = \{u_k \in U_k \mid (t, x^0_k(t), u_k) \in Q_k\}.$$

Remark 1. For smooth control systems, Theorem 4 coincides with the Hybrid MP obtained in [4], [5]. However, in those papers it was obtained as a new independent result, whereas here it easily follows from the Pontryagin MP after a transformation of the hybrid problem to a standard optimal control problem. Note that a similar transformation was used in the old paper [3] for a hybrid problem similar to our Problem A. But there this trick was not clearly identified as such among rather cumbersome technical constructions overloaded by the specific features of the problem under study, and therefore it actually remained unnoticed.
A detailed justification of this trick and of its modifications for obtaining the MP in various optimal control problems is given in [8].
4 Hybrid systems with a quasivariable control set

Along with hybrid control problems of the standard type A above, the literature also considers hybrid systems in which the control set $U_k$ on each time interval $\Delta_k$ depends, in a special way, on the values of the state variable at the switching times $t_k$; namely, the control set on $\Delta_k$ is $\sigma_k(p)\, U_k$, where the functions $\sigma_k(p)$ are differentiable on $P$. We will call these problems problems with a quasivariable control set or, for brevity, problems of type B. Such a problem was considered, for example, in [5]. In that paper, in order to obtain necessary optimality conditions, the authors introduce a notion of generalized needle-like variations, by means of which they prove, under rather restrictive assumptions on the functions $f_k$ (twice differentiability), the so-called Hybrid Necessary Principle, an equation with respect to certain variations that the optimal process must satisfy. The MP for this problem was not obtained in that paper.

However, the MP can easily be obtained if one notes that a problem of type B can also be reduced to a problem of type A. Indeed, an admissible control $u_k(t)$ on the interval $\Delta_k$ has the form $\sigma_k(p)\, v_k(t)$, where $v_k(t) \in U_k$. Considering $v_k \in U_k$ as a new control on $\Delta_k$ and introducing on every $\Delta_k$ a new phase variable $s_k$ satisfying the differential equation $\dot s_k = 0$ and the initial condition $s_k(t_{k-1}) = \sigma_k(p)$, we can rewrite a problem of type B as a problem of type A. In order to apply the hybrid MP (Theorem 4) to the problem so obtained, we impose an auxiliary assumption:

B1) every function $f_k(t, x, u)$ has a partial derivative in $u$ which is continuous on the corresponding $Q_k$.

In this case, the application of the hybrid MP yields the following theorem.

Theorem 5 (Maximum Principle for hybrid problems with a quasivariable control set). Let a process $w^0 = (\theta^0, x^0(t), s^0(t), v^0(t))$ give a Pontryagin minimum in Problem B.
Then there exists a collection $\lambda = (\alpha, \beta, \gamma, \psi_x(\cdot), \psi_s(\cdot), \psi_t(\cdot))$, where $\alpha = (\alpha_0, \alpha_1, \dots, \alpha_m) \ge 0$, $\beta = (\beta_1, \dots, \beta_q) \in \mathbb{R}^q$, $\gamma = (\gamma_1, \dots, \gamma_\nu) \in \mathbb{R}^\nu$, $\psi_x = (\psi_{x_1}, \dots, \psi_{x_\nu})$, $\psi_s = (\psi_{s_1}, \dots, \psi_{s_\nu})$, the $\psi_{x_k} : \Delta^0_k \to \mathbb{R}^{n_k}$ and $\psi_{s_k} : \Delta^0_k \to \mathbb{R}^1$ are Lipschitz functions, and $\psi_t$ is a piecewise Lipschitz function on $\Delta^0$, which generates the Pontryagin functions
$$H_k(\psi_t, \psi_{x_k}, \psi_{s_k}, t, x_k, s_k, v_k) = \langle \psi_{x_k}, f_k(t, x_k, s_k v_k) \rangle + \psi_t, \qquad t \in \Delta^0_k,$$
and the terminal Lagrange function
$$l^B(p, p_s) = l^A(p) + \sum_{r=1}^{\nu} \gamma_r \bigl( s_r(t_{r-1}) - \sigma_r(p) \bigr), \qquad \text{where } l^A(p) = \sum_{i=0}^{m} \alpha_i \varphi_i(p) + \sum_{j=1}^{q} \beta_j \eta_j(p)$$
(here $p$ is the full vector of terminal values in Problem A, and $p_s = \bigl((s_1(t_0), s_1(t_1)),\ (s_2(t_1), s_2(t_2)),\ \dots,\ (s_\nu(t_{\nu-1}), s_\nu(t_\nu))\bigr)$), and satisfies the following conditions:
a) nontriviality condition: $(\alpha, \beta, \gamma) \ne (0, 0, 0)$;

b) complementary slackness conditions: $\alpha_i \varphi_i(p^0) = 0$, $i = 1, \dots, m$;

c) adjoint equations: for $k = 1, \dots, \nu$, a.e. on $\Delta^0_k$,
$$-\dot\psi_{x_k}(t) = H^0_{k\,x_k} = \psi_{x_k}(t)\, f_{kx}(t, x^0_k(t), s^0_k(t) v^0_k(t)),$$
$$-\dot\psi_t(t) = H^0_{k\,t} = \psi_{x_k}(t)\, f_{kt}(t, x^0_k(t), s^0_k(t) v^0_k(t)),$$
$$-\dot\psi_{s_k}(t) = H^0_{k\,s_k} = \psi_{x_k}(t)\, f_{ku}(t, x^0_k(t), s^0_k(t) v^0_k(t))\, v^0_k(t);$$

d) transversality conditions:

d1: $\psi_{x_k}(t_{k-1}) = l^A_{x_k(t_{k-1})}(p^0) - \sum_{r=1}^{\nu} \gamma_r\, \sigma_{r\,x_k(t_{k-1})}(p^0)$, $\quad \psi_{x_k}(t_k) = -l^A_{x_k(t_k)}(p^0) + \sum_{r=1}^{\nu} \gamma_r\, \sigma_{r\,x_k(t_k)}(p^0)$, $\ k = 1, \dots, \nu$;

d2: $\psi_t(t_0) = l^A_{t_0}(p^0) - \sum_{r=1}^{\nu} \gamma_r\, \sigma_{r\,t_0}(p^0)$, $\quad \psi_t(t_\nu) = -l^A_{t_\nu}(p^0) + \sum_{r=1}^{\nu} \gamma_r\, \sigma_{r\,t_\nu}(p^0)$, and for $k = 1, \dots, \nu-1$ the jumps $[\Delta\psi_t](t_k) = l^A_{t_k}(p^0) - \sum_{r=1}^{\nu} \gamma_r\, \sigma_{r\,t_k}(p^0)$;

d3: $\psi_{s_k}(t_{k-1}) = \gamma_k$, $\psi_{s_k}(t_k) = 0$, $k = 1, \dots, \nu$;

e) for all $k = 1, \dots, \nu$ and almost all $t \in \Delta^0_k$,
$$H_k(\psi_t(t), \psi_{x_k}(t), \psi_{s_k}(t), t, x^0_k(t), s^0_k(t), v^0_k(t)) = 0;$$

f) maximality condition: for all $k = 1, \dots, \nu$ and all $t \in \Delta^0_k$,
$$\max_{v_k \in U^0_k(t)} H_k(\psi_t(t), \psi_{x_k}(t), \psi_{s_k}(t), t, x^0_k(t), s^0_k(t), v_k) = 0, \qquad \text{where } U^0_k(t) = \{v_k \in U_k \mid (t, x^0_k(t), s^0_k(t) v_k) \in Q_k\}.$$

Remark 2. Theorem 5 involves the adjoint variables $\psi_{s_k}$ related to the specific method of reduction of Problem B to a problem of type A. Let us show that these variables can be harmlessly removed from the MP for Problem B. Indeed, since the functions $H_k$ do not depend on $\psi_{s_k}$, conditions e) and f) do not actually involve $\psi_{s_k}$. The adjoint equations and the transversality conditions d3 for $\psi_{s_k}$ can be replaced by the equivalent conditions
$$\int_{t^0_{k-1}}^{t^0_k} \psi_{x_k}(t)\, f_{ku}(t, x^0_k(t), s^0_k(t) v^0_k(t))\, v^0_k(t)\, dt = \gamma_k, \qquad k = 1, \dots, \nu. \tag{10}$$
The remaining conditions of the MP do not involve $\psi_{s_k}$ at all. Thus, all $\psi_{s_k}$ can be excluded from the formulation of the MP by supplementing it with conditions (10).

Remark 3. The statement of Problem B obviously generalizes that of Problem A (one should put all $\sigma_k(p) \equiv 1$).
It is easy to show that the MP for Problem B, applied to Problem A, gives exactly the MP for Problem A.
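The reduction used above — replacing the control $u = \sigma(p)v$ by a new control $v$ together with a frozen state $s$ with $\dot s = 0$ — can be sanity-checked numerically. A minimal sketch (not from the paper), with a hypothetical scalar dynamics, a hypothetical constant gain playing the role of $\sigma(p)$, and simple forward-Euler integration:

```python
def euler(f, x0, t0, t1, n=1000):
    """Forward-Euler integration of x' = f(t, x) on [t0, t1]."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += h * f(t, x)
        t += h
    return x

# hypothetical data: dynamics f(t, x, u) = u - 0.1*x, gain sigma = 2.0,
# and a reference control v(t) with values in U = [0, 1]
sigma = 2.0
v = lambda t: 0.5 + 0.4 * (t > 1.0)

# type-B form: the control u(t) = sigma * v(t) applied directly
xB = euler(lambda t, x: sigma * v(t) - 0.1 * x, 0.0, 0.0, 2.0)

# type-A form: frozen state s with s' = 0, s(t0) = sigma, and control v
s = sigma                                   # s stays constant since s' = 0
xA = euler(lambda t, x: s * v(t) - 0.1 * x, 0.0, 0.0, 2.0)

print(abs(xB - xA) < 1e-12)  # both forms generate the same trajectory
```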
5 Example: control of a car with two gears [6]

A car moves under the law
$$\dot x = y, \qquad \dot y = u\, g_1(y), \qquad u \in U$$
on the time interval $\Delta_1 = [0, t_1]$, and under the law
$$\dot x = y, \qquad \dot y = u\, g_2(y), \qquad u \in \sigma(y(t_1))\, U$$
on the time interval $\Delta_2 = [t_1, T]$. The initial and final time moments $t_0 = 0$ and $t_2 = T$ are fixed, the moment $t_1$ is not fixed, the set $U = [0, 1]$, and the functions $g_1, g_2, \sigma$ are positive and differentiable on $\mathbb{R}^1$. The car starts from the point $(x_0, y_0) = (0, 0)$, and the state variables $x$ and $y$ are assumed to be continuous on the whole interval $\Delta = [0, T]$. It is required to maximize $x(T)$.

Rewrite this problem as a problem of type A. To this aim, introduce on the interval $\Delta_2$ a new control $v \in U$ and a new state variable $s$ satisfying the equation $\dot s = 0$ and the initial condition $s(t_1) = y(t_1)$. The control $u(t)$ then takes the form $u(t) = \sigma(s(t))\, v(t)$. Thus, the control system on $\Delta_2$ is
$$\dot x = y, \qquad \dot y = v\, \sigma(s)\, g_2(y), \qquad \dot s = 0, \qquad v \in U,$$
and the initial problem is reduced to a problem of type A. On the interval $\Delta_1$ we have $\sigma(s) \equiv 1$, $u(t) = v(t)$, and so on the whole interval $\Delta$ the control is now $v \in U$. The open sets $Q_1$ and $Q_2$ here coincide with the whole space. The obtained problem possesses convexity and compactness with respect to $v$; therefore, passing to a problem of type K and using standard existence theorems, one can show that this problem always has a solution.

Let us write out the MP for this problem. The Pontryagin functions are
$$H_1 = \psi_x\, y + \psi_y\, v\, g_1(y) + \psi_t \ \text{ on } \Delta_1, \qquad H_2 = \psi_x\, y + \psi_y\, v\, \sigma(s)\, g_2(y) + \psi_t \ \text{ on } \Delta_2,$$
and the terminal Lagrange function is
$$l(p) = -\alpha_0\, x(T) + \beta_{t_0}\, t_0 + \beta_T\, (t_2 - T) + \beta_x\, x(0) + \beta_y\, y(0) + \gamma\, \bigl( s(t_1) - y(t_1) \bigr)$$
(the sign of the first term reflects that maximizing $x(T)$ means minimizing $-x(T)$).
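Before analyzing the MP conditions, note that for a fixed switching time $t_1$, the candidate trajectory with the full-throttle control $v \equiv 1$ can be integrated numerically. A minimal sketch (not from the paper), with hypothetical positive, differentiable choices of $g_1$, $g_2$, $\sigma$ and simple forward-Euler integration of the two Cauchy problems, first $\dot y = g_1(y)$ on $[0, t_1]$ and then $\dot y = \sigma(s)\, g_2(y)$ on $[t_1, T]$:

```python
def euler(f, y0, t0, t1, n=2000):
    """Forward-Euler integration of y' = f(y) on [t0, t1]."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

# hypothetical gear characteristics and gain (positive and differentiable)
g1 = lambda y: 1.0 / (1.0 + y)
g2 = lambda y: 2.0 / (1.0 + y)
sigma = lambda s: 1.0 + s

t1, T = 1.0, 2.0     # a trial switching time (t1 is free in the problem) and T

# first gear on [0, t1] with v = 1:  y' = g1(y), y(0) = 0
s = euler(g1, 0.0, 0.0, t1)                    # s = y(t1), the switching speed
# second gear on [t1, T] with v = 1:  y' = sigma(s) * g2(y), y(t1) = s
yT = euler(lambda y: sigma(s) * g2(y), s, t1, T)

print(s, yT)   # the speed grows on both intervals, so 0 < s < y(T)
```

For these particular $g_1$, the first stage has the closed form $y(t) = \sqrt{1 + 2t} - 1$, so $s \approx \sqrt{3} - 1$, which the Euler scheme reproduces to a few decimal places.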
For an optimal process we have $\alpha_0 \ge 0$, $(\alpha_0, \beta_{t_0}, \beta_T, \beta_x, \beta_y, \gamma) \ne 0$, and the adjoint system takes the form
$$\dot\psi_x = 0, \qquad -\dot\psi_y = \psi_x + v\, \psi_y\, g_1'(y), \qquad \dot\psi_t = 0 \quad \text{on } \Delta_1;$$
$$\dot\psi_x = 0, \qquad -\dot\psi_y = \psi_x + v\, \psi_y\, \sigma(s)\, g_2'(y), \qquad -\dot\psi_s = v\, \psi_y\, \sigma'(s)\, g_2(y), \qquad \dot\psi_t = 0 \quad \text{on } \Delta_2;$$
the transversality conditions are:

at the left endpoint: $\psi_x(0) = \beta_x$, $\psi_y(0) = \beta_y$, $\psi_t(0) = \beta_{t_0}$;

at the right endpoint: $\psi_x(T) = \alpha_0$, $\psi_y(T) = 0$, $\psi_t(T) = \beta_T$;
the transversality conditions for $\psi_s$ and the jump conditions for $\psi_x$, $\psi_y$ and $\psi_t$:
$$\psi_s(t_1) = \gamma, \qquad \psi_s(T) = 0, \qquad [\Delta\psi_x](t_1) = 0, \qquad [\Delta\psi_y](t_1) = \gamma, \qquad [\Delta\psi_t](t_1) = 0;$$
for a.e. $t \in \Delta_1$:
$$H_1 = \psi_x\, y + \psi_y\, v\, g_1(y) + \psi_t = 0;$$
for a.e. $t \in \Delta_2$:
$$H_2 = \psi_x\, y + \psi_y\, v\, \sigma(s)\, g_2(y) + \psi_t = 0;$$
the maximality condition:
$$\max_{v \in U} \bigl( \psi_x\, y + \psi_y\, v\, g_1(y) + \psi_t \bigr) = 0 \ \text{ for all } t \in \Delta_1, \qquad \max_{v \in U} \bigl( \psi_x\, y + \psi_y\, v\, \sigma(s)\, g_2(y) + \psi_t \bigr) = 0 \ \text{ for all } t \in \Delta_2.$$
The transversality conditions for $\psi_x$ and $\psi_y$ at the point $t_1$ are replaced here by the jump conditions determined by $l_{x(t_1)}(p^0)$ and $l_{y(t_1)}(p^0)$, because the state variables $x$ and $y$ are continuous at the intermediate point $t_1$ (for a detailed justification, see [8]).

Let us find all the extremals in this problem. The adjoint system and the transversality conditions imply that $\psi_x(t) \equiv \alpha_0$ and $\psi_t \equiv \beta_{t_0}$. Let us prove that $\alpha_0 \ne 0$. Indeed, if $\alpha_0 = 0$, then $\psi_x \equiv 0$ on the whole $\Delta$. In that case, since the adjoint equation for $\psi_y$ becomes homogeneous, the condition $\psi_y(T) = 0$ implies $\psi_y \equiv 0$ on $\Delta_2$. Therefore $\psi_t \equiv 0$ (since $H_2 = 0$), and in view of the transversality conditions we get $\psi_s \equiv 0$. Therefore $\gamma = 0$, and $\psi_y$ is continuous on the whole $\Delta$. But then $\psi_y(t_1) = 0$ and, in view of the homogeneity of the adjoint equation, $\psi_y \equiv 0$ on $\Delta_1$, whence $\psi_y \equiv 0$ on the whole $\Delta$, which easily implies that all Lagrange multipliers vanish, a contradiction. Thus, without loss of generality we can take $\alpha_0 = \psi_x \equiv 1$.

Let us show that if $\psi_y(t^*) < 0$ for some $t^* \ne t_1$, then $v \equiv 0$ on the rest of the corresponding interval $\Delta_k$. Indeed, in this case, by the continuity of $\psi_y$ there exists a neighborhood $O(t^*) \subset \Delta_k$ in which $\psi_y(t) < 0$. The maximality condition on $O(t^*)$ gives $v \equiv 0$, and then the adjoint equation $\dot\psi_y(t) = -\psi_x = -1$ implies that $\psi_y(t)$ keeps decreasing further on $\Delta_k$. Thus $\psi_y(t) < 0$ on the rest of $\Delta_k$, and hence $v \equiv 0$ there. If $\psi_y(t^*) = 0$ for some $t^* \ne t_1$, then $\psi_y(t) < 0$ just after $t^*$, and the above consideration gives that again $\psi_y < 0$ on the rest of $\Delta_k$.
Therefore, $\psi_y$ can change its sign only from plus to minus, and the optimal control is a piecewise constant function taking the values 1 or 0, with at most one switching, from 1 to 0, on every $\Delta_k$ (which, in particular, implies that $H_1$ and $H_2$ are continuous everywhere except at the discontinuity points of the control). The extremals with $\psi_y \le 0$ on some interval $(t^*, T) \subset \Delta_2$ do not satisfy the MP, since they obviously contradict the transversality condition $\psi_y(T) = 0$. So $\psi_y > 0$ on $(t_1, T)$ and $v \equiv 1$ on the whole $\Delta_2$.

The extremals with $\psi_y \le 0$ on some interval $(t^*, t_1) \subset \Delta_1$ do not satisfy the MP either. Indeed, according to the equalities $H_1(t_1 - 0) = 0$, $H_2(T) = 0$, and $\psi_y(T) = 0$, we get $y(t_1) + \psi_t = 0$ and $y(T) + \psi_t = 0$, whence $y(t_1) = y(T)$. But since $v \equiv 1$ on $\Delta_2$, we have $\dot y > 0$ on $\Delta_2$, whence $y(t_1) < y(T)$, a contradiction.
Thus, $\psi_y > 0$ on $[0, t_1)$, and the problem possesses a single extremal, with $v \equiv 1$ on the whole $\Delta$, which is then optimal. Let us find it completely. By a direct calculation it is easy to show [6] that $\psi_y$ takes the form
$$\psi_y(t) = \begin{cases} \dfrac{C - y(t)}{g_1(y(t))}, & t \in \Delta_1, \\[2mm] \dfrac{y(T) - y(t)}{\sigma(s)\, g_2(y(t))}, & t \in \Delta_2. \end{cases} \tag{11}$$
Consider the equalities $H_1(0) = 0$ and $H_2(T) = 0$. The first implies that $\psi_y(0)\, g_1(0) + \psi_t = 0$, and the second that $y(T) + \psi_t = 0$. Therefore, in view of (11) and the initial conditions, we obtain $C = y(T)$. Thus, the values $\psi_y(t_1 \pm 0)$ are explicitly expressed in terms of $y(t_1) = s$ and $y(T)$. The jump condition for $\psi_y$ at the point $t_1$ then takes the form
$$\frac{y(T) - s}{\sigma(s)\, g_2(s)} - \frac{y(T) - s}{g_1(s)} = \gamma. \tag{12}$$
The adjoint equation and the transversality conditions for $\psi_s$ imply that
$$\sigma'(s) \int_{t_1}^{T} \psi_y(t)\, g_2(y(t))\, dt = \gamma.$$
In view of (11), this equation reduces to the form
$$\frac{\sigma'(s)}{\sigma(s)} \int_{t_1}^{T} \bigl( y(T) - y(t) \bigr)\, dt = \gamma. \tag{13}$$
For a fixed $t_1$, the function $y(t)$ is determined as the solution of the Cauchy problems $\dot y = g_1(y)$, $y(0) = 0$ on $\Delta_1$ (which yields $s = y(t_1)$) and $\dot y = \sigma(s)\, g_2(y)$, $y(t_1) = s$ on $\Delta_2$. Here
$$s = \int_0^{t_1} g_1(y(t))\, dt, \tag{14}$$
$$y(T) = s + \sigma(s) \int_{t_1}^{T} g_2(y(t))\, dt. \tag{15}$$
Substituting $s$ and $y(T)$ into (12), we can find $\gamma = \gamma(t_1)$, and then (13) becomes an equation in the single unknown $t_1$. Thus, relations (12)–(15) form a system of four equations in the four unknowns $t_1, s, y(T), \gamma$, which can be resolved in the non-singular case, and so the complete extremal can be found.

Remark 4. This example was taken from the paper [6], in which the authors investigate the extremal with $u \equiv 1$ by means of the Hybrid Necessary Principle, claiming that in this case it gives stronger results than the MP does. Imposing assumptions on the functions $g_1, g_2, \sigma$ which guarantee the equality $\gamma = 0$, and using the Hybrid Necessary Principle, the authors obtain the optimality condition $\sigma'(s) \le 0$. However, in the case $\gamma = 0$ the MP gives the stronger result $\sigma'(s) = 0$ (it follows from (13) and the inequality $y(t) < y(T)$).
Moreover, when γ = 0 we obtain from (12) that the value of s also satisfies the equation g_1(s) − σ(s) g_2(s) = 0, which is not typical for functions g_1, g_2, σ of generic type. For example, if σ strictly increases, then by virtue of (13) we necessarily have γ > 0, and therefore the case γ = 0 is not typical.

Acknowledgments. This work was supported by the Russian Foundation for Basic Research, project no. The authors thank B.M. Miller and V.A. Dykhta for valuable discussions.

References

[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mischenko, Mathematical Theory of Optimal Processes (in Russian). M., Nauka.

[2] V.M. Alekseev, V.M. Tikhomirov, S.V. Fomin, Optimal Control (in Russian). M., Nauka.

[3] U.M. Volin, G.M. Ostrovskii, Maximum principle for discontinuous systems and its application to problems with state constraints (in Russian) // Izvestia vuzov. Radiophysics, 1969, v. 12, no. 11.

[4] H.J. Sussmann, A maximum principle for hybrid optimal control problems // Proc. of 38th IEEE Conference on Decision and Control, Phoenix.

[5] M. Garavello, B. Piccoli, Hybrid necessary principle // SIAM J. on Control and Optimization, 2005, v. 43, no. 5.

[6] C. D'Apice, M. Garavello, R. Manzo, B. Piccoli, Hybrid optimal control: case study of a car with gears // International Journal of Control, v. 76 (2003).

[7] A.A. Milutin, A.V. Dmitruk, N.P. Osmolovskii, The maximum principle in optimal control (in Russian). M., Mech.-Math. Dept. of Moscow State University.

[8] A.V. Dmitruk, A.M. Kaganovich, Maximum principle for optimal control problems with intermediate constraints // In: Nonlinear Dynamics and Control, vol. 6, M., Nauka, 2006 (in Russian), to appear.