PREPRINT 2009:10. Continuous-discrete optimal control problems KJELL HOLMÅKER

PREPRINT 2009:10 Continuous-discrete optimal control problems KJELL HOLMÅKER Department of Mathematical Sciences Division of Mathematics CHALMERS UNIVERSITY OF TECHNOLOGY UNIVERSITY OF GOTHENBURG Göteborg Sweden 2009


Preprint 2009:10 Continuous-discrete optimal control problems Kjell Holmåker Department of Mathematical Sciences Division of Mathematics Chalmers University of Technology and University of Gothenburg SE Göteborg, Sweden Göteborg, February 2009

Preprint 2009:10 ISSN Matematiska vetenskaper Göteborg 2009

Continuous-discrete optimal control problems

Kjell Holmåker

Abstract

A continuous-discrete control system is studied in which the state variable x may have a jump discontinuity at certain times τ_k. The size of the jump is determined by the current value of x and the choice of a control parameter z_k. Between the jump times, x satisfies an equation ẋ = f(t, x, u(t)) for some choice of the control function u. There may also be constraints on the values x(τ_k) and on the jumps. A cost functional depending on these quantities is to be minimized. Necessary conditions for optimality are derived. This is done by formulating the problem as a special case of a general mathematical programming problem in an infinite-dimensional space and applying a general multiplier rule. The times τ_k may be fixed or allowed to vary. As an application, necessary conditions are obtained for a multiprocess problem, i.e., a problem where at each τ_k there is a transition to a different state space.

1 Introduction

Differential equations with impulse effects can be used to model situations where some quantities can change instantly. In a typical case a state variable x evolves in time according to an equation ẋ = f(t, x) except at times τ_k, where it makes a jump of size J_k(x(τ_k)). A solution that exists on an interval [T_0, T_1] with jumps at interior points τ_k, k = 1, ..., r, where T_0 = τ_0 < τ_1 < ... < τ_r < τ_{r+1} = T_1, is assumed to be continuous from the left at τ_k for k = 1, ..., r+1 and continuous from the right at T_0. On each (τ_k, τ_{k+1}] (0 ≤ k ≤ r) it is absolutely continuous and satisfies ẋ(t) = f(t, x(t)) a.e., with x(τ_k^+) = x(τ_k + 0) = x(τ_k) + J_k(x(τ_k)) for 1 ≤ k ≤ r. We write Δx|_{τ_k} = x(τ_k^+) − x(τ_k) = J_k(x(τ_k)) and say that x satisfies

    ẋ = f(t, x),  t ≠ τ_k,    Δx|_{τ_k} = J_k(x(τ_k)).

Equations of this type are treated, for example, in [1].
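Such impulsive systems are straightforward to simulate: integrate ẋ = f(t, x) between the jump times and apply x(τ_k^+) = x(τ_k) + J_k(x(τ_k)) at each τ_k. A minimal sketch (the scalar dynamics f(t, x) = 0.5x and the jump J_1(x) = −0.3x are illustrative choices of ours, not from the paper):

```python
import math

def rk4_step(f, t, x, h):
    # One classical Runge-Kutta step for xdot = f(t, x).
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2*k1)
    k3 = f(t + h/2, x + h/2*k2)
    k4 = f(t + h, x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

def simulate_impulsive(f, jumps, x0, t0, t1, n):
    """Integrate xdot = f(t, x) on [t0, t1] with n RK4 steps, applying the
    jump map J_k at each interior jump time tau_k (jumps: list of (tau_k, J_k)).
    The jump times are assumed to fall on the step grid."""
    h = (t1 - t0)/n
    jump_at = {round((tau - t0)/h): J for tau, J in jumps}
    x = x0
    for i in range(n):
        x = rk4_step(f, t0 + i*h, x, h)
        if i + 1 in jump_at and i + 1 < n:
            x = x + jump_at[i + 1](x)   # x(tau_k^+) = x(tau_k) + J_k(x(tau_k))
    return x

# Illustrative check: xdot = 0.5*x on [0, 2], jump x -> x - 0.3*x at tau_1 = 1.
# Exact endpoint value: 0.7 * e^{0.5*2} * 1 = 0.7 * e.
x_num = simulate_impulsive(lambda t, x: 0.5*x, [(1.0, lambda x: -0.3*x)],
                           1.0, 0.0, 2.0, 2000)
assert abs(x_num - 0.7*math.exp(1.0)) < 1e-8
```

The trajectory is kept left-continuous at each τ_k, matching the convention above: the jump is applied only after the state at τ_k has been recorded by the integrator.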
In this paper we consider continuous-discrete control systems

    ẋ = f(t, x, u(t)),  t ≠ τ_k,    Δx|_{τ_k} = J_k(x(τ_k), z_k),

with control functions u(t) ∈ Ω(t) and control parameters z_k ∈ Z_k. The set J_k(x(τ_k), Z_k) of allowed jumps is assumed to be convex. We have a number of constraints on the values x(τ_k) and the jumps J_k(x(τ_k), z_k), and we want to minimize a functional depending on these quantities. In Section 5 we derive necessary conditions for optimality, first when the switching times τ_k are fixed, and then when they are allowed to vary. The first result is obtained by formulating the problem as a special case of a general optimization problem and applying the necessary conditions for that problem. This general problem is discussed in Section 3. To apply it we need to know how the solution x(t) is affected by certain variations of u(t) and z_k; in Section 4 a known result is extended to systems with impulse effects. We also need a solution formula for the linear case, which is given in Section 2. In the case where the τ_k may vary, a certain transformation of the time variable reduces the problem to one with fixed switching times, and the previously obtained results can be applied.

Our results contain as special cases the Pontryagin maximum principle for problems with interior constraints (J_k = 0) and the discrete maximum principle as it appears in [13] (f = 0). In a related type of problem there are different descriptions (including different state spaces) on different time intervals. Then we cannot speak of jumps, but there can be equations relating the values at the endpoints of the time intervals. Such problems, sometimes called multiprocess problems, are discussed in Section 6, where it is shown how the results from Section 5 can be used to derive necessary conditions. It seems that this particular combination of an ordinary control and discrete controls has not been studied much. Some authors (e.g. [2], [12] and [14]) have considered problems with only discrete controls. In [14] there is the same convexity assumption as in this paper, and their result is included in our Theorem 4.

2 Linear systems with impulse effects

Consider the following linear system in R^n with impulse effects:

    ẋ = A(t)x + b(t),  t ≠ τ_k,                          (2.1)
    Δx|_{τ_k} = x(τ_k^+) − x(τ_k) = C_k x(τ_k) + d_k,     (2.2)
    x(T_0) = x_0.                                         (2.3)

Here the elements of the n×n-matrix A(t) and the n-vector b(t) belong to L^1(T_0, T_1), and C_k is an n×n-matrix. We will derive an expression for the solution (equation (2.7) below). Let Φ(t) be the fundamental matrix at T_0 of the system ẋ = A(t)x (i.e., Φ̇(t) = A(t)Φ(t) a.e., Φ(T_0) = E, where E is the n×n identity matrix). Define

    Ψ_k = Φ^{-1}(τ_k)(E + C_k)Φ(τ_k),  k = 1, ..., r,     (2.4)

    Ψ_{kj} = Ψ_k ⋯ Ψ_{j+1}  if 0 ≤ j < k ≤ r,
    Ψ_{kj} = E              if 0 ≤ j = k ≤ r.

For T_0 ≤ s ≤ t ≤ T_1 we define a piecewise constant function Ψ(t, s): Ψ(t, s) = Ψ_{00} = E if s ≤ t ≤ τ_1; and if τ_k < t ≤ τ_{k+1} (k = 1, ..., r), then

    Ψ(t, s) = Ψ_{k0}            if s ≤ τ_1,
    Ψ(t, s) = Ψ_{k1}            if τ_1 < s ≤ τ_2,
      ⋮
    Ψ(t, s) = Ψ_{k,k−1} = Ψ_k   if τ_{k−1} < s ≤ τ_k,
    Ψ(t, s) = Ψ_{kk} = E        if τ_k < s ≤ t.

Note that

    Ψ(t, s) = Ψ_{kj} = Ψ_k Ψ(τ_k, s)  if τ_j < s ≤ τ_{j+1}, τ_k < t ≤ τ_{k+1}, 0 ≤ j < k ≤ r,
    Ψ(t, s) = Ψ_{kk} = E              if τ_k < s ≤ t ≤ τ_{k+1}, 0 ≤ k ≤ r,            (2.5)

    Ψ(t, T_0) = Ψ_{k0} = Ψ_k Ψ(τ_k, T_0)  if τ_k < t ≤ τ_{k+1}, 1 ≤ k ≤ r,
    Ψ(t, T_0) = Ψ_{00} = E                if t ≤ τ_1.                                  (2.6)
Then the solution of (2.1)-(2.3) can be written, for all t ∈ [T_0, T_1],

    x(t) = Φ(t)Ψ(t, T_0)x_0 + ∫_{T_0}^t Φ(t)Ψ(t, s)Φ^{-1}(s)b(s) ds
           + Σ_{T_0 < τ_j < t} Φ(t)Ψ(t, τ_j^+)Φ^{-1}(τ_j)d_j.                          (2.7)
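In the scalar case (n = 1) the representation (2.7) can be checked directly against stepwise integration, since Φ(t) = e^{at} and Ψ_k = 1 + C_k. A numerical sketch (the coefficients a, b_0, C_k, d_k below are arbitrary illustrative values of ours):

```python
import math

# Scalar data: xdot = a*x + b0, jumps x(tau_k^+) = (1 + c[k])*x(tau_k) + d[k].
a, b0, x0 = 0.3, 1.0, 1.0
T0, T1 = 0.0, 2.0
taus = [0.5, 1.2]
c = [0.2, -0.4]
d = [0.1, 0.05]

# Direct solution: exact propagation on each interval, then the jump.
def propagate(x, s, t):
    # exact solution of xdot = a*x + b0 from time s to time t
    return math.exp(a*(t - s))*x + (b0/a)*(math.exp(a*(t - s)) - 1.0)

x, s = x0, T0
for tau, ck, dk in zip(taus, c, d):
    x = propagate(x, s, tau)
    x = (1.0 + ck)*x + dk
    s = tau
x_direct = propagate(x, s, T1)

# Formula (2.7), scalar: Psi(t, s) = product of (1 + c_l) over jumps tau_l
# with s <= tau_l < t; evaluated at the midpoint of each piece below.
def Psi(t, s):
    p = 1.0
    for tau, ck in zip(taus, c):
        if s <= tau < t:
            p *= 1.0 + ck
    return p

t = T1
x_formula = math.exp(a*t)*Psi(t, T0)*x0
# Integral term: Psi(t, .) is piecewise constant, so integrate in closed form.
pts = [T0] + [tau for tau in taus if tau < t] + [t]
for lo, hi in zip(pts[:-1], pts[1:]):
    P = Psi(t, 0.5*(lo + hi))
    x_formula += P*(b0/a)*(math.exp(a*(t - lo)) - math.exp(a*(t - hi)))
# Jump terms: Psi(t, tau_j^+) counts only jumps strictly after tau_j.
for j, (tau, dk) in enumerate(zip(taus, d)):
    p = 1.0
    for tau2, ck in zip(taus, c):
        if tau < tau2 < t:
            p *= 1.0 + ck
    x_formula += math.exp(a*(t - tau))*p*dk

assert abs(x_formula - x_direct) < 1e-10
```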

To see this, let y(t) = Φ^{-1}(t)x(t). On (τ_k, τ_{k+1}) (0 ≤ k ≤ r) we have ẏ(t) = Φ^{-1}(t)b(t) a.e. Further, y(T_0) = x_0 and (using (2.4))

    y(τ_k^+) = Φ^{-1}(τ_k)x(τ_k^+) = Φ^{-1}(τ_k)[(E + C_k)x(τ_k) + d_k]
             = Φ^{-1}(τ_k)(E + C_k)Φ(τ_k)y(τ_k) + Φ^{-1}(τ_k)d_k
             = Ψ_k y(τ_k) + Φ^{-1}(τ_k)d_k   for k ≥ 1.                               (2.8)

For t ≤ τ_1 we have (see (2.6))

    y(t) = x_0 + ∫_{T_0}^t Φ^{-1}(s)b(s) ds = Ψ(t, T_0)x_0 + ∫_{T_0}^t Ψ(t, s)Φ^{-1}(s)b(s) ds.

For τ_1 < t ≤ τ_2 we then obtain from (2.8), using (2.5)-(2.6),

    y(t) = y(τ_1^+) + ∫_{τ_1}^t Φ^{-1}(s)b(s) ds
         = Ψ_1 y(τ_1) + Φ^{-1}(τ_1)d_1 + ∫_{τ_1}^t Φ^{-1}(s)b(s) ds
         = Ψ(t, T_0)x_0 + Ψ_1 ∫_{T_0}^{τ_1} Ψ(τ_1, s)Φ^{-1}(s)b(s) ds + Φ^{-1}(τ_1)d_1 + ∫_{τ_1}^t Φ^{-1}(s)b(s) ds
         = Ψ(t, T_0)x_0 + ∫_{T_0}^t Ψ(t, s)Φ^{-1}(s)b(s) ds + Ψ(t, τ_1^+)Φ^{-1}(τ_1)d_1.

By induction we get in a similar way

    y(t) = Ψ(t, T_0)x_0 + ∫_{T_0}^t Ψ(t, s)Φ^{-1}(s)b(s) ds + Σ_{j=1}^k Ψ(t, τ_j^+)Φ^{-1}(τ_j)d_j

for τ_k < t ≤ τ_{k+1}, 1 ≤ k ≤ r. This proves (2.7). See [1] for further properties of systems of the form (2.1)-(2.3).

3 A general extremal problem

Necessary conditions for many optimization problems, including optimal control problems, can be derived by applying the necessary conditions for a generally formulated extremal problem. A simple version is the following mathematical programming problem in an infinite-dimensional space.

Problem (P). Let X be a real linear space, and let there be given real-valued functions ϕ_{−q}, ..., ϕ_0, ..., ϕ_p (p and q are non-negative integers) defined on X, and a subset A of X. Minimize ϕ_0(x) subject to the constraints

    ϕ_i(x) = 0,  i = 1, ..., p,      (3.1a)
    ϕ_i(x) ≤ 0,  i = −q, ..., −1,    (3.1b)
    x ∈ A.                           (3.1c)

Assume that x_0 is a solution of Problem (P), that is, x_0 satisfies (3.1) and ϕ_0(x_0) ≤ ϕ_0(x) for all x such that (3.1) is satisfied. Under suitable conditions it is possible to derive a necessary condition for optimality in the form of a generalized multiplier rule. What we need is some differentiability properties of the ϕ_i at x_0 and some way of approximating A near x_0.
When the multiplier rule is applied to an optimal control problem, this approximation of A comes from a certain perturbation result in the theory of differential equations. We make the following assumption:

Assumption 1. There exist a set M ⊂ X and functions h_i : X → R, −q ≤ i ≤ p, with the following properties: For every finite collection y_1, ..., y_N of points in M there is an ε_0 > 0 such that for each ε ∈ (0, ε_0] and each β ∈ S_N, where

    S_N = {β = (β_1, ..., β_N) ∈ R^N : β_j ≥ 0 for j = 1, ..., N, Σ_{j=1}^N β_j = 1},   (3.2)

there exists a point ρ_{β,ε} ∈ A such that, with y_β = Σ_{j=1}^N β_j y_j ∈ co M,

(i) ϕ_i(ρ_{β,ε}) is continuous with respect to β ∈ S_N for 1 ≤ i ≤ p;

(ii) for 1 ≤ i ≤ p, h_i is linear, and

    lim_{ε→0+} [ϕ_i(ρ_{β,ε}) − ϕ_i(x_0)]/ε = h_i(y_β);

(iii) for −q ≤ i ≤ 0, h_i is convex, and

    limsup_{ε→0+} [ϕ_i(ρ_{β,ε}) − ϕ_i(x_0)]/ε ≤ h_i(y_β);

and the convergence in (ii) and (iii) is uniform with respect to β ∈ S_N.

Remark. If X is normed, then the function h_i in (ii) might be the Hadamard derivative, i.e.,

    lim_{ε→0+, z→y} [ϕ_i(x_0 + εz) − ϕ_i(x_0)]/ε = h_i(y),

and similarly in (iii). In this case ρ_{β,ε} should be a point in A such that ρ_{β,ε} = x_0 + εy_β + o(ε). The set M may be considered as a set of directions in which it is possible to approximate A near x_0. However, we only need the properties of the composite functions ϕ_i(ρ_{β,ε}), and it may be easier to verify (i), (ii) and (iii) directly than to treat ϕ_i(x) and ρ_{β,ε} separately.

Theorem 1. Let x_0 be a solution of Problem (P) and let Assumption 1 be satisfied. Then there exist real numbers λ_{−q}, ..., λ_p, not all zero, such that

    Σ_{i=−q}^p λ_i h_i(y) ≤ 0 for all y ∈ co M,    (3.3)
    λ_i ≤ 0,  −q ≤ i ≤ 0,                           (3.4)
    λ_i ϕ_i(x_0) = 0,  −q ≤ i ≤ −1.                 (3.5)

A proof of this theorem can be found in [10]; see also [7] and [9].

4 A perturbation result for differential equations

We are going to study control systems ẋ = f(t, x, u(t)) on a fixed time interval I = [T_0, T_1]. For the admissible controls u(·) there is a constraint of the form u(t) ∈ Ω(t) for all t ∈ I. We say that a function (t, x) ↦ F(t, x) with values in R^r for some r, defined for t ∈ I and x ∈ R^n, is of Carathéodory type (on I) if it is measurable in t and continuous in x, and if for each compact set K ⊂ R^n there exists a function ρ ∈ L^1(I) such that |F(t, x)| ≤ ρ(t) for all t ∈ I, x ∈ K. We say that F belongs to the class F if F is differentiable with respect to x, and F and F_x are of Carathéodory type. [F_x is the r×n-matrix with elements ∂F_i/∂x_j.] For f(t, x, u) and Ω(t) we make the following assumption.

Assumption 2. (i) f is a mapping from I × R^n × R^m to R^n.
For each t ∈ I, Ω(t) is a non-empty subset of R^m.

(ii) f is differentiable with respect to x, and f and f_x are measurable in t (for fixed x and u) and continuous in x and u separately (for fixed t).

(iii) The set of admissible controls u(·) is

    U = {u(·) : u(·) is measurable on I, u(t) ∈ Ω(t) for all t ∈ I, and the function (t, x) ↦ f(t, x, u(t)) belongs to the class F}.

(iv) There exists a countable family {u_j} of functions u_j ∈ U such that the set {u_j(t)} is dense in Ω(t) for every t ∈ I. [It can be shown that this is the case, for example, if the set-valued mapping t ↦ Ω(t) is measurable in the sense that the set {(t, u) ∈ R^{m+1} : t ∈ I, u ∈ Ω(t)} belongs to the σ-algebra in R^{m+1} generated by the Lebesgue sets in I and the Borel sets in R^m (assuming that U ≠ ∅ and that (t, x) ↦ f(t, x, v(t)) belongs to F for bounded measurable functions v); see [11].]

For applications to optimal control the following result about certain perturbations of a given control u_0 is of central importance.

Theorem 2. Assume that u_0 ∈ U and that x_0 is a piecewise continuous function on [T_0, T_1] that is a solution of ẋ = f(t, x, u_0(t)) on a subinterval I′ = [τ′, τ″] ⊂ I. For ξ ∈ R^n and u ∈ U, let v(t; ξ, u) be the solution of

    v̇ = f_x(t, x_0(t), u_0(t))v + f(t, x_0(t), u(t)) − f(t, x_0(t), u_0(t)),  v(τ′) = ξ.   (4.1)

Let ξ_j ∈ R^n and u_j ∈ U, j = 1, ..., N, be given. For each β ∈ S_N (see (3.2)) and each ε ∈ (0, 1) there exist pairwise disjoint sets A_j = A_j(β, ε) ⊂ I′, j = 0, 1, ..., N, each a finite union of intervals, such that ∪_{j=0}^N A_j = I′ and such that the following is true. Define the control u_{β,ε} ∈ U by

    u_{β,ε}(t) = u_j(t) for t ∈ A_j(β, ε),  j = 0, 1, ..., N,

and let x_{β,ε} be the solution of

    ẋ = f(t, x, u_{β,ε}(t)),  x(τ′) = x_0(τ′) + ε Σ_{j=1}^N β_j ξ_j.

Then there exists an ε_0 ∈ (0, 1) such that for all β ∈ S_N and all ε ∈ (0, ε_0], x_{β,ε}(·) exists on all of I′ and satisfies

    x_{β,ε}(t) = x_0(t) + ε Σ_{j=1}^N β_j v(t; ξ_j, u_j) + r(t; β, ε),   (4.2)

where r(t; β, ε)/ε → 0 as ε → 0+, uniformly with respect to t and β. Furthermore, for fixed ε, x_{β,ε}(t) is continuous in β, uniformly with respect to t. A proof of this theorem can be found in [7] and [8] (in slightly different notation) or [10].
An important part of the proof is a lemma by Halkin (see [7]; a proof is also given in [10]) which states that the sets A_j(β, ε) can be chosen so that

    m(A_0) = (1 − ε)(T_1 − T_0),    m(A_j) = εβ_j(T_1 − T_0),  j = 1, ..., N,

    m(A_j(β, ε) Δ A_j(β′, ε)) → 0 as β′ → β,  β, β′ ∈ S_N,  j = 0, ..., N,

and

    | (1 − ε)∫_{T_0}^t f(τ, x_0(τ), u_0(τ)) dτ + ε Σ_{j=1}^N β_j ∫_{T_0}^t f(τ, x_0(τ), u_j(τ)) dτ
      − ∫_{T_0}^t f(τ, x_0(τ), u_{β,ε}(τ)) dτ | < ε²  for all t ∈ I.                     (4.3)

From (4.3) and a general result about the effect of perturbations of the initial value and the right-hand side of a differential equation, Theorem 2 follows (see [10]).
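The content of Theorem 2 and Halkin's lemma can be observed numerically: spreading the perturbed control over many short intervals of total measure ε(T_1 − T_0) shifts the endpoint by approximately ε v(T_1). A sketch for the illustrative system ẋ = −x + u (our own example) with u_0 ≡ 0, a single perturbing control u_1 ≡ 1, and ξ_1 = 0:

```python
import math

# x0dot = -x0, x0(0) = 1   =>  x0(t) = e^{-t}
# vdot  = -v + (u1 - u0) = -v + 1, v(0) = 0  =>  v(1) = 1 - e^{-1}
eps, m = 0.01, 500   # perturbation size; number of cells over which A_1 is spread

# Exact propagation of xdot = -x + u (u constant) over a step of length h:
step = lambda x, u, h: math.exp(-h)*x + u*(1.0 - math.exp(-h))

x = 1.0
for i in range(m):
    x = step(x, 1.0, eps/m)            # u = u1 on the first eps/m of each cell
    x = step(x, 0.0, (1.0 - eps)/m)    # u = u0 on the rest of the cell
deriv_est = (x - math.exp(-1.0))/eps   # estimate of the first-order effect

v1 = 1.0 - math.exp(-1.0)              # v(1) from the variational equation
assert abs(deriv_est - v1) < 0.01
```

Concentrating the same total measure ε in one interval at a single location would give a different endpoint shift; it is the spreading in Halkin's lemma that produces the averaged term in (4.3).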

Now we want to extend Theorem 2 to problems with impulse effects. We have T_0 = τ_0 < τ_1 < ... < τ_r < τ_{r+1} = T_1, with the τ_k fixed, u ∈ U, and consider the equation

    ẋ = f(t, x, u(t)),  t ≠ τ_k,    Δx|_{τ_k} = J_k(x(τ_k), z_k),  k = 1, ..., r.

The parameters z_k are chosen from some sets Z_k. We assume that J_k(·, z_k) ∈ C^1(R^n) for each k and z_k ∈ Z_k, and that the set J_k(x, Z_k) is convex for each k and x. Denote z = (z_1, ..., z_r) ∈ Z_1 × ... × Z_r = Z. Let u_0 ∈ U, z_0 = (z_{0,1}, ..., z_{0,r}) ∈ Z and a corresponding solution x_0 be given, i.e.,

    ẋ_0 = f(t, x_0, u_0(t)),  t ≠ τ_k,    Δx_0|_{τ_k} = J_k(x_0(τ_k), z_{0,k}).

Let ξ_j ∈ R^n, u_j ∈ U, and z_j ∈ Z, j = 1, ..., N, be given. For ε ∈ (0, 1) and β ∈ S_N, find sets A_j(β, ε) and define u_{β,ε} ∈ U as described above. Let x_{β,ε} be the solution of

    ẋ = f(t, x, u_{β,ε}(t)),  x(T_0) = x_0(T_0) + ε Σ_{j=1}^N β_j ξ_j.

If ε is sufficiently small, x_{β,ε}(t) exists on [T_0, τ_1] and satisfies (according to Theorem 2)

    x_{β,ε}(t) = x_0(t) + ε Σ_{j=1}^N β_j v(t; ξ_j, u_j) + r_1(t; β, ε),  t ∈ [T_0, τ_1],

where v(·; ξ, u) satisfies (4.1) with τ′ = T_0, and r_1(t; β, ε)/ε → 0 as ε → 0+ uniformly with respect to t ∈ [T_0, τ_1] and β ∈ S_N; we say that r_1(t; β, ε) = o(ε) uniformly w.r.t. t and β. Furthermore, x_{β,ε}(τ_1) is continuous in β. Since J_1(x_{β,ε}(τ_1), Z_1) is convex, there exists z_{β,ε,1} ∈ Z_1 such that

    J_1(x_{β,ε}(τ_1), z_{0,1}) + ε Σ_{j=1}^N β_j [J_1(x_{β,ε}(τ_1), z_{j,1}) − J_1(x_{β,ε}(τ_1), z_{0,1})] = J_1(x_{β,ε}(τ_1), z_{β,ε,1}).

We continue the definition of x_{β,ε} by letting the jump at τ_1 be J_1(x_{β,ε}(τ_1), z_{β,ε,1}). The new initial value at τ_1 is x_{β,ε}(τ_1^+) = x_{β,ε}(τ_1) + J_1(x_{β,ε}(τ_1), z_{β,ε,1}). We have

    J_1(x_{β,ε}(τ_1), z_{0,1}) = J_1(x_0(τ_1), z_{0,1}) + ε (∂J_1/∂x)(x_0(τ_1), z_{0,1}) Σ_{j=1}^N β_j v(τ_1; ξ_j, u_j) + o(ε)

uniformly w.r.t. β. For j = 1, ..., N it is enough to note that J_1(x_{β,ε}(τ_1), z_{j,1}) = J_1(x_0(τ_1), z_{j,1}) + O(ε) uniformly w.r.t. β.
Thus

    x_{β,ε}(τ_1^+) = x_0(τ_1) + ε Σ_{j=1}^N β_j v(τ_1; ξ_j, u_j) + J_1(x_0(τ_1), z_{0,1})
        + ε (∂J_1/∂x)(x_0(τ_1), z_{0,1}) Σ_{j=1}^N β_j v(τ_1; ξ_j, u_j)
        + ε Σ_{j=1}^N β_j [J_1(x_0(τ_1), z_{j,1}) − J_1(x_0(τ_1), z_{0,1})] + o(ε)
      = x_0(τ_1^+) + ε ( E + (∂J_1/∂x)(x_0(τ_1), z_{0,1}) ) Σ_{j=1}^N β_j v(τ_1; ξ_j, u_j)
        + ε Σ_{j=1}^N β_j [J_1(x_0(τ_1), z_{j,1}) − J_1(x_0(τ_1), z_{0,1})] + o(ε)      (4.4)

uniformly w.r.t. β. From the definition of z_{β,ε,1} we see that J_1(x_{β,ε}(τ_1), z_{β,ε,1}) is continuous in β; thus x_{β,ε}(τ_1^+) is continuous in β. Now let v(t; ξ, u, z), for ξ ∈ R^n, u ∈ U, and z ∈ Z, be the solution of the linear system

    v̇ = f_x(t, x_0(t), u_0(t))v + f(t, x_0(t), u(t)) − f(t, x_0(t), u_0(t)),  t ≠ τ_k,
    Δv|_{τ_k} = (∂J_k/∂x)(x_0(τ_k), z_{0,k})v(τ_k) + J_k(x_0(τ_k), z_k) − J_k(x_0(τ_k), z_{0,k}),  k = 1, ..., r,
    v(T_0) = ξ.

Let v_j(t) = v(t; ξ_j, u_j, z_j). Note that v_j(t) coincides with v(t; ξ_j, u_j) above for t ≤ τ_1. Since

    v_j(τ_1^+) = v_j(τ_1) + (∂J_1/∂x)(x_0(τ_1), z_{0,1})v_j(τ_1) + J_1(x_0(τ_1), z_{j,1}) − J_1(x_0(τ_1), z_{0,1}),

we have from (4.4)

    x_{β,ε}(τ_1^+) = x_0(τ_1^+) + ε Σ_{j=1}^N β_j v_j(τ_1^+) + o(ε).

Next, define x_{β,ε}(t) for τ_1 < t ≤ τ_2 as the solution of

    ẋ = f(t, x, u_{β,ε}(t)),
    x(τ_1^+) = x_{β,ε}(τ_1) + J_1(x_{β,ε}(τ_1), z_{β,ε,1}) = x_0(τ_1^+) + ε Σ_{j=1}^N β_j v_j(τ_1^+) + o(ε).

It exists on (τ_1, τ_2] if ε is sufficiently small and satisfies (according to Theorem 2)

    x_{β,ε}(t) = x_0(t) + ε Σ_{j=1}^N β_j v_j(t) + o(ε)

as ε → 0+, uniformly w.r.t. t and β. This follows since v_j satisfies

    v̇_j(t) = f_x(t, x_0(t), u_0(t))v_j(t) + f(t, x_0(t), u_j(t)) − f(t, x_0(t), u_0(t))

on (τ_1, τ_2] (a.e.). The extra term o(ε) in the initial value only contributes an extra o(ε) in the solution (an application of the Gronwall inequality). Also, x_{β,ε}(t) is continuous in β. We now define z_{β,ε,2} ∈ Z_2 such that

    J_2(x_{β,ε}(τ_2), z_{0,2}) + ε Σ_{j=1}^N β_j [J_2(x_{β,ε}(τ_2), z_{j,2}) − J_2(x_{β,ε}(τ_2), z_{0,2})] = J_2(x_{β,ε}(τ_2), z_{β,ε,2}).

The jump at τ_2 is J_2(x_{β,ε}(τ_2), z_{β,ε,2}). Then we can proceed as above and define x_{β,ε}(t) on (τ_2, τ_3], (τ_3, τ_4], ..., (τ_r, T_1]. We have then defined z_{β,ε} = (z_{β,ε,1}, ..., z_{β,ε,r}) ∈ Z, and x_{β,ε}(·) is the solution of

    ẋ = f(t, x, u_{β,ε}(t)),  t ≠ τ_k,
    Δx|_{τ_k} = J_k(x(τ_k), z_{β,ε,k}),  k = 1, ..., r,
    x(T_0) = x_0(T_0) + ε Σ_{j=1}^N β_j ξ_j.
As end result we obtain

    x_{β,ε}(t) = x_0(t) + ε Σ_{j=1}^N β_j v_j(t) + r(t; β, ε),   (4.5)

where r(t; β, ε)/ε → 0 as ε → 0+, uniformly w.r.t. t and β, and x_{β,ε}(t) is continuous in β uniformly w.r.t. t (for fixed ε).

5 A continuous-discrete optimal control problem

5.1 Fixed switching times

Consider the control system

    ẋ = f(t, x, u(t)),  t ≠ τ_k,  u(·) ∈ U,                     (5.1)
    Δx|_{τ_k} = J_k(x(τ_k), z_k),  k = 1, ..., r,  z ∈ Z.        (5.2)

The notation and general assumptions are as in Section 4. We have a number of constraints on the values x(τ_k) (0 ≤ k ≤ r+1) and x(τ_k^+), or equivalently Δx|_{τ_k} = J_k(x(τ_k), z_k) (1 ≤ k ≤ r):

    g_i(x(τ_0), x(τ_1), Δx|_{τ_1}, ..., x(τ_r), Δx|_{τ_r}, x(τ_{r+1})) = 0,  i = 1, ..., p,      (5.3)
    g_i(x(τ_0), x(τ_1), Δx|_{τ_1}, ..., x(τ_r), Δx|_{τ_r}, x(τ_{r+1})) ≤ 0,  i = −q, ..., −1.    (5.4)

We want to minimize a functional

    g_0(x(τ_0), x(τ_1), Δx|_{τ_1}, ..., x(τ_r), Δx|_{τ_r}, x(τ_{r+1})).                           (5.5)

We assume that the functions g_i, defined for (x_0, x_1, y_1, ..., x_r, y_r, x_{r+1}) ∈ (R^n)^{2r+2}, are continuously differentiable. Assume that (u_0(·), z_0, x_0(·)) is an optimal triple for the problem of minimizing (5.5) subject to (5.1)-(5.4). Let us work in the linear space X = PC^n(I) × (R^n)^r, where PC^n(I) is the linear space of piecewise continuous n-vector functions on I = [T_0, T_1] that are continuous except perhaps at τ_k, k = 1, ..., r, where they are continuous from the left. Elements of (R^n)^r are denoted by ȳ = (y_1, ..., y_r). Let

    A = {(x, ȳ) ∈ X : x(·) is a solution of (5.1)-(5.2) and y_k = J_k(x(τ_k), z_k), k = 1, ..., r, for some u ∈ U, z ∈ Z},

    ϕ_i(x, ȳ) = g_i(x(τ_0), x(τ_1), y_1, ..., x(τ_r), y_r, x(τ_{r+1})),  −q ≤ i ≤ p,

    ȳ_0 = (y_{0,1}, ..., y_{0,r}),  where y_{0,k} = J_k(x_0(τ_k), z_{0,k}).

Then (x_0, ȳ_0) ∈ A is a solution of the problem of minimizing ϕ_0(x, ȳ) subject to (x, ȳ) ∈ A, ϕ_i(x, ȳ) = 0 for i = 1, ..., p, and ϕ_i(x, ȳ) ≤ 0 for i = −q, ..., −1. Thus our control problem is formulated as a special case of Problem (P) in Section 3. We must verify Assumption 1 in this case. Let v(t; ξ, u, z) be as in Section 4, and define

    w_k(ξ, u, z) = (∂J_k/∂x)(x_0(τ_k), z_{0,k})v(τ_k; ξ, u, z) + J_k(x_0(τ_k), z_k) − J_k(x_0(τ_k), z_{0,k}),  k = 1, ..., r.

Let
M = {(x, ȳ) ∈ X : x(·) = v(·; ξ, u, z), y_k = w_k(ξ, u, z) for some ξ ∈ R^n, u ∈ U, z ∈ Z},

and let elements (v_j, ȳ_j), j = 1, ..., N, in M be given. Then v_j(·) = v(·; ξ_j, u_j, z_j) and y_{j,k} = w_k(ξ_j, u_j, z_j) for some ξ_j ∈ R^n, u_j ∈ U, z_j ∈ Z. For β ∈ S_N and ε > 0 sufficiently small we define u_{β,ε}, x_{β,ε}, and z_{β,ε} as described in Section 4. We showed there (see (4.5)) that

    x_{β,ε}(t) = x_0(t) + ε Σ_{j=1}^N β_j v_j(t) + o(ε)   (5.6)

as ε → 0+, uniformly w.r.t. t and β. Further, x_{β,ε}(t) is continuous in β. Also, z_{β,ε} satisfies

    J_k(x_{β,ε}(τ_k), z_{0,k}) + ε Σ_{j=1}^N β_j [J_k(x_{β,ε}(τ_k), z_{j,k}) − J_k(x_{β,ε}(τ_k), z_{0,k})] = J_k(x_{β,ε}(τ_k), z_{β,ε,k}).   (5.7)

Define an element ȳ_{β,ε} ∈ (R^n)^r with components y_{β,ε,k} = J_k(x_{β,ε}(τ_k), z_{β,ε,k}). It follows from (5.7) that y_{β,ε,k} is continuous in β. We also have from (5.6) and (5.7)

    y_{β,ε,k} = J_k(x_0(τ_k), z_{0,k}) + ε (∂J_k/∂x)(x_0(τ_k), z_{0,k}) Σ_{j=1}^N β_j v_j(τ_k)
                + ε Σ_{j=1}^N β_j [J_k(x_0(τ_k), z_{j,k}) − J_k(x_0(τ_k), z_{0,k})] + o(ε)
              = y_{0,k} + ε Σ_{j=1}^N β_j w_k(ξ_j, u_j, z_j) + o(ε)
              = y_{0,k} + ε Σ_{j=1}^N β_j y_{j,k} + o(ε),

so that

    ȳ_{β,ε} = ȳ_0 + ε Σ_{j=1}^N β_j ȳ_j + o(ε)   (5.8)

uniformly w.r.t. β. Now we can verify Assumption 1 (i)-(iii). We have ρ(β, ε) = (x_{β,ε}(·), ȳ_{β,ε}) ∈ A, and

    ϕ_i(ρ(β, ε)) = g_i(x_{β,ε}(τ_0), x_{β,ε}(τ_1), y_{β,ε,1}, ..., x_{β,ε}(τ_r), y_{β,ε,r}, x_{β,ε}(τ_{r+1}))   (5.9)

is continuous in β. For (ii) and (iii) we have from (5.6), (5.8) and (5.9), with e_0 = (x_0(τ_0), x_0(τ_1), y_{0,1}, ..., x_0(τ_r), y_{0,r}, x_0(τ_{r+1})),

    lim_{ε→0+} [ϕ_i(ρ(β, ε)) − ϕ_i(x_0, ȳ_0)]/ε
      = Σ_{k=0}^{r+1} (∂g_i/∂x_k)(e_0) Σ_{j=1}^N β_j v_j(τ_k) + Σ_{k=1}^r (∂g_i/∂y_k)(e_0) Σ_{j=1}^N β_j y_{j,k}
      = h_i( Σ_{j=1}^N β_j (v_j, ȳ_j) ),

where the convergence is uniform w.r.t. β, and where h_i is the linear functional on X defined by

    h_i(x, ȳ) = Σ_{k=0}^{r+1} (∂g_i/∂x_k)(e_0) x(τ_k) + Σ_{k=1}^r (∂g_i/∂y_k)(e_0) y_k.

According to Theorem 1 there exist numbers λ_i, not all zero, such that

    Σ_{i=−q}^p λ_i h_i(x, ȳ) ≤ 0 for all (x, ȳ) ∈ co M,
    λ_i ≤ 0 for i ≤ 0,    λ_i g_i(e_0) = 0 for i < 0.

Take a typical element (x, ȳ) of M. Then

    Σ_{i=−q}^p λ_i { Σ_{k=0}^{r+1} (∂g_i/∂x_k)(e_0) v(τ_k; ξ, u, z) + Σ_{k=1}^r (∂g_i/∂y_k)(e_0) w_k(ξ, u, z) } ≤ 0

for all ξ ∈ R^n, u ∈ U, z ∈ Z.

Write G = Σ_{i=−q}^p λ_i g_i, G_k = (∂G/∂x_k)(e_0) (0 ≤ k ≤ r+1), and G_k^+ = (∂G/∂y_k)(e_0) (1 ≤ k ≤ r), so that

    Σ_{k=0}^{r+1} G_k v(τ_k; ξ, u, z) + Σ_{k=1}^r G_k^+ w_k(ξ, u, z) ≤ 0.   (5.10)

With

    A(t) = f_x(t, x_0(t), u_0(t)),
    b(t) = f(t, x_0(t), u(t)) − f(t, x_0(t), u_0(t)),
    C_k = (∂J_k/∂x)(x_0(τ_k), z_{0,k}),
    d_k = J_k(x_0(τ_k), z_k) − J_k(x_0(τ_k), z_{0,k}),  k = 1, ..., r,

we have that v(·; ξ, u, z) is the solution of

    v̇ = A(t)v + b(t),  t ≠ τ_k,    Δv|_{τ_k} = C_k v(τ_k) + d_k = w_k(ξ, u, z),    v(T_0) = ξ.

With the notation from Section 2 we have from (2.7)

    v(τ_k; ξ, u, z) = Φ(τ_k)Ψ_{k−1,0}ξ + ∫_{T_0}^{τ_k} Φ(τ_k)Ψ(τ_k, t)Φ^{-1}(t)b(t) dt + Σ_{j=1}^{k−1} Φ(τ_k)Ψ_{k−1,j}Φ^{-1}(τ_j)d_j

for k = 1, ..., r+1. Take u = u_0, z = z_0 in (5.10). Then b(t) = 0, d_k = 0, and

    ( G_0 + Σ_{k=1}^r (G_k + G_k^+ C_k)Φ(τ_k)Ψ_{k−1,0} + G_{r+1}Φ(T_1)Ψ_{r,0} ) ξ ≤ 0 for all ξ ∈ R^n.

Thus

    G_0 + Σ_{k=1}^r (G_k + G_k^+ C_k)Φ(τ_k)Ψ_{k−1,0} + G_{r+1}Φ(T_1)Ψ_{r,0} = 0.   (5.11)

Next, take ξ = 0, z = z_0 in (5.10). Then

    Σ_{k=1}^r (G_k + G_k^+ C_k) ∫_{T_0}^{τ_k} Φ(τ_k)Ψ(τ_k, t)Φ^{-1}(t)b(t) dt + G_{r+1} ∫_{T_0}^{T_1} Φ(T_1)Ψ(T_1, t)Φ^{-1}(t)b(t) dt ≤ 0

for all u ∈ U. Define (χ_A(·) is the characteristic function of the set A)

    η(t) = Σ_{k=1}^r (G_k + G_k^+ C_k)χ_{[T_0,τ_k]}(t)Φ(τ_k)Ψ(τ_k, t)Φ^{-1}(t) + G_{r+1}Φ(T_1)Ψ(T_1, t)Φ^{-1}(t).

Then

    ∫_{T_0}^{T_1} η(t)[f(t, x_0(t), u(t)) − f(t, x_0(t), u_0(t))] dt ≤ 0 for all u ∈ U.   (5.12)

For τ_j < t ≤ τ_{j+1} (j = 0, ..., r) we have

    η(t) = Σ_{k=j+1}^r (G_k + G_k^+ C_k)Φ(τ_k)Ψ_{k−1,j}Φ^{-1}(t) + G_{r+1}Φ(T_1)Ψ_{rj}Φ^{-1}(t).

In the case j = r the sum is empty, i.e. zero. Thus η(·) satisfies η̇(t) = −η(t)A(t) a.e. on (τ_j, τ_{j+1}]. The jump at τ_j (j = 1, ..., r) is

    η(τ_j^+) − η(τ_j) = Σ_{k=j+1}^r (G_k + G_k^+ C_k)Φ(τ_k)(Ψ_{k−1,j} − Ψ_{k−1,j−1})Φ^{-1}(τ_j) − (G_j + G_j^+ C_j)
                        + G_{r+1}Φ(T_1)(Ψ_{rj} − Ψ_{r,j−1})Φ^{-1}(τ_j).

If k > j+1, then (by (2.4))

    Ψ_{k−1,j} − Ψ_{k−1,j−1} = Ψ_{k−1} ⋯ Ψ_{j+1} − Ψ_{k−1} ⋯ Ψ_j = Ψ_{k−1,j}(E − Ψ_j) = −Ψ_{k−1,j}Φ^{-1}(τ_j)C_jΦ(τ_j).

This is true also if k = j+1, since Ψ_{jj} − Ψ_{j,j−1} = E − Ψ_j. Thus

    η(τ_j^+) − η(τ_j) = −( Σ_{k=j+1}^r (G_k + G_k^+ C_k)Φ(τ_k)Ψ_{k−1,j}Φ^{-1}(τ_j) + G_{r+1}Φ(T_1)Ψ_{rj}Φ^{-1}(τ_j) ) C_j − (G_j + G_j^+ C_j)
                      = −η(τ_j^+)C_j − G_j − G_j^+ C_j.

At T_0 and T_1 we have, according to (5.11),

    η(T_0) = η(T_0^+) = Σ_{k=1}^r (G_k + G_k^+ C_k)Φ(τ_k)Ψ_{k−1,0} + G_{r+1}Φ(T_1)Ψ_{r0} = −G_0,
    η(T_1) = G_{r+1}.

It is convenient to introduce H(t, x, u) = η(t)f(t, x, u). Then η(·) satisfies η̇(t) = −H_x(t, x_0(t), u_0(t)) a.e. With H_0(t, u) = H(t, x_0(t), u), (5.12) can be written

    ∫_{T_0}^{T_1} H_0(t, u(t)) dt ≤ ∫_{T_0}^{T_1} H_0(t, u_0(t)) dt for all u ∈ U.   (5.13)

From part (iv) of Assumption 2 we can obtain a pointwise inequality. Let u_j ∈ U, j = 1, 2, ..., be such that {u_j(t)} is dense in Ω(t) for every t ∈ I. Let t* ∈ (T_0, T_1] be a Lebesgue point of t ↦ f(t, x_0(t), u_j(t)) for every j = 0, 1, 2, .... Define for 0 < ε < t* − T_0, j = 1, 2, ...,

    u_{j,ε}(t) = u_j(t) if t* − ε < t ≤ t*,    u_{j,ε}(t) = u_0(t) otherwise on I.

Then u_{j,ε} ∈ U, and if we apply (5.13) to u_{j,ε}, divide by ε and let ε → 0+, we obtain

    H_0(t*, u_j(t*)) ≤ H_0(t*, u_0(t*)) for all j.

Since {u_j(t*)} is dense in Ω(t*), and H_0 is continuous in u, it follows that

    H_0(t*, u) ≤ H_0(t*, u_0(t*)) for all u ∈ Ω(t*).

The point t* can be almost any point in I. Finally, take ξ = 0, u = u_0 in (5.10). Then

    Σ_{k=2}^{r+1} G_k Σ_{j=1}^{k−1} Φ(τ_k)Ψ_{k−1,j}Φ^{-1}(τ_j)d_j + G_1^+ d_1 + Σ_{k=2}^r G_k^+ ( C_k Σ_{j=1}^{k−1} Φ(τ_k)Ψ_{k−1,j}Φ^{-1}(τ_j)d_j + d_k ) ≤ 0.

After changing the order of summation and exchanging j and k we obtain

    Σ_{k=1}^r Σ_{j=k+1}^{r+1} G_j Φ(τ_j)Ψ_{j−1,k}Φ^{-1}(τ_k)d_k + Σ_{k=1}^{r−1} Σ_{j=k+1}^r G_j^+ C_j Φ(τ_j)Ψ_{j−1,k}Φ^{-1}(τ_k)d_k + Σ_{k=1}^r G_k^+ d_k ≤ 0.

From the definition of η we see that the coefficient of d_k is η(τ_k^+) + G_k^+. Therefore

    Σ_{k=1}^r (η(τ_k^+) + G_k^+)[J_k(x_0(τ_k), z_k) − J_k(x_0(τ_k), z_{0,k})] ≤ 0

for all z ∈ Z. Since the z_k can be varied independently of each other, we have

    (η(τ_k^+) + G_k^+)[J_k(x_0(τ_k), z_k) − J_k(x_0(τ_k), z_{0,k})] ≤ 0 for all z_k ∈ Z_k,  k = 1, ..., r.

We have obtained the following theorem.

Theorem 3. Assume that (u_0(·), z_0, x_0(·)) is a solution of the problem of minimizing (5.5) subject to (5.1)-(5.4). Then there exist numbers λ_{−q}, ..., λ_0, ..., λ_p, not all zero, and a piecewise absolutely continuous row vector function η(·) such that, with G = Σ_{i=−q}^p λ_i g_i and H(t, x, u) = η(t)f(t, x, u),

    η̇(t) = −H_x(t, x_0(t), u_0(t)) a.e.,
    Δη|_{τ_k} = −( η(τ_k^+) + (∂G/∂y_k)(e_0) )(∂J_k/∂x)(x_0(τ_k), z_{0,k}) − (∂G/∂x_k)(e_0),  k = 1, ..., r,
    η(T_0) = −(∂G/∂x_0)(e_0),
    η(T_1) = (∂G/∂x_{r+1})(e_0),
    λ_i ≤ 0 for −q ≤ i ≤ 0,    λ_i g_i(e_0) = 0 for −q ≤ i ≤ −1,
    H(t, x_0(t), u_0(t)) = max_{u ∈ Ω(t)} H(t, x_0(t), u) a.e. on I,
    ( η(τ_k^+) + (∂G/∂y_k)(e_0) ) J_k(x_0(τ_k), z_{0,k}) = max_{z_k ∈ Z_k} ( η(τ_k^+) + (∂G/∂y_k)(e_0) ) J_k(x_0(τ_k), z_k),  k = 1, ..., r.

Remark 1. In (5.1) it is possible to have different right-hand sides on different intervals I_k = (τ_{k−1}, τ_k], so that x satisfies ẋ = f_k(t, x, u(t)) on I_k. Just write f(t, x, u) = Σ_{k=1}^{r+1} χ_{I_k}(t)f_k(t, x, u) and apply Theorem 3. It is also possible to have control vectors u_k of different dimensions on different I_k, so that the admissible controls on I_k satisfy u_k(t) ∈ Ω_k(t) ⊂ R^{m_k}. If m = max_{1≤k≤r+1} m_k, we may extend u_k and Ω_k(t) with m − m_k components that are set to 0. Then Theorem 3 can be applied.

Remark 2. Consider the same problem as above but with an integral term ∫_{T_0}^{T_1} f_0(t, x(t), u(t)) dt added to the cost functional (5.5), where the real-valued function f_0 has the same properties as f. Then the necessary conditions for optimality are exactly as in Theorem 3, except that H now is defined as H(t, x, u) = λ_0 f_0(t, x, u) + η(t)f(t, x, u).
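Numerically, the adjoint η of Theorem 3 is obtained by a backward pass: integrate η̇ = −ηA backward from η(T_1) and apply the jump rule at each τ_k, which in the form reconstructed here reads η(τ_k) = η(τ_k^+)(E + C_k) + G_k + G_k^+ C_k. In the scalar constant-coefficient case this backward pass can be checked against the explicit representation of η used in the derivation. A sketch with arbitrary illustrative numbers a, C_k, G_k, G_k^+ (our own, not tied to an actual optimum):

```python
import math

# Scalar data: A(t) = a, C_k = c[k]; G_k, G_k^+ arbitrary illustrative numbers.
a = 0.4
T0, T1 = 0.0, 2.0
taus = [0.6, 1.3]             # tau_1, tau_2  (r = 2)
c = [0.3, -0.2]               # C_1, C_2
G = [0.7, -1.1, 0.5, 0.9]     # G_0, G_1, G_2, G_{r+1}
Gp = [0.25, -0.6]             # G_1^+, G_2^+

# Backward pass: eta(T1) = G_{r+1}; etadot = -a*eta between jumps;
# at tau_k: eta(tau_k) = eta(tau_k^+)*(1 + c_k) + G_k + G_k^+ * c_k.
eta, t = G[3], T1
for tau, ck, Gk, Gpk in zip(reversed(taus), reversed(c),
                            reversed(G[1:3]), reversed(Gp)):
    eta = eta*math.exp(a*(t - tau))          # backward to tau_k^+
    eta = eta*(1.0 + ck) + Gk + Gpk*ck       # jump rule
    t = tau
eta_T0_backward = eta*math.exp(a*(t - T0))

# Explicit scalar representation of eta at t = T0:
# eta(T0) = sum_k (G_k + G_k^+ c_k) e^{a tau_k} prod_{l<k}(1 + c_l)
#           + G_{r+1} e^{a T1} prod_l (1 + c_l)
eta_T0_formula, prod = 0.0, 1.0
for tau, ck, Gk, Gpk in zip(taus, c, G[1:3], Gp):
    eta_T0_formula += (Gk + Gpk*ck)*math.exp(a*tau)*prod
    prod *= 1.0 + ck
eta_T0_formula += G[3]*math.exp(a*T1)*prod

assert abs(eta_T0_backward - eta_T0_formula) < 1e-10
```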
Remark 2 is proved by introducing a new state variable x^0 satisfying ẋ^0(t) = f_0(t, x(t), u(t)) a.e. in I, with no jumps. The integral term is then written as x^0(τ_{r+1}) − x^0(τ_0), and Theorem 3 can be applied to the extended system.

5.2 Variable switching times

Let us now allow the times τ_k (including the initial time T_0 and the final time T_1) to vary, i.e., they become control parameters. We will transform the problem to a problem with fixed switching times, so that the previous result (Theorem 3) can be applied. In this transformation t will be

treated as a state variable, so we need the same regularity in t as in x. Also, Ω(t) must be constant. The functions g_i can now depend explicitly on τ̄ = (τ_0, ..., τ_{r+1}), and the jump J_k may depend on τ_k. The assumptions from Section 5.1 have to be modified in the following way: We may have different differential equations on different intervals I_k = (τ_{k−1}, τ_k], so we assume that we are given r+1 functions (t, x, u_k) ↦ f_k(t, x, u_k) from R × R^n × R^{m_k} to R^n. We assume that f_k is continuously differentiable w.r.t. (t, x), and that f_k, ∂f_k/∂t and ∂f_k/∂x are continuous in u_k. We will also denote the first argument of f_k by τ, write x̂ = (τ, x), and consider f_k as a function of x̂ and u_k. By U_{I_k} we denote the set of all controls u_k(·) that are defined and measurable on I_k, such that u_k(t) ∈ Ω_k for all t ∈ I_k (Ω_k is a given set in R^{m_k}), and such that the function (t, x̂) ↦ f_k(x̂, u_k(t)) belongs to the class F on I_k. The jump functions J_k(x̂, z_k) = J_k(τ, x, z_k) are continuously differentiable w.r.t. x̂ (for fixed z_k ∈ Z_k) and such that the sets J_k(x̂, Z_k) are convex for each x̂. The functions g_i have arguments (τ̄, x_0, x_1, y_1, ..., x_r, y_r, x_{r+1}) ∈ R^{r+2} × (R^n)^{2r+2} and are supposed to be continuously differentiable in all arguments. Consider the control system

    ẋ = f_k(t, x, u_k(t)) on I_k,  u_k(·) ∈ U_{I_k},  k = 1, ..., r+1,                      (5.14)
    Δx|_{τ_k} = x(τ_k^+) − x(τ_k) = J_k(τ_k, x(τ_k), z_k),  k = 1, ..., r,  z ∈ Z,           (5.15)

with

    T_0 = τ_0 ≤ τ_1 ≤ ... ≤ τ_r ≤ τ_{r+1} = T_1.

We want to minimize the functional

    g_0(τ̄, x(τ_0), x(τ_1), Δx|_{τ_1}, ..., x(τ_r), Δx|_{τ_r}, x(τ_{r+1}))

subject to the constraints g_i = 0 for i = 1, ..., p, and g_i ≤ 0 for i = −q, ..., −1, where the g_i have the same arguments as g_0. Note that we allow some of the times τ_k to coincide. If τ_k = τ_{k+1}, there is of course no differential equation on I_{k+1}, but we treat x(τ_k) and x(τ_{k+1}) as separate quantities and have the condition x(τ_{k+1}) = x(τ_k^+) = x(τ_k) + J_k(τ_k, x(τ_k), z_k).
Assume that τ̄_0 = (τ_{0,0}, τ_{0,1}, ..., τ_{0,r+1}), u_{0,k}(·) ∈ U_{I_{0,k}} (where I_{0,k} = (τ_{0,k−1}, τ_{0,k}]), k = 1, ..., r+1, z_0 = (z_{0,1}, ..., z_{0,r}) ∈ Z, and x_0(·) are optimal, i.e., x_0 satisfies

    ẋ_0 = f_k(t, x_0, u_{0,k}(t)),  t ∈ I_{0,k},    Δx_0|_{τ_{0,k}} = J_k(τ_{0,k}, x_0(τ_{0,k}), z_{0,k}),  k = 1, ..., r;

the constraints g_i = 0 (i > 0) and g_i ≤ 0 (i < 0) are satisfied at

    e_0 = (τ̄_0, x_0(τ_{0,0}), x_0(τ_{0,1}), y_{0,1}, ..., x_0(τ_{0,r}), y_{0,r}, x_0(τ_{0,r+1})),

where y_{0,k} = J_k(τ_{0,k}, x_0(τ_{0,k}), z_{0,k}); and g_0 is minimized. Let

    K = {k ∈ {1, ..., r+1} : m(I_{0,k}) > 0},    K_0 = {k ∈ {1, ..., r+1} : m(I_{0,k}) = 0}.

Define

    H_k(τ, x, u_k, η) = ηf_k(τ, x, u_k),

where η is a row vector in R^n.

Theorem 4. Under the assumptions above there exist numbers λ_{−q}, ..., λ_p, not all zero, and functions η_0(·) and η(·) such that, with G = Σ_{i=−q}^p λ_i g_i and

    M_k(t) = sup_{u_k ∈ Ω_k} H_k(t, x_0(t), u_k, η(t)),

the following holds:

    λ_i ≤ 0 for i = −q, ..., 0,    λ_i g_i(e_0) = 0 for i = −q, ..., −1.

If k ∈ K, then η_0 and η are absolutely continuous on I_{0,k} and satisfy

    η̇_0(t) = −(∂H_k/∂τ)|_0 a.e.,    η̇(t) = −(∂H_k/∂x)|_0 a.e.,

where |_0 means evaluation at (t, x_0(t), u_{0,k}(t), η(t)), and

    H_k(t, x_0(t), u_{0,k}(t), η(t)) = M_k(t) a.e.,
    η_0(t) + M_k(t) = 0 a.e., and η_0(t) + M_k(t) ≤ 0 for all t ∈ I_{0,k}.

If k ∈ K_0, then η_0(τ_{0,k−1}^+) = η_0(τ_{0,k}) = η_{0,k}, η(τ_{0,k−1}^+) = η(τ_{0,k}) = η_k, and

    η_{0,k} + M̄_k ≤ 0,  where M̄_k = sup_{u_k ∈ Ω_k} H_k(τ_{0,k}, x_0(τ_{0,k}), u_k, η_k).

At the endpoints,

    η_0(T_0) = −(∂G/∂τ_0)(e_0),    η(T_0) = −(∂G/∂x_0)(e_0),
    η_0(T_1) = (∂G/∂τ_{r+1})(e_0),    η(T_1) = (∂G/∂x_{r+1})(e_0).

For k = 1, ..., r,

    Δη_0|_{τ_{0,k}} = −( η(τ_{0,k}^+) + (∂G/∂y_k)(e_0) )(∂J_k/∂τ_k)|_{0,k} − (∂G/∂τ_k)(e_0),
    Δη|_{τ_{0,k}} = −( η(τ_{0,k}^+) + (∂G/∂y_k)(e_0) )(∂J_k/∂x)|_{0,k} − (∂G/∂x_k)(e_0),
    ( η(τ_{0,k}^+) + (∂G/∂y_k)(e_0) ) J_k|_{0,k} = max_{z_k ∈ Z_k} ( η(τ_{0,k}^+) + (∂G/∂y_k)(e_0) ) J_k(τ_{0,k}, x_0(τ_{0,k}), z_k),

where |_{0,k} means evaluation at (τ_{0,k}, x_0(τ_{0,k}), z_{0,k}). If, furthermore, for k ∈ K, there are functions ρ_1 and ρ_2 in L^1(I_{0,k}) such that

    |f_k(s, x_0(s), u_{0,k}(t)) − f_k(s, x_0(s), u_{0,k}(τ))| ≤ ρ_1(t) + ρ_2(τ) for all s, t, τ ∈ I_{0,k},   (5.16)

then η_0(t) + M_k(t) = 0 for all t ∈ I_{0,k}.

Proof. Let v_0 be a function in L^1(0, r+1) such that v_0(s) > 0 in (k−1, k] and ∫_{k−1}^k v_0(s) ds = τ_{0,k} − τ_{0,k−1} if k ∈ K, and v_0(s) = 0 in (k−1, k] if k ∈ K_0. Define

    τ_0(s) = τ_{0,0} + ∫_0^s v_0(σ) dσ,  s ∈ [0, r+1].

Then τ_0(·) is absolutely continuous with τ̇_0(s) = v_0(s) > 0 a.e. in (k−1, k] if k ∈ K, and τ_0(k) = τ_{0,k}, k = 0, ..., r+1.

Choose an element ū_k ∈ Ω_k and define for s ∈ (k−1, k]

    x̃_0(s) = x_0(τ_0(s)) if k ∈ K,        x̃_0(s) = x_0(τ_{0,k}) if k ∈ K_0,
    ũ_{0,k}(s) = u_{0,k}(τ_0(s)) if k ∈ K,  ũ_{0,k}(s) = ū_k if k ∈ K_0.

From the properties of absolutely continuous functions it follows that ũ_{0,k} is measurable, and x̃_0 is absolutely continuous on the intervals (k−1, k] and satisfies, if k ∈ K,

    x̃_0′(s) = ẋ_0(τ_0(s))τ̇_0(s) = v_0(s)f_k(τ_0(s), x_0(τ_0(s)), u_{0,k}(τ_0(s))) = v_0(s)f_k(τ_0(s), x̃_0(s), ũ_{0,k}(s)) a.e.

The result is obviously true also if k ∈ K_0, since then v_0 = 0 and x̃_0 is constant. For the jumps we have

    Δx̃_0|_k = x̃_0(k^+) − x̃_0(k) = x_0(τ_0(k)^+) − x_0(τ_0(k)) = Δx_0|_{τ_{0,k}}
             = J_k(τ_0(k), x_0(τ_0(k)), z_{0,k}) = J_k(τ_0(k), x̃_0(k), z_{0,k})

if k+1 ∈ K, and

    x̃_0(k^+) = x̃_0(k+1) = x_0(τ_{0,k+1}) = x_0(τ_{0,k}) + J_k(τ_{0,k}, x_0(τ_{0,k}), z_{0,k})

if k+1 ∈ K_0. Define

    x̂ = (τ, x) ∈ R × R^n,    w_k = (v, u_k) ∈ R × R^{m_k},
    x̂_0(s) = (τ_0(s), x̃_0(s)),
    f̂_k(x̂, w_k) = (v, vf_k(τ, x, u_k)),
    Ĵ_k(x̂, z_k) = (0, J_k(τ, x, z_k)),
    Ω̂_k = ((0, ∞) × Ω_k) ∪ {(0, ū_k)},
    w_{0,k}(s) = (v_0(s), ũ_{0,k}(s)),
    Û_k = {w_k(·) = (v(·), u_k(·)) : w_k is measurable on (k−1, k], w_k(s) ∈ Ω̂_k, and the function (s, x̂) ↦ f̂_k(x̂, w_k(s)) belongs to the class F}.

Then

    x̂_0′(s) = f̂_k(x̂_0(s), w_{0,k}(s)) a.e. in (k−1, k],    Δx̂_0|_k = Ĵ_k(x̂_0(k), z_{0,k}).

We also have that w_{0,k} ∈ Û_k. This follows from the fact that ρ ∈ L^1(I_{0,k}) implies that the function s ↦ v_0(s)ρ(τ_0(s)) belongs to L^1(k−1, k). For x̂_k = (τ_k, x_k) ∈ R × R^n, ŷ_k = (σ_k, y_k) ∈ R × R^n, −q ≤ i ≤ p, we define

    ĝ_i(x̂_0, x̂_1, ŷ_1, ..., x̂_r, ŷ_r, x̂_{r+1}) = g_i(τ_0, τ_1, ..., τ_{r+1}, x_0, x_1, y_1, ..., x_r, y_r, x_{r+1}).

We have

    ĝ_i(x̂_0(0), x̂_0(1), Δx̂_0|_1, ..., x̂_0(r), Δx̂_0|_r, x̂_0(r+1))
      = g_i(τ̄_0, x_0(τ_{0,0}), x_0(τ_{0,1}), y_{0,1}, ..., x_0(τ_{0,r}), y_{0,r}, x_0(τ_{0,r+1})) = g_i(e_0),

or ĝ_i(ê_0) = g_i(e_0), with the obvious definition of ê_0.
Let us now consider the problem of minimizing when ˆx( ) is a solution of = g i ( τ 0, x 0 (τ 0,0 ), x 0 (τ 0,1 ), y 0,1,..., x 0 (τ 0,r ), y 0,r, x 0 (τ 0,r+1 )) = g i (e 0 ), ĝ 0 (ˆx(0), ˆx(1), ˆx 1,..., ˆx(r), ˆx r, ˆx(r + 1)) ˆx = ˆf k (ˆx, w k (s)), s (k 1, k], w k ( ) Ûk, k = 1,..., r + 1, (5.17) ˆx k = Ĵk(ˆx(k), z k ), k = 1,..., r, z Z, (5.18) 15

subject to the constraints

  ĝ_i(x̂(0), x̂(1), Δx̂|₁, …, x̂(r), Δx̂|_r, x̂(r+1)) = 0,  i = 1, …, p,  (5.19)
  ĝ_i(x̂(0), x̂(1), Δx̂|₁, …, x̂(r), Δx̂|_r, x̂(r+1)) ≤ 0,  i = −q, …, −1.  (5.20)

We have seen that x̂₀(·) with w_{0,k}(·) and z₀ satisfies (5.17)–(5.20). We claim that this is an optimal solution. To see this, let x̂(·) = (τ(·), x(·)) be any function satisfying (5.17)–(5.20), so that (5.17) and (5.18) are satisfied for some w_k(·) = (v(·), ũ_k(·)) ∈ Û_k and z ∈ Z. Then τ(·) is absolutely continuous on [0, r+1], v ∈ L¹(0, r+1), and

  τ′(s) = v(s) ≥ 0,  x′(s) = v(s) f_k(τ(s), x(s), ũ_k(s))

holds a.e. on (k−1, k]. Let τ_k = τ(k) and

  ψ_k(t) = max{s ∈ (k−1, k] : τ(s) = t}  for t ∈ I_k = (τ_{k−1}, τ_k].

(When k = 1 we take the corresponding closed intervals [0, 1] and [τ₀, τ₁].) The function τ is increasing and may be constant on countably many disjoint intervals [α_j, β_j] in [k−1, k]. We may assume that v(s) = 0 and ũ_k(s) = ū_k on each [α_j, β_j] (change w_k(·) on a set of measure zero, if necessary). Define x(t) = x(ψ_k(t)), u_k(t) = ũ_k(ψ_k(t)) for t ∈ I_k. If G ⊂ ℝ^{m_k} is open, then the set E = {s ∈ (k−1, k] : ũ_k(s) ∈ G} is measurable, and {t ∈ I_k : u_k(t) ∈ G} = τ(E \ ⋃_j [α_j, β_j)) is measurable, since τ (being absolutely continuous) maps measurable sets onto measurable sets. Thus u_k(·) is measurable with values in Ω_k. In the same way, x(·) is measurable (it is not difficult to see that it is in fact continuous). We have that s ≤ ψ_k(τ(s)) for all s ∈ (k−1, k], and if s < ψ_k(τ(s)), then v(·) = 0, x(·) is constant, and ũ_k(·) = ū_k on [s, ψ_k(τ(s))]. Thus x(s) = x(ψ_k(τ(s))) = x(τ(s)) and ũ_k(s) = ũ_k(ψ_k(τ(s))) = u_k(τ(s)) for all s. Assume that τ_{k−1} < τ_k. For any function F ∈ L¹(I_k) we have the formula

  ∫_{τ(k−1)}^{τ(s)} F(τ) dτ = ∫_{k−1}^{s} F(τ(σ)) v(σ) dσ,  s ∈ (k−1, k].  (5.21)

A proof of this is found, e.g., in [16, p. 377].
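Formula (5.21) is the usual change of variables for a nondecreasing absolutely continuous τ. A quick numerical sanity check (with hypothetical smooth data τ(s) = s² on [0, 1] playing the role of [k−1, k], and F(t) = cos t, so that the left-hand side has a closed form) is:

```python
import math

# Check (5.21): int_{tau(0)}^{tau(s)} F(t) dt = int_0^s F(tau(sig)) v(sig) dsig
# for tau(s) = s^2 (so v = tau' = 2s >= 0) and F(t) = cos(t).
s, n = 0.8, 200000
h = s / n
sig = [i * h for i in range(n + 1)]
vals = [math.cos(x * x) * 2.0 * x for x in sig]      # F(tau(sigma)) * v(sigma)
rhs = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
lhs = math.sin(s * s) - math.sin(0.0)                # closed form of the left side
print(abs(lhs - rhs) < 1e-8)   # True
```

The trapezoidal error here is far below the tolerance, so the two sides agree to within discretization error, as (5.21) predicts.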
An application of (5.21) gives

  x(s) = x((k−1)⁺) + ∫_{k−1}^{s} v(σ) f_k(τ(σ), x(σ), ũ_k(σ)) dσ
       = x((k−1)⁺) + ∫_{k−1}^{s} v(σ) f_k(τ(σ), x(τ(σ)), u_k(τ(σ))) dσ
       = x((k−1)⁺) + ∫_{τ(k−1)}^{τ(s)} f_k(τ, x(τ), u_k(τ)) dτ.

Thus

  x(t) = x(ψ_k(t)) = x((k−1)⁺) + ∫_{τ_{k−1}}^{t} f_k(τ, x(τ), u_k(τ)) dτ,

so that x(·) is absolutely continuous and satisfies ẋ(t) = f_k(t, x(t), u_k(t)) a.e. in I_k.

Since w_k ∈ Û_k, the norms of f̂_k and ∂f̂_k/∂x̂ are estimated on (k−1, k] by a function ρ̂_k ∈ L¹(k−1, k). For almost all t ∈ I_k, v(ψ_k(t)) > 0, and the norms of f_k, ∂f_k/∂t, and ∂f_k/∂x are estimated by

  (1/v(ψ_k(t))) ρ̂_k(ψ_k(t)) = ρ_k(t).

From (5.21), which holds for non-negative measurable functions F, and the fact that ψ_k(τ(s)) = s in E = {s ∈ (k−1, k] : v(s) > 0}, we get

  ∫_{τ_{k−1}}^{τ_k} ρ_k(t) dt = ∫_E ρ̂_k(s) ds < ∞.

Thus u_k(·) ∈ U_k. If τ_k < τ_{k+1}, then

  x(τ_k⁺) = x(k⁺) = x(k) + J_k(τ(k), x(k), z_k) = x(τ_k) + J_k(τ_k, x(τ_k), z_k),

and if τ_k = τ_{k+1}, then

  x(τ_{k+1}) = x(k+1) = x(k⁺) = x(τ_k) + J_k(τ_k, x(τ_k), z_k) = x(τ_k⁺).

Therefore, (5.14)–(5.15) are satisfied. Finally,

  g_i(τ̄, x(τ₀), x(τ₁), Δx|_{τ₁}, …, x(τ_r), Δx|_{τ_r}, x(τ_{r+1})) = ĝ_i(x̂(0), x̂(1), Δx̂|₁, …, x̂(r), Δx̂|_r, x̂(r+1))

for i = −q, …, p. Thus x satisfies the constraints g_i = 0 for i > 0 and g_i ≤ 0 for i < 0, and (since (τ̄₀, x₀) is optimal)

  ĝ₀(x̂(0), x̂(1), Δx̂|₁, …, x̂(r), Δx̂|_r, x̂(r+1)) = g₀(τ̄, x(τ₀), …, x(τ_{r+1})) ≥ g₀(e₀) = ĝ₀(ê₀).

Now we can apply Theorem 3 (including the Remark) to the solution (w_{0,k}(·), z₀, x̂₀(·)) of the problem of minimizing ĝ₀ subject to (5.17)–(5.20). It follows that there exist numbers λ_{−q}, …, λ_p, not all zero, and a piecewise absolutely continuous row vector function η̂(·) such that, with Ĝ = Σ_{i=−q}^{p} λ_i ĝ_i and Ĥ_k(s, x̂, w_k) = η̂(s) f̂_k(x̂, w_k), we have

  η̂′(s) = −(∂Ĥ_k/∂x̂)(x̂₀(s), w_{0,k}(s)) a.e. on (k−1, k],  k = 1, …, r+1,
  λ_i ≤ 0 for −q ≤ i ≤ 0,  λ_i ĝ_i(ê₀) = 0 for −q ≤ i ≤ −1,
  Δη̂|_k = −(η̂(k⁺) + (∂Ĝ/∂ŷ_k)(ê₀)) (∂Ĵ_k/∂x̂)(x̂₀(k), z_{0,k}) − (∂Ĝ/∂x̂_k)(ê₀),  k = 1, …, r,
  η̂(0) = −(∂Ĝ/∂x̂₀)(ê₀),  η̂(r+1) = (∂Ĝ/∂x̂_{r+1})(ê₀),
  Ĥ_k(s, x̂₀(s), w_{0,k}(s)) = max_{w_k ∈ Ω̂_k} Ĥ_k(s, x̂₀(s), w_k) a.e. on (k−1, k],
  (η̂(k⁺) + (∂Ĝ/∂ŷ_k)(ê₀)) Ĵ_k(x̂₀(k), z_{0,k}) = max_{z_k ∈ Z_k} (η̂(k⁺) + (∂Ĝ/∂ŷ_k)(ê₀)) Ĵ_k(x̂₀(k), z_k),  k = 1, …, r.

If k ∈ K, i.e., m(I_{0,k}) > 0, we write

  (η₀(t), η(t)) = η̂(τ₀⁻¹(t)),  t ∈ I_{0,k} = (τ_{0,k−1}, τ_{0,k}].
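To pass from the hatted conditions to the conditions in the t-variable it helps to write everything out in components. This is only a restatement of quantities defined in the proof, with η̂ = (η₀, η):

```latex
\hat H_k(s,\hat x,w_k)=\hat\eta\,\hat f_k(\hat x,w_k)
   =\eta_0 v+\eta\,v f_k(\tau,x,u_k)
   =v\bigl[\eta_0+\eta f_k(\tau,x,u_k)\bigr],
\qquad \hat\eta=(\eta_0,\eta),
\\[1ex]
\dot{\hat\eta}=-\frac{\partial\hat H_k}{\partial\hat x}
\;\Longleftrightarrow\;
\begin{cases}
\dot\eta_0(s)=-v_0(s)\,\eta(s)\,\dfrac{\partial f_k}{\partial t}\bigl(\tau_0(s),\tilde x_0(s),\tilde u_{0,k}(s)\bigr),\\[1ex]
\dot\eta(s)=-v_0(s)\,\eta(s)\,\dfrac{\partial f_k}{\partial x}\bigl(\tau_0(s),\tilde x_0(s),\tilde u_{0,k}(s)\bigr).
\end{cases}
```

Substituting t = τ₀(s) (so that dt = v₀(s) ds on the intervals where v₀ > 0) removes the factor v₀ and yields the adjoint equations for η₀ and η in the t-variable, while the factored form v[η₀ + η f_k] is what makes the first maximum condition collapse to the two pointwise conditions in (5.22).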
The functions η₀ and η are absolutely continuous on I_{0,k} and satisfy

  η̇₀(t) = −η(t) (∂f_k/∂t)(t, x₀(t), u_{0,k}(t)) = −(∂H_k/∂τ)|₀,
  η̇(t) = −η(t) (∂f_k/∂x)(t, x₀(t), u_{0,k}(t)) = −(∂H_k/∂x)|₀

a.e. on I_{0,k}. With s = τ₀⁻¹(t) the first maximum condition can be written

  v₀(τ₀⁻¹(t)) [η₀(t) + η(t) f_k(t, x₀(t), u_{0,k}(t))] = max_{(v, u_k) ∈ Ω̂_k} v [η₀(t) + η(t) f_k(t, x₀(t), u_k)] a.e.

Since v₀(τ₀⁻¹(t)) > 0, this means that almost everywhere on I_{0,k}

  η₀(t) + η(t) f_k(t, x₀(t), u_{0,k}(t)) = 0,
  η₀(t) + η(t) f_k(t, x₀(t), u_k) ≤ 0 for all u_k ∈ Ω_k.  (5.22)

Since the left-hand side of (5.22) (as a function of t for fixed u_k) is continuous from the left in (τ_{0,k−1}, τ_{0,k}], the inequality in (5.22) is true for all t ∈ I_{0,k}. With H_{0,k}(t, u_k) = η(t) f_k(t, x₀(t), u_k) and M_k(t) = sup_{u_k ∈ Ω_k} H_{0,k}(t, u_k) we have

  η₀(t) + M_k(t) ≤ 0 for all t,  η₀(t) + M_k(t) = η₀(t) + H_{0,k}(t, u_{0,k}(t)) = 0 a.e.

If k ∈ K₀, i.e., m(I_{0,k}) = 0, then η̂ is constant on (k−1, k]; we denote this constant by (η_{0,k}, η_k). On the t-side we write η₀(τ_{0,k−1}⁺) = η₀(τ_{0,k}) = η_{0,k}, η(τ_{0,k−1}⁺) = η(τ_{0,k}) = η_k. The maximum condition becomes (since v₀ = 0 and x̂₀ is constant on (k−1, k])

  max_{(v, u_k) ∈ Ω̂_k} v [η_{0,k} + η_k f_k(τ_{0,k}, x₀(τ_{0,k}), u_k)] = 0,

so that

  η_{0,k} + M_k ≤ 0,  where M_k = sup_{u_k ∈ Ω_k} η_k f_k(τ_{0,k}, x₀(τ_{0,k}), u_k) = M_k(τ_{0,k}).

The conditions at the points τ_{0,k} and the second maximum condition become (with t₀ = τ_{0,0}, T₁ = τ_{0,r+1})

  η₀(t₀) = −(∂G/∂τ₀)(e₀),  η(t₀) = −(∂G/∂x₀)(e₀),
  η₀(T₁) = (∂G/∂τ_{r+1})(e₀),  η(T₁) = (∂G/∂x_{r+1})(e₀),
  Δη₀|_{τ_{0,k}} = −(η(τ_{0,k}⁺) + (∂G/∂y_k)(e₀)) (∂J_k/∂τ)|_{0,k} − (∂G/∂τ_k)(e₀),
  Δη|_{τ_{0,k}} = −(η(τ_{0,k}⁺) + (∂G/∂y_k)(e₀)) (∂J_k/∂x)|_{0,k} − (∂G/∂x_k)(e₀),
  (η(τ_{0,k}⁺) + (∂G/∂y_k)(e₀)) J_k|_{0,k} = max_{z_k ∈ Z_k} (η(τ_{0,k}⁺) + (∂G/∂y_k)(e₀)) J_k(τ_{0,k}, x₀(τ_{0,k}), z_k)

for k = 1, …, r, where |_{0,k} means evaluation at (τ_{0,k}, x₀(τ_{0,k}), z_{0,k}).

Now assume that (5.16) holds and suppose that η₀(t*) + M_k(t*) < 0 for some t* ∈ I_{0,k}, k ∈ K. We have, since u_{0,k}(t) ∈ Ω_k,

  η₀(t*) + H_{0,k}(t*, u_{0,k}(t)) ≤ η₀(t*) + M_k(t*) < 0 for all t ∈ I_{0,k}.

Since η₀ is continuous from the left, we have η₀(t) < η₀(t*) + a, where a = −½ [η₀(t*) + M_k(t*)] > 0, for all t in a neighbourhood to the left of t*.
For almost all t in this neighbourhood H_{0,k}(t, u_{0,k}(t)) = −η₀(t), and

  H_{0,k}(t, u_{0,k}(t)) − H_{0,k}(t*, u_{0,k}(t)) > −η₀(t*) − a + 2a + η₀(t*) = a.  (5.23)

For all t and almost all τ in I_{0,k} we have

  (∂H_{0,k}/∂t)(τ, u_{0,k}(t)) = η(τ) (∂f_k/∂t)(τ, x₀(τ), u_{0,k}(t))
      + η(τ) (∂f_k/∂x)(τ, x₀(τ), u_{0,k}(t)) f_k(τ, x₀(τ), u_{0,k}(τ))
      − η(τ) (∂f_k/∂x)(τ, x₀(τ), u_{0,k}(τ)) f_k(τ, x₀(τ), u_{0,k}(t)).

Since u_{0,k} ∈ U_k, there is a ρ ∈ L¹(I_{0,k}) such that ‖(∂f_k/∂t)(τ, x₀(τ), u_{0,k}(t))‖ ≤ ρ(t) for all t, τ ∈ I_{0,k}. If ‖η(t)‖ ≤ C, then it follows from (5.16) that

  |(∂H_{0,k}/∂t)(τ, u_{0,k}(t))| ≤ C [ρ(t) + ρ₁(t) + ρ₂(τ) + ρ₁(τ) + ρ₂(t)] = C [ρ₃(t) + ρ₄(τ)],

where ρ₃ = ρ + ρ₁ + ρ₂ ∈ L¹(I_{0,k}) and ρ₄ = ρ₁ + ρ₂ ∈ L¹(I_{0,k}). From this and (5.23) we obtain

  a < ∫_t^{t*} |(∂H_{0,k}/∂t)(τ, u_{0,k}(t))| dτ ≤ C (t* − t) ρ₃(t) + C ∫_t^{t*} ρ₄(τ) dτ

for t in some neighbourhood to the left of t*. If t < t* is so close to t* that C ∫_t^{t*} ρ₄(τ) dτ ≤ a/2, we get a/2 ≤ C (t* − t) ρ₃(t), i.e. ρ₃(t) ≥ a/(2C(t* − t)) for almost all such t, which contradicts the fact that ρ₃ ∈ L¹(I_{0,k}). Thus η₀(t) + M_k(t) = 0 for all t ∈ I_{0,k} if (5.16) holds. This also holds at t₀ if 1 ∈ K. The theorem is proved.

Remark. If we assume that there is a function ρ ∈ L²(I_{0,k}) such that

  ‖f_k(s, x₀(s), u_{0,k}(t))‖ + ‖(∂f_k/∂x)(s, x₀(s), u_{0,k}(t))‖ ≤ ρ(t) for all s, t ∈ I_{0,k},

then (5.16) is satisfied with ρ₁ = ρ₂ = ½ ρ².

6 A multiprocess problem

Some control systems may require different descriptions on different time intervals; examples are a multistage rocket and a robot arm picking up or dropping a load. We would then have one control process ẋ_k(t) = f_k(t, x_k(t), u_k(t)) on each interval t ∈ (τ_{k−1}, τ_k), and together they constitute a multiprocess (see [3] and [4]). We also have some relations between the endpoint values. The term hybrid system has been used for a more general situation where the transition from one state space to another is determined by some discrete mechanism; see [5] and [15] for a general description of hybrid systems. In this section we consider a multiprocess with equality and inequality functional constraints on x_k(τ_{k−1}) and x_k(τ_k), and the problem of minimizing a functional of the same type.
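Numerically, a process of this kind is simulated by integrating each stage's dynamics on its own interval and applying the jump (or the transition to the next description) at τ_k. The following toy sketch is entirely hypothetical (scalar linear stages, one jump, explicit Euler integration); it only illustrates the structure of such a trajectory, not any optimality condition:

```python
import math

# Toy two-stage process on [0, 2] with a jump at tau_1 = 1 (hypothetical data):
#   stage 1: xdot = -x    on [0, 1],
#   jump:    x(1+) = x(1) + J_1(x(1)),  with J_1(x) = 0.5 * x,
#   stage 2: xdot = -2 x  on [1, 2].
def simulate(x0, n=10000):
    h = 1.0 / n
    x = x0
    for _ in range(n):            # stage 1, explicit Euler on [0, 1]
        x += h * (-x)
    x += 0.5 * x                  # jump of size J_1(x(tau_1))
    for _ in range(n):            # stage 2, explicit Euler on [1, 2]
        x += h * (-2.0 * x)
    return x

# Closed form: x(2) = 1.5 * exp(-1) * exp(-2) * x0.
print(abs(simulate(1.0) - 1.5 * math.exp(-3.0)) < 1e-3)   # True
```

The trajectory is continuous from the left at the jump time and continuous from the right elsewhere, exactly as in the solution concept of Section 1.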
We can derive necessary conditions for optimality by an application of the theorems in the previous section. A similar problem is treated in [3], where the proofs use methods from non-smooth analysis. In the case of variable τ_k we need higher regularity in the t-dependence than in [3], but as a result we get more information, in particular that the maximized Hamiltonian is (piecewise) absolutely continuous. A problem of this type, but restricted to piecewise continuous controls, is treated in [6].

Let us assume that on each fixed interval I_k = [τ_{k−1}, τ_k], k = 1, …, r+1, we have a control system

  ẋ_k = f_k(t, x_k, u_k(t)),  (6.1)
  u_k(t) ∈ Ω_k(t),  (6.2)

satisfying Assumption 2 in Section 4 on I_k × ℝ^{n_k} × ℝ^{m_k}. The set of admissible controls is denoted by U_k. We have a number of constraints of the form

  g_i(x₁(τ₀), x₁(τ₁), x₂(τ₁), …, x_{r+1}(τ_r), x_{r+1}(τ_{r+1})) = 0,  i = 1, …, p,  (6.3)
  g_i(x₁(τ₀), x₁(τ₁), x₂(τ₁), …, x_{r+1}(τ_r), x_{r+1}(τ_{r+1})) ≤ 0,  i = −q, …, −1.  (6.4)

We want to minimize a functional

  g₀(x₁(τ₀), x₁(τ₁), x₂(τ₁), …, x_{r+1}(τ_r), x_{r+1}(τ_{r+1})).  (6.5)

The functions g_i are defined on ℝ^{n₁} × ℝ^{n₁} × ℝ^{n₂} × ℝ^{n₂} × ⋯ × ℝ^{n_{r+1}} × ℝ^{n_{r+1}} and are continuously differentiable in all variables. The argument of g_i is denoted by

  (x⁰₁, x¹₁, x⁰₂, …, x⁰_{r+1}, x¹_{r+1}).

A total solution is a sequence (x₁, u₁, …, x_{r+1}, u_{r+1}), where each pair (x_k, u_k) satisfies (6.1)–(6.2) on I_k, and the constraints (6.3)–(6.4) are satisfied. Assume that (x_{0,1}, u_{0,1}, x_{0,2}, …, x_{0,r+1}, u_{0,r+1}) is an optimal solution, i.e., a solution that minimizes (6.5). As an application of Theorem 3 we obtain the following result (where e₀ = (x_{0,1}(τ₀), x_{0,1}(τ₁), …, x_{0,r+1}(τ_r), x_{0,r+1}(τ_{r+1}))):

Theorem 5. There exist numbers λ_{−q}, …, λ₀, …, λ_p, not all zero, and absolutely continuous functions η_k on I_k, k = 1, …, r+1, such that, with G = Σ_{i=−q}^{p} λ_i g_i and H_k(t, x_k, u_k) = η_k(t) f_k(t, x_k, u_k),

  η_k′(t) = −(∂H_k/∂x_k)(t, x_{0,k}(t), u_{0,k}(t)) a.e. on I_k,
  η_k(τ_{k−1}) = −(∂G/∂x⁰_k)(e₀),  η_k(τ_k) = (∂G/∂x¹_k)(e₀),
  H_k(t, x_{0,k}(t), u_{0,k}(t)) = max_{u_k ∈ Ω_k(t)} H_k(t, x_{0,k}(t), u_k) a.e. on I_k,
  λ_i ≤ 0 for −q ≤ i ≤ 0,  λ_i g_i(e₀) = 0 for −q ≤ i ≤ −1.

Proof. Let x̄ = (x₁, x₂, …, x_{r+1}) ∈ ℝ^{n₁} × ℝ^{n₂} × ⋯ × ℝ^{n_{r+1}} = ℝ^N (N = Σ_{k=1}^{r+1} n_k), and let

  f̄_k(t, x̄, u_k) = (0, …, f_k(t, x_k, u_k), …, 0) ∈ ℝ^N,

where only block number k is different from 0. Let us study the system

  x̄′ = f̄_k(t, x̄, u_k(t)) in I_k,  k = 1, …, r+1,  (6.6)

with no jumps at τ_k. If we define, for −q ≤ i ≤ p and

  x̄_k = (x_{1,k}, x_{2,k}, …, x_{r+1,k}) ∈ ℝ^N,  k = 0, …, r+1,

  ḡ_i(x̄₀, x̄₁, …, x̄_{r+1}) = g_i(x_{1,0}, x_{1,1}, …, x_{r+1,r}, x_{r+1,r+1}),

then x̄₀ = (x_{0,1}, …, x_{0,r+1}) with (u_{0,1}, …, u_{0,r+1}) is a solution of the problem of minimizing

  ḡ₀(x̄(τ₀), …, x̄(τ_{r+1}))

subject to (6.6) and

  ḡ_i(x̄(τ₀), …, x̄(τ_{r+1})) = 0,  i = 1, …, p,
  ḡ_i(x̄(τ₀), …, x̄(τ_{r+1})) ≤ 0,  i = −q, …, −1.
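The effect of the stacking device can be made explicit (again only a restatement of the construction): since only block k of f̄_k is nonzero on I_k, the adjoint row vector η̄ = (η₁, …, η_{r+1}) satisfies, on I_k,

```latex
\dot{\bar\eta}=-\frac{\partial\bar H_k}{\partial\bar x}
\quad\Longrightarrow\quad
\dot\eta_k=-\frac{\partial H_k}{\partial x_k},\qquad
\dot\eta_j=0\ \ (j\neq k),
```

so each component η_j can evolve only on its own interval I_j and is constant elsewhere, and the jump relations for η̄ at τ_k involve only the two components η_k and η_{k+1}.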

Let us apply Theorem 3 (with the Remark) to this problem. Thus, there exist numbers λ_{−q}, …, λ_p, not all zero, and a piecewise absolutely continuous function η̄ such that, with Ḡ = Σ_{i=−q}^{p} λ_i ḡ_i and H̄_k(t, x̄, u_k) = η̄(t) f̄_k(t, x̄, u_k),

  η̄′ = −∂H̄_k/∂x̄ on (τ_{k−1}, τ_k],  k = 1, …, r+1,
  Δη̄|_{τ_k} = −(∂Ḡ/∂x̄_k)(ē₀),  k = 1, …, r,
  η̄(t₀) = −(∂Ḡ/∂x̄₀)(ē₀),  η̄(T₁) = (∂Ḡ/∂x̄_{r+1})(ē₀),
  H̄_k(t, x̄₀(t), u_{0,k}(t)) = max_{u_k ∈ Ω_k(t)} H̄_k(t, x̄₀(t), u_k) a.e. in I_k,
  λ_i ≤ 0 for −q ≤ i ≤ 0,  λ_i ḡ_i(ē₀) = 0 for −q ≤ i ≤ −1.

Write η̄ = (η₁, η₂, …, η_{r+1}), and redefine η_k at τ_k (1 ≤ k ≤ r) so that it becomes continuous from the right there. Then H̄_k(t, x̄, u_k) = η_k(t) f_k(t, x_k, u_k) = H_k(t, x_k, u_k) for t ∈ I_k, η_k satisfies η_k′ = −∂H_k/∂x_k on I_k, and η_k is constant on the other intervals. We have

  Δη_j|_{τ_k} = −∂Ḡ/∂x_{j,k} = 0 except when j = k and j = k+1,
  Δη_k|_{τ_k} = −∂Ḡ/∂x_{k,k} = −∂G/∂x¹_k,  and  Δη_{k+1}|_{τ_k} = −∂Ḡ/∂x_{k+1,k} = −∂G/∂x⁰_{k+1},  k = 1, …, r.

From this the statements in the theorem follow.

In the case of variable τ_k we can apply Theorem 4 in the same way. Each function f_k is now continuously differentiable w.r.t. (t, x_k), and Ω_k is constant. The functions g_i may depend explicitly on τ̄ = (τ₀, …, τ_{r+1}) also. Assume that (τ̄₀, x_{0,1}, u_{0,1}, …, x_{0,r+1}, u_{0,r+1}), where τ̄₀ = (τ_{0,0}, τ_{0,1}, …, τ_{0,r+1}), is an optimal solution. Let I_{0,k} = [τ_{0,k−1}, τ_{0,k}], and

  K = {k ∈ {1, …, r+1} : m(I_{0,k}) > 0},  K₀ = {k ∈ {1, …, r+1} : m(I_{0,k}) = 0}.

Assume also that there are functions ρ_k ∈ L²(I_{0,k}), k ∈ K, such that

  ‖f_k(s, x_{0,k}(s), u_{0,k}(t))‖ + ‖(∂f_k/∂x_k)(s, x_{0,k}(s), u_{0,k}(t))‖ ≤ ρ_k(t) for all s, t ∈ I_{0,k}.

Define

  H_k(τ, x_k, u_k, η_k) = η_k f_k(τ, x_k, u_k),

where η_k is a row vector in ℝ^{n_k}. Let

  e₀ = (τ̄₀, x_{0,1}(τ_{0,0}), x_{0,1}(τ_{0,1}), …, x_{0,r+1}(τ_{0,r}), x_{0,r+1}(τ_{0,r+1})).

Theorem 6. There exist numbers λ_{−q}, …, λ_p, not all zero, and functions η_{0,k} and η_k on I_{0,k} such that, with G = Σ_{i=−q}^{p} λ_i g_i, the following holds: λ_i ≤ 0 for i = −q, …, 0, and λ_i g_i(e₀) = 0 for i = −q, …, −1.

If k ∈ K, then η_{0,k} and η_k are absolutely continuous on I_{0,k} and satisfy

  η̇_{0,k}(t) = −(∂H_k/∂τ)|_{0,k} a.e.,  η̇_k(t) = −(∂H_k/∂x_k)|_{0,k} a.e.,

where |_{0,k} means evaluation at (t, x_{0,k}(t), u_{0,k}(t), η_k(t)). If

  M_k(t) = sup_{u_k ∈ Ω_k} H_k(t, x_{0,k}(t), u_k, η_k(t)),

then

  M_k(t) = −η_{0,k}(t) for all t ∈ I_{0,k},  M_k(t) = H_k(t, x_{0,k}(t), u_{0,k}(t), η_k(t)) a.e. in I_{0,k}.

If k ∈ K₀, then η_{0,k}(τ_{0,k−1}) = η_{0,k}(τ_{0,k}) = η_{0,k}, η_k(τ_{0,k−1}) = η_k(τ_{0,k}) = η_k, and

  M_k = sup_{u_k ∈ Ω_k} H_k(τ_{0,k}, x_{0,k}(τ_{0,k}), u_k, η_k) ≤ −η_{0,k}.

Furthermore,

  η_k(τ_{0,k−1}) = −(∂G/∂x⁰_k)(e₀),  η_k(τ_{0,k}) = (∂G/∂x¹_k)(e₀) for 1 ≤ k ≤ r+1,
  η_{0,1}(τ_{0,0}) = −(∂G/∂τ₀)(e₀),  η_{0,r+1}(τ_{0,r+1}) = (∂G/∂τ_{r+1})(e₀),
  η_{0,k+1}(τ_{0,k}) − η_{0,k}(τ_{0,k}) = −(∂G/∂τ_k)(e₀) for 1 ≤ k ≤ r.

Remark. Consider the same problem but with integral terms

  Σ_{k=1}^{r+1} ∫_{τ_{k−1}}^{τ_k} f⁰_k(t, x_k(t), u_k(t)) dt

added to the cost functional, where f⁰_k has the same properties as f_k. Theorems 5 and 6 still hold, except that H_k is defined as λ₀ f⁰_k(t, x_k, u_k) + η_k f_k(t, x_k, u_k).

References

[1] D. D. Bainov and P. S. Simeonov, Systems with Impulse Effects: Stability Theory and Applications, Ellis Horwood Ltd.
[2] S. A. Belbas and W. H. Schmidt, Optimal control of Volterra equations with impulses, Applied Mathematics and Computation, 166 (2005).
[3] F. H. Clarke and R. B. Vinter, Optimal multiprocesses, SIAM J. Control Optim., 27 (1989).
[4] F. H. Clarke and R. B. Vinter, Applications of optimal multiprocesses, SIAM J. Control Optim., 27 (1989).
[5] M. Garavello and B. Piccoli, Hybrid necessary principle, SIAM J. Control Optim., 43 (2005).
[6] W. H. Getz and D. H. Martin, Optimal control systems with state variable jump discontinuities, Journal of Optimization Theory and Applications, 31 (1980).
[7] H. Halkin, Necessary conditions in mathematical programming and optimal control theory, in Optimal Control Theory and its Applications, I, Lecture Notes in Economics and Mathematical Systems, 105, Springer-Verlag, Berlin-Heidelberg-New York.
[8] H. Halkin, Necessary conditions for optimal control problems with differentiable or nondifferentiable data, in Mathematical Control Theory, Lecture Notes in Mathematics, 680, Springer-Verlag, Berlin-Heidelberg-New York.

[9] H. Halkin and L. W. Neustadt, General necessary conditions for optimization problems, Proc. Nat. Acad. Sciences, 56 (1966).
[10] K. Holmåker, Optimal Control Theory, Part One, Department of Mathematics, Chalmers University of Technology and the University of Göteborg.
[11] K. Holmåker, Optimal Control Theory, Part Two, Measurable Set-valued Functions, Department of Mathematics, Chalmers University of Technology and the University of Göteborg.
[12] B. M. Miller and E. Ya. Rubinovich, Impulsive control in continuous and discrete-continuous systems, Kluwer, New York.
[13] L. W. Neustadt, Optimization. A theory of necessary conditions, Princeton University Press, New York.
[14] R. Rempala and J. Zabczyk, On the maximum principle for deterministic impulse control problems, Journal of Optimization Theory and Applications, 59 (1988).
[15] H. J. Sussmann, A maximum principle for hybrid optimal control problems, in Proc. 38th IEEE Int. Conf. Decision and Control, Phoenix, AZ, 1999.
[16] E. C. Titchmarsh, The Theory of Functions, Oxford University Press.


More information

L2 gains and system approximation quality 1

L2 gains and system approximation quality 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 24: MODEL REDUCTION L2 gains and system approximation quality 1 This lecture discusses the utility

More information

L 2 -induced Gains of Switched Systems and Classes of Switching Signals

L 2 -induced Gains of Switched Systems and Classes of Switching Signals L 2 -induced Gains of Switched Systems and Classes of Switching Signals Kenji Hirata and João P. Hespanha Abstract This paper addresses the L 2-induced gain analysis for switched linear systems. We exploit

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

On the Multi-Dimensional Controller and Stopper Games

On the Multi-Dimensional Controller and Stopper Games On the Multi-Dimensional Controller and Stopper Games Joint work with Yu-Jui Huang University of Michigan, Ann Arbor June 7, 2012 Outline Introduction 1 Introduction 2 3 4 5 Consider a zero-sum controller-and-stopper

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

A note on the σ-algebra of cylinder sets and all that

A note on the σ-algebra of cylinder sets and all that A note on the σ-algebra of cylinder sets and all that José Luis Silva CCM, Univ. da Madeira, P-9000 Funchal Madeira BiBoS, Univ. of Bielefeld, Germany (luis@dragoeiro.uma.pt) September 1999 Abstract In

More information

Identification of Parameters in Neutral Functional Differential Equations with State-Dependent Delays

Identification of Parameters in Neutral Functional Differential Equations with State-Dependent Delays To appear in the proceedings of 44th IEEE Conference on Decision and Control and European Control Conference ECC 5, Seville, Spain. -5 December 5. Identification of Parameters in Neutral Functional Differential

More information

Solutions to Tutorial 8 (Week 9)

Solutions to Tutorial 8 (Week 9) The University of Sydney School of Mathematics and Statistics Solutions to Tutorial 8 (Week 9) MATH3961: Metric Spaces (Advanced) Semester 1, 2018 Web Page: http://www.maths.usyd.edu.au/u/ug/sm/math3961/

More information

A Brunn Minkowski theory for coconvex sets of finite volume

A Brunn Minkowski theory for coconvex sets of finite volume A Brunn Minkowski theory for coconvex sets of finite volume Rolf Schneider Abstract Let C be a closed convex cone in R n, pointed and with interior points. We consider sets of the form A = C \ K, where

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

Limit solutions for control systems

Limit solutions for control systems Limit solutions for control systems M. Soledad Aronna Escola de Matemática Aplicada, FGV-Rio Oktobermat XV, 19 e 20 de Outubro de 2017, PUC-Rio 1 Introduction & motivation 2 Limit solutions 3 Commutative

More information

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32

Functional Analysis. Martin Brokate. 1 Normed Spaces 2. 2 Hilbert Spaces The Principle of Uniform Boundedness 32 Functional Analysis Martin Brokate Contents 1 Normed Spaces 2 2 Hilbert Spaces 2 3 The Principle of Uniform Boundedness 32 4 Extension, Reflexivity, Separation 37 5 Compact subsets of C and L p 46 6 Weak

More information

On the representation of switched systems with inputs by perturbed control systems

On the representation of switched systems with inputs by perturbed control systems Nonlinear Analysis 60 (2005) 1111 1150 www.elsevier.com/locate/na On the representation of switched systems with inputs by perturbed control systems J.L. Mancilla-Aguilar a, R. García b, E. Sontag c,,1,

More information

Locally Lipschitzian Guiding Function Method for ODEs.

Locally Lipschitzian Guiding Function Method for ODEs. Locally Lipschitzian Guiding Function Method for ODEs. Marta Lewicka International School for Advanced Studies, SISSA, via Beirut 2-4, 3414 Trieste, Italy. E-mail: lewicka@sissa.it 1 Introduction Let f

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Existence Results for Multivalued Semilinear Functional Differential Equations

Existence Results for Multivalued Semilinear Functional Differential Equations E extracta mathematicae Vol. 18, Núm. 1, 1 12 (23) Existence Results for Multivalued Semilinear Functional Differential Equations M. Benchohra, S.K. Ntouyas Department of Mathematics, University of Sidi

More information

Impulsive Stabilization of certain Delay Differential Equations with Piecewise Constant Argument

Impulsive Stabilization of certain Delay Differential Equations with Piecewise Constant Argument International Journal of Difference Equations ISSN0973-6069, Volume3, Number 2, pp.267 276 2008 http://campus.mst.edu/ijde Impulsive Stabilization of certain Delay Differential Equations with Piecewise

More information

The Minimum Speed for a Blocking Problem on the Half Plane

The Minimum Speed for a Blocking Problem on the Half Plane The Minimum Speed for a Blocking Problem on the Half Plane Alberto Bressan and Tao Wang Department of Mathematics, Penn State University University Park, Pa 16802, USA e-mails: bressan@mathpsuedu, wang

More information

Introduction. 1 Minimize g(x(0), x(1)) +

Introduction. 1 Minimize g(x(0), x(1)) + ESAIM: Control, Optimisation and Calculus of Variations URL: http://www.emath.fr/cocv/ Will be set by the publisher, UNMAXIMIZED INCLUSION TYPE CONDITIONS FOR NONCONVEX CONSTRAINED OPTIMAL CONTROL PROBLEMS

More information

van Rooij, Schikhof: A Second Course on Real Functions

van Rooij, Schikhof: A Second Course on Real Functions vanrooijschikhofproblems.tex December 5, 2017 http://thales.doa.fmph.uniba.sk/sleziak/texty/rozne/pozn/books/ van Rooij, Schikhof: A Second Course on Real Functions Some notes made when reading [vrs].

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Chapter 4. Measure Theory. 1. Measure Spaces

Chapter 4. Measure Theory. 1. Measure Spaces Chapter 4. Measure Theory 1. Measure Spaces Let X be a nonempty set. A collection S of subsets of X is said to be an algebra on X if S has the following properties: 1. X S; 2. if A S, then A c S; 3. if

More information

Neighboring feasible trajectories in infinite dimension

Neighboring feasible trajectories in infinite dimension Neighboring feasible trajectories in infinite dimension Marco Mazzola Université Pierre et Marie Curie (Paris 6) H. Frankowska and E. M. Marchini Control of State Constrained Dynamical Systems Padova,

More information

ON PERIODIC BOUNDARY VALUE PROBLEMS OF FIRST-ORDER PERTURBED IMPULSIVE DIFFERENTIAL INCLUSIONS

ON PERIODIC BOUNDARY VALUE PROBLEMS OF FIRST-ORDER PERTURBED IMPULSIVE DIFFERENTIAL INCLUSIONS Electronic Journal of Differential Equations, Vol. 24(24), No. 84, pp. 1 9. ISSN: 172-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) ON PERIODIC

More information

A forced pendulum equation without stable periodic solutions of a fixed period

A forced pendulum equation without stable periodic solutions of a fixed period Portugal. Math. (N.S.) Vol. xx, Fasc., 2x, xxx xxx Portugaliae Mathematica c European Mathematical Society A forced pendulum equation without stable periodic solutions of a fixed period Rafael Ortega Abstract.

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

The Arzelà-Ascoli Theorem

The Arzelà-Ascoli Theorem John Nachbar Washington University March 27, 2016 The Arzelà-Ascoli Theorem The Arzelà-Ascoli Theorem gives sufficient conditions for compactness in certain function spaces. Among other things, it helps

More information

An Asymptotic Property of Schachermayer s Space under Renorming

An Asymptotic Property of Schachermayer s Space under Renorming Journal of Mathematical Analysis and Applications 50, 670 680 000) doi:10.1006/jmaa.000.7104, available online at http://www.idealibrary.com on An Asymptotic Property of Schachermayer s Space under Renorming

More information

Real Analysis Chapter 4 Solutions Jonathan Conder

Real Analysis Chapter 4 Solutions Jonathan Conder 2. Let x, y X and suppose that x y. Then {x} c is open in the cofinite topology and contains y but not x. The cofinite topology on X is therefore T 1. Since X is infinite it contains two distinct points

More information

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1. Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x

More information

Problem Set 2: Solutions Math 201A: Fall 2016

Problem Set 2: Solutions Math 201A: Fall 2016 Problem Set 2: s Math 201A: Fall 2016 Problem 1. (a) Prove that a closed subset of a complete metric space is complete. (b) Prove that a closed subset of a compact metric space is compact. (c) Prove that

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that

More information

ON DENSITY TOPOLOGIES WITH RESPECT

ON DENSITY TOPOLOGIES WITH RESPECT Journal of Applied Analysis Vol. 8, No. 2 (2002), pp. 201 219 ON DENSITY TOPOLOGIES WITH RESPECT TO INVARIANT σ-ideals J. HEJDUK Received June 13, 2001 and, in revised form, December 17, 2001 Abstract.

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Numerical Algorithm for Optimal Control of Continuity Equations

Numerical Algorithm for Optimal Control of Continuity Equations Numerical Algorithm for Optimal Control of Continuity Equations Nikolay Pogodaev Matrosov Institute for System Dynamics and Control Theory Lermontov str., 134 664033 Irkutsk, Russia Krasovskii Institute

More information

Analysis Comprehensive Exam Questions Fall F(x) = 1 x. f(t)dt. t 1 2. tf 2 (t)dt. and g(t, x) = 2 t. 2 t

Analysis Comprehensive Exam Questions Fall F(x) = 1 x. f(t)dt. t 1 2. tf 2 (t)dt. and g(t, x) = 2 t. 2 t Analysis Comprehensive Exam Questions Fall 2. Let f L 2 (, ) be given. (a) Prove that ( x 2 f(t) dt) 2 x x t f(t) 2 dt. (b) Given part (a), prove that F L 2 (, ) 2 f L 2 (, ), where F(x) = x (a) Using

More information

Volterra Integral Equations of the First Kind with Jump Discontinuous Kernels

Volterra Integral Equations of the First Kind with Jump Discontinuous Kernels Volterra Integral Equations of the First Kind with Jump Discontinuous Kernels Denis Sidorov Energy Systems Institute, Russian Academy of Sciences e-mail: contact.dns@gmail.com INV Follow-up Meeting Isaac

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

Lebesgue-Stieltjes measures and the play operator

Lebesgue-Stieltjes measures and the play operator Lebesgue-Stieltjes measures and the play operator Vincenzo Recupero Politecnico di Torino, Dipartimento di Matematica Corso Duca degli Abruzzi, 24, 10129 Torino - Italy E-mail: vincenzo.recupero@polito.it

More information

First-Return Integrals

First-Return Integrals First-Return ntegrals Marianna Csörnyei, Udayan B. Darji, Michael J. Evans, Paul D. Humke Abstract Properties of first-return integrals of real functions defined on the unit interval are explored. n particular,

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Math 421, Homework #7 Solutions. We can then us the triangle inequality to find for k N that (x k + y k ) (L + M) = (x k L) + (y k M) x k L + y k M

Math 421, Homework #7 Solutions. We can then us the triangle inequality to find for k N that (x k + y k ) (L + M) = (x k L) + (y k M) x k L + y k M Math 421, Homework #7 Solutions (1) Let {x k } and {y k } be convergent sequences in R n, and assume that lim k x k = L and that lim k y k = M. Prove directly from definition 9.1 (i.e. don t use Theorem

More information

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5

MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5 MATH 722, COMPLEX ANALYSIS, SPRING 2009 PART 5.. The Arzela-Ascoli Theorem.. The Riemann mapping theorem Let X be a metric space, and let F be a family of continuous complex-valued functions on X. We have

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Stat 643 Review of Probability Results (Cressie)

Stat 643 Review of Probability Results (Cressie) Stat 643 Review of Probability Results (Cressie) Probability Space: ( HTT,, ) H is the set of outcomes T is a 5-algebra; subsets of H T is a probability measure mapping from T onto [0,] Measurable Space:

More information

Exponential stability of families of linear delay systems

Exponential stability of families of linear delay systems Exponential stability of families of linear delay systems F. Wirth Zentrum für Technomathematik Universität Bremen 28334 Bremen, Germany fabian@math.uni-bremen.de Keywords: Abstract Stability, delay systems,

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information