Optimal Control of PDE. Eduardo Casas


Optimal Control of PDE

Eduardo Casas
Dpto. Matemática Aplicada y Ciencias de la Computación
Universidad de Cantabria, Santander (Spain)

CIMPA-UNESCO-GUADELOUPE School, Pointe-à-Pitre, January 3-18, 2009


Introduction

In a control problem we find the following basic elements.

(1) A control u that we can handle according to our interests, and which can be chosen from a family of feasible controls K.

(2) The state of the system y to be controlled, which depends on the control. Some limitations can be imposed on the state, in mathematical terms $y \in C$, which means that not every possible state of the system is satisfactory.

(3) A state equation that establishes the dependence between the control and the state. In the next sections this state equation will be a partial differential equation, y being the solution of the equation and u a function appearing in the equation, so that any change in the control u produces a change in the solution y. However, the origin of control theory is connected with the control of systems governed by ordinary differential equations, and there is a huge activity in this field; see, for instance, the classical books by Pontriaguine et al. [21] or Lee and Markus [15].

(4) A function to be minimized, called the objective function or the cost function, depending on the control and the state (y, u).

The objective is to determine an admissible control that provides a satisfactory state and minimizes the value of the functional J. It is called the optimal control, and the associated state is the optimal state. The basic questions to study are the existence of a solution and its computation. To obtain the solution we must use numerical methods, which raises some delicate mathematical questions in the numerical analysis. The first step in solving the problem numerically is the discretization of the control problem, usually done by finite elements. A natural question is how good the approximation is; of course, we would like to have error estimates for these approximations.
In order to derive the error estimates it is essential to have some regularity of the optimal control: some order of differentiability is necessary, at least some derivatives in a weak sense. The regularity of the optimal control can be deduced from the first order optimality conditions. Another key tool in the proof of the error estimates is the use of the second order optimality conditions. Therefore our analysis requires deriving the first and second order conditions for optimality. Once we have a discrete control problem, we have to use a numerical optimization algorithm to solve it. When the problem is not convex, the optimization algorithms typically provide local minima, and the question is whether these local minima are significant for the original control problem. The following steps must be followed when we study an optimal control problem:

(1) Existence of a solution.
(2) First and second order optimality conditions.
(3) Numerical approximation.
(4) Numerical resolution of the discrete control problem.

We will not discuss the numerical analysis; we will only consider the first two points for a model problem. In this model problem the state equation will be a semilinear elliptic partial differential equation. There are not many books devoted to the questions we are going to study here. First let me mention the book by Professor J.L. Lions [17], which is an obligatory reference in the study of the theory of optimal control of partial differential equations. In this text, which has left an indelible mark, the reader will be able to find some of the methods used in the resolution of the first two questions indicated above. More recent books are X. Li and J. Yong [16], H.O. Fattorini [11] and F. Tröltzsch [25].

CHAPTER 1

Setting of the Problem and Existence of a Solution

Let Ω be an open, connected and bounded domain in $\mathbb{R}^n$, n = 2, 3, with a Lipschitz boundary Γ. In this domain we consider the following state equation

$$\begin{cases} Ay + a_0(x, y) = u & \text{in } \Omega, \\ y = 0 & \text{on } \Gamma, \end{cases} \tag{1.1}$$

where $a_0 : \Omega \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function and A denotes a second-order elliptic operator of the form

$$Ay(x) = -\sum_{i,j=1}^{n} \partial_{x_j}\big(a_{ij}(x)\,\partial_{x_i} y(x)\big)$$

with the coefficients $a_{ij} \in L^\infty(\Omega)$ satisfying

$$\lambda_A |\xi|^2 \le \sum_{i,j=1}^{n} a_{ij}(x)\,\xi_i \xi_j \quad \forall \xi \in \mathbb{R}^n, \ \text{for a.e. } x \in \Omega,$$

for some $\lambda_A > 0$. In (1.1), the function u denotes the control and we will denote by $y_u$ the solution associated to u. We will state later the conditions leading to the existence and uniqueness of a solution of (1.1) in $C(\bar\Omega) \cap H^1(\Omega)$. The optimal control problem is formulated as follows:

$$\text{(P)}\quad \min J(u) = \int_\Omega L(x, y_u(x))\,dx + \frac{N}{2}\int_\Omega u(x)^2\,dx$$

subject to

$$(y_u, u) \in (C(\bar\Omega) \cap H^1(\Omega)) \times L^2(\Omega), \quad u \in K,$$
$$a(x) \le g(x, y_u(x)) \le b(x) \quad \forall x \in K.$$

We impose the following assumptions on the data of the control problem.

(A1) In the whole paper N ≥ 0 is a real number and K is a convex and closed subset of $L^2(\Omega)$. We introduce the set

$$U_{ad} = \{u \in K : a(x) \le g(x, y_u(x)) \le b(x) \ \forall x \in K\}$$

and we assume that $U_{ad}$ is not empty.

(A2) The mapping $a_0 : \Omega \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function of class $C^2$ with respect to the second variable and there exists a real number p > n/2 such that $a_0(\cdot, 0) \in L^p(\Omega)$ and

$$\frac{\partial a_0}{\partial y}(x, y) \ge 0 \quad \text{for a.e. } x \in \Omega.$$

Moreover, for all M > 0 there exists a constant $C_{0,M} > 0$ such that

$$\Big|\frac{\partial a_0}{\partial y}(x, y)\Big| + \Big|\frac{\partial^2 a_0}{\partial y^2}(x, y)\Big| \le C_{0,M}, \qquad \Big|\frac{\partial^2 a_0}{\partial y^2}(x, y_2) - \frac{\partial^2 a_0}{\partial y^2}(x, y_1)\Big| \le C_{0,M}\,|y_2 - y_1|,$$

for a.e. $x \in \Omega$ and $|y|, |y_i| \le M$, i = 1, 2.

(A3) $L : \Omega \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function of class $C^2$ with respect to the second variable, $L(\cdot, 0) \in L^1(\Omega)$, and for all M > 0 there exist a constant $C_{L,M} > 0$ and a function $\psi_M \in L^1(\Omega)$ such that

$$\Big|\frac{\partial L}{\partial y}(x, y)\Big| \le \psi_M(x), \qquad \Big|\frac{\partial^2 L}{\partial y^2}(x, y)\Big| \le C_{L,M}, \qquad \Big|\frac{\partial^2 L}{\partial y^2}(x, y_2) - \frac{\partial^2 L}{\partial y^2}(x, y_1)\Big| \le C_{L,M}\,|y_2 - y_1|,$$

for a.e. $x \in \Omega$ and $|y|, |y_i| \le M$, i = 1, 2.

(A4) K is a compact subset of $\bar\Omega$ and the function $g : K \times \mathbb{R} \to \mathbb{R}$ is continuous, together with its derivatives $\partial^j g/\partial y^j \in C(K \times \mathbb{R})$ for j = 0, 1, 2. We also assume that $a, b : K \to [-\infty, +\infty]$ are measurable functions, with a(x) < b(x) for every $x \in K$, such that their domains

$$\mathrm{Dom}(a) = \{x \in K : -\infty < a(x)\} \quad \text{and} \quad \mathrm{Dom}(b) = \{x \in K : b(x) < +\infty\}$$

are closed sets and a and b are continuous on their respective domains. Finally, we assume that either $K \cap \Gamma = \emptyset$ or a(x) < g(x, 0) < b(x) holds for every $x \in K \cap \Gamma$. We will denote

$$Y_{ab} = \{z \in C(K) : a(x) \le z(x) \le b(x) \ \forall x \in K\}.$$

Let us remark that a (respectively b) can be identically equal to $-\infty$ (respectively $+\infty$), which means that we only have upper (respectively lower) bounds on the state. Thus the above framework defines a quite general formulation of the pointwise state constraints.
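As a small illustration of how the sets Dom(a), Dom(b) and $Y_{ab}$ interact, the following Python sketch (my own illustration, not part of the notes; the sample points and bound values are hypothetical) checks whether a function $z \in C(K)$, sampled at finitely many points of K, satisfies $a(x) \le z(x) \le b(x)$ when a and b are allowed to take the values $-\infty$ and $+\infty$:

```python
import math

def in_Y_ab(z, a, b):
    """Check a(x) <= z(x) <= b(x) at sampled points of K.

    z, a, b: dicts mapping sample points x in K to values;
    a may take -inf (no lower bound there), b may take +inf (no upper bound).
    """
    return all(a[x] <= z[x] <= b[x] for x in z)

# Hypothetical sample: K = {0.0, 0.5, 1.0}; only a lower bound at x = 1.0,
# only upper bounds at x = 0.0 and x = 0.5
a = {0.0: -math.inf, 0.5: -math.inf, 1.0: 0.0}
b = {0.0: 1.0, 0.5: 0.5, 1.0: math.inf}

z_ok  = {0.0: 0.3, 0.5: 0.2, 1.0: 0.1}
z_bad = {0.0: 0.3, 0.5: 0.9, 1.0: 0.1}  # violates the upper bound at x = 0.5

print(in_Y_ab(z_ok, a, b))   # True
print(in_Y_ab(z_bad, a, b))  # False
```

Here Dom(a) = {1.0} and Dom(b) = {0.0, 0.5}, mirroring the convention that a missing bound is encoded by an infinite value.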

Remark 1.1. By using Tietze's theorem we can extend the functions a and b from their respective domains to continuous functions on K, denoted by $\bar a$ and $\bar b$ respectively, such that $\bar a(x) < \bar b(x)$ for every $x \in K$. Since K is compact, there exists ρ > 0 such that $\bar b(x) - \bar a(x) > \rho$ for every $x \in K$. If we define $\bar z = (\bar a + \bar b)/2$, then $B_{\rho/2}(\bar z) \subset Y_{ab}$, where $B_{\rho/2}(\bar z)$ denotes the open ball in C(K) with center $\bar z$ and radius ρ/2. Therefore $Y_{ab}$ is a closed convex set with nonempty interior.

Remark 1.2. A typical functional in control theory is

$$J(u) = \frac{1}{2}\int_\Omega \big\{ |y_u(x) - y_d(x)|^2 + N u^2(x) \big\}\,dx,$$

where $y_d \in L^2(\Omega)$ denotes the ideal state of the system. The term $\int_\Omega N u^2(x)\,dx$ can be considered as the cost term; the control is said to be expensive if N is big, and cheap if N is small or zero. From a mathematical point of view, the presence of the term $Nu^2$, with N > 0, has a regularizing effect on the optimal control; see 2.3.

Remark 1.3. The most frequent choices for the set of controls are $K = L^2(\Omega)$ or

$$K = U_{\alpha,\beta} = \{u \in L^2(\Omega) : \alpha(x) \le u(x) \le \beta(x) \text{ a.e. in } \Omega\},$$

where $\alpha, \beta \in L^2(\Omega)$.

Given $u \in L^r(\Omega)$, with r > n/2, the existence of a solution $y_u$ of (1.1) in $H_0^1(\Omega) \cap L^\infty(\Omega)$ can be proved as follows. Without loss of generality we can assume that $r \le 2$. First we truncate $a_0$:

$$a_{0M}(x, t) = a_0\big(x, \mathrm{Proj}_{[-M,+M]}(t)\big).$$

Assumption (A2) implies that

$$|a_{0M}(x, y(x))| \le |a_0(x, 0)| + C_{0,M}\,M \in L^p(\Omega), \quad p > \frac{n}{2}.$$

Now we can define the mapping $F : L^r(\Omega) \to L^r(\Omega)$ by F(y) = z, where $z \in H_0^1(\Omega)$ is the unique solution of

$$\begin{cases} Az + a_{0M}(x, y) = u & \text{in } \Omega, \\ z = 0 & \text{on } \Gamma. \end{cases}$$

Now it is easy to use Schauder's theorem to deduce the existence of a fixed point $y_M$ of F. On the other hand, thanks to the monotonicity of $a_0$ with respect to the second variable, the sequence $\{y_M\}_{M=1}^\infty$ is uniformly bounded in $L^\infty(\Omega)$ (see, for instance, Stampacchia [24]). Consequently, for M large enough, $a_{0M}(x, y_M(x)) = a_0(x, y_M(x))$, and then $y_M = y_u$. Thus, the monotonicity of $a_0$ implies that $y_u$ is the unique solution of

(1.1) in $H_0^1(\Omega) \cap L^\infty(\Omega)$. On the other hand, the inclusion $Ay_u \in L^q(\Omega)$, with q = min{r, p} > n/2, implies that $y_u$ belongs to the space of Hölder functions $C^\theta(\bar\Omega)$ for some 0 < θ < 1; see [12]. In the case of Lipschitz coefficients $a_{ij}$ and a regular boundary Γ we have some additional regularity for $y_u$. Indeed, the result by Grisvard [13] implies that $y_u \in W^{2,q}(\Omega)$, where q is defined as above. In the case of a convex set Ω and supposing that p, r ≥ 2, we have $H^2(\Omega)$-regularity of the states; see [13] again. The following theorem summarizes these regularity properties.

Theorem 1.4. For any control $u \in L^r(\Omega)$, with r > n/2, there exists a unique solution $y_u$ of (1.1) in $H_0^1(\Omega) \cap C^\theta(\bar\Omega)$ for some 0 < θ < 1. Moreover there exists a constant $C_A > 0$ such that

$$\|y_u\|_{H_0^1(\Omega)} + \|y_u\|_{C^\theta(\bar\Omega)} \le C_A\big(\|a_0(\cdot,0)\|_{L^p(\Omega)} + \|u\|_{L^r(\Omega)}\big). \tag{1.2}$$

Moreover, if Ω is convex, $a_{ij} \in C^{0,1}(\bar\Omega)$ for $1 \le i, j \le n$ and p, r ≥ 2, then $y_u \in H^2(\Omega)$. Finally, if $a_{ij} \in C^{0,1}(\bar\Omega)$ for $1 \le i, j \le n$ and Γ is of class $C^{1,1}$, then $y_u \in W^{2,q}(\Omega)$, where q = min{r, p}.

The next theorem states the existence of a solution of the control problem (P).

Theorem 1.5. Under the assumptions (A1)-(A4), problem (P) has at least one solution if one of the following hypotheses holds:

(1) either K is bounded in $L^2(\Omega)$,
(2) or $L(x, y) \ge \psi(x) - \lambda y^2$, with $0 < 4\lambda C_A^2 < N$ and $\psi \in L^1(\Omega)$.

Proof. Let $\{u_k\} \subset K$ be a minimizing sequence of (P); this means that $J(u_k) \to \inf(P)$. Under the first hypothesis of the theorem, $\{u_k\}_{k=1}^\infty$ is a bounded sequence in $L^2(\Omega)$. In the second case, from (1.2) it is easy to deduce that

$$J(u_k) \ge \int_\Omega \psi(x)\,dx - 2\lambda C_A^2\,\|a_0(\cdot,0)\|^2_{L^p(\Omega)} + \Big(\frac{N}{2} - 2\lambda C_A^2\Big)\|u_k\|^2_{L^2(\Omega)}.$$

Thus $\{u_k\}_{k=1}^\infty$ is a bounded sequence in $L^2(\Omega)$ in either case. Let us take a subsequence, denoted in the same way, converging weakly in $L^2(\Omega)$ to an element ū. Since K is convex and closed in $L^2(\Omega)$, it is weakly closed, hence $\bar u \in K$. From Theorem 1.4 and the compactness of the embedding $C^\theta(\bar\Omega) \subset C(\bar\Omega)$, we deduce that $y_{u_k} \to y_{\bar u}$ strongly in $H_0^1(\Omega) \cap C(\bar\Omega)$. This, along with the continuity of g and the fact that $a(x) \le g(x, y_{u_k}(x)) \le b(x)$ for every $x \in K$, implies that $a(x) \le g(x, y_{\bar u}(x)) \le b(x)$ too, hence $\bar u \in U_{ad}$.

Finally, we have

$$J(\bar u) \le \liminf_{k\to\infty} J(u_k) = \inf(P). \qquad\Box$$

An existence theorem can be proved in a quite similar way for a more general class of cost functionals

$$J(u) = \int_\Omega L(x, y_u(x), u(x))\,dx,$$

where $L : \Omega \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function satisfying the following assumptions:

(H1) For every $(x, y) \in \Omega \times \mathbb{R}$, $L(x, y, \cdot) : \mathbb{R} \to \mathbb{R}$ is a convex function.

(H2) For any M > 0 there exist a function $\psi_M \in L^1(\Omega)$ and a constant $C_1 > 0$ such that

$$|L(x, y, u)| \le \psi_M(x) + C_1 u^2 \quad \text{a.e. } x \in \Omega,\ |y| \le M.$$

(H3) Either K is bounded in $L^2(\Omega)$, or $L(x, y, u) \ge \psi(x) + C_2(N u^2 - y^2)$, with $0 < 2C_A^2 < N$, $C_2 > 0$ and $\psi \in L^1(\Omega)$.

Then problem (P) has at least one solution. If we assume that K is bounded in $L^\infty(\Omega)$, then it is enough to suppose (H1) and the following weaker form of (H2): for any M > 0 there exists a function $\psi_M \in L^1(\Omega)$ such that $|L(x, y, u)| \le \psi_M(x)$ a.e. $x \in \Omega$ and for $|y|, |u| \le M$.

The convexity of L with respect to the control is a key point in the proof. If this assumption does not hold, then the existence of a solution can fail. Let us see an example. Consider the state equation

$$\begin{cases} -\Delta y = u & \text{in } \Omega, \\ y = 0 & \text{on } \Gamma, \end{cases}$$

and the problem

$$\text{(P)}\quad \begin{cases} \text{Minimize } J(u) = \displaystyle\int_\Omega \big[y_u(x)^2 + (u^2(x) - 1)^2\big]\,dx, \\ -1 \le u(x) \le +1, \quad x \in \Omega. \end{cases}$$

Let us take a sequence of controls $\{u_k\}_{k=1}^\infty$ such that $|u_k(x)| = 1$ for every $x \in \Omega$ and $u_k \rightharpoonup 0$ weakly in $L^\infty(\Omega)$. The reader can construct such a sequence (include Ω in an n-cube to simplify the construction). Then, taking into account that $y_{u_k} \to 0$ uniformly in $\bar\Omega$, we have

$$0 \le \inf(P) \le \lim_{k\to\infty} J(u_k) = \lim_{k\to\infty} \int_\Omega y_{u_k}(x)^2\,dx = 0.$$

But it is obvious that J(u) > 0 for any feasible control, which proves the nonexistence of an optimal control.

In the absence of convexity of L, it is necessary to use some compactness argument to prove the existence of a solution. The compactness of the set of feasible controls has been used to obtain the existence of a solution in control problems in the coefficients of the partial differential operator. These types of problems appear in structural optimization and in the identification of the coefficients of the operator; see Casas [3] and [4]. To deal with control problems in the absence of convexity and compactness, (P) is sometimes embedded in a more general problem $(\tilde{\mathrm{P}})$, in such a way that $\inf(\mathrm{P}) = \inf(\tilde{\mathrm{P}})$ and $(\tilde{\mathrm{P}})$ has a solution. This leads to the relaxation theory; see Ekeland and Temam [10], Warga [26], Young [27], Roubíček [22], Pedregal [20].
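The oscillation mechanism behind the counterexample above can be observed numerically. The following Python sketch is my own illustration, using a one-dimensional analogue $-y'' = u$ on (0, 1) with homogeneous Dirichlet conditions instead of the n-dimensional problem of the notes: for the bang-bang controls built from $\mathrm{sign}(\sin(2\pi k x))$, which satisfy $|u_k| = 1$ so that the penalty term $(u_k^2 - 1)^2$ vanishes, the cost $J(u_k) = \int y_{u_k}^2\,dx$ tends to 0 as the oscillation frequency k grows, even though J(u) > 0 for every single control:

```python
import numpy as np

def solve_poisson_1d(u):
    """Finite differences for -y'' = u on (0,1), y(0) = y(1) = 0."""
    m = len(u)
    h = 1.0 / (m + 1)
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    return np.linalg.solve(A, u)

def J(u):
    """Discrete analogue of the cost: integral of y_u^2 + (u^2 - 1)^2."""
    h = 1.0 / (len(u) + 1)
    y = solve_poisson_1d(u)
    return h * np.sum(y**2 + (u**2 - 1.0)**2)

m = 999
x = np.linspace(0.0, 1.0, m + 2)[1:-1]          # interior grid points
# bang-bang controls oscillating faster and faster, with |u_k| = 1 everywhere
costs = [J(np.where(np.sin(2 * np.pi * k * x) >= 0, 1.0, -1.0))
         for k in (1, 4, 16, 64)]
print(costs)   # decreasing toward 0 as k grows
```

The infimum 0 is approached but never attained, which is exactly the failure of existence caused by the lack of convexity of L in u.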

CHAPTER 2

First Order Optimality Conditions

In this chapter we are going to study the first order optimality conditions. They are necessary conditions for local optimality. In the case of convex problems they are also sufficient conditions for global optimality. In the absence of convexity, sufficiency requires the use of second order optimality conditions. We will prove sufficient second order conditions in the next chapter. The sufficient conditions play a crucial role in the numerical analysis of the problems. From the first order necessary conditions we can deduce some regularity properties of the optimal control, as we will prove later.

2.1. First Order Optimality Conditions

The key tool to obtain the first order optimality conditions is provided by the next lemma.

Lemma 2.1. Let U and Z be two Banach spaces and $K \subset U$ and $C \subset Z$ two convex sets, C having a nonempty interior. Let ū be a local solution of the problem

$$\text{(Q)}\quad \begin{cases} \min J(u) \\ u \in K \text{ and } F(u) \in C, \end{cases}$$

where $J : U \to (-\infty, +\infty]$ and $F : U \to Z$ are Gâteaux differentiable at ū. Then there exist a real number $\bar\alpha \ge 0$ and an element $\bar\mu \in Z'$ such that

$$\bar\alpha + \|\bar\mu\|_{Z'} > 0; \tag{2.1}$$

$$\langle \bar\mu, z - F(\bar u) \rangle \le 0 \quad \forall z \in C; \tag{2.2}$$

$$\langle \bar\alpha J'(\bar u) + [DF(\bar u)]^*\bar\mu,\ u - \bar u \rangle \ge 0 \quad \forall u \in K. \tag{2.3}$$

Moreover, one can take $\bar\alpha = 1$ if the Slater condition holds:

$$\exists u_0 \in K \ \text{such that} \ F(\bar u) + DF(\bar u)(u_0 - \bar u) \in \mathring{C}. \tag{2.4}$$

Conversely, if ū is a feasible control, (Q) is a convex problem and (2.2) and (2.3) hold with $\bar\alpha = 1$, then ū is a global solution of (Q).

We recall that (Q) is said to be convex if the function J is convex and the set $U_{ad} = \{u \in K : F(u) \in C\}$ is also convex.

Proof. First we make the proof for global solutions of (Q). Consider the sets

$$A = \{(z, \lambda) \in Z \times \mathbb{R} : \exists u \in K \text{ such that } z = F(\bar u) + DF(\bar u)(u - \bar u) \text{ and } \lambda = J'(\bar u)(u - \bar u)\}$$

and $B = \mathring{C} \times (-\infty, 0)$. It is obvious that A and B are convex sets. Moreover, they are disjoint. Indeed, suppose that there exists an element $u_0 \in K$ such that

$$z_0 = F(\bar u) + DF(\bar u)(u_0 - \bar u) = F(\bar u) + \lim_{\rho \searrow 0} \frac{1}{\rho}\big[F(\bar u + \rho(u_0 - \bar u)) - F(\bar u)\big] \in \mathring{C}$$

and

$$\lambda_0 = J'(\bar u)(u_0 - \bar u) = \lim_{\rho \searrow 0} \frac{1}{\rho}\big(J(\bar u + \rho(u_0 - \bar u)) - J(\bar u)\big) < 0.$$

Then we can find $\rho_0 \in (0, 1)$ such that

$$z_\rho = F(\bar u) + \frac{1}{\rho}\big(F(\bar u + \rho(u_0 - \bar u)) - F(\bar u)\big) \in \mathring{C} \quad \forall \rho \in (0, \rho_0),$$

$$\frac{1}{\rho}\big(J(\bar u + \rho(u_0 - \bar u)) - J(\bar u)\big) < 0 \quad \forall \rho \in (0, \rho_0).$$

This implies

$$F(\bar u + \rho(u_0 - \bar u)) = \rho z_\rho + (1 - \rho)F(\bar u) \in \mathring{C}$$

and $J(\bar u + \rho(u_0 - \bar u)) < J(\bar u)$ for every $\rho \in (0, \rho_0)$, which contradicts the fact that ū is a solution of (Q). Now, taking into account that B is an open set, from the separation theorems for convex sets (see, for instance, Brezis [2]) we deduce the existence of $\bar\mu \in Z'$ and $\bar\alpha \in \mathbb{R}$ such that

$$\langle \bar\mu, z_1 \rangle + \bar\alpha\lambda_1 > \langle \bar\mu, z_2 \rangle + \bar\alpha\lambda_2 \quad \forall (z_1, \lambda_1) \in A,\ \forall (z_2, \lambda_2) \in B. \tag{2.5}$$

Let us prove that $\bar\alpha \ge 0$. If $\bar\alpha < 0$, we take $\lambda_1 = 0$, $z_1 = F(\bar u)$, $z_2 \in \mathring{C}$ fixed and $\lambda_2 = -k$ in (2.5), with k a positive integer, which leads to

$$\langle \bar\mu, F(\bar u) \rangle > \langle \bar\mu, z_2 \rangle - \bar\alpha k.$$

Taking k large enough we get a contradiction, hence $\bar\alpha \ge 0$. Moreover, since (2.5) is a strict inequality, $\bar\alpha = \bar\mu = 0$ is not possible, which proves (2.1). Since $\bar B = \bar C \times (-\infty, 0]$, we get from (2.5)

$$\langle \bar\mu, z_1 \rangle + \bar\alpha\lambda_1 \ge \langle \bar\mu, z_2 \rangle + \bar\alpha\lambda_2 \quad \forall (z_1, \lambda_1) \in A,\ \forall (z_2, \lambda_2) \in \bar B. \tag{2.6}$$

Now it is sufficient to take $z_1 = F(\bar u)$, $z_2 = z \in C$ and $\lambda_1 = \lambda_2 = 0$ to deduce (2.2). The inequality (2.3) is obtained by setting $z_1 = F(\bar u) + DF(\bar u)(u - \bar u)$, $\lambda_1 = J'(\bar u)(u - \bar u)$, with $u \in K$, $\lambda_2 = 0$ and $z_2 = F(\bar u)$.

Finally, let us prove that $\bar\alpha \ne 0$ when the Slater condition holds. Let us assume that (2.4) holds and $\bar\alpha = 0$. As a first consequence of this assumption we get

$$\langle \bar\mu, z - F(\bar u) \rangle < 0 \quad \forall z \in \mathring{C}. \tag{2.7}$$

To prove this inequality it is enough to suppose that there exists $z_0 \in \mathring{C}$ such that $\langle \bar\mu, z_0 - F(\bar u) \rangle = 0$. Using (2.2) we deduce

$$\langle \bar\mu, z + z_0 - F(\bar u) \rangle \le 0 \quad \forall z \in B_\epsilon(0),$$

with ε > 0 small enough so that $B_\epsilon(z_0) \subset \mathring{C}$. Then $\langle \bar\mu, z \rangle \le 0$ for all $z \in B_\epsilon(0)$, hence $\bar\mu = 0$, which contradicts (2.1). Taking now $z = F(\bar u) + DF(\bar u)(u_0 - \bar u) \in \mathring{C}$ in (2.7), it follows that

$$\langle [DF(\bar u)]^*\bar\mu, u_0 - \bar u \rangle = \langle \bar\mu, DF(\bar u)(u_0 - \bar u) \rangle < 0,$$

which contradicts (2.3); therefore $\bar\alpha > 0$. It is enough to divide (2.2) and (2.3) by $\bar\alpha$, and to denote the quotient $\bar\mu/\bar\alpha$ again by $\bar\mu$, to deduce the desired result.

Now let us consider the case where ū is only a local solution of (Q). In this case, ū is a global solution of

$$\text{(Q}_\varepsilon)\quad \begin{cases} \min J(u) \\ u \in K \cap \bar B_\varepsilon(\bar u) \text{ and } F(u) \in C, \end{cases}$$

for some ε > 0 small enough. Then (2.1) and (2.2) hold, and (2.3) is replaced by

$$\langle \bar\alpha J'(\bar u) + [DF(\bar u)]^*\bar\mu,\ u - \bar u \rangle \ge 0 \quad \forall u \in K \cap \bar B_\varepsilon(\bar u).$$

Now it is an easy exercise to prove that the previous inequality implies (2.3). On the other hand, let us remark that if the Slater condition (2.4) holds for some $u_0 \in K$, then there exists $\rho_0 > 0$ such that $u_\rho = \bar u + \rho(u_0 - \bar u) \in K \cap \bar B_\varepsilon(\bar u)$ for every $0 < \rho < \rho_0$. Moreover, for any of

these functions $u_\rho$ we have

$$F(\bar u) + DF(\bar u)(u_\rho - \bar u) = F(\bar u) + \rho DF(\bar u)(u_0 - \bar u) = (1 - \rho)F(\bar u) + \rho\big[F(\bar u) + DF(\bar u)(u_0 - \bar u)\big] \in \mathring{C},$$

which implies that the Slater condition also holds at ū for the problem $(\mathrm{Q}_\varepsilon)$.

Finally, let us prove the converse implication. We assume that J is a convex function, $U_{ad}$ is a convex set, and (2.2) and (2.3) hold with $\bar\alpha = 1$. Let us take $u \in U_{ad}$ arbitrary and define $u_\rho = \bar u + \rho(u - \bar u)$ for every $\rho \in [0, 1]$; then $u_\rho \in U_{ad}$. Using (2.2) and the fact that $F(u_\rho) \in C$ we get

$$\langle [DF(\bar u)]^*\bar\mu, u - \bar u \rangle = \lim_{\rho \searrow 0} \frac{1}{\rho}\langle \bar\mu, F(u_\rho) - F(\bar u) \rangle \le 0.$$

Using this inequality and (2.3) we obtain

$$0 \le \langle J'(\bar u) + [DF(\bar u)]^*\bar\mu,\ u - \bar u \rangle \le J'(\bar u)(u - \bar u).$$

Finally, from this inequality and the convexity of J we conclude

$$0 \le \lim_{\rho \searrow 0} \frac{J(u_\rho) - J(\bar u)}{\rho} \le J(u) - J(\bar u).$$

Since u is arbitrary in $U_{ad}$, we deduce that ū is a global solution. □

In order to apply this lemma to the study of problem (P) we need to analyze the differentiability of the functionals involved in the control problem.

Proposition 2.2. The mapping $G : L^2(\Omega) \to H_0^1(\Omega) \cap C^\theta(\bar\Omega)$ defined by $G(u) = y_u$ is of class $C^2$. Furthermore, if $u, v \in L^2(\Omega)$ and $z = DG(u)v$, then z is the unique solution in $H_0^1(\Omega) \cap C^\theta(\bar\Omega)$ of the Dirichlet problem

$$\begin{cases} Az + \dfrac{\partial a_0}{\partial y}(x, y_u(x))\,z = v & \text{in } \Omega, \\ z = 0 & \text{on } \Gamma. \end{cases} \tag{2.8}$$

Finally, for every $v_1, v_2 \in L^2(\Omega)$, $z_{v_1 v_2} = G''(u)v_1 v_2$ is the solution of

$$\begin{cases} Az_{v_1 v_2} + \dfrac{\partial a_0}{\partial y}(x, y_u)\,z_{v_1 v_2} + \dfrac{\partial^2 a_0}{\partial y^2}(x, y_u)\,z_{v_1} z_{v_2} = 0 & \text{in } \Omega, \\ z_{v_1 v_2} = 0 & \text{on } \Gamma, \end{cases} \tag{2.9}$$

where $z_{v_i} = G'(u)v_i$, i = 1, 2.

Proof. Let us fix q = min{p, 2} > n/2. To prove the differentiability of G we will apply the implicit function theorem. Let us consider the Banach space

$$V(\Omega) = \{y \in H_0^1(\Omega) \cap C^\theta(\bar\Omega) : Ay \in L^q(\Omega)\},$$

endowed with the norm

$$\|y\|_{V(\Omega)} = \|y\|_{H_0^1(\Omega)} + \|y\|_{C^\theta(\bar\Omega)} + \|Ay\|_{L^q(\Omega)}.$$

Now let us take the function $\mathcal{F} : V(\Omega) \times L^q(\Omega) \to L^q(\Omega)$ defined by

$$\mathcal{F}(y, u) = Ay + a_0(\cdot, y) - u.$$

It is obvious that $\mathcal{F}$ is of class $C^2$, $y_u \in V(\Omega)$ for every $u \in L^q(\Omega)$, $\mathcal{F}(y_u, u) = 0$ and

$$\frac{\partial \mathcal{F}}{\partial y}(y, u)\,z = Az + \frac{\partial a_0}{\partial y}(x, y)\,z$$

is an isomorphism from $V(\Omega)$ onto $L^q(\Omega)$. By applying the implicit function theorem we deduce that G is of class $C^2$ and $z = DG(u)v$ is given by (2.8). Finally, (2.9) follows by differentiating twice with respect to u in the equation $AG(u) + a_0(\cdot, G(u)) = u$. □

As a consequence of this result we get the following proposition.

Proposition 2.3. The functional $J : L^2(\Omega) \to \mathbb{R}$ is of class $C^2$. Moreover, for every $u, v, v_1, v_2 \in L^2(\Omega)$,

$$J'(u)v = \int_\Omega (\varphi_u + Nu)\,v\,dx \tag{2.10}$$

and

$$J''(u)v_1 v_2 = \int_\Omega \Big[ \frac{\partial^2 L}{\partial y^2}(x, y_u)\,z_{v_1} z_{v_2} + N v_1 v_2 - \varphi_u\,\frac{\partial^2 a_0}{\partial y^2}(x, y_u)\,z_{v_1} z_{v_2} \Big]\,dx, \tag{2.11}$$

where $z_{v_i} = G'(u)v_i$, i = 1, 2, and $\varphi_u \in W_0^{1,s}(\Omega)$, for every $1 \le s < n/(n-1)$, is the unique solution of the problem

$$\begin{cases} A^*\varphi + \dfrac{\partial a_0}{\partial y}(x, y_u)\,\varphi = \dfrac{\partial L}{\partial y}(x, y_u) & \text{in } \Omega, \\ \varphi = 0 & \text{on } \Gamma, \end{cases} \tag{2.12}$$

$A^*$ being the adjoint operator of A.

Proof. First of all, let us remark that the right hand side of (2.12) belongs to $L^1(\Omega)$ but not necessarily to $H^{-1}(\Omega)$; see Assumption (A3). This implies that the solution of (2.12) need not belong to $H_0^1(\Omega)$; it is just an element of $W_0^{1,s}(\Omega)$ for all $1 \le s < n/(n-1)$; see Stampacchia [24]. Since n ≤ 3, we have $W^{1,s}(\Omega) \subset L^2(\Omega)$ for $2n/(n+2) \le s < n/(n-1)$. From Assumption (A3), Proposition 2.2 and the chain rule we get

$$J'(u)v = \int_\Omega \Big[ \frac{\partial L}{\partial y}(x, y_u(x))\,z_v(x) + N u(x)v(x) \Big]\,dx,$$

where $z_v = G'(u)v$. Using (2.12) in this expression we obtain

$$J'(u)v = \int_\Omega \Big\{ \Big[A^*\varphi_u + \frac{\partial a_0}{\partial y}(x, y_u)\,\varphi_u\Big] z_v + N u(x)v(x) \Big\}\,dx = \int_\Omega \Big\{ \Big[A z_v + \frac{\partial a_0}{\partial y}(x, y_u)\,z_v\Big] \varphi_u + N u(x)v(x) \Big\}\,dx = \int_\Omega \big[\varphi_u(x) + N u(x)\big]\,v(x)\,dx,$$

which proves (2.10). Finally, (2.11) follows in a similar way by application of the chain rule and Proposition 2.2. □

The next result is an immediate consequence of Proposition 2.2.

Proposition 2.4. The mapping $F : L^2(\Omega) \to C(K)$, defined by $F(u) = g(\cdot, y_u(\cdot))$, is of class $C^2$. Moreover, for all $u, v, v_1, v_2 \in L^2(\Omega)$,

$$F'(u)v = \frac{\partial g}{\partial y}(\cdot, y_u(\cdot))\,z_v(\cdot) \tag{2.13}$$

and

$$F''(u)v_1 v_2 = \frac{\partial^2 g}{\partial y^2}(\cdot, y_u(\cdot))\,z_{v_1}(\cdot)\,z_{v_2}(\cdot) + \frac{\partial g}{\partial y}(\cdot, y_u(\cdot))\,z_{v_1 v_2}(\cdot), \tag{2.14}$$

where $z_v = G'(u)v$, $z_{v_i} = G'(u)v_i$, i = 1, 2, and $z_{v_1 v_2} = G''(u)v_1 v_2$.

Before stating the first order necessary optimality conditions, let us fix some notation. We denote by M(K) the Banach space of all real and regular Borel measures on K, which is identified with the dual space of C(K). The norm in M(K) is given by

$$\|\mu\|_{M(K)} = |\mu|(K) = \sup\Big\{ \int_K z(x)\,d\mu(x) : z \in C(K),\ \|z\|_{C(K)} \le 1 \Big\},$$

where $|\mu|(K)$ is the total variation of the measure μ. Combining Lemma 2.1 with the previous propositions we get the first order optimality conditions.
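The adjoint representation (2.10) of the derivative, $J'(u)v = \int_\Omega (\varphi_u + Nu)v\,dx$, is exactly what gradient-based solvers use in practice: one state solve plus one adjoint solve yield the whole gradient. The following Python sketch is my own one-dimensional illustration, not taken from the notes; it assumes $A = -d^2/dx^2$, $a_0(x, y) = y^3$ and $L(x, y) = \tfrac12(y - y_d)^2$, computes the state by Newton's method, solves the adjoint equation, and checks the gradient $\varphi_u + Nu$ against a finite-difference quotient of J:

```python
import numpy as np

m, N = 199, 1e-2
h = 1.0 / (m + 1)
x = np.linspace(0.0, 1.0, m + 2)[1:-1]
# tridiagonal matrix of -d^2/dx^2 with homogeneous Dirichlet conditions
Lap = (np.diag(2.0 * np.ones(m))
       - np.diag(np.ones(m - 1), 1)
       - np.diag(np.ones(m - 1), -1)) / h**2
y_d = np.sin(np.pi * x)                  # desired state in L(x,y) = (y - y_d)^2 / 2

def state(u):
    """Newton's method for the semilinear state equation -y'' + y^3 = u."""
    y = np.zeros(m)
    for _ in range(50):
        step = np.linalg.solve(Lap + np.diag(3.0 * y**2), Lap @ y + y**3 - u)
        y -= step
        if np.linalg.norm(step) < 1e-12:
            break
    return y

def J(u):
    y = state(u)
    return h * (0.5 * np.sum((y - y_d)**2) + 0.5 * N * np.sum(u**2))

def gradient(u):
    y = state(u)
    # adjoint equation: -phi'' + 3 y^2 phi = y - y_d  (A is self-adjoint here)
    phi = np.linalg.solve(Lap + np.diag(3.0 * y**2), y - y_d)
    return phi + N * u                   # pointwise gradient  phi_u + N u

u = np.cos(np.pi * x)
g = gradient(u)
v = np.random.default_rng(0).standard_normal(m)
eps = 1e-6
fd = (J(u + eps * v) - J(u - eps * v)) / (2.0 * eps)
print(abs(fd - h * np.sum(g * v)))       # agreement up to roundoff
```

The discrete directional derivative $h\sum_i (\varphi_i + Nu_i)v_i$ matches the difference quotient of J, mirroring the integration-by-parts computation in the proof of Proposition 2.3.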

Theorem 2.5. Let ū be a local minimum of (P). Then there exist $\bar\alpha \ge 0$, $\bar y \in H_0^1(\Omega) \cap C^\theta(\bar\Omega)$, $\bar\varphi \in W_0^{1,s}(\Omega)$ for all $1 \le s < n/(n-1)$, and $\bar\mu \in M(K)$, with $(\bar\alpha, \bar\mu) \ne (0, 0)$, such that the following relationships hold:

$$\begin{cases} A\bar y + a_0(x, \bar y) = \bar u & \text{in } \Omega, \\ \bar y = 0 & \text{on } \Gamma, \end{cases} \tag{2.15}$$

$$\begin{cases} A^*\bar\varphi + \dfrac{\partial a_0}{\partial y}(x, \bar y)\,\bar\varphi = \bar\alpha\,\dfrac{\partial L}{\partial y}(x, \bar y) + \dfrac{\partial g}{\partial y}(x, \bar y)\,\bar\mu & \text{in } \Omega, \\ \bar\varphi = 0 & \text{on } \Gamma, \end{cases} \tag{2.16}$$

$$\int_K \big(z(x) - g(x, \bar y(x))\big)\,d\bar\mu(x) \le 0 \quad \forall z \in Y_{ab}, \tag{2.17}$$

$$\int_\Omega \big[\bar\varphi(x) + \bar\alpha N \bar u(x)\big]\,(u(x) - \bar u(x))\,dx \ge 0 \quad \forall u \in K. \tag{2.18}$$

Furthermore, if there exists $u_0 \in K$ such that

$$a(x) < g(x, \bar y(x)) + \frac{\partial g}{\partial y}(x, \bar y(x))\,z_0(x) < b(x) \quad \forall x \in K, \tag{2.19}$$

where $z_0 \in H_0^1(\Omega) \cap C^\theta(\bar\Omega)$ is the unique solution of

$$\begin{cases} Az + \dfrac{\partial a_0}{\partial y}(x, \bar y)\,z = u_0 - \bar u & \text{in } \Omega, \\ z = 0 & \text{on } \Gamma, \end{cases} \tag{2.20}$$

then (2.15)-(2.18) hold with $\bar\alpha = 1$.

Conversely, if $\bar u \in U_{ad}$, the functions $a_0$ and g are constant or linear with respect to y, L is convex with respect to y, and (2.15)-(2.18) hold with $\bar\alpha = 1$, then ū is a global solution of (P).

Proof. The theorem is a consequence of Lemma 2.1 and Propositions 2.3 and 2.4. Indeed, it is enough to take Z = C(K), $C = Y_{ab}$, $F(u) = g(\cdot, y_u)$ and J as the cost functional of (P). In this framework (2.2) is equivalent to (2.17); then the existence of $\bar\mu$ and $\bar\alpha$, with $(\bar\alpha, \bar\mu) \ne (0, 0)$, such that (2.17) holds is immediate. Let us check that (2.3) is equivalent to (2.18). First denote the state associated to ū by ȳ; then (2.15) holds. Now we define $\varphi_{\bar u}, \psi_{\bar\mu} \in W_0^{1,s}(\Omega)$ satisfying

$$\begin{cases} A^*\varphi_{\bar u} + \dfrac{\partial a_0}{\partial y}(x, \bar y)\,\varphi_{\bar u} = \bar\alpha\,\dfrac{\partial L}{\partial y}(x, \bar y) & \text{in } \Omega, \\ \varphi_{\bar u} = 0 & \text{on } \Gamma, \end{cases} \tag{2.21}$$

and

$$\begin{cases} A^*\psi_{\bar\mu} + \dfrac{\partial a_0}{\partial y}(x, \bar y)\,\psi_{\bar\mu} = \dfrac{\partial g}{\partial y}(x, \bar y)\,\bar\mu & \text{in } \Omega, \\ \psi_{\bar\mu} = 0 & \text{on } \Gamma, \end{cases} \tag{2.22}$$

respectively. Setting $\bar\varphi = \varphi_{\bar u} + \psi_{\bar\mu}$, we have that $\bar\varphi$ satisfies (2.16). From (2.10) and (2.13) we get for every $v \in L^2(\Omega)$

$$\langle \bar\alpha J'(\bar u) + [DF(\bar u)]^*\bar\mu,\ v \rangle = \int_\Omega (\varphi_{\bar u} + \bar\alpha N\bar u)\,v\,dx + \int_K \frac{\partial g}{\partial y}(x, \bar y)\,z_v\,d\bar\mu(x)$$
$$= \int_\Omega (\varphi_{\bar u} + \bar\alpha N\bar u)\,v\,dx + \Big\langle A^*\psi_{\bar\mu} + \frac{\partial a_0}{\partial y}(x, \bar y)\,\psi_{\bar\mu},\ z_v \Big\rangle = \int_\Omega (\varphi_{\bar u} + \bar\alpha N\bar u)\,v\,dx + \int_\Omega \Big(Az_v + \frac{\partial a_0}{\partial y}(x, \bar y)\,z_v\Big)\psi_{\bar\mu}\,dx$$
$$= \int_\Omega (\bar\varphi + \bar\alpha N\bar u)\,v\,dx,$$

which proves the desired equivalence. Finally, we observe that (2.19) is the formulation of the Slater hypothesis (2.4) corresponding to problem (P). The rest is obvious. □

Let us prove some properties of the Lagrange multiplier $\bar\mu$. Define

$$K_a = \{x \in K : g(x, \bar y(x)) = a(x)\} \quad \text{and} \quad K_b = \{x \in K : g(x, \bar y(x)) = b(x)\}.$$

Now we set $K_0 = K_a \cup K_b$. Then we have the following decomposition of $\bar\mu$.

Corollary 2.6. The multiplier $\bar\mu$ has the Jordan decomposition $\bar\mu = \bar\mu^+ - \bar\mu^-$, where $\bar\mu^-$ and $\bar\mu^+$ are positive measures on K whose supports are contained in $K_a$ and $K_b$ respectively. Moreover we have

$$\|\bar\mu\|_{M(K)} = |\bar\mu|(K_0) = \bar\mu^-(K_a) + \bar\mu^+(K_b).$$

Proof. Let us prove that the support of $\bar\mu$ is contained in $K_0$. Take a sequence of compact subsets of K, $\{K_j\}_{j=1}^\infty$, such that

$$K_0 \subset \mathring{K}_{j+1} \subset K_{j+1} \subset \mathring{K}_j \quad \forall j \ge 1 \quad \text{and} \quad \bigcap_{j=1}^\infty K_j = K_0,$$

the interiors being taken relative to K.

For every j we have a(x) < g(x, ȳ(x)) < b(x) for all $x \in K\setminus\mathring{K}_j$; then we deduce the existence of a sequence of strictly positive numbers $\{\varepsilon_j\}_{j=1}^\infty$ such that

$$a(x) + \varepsilon_j < g(x, \bar y(x)) < b(x) - \varepsilon_j \quad \forall x \in K\setminus\mathring{K}_j,\ \forall j \ge 1.$$

For every function $z \in C(K)$ with support in $K\setminus\mathring{K}_j$ and $z \ne 0$ we define

$$z_j(x) = \varepsilon_j\,\frac{z(x)}{\|z\|_{C(K)}} + g(x, \bar y(x)), \quad x \in K.$$

It is clear that $z_j(x) = g(x, \bar y(x))$ for every $x \in K_j$ and that $z_j \in Y_{ab}$. Then (2.17) leads to

$$\frac{\varepsilon_j}{\|z\|_{C(K)}} \int_{K\setminus K_j} z(x)\,d\bar\mu(x) = \int_K \big(z_j(x) - g(x, \bar y(x))\big)\,d\bar\mu(x) \le 0$$

for every $z \in C_0(K\setminus\mathring{K}_j)$, which implies $\bar\mu|_{K\setminus K_j} = 0$, therefore

$$|\bar\mu|(K\setminus K_0) = \lim_{j\to\infty} |\bar\mu|(K\setminus K_j) = 0,$$

hence the support of $\bar\mu$ is contained in $K_0$. Finally, since $K_a$ and $K_b$ are disjoint, any function $z \in C(K_a)$ can be extended to a continuous function $\tilde z$ on $K_0$ by setting $\tilde z(x) = g(x, \bar y(x))$ for $x \in K_b$. Using Tietze's Theorem, and assuming that $a(x) \le z(x) \le b(x)$ for all $x \in K_a$, we can further extend $\tilde z$ to a continuous function on K such that $\tilde z \in Y_{ab}$. Therefore, using once again (2.17), we get

$$\int_{K_a} \big(z(x) - a(x)\big)\,d\bar\mu(x) = \int_K \big(\tilde z(x) - g(x, \bar y(x))\big)\,d\bar\mu(x) \le 0.$$

This inequality implies that $\bar\mu$ is nonpositive on $K_a$. Analogously we can prove that $\bar\mu$ is nonnegative on $K_b$, which concludes the proof. □

Remark 2.7. Sometimes the state constraints are active only at a finite set of points $K_0 = \{x_j\}_{j=1}^m \subset K$. Then the previous corollary implies that

$$\bar\mu = \sum_{j=1}^m \bar\lambda_j\,\delta_{x_j}, \quad \text{with } \bar\lambda_j \begin{cases} \ge 0 & \text{if } g(x_j, \bar y(x_j)) = b(x_j), \\ \le 0 & \text{if } g(x_j, \bar y(x_j)) = a(x_j), \end{cases} \tag{2.23}$$

where $\delta_{x_j}$ denotes the Dirac measure centered at $x_j$. If we denote by $\bar\varphi_j$, $1 \le j \le m$, and $\bar\varphi_0$ the solutions of

$$\begin{cases} A^*\bar\varphi_j + \dfrac{\partial a_0}{\partial y}(x, \bar y(x))\,\bar\varphi_j = \delta_{x_j} & \text{in } \Omega, \\ \bar\varphi_j = 0 & \text{on } \Gamma, \end{cases} \tag{2.24}$$

and

$$\begin{cases} A^*\bar\varphi_0 + \dfrac{\partial a_0}{\partial y}(x, \bar y(x))\,\bar\varphi_0 = \dfrac{\partial L}{\partial y}(x, \bar y) & \text{in } \Omega, \\ \bar\varphi_0 = 0 & \text{on } \Gamma, \end{cases} \tag{2.25}$$

then the adjoint state defined by (2.16) is given by

$$\bar\varphi = \bar\alpha\,\bar\varphi_0 + \sum_{j=1}^m \bar\lambda_j\,\frac{\partial g}{\partial y}(x_j, \bar y(x_j))\,\bar\varphi_j. \tag{2.26}$$

2.2. Regularity of the Optimal Controls

From the optimality conditions we can deduce some properties of the behavior of the local minima. In the following theorem we consider several different situations.

Theorem 2.8. Let us assume that ū is a local minimum of (P). Then the following statements hold.

If $K = L^2(\Omega)$ and N > 0, then $\bar\alpha = 1$ and

$$\bar u = -\frac{1}{N}\,\bar\varphi \in W_0^{1,s}(\Omega) \quad \text{for all } 1 \le s < \frac{n}{n-1}. \tag{2.27}$$

If $K = U_{\alpha,\beta} = \{u \in L^2(\Omega) : \alpha(x) \le u(x) \le \beta(x) \text{ a.e. in } \Omega\}$, N > 0 and $\bar\alpha = 1$, then ū is given by the expression

$$\bar u(x) = \mathrm{Proj}_{[\alpha(x),\,\beta(x)]}\Big( -\frac{1}{N}\,\bar\varphi(x) \Big). \tag{2.28}$$

Moreover, $\bar u \in W^{1,s}(\Omega)$ for all $1 \le s < n/(n-1)$ if $\alpha, \beta \in W^{1,s}(\Omega)$.

If $K = U_{\alpha,\beta}$ and $\bar\alpha N = 0$, then

$$\bar u(x) = \begin{cases} \alpha(x) & \text{if } \bar\varphi(x) > 0, \\ \beta(x) & \text{if } \bar\varphi(x) < 0. \end{cases} \tag{2.29}$$

The proof follows easily from the inequality (2.18). Now we can improve the regularity results if we assume more regularity of the data of the control problem. Let us start by supposing that $K_0$ is a finite set. The following result was proved in [5].

Theorem 2.9. Assume p > n in Assumption (A2) and $\psi_M \in L^p(\Omega)$ in (A3). Suppose also that N > 0, $K = U_{\alpha\beta}$, $a_{ij}, \alpha, \beta \in C^{0,1}(\bar\Omega)$, $1 \le i, j \le n$, and that Γ is of class $C^{1,1}$. Let $(\bar y, \bar u, \bar\varphi, \bar\mu) \in (H_0^1(\Omega) \cap C^\theta(\bar\Omega)) \times L^\infty(\Omega) \times W_0^{1,s}(\Omega) \times M(K)$, for all $1 \le s < n/(n-1)$, satisfy the optimality system (2.15)-(2.18) with $\bar\alpha = 1$. If the active set consists

of finitely many points, i.e. $K_0 = \{x_j\}_{j=1}^m$, then ū belongs to $C^{0,1}(\bar\Omega)$ and ȳ to $W^{2,p}(\Omega)$.

Proof. From (2.24) and (2.25) we deduce that $\bar\varphi_0 \in W^{2,p}(\Omega) \subset C^1(\bar\Omega)$ and $\bar\varphi_j \in W^{2,p}(\bar\Omega\setminus B_\varepsilon(x_j)) \subset C^1(\bar\Omega\setminus B_\varepsilon(x_j))$, $1 \le j \le m$, for any ε > 0. Furthermore, it is well known (see for instance [1]) that the asymptotic behavior of $\bar\varphi_j(x)$ as $x \to x_j$ is of the type

$$\bar\varphi_j(x) \approx \begin{cases} C_1 \log\dfrac{1}{|x - x_j|} + C_2 & \text{if } n = 2, \\[4pt] C_1 \dfrac{1}{|x - x_j|^{n-2}} + C_2 & \text{if } n > 2, \end{cases} \tag{2.30}$$

with $C_1 > 0$. In particular we have that $\bar\varphi \in C^1(\bar\Omega\setminus\{x_j\}_{j \in I_{\bar u}})$, where

$$I_{\bar u} = \{j \in [1, m] : \bar\lambda_j \ne 0\}.$$

Let us take

$$C_{\alpha\beta} = \|\alpha\|_{L^\infty(\Omega)} + \|\beta\|_{L^\infty(\Omega)} + 1. \tag{2.31}$$

Thanks to (2.30) we can take $\varepsilon_{\alpha\beta} > 0$ such that

$$|\bar\varphi(x)| > N C_{\alpha\beta} \quad \forall x \in B_{\varepsilon_{\alpha\beta}}(x_j),\ j \in I_{\bar u},$$

the sign of $\bar\varphi$ in $B_{\varepsilon_{\alpha\beta}}(x_j)$ being equal to the sign of $\bar\lambda_j\,(\partial g/\partial y)(x_j, \bar y(x_j))$. Finally, (2.28) implies that

$$\bar u(x) = \begin{cases} \alpha(x) & \text{if } \bar\lambda_j\,\dfrac{\partial g}{\partial y}(x_j, \bar y(x_j)) > 0, \\[4pt] \beta(x) & \text{if } \bar\lambda_j\,\dfrac{\partial g}{\partial y}(x_j, \bar y(x_j)) < 0, \end{cases} \quad x \in B_{\varepsilon_{\alpha\beta}}(x_j).$$

Finally, the Lipschitz regularity of $\bar\varphi$ in $\bar\Omega\setminus\bigcup_{j\in I_{\bar u}} B_{\varepsilon_{\alpha\beta}}(x_j)$, and of α and β in $\bar\Omega$, along with (2.28), implies that $\bar u \in C^{0,1}(\bar\Omega)$. Indeed, it is enough to observe that for any numbers $t_1, t_2 \in \mathbb{R}$ and elements $x_1, x_2 \in \bar\Omega$ we have

$$\big|\mathrm{Proj}_{[\alpha(x_2),\,\beta(x_2)]}(t_2) - \mathrm{Proj}_{[\alpha(x_1),\,\beta(x_1)]}(t_1)\big| = \big|\max\{\alpha(x_2), \min\{t_2, \beta(x_2)\}\} - \max\{\alpha(x_1), \min\{t_1, \beta(x_1)\}\}\big|$$
$$\le |\alpha(x_2) - \alpha(x_1)| + |\beta(x_2) - \beta(x_1)| + |t_2 - t_1|. \qquad\Box$$

Surprisingly, the bound constraints on the controls contribute to increasing the regularity of ū. The question now arises whether this Lipschitz property remains valid when there are infinitely many points at which the pointwise state constraints are active. Unfortunately, the answer is

negative. In fact, the optimal control can even fail to be continuous if $K_0$ is an infinite and countable set. Let us present a counterexample.

Counterexample. We set $\Omega = \{x \in \mathbb{R}^2 : |x| < 2\}$,

$$\bar y(x) = \begin{cases} -1 & \text{if } |x| \le 1, \\ -1 + \Big(\dfrac{|x|^2 - 1}{3}\Big)^4 & \text{if } 1 < |x| \le 2, \end{cases}$$

$K = \{x_k\}_{k=1}^\infty \cup \{x_*\}$, where $x_k = (1/k, 0)$ and $x_* = (0, 0)$, and

$$\bar\mu = -\sum_{k=1}^\infty \frac{1}{k^2}\,\delta_{x_k}.$$

Now we define $\bar\varphi \in W_0^{1,s}(\Omega)$, for all $1 \le s < n/(n-1)$, as the solution of the equation

$$\begin{cases} -\Delta\varphi = \bar y + \bar\mu & \text{in } \Omega, \\ \varphi = 0 & \text{on } \Gamma. \end{cases} \tag{2.32}$$

The function $\bar\varphi$ can be decomposed in the following form:

$$\bar\varphi(x) = \psi(x) - \sum_{k=1}^\infty \frac{1}{k^2}\,\big[\psi_k(x) + \phi(x - x_k)\big],$$

where $\phi(x) = -\frac{1}{2\pi}\log|x|$ is the fundamental solution of $-\Delta$ and the functions $\psi, \psi_k \in C^2(\bar\Omega)$ satisfy

$$\begin{cases} -\Delta\psi = \bar y & \text{in } \Omega, \\ \psi = 0 & \text{on } \Gamma, \end{cases} \qquad \begin{cases} -\Delta\psi_k = 0 & \text{in } \Omega, \\ \psi_k = -\phi(x - x_k) & \text{on } \Gamma. \end{cases}$$

Finally we set

$$M = |\psi(0)| + \sum_{k=1}^\infty \frac{1}{k^2}\,|\psi_k(0)| + \sum_{k=1}^\infty \frac{1}{k^2}\,\phi(x_k) + 1,$$

$$\bar u(x) = \mathrm{Proj}_{[-M,+M]}\big(-\bar\varphi(x)\big) \tag{2.33}$$

and $a_0(x) = \bar u(x) + \Delta\bar y(x)$. Then ū is the unique global solution of the control problem

$$\text{(P)}\quad \begin{cases} \min J(u) = \dfrac{1}{2}\displaystyle\int_\Omega \big(y_u(x)^2 + u^2(x)\big)\,dx \\ \text{subject to } (y_u, u) \in (C(\bar\Omega) \cap H^1(\Omega)) \times L^\infty(\Omega), \\ -M \le u(x) \le +M \ \text{for a.e. } x \in \Omega, \\ -1 \le y_u(x) \le +1 \quad \forall x \in K, \end{cases}$$

where $y_u$ is the solution of

$$\begin{cases} -\Delta y + a_0(x) = u & \text{in } \Omega, \\ y = 0 & \text{on } \Gamma. \end{cases} \tag{2.34}$$

As a first step to prove that ū is a solution of the problem, we verify that M is a real number. Since $\{\phi(x - x_k)\}_{k=1}^\infty$ is bounded in $C^2(\Gamma)$, the sequence $\{\psi_k\}_{k=1}^\infty$ is bounded in $C^2(\bar\Omega)$. Therefore, the convergence of the first series in the definition of M is obvious. The convergence of the second one is also clear:

$$\sum_{k=1}^\infty \frac{1}{k^2}\,\phi(x_k) = \frac{1}{2\pi}\sum_{k=1}^\infty \frac{\log k}{k^2} < \infty.$$

Problem (P) is strictly convex and ū is a feasible control whose associated state ȳ satisfies the state constraints. Therefore, there exists a unique solution characterized by the optimality system. In other words, the first order optimality conditions are necessary and sufficient for a global minimum. Let us check that $(\bar y, \bar u, \bar\varphi, \bar\mu) \in (H_0^1(\Omega) \cap C(\bar\Omega)) \times L^\infty(\Omega) \times W_0^{1,s}(\Omega) \times M(K)$ satisfies the optimality system (2.15)-(2.18) with $\bar\alpha = 1$. First, in view of the definition of $a_0$, it is clear that ȳ is the state associated to ū. On the other hand, $\bar\varphi$ is the solution of (2.32), which is the same as (2.16) for our example. Relation (2.18) follows directly from the definition of ū given in (2.33). Finally, because of the definition of $\bar\mu$ and K, (2.17) can be written in the form

$$\sum_{k=1}^\infty \frac{1}{k^2}\,z(x_k) \ge -\sum_{k=1}^\infty \frac{1}{k^2} \quad \forall z \in C(K) \text{ such that } -1 \le z(x) \le +1 \ \forall x \in K,$$

which obviously is satisfied.

Now we prove that ū is not continuous at $x_* = (0, 0)$. Notice that $\bar\varphi(x_k) = -\infty$ for every $k \in \mathbb{N}$, because $\phi(0) = +\infty$. Therefore, (2.33) implies that $\bar u(x_k) = +M$ for every k. Since $x_k \to x_*$, the continuity of ū at $x_*$ would require that $\bar u(x) \to M$ as $x \to x_*$. However, we have for $\xi_j = (x_j + x_{j+1})/2$ that

$$\lim_{j\to\infty} \bar u(\xi_j) = \mathrm{Proj}_{[-M,+M]}\Big( -\psi(0) + \sum_{k=1}^\infty \frac{1}{k^2}\,\big[\psi_k(0) + \phi(x_k)\big] \Big) \le M - 1 < M. \tag{2.35}$$

Therefore ū is not continuous at (0, 0). Nevertheless, we are able to improve the regularity result of Theorem 2.8.

Theorem 2.10. Suppose that ū is a strict local minimum of (P). We also assume that Assumptions (A1)-(A4) and (2.19) hold, N > 0,

K = U_{αβ}, α, β ∈ L^∞(Ω) ∩ H¹(Ω), a_{ij} ∈ C(Ω̄) (1 ≤ i, j ≤ n), p > n/2, and ψ_M ∈ L^p(Ω) in (A3). Then ū ∈ H¹(Ω).

Proof. Fix ε_ū > 0 such that ū is a strict global minimum of (P) in the closed ball B̄_{ε_ū}(ū) ⊂ L²(Ω). This implies that ū is the unique global solution of the problem

(P_0) min J(u)
subject to (y_u, u) ∈ (H¹_0(Ω) ∩ C^θ(Ω̄)) × L²(Ω),
α(x) ≤ u(x) ≤ β(x) for a.e. x ∈ Ω, ‖u − ū‖_{L²(Ω)} ≤ ε_ū,
a(x) ≤ g(x, y_u(x)) ≤ b(x) ∀x ∈ K,

where y_u is the solution of (1.1). Now we select a sequence {x_k}_{k=1}^∞ dense in Dom(a) ∪ Dom(b) and consider the family of control problems

(P_k) min J(u)
subject to (y_u, u) ∈ (H¹_0(Ω) ∩ C(Ω̄)) × L²(Ω),
α(x) ≤ u(x) ≤ β(x) for a.e. x ∈ Ω, ‖u − ū‖_{L²(Ω)} ≤ ε_ū,
a(x_j) ≤ g(x_j, y_u(x_j)) ≤ b(x_j), 1 ≤ j ≤ k.

Obviously, ū is a feasible control for every problem (P_k). Therefore, the existence of a global minimum u_k of (P_k) is a consequence of Theorem 1.5. The proof of the theorem is split into three steps. First, we show that the sequence {u_k}_{k=1}^∞ converges to ū strongly in L²(Ω). In a second step, we check that the linearized Slater condition corresponding to problem (P_k) holds for all sufficiently large k. Finally, we confirm the boundedness of {u_k}_{k=1}^∞ in H¹(Ω).

Step 1 – Convergence of {u_k}_{k=1}^∞. By taking a subsequence, if necessary, we can suppose that u_k ⇀ ũ weakly in L²(Ω). This implies that y_k = y_{u_k} → ỹ = y_ũ strongly in H¹_0(Ω) ∩ C(Ω̄). Because of the density of {x_k}_{k=1}^∞ in Dom(a) ∪ Dom(b) and the fact that

a(x_j) ≤ g(x_j, ỹ(x_j)) = lim_{k→∞} g(x_j, y_k(x_j)) ≤ b(x_j) ∀j ≥ 1,

it holds that a(x) ≤ g(x, ỹ(x)) ≤ b(x) for every x ∈ K. The control constraints define a closed and convex subset of L²(Ω), hence ũ satisfies all the control constraints. Therefore ũ is a feasible control for problem (P_0). Since ū is the solution of (P_0), u_k is a solution of (P_k), and ū is feasible for every problem (P_k), we have J(u_k) ≤ J(ū) and further

J(ū) ≤ J(ũ) ≤ liminf_{k→∞} J(u_k) ≤ limsup_{k→∞} J(u_k) ≤ J(ū).
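As a side illustration of the series bound used in the counterexample above to show that M is finite (the sum (1/2π) Σ_k (log k)/k²), the partial sums can be checked numerically. The following Python sketch is ours, not part of the notes; it only verifies that the partial sums increase while remaining bounded.

```python
# Numeric sanity check (illustration only) that the second series in the
# definition of M converges: sum_k (1/k^2) * phi(x_k) = (1/2pi) * sum_k log(k)/k^2.
# The partial sums are increasing but bounded (sum_k log(k)/k^2 = -zeta'(2) < 1).
import math

def partial_sum(n):
    """(1/2pi) * sum_{k=1}^{n} log(k)/k^2."""
    return sum(math.log(k) / k**2 for k in range(1, n + 1)) / (2 * math.pi)

s_1e3, s_1e4 = partial_sum(10**3), partial_sum(10**4)
assert s_1e3 < s_1e4        # monotone increasing
assert s_1e4 < 0.16         # bounded: the full sum is about 0.9375/(2*pi) < 0.16
```

The tail beyond n = 10⁴ is already below 2·10⁻⁴ after the 1/(2π) factor, so the bound is comfortably met.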

Since ū is the unique solution of (P_0), this implies ū = ũ and J(u_k) → J(ū); hence the strong convergence u_k → ū in L²(Ω) follows from the fact J(u_k) → J(ū) along with the uniform convergence y_k → ȳ.

Step 2 – The linearized Slater condition for (P_k) holds at u_k. Assumption (2.19) ensures the existence of a number ρ > 0 such that the following inequalities hold:

(2.36) a(x) + ρ < g(x, ȳ(x)) + (∂g/∂y)(x, ȳ(x)) z_0(x) < b(x) − ρ ∀x ∈ K.

Given 0 < ε < 1, we multiply the previous inequality by ε and the inequality a(x) ≤ g(x, ȳ(x)) ≤ b(x) by 1 − ε. Next, we add both inequalities to get

(2.37) a(x) + ερ < g(x, ȳ(x)) + ε (∂g/∂y)(x, ȳ(x)) z_0(x) < b(x) − ερ ∀x ∈ K.

We fix

0 < ε < min{1, ε_ū / ‖u_0 − ū‖_{L²(Ω)}}

and define u_{0,ε} = ε(u_0 − ū) + ū. It is obvious that u_{0,ε} satisfies the control constraints of problem (P_k) for any k. Consider now the solutions z_k of the boundary value problem

A z + (∂a_0/∂y)(x, y_k) z = u_{0,ε} − u_k in Ω, z = 0 on Γ.

In view of u_k → ū in L²(Ω) and y_k → ȳ in C(Ω̄), we obtain the convergence z_k → ε z_0 in H¹_0(Ω) ∩ C(Ω̄). Finally, from (2.37) we deduce the existence of k_0 > 0 such that

(2.38) a(x_j) + ερ/2 ≤ g(x_j, y_k(x_j)) + (∂g/∂y)(x_j, y_k(x_j)) z_k(x_j) ≤ b(x_j) − ερ/2

for 1 ≤ j ≤ k and every k ≥ k_0.

Step 3 – {u_k}_{k=1}^∞ is bounded in H¹(Ω). The strong convergence u_k → ū in L²(Ω) implies that ‖u_k − ū‖_{L²(Ω)} < ε_ū for k large enough. Consequently, the additional constraint ‖u_k − ū‖_{L²(Ω)} ≤ ε_ū is not active, and hence u_k is a local minimum of the problem

(Q_k) min J(u)
subject to (y_u, u) ∈ (H¹_0(Ω) ∩ C(Ω̄)) × L^∞(Ω),
α(x) ≤ u(x) ≤ β(x) for a.e. x ∈ Ω,
a(x_j) ≤ g(x_j, y_u(x_j)) ≤ b(x_j), 1 ≤ j ≤ k.

From Theorem 2.8-(2.28) we get

(2.39) u_k(x) = Proj_{[α(x),β(x)]}( −(1/N) ϕ_k(x) ),

with

(2.40) ϕ_k = ϕ_{k,0} + Σ_{j=1}^k λ_{k,j} (∂g/∂y)(x_j, y_k(x_j)) ϕ_{k,j}.

Above, {λ_{k,j}}_{j=1}^k are the Lagrange multipliers; more precisely,

(2.41) µ_k = Σ_{j=1}^k λ_{k,j} δ_{x_j}, with λ_{k,j} { ≥ 0 if g(x_j, y_k(x_j)) = b(x_j), ≤ 0 if g(x_j, y_k(x_j)) = a(x_j) },

see Remark 2.7. Finally, ϕ_{k,0} and {ϕ_{k,j}}_{j=1}^k are given by

(2.42) A*ϕ_{k,0} + (∂a_0/∂y)(x, y_k(x)) ϕ_{k,0} = (∂L/∂y)(x, y_k) in Ω, ϕ_{k,0} = 0 on Γ,

(2.43) A*ϕ_{k,j} + (∂a_0/∂y)(x, y_k(x)) ϕ_{k,j} = δ_{x_j} in Ω, ϕ_{k,j} = 0 on Γ.

Let us prove the following boundedness property:

(2.44) ∃C > 0 such that ‖µ_k‖_{M(K)} = Σ_{j=1}^k |λ_{k,j}| ≤ C ∀k.

From (2.38) we get

λ_{k,j} > 0 ⇒ g(x_j, y_k(x_j)) = b(x_j) ⇒ (∂g/∂y)(x_j, y_k(x_j)) z_k(x_j) ≤ −ερ/2,
λ_{k,j} < 0 ⇒ g(x_j, y_k(x_j)) = a(x_j) ⇒ (∂g/∂y)(x_j, y_k(x_j)) z_k(x_j) ≥ +ερ/2.
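The projection formula (2.39) is simply a pointwise clamp of −ϕ_k(x)/N between the bounds α(x) and β(x). The following Python sketch is our illustration only (the grid values and function names are hypothetical, not from the notes):

```python
# Illustration (assumed discretization, not part of the proof): the projection
# formula u_k(x) = Proj_[alpha(x), beta(x)]( -(1/N) * phi_k(x) ), applied
# pointwise on a grid with spatially varying bounds alpha and beta.

def proj_interval(t, lo, hi):
    """Projection of the real number t onto the interval [lo, hi]."""
    return max(lo, min(hi, t))

def project_control(phi, alpha, beta, N):
    """Pointwise projection of -phi/N onto [alpha(x), beta(x)] on a grid."""
    return [proj_interval(-p / N, a, b) for p, a, b in zip(phi, alpha, beta)]

N = 2.0
phi = [4.0, -10.0, 1.0]
alpha = [-1.0, -1.0, -1.0]
beta = [1.0, 2.0, 1.0]
u = project_control(phi, alpha, beta, N)
assert u == [-1.0, 2.0, -0.5]   # clamped at alpha, clamped at beta, interior
```

The three grid points show the three regimes of (2.39): active lower bound, active upper bound, and the interior case u = −ϕ/N.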

Next, in view of (2.18) with u = u_{0,ε} = ε(u_0 − ū) + ū, we obtain

0 ≤ ∫_Ω (ϕ_k + N u_k)(u_{0,ε} − u_k) dx
 = J′(u_k)(u_{0,ε} − u_k) + Σ_{j=1}^k λ_{k,j} (∂g/∂y)(x_j, y_k(x_j)) z_k(x_j)
 ≤ J′(u_k)(u_{0,ε} − u_k) − (ερ/2) Σ_{j=1}^k |λ_{k,j}|.

This implies that

Σ_{j=1}^k |λ_{k,j}| ≤ (2/(ερ)) J′(u_k)(u_{0,ε} − u_k) → (2/ρ) J′(ū)(u_0 − ū) as k → ∞,

hence (2.44) holds. Now (2.40), (2.42) and (2.43) lead to

(2.45) A*ϕ_k + (∂a_0/∂y)(x, y_k) ϕ_k = (∂L/∂y)(x, y_k) + (∂g/∂y)(x, y_k) µ_k in Ω, ϕ_k = 0 on Γ.

Let us set

v_k(x) = Proj_{[−C_{α,β},+C_{α,β}]}( −(1/N) ϕ_k(x) ),

with C_{α,β} defined by (2.31). From the last relation and (2.39) it follows that u_k(x) = Proj_{[α(x),β(x)]}(v_k(x)). Notice that the trace of u_k on Γ is not necessarily zero; therefore it is a delicate question to multiply the equation (2.45) by u_k and to integrate by parts. However, v_k vanishes on Γ, hence the previous operation can be done without difficulty, and we will do it later. The goal is to prove that {v_k}_{k=1}^∞ is bounded in H¹(Ω), which yields the boundedness of {u_k}_{k=1}^∞ in the same space. The last claim is an immediate consequence of

|∇u_k(x)| ≤ |∇v_k(x)| + |∇α(x)| + |∇β(x)| a.e. in Ω.

If {u_k}_{k=1}^∞ is bounded in H¹(Ω), then ū ∈ H¹(Ω) obviously. Let us prove the boundedness of {v_k}_{k=1}^∞ in H¹_0(Ω). The solution of a Dirichlet problem associated with an elliptic operator with coefficients a_{ij} ∈ C(Ω̄) and Lipschitz boundary Γ belongs to W^{1,r}_0(Ω) if the right-hand side is in W^{−1,r}(Ω), for any n < r < n + ε_n, where ε_n > 0 depends on n and n ∈ {2, 3}; cf. Jerison and Kenig [14] and Mateos [18]. If

p > n/2, then L^p(Ω) ⊂ W^{−1,2p}(Ω), and consequently ϕ_{k,0} ∈ W^{1,r}_0(Ω) holds for all r in the range indicated above with r ≤ 2p. In view of this, we have ϕ_k ∈ W^{1,s}_0(Ω) ∩ W^{1,r}(Ω ∖ S_k), where S_k is the set of points x_j such that λ_{k,j} (∂g/∂y)(x_j, y_k(x_j)) ≠ 0. Notice that by (2.40) only these ϕ_{k,j} appear in the representation of ϕ_k. Taking into account that v_k is constant in a neighborhood of every point x_j ∈ S_k, we deduce that v_k ∈ W^{1,r}_0(Ω) ⊂ C(Ω̄). Therefore, we are justified to multiply equation (2.45) by v_k and to integrate by parts. We get

(2.46) ∫_Ω ( Σ_{i,j=1}^n a_{ij}(x) ∂_{x_i} v_k ∂_{x_j} ϕ_k + (∂a_0/∂y)(x, y_k) v_k ϕ_k ) dx
 = ∫_Ω (∂L/∂y)(x, y_k) v_k dx + Σ_{j=1}^k λ_{k,j} (∂g/∂y)(x_j, y_k(x_j)) v_k(x_j).

From the definition of v_k we obtain for a.a. x ∈ Ω

(2.47) ∇v_k(x) = { −(1/N) ∇ϕ_k(x) if −C_{α,β} ≤ −(1/N) ϕ_k(x) ≤ +C_{α,β}, 0 otherwise }.

Invoking this property in (2.46), along with the boundedness of {y_k}_{k=1}^∞ in C(Ω̄), the estimate ‖v_k‖_{L^∞(Ω)} ≤ C_{α,β}, and the assumptions (A3) and (A4), we get

λ_A N ∫_Ω |∇v_k|² dx ≤ ‖ψ_M‖_{L²(Ω)} ‖v_k‖_{L²(Ω)} + C Σ_{j=1}^k |λ_{k,j}| ‖v_k‖_{L^∞(Ω)} ≤ C′.

Clearly, this implies that {v_k}_{k=1}^∞ is bounded in H¹(Ω), as required.

2.3. On the uniqueness of the Lagrange multiplier µ̄

In this section, we provide a sufficient condition for the uniqueness of the Lagrange multiplier associated with the state constraints. We also analyze some situations where these conditions are satisfied. It is known that non-uniqueness of Lagrange multipliers may lower the efficiency of numerical methods, e.g. primal-dual active set methods. Moreover, some other theoretical properties of optimization problems depend on the uniqueness of multipliers. Therefore, this is a desirable property. In this section we will assume that K = U_{αβ}.

Theorem. Assume (A1)–(A4), (2.19) and the existence of some ε > 0 such that the following property holds:

(2.48) T : L²(Ω_ε) → C(K_0), T v = (∂g/∂y)(x, ȳ(x)) z_v, R(T) = C(K_0),

where R(T) denotes the range of T,

Ω_ε = {x ∈ Ω : α(x) + ε < ū(x) < β(x) − ε},

z_v ∈ H¹_0(Ω) ∩ C^θ(Ω̄) satisfies

(2.49) A z_v + (∂a_0/∂y)(x, ȳ) z_v = v in Ω, z_v = 0 on Γ,

and v is extended by zero to the whole domain Ω. Then there exists a unique Lagrange multiplier µ̄ ∈ M(K) such that (2.15)–(2.18) hold with ᾱ = 1.

Proof. Let us assume to the contrary that µ̄_i, i = 1, 2, are two Lagrange multipliers associated to the state constraints corresponding to the optimal control ū. Let us denote by ϕ̄_i the corresponding adjoint states. Then (2.18) leads to

0 ≤ ∫_Ω (ϕ̄_i + Nū)(u − ū) dx = J′(ū)(u − ū) + ∫_K (∂g/∂y)(x, ȳ(x)) z_{u−ū}(x) dµ̄_i(x), i = 1, 2, ∀u ∈ U_{αβ}.

Taking v ∈ L^∞(Ω_ε) ∖ {0} arbitrarily, we have for a.e. x ∈ Ω

α(x) ≤ u_ρ(x) = ū(x) + ρ v(x) ≤ β(x) if |ρ| < ε / ‖v‖_{L^∞(Ω_ε)},

where v is extended by zero to the whole domain. Inserting u = u_ρ in the above inequality, with positive and negative ρ, and remembering that supp µ̄_i ⊂ K_0, we deduce

J′(ū)v + ∫_{K_0} (∂g/∂y)(x, ȳ(x)) z_v(x) dµ̄_i(x) = 0, i = 1, 2,

which leads to

⟨µ̄_1, T v⟩ = −J′(ū)v = ⟨µ̄_2, T v⟩ ∀v ∈ L^∞(Ω_ε).

Since L^∞(Ω_ε) is dense in L²(Ω_ε) and T(L²(Ω_ε)) is dense in C(K_0), we obtain from the above identity that µ̄_1 = µ̄_2.
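A finite-dimensional analogue may clarify the closing density argument: if T has full range, two multipliers that agree on every ⟨µ_i, Tv⟩ must coincide; equivalently, distinct multipliers are separated by some probe direction v. The following Python sketch is our illustration only (the matrix T and the vectors are hypothetical):

```python
# Finite-dimensional analogue (illustration only) of the uniqueness argument:
# if the linear map T : R^3 -> R^2 is onto, then mu1 != mu2 implies
# <mu1, T v> != <mu2, T v> for some probe v, since mu1 - mu2 cannot
# annihilate the whole range of T.

def apply_T(T, v):
    """Matrix-vector product T v."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

T = [[1.0, 0.0, 2.0],
     [0.0, 1.0, -1.0]]          # full row rank, hence onto R^2

mu1 = [3.0, -4.0]
mu2 = [3.0, -4.0 + 1e-3]        # a slightly different second multiplier

# Probing with the canonical directions v = e_j separates mu1 from mu2:
probes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
gaps = [dot(mu1, apply_T(T, v)) - dot(mu2, apply_T(T, v)) for v in probes]
assert any(abs(g) > 1e-9 for g in gaps)   # some probe detects mu1 != mu2
```

In the theorem the roles of the probes are played by a dense family of directions v ∈ L^∞(Ω_ε), and surjectivity of T is the range condition (2.48).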

Remark. For a finite set K = {x_j}_{j=1}^n, assumption (2.48) is equivalent to the linear independence of the gradients {F′_j(ū)}_{j∈I_0} in L²(Ω_ε), where F_j : L²(Ω_ε) → R is defined by F_j(u) = g(x_j, y_u(x_j)) and I_0 is the set of indices j corresponding to active constraints. It is a regularity assumption on the control problem at ū. This type of assumption was introduced by the authors in [8] to analyze control-constrained problems with finitely many state constraints. The first author proved in [5] that, under very general hypotheses, this assumption is equivalent to the Slater condition in the case of a finite number of pointwise state constraints.

We show finally that (2.48) holds under some more explicit assumptions on ū and on the set of points K_0 where the state constraint is active.

Theorem. Assume that (A1)–(A4) and (2.29) hold and that the coefficients a_{ij} belong to C^{0,1}(Ω̄) (1 ≤ i, j ≤ n). We also suppose the following properties:
(1) The Lebesgue measure of K_0 is zero.
(2) There exists ε > 0 such that, for every open connected component A of Ω ∖ K_0, the set A ∩ Ω_ε has a nonempty interior.
(3) (∂g/∂y)(x, ȳ(x)) ≠ 0 for every x ∈ K_0.
Then the regularity assumption (2.48) is satisfied.

Remark. If α, β ∈ C(Ω̄), then ū ∈ C(Ω̄ ∖ K_0); cf. Theorem 2.8. Hence property (2) of the theorem is fulfilled if ū is not identically equal to α or β in any open connected component A ⊂ Ω ∖ K_0. Indeed, since ū ∈ C(A) and ū ≢ α and ū ≢ β in A, there exists x_0 ∈ A such that α(x_0) < ū(x_0) < β(x_0). Consequently, the continuity of ū implies the existence of ε > 0 such that A ∩ Ω_ε contains a ball B_ρ(x_0). Let us also mention that property (3) of the theorem is trivially satisfied if the state constraint is a(x) ≤ y(x) ≤ b(x) for every x ∈ K.

Proof of the Theorem. Fix ε > 0 as in property (2). We will argue by contradiction. If R(T) ≠ C(K_0), then there exists µ ∈ C(K_0)′ = M(K_0), µ ≠ 0, such that

(2.50) 0 = ⟨µ, T v⟩ = ∫_{K_0} (∂g/∂y)(x, ȳ(x)) z_v(x) dµ(x) ∀v ∈ L²(Ω_ε).

We take the function ψ ∈ W^{1,s}_0(Ω), for all 1 ≤ s < n/(n−1), satisfying

(2.51) A*ψ + (∂a_0/∂y)(x, ȳ(x)) ψ = (∂g/∂y)(x, ȳ(x)) µ in Ω, ψ = 0 on Γ.

From (2.50) and (2.51), it follows for every v ∈ L²(Ω) vanishing outside Ω_ε that

∫_{Ω_ε} ψ v dx = ∫_Ω ψ v dx = ∫_Ω [ A z_v + (∂a_0/∂y)(x, ȳ(x)) z_v ] ψ dx = ∫_{K_0} (∂g/∂y)(x, ȳ(x)) z_v dµ = ⟨µ, T v⟩ = 0,

which implies that ψ = 0 in Ω_ε. Consider now an open connected component A of Ω ∖ K_0. It holds that ψ = 0 in the interior of A ∩ Ω_ε and

A*ψ + (∂a_0/∂y)(x, ȳ(x)) ψ = 0 in A,

therefore ψ = 0 in A; see Saut and Scheurer [23]. Thus we have ψ = 0 in Ω ∖ K_0; but K_0 has zero Lebesgue measure, hence ψ = 0 in Ω, and consequently (2.51) along with property (3) implies that µ = 0, contrary to our previous assumption.

We conclude our paper by proving that the regularity condition (2.48) is stronger than the linearized Slater assumption (2.19).

Theorem. Under the assumptions (A1)–(A4), the regularity condition (2.48) implies the linearized Slater condition (2.19).

Proof. We define the set

B = {z ∈ C(K_0) : a(x) < z(x) + g(x, ȳ(x)) < b(x) ∀x ∈ K_0}.

From our assumptions it follows that B is a nonempty open set of C(K_0); hence condition (2.48) implies the existence of v ∈ L²(Ω_ε) such that T v ∈ B. By density of L^∞(Ω_ε) in L²(Ω_ε), we can assume that v ∈ L^∞(Ω_ε). Outside Ω_ε, we extend v by zero. The inclusion T v ∈ B can be expressed by

(2.52) a(x) < g(x, ȳ(x)) + (∂g/∂y)(x, ȳ(x)) z_v(x) < b(x) ∀x ∈ K_0,

where z_v ∈ H¹_0(Ω) ∩ C(Ω̄) is the solution of (2.49). From here, we deduce the existence of ρ_1 > 0 such that

(2.53) a(x) + ρ_1 < g(x, ȳ(x)) + (∂g/∂y)(x, ȳ(x)) z_v(x) < b(x) − ρ_1 ∀x ∈ K_0.

Given ρ ∈ (0, 1) arbitrary, multiplying the inequalities

(2.54) a(x) ≤ g(x, ȳ(x)) ≤ b(x) ∀x ∈ K

by 1 − ρ, (2.53) by ρ, and adding the resulting inequalities, we get for every x ∈ K_0 and all ρ ∈ (0, 1)

(2.55) a(x) + ρρ_1 < g(x, ȳ(x)) + ρ (∂g/∂y)(x, ȳ(x)) z_v(x) < b(x) − ρρ_1.

Define now, for any δ > 0, K_{0,δ} = {x ∈ K : dist(x, K_0) < δ}. Taking δ small enough, we get from (2.55) for all x ∈ K_{0,δ} and every ρ ∈ (0, 1)

(2.56) a(x) + ρρ_1/2 < g(x, ȳ(x)) + ρ (∂g/∂y)(x, ȳ(x)) z_v(x) < b(x) − ρρ_1/2.

On the other hand, since the state constraint is not active in the compact set K ∖ K_{0,δ}, we deduce the existence of 0 < ρ_2 < 1 such that

(2.57) a(x) + ρ_2 < g(x, ȳ(x)) < b(x) − ρ_2 ∀x ∈ K ∖ K_{0,δ}.

If we select a ρ ∈ (0, 1) that satisfies

ρ |(∂g/∂y)(x, ȳ(x)) z_v(x)| < ρ_2/2 ∀x ∈ K,

we obtain from (2.57)

(2.58) a(x) + ρ_2/2 < g(x, ȳ(x)) + ρ (∂g/∂y)(x, ȳ(x)) z_v(x) < b(x) − ρ_2/2

for every x ∈ K ∖ K_{0,δ}. Finally, taking 0 < ρ < ε/‖v‖_{L^∞(Ω_ε)}, recalling the definition of Ω_ε and that v vanishes in Ω ∖ Ω_ε, we deduce that u_0 = ū + ρv ∈ K. Moreover, (2.56) and (2.58) imply that (2.19) holds, which concludes the proof.
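The convex-combination step leading from (2.53) and (2.54) to (2.55) can be spot-checked numerically. The following sketch is our illustration with arbitrary sample values, not part of the proof:

```python
# Numeric spot-check (illustration only) of the convex-combination step:
# if a + rho1 < g + d < b - rho1 and a <= g <= b, then for every rho in (0,1)
#     a + rho*rho1 < g + rho*d < b - rho*rho1,
# obtained by weighting the strict inequality by rho and the slack one by 1-rho.
a, b, rho1 = 0.0, 1.0, 0.2
g, d = 0.5, 0.25                 # sample values with a + rho1 < g + d < b - rho1
assert a + rho1 < g + d < b - rho1
assert a <= g <= b
for rho in (0.1, 0.5, 0.9):
    mid = rho * (g + d) + (1 - rho) * g          # equals g + rho*d
    assert a + rho * rho1 < mid < b - rho * rho1
```

Here d plays the role of the linearized term (∂g/∂y)(x, ȳ(x)) z_v(x) at a fixed point x.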

CHAPTER 3

Second Order Optimality Conditions

In this chapter the goal is to derive the sufficient conditions for local optimality. We will follow the ideas developed in [6]. In the whole chapter we will assume that N > 0 and K = U_{αβ}. Let us introduce some notation. The Lagrange function associated with problem (P) is defined by ℒ : L²(Ω) × M(K) → R,

ℒ(u, µ) = J(u) + ∫_K g(x, y_u(x)) dµ(x).

Using Proposition 2.2, (2.10) and (2.13), we find that

(3.1) (∂ℒ/∂u)(u, µ) v = ∫_Ω (ϕ_u(x) + N u(x)) v(x) dx,

where ϕ_u ∈ W^{1,s}_0(Ω), for all 1 ≤ s < n/(n−1), is the solution of the Dirichlet problem

(3.2) A*ϕ + (∂a_0/∂y)(x, y_u) ϕ = (∂L/∂y)(x, y_u) + (∂g/∂y)(x, y_u(x)) µ in Ω, ϕ = 0 on Γ.

From the expression (3.1) we get that the inequality (2.18), assuming that ᾱ = 1, can be written as follows:

(3.3) (∂ℒ/∂u)(ū, µ̄)(u − ū) ≥ 0 ∀u ∈ U_{α,β}.

Before we set up the sufficient second order optimality conditions, we evaluate the expression of the second derivative of the Lagrangian with respect to the control. From (2.14) we get

(∂²ℒ/∂u²)(u, µ) v_1 v_2 = J″(u) v_1 v_2 + ∫_K [ (∂²g/∂y²)(x, y_u(x)) z_{v_1}(x) z_{v_2}(x) + (∂g/∂y)(x, y_u(x)) z_{v_1 v_2}(x) ] dµ(x).

By (2.8), (2.9) and (2.11), this is equivalent to

(3.4) (∂²ℒ/∂u²)(u, µ) v_1 v_2
 = ∫_Ω [ (∂²L/∂y²)(x, y_u) z_{v_1} z_{v_2} + N v_1 v_2 − ϕ_u (∂²a_0/∂y²)(x, y_u) z_{v_1} z_{v_2} ] dx
 + ∫_K (∂²g/∂y²)(x, y_u(x)) z_{v_1}(x) z_{v_2}(x) dµ(x),

where ϕ_u is the solution of (3.2). Associated with ū, we define the cone of critical directions by

Cū = {v ∈ L²(Ω) : v satisfies (3.5), (3.6) and (3.7) below},

(3.5) v(x) { ≥ 0 if ū(x) = α(x), ≤ 0 if ū(x) = β(x), = 0 if ϕ̄(x) + Nū(x) ≠ 0 },

(3.6) (∂g/∂y)(x, ȳ(x)) z_v(x) { ≥ 0 if x ∈ K_a, ≤ 0 if x ∈ K_b },

(3.7) ∫_K (∂g/∂y)(x, ȳ(x)) z_v(x) dµ̄(x) = 0,

where z_v ∈ H¹_0(Ω) ∩ C^θ(Ω̄) satisfies

A z_v + (∂a_0/∂y)(x, ȳ) z_v = v in Ω, z_v = 0 on Γ.

The relation (3.6) expresses the natural sign conditions which must be fulfilled for feasible directions at active points x ∈ K_a or K_b, respectively. On the other hand, (3.7) states that the derivative of the state constraint in the direction v must be zero whenever the corresponding Lagrange multiplier is non-vanishing. This restriction is needed for second order sufficient conditions. Compared with the finite-dimensional case, this is exactly what we can expect. Therefore the relations (3.6)–(3.7) provide a convenient extension of the usual conditions of the finite-dimensional case. Condition (3.7) was proved for the first time in the context of infinite-dimensional optimization problems in [6]. In earlier papers on this subject, other extensions to the infinite-dimensional case were suggested. For instance, Maurer and Zowe [19] used first order sufficient conditions to account for the strict positivity of Lagrange multipliers. Inspired by their approach, in [9] an application to state-constrained

elliptic boundary control was suggested by the authors. In terms of our problem, equation (3.7) was relaxed to

∫_K (∂g/∂y)(x, ȳ(x)) z_v(x) dµ̄(x) ≤ ε ∫_{{x ∈ Ω : |ϕ̄(x) + Nū(x)| ≥ τ}} |v(x)| dx

for some ε > 0 and τ > 0; cf. [9, (5.15)]. In the next theorem, which was proven in [6, Theorem 4.3], we will see that this relaxation is not necessary. We obtain a smaller cone of critical directions that seems to be optimal. However, the reader is referred to Theorem 3.2 below, where we consider the possibility of relaxing the conditions defining the cone Cū.

Theorem 3.1. Assume that (A1)–(A4) hold. Let ū be a feasible control of problem (P), ȳ the associated state, and (ϕ̄, µ̄) ∈ W^{1,s}_0(Ω) × M(K), for all 1 ≤ s < n/(n−1), satisfying (2.16)–(2.18) with ᾱ = 1. Assume further that

(3.8) (∂²ℒ/∂u²)(ū, µ̄) v² > 0 ∀v ∈ Cū ∖ {0}.

Then there exist ε > 0 and δ > 0 such that the following inequality holds:

(3.9) J(ū) + (δ/2) ‖u − ū‖²_{L²(Ω)} ≤ J(u) if ‖u − ū‖_{L²(Ω)} ≤ ε and u ∈ U_{ad}.

The condition (3.8) seems to be natural. In fact, under some regularity assumption, we can expect the inequality

(∂²ℒ/∂u²)(ū, µ̄) v² ≥ 0 ∀v ∈ Cū

to be necessary for local optimality. At least, this is the case when the state constraints are of integral type, see [7] and [8], or when K is a finite set of points, see [5]. In the general case of (P), to the best of our knowledge, the necessary second order optimality conditions are still open.

Proof of Theorem 3.1. We argue by contradiction. Suppose that ū does not satisfy the quadratic growth condition (3.9). Then there exists a sequence {u_k}_{k=1}^∞ ⊂ L²(Ω) of feasible controls of (P) such that u_k → ū in L²(Ω) and

(3.10) J(ū) + (1/k) ‖u_k − ū‖²_{L²(Ω)} > J(u_k) ∀k.
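To make the sign conditions (3.5) on critical directions concrete, here is a finite-dimensional analogue in Python (our illustration; the discretized vectors and the tolerance are hypothetical, not from the notes): a direction v qualifies only if it points inward at active control bounds and vanishes wherever ϕ̄ + Nū ≠ 0.

```python
# Finite-dimensional analogue (illustration only) of the sign conditions (3.5):
# at points where u = alpha the direction must be >= 0, where u = beta it must
# be <= 0, and wherever phi + N*u != 0 the direction must vanish.

def satisfies_sign_conditions(v, u, alpha, beta, phi, N, tol=1e-12):
    for vi, ui, ai, bi, pi in zip(v, u, alpha, beta, phi):
        if abs(ui - ai) <= tol and vi < -tol:         # lower bound active
            return False
        if abs(ui - bi) <= tol and vi > tol:          # upper bound active
            return False
        if abs(pi + N * ui) > tol and abs(vi) > tol:  # nonzero gradient => v = 0
            return False
    return True

N = 1.0
alpha, beta = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
u = [0.0, 1.0, 0.5]          # lower active, upper active, interior
phi = [0.0, -1.0, -0.5]      # phi + N*u = 0 everywhere (first order condition)
assert satisfies_sign_conditions([0.3, -0.2, 0.1], u, alpha, beta, phi, N)
assert not satisfies_sign_conditions([-0.3, 0.0, 0.0], u, alpha, beta, phi, N)
```

The analogues of (3.6)–(3.7) would impose the corresponding sign and orthogonality conditions on the linearized state, which we omit here.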

Optimal Control of PDE: Theory and Numerical Analysis. Eduardo Casas. 3rd cycle lecture notes, Castro Urdiales (Spain), 2006.


More information

Equilibria with a nontrivial nodal set and the dynamics of parabolic equations on symmetric domains

Equilibria with a nontrivial nodal set and the dynamics of parabolic equations on symmetric domains Equilibria with a nontrivial nodal set and the dynamics of parabolic equations on symmetric domains J. Földes Department of Mathematics, Univerité Libre de Bruxelles 1050 Brussels, Belgium P. Poláčik School

More information

EXISTENCE OF REGULAR LAGRANGE MULTIPLIERS FOR A NONLINEAR ELLIPTIC OPTIMAL CONTROL PROBLEM WITH POINTWISE CONTROL-STATE CONSTRAINTS

EXISTENCE OF REGULAR LAGRANGE MULTIPLIERS FOR A NONLINEAR ELLIPTIC OPTIMAL CONTROL PROBLEM WITH POINTWISE CONTROL-STATE CONSTRAINTS EXISTENCE OF REGULAR LAGRANGE MULTIPLIERS FOR A NONLINEAR ELLIPTIC OPTIMAL CONTROL PROBLEM WITH POINTWISE CONTROL-STATE CONSTRAINTS A. RÖSCH AND F. TRÖLTZSCH Abstract. A class of optimal control problems

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

K. Krumbiegel I. Neitzel A. Rösch

K. Krumbiegel I. Neitzel A. Rösch SUFFICIENT OPTIMALITY CONDITIONS FOR THE MOREAU-YOSIDA-TYPE REGULARIZATION CONCEPT APPLIED TO SEMILINEAR ELLIPTIC OPTIMAL CONTROL PROBLEMS WITH POINTWISE STATE CONSTRAINTS K. Krumbiegel I. Neitzel A. Rösch

More information

Immerse Metric Space Homework

Immerse Metric Space Homework Immerse Metric Space Homework (Exercises -2). In R n, define d(x, y) = x y +... + x n y n. Show that d is a metric that induces the usual topology. Sketch the basis elements when n = 2. Solution: Steps

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

A. RÖSCH AND F. TRÖLTZSCH

A. RÖSCH AND F. TRÖLTZSCH ON REGULARITY OF SOLUTIONS AND LAGRANGE MULTIPLIERS OF OPTIMAL CONTROL PROBLEMS FOR SEMILINEAR EQUATIONS WITH MIXED POINTWISE CONTROL-STATE CONSTRAINTS A. RÖSCH AND F. TRÖLTZSCH Abstract. A class of nonlinear

More information

Riesz Representation Theorems

Riesz Representation Theorems Chapter 6 Riesz Representation Theorems 6.1 Dual Spaces Definition 6.1.1. Let V and W be vector spaces over R. We let L(V, W ) = {T : V W T is linear}. The space L(V, R) is denoted by V and elements of

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

Sobolev Spaces. Chapter Hölder spaces

Sobolev Spaces. Chapter Hölder spaces Chapter 2 Sobolev Spaces Sobolev spaces turn out often to be the proper setting in which to apply ideas of functional analysis to get information concerning partial differential equations. Here, we collect

More information

The Implicit and Inverse Function Theorems Notes to supplement Chapter 13.

The Implicit and Inverse Function Theorems Notes to supplement Chapter 13. The Implicit and Inverse Function Theorems Notes to supplement Chapter 13. Remark: These notes are still in draft form. Examples will be added to Section 5. If you see any errors, please let me know. 1.

More information

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 21, 2003, 211 226 SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS Massimo Grosi Filomena Pacella S.

More information

Calculus of Variations. Final Examination

Calculus of Variations. Final Examination Université Paris-Saclay M AMS and Optimization January 18th, 018 Calculus of Variations Final Examination Duration : 3h ; all kind of paper documents (notes, books...) are authorized. The total score of

More information

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION

PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION PARTIAL REGULARITY OF BRENIER SOLUTIONS OF THE MONGE-AMPÈRE EQUATION ALESSIO FIGALLI AND YOUNG-HEON KIM Abstract. Given Ω, Λ R n two bounded open sets, and f and g two probability densities concentrated

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence 1 CONVERGENCE THEOR G. ALLAIRE CMAP, Ecole Polytechnique 1. Maximum principle 2. Oscillating test function 3. Two-scale convergence 4. Application to homogenization 5. General theory H-convergence) 6.

More information

THE STOKES SYSTEM R.E. SHOWALTER

THE STOKES SYSTEM R.E. SHOWALTER THE STOKES SYSTEM R.E. SHOWALTER Contents 1. Stokes System 1 Stokes System 2 2. The Weak Solution of the Stokes System 3 3. The Strong Solution 4 4. The Normal Trace 6 5. The Mixed Problem 7 6. The Navier-Stokes

More information

MAT 578 FUNCTIONAL ANALYSIS EXERCISES

MAT 578 FUNCTIONAL ANALYSIS EXERCISES MAT 578 FUNCTIONAL ANALYSIS EXERCISES JOHN QUIGG Exercise 1. Prove that if A is bounded in a topological vector space, then for every neighborhood V of 0 there exists c > 0 such that tv A for all t > c.

More information

Semi-infinite programming, duality, discretization and optimality conditions

Semi-infinite programming, duality, discretization and optimality conditions Semi-infinite programming, duality, discretization and optimality conditions Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205,

More information

The Dirichlet boundary problems for second order parabolic operators satisfying a Carleson condition

The Dirichlet boundary problems for second order parabolic operators satisfying a Carleson condition The Dirichlet boundary problems for second order parabolic operators satisfying a Carleson condition Sukjung Hwang CMAC, Yonsei University Collaboration with M. Dindos and M. Mitrea The 1st Meeting of

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

l(y j ) = 0 for all y j (1)

l(y j ) = 0 for all y j (1) Problem 1. The closed linear span of a subset {y j } of a normed vector space is defined as the intersection of all closed subspaces containing all y j and thus the smallest such subspace. 1 Show that

More information

Notes on Distributions

Notes on Distributions Notes on Distributions Functional Analysis 1 Locally Convex Spaces Definition 1. A vector space (over R or C) is said to be a topological vector space (TVS) if it is a Hausdorff topological space and the

More information

The Caratheodory Construction of Measures

The Caratheodory Construction of Measures Chapter 5 The Caratheodory Construction of Measures Recall how our construction of Lebesgue measure in Chapter 2 proceeded from an initial notion of the size of a very restricted class of subsets of R,

More information

Sobolev Spaces. Chapter 10

Sobolev Spaces. Chapter 10 Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p

More information

2 Lebesgue integration

2 Lebesgue integration 2 Lebesgue integration 1. Let (, A, µ) be a measure space. We will always assume that µ is complete, otherwise we first take its completion. The example to have in mind is the Lebesgue measure on R n,

More information

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0)

g(x) = P (y) Proof. This is true for n = 0. Assume by the inductive hypothesis that g (n) (0) = 0 for some n. Compute g (n) (h) g (n) (0) Mollifiers and Smooth Functions We say a function f from C is C (or simply smooth) if all its derivatives to every order exist at every point of. For f : C, we say f is C if all partial derivatives to

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space.

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space. University of Bergen General Functional Analysis Problems with solutions 6 ) Prove that is unique in any normed space. Solution of ) Let us suppose that there are 2 zeros and 2. Then = + 2 = 2 + = 2. 2)

More information

(c) For each α R \ {0}, the mapping x αx is a homeomorphism of X.

(c) For each α R \ {0}, the mapping x αx is a homeomorphism of X. A short account of topological vector spaces Normed spaces, and especially Banach spaces, are basic ambient spaces in Infinite- Dimensional Analysis. However, there are situations in which it is necessary

More information

CAPACITIES ON METRIC SPACES

CAPACITIES ON METRIC SPACES [June 8, 2001] CAPACITIES ON METRIC SPACES V. GOL DSHTEIN AND M. TROYANOV Abstract. We discuss the potential theory related to the variational capacity and the Sobolev capacity on metric measure spaces.

More information

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan ************************************* Applied Analysis I - (Advanced PDE I) (Math 94, Fall 214) by Baisheng Yan Department of Mathematics Michigan State University yan@math.msu.edu Contents Chapter 1.

More information

2 A Model, Harmonic Map, Problem

2 A Model, Harmonic Map, Problem ELLIPTIC SYSTEMS JOHN E. HUTCHINSON Department of Mathematics School of Mathematical Sciences, A.N.U. 1 Introduction Elliptic equations model the behaviour of scalar quantities u, such as temperature or

More information

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Sergey E. Mikhailov Brunel University West London, Department of Mathematics, Uxbridge, UB8 3PH, UK J. Math. Analysis

More information

TD M1 EDP 2018 no 2 Elliptic equations: regularity, maximum principle

TD M1 EDP 2018 no 2 Elliptic equations: regularity, maximum principle TD M EDP 08 no Elliptic equations: regularity, maximum principle Estimates in the sup-norm I Let be an open bounded subset of R d of class C. Let A = (a ij ) be a symmetric matrix of functions of class

More information

are Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication

are Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication 7. Banach algebras Definition 7.1. A is called a Banach algebra (with unit) if: (1) A is a Banach space; (2) There is a multiplication A A A that has the following properties: (xy)z = x(yz), (x + y)z =

More information

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 33, 2009, 335 353 FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES Yılmaz Yılmaz Abstract. Our main interest in this

More information

THEOREMS, ETC., FOR MATH 516

THEOREMS, ETC., FOR MATH 516 THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

Chapter 2 Metric Spaces

Chapter 2 Metric Spaces Chapter 2 Metric Spaces The purpose of this chapter is to present a summary of some basic properties of metric and topological spaces that play an important role in the main body of the book. 2.1 Metrics

More information

The De Giorgi-Nash-Moser Estimates

The De Giorgi-Nash-Moser Estimates The De Giorgi-Nash-Moser Estimates We are going to discuss the the equation Lu D i (a ij (x)d j u) = 0 in B 4 R n. (1) The a ij, with i, j {1,..., n}, are functions on the ball B 4. Here and in the following

More information

Lectures on. Sobolev Spaces. S. Kesavan The Institute of Mathematical Sciences, Chennai.

Lectures on. Sobolev Spaces. S. Kesavan The Institute of Mathematical Sciences, Chennai. Lectures on Sobolev Spaces S. Kesavan The Institute of Mathematical Sciences, Chennai. e-mail: kesh@imsc.res.in 2 1 Distributions In this section we will, very briefly, recall concepts from the theory

More information

1. Introduction Boundary estimates for the second derivatives of the solution to the Dirichlet problem for the Monge-Ampere equation

1. Introduction Boundary estimates for the second derivatives of the solution to the Dirichlet problem for the Monge-Ampere equation POINTWISE C 2,α ESTIMATES AT THE BOUNDARY FOR THE MONGE-AMPERE EQUATION O. SAVIN Abstract. We prove a localization property of boundary sections for solutions to the Monge-Ampere equation. As a consequence

More information

Introduction to Bases in Banach Spaces

Introduction to Bases in Banach Spaces Introduction to Bases in Banach Spaces Matt Daws June 5, 2005 Abstract We introduce the notion of Schauder bases in Banach spaces, aiming to be able to give a statement of, and make sense of, the Gowers

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: WEAK AND WEAK* CONVERGENCE

FUNCTIONAL ANALYSIS LECTURE NOTES: WEAK AND WEAK* CONVERGENCE FUNCTIONAL ANALYSIS LECTURE NOTES: WEAK AND WEAK* CONVERGENCE CHRISTOPHER HEIL 1. Weak and Weak* Convergence of Vectors Definition 1.1. Let X be a normed linear space, and let x n, x X. a. We say that

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

EXISTENCE OF THREE WEAK SOLUTIONS FOR A QUASILINEAR DIRICHLET PROBLEM. Saeid Shokooh and Ghasem A. Afrouzi. 1. Introduction

EXISTENCE OF THREE WEAK SOLUTIONS FOR A QUASILINEAR DIRICHLET PROBLEM. Saeid Shokooh and Ghasem A. Afrouzi. 1. Introduction MATEMATIČKI VESNIK MATEMATIQKI VESNIK 69 4 (217 271 28 December 217 research paper originalni nauqni rad EXISTENCE OF THREE WEAK SOLUTIONS FOR A QUASILINEAR DIRICHLET PROBLEM Saeid Shokooh and Ghasem A.

More information

REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS

REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS REGULARITY OF MONOTONE TRANSPORT MAPS BETWEEN UNBOUNDED DOMAINS DARIO CORDERO-ERAUSQUIN AND ALESSIO FIGALLI A Luis A. Caffarelli en su 70 años, con amistad y admiración Abstract. The regularity of monotone

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

ON SOME ELLIPTIC PROBLEMS IN UNBOUNDED DOMAINS

ON SOME ELLIPTIC PROBLEMS IN UNBOUNDED DOMAINS Chin. Ann. Math.??B(?), 200?, 1 20 DOI: 10.1007/s11401-007-0001-x ON SOME ELLIPTIC PROBLEMS IN UNBOUNDED DOMAINS Michel CHIPOT Abstract We present a method allowing to obtain existence of a solution for

More information