Necessary conditions for convergence rates of regularizations of optimal control problems

Daniel Wachsmuth and Gerd Wachsmuth

Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstraße 69, A-4040 Linz, Austria
Chemnitz University of Technology, Department of Mathematics, D-09107 Chemnitz, Germany

Abstract. We investigate the Tikhonov regularization of control constrained optimal control problems. We use a specialized source condition in combination with a condition on the active sets. In the case of high convergence rates, these conditions are necessary and sufficient.

Keywords: optimal control problem, inequality constraints, Tikhonov regularization, source condition

1 Introduction

In this article, we investigate regularization schemes for the following class of optimization problems:

  Minimize $\frac12 \|Su - z\|_Y^2 + \beta \|u\|_{L^1(\Omega)}$ such that $u \in L^2(\Omega)$ and $u_a \le u \le u_b$ a.e. on $\Omega$.   (P)

Here, $\Omega$ is a measurable subset of $\mathbb{R}^n$, $n \ge 1$, $Y$ is a Hilbert space, $S : L^2(\Omega) \to Y$ is a bounded linear operator, and the function $z \in Y$ is given. The parameter $\beta$ is assumed to be non-negative. The control constraints $u_a, u_b \in L^\infty(\Omega)$ satisfy $u_a \le 0 \le u_b$.

This model problem can be interpreted as an optimal control problem as well as an inverse problem. From the point of view of inverse problems, the unknown $u$ has to be reconstructed in order to reproduce given measurements $z$. The inequality constraints on $u$ reflect certain a-priori knowledge about the solution $u$ of the linear ill-posed equation $Su = z$. If the problem at hand is seen as an optimal control problem, then $u$ is the control and $Su$ the state of the system, which has to be close to a desired state $z$; the inequality constraints restrict the feasible set and may prevent the state $Su$ from reaching the target $z$. If the parameter $\beta$ is positive, then the resulting optimal control will be sparse, that is, its support is a possibly small subset of $\Omega$.

The resulting optimization problem (P) is nevertheless ill-posed if $S$ is not continuously invertible. Due to the control constraints, problem (P) still possesses a solution, which is even unique if $S$ is injective. However, the solution may be

unstable with respect to perturbations in the problem data, for instance in the given state $z$. Here, small perturbations due to measurement errors may lead to large changes in the solution. Consequently, any numerical approximation of (P) is challenging to solve, and numerical approximations of solutions may converge arbitrarily slowly. Let us note that a positive value of $\beta$ does not make the problem well-posed. This is due to the fact that $L^1(\Omega)$ is not a dual space, and hence bounded sets in $L^1(\Omega)$ are not compact with respect to the weak(-star) topology, see also the discussions in [7,8].

In order to overcome this difficulty, we apply common ideas from inverse problem theory. We will study a regularization of the type

  Minimize $\frac12 \|Su - z\|_Y^2 + \beta \|u\|_{L^1(\Omega)} + \frac{\alpha}{2} \|u\|_{L^2(\Omega)}^2$ such that $u \in L^2(\Omega)$ and $u_a \le u \le u_b$ a.e. on $\Omega$,   (P_α)

where $\alpha > 0$ is given. Clearly, the problems (P_α) are uniquely solvable for $\alpha > 0$. Now the question arises whether their solutions $u_\alpha$ converge (weakly or strongly) to a solution $u_0$ of (P) for $\alpha \to 0$. Moreover, in the case of convergence, one is interested in proving convergence rates of $\|u_\alpha - u_0\|_{L^2(\Omega)}$ and $\|Su_\alpha - Su_0\|_Y$ under suitable assumptions.

In this work, we will prove necessary conditions for convergence rates. In some parts, the necessary conditions are similar to sufficient conditions found in earlier works [7,8]. Moreover, the derivation of the necessary conditions led to a weakened sufficient condition for convergence rates, see Theorem 3.

1.1 Standing assumptions and notation

Let us fix the standing assumptions on the problem (P). We assume that $S : L^2(\Omega) \to Y$ is linear and continuous. In many applications this operator $S$ is compact. Furthermore, we assume that the Hilbert space adjoint operator $S^*$ maps into $L^\infty(\Omega)$, i.e., $S^* \in \mathcal{L}(Y, L^\infty(\Omega))$. These assumptions imply that the range of $S$ is closed in $Y$ if and only if the range of $S$ is finite-dimensional, see [8, Prop. 2.1]. Hence, up to trivial cases, (P) is ill-posed.

The set of feasible functions $u$ is given by

  $U_{ad} := \{u \in L^2(\Omega) : u_a \le u \le u_b \text{ a.e. on } \Omega\}$.

The problem (P_α) is uniquely solvable for $\alpha > 0$. We will denote its solution by $u_\alpha$, with the corresponding state $y_\alpha := S u_\alpha$ and adjoint state $p_\alpha := S^*(z - y_\alpha)$. There is a unique solution of (P) with minimal $L^2(\Omega)$ norm, see [8, Thm. 2.3, Lem. 2.7]. This solution and the associated state and adjoint state will be denoted by $u_0$, $y_0$ and $p_0$, respectively. Note that the weak convergence $u_\alpha \rightharpoonup u$ in $L^2(\Omega)$, where $u$ is a solution of (P), already implies $u = u_0$, see [8, Rem. 3.3].

1.2 Optimality conditions

As both problems (P) and (P_α) are convex, their solutions can be characterized by the following necessary and sufficient optimality conditions:

Theorem 1 ([7, Lemma 2.2]). Let $\alpha \ge 0$ be given, and let $u_\alpha$ be a solution of (P_α) (or of (P) in the case $\alpha = 0$). Then there exists a subgradient $\lambda_\alpha \in \partial\|u_\alpha\|_{L^1(\Omega)}$ such that the variational inequality

  $(\alpha u_\alpha - p_\alpha + \beta \lambda_\alpha,\ u - u_\alpha) \ge 0 \quad \forall u \in U_{ad}$   (1)

is satisfied, where $p_\alpha = S^*(z - Su_\alpha)$ is the associated adjoint state. Here, $(\cdot\,,\cdot)$ refers to the scalar product in $L^2(\Omega)$.

Standard arguments (see [6, Section 2.8]) lead to a pointwise a.e. interpretation of the variational inequality, which in turn implies the following relation between $u_\alpha$ and $p_\alpha$ in the case $\alpha > 0$:

  $u_\alpha(x) = \begin{cases} u_a(x) & \text{if } p_\alpha(x) < \alpha u_a(x) - \beta, \\ \frac{1}{\alpha}(p_\alpha(x) + \beta) & \text{if } \alpha u_a(x) - \beta \le p_\alpha(x) \le -\beta, \\ 0 & \text{if } |p_\alpha(x)| < \beta, \\ \frac{1}{\alpha}(p_\alpha(x) - \beta) & \text{if } \beta \le p_\alpha(x) \le \alpha u_b(x) + \beta, \\ u_b(x) & \text{if } \alpha u_b(x) + \beta < p_\alpha(x) \end{cases}$  a.e. on $\Omega$.   (2)

In the case $\alpha = 0$, we have

  $u_0(x) \begin{cases} = u_a(x) & \text{if } p_0(x) < -\beta, \\ \in [u_a(x), 0] & \text{if } p_0(x) = -\beta, \\ = 0 & \text{if } |p_0(x)| < \beta, \\ \in [0, u_b(x)] & \text{if } p_0(x) = \beta, \\ = u_b(x) & \text{if } \beta < p_0(x) \end{cases}$  a.e. on $\Omega$.   (3)

Note that if $\beta = 0$, one obtains $u_0(x) \in [u_a(x), u_b(x)]$ where $p_0(x) = 0$ in (3). This implies that $u_0(x)$ is uniquely determined by $p_0(x)$ on the set where $|p_0(x)| \ne \beta$ holds.

2 Sufficient conditions for convergence rates

Let us first recall the sufficient conditions for convergence rates as obtained in [8]. We will work with the following assumption. There we denote by $\mathrm{proj}_{[a,b]}(v)$ the projection of the real number $v$ onto the interval $[a, b]$.

Assumption 2. Let $u_0$ be a solution of (P). Let us assume that there exist a measurable set $I \subset \Omega$, a function $w \in Y$, and positive constants $\kappa$, $c$ such that it holds:

1. (source condition) $I \supset \{x \in \Omega : |p_0(x)| = \beta\}$, and for almost all $x \in I$

  $u_0(x) = \begin{cases} \mathrm{proj}_{[u_a(x),0]}\big((S^*w)(x)\big) & \text{if } \beta > 0,\ p_0(x) \le -\beta/2, \\ \mathrm{proj}_{[0,u_b(x)]}\big((S^*w)(x)\big) & \text{if } \beta > 0,\ p_0(x) \ge \beta/2, \\ \mathrm{proj}_{[u_a(x),u_b(x)]}\big((S^*w)(x)\big) & \text{if } \beta = 0. \end{cases}$   (4)

2. (structure of the active set) $A = \Omega \setminus I$ and for all $\varepsilon > 0$

  $\mathrm{meas}\big(\{x \in A : 0 < |p_0(x)| - \beta < \varepsilon\}\big) \le c\, \varepsilon^\kappa$.   (5)

Some remarks are in order. The first part of the assumption is analogous to source conditions in inverse problems: we assume that on the set $I \subset \Omega$ the solution $u_0$ is the restriction to $I$ of a certain pointwise projection of an element in the range of $S^*$. This part of the condition is different from other conditions in the literature: in our earlier work [8] we used the assumption $u_0(x) = \mathrm{proj}_{[u_a(x),u_b(x)]}\big((S^*w)(x)\big)$ on $I$. However, in the light of the derivation of necessary conditions it turns out that such a condition can be weakened without losing anything with respect to convergence rates. In works on inverse problems [3,5], the source condition $u_0 = \mathrm{proj}_{U_{ad}}(S^*w)$ is used, which is retained as the special case $I = \Omega$ in Assumption 2.

The assumption (5) on the active sets was already employed to obtain regularization error estimates [7,8], error estimates for finite-element discretizations of (P) [2], as well as stability results for bang-bang controls [4].

Theorem 3. Let Assumption 2 be satisfied. Let $d$ be defined as

  $d = \begin{cases} \frac{1}{2-\kappa} & \text{if } \kappa \le 1, \\ 1 & \text{if } \kappa > 1 \text{ and } w \ne 0, \\ \frac{\kappa+1}{2} & \text{if } \kappa > 1 \text{ and } w = 0. \end{cases}$

Then there are $\alpha_{\max} > 0$ and a constant $c > 0$ such that

  $\|y_0 - y_\alpha\|_Y \le c\, \alpha^d$,
  $\|p_0 - p_\alpha\|_{L^\infty(\Omega)} \le c\, \alpha^d$,
  $\|u_0 - u_\alpha\|_{L^2(\Omega)} \le c\, \alpha^{d - 1/2}$

hold for all $\alpha \in (0, \alpha_{\max}]$.

Under the assumptions of the theorem, one can also prove convergence rates for $\|u_\alpha - u_0\|_{L^1(A)}$ [8].

Proof. The proof is analogous to the proof of [8, Thm. 3.4]. We have to take into account the modification of the source condition (4) in the case $\beta > 0$. By [8, Lemma 2.2], we have

  $\|y_0 - y_\alpha\|_Y^2 + \alpha \|u_0 - u_\alpha\|_{L^2(\Omega)}^2 \le \alpha\, (u_0,\ u_0 - u_\alpha)$.   (6)

Since $U_{ad}$ is bounded, we obtain $\|p_0 - p_\alpha\|_{L^\infty(\Omega)} \le c\, \alpha^{1/2}$ for some $c > 0$ independent of $\alpha$. Let now $\alpha$ be small enough such that $\|p_0 - p_\alpha\|_{L^\infty(\Omega)} < \beta/2$. This implies that $p_0$ and $p_\alpha$ have the same sign on the set $\{x \in I : |p_0(x)| \ge \beta/2\}$. Consequently,

$u_0$ and $u_\alpha$ have the same sign on this set, too. Moreover, on the set $\{x \in I : |p_0(x)| < \beta/2\}$ it holds $|p_\alpha| < \beta$, and hence $u_\alpha = 0 = u_0$ on this set. This yields

  $(\chi_I u_0,\ u_0 - u_\alpha) \le (\chi_I S^*w,\ u_0 - u_\alpha)$

for $\alpha > 0$ small enough. Note that in the case $w = 0$, the right-hand side in the previous estimate vanishes and it remains to estimate $(\chi_A u_0,\ u_0 - u_\alpha)$. Arguing as in the proof of [8, Thm. 3.4] proves the claim.

3 Necessary conditions for convergence rates

3.1 Necessity of the source condition (4)

Theorem 4. Let us suppose that $\|y_\alpha - y_0\|_Y = O(\alpha)$ with $y_0 = Su_0$. Then there exists $w \in Y$ such that

  $u_0(x) = \begin{cases} \mathrm{proj}_{[u_a(x),0]}\big((S^*w)(x)\big) & \text{if } \beta > 0,\ p_0(x) = -\beta, \\ \mathrm{proj}_{[0,u_b(x)]}\big((S^*w)(x)\big) & \text{if } \beta > 0,\ p_0(x) = +\beta, \\ \mathrm{proj}_{[u_a(x),u_b(x)]}\big((S^*w)(x)\big) & \text{if } \beta = 0. \end{cases}$

If moreover $\|y_\alpha - y_0\|_Y = o(\alpha)$, then $u_0 = 0$ on $\{x \in \Omega : |p_0(x)| = \beta\}$.

This result shows that the source condition (4) is necessary on the set $\{x \in \Omega : |p_0(x)| = \beta\}$.

Proof. Let us prove the claim in the case $\beta > 0$. The result in the case $\beta = 0$ can be proved with obvious modifications. Let us take a test function $u \in U_{ad}$ defined by

  $u(x) \begin{cases} = u_a(x) & \text{if } p_0(x) < -\beta, \\ \in [u_a(x), 0] & \text{if } p_0(x) = -\beta, \\ = 0 & \text{if } |p_0(x)| < \beta, \\ \in [0, u_b(x)] & \text{if } p_0(x) = \beta, \\ = u_b(x) & \text{if } p_0(x) > \beta. \end{cases}$

Due to the relation

  $\lambda_0 = \mathrm{proj}_{[-1,1]}\big(\beta^{-1} p_0\big)$,   (7)

which is a consequence of the necessary optimality condition, see [1, Cor. 3.2] for a proof, we obtain $\lambda_0 = \pm 1$ where $p_0 = \pm\beta$. Hence it holds

  $-(p_0,\ u - u_0) = -\beta\, (\lambda_0,\ u - u_0) = \beta \|u_0\|_{L^1(\Omega)} - \beta \|u\|_{L^1(\Omega)}$   (8)

for $u$ as above. Since $\lambda_0 \in \partial\|u_0\|_{L^1(\Omega)}$, we obtain

  $(\lambda_0,\ u_\alpha - u_0) \le \|u_\alpha\|_{L^1(\Omega)} - \|u_0\|_{L^1(\Omega)}$.   (9)

Using the optimality of $u_\alpha$ and the relation $p_\alpha = p_0 - S^*(S(u_\alpha - u_0))$ we get

  $(-p_0 + S^*(S(u_\alpha - u_0)) + \alpha u_\alpha,\ u - u_\alpha) + \beta \|u\|_{L^1(\Omega)} - \beta \|u_\alpha\|_{L^1(\Omega)} \ge 0$.

Adding $(-p_0 + \beta\lambda_0,\ u_\alpha - u_0) \ge 0$ to the left-hand side yields

  $(S^*(S(u_\alpha - u_0)) + \alpha u_\alpha,\ u - u_\alpha) - (p_0,\ u - u_0) + \beta(\lambda_0,\ u_\alpha - u_0) + \beta \|u\|_{L^1(\Omega)} - \beta \|u_\alpha\|_{L^1(\Omega)} \ge 0$.

Using (8) and (9) we obtain

  $(S^*(S(u_\alpha - u_0)) + \alpha u_\alpha,\ u - u_\alpha) \ge 0$.

Due to the assumptions of the theorem, the functions $\alpha^{-1}(S(u_\alpha - u_0)) = \alpha^{-1}(y_\alpha - y_0)$ are uniformly bounded for $\alpha \to 0$. As a consequence, $\alpha \to 0$ implies

  $(S^* \dot y_0 + u_0,\ u - u_0) \ge 0$

for any weak subsequential limit $\dot y_0$ of $\alpha^{-1}(y_\alpha - y_0)$. Due to the construction of the test function $u$, we obtain

  $u_0 = \begin{cases} \mathrm{proj}_{[u_a,0]}(-S^*\dot y_0) & \text{where } p_0 = -\beta, \\ \mathrm{proj}_{[0,u_b]}(-S^*\dot y_0) & \text{where } p_0 = +\beta. \end{cases}$

If $\|y_\alpha - y_0\|_Y = o(\alpha)$, then $\alpha^{-1}(y_\alpha - y_0) \to 0$ strongly in $Y$ for $\alpha \to 0$, hence $\dot y_0 = 0$, and $u_0 = 0$ on the set $\{|p_0| = \beta\}$.

As can be seen from the proof, the element that realizes the source condition can be interpreted as the (weak) directional derivative of $\alpha \mapsto y_\alpha$ at $\alpha = 0$. The result of the theorem resembles known results on the necessity of the source condition in linear inverse problems, see e.g. [3,5].

3.2 Necessity of the condition (5) on the active set

Let us define

  $A := \{x \in \Omega : \big||p_0(x)| - \beta\big| > 0\}$  and  $\mu(\varepsilon) := \mathrm{meas}\big(\{x \in \Omega : 0 < |p_0(x)| - \beta < \varepsilon\}\big)$.

In this section, we want to prove the bound $\mu(\varepsilon) \le C\varepsilon^\kappa$, which implies necessity of the condition (5). For $\alpha > 0$ let $\tilde u_\alpha$ denote the unique solution of

  $\min_{u \in U_{ad}}\ -(u,\ p_0) + \frac{\alpha}{2}\|u\|_{L^2(\Omega)}^2 + \beta \|u\|_{L^1(\Omega)}$.   (P_α^aux)

Analogously to (2), we have the representation

  $\tilde u_\alpha(x) = \begin{cases} u_a(x) & \text{if } p_0(x) < \alpha u_a(x) - \beta, \\ \frac{1}{\alpha}(p_0(x) + \beta) & \text{if } \alpha u_a(x) - \beta \le p_0(x) \le -\beta, \\ 0 & \text{if } |p_0(x)| < \beta, \\ \frac{1}{\alpha}(p_0(x) - \beta) & \text{if } \beta \le p_0(x) \le \alpha u_b(x) + \beta, \\ u_b(x) & \text{if } \alpha u_b(x) + \beta < p_0(x) \end{cases}$  a.e. on $\Omega$.   (10)

Let us first prove a relation between the convergence rates of $\|u_0 - \tilde u_\alpha\|_{L^2(A)}$ and $\mu(\varepsilon)$ for $\alpha \to 0$ and $\varepsilon \to 0$, respectively.

Lemma 5. Let us assume that there is $\sigma > 0$ such that $u_a \le -\sigma < 0 < \sigma \le u_b$ a.e. on $\Omega$. Then it holds: If $\|u_0 - \tilde u_\alpha\|_{L^2(A)} = O(\alpha^d)$, $d > 0$, for $\alpha \to 0$, then $\mu(\varepsilon) = O(\varepsilon^{2d})$ for $\varepsilon \to 0$.

Proof. Due to the pointwise representations of $\tilde u_\alpha$ and $u_0$ in (10) and (3), respectively, it holds

  $\|u_0 - \tilde u_\alpha\|_{L^2(A)}^2 = \int_{\{\beta < p_0 < \alpha u_b + \beta\}} \big(u_b - \tfrac1\alpha(p_0 - \beta)\big)^2 + \int_{\{\alpha u_a - \beta < p_0 < -\beta\}} \big(u_a - \tfrac1\alpha(p_0 + \beta)\big)^2$.

Due to the assumption on the control constraints we have $u_b - \tfrac1\alpha(p_0 - \beta) \ge \sigma - \sigma/2 = \sigma/2$ on $\{\beta < p_0 < \alpha\sigma/2 + \beta\}$, and analogously on $\{-\alpha\sigma/2 - \beta < p_0 < -\beta\}$, so that

  $\|u_0 - \tilde u_\alpha\|_{L^2(A)}^2 \ge (\sigma/2)^2\, \mu(\alpha\sigma/2)$.

Hence, if $\|u_0 - \tilde u_\alpha\|_{L^2(A)} = O(\alpha^d)$, then

  $\mu(\alpha\sigma/2) \le \frac{O(\alpha^{2d})}{(\sigma/2)^2}$

for $\alpha \to 0$, which proves the claim.

Using the same arguments, we can prove the following result.

Corollary 6. Let the requirements of Lemma 5 be satisfied. Let $p \in [1, \infty)$ be given. Then it holds

  $\big(\tfrac{\sigma}{2}\big)^p\, \mu\big(\tfrac{\sigma}{2}\alpha\big) \le \|u_0 - \tilde u_\alpha\|_{L^p(A)}^p \le M^p\, \mu(M\alpha)$

with $M = \max\big(\|u_a\|_{L^\infty(\Omega)}, \|u_b\|_{L^\infty(\Omega)}\big)$.
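To make the pointwise formulas concrete, the following sketch evaluates the representation (10) and the measure $\mu(\varepsilon)$ on a toy one-dimensional discretization. The grid, the adjoint $p_0$, the bounds and all parameter values are hypothetical choices made only for illustration; the sketch is not part of the analysis and merely visualizes how the decay of $\|u_0 - \tilde u_\alpha\|_{L^2(A)}$ relates to the decay of $\mu$, in the spirit of Lemma 5 and Corollary 6.

```python
import numpy as np

# Toy discretization of Omega = (0,1); all data here are hypothetical.
n = 10_000
h = 1.0 / n                           # quadrature weight of a midpoint grid
x = (np.arange(n) + 0.5) * h
p0 = np.sin(2.0 * np.pi * x)          # stand-in for the adjoint state p_0
ua, ub = -1.0, 1.0                    # bounds with u_a <= 0 <= u_b
beta = 0.25                           # sparsity parameter

def u_from_adjoint(p, alpha):
    """Pointwise representation (10): clipped soft-thresholding of p/alpha.

    Written with np.clip this reproduces all five cases of (2)/(10) at once."""
    soft = np.sign(p) * np.maximum(np.abs(p) - beta, 0.0)
    return np.clip(soft / alpha, ua, ub)

def mu(eps):
    """Approximate measure of the set {0 < |p_0| - beta < eps}."""
    d = np.abs(p0) - beta
    return h * np.count_nonzero((d > 0.0) & (d < eps))

# u_0 has the bang-bang/zero structure of (3); numerically we approximate the
# limit alpha -> 0 by a very small regularization parameter.
u0 = u_from_adjoint(p0, 1e-12)

for alpha in (1e-1, 1e-2, 1e-3):
    err = np.sqrt(h * np.sum((u0 - u_from_adjoint(p0, alpha)) ** 2))
    print(f"alpha={alpha:7.1e}  L2 error = {err:.3e}  mu(alpha) = {mu(alpha):.3e}")
```

For this particular $p_0$, which crosses the thresholds $\pm\beta$ transversally, one observes $\mu(\varepsilon) \approx c\,\varepsilon$ (so $\kappa = 1$) and $\|u_0 - \tilde u_\alpha\|_{L^2(A)} \approx c\,\alpha^{1/2}$, which is consistent with the exponent relation of Lemma 5.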

Lemma 7. Let $\tilde u_\alpha$ be defined as above. Then it holds

  $\alpha \|\tilde u_\alpha - u_\alpha\|_{L^2(\Omega)}^2 + \|y_0 - y_\alpha\|_Y^2 \le (p_0 - p_\alpha,\ \tilde u_\alpha - u_0)$.

Proof. Since $u_\alpha$ and $\tilde u_\alpha$ solve (P_α) and (P_α^aux), respectively, we have

  $(\alpha u_\alpha - p_\alpha + \beta \lambda_\alpha,\ \tilde u_\alpha - u_\alpha) \ge 0$,
  $(\alpha \tilde u_\alpha - p_0 + \beta \tilde\lambda_\alpha,\ u_\alpha - \tilde u_\alpha) \ge 0$,

with some $\tilde\lambda_\alpha \in \partial\|\tilde u_\alpha\|_{L^1(\Omega)}$. Due to the monotonicity of the subdifferential we have $(\lambda_\alpha - \tilde\lambda_\alpha,\ u_\alpha - \tilde u_\alpha) \ge 0$. This gives

  $\alpha \|\tilde u_\alpha - u_\alpha\|_{L^2(\Omega)}^2 \le (p_0 - p_\alpha,\ \tilde u_\alpha - u_\alpha)$.

The identity

  $(p_0 - p_\alpha,\ \tilde u_\alpha - u_\alpha) = (p_0 - p_\alpha,\ \tilde u_\alpha - u_0 + u_0 - u_\alpha) = (p_0 - p_\alpha,\ \tilde u_\alpha - u_0) - \|y_0 - y_\alpha\|_Y^2$

finishes the proof.

Theorem 8. Let us assume that there is $\sigma > 0$ such that $u_a \le -\sigma < 0 < \sigma \le u_b$ a.e. on $\Omega$. Then we have the following implication: If

  $\|u_0 - u_\alpha\|_{L^2(\Omega)} = O(\alpha^{d-1/2})$,  $\|y_0 - y_\alpha\|_Y = O(\alpha^d)$  for $\alpha \to 0$

holds with $d > 1$, then it follows

  $\mu(\varepsilon) = O(\varepsilon^{2d-1})$  for $\varepsilon \to 0$.

Proof. Let us begin with

  $\|u_0 - \tilde u_\alpha\|_{L^2(\Omega)}^2 \le 2\big(\|u_0 - u_\alpha\|_{L^2(\Omega)}^2 + \|u_\alpha - \tilde u_\alpha\|_{L^2(\Omega)}^2\big) \le O(\alpha^{2d-1}) + \tfrac{2}{\alpha}(p_0 - p_\alpha,\ \tilde u_\alpha - u_0) \le O(\alpha^{2d-1}) + O(\alpha^{d-1})\, \|u_0 - \tilde u_\alpha\|_{L^2(\Omega)}$,

which gives $\|u_0 - \tilde u_\alpha\|_{L^2(\Omega)} = O(\alpha^{d-1})$. Hence by Lemma 5, we obtain $\mu(\varepsilon) = O(\varepsilon^{2d-2})$.

Let us note that the convergence rates imply $u_0(x) = 0$ if $|p_0(x)| = \beta$ by Theorem 4. This means $u_0 = \tilde u_\alpha = 0$ on the set $\{x \in \Omega : |p_0(x)| = \beta\} = \Omega \setminus A$, cf. (10). Using the convergence rate $\|p_0 - p_\alpha\|_{L^\infty(\Omega)} = O(\alpha^d)$ and Corollary 6, we find

  $\tfrac{1}{\alpha}(p_0 - p_\alpha,\ \tilde u_\alpha - u_0) \le O(\alpha^{d-1})\, \|\tilde u_\alpha - u_0\|_{L^1(\Omega)} = O(\alpha^{d-1})\, \|\tilde u_\alpha - u_0\|_{L^1(A)} \le O(\alpha^{d-1})\, \mu(M\alpha)$.

Since by the above considerations we already got $\mu(\varepsilon) = O(\varepsilon^{2d-2})$, this gives

  $\|u_0 - \tilde u_\alpha\|_{L^2(\Omega)}^2 = O(\alpha^{2d-1}) + O(\alpha^{3(d-1)})$.

Repeating this process $k$ times until $k(d-1) \ge 2d-1$ yields

  $\|u_0 - \tilde u_\alpha\|_{L^2(\Omega)}^2 = O(\alpha^{2d-1})$,

which finishes the proof.

Together with Theorem 4, this result shows that the requirements of Theorem 3 for convergence rates $d > 1$ are sharp. It is an open question whether the requirement (5) on the active set is also necessary for convergence rates $d \le 1$. In our opinion, this condition is too strong and has to be relaxed in order to obtain a characterization of convergence rates $d \le 1$.

3.3 Necessary conditions for exact reconstruction with α > 0

Let us now investigate the case of exact reconstruction, that is, the case in which the solutions $u_\alpha$ of the regularized problem coincide with the (minimal $L^2(\Omega)$-norm) solution $u_0$ of the original problem.

Lemma 9. Let us assume that $u_{\bar\alpha} = u_0$ for some $\bar\alpha > 0$. Then $u_\alpha = u_0$ for all $\alpha \in (0, \bar\alpha)$.

Proof. The claim follows from known monotonicity results: the mapping $\alpha \mapsto \|u_\alpha\|_{L^2}$ is monotonically decreasing, while $\alpha \mapsto \frac12\|y_\alpha - z\|_Y^2 + \beta\|u_\alpha\|_{L^1}$ is monotonically increasing from $(0, +\infty)$ to $\mathbb{R}$, see e.g. [8, Lemma 2.8].

Theorem 10. Let us assume that there is $\sigma > 0$ such that $u_a \le -\sigma < 0 < \sigma \le u_b$ a.e. on $\Omega$. Then the exact recovery $u_\alpha = u_0$ for some $\alpha > 0$ is equivalent to

  $u_0 = 0$ on $\{x \in \Omega : |p_0(x)| = \beta\}$  and  $\mu(\varepsilon) = \mathrm{meas}\big(\{x \in \Omega : 0 < |p_0(x)| - \beta < \varepsilon\}\big) = 0$ for some $\varepsilon > 0$.   (11)

Proof. Let us assume $u_\alpha = u_0$ for some $\alpha > 0$. Lemma 9 and Theorem 4 imply $u_0(x) = 0$ for $x \in \{x \in \Omega : |p_0(x)| = \beta\}$. Moreover, due to $p_0 = p_\alpha$ we infer $u_0 = u_\alpha = \tilde u_\alpha$ from Lemma 7, where $\tilde u_\alpha$ is defined by (10). Hence, Corollary 6 implies $\mu(\sigma\alpha/2) = 0$.

To prove the converse, let (11) be satisfied for some $\varepsilon > 0$. Using (6) we obtain

  $\alpha \|u_0 - u_\alpha\|_{L^2(\Omega)}^2 \le \alpha\,(u_0,\ u_0 - u_\alpha) = \alpha\,(\chi_A u_0,\ u_0 - u_\alpha) \le C \alpha\, |A_\alpha|$,

where $A = \{x \in \Omega : |p_0(x)| \ne \beta\}$ and $A_\alpha = \{x \in A : u_0(x) \ne u_\alpha(x)\}$. By [8, Corollary 3.3] we have $|A_\alpha| = 0$ and hence $\|u_0 - u_\alpha\|_{L^2(\Omega)} = 0$ for $\alpha > 0$ small enough.

In many applications, the adjoint state $p_0$ belongs to $C(\bar\Omega)$. In this case, the result of Theorem 10 shows that an exact reconstruction is only possible if $|p_0(x)| \ne \beta$ for all $x \in \Omega$. This in turn implies that either $u_0 \equiv u_a$ or $u_0 \equiv 0$ or $u_0 \equiv u_b$ on every connected component of $\Omega$.
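To complement Theorem 10, the following sketch illustrates the exact-recovery dichotomy in a deliberately simplified setting. Assume, purely for illustration, that $S$ is a multiplication operator $(Su)(x) = s(x)u(x)$ on $L^2(0,1)$; then (P_α) decouples pointwise and both $u_\alpha$ and $u_0$ have closed forms, so exact recovery can be tested directly. All data below (the multiplier $s$, the targets $z$, the bounds, $\beta$) are hypothetical. Note that such an $S$ has closed range, so this toy model does not reproduce the ill-posedness discussed above; it only illustrates the recovery behaviour.

```python
import numpy as np

# Hypothetical toy model: Omega = (0,1), (Su)(x) = s(x)*u(x); everything decouples pointwise.
n = 20_000
x = np.linspace(0.0, 1.0, n)
s = 1.0 + 0.5 * x                 # multiplier defining S
ua, ub = -2.0, 2.0                # bounds with u_a <= -sigma < 0 < sigma <= u_b
beta = 0.5

def solve(z, alpha):
    """Closed form of min_{ua<=u<=ub} 0.5*(s*u - z)^2 + beta*|u| + 0.5*alpha*u^2, pointwise:
    clip the unconstrained minimizer soft_beta(s*z) / (s^2 + alpha) to [ua, ub]."""
    soft = np.sign(s * z) * np.maximum(np.abs(s * z) - beta, 0.0)
    return np.clip(soft / (s ** 2 + alpha), ua, ub)

def adjoint(z, u):
    """p = S*(z - Su) for the multiplication operator."""
    return s * (z - s * u)

# Data set 1: the adjoint p_0 stays uniformly away from beta, u_0 is bang-bang/zero.
z1 = np.where(x < 0.5, 4.0, 0.1)
# Data set 2: u_0 = u_b everywhere, but p_0 = beta + x comes arbitrarily close to beta.
z2 = 2.0 * s + (beta + x) / s

for name, z in (("p0 away from beta", z1), ("p0 approaching beta", z2)):
    u0 = solve(z, 0.0)
    p0 = adjoint(z, u0)
    gap = np.min(np.abs(np.abs(p0) - beta))
    exact = all(np.array_equal(u0, solve(z, a)) for a in (1e-3, 1e-4, 1e-5))
    print(f"{name:22s} min ||p0| - beta| = {gap:.3f}  exact for alpha in (1e-3,1e-4,1e-5)? {exact}")
```

In the first data set condition (11) holds ($u_0$ is of bang-bang/zero type and $|p_0|$ keeps a positive distance to $\beta$), and indeed $u_\alpha = u_0$ already for the moderately small values of $\alpha$ tested. In the second data set $u_0 \equiv u_b$, but $\mu(\varepsilon) > 0$ for every $\varepsilon > 0$, so exact recovery fails for every $\alpha > 0$, as predicted by Theorem 10.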

References

1. Casas, E., Herzog, R., Wachsmuth, G.: Optimality conditions and error analysis of semilinear elliptic control problems with $L^1$ cost functional. Tech. rep., TU Chemnitz (2010)
2. Deckelnick, K., Hinze, M.: A note on the approximation of elliptic control problems with bang-bang controls. Computational Optimization and Applications (2010)
3. Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems, Mathematics and its Applications, vol. 375. Kluwer Academic Publishers Group, Dordrecht (1996)
4. Felgenhauer, U.: On stability of bang-bang type controls. SIAM J. Control Optim. 41(6), 1843-1867 (2003)
5. Neubauer, A.: Tikhonov-regularization of ill-posed linear operator equations on closed convex sets. J. Approx. Theory 53(3), 304-320 (1988)
6. Tröltzsch, F.: Optimal Control of Partial Differential Equations, Graduate Studies in Mathematics, vol. 112. American Mathematical Society, Providence (2010). Theory, methods and applications, translated from the 2005 German original by J. Sprekels
7. Wachsmuth, D., Wachsmuth, G.: Convergence and regularization results for optimal control problems with sparsity functional. ESAIM Control Optim. Calc. Var. 17(3), 858-886 (2011)
8. Wachsmuth, D., Wachsmuth, G.: On the regularization of optimization problems with inequality constraints. Control and Cybernetics (2011), to appear